Tag Archives: web site

Repper

Repper is a free online tool that creates kaleidoscopic tiled images. It lets you interactively generate a tiled image from a rectangular or triangular section of an input image: you upload your own image and it produces a tileable image fragment. Unlike the tiling provided by, say, the Windows desktop wallpaper, every additional tile is mirrored relative to the adjacent tile. This makes the end result usable as a graphic pattern. Recognizable objects like faces or text will, however, partly be repeated upside down and mirrored.

Repper is by studio:ludens, a design site run by two young Dutch industrial designers. Although you retain the copyright to your own images (you did own the copyright, right?), the tileable fragment becomes public domain (under a Creative Commons license) and a copy of the tile is saved on the Repper web site. This has benefits: it can inspire other users, and you can link to the copy on Repper’s web site from your own web pages, which saves you from having to figure out how to upload a copy of the file to another online server.

Example

I selected an abstract image with a natural color palette as a test: a weird pattern in a 16th-century stone church pillar.

Church pillar in Naarden

And here are some patterns created from the above image using Repper:

Pattern #1
Light bulbs? Christmas tree ornaments?
Pattern #2
Under the electron microscope?

Pattern #3

Uncle Albert's vest

Pattern #4

<<<:-:>>>

Usage

Although you can undoubtedly do the same with Photoshop, Repper is easy to use: you immediately see the results of selecting different parts of your original image or of changing the dimensions of the area being copied. The site also allows you to see what others have created (but not their input images) or to play with example images instead of your own.

Relevance

Repper may be worth checking out – even if you don’t need it – just for fun. Typical applications are:

  • background images of web sites (the web site generates the line of HTML code to add the pattern; a rough sketch of what such a snippet looks like follows this list)
  • “social profiles”, a more trendy equivalent of the previous one (Twitter, Windows Live Spaces, Hyves)
  • the desktop on your monitor (select the tiling option)
  • background of printed material like posters or pamphlets
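
For the first item, a minimal sketch of what such a generated snippet could amount to (the pattern URL below is made up for illustration; Repper gives you the actual address of your saved tile):

<style type="text/css">
  /* Hypothetical example: use a Repper tile as a repeating page background */
  body {
    background-image: url("http://repper.example.com/tiles/pattern_12345.png"); /* made-up URL */
    background-repeat: repeat; /* tile the fragment both horizontally and vertically */
  }
</style>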

If nothing else, Repper can teach you how much humans value symmetry.

Gravatar

I have registered a gravatar for myself. It automatically gives me a recognizable avatar on gravatar-enabled websites.

It is the same image of a “map lichen” that I use on MSN Messenger.

You can define your own gravatar for free at www.gravatar.com (another site by Matt Mullenweg).


WordPress and Spam

Akismet

In late February 2009, I suddenly started getting about 50 spam comments per day on this blog. They all had similar content and were coming from only 2 or 3 sources. The blog is running on a GoDaddy-hosted site and is a manually installed configuration of WordPress 2.7.1 (rather than the WordPress installation provided by GoDaddy).

So, after first manually deleting all spam, I activated the famous (but oddly named) Akismet spam filter. It is by Matt Mullenweg, one of the WordPress founders. It seems to do the job. And it generates nice-looking statistics of the amount of spam received per day.

Puzzle: bug or feature?

The statistics indeed show roughly 50 new spam comments per day, but these do not show up in the Spam category under Comments. Akismet has a procedure to check the spam filter: log out and post a comment to your own site under the user name “viagra-test-123”. That does create a comment which is classified under Spam and which can be reviewed.

Three friends using WordPress did not have a real solution or explanation, and searching the Akismet.com site also didn’t help. Wild guess (I have no evidence): Akismet has two levels of spam. Spam that is absolutely certain to be spam is not saved and thus not reviewable; normal spam (including spam generated by viagra-test-123) is saved for 15 days and can be reviewed by the user.

Note that this is not a big deal assuming that Akismet is accurate enough to avoid false positives. But I need to see the spam to judge…

Update (10 March 09)

After a few days, the filter started to behave. First, spam comments started showing up in a reviewable location. A few days later, it started treating non-spam comments (“Ham”) correctly. The amount of incoming spam also dropped by an order of magnitude. No good explanation yet.

Akismet spam statistics for peter.vdhamer.com

Firefox-compatibility of web sites

I have started to Firefox-proof the web site for Tim. Firefox support is important because that browser has achieved a still-growing market share of about 20% (particularly among savvy users). Once that is done, I should get one of the Apple users to test the web site under Safari (the 3rd-place browser after Internet Explorer and Firefox).

Tables of tightly spaced image fragments

One of the differences that caused a lot of headaches was (quoting from Tedster):

Say you’ve got some images held together nice and tight in a group of table cells. And in Explorer that’s exactly what happens – the images interface with no gaps at all. Maybe now after some hard work your mark-up has validated to some nice DTD – maybe XHTML strict. But when you view your page in a recent browser (especially Mozilla, Netscape) you find a fat extra space, commonly below the image.

It turns out that Internet Explorer, although displaying what I intended, was actually wrong: it was ignoring the CSS standard. And Firefox was simply right, in the sense that it did follow the standard. A readable explanation and a few alternative solutions can be found here (thanks to Rob Jansen op de Haar for finding this). One of the most curious things you can learn from the explanation (ultimately about what the standard says about image alignment and text baselines) is that modern browsers try to distinguish between older and newer web pages. If the web page starts with something fancy like
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
the browser will assume you know what you are doing and will strictly do what the standard says (“standards mode”). If such a line is not found (“quirks mode”), it will revert to the old rendering conventions used in older browsers, because it assumes this is an old page written before the rule book changed (or, more accurately, written before there was a rule book).
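
For the record, the usual cure, paraphrasing the general advice rather than the specific article linked above, is to take the images out of the text-baseline layout so no room is reserved below them for text descenders. A minimal CSS sketch:

/* One common workaround for the gap under images in table cells:
   render the images as blocks instead of inline elements.
   (An alternative is img { vertical-align: bottom; }.) */
td img {
  display: block;
}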

PHP-based website

I have also been busy converting the web site from plain HTML to pages generated using PHP scripting. This will hopefully clean up the code and simplify achieving a consistent look-and-feel across pages. It also creates the option of making the pages more intelligent or dynamic (the main reason people use PHP in the first place). So far, the results are not conclusive: the number of lines of code is growing, despite moving to a higher level of abstraction. The total number of lines will hopefully drop once more pages have been converted to use the shared code (currently called include.php).
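
To give an idea of the approach (the helper function names below are invented for illustration; the real include.php may look quite different), a converted page boils down to something like this:

<?php
// Hypothetical sketch of a converted page.
require_once 'include.php';          // shared look-and-feel code

print_page_header('Photo gallery');  // assumed helper: emits <html>, <head>, banner, menu
?>
<p>The page-specific content is still written as plain HTML.</p>
<?php
print_page_footer();                 // assumed helper: emits the shared footer and closes the page
?>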