Tag Archives: benchmarking

DxOMark @ 251 cameras


DxOMark Sensor scores for 251 cameras (click to view larger)

Here is another update about new cameras tested by www.dxomark.com. The test only looks at the noise and dynamic range performance of cameras – it doesn’t cover resolution, speed, ease-of-use, durability, etc.

Since my previous posting, 5 new cameras have been tested by DxO Labs. Modern cameras span a range of over 60 points. A 3-point difference is barely visible to specialists, a 10-point difference is readily visible, and a 30-point difference tends to be obvious even to someone who is not paying attention to image quality at all:

  • Sony A3000/A5000 (78 and 79 points).
    The pricing of APS-C system cameras with a state-of-the-art sensor has dropped below US$ 500 with the introduction of the Sony A3000 and A5000. Despite the Alpha branding, these are basically NEX models (Sony has dropped the usage of the NEX brand). They thus have E lens mount (as used in the NEX series) rather than the A-mount (as used in the Alpha 77).
  • Leica S medium format (76 points).
    The Leica S medium format camera, despite its $28,000 price,  does not really have a state-of-the-art sensor. It “still” uses a CCD sensor technology, although  recently medium format models with a Sony-built CMOS sensor have been recently announced (by Hasselblad, Phase One and Pentax). CMOS sensors should manage to make medium format cameras more all-round cameras again. Arguably, because medium format cameras are often used in studios or tripods, they historically had more emphasis on resolution, color fidelity and lens quality than on low light or high dynamic range.
  • Leica X Vario (78 points).
    Leica also gives you the option of buying the X Vario which actually performs similarly to the Sony A3000 or A5000, but at a Leica price.
  • Olympus Stylus 1 (51 points).
    The Olympus Stylus 1 scores surprisingly low for a new camera with its SLR-like looks. But looks are misleading here. If you look carefully at the specs, it turns out to have a very small sensor with a 4.66x crop factor. This puts in in the same league as the Canon Powershot S120. The Stylus 1 (51 points) is outperformed by the more compact S120 (56 points).

Canon Powershot S120 and the Olympus Stylus 1 have the same sensor size (www.camerasize.com)

State of the DxOMark (camera nerdiness)

DxOMark scores as of Nov 2013 (click to view larger).

I updated my overview chart with available high-end cameras. See http://www.dxomark.com/Cameras/Camera-Sensor-Database for detailed benchmark results for specific cameras. As usual, there is a lot of interesting information in such an overview:

  • Assuming you care about low light and high dynamic range performance, the best cameras have full-frame sensors (the blue dots). You knew that – right? Well, surprisingly, full-frame sensors beat even larger (purple, pink, red) sensors. So don’t bother spending big money on a medium format camera unless you really need the super-high resolution. Or need to show that your equipment is clearly in a price class of its own.
  • The so-called APS-C cameras with 1.5x or 1.6x sensors have improved. Examples: the Nikon D5200 and D5300.
  • The Sony NEX-5R mirrorless (which is 1.5x) has a slightly higher price than an APS-C SLR, but the body is smaller and the performance is competitive. Mirrorless models should have the same performance as an SLR with a comparable sensor. A mirror doesn’t add image quality – it just makes a click sound like ke-lick.
  • Canon still has a long way to go to catch up with its APS-C sensors (1.6x). The Canon 70D performs slightly better than the old Canon 7D, but a comparison to Nikon or Sony tends to be embarrassing.
  • Recent Micro Four Thirds cameras (Olympus & Panasonic) have improved and are even ahead of Canon’s APS-C (1.6x) models.
  • The Sony RX100 and RX100-II are still doing fine – at least considering their small sensor size (2.7x crop, the 1″ type). The Nikon Series 1 is technically not state-of-the-art, but nice if you like white or pink gear: it targets a young Asian lifestyle market.
  • The premium pocket cameras have improved. Especially the 1/1.7″ sensor models such as the Canon Powershot S120 and G16 and their Nikon equivalents.
  • The best deals if you need a high quality model can be found at the top edge of the cloud in diagram “b”: you get the highest quality in that price range. Note that the prices shown are official prices at introduction, and will differ from current street prices. These deals include:
    • The Nikon D600 and D610. These are essentially the same camera, but the D610 resolves a dust issue.
    • The new Sony A7R mirrorless. Note that this model uses Sony E-mount lenses, but actually requires new Sony full-frame E-mount lenses called “FE”. So it will take a while until there are enough lens options.
    • The Sony RX1 and RX1R. These look overpriced (and probably are – although I ordered one myself), but their price does include an excellent 35mm Zeiss f/2.0 lens. On the other hand, they do not come with an optical or electronic viewfinder; that costs about US$ 500 (or € 500) extra. Lens hood pricing is a joke (so look into the Fotodiox accessories).
    • The Nikon D5200 or D5300. Both have a 24 MPixels state-of-the-art sensor, but the newer one gives sharper images (no AA filter) if your lenses are up to the challenge.
    • The Nikon D3200. Also 24 MPixels with state-of-the-art sensor technology.
    • The Pentax K50 and K500. A somewhat overlooked brand.
    • The Nikon Coolpix P330. A “take me everywhere” camera at a lower price point than the excellent Nikon Coolpix A or FujiFilm’s X-100s models.

Note that some major new camera models are not shown because DxO Labs simply hasn’t tested them yet. These include:

  1. The new full-frame Nikon Df (with the professional Nikon D4's 16 MPixel sensor). It should score about 89 (like the D4) for $3000 – nice, but not sensational unless you insist on a retro look and feel.
  2. Most FujiFilm X-Trans models have not been tested. Tests may be delayed because they have a non-standard color filter array (complicating raw conversion). The CFA design allows the sensor to work without a low pass filter. Alternatively, the missing tests may be because FujiFilm is not enthusiastic about their cameras’ DxOMark scores (pure speculation on my part, but the FujiFilm X-100 didn’t score exceptionally well).  FujiFilm high-end cameras are getting a lot of attention from serious photographers who prefer small, unobtrusive cameras with a classic mechanical feel.
  3. The Sony A7. Many people wouldn’t really benefit from 36 MPixels (Sony A7R) without an image stabilizer or a tripod or high-end lenses.

For a detailed explanation of what the benchmark itself means, see http://www.luminous-landscape.com/essays/dxomark_sensor_for_benchmarking_cameras2.shtml. Note that the number of tested cameras has meanwhile increased from 183 to 236 models.

 


Technical tips for amateur photographers

Sometimes there are tips I don’t offer to experienced photography enthusiasts unless they ask the right question first. This is because I feel they should already know all this, and might be offended that I could imagine they didn’t – especially if the issue is pretty important.

But… in practice nobody can know everything. Especially when some things in the photography world have changed gradually or are non-trivial. So many photographers still use best practices that were suitable years ago, but may be outdated. So, pssst… here is a checklist of several such issues. Where applicable, I tried to add evidence.

1. Good cameras and photographers deserve Raw

The question is whether to set the camera so that it generates Raw image files. Your camera’s default is to generate JPG files. Raw files can be seen as the camera’s native format, and may need to be converted to JPG at some point.

Adobe called their open Raw format “Digital Negatives” or DNG. So, to borrow their analogy, the underlying question is whether to save or to automatically discard your digital negatives. If you currently generate JPG files, you are automatically developing and discarding those “negatives”. As the analogy suggests, this is automatic and thus convenient. But it has certain drawbacks.

Originally Raw files were considered a tool for pixel fetishists: the digital photo that you really needed (for printing, for mailing, for websites, for displaying) was in JPG format because most software could only handle JPG files. So Raw files added an extra step and were thus considered by many to be a waste of time, energy and storage space. Furthermore the JPG file format is well standardized, while Raw files were vendor-specific image formats that might become obsolete in the future.

This situation has changed. By now JPG is a 20 year-old image format that is still good enough as an output format, but isn’t really sophisticated enough to store the images which current camera sensors can capture. So assuming you have a good camera, one could argue that you lose a few years’ worth of camera industry innovation if you choose to work exclusively in JPG. Obviously not everybody will care, but it is at least worth knowing what your choice is.

In more technical terms, current sensors have 12 or 14 bits of accuracy, while JPG was designed at a time when 8 bits were considered enough. Furthermore, classic JPG was designed to discard fine detail and color nuances in order to save storage space: JPG is essentially a form of “lossy compression”. To JPG‘s credit, the tradeoff between image quality and file size is adjustable, but cameras only give you limited control over this tradeoff. Raw files, on the other hand, use “lossless compression”, store all 12-14 bits of information and produce files that may be 1.5-4× larger (this depends on the ISO setting).
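To put the bit-depth difference in numbers, here is a short Python sketch. The bit depths are from the text above; the sample sensor value is invented purely for illustration:

```python
# Number of distinct tonal levels per channel at each bit depth.
for bits in (8, 12, 14):
    print(f"{bits}-bit data: {2**bits} levels per channel")
# 8-bit:  256 levels, 12-bit: 4096 levels, 14-bit: 16384 levels

# Quantizing a 14-bit sensor value down to JPG's 8 bits collapses
# 64 raw levels into a single JPG level.
raw_value = 9137                 # hypothetical 14-bit sensor reading
jpg_value = raw_value >> 6       # keep only the top 8 of the 14 bits
recovered = jpg_value << 6       # best possible reconstruction from JPG
print(raw_value - recovered)     # 49 (up to 63 levels can be lost)
```

This is only the per-channel rounding loss; lossy JPG compression of fine detail comes on top of it.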

Here is an example of the image quality lost by using JPG. These are 100% crops of an image of a bee keeper’s working clothes taken with my Canon 5D Mark II. The Raw and JPG images were actually simultaneously recorded in-camera and are thus automatically generated from the same exposure. I selected the camera’s highest quality setting. No changes were applied to the resulting Raw and JPG files in post-processing: these are pretty much the defaults.

Hold the mouse over the images to exchange the Raw and JPG images. Loading may take a few seconds.

The JPG version clearly has less contrast (which could be fixed in post-processing), but a lot of detail is also lost: at this extreme magnification, the photo starts to look like a watercolor painting.

The full image (100 ISO, 70-200mm f/4L IS lens, 106 mm, f/6.7, 1/500s).

The claim that “Raw is an extra intermediate step” is a misconception: when you print or view or zoom into an image, you are generating a new derived image – generally at a lower resolution. So in a way, JPG is a detour rather than the short route: your camera natively speaks Raw, compresses the image because storage space was formerly a major concern, after which the image is decompressed so that it can be used (viewed, edited, printed), can be sent (e-mail) or can be shared (web).

In modern software like Lightroom or Google’s Picasa, Raw and JPG both serve as valid input formats. When you adjust an image (e.g. change brightness, crop it, remove dust, adapt the contrast…) no output image file is generated. Only the (tiny) adjustment commands themselves are saved.

Photoshop and its alternatives also support all major Raw formats and can generate many different output formats. If you may edit the image again in the future, JPG is seldom recommended as the intermediate storage format because every additional detour via JPG causes more loss of quality.

Thus, although Raw has few drawbacks nowadays, here are some legitimate excuses to still use JPG:

  1. If you have a camera phone or compact camera, it probably only supports JPG anyway. It also means the camera is probably not good enough to worry about subtle differences in image quality.
  2. In general, if quality is not an issue, JPG is good enough. An extreme example: I use JPG to take pictures of street names, etc instead of writing them down ;-)
  3. If you take studio images, you may have the time and skills to tune the lighting, composition and camera settings so that you don’t need to adjust the image at all in post-processing. If you never change your image (no dust, ideal exposure, ideal contrast…) JPG may be good enough. You still lose some sharpness, but for portraits sharpness may even be somewhat undesirable.
  4. If your camera has an obscure proprietary Raw format, you may want to use something else for archiving. This is seldom the case today: small manufacturers use Adobe’s well-documented (but not really open) DNG Raw format and major manufacturers get enough software support for their current and older formats anyway.

To strengthen the case for Raw, Your Honor, here are some more comparisons:

Hold the mouse over the image to swap the Raw and JPG images.

Again, the JPG version largely loses skin texture and fabric detail. You won’t really notice this on small prints, but it will limit your ability to crop or enlarge your pictures.

The corresponding full image (100 ISO, 111 mm, f/6.7, 1/500, 70-200 f/4L IS)

As a last example, here is a macro image of bees taken with a macro lens (while wearing the protective clothing you saw in the first picture).

Hold the mouse over the images to swap the Raw and JPG images.

The JPG has less sharpness for the hairs, less texture in the smooth bits and darker shadows (although the latter can likely be fixed).

The corresponding full image (320 ISO, 100 mm f/2.8 macro lens, f/5, 1/250)

More examples at higher ISO values and using different cameras can be found on the www.DPReview.com site. Note that when you compare different cameras, you are often also varying resolution and the choice of lens. But a quick look at different cameras confirms the conclusion that a good photo made with good equipment deserves Raw.


2. Photoshop or Lightroom?

Adobe’s Photoshop dates back to the late 1980s. It is one of the most famous software tools – and even brands – in the world. Unfortunately it has also grown into a huge program with many features and large add-ons (e.g. Bridge). This is partly because it is used by very diverse types of users, and partly because it was designed to be the single, ultimate tool for modifying or generating images.

It is consequently reasonably tricky to learn, and requires a pretty disciplined approach to manipulating images.

In 2006 Adobe launched a new product, Adobe Photoshop Lightroom, that only targets the basic needs of (serious) photographers. It is thus designed to cover pretty much everything a photographer does with photos. It focuses on helping the photographer do common tasks efficiently – rather than on providing an ultimate toolbox which can do everything… provided that you can figure out how.

The main benefits of Lightroom (compared to Photoshop CS) are:

  1. It is lightweight, but still targeted at professional photographers and serious amateurs. “Lightweight” implies easier to use, easier to learn and significantly less expensive.
  2. Lightroom also keeps track of your files (thus covering Bridge functionality). It does keywording, searching, browsing, etc. The files themselves can be stored with normal file names in a normal directory structure. There is support for having different versions of a file (“virtual copies”).
  3. You never create output images unless you need to send (“export”) a file to someone/somewhere else. You only store the original image. And information is recorded on what modifications you selected. This allows you to change your mind and adapt the image later without any loss of quality. It also makes it irrelevant in what order you do modifications, and it avoids having to store and track multiple intermediate or alternative versions of the same image.
    This approach actually works faster on large images because the computer only calculates changes at the resolution or the crop that you are viewing at that moment: on a screen you either see a low resolution overview image, or a high-resolution partial image. This is because screens are typically 1 to 2 MPixels while your camera is likely between 10 and 20 MPixels.
  4. There is no support for layers. Some uses of layers are handled by the previous point. But there are things you can do in Photoshop which you simply cannot do in Lightroom. Many professional photographers thus actually own both, but spend most of their time in Lightroom.
  5. Lightroom covers the entire workflow in one user interface: importing and managing collections of images, common and some less common image enhancements (“Develop”), professional quality printing, and exporting images to a website or web service.
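The non-destructive approach in points 3 and 4 above can be sketched in a few lines of Python. This is a toy model, not Lightroom's actual implementation – the operation names and pipeline order are invented for illustration:

```python
# Toy model of non-destructive editing: the original image file is
# never modified; only a dictionary of adjustment commands is stored.
# On export, the commands are applied in a fixed pipeline order, so
# the order in which the user entered them is irrelevant.
PIPELINE_ORDER = ["lens_correction", "exposure", "contrast", "crop"]

edits = {}  # the only thing saved alongside the original file

def adjust(operation, value):
    edits[operation] = value      # a later tweak overwrites the earlier one

def export():
    # Apply only the edits the user made, in the fixed pipeline order.
    return [(op, edits[op]) for op in PIPELINE_ORDER if op in edits]

# The user adjusts contrast first and exposure later...
adjust("contrast", 20)
adjust("exposure", -3.5)
# ...but on export the fixed pipeline order is used regardless:
print(export())   # [('exposure', -3.5), ('contrast', 20)]
```

Because only these tiny commands are stored, backtracking on a decision just means changing or deleting one entry – no intermediate image versions are needed.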

So if the final output of your work is typically still essentially a single photo, Lightroom (or its competitor Aperture) may give you all you need in an elegant, efficient but professional-strength tool.

Here is a rather extreme example of how far Lightroom can adjust a (Raw) image.

Original version of 2011_Paris_249 (100 ISO, 105 mm, f/5.6, 1/200).

Edited version that stresses the shadows above (!) the tower.

The changes made to this particular image were:

  1. Reduced the exposure by 3.5 (!) stops. Yes, the original was a Raw file.
  2. Removing dust spots
  3. Rendering as black and white.
  4. Cropping off the bottom of the image
  5. Increasing the contrast
  6. Applying automatic lens corrections for distortion, vignetting, etc

The order in which such changes are made is irrelevant: unlike in Photoshop, the enhancements are applied in a fixed order determined by Lightroom. This “fixed order” may sound inflexible, but it actually allows you to apply the required changes in any order you like and backtrack on earlier decisions without having to start over.

Below is a cropped version of the original image superimposed with the final image. The images move slightly due to the applied lens correction.

Cropped original version of 2011_Paris_249. Place mouse over image to see edited version.

Obviously many images need less or even no editing. But, on the other hand, Lightroom even supports some enhancements that are fancier than those shown above: gradient filters and brush-based local enhancements.

Some of the main things you cannot do in Lightroom 3.x:

  • no layer support (although there is a form of masking)
  • no hundreds of creative filters
  • no HDR or panorama stitching (Lightroom can invoke Photoshop to do both)
  • no soft proofing of print jobs (this has been added in Lightroom 4)

So … for photographers, using Photoshop Lightroom as your main tool saves you time. And the end result is likely to be a bit better. This is simply because Lightroom was designed for photographers. Photoshop nowadays targets graphic artists, website designers, engineers, print ad developers, etc. You may find that you still need Photoshop occasionally. But Lightroom provides integration support for extra tools and plugins – including Photoshop.


Candidates for additional topics in this article or series:

  • Calibrate the colors and brightness of your screen.
  • I recommend adding artificial vignetting to many photos.
  • Image stabilizers as “digital tripods” (allowing roughly 10× slower shutter speeds).
  • Beginners should never use the flash.

NAS and Lightroom performance

My digital photos are stored on an inexpensive NAS. This CH3SNAS consists of dual 3.5″ SATA drives of, in my case, 1 TByte each. Each drive contains a copy of each photo (RAID 1) for robustness. Lightroom 3 maintains a catalog of these photos (with associated keywords, metadata and a cache of previews) on the computer’s local hard disk. Unfortunately, although the NAS is fine for archiving and backup tasks, Lightroom’s access to the stored photos is rather slow. The question is thus whether I can get better read performance by tweaking this setup, or need to upgrade to a fancier NAS.

The NAS and its drives

The equipment:

  • the NAS is a small Linux-based ARM9 box with 64 MBytes of memory
  • the drives in the NAS and in my desktop PC are all:
    • Samsung Spinpoint F1 HD103UJ
    • drive specs: SATA-300, 7200 RPM, 32 MB cache
  • the NAS is connected to the client PC via a switch. The router, the NAS and the PC are capable of running Gigabit Ethernet.
  • the relevant partition is formatted as RAID 1 (although I don’t recommend that any more), meaning that each file is simply stored on both drives for safety

Basic drive performance

I benchmarked one of the Samsung Spinpoint F1 HD103UJ drives mounted inside a desktop PC using HD Tune Pro. This shows what the drive can do under normal (non-NAS) conditions.

Detail: I restricted the part of the drive under test to 0.75 terabytes because the data on the NAS was confined to a 750 GB RAID 1 partition on each drive. This doesn’t change the measurements significantly.

Samsung Spinpoint F1 HD103UJ performance

The average transfer rate is thus 84 MB/s while the average latency was 13.5 ms. In other words, the drives themselves can sustain read speeds of 5 GBytes/minute if the files are big enough. I am ignoring write performance because it gives similar results and is less relevant for my usage (“read-mostly”).
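The conversion from the benchmark's MB/s figure to a per-minute rate is simple; a quick check in Python (using the measured 84 MB/s from the text):

```python
# Convert the measured sustained transfer rate to GBytes/minute.
transfer_rate_mb_s = 84                        # measured average, MB/s
gb_per_minute = transfer_rate_mb_s * 60 / 1000 # 60 s/min, 1000 MB/GB
print(f"{gb_per_minute:.1f} GBytes/minute")    # 5.0 GBytes/minute
```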

Usage of the NAS

I currently have 26,000 digital photos (JPG, RAW, occasionally other stuff) requiring 225 GBytes of space. In RAID 1, this takes 225 GBytes per drive.

The average size of a single photo is roughly 17 MBytes (a mix of recent JPGs and two types of RAW). A worst case photo (Raw, full resolution, depends on compressibility) can exceed 30 MBytes.

NAS performance across the network

Copying a 16 GByte directory consisting of 902 photos (Egypt) from the NAS to the local disk:

  • took 26.5 minutes = 11 MBytes/s = 0.6 GBytes/minute = 34 photos/minute
  • generating an average network traffic of 10.1 MByte/s (received). Not sure where the 10% discrepancy comes from.
  • CPU load on the NAS (log in with ssh, top command) is about 50%

To check this, I copied the same directory from the NAS to the NAS (to a non-RAID partition):

  • took 53.5 minutes = 4.9 MBytes/s = 0.3 GBytes/minute = 17 photos/minute
  • generating network traffic of 5 MBytes/second per direction

This is consistent (enough): the NAS now needs to read and write the data. It incidentally shows that the time needed to store data on the local hard disk in the 11 MBytes/s test case was apparently negligible.

So the problem is that the drives can read (or write) data at 5 GBytes/minute, but the NAS is only reading at 0.6 GBytes/minute. The “34 photos/minute” also implies that the NAS performance can easily limit the performance of browsing of photos that are not stored in the cache.
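The numbers above can be cross-checked with a few lines of Python (all inputs are the measured values quoted in the text):

```python
# Cross-check the NAS copy measurements.
size_gb = 16       # directory size, GBytes
photos = 902       # number of photos in the directory
minutes = 26.5     # NAS -> local disk copy time

mb_per_s = size_gb * 1000 / (minutes * 60)
gb_per_min = size_gb / minutes
photos_per_min = photos / minutes

print(round(mb_per_s, 1))      # 10.1 MB/s (the text rounds to 11)
print(round(gb_per_min, 1))    # 0.6 GBytes/minute
print(round(photos_per_min))   # 34 photos/minute

# The NAS-to-NAS copy has to both read and write the same data,
# so throughput roughly halves: 53.5 minutes instead of 26.5.
```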

One reviewer, however, measured 21 MBytes/s rather than my 11 MBytes/s. So this gives hope that performance can be tweaked.

Optimizations and errors found

  • I found I had 100% CPU load on the NAS, even when running on a 100 Mbps link: 50% went to samba, 25% to inotify_itunes and 25% to inotify_upnp. Disabling iTunes and universal Plug-and-Play via the Web interface thus got the CPU load down to about 50%. Apparently this is a bug in older versions of the CH3SNAS firmware that causes these two processes to eat all remaining idle time. Apart from wasting power, they undoubtedly don’t help performance.
  • The NAS was still configured to run at 100 Mbps, despite having a 1000 Mbps Ethernet link to the router (and beyond).
  • Adobe itself just announced that the imminent version of Lightroom would fix “Library: Sub-optimal preview rendering performance could impact application performance“. Whatever that means, it is always welcome.

Checking the NAS performance locally

[ coming ]

So where is the bottleneck?

[ coming ]