Assuming you care about low light and high dynamic range performance, the best cameras have full-frame sensors (the blue dots). You knew that – right? Well, surprisingly full-frame sensors beat even larger (purple, pink, red) sensors. So don’t bother spending big money on a medium format camera unless you really need the super-high resolution. Or need it to show that your equipment is clearly in a price class of its own.
The so-called APS-C cameras with 1.5x or 1.6x sensors have improved. Examples: the Nikon D5200 and D5300.
The Sony NEX-5R mirrorless (which is 1.5x) has a slightly higher price than an APS-C SLR, but the body is smaller and the performance is competitive. Mirrorless models should have the same performance as an SLR with a comparable sensor. A mirror doesn’t add image quality – it just makes a click sound like ke-lick.
Canon still has a long way to go to catch up with its APS-C sensors (1.6x). The Canon 70D performs slightly better than the old Canon 7D, but a comparison to Nikon or Sony tends to be embarrassing.
Recent Micro Four Thirds cameras (Olympus & Panasonic) have improved and are even ahead of Canon’s APS-C (1.6x) models.
The Sony RX100 and RX100-II are still doing fine – at least considering their small sensor size (2.7× or 1″). The Nikon Series 1 is technically not state-of-the-art, but nice if you like white or pink gear: it targets a young Asian lifestyle market.
The premium pocket cameras have improved. Especially the 1/1.7″ sensor models such as the Canon Powershot S120 and G16 and their Nikon equivalents.
The best deals if you need a high quality model can be found at the top edge of the cloud in diagram “b”: you get the highest quality in that price range. Note that the prices shown are official prices at introduction, and will differ from current street prices. These deals include:
The Nikon D600 and D610. These are essentially the same camera, but the D610 resolves a dust issue.
The new Sony A7R mirrorless. Note that this model uses Sony E-mount lenses, but actually requires new Sony full-frame E-mount lenses called “FE”. So it will take a while until there are enough lens options.
The Sony RX1 and RX1R. These look overpriced (and probably are – although I ordered one myself), but their price does include an excellent 35mm Zeiss f/2.0 lens. On the other hand, they do not come with an optical or electronic viewfinder; these cost about 500 US $ or Euro extra. Lens hood pricing is a joke (so look into the Fotodiox accessories).
The Nikon D5200 or D5300. Both have a 24 MPixel state-of-the-art sensor, but the newer one gives sharper images (no AA filter) if your lenses are up to the challenge.
The Nikon D3200. Also 24 MPixels with state-of-the-art sensor technology.
The Pentax K50 and K500. A somewhat overlooked brand.
The Nikon Coolpix P330. A “take me everywhere” camera at a lower price point than the excellent Nikon Coolpix A or FujiFilm’s X-100s models.
Note that some major new camera models are not shown because DxO Labs simply hasn’t tested them yet. These include:
The new full-frame Nikon Df (with the professional Nikon D4’s 16 Mpixel sensor). It should score about 89 (D4) for $3000 – nice, but not sensational unless you insist on a retro look and feel.
Most FujiFilm X-Trans models have not been tested. Tests may be delayed because they have a non-standard color filter array (complicating raw conversion). The CFA design allows the sensor to work without a low pass filter. Alternatively, the missing tests may be because FujiFilm is not enthusiastic about their cameras’ DxOMark scores (pure speculation on my part, but the FujiFilm X-100 didn’t score exceptionally well). FujiFilm high-end cameras are getting a lot of attention from serious photographers who prefer small, unobtrusive cameras with a classic mechanical feel.
The Sony A7. Many people wouldn’t really benefit from 36 MPixels (Sony A7R) without an image stabilizer or a tripod or high-end lenses.
I have been working on an update to my original DxOMark article. That update has just been published on Luminous Landscape, a well-known photography site operated by the Canadian landscape photographer and publicist Michael Reichmann.
A slightly newer version of the article is available at the DxOMark website. It features four extra cameras and almost identical text.
The article covers various aspects of image sensor size and its impact on image quality. The article is built around original benchmark data measured by DxO Labs. I have rehashed their data (with permission) to stress basic trends and highlight a few topics:
Benchmark data for over 180 high-end cameras (starting at about $400).
Which benchmark numbers by www.dxomark.com are most relevant for your needs?
The technical relationships between sensor noise, dynamic range and resolution.
A comparison of what noise does at low ISO and at high ISO (this is trickier than “doubling the ISO reduces the signal-to-noise ratio by 2×”).
The implications of using “mirrorless” cameras (and associated smaller sensors) on image quality.
The image quality of the new wave of cameras that use Sony’s new Exmor sensor with its excellent low ISO dynamic range performance. With a bit of speculation about whether Canon (which normally doesn’t use Sony sensors) can catch up with Nikon (which regularly does).
You can contact me about the article via comments on this website. I will also try to keep an eye on comments on the LuLa and DxOMark forums.
If you own a Canon 600D, a Canon 60D, Canon 5D Mark II, or certain of their predecessors, you might be interested to hear that you can extend the capabilities of your camera for free (although a donation is requested). This is not done by replacing the camera’s internal software with a newer version (recommended, but that mainly fixes bugs), but by adding software from a bunch of non-Canon developers. This Magic Lantern software extends the existing Canon software with many new features that target technically inclined videographers and photographers.
Features for photographers
Magic Lantern was originally created mainly for those who use Canon DSLRs for serious video work. I don’t know much about video, so I will only describe features that help photographers.
The features are largely centered around Liveview and mostly benefit photographers who sometimes do “slow” photography: they use a tripod, use tethering in a studio to check focus, have a complex setup, or simply want maximum control. Having said that, Magic Lantern states that it also has benefits for photographers who are in a constant hurry: it lets you assign frequently used options to a particular button.
A few of the key features:
focus peaking – whereby the Liveview image displays which parts of the image are in focus. Useful when you want to carefully control what is in focus. This can be seen as an alternative to tethering your camera to a computer via USB in the studio.
exposure clipping – the Liveview image can show which parts of the image will be too light and too dark using overlaid zebra stripe patterns.
more on-screen data – for example the current main camera mode (e.g. M), focal length and focus distance.
focus loupe – you can see part of the image zoomed in 2x or 3x to check sharpness. This feature is fancier than Canon’s counterpart and can even simulate what a split screen focus aid used to look like.
interval timer – you can take 100 pictures at 60 second intervals to show a flower opening. Or 1000 pictures at 1 hour intervals of a construction site – all provided you can get your battery to last.
triggering exposures – the shutter can automatically fire if the scene brightness or content changes significantly. Essentially a makeshift motion sensor.
automatic HDR – not only can the camera take a series of images at different exposures automatically, but it can take the entire series at one press of the button. It can even determine how many exposures are required automatically (or manually) and give you a rough preview of the merged image. Pretty cool. Essentially this gives your 5D2 a feature found in the 5D3, but without the artsy options: you do your real HDR merging afterwards on a computer.
improved mirror lockup – flip up the mirror a few seconds before taking the picture to reduce vibrations. The Canon equivalent is relatively tedious to operate.
The actual list of features is about as long as the list of features that your camera originally came with. So some people only use 2 or 3 of the new features. Others actually do read the software manual and experiment around (takes an evening – just like Canon’s firmware).
Installation and risk
There are risks involved in tinkering with complex equipment. My feeling is that the risk is comparable to opening up PCs to upgrade memory. If you have never done anything similar, you can get someone else to install Magic Lantern (ML) and show you the basics.
The risk is lower than you might expect because ML doesn’t simply overwrite Canon’s software: it runs as an add-on and (in most cases) you will not see changes to the menus provided by Canon. There is a simple procedure to uninstall ML entirely.
This is essentially how ML works under the hood:
A minor modification to Canon’s software makes the camera Magic Lantern aware. Comparable to a boot loader on a PC. ML is incidentally not the only party that does this (there seems to be a USB remote controller that uses the same trick to extend Canon’s software).
Whenever you activate the camera, the firmware first checks for the presence of special non-image files on your flash card. If found, it loads Magic Lantern from the flash card. This does not visibly delay camera operation. The ML software sits alongside the Canon software in camera memory (RAM). If the ML files are not found on the flash card (or you hold down a button while turning it on), Magic Lantern is not loaded and you get unmodified camera behavior. Alternatively, you can choose to carry memory cards with and without ML.
the optical viewfinder information display is unchanged
the LCD viewfinder for LiveView displays significantly different information
Canon’s own menus (Menu button) are 99% unchanged
you can view ML’s own menus by pressing the Erase button while in Liveview mode
Whenever you make changes to ML settings, these are written to the flash card for the next session. Some changes are also stored in the camera’s non-volatile memory (presumably when ML menus interact with existing Canon features).
The ML files stay on the flash card, even if you erase the card using the camera. Actually ML formats the card and then writes the ML files back from memory. If you erase or format the card entirely using a PC, you need to reinstall the ML files onto the card. Until then, you will be operating without ML when you use that card.
Quality and stability
I cannot give you hard numbers, but since version 2.3 the stability seems to be close to that of Canon’s own software. Both have occasional bugs and both try to fix these bugs as soon as possible. ML is an open source project, so anyone with (considerable) programming skills can contribute.
All this doesn’t mean you can never run into a problem: ML software adds complexity to the entire setup, and strange combinations of features may give strange results. But if you stick to mainstream usage of the features (= use them more or less as documented) you should be alright.
Some features are clearly marked as “for very advanced users”. One example is the ability to take pictures in a low-res format while in Liveview mode without any shutter motion or sound whatsoever. A bit weird, and it actually seems to work, but you won’t be using this unless you are a video technician or are motivated enough to figure out how to deal with these “422” encoded frames.
A final example is a menu item called “Don’t press this”. The user manual just says not to press it. Actually it probably doesn’t do any harm (otherwise why give it such a tempting name), but I don’t want to press it just yet. I suspect it contains a game that is totally not camera related. After all, your camera is just a computer with an industrial strength webcam attached as a peripheral (at least that is how geeks tend to see it).
So far, things are going well with my own use. And ML has thousands of heavy users who rely on it on a daily basis. The documentation is actually pretty good – including the description of the risks involved. But…
It will only install on the latest version of Canon’s firmware. So you need to upgrade a 5D2 to v2.12 before you can install ML. A sensible choice by ML to minimize risk.
Running ML will slightly increase battery drain, essentially because the extra features give the ARM processor more work to do. It will increase battery drain a lot if you start using Liveview more than you previously did.
ML increases overall system complexity somewhat: it is like upgrading from a 5D Mark II to a 5D Mark III – more features which you may or may not use.
ML is not available on all current Canon cameras (notably not the 7D or 5D Mark III so far). ML is written by volunteers and all this is a lot of work.
Something could go wrong. But the manual explains how to get the camera up and running again in the more common cases. As far as I can tell, there is no risk of losing images stored on the flash card, but there is a risk that you may need to briefly remove the battery to recover. A quote from the Magic Lantern FAQ:
In practice, we are doing our best to prevent these situations, and thousands of users are enjoying it without problems. However, this does not represent a guarantee – use it at your own risk.
DxOMark Sensor is a raw benchmark for camera bodies. It is “raw” not just because it looks at Raw file image quality. It is also raw in the sense that it provides data for cooking up hands-on reviews that cover all aspects of a camera.
Note: a version of this article was published on Luminous Landscape on January 28th 2011.
DxOMark Sensor Scope
DxOMark Sensor is the new name of DxO’s original metric for camera body image quality. The name “sensor” is a bit misleading as the benchmark covers whatever happens to the light or signal from the point it has left the lens up to the point when the raw file is decoded. Other camera properties, such as ease-of-use, speed, price, and lens sharpness, are all out of scope.
Note that DxO also provides a second benchmark called DxOMark Score which tests lens/body combinations and which does include lens sharpness.
DxOMark Sensor applies to:
high-end digital cameras (mainly SLRs and interchangeable lens models),
when generating Raw output files (JPG introduces too many extra issues),
including whatever impacts image quality within the camera (except for the lens!), and
regardless of sensor resolution (more on this later).
The DxOMark Sensor benchmark essentially “only” covers noise under varying lighting conditions and in its various manifestations.
Purpose of the Benchmark
Benchmark data such as DxOMark Sensor give photographers a way to compare camera image quality. This helps people decide whether to upgrade or what to buy – even though having a low-noise camera is nowhere near the top of the list of things that make photos great.
Benchmarks may actually also influence future industry direction. This is analogous to, for example, automotive mileage or safety tests: even when the test definitions are not perfect, vendors will try to optimize their designs to score well on important tests.
Although DxO Labs is a commercial organization, it provides this benchmark data for free because DxO needs to measure the data anyway (e.g. for their Raw converter) and because it uses its DxOMark website to increase brand awareness. The measurements and graphs are incidentally not in the public domain, but can be redistributed under certain conditions.
Purpose of This Article
The data shown here is derived from DxOMark’s website. My graphs don’t replace DxOMark’s graphs and tables: you should use the DxOMark website to compare specific camera models. I simply created new graphs to stress certain overall trends and phenomena – originally for my own needs.
This article thus addresses various interrelated questions:
What do the DxOMark Sensor results mean?
How valid are the benchmark scores?
Why do large sensors outperform smaller ones?
Why don’t MPixels say much about image quality?
What can we learn about the cameras and industry from the DxOMark data?
During the journey I will slip in a basic course on Sensor Performance for Dummies. This is good for your nerd rating because it is actually rooted in quantum physics and discrete-event statistics. And I even threw in a few Greek λetteρs to remind you that we are on the no man’s land between science, engineering and marketing.
If this gets to be a bit too much for your purposes, just concentrate on the graphs containing benchmark results. Questions like “Please define photon shot noise” will not be asked on the exam.
Four Top-Level Graphs
Figure 1a-d shows the DxOMark Sensor score along each vertical axis. The scores are currently between 20 and 90. Scores above 100 are theoretically possible. Don’t get hung up on differences of only a few score points: 5 points is roughly the smallest visible difference in actual photos (DxO: “equivalent to 1/3 stop”). The measurements themselves appear to be repeatable to within one or two points.
The DxOMark Sensor score is itself based on three more detailed scores which we will discuss later. The graphs in Figure 1 show:
a. the impact of different physical sensor sizes on the overall score,
b. the overall score versus a price indication for the camera body,
c. how digital cameras have improved over the years, and
d. how image quality relates to sensor MPixels.
To save you some scrolling (and squinting), each of these four graphs in Figure 1 will be repeated (and enlarged) when it is discussed.
Sensor Size impacts Image Quality
This is one of the graphs shown in Fig 1.
The horizontal axis in Figure 1a represents relative sensor size. The dimensions of a “full-frame” sensor (24×36 mm) are used as reference. A value of 0.5 thus means that the sensor’s diagonal is half that of a full-frame sensor, giving a crop factor of 2×. The axis is “logarithmic”, meaning that every 2× increase in sensor size spans the same horizontal distance: the steps from 0.2 to 0.4 to 0.8 to 1.6 are all equidistant.
Figure 1a shows (from left to right):
so-called 1/2.33” sensors in super-zoom bridge cameras,
so-called 1/1.7″ sensors (5.7×7.6 mm) typically found in high-end compact cameras,
Some cameras are labeled with an abbreviated model number. Thus 1D4 is short for Canon EOS 1D Mark IV and α55 is the Sony STL Alpha A55. Please use the original DxOMark graphs for looking up specific cameras.
The color scale used in all my graphs indicates the size of the sensor: orange represents the tiny sensors, 4/3 and APS-C are shown in shades of green, cyan is mainly Canon’s 1.3x EOS 1D APS-H series, blue is for full-frame, and magenta and red are the “medium-format” sensors.
Note that mainstream compact cameras with tiny (1/2.5″) sensors and correspondingly lower image quality are hardly covered in DxOMark’s database – partly because they can’t generate the required Raw files. It is also worth noting that the super-zoom models with the smallest sensors (e.g. Olympus’ SP 570 UZ) at first glance resemble SLRs.
Figure 1a shows quite a lot of interesting information:
As a general rule, larger sensors outperform smaller ones….
…but newer models generally outperform older models. In particular, two new APS-C models (Nikon’s D7000 and Pentax’ K-5) outperform the older 1.3× sensors and even most full-frame (1.0×) sensors due to a significantly lower noise floor.
The performance of the mirrorless Sony NEX-5 is in line with its 1.5× APS-C sensor. Its mirrorless design and its use of an electronic viewfinder have no impact on image quality: a classic SLR swings its mirror out of the way during exposure. So the lack of a mirror doesn’t impact image quality.
The Sony Alpha 55, with its notable semi-transparent mirror, performs roughly as you would expect given its APS-C sensor. But because its semi-transparent mirror doesn’t swing out of the way, 30% of the light never reaches the sensor. Note the performance gap between the Alpha 55 and the Nikon D7000 or Pentax K-5: the higher score (lower noise) of the latter two could be explained by the light diverted by the Alpha 55’s stationary semi-transparent mirror.
Surprisingly, except for the 1/1.7″ segment, none of the Canon models are currently best-in-class compared to their competition. This is partly because Canon’s two full-frame models (5D Mark II and 1Ds Mark III) are currently 2 and 3 years old. And because both of Canon’s 2010 APS-C models (550D and 60D) are entry-level models which don’t outperform the fancier Canon 7D introduced in 2009 (see Figure 2).
As we are digressing anyway now, Figure 2 shows that Nikon (gray text labels) originally lagged behind Canon (colored text labels) in terms of the image quality of its D-SLR sensors. But with the introduction of the Nikon D3 in mid 2007, Nikon appears to have overtaken Canon in DSLR image quality – at least for now.
Figure 2 also clearly shows that sensor size has a significant impact on image quality. Even Canon’s two APS-C series (300D-550D versus 10D-60D) have very similar image quality despite their price difference.
Price and Image Quality
Some highlights that can be seen in Figure 1b:
Note the logarithmic horizontal scale: the DxOMark camera data covers a 1:100 price ratio ($400 – $40k).
Some models at the bottom of the cloud are older models and are no longer manufactured. Their indicative price is apparently what they cost on the used market. The lowest blue (1.0×) model is thus the original Canon 1Ds from 2002.
The new 9.5 k$ Pentax 645D costs half as much as the other medium-format cameras. It costs about the same as the most expensive full-frame model (Nikon D3x). Although it benefits from its large sensor size, its image quality is similar to the new Pentax K-5 which costs merely 15% of the 645D’s price.
Doubling your budget should get you more image quality within the price range up to $2000. Above $2000, you have to be very selective to get any significant increase in image quality – regardless of how much you are willing to spend: you are partly paying for the small series in which these products are manufactured.
Older versus Newer Models
The historical data in Fig. 1c shows the 126 models in DxOMark’s database at the start of 2011. Various early digital SLR models that mainly have historical significance were not tested by DxO. Other observations:
Most compact cameras are also absent. These numerous models (e.g. by Canon, Casio, FujiFilm, Nikon, Olympus, Panasonic, Pentax, Samsung, Sony) typically have 1/2.3″ or 1/2.5″ sensors (crop factor of 6×). This market segment largely caters to those looking for ease-of-use rather than cutting-edge image quality. Consequently most compact models don’t support Raw mode and were not tested.
With the exception of the Panasonic GH-1, the Four-Thirds category (darker green) has not made much progress so far. The GH-2 actually has a marginally lower score than its predecessor. This reflects a slight increase in (resolution-normalized) noise under both high- and low-lighting conditions.
The tested Hasselblad models (H3D, 2007) have been gradually overtaken in image quality by full-frame models and even two APS-C models. The newer Hasselblad models (H4D, 2009) have not been tested so far, but should benefit from their increased sensor size.
Having Too Many MPixels Often Doesn’t Help
Apart from the fact that DxOMark Sensor only covers image quality, it is important to realize that the DxOMark Sensor score does not directly reward sensors that have above-average resolutions.
Instead, the score is a measure for achievable print quality for typical use cases where print quality is seldom limited by sensor resolution. So why didn’t DxO somehow factor sensor resolution into the DxOMark Sensor score?
Firstly, this is because current sensor resolution is generally high enough for producing gallery-quality prints. In fact, software typically silently scales down resolution during printing. And secondly, lens sharpness (rather than sensor resolution) is often the weakest link when it comes to achievable resolution. 60 line pairs per mm is considered an exceptional lens resolution. D-SLR sensors have a typical pixel pitch of 4-8 µm, corresponding to 125-60 line pairs per mm.
Let’s check this by estimating the required print resolution. For 250 DPI print resolution, A4 (8.3″×11.7″) or A3 prints require about 5 and 10 MPixels respectively when printed with some borders. Because 250 DPI equals roughly 100 pixels per mm², our eyes will have a tough time assessing this sharpness without a loupe. In my own experience with my old 6 MPixel Canon 10D, even slightly cropped images give you great A3 prints without any fancy digital acrobatics – provided that you use high quality lenses.
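If you want to verify these numbers yourself, here is a quick back-of-the-envelope calculation (a sketch; it uses the borderless A4/A3 dimensions, so the results come out slightly above the 5 and 10 MPixel figures, which allow for some borders):

```python
# Sanity check of the print-resolution numbers in the text above.
DPI = 250                      # target print resolution
MM_PER_INCH = 25.4

def mpixels(width_in, height_in, dpi=DPI):
    """Pixels needed to print width x height (in inches) at the given DPI."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

a4 = mpixels(8.3, 11.7)        # A4 is roughly 8.3 x 11.7 inches
a3 = mpixels(11.7, 16.5)       # A3 is roughly 11.7 x 16.5 inches
pixels_per_mm2 = (DPI / MM_PER_INCH) ** 2

print(f"A4 at 250 DPI: {a4:.1f} MPixels")            # ~6.1 (about 5 with borders)
print(f"A3 at 250 DPI: {a3:.1f} MPixels")            # ~12.1 (about 10 with borders)
print(f"Density: {pixels_per_mm2:.0f} pixels/mm^2")  # ~97, i.e. roughly 100
```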
These numbers are a bit surprising when you consider that sensors only measure one color per “pixel” and thus lack information compared to screen pixels (see Bayer mosaic). But the camera industry is quite good at reconstructing the missing color information using fancy demosaicing algorithms. It also helps that our eyes are not especially good at seeing sudden color changes unless they coincide with sudden brightness changes. So even when viewed at “100%”, camera pixels can look surprisingly sharp.
But wouldn’t we need more pixels for larger prints such as A2 paper? Not necessarily: if you view big prints from a larger distance in order to see the entire composition, the required resolution saturates at the (angular) resolving power of our eyes.
You will be hard-pressed to buy a modern SLR camera with less than 12 MPixels (see Figure 3), so those extra MPixels allow you to crop your images (“digital zoom” during post-processing, again assuming your lenses are top-notch) – and to impress your male friends.
Figure 3 shows how MPixel values evolved over time. The vertical axis thus corresponds to the general public’s rather inaccurate view that MPixels mean image quality. This view can be tested by comparing Figures 2 (image quality) and 3 (MPixels). For example, take the yellow Canon G-series: between the G10 and G11, the resolution was actually reduced from 14.7 to 10 MPixels while the image quality went up. These 10 MPixel models (the G11 and G12, and their respective twins, the S90 and S95) were well received by photographers looking for a small extra pocket camera.
But Having Too Many MPixels Doesn’t Hurt Either
More MPixels imply larger image files and obviously slow down processing and file transfers. But the good news is that extreme MPixel counts do not necessarily harm image quality – despite some tenacious claims to the contrary.
The reason for this is that when you scale down to a lower resolution (often done automatically when you print or view the results), the resulting noise and dynamic range are equivalent to what you would have gotten if you had started off with a sensor that had the required target resolution.
Let’s look at this more closely – but without scaring you off with actual formulas.
Figure 4 shows an analogy: measuring the rate of rainfall by collecting rain in measuring cups. We could measure the rainfall with a single large bowl. Or, alternatively, we could use 4, 16 or 64 smaller cups. In all these cases the effective area used for catching drops is kept the same.
In the case with 64 cups, I exposed these cups to a simulated rainfall that caused each cup to get on average 5 drops of rain during the exposure. For visual clarity I used really big drops (hailstones) or really small cups. However, for the signal-to-noise ratio the size of the cups doesn’t matter. Due to the statistics (a Poisson distribution with λ=5, in the jargon), on average only 17% of the cups will contain exactly 5 drops of rain. Some will have 4 drops (17% chance) or 6 drops (15% chance), but some (4%) may even contain 9 drops or stay empty during the measurement interval (0.7%).
This phenomenon explains a major source of pixel noise (“photon shot noise”) which is unavoidable and especially noticeable with small pixels, in dark shadows and at high ISO settings. The corresponding light level is shown projected as a gray-scale image below the cups: empty cups correspond to black pixels and full cups to white pixels.
Now let’s look at the array with 16 (instead of 64) cups. Each cup is 4× larger and will thus, on average, catch 20 drops instead of 5 drops. But, after scaling, the measurements obviously result in the same estimated rainfall. Due to statistics, we may occasionally (9% chance) encounter 20 drops in a cup, but we will likely also encounter 18 (8%), 21 (9%), and 25 (5%) drops. The chances of observing 4 or 36 drops are negligible – but non-zero. So, although larger cups will have slightly more variation in terms of drops than smaller cups, the variations expressed as uncertainty in the amount of rainfall/m² will actually decrease as the cup size increases.
So the point is that when using smaller cups/pixels, proper scaling using all available measurement data allows us to get exactly the same signal and noise levels as when using bigger cups/pixels. In terms of cups, a set of 4 cups will tell you exactly what a single bigger cup would have measured: just pour the content of 4 cups into one big cup.
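The percentages quoted above come straight out of the Poisson formula, and you can reproduce them with a few lines of Python (a sketch; the `poisson_pmf` helper is my own, not something from the article or DxO):

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k drops land in a cup that catches lam drops on average)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Small cups: lambda = 5 drops on average
for k in (0, 4, 5, 6, 9):
    print(f"lam=5,  P({k} drops) = {poisson_pmf(k, 5):.1%}")
# Prints 0.7%, 17.5%, 17.5%, 14.6% and 3.6% - the percentages in the text.

# Bigger cups catch more drops, and the *relative* spread shrinks as 1/sqrt(lam):
for lam in (5, 20, 80, 320):
    rel_noise = math.sqrt(lam) / lam
    print(f"lam={lam:>3}: relative noise = {rel_noise:.1%}")
# 44.7%, 22.4%, 11.2%, 5.6% - each 4x larger cup halves the relative noise.
```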
Per-pixel Sensor Noise
Our cups-and-drops analogy gives a basic model of pixel behavior when there is enough light. Real pixels in, say, a 12 MPixel APS-C Nikon D300 can hold on the order of 40,000 free electrons knocked loose by those speedy photons. For compact cameras that number is lower because they have smaller photodiodes; for medium-format sensors it can be higher.
λ=40,000 implies a noise level of 200 (= the square root of 40,000) electrons and thus a signal-to-noise ratio of 200:1 (“46 dB” in engineer-speak). This is under the best possible circumstances: it holds for the noise within an extreme image highlight at the camera’s lowest ISO setting. So instead of λ=5, λ=20, λ=80 and λ=320 as shown in Figure 4, actual sensors have values like λ=40,000. At λ=40,000 the basic principle and the math stay the same, although the noise levels can be imperceptible.
However, when parts of the image are exposed four stops lower (-4 EV, 6% gray) than the highlights, you catch 40,000 / (2×2×2×2) drops or λ=2,500. This gives a noise level of 50 drops. So the signal-to-noise ratio is now down to 50:1 (“33 dB”). That’s still pretty good, but you might be able to notice the noise. This is why you sometimes see noise in shadows even at 100 ISO.
If we make matters worse by boosting the ISO from say 100 to 3200, we are essentially underexposing by a massive 32×. You knew that higher ISO settings on digital cameras ‘only’ underexpose and then brighten the result by analog amplification or digital scaling, didn’t you? So exposing our dark 6% gray at 3200 ISO leaves us with an average signal level of just 78 electrons, with a noise level of at least 9 electrons – resulting in a highly visible signal-to-noise ratio of 9:1.
It is worth noting that, except for the 40,000-electron “full well capacity”, none of this can be changed by smart engineers or negotiated away by their managers. It’s just math.
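The arithmetic in the last few paragraphs boils down to one rule: for photon shot noise, the signal-to-noise ratio is the square root of the number of electrons collected. A small sketch (the 40,000-electron full-well figure is the D300-class example used above):

```python
import math

FULL_WELL = 40_000   # electrons at saturation, base ISO (D300-class example)

def shot_noise_snr(electrons):
    """Shot noise: sigma = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    snr = math.sqrt(electrons)
    return snr, 20 * math.log10(snr)   # as a ratio and in dB

# Extreme highlight at base ISO: ~200:1, ~46 dB
print(shot_noise_snr(FULL_WELL))

# Same scene, 4 stops down (6% gray): 40,000 / 2**4 = 2,500 electrons, ~50:1
print(shot_noise_snr(FULL_WELL / 2**4))

# 6% gray at ISO 3200: a further 32x underexposure leaves ~78 electrons,
# giving an SNR of roughly 9:1 - clearly visible noise.
print(shot_noise_snr(FULL_WELL / 2**4 / 32))
```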
But… Per-Pixel Noise Is Not Very Relevant
This gets us back to “smaller pixels give higher noise levels per-pixel”. But per-sensor-pixel noise is the wrong metric for prints (or, for that matter, any other way to view an image in its entirety). Printing implies scaling (let’s assume down) to a fixed resolution. If the resolution scaling is done carefully, it exactly cancels out the extra per-pixel noise which you get by starting off with smaller pixels.
So the following options for reducing image resolution – according to this basic model – give you the same signal levels and the same noise levels:
Starting off with a sensor which has large pixels (low resolution) with the same total light-sensitive area.
Using a higher resolution sensor, but combining the analog quantities before going digital. This is like pouring 4 small cups into a bowl before measuring (“analog binning”).
Using a higher resolution sensor, measuring the output per pixel and then scaling the results down by averaging (“digital binning”[ 20]).
Using a higher resolution sensor, capturing all the information in a file, and letting a PC do the downscaling.
An example: this means that a 60 MPixel sensor in a Phase One P65+ camera back should give the same print quality and the same DxOMark Sensor score as:
a hypothetical 15 MPixel sensor with the same medium-format sensor size
an image that is downscaled within the camera to 15 MPixels
an image that is downscaled during post-processing to 15 MPixels
By coincidence (as I later heard from a DxO expert) the benchmarking guys had actually tested the second scenario for the P65+ digital back: in its “Sensor+” mode with 15 MPixel Raw output files, it gets the same DxOMark Sensor score as in its 60 MPixel native mode. This helps validate the model used here for scaling noise when the resolution is scaled.
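The equivalence of these binning options follows directly from Poisson statistics, and can be checked with a quick simulation. The numbers below (625 electrons per small pixel, so that four small pixels match one 2,500-electron large pixel) are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
mean_per_small_pixel = 625   # four small pixels ~ one 2500 e- large pixel

# Option 1: one large pixel covering the whole area
large = rng.poisson(4 * mean_per_small_pixel, n_trials)

# Option 2: four small pixels, combined after measurement ("digital binning")
small = rng.poisson(mean_per_small_pixel, (n_trials, 4)).sum(axis=1)

# Both should show the same SNR: 2500 / sqrt(2500) = 50
for name, x in (("one large pixel ", large), ("4 binned pixels ", small)):
    print(name, round(x.mean() / x.std(), 1))
```

The sum of four independent Poisson variables with mean 625 is itself Poisson with mean 2,500, which is exactly the statistics of the single large pixel: the extra per-pixel noise of the small pixels cancels out when they are combined.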
Resolution and DxOMark Sensor Score
As discussed above, the DxOMark Sensor score is “normalized” to compensate for differences in sensor resolution. To summarize: the DxOMark Sensor benchmark doesn’t “punish” high-resolution sensors for having lots of small pixels that are each individually noisier. And similarly, the benchmark doesn’t favor using large pixels despite their lower per-pixel noise. This is not some kind of ideology: it is just estimating the resulting noise level when viewing the entire image.
OK. Let’s go back to the data shown in Figure 1d. Despite all the theory which explains why MPixels shouldn’t impact image-level noise, Figure 1d does show a trend that higher-resolution sensors produce higher DxOMark Sensor scores – which essentially means “less noise”.
Question: So why don’t we find 10-16 MPixel sensors with top DxOMark Sensor scores?
Answer: Technically it can be done, but it’s not a commercially interesting product. To make one, you would use a large sensor (like the D3x’s) or even larger, and fill it with say 12 MPixels. But, as we explained above, this hypothetical 12 MPixel D3x-lite should perform just like a real D3x whose output images were downscaled to a lower resolution. So there is no major benefit in designing such a hypothetical D3x-lite compared to a D3x – and you would lose the option of using the high-resolution mode.
Question: If high-resolution is painless, why not provide say 50 MPixel APS-C sensors?
Answer: The pixel pitch would drop to about 2.5 µm. At that resolution, lenses are generally the bottleneck – so you won’t see much improvement in resolution. And for extremely small pixels, the assumed idealized scaling (with an assumed constant fill factor and constant quantum efficiency) may no longer hold: four 2.5×2.5 µm pixels together would capture less light than one 5×5 µm pixel (wiring gets in the way, mechanical tolerances on filters, “fill factor”, etc.). At some point this increase in noise would reduce the DxOMark Sensor score.
Impact of Larger Sensors on Our Lenses
It should be clear by now that larger sensors (rather than larger pixels!) can produce less noisy images. This is simply because a larger sensor area can capture more light – and for reasonable resolutions this is pretty independent of the amount of MPixels the sensor’s surface has been divided into.
But to capture more light within the same exposure time, you need a proportionally larger lens. An example:
Take a 105 mm f/2.8 lens on a full-frame camera as reference.
And now we compare it to a medium-format camera with twice the sensor surface area of a full-frame sensor.
If we try to use the 105 mm lens, it may not properly fill the 1.41× larger image circle. And if it did, we would have an increased field of view – which is not a fair comparison. So we use a 150 mm lens with a suitable image circle instead of the 105 mm full-frame lens.
If the 150 mm lens is also f/2.8, we get the same exposure times. But f/2.8 at 150 mm requires an effective front-lens diameter 1.41× that of a 105 mm f/2.8 lens.
This means that the diameter of the front lens has increased proportionally with the diagonal of the image sensor. And that the area of the front lens has increased proportionally to the surface area of the sensor.
Which sounds sensible: bigger sensors require bigger glass if you want the same shutter speeds. Alternatively, you can use a 150 mm f/4 lens. Either you underexpose your image 2×, and get no noise level improvement over the original full-frame sensor. Or you expose twice as long, using a tripod if needed. But then it would have been fairer to benchmark against a 105 mm f/4 lens as well.
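The lens arithmetic above can be sanity-checked in a few lines, using only the definition of the f-number (focal length divided by entrance pupil diameter):

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    # f-number = focal length / entrance pupil diameter, so:
    return focal_length_mm / f_number

ff = entrance_pupil_mm(105, 2.8)   # full-frame reference lens: ~37.5 mm
mf = entrance_pupil_mm(150, 2.8)   # medium-format equivalent:  ~53.6 mm

print(mf / ff)          # ~1.43, close to the sqrt(2) = 1.41 diagonal ratio
print((mf / ff) ** 2)   # ~2.0, matching the 2x sensor surface area
```

So the front-lens diameter grows with the sensor diagonal, and the front-lens area grows with the sensor area, as claimed.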
Q: Why couldn’t I overexpose the full-frame camera to catch more light just like the medium-format camera?
A: Just like film, silicon saturates at a particular level of photons per unit area. To avoid that, you have to close the shutter before the highlights have reached that level.
In this final part, we examine how the DxOMark Sensor score relates to three more basic metrics.
So What Were We Measuring Again?
The DxOMark Sensor score is itself computed using (measured and then resolution-normalized) figures for:
Noise levels: what is the highest ISO level that still gives a specific print quality?
Dynamic Range: ability to simultaneously render highlights and dark shadows under good lighting (low-ISO) conditions
Color Sensitivity or “color depth”: how much color (“chroma”) noise is there, particularly in the shadows under good lighting (low-ISO) conditions.
All this data (and more!) is measured and provided by DxOMark on their website.
The 3 metrics are shown in Figures 5, 6 and 7. As DxOMark’s vice-president of marketing, Nicolas Touchard, explained during a telephone interview:
The DxOMark Sensor score is under normal conditions a weighted average of noise, dynamic range and color sensitivity information. But some nonlinearities are deliberately included in the algorithm to avoid clear weakness in one area from being hidden by clear strengths in one of the other areas.
It is worth noting that these three underlying measurements are to some degree interrelated because they are all tied to sensor noise: Dynamic Range is the ratio between the brightest signal and the background noise (at low ISO). Color sensitivity or Color Depth represents whether small color differences are masked by chroma noise. And Low-light ISO tells you what ISO levels give equivalent noise levels on different cameras.
Although this means that some degree of correlation between the three underlying measurements is inevitable, different cameras do come out on top for each sub-benchmark. This confirms that we are not just getting to see the same data presented in three different ways.
DxO at some point tried to link the metrics to different types of photography, but fortunately DxO is starting to deemphasize this, as the mapping between measurements and use cases was not very helpful. Here were the mappings:
Landscape (Dynamic Range): enough light = low ISO
This metric assumes that you use a tripod if needed. Many non-landscape photos can also have a large contrast: architecture, portraits, night photography, weddings. A higher Dynamic Range also allows you to make larger exposure errors.
Sports (Low-Light ISO): challenging light = high ISO
This metric assumes you are forced to go to higher ISO. This is relevant for many other types of photography: street, wildlife, news, weddings, night, concerts, and family. Most photographers need to resort to high-ISO settings regularly. And some need it on a daily basis.
Portrait (Color Depth): enough light = low ISO
This metric assumes you have enough light, but may be a fair indication of what you would get with little light. Essentially it measures chroma noise in the dark parts of a low-ISO image. Portraits may not be especially critical here, as chroma noise can be filtered out (at the cost of resolution) or you may be able to increase your lighting levels.
So all-in-all, I indeed wouldn’t take the names Landscape, Sport, and Portrait too seriously. At best they are nicknames, and particularly “Portrait” is the least accurate of the bunch.
We will discuss how the 126 cameras perform on these three metrics below.
Dynamic Range at Low ISO
Here is DxOMark’s definition for their Dynamic Range metric:
Dynamic Range corresponds to the ratio between the highest brightness a camera can capture [..] and the lowest brightness [..] when noise is [as strong as the actual signal].
So far, this is a pretty standard definition. It tells you how many aperture stops of light (EV = bit = factors of two) can be captured in a single exposure. It is analogous to asking how much water a bucket can hold, expressed in units that represent the smallest reliably measurable volume.
Hunting a bit more through the documentation you find that the Dynamic Range value (in “Print” mode) is
normalized to compensate for differences in sensor resolution.
This scaling normalizes to a resolution of 8 MPixels. The choice of 8 MPixels is essentially arbitrary: it only gives a fixed offset (in EV) in the Dynamic Range scores. And you will find that the Dynamic Range used in the overall benchmarking is the maximum Dynamic Range as
measured for the lowest available ISO setting [typically between 50 and 200 ISO].
Today’s sensor with the highest Dynamic Range score (the Pentax K-5) spans 14 stops at 80 ISO. DxOMark’s Dynamic Range plot for the K-5 shows that its Dynamic Range drops by almost 1 EV each time the ISO is doubled. The ISO setting of the K-5 thus corresponds closely to an ideal amplifier that amplifies signal level and noise level equally without adding noise of its own. That is nice.
Various other cameras, like Canon’s 5D Mark II, show hardly any Dynamic Range improvement when you decrease the ISO from 800 to 100. This indicates significant background noise in the 5D2 that has been largely avoided in the K-5 or Nikon D7000.
The data in Figure 5 confirm that larger sensors tend to have a larger Dynamic Range than smaller ones, but there is still a very significant variation within any sensor size. The exceptional Dynamic Range figures for the K-5 and D7000 will likely be exceeded by next generation full-frame and medium-format cameras.
The Dynamic Range scores of the FujiFilm FinePix S3 and S5 models are worth pointing out here because they have exceptional Dynamic Ranges, especially considering that they were introduced back in 2004/2006. This was achieved by combining large and small photodiodes on the same sensor. The small photodiodes capture the highlights, while the larger ones simultaneously capture the rest of the image.
Exercise: If you want to play with the data a bit, you can look up (under DxOMark’s tab “Full SNR”) the gray level at which the signal-to-noise ratio drops to 0 dB for the 80 ISO curve. For the K-5 this is a near-black with only 0.008% reflectivity. The brightest representable shade is 100%. So the ratio is 100/0.008 = 12500:1 which gives log(12500)/log(2) = 13.6 stops.
But we are not done yet: the “Full SNR” values in that particular DxO graph are not resolution-normalized. So we still need to scale from 16.4 MPixels down to 8 MPixels – a resolution ratio of roughly 2:1. The noise amplitude scales with the square root of this ratio, thus giving an extra log2(sqrt(16.4/8)) ≈ 0.5 stop of Dynamic Range when scaled to 8 MPixels. The value listed by DxOMark for their normalized Dynamic Range should thus be roughly 13.6+0.5=14.1. The actual listed value is 14.1. Apart from proving that we still kind of understand how the benchmark works, this exercise shows that a twofold difference in resolution corresponds to a 0.5 EV difference in Dynamic Range.
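The exercise can be replayed in a few lines. Averaging roughly two pixels into one halves the noise power, which buys 0.5 × log2(resolution ratio) extra stops of Dynamic Range:

```python
import math

# K-5 example: the "Full SNR" graph shows SNR hitting 0 dB at a near-black
# gray level of 0.008% reflectivity at ISO 80; the brightest shade is 100%.
per_pixel_dr = math.log2(100 / 0.008)   # ~13.6 stops, before normalization

# Normalizing from 16.4 MPixels to 8 MPixels averages ~2 pixels into one.
# Noise amplitude shrinks by sqrt(ratio), adding 0.5 * log2(ratio) stops.
normalization_gain = 0.5 * math.log2(16.4 / 8)   # ~0.5 stop

print(round(per_pixel_dr + normalization_gain, 1))   # ~14.1 EV
```

This matches the 14.1 EV that DxOMark lists for the K-5’s normalized Dynamic Range.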
Low-Light ISO Score
Here is DxOMark’s definition for their low-light ISO score:
Low-Light ISO is then the highest ISO setting for the camera such that the Signal-to-Noise ratio reaches this 30dB value [32:1 ratio at 18% middle grey] while keeping a good Dynamic Range of 9 EVs [512:1 ratio] and a Color Depth of 18 bits [roughly 64×64×64 colors].
This is a rather complex definition with multiple built-in non-linearities: you are essentially supposed to increase the ISO value until you violate any one of the three criteria. Due to this definition, the outcome can be anywhere in the ISO range – not just values normally considered to be high ISO.
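The three thresholds can be written down as a simple predicate. This is only a sketch of the definition: DxOMark of course measures continuous curves and interpolates between ISO settings, rather than evaluating a function like this.

```python
def passes_low_light_criteria(snr_db, dynamic_range_ev, color_depth_bits):
    """The three thresholds from DxOMark's Low-Light ISO definition:
    SNR >= 30 dB at 18% gray, Dynamic Range >= 9 EV, Color Depth >= 18 bits.
    """
    return (snr_db >= 30
            and dynamic_range_ev >= 9
            and color_depth_bits >= 18)

# Hypothetical measurement values, just to exercise the predicate:
print(passes_low_light_criteria(32.0, 10.5, 21.0))  # all criteria met
print(passes_low_light_criteria(31.0, 8.8, 21.0))   # fails: DR below 9 EV
```

The Low-Light ISO score is then the highest ISO setting at which this predicate still holds.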
Again, Low-Light ISO is normalized to an arbitrary reference resolution of 8 MPixels.
The general idea behind this Low-Light ISO metric is simple: it tests which ISO level still gives acceptable image quality using a semi-arbitrary criterion for what “acceptable” means. As Figure 6 shows, the best camera on this particular benchmark is the Nikon D3s (not to be confused with the D3x). Note that the 10 best ranking models on this benchmark all happen to have full-frame sensors.
The gray scaling line in Figure 6 shows how other sensor sizes would score if they performed just as well as the Nikon D3s – but with an estimated handicap to reflect differences in sensor size. Thus a Four Thirds sensor has a 4× smaller sensor area than a full-frame sensor, and would need a 4× stronger exposure (4× more light per unit area) on its 4× smaller surface to capture the same total amount of light and achieve the same signal-to-noise ratio. Indeed, some cameras like the Panasonic FZ28, the Canon S/G-series, the FujiFilm S100fs, the Panasonic GH1 and two new APS-C models perform close to this scaling line.
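The scaling line itself is just proportionality between sensor area and achievable Low-Light ISO. A sketch, where the 3200 reference value is only a rough stand-in for the D3s’s actual score (the exact number is on DxOMark’s site):

```python
def scaled_low_light_iso(reference_iso, area_ratio):
    """Low-Light ISO expected from a sensor that performs as well per unit
    area as the reference sensor.

    area_ratio: reference sensor area / other sensor area
                (> 1 for smaller sensors, < 1 for larger ones)
    """
    return reference_iso / area_ratio

full_frame_reference = 3200   # rough stand-in for the D3s score

print(scaled_low_light_iso(full_frame_reference, 4))     # Four Thirds: ~800
print(scaled_low_light_iso(full_frame_reference, 1/2))   # 2x area MF: ~6400
```

The second line is the prediction discussed next: a medium-format sensor with twice the full-frame area should reach roughly 6400 ISO on this metric.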
But the slope of the scaling line also predicts that a typical medium-format sensor should be able to deliver “acceptable” (according to the semi-arbitrary definition) images at 6400 ISO. This is 5-10 times better than the actually measured performance for medium-format sensors. Although commercially it may not be a big deal because these SUVs of the camera world are generally used on tripods or in studios with sufficient lighting, I don’t have a technical explanation yet for this performance.
Similarly, I hadn’t expected that the smallest sensors would quite manage to reach these scaled noise levels. This doesn’t mean these sensors have very low noise. On the contrary: they have to be used at e.g. 200 ISO to get the same print quality as the leading full-frame sensor at 3200 ISO. But given this unavoidable phenomenon, some actually do an admirable job.
Exercise: If you want to play with the data a bit, you can look up (under “Full SNR”) the ISO setting at which 18% gray gives a 30 dB (5 EV) signal-to-noise ratio. You should get a value for the K-5 around 600 ISO. To get the more relevant resolution-normalized ISO value, you have to replace the 30 dB criterion by 26.7 dB to compensate for resolution normalization. This should result in a score close to the 1162 ISO in DxOMark’s own results.
Low-ISO Color Sensitivity
Here is DxOMark’s definition for their Color Depth score:
Color Depth is the maximum achievable color sensitivity, expressed in bits. It indicates the number of different colors that the sensor is able to distinguish given its noise.
The metric thus looks at local color variations caused by noise. It does not cover color accuracy – presumably because that can be corrected in post processing and maybe because it opens an eXtra Large can of worms.
The benchmark values for Color Depth are again normalized with respect to sensor resolution. And, again, the phrase “maximum achievable” means that this is the Color Sensitivity at the lowest (e.g. 100) ISO settings.
As shown in Figure 7, larger sensors clearly have a larger Color Depth score. This is largely explainable by their lower noise at 100 ISO as shown with Figures 4 and 6. But color noise also depends on the choice and performance of the microscopic color filters that allow the photodiodes to measure color information (not shown in Figure 4). If less saturated color filters (“pink instead of red”) were used, the different color channels would respond only marginally differently to different colors. This would lead to higher general sensitivity of the camera, but would introduce more noise when converting to a standard color space.
For more information on the role of the “color response” of color filter arrays, see this white paper where DxO points out the impact of differences in color filter design between the Nikon D5000 and the Canon 500D.
A Color Depth value of 24 bit incidentally means that there is a total of 24 bits of information in the three color channels.
So How Fair is the DxOMark Sensor Score?
There is no simple objective answer to this important question. Probably every image quality expert would have a somewhat different personal preference for a benchmark like this. But my impression is that the benchmark is pretty useful: I analyzed the model and the data, but didn’t find any serious flaws. Furthermore, results like Figure 2 appear to be pretty consistent with traditional hands-on reviews: models that were stronger [weaker] than state-of-the-art when they were introduced (such as the Canon 40D [50D]) show up as expected in the DxOMark data. And, again, having a pretty solid metric by an independent party is better than endless discussions about what an ideal metric might look like.
The list of critical notes, suggestions and open issues that I ran into so far are all relatively subtle:
Undoubtedly complexity is a fact-of-life when you design sensors. And to DxOMark’s credit, they allow you to use just a single-figure score to compare camera body image quality. But say you have a difference of 5, 10 or 20 points: I found it very difficult to figure out what to look for in a series of real-world test photographs to confirm the difference. In fact, Theuwissen’s parameterized model for sensor noise suggests that one should be able to characterize key sensor behavior in fewer graphs, measurements and numbers.
Documentation about the way the final DxOMark Sensor score is computed from Dynamic Range, Color Sensitivity and Low-light ISO scores is not currently available. I don’t know if some manufacturers have access to this information or have figured it out by themselves. But I would prefer to level the playing field by publishing the (probably simple compared to what we already know) formula to compute DxOMark Sensor score from the 3 lower-level metrics (that are documented well enough for most purposes).
Fixed Pattern Noise treatment.
FPN is caused by physical or electrical non-uniformities in the sensor and can be largely corrected – although many cameras (like my own 5D2) don’t do this at normal exposure times. DxOMark does not attempt to distinguish between FPN (which can be subtracted away in say Photoshop) and irregular (“temporal”) noise. So if a camera automatically corrects for FPN, it scores better on the test.
How important is Dynamic Range? – Photographers run out of Dynamic Range once in a while: usually in the form of “burnt” or “clipped” highlights. What DxOMark measures is more subtle: if you make an exposure series, what quality level will the best image have? In photographer-speak: what shadow noise do you get with an ideal “expose to the right” exposure? A high Dynamic Range sensor is good, but chances are that you can’t print or even view the full range without special software. The Landscape/Sport/Portrait terms can easily confuse people who take them literally. I am tempted to interpret the 3 metrics as Dynamic Range (as DxO does), Luminance Noise (instead of Low-Light), and Chroma Noise (instead of Color Sensitivity). Those are quantities you find more often in reviews.
Why measure Color Depth at low ISO? – I doubt people can actually see color noise at low ISO. It’s hard enough to spot regular noise at low ISO, and chroma noise is even harder to see. High-ISO chroma noise seems more relevant. I suspect that the choice to use low-ISO Color Depth is an artifact of originally trying to define a metric that matched studio portrait conditions.
Metric measureable per ISO setting? – It might have been clearer to have a single “perceived image quality” metric that could be measured at different ISO levels. This is particularly relevant because some cameras excel in high ISO conditions (requires a low noise floor) while others excel in low ISO conditions (requires large sensor).
Sensor size visualization – DxOMark’s online graphs allow you to plot scores with MPixels along the horizontal axis. It would be nice to add a setting that shows sensor size instead of MPixels. This would (just like in this article) cluster comparable products together. Representing sensor size in all graphs using color might also be a worthwhile improvement because photographers tend to consider different sensor sizes as different kinds of cameras (unlike MPixel ratings).
 The repeatability of the score can be estimated by comparing the scores for virtually identical cameras. Thus, for example, the database contains a pre-production Canon 550D as well as the actual production model. Similarly, the Canon S95 and G12 models are also believed to have the same technology in a different housing.
 This is the preferred way to visualize things when the ratio between numbers is more meaningful than the difference between the numbers.
 The scale is a continuous color gradient (Matlab-style colormap). If you want to use the same coloring convention formula to represent sensor size, contact me for help.
 Sony calls this “translucent”, but this is technically not a very appropriate term. Frosted glass is translucent. Using the right term keeps Ken Rockwell happy.
 70% of the light reaches the sensor. That is equivalent to losing 0.5 stop of light. 15 points corresponds to 1 stop according to DxO, so photographing through Sony’s pellicle mirror (or through a 0.5 EV gray filter) should cost about 8 DxOMark Sensor points. Adding 8 points to the Sony Alpha 55’s score (73) brings the camera on par with the Nikon D7000 (80) and Pentax K-5 (82), which are believed to use a very similar Sony sensor.
 Because Canon is pretty much the only supplier in the 1.6× APS-C and 1.3× APS-H categories, you should compare these against e.g. 1.5× APS-C.
 Canon essentially created the mass-market for D-SLRs and had set an aggressive initial pace for innovation and price decreases.
 Some people say we are seeing Sony overtake Canon in sensor quality rather than seeing Nikon overtake Canon: Canon makes its own image sensors and Nikon reportedly buys its SLR sensors from Sony. This view is credible given that Sony’s α55 and Pentax’ K-5 (officially known to use Sony sensor) are also both best-in-class in terms of actual sensor performance. So it is quite possible that such companies will start to become serious competition for Canon and Nikon (at least in terms of sensor quality) in the coming years.
 The Pentax 645D has three times more pixels than the Pentax K-5. But as will be discussed later, this may not be as important for image quality as it may seem.
 As sensor folks say, they have the same “fill factor”, or as chip designers say, “it’s an optical shrink”. The bowl and cup shapes shown here are horizontally scaled versions of each other, thus leading to identical fill factors.
 If you have the time and courage to dive deeper, there is a tutorial series at www.harvestimaging.com that quantifies numerous sources of sensor noise. It is by Albert Theuwissen, a leading expert on image quality modeling. I created a kind of synopsis of this 100-page series in another posting.
 Expressed in millimeters, or in water volume per unit of area.
 Cups that on average catch λ drops during the exposure to rain will on average have a standard deviation of sqrt(λ) drops. To estimate the rainfall ρ we get ρ = λ× drop_volume / measurement_area. The expected value of ρ is independent of cup size. And the variation of ρ decreases when larger cups are used. In Figure 4, ρ would be the depth of the water in the cups if the cups had been cylindrical. So as λ is increased (bigger cups or longer exposure), the Signal-to-Noise ratio improves. But ultimately we care about how hard it rains, rather than caring about droplets per measuring cup. If you measure rainfall with a ruler to see how deep the puddles are, you will get a result that doesn’t depend on cup size, and the noise due to drop statistics will decrease for larger cups.
Measure the amount of water in the cup by weighing each cup. If you don’t subtract the weight of the empty cup, you have a significant “offset”. If you do subtract the weight of empty cups, the correction will not be perfect.
Assume some random errors when measuring the amount of water per cup. This “temporal” noise has a fixed standard deviation, and has most impact when the cups are nearly empty.
Assume that the cups are not perfectly shaped (“Fixed Pattern Noise”). Maybe rows or columns of cups came from the same batch and have correlating manufacturing deviations (“row or column Fixed Pattern Noise”).
Drill a hole near the top of each cup so that excess water from one cup doesn’t overflow into neighboring cups. The holes will have slight variations in their location or size: “saturation or anti-blooming non-uniformity”.
Place the cups in a tray of water. If the cups are slightly leaky (unglazed flower pots), you will get some water leaking in from the surroundings into the cups (“dark current or dark signal”). Not all cups will leak equally fast (“dark signal non-uniformity”). And at higher temperatures, you will see a bit faster leakage (sorry, it would be too tricky to emulate the exponential temperature dependency without some really fancy materials).
Break a few cups or their measurement scales (“defective pixels”).
 You would get the same statistics when you measure rain using 2 liter pans. Two liters correspond to about 40,000 drops.
 Note that although this scaling story holds for photon shot noise and dark current shot noise, other noise sources don’t necessarily scale in the same way. In particular, some very high-end CCDs can use a special analog trick (“charge binning”) to sum the pixels, thus reducing the number of times that a readout is required. This would reduce temporal noise by a further sqrt(N), where N is the number of pixels that are binned. Apart from the fact that only exotic sensors have this capability (Phase One’s Sensor+ technology), DxOMark’s data suggest that this extra improvement doesn’t play a significant role.
 Some cameras like the Canon 5D Mark II do this digitally. Canon calls these Raw modes SRaw and they have strange MPixel ratios like 5.2 : 10.0 : 21.0.
 The above does not mean that you will get exactly the same resolution-normalized results for any down-scaling scenario. It just says basic scaling laws tell us it should be possible to get close.
 Actually a quick search showed that the Phase One’s 150mm f/2.8 lens and Nikon’s 105 mm f/2.8 lens weigh the same and the Phase One has an only slightly larger filter size. But the Nikon is a macro lens and the Phase One isn’t. So maybe these two designs are internally too different or one is especially optimistic about its aperture.
 In some cases you can increase the dynamic range by taking N identical noisy exposures and averaging out the noise afterwards. This improves the SNR of temporal noise by sqrt(N) but is generally not a very attractive technique.
 According to the theory, this could be either “temporal” (normal) noise or “fixed pattern” (nonuniformity) noise in the sensor. Fixed pattern noise can be corrected via various computational or calibration tricks.
 The benchmark doesn’t depend on the actual steps (e.g. 1.0 stop or 1/3 stop) in which a user can adjust the ISO setting. Intermediate values are generated by interpolation.
 Strictly speaking, the definition doesn’t allow you to express the Low-Light ISO behavior of a camera with a small enough sensor if the camera fails to meet one or more of the three criteria at its base ISO setting. But one of the tested models (Panasonic DMC FZ28) actually has a Low-Light ISO rating that falls below the (both nominal and actual) ISO range of the camera. So apparently this benchmark accepts extrapolated results.
 Arguably the Canon S90 is the best low-light camera in the database – at least when we take its limited size into account. In fact, creating an array of about 20 identical S90 sensors would result in a full-frame sensor which would, at least in theory, slightly outperform the reigning Nikon D3s! And (again assuming one could do the tiling seamlessly and could handle all the resulting data) it would result in a 200 MPixel übersensor. Or a larger 400 MPixel medium-format sensor that outperforms all current medium-format sensors. Actually this may put Canon’s 120 MPixel “proof-of-concept” APS-H sensor (August 24th 2010) into perspective: when scaled up to full-frame, it would also have 200 MPixels.
 In particular, DxOMark’s analysis is that Color Filter Array colors that have too much overlap in their transmission spectra increase chroma noise. Too little overlap decreases chroma noise at the cost of more luminance noise. This is an example of how the details of a benchmark can impact design choices.
 It doesn’t mean that each channel is sampled at 8 bit: each channel is typically sampled at 12-16 bit. The actual formulas for Color Depth reflect the amount of noise in each channel and are too complex to explain here (integrals).
 This is more or less fair because this is what the user would like to happen. But the camera may have modes to turn this on (for 1+ second exposures), or the user could bother to take a reference exposure with the lens cap on and then perform the compensation in software. In such cases, the noise figures from DxOMark are too high. If you really want to manually subtract a “dark frame”: make sure you use the same exposure time, ISO setting and temperature as for the real image. Note that you don’t need a tripod for this. But you do want to avoid light leakage – particularly light coming in via the lens.
The Canon 5D Mark 2 was announced on Sep 17th 2008 and many early orders were fulfilled around Christmas. Unfortunately, the camera turned out to have a defect that was particularly noticeable in pictures featuring Christmas lights. The problem shows up as black dots directly to the right of small bright lights. Here “to the right” assumes landscape mode and this translates to top (or bottom) in portrait mode. The problem shows up on RAW images and presumably in JPG images as well.
I have only one photo showing this, taken during a Winter evening (Canon 70-200mm f/4L IS, 800 ISO, 1 stop underexposed, f/4, 1/60 s, 21 MPixel Raw, camera firmware 1.0.6).
Pixel peeping to the max
If you were to plot the intensity scanning left-to-right through one of these small highlights, you would expect to see the pixel brightness rise to the maximum measurable intensity, plateau at this maximum (255 on an 8-bit scale), and then decrease down to the background intensity. But (see the actual scan at the end of the article) the ramp down “overshoots” and forms a small black dot.
You normally only see these black dots when viewing at 100% and can miss them unless you are looking for them. They appear to be a digital processing artifact. One clue is the phrase “to the right of..”. The lens itself has axial symmetry, so will not know what we users consider to be the right side of the image. The sensor itself (at the photosite level) also cannot behave like this. So the problem apparently lies in analog or digital signal processing inside the camera.
Speculations on the cause
One likely culprit seems to be the “highlight tone priority” feature which attempts to avoid blown highlights. This presumably gives a local HDR-like treatment: the area around a highlight is digitally underexposed to compress the scene’s dynamic range. This helps keep the bride’s bright white wedding dress from showing burnt out highlights.
If the “highlight tone priority” algorithm works left-to-right, you could imagine that a bright spot results in an adjacent dark spot – just like you are temporarily blinded by the headlights of an oncoming car at night: the feedback loop in your vision, which constricts the pupils and probably also reduces the sensitivity of the retina itself, needs some time to adjust to the “normal” darkness again.
But this assumes that “to the right” is somehow associated with “later” in time. A digital filter is normally designed to work symmetrically: the information needed to compensate in all directions (left, right, up, down) is available once you have a bitmap stored in memory. So the “black dot to the right” is reminiscent of analog processing, whereby the photosites are read out left-to-right and first processed using analog circuitry. The fundamental reason for this is that an analog filter fed with a time-dependent signal can only react to the present (the current signal value) or the past (previous signal values), and not to the future (the pixels still to be processed). A simple analog gain control loop (e.g. as used to regulate audio levels) shows exactly such behaviour: after a strong signal, it may temporarily be blinded.
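This “blinded after a strong signal” behaviour of a causal gain-control loop can be sketched in a few lines (a toy model, certainly not Canon’s actual circuitry; the attack/recovery parameters are arbitrary):

```python
import numpy as np

def causal_agc(signal, attack=0.9, recovery=0.05):
    """Toy one-directional (causal) automatic gain control.

    The gain drops quickly when a strong sample arrives ('attack') and
    recovers only slowly afterwards ('recovery') - it can react to the
    present and the past, but never to pixels still to come.
    """
    gain = 1.0
    out = []
    for x in signal:
        out.append(x * gain)
        if x * gain > 1.0:          # blinded by a strong sample...
            gain *= (1.0 - attack)  # ...drop the gain hard
        else:
            gain += recovery * (1.0 - gain)  # ...then creep back toward 1
    return np.array(out)

# A flat grey scanline with one bright highlight in the middle
scanline = np.full(20, 0.2)
scanline[8:11] = 5.0
processed = causal_agc(scanline)
# Pixels just right of the highlight come out darker than the background,
# while pixels to the left are untouched: an asymmetric 'black dot'.
```

Running this, the samples left of the highlight stay at 0.2 while the samples immediately to its right drop well below 0.2 – exactly the asymmetry seen in the photos.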
Fortunately, speculations about the detailed cause of the black dots are no longer relevant. Canon supplied a firmware upgrade (1.0.7) in early January which fixes the problem.
The fact that it could be fixed by a digital modification may suggest that it is a digital algorithm problem (you can design algorithms that have this blinded-to-the-right phenomenon simply by emulating an analog filter), or that the analog processing is digitally controlled, or that the problem can be masked by digital means. Current collective Internet opinion suggests that even critical users are satisfied with the patch. Unfortunately the firmware patch has the side effect that the raw converters created for the Canon 5D2 now need software upgrades, but this is a one-time inconvenience for early adopters. So either the patch fixes the problem rather than masking it, or the masking simply works well enough.
Since I took the above picture, I have upgraded to the 1.0.7 firmware and will hopefully not run into the problem again.
The details are even weirder
Intensity scan done manually at 11x magnification in Lightroom 2.2 (using Adobe Camera Raw version 5.2)
The graph shown above shows the intensity of the Red, Green and Blue channels when I scanned manually (in Lightroom) from left to right through the topmost (of the two) Christmas lights. The basics are obvious: the light has a width of about 16 pixels. Blue is a little less intense on the left side, leading to the yellow color. There is indeed a dip around X=2 which is clearly lower than the intensity around X=16. The peak at X=6 is due to the proximity of a second light source.
But there are some puzzles:
The dip at X=2 is not convincingly lower than the intensity at X=-18 or X=13. This might be explainable by non-linearity: you first need to combine R/G/B into a single number, which might just be lower at X=2 than at X=13. And you apparently need to subtract a black level from that. Some have reported that the dip is actually lower than the black level you subtract, leading to a negative light intensity.
The RGB readout of Lightroom shows the dip at X=2, while the darkest spot on the screen is clearly at X=0. This is likely a bug in Lightroom 2.2, but needs validation.
The RGB readout of Lightroom (at 11x) shows 11×11 pixel squares. Strangely, the Lightroom cursor seems to read out varying intensities within a square (as if you could read out the intensity at sub-pixel resolution). So the actual values will tend to vary a bit if you repeat the experiment.
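The black-level puzzle above can be made concrete with a toy calculation (the luma weights and the black level of 128 are made-up example values, not Canon’s actual raw pipeline):

```python
def luminance(r, g, b, black_level=128):
    """Combine R/G/B into a single intensity and subtract a black level.

    Raw files store values with an offset (the 'black level'); if a
    pixel in the dip is darker than that offset, the computed light
    intensity goes negative - physically impossible for real light.
    """
    # Rec. 601 luma weights; the black level of 128 is a made-up example
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y - black_level

print(luminance(200, 210, 190))   # normal pixel: positive intensity
print(luminance(100, 110, 105))   # pixel in the dip: negative intensity
```

This is what “lower than the black level you subtract” means in practice: the subtraction yields a value below zero.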
What happens if you use a lens designed for the common 15×22 mm sensor on a full-frame (24×36 mm) camera?
To start with, I will regularly use the Nikon naming conventions below: DX for “digital” or 1.5×/1.6× sensors, and FX for “full frame” or 24×36 mm sensors. Alternative names for the smaller sensor format are “EF-S” (for Canon’s DX lens series), APS-C (after a particular film format), “reduced image circle” (the main issue), or “digital” (which was accurate back when there were no FX sensors yet).
Can you safely mount a DX lens on an FX camera?
On Nikon you can mount a DX lens on an FX camera, and the camera (Nikon D3, D700, D3x) responds by automatically reducing the coverage of the viewfinder image and cropping the resulting JPG or raw image. This is a sensible thing to do, as the lens was designed for smaller sensors: the extra field of view covered by the larger sensor will have (often much) lower quality than the DX part.
Canon’s EF-S lenses, in contrast, are mechanically blocked from mounting on an FX camera. Canon’s story is that, because the FX mirror is larger than the DX mirror, there is a risk of the larger FX mirror hitting the back of the EF-S lens and destroying the mirror and maybe the lens. In other words, this mechanical safety measure allows Canon to design EF-S lenses that are physically unsafe to use on an FX camera like the Canon 1D or 5D series. The mechanical block is actually a special plastic ring at the back of every EF-S lens. If you are brave enough, you can remove that ring with a screwdriver (at your own risk) according to some sources (link1, link2, link3).
Third-party lenses and early cameras
There are a few special cases:
Canon’s first DX camera models were released before Canon decided to create a dedicated EF-S line of lenses. These early models (the Canon D30, D60 and 10D) were a bit of a hybrid: a DX camera trapped in an FX casing. Adapting Canon’s EF-S 10-22 mm lens to fit on these pre-EF-S DX models should be pretty safe (small mirror).
Third-party lens suppliers also sold DX-type lenses, but probably all without the mechanical interlock. This is because the same optics were typically sold in Canon, Nikon, etc. versions, and Nikon in particular didn’t have an equivalent safety feature to block using a DX lens on an FX (film) body. One example is the Tamron 11-18 mm lens, which was designed for DX-type cameras. It is possible to safely mount this lens on a full-frame camera – the lens does not have parts that protrude further into the camera body than FX lenses do, and it worked for me.
Shots with a DX wide-angle on an FX body
The following images show the results when you use this “Tamron AF 11-18 mm f/4.5-5.6 Di II LD Aspherical (IF) SP” on a Canon 5D Mk 2 body. The same should apply for other Canon FX bodies. And likely a Nikon FX body with the Nikon version of the lens.
Note that in the 18mm image, the left half shows the original vignetting from the camera, while in the right half the vignetting has been corrected in software (Lightroom 2). Although the camera also has an in-camera PIC feature to compensate for vignetting, by default it only kicks in for 25 specific (Canon) lenses.
One can argue that the version with the vignetting looks better: apparently the eye doesn’t mind vignetting in many cases. Note that in these images, chromatic aberration has been corrected using Lightroom. Chromatic aberration is a major problem with wide-angle lenses, especially far from the image center.
At 14mm, the vignetting becomes a bit disturbing at the top and probably needs to be cropped there. But again, the vignetting at the bottom is fine.
At 11mm, the vignetting is extreme. Unfortunately you cannot capture the entire image circle on this particular lens. If you crop enough to show a rectangular area without any vignetting, you might have been better off taking the same shot at 14 mm or more. But it might be beneficial if you intend to create a wide view, but need a square image rather than the sensor’s 2:3 aspect ratio.
The cropping itself is not much of an issue on the various 20+ MPixel high-resolution FX cameras (Canon’s 1Ds, 5D Mk2, Nikon D3x), as the resolution is high enough – but it is still unlikely that zooming and cropping will give you better quality than using a longer focal length and not cropping.
You shouldn’t expect miracles from a DX lens when used on an FX body. Here is a 100% crop of the 14 mm image shown above. The image might look better with additional sharpening.
A DX wide-angle lens can be usable on an FX body if you can get it to mount without risking damage to your mirror (an issue for Canon EF-S lenses) and are more interested in getting the shot than in worrying about edge sharpness. The edges of the image will probably be unsharp – especially with this particular Tamron lens, which is already weak at the edges of a DX image. Unsharp edges may be undesirable for landscapes or nearby buildings, but may look natural for a picture of a subject (castle, person) in the context of its surroundings (fields, room, street). In those cases, edge unsharpness simply strengthens the impression of limited depth-of-field and helps focus attention on the subject itself.
Thus, arguably, this particular 11-18 mm Tamron lens can be used as a full-frame 18 mm lens for some kinds of shots, and the even more extreme 11-14 mm settings might be more interesting as “special effects”. In general, vignetting often doesn’t harm an image and may even strengthen it.
I took the pictures below with a five-year-old 6 MPixel Canon 10D and a brand new 21 MPixel Canon 5D Mark II. Both images were made using the 24-105 mm f/4L standard zoom lens. Both images are shown here at 100%, and both crops are taken close to the center of the image. The images were taken at f/4 – not particularly optimal, as the lens is pretty good at that aperture, but not at its best. Residual chromatic aberration was manually corrected (in Lightroom 2.2) in both images, using the edges of the roof to see the aberrations clearly. The images were set to the same color temperature. It is hard to tell whether all other in-camera settings are comparable. For now, I am just hoping that the default settings allow a reasonable first comparison.
100% crop of center of image
The test thus stresses the more fundamental implications of the difference in sensor size (and potentially of in-camera processing) rather than any system implications, like having to change the zoom setting or move the camera position to get a similar field of view with the same lens.
So what can we learn from this test?
As was to be expected from the camera specifications, the 5D Mark II has a slightly higher pixel density. It corresponds to an 8 MPixel version of the 6 MPixel Canon 10D (say the Canon 20D, 350D or 30D) and can thus capture a bit more detail. This fundamental difference in image quality is visible, but not too dramatic.
Fine details are more visible on the 5D Mark II. This is best seen in the wings of the gryphons. This might be due to the difference in pixel density, but it might also be due to differences in spatial filtering inside the camera (the optical anti-aliasing filter or the demosaicing and sharpening algorithms). It can be argued that if one camera does less sharpening than the other (at these settings), this can be compensated by further sharpening in software.
Obviously the 5D2’s full-frame sensor captures a lot more image (see below), which is simply mechanically cropped away in the 10D because its sensor is 15×23 mm instead of 24×36 mm. We can use these extra pixels to do a number of things:
We can simply create a 60%×60% larger poster with slightly better per-pixel sharpness. Never mind what can be seen in the image: you just get more sharp pixels as far as the sensor is concerned.
Or we can use those extra pixels to sometimes crop the full frame image to improve composition or to give extra “digital zoom”: if you throw away more than half of the pixels, you still have the quality of say a Canon 20D or 30D.
Interestingly Nikon allows you to do this digital zoom trick in the Nikon D3, but Canon doesn’t support it in their high-end cameras. This is because Nikon needs to support DX (small sensor) lenses on their FX (large sensor) bodies. In the Canon world this trick is not supported/recommended: EF-S (small sensor) lenses are made to be physically incompatible with full-frame bodies (supposedly to prevent damage to the mirror).
Finally, as a variation of the first option, we could try to take the same picture from the same location using the larger sensor. To get the same, reduced field of view as the smaller-sensor camera, you need to use a longer lens (a 38 mm lens) on the full-frame camera. If we assume the lens has the same (pixels/mm) sharpness as the original 24 mm lens, and can more or less sustain that sharpness across the wider sensor, you get 21 MPixels instead of 8 MPixels (Canon 30D) with roughly the same per-pixel quality. Unfortunately, in reality, you won’t get all of that improvement, because the smaller sensor uses the part of the image that has the best quality: the center. The extra pixels further from the center are (especially on wide-angle lenses) of lesser quality than the ones the small sensor gets. Thus, in this example, you can already clearly see some vignetting in the corners of the image (despite having partially corrected this manually in Lightroom).
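The crop arithmetic in the options above can be sketched as follows (using the nominal sensor dimensions from this article, so the numbers are approximate):

```python
# Sensor sizes in mm (nominal values used in this article)
ff_w, ff_h = 36.0, 24.0   # full frame (FX)
dx_w, dx_h = 22.5, 15.0   # Canon-style 1.6x sensor (DX)

ff_mp = 21.0              # Canon 5D Mark II, roughly

# Cropping the 21 MPixel full-frame image down to the DX field of view
crop_mp = ff_mp * (dx_w * dx_h) / (ff_w * ff_h)
print(f"cropped resolution: {crop_mp:.1f} MPixel")   # roughly a 20D/30D

# Focal length needed on full frame to match a 24 mm lens on DX
crop_factor = ff_w / dx_w   # the familiar 1.6x
print(f"equivalent focal length: {24 * crop_factor:.0f} mm")
```

The ~8 MPixel crop and the ~38 mm equivalent focal length are the same figures used in the text above.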
What can be improved in the test?
It would be nicer from a didactic perspective to use a Canon 20D/30D/350D instead of the 10D. That gives exactly the same pixel density of 2.4 MPixel/cm² as the 21 MPixel full-frame sensor.
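As a quick check of that 2.4 MPixel/cm² figure (nominal sensor dimensions, so the result is approximate):

```python
# Pixel density for the two sensor formats compared above (MPixel per cm^2)
sensors = {
    "Canon 20D/30D (8.2 MP, 22.5 x 15 mm)": (8.2, 2.25 * 1.5),
    "Canon 5D Mark II (21 MP, 36 x 24 mm)": (21.0, 3.6 * 2.4),
}
for name, (mp, area_cm2) in sensors.items():
    print(f"{name}: {mp / area_cm2:.1f} MPixel/cm^2")
```

Both work out to about 2.4 MPixel/cm², which is why the 20D/30D would make the cleanest comparison partner.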
And it would be better to use a tripod rather than relying on the image stabilizer. And to take the images sooner after each other, to avoid the difference in shutter speed. Or at least to have had more light, to minimize the impact of the shutter speed in the first place (these pictures were taken in the late afternoon on a very cloudy winter’s day). Obviously a few images were taken to make sure that the phenomena were reasonably repeatable.
More relevantly, it would help to use an even better lens while avoiding the maximum aperture to get lens limitations out of the way as much as possible.