In this January update, ten new cameras and four extra labels were added. Sorted by descending price, the newly labeled cameras are:
Sony A7 (a 24 MPixel full-frame mirrorless camera)
Sony DSC-RX10 (performs like Sony’s two RX100 models)
Sony NEX 5T (likely the last NEX-branded model)
Sony A3000 (great image quality at low cost)
Note that all four happen to be Sony models. I add labels in graph (a) for cameras that are notable from a technical perspective, and in graph (b) for cameras with an interesting price for their performance level.
Although they haven’t been tested yet, it will be very interesting to see how the two medium format backs with Sony’s new 50 MPixel sensor perform. They might become the new record holders (although the Phase One IQ250 doesn’t allow you to shoot above ISO 6400).
Assuming you care about low light and high dynamic range performance, the best cameras have full-frame sensors (the blue dots). You knew that – right? Well, surprisingly, full-frame sensors beat even larger (purple, pink, red) sensors. So don’t bother spending big money on a medium format camera unless you really need the super-high resolution – or need to show that your equipment is clearly in a price class of its own.
The so-called APS-C cameras with 1.5x or 1.6x sensors have improved. Examples: the Nikon D5200 and D5300.
The Sony NEX-5R mirrorless (which is 1.5x) has a slightly higher price than an APS-C SLR, but the body is smaller and the performance is competitive. Mirrorless models should have the same performance as an SLR with a comparable sensor. A mirror doesn’t add image quality – it just makes a click sound like ke-lick.
Canon still has a long way to go to catch up with its APS-C sensors (1.6x). The Canon 70D performs slightly better than the old Canon 7D, but a comparison to Nikon or Sony tends to be embarrassing.
Recent Micro Four Thirds cameras (Olympus & Panasonic) have improved and are even ahead of Canon’s APS-C (1.6x) models.
The Sony RX100 and RX100-II are still doing fine – at least considering their small sensor size (2.7× crop factor, a 1″ sensor). The Nikon 1 series is technically not state-of-the-art, but nice if you like white or pink gear: it targets a young Asian lifestyle market.
The premium pocket cameras have improved. Especially the 1/1.7″ sensor models such as the Canon Powershot S120 and G16 and their Nikon equivalents.
The best deals if you need a high quality model can be found at the top edge of the cloud in diagram “b”: you get the highest quality in that price range. Note that the prices shown are official prices at introduction, and will differ from current street prices. These deals include:
The Nikon D600 and D610. These are essentially the same camera, but the D610 resolves a dust issue.
The new Sony A7R mirrorless. Note that this model uses Sony E-mount lenses, but actually requires new Sony full-frame E-mount lenses called “FE”. So it will take a while until there are enough lens options.
The Sony RX1 and RX1R. These look overpriced (and probably are – although I ordered one myself), but their price does include an excellent 35mm Zeiss f/2.0 lens. On the other hand, they do not come with an optical or electronic viewfinder; these cost about 500 US$ or Euro extra. Lens hood pricing is a joke (so look into the Photodiox accessories).
The Nikon D5200 or D5300. Both have a 24 MPixel state-of-the-art sensor, but the newer one gives sharper images (no AA filter) if your lenses are up to the challenge.
The Nikon D3200. Also 24 MPixels with state-of-the-art sensor technology.
The Pentax K50 and K500. A somewhat overlooked brand.
The Nikon Coolpix P330. A “take me everywhere” camera at a lower price point than the excellent Nikon Coolpix A or FujiFilm’s X-100s models.
Note that some major new camera models are not shown because DxO Labs simply hasn’t tested them yet. These include:
The new full-frame Nikon Df (with the professional Nikon D4’s 16 MPixel sensor). It should score about 89 (like the D4) for $3000 – nice, but not sensational unless you insist on a retro look and feel.
Most FujiFilm X-Trans models have not been tested. Tests may be delayed because they have a non-standard color filter array (complicating raw conversion). The CFA design allows the sensor to work without a low pass filter. Alternatively, the missing tests may be because FujiFilm is not enthusiastic about their cameras’ DxOMark scores (pure speculation on my part, but the FujiFilm X-100 didn’t score exceptionally well). FujiFilm high-end cameras are getting a lot of attention from serious photographers who prefer small, unobtrusive cameras with a classic mechanical feel.
The Sony A7. Many people wouldn’t really benefit from 36 MPixels (Sony A7R) without an image stabilizer or a tripod or high-end lenses.
I have been working on an update to my original DxOMark article. That update has just been published on Luminous Landscape, a well-known photography site operated by the Canadian landscape photographer and publicist Michael Reichmann.
A slightly newer version of the article is available at the DxOMark website. It features four extra cameras and almost identical text.
The article covers various aspects of image sensor size and its impact on image quality. The article is built around original benchmark data measured by DxO Labs. I have rehashed their data (with permission) to stress basic trends and highlight a few topics:
Benchmark data for over 180 high-end cameras (starting at about $400).
Which benchmark numbers by www.dxomark.com are most relevant for your needs?
The technical relationships between sensor noise, dynamic range and resolution.
A comparison of what noise does at low ISO and at high ISO (this is trickier than “doubling the ISO reduces the signal-to-noise ratio by 2×”).
The implications of using “mirrorless” cameras (and associated smaller sensors) on image quality.
The image quality of the new wave of cameras that use Sony’s new Exmor sensor with its excellent low ISO dynamic range performance. With a bit of speculation about whether Canon (which normally doesn’t use Sony sensors) can catch up with Nikon (which regularly does).
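The low-ISO versus high-ISO point in the list above can be illustrated with a toy signal-to-noise calculation. This is my own illustrative sketch with made-up numbers (not DxO data): at low ISO a pixel is shot-noise limited, so halving the light (one ISO stop) only costs a factor √2 in SNR; at very high ISO, read noise dominates and the penalty approaches the naive 2×.

```python
import math

def snr(photons, read_noise=3.0):
    """SNR of a single pixel: photon shot noise (variance = signal)
    plus Gaussian read noise, added in quadrature."""
    return photons / math.sqrt(photons + read_noise ** 2)

# Shot-noise limited (bright signal): doubling ISO costs only ~sqrt(2)
print(snr(10000) / snr(5000))   # ~1.41, not 2
# Read-noise limited (dim signal): the penalty approaches 2x
print(snr(4) / snr(2))          # ~1.8
```

This is why "doubling the ISO halves the SNR" is only true in the read-noise-limited regime.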
You can contact me about the article via comments on this website. I will also try to keep an eye on comments on the LuLa and DxOMark forums.
In 2009, Fujifilm was granted a patent on an organic layer to improve CMOS image sensor performance. The thin organic layer converts incoming light to electrical charge (like a solar cell) that is then detected by more or less conventional CMOS image sensor circuitry.
Fujifilm’s “organic sensor” technology started to receive attention recently due to speculation that it would be featured in a future high-end compact camera (expected in early 2012; possibly called the “LX10”). This camera is expected to have an interchangeable lens and target the same market as the highly successful Fujifilm X100.
Update: the Fujifilm X-Pro1 (March 2012) turned out to not contain this particular technology. Instead it has a color filter array layout which avoids the need for an anti-aliasing filter.
Although Fujifilm has not confirmed the use of their organic conversion layer in the forthcoming camera, a Fujifilm executive claimed that the interchangeable lens camera (possibly with an APS-C sized sensor) will outperform the current generation of full-frame sensors in terms of noise.
A color sensor using this technology uses a conventional color filter array to make individual pixels sensitive to green, red and blue light. In a 2009 white paper, Fuji states that the layer is insensitive to infrared light. Therefore no IR absorption filter is needed, which is in itself a minor benefit.
The organic layer is labeled “Panchromatic Photoelectric Conversion Layer” in the above diagram by Fuji. The layer is 0.5 μm thick, and converts almost all visible light to an electrical charge (electrons and holes). The organic layer, which closely resembles a solar cell technology, is sandwiched between a negatively charged transparent electrode (like in LCDs) and an array of positively charged square “Pixel Electrodes” that form the actual imaging pixels. The latter are not transparent.
In the 2009 prototype the pixels are spaced 3 μm apart. This would result in a resolution of roughly 40 MPixels for an APS-C size sensor or 22 MPixels for a Four-Thirds size sensor. Due to the gaps between the pixel electrodes, 85% of the surface area is used to capture charge. The loss of up to 15% of the incoming light can probably be neglected; the gaps could be reduced further if Fuji wants to, and they may not play any measurable role at all if the free electrons generated above the gaps reach the pixel electrodes anyway due to the applied electric field.
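The quoted resolutions follow from simple arithmetic on the pixel pitch. A quick sketch, with assumed sensor dimensions (actual active areas vary per model, which explains small differences from the figures above):

```python
def megapixels(width_mm, height_mm, pitch_um):
    """Pixel count (in MPixels) for a sensor of the given size
    fully tiled with square pixels of the given pitch."""
    pixels = (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um)
    return pixels / 1e6

# Assumed dimensions: APS-C ~ 23.6 x 15.7 mm, Four Thirds ~ 17.3 x 13.0 mm
print(round(megapixels(23.6, 15.7, 3.0)))  # ~41 MPixel
print(round(megapixels(17.3, 13.0, 3.0)))  # ~25 MPixel
```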
So a key benefit of this technology is that it is efficient compared to alternative ways to make arrays of relatively small pixels. With small pixels in a conventional CMOS sensor, the gaps between pixels (or equivalently a low “fill factor”) become a problem, especially because the wiring needed to access the pixels lies on the upper (outside) layers of the sensor. Backside illumination (BSI) addresses the latter by grinding down the chip and illuminating the sensor from the back.
Fuji stresses that their sensor doesn’t need a micro lens array to funnel the light to the pixels. Apart from a reduction in component count, this avoids color errors at the edges of the sensor. This occurs with small pixels due to crosstalk between the color channels when light hits the sensor at an extreme angle.
One key benefit of this sensor technology should be its sensitivity to light: Fuji measured a quantum efficiency of 65% in 2009 (for green light at 550 nm) but stated that this can be improved by adding anti-reflection coatings. The quantum efficiency figure seems competitive with the relatively expensive backside-illuminated sensors used in recent high-end cameras with small pixel dimensions (such as the Nikon P-100, Canon S100, iPhone 4 and iPhone 4S).
This translates to a slight sensitivity gain – and would behave like removing a very light gray neutral-density filter. Using the available data, I am unable to calculate how big this improvement is expressed in ISO. Note that the prototype sensor described in 2009 will obviously differ from Fuji’s current technology anyway.
A more interesting question is what impact the technology change has on dynamic range. The ability to funnel more light to the sensor helps sensitivity and noise. But dynamic range depends on the ratio between the maximum amount of charge the pixel can hold (at saturation) compared to the noise level. Given the high expectations set by Fuji for the 2012 camera, the benefits of the organic layer must somehow translate to a dynamic range improvement (as differences in quantum efficiency between modern sensors tend to be limited).
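The ratio described above can be made concrete with the standard engineering definition of dynamic range, DR = log2(full well / noise floor), expressed in stops (EV). A minimal sketch with hypothetical pixel numbers of my own choosing:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range: saturation charge over the noise
    floor, expressed in EV (stops). Both inputs in electrons."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical pixel: halving the noise floor buys exactly one stop of DR,
# which raising quantum efficiency alone does not do.
print(dynamic_range_stops(40000, 5.0))  # ~12.97 stops
print(dynamic_range_stops(40000, 2.5))  # ~13.97 stops
```

This illustrates why Fuji’s claimed noise advantage would have to show up as a lower effective noise floor (or a larger usable charge swing) to translate into more dynamic range.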
It is a bit early to speculate on the impact of an unannounced camera of which only leaked images seem to exist – especially when we can only guess whether it utilizes Fuji’s organic photoelectric conversion layer. But what would happen if the technology delivers? In particular, would it really close the image quality gap between, say, full-frame and APS-C sensors?
The key question for strategists is thus whether an organic layer would improve all sensor sizes and sensor resolutions to the same degree. If so, it could simply improve the quality level of what can be manufactured, but wouldn’t change the landscape of Four-Thirds versus APS-C versus Full Frame versus Medium Format sensors. Fuji’s white paper starts off with
We proposed a new CMOS image sensor with a thin overlaid panchromatic organic photoelectric conversion layer as the best candidate for sensors with reduced pixel size.
and its title also mentions reduced pixel size as the highlight of this technology.
This suggests that the organic layer helps enable the design of high-resolution sensors regardless of their size, by improving their light-gathering efficiency. This implies improved performance for tiny compact camera sensors with mere 1 μm pixels, and would provide an alternative to the cumbersome backside illumination technology used in some compact cameras and camera phones.
It could also pave the way for 20 MPixel Four-Thirds sensors, 40 MPixel APS-C sensors, and 80 MPixel full-frame sensors. Such (relatively speaking) extreme resolutions, achieved with 3 μm pixels, might be overkill for many lenses, but would indirectly increase image sharpness by promoting cameras designed without an anti-aliasing filter. Such an AA filter softens the image to get rid of details that are too fine to be resolved. But in doing so, AA filters also blur the finest details which the sensor can capture (in essence an anti-aliasing filter is a sheet of very finely frosted glass).
Because extreme resolution is not always needed and is sometimes even useless, a high-end camera could conceivably provide modes whereby the full pixel resolution is downscaled within the camera. For example, an 80 MPixel full-frame camera might normally record at 40, 20 or 10 MPixel resolution, but could be set to record the full 80 MPixels when the user wants this. Current Canon SLRs already have a similar resolution scaling feature that Canon calls SRAW, but the option is not used very much because it loses detail without shrinking the file size much. Note, however, that this route should give fine image quality, but likely costs more energy than an equivalent lower-resolution sensor (more samples to digitize, more digital operations).
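In-camera downscaling of the kind described above can, in its simplest form, be averaging of non-overlapping 2×2 pixel blocks. A generic sketch (not Canon’s actual SRAW algorithm, which works on raw Bayer data):

```python
def downscale_2x2(img):
    """Average non-overlapping 2x2 blocks: quarters the pixel count,
    halves the linear resolution, and reduces per-pixel noise by ~2x
    (averaging 4 independent samples divides noise by sqrt(4))."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

tile = [[10, 20, 30, 40],
        [10, 20, 30, 40],
        [50, 60, 70, 80],
        [50, 60, 70, 80]]
print(downscale_2x2(tile))  # [[15.0, 35.0], [55.0, 75.0]]
```

The noise benefit is the flip side of the detail loss: each output pixel is a cleaner but coarser measurement.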
During late 2009 and all of 2010, Albert Theuwissen published a 26-part series of postings on image sensor noise on his Harvest Imaging website. The series explores various sources of image sensor noise and their relationship to signal strength. The series targets sensor designers, and those who use sensors in challenging applications. Probably many of the Harvest Imaging readers know Prof. Theuwissen from his courses, workshops and conferences.
The series centers around a proprietary simulation model (written in Matlab/C?) in which Theuwissen selectively isolates each noise source encountered in a sensor to show its impact on overall image noise. Every installment revolves around a graph (James Janesick’s Photon Transfer Curve, PTC) that plots sensor noise against sensor signal. The graph is thus closely related to the signal-to-noise ratio, but it stresses how the ratio varies
as the sensor is exposed to darkness for varying durations, and
as the sensor is exposed to light for varying exposures.
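The qualitative shape of such a photon transfer curve can be sketched with a toy noise model. This is my own simplification (not Theuwissen’s simulation): a read-noise floor, photon shot noise, and PRNU added in quadrature.

```python
import math

def total_noise(signal_e, read_noise_e=5.0, prnu=0.01):
    """Toy PTC model (all quantities in electrons): temporal read noise,
    photon shot noise (variance = signal), and PRNU (proportional to
    signal) add in quadrature."""
    shot_var = signal_e            # shot-noise variance equals the signal
    prnu_sigma = prnu * signal_e   # PRNU grows linearly with signal
    return math.sqrt(read_noise_e ** 2 + shot_var + prnu_sigma ** 2)

for s in (1, 100, 10000, 100000):
    print(s, round(total_noise(s), 1))
# On a log-log plot this produces the classic three PTC regions:
# a flat read-noise floor, a slope-1/2 shot-noise region, and a
# slope-1 PRNU region at high signal.
```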
If you are not a sensor expert, you can try to use the series to learn about sensor behavior – provided you can handle a bunch of basic formulas and are willing to learn the associated terminology (which is not entirely consistent across postings). Hence these notes, which try to follow the terminology used in later postings (where up to 15 noise sources needed to be distinguished).
Incidentally, the word “Harvest” in the domain name of Albert’s website is after the title of a Neil Young album: Prof. Theuwissen is somewhat of a Neil Young fan.
CDS = correlated double sampling (a kind of self-calibration technique with differential amplifiers, see Wikipedia)
DN = digital number (simply the digital value read out after measuring the analog signal)
DSNU = Dark Signal Non-Uniformity (differences in dark current signal build-up due to variations between individual pixels)
FPN = fixed pattern noise (small pixel-to-pixel deviations that don’t change over time)
k = gain in DN/electron
PRNU = photo-response non-uniformity (differences in pixel sensitivity to light)
PTC = photon transfer curve = signal versus noise graph
RTS = random telegraph signals (random jumping between fixed output levels)
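As an aside, the gain k can be estimated from the shot-noise region of a PTC: because shot-noise variance in electrons equals the signal in electrons, the variance in DN² equals k times the mean in DN. A small synthetic sketch (hypothetical numbers, with a Gaussian approximation of Poisson shot noise):

```python
import random

random.seed(42)
k_true = 0.05  # true gain in DN/electron (hypothetical)

# Simulate uniformly illuminated frames at three exposure levels:
# shot noise in electrons (std = sqrt(signal)), scaled by the gain to DN.
means, variances = [], []
for electrons in (1000, 4000, 16000):
    samples = [k_true * random.gauss(electrons, electrons ** 0.5)
               for _ in range(20000)]
    m = sum(samples) / len(samples)
    v = sum((x - m) ** 2 for x in samples) / len(samples)
    means.append(m)
    variances.append(v)

# In the shot-noise region variance(DN^2) = k * mean(DN),
# so the variance/mean ratio recovers the gain.
k_est = sum(v / m for v, m in zip(variances, means)) / len(means)
print(round(k_est, 3))  # ~0.05
```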
The following table lists all the noise sources that occur in absolute darkness. They also occur when there is light (but with light there are extra noise sources). They are discussed in more detail below.
All noise sources are measured by resetting the pixels, and then reading out the pixel after a short or longer delay. I classified the noise sources based on their time behavior (the table columns) and their source (the table rows). The central message of the Harvest Imaging series is that you can distinguish these noise sources in actual measurements by analyzing noise build-up over time (to distinguish the table columns) and by different ways of averaging the individual pixel measurements (to isolate fixed sensor line and column pattern noise).
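The averaging tricks mentioned here can be sketched on synthetic dark frames (illustrative noise magnitudes of my own choosing): averaging many frames suppresses temporal noise by √M and leaves the fixed pattern noise, while differencing two frames cancels the FPN and leaves √2 times the temporal noise.

```python
import random

random.seed(1)
W = H = 64
# Fixed pattern noise: per-pixel offsets that never change (sigma = 2)
fpn = [[random.gauss(0, 2.0) for _ in range(W)] for _ in range(H)]

def dark_frame():
    """One dark read-out: fixed pattern plus fresh temporal noise (sigma = 4)."""
    return [[fpn[y][x] + random.gauss(0, 4.0) for x in range(W)] for y in range(H)]

def std(frame):
    flat = [v for row in frame for v in row]
    m = sum(flat) / len(flat)
    return (sum((v - m) ** 2 for v in flat) / len(flat)) ** 0.5

# Averaging 64 frames shrinks temporal noise 8x -> mostly FPN remains (~2.0)
frames = [dark_frame() for _ in range(64)]
avg = [[sum(f[y][x] for f in frames) / 64 for x in range(W)] for y in range(H)]
print(round(std(avg), 2))

# Differencing two frames cancels FPN -> sqrt(2) * temporal noise (~5.7)
a, b = dark_frame(), dark_frame()
diff = [[a[y][x] - b[y][x] for x in range(W)] for y in range(H)]
print(round(std(diff), 2))
```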
The conclusion of the series is that you can distinguish many of the noise sources by appropriate measurements on a sensor. And that the estimated parameter values can be pretty accurate.
The Amplifier offset (p=154) has such a bad effect on low signal measurements that it is assumed to be corrected away in most of the PTC graphs.
A noise source not listed above, Saturation Non-Uniformity (p=142), is only relevant for severely overexposed pixels. This can happen during normal exposures, but this part of the dynamic range is normally hidden from the user because of its non-linearity and non-uniformity.
Careful: there may be multiple definitions of Temporal Pixel Noise, either including or excluding temporal row/column noise. When you simply measure the pixel noise, you get the “including” variant; when you do a lot of analysis or create a synthetic model, you get “excluding”. A similar ambiguity may exist for pixel-level FPN.
The sample sensor used in all computations
Calculations are done on a hypothetical 160×120 pixel sensor. Given the assumed full-well capacity of 17,500 electrons, the pixels may have a pitch of around 3-5 μm. So the data would correspond to a small section of a larger sensor with pixel dimensions that are likely between compact camera pixels and SLR camera pixels. The physical dimensions are not directly relevant for any of the calculations.
Note that the entries in the Fixed-Pattern Noise column are not new noise sources: they exist (presumably with the same magnitude) in the absence of light, but are measured again below in the presence of light. PRNU and Photon Shot Noise, in contrast, really are new noise sources that increase noise when photons fall on the sensor.
In addition, photons obviously also cause a signal component (which looks like dark current, but is proportional to the photon flux) which is what the sensor is meant to measure in the first place. It is not shown in the above table because it is not a noise source.