In 2009, Fujifilm was granted a patent on an organic layer to improve CMOS image sensor performance. The thin organic layer converts incoming light to electrical charge (like a solar cell) that is then detected by more or less conventional CMOS image sensor circuitry.
Fujifilm’s “organic sensor” technology started to receive attention recently due to speculation that it would be featured in a future high-end compact camera (expected in early 2012; possibly called the “LX10”). This camera is expected to have an interchangeable lens and to target the same market as the highly successful Fujifilm X100.
Update: the Fujifilm X-Pro1 (March 2012) turned out to not contain this particular technology. Instead it has a color filter array layout which avoids the need for an anti-aliasing filter.
Although Fujifilm has not confirmed the use of their organic conversion layer in the forthcoming camera, a Fujifilm executive claimed that the interchangeable lens camera (possibly with an APS-C sized sensor) will outperform the current generation of full-frame sensors in terms of noise.
A color sensor using this technology uses a conventional color filter array to make individual pixels sensitive to green, red and blue light. In a 2009 white paper, Fuji states that the layer is insensitive to infrared light. Therefore no IR absorption filter is needed, which is in itself a minor benefit.
The organic layer is labeled “Panchromatic Photoelectric Conversion Layer” in the above diagram by Fuji. The layer is 0.5 μm thick, and converts almost all visible light to an electrical charge (electrons and holes). The organic layer, which closely resembles solar cell technology, is sandwiched between a negatively charged transparent electrode (like in LCDs) and an array of positively charged square “Pixel Electrodes” that form the actual imaging pixels. The latter are not transparent.
In the 2009 prototype the pixels are spaced 3 μm apart. This would result in a resolution of roughly 40 MPixels for an APS-C size sensor or 22 MPixels for a Four-Thirds size sensor. Due to the gaps between the pixel electrodes, 85% of the surface area is used to capture charge. The resulting loss of up to 15% of the incoming light can probably be neglected. Moreover, the gaps could be reduced further if Fuji chose to, and may play no measurable role at all if the applied electric field steers the free electrons generated above the gaps toward the pixel electrodes anyway.
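The pixel-count arithmetic can be checked with a few lines of Python. The sensor dimensions below are common nominal values, my assumption rather than figures from Fuji's paper, so the results are only approximate:

```python
# Rough pixel-count estimate for a given pixel pitch.
# Sensor dimensions are nominal values (assumed, not from Fuji's paper).
def megapixels(width_mm, height_mm, pitch_um):
    """Number of pixels (in millions) that fit on the sensor area."""
    cols = width_mm * 1000 / pitch_um
    rows = height_mm * 1000 / pitch_um
    return cols * rows / 1e6

apsc = megapixels(23.6, 15.6, 3.0)         # APS-C imaging area
four_thirds = megapixels(17.3, 13.0, 3.0)  # Four-Thirds imaging area

print(f"APS-C: {apsc:.0f} MPixel, Four-Thirds: {four_thirds:.0f} MPixel")
```

With these assumed dimensions, APS-C comes out at roughly 41 MPixels; the Four-Thirds figure lands somewhat above the 22 MPixels quoted above, presumably because slightly different sensor dimensions were assumed there.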
So a key benefit of this technology is its efficiency compared to alternative ways of making arrays of relatively small pixels. In a conventional CMOS sensor with small pixels, the gaps between pixels (a low “fill factor”) become a problem, especially because the wiring needed to access the pixels lies on the upper (outside) layers of the sensor. Backside illumination (BSI) addresses the latter by grinding down the chip and illuminating the sensor from the back.
Fuji stresses that their sensor doesn’t need a microlens array to funnel the light to the pixels. Apart from a reduction in component count, this avoids color errors at the edges of the sensor, which occur with small pixels due to crosstalk between the color channels when light hits the sensor at an extreme angle.
One key benefit of this sensor technology should be its sensitivity to light: Fuji measured a quantum efficiency of 65% in 2009 (for green light at 550 nm), but stated that this can be improved by adding anti-reflection coatings. The quantum efficiency figure seems competitive with the relatively expensive backside-illuminated sensors used in recent high-end devices with small pixel dimensions (such as the Nikon P-100, Canon S100, iPhone 4 and iPhone 4S).
This translates to a slight sensitivity gain – and would behave like removing a very light gray neutral-density filter. Using the available data, I am unable to calculate how big this improvement is expressed in ISO. Note that the prototype sensor described in 2009 will obviously differ from Fuji’s current technology anyway.
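Although the baseline needed for an exact ISO comparison is missing from the available data, the conversion itself is straightforward. A sketch, where the 50% baseline quantum efficiency for a conventional sensor is purely an assumed figure:

```python
import math

# How a quantum-efficiency difference translates into exposure stops
# (and hence an equivalent ISO gain). The 50% baseline below is an
# assumed figure for a conventional sensor, not a measured value.
def qe_gain_stops(qe_new, qe_baseline):
    return math.log2(qe_new / qe_baseline)

gain = qe_gain_stops(0.65, 0.50)  # Fuji's 65% vs an assumed 50% baseline
print(f"gain: {gain:.2f} stops")  # ~0.38 stops, i.e. ISO 100 -> ~ISO 130
```

A gain of roughly a third of a stop is real but modest, consistent with the "very light gray neutral-density filter" comparison above.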
A more interesting question is what impact the technology change has on dynamic range. The ability to funnel more light to the sensor helps sensitivity and noise. But dynamic range depends on the ratio between the maximum amount of charge the pixel can hold (at saturation) compared to the noise level. Given the high expectations set by Fuji for the 2012 camera, the benefits of the organic layer must somehow translate to a dynamic range improvement (as differences in quantum efficiency between modern sensors tend to be limited).
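The ratio argument can be made concrete. In the sketch below the electron counts are illustrative assumptions, not measurements of any real sensor; the point is that scaling full-well capacity and noise together leaves dynamic range unchanged:

```python
import math

# Dynamic range as the ratio of full-well capacity (saturation charge)
# to the noise floor, expressed in stops. Electron counts are
# illustrative assumptions, not measured values for any real sensor.
def dynamic_range_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(dynamic_range_stops(20000, 5))   # small pixel, low noise floor
print(dynamic_range_stops(60000, 15))  # 3x the charge, 3x the noise
# both ~12 stops: same ratio, so the same dynamic range
```

This is why higher quantum efficiency alone doesn't guarantee more dynamic range: the organic layer would also need to raise the saturation charge, or the camera would need a lower noise floor, to shift the ratio.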
It is a bit early to speculate on the impact of an unannounced camera of which only leaked images seem to exist, especially when we can only guess whether it utilizes Fuji’s organic photoelectric conversion layer. But what would happen if the technology delivers? In particular, would it really close the image quality gap between, say, full-frame and APS-C sensors?
The key question for strategists is thus whether an organic layer would improve all sensor sizes and resolutions to the same degree. If so, it would simply raise the quality level of what can be manufactured, but wouldn’t change the landscape of Four-Thirds versus APS-C versus Full Frame versus Medium Format sensors. Fuji’s white paper starts off with
We proposed a new CMOS image sensor with a thin overlaid panchromatic organic photoelectric conversion layer as the best candidate for sensors with reduced pixel size.
and its title also mentions reduced pixel size as the highlight of this technology.
This suggests that the organic layer helps in the design of high-resolution sensors regardless of their size, by improving their light-gathering efficiency. This implies improved performance for tiny compact camera sensors with mere 1 μm pixels, and would provide an alternative to the cumbersome backside-illumination technology used in some compact cameras and camera phones.
It could also pave the way for 20 MPixel Four-Thirds sensors, 40 MPixel APS-C sensors, and 80 MPixel full-frame sensors. Such (relatively speaking) extreme resolutions when using 3 μm pixels might be overkill for many lenses, but would indirectly increase image sharpness by promoting cameras designed without an anti-aliasing filter. Such an AA filter softens the image to get rid of details that are too fine to be resolved. But in doing so, AA filters also blur the finest details which the sensor can capture (in essence, an anti-aliasing filter is a sheet of very finely frosted glass).
Because extreme resolution is not always needed and is sometimes even useless, a high-end camera could conceivably provide modes whereby the full pixel resolution is downscaled within the camera. For example, an 80 MPixel full-frame camera might normally record at 40, 20 or 10 MPixel resolution, but could be set to deliver the full 80 MPixels if the user wants this. Current Canon SLRs already have a similar resolution-scaling feature that Canon calls SRAW, but the option is not used very much because it loses detail without shrinking the file size much. Note, however, that this route should give fine image quality, but likely costs more energy than an equivalent lower-resolution sensor (more samples to digitize, more digital operations).
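One simple way to implement such downscaling is pixel binning. The 2×2 averaging below is my own illustration of the principle, not Canon's SRAW algorithm: averaging four pixels roughly halves the random noise (a factor of √4), which is why a downscaled high-resolution capture can look cleaner than an equivalent native low-resolution one:

```python
import random

# Sketch of in-camera downscaling by 2x2 averaging ("binning").
# Averaging four pixels cuts random noise roughly in half (sqrt of 4).
def bin2x2(image):
    """Average non-overlapping 2x2 blocks; image is a list of rows."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1]
             + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

img = [[random.random() for _ in range(8)] for _ in range(8)]
small = bin2x2(img)
print(len(small), len(small[0]))  # 4 4
```

The energy cost mentioned above is visible even in this sketch: the full-resolution image must still be read out and digitized before any of this arithmetic can run.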
[ last modification: 26-Nov-2011 ]