Lightroom 4’s PV2012 image enhancement algorithm

In 2009 I wrote a small article here about a new class of image enhancement algorithms. Such algorithms made pictures look sharper by adding local contrast and brought out details in both shadows and highlights. And they did this without adding halos around high-contrast edges such as the transition between foreground and sky. The article focused on a research paper by Farbman, Fattal, Lischinski and Szeliski (FFLS, 2008).

The brand-new Lightroom 4 (and the associated Adobe Camera Raw 6.7) has now incorporated similar technology, based on a newer research paper entitled “Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid” by Paris, Hasinoff and Kautz (PHK, 2011).

The implications of this new Lightroom feature for photographers are significant enough that I will gradually add more details here as I upgrade to Lightroom 4 (possible since March 5th) and get hands-on experience with it.

What’s so important about this technology?

On superficial examination, the two research papers mentioned above (and there are many more where those came from) seem similar enough. Both try to give users the ability to make details in images more striking (local contrast enhancement) without creating undesirable artifacts such as halos.

The weird thing about this research is that the papers don’t distinguish between minor image tweaking (e.g. typical raw converters sharpen images to some default degree) and image modifications like HDR, which also sharpen the image but can lead to unnatural results if overused. This is probably because there is no sharp boundary between enhancing a little and going overboard with the settings.

Thus many of the example images in the research papers look like what we photographers would consider HDR photography: in HDR photography, you want to show a broad range of light levels without making the local contrast look flat, and without making the overall picture look artificial. The authors would probably answer that HDR is an application area where this kind of algorithm is needed – but the algorithms can also be used for “normal” images shot with a single exposure in a normal-dynamic-range situation.

Something similar applies to the Lighting module in DxO Labs’ “DxO Optics Pro”. It probably uses a similar type of approach to boost local contrast. I tend to consider it an HDR-like technique.

Alternative algorithms

A claimed key benefit of PHK, the algorithm chosen by Adobe, is that it is simpler and thus takes less processing time. In fact, the paper’s introduction starts off by saying that the widely known (in the right circles) Laplacian pyramid technique was underrated, leading previous researchers (including FFLS) to develop more complex algorithms to compensate for its perceived shortcomings.
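
To illustrate what a Laplacian pyramid actually is, here is a minimal 1-D sketch – my own toy code, not Adobe’s or PHK’s implementation: the signal is repeatedly smoothed and downsampled, each band stores the detail lost at that step, and the stack can be collapsed back exactly.

```python
import numpy as np

def blur(x):
    # simple [1, 2, 1]/4 binomial smoothing (edges repeated at the borders)
    p = np.pad(x, 1, mode="edge")
    return 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]

def down(x):
    # smooth, then keep every second sample
    return blur(x)[::2]

def up(x, n):
    # nearest-neighbour upsample back to length n, then smooth
    return blur(np.repeat(x, 2)[:n])

def laplacian_pyramid(x, levels):
    """Split a 1-D signal into band-pass layers plus a coarse residual."""
    bands, cur = [], x
    for _ in range(levels):
        coarse = down(cur)
        bands.append(cur - up(coarse, len(cur)))  # detail lost by downsampling
        cur = coarse
    bands.append(cur)  # low-frequency residual
    return bands

def collapse(bands):
    """Invert the decomposition by upsampling and adding the bands back."""
    cur = bands[-1]
    for b in reversed(bands[:-1]):
        cur = b + up(cur, len(b))
    return cur

signal = np.sin(np.linspace(0, 6, 64)) + 0.1 * np.random.default_rng(0).normal(size=64)
bands = laplacian_pyramid(signal, levels=3)
assert np.allclose(collapse(bands), signal)  # perfect reconstruction
```

The pyramid itself is cheap and trivially invertible – the hard part, and PHK’s contribution, is manipulating the bands without creating halos.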

It is worth noting that the first author of PHK works for Adobe. But these are peer-reviewed papers published at respectable conferences – meaning the authors have to be rigorous about their claims: when they hold, and what evidence and even counter-evidence there is for them.

Where do I find this in Lightroom 4?

“Now shipping in Lightroom 4 (and Photoshop CS6 Camera Raw) – the tools for adjusting shadows, highlights, and clarity are based on a fast version of the local Laplacian filters we introduced at SIGGRAPH 2011.”


Should we care?

New image enhancement algorithm

KammaGamma is a website about rather technical image enhancement topics. After almost a year (!), a new posting showed up with a heads-up about a scientific paper (Proc. ACM Siggraph 2008), published by an Israeli university in collaboration with Microsoft Research. The university’s website has a short and fast-paced video introducing the work. It contains images and image sequences showing the results of the algorithm – so even if you can’t follow all of the nerdiness, it shows what they claim.

As far as I can tell, the authors want to be able to adjust different spatial frequencies (for example: the overall image structure, coarse details and fine details) independently of each other. This requires splitting an image into layers that can be processed independently and later – if required – recombined into a single image.

Arguably this sounds like a 2D equivalent of a 1D audio equalizer: you want independent control over bass, mid-range and high frequencies. You can use this to get rid of all high frequencies and show just the coarse structure of the landscape (making it, in extreme cases, look like a cartoon). Or you can alternatively sharpen the fine details. In their video, the authors actually also make this 1D analogy (1D graphs), but probably don’t use the audio analogy. Arguably the authors want the equivalent of boosting the bass instruments (big drum) while suppressing the piccolo – but while having the drum sound like a hi-fi recording: big drums produce sharp transitions which also contain high frequencies.
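
The equalizer analogy can be sketched in a few lines of toy code (my own illustration, not the paper’s method): split a 1-D signal into a base, a coarse band and a fine band using simple box blurs, then apply an independent gain to each band – exactly like three sliders on an equalizer.

```python
import numpy as np

def box_blur(x, radius):
    # crude low-pass filter: moving average with edge padding
    k = 2 * radius + 1
    p = np.pad(x, radius, mode="edge")
    return np.convolve(p, np.ones(k) / k, mode="valid")

def equalize(x, low_gain, mid_gain, high_gain):
    """Three-band 'equalizer' for a 1-D signal: base + coarse + fine detail."""
    base = box_blur(x, radius=16)        # overall structure ("bass")
    mid = box_blur(x, radius=4) - base   # coarse detail ("mid-range")
    high = x - box_blur(x, radius=4)     # fine detail ("treble")
    return low_gain * base + mid_gain * mid + high_gain * high

t = np.linspace(0, 1, 256)
sig = np.sign(np.sin(2 * np.pi * 2 * t)) + 0.2 * np.sin(2 * np.pi * 40 * t)
flat = equalize(sig, 1.0, 1.0, 0.0)   # drop fine detail -> "cartoon" look
crisp = equalize(sig, 1.0, 1.0, 2.0)  # boost fine detail -> sharpened
assert np.allclose(equalize(sig, 1.0, 1.0, 1.0), sig)  # unit gains reconstruct
```

Because the bands sum back to the original at unit gains, any combination of gains is a well-defined “re-mix” of the image – the open problem the papers address is that linear blurs like this one leak across edges.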

In image terms, the challenge seems to be to decompose the image into large-scale objects and (one or more scales of) smaller details – but without making the large-scale objects look blurry: a building stripped of its details should look like a bunch of polygons with sharp edges, rather than like an image taken in fog (as with a standard low-pass filter). Similarly, enhancing the high frequencies should bring out the brick texture without adding halos around the edges of the building. Examples of the latter (a beach scene) can be found here (the new algorithm is WLS, a competitor is BLF).
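
A toy demonstration of why a linear low-pass base layer produces halos, and why an edge-preserving one avoids them. This is my own illustration: I use a median filter as a crude stand-in for an edge-preserving filter (the paper’s WLS filter is far more sophisticated), applied to a 1-D step edge like a foreground/sky transition.

```python
import numpy as np

def box_blur(x, radius):
    # linear low-pass: moving average with edge padding
    k = 2 * radius + 1
    p = np.pad(x, radius, mode="edge")
    return np.convolve(p, np.ones(k) / k, mode="valid")

def median_base(x, radius):
    # crude edge-preserving base layer: a median keeps a step edge sharp
    p = np.pad(x, radius, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, 2 * radius + 1)
    return np.median(win, axis=1)

edge = np.where(np.arange(200) < 100, 0.0, 1.0)  # foreground/sky transition

boost = 2.0  # amplify the detail layer (base stays, detail is boosted)
halo = edge + boost * (edge - box_blur(edge, 8))      # over/undershoot at edge
clean = edge + boost * (edge - median_base(edge, 8))  # detail layer is ~zero

assert halo.max() > 1.1 and halo.min() < -0.1  # classic bright/dark halo
assert np.allclose(clean, edge)                # edge kept in base: no halo
```

The blurred base smears the step, so the “detail” layer contains a large spurious swing right at the edge – boosting it paints the familiar bright/dark bands around high-contrast transitions.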

The paper apparently uses a weighted least-squares (WLS) optimization to decompose the image into different layers of detail. Potential applications (incomplete – I have only found time to partially read the original article ;-):
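
Here is a rough 1-D sketch of that idea, based on my reading (the parameter names and the exact weighting function are my guesses, not the paper’s formulation): smooth the signal by penalizing gradients, but shrink the penalty wherever the input itself has a strong gradient, so real edges survive in the base layer while texture is smoothed away.

```python
import numpy as np

def wls_smooth(g, lam=1.0, alpha=1.2, eps=1e-4):
    """1-D weighted least-squares smoothing in the spirit of FFLS:
    minimize sum (u_i - g_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2,
    where w_i is small across strong gradients of g (edges are kept)."""
    n = len(g)
    dg = np.diff(g)
    w = 1.0 / (np.abs(dg) ** alpha + eps)  # small weight across strong edges
    # forward-difference matrix D ((n-1) x n) and the normal equations
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)
    return np.linalg.solve(A, g)

# noisy step edge: texture should land in the detail layer, the edge in the base
g = np.concatenate([np.zeros(50), np.ones(50)]) \
    + 0.05 * np.random.default_rng(1).normal(size=100)
u = wls_smooth(g, lam=5.0)  # base layer: flat halves, edge intact
detail = g - u              # small-scale texture, halo-free by construction
```

A dense solve is wasteful – the system is tridiagonal, and for 2-D images the real work in the paper is solving the corresponding large sparse system efficiently – but the decomposition into base plus detail is the same idea.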

  • halo-free sharpening
  • creating a graphical version of a photographic image
  • creation of High Dynamic Range images with high contrast details
  • any image processing which wants to work on details without creating edge artifacts (blurring or ringing)