NAS and Lightroom performance

My digital photos are stored on an inexpensive NAS, a Conceptronic CH3SNAS. It holds two 3.5″ SATA drives of, in my case, 1 TByte each. Each drive contains a copy of every photo (RAID 1) for robustness. Lightroom 3 maintains a catalog of these photos (with associated keywords, metadata and a cache of previews) on the computer’s local hard disk. Unfortunately, although the NAS is fine for archiving and backup tasks, Lightroom’s access to the stored photos is rather slow. The question is thus whether I can get better read performance by tweaking this setup, or whether I need to upgrade to a fancier NAS.

The NAS and its drives

The equipment:

  • the NAS is a small Linux-based ARM9 box with 64 MBytes of memory
  • the drives in the NAS and in my desktop PC are all:
    • Samsung Spinpoint F1 HD103UJ
    • drive specs: SATA-300, 7200 RPM, 32 MB cache
  • the NAS is connected to the client PC via a router with a built-in switch; the router, the NAS and the PC are all capable of Gigabit Ethernet.
  • the relevant partition is formatted as RAID 1 (although I no longer recommend that), meaning that each file is simply stored on both drives for safety

Basic drive performance

I benchmarked one of the Samsung Spinpoint F1 HD103UJ drives mounted inside a desktop PC using HD Tune Pro. This measures what the drive can do under normal (non-NAS) conditions.

Detail: I restricted the part of the drive under test to 0.75 terabytes because the data on the NAS was confined to a 750 GB RAID 1 partition on each drive. This doesn’t change the measurements significantly.

[Figure: Samsung Spinpoint F1 HD103UJ performance, as measured by HD Tune Pro]

The average transfer rate was thus 84 MB/s, while the average latency was 13.5 ms. In other words, the drive itself can sustain read speeds of 5 GBytes/minute if the files are big enough. I am ignoring write performance because it gives similar results and is less relevant for my usage (“read-mostly”).
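
To double-check such numbers without HD Tune, raw sequential read speed can be approximated with a few lines of Python. This is a minimal sketch: the file path is a placeholder, and the file must be large and not already in the OS cache for the result to mean anything.

    import time

    # Hypothetical path to a large test file on the drive under test.
    # The file must not already be in the OS cache (e.g. read it right
    # after a reboot), or the result measures RAM rather than the drive.
    TEST_FILE = r"D:\benchmark\large_testfile.bin"
    CHUNK = 4 * 1024 * 1024  # read in 4 MByte chunks

    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start

    mb_per_s = total / elapsed / 1e6
    print(f"read {total / 1e9:.1f} GB in {elapsed:.1f} s = {mb_per_s:.0f} MB/s "
          f"= {mb_per_s * 60 / 1000:.1f} GBytes/minute")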

Usage of the NAS

I currently have 26,000 digital photos (JPG, RAW, occasionally other stuff) requiring 225 GBytes of space. In RAID 1, this takes 225 GBytes per drive.

The average size of a single photo is roughly 17 MBytes (a mix of recent JPGs and two types of RAW). A worst-case photo (RAW, full resolution; the exact size depends on compressibility) can exceed 30 MBytes.
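
Numbers like these are easy to gather for your own library. A minimal sketch in Python; the share path and extension list are assumptions to adapt:

    import os

    PHOTO_ROOT = r"\\CH3SNAS\Volume_1\Photos"  # hypothetical UNC path to the NAS share
    EXTENSIONS = {".jpg", ".jpeg", ".nef", ".cr2", ".dng"}  # adapt to your cameras

    count = 0
    total_bytes = 0
    for dirpath, _dirs, files in os.walk(PHOTO_ROOT):
        for name in files:
            if os.path.splitext(name)[1].lower() in EXTENSIONS:
                count += 1
                total_bytes += os.path.getsize(os.path.join(dirpath, name))

    if count:
        print(f"{count} photos, {total_bytes / 1e9:.0f} GBytes total, "
              f"average {total_bytes / count / 1e6:.1f} MBytes/photo")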

NAS performance across the network

Copying a 16 GByte directory consisting of 902 photos (Egypt) from the NAS to the local disk:

  • took 26.5 minutes = 11 MBytes/s = 0.6 GBytes/minute = 34 photos/minute
  • generating an average network traffic of 10.1 MBytes/s (received). Not sure where the ~10% discrepancy comes from; possibly just the decimal versus binary GByte convention.
  • CPU load on the NAS (log in via ssh, run top) was about 50%

To check this, I copied the same directory from the NAS to the NAS (to a non-RAID partition):

  • took 53.5 minutes = 4.9 MBytes/s = 0.3 GBytes/minute = 17 photos/minute
  • generating network traffic of 5 MBytes/s in each direction

This is consistent (enough): the NAS now needs to both read and write the data, which roughly halves the throughput. It incidentally shows that the time needed to store the data on the local hard disk in the 11 MBytes/s test case was apparently negligible.
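
For the record, the arithmetic behind both copy tests, as a small Python sketch (decimal units; note that reading “16 GByte” as 16×2^30 rather than 16×10^9 bytes shifts the results by about 7%, which is in the range of the discrepancy noted above):

    def copy_stats(gbytes, minutes, photos):
        """Convert a measured copy (size, duration, photo count) into rates."""
        mb_s = gbytes * 1000 / (minutes * 60)  # decimal MBytes/s
        print(f"{gbytes} GB in {minutes} min: {mb_s:.1f} MBytes/s, "
              f"{gbytes / minutes:.2f} GBytes/minute, "
              f"{photos / minutes:.0f} photos/minute")

    copy_stats(16, 26.5, 902)  # NAS -> local PC
    copy_stats(16, 53.5, 902)  # NAS -> NAS (read + write on the same box)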

So the problem is that the drives themselves can read (or write) data at 5 GBytes/minute, but the NAS only delivers 0.6 GBytes/minute. The “34 photos/minute” figure also implies that the NAS can easily limit browsing performance for photos that are not yet in Lightroom’s preview cache.

One reviewer, however, measured 21 MBytes/s rather than my 11 MBytes/s. So there is hope that the performance can be improved by tweaking.

Optimizations and errors found

  • I found I had 100% CPU load on the NAS, even when running on a 100 Mbps link: 50% went to samba, 25% to inotify_itunes and 25% to inotify_upnp. Disabling the iTunes and Universal Plug-and-Play services using the Web interface got the CPU load down to about 50%. Apparently this is a bug in older versions of the CH3SNAS firmware that causes these two processes to eat all remaining idle time. Apart from wasting power, they undoubtedly don’t help performance. (A sketch for monitoring CPU load without staring at top follows after this list.)
  • The NAS was still configured to run at 100 Mbps, despite having a 1000 Mbps Ethernet link to the router (and beyond).
  • Adobe itself just announced that the imminent version of Lightroom would fix “Library: Sub-optimal preview rendering performance could impact application performance”. Whatever that means, it is always welcome.
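
As promised above, a minimal sketch for watching CPU load by sampling /proc/stat on a Linux box, assuming you can get a Python interpreter onto it (a stock CH3SNAS may well not have one, in which case top over ssh remains the tool of choice):

    import time

    def cpu_busy_fraction(interval=1.0):
        """Sample the aggregate 'cpu' line of /proc/stat twice and
        return the fraction of time spent busy in between."""
        def read_times():
            with open("/proc/stat") as f:
                fields = f.readline().split()[1:]
            times = [int(x) for x in fields]
            idle = times[3] + (times[4] if len(times) > 4 else 0)  # idle + iowait
            return idle, sum(times)

        idle1, total1 = read_times()
        time.sleep(interval)
        idle2, total2 = read_times()
        return 1.0 - (idle2 - idle1) / (total2 - total1)

    for _ in range(5):
        print(f"CPU busy: {cpu_busy_fraction():.0%}")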

Checking the NAS performance locally

[ coming ]

So where is the bottleneck?

[ coming ]


2 Responses to NAS and Lightroom performance

  1. Greg says:

    Peter, what is the best solution for NAS storage allowing access from multiple workstations using management software like Lightroom? Many articles about what doesn’t work. Has anyone figured this out or come close?

  2. pvdhamer says:

    Lightroom 3 doesn’t really support managing the same photos across multiple computers or multiple users, and actively tries to prevent this: the catalog is supposed to live on one of the computers. The actual photos can be on a shared network drive (aka NAS). What are you trying to achieve? Lightroom doesn’t give you a failsafe solution, so a workaround will require that you know what you are doing and can handle the risks of moving .lrcat files around or merging them.

    Single user who uses alternative computers? Then you can copy the catalog to your machine before starting Lightroom. Should be doable with a fairly simple script (a rough sketch below). But there is a risk that you lose track of where the latest version is, and lose work.
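
    For example, a minimal sketch in Python; all paths are hypothetical, and note that it does nothing to guard against two machines grabbing the catalog at the same time:

        import shutil
        import subprocess

        # Hypothetical locations: master catalog on the NAS, working copy local.
        NAS_CATALOG = r"\\CH3SNAS\Volume_1\Lightroom\master.lrcat"
        LOCAL_CATALOG = r"C:\Lightroom\master.lrcat"
        LIGHTROOM_EXE = r"C:\Program Files\Adobe\Lightroom\lightroom.exe"

        shutil.copy2(NAS_CATALOG, LOCAL_CATALOG)        # pull latest catalog
        subprocess.run([LIGHTROOM_EXE, LOCAL_CATALOG])  # work locally; waits for exit
        shutil.copy2(LOCAL_CATALOG, NAS_CATALOG)        # push updated catalog back
        # The previews folder (.lrdata) is not copied here; Lightroom will
        # regenerate previews as needed, at the cost of time on first browse.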

    If you have multiple users, you can give each of them their own local catalog, each pointing to photos on the same NAS. If the catalogs don’t overlap at all, this is IMO very safe. If the catalogs overlap, but no two users/machines modify the same directories/folders/shoots, you might be safe enough.

    If you have multiple users, each with their own copy of the same catalog, you will get catalog merging problems. If you have nerdy users, you might figure out a way to survive, but it is living on the edge.

    Hope this helps, Peter.
