Optimum way to serve 70,000 static files (jpg)?

I need to serve around 70,000 static files (jpg) using nginx. Should I dump them all in a single directory, or is there a better (more efficient) way? Since the filenames are numeric, I considered having a directory structure like:

xxx/xxxx/xxx

The OS is CentOS 5.1
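Since the filenames are numeric, mapping an id onto a nested layout like the one above is a one-liner kind of job. Here is a minimal sketch (the `nested_path` helper, the 10-digit padding, and the 3/4/3 split are my own assumptions for illustration, not anything nginx requires):

```python
import os

def nested_path(root, file_id, ext=".jpg"):
    """Map a numeric file id onto a root/xxx/xxxx/xxx-style path.

    Zero-pads the id to 10 digits and splits it 3/4/3:
    the first 3 digits become the top directory, the next 4
    the subdirectory, and the last 3 the filename.
    """
    s = str(file_id).zfill(10)
    return os.path.join(root, s[:3], s[3:7], s[7:] + ext)

# nested_path("/srv/img", 1234567) -> "/srv/img/000/1234/567.jpg"
```

The same function can be used both when writing the files to disk and when generating URLs, so the two never drift apart.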


Solution 1:

Benchmark, benchmark, benchmark! You'll probably find no significant difference between the two options, meaning that your time is better spent on other problems. If you do benchmark and find no real difference, go with whichever scheme is easier: whatever is easy to code if only programs have to access the files, or whatever is easy for humans to browse if people need to work with the files frequently.

As to which one is faster: directory lookup time grows with the number of entries in the directory (roughly logarithmically on filesystems with indexed directories, linearly on older ones). So each of the three lookups for the nested structure will be faster than one big lookup, but the total of all three will probably be larger.

But don't trust me; I don't have a clue what I'm doing! Measure performance when it matters!
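The "measure it" advice can be turned into a quick script: build a small flat layout and a bucketed one, then time `os.stat()` over random files in each. This is a toy sketch at 2,000 files rather than the full 70,000 (at this scale the dentry cache will dominate, so treat it as a template to scale up, not as a verdict):

```python
import os
import random
import shutil
import tempfile
import timeit

N = 2000  # toy scale; a real test would use the actual 70,000 files

def make_flat(root, n):
    """One big directory: 0000000.jpg .. 0001999.jpg."""
    for i in range(n):
        open(os.path.join(root, f"{i:07d}.jpg"), "wb").close()

def make_nested(root, n):
    """Bucket by the last two digits: ~n/100 files per subdirectory."""
    for i in range(n):
        d = os.path.join(root, f"{i % 100:02d}")
        os.makedirs(d, exist_ok=True)
        open(os.path.join(d, f"{i:07d}.jpg"), "wb").close()

def bench(path_of, n, samples=1000):
    """Time os.stat() over a random sample of file ids."""
    ids = [random.randrange(n) for _ in range(samples)]
    return timeit.timeit(lambda: [os.stat(path_of(i)) for i in ids], number=1)

flat = tempfile.mkdtemp()
nested = tempfile.mkdtemp()
make_flat(flat, N)
make_nested(nested, N)

t_flat = bench(lambda i: os.path.join(flat, f"{i:07d}.jpg"), N)
t_nested = bench(lambda i: os.path.join(nested, f"{i % 100:02d}", f"{i:07d}.jpg"), N)
print(f"flat: {t_flat:.4f}s  nested: {t_nested:.4f}s")

shutil.rmtree(flat)
shutil.rmtree(nested)
```

To make the comparison honest, run it on the same filesystem that will serve the real files, with the full file count, and drop the page cache between runs.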

Solution 2:

It really depends on the filesystem you're using to store the files.

Some filesystems (like ext2, and to a lesser extent ext3 without its dir_index option) are hideously slow when you have thousands of files in one directory, so using subdirectories is a very good idea.

Other filesystems, like XFS or reiserfs(*), don't slow down with thousands of files in one directory, so it doesn't matter whether you have one big directory or lots of smaller subdirectories.

(*) reiserfs has some nice features, but it's an experimental toy that has a history of catastrophic failures. Don't use it on anything even remotely important.
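Since the advice hinges on the filesystem, the first step is finding out which one actually backs the image directory. A quick Linux-only sketch reading /proc/mounts (the prefix matching here is naive, and `/var/www/images` is just an example path):

```python
import os

def fs_type(path):
    """Return the filesystem type of the longest-prefix mount for path.

    Linux-only: parses /proc/mounts. Naive string-prefix matching,
    good enough for a sanity check but not for exotic mount layouts.
    """
    path = os.path.realpath(path)
    best, fstype = "", "unknown"
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, typ = line.split()[:3]
            if path.startswith(mnt) and len(mnt) > len(best):
                best, fstype = mnt, typ
    return fstype

print(fs_type("/var/www/images"))  # e.g. "ext3" or "xfs"
```

On ext3 you can then check whether the dir_index feature is enabled with `tune2fs -l` on the underlying device.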