I'm wondering why Nautilus is very slow when opening a directory containing lots of files. My /usr/lib dir, for example, has 1900 files and takes five seconds or more to show everything. It has been like this since I installed Ubuntu a few months ago, and it's really quite annoying at times. I don't have powerful hardware, but I know that Windows Explorer is much faster than this.

Is there anything that can be done to speed it up?

Ubuntu 10.04


Tracing the execution of nautilus shows that the slowness is due to a combination of factors:

  • It's smart about displaying useful information about each file: it looks inside the contents of files to determine which icon to use and, possibly, to show a preview. This can be toned down by turning previews off in the preferences.

  • It does a lot of useless work, such as stating each file multiple times and checking /proc/filesystems even for non-directories (a sketch of the underlying readdir-plus-stat loop follows this list). All you can do is learn programming, improve the program, and send a patch, or at least send the authors a feature request asking them to make it faster.

  • It calls several external processes for each directory; I haven't explored what they do.
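
To make the stat traffic concrete, here is a minimal C sketch of the basic loop a file manager runs (the /usr/lib default is just this question's example; a real file manager does much more per entry). On a cold cache, each stat() here can mean a disk seek:

    #include <stdio.h>
    #include <dirent.h>
    #include <sys/stat.h>

    /* The basic loop a file manager runs: list the directory, then
     * stat() every entry for its metadata. On a cold cache each
     * stat() can mean a disk seek. */
    int main(int argc, char **argv)
    {
        const char *dir = argc > 1 ? argv[1] : "/usr/lib";
        DIR *d = opendir(dir);
        struct dirent *e;
        long n = 0;

        if (!d) { perror("opendir"); return 1; }
        while ((e = readdir(d)) != NULL) {
            char path[4096];
            struct stat st;
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            if (stat(path, &st) == 0)
                n++;        /* a file manager keeps st for display */
        }
        closedir(d);
        printf("stated %ld entries in %s\n", n, dir);
        return 0;
    }

Nautilus does all of this, plus the repeated stats, content sniffing, and previews noted above.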


In the "Preview" tab under "Edit -> Preferences", try switching all the options to "Never".

It also helped me enormously to turn off "Assistive Technologies". You can do this in "System -> Preferences -> Assistive Technologies". Uncheck "Enable assistive technologies".

You'll have to log out and back in for the latter change to take effect.


This reminded me of a talk I had with Alexander Larsson, the lead developer for Nautilus and other projects including GVFS.

Gilles's answer, specifically the bit about Nautilus looking inside the contents of files, touches on the major reason why Nautilus is "slow". However, Gilles doesn't explain why this is slow, which might be obvious to some, but not to others. Here's what Alex had to say:

Say you start with a blank slate, i.e. you have not accessed the filesystem at all. Now say you run stat("/some/dir/file"). First the kernel has to find the file, or in technical terms its inode, the structure that holds the file's metadata. It starts by looking in the filesystem superblock, which stores the inode of the root directory. Then it opens the root directory, finds "some", opens that, finds "dir", and so on, eventually finding the inode for "file".

Then you have to actually read the inode data. After the first read this too is cached in RAM, so a read only has to happen once.
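
To see the caching effect for yourself, here's a little test program (my sketch, not Alex's code). Drop the page cache first, as root, with echo 3 > /proc/sys/vm/drop_caches so the first call is genuinely cold; on older glibc you may need to link with -lrt for clock_gettime:

    #include <stdio.h>
    #include <time.h>
    #include <sys/stat.h>

    /* Time two consecutive stat() calls on the same path. On a cold
     * cache the first call pays for the path walk and the inode read
     * from disk; the second is served from the kernel's caches. */
    static long stat_usec(const char *path)
    {
        struct timespec t0, t1;
        struct stat st;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (stat(path, &st) != 0) { perror("stat"); return -1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) * 1000000L +
               (t1.tv_nsec - t0.tv_nsec) / 1000L;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        printf("first stat:  %ld us\n", stat_usec(argv[1]));  /* cold */
        printf("second stat: %ld us\n", stat_usec(argv[1]));  /* warm */
        return 0;
    }

On rotating media the gap between the two numbers is exactly the seek cost Alex describes next.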

Think of the HD like an old record player: once the needle is in the right place you can keep reading stuff fast as it rotates. However, moving to a different place, called "seeking", is something very different: you need to physically move the arm, then wait for the platter to spin until the right place is under the needle. This kind of physical motion is inherently slow, so seek times for disks are pretty long.

So, when do we seek? It depends on the filesystem layout, of course. Filesystems try to store files consecutively so as to increase read performance, and they generally also try to store inodes for a single directory near each other, but it all depends on things like when the files are written, filesystem fragmentation, etc. So, in the worst case, each stat of a file will cause a seek and then each open of the file will cause a second seek. That's why things take such a long time when nothing is cached.

Some filesystems are better than others, and defragmentation might help. You can also do some things in apps. For instance, GIO sorts the inodes received from readdir() before stating them, hoping that the inode number has some sort of relation to disk order (it generally does), thus minimizing random seeks back and forth.
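
Here's a standalone sketch of that trick (my own reconstruction of the idea, not GIO's actual code): collect the entries, sort them by inode number, then stat() them in that order, so the disk head sweeps mostly in one direction:

    #include <stdio.h>
    #include <stdlib.h>
    #include <dirent.h>
    #include <sys/stat.h>

    /* Sort directory entries by inode number before stating them,
     * hoping (as GIO does) that inode order roughly matches disk
     * order, so the stat() calls seek mostly in one direction. */
    static int by_inode(const struct dirent **a, const struct dirent **b)
    {
        if ((*a)->d_ino < (*b)->d_ino) return -1;
        if ((*a)->d_ino > (*b)->d_ino) return 1;
        return 0;
    }

    int main(int argc, char **argv)
    {
        const char *dir = argc > 1 ? argv[1] : "/usr/lib";
        struct dirent **entries;
        struct stat st;
        char path[4096];
        int i, n = scandir(dir, &entries, NULL, by_inode);

        if (n < 0) { perror("scandir"); return 1; }
        for (i = 0; i < n; i++) {
            snprintf(path, sizeof path, "%s/%s", dir, entries[i]->d_name);
            stat(path, &st);   /* now mostly sequential on disk */
            free(entries[i]);
        }
        free(entries);
        printf("stated %d entries in inode order\n", n);
        return 0;
    }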

One important thing is to design your data storage and apps to minimize seeking. For instance, this is why Nautilus is slow reading /usr/bin: the files in there generally have no extension, so we need to do magic sniffing on each one. That means we need to open each file => one seek per file => slooooow. Another example is apps that store information in lots of small files, like gconf used to do; also a bad idea. Anyway, in practice I don't think there is much you can do, except try to hide the latencies.
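
For anyone wondering what "magic sniffing" involves, here is a toy version. The signature table is a tiny illustrative subset I picked for this example; the real databases used by file(1) or GIO are far larger. The expensive part is simply the open-and-read of every single file:

    #include <stdio.h>
    #include <string.h>

    /* Toy "magic sniffing": read the first bytes of each file and
     * match them against a few known signatures. The table below is
     * an illustrative subset only. The cost that matters is the
     * fopen()+fread(): on a cold cache, one extra seek per file. */
    static const char *sniff(const char *path)
    {
        unsigned char buf[8] = {0};
        size_t n;
        FILE *f = fopen(path, "rb");

        if (!f) return "unreadable";
        n = fread(buf, 1, sizeof buf, f);
        fclose(f);

        if (n >= 4 && memcmp(buf, "\x7f" "ELF", 4) == 0) return "ELF binary";
        if (n >= 2 && buf[0] == '#' && buf[1] == '!')    return "script";
        if (n >= 8 && memcmp(buf, "\x89PNG\r\n\x1a\n", 8) == 0) return "PNG image";
        return "unknown";
    }

    int main(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++)
            printf("%s: %s\n", argv[i], sniff(argv[i]));
        return 0;
    }

With an extension, the type can be guessed from the file name alone, with no I/O at all; that's exactly the difference between browsing /usr/bin and browsing a folder of .jpg files.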

He ended with the following note:

The real fix for this whole dilemma is to move away from rotating media. I hear the Intel SSDs are awesome. Linus swears by them.

:-)