File access times and caching

I’ve been hanging around the Nautilus IRC channel for quite a few years already and have gotten quite familiar with the devs. Every single one of them is a great individual and always willing to lend a helping hand.

I thought I’d ask Alexander Larsson, a Red Hat employee and the lead developer of Nautilus and other projects including GVFS, for his insights as to why initial status checks take so much longer than subsequent ones. I already had a general idea as to why (mostly to do with caching) but was not familiar with the details, and I was quite interested in the opinion of somebody I consider a true expert.

The resulting answer was so informative I just had to take the time to share it with all of you. Here’s how Alex explained it:

Well, you know about caching I suppose? At this point we’re talking about the kernel using spare RAM to keep information about what’s on the disk.

Say you start with a blank slate, i.e. you have not accessed the filesystem at all. Now say you run stat("/some/dir/file"). First the kernel has to find the file, or more precisely its inode. It starts by looking in the filesystem superblock, which stores the inode of the root directory. Then it opens the root directory, finds “some”, opens that, finds “dir”, etc., eventually finding the inode for “file”.
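To make that walk a little more concrete, here is a rough userland analogue (my illustration, not anything Alex or the kernel source provided). It resolves the hypothetical /some/dir/file one component at a time with openat() and fstatat(), which mirrors the lookups the kernel performs internally inside a single stat() call:

/* Userland sketch of the path walk for a hypothetical /some/dir/file.
 * The kernel does this internally; here each lookup is made visible. */
#define _GNU_SOURCE            /* openat/fstatat on older glibc */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat st;
    int dirfd = open("/", O_RDONLY | O_DIRECTORY);  /* root inode comes from the superblock */
    if (dirfd < 0) { perror("open /"); return 1; }

    const char *dirs[] = { "some", "dir" };
    for (int i = 0; i < 2; i++) {                   /* open each directory in turn */
        int next = openat(dirfd, dirs[i], O_RDONLY | O_DIRECTORY);
        if (next < 0) { perror(dirs[i]); close(dirfd); return 1; }
        close(dirfd);
        dirfd = next;
    }

    if (fstatat(dirfd, "file", &st, 0) == 0)        /* finally look up the file's inode */
        printf("inode %lu\n", (unsigned long)st.st_ino);
    else
        perror("file");
    close(dirfd);
    return 0;
}

Each of those directory opens is a separate lookup, and on a cold cache each one can mean a trip to the disk.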

However, on a second access to /some/dir/file it uses the “dcache” (directory cache) which keeps around a set of recently accessed paths like /some, /some/dir, and /some/dir/file. So, it can now find the inode without any disk I/O.

Then you have to actually read the inode data. After the first read this is also cached in RAM. So, a read only has to happen once.
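You can see this for yourself with a minimal sketch (mine, not Alex’s) that times the same stat() twice: the first call pays for whatever lookups are not yet cached, while the second is served from the dcache and inode cache. The path is only an example; drop the caches first (the command for that appears further down) and the difference becomes dramatic:

/* Time the same stat() twice: cold vs. warm cache. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/usr/bin/ls";  /* any file will do */
    struct stat st;

    double t0 = seconds();
    stat(path, &st);                 /* may have to hit the disk */
    double t1 = seconds();
    stat(path, &st);                 /* same path again, now served from cache */
    double t2 = seconds();

    printf("first stat:  %.6f s\n", t1 - t0);
    printf("second stat: %.6f s\n", t2 - t1);
    return 0;
}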

I interrupted Alex for a moment to ask him how long entries are maintained in the cache.

It depends on what else the system is doing. If you actually start reading the file data, that is also cached (unless you hint the kernel not to do it). In general Linux tries to use all memory that is not otherwise allocated for cache and has a form of least recently used policy for what to throw out. So, if there is any form of memory pressure, the oldest cache info is thrown out. So, reading lots of data is a good way to invalidate caches. Which is why there are ways to hint the kernel that the data read will not be reused.
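One such hint, to give an example of the kind of thing Alex means (not necessarily the exact mechanism he had in mind), is posix_fadvise() with POSIX_FADV_DONTNEED, which tells the kernel it may drop the pages you just read instead of letting them push older cache entries out:

/* Read a file once while telling the kernel we won't reuse the data. */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[64 * 1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        ;                                    /* consume the data once */

    /* offset 0, length 0 means "the whole file": its pages may be dropped */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return 0;
}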

He continued:

Now, if you look at a HD performance sheet you see pretty impressive performance figures: maybe a disk can read 10MB/s, which surely sounds like a lot more than what some itty bitty svn info needs. I mean, if your svn status took 1s, does that mean it had to read 10 megs of data? The problem is that those read rates only apply when you read consecutive data from the disk.

Think of the HD like an old record player: once you’re in the right place with the needle you can keep reading stuff fast as it rotates. However, once you need to move to a different place, called “seeking”, you’re doing something very different. You need to physically move the arm, then wait for the platter to spin until the right place is under the needle. This kind of physical motion is inherently slow, so seek times for disks are pretty long.

So, when do we seek? It depends on the filesystem layout of course. Filesystems try to store files consecutively so as to increase read performance, and they generally also try to store the inodes for a single directory near each other, but it all depends on things like when the files were written, filesystem fragmentation, etc. So, in the worst case, each stat of a file will cause a seek and then each open of the file will cause a second seek. So, that’s why things take such a long time when nothing is cached.
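(To put rough numbers on that, and these figures are mine rather than Alex’s: a desktop drive of that era needs on the order of 10 ms per seek, so it manages roughly 100 seeks per second. If a cold stat plus open costs two seeks, a directory with 800 uncached files can spend about 800 × 2 ÷ 100 ≈ 16 seconds doing nothing but moving the disk head, even though hardly any data gets read.)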

Some filesystems are better than others; defragmentation might help. You can also do some things in apps. For instance, GIO sorts the inodes it gets back from readdir() before stat()ing them, hoping that the inode number has some relation to disk order (it generally does), thus minimizing random seeks back and forth.
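Here is a sketch of that trick (my code, not GIO’s): list a directory, sort the entries by inode number, and only then stat them, so the lookups tend to sweep across the disk in one direction instead of jumping around:

/* List a directory, sort by inode number, then stat in that order. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

struct entry { char name[256]; ino_t ino; };

static int by_ino(const void *a, const void *b)
{
    const struct entry *x = a, *y = b;
    return (x->ino > y->ino) - (x->ino < y->ino);
}

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : "/usr/bin";
    DIR *d = opendir(dir);
    if (!d) { perror("opendir"); return 1; }

    struct entry *ents = NULL;
    size_t n = 0, cap = 0;
    struct dirent *de;
    while ((de = readdir(d)) != NULL) {             /* collect names and inode numbers */
        if (n == cap) {
            cap = cap ? cap * 2 : 256;
            struct entry *tmp = realloc(ents, cap * sizeof(*ents));
            if (!tmp) { perror("realloc"); free(ents); return 1; }
            ents = tmp;
        }
        snprintf(ents[n].name, sizeof(ents[n].name), "%s", de->d_name);
        ents[n].ino = de->d_ino;
        n++;
    }
    closedir(d);

    qsort(ents, n, sizeof(*ents), by_ino);          /* inode order roughly matches disk order */

    for (size_t i = 0; i < n; i++) {
        char path[4096];
        struct stat st;
        snprintf(path, sizeof(path), "%s/%s", dir, ents[i].name);
        stat(path, &st);                            /* one lookup per entry, in sorted order */
    }
    free(ents);
    return 0;
}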

One important thing is to design your data storage and apps to minimize seeking. For instance, this is why Nautilus is slow at reading /usr/bin: the files in there generally have no extension, so we need to do magic sniffing for each one. So, we need to open each file => one seek per file => slooooow. Another example is apps that store information in lots of small files, like gconf used to do; also a bad idea. Anyway, in practice I don’t think there is much you can do except try to hide the latencies.
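To get a feel for what that sniffing costs, here is a toy version (mine; GIO actually matches against the shared-mime-info magic database): the expensive part is simply that every extensionless file has to be opened and its first couple of kilobytes read, which on a cold cache means at least one extra seek per file.

/* Toy content sniffer: open, read 2k, check one well-known signature. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    unsigned char buf[2048];
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof(buf));   /* the per-file cost: one open plus one read */
    close(fd);

    if (n >= 4 && memcmp(buf, "\x7f" "ELF", 4) == 0)
        printf("%s: ELF executable\n", argv[1]);
    else
        printf("%s: unknown (a real sniffer has many more magic rules)\n", argv[1]);
    return 0;
}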

I also asked Alex why Thunar was so fast at loading the same directories. A few moments later, however, it occurred to me that the reason I thought Thunar was fast might have been that I was opening /usr/bin in it after having already opened it in Nautilus; therefore the seeking and caching Alex talked about had already taken place. Alex responded with a resounding “aha!”. This may well be the reason why so many people say Nautilus is slow compared to X: they’re probably doing the same thing.

He then explained to me how to clear the cache, it turns out all you have to do is execute the following command as root:

sync; echo 3 > /proc/sys/vm/drop_caches

After doing so you’ll see that Thunar will also take quite a while to load a directory such as /usr/bin, for the same reason Nautilus does. Another interesting tidbit Alex pointed out was timing gvfs-ls directly. He told me to try out the following commands and stated, “you’ll be surprised”.

sync; echo 3 > /proc/sys/vm/drop_caches
time gvfs-ls -a "standard::content-type" /usr/bin/ > /dev/null
sync; echo 3 > /proc/sys/vm/drop_caches
time gvfs-ls /usr/bin/ > /dev/null

Note that the first gvfs-ls command took 16 seconds while the second took only 1.5 seconds. He explained that the only difference is that the first one reads the first 2k of each file. Given the answer he provided earlier, it should be quite obvious why there’s such a difference between the two commands.

Alex ended with the following note:

The real fix for this whole dilemma is to move away from rotating media. I hear the intel SSDs are teh shit. Linus swears by them.

I hope you all found this information as interesting as I did.
