Speeding up Mercurial on NFS

Matt Mackall mpm at selenic.com
Tue Dec 7 11:25:18 CST 2010


On Tue, 2010-12-07 at 16:34 +0100, Martin Geisler wrote:
> Matt Mackall <mpm at selenic.com> writes:
> 
> > On Mon, 2010-12-06 at 08:23 -0800, mg at lazybytes.net wrote:
> >
> >> Two years ago, someone asked if Git could be made to run faster over
> >> NFS. The result was a patch that made Git use threads to preload data
> >> in parallel. This gave a 5x speedup:
> >>
> >>   http://kerneltrap.org/mailarchive/git/2008/11/14/4089834/thread
> >>
> >> I have tried to replicate these results with some simple test
> >> programs:
> >>
> >>   http://bitbucket.org/mg/parallelwalk
> >>
> >> The 'walker' program is a single-threaded C-based program that will
> >> walk a directory tree as fast as possible, 'pywalker' is a
> >> Python-based version of it, but with support for multiple threads.
> >
> > Hmm, to the extent that pywalker spends its time waiting in syscalls,
> > it might avoid the GIL.
> 
> Yes, that's the idea: each thread releases the GIL (I checked the Python
> source) when it does os.lstat and lets another thread run Python byte
> code until that thread hits os.lstat.
> 
> >> The results of these tests:
> >>
> >>       processes  threads
> >>   1:  24 sec     24 sec
> >>   2:  24 sec     19 sec
> >>   4:  26 sec     20 sec
> >>   6:  24 sec     19 sec
> >>   8:  23 sec     19 sec
> >
> > What do your C results look like?
> 
> I only have a single-threaded C program, and it was about 30% faster
> than the Python program, if I recall correctly. I can test on the office
> machine tomorrow.
> 
> >> This is on an NFS setup where I export a directory and mount it back
> >> on localhost. I add an artificial network delay of 0.2 ms (0.1 ms in
> >> each direction) with
> >>
> >>   % sudo tc qdisc change dev lo root netem delay 0.1ms
> >>
> >> and I empty the disk cache before each run:
> >>
> >>   % sync; sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> >
> > Is this the cache on the client or server?
> 
> Sort of both -- I'm using the same machine as client and server with the
> NFS export mounted back on localhost.

I don't think that's the right test at all.

A cold cache NFS request is going to have a network round trip and a
seek/read, and can't be parallelized: there's only one disk head.

A hot cache NFS request is going to have just a network round trip and
can be parallelized: you can fill the pipe and CPU with them.

A typical NFS server is going to have lots of cache, and that cache is
going to be coherent with what's on the disk and thus won't expire.

If you want to emulate cold cache on the client and hot cache on the
server, it should be sufficient to unmount and remount the loopback.

-- 
Mathematics is the supreme nostalgia of our time.

More information about the Mercurial-devel mailing list