[PATCH] largefiles: don't cache largefiles for pulled heads by default

natosha at unity3d.com
Sat Feb 9 15:09:22 CST 2013


# HG changeset patch
# User Na'Tosha Bard <natosha at unity3d.com>
# Date 1360444062 0
# Node ID ec59cb6ffec44f330b52f4a0a41b117de38a0987
# Parent  7f26c8bcbd74c0248acea8247f7b12b4aafe5a53
largefiles: don't cache largefiles for pulled heads by default

After discussion, we've agreed that largefiles for newly pulled heads should
not be cached by default.  The use case for caching them at pull time is
using largefiles repos with multiple remote servers (and therefore multiple
remote largefiles caches), where users will be pulling from non-default
locations on a regular basis.  We think this use case will be significantly
less common than the one where all largefiles are stored on the same central
server, so the default should be no caching.

The old behavior can be obtained by passing the --cache-largefiles flag to
pull.
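
As an illustration only (the pulled path below is hypothetical), the
intended usage after this change is:

  $ hg pull ../other-repo                     # largefiles not cached
  $ hg pull --cache-largefiles ../other-repo  # pre-cache largefiles for new heads

The existing --all-largefiles pull option is unchanged and still downloads
all pulled versions of largefiles.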

diff -r 7f26c8bcbd74 -r ec59cb6ffec4 hgext/largefiles/__init__.py
--- a/hgext/largefiles/__init__.py	Sat Feb 09 15:25:46 2013 +0000
+++ b/hgext/largefiles/__init__.py	Sat Feb 09 21:07:42 2013 +0000
@@ -41,13 +41,17 @@
 enabled for this to work.
 
 When you pull a changeset that affects largefiles from a remote
-repository, the largefiles for the changeset usually won't be
-pulled down until you update to the revision (there is one exception
-to this case).  However, when you update to such a revision, any
-largefiles needed by that revision are downloaded and cached (if
-they have never been downloaded before).  This means that network
-access may be required to update to changesets you have no
-previously updated to.
+repository, the largefiles for the changeset won't be pulled down.
+Instead, when you later update to such a revision, any largefiles
+needed by that revision are downloaded and cached (if they have
+never been downloaded before).  This means that network access may
+be required to update to changesets you have not previously updated to.
+
+If you know you are pulling from a non-default location and want to
+ensure that you will have the largefiles needed to merge or rebase
+with new heads that you are pulling, then you can pull with the
+--cache-largefiles flag to pre-emptively download any largefiles
+that are new in those heads.
 
 The one exception to the "largefiles won't be pulled until you update
 to a revision that changes them" rule is when you pull new heads.
diff -r 7f26c8bcbd74 -r ec59cb6ffec4 hgext/largefiles/overrides.py
--- a/hgext/largefiles/overrides.py	Sat Feb 09 15:25:46 2013 +0000
+++ b/hgext/largefiles/overrides.py	Sat Feb 09 21:07:42 2013 +0000
@@ -731,19 +731,21 @@
         repo.lfpullsource = source
         oldheads = lfutil.getcurrentheads(repo)
         result = orig(ui, repo, source, **opts)
-        # If we do not have the new largefiles for any new heads we pulled, we
-        # will run into a problem later if we try to merge or rebase with one of
-        # these heads, so cache the largefiles now directly into the system
-        # cache.
-        numcached = 0
-        heads = lfutil.getcurrentheads(repo)
-        newheads = set(heads).difference(set(oldheads))
-        if len(newheads) > 0:
-            ui.status(_("caching largefiles for %s heads\n") % len(newheads))
-        for head in newheads:
-            (cached, missing) = lfcommands.cachelfiles(ui, repo, head)
-            numcached += len(cached)
-        ui.status(_("%d largefiles cached\n") % numcached)
+        if opts.get('cache_largefiles'):
+            # If you are pulling from a remote location that is not your
+            # default location, you may want to cache largefiles for new heads
+            # that have been pulled, so you can easily merge or rebase with
+            # them later
+            numcached = 0
+            heads = lfutil.getcurrentheads(repo)
+            newheads = set(heads).difference(set(oldheads))
+            if len(newheads) > 0:
+                ui.status(_("caching largefiles for %s heads\n") %
+                          len(newheads))
+            for head in newheads:
+                (cached, missing) = lfcommands.cachelfiles(ui, repo, head)
+                numcached += len(cached)
+            ui.status(_("%d largefiles cached\n") % numcached)
     if opts.get('all_largefiles'):
         revspostpull = len(repo)
         revs = []
diff -r 7f26c8bcbd74 -r ec59cb6ffec4 hgext/largefiles/uisetup.py
--- a/hgext/largefiles/uisetup.py	Sat Feb 09 15:25:46 2013 +0000
+++ b/hgext/largefiles/uisetup.py	Sat Feb 09 21:07:42 2013 +0000
@@ -79,7 +79,9 @@
     entry = extensions.wrapcommand(commands.table, 'pull',
                                    overrides.overridepull)
     pullopt = [('', 'all-largefiles', None,
-                 _('download all pulled versions of largefiles'))]
+                 _('download all pulled versions of largefiles')),
+               ('', 'cache-largefiles', None,
+                 _('cache new largefiles in all pulled heads'))]
     entry[1].extend(pullopt)
     entry = extensions.wrapcommand(commands.table, 'clone',
                                    overrides.overrideclone)
diff -r 7f26c8bcbd74 -r ec59cb6ffec4 tests/test-largefiles-cache.t
--- a/tests/test-largefiles-cache.t	Sat Feb 09 15:25:46 2013 +0000
+++ b/tests/test-largefiles-cache.t	Sat Feb 09 21:07:42 2013 +0000
@@ -37,8 +37,6 @@
   adding file changes
   added 1 changesets with 1 changes to 1 files
   (run 'hg update' to get a working copy)
-  caching largefiles for 1 heads
-  0 largefiles cached
 
 Update working directory to "tip", which requires largefile("large"),
 but there is no cache file for it.  So, hg must treat it as
diff -r 7f26c8bcbd74 -r ec59cb6ffec4 tests/test-largefiles.t
--- a/tests/test-largefiles.t	Sat Feb 09 15:25:46 2013 +0000
+++ b/tests/test-largefiles.t	Sat Feb 09 21:07:42 2013 +0000
@@ -883,9 +883,7 @@
   adding file changes
   added 6 changesets with 16 changes to 8 files
   (run 'hg update' to get a working copy)
-  caching largefiles for 1 heads
-  3 largefiles cached
-  3 additional largefiles cached
+  6 additional largefiles cached
   $ cd ..
 
 Rebasing between two repositories does not revert largefiles to old
@@ -974,8 +972,6 @@
   adding file changes
   added 1 changesets with 2 changes to 2 files (+1 heads)
   (run 'hg heads' to see heads, 'hg merge' to merge)
-  caching largefiles for 1 heads
-  0 largefiles cached
   $ hg rebase
   Invoking status precommit hook
   M sub/normal4
@@ -1265,7 +1261,8 @@
   $ hg commit -m "Modify large4 to test merge"
   Invoking status precommit hook
   M sub/large4
-  $ hg pull ../e
+# Test --cache-largefiles flag
+  $ hg pull --cache-largefiles ../e
   pulling from ../e
   searching for changes
   adding changesets

