D2096: infinitepush: move the extension to core from fb-hgext

pulkit (Pulkit Goyal) phabricator at mercurial-scm.org
Fri Feb 9 11:45:27 UTC 2018


pulkit created this revision.
Herald added a subscriber: mercurial-devel.
Herald added a reviewer: hg-reviewers.

REVISION SUMMARY
  This patch moves the infinitepush extension from fb-hgext to core. The
  extension stores bundles received during a push in a bundlestore instead of
  applying them to the revlog.
  
  The extension was copied from the repository revision at
  https://phab.mercurial-scm.org/rFBHGXf27f094e91553d3cae5167c0b1c42ae940f888d5
  and the following changes were made:
  
  - added `from __future__ import absolute_import` where missing
  - fixed module imports to follow the core style
  - minor fixes for test-check-code.t
  - registered the configs
  - added the testedwith value to match core's convention
  - removed double newlines to make test-check-commit.t happy
  - added a one-line doc string for the extension and marked it as experimental
  
  Only one test file, test-infinitepush-bundlestore.t, is moved to core, with
  the following changes made to it:
  
  - removed the dependency on library.sh
  - split the tests into two files, test-infinitepush.t and test-infinitepush-bundlestore.t
  - removed testing related to other Facebook extensions: pushrebase, inhibit, fbamend
  
  library-infinitepush.sh is also copied from fb-hgext at the same revision,
  with the following changes:
  
  - changed the path to the infinitepush extension, since it is in core with this patch
  - removed SQL handling from the file, as we are not testing that initially
  
  At this revision, test-check-module-imports.t does not pass because one of
  the files imports a module from fb-hgext; that import will be removed in the
  next patch.
  
  This extension currently contains a lot of functionality we don't require in
  core, such as the `--to` and `--create` flags to `hg bookmark`, logic related
  to the remotenames extension and other Facebook extensions, custom bundle2
  parts which could be avoided by using the bookmarks bundle2 part, and logic
  related to the SQL store, which we probably don't want initially.
  
  The next patches in this series will remove the unwanted and unneeded parts
  of the extension and clean it up.
  
  The end goal is a very lightweight extension with little or no wrapping on
  the client side.

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D2096

AFFECTED FILES
  hgext/infinitepush/README
  hgext/infinitepush/__init__.py
  hgext/infinitepush/backupcommands.py
  hgext/infinitepush/bundleparts.py
  hgext/infinitepush/common.py
  hgext/infinitepush/fileindexapi.py
  hgext/infinitepush/indexapi.py
  hgext/infinitepush/infinitepushcommands.py
  hgext/infinitepush/schema.sql
  hgext/infinitepush/sqlindexapi.py
  hgext/infinitepush/store.py
  tests/library-infinitepush.sh
  tests/test-infinitepush-bundlestore.t
  tests/test-infinitepush.t

CHANGE DETAILS

diff --git a/tests/test-infinitepush.t b/tests/test-infinitepush.t
new file mode 100644
--- /dev/null
+++ b/tests/test-infinitepush.t
@@ -0,0 +1,318 @@
+Testing the infinitepush extension and the config options provided by it
+
+Setup
+
+  $ . "$TESTDIR/library-infinitepush.sh"
+  $ cp $HGRCPATH $TESTTMP/defaulthgrc
+  $ setupcommon
+  $ hg init repo
+  $ cd repo
+  $ setupserver
+  $ echo initialcommit > initialcommit
+  $ hg ci -Aqm "initialcommit"
+  $ hg phase --public .
+
+  $ cd ..
+  $ hg clone ssh://user@dummy/repo client -q
+
+Create two heads. Push first head alone, then two heads together. Make sure that
+multihead push works.
+  $ cd client
+  $ echo multihead1 > multihead1
+  $ hg add multihead1
+  $ hg ci -m "multihead1"
+  $ hg up null
+  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
+  $ echo multihead2 > multihead2
+  $ hg ci -Am "multihead2"
+  adding multihead2
+  created new head
+  $ hg push -r . --bundle-store
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 1 commit:
+  remote:     ee4802bf6864  multihead2
+  $ hg push -r '1:2' --bundle-store
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 2 commits:
+  remote:     bc22f9a30a82  multihead1
+  remote:     ee4802bf6864  multihead2
+  $ scratchnodes
+  bc22f9a30a821118244deacbd732e394ed0b686c ab1bc557aa090a9e4145512c734b6e8a828393a5
+  ee4802bf6864326a6b3dcfff5a03abc2a0a69b8f ab1bc557aa090a9e4145512c734b6e8a828393a5
+
+Create two new scratch bookmarks
+  $ hg up 0
+  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo scratchfirstpart > scratchfirstpart
+  $ hg ci -Am "scratchfirstpart"
+  adding scratchfirstpart
+  created new head
+  $ hg push -r . --to scratch/firstpart --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 1 commit:
+  remote:     176993b87e39  scratchfirstpart
+  $ hg up 0
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo scratchsecondpart > scratchsecondpart
+  $ hg ci -Am "scratchsecondpart"
+  adding scratchsecondpart
+  created new head
+  $ hg push -r . --to scratch/secondpart --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 1 commit:
+  remote:     8db3891c220e  scratchsecondpart
+
+Pull two bookmarks from the second client
+  $ cd ..
+  $ hg clone ssh://user@dummy/repo client2 -q
+  $ cd client2
+  $ hg pull -B scratch/firstpart -B scratch/secondpart
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files (+1 heads)
+  new changesets * (glob)
+  (run 'hg heads' to see heads, 'hg merge' to merge)
+  $ hg log -r scratch/secondpart -T '{node}'
+  8db3891c220e216f6da214e8254bd4371f55efca (no-eol)
+  $ hg log -r scratch/firstpart -T '{node}'
+  176993b87e39bd88d66a2cccadabe33f0b346339 (no-eol)
+Make two commits to the scratch branch
+
+  $ echo testpullbycommithash1 > testpullbycommithash1
+  $ hg ci -Am "testpullbycommithash1"
+  adding testpullbycommithash1
+  created new head
+  $ hg log -r '.' -T '{node}\n' > ../testpullbycommithash1
+  $ echo testpullbycommithash2 > testpullbycommithash2
+  $ hg ci -Aqm "testpullbycommithash2"
+  $ hg push -r . --to scratch/mybranch --create -q
+
+Create third client and pull by commit hash.
+Make sure testpullbycommithash2 has not been fetched
+  $ cd ..
+  $ hg clone ssh://user@dummy/repo client3 -q
+  $ cd client3
+  $ hg pull -r `cat ../testpullbycommithash1`
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  new changesets 33910bfe6ffe
+  (run 'hg update' to get a working copy)
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  o  testpullbycommithash1 draft
+  |
+  @  initialcommit public
+  
+Make public commit in the repo and pull it.
+Make sure phase on the client is public.
+  $ cd ../repo
+  $ echo publiccommit > publiccommit
+  $ hg ci -Aqm "publiccommit"
+  $ hg phase --public .
+  $ cd ../client3
+  $ hg pull
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files (+1 heads)
+  new changesets a79b6597f322
+  (run 'hg heads' to see heads, 'hg merge' to merge)
+  $ hg log -G -T '{desc} {phase} {bookmarks} {node|short}'
+  o  publiccommit public  a79b6597f322
+  |
+  | o  testpullbycommithash1 draft  33910bfe6ffe
+  |/
+  @  initialcommit public  67145f466344
+  
+  $ hg up a79b6597f322
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ echo scratchontopofpublic > scratchontopofpublic
+  $ hg ci -Aqm "scratchontopofpublic"
+  $ hg push -r . --to scratch/scratchontopofpublic --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 1 commit:
+  remote:     c70aee6da07d  scratchontopofpublic
+  $ cd ../client2
+  $ hg pull -B scratch/scratchontopofpublic
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files (+1 heads)
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  new changesets a79b6597f322:c70aee6da07d
+  (run 'hg heads .' to see heads, 'hg merge' to merge)
+  $ hg log -r scratch/scratchontopofpublic -T '{phase}'
+  draft (no-eol)
+Strip scratchontopofpublic commit and do hg update
+  $ hg log -r tip -T '{node}\n'
+  c70aee6da07d7cdb9897375473690df3a8563339
+  $ echo "[extensions]" >> .hg/hgrc
+  $ echo "strip=" >> .hg/hgrc
+  $ hg strip -q tip
+  $ hg up c70aee6da07d7cdb9897375473690df3a8563339
+  'c70aee6da07d7cdb9897375473690df3a8563339' does not exist locally - looking for it remotely...
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  new changesets c70aee6da07d
+  (run 'hg update' to get a working copy)
+  'c70aee6da07d7cdb9897375473690df3a8563339' found remotely
+  2 files updated, 0 files merged, 2 files removed, 0 files unresolved
+
+Trying to pull from bad path
+  $ hg strip -q tip
+  $ hg --config paths.default=badpath up c70aee6da07d7cdb9897375473690df3a8563339
+  'c70aee6da07d7cdb9897375473690df3a8563339' does not exist locally - looking for it remotely...
+  pulling from $TESTTMP/client2/badpath (glob)
+  pull failed: repository $TESTTMP/client2/badpath not found
+  abort: unknown revision 'c70aee6da07d7cdb9897375473690df3a8563339'!
+  [255]
+
+Strip commit and pull it using hg update with bookmark name
+  $ hg strip -q d8fde0ddfc96
+  $ hg book -d scratch/mybranch
+  $ hg up scratch/mybranch
+  'scratch/mybranch' does not exist locally - looking for it remotely...
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 2 files
+  new changesets d8fde0ddfc96
+  (run 'hg update' to get a working copy)
+  'scratch/mybranch' found remotely
+  2 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  (activating bookmark scratch/mybranch)
+  $ hg log -r scratch/mybranch -T '{node}'
+  d8fde0ddfc962183977f92d2bc52d303b8840f9d (no-eol)
+
+Test debugfillinfinitepushmetadata
+  $ cd ../repo
+  $ hg debugfillinfinitepushmetadata
+  abort: nodes are not specified
+  [255]
+  $ hg debugfillinfinitepushmetadata --node randomnode
+  abort: node randomnode is not found
+  [255]
+  $ hg debugfillinfinitepushmetadata --node d8fde0ddfc962183977f92d2bc52d303b8840f9d
+  $ cat .hg/scratchbranches/index/nodemetadatamap/d8fde0ddfc962183977f92d2bc52d303b8840f9d
+  {"changed_files": {"testpullbycommithash2": {"adds": 1, "isbinary": false, "removes": 0, "status": "added"}}} (no-eol)
+
+  $ cd ../client
+  $ hg up d8fde0ddfc962183977f92d2bc52d303b8840f9d
+  'd8fde0ddfc962183977f92d2bc52d303b8840f9d' does not exist locally - looking for it remotely...
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 2 changesets with 2 changes to 2 files (+1 heads)
+  new changesets 33910bfe6ffe:d8fde0ddfc96
+  (run 'hg heads .' to see heads, 'hg merge' to merge)
+  'd8fde0ddfc962183977f92d2bc52d303b8840f9d' found remotely
+  2 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo file > file
+  $ hg add file
+  $ hg rm testpullbycommithash2
+  $ hg ci -m 'add and rm files'
+  $ hg log -r . -T '{node}\n'
+  3edfe7e9089ab9f728eb8e0d0c62a5d18cf19239
+  $ hg cp file cpfile
+  $ hg mv file mvfile
+  $ hg ci -m 'cpfile and mvfile'
+  $ hg log -r . -T '{node}\n'
+  c7ac39f638c6b39bcdacf868fa21b6195670f8ae
+  $ hg push -r . --bundle-store
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 4 commits:
+  remote:     33910bfe6ffe  testpullbycommithash1
+  remote:     d8fde0ddfc96  testpullbycommithash2
+  remote:     3edfe7e9089a  add and rm files
+  remote:     c7ac39f638c6  cpfile and mvfile
+  $ cd ../repo
+  $ hg debugfillinfinitepushmetadata --node 3edfe7e9089ab9f728eb8e0d0c62a5d18cf19239 --node c7ac39f638c6b39bcdacf868fa21b6195670f8ae
+  $ cat .hg/scratchbranches/index/nodemetadatamap/3edfe7e9089ab9f728eb8e0d0c62a5d18cf19239
+  {"changed_files": {"file": {"adds": 1, "isbinary": false, "removes": 0, "status": "added"}, "testpullbycommithash2": {"adds": 0, "isbinary": false, "removes": 1, "status": "removed"}}} (no-eol)
+  $ cat .hg/scratchbranches/index/nodemetadatamap/c7ac39f638c6b39bcdacf868fa21b6195670f8ae
+  {"changed_files": {"cpfile": {"adds": 1, "copies": "file", "isbinary": false, "removes": 0, "status": "added"}, "file": {"adds": 0, "isbinary": false, "removes": 1, "status": "removed"}, "mvfile": {"adds": 1, "copies": "file", "isbinary": false, "removes": 0, "status": "added"}}} (no-eol)
+
+Test infinitepush.metadatafilelimit number
+  $ cd ../client
+  $ echo file > file
+  $ hg add file
+  $ echo file1 > file1
+  $ hg add file1
+  $ echo file2 > file2
+  $ hg add file2
+  $ hg ci -m 'add many files'
+  $ hg log -r . -T '{node}'
+  09904fb20c53ff351bd3b1d47681f569a4dab7e5 (no-eol)
+  $ hg push -r . --bundle-store
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 5 commits:
+  remote:     33910bfe6ffe  testpullbycommithash1
+  remote:     d8fde0ddfc96  testpullbycommithash2
+  remote:     3edfe7e9089a  add and rm files
+  remote:     c7ac39f638c6  cpfile and mvfile
+  remote:     09904fb20c53  add many files
+
+  $ cd ../repo
+  $ hg debugfillinfinitepushmetadata --node 09904fb20c53ff351bd3b1d47681f569a4dab7e5 --config infinitepush.metadatafilelimit=2
+  $ cat .hg/scratchbranches/index/nodemetadatamap/09904fb20c53ff351bd3b1d47681f569a4dab7e5
+  {"changed_files": {"file": {"adds": 1, "isbinary": false, "removes": 0, "status": "added"}, "file1": {"adds": 1, "isbinary": false, "removes": 0, "status": "added"}}, "changed_files_truncated": true} (no-eol)
+
+Test infinitepush.fillmetadatabranchpattern
+  $ cd ../repo
+  $ cat >> .hg/hgrc << EOF
+  > [infinitepush]
+  > fillmetadatabranchpattern=re:scratch/fillmetadata/.*
+  > EOF
+  $ cd ../client
+  $ echo tofillmetadata > tofillmetadata
+  $ hg ci -Aqm "tofillmetadata"
+  $ hg log -r . -T '{node}\n'
+  d2b0410d4da084bc534b1d90df0de9eb21583496
+  $ hg push -r . --to scratch/fillmetadata/fill --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 6 commits:
+  remote:     33910bfe6ffe  testpullbycommithash1
+  remote:     d8fde0ddfc96  testpullbycommithash2
+  remote:     3edfe7e9089a  add and rm files
+  remote:     c7ac39f638c6  cpfile and mvfile
+  remote:     09904fb20c53  add many files
+  remote:     d2b0410d4da0  tofillmetadata
+
+Make sure background process finished
+  $ sleep 3
+  $ cd ../repo
+  $ cat .hg/scratchbranches/index/nodemetadatamap/d2b0410d4da084bc534b1d90df0de9eb21583496
+  {"changed_files": {"tofillmetadata": {"adds": 1, "isbinary": false, "removes": 0, "status": "added"}}} (no-eol)
diff --git a/tests/test-infinitepush-bundlestore.t b/tests/test-infinitepush-bundlestore.t
new file mode 100644
--- /dev/null
+++ b/tests/test-infinitepush-bundlestore.t
@@ -0,0 +1,417 @@
+
+Create an ondisk bundlestore in .hg/scratchbranches
+  $ . "$TESTDIR/library-infinitepush.sh"
+  $ cp $HGRCPATH $TESTTMP/defaulthgrc
+  $ setupcommon
+  $ mkcommit() {
+  >    echo "$1" > "$1"
+  >    hg add "$1"
+  >    hg ci -m "$1"
+  > }
+  $ hg init repo
+  $ cd repo
+
+Check that we can send a scratch commit to the server and that it does not
+show up in the history but is stored on disk
+  $ setupserver
+  $ cd ..
+  $ hg clone ssh://user@dummy/repo client -q
+  $ cd client
+  $ mkcommit initialcommit
+  $ hg push -r . --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: adding changesets
+  remote: adding manifests
+  remote: adding file changes
+  remote: added 1 changesets with 1 changes to 1 files
+  $ mkcommit scratchcommit
+  $ hg push -r . --to scratch/mybranch --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 1 commit:
+  remote:     20759b6926ce  scratchcommit
+  $ hg log -G
+  @  changeset:   1:20759b6926ce
+  |  bookmark:    scratch/mybranch
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     scratchcommit
+  |
+  o  changeset:   0:67145f466344
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     initialcommit
+  
+  $ hg log -G -R ../repo
+  o  changeset:   0:67145f466344
+     tag:         tip
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     initialcommit
+  
+  $ find ../repo/.hg/scratchbranches | sort
+  ../repo/.hg/scratchbranches
+  ../repo/.hg/scratchbranches/filebundlestore
+  ../repo/.hg/scratchbranches/filebundlestore/b9
+  ../repo/.hg/scratchbranches/filebundlestore/b9/e1
+  ../repo/.hg/scratchbranches/filebundlestore/b9/e1/b9e1ee5f93fb6d7c42496fc176c09839639dd9cc
+  ../repo/.hg/scratchbranches/index
+  ../repo/.hg/scratchbranches/index/bookmarkmap
+  ../repo/.hg/scratchbranches/index/bookmarkmap/scratch
+  ../repo/.hg/scratchbranches/index/bookmarkmap/scratch/mybranch
+  ../repo/.hg/scratchbranches/index/nodemap
+  ../repo/.hg/scratchbranches/index/nodemap/20759b6926ce827d5a8c73eb1fa9726d6f7defb2
+
+From another client we can get the scratchbranch if we ask for it explicitly
+
+  $ cd ..
+  $ hg clone ssh://user@dummy/repo client2 -q
+  $ cd client2
+  $ hg pull -B scratch/mybranch --traceback
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  new changesets 20759b6926ce
+  (run 'hg update' to get a working copy)
+  $ hg log -G
+  o  changeset:   1:20759b6926ce
+  |  bookmark:    scratch/mybranch
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     scratchcommit
+  |
+  @  changeset:   0:67145f466344
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     initialcommit
+  
+  $ cd ..
+
+Push to non-scratch bookmark
+
+  $ cd client
+  $ hg up 0
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ mkcommit newcommit
+  created new head
+  $ hg push -r .
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: adding changesets
+  remote: adding manifests
+  remote: adding file changes
+  remote: added 1 changesets with 1 changes to 1 files
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  newcommit public
+  |
+  | o  scratchcommit draft scratch/mybranch
+  |/
+  o  initialcommit public
+  
+
+Push to scratch branch
+  $ cd ../client2
+  $ hg up -q scratch/mybranch
+  $ mkcommit 'new scratch commit'
+  $ hg push -r . --to scratch/mybranch
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 2 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  new scratch commit draft scratch/mybranch
+  |
+  o  scratchcommit draft
+  |
+  o  initialcommit public
+  
+  $ scratchnodes
+  1de1d7d92f8965260391d0513fe8a8d5973d3042 bed63daed3beba97fff2e819a148cf415c217a85
+  20759b6926ce827d5a8c73eb1fa9726d6f7defb2 bed63daed3beba97fff2e819a148cf415c217a85
+
+  $ scratchbookmarks
+  scratch/mybranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
+
+Push scratch bookmark with no new revs
+  $ hg push -r . --to scratch/anotherbranch --create
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 2 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  new scratch commit draft scratch/anotherbranch scratch/mybranch
+  |
+  o  scratchcommit draft
+  |
+  o  initialcommit public
+  
+  $ scratchbookmarks
+  scratch/anotherbranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
+  scratch/mybranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
+
+Pull scratch and non-scratch bookmark at the same time
+
+  $ hg -R ../repo book newbook
+  $ cd ../client
+  $ hg pull -B newbook -B scratch/mybranch --traceback
+  pulling from ssh://user@dummy/repo
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 2 files
+  adding remote bookmark newbook
+  new changesets 1de1d7d92f89
+  (run 'hg update' to get a working copy)
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  o  new scratch commit draft scratch/mybranch
+  |
+  | @  newcommit public
+  | |
+  o |  scratchcommit draft
+  |/
+  o  initialcommit public
+  
+
+Push scratch revision without bookmark with --bundle-store
+
+  $ hg up -q tip
+  $ mkcommit scratchcommitnobook
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  scratchcommitnobook draft
+  |
+  o  new scratch commit draft scratch/mybranch
+  |
+  | o  newcommit public
+  | |
+  o |  scratchcommit draft
+  |/
+  o  initialcommit public
+  
+  $ hg push -r . --bundle-store
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 3 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  remote:     2b5d271c7e0d  scratchcommitnobook
+  $ hg -R ../repo log -G -T '{desc} {phase}'
+  o  newcommit public
+  |
+  o  initialcommit public
+  
+
+  $ scratchnodes
+  1de1d7d92f8965260391d0513fe8a8d5973d3042 66fa08ff107451320512817bed42b7f467a1bec3
+  20759b6926ce827d5a8c73eb1fa9726d6f7defb2 66fa08ff107451320512817bed42b7f467a1bec3
+  2b5d271c7e0d25d811359a314d413ebcc75c9524 66fa08ff107451320512817bed42b7f467a1bec3
+
+Test with pushrebase
+  $ mkcommit scratchcommitwithpushrebase
+  $ hg push -r . --to scratch/mybranch
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 4 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  remote:     2b5d271c7e0d  scratchcommitnobook
+  remote:     d8c4f54ab678  scratchcommitwithpushrebase
+  $ hg -R ../repo log -G -T '{desc} {phase}'
+  o  newcommit public
+  |
+  o  initialcommit public
+  
+  $ scratchnodes
+  1de1d7d92f8965260391d0513fe8a8d5973d3042 e3cb2ac50f9e1e6a5ead3217fc21236c84af4397
+  20759b6926ce827d5a8c73eb1fa9726d6f7defb2 e3cb2ac50f9e1e6a5ead3217fc21236c84af4397
+  2b5d271c7e0d25d811359a314d413ebcc75c9524 e3cb2ac50f9e1e6a5ead3217fc21236c84af4397
+  d8c4f54ab678fd67cb90bb3f272a2dc6513a59a7 e3cb2ac50f9e1e6a5ead3217fc21236c84af4397
+
+Change the order of pushrebase and infinitepush
+  $ mkcommit scratchcommitwithpushrebase2
+  $ hg push -r . --to scratch/mybranch
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 5 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  remote:     2b5d271c7e0d  scratchcommitnobook
+  remote:     d8c4f54ab678  scratchcommitwithpushrebase
+  remote:     6c10d49fe927  scratchcommitwithpushrebase2
+  $ hg -R ../repo log -G -T '{desc} {phase}'
+  o  newcommit public
+  |
+  o  initialcommit public
+  
+  $ scratchnodes
+  1de1d7d92f8965260391d0513fe8a8d5973d3042 cd0586065eaf8b483698518f5fc32531e36fd8e0
+  20759b6926ce827d5a8c73eb1fa9726d6f7defb2 cd0586065eaf8b483698518f5fc32531e36fd8e0
+  2b5d271c7e0d25d811359a314d413ebcc75c9524 cd0586065eaf8b483698518f5fc32531e36fd8e0
+  6c10d49fe92751666c40263f96721b918170d3da cd0586065eaf8b483698518f5fc32531e36fd8e0
+  d8c4f54ab678fd67cb90bb3f272a2dc6513a59a7 cd0586065eaf8b483698518f5fc32531e36fd8e0
+
+Non-fastforward scratch bookmark push
+
+  $ hg log -GT "{rev}:{node} {desc}\n"
+  @  6:6c10d49fe92751666c40263f96721b918170d3da scratchcommitwithpushrebase2
+  |
+  o  5:d8c4f54ab678fd67cb90bb3f272a2dc6513a59a7 scratchcommitwithpushrebase
+  |
+  o  4:2b5d271c7e0d25d811359a314d413ebcc75c9524 scratchcommitnobook
+  |
+  o  3:1de1d7d92f8965260391d0513fe8a8d5973d3042 new scratch commit
+  |
+  | o  2:91894e11e8255bf41aa5434b7b98e8b2aa2786eb newcommit
+  | |
+  o |  1:20759b6926ce827d5a8c73eb1fa9726d6f7defb2 scratchcommit
+  |/
+  o  0:67145f4663446a9580364f70034fea6e21293b6f initialcommit
+  
+  $ hg up 6c10d49fe927
+  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ echo 1 > amend
+  $ hg add amend
+  $ hg ci --amend -m 'scratch amended commit'
+  saved backup bundle to $TESTTMP/client/.hg/strip-backup/6c10d49fe927-c99ffec5-amend.hg (glob)
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  scratch amended commit draft scratch/mybranch
+  |
+  o  scratchcommitwithpushrebase draft
+  |
+  o  scratchcommitnobook draft
+  |
+  o  new scratch commit draft
+  |
+  | o  newcommit public
+  | |
+  o |  scratchcommit draft
+  |/
+  o  initialcommit public
+  
+
+  $ scratchbookmarks
+  scratch/anotherbranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
+  scratch/mybranch 6c10d49fe92751666c40263f96721b918170d3da
+  $ hg push -r . --to scratch/mybranch
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: non-forward push
+  remote: (use --non-forward-move to override)
+  abort: push failed on remote
+  [255]
+
+  $ hg push -r . --to scratch/mybranch --non-forward-move
+  pushing to ssh://user@dummy/repo
+  searching for changes
+  remote: pushing 5 commits:
+  remote:     20759b6926ce  scratchcommit
+  remote:     1de1d7d92f89  new scratch commit
+  remote:     2b5d271c7e0d  scratchcommitnobook
+  remote:     d8c4f54ab678  scratchcommitwithpushrebase
+  remote:     8872775dd97a  scratch amended commit
+  $ scratchbookmarks
+  scratch/anotherbranch 1de1d7d92f8965260391d0513fe8a8d5973d3042
+  scratch/mybranch 8872775dd97a750e1533dc1fbbca665644b32547
+  $ hg log -G -T '{desc} {phase} {bookmarks}'
+  @  scratch amended commit draft scratch/mybranch
+  |
+  o  scratchcommitwithpushrebase draft
+  |
+  o  scratchcommitnobook draft
+  |
+  o  new scratch commit draft
+  |
+  | o  newcommit public
+  | |
+  o |  scratchcommit draft
+  |/
+  o  initialcommit public
+  
+Check that the push path is not ignored. Add a new path to the hgrc
+  $ cat >> .hg/hgrc << EOF
+  > [paths]
+  > peer=ssh://user@dummy/client2
+  > EOF
+
+Check out the last non-scratch commit
+  $ hg up 91894e11e8255
+  1 files updated, 0 files merged, 6 files removed, 0 files unresolved
+  $ mkcommit peercommit
+Use --force because this push creates a new head
+  $ hg push peer -r . -f
+  pushing to ssh://user@dummy/client2
+  searching for changes
+  remote: adding changesets
+  remote: adding manifests
+  remote: adding file changes
+  remote: added 2 changesets with 2 changes to 2 files (+1 heads)
+  $ hg -R ../repo log -G -T '{desc} {phase} {bookmarks}'
+  o  newcommit public
+  |
+  o  initialcommit public
+  
+  $ hg -R ../client2 log -G -T '{desc} {phase} {bookmarks}'
+  o  peercommit public
+  |
+  o  newcommit public
+  |
+  | @  new scratch commit draft scratch/anotherbranch scratch/mybranch
+  | |
+  | o  scratchcommit draft
+  |/
+  o  initialcommit public
+  
+  $ hg book --list-remote scratch/*
+     scratch/anotherbranch     1de1d7d92f8965260391d0513fe8a8d5973d3042
+     scratch/mybranch          8872775dd97a750e1533dc1fbbca665644b32547
+  $ hg book --list-remote
+  abort: --list-remote requires a bookmark pattern
+  (use "hg book" to get a list of your local bookmarks)
+  [255]
+  $ hg book --config infinitepush.defaultremotepatterns=scratch/another* --list-remote
+  abort: --list-remote requires a bookmark pattern
+  (use "hg book" to get a list of your local bookmarks)
+  [255]
+  $ hg book --list-remote scratch/my
+  $ hg book --list-remote scratch/my*
+     scratch/mybranch          8872775dd97a750e1533dc1fbbca665644b32547
+  $ hg book --list-remote scratch/my* -T json
+  [
+   {
+    "bookmark": "scratch/mybranch",
+    "node": "8872775dd97a750e1533dc1fbbca665644b32547"
+   }
+  ]
+  $ cd ../repo
+  $ hg book scratch/serversidebook
+  $ hg book serversidebook
+  $ cd ../client
+  $ hg book --list-remote scratch/* -T json
+  [
+   {
+    "bookmark": "scratch/anotherbranch",
+    "node": "1de1d7d92f8965260391d0513fe8a8d5973d3042"
+   },
+   {
+    "bookmark": "scratch/mybranch",
+    "node": "8872775dd97a750e1533dc1fbbca665644b32547"
+   },
+   {
+    "bookmark": "scratch/serversidebook",
+    "node": "0000000000000000000000000000000000000000"
+   }
+  ]
+
+Push to svn server should fail
+  $ hg push svn+ssh://svn.vip.facebook.com/svnroot/tfb/trunk/www -r . --to scratch/serversidebook
+  abort: infinite push does not work with svn repo
+  (did you forget to `hg push default`?)
+  [255]
diff --git a/tests/library-infinitepush.sh b/tests/library-infinitepush.sh
new file mode 100644
--- /dev/null
+++ b/tests/library-infinitepush.sh
@@ -0,0 +1,49 @@
+scratchnodes() {
+  for node in `find ../repo/.hg/scratchbranches/index/nodemap/* | sort`; do
+     echo ${node##*/} `cat $node`
+  done
+}
+
+scratchbookmarks() {
+  for bookmark in `find ../repo/.hg/scratchbranches/index/bookmarkmap/* -type f | sort`; do
+     echo "${bookmark##*/bookmarkmap/} `cat $bookmark`"
+  done
+}
+
+setupcommon() {
+  cat >> $HGRCPATH << EOF
+[extensions]
+infinitepush=
+[ui]
+ssh = python "$TESTDIR/dummyssh"
+[infinitepush]
+branchpattern=re:scratch/.*
+EOF
+}
+
+setupserver() {
+cat >> .hg/hgrc << EOF
+[infinitepush]
+server=yes
+indextype=disk
+storetype=disk
+reponame=babar
+EOF
+}
+
+waitbgbackup() {
+  sleep 1
+  hg debugwaitbackup
+}
+
+mkcommitautobackup() {
+    echo $1 > $1
+    hg add $1
+    hg ci -m $1 --config infinitepushbackup.autobackup=True
+}
+
+setuplogdir() {
+  mkdir $TESTTMP/logs
+  chmod 0755 $TESTTMP/logs
+  chmod +t $TESTTMP/logs
+}
diff --git a/hgext/infinitepush/store.py b/hgext/infinitepush/store.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/store.py
@@ -0,0 +1,155 @@
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# based on bundleheads extension by Gregory Szorc <gps@mozilla.com>
+
+from __future__ import absolute_import
+
+import abc
+import hashlib
+import os
+import subprocess
+import tempfile
+
+NamedTemporaryFile = tempfile.NamedTemporaryFile
+
+class BundleWriteException(Exception):
+    pass
+
+class BundleReadException(Exception):
+    pass
+
+class abstractbundlestore(object):
+    """Defines the interface for bundle stores.
+
+    A bundle store is an entity that stores raw bundle data. It is a simple
+    key-value store. However, the keys are chosen by the store. The keys can
+    be any Python object understood by the corresponding bundle index (see
+    ``abstractbundleindex`` below).
+    """
+    __metaclass__ = abc.ABCMeta
+
+    @abc.abstractmethod
+    def write(self, data):
+        """Write bundle data to the store.
+
+        This function receives the raw data to be written as a str.
+        Throws BundleWriteException
+        The key of the written data MUST be returned.
+        """
+
+    @abc.abstractmethod
+    def read(self, key):
+        """Obtain bundle data for a key.
+
+        Returns None if the bundle isn't known.
+        Throws BundleReadException
+        The returned object should be a file object supporting read()
+        and close().
+        """
+
+class filebundlestore(object):
+    """bundle store in filesystem
+
+    meant for storing bundles somewhere on disk and on network filesystems
+    """
+    def __init__(self, ui, repo):
+        self.ui = ui
+        self.repo = repo
+        self.storepath = ui.configpath('scratchbranch', 'storepath')
+        if not self.storepath:
+            self.storepath = self.repo.vfs.join("scratchbranches",
+                                                "filebundlestore")
+        if not os.path.exists(self.storepath):
+            os.makedirs(self.storepath)
+
+    def _dirpath(self, hashvalue):
+        """First two bytes of the hash are the name of the upper
+        level directory, next two bytes are the name of the
+        next level directory"""
+        return os.path.join(self.storepath, hashvalue[0:2], hashvalue[2:4])
+
+    def _filepath(self, filename):
+        return os.path.join(self._dirpath(filename), filename)
+
+    def write(self, data):
+        filename = hashlib.sha1(data).hexdigest()
+        dirpath = self._dirpath(filename)
+
+        if not os.path.exists(dirpath):
+            os.makedirs(dirpath)
+
+        with open(self._filepath(filename), 'w') as f:
+            f.write(data)
+
+        return filename
+
+    def read(self, key):
+        try:
+            f = open(self._filepath(key), 'r')
+        except IOError:
+            return None
+
+        return f.read()
+
+class externalbundlestore(abstractbundlestore):
+    def __init__(self, put_binary, put_args, get_binary, get_args):
+        """
+        `put_binary` - path to binary file which uploads bundle to external
+            storage and prints key to stdout
+        `put_args` - format string with additional args to `put_binary`
+                     {filename} replacement field can be used.
+        `get_binary` - path to binary file which accepts filename and key
+            (in that order), downloads bundle from store and saves it to file
+        `get_args` - format string with additional args to `get_binary`.
+                     {filename} and {handle} replacement field can be used.
+        """
+
+        self.put_args = put_args
+        self.get_args = get_args
+        self.put_binary = put_binary
+        self.get_binary = get_binary
+
+    def _call_binary(self, args):
+        p = subprocess.Popen(
+            args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+            close_fds=True)
+        stdout, stderr = p.communicate()
+        returncode = p.returncode
+        return returncode, stdout, stderr
+
+    def write(self, data):
+        # Won't work on windows because you can't open file second time without
+        # closing it
+        with NamedTemporaryFile() as temp:
+            temp.write(data)
+            temp.flush()
+            temp.seek(0)
+            formatted_args = [arg.format(filename=temp.name)
+                              for arg in self.put_args]
+            returncode, stdout, stderr = self._call_binary(
+                [self.put_binary] + formatted_args)
+
+            if returncode != 0:
+                raise BundleWriteException(
+                    'Failed to upload to external store: %s' % stderr)
+            stdout_lines = stdout.splitlines()
+            if len(stdout_lines) == 1:
+                return stdout_lines[0]
+            else:
+                raise BundleWriteException(
+                    'Bad output from %s: %s' % (self.put_binary, stdout))
+
+    def read(self, handle):
+        # Won't work on windows because you can't open file second time without
+        # closing it
+        with NamedTemporaryFile() as temp:
+            formatted_args = [arg.format(filename=temp.name, handle=handle)
+                              for arg in self.get_args]
+            returncode, stdout, stderr = self._call_binary(
+                [self.get_binary] + formatted_args)
+
+            if returncode != 0:
+                raise BundleReadException(
+                    'Failed to download from external store: %s' % stderr)
+            return temp.read()
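
The bundle store above is deliberately a plain key-value API: write() picks the
key and returns it, and read() returns None for unknown keys. As a rough
in-memory illustration of that contract (not part of this patch), a store could
look like:

  import hashlib

  class memorybundlestore(object):
      """Toy bundle store keeping bundles in a dict, for illustration only."""
      def __init__(self):
          self._bundles = {}

      def write(self, data):
          # the store chooses the key; mimic filebundlestore and use the sha1
          key = hashlib.sha1(data).hexdigest()
          self._bundles[key] = data
          return key

      def read(self, key):
          # None means the bundle is not known
          return self._bundles.get(key)
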
diff --git a/hgext/infinitepush/sqlindexapi.py b/hgext/infinitepush/sqlindexapi.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/sqlindexapi.py
@@ -0,0 +1,257 @@
+# Infinite push
+#
+# Copyright 2016 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+from __future__ import absolute_import
+
+import logging
+import os
+import time
+
+import warnings
+import mysql.connector
+
+from . import indexapi
+
+def _convertbookmarkpattern(pattern):
+    pattern = pattern.replace('_', '\\_')
+    pattern = pattern.replace('%', '\\%')
+    if pattern.endswith('*'):
+        pattern = pattern[:-1] + '%'
+    return pattern
+
+class sqlindexapi(indexapi.indexapi):
+    '''
+    Sql backend for infinitepush index. See schema.sql
+    '''
+
+    def __init__(self, reponame, host, port,
+                 database, user, password, logfile, loglevel,
+                 waittimeout=300, locktimeout=120):
+        super(sqlindexapi, self).__init__()
+        self.reponame = reponame
+        self.sqlargs = {
+            'host': host,
+            'port': port,
+            'database': database,
+            'user': user,
+            'password': password,
+        }
+        self.sqlconn = None
+        self.sqlcursor = None
+        if not logfile:
+            logfile = os.devnull
+        logging.basicConfig(filename=logfile)
+        self.log = logging.getLogger()
+        self.log.setLevel(loglevel)
+        self._connected = False
+        self._waittimeout = waittimeout
+        self._locktimeout = locktimeout
+
+    def sqlconnect(self):
+        if self.sqlconn:
+            raise indexapi.indexexception("SQL connection already open")
+        if self.sqlcursor:
+            raise indexapi.indexexception("SQL cursor already open without"
+                                          " connection")
+        retry = 3
+        while True:
+            try:
+                self.sqlconn = mysql.connector.connect(
+                    force_ipv6=True, **self.sqlargs)
+
+                # Code is copy-pasted from hgsql. Bug fixes need to be
+                # back-ported!
+                # The default behavior is to return byte arrays, when we
+                # need strings. This custom convert returns strings.
+                self.sqlconn.set_converter_class(CustomConverter)
+                self.sqlconn.autocommit = False
+                break
+            except mysql.connector.errors.Error:
+                # mysql can be flakey occasionally, so do some minimal
+                # retrying.
+                retry -= 1
+                if retry == 0:
+                    raise
+                time.sleep(0.2)
+
+        waittimeout = self.sqlconn.converter.escape('%s' % self._waittimeout)
+
+        self.sqlcursor = self.sqlconn.cursor()
+        self.sqlcursor.execute("SET wait_timeout=%s" % waittimeout)
+        self.sqlcursor.execute("SET innodb_lock_wait_timeout=%s" %
+                               self._locktimeout)
+        self._connected = True
+
+    def close(self):
+        """Cleans up the metadata store connection."""
+        with warnings.catch_warnings():
+            warnings.simplefilter("ignore")
+            self.sqlcursor.close()
+            self.sqlconn.close()
+        self.sqlcursor = None
+        self.sqlconn = None
+
+    def __enter__(self):
+        if not self._connected:
+            self.sqlconnect()
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        if exc_type is None:
+            self.sqlconn.commit()
+        else:
+            self.sqlconn.rollback()
+
+    def addbundle(self, bundleid, nodesctx):
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info("ADD BUNDLE %r %r" % (self.reponame, bundleid))
+        self.sqlcursor.execute(
+            "INSERT INTO bundles(bundle, reponame) VALUES "
+            "(%s, %s)", params=(bundleid, self.reponame))
+        for ctx in nodesctx:
+            self.sqlcursor.execute(
+                "INSERT INTO nodestobundle(node, bundle, reponame) "
+                "VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE "
+                "bundle=VALUES(bundle)",
+                params=(ctx.hex(), bundleid, self.reponame))
+
+            extra = ctx.extra()
+            author_name = ctx.user()
+            committer_name = extra.get('committer', ctx.user())
+            author_date = int(ctx.date()[0])
+            committer_date = int(extra.get('committer_date', author_date))
+            self.sqlcursor.execute(
+                "INSERT IGNORE INTO nodesmetadata(node, message, p1, p2, "
+                "author, committer, author_date, committer_date, "
+                "reponame) VALUES "
+                "(%s, %s, %s, %s, %s, %s, %s, %s, %s)",
+                params=(ctx.hex(), ctx.description(),
+                        ctx.p1().hex(), ctx.p2().hex(), author_name,
+                        committer_name, author_date, committer_date,
+                        self.reponame)
+            )
+
+    def addbookmark(self, bookmark, node):
+        """Takes a bookmark name and hash, and records mapping in the metadata
+        store."""
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info(
+            "ADD BOOKMARKS %r bookmark: %r node: %r" %
+            (self.reponame, bookmark, node))
+        self.sqlcursor.execute(
+            "INSERT INTO bookmarkstonode(bookmark, node, reponame) "
+            "VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE node=VALUES(node)",
+            params=(bookmark, node, self.reponame))
+
+    def addmanybookmarks(self, bookmarks):
+        if not self._connected:
+            self.sqlconnect()
+        args = []
+        values = []
+        for bookmark, node in bookmarks.iteritems():
+            args.append('(%s, %s, %s)')
+            values.extend((bookmark, node, self.reponame))
+        args = ','.join(args)
+
+        self.sqlcursor.execute(
+            "INSERT INTO bookmarkstonode(bookmark, node, reponame) "
+            "VALUES %s ON DUPLICATE KEY UPDATE node=VALUES(node)" % args,
+            params=values)
+
+    def deletebookmarks(self, patterns):
+        """Accepts list of bookmark patterns and deletes them.
+        """
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info("DELETE BOOKMARKS: %s" % patterns)
+        for pattern in patterns:
+            pattern = _convertbookmarkpattern(pattern)
+            self.sqlcursor.execute(
+                "DELETE from bookmarkstonode WHERE bookmark LIKE (%s) "
+                "and reponame = %s",
+                params=(pattern, self.reponame))
+
+    def getbundle(self, node):
+        """Returns the bundleid for the bundle that contains the given node."""
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info("GET BUNDLE %r %r" % (self.reponame, node))
+        self.sqlcursor.execute(
+            "SELECT bundle from nodestobundle "
+            "WHERE node = %s AND reponame = %s", params=(node, self.reponame))
+        result = self.sqlcursor.fetchall()
+        if len(result) != 1 or len(result[0]) != 1:
+            self.log.info("No matching node")
+            return None
+        bundle = result[0][0]
+        self.log.info("Found bundle %r" % bundle)
+        return bundle
+
+    def getnode(self, bookmark):
+        """Returns the node for the given bookmark. None if it doesn't exist."""
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info(
+            "GET NODE reponame: %r bookmark: %r" % (self.reponame, bookmark))
+        self.sqlcursor.execute(
+            "SELECT node from bookmarkstonode WHERE "
+            "bookmark = %s AND reponame = %s", params=(bookmark, self.reponame))
+        result = self.sqlcursor.fetchall()
+        if len(result) != 1 or len(result[0]) != 1:
+            self.log.info("No matching bookmark")
+            return None
+        node = result[0][0]
+        self.log.info("Found node %r" % node)
+        return node
+
+    def getbookmarks(self, query):
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info(
+            "QUERY BOOKMARKS reponame: %r query: %r" % (self.reponame, query))
+        query = _convertbookmarkpattern(query)
+        self.sqlcursor.execute(
+            "SELECT bookmark, node from bookmarkstonode WHERE "
+            "reponame = %s AND bookmark LIKE %s",
+            params=(self.reponame, query))
+        result = self.sqlcursor.fetchall()
+        bookmarks = {}
+        for row in result:
+            if len(row) != 2:
+                self.log.info("Bad row returned: %s" % row)
+                continue
+            bookmarks[row[0]] = row[1]
+        return bookmarks
+
+    def saveoptionaljsonmetadata(self, node, jsonmetadata):
+        if not self._connected:
+            self.sqlconnect()
+        self.log.info(
+            ("INSERT METADATA, QUERY BOOKMARKS reponame: %r " +
+             "node: %r, jsonmetadata: %s") %
+            (self.reponame, node, jsonmetadata))
+
+        self.sqlcursor.execute(
+            "UPDATE nodesmetadata SET optional_json_metadata=%s WHERE "
+            "reponame=%s AND node=%s",
+            params=(jsonmetadata, self.reponame, node))
+
+class CustomConverter(mysql.connector.conversion.MySQLConverter):
+    """Ensure that all values being returned are returned as python string
+    (versus the default byte arrays)."""
+    def _STRING_to_python(self, value, dsc=None):
+        return str(value)
+
+    def _VAR_STRING_to_python(self, value, dsc=None):
+        return str(value)
+
+    def _BLOB_to_python(self, value, dsc=None):
+        return str(value)
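
For reference, _convertbookmarkpattern above only escapes the LIKE
metacharacters and turns a trailing '*' into a SQL wildcard. Illustrative
examples (importing the module requires mysql-connector, as seen above):

  from hgext.infinitepush.sqlindexapi import _convertbookmarkpattern

  print(_convertbookmarkpattern('scratch/mybranch'))    # scratch/mybranch (exact LIKE match)
  print(_convertbookmarkpattern('scratch/my*'))         # scratch/my% (prefix match)
  print(_convertbookmarkpattern('scratch/my_branch*'))  # scratch/my\_branch% ('_' escaped)
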
diff --git a/hgext/infinitepush/schema.sql b/hgext/infinitepush/schema.sql
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/schema.sql
@@ -0,0 +1,33 @@
+CREATE TABLE `bookmarkstonode` (
+  `node` varbinary(64) NOT NULL,
+  `bookmark` varbinary(512) NOT NULL,
+  `reponame` varbinary(255) NOT NULL,
+  PRIMARY KEY (`reponame`,`bookmark`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+CREATE TABLE `bundles` (
+  `bundle` varbinary(512) NOT NULL,
+  `reponame` varbinary(255) NOT NULL,
+  PRIMARY KEY (`bundle`,`reponame`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+CREATE TABLE `nodestobundle` (
+  `node` varbinary(64) NOT NULL,
+  `bundle` varbinary(512) NOT NULL,
+  `reponame` varbinary(255) NOT NULL,
+  PRIMARY KEY (`node`,`reponame`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
+
+CREATE TABLE `nodesmetadata` (
+  `node` varbinary(64) NOT NULL,
+  `message` mediumblob NOT NULL,
+  `p1` varbinary(64) NOT NULL,
+  `p2` varbinary(64) DEFAULT NULL,
+  `author` varbinary(255) NOT NULL,
+  `committer` varbinary(255) DEFAULT NULL,
+  `author_date` bigint(20) NOT NULL,
+  `committer_date` bigint(20) DEFAULT NULL,
+  `reponame` varbinary(255) NOT NULL,
+  `optional_json_metadata` mediumblob,
+  PRIMARY KEY (`reponame`,`node`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8;
diff --git a/hgext/infinitepush/infinitepushcommands.py b/hgext/infinitepush/infinitepushcommands.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/infinitepushcommands.py
@@ -0,0 +1,102 @@
+# Copyright 2016 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+"""
+config::
+
+    [infinitepush]
+    # limit number of files in the node metadata. This is to make sure we don't
+    # waste too much space on huge codemod commits.
+    metadatafilelimit = 100
+"""
+
+from __future__ import absolute_import
+
+import json
+
+from mercurial.node import bin
+from mercurial.i18n import _
+
+from mercurial import (
+    copies as copiesmod,
+    encoding,
+    error,
+    hg,
+    patch,
+    registrar,
+    scmutil,
+    util,
+)
+
+from . import (
+    backupcommands,
+    common,
+)
+
+downloadbundle = common.downloadbundle
+
+cmdtable = backupcommands.cmdtable
+command = registrar.command(cmdtable)
+
+@command('debugfillinfinitepushmetadata',
+         [('', 'node', [], 'node to fill metadata for')])
+def debugfillinfinitepushmetadata(ui, repo, **opts):
+    '''Special command that fills infinitepush metadata for a node
+    '''
+
+    nodes = opts['node']
+    if not nodes:
+        raise error.Abort(_('nodes are not specified'))
+
+    filelimit = ui.configint('infinitepush', 'metadatafilelimit', 100)
+    nodesmetadata = {}
+    for node in nodes:
+        index = repo.bundlestore.index
+        if not bool(index.getbundle(node)):
+            raise error.Abort(_('node %s is not found') % node)
+
+        if node not in repo:
+            newbundlefile = downloadbundle(repo, bin(node))
+            bundlepath = "bundle:%s+%s" % (repo.root, newbundlefile)
+            bundlerepo = hg.repository(ui, bundlepath)
+            repo = bundlerepo
+
+        p1 = repo[node].p1().node()
+        diffopts = patch.diffallopts(ui, {})
+        match = scmutil.matchall(repo)
+        chunks = patch.diff(repo, p1, node, match, None, diffopts, relroot='')
+        difflines = util.iterlines(chunks)
+
+        states = 'modified added removed deleted unknown ignored clean'.split()
+        status = repo.status(p1, node)
+        status = zip(states, status)
+
+        filestatus = {}
+        for state, files in status:
+            for f in files:
+                filestatus[f] = state
+
+        diffstat = patch.diffstatdata(difflines)
+        changed_files = {}
+        copies = copiesmod.pathcopies(repo[p1], repo[node])
+        for filename, adds, removes, isbinary in diffstat[:filelimit]:
+            # use special encoding that allows non-utf8 filenames
+            filename = encoding.jsonescape(filename, paranoid=True)
+            changed_files[filename] = {
+                'adds': adds, 'removes': removes, 'isbinary': isbinary,
+                'status': filestatus.get(filename, 'unknown')
+            }
+            if filename in copies:
+                changed_files[filename]['copies'] = copies[filename]
+
+        output = {}
+        output['changed_files'] = changed_files
+        if len(diffstat) > filelimit:
+            output['changed_files_truncated'] = True
+        nodesmetadata[node] = output
+
+    with index:
+        for node, metadata in nodesmetadata.iteritems():
+            dumped = json.dumps(metadata, sort_keys=True)
+            index.saveoptionaljsonmetadata(node, dumped)
diff --git a/hgext/infinitepush/indexapi.py b/hgext/infinitepush/indexapi.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/indexapi.py
@@ -0,0 +1,70 @@
+# Infinite push
+#
+# Copyright 2016 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+from __future__ import absolute_import
+
+class indexapi(object):
+    """Class that manages access to infinitepush index.
+
+    This class is a context manager and all write operations (like
+    deletebookmarks, addbookmark etc) should use `with` statement:
+
+      with index:
+          index.deletebookmarks(...)
+          ...
+    """
+
+    def __init__(self):
+        """Initializes the metadata store connection."""
+
+    def close(self):
+        """Cleans up the metadata store connection."""
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        pass
+
+    def addbundle(self, bundleid, nodesctx):
+        """Takes a bundleid and a list of node contexts for each node
+        in that bundle and records that."""
+        raise NotImplementedError()
+
+    def addbookmark(self, bookmark, node):
+        """Takes a bookmark name and hash, and records mapping in the metadata
+        store."""
+        raise NotImplementedError()
+
+    def addmanybookmarks(self, bookmarks):
+        """Takes a dict with mapping from bookmark to hash and records mapping
+        in the metadata store."""
+        raise NotImplementedError()
+
+    def deletebookmarks(self, patterns):
+        """Accepts list of bookmarks and deletes them.
+        """
+        raise NotImplementedError()
+
+    def getbundle(self, node):
+        """Returns the bundleid for the bundle that contains the given node."""
+        raise NotImplementedError()
+
+    def getnode(self, bookmark):
+        """Returns the node for the given bookmark. None if it doesn't exist."""
+        raise NotImplementedError()
+
+    def getbookmarks(self, query):
+        """Returns bookmarks that match the query"""
+        raise NotImplementedError()
+
+    def saveoptionaljsonmetadata(self, node, jsonmetadata):
+        """Saves optional metadata for a given node"""
+        raise NotImplementedError()
+
+class indexexception(Exception):
+    pass
diff --git a/hgext/infinitepush/fileindexapi.py b/hgext/infinitepush/fileindexapi.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/fileindexapi.py
@@ -0,0 +1,107 @@
+# Infinite push
+#
+# Copyright 2016 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+"""
+    [infinitepush]
+    # Server-side option. Used only if indextype=disk.
+    # Filesystem path to the index store
+    indexpath = PATH
+"""
+
+from __future__ import absolute_import
+
+import os
+
+from mercurial import util
+
+from . import indexapi
+
+class fileindexapi(indexapi.indexapi):
+    def __init__(self, repo):
+        super(fileindexapi, self).__init__()
+        self._repo = repo
+        root = repo.ui.config('infinitepush', 'indexpath')
+        if not root:
+            root = os.path.join('scratchbranches', 'index')
+
+        self._nodemap = os.path.join(root, 'nodemap')
+        self._bookmarkmap = os.path.join(root, 'bookmarkmap')
+        self._metadatamap = os.path.join(root, 'nodemetadatamap')
+        self._lock = None
+
+    def __enter__(self):
+        self._lock = self._repo.wlock()
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        if self._lock:
+            self._lock.__exit__(exc_type, exc_val, exc_tb)
+
+    def addbundle(self, bundleid, nodesctx):
+        for node in nodesctx:
+            nodepath = os.path.join(self._nodemap, node.hex())
+            self._write(nodepath, bundleid)
+
+    def addbookmark(self, bookmark, node):
+        bookmarkpath = os.path.join(self._bookmarkmap, bookmark)
+        self._write(bookmarkpath, node)
+
+    def addmanybookmarks(self, bookmarks):
+        for bookmark, node in bookmarks.items():
+            self.addbookmark(bookmark, node)
+
+    def deletebookmarks(self, patterns):
+        for pattern in patterns:
+            for bookmark, _ in self._listbookmarks(pattern):
+                bookmarkpath = os.path.join(self._bookmarkmap, bookmark)
+                self._delete(bookmarkpath)
+
+    def getbundle(self, node):
+        nodepath = os.path.join(self._nodemap, node)
+        return self._read(nodepath)
+
+    def getnode(self, bookmark):
+        bookmarkpath = os.path.join(self._bookmarkmap, bookmark)
+        return self._read(bookmarkpath)
+
+    def getbookmarks(self, query):
+        return dict(self._listbookmarks(query))
+
+    def saveoptionaljsonmetadata(self, node, jsonmetadata):
+        vfs = self._repo.vfs
+        vfs.write(os.path.join(self._metadatamap, node), jsonmetadata)
+
+    def _listbookmarks(self, pattern):
+        if pattern.endswith('*'):
+            pattern = 're:^' + pattern[:-1] + '.*'
+        kind, pat, matcher = util.stringmatcher(pattern)
+        prefixlen = len(self._bookmarkmap) + 1
+        for dirpath, _, books in self._repo.vfs.walk(self._bookmarkmap):
+            for book in books:
+                bookmark = os.path.join(dirpath, book)[prefixlen:]
+                if not matcher(bookmark):
+                    continue
+                yield bookmark, self._read(os.path.join(dirpath, book))
+
+    def _write(self, path, value):
+        vfs = self._repo.vfs
+        dirname = vfs.dirname(path)
+        if not vfs.exists(dirname):
+            vfs.makedirs(dirname)
+
+        vfs.write(path, value)
+
+    def _read(self, path):
+        vfs = self._repo.vfs
+        if not vfs.exists(path):
+            return None
+        return vfs.read(path)
+
+    def _delete(self, path):
+        vfs = self._repo.vfs
+        if not vfs.exists(path):
+            return
+        return vfs.unlink(path)
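
The file index answers bookmark queries such as `hg book --list-remote
scratch/my*` by turning the trailing '*' into a regular expression before
walking bookmarkmap. A standalone sketch of roughly the same matching (the
helper name here is hypothetical, not part of the patch):

  import re

  def matchscratchbookmarks(pattern, bookmarks):
      # mirrors fileindexapi._listbookmarks: a trailing '*' becomes a regex
      # prefix match, anything else is treated as an exact bookmark name
      if pattern.endswith('*'):
          regex = re.compile('^' + re.escape(pattern[:-1]) + '.*')
          return [b for b in bookmarks if regex.match(b)]
      return [b for b in bookmarks if b == pattern]

  print(matchscratchbookmarks('scratch/my*',
                              ['scratch/mybranch', 'scratch/anotherbranch']))
  # ['scratch/mybranch']
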
diff --git a/hgext/infinitepush/common.py b/hgext/infinitepush/common.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/common.py
@@ -0,0 +1,58 @@
+# Copyright 2017 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+from __future__ import absolute_import
+
+import json
+import os
+import struct
+import tempfile
+
+from mercurial.node import hex
+
+from mercurial import (
+    error,
+    extensions,
+)
+
+def isremotebooksenabled(ui):
+    return ('remotenames' in extensions._extensions and
+            ui.configbool('remotenames', 'bookmarks'))
+
+def encodebookmarks(bookmarks):
+    encoded = {}
+    for bookmark, node in bookmarks.iteritems():
+        encoded[bookmark] = node
+    dumped = json.dumps(encoded)
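+    # the encoded payload is a 4-byte big-endian length prefix followed by
+    # the JSON-encoded bookmark -> node mapping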
+    result = struct.pack('>i', len(dumped)) + dumped
+    return result
+
+def downloadbundle(repo, unknownbinhead):
+    index = repo.bundlestore.index
+    store = repo.bundlestore.store
+    bundleid = index.getbundle(hex(unknownbinhead))
+    if bundleid is None:
+        raise error.Abort('%s head is not known' % hex(unknownbinhead))
+    bundleraw = store.read(bundleid)
+    return _makebundlefromraw(bundleraw)
+
+def _makebundlefromraw(data):
+    fp = None
+    fd, bundlefile = tempfile.mkstemp()
+    try:  # guards bundlefile
+        try:  # guards fp
+            fp = os.fdopen(fd, 'wb')
+            fp.write(data)
+        finally:
+            if fp is not None:
+                fp.close()
+    except Exception:
+        try:
+            os.unlink(bundlefile)
+        except Exception:
+            # we would rather see the original exception
+            pass
+        raise
+
+    return bundlefile
diff --git a/hgext/infinitepush/bundleparts.py b/hgext/infinitepush/bundleparts.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/bundleparts.py
@@ -0,0 +1,143 @@
+# Copyright 2017 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+from __future__ import absolute_import
+
+from mercurial.i18n import _
+
+from mercurial import (
+    bundle2,
+    changegroup,
+    error,
+    extensions,
+    revsetlang,
+    util,
+)
+
+from . import common
+
+encodebookmarks = common.encodebookmarks
+isremotebooksenabled = common.isremotebooksenabled
+
+scratchbranchparttype = 'b2x:infinitepush'
+scratchbookmarksparttype = 'b2x:infinitepushscratchbookmarks'
+
+def getscratchbranchparts(repo, peer, outgoing, confignonforwardmove,
+                         ui, bookmark, create):
+    if not outgoing.missing:
+        raise error.Abort(_('no commits to push'))
+
+    if scratchbranchparttype not in bundle2.bundle2caps(peer):
+        raise error.Abort(_('no server support for %r') % scratchbranchparttype)
+
+    _validaterevset(repo, revsetlang.formatspec('%ln', outgoing.missing),
+                    bookmark)
+
+    supportedversions = changegroup.supportedoutgoingversions(repo)
+    # Explicitly avoid using '01' changegroup version in infinitepush to
+    # support general delta
+    supportedversions.discard('01')
+    cgversion = min(supportedversions)
+    _handlelfs(repo, outgoing.missing)
+    cg = changegroup.makestream(repo, outgoing, cgversion, 'push')
+
+    params = {}
+    params['cgversion'] = cgversion
+    if bookmark:
+        params['bookmark'] = bookmark
+        # 'bookprevnode' is necessary for the pushkey reply part
+        params['bookprevnode'] = ''
+        if bookmark in repo:
+            params['bookprevnode'] = repo[bookmark].hex()
+        if create:
+            params['create'] = '1'
+    if confignonforwardmove:
+        params['force'] = '1'
+
+    # Do not send pushback bundle2 part with bookmarks if remotenames extension
+    # is enabled. It will be handled manually in `_push()`
+    if not isremotebooksenabled(ui):
+        params['pushbackbookmarks'] = '1'
+
+    parts = []
+
+    # .upper() marks this as a mandatory part: server will abort if there's no
+    #  handler
+    parts.append(bundle2.bundlepart(
+        scratchbranchparttype.upper(),
+        advisoryparams=params.iteritems(),
+        data=cg))
+
+    try:
+        treemod = extensions.find('treemanifest')
+        mfnodes = []
+        for node in outgoing.missing:
+            mfnodes.append(('', repo[node].manifestnode()))
+
+        # Only include the tree parts if they all exist
+        if not repo.manifestlog.datastore.getmissing(mfnodes):
+            parts.append(treemod.createtreepackpart(
+                repo, outgoing, treemod.TREEGROUP_PARTTYPE2))
+    except KeyError:
+        pass
+
+    return parts
+
+def getscratchbookmarkspart(peer, bookmarks):
+    if scratchbookmarksparttype not in bundle2.bundle2caps(peer):
+        raise error.Abort(
+            _('no server support for %r') % scratchbookmarksparttype)
+
+    return bundle2.bundlepart(
+        scratchbookmarksparttype.upper(),
+        data=encodebookmarks(bookmarks))
+
+def _validaterevset(repo, revset, bookmark):
+    """Abort if the revs to be pushed aren't valid for a scratch branch."""
+    if not repo.revs(revset):
+        raise error.Abort(_('nothing to push'))
+    if bookmark:
+        # Allow bundle with many heads only if no bookmark is specified
+        heads = repo.revs('heads(%r)', revset)
+        if len(heads) > 1:
+            raise error.Abort(
+                _('cannot push more than one head to a scratch branch'))
+
+def _handlelfs(repo, missing):
+    '''Special case if lfs is enabled
+
+    If lfs is enabled then we need to call the prepush hook
+    to make sure large files are uploaded to the lfs server
+    '''
+    try:
+        lfsmod = extensions.find('lfs')
+        lfsmod.wrapper.uploadblobsfromrevs(repo, missing)
+    except KeyError:
+        # Ignore if lfs extension is not enabled
+        return
+
+class copiedpart(object):
+    """a copy of unbundlepart content that can be consumed later"""
+
+    def __init__(self, part):
+        # copy "public properties"
+        self.type = part.type
+        self.id = part.id
+        self.mandatory = part.mandatory
+        self.mandatoryparams = part.mandatoryparams
+        self.advisoryparams = part.advisoryparams
+        self.params = part.params
+        self.mandatorykeys = part.mandatorykeys
+        # copy the buffer
+        self._io = util.stringio(part.read())
+
+    def consume(self):
+        return
+
+    def read(self, size=None):
+        if size is None:
+            return self._io.read()
+        else:
+            return self._io.read(size)
diff --git a/hgext/infinitepush/backupcommands.py b/hgext/infinitepush/backupcommands.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/backupcommands.py
@@ -0,0 +1,992 @@
+# Copyright 2017 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+"""
+    [infinitepushbackup]
+    # Whether to enable automatic backups. If this option is True then a backup
+    # process will be started after every mercurial command that modifies the
+    # repo, for example, commit, amend, histedit, rebase etc.
+    autobackup = False
+
+    # path to the directory where backup logs should be stored
+    logdir = path/to/dir
+
+    # Back up at most maxheadstobackup heads; other heads are ignored.
+    # A negative number means back up everything.
+    maxheadstobackup = -1
+
+    # Nodes that should not be backed up. Descendants of these nodes won't be
+    # backed up either
+    dontbackupnodes = []
+
+    # Special option that may be used to trigger a full re-backup. For example,
+    # if there was a bug in infinitepush backups, then changing the value of
+    # this option will force all clients to make a "clean" backup
+    backupgeneration = 0
+
+    # Hostname value to use. If not specified then socket.gethostname() will
+    # be used
+    hostname = ''
+
+    # Enable reporting of infinitepush backup status as a summary at the end
+    # of smartlog.
+    enablestatus = False
+
+    # Whether or not to save information about the latest successful backup.
+    # This information includes the local revision number and unix timestamp
+    # of the last time we successfully made a backup.
+    savelatestbackupinfo = False
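+
+    # An illustrative (not prescriptive) client configuration that enables
+    # automatic backups and keeps backup logs under /tmp could look like:
+    #
+    #   [infinitepushbackup]
+    #   autobackup = True
+    #   logdir = /tmp/infinitepushbackup-logs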
+"""
+
+from __future__ import absolute_import
+
+import collections
+import errno
+import json
+import os
+import re
+import socket
+import stat
+import subprocess
+import time
+
+from mercurial.node import (
+    bin,
+    hex,
+    nullrev,
+    short,
+)
+
+from mercurial.i18n import _
+
+from mercurial import (
+    bundle2,
+    changegroup,
+    commands,
+    discovery,
+    dispatch,
+    encoding,
+    error,
+    extensions,
+    hg,
+    localrepo,
+    lock as lockmod,
+    phases,
+    policy,
+    registrar,
+    scmutil,
+    util,
+)
+
+from . import bundleparts
+
+getscratchbookmarkspart = bundleparts.getscratchbookmarkspart
+getscratchbranchparts = bundleparts.getscratchbranchparts
+
+from hgext3rd import shareutil
+
+osutil = policy.importmod(r'osutil')
+
+cmdtable = {}
+command = registrar.command(cmdtable)
+revsetpredicate = registrar.revsetpredicate()
+templatekeyword = registrar.templatekeyword()
+
+backupbookmarktuple = collections.namedtuple('backupbookmarktuple',
+                                 ['hostname', 'reporoot', 'localbookmark'])
+
+class backupstate(object):
+    def __init__(self):
+        self.heads = set()
+        self.localbookmarks = {}
+
+    def empty(self):
+        return not self.heads and not self.localbookmarks
+
+class WrongPermissionsException(Exception):
+    def __init__(self, logdir):
+        self.logdir = logdir
+
+restoreoptions = [
+     ('', 'reporoot', '', 'root of the repo to restore'),
+     ('', 'user', '', 'user who ran the backup'),
+     ('', 'hostname', '', 'hostname of the repo to restore'),
+]
+
+_backuplockname = 'infinitepushbackup.lock'
+
+def extsetup(ui):
+    if ui.configbool('infinitepushbackup', 'autobackup', False):
+        extensions.wrapfunction(dispatch, 'runcommand',
+                                _autobackupruncommandwrapper)
+        extensions.wrapfunction(localrepo.localrepository, 'transaction',
+                                _transaction)
+
+@command('pushbackup',
+         [('', 'background', None, 'run backup in background')])
+def backup(ui, repo, dest=None, **opts):
+    """
+    Pushes commits, bookmarks and heads to infinitepush.
+    New non-extinct commits are saved since the last `hg pushbackup`
+    or since revision 0 if this is the first backup.
+    Local bookmarks are saved remotely as:
+        infinitepush/backups/USERNAME/HOST/REPOROOT/bookmarks/LOCAL_BOOKMARK
+    Local heads are saved remotely as:
+        infinitepush/backups/USERNAME/HOST/REPOROOT/heads/HEAD_HASH
+    """
+
+    if opts.get('background'):
+        _dobackgroundbackup(ui, repo, dest)
+        return 0
+
+    try:
+        # Wait at most 30 seconds, because that's the average backup time
+        timeout = 30
+        srcrepo = shareutil.getsrcrepo(repo)
+        with lockmod.lock(srcrepo.vfs, _backuplockname, timeout=timeout):
+            return _dobackup(ui, repo, dest, **opts)
+    except error.LockHeld as e:
+        if e.errno == errno.ETIMEDOUT:
+            ui.warn(_('timeout waiting on backup lock\n'))
+            return 0
+        else:
+            raise
+
+@command('pullbackup', restoreoptions)
+def restore(ui, repo, dest=None, **opts):
+    """
+    Pulls commits from infinitepush that were previously saved with
+    `hg pushbackup`.
+    If the user has only one backup for the `dest` repo then it will be
+    restored. But the user may have backed up many local repos that point to
+    the `dest` repo. These local repos may reside on different hosts or in
+    different repo roots, which makes the restore ambiguous; the `--reporoot`
+    and `--hostname` options are used to disambiguate.
+    """
+
+    other = _getremote(repo, ui, dest, **opts)
+
+    sourcereporoot = opts.get('reporoot')
+    sourcehostname = opts.get('hostname')
+    namingmgr = BackupBookmarkNamingManager(ui, repo, opts.get('user'))
+    allbackupstates = _downloadbackupstate(ui, other, sourcereporoot,
+                                           sourcehostname, namingmgr)
+    if len(allbackupstates) == 0:
+        ui.warn(_('no backups found!'))
+        return 1
+    _checkbackupstates(allbackupstates)
+
+    __, backupstate = allbackupstates.popitem()
+    pullcmd, pullopts = _getcommandandoptions('^pull')
+    # pull backed-up heads and the nodes that bookmarks point to
+    pullopts['rev'] = list(backupstate.heads |
+                           set(backupstate.localbookmarks.values()))
+    if dest:
+        pullopts['source'] = dest
+    result = pullcmd(ui, repo, **pullopts)
+
+    with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
+        changes = []
+        for book, hexnode in backupstate.localbookmarks.iteritems():
+            if hexnode in repo:
+                changes.append((book, bin(hexnode)))
+            else:
+                ui.warn(_('%s not found, not creating %s bookmark') %
+                        (hexnode, book))
+        repo._bookmarks.applychanges(repo, tr, changes)
+
+    # manually write local backup state and flag to not autobackup
+    # just after we restored, which would be pointless
+    _writelocalbackupstate(repo.vfs,
+                           list(backupstate.heads),
+                           backupstate.localbookmarks)
+    repo.ignoreautobackup = True
+
+    return result
+
+@command('getavailablebackups',
+    [('', 'user', '', _('username, defaults to current user')),
+     ('', 'json', None, _('print available backups in json format'))])
+def getavailablebackups(ui, repo, dest=None, **opts):
+    other = _getremote(repo, ui, dest, **opts)
+
+    sourcereporoot = opts.get('reporoot')
+    sourcehostname = opts.get('hostname')
+
+    namingmgr = BackupBookmarkNamingManager(ui, repo, opts.get('user'))
+    allbackupstates = _downloadbackupstate(ui, other, sourcereporoot,
+                                           sourcehostname, namingmgr)
+
+    if opts.get('json'):
+        jsondict = collections.defaultdict(list)
+        for hostname, reporoot in allbackupstates.keys():
+            jsondict[hostname].append(reporoot)
+            # make sure the output is sorted. That's not an efficient way to
+            # keep the list sorted, but we don't have that many backups.
+            jsondict[hostname].sort()
+        ui.write('%s\n' % json.dumps(jsondict))
+    else:
+        if not allbackupstates:
+            ui.write(_('no backups available for %s\n') % namingmgr.username)
+
+        ui.write(_('user %s has %d available backups:\n') %
+                 (namingmgr.username, len(allbackupstates)))
+
+        for hostname, reporoot in sorted(allbackupstates.keys()):
+            ui.write(_('%s on %s\n') % (reporoot, hostname))
+
+@command('debugcheckbackup',
+         [('', 'all', None, _('check all backups that the user has')),
+         ] + restoreoptions)
+def checkbackup(ui, repo, dest=None, **opts):
+    """
+    Checks that all the nodes that the backup needs are available in the
+    bundlestore. This command can check either a specific backup (see
+    restoreoptions) or all backups for the user.
+    """
+
+    sourcereporoot = opts.get('reporoot')
+    sourcehostname = opts.get('hostname')
+
+    other = _getremote(repo, ui, dest, **opts)
+    namingmgr = BackupBookmarkNamingManager(ui, repo, opts.get('user'))
+    allbackupstates = _downloadbackupstate(ui, other, sourcereporoot,
+                                           sourcehostname, namingmgr)
+    if not opts.get('all'):
+        _checkbackupstates(allbackupstates)
+
+    ret = 0
+    while allbackupstates:
+        key, bkpstate = allbackupstates.popitem()
+        ui.status(_('checking %s on %s\n') % (key[1], key[0]))
+        if not _dobackupcheck(bkpstate, ui, repo, dest, **opts):
+            ret = 255
+    return ret
+
+@command('debugwaitbackup', [('', 'timeout', '', 'timeout value')])
+def waitbackup(ui, repo, timeout):
+    try:
+        if timeout:
+            timeout = int(timeout)
+        else:
+            timeout = -1
+    except ValueError:
+        raise error.Abort('timeout should be integer')
+
+    try:
+        repo = shareutil.getsrcrepo(repo)
+        with lockmod.lock(repo.vfs, _backuplockname, timeout=timeout):
+            pass
+    except error.LockHeld as e:
+        if e.errno == errno.ETIMEDOUT:
+            raise error.Abort(_('timeout while waiting for backup'))
+        raise
+
+@command('isbackedup',
+     [('r', 'rev', [], _('show the specified revision or revset'), _('REV'))])
+def isbackedup(ui, repo, **opts):
+    """checks if commit was backed up to infinitepush
+
+    If no revision is specified then it checks the working copy parent.
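+
+    Illustrative usage (any revset accepted by -r works):
+
+        hg isbackedup
+        hg isbackedup -r 'draft()'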
+    """
+
+    revs = opts.get('rev')
+    if not revs:
+        revs = ['.']
+    bkpstate = _readlocalbackupstate(ui, repo)
+    unfi = repo.unfiltered()
+    backeduprevs = unfi.revs('draft() and ::%ls', bkpstate.heads)
+    for r in scmutil.revrange(unfi, revs):
+        ui.write(_(unfi[r].hex() + ' '))
+        ui.write(_('backed up' if r in backeduprevs else 'not backed up'))
+        ui.write(_('\n'))
+
+@revsetpredicate('backedup')
+def backedup(repo, subset, x):
+    """Draft changesets that have been backed up by infinitepush"""
+    unfi = repo.unfiltered()
+    bkpstate = _readlocalbackupstate(repo.ui, repo)
+    return subset & unfi.revs('draft() and ::%ls and not hidden()',
+                              bkpstate.heads)
+
+@revsetpredicate('notbackedup')
+def notbackedup(repo, subset, x):
+    """Changesets that have not yet been backed up by infinitepush"""
+    bkpstate = _readlocalbackupstate(repo.ui, repo)
+    bkpheads = set(bkpstate.heads)
+    candidates = set(_backupheads(repo.ui, repo))
+    notbackeduprevs = set()
+    # Find all revisions that are ancestors of the expected backup heads,
+    # stopping when we reach either a public commit or a known backup head.
+    while candidates:
+        candidate = candidates.pop()
+        if candidate not in bkpheads:
+            ctx = repo[candidate]
+            rev = ctx.rev()
+            if rev not in notbackeduprevs and ctx.phase() != phases.public:
+                # This rev may not have been backed up.  Record it, and add its
+                # parents as candidates.
+                notbackeduprevs.add(rev)
+                candidates.update([p.hex() for p in ctx.parents()])
+    if notbackeduprevs:
+        # Some revisions in this set may actually have been backed up by
+        # virtue of being an ancestor of a different backup head, which may
+        # have been hidden since the backup was made.  Find these and remove
+        # them from the set.
+        unfi = repo.unfiltered()
+        candidates = bkpheads
+        while candidates:
+            candidate = candidates.pop()
+            if candidate in unfi:
+                ctx = unfi[candidate]
+                if ctx.phase() != phases.public:
+                    notbackeduprevs.discard(ctx.rev())
+                    candidates.update([p.hex() for p in ctx.parents()])
+    return subset & notbackeduprevs
+
+@templatekeyword('backingup')
+def backingup(repo, ctx, **args):
+    """Whether infinitepush is currently backing up commits."""
+    # If the backup lock exists then a backup should be in progress.
+    srcrepo = shareutil.getsrcrepo(repo)
+    return srcrepo.vfs.lexists(_backuplockname)
+
+def smartlogsummary(ui, repo):
+    if not ui.configbool('infinitepushbackup', 'enablestatus'):
+        return
+
+    # Don't output the summary if a backup is currently in progress.
+    srcrepo = shareutil.getsrcrepo(repo)
+    if srcrepo.vfs.lexists(_backuplockname):
+        return
+
+    unbackeduprevs = repo.revs('notbackedup()')
+
+    # Count the number of changesets that haven't been backed up for 10 minutes.
+    # If there is only one, also print out its hash.
+    backuptime = time.time() - 10 * 60  # 10 minutes ago
+    count = 0
+    singleunbackeduprev = None
+    for rev in unbackeduprevs:
+        if repo[rev].date()[0] <= backuptime:
+            singleunbackeduprev = rev
+            count += 1
+    if count > 0:
+        if count > 1:
+            ui.warn(_('note: %d changesets are not backed up.\n') % count)
+        else:
+            ui.warn(_('note: changeset %s is not backed up.\n') %
+                    short(repo[singleunbackeduprev].node()))
+        ui.warn(_('Run `hg pushbackup` to perform a backup.  If this fails,\n'
+                  'please report to the Source Control @ FB group.\n'))
+
+def _autobackupruncommandwrapper(orig, lui, repo, cmd, fullargs, *args):
+    '''
+    If this wrapper is enabled then auto backup is started after every command
+    that modifies a repository.
+    Since we don't want to start an auto backup after read-only commands,
+    this wrapper checks whether the command opened at least one transaction.
+    If it did, a background backup will be started.
+    '''
+
+    # For chg, do not wrap the "serve" runcommand call
+    if 'CHGINTERNALMARK' in encoding.environ:
+        return orig(lui, repo, cmd, fullargs, *args)
+
+    try:
+        return orig(lui, repo, cmd, fullargs, *args)
+    finally:
+        if getattr(repo, 'txnwasopened', False) \
+                and not getattr(repo, 'ignoreautobackup', False):
+            lui.debug("starting infinitepush autobackup in the background\n")
+            _dobackgroundbackup(lui, repo)
+
+def _transaction(orig, self, *args, **kwargs):
+    ''' Wrapper that records if a transaction was opened.
+
+    If a transaction was opened then we want to start a background backup
+    process. This hook records the fact that a transaction was opened.
+    '''
+    self.txnwasopened = True
+    return orig(self, *args, **kwargs)
+
+def _backupheads(ui, repo):
+    """Returns the set of heads that should be backed up in this repo."""
+    maxheadstobackup = ui.configint('infinitepushbackup',
+                                    'maxheadstobackup', -1)
+
+    revset = 'heads(draft()) & not obsolete()'
+
+    backupheads = [ctx.hex() for ctx in repo.set(revset)]
+    if maxheadstobackup > 0:
+        backupheads = backupheads[-maxheadstobackup:]
+    elif maxheadstobackup == 0:
+        backupheads = []
+    return set(backupheads)
+
+def _dobackup(ui, repo, dest, **opts):
+    ui.status(_('starting backup %s\n') % time.strftime('%H:%M:%S %d %b %Y %Z'))
+    start = time.time()
+    # to handle multiple working copies correctly
+    repo = shareutil.getsrcrepo(repo)
+    currentbkpgenerationvalue = _readbackupgenerationfile(repo.vfs)
+    newbkpgenerationvalue = ui.configint('infinitepushbackup',
+                                         'backupgeneration', 0)
+    if currentbkpgenerationvalue != newbkpgenerationvalue:
+        # Unlinking the local backup state will trigger a full re-backup
+        _deletebackupstate(repo)
+        _writebackupgenerationfile(repo.vfs, newbkpgenerationvalue)
+    bkpstate = _readlocalbackupstate(ui, repo)
+
+    # this variable stores the local store info (tip numeric revision and date)
+    # which we use to quickly tell if our backup is stale
+    afterbackupinfo = _getlocalinfo(repo)
+
+    # This variable will store what heads will be saved in backup state file
+    # if backup finishes successfully
+    afterbackupheads = _backupheads(ui, repo)
+    other = _getremote(repo, ui, dest, **opts)
+    outgoing, badhexnodes = _getrevstobackup(repo, ui, other,
+                                             afterbackupheads - bkpstate.heads)
+    # If the remotefilelog extension is enabled then there can be nodes that
+    # we can't back up. In this case let's remove them from afterbackupheads
+    afterbackupheads.difference_update(badhexnodes)
+
+    # Like afterbackupheads, this variable stores what will be saved in the
+    # backup state file if the backup finishes successfully, but for local
+    # bookmarks
+    afterbackuplocalbooks = _getlocalbookmarks(repo)
+    afterbackuplocalbooks = _filterbookmarks(
+        afterbackuplocalbooks, repo, afterbackupheads)
+
+    newheads = afterbackupheads - bkpstate.heads
+    removedheads = bkpstate.heads - afterbackupheads
+    newbookmarks = _dictdiff(afterbackuplocalbooks, bkpstate.localbookmarks)
+    removedbookmarks = _dictdiff(bkpstate.localbookmarks, afterbackuplocalbooks)
+
+    namingmgr = BackupBookmarkNamingManager(ui, repo)
+    bookmarkstobackup = _getbookmarkstobackup(
+        repo, newbookmarks, removedbookmarks,
+        newheads, removedheads, namingmgr)
+
+    # Special case if backup state is empty. Clean all backup bookmarks from the
+    # server.
+    if bkpstate.empty():
+        bookmarkstobackup[namingmgr.getbackupheadprefix()] = ''
+        bookmarkstobackup[namingmgr.getbackupbookmarkprefix()] = ''
+
+    # Wrap deltaparent function to make sure that bundle takes less space
+    # See _deltaparent comments for details
+    extensions.wrapfunction(changegroup.cg2packer, 'deltaparent', _deltaparent)
+    try:
+        bundler = _createbundler(ui, repo, other)
+        bundler.addparam("infinitepush", "True")
+        backup = False
+        if outgoing and outgoing.missing:
+            backup = True
+            parts = getscratchbranchparts(repo, other, outgoing,
+                                          confignonforwardmove=False,
+                                          ui=ui, bookmark=None,
+                                          create=False)
+            for part in parts:
+                bundler.addpart(part)
+
+        if bookmarkstobackup:
+            backup = True
+            bundler.addpart(getscratchbookmarkspart(other, bookmarkstobackup))
+
+        if backup:
+            _sendbundle(bundler, other)
+            _writelocalbackupstate(repo.vfs, afterbackupheads,
+                                   afterbackuplocalbooks)
+            if ui.config('infinitepushbackup', 'savelatestbackupinfo'):
+                _writelocalbackupinfo(repo.vfs, **afterbackupinfo)
+        else:
+            ui.status(_('nothing to backup\n'))
+    finally:
+        # cleanup ensures that all pipes are flushed
+        cleanup = getattr(other, '_cleanup', None) or getattr(other, 'cleanup')
+        try:
+            cleanup()
+        except Exception:
+            ui.warn(_('remote connection cleanup failed\n'))
+        ui.status(_('finished in %f seconds\n') % (time.time() - start))
+        extensions.unwrapfunction(changegroup.cg2packer, 'deltaparent',
+                                  _deltaparent)
+    return 0
+
+def _dobackgroundbackup(ui, repo, dest=None):
+    background_cmd = ['hg', 'pushbackup']
+    if dest:
+        background_cmd.append(dest)
+    logfile = None
+    logdir = ui.config('infinitepushbackup', 'logdir')
+    if logdir:
+        # make newly created files and dirs non-writable by group and others
+        oldumask = os.umask(0o022)
+        try:
+            try:
+                username = util.shortuser(ui.username())
+            except Exception:
+                username = 'unknown'
+
+            if not _checkcommonlogdir(logdir):
+                raise WrongPermissionsException(logdir)
+
+            userlogdir = os.path.join(logdir, username)
+            util.makedirs(userlogdir)
+
+            if not _checkuserlogdir(userlogdir):
+                raise WrongPermissionsException(userlogdir)
+
+            reporoot = repo.origroot
+            reponame = os.path.basename(reporoot)
+            _removeoldlogfiles(userlogdir, reponame)
+            logfile = _getlogfilename(logdir, username, reponame)
+        except (OSError, IOError) as e:
+            ui.debug('infinitepush backup log is disabled: %s\n' % e)
+        except WrongPermissionsException as e:
+            ui.debug(('%s directory has incorrect permission, ' +
+                     'infinitepush backup logging will be disabled\n') %
+                     e.logdir)
+        finally:
+            os.umask(oldumask)
+
+    if not logfile:
+        logfile = os.devnull
+
+    with open(logfile, 'a') as f:
+        subprocess.Popen(background_cmd, shell=False, stdout=f,
+                         stderr=subprocess.STDOUT)
+
+def _dobackupcheck(bkpstate, ui, repo, dest, **opts):
+    remotehexnodes = sorted(
+        set(bkpstate.heads).union(bkpstate.localbookmarks.values()))
+    if not remotehexnodes:
+        return True
+    other = _getremote(repo, ui, dest, **opts)
+    batch = other.iterbatch()
+    for hexnode in remotehexnodes:
+        batch.lookup(hexnode)
+    batch.submit()
+    lookupresults = batch.results()
+    i = 0
+    try:
+        for i, r in enumerate(lookupresults):
+            # iterate over results to make it throw if revision
+            # was not found
+            pass
+        return True
+    except error.RepoError:
+        ui.warn(_('unknown revision %r\n') % remotehexnodes[i])
+        return False
+
+_backuplatestinfofile = 'infinitepushlatestbackupinfo'
+_backupstatefile = 'infinitepushbackupstate'
+_backupgenerationfile = 'infinitepushbackupgeneration'
+
+# Common helper functions
+def _getlocalinfo(repo):
+    localinfo = {}
+    localinfo['rev'] = repo[repo.changelog.tip()].rev()
+    localinfo['time'] = int(time.time())
+    return localinfo
+
+def _getlocalbookmarks(repo):
+    localbookmarks = {}
+    for bookmark, node in repo._bookmarks.iteritems():
+        hexnode = hex(node)
+        localbookmarks[bookmark] = hexnode
+    return localbookmarks
+
+def _filterbookmarks(localbookmarks, repo, headstobackup):
+    '''Filters out some bookmarks from being backed up
+
+    A bookmark is kept only if it points to an ancestor of headstobackup or
+    to a public commit; all other bookmarks are filtered out.
+    '''
+
+    headrevstobackup = [repo[hexhead].rev() for hexhead in headstobackup]
+    ancestors = repo.changelog.ancestors(headrevstobackup, inclusive=True)
+    filteredbooks = {}
+    for bookmark, hexnode in localbookmarks.iteritems():
+        if (repo[hexnode].rev() in ancestors or
+                repo[hexnode].phase() == phases.public):
+            filteredbooks[bookmark] = hexnode
+    return filteredbooks
+
+def _downloadbackupstate(ui, other, sourcereporoot, sourcehostname, namingmgr):
+    pattern = namingmgr.getcommonuserprefix()
+    fetchedbookmarks = other.listkeyspatterns('bookmarks', patterns=[pattern])
+    allbackupstates = collections.defaultdict(backupstate)
+    for book, hexnode in fetchedbookmarks.iteritems():
+        parsed = _parsebackupbookmark(book, namingmgr)
+        if parsed:
+            if sourcereporoot and sourcereporoot != parsed.reporoot:
+                continue
+            if sourcehostname and sourcehostname != parsed.hostname:
+                continue
+            key = (parsed.hostname, parsed.reporoot)
+            if parsed.localbookmark:
+                bookname = parsed.localbookmark
+                allbackupstates[key].localbookmarks[bookname] = hexnode
+            else:
+                allbackupstates[key].heads.add(hexnode)
+        else:
+            ui.warn(_('wrong format of backup bookmark: %s') % book)
+
+    return allbackupstates
+
+def _checkbackupstates(allbackupstates):
+    if len(allbackupstates) == 0:
+        raise error.Abort('no backups found!')
+
+    hostnames = set(key[0] for key in allbackupstates.iterkeys())
+    reporoots = set(key[1] for key in allbackupstates.iterkeys())
+
+    if len(hostnames) > 1:
+        raise error.Abort(
+            _('ambiguous hostname to restore: %s') % sorted(hostnames),
+            hint=_('set --hostname to disambiguate'))
+
+    if len(reporoots) > 1:
+        raise error.Abort(
+            _('ambiguous repo root to restore: %s') % sorted(reporoots),
+            hint=_('set --reporoot to disambiguate'))
+
+class BackupBookmarkNamingManager(object):
+    def __init__(self, ui, repo, username=None):
+        self.ui = ui
+        self.repo = repo
+        if not username:
+            username = util.shortuser(ui.username())
+        self.username = username
+
+        self.hostname = self.ui.config('infinitepushbackup', 'hostname')
+        if not self.hostname:
+            self.hostname = socket.gethostname()
+
+    def getcommonuserprefix(self):
+        return '/'.join((self._getcommonuserprefix(), '*'))
+
+    def getcommonprefix(self):
+        return '/'.join((self._getcommonprefix(), '*'))
+
+    def getbackupbookmarkprefix(self):
+        return '/'.join((self._getbackupbookmarkprefix(), '*'))
+
+    def getbackupbookmarkname(self, bookmark):
+        bookmark = _escapebookmark(bookmark)
+        return '/'.join((self._getbackupbookmarkprefix(), bookmark))
+
+    def getbackupheadprefix(self):
+        return '/'.join((self._getbackupheadprefix(), '*'))
+
+    def getbackupheadname(self, hexhead):
+        return '/'.join((self._getbackupheadprefix(), hexhead))
+
+    def _getbackupbookmarkprefix(self):
+        return '/'.join((self._getcommonprefix(), 'bookmarks'))
+
+    def _getbackupheadprefix(self):
+        return '/'.join((self._getcommonprefix(), 'heads'))
+
+    def _getcommonuserprefix(self):
+        return '/'.join(('infinitepush', 'backups', self.username))
+
+    def _getcommonprefix(self):
+        reporoot = self.repo.origroot
+
+        result = '/'.join((self._getcommonuserprefix(), self.hostname))
+        if not reporoot.startswith('/'):
+            result += '/'
+        result += reporoot
+        if result.endswith('/'):
+            result = result[:-1]
+        return result
+
+def _escapebookmark(bookmark):
+    '''
+    If `bookmark` contains "bookmarks" as a substring then replace it with
+    "bookmarksbookmarks". This makes parsing the remote bookmark name
+    unambiguous.
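+
+    For example, a bookmark named 'stable/bookmarks/foo' is stored as
+    'stable/bookmarksbookmarks/foo'.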
+    '''
+
+    bookmark = encoding.fromlocal(bookmark)
+    return bookmark.replace('bookmarks', 'bookmarksbookmarks')
+
+def _unescapebookmark(bookmark):
+    bookmark = encoding.tolocal(bookmark)
+    return bookmark.replace('bookmarksbookmarks', 'bookmarks')
+
+def _getremote(repo, ui, dest, **opts):
+    path = ui.paths.getpath(dest, default=('infinitepush', 'default'))
+    if not path:
+        raise error.Abort(_('default repository not configured!'),
+                         hint=_("see 'hg help config.paths'"))
+    dest = path.pushloc or path.loc
+    return hg.peer(repo, opts, dest)
+
+def _getcommandandoptions(command):
+    cmd = commands.table[command][0]
+    opts = dict(opt[1:3] for opt in commands.table[command][1])
+    return cmd, opts
+
+# Backup helper functions
+
+def _deltaparent(orig, self, revlog, rev, p1, p2, prev):
+    # This version of deltaparent prefers p1 over prev to use less space
+    dp = revlog.deltaparent(rev)
+    if dp == nullrev and not revlog.storedeltachains:
+        # send full snapshot only if revlog configured to do so
+        return nullrev
+    return p1
+
+def _getbookmarkstobackup(repo, newbookmarks, removedbookmarks,
+                          newheads, removedheads, namingmgr):
+    bookmarkstobackup = {}
+
+    for bookmark, hexnode in removedbookmarks.items():
+        backupbookmark = namingmgr.getbackupbookmarkname(bookmark)
+        bookmarkstobackup[backupbookmark] = ''
+
+    for bookmark, hexnode in newbookmarks.items():
+        backupbookmark = namingmgr.getbackupbookmarkname(bookmark)
+        bookmarkstobackup[backupbookmark] = hexnode
+
+    for hexhead in removedheads:
+        headbookmarksname = namingmgr.getbackupheadname(hexhead)
+        bookmarkstobackup[headbookmarksname] = ''
+
+    for hexhead in newheads:
+        headbookmarksname = namingmgr.getbackupheadname(hexhead)
+        bookmarkstobackup[headbookmarksname] = hexhead
+
+    return bookmarkstobackup
+
+def _createbundler(ui, repo, other):
+    bundler = bundle2.bundle20(ui, bundle2.bundle2caps(other))
+    # Disallow pushback because we want to avoid taking repo locks.
+    # And we don't need pushback anyway
+    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo,
+                                                      allowpushback=False))
+    bundler.newpart('replycaps', data=capsblob)
+    return bundler
+
+def _sendbundle(bundler, other):
+    stream = util.chunkbuffer(bundler.getchunks())
+    try:
+        other.unbundle(stream, ['force'], other.url())
+    except error.BundleValueError as exc:
+        raise error.Abort(_('missing support for %s') % exc)
+
+def findcommonoutgoing(repo, ui, other, heads):
+    if heads:
+        # Avoid using remotenames fastheaddiscovery heuristic. It uses
+        # remotenames file to quickly find commonoutgoing set, but it can
+        # result in sending public commits to infinitepush servers.
+        # For example:
+        #
+        #        o draft
+        #       /
+        #      o C1
+        #      |
+        #     ...
+        #      |
+        #      o remote/master
+        #
+        # pushbackup in that case results in sending to the infinitepush server
+        # all public commits from 'remote/master' to C1. That increases the
+        # size of the bundle and may result in storing data about public
+        # commits in the infinitepush table.
+
+        with ui.configoverride({("remotenames", "fastheaddiscovery"): False}):
+            nodes = map(repo.changelog.node, heads)
+            return discovery.findcommonoutgoing(repo, other, onlyheads=nodes)
+    else:
+        return None
+
+def _getrevstobackup(repo, ui, other, headstobackup):
+    # In rare cases it's possible to have a local node without filelogs.
+    # This is possible if remotefilelog is enabled and if the node was
+    # stripped server-side. We want to filter out these bad nodes and all
+    # of their descendants.
+    badnodes = ui.configlist('infinitepushbackup', 'dontbackupnodes', [])
+    badnodes = [node for node in badnodes if node in repo]
+    badrevs = [repo[node].rev() for node in badnodes]
+    badnodesdescendants = repo.set('%ld::', badrevs) if badrevs else set()
+    badnodesdescendants = set(ctx.hex() for ctx in badnodesdescendants)
+    filteredheads = filter(lambda head: head in badnodesdescendants,
+                           headstobackup)
+
+    if filteredheads:
+        ui.warn(_('filtering nodes: %s\n') % filteredheads)
+        ui.log('infinitepushbackup', 'corrupted nodes found',
+               infinitepushbackupcorruptednodes='failure')
+    headstobackup = filter(lambda head: head not in badnodesdescendants,
+                           headstobackup)
+
+    revs = list(repo[hexnode].rev() for hexnode in headstobackup)
+    outgoing = findcommonoutgoing(repo, ui, other, revs)
+    nodeslimit = 1000
+    if outgoing and len(outgoing.missing) > nodeslimit:
+        # trying to push too many nodes usually means that there is a bug
+        # somewhere. Let's be safe and avoid pushing too many nodes at once
+        raise error.Abort('trying to back up too many nodes: %d' %
+                          (len(outgoing.missing),))
+    return outgoing, set(filteredheads)
+
+def _localbackupstateexists(repo):
+    return repo.vfs.exists(_backupstatefile)
+
+def _deletebackupstate(repo):
+    return repo.vfs.tryunlink(_backupstatefile)
+
+def _readlocalbackupstate(ui, repo):
+    repo = shareutil.getsrcrepo(repo)
+    if not _localbackupstateexists(repo):
+        return backupstate()
+
+    with repo.vfs(_backupstatefile) as f:
+        try:
+            state = json.loads(f.read())
+            if (not isinstance(state['bookmarks'], dict) or
+                    not isinstance(state['heads'], list)):
+                raise ValueError('bad types of bookmarks or heads')
+
+            result = backupstate()
+            result.heads = set(map(str, state['heads']))
+            result.localbookmarks = state['bookmarks']
+            return result
+        except (ValueError, KeyError, TypeError) as e:
+            ui.warn(_('corrupt file: %s (%s)\n') % (_backupstatefile, e))
+            return backupstate()
+    return backupstate()
+
+def _writelocalbackupstate(vfs, heads, bookmarks):
+    with vfs(_backupstatefile, 'w') as f:
+        f.write(json.dumps({'heads': list(heads), 'bookmarks': bookmarks}))
+
+def _readbackupgenerationfile(vfs):
+    try:
+        with vfs(_backupgenerationfile) as f:
+            return int(f.read())
+    except (IOError, OSError, ValueError):
+        return 0
+
+def _writebackupgenerationfile(vfs, backupgenerationvalue):
+    with vfs(_backupgenerationfile, 'w', atomictemp=True) as f:
+        f.write(str(backupgenerationvalue))
+
+def _writelocalbackupinfo(vfs, rev, time):
+    with vfs(_backuplatestinfofile, 'w', atomictemp=True) as f:
+        f.write(('backuprevision=%d\nbackuptime=%d\n') % (rev, time))
+
+# Restore helper functions
+def _parsebackupbookmark(backupbookmark, namingmgr):
+    '''Parses backup bookmark and returns info about it
+
+    A backup bookmark may represent either a local bookmark or a head.
+    Returns None if the backup bookmark has the wrong format, otherwise a
+    tuple. The first entry is the hostname where this bookmark came from.
+    The second entry is the root of the repo where this bookmark came from.
+    The third entry is the local bookmark name if the backup bookmark
+    represents a local bookmark and None otherwise.
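+
+    For example (with illustrative user/host/path values), the backup bookmark
+    'infinitepush/backups/alice/devhost/home/alice/repo/bookmarks/feature'
+    parses to ('devhost', '/home/alice/repo', 'feature'), while
+    'infinitepush/backups/alice/devhost/home/alice/repo/heads/<40 hex chars>'
+    parses to ('devhost', '/home/alice/repo', None).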
+    '''
+
+    backupbookmarkprefix = namingmgr._getcommonuserprefix()
+    commonre = '^{0}/([-\w.]+)(/.*)'.format(re.escape(backupbookmarkprefix))
+    bookmarkre = commonre + '/bookmarks/(.*)$'
+    headsre = commonre + '/heads/[a-f0-9]{40}$'
+
+    match = re.search(bookmarkre, backupbookmark)
+    if not match:
+        match = re.search(headsre, backupbookmark)
+        if not match:
+            return None
+        # It's a local head not a local bookmark.
+        # That's why localbookmark is None
+        return backupbookmarktuple(hostname=match.group(1),
+                                   reporoot=match.group(2),
+                                   localbookmark=None)
+
+    return backupbookmarktuple(hostname=match.group(1),
+                               reporoot=match.group(2),
+                               localbookmark=_unescapebookmark(match.group(3)))
+
+_timeformat = '%Y%m%d'
+
+def _getlogfilename(logdir, username, reponame):
+    '''Returns the name of the log file for a particular user and repo
+
+    Different users have different directories inside logdir. The log filename
+    consists of the reponame (basename of the repo path) and the current day
+    (see _timeformat). That means that two different repos with the same name
+    can share the same log file. This is not a big problem, so we ignore it.
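+
+    For example (illustrative values), with logdir '/var/log/hg-backups',
+    username 'alice' and reponame 'repo', the log file for 2018-02-09 is
+    '/var/log/hg-backups/alice/repo20180209'.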
+    '''
+
+    currentday = time.strftime(_timeformat)
+    return os.path.join(logdir, username, reponame + currentday)
+
+def _removeoldlogfiles(userlogdir, reponame):
+    existinglogfiles = []
+    for entry in osutil.listdir(userlogdir):
+        filename = entry[0]
+        fullpath = os.path.join(userlogdir, filename)
+        if filename.startswith(reponame) and os.path.isfile(fullpath):
+            try:
+                time.strptime(filename[len(reponame):], _timeformat)
+            except ValueError:
+                continue
+            existinglogfiles.append(filename)
+
+    # _timeformat gives us a property that if we sort log file names in
+    # descending order then newer files are going to be in the beginning
+    existinglogfiles = sorted(existinglogfiles, reverse=True)
+    # Keep only the 5 most recent log files; delete the older ones
+    maxlogfilenumber = 5
+    if len(existinglogfiles) > maxlogfilenumber:
+        for filename in existinglogfiles[maxlogfilenumber:]:
+            os.unlink(os.path.join(userlogdir, filename))
+
+def _checkcommonlogdir(logdir):
+    '''Checks permissions of the log directory
+
+    We want the log directory to actually be a directory and to have the
+    restricted deletion flag (sticky bit) set
+    '''
+
+    try:
+        st = os.stat(logdir)
+        return stat.S_ISDIR(st.st_mode) and st.st_mode & stat.S_ISVTX
+    except OSError:
+        # is raised by os.stat()
+        return False
+
+def _checkuserlogdir(userlogdir):
+    '''Checks permissions of the user log directory
+
+    We want the user log directory to be owned by the current user and to be
+    writable only by its owner
+    '''
+
+    try:
+        st = os.stat(userlogdir)
+        # Check that `userlogdir` is owned by the current user
+        if os.getuid() != st.st_uid:
+            return False
+        return ((st.st_mode & (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)) ==
+                stat.S_IWUSR)
+    except OSError:
+        # is raised by os.stat()
+        return False
+
+def _dictdiff(first, second):
+    '''Returns a new dict with the items from the first dict that are missing
+    from the second dict or have a different value there.
+    '''
+    result = {}
+    for book, hexnode in first.items():
+        if second.get(book) != hexnode:
+            result[book] = hexnode
+    return result
diff --git a/hgext/infinitepush/__init__.py b/hgext/infinitepush/__init__.py
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/__init__.py
@@ -0,0 +1,1428 @@
+# Infinite push
+#
+# Copyright 2016 Facebook, Inc.
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+""" store some pushes in a remote blob store on the server (EXPERIMENTAL)
+
+    [infinitepush]
+    # Server-side and client-side option. Pattern of the infinitepush bookmark
+    branchpattern = PATTERN
+
+    # Server or client
+    server = False
+
+    # Server-side option. Possible values: 'disk' or 'sql'. Fails if not set
+    indextype = disk
+
+    # Server-side option. Used only if indextype=sql.
+    # Format: 'IP:PORT:DB_NAME:USER:PASSWORD'
+    sqlhost = IP:PORT:DB_NAME:USER:PASSWORD
+
+    # Server-side option. Used only if indextype=disk.
+    # Filesystem path to the index store
+    indexpath = PATH
+
+    # Server-side option. Possible values: 'disk' or 'external'
+    # Fails if not set
+    storetype = disk
+
+    # Server-side option.
+    # Path to the binary that will save bundle to the bundlestore
+    # Formatted cmd line will be passed to it (see `put_args`)
+    put_binary = put
+
+    # Server-side option. Used only if storetype=external.
+    # Format cmd-line string for put binary. Placeholder: {filename}
+    put_args = {filename}
+
+    # Server-side option.
+    # Path to the binary that gets bundles from the bundlestore.
+    # Formatted cmd line will be passed to it (see `get_args`)
+    get_binary = get
+
+    # Server-side option. Used only if storetype=external.
+    # Format cmd-line string for get binary. Placeholders: {filename} {handle}
+    get_args = {filename} {handle}
+
+    # Server-side option
+    logfile = FILE
+
+    # Server-side option
+    loglevel = DEBUG
+
+    # Server-side option. Used only if indextype=sql.
+    # Sets mysql wait_timeout option.
+    waittimeout = 300
+
+    # Server-side option. Used only if indextype=sql.
+    # Sets mysql innodb_lock_wait_timeout option.
+    locktimeout = 120
+
+    # Server-side option. Used only if indextype=sql.
+    # Name of the repository
+    reponame = ''
+
+    # Client-side option. Used by --list-remote option. List of remote scratch
+    # patterns to list if no patterns are specified.
+    defaultremotepatterns = ['*']
+
+    # Server-side option. If bookmark that was pushed matches
+    # `fillmetadatabranchpattern` then background
+    # `hg debugfillinfinitepushmetadata` process will save metadata
+    # in infinitepush index for nodes that are ancestor of the bookmark.
+    fillmetadatabranchpattern = ''
+
+    # Instructs infinitepush to forward all received bundle2 parts to the
+    # bundle for storage. Defaults to False.
+    storeallparts = True
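+
+    # An illustrative (not prescriptive) minimal server-side setup that keeps
+    # both the bundles and the index on local disk could look like:
+    #
+    #   [infinitepush]
+    #   server = True
+    #   branchpattern = re:scratch/.+
+    #   indextype = disk
+    #   storetype = disk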
+
+    [remotenames]
+    # Client-side option
+    # This option should be set only if remotenames extension is enabled.
+    # Whether remote bookmarks are tracked by remotenames extension.
+    bookmarks = True
+"""
+
+from __future__ import absolute_import
+
+import collections
+import contextlib
+import errno
+import functools
+import json
+import logging
+import os
+import random
+import re
+import socket
+import struct
+import subprocess
+import sys
+import tempfile
+import time
+
+from mercurial.node import (
+    bin,
+    hex,
+)
+
+from mercurial.i18n import _
+
+from mercurial import (
+    bundle2,
+    changegroup,
+    commands,
+    discovery,
+    encoding,
+    error,
+    exchange,
+    extensions,
+    hg,
+    localrepo,
+    peer,
+    phases,
+    pushkey,
+    registrar,
+    util,
+    wireproto,
+)
+
+from . import (
+    backupcommands,
+    bundleparts,
+    common,
+    infinitepushcommands,
+)
+
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
+# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
+# be specifying the version(s) of Mercurial they are tested with, or
+# leave the attribute unspecified.
+testedwith = 'ships-with-hg-core'
+
+configtable = {}
+configitem = registrar.configitem(configtable)
+
+configitem('infinitepush', 'server',
+    default=False,
+)
+configitem('infinitepush', 'storetype',
+    default='',
+)
+configitem('infinitepush', 'indextype',
+    default='',
+)
+configitem('infinitepush', 'indexpath',
+    default='',
+)
+configitem('infinitepush', 'fillmetadatabranchpattern',
+    default='',
+)
+configitem('infinitepush', 'storeallparts',
+    default=False,
+)
+configitem('infinitepush', 'reponame',
+    default='',
+)
+configitem('infinitepush', 'bundle-stream',
+    default=False,
+)
+configitem('scratchbranch', 'storepath',
+    default='',
+)
+configitem('infinitepush', 'branchpattern',
+    default='',
+)
+configitem('infinitepush', 'metadatafilelimit',
+    default=100,
+)
+configitem('infinitepushbackup', 'autobackup',
+    default=False,
+)
+configitem('experimental', 'server-bundlestore-bookmark',
+    default='',
+)
+configitem('experimental', 'server-bundlestore-create',
+    default='',
+)
+configitem('experimental', 'infinitepush-scratchpush',
+    default=False,
+)
+configitem('experimental', 'non-forward-move',
+    default=False,
+)
+
+pushrebaseparttype = 'b2x:rebase'
+experimental = 'experimental'
+configbookmark = 'server-bundlestore-bookmark'
+configcreate = 'server-bundlestore-create'
+configscratchpush = 'infinitepush-scratchpush'
+confignonforwardmove = 'non-forward-move'
+
+scratchbranchparttype = bundleparts.scratchbranchparttype
+cmdtable = infinitepushcommands.cmdtable
+revsetpredicate = backupcommands.revsetpredicate
+templatekeyword = backupcommands.templatekeyword
+_scratchbranchmatcher = lambda x: False
+_maybehash = re.compile(r'^[a-f0-9]+$').search
+
+def _buildexternalbundlestore(ui):
+    put_args = ui.configlist('infinitepush', 'put_args', [])
+    put_binary = ui.config('infinitepush', 'put_binary')
+    if not put_binary:
+        raise error.Abort('put binary is not specified')
+    get_args = ui.configlist('infinitepush', 'get_args', [])
+    get_binary = ui.config('infinitepush', 'get_binary')
+    if not get_binary:
+        raise error.Abort('get binary is not specified')
+    from . import store
+    return store.externalbundlestore(put_binary, put_args, get_binary, get_args)
+
+def _buildsqlindex(ui):
+    sqlhost = ui.config('infinitepush', 'sqlhost')
+    if not sqlhost:
+        raise error.Abort(_('please set infinitepush.sqlhost'))
+    host, port, db, user, password = sqlhost.split(':')
+    reponame = ui.config('infinitepush', 'reponame')
+    if not reponame:
+        raise error.Abort(_('please set infinitepush.reponame'))
+
+    logfile = ui.config('infinitepush', 'logfile', '')
+    waittimeout = ui.configint('infinitepush', 'waittimeout', 300)
+    locktimeout = ui.configint('infinitepush', 'locktimeout', 120)
+    from . import sqlindexapi
+    return sqlindexapi.sqlindexapi(
+        reponame, host, port, db, user, password,
+        logfile, _getloglevel(ui), waittimeout=waittimeout,
+        locktimeout=locktimeout)
+
+def _getloglevel(ui):
+    loglevel = ui.config('infinitepush', 'loglevel', 'DEBUG')
+    numeric_loglevel = getattr(logging, loglevel.upper(), None)
+    if not isinstance(numeric_loglevel, int):
+        raise error.Abort(_('invalid log level %s') % loglevel)
+    return numeric_loglevel
+
+def _tryhoist(ui, remotebookmark):
+    '''returns a bookmark with the hoisted part removed
+
+    The remotenames extension has a 'hoist' config that allows using remote
+    bookmarks without specifying the remote path. For example, 'hg update
+    master' works as well as 'hg update remote/master'. We want to allow the
+    same in infinitepush.
+    '''
+
+    if common.isremotebooksenabled(ui):
+        hoist = ui.config('remotenames', 'hoist') + '/'
+        if remotebookmark.startswith(hoist):
+            return remotebookmark[len(hoist):]
+    return remotebookmark
+
+class bundlestore(object):
+    def __init__(self, repo):
+        self._repo = repo
+        storetype = self._repo.ui.config('infinitepush', 'storetype', '')
+        if storetype == 'disk':
+            from . import store
+            self.store = store.filebundlestore(self._repo.ui, self._repo)
+        elif storetype == 'external':
+            self.store = _buildexternalbundlestore(self._repo.ui)
+        else:
+            raise error.Abort(
+                _('unknown infinitepush store type specified %s') % storetype)
+
+        indextype = self._repo.ui.config('infinitepush', 'indextype', '')
+        if indextype == 'disk':
+            from . import fileindexapi
+            self.index = fileindexapi.fileindexapi(self._repo)
+        elif indextype == 'sql':
+            self.index = _buildsqlindex(self._repo.ui)
+        else:
+            raise error.Abort(
+                _('unknown infinitepush index type specified %s') % indextype)
+
+def _isserver(ui):
+    return ui.configbool('infinitepush', 'server')
+
+def reposetup(ui, repo):
+    if _isserver(ui) and repo.local():
+        repo.bundlestore = bundlestore(repo)
+
+def uisetup(ui):
+    # remotenames circumvents the default push implementation entirely, so make
+    # sure we load after it so that we wrap it.
+    order = extensions._order
+    order.remove('infinitepush')
+    order.append('infinitepush')
+    extensions._order = order
+
+def extsetup(ui):
+    # Allow writing backup files outside the normal lock
+    localrepo.localrepository._wlockfreeprefix.update([
+        backupcommands._backupstatefile,
+        backupcommands._backupgenerationfile,
+        backupcommands._backuplatestinfofile,
+    ])
+
+    commonsetup(ui)
+    if _isserver(ui):
+        serverextsetup(ui)
+    else:
+        clientextsetup(ui)
+
+def commonsetup(ui):
+    wireproto.commands['listkeyspatterns'] = (
+        wireprotolistkeyspatterns, 'namespace patterns')
+    scratchbranchpat = ui.config('infinitepush', 'branchpattern')
+    if scratchbranchpat:
+        global _scratchbranchmatcher
+        kind, pat, _scratchbranchmatcher = util.stringmatcher(scratchbranchpat)
+
+def serverextsetup(ui):
+    origpushkeyhandler = bundle2.parthandlermapping['pushkey']
+
+    def newpushkeyhandler(*args, **kwargs):
+        bundle2pushkey(origpushkeyhandler, *args, **kwargs)
+    newpushkeyhandler.params = origpushkeyhandler.params
+    bundle2.parthandlermapping['pushkey'] = newpushkeyhandler
+
+    orighandlephasehandler = bundle2.parthandlermapping['phase-heads']
+    newphaseheadshandler = lambda *args, **kwargs: \
+        bundle2handlephases(orighandlephasehandler, *args, **kwargs)
+    newphaseheadshandler.params = orighandlephasehandler.params
+    bundle2.parthandlermapping['phase-heads'] = newphaseheadshandler
+
+    extensions.wrapfunction(localrepo.localrepository, 'listkeys',
+                            localrepolistkeys)
+    wireproto.commands['lookup'] = (
+        _lookupwrap(wireproto.commands['lookup'][0]), 'key')
+    extensions.wrapfunction(exchange, 'getbundlechunks', getbundlechunks)
+
+    extensions.wrapfunction(bundle2, 'processparts', processparts)
+
+def clientextsetup(ui):
+    entry = extensions.wrapcommand(commands.table, 'push', _push)
+    # Don't add the 'to' arg if it already exists
+    if not any(a for a in entry[1] if a[1] == 'to'):
+        entry[1].append(('', 'to', '', _('push revs to this bookmark')))
+
+    if not any(a for a in entry[1] if a[1] == 'non-forward-move'):
+        entry[1].append(('', 'non-forward-move', None,
+                         _('allows moving a remote bookmark to an '
+                           'arbitrary place')))
+
+    if not any(a for a in entry[1] if a[1] == 'create'):
+        entry[1].append(
+            ('', 'create', None, _('create a new remote bookmark')))
+
+    entry[1].append(
+        ('', 'bundle-store', None,
+         _('force push to go to bundle store (EXPERIMENTAL)')))
+
+    bookcmd = extensions.wrapcommand(commands.table, 'bookmarks', exbookmarks)
+    bookcmd[1].append(
+        ('', 'list-remote', None,
+         'list remote bookmarks. '
+         'Positional arguments are interpreted as wildcard patterns. '
+         'The only allowed wildcard is \'*\' at the end of the pattern. '
+         'If no positional arguments are specified then it will list '
+         'the most "important" remote bookmarks. '
+         'Otherwise it will list remote bookmarks '
+         'that match at least one pattern '
+         ''))
+    bookcmd[1].append(
+        ('', 'remote-path', '',
+         'name of the remote path to list the bookmarks'))
+
+    extensions.wrapcommand(commands.table, 'pull', _pull)
+    extensions.wrapcommand(commands.table, 'update', _update)
+
+    extensions.wrapfunction(discovery, 'checkheads', _checkheads)
+    extensions.wrapfunction(bundle2, '_addpartsfromopts', _addpartsfromopts)
+
+    wireproto.wirepeer.listkeyspatterns = listkeyspatterns
+
+    # Move infinitepush part before pushrebase part
+    # to avoid generation of both parts.
+    partorder = exchange.b2partsgenorder
+    index = partorder.index('changeset')
+    if pushrebaseparttype in partorder:
+        index = min(index, partorder.index(pushrebaseparttype))
+    partorder.insert(
+        index, partorder.pop(partorder.index(scratchbranchparttype)))
+
+    def wrapsmartlog(loaded):
+        if not loaded:
+            return
+        smartlogmod = extensions.find('smartlog')
+        extensions.wrapcommand(smartlogmod.cmdtable, 'smartlog', _smartlog)
+    extensions.afterloaded('smartlog', wrapsmartlog)
+    backupcommands.extsetup(ui)
+
+def _smartlog(orig, ui, repo, **opts):
+    res = orig(ui, repo, **opts)
+    backupcommands.smartlogsummary(ui, repo)
+    return res
+
+def _showbookmarks(ui, bookmarks, **opts):
+    # Copy-paste from commands.py
+    fm = ui.formatter('bookmarks', opts)
+    for bmark, n in sorted(bookmarks.iteritems()):
+        fm.startitem()
+        if not ui.quiet:
+            fm.plain('   ')
+        fm.write('bookmark', '%s', bmark)
+        pad = ' ' * (25 - encoding.colwidth(bmark))
+        fm.condwrite(not ui.quiet, 'node', pad + ' %s', n)
+        fm.plain('\n')
+    fm.end()
+
+def exbookmarks(orig, ui, repo, *names, **opts):
+    pattern = opts.get('list_remote')
+    delete = opts.get('delete')
+    remotepath = opts.get('remote_path')
+    path = ui.paths.getpath(remotepath or None, default=('default'))
+    if pattern:
+        destpath = path.pushloc or path.loc
+        other = hg.peer(repo, opts, destpath)
+        if not names:
+            raise error.Abort(
+                '--list-remote requires a bookmark pattern',
+                hint='use "hg book" to get a list of your local bookmarks')
+        else:
+            fetchedbookmarks = other.listkeyspatterns('bookmarks',
+                                                      patterns=names)
+        _showbookmarks(ui, fetchedbookmarks, **opts)
+        return
+    elif delete and 'remotenames' in extensions._extensions:
+        existing_local_bms = set(repo._bookmarks.keys())
+        scratch_bms = []
+        other_bms = []
+        for name in names:
+            if _scratchbranchmatcher(name) and name not in existing_local_bms:
+                scratch_bms.append(name)
+            else:
+                other_bms.append(name)
+
+        if len(scratch_bms) > 0:
+            if remotepath == '':
+                remotepath = 'default'
+            _deleteinfinitepushbookmarks(ui,
+                                         repo,
+                                         remotepath,
+                                         scratch_bms)
+
+        if len(other_bms) > 0 or len(scratch_bms) == 0:
+            return orig(ui, repo, *other_bms, **opts)
+    else:
+        return orig(ui, repo, *names, **opts)
+
+def _checkheads(orig, pushop):
+    if pushop.ui.configbool(experimental, configscratchpush, False):
+        return
+    return orig(pushop)
+
+def _addpartsfromopts(orig, ui, repo, bundler, *args, **kwargs):
+    """ adds a stream level part to bundle2 storing whether this is an
+    infinitepush bundle or not
+    This functionality is hidden behind a config option:
+
+    [infinitepush]
+    bundle-stream = True
+    """
+    if ui.configbool('infinitepush', 'bundle-stream', False):
+        bundler.addparam('infinitepush', True)
+    return orig(ui, repo, bundler, *args, **kwargs)
+
+def wireprotolistkeyspatterns(repo, proto, namespace, patterns):
+    patterns = wireproto.decodelist(patterns)
+    d = repo.listkeys(encoding.tolocal(namespace), patterns).iteritems()
+    return pushkey.encodekeys(d)
+
+def localrepolistkeys(orig, self, namespace, patterns=None):
+    if namespace == 'bookmarks' and patterns:
+        index = self.bundlestore.index
+        results = {}
+        bookmarks = orig(self, namespace)
+        for pattern in patterns:
+            results.update(index.getbookmarks(pattern))
+            if pattern.endswith('*'):
+                pattern = 're:^' + pattern[:-1] + '.*'
+            kind, pat, matcher = util.stringmatcher(pattern)
+            for bookmark, node in bookmarks.iteritems():
+                if matcher(bookmark):
+                    results[bookmark] = node
+        return results
+    else:
+        return orig(self, namespace)
+
+@peer.batchable
+def listkeyspatterns(self, namespace, patterns):
+    if not self.capable('pushkey'):
+        yield {}, None
+    f = peer.future()
+    self.ui.debug('preparing listkeys for "%s" with pattern "%s"\n' %
+                  (namespace, patterns))
+    yield {
+        'namespace': encoding.fromlocal(namespace),
+        'patterns': wireproto.encodelist(patterns)
+    }, f
+    d = f.value
+    self.ui.debug('received listkey for "%s": %i bytes\n'
+                  % (namespace, len(d)))
+    yield pushkey.decodekeys(d)
+
+def _readbundlerevs(bundlerepo):
+    return list(bundlerepo.revs('bundle()'))
+
+def _includefilelogstobundle(bundlecaps, bundlerepo, bundlerevs, ui):
+    '''Tells remotefilelog to include all changed files in the changegroup
+
+    By default remotefilelog doesn't include file content in the changegroup,
+    but we need to include it if we are fetching from the bundlestore.
+    '''
+    changedfiles = set()
+    cl = bundlerepo.changelog
+    for r in bundlerevs:
+        # [3] means changed files
+        changedfiles.update(cl.read(r)[3])
+    if not changedfiles:
+        return bundlecaps
+
+    changedfiles = '\0'.join(changedfiles)
+    newcaps = []
+    appended = False
+    for cap in (bundlecaps or []):
+        if cap.startswith('excludepattern='):
+            newcaps.append('\0'.join((cap, changedfiles)))
+            appended = True
+        else:
+            newcaps.append(cap)
+    if not appended:
+        # excludepattern cap not found, just append it
+        newcaps.append('excludepattern=' + changedfiles)
+
+    return newcaps
+
+def _rebundle(bundlerepo, bundleroots, unknownhead):
+    '''
+    A bundle may include more revisions than the user requested. For example,
+    if the user asks for one revision, the bundle may also contain its
+    descendants. This function filters out the revisions the user did not
+    request.
+    '''
+    parts = []
+
+    version = '02'
+    outgoing = discovery.outgoing(bundlerepo, commonheads=bundleroots,
+                                  missingheads=[unknownhead])
+    cgstream = changegroup.makestream(bundlerepo, outgoing, version, 'pull')
+    cgstream = util.chunkbuffer(cgstream).read()
+    cgpart = bundle2.bundlepart('changegroup', data=cgstream)
+    cgpart.addparam('version', version)
+    parts.append(cgpart)
+
+    try:
+        treemod = extensions.find('treemanifest')
+    except KeyError:
+        pass
+    else:
+        if treemod._cansendtrees(bundlerepo, outgoing.missing):
+            treepart = treemod.createtreepackpart(bundlerepo, outgoing,
+                                                  treemod.TREEGROUP_PARTTYPE2)
+            parts.append(treepart)
+
+    return parts
+
+def _getbundleroots(oldrepo, bundlerepo, bundlerevs):
+    cl = bundlerepo.changelog
+    bundleroots = []
+    for rev in bundlerevs:
+        node = cl.node(rev)
+        parents = cl.parents(node)
+        for parent in parents:
+            # include all revs that exist in the main repo
+            # to make sure that the bundle can apply client-side
+            if parent in oldrepo:
+                bundleroots.append(parent)
+    return bundleroots
+
+def _needsrebundling(head, bundlerepo):
+    bundleheads = list(bundlerepo.revs('heads(bundle())'))
+    return not (len(bundleheads) == 1 and
+                bundlerepo[bundleheads[0]].node() == head)
+
+def _generateoutputparts(head, bundlerepo, bundleroots, bundlefile):
+    '''generates the bundle parts that will be sent to the user
+
+    returns a list of bundle2 parts
+    '''
+    parts = []
+    if not _needsrebundling(head, bundlerepo):
+        with util.posixfile(bundlefile, "rb") as f:
+            unbundler = exchange.readbundle(bundlerepo.ui, f, bundlefile)
+            if isinstance(unbundler, changegroup.cg1unpacker):
+                part = bundle2.bundlepart('changegroup',
+                                          data=unbundler._stream.read())
+                part.addparam('version', '01')
+                parts.append(part)
+            elif isinstance(unbundler, bundle2.unbundle20):
+                haschangegroup = False
+                for part in unbundler.iterparts():
+                    if part.type == 'changegroup':
+                        haschangegroup = True
+                    newpart = bundle2.bundlepart(part.type, data=part.read())
+                    for key, value in part.params.iteritems():
+                        newpart.addparam(key, value)
+                    parts.append(newpart)
+
+                if not haschangegroup:
+                    raise error.Abort(
+                        'unexpected bundle without changegroup part, ' +
+                        'head: %s' % hex(head),
+                        hint='report to administrator')
+            else:
+                raise error.Abort('unknown bundle type')
+    else:
+        parts = _rebundle(bundlerepo, bundleroots, head)
+
+    return parts
+
+def getbundlechunks(orig, repo, source, heads=None, bundlecaps=None, **kwargs):
+    heads = heads or []
+    # newheads are parents of roots of scratch bundles that were requested
+    newphases = {}
+    scratchbundles = []
+    newheads = []
+    scratchheads = []
+    nodestobundle = {}
+    allbundlestocleanup = []
+    try:
+        for head in heads:
+            if head not in repo.changelog.nodemap:
+                if head not in nodestobundle:
+                    newbundlefile = common.downloadbundle(repo, head)
+                    bundlepath = "bundle:%s+%s" % (repo.root, newbundlefile)
+                    bundlerepo = hg.repository(repo.ui, bundlepath)
+
+                    allbundlestocleanup.append((bundlerepo, newbundlefile))
+                    bundlerevs = set(_readbundlerevs(bundlerepo))
+                    bundlecaps = _includefilelogstobundle(
+                        bundlecaps, bundlerepo, bundlerevs, repo.ui)
+                    cl = bundlerepo.changelog
+                    bundleroots = _getbundleroots(repo, bundlerepo, bundlerevs)
+                    for rev in bundlerevs:
+                        node = cl.node(rev)
+                        newphases[hex(node)] = str(phases.draft)
+                        nodestobundle[node] = (bundlerepo, bundleroots,
+                                               newbundlefile)
+
+                scratchbundles.append(
+                    _generateoutputparts(head, *nodestobundle[head]))
+                newheads.extend(bundleroots)
+                scratchheads.append(head)
+    finally:
+        for bundlerepo, bundlefile in allbundlestocleanup:
+            bundlerepo.close()
+            try:
+                os.unlink(bundlefile)
+            except (IOError, OSError):
+                # if we can't clean up the file then just ignore the error,
+                # no need to fail
+                pass
+
+    pullfrombundlestore = bool(scratchbundles)
+    wrappedchangegrouppart = False
+    wrappedlistkeys = False
+    oldchangegrouppart = exchange.getbundle2partsmapping['changegroup']
+    try:
+        def _changegrouppart(bundler, *args, **kwargs):
+            # Order is important here. First add non-scratch part
+            # and only then add parts with scratch bundles because
+            # non-scratch part contains parents of roots of scratch bundles.
+            result = oldchangegrouppart(bundler, *args, **kwargs)
+            for bundle in scratchbundles:
+                for part in bundle:
+                    bundler.addpart(part)
+            return result
+
+        exchange.getbundle2partsmapping['changegroup'] = _changegrouppart
+        wrappedchangegrouppart = True
+
+        def _listkeys(orig, self, namespace):
+            origvalues = orig(self, namespace)
+            if namespace == 'phases' and pullfrombundlestore:
+                if origvalues.get('publishing') == 'True':
+                    # Make repo non-publishing to preserve draft phase
+                    del origvalues['publishing']
+                origvalues.update(newphases)
+            return origvalues
+
+        extensions.wrapfunction(localrepo.localrepository, 'listkeys',
+                                _listkeys)
+        wrappedlistkeys = True
+        heads = list((set(newheads) | set(heads)) - set(scratchheads))
+        result = orig(repo, source, heads=heads,
+                      bundlecaps=bundlecaps, **kwargs)
+    finally:
+        if wrappedchangegrouppart:
+            exchange.getbundle2partsmapping['changegroup'] = oldchangegrouppart
+        if wrappedlistkeys:
+            extensions.unwrapfunction(localrepo.localrepository, 'listkeys',
+                                      _listkeys)
+    return result
+
+def _lookupwrap(orig):
+    def _lookup(repo, proto, key):
+        localkey = encoding.tolocal(key)
+
+        if isinstance(localkey, str) and _scratchbranchmatcher(localkey):
+            scratchnode = repo.bundlestore.index.getnode(localkey)
+            if scratchnode:
+                return "%s %s\n" % (1, scratchnode)
+            else:
+                return "%s %s\n" % (0, 'scratch branch %s not found' % localkey)
+        else:
+            try:
+                r = hex(repo.lookup(localkey))
+                return "%s %s\n" % (1, r)
+            except Exception as inst:
+                if repo.bundlestore.index.getbundle(localkey):
+                    return "%s %s\n" % (1, localkey)
+                else:
+                    r = str(inst)
+                    return "%s %s\n" % (0, r)
+    return _lookup
+
+def _decodebookmarks(stream):
+    sizeofjsonsize = struct.calcsize('>i')
+    size = struct.unpack('>i', stream.read(sizeofjsonsize))[0]
+    unicodedict = json.loads(stream.read(size))
+    # the python json module always returns unicode strings. We need to
+    # convert them back to byte strings
+    result = {}
+    for bookmark, node in unicodedict.iteritems():
+        bookmark = bookmark.encode('ascii')
+        node = node.encode('ascii')
+        result[bookmark] = node
+    return result
+
+def _update(orig, ui, repo, node=None, rev=None, **opts):
+    if rev and node:
+        raise error.Abort(_("please specify just one revision"))
+
+    if not opts.get('date') and (rev or node) not in repo:
+        mayberemote = rev or node
+        mayberemote = _tryhoist(ui, mayberemote)
+        dopull = False
+        kwargs = {}
+        if _scratchbranchmatcher(mayberemote):
+            dopull = True
+            kwargs['bookmark'] = [mayberemote]
+        elif len(mayberemote) == 40 and _maybehash(mayberemote):
+            dopull = True
+            kwargs['rev'] = [mayberemote]
+
+        if dopull:
+            ui.warn(
+                _("'%s' does not exist locally - looking for it " +
+                  "remotely...\n") % mayberemote)
+            # Try pulling node from remote repo
+            try:
+                cmdname = '^pull'
+                pullcmd = commands.table[cmdname][0]
+                pullopts = dict(opt[1:3] for opt in commands.table[cmdname][1])
+                pullopts.update(kwargs)
+                pullcmd(ui, repo, **pullopts)
+            except Exception:
+                ui.warn(_('pull failed: %s\n') % sys.exc_info()[1])
+            else:
+                ui.warn(_("'%s' found remotely\n") % mayberemote)
+    return orig(ui, repo, node, rev, **opts)
+
+def _pull(orig, ui, repo, source="default", **opts):
+    # Copy paste from `pull` command
+    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
+
+    scratchbookmarks = {}
+    unfi = repo.unfiltered()
+    unknownnodes = []
+    for rev in opts.get('rev', []):
+        if rev not in unfi:
+            unknownnodes.append(rev)
+    if opts.get('bookmark'):
+        bookmarks = []
+        revs = opts.get('rev') or []
+        for bookmark in opts.get('bookmark'):
+            if _scratchbranchmatcher(bookmark):
+                # rev is not known yet
+                # it will be fetched with listkeyspatterns next
+                scratchbookmarks[bookmark] = 'REVTOFETCH'
+            else:
+                bookmarks.append(bookmark)
+
+        if scratchbookmarks:
+            other = hg.peer(repo, opts, source)
+            fetchedbookmarks = other.listkeyspatterns(
+                'bookmarks', patterns=scratchbookmarks)
+            for bookmark in scratchbookmarks:
+                if bookmark not in fetchedbookmarks:
+                    raise error.Abort('remote bookmark %s not found!' %
+                                      bookmark)
+                scratchbookmarks[bookmark] = fetchedbookmarks[bookmark]
+                revs.append(fetchedbookmarks[bookmark])
+        opts['bookmark'] = bookmarks
+        opts['rev'] = revs
+
+    try:
+        inhibitmod = extensions.find('inhibit')
+    except KeyError:
+        # Ignore if inhibit is not enabled
+        pass
+    else:
+        # Pulling revisions that were filtered results in an error.
+        # Let's inhibit them
+        unfi = repo.unfiltered()
+        for rev in opts.get('rev', []):
+            try:
+                repo[rev]
+            except error.FilteredRepoLookupError:
+                node = unfi[rev].node()
+                inhibitmod.revive([repo.unfiltered()[node]])
+            except error.RepoLookupError:
+                pass
+
+    if scratchbookmarks or unknownnodes:
+        # Set anyincoming to True
+        extensions.wrapfunction(discovery, 'findcommonincoming',
+                                _findcommonincoming)
+    try:
+        # Remote scratch bookmarks will be deleted because remotenames doesn't
+        # know about them. Let's save them before the pull and restore after
+        remotescratchbookmarks = _readscratchremotebookmarks(ui, repo, source)
+        result = orig(ui, repo, source, **opts)
+        # TODO(stash): race condition is possible
+        # if scratch bookmarks were updated right after orig.
+        # But that's unlikely and shouldn't be harmful.
+        if common.isremotebooksenabled(ui):
+            remotescratchbookmarks.update(scratchbookmarks)
+            _saveremotebookmarks(repo, remotescratchbookmarks, source)
+        else:
+            _savelocalbookmarks(repo, scratchbookmarks)
+        return result
+    finally:
+        if scratchbookmarks:
+            extensions.unwrapfunction(discovery, 'findcommonincoming')
+
+def _readscratchremotebookmarks(ui, repo, other):
+    if common.isremotebooksenabled(ui):
+        remotenamesext = extensions.find('remotenames')
+        remotepath = remotenamesext.activepath(repo.ui, other)
+        result = {}
+        # Let's refresh remotenames to make sure we have it up to date
+        # Seems that `repo.names['remotebookmarks']` may return stale bookmarks
+        # and it results in deleting scratch bookmarks. Our best guess at how
+        # to fix it is to use `clearnames()`
+        repo._remotenames.clearnames()
+        for remotebookmark in repo.names['remotebookmarks'].listnames(repo):
+            path, bookname = remotenamesext.splitremotename(remotebookmark)
+            if path == remotepath and _scratchbranchmatcher(bookname):
+                nodes = repo.names['remotebookmarks'].nodes(repo,
+                                                            remotebookmark)
+                if nodes:
+                    result[bookname] = hex(nodes[0])
+        return result
+    else:
+        return {}
+
+def _saveremotebookmarks(repo, newbookmarks, remote):
+    remotenamesext = extensions.find('remotenames')
+    remotepath = remotenamesext.activepath(repo.ui, remote)
+    branches = collections.defaultdict(list)
+    bookmarks = {}
+    remotenames = remotenamesext.readremotenames(repo)
+    for hexnode, nametype, remote, rname in remotenames:
+        if remote != remotepath:
+            continue
+        if nametype == 'bookmarks':
+            if rname in newbookmarks:
+                # It's possible that we have a normal bookmark that matches
+                # the scratch branch pattern. In this case just use the
+                # current bookmark node
+                del newbookmarks[rname]
+            bookmarks[rname] = hexnode
+        elif nametype == 'branches':
+            # saveremotenames expects 20 byte binary nodes for branches
+            branches[rname].append(bin(hexnode))
+
+    for bookmark, hexnode in newbookmarks.iteritems():
+        bookmarks[bookmark] = hexnode
+    remotenamesext.saveremotenames(repo, remotepath, branches, bookmarks)
+
+def _savelocalbookmarks(repo, bookmarks):
+    if not bookmarks:
+        return
+    with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
+        changes = []
+        for scratchbook, node in bookmarks.iteritems():
+            changectx = repo[node]
+            changes.append((scratchbook, changectx.node()))
+        repo._bookmarks.applychanges(repo, tr, changes)
+
+def _findcommonincoming(orig, *args, **kwargs):
+    common, inc, remoteheads = orig(*args, **kwargs)
+    return common, True, remoteheads
+
+def _push(orig, ui, repo, dest=None, *args, **opts):
+    bookmark = opts.get('to') or ''
+    create = opts.get('create') or False
+
+    oldphasemove = None
+    overrides = {(experimental, configbookmark): bookmark,
+                 (experimental, configcreate): create}
+
+    with ui.configoverride(overrides, 'infinitepush'):
+        scratchpush = opts.get('bundle_store')
+        if _scratchbranchmatcher(bookmark):
+            # Hack to fix interaction with remotenames. Remotenames pushes
+            # the '--to' bookmark to the server but we don't want to push the
+            # scratch bookmark to the server. Let's delete '--to' and
+            # '--create' and also set allow_anon to True (because if --to is
+            # not set remotenames will think that we are pushing an anonymous
+            # head)
+            if 'to' in opts:
+                del opts['to']
+            if 'create' in opts:
+                del opts['create']
+            opts['allow_anon'] = True
+            scratchpush = True
+            # bundle2 can be sent back after push (for example, bundle2
+            # containing `pushkey` part to update bookmarks)
+            ui.setconfig(experimental, 'bundle2.pushback', True)
+
+        ui.setconfig(experimental, confignonforwardmove,
+                     opts.get('non_forward_move'), '--non-forward-move')
+        if scratchpush:
+            ui.setconfig(experimental, configscratchpush, True)
+            oldphasemove = extensions.wrapfunction(exchange,
+                                                   '_localphasemove',
+                                                   _phasemove)
+        # Copy-paste from `push` command
+        path = ui.paths.getpath(dest, default=('default-push', 'default'))
+        if not path:
+            raise error.Abort(_('default repository not configured!'),
+                             hint=_("see 'hg help config.paths'"))
+        destpath = path.pushloc or path.loc
+        if destpath.startswith('svn+') and scratchpush:
+            raise error.Abort('infinite push does not work with svn repo',
+                              hint='did you forget to `hg push default`?')
+        # Remote scratch bookmarks will be deleted because remotenames doesn't
+        # know about them. Let's save them before the push and restore after
+        remotescratchbookmarks = _readscratchremotebookmarks(ui, repo, destpath)
+        result = orig(ui, repo, dest, *args, **opts)
+        if common.isremotebooksenabled(ui):
+            if bookmark and scratchpush:
+                other = hg.peer(repo, opts, destpath)
+                fetchedbookmarks = other.listkeyspatterns('bookmarks',
+                                                          patterns=[bookmark])
+                remotescratchbookmarks.update(fetchedbookmarks)
+            _saveremotebookmarks(repo, remotescratchbookmarks, destpath)
+    if oldphasemove:
+        exchange._localphasemove = oldphasemove
+    return result
+
+def _deleteinfinitepushbookmarks(ui, repo, path, names):
+    """Prune remote names by removing the bookmarks we don't want anymore,
+    then writing the result back to disk
+    """
+    remotenamesext = extensions.find('remotenames')
+
+    # remotename format is:
+    # (node, nametype ("branches" or "bookmarks"), remote, name)
+    nametype_idx = 1
+    remote_idx = 2
+    name_idx = 3
+    remotenames = [remotename for remotename in \
+                   remotenamesext.readremotenames(repo) \
+                   if remotename[remote_idx] == path]
+    remote_bm_names = [remotename[name_idx] for remotename in \
+                       remotenames if remotename[nametype_idx] == "bookmarks"]
+
+    for name in names:
+        if name not in remote_bm_names:
+            raise error.Abort(_("infinitepush bookmark '{}' does not exist "
+                                "in path '{}'").format(name, path))
+
+    bookmarks = {}
+    branches = collections.defaultdict(list)
+    for node, nametype, remote, name in remotenames:
+        if nametype == "bookmarks" and name not in names:
+            bookmarks[name] = node
+        elif nametype == "branches":
+            # saveremotenames wants binary nodes for branches
+            branches[name].append(bin(node))
+
+    remotenamesext.saveremotenames(repo, path, branches, bookmarks)
+
+def _phasemove(orig, pushop, nodes, phase=phases.public):
+    """prevent commits from being marked public
+
+    Since these are going to a scratch branch, they aren't really being
+    published."""
+
+    if phase != phases.public:
+        orig(pushop, nodes, phase)
+
+@exchange.b2partsgenerator(scratchbranchparttype)
+def partgen(pushop, bundler):
+    bookmark = pushop.ui.config(experimental, configbookmark)
+    create = pushop.ui.configbool(experimental, configcreate)
+    scratchpush = pushop.ui.configbool(experimental, configscratchpush)
+    if 'changesets' in pushop.stepsdone or not scratchpush:
+        return
+
+    if scratchbranchparttype not in bundle2.bundle2caps(pushop.remote):
+        return
+
+    pushop.stepsdone.add('changesets')
+    pushop.stepsdone.add('treepack')
+    if not pushop.outgoing.missing:
+        pushop.ui.status(_('no changes found\n'))
+        pushop.cgresult = 0
+        return
+
+    # This parameter tells the server that the following bundle is an
+    # infinitepush. This lets it switch the part processing to our infinitepush
+    # code path.
+    bundler.addparam("infinitepush", "True")
+
+    nonforwardmove = pushop.force or pushop.ui.configbool(experimental,
+                                                          confignonforwardmove)
+    scratchparts = bundleparts.getscratchbranchparts(pushop.repo,
+                                                     pushop.remote,
+                                                     pushop.outgoing,
+                                                     nonforwardmove,
+                                                     pushop.ui,
+                                                     bookmark,
+                                                     create)
+
+    for scratchpart in scratchparts:
+        bundler.addpart(scratchpart)
+
+    def handlereply(op):
+        # server either succeeds or aborts; no code to read
+        pushop.cgresult = 1
+
+    return handlereply
+
+bundle2.capabilities[bundleparts.scratchbranchparttype] = ()
+bundle2.capabilities[bundleparts.scratchbookmarksparttype] = ()
+
+def _getrevs(bundle, oldnode, force, bookmark):
+    'extracts and validates the revs to be imported'
+    revs = [bundle[r] for r in bundle.revs('sort(bundle())')]
+
+    # new bookmark
+    if oldnode is None:
+        return revs
+
+    # Fast forward update
+    if oldnode in bundle and list(bundle.set('bundle() & %s::', oldnode)):
+        return revs
+
+    # Forced non-fast forward update
+    if force:
+        return revs
+    else:
+        raise error.Abort(_('non-forward push'),
+                          hint=_('use --non-forward-move to override'))
+
+@contextlib.contextmanager
+def logservicecall(logger, service, **kwargs):
+    start = time.time()
+    logger(service, eventtype='start', **kwargs)
+    try:
+        yield
+        logger(service, eventtype='success',
+               elapsedms=(time.time() - start) * 1000, **kwargs)
+    except Exception as e:
+        logger(service, eventtype='failure',
+               elapsedms=(time.time() - start) * 1000, errormsg=str(e),
+               **kwargs)
+        raise
+
+def _getorcreateinfinitepushlogger(op):
+    logger = op.records['infinitepushlogger']
+    if not logger:
+        ui = op.repo.ui
+        try:
+            username = util.getuser()
+        except Exception:
+            username = 'unknown'
+        # Generate a random request id to be able to find all logged entries
+        # for the same request. Since requestid is pseudo-random it may
+        # not be unique, but we assume that (hostname, username, requestid)
+        # is unique.
+        random.seed()
+        requestid = random.randint(0, 2000000000)
+        hostname = socket.gethostname()
+        logger = functools.partial(ui.log, 'infinitepush', user=username,
+                                   requestid=requestid, hostname=hostname,
+                                   reponame=ui.config('infinitepush',
+                                                      'reponame'))
+        op.records.add('infinitepushlogger', logger)
+    else:
+        logger = logger[0]
+    return logger
+
+def processparts(orig, repo, op, unbundler):
+    if unbundler.params.get('infinitepush') != 'True':
+        return orig(repo, op, unbundler)
+
+    handleallparts = repo.ui.configbool('infinitepush', 'storeallparts')
+
+    partforwardingwhitelist = []
+    try:
+        treemfmod = extensions.find('treemanifest')
+        partforwardingwhitelist.append(treemfmod.TREEGROUP_PARTTYPE2)
+    except KeyError:
+        pass
+
+    bundler = bundle2.bundle20(repo.ui)
+    cgparams = None
+    scratchbookpart = None
+    with bundle2.partiterator(repo, op, unbundler) as parts:
+        for part in parts:
+            bundlepart = None
+            if part.type == 'replycaps':
+                # This configures the current operation to allow reply parts.
+                bundle2._processpart(op, part)
+            elif part.type == bundleparts.scratchbranchparttype:
+                # Scratch branch parts need to be converted to normal
+                # changegroup parts, and the extra parameters stored for later
+                # when we upload to the store. Eventually those parameters will
+                # be put on the actual bundle instead of this part, then we can
+                # send a vanilla changegroup instead of the scratchbranch part.
+                cgversion = part.params.get('cgversion', '01')
+                bundlepart = bundle2.bundlepart('changegroup', data=part.read())
+                bundlepart.addparam('version', cgversion)
+                cgparams = part.params
+
+                # If we're not dumping all parts into the new bundle, we need to
+                # alert the future pushkey and phase-heads handler to skip
+                # the part.
+                if not handleallparts:
+                    op.records.add(scratchbranchparttype + '_skippushkey', True)
+                    op.records.add(scratchbranchparttype + '_skipphaseheads',
+                                   True)
+            elif part.type == bundleparts.scratchbookmarksparttype:
+                # Save this for later processing. Details below.
+                #
+                # Upstream https://phab.mercurial-scm.org/D1389 and its
+                # follow-ups stop part.seek support to reduce memory usage
+                # (https://bz.mercurial-scm.org/5691). So we need to copy
+                # the part so it can be consumed later.
+                scratchbookpart = bundleparts.copiedpart(part)
+            else:
+                if handleallparts or part.type in partforwardingwhitelist:
+                    # Ideally we would not process any parts, and instead just
+                    # forward them to the bundle for storage, but since this
+                    # differs from previous behavior, we need to put it behind a
+                    # config flag for incremental rollout.
+                    bundlepart = bundle2.bundlepart(part.type, data=part.read())
+                    for key, value in part.params.iteritems():
+                        bundlepart.addparam(key, value)
+
+                    # Certain parts require a response
+                    if part.type == 'pushkey':
+                        if op.reply is not None:
+                            rpart = op.reply.newpart('reply:pushkey')
+                            rpart.addparam('in-reply-to', str(part.id),
+                                           mandatory=False)
+                            rpart.addparam('return', '1', mandatory=False)
+                else:
+                    bundle2._processpart(op, part)
+
+            if handleallparts:
+                op.records.add(part.type, {
+                    'return': 1,
+                })
+            if bundlepart:
+                bundler.addpart(bundlepart)
+
+    # If commits were sent, store them
+    if cgparams:
+        buf = util.chunkbuffer(bundler.getchunks())
+        fd, bundlefile = tempfile.mkstemp()
+        try:
+            try:
+                fp = os.fdopen(fd, 'wb')
+                fp.write(buf.read())
+            finally:
+                fp.close()
+            storebundle(op, cgparams, bundlefile)
+        finally:
+            try:
+                os.unlink(bundlefile)
+            except Exception:
+                # we would rather see the original exception
+                pass
+
+    # The scratch bookmark part is sent as part of a push backup. It needs to be
+    # processed after the main bundle has been stored, so that any commits it
+    # references are available in the store.
+    if scratchbookpart:
+        bundle2._processpart(op, scratchbookpart)
+
+def storebundle(op, params, bundlefile):
+    log = _getorcreateinfinitepushlogger(op)
+    parthandlerstart = time.time()
+    log(scratchbranchparttype, eventtype='start')
+    index = op.repo.bundlestore.index
+    store = op.repo.bundlestore.store
+    op.records.add(scratchbranchparttype + '_skippushkey', True)
+
+    bundle = None
+    try:  # guards bundle
+        bundlepath = "bundle:%s+%s" % (op.repo.root, bundlefile)
+        bundle = hg.repository(op.repo.ui, bundlepath)
+
+        bookmark = params.get('bookmark')
+        bookprevnode = params.get('bookprevnode', '')
+        create = params.get('create')
+        force = params.get('force')
+
+        if bookmark:
+            oldnode = index.getnode(bookmark)
+
+            if not oldnode and not create:
+                raise error.Abort("unknown bookmark %s" % bookmark,
+                                  hint="use --create if you want to create one")
+        else:
+            oldnode = None
+        bundleheads = bundle.revs('heads(bundle())')
+        if bookmark and len(bundleheads) > 1:
+            raise error.Abort(
+                _('cannot push more than one head to a scratch branch'))
+
+        revs = _getrevs(bundle, oldnode, force, bookmark)
+
+        # Notify the user of what is being pushed
+        plural = 's' if len(revs) > 1 else ''
+        op.repo.ui.warn(_("pushing %s commit%s:\n") % (len(revs), plural))
+        maxoutput = 10
+        for i in range(0, min(len(revs), maxoutput)):
+            firstline = bundle[revs[i]].description().split('\n')[0][:50]
+            op.repo.ui.warn(("    %s  %s\n") % (revs[i], firstline))
+
+        if len(revs) > maxoutput + 1:
+            op.repo.ui.warn(("    ...\n"))
+            firstline = bundle[revs[-1]].description().split('\n')[0][:50]
+            op.repo.ui.warn(("    %s  %s\n") % (revs[-1], firstline))
+
+        nodesctx = [bundle[rev] for rev in revs]
+        inindex = lambda rev: bool(index.getbundle(bundle[rev].hex()))
+        if bundleheads:
+            newheadscount = sum(not inindex(rev) for rev in bundleheads)
+        else:
+            newheadscount = 0
+        # If there's a bookmark specified, there should be only one head,
+        # so we choose the last node, which will be that head.
+        # If a bug or malicious client allows there to be a bookmark
+        # with multiple heads, we will place the bookmark on the last head.
+        bookmarknode = nodesctx[-1].hex() if nodesctx else None
+        key = None
+        if newheadscount:
+            with open(bundlefile, 'r') as f:
+                bundledata = f.read()
+                with logservicecall(log, 'bundlestore',
+                                    bundlesize=len(bundledata)):
+                    bundlesizelimit = 100 * 1024 * 1024  # 100 MB
+                    if len(bundledata) > bundlesizelimit:
+                        error_msg = ('bundle is too big: %d bytes. ' +
+                                     'max allowed size is 100 MB')
+                        raise error.Abort(error_msg % (len(bundledata),))
+                    key = store.write(bundledata)
+
+        with logservicecall(log, 'index', newheadscount=newheadscount), index:
+            if key:
+                index.addbundle(key, nodesctx)
+            if bookmark:
+                index.addbookmark(bookmark, bookmarknode)
+                _maybeaddpushbackpart(op, bookmark, bookmarknode,
+                                      bookprevnode, params)
+        log(scratchbranchparttype, eventtype='success',
+            elapsedms=(time.time() - parthandlerstart) * 1000)
+
+        fillmetadatabranchpattern = op.repo.ui.config(
+            'infinitepush', 'fillmetadatabranchpattern', '')
+        if bookmark and fillmetadatabranchpattern:
+            __, __, matcher = util.stringmatcher(fillmetadatabranchpattern)
+            if matcher(bookmark):
+                _asyncsavemetadata(op.repo.root,
+                                   [ctx.hex() for ctx in nodesctx])
+    except Exception as e:
+        log(scratchbranchparttype, eventtype='failure',
+            elapsedms=(time.time() - parthandlerstart) * 1000,
+            errormsg=str(e))
+        raise
+    finally:
+        if bundle:
+            bundle.close()
+
+@bundle2.b2streamparamhandler('infinitepush')
+def processinfinitepush(unbundler, param, value):
+    """ process the bundle2 stream level parameter containing whether this push
+    is an infinitepush or not. """
+    if value and unbundler.ui.configbool('infinitepush',
+                                         'bundle-stream', False):
+        pass
+
+@bundle2.parthandler(scratchbranchparttype,
+                     ('bookmark', 'bookprevnode', 'create', 'force',
+                      'pushbackbookmarks', 'cgversion'))
+def bundle2scratchbranch(op, part):
+    '''unbundle a bundle2 part containing a changegroup to store'''
+
+    bundler = bundle2.bundle20(op.repo.ui)
+    cgversion = part.params.get('cgversion', '01')
+    cgpart = bundle2.bundlepart('changegroup', data=part.read())
+    cgpart.addparam('version', cgversion)
+    bundler.addpart(cgpart)
+    buf = util.chunkbuffer(bundler.getchunks())
+
+    fd, bundlefile = tempfile.mkstemp()
+    try:
+        try:
+            fp = os.fdopen(fd, 'wb')
+            fp.write(buf.read())
+        finally:
+            fp.close()
+        storebundle(op, part.params, bundlefile)
+    finally:
+        try:
+            os.unlink(bundlefile)
+        except OSError as e:
+            if e.errno != errno.ENOENT:
+                raise
+
+    return 1
+
+@bundle2.parthandler(bundleparts.scratchbookmarksparttype)
+def bundle2scratchbookmarks(op, part):
+    '''Handler deletes bookmarks first then adds new bookmarks.
+    '''
+    index = op.repo.bundlestore.index
+    decodedbookmarks = _decodebookmarks(part)
+    toinsert = {}
+    todelete = []
+    for bookmark, node in decodedbookmarks.iteritems():
+        if node:
+            toinsert[bookmark] = node
+        else:
+            todelete.append(bookmark)
+    log = _getorcreateinfinitepushlogger(op)
+    with logservicecall(log, bundleparts.scratchbookmarksparttype), index:
+        if todelete:
+            index.deletebookmarks(todelete)
+        if toinsert:
+            index.addmanybookmarks(toinsert)
+
+def _maybeaddpushbackpart(op, bookmark, newnode, oldnode, params):
+    if params.get('pushbackbookmarks'):
+        if op.reply and 'pushback' in op.reply.capabilities:
+            params = {
+                'namespace': 'bookmarks',
+                'key': bookmark,
+                'new': newnode,
+                'old': oldnode,
+            }
+            op.reply.newpart('pushkey', mandatoryparams=params.iteritems())
+
+def bundle2pushkey(orig, op, part):
+    '''Wrapper of bundle2.handlepushkey()
+
+    The only goal is to skip calling the original function if the flag is
+    set. The flag is set if an infinitepush push is happening.
+    '''
+    if op.records[scratchbranchparttype + '_skippushkey']:
+        if op.reply is not None:
+            rpart = op.reply.newpart('reply:pushkey')
+            rpart.addparam('in-reply-to', str(part.id), mandatory=False)
+            rpart.addparam('return', '1', mandatory=False)
+        return 1
+
+    return orig(op, part)
+
+def bundle2handlephases(orig, op, part):
+    '''Wrapper of bundle2.handlephases()
+
+    The only goal is to skip calling the original function if the flag is
+    set. The flag is set if an infinitepush push is happening.
+    '''
+
+    if op.records[scratchbranchparttype + '_skipphaseheads']:
+        return
+
+    return orig(op, part)
+
+def _asyncsavemetadata(root, nodes):
+    '''starts a separate process that fills metadata for the nodes
+
+    This function creates a separate process and doesn't wait for its
+    completion. This was done to avoid slowing down pushes.
+    '''
+
+    maxnodes = 50
+    if len(nodes) > maxnodes:
+        return
+    nodesargs = []
+    for node in nodes:
+        nodesargs.append('--node')
+        nodesargs.append(node)
+    with open(os.devnull, 'w+b') as devnull:
+        cmdline = [util.hgexecutable(), 'debugfillinfinitepushmetadata',
+                   '-R', root] + nodesargs
+        # Process will run in background. We don't care about the return code
+        subprocess.Popen(cmdline, close_fds=True, shell=False,
+                         stdin=devnull, stdout=devnull, stderr=devnull)
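
To illustrate the scratch-branch detection used above in commonsetup(),
exbookmarks(), _push() and _update(): it is simply a util.stringmatcher built
from the infinitepush.branchpattern config. A small standalone sketch, where
're:scratch/.+' is only an assumed example value for that config, not a
default shipped by the extension:

  from mercurial import util

  kind, pat, matcher = util.stringmatcher('re:scratch/.+')
  assert matcher('scratch/featurework')  # matches -> handled as scratch push
  assert not matcher('stable')           # no match -> normal push/pull path

Anything accepted by the matcher is routed through the bundlestore code paths
above rather than the regular exchange machinery.
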
diff --git a/hgext/infinitepush/README b/hgext/infinitepush/README
new file mode 100644
--- /dev/null
+++ b/hgext/infinitepush/README
@@ -0,0 +1,23 @@
+## What is it?
+
+This extension adds the ability to save certain pushes to a remote blob store
+as bundles and to serve commits from that remote blob store.
+The revisions are stored on disk or in everstore.
+The metadata is stored in sql or on disk.
+
+## Config options
+
+infinitepush.branchpattern: pattern to detect a scratchbranch, example
+                            're:scratch/.+'
+
+infinitepush.indextype: disk or sql for the metadata
+infinitepush.reponame: only relevant for sql metadata backend, reponame to put in
+                       sql
+
+infinitepush.indexpath: only relevant for ondisk metadata backend, the path to
+                        store the index on disk. If not set, it will be under
+                        .hg in a folder named filebundlestore
+
+infinitepush.storepath: only relevant for ondisk metadata backend, the path to
+                        store the bundles. If not set, it will be
+                        .hg/filebundlestore
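
To illustrate the config options above, a minimal on-disk setup could look
roughly like the following; the paths, bookmark name and branch pattern are
assumed examples, not defaults provided by the extension:

  # server-side .hg/hgrc (assumed example)
  [extensions]
  infinitepush =
  [infinitepush]
  server = yes
  indextype = disk
  branchpattern = re:scratch/.+

  # client-side usage (illustrative)
  $ hg push -r . --to scratch/mytask --create
  $ hg book --list-remote 'scratch/*'

With a setup like this, a push to a bookmark matching the pattern goes through
the scratchbranch part handler above: the bundle is written to the store by
storebundle() and the bookmark is recorded in the index rather than in the
server's bookmark namespace.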



To: pulkit, #hg-reviewers
Cc: mercurial-devel

