D4928: sqlitestore: file storage backend using SQLite (RFC)

indygreg (Gregory Szorc) phabricator at mercurial-scm.org
Tue Oct 9 15:53:21 UTC 2018


indygreg created this revision.
Herald added subscribers: mercurial-devel, mjpieters.
Herald added a reviewer: hg-reviewers.

REVISION SUMMARY
  DON'T LAND. STILL EXPERIMENTAL.
  
  This commit provides an extension which uses SQLite to store file
  data (as opposed to revlogs).
  
  As the inline documentation describes, there are still several
  aspects to the extension that are incomplete. But it's a start.
  The extension does support basic clone, checkout, and commit
  workflows, which makes it suitable for simple use cases.
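  For reference, enabling the backend looks roughly like this (per the
  extension's inline docstring; newly created repositories then use
  SQLite for file storage):

    [extensions]
    sqlitestore =

    [storage]
    new-repo-backend = sqlite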
  
  One notable missing feature is support for "bundlerepos." This is
  probably responsible for most of the test failures when the extension
  is activated as part of the test suite.
  
  All revision data is stored in SQLite. Data is stored as either
  raw chunks or zstd-compressed chunks, and the stored chunks are
  deltas. This makes things very similar to revlogs.
  
  Unlike revlogs, the extension doesn't yet enforce a limit on delta
  chain length. This is an obvious limitation and should be addressed.
  This is somewhat mitigated by the use of zstd, which is much faster
  than zlib to decompress.
  
  There is a dedicated table for storing deltas. Deltas are stored
  by the SHA-1 hash of their content. The "fileindex" table has
  columns that reference the delta for each revision and the base
  revision that delta should be applied against. A recursive SQL query
  is used to resolve the delta chain along with the delta data.
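  For illustration, a simplified form of that recursive query against
  the "delta" and "fileindex" tables defined in this patch (the real
  query also short-circuits at revisions already in the fulltext cache):

    WITH RECURSIVE deltachain(deltaid, baseid) AS (
        SELECT deltaid, deltabaseid FROM fileindex
            WHERE pathid=? AND node=?
        UNION ALL
        SELECT fileindex.deltaid, fileindex.deltabaseid
            FROM fileindex, deltachain
            WHERE fileindex.id=deltachain.baseid
    )
    SELECT compression, delta FROM deltachain, delta
    WHERE delta.id=deltachain.deltaid;

  Rows come back from the requested revision toward the chain base, so
  the deltas are reversed before being applied on top of the base text.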
  
  By storing deltas by hash, we are able to de-duplicate delta storage!
  With revlogs, the same delta appearing in different revlogs is stored
  once per revlog. In this scheme, inserting a duplicate delta is a
  no-op and delta chains simply reference the existing delta.
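  Concretely, the "hash" column on the delta table is declared UNIQUE,
  so insertion can simply attempt the INSERT and, on a constraint
  violation, look up the existing row (a sketch of the approach used in
  this patch):

    -- hash is declared "hash BLOB UNIQUE ON CONFLICT ABORT"
    INSERT INTO delta (compression, hash, delta) VALUES (?, ?, ?);
    -- if that aborts with an integrity error, reuse the existing row:
    SELECT id FROM delta WHERE hash=?;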
  
  When initially implementing this extension, I did not have
  content-indexed deltas and deltas could be duplicated across files
  (just like revlogs). When I implemented content-indexed deltas, the
  size of the SQLite database for a full clone of mozilla-unified
  dropped:
  
  before: 2,554,261,504 bytes
  after:  2,488,754,176 bytes
  
  Surprisingly, this is still larger than the total size of the revlog
  files:
  
  revlog files: 2,104,861,230 bytes
  du -b:        2,254,381,614 bytes
  
  I would have expected storage to be smaller since we're not limiting
  delta chain length and since we're using zstd instead of zlib. I
  suspect the SQLite indexes and per-column overhead account for the
  bulk of the difference. (Keep in mind that revlog uses a 64-byte
  packed struct for revision index data and deltas are stored without
  padding. Aside from the 12 unused bytes in the 32-byte node field,
  revlogs are pretty efficient.) Another source of overhead is file
  name storage. With revlogs, file names are stored in the filesystem.
  But with SQLite, we need to store file names in the database. This is
  roughly equivalent to the size of the fncache file, which for the
  mozilla-unified repository is ~34MB.
  
  Since the SQLite database isn't append-only and since delta chains
  can reference any delta, this opens some interesting possibilities.
  For example, we could store deltas in reverse, such that fulltexts
  are stored for newer revisions and deltas are applied to reconstruct
  older revisions. This is likely a more optimal storage strategy for
  version control, as new data tends to be more frequently accessed
  than old data. We would obviously need wire protocol support for
  transferring revision data from newest to oldest. And we would
  probably need some kind of mechanism for "re-encoding" stores. But
  it should be doable.
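  As a purely hypothetical sketch (not implemented in this patch),
  re-encoding a file's chain in reverse would boil down to inserting the
  recomputed deltas and repointing rows in fileindex, since deltabaseid
  can reference any other revision:

    -- hypothetical re-encoding step for one revision
    UPDATE fileindex
        SET deltaid = :newdeltaid, deltabaseid = :newerrevisionid
        WHERE pathid = :pathid AND revnum = :revnum;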
  
  This extension is very much of experimental quality. There are a handful
  of features that don't work. It probably isn't suitable for day-to-day
  use. But it could be used in limited cases (e.g. read-only checkouts
  like in CI). And it is also a good proving ground for alternate
  storage backends. As we continue to define interfaces for all things
  storage, it will be useful to have a viable alternate storage backend
  to see how things shake out in practice.
  
  There are currently a handful of failures in test-storage.py. Most
  are due to lack of censoring support. The storage-level unit tests
  have proved insanely useful when developing this extension. They
  caught numerous bugs during development, and I'm convinced they are
  the way forward for ensuring alternate storage backends are
  compatible.

REPOSITORY
  rHG Mercurial

REVISION DETAIL
  https://phab.mercurial-scm.org/D4928

AFFECTED FILES
  hgext/sqlitestore.py
  tests/test-storage.py

CHANGE DETAILS

diff --git a/tests/test-storage.py b/tests/test-storage.py
--- a/tests/test-storage.py
+++ b/tests/test-storage.py
@@ -17,6 +17,10 @@
     storage as storagetesting,
 )
 
+from hgext import (
+    sqlitestore,
+)
+
 STATE = {
     'lastindex': 0,
     'ui': uimod.ui(),
@@ -69,5 +73,33 @@
                                                              maketransaction,
                                                              addrawrevision)
 
+def makesqlitefile(self):
+    path = STATE['vfs'].join('db-%d.db' % STATE['lastindex'])
+    STATE['lastindex'] += 1
+
+    db = sqlitestore.makedb(path)
+
+    return sqlitestore.sqlitefilestore(db, 'dummy-path')
+
+def addrawrevisionsqlite(self, fl, tr, node, p1, p2, linkrev, rawtext=None,
+                         delta=None, censored=False, ellipsis=False,
+                         extstored=False):
+    if censored or ellipsis or extstored:
+        raise error.Abort('custom storage flags not supported')
+
+    if rawtext is not None:
+        fl._addrawrevision(node, rawtext, tr, linkrev, p1, p2)
+    elif delta is not None:
+        raise error.Abort('storing raw deltas not yet supported')
+    else:
+        raise error.Abort('must supply rawtext or delta arguments')
+
+sqlitefileindextests = storagetesting.makeifileindextests(
+    makesqlitefile, maketransaction, addrawrevisionsqlite)
+sqlitefiledatatests = storagetesting.makeifiledatatests(
+    makesqlitefile, maketransaction, addrawrevisionsqlite)
+sqlitefilemutationtests = storagetesting.makeifilemutationtests(
+    makesqlitefile, maketransaction, addrawrevisionsqlite)
+
 if __name__ == '__main__':
     silenttestrunner.main(__name__)
diff --git a/hgext/sqlitestore.py b/hgext/sqlitestore.py
new file mode 100644
--- /dev/null
+++ b/hgext/sqlitestore.py
@@ -0,0 +1,889 @@
+# sqlitestore.py - Storage backend that uses SQLite
+#
+# Copyright 2018 Gregory Szorc <gregory.szorc at gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+"""store repository data in a SQLite database (EXPERIMENTAL)
+
+The sqlitestore extension enables the storage of repository data in SQLite.
+
+This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
+GUARANTEES. This means that repositories created with this extension may
+only be usable with the exact version of this extension that was used. The
+extension attempts to enforce this in order to prevent repository corruption.
+
+In addition, several features are not yet supported or have known bugs:
+
+* Only some data is stored in SQLite. Changeset, manifest, and other repository
+  data is not yet stored in SQLite.
+* Transactions are not robust. If the process is aborted at the right time
+  during transaction close/rollback, data could be in an inconsistent state.
+  This problem will diminish once all repository data is tracked by SQLite.
+* Bundle repositories do not work (the ability to use e.g.
+  `hg -R <bundle-file> log` to automatically overlay a bundle on top of the
+  existing repository).
+
+To use, activate the extension and set the ``storage.new-repo-backend`` config
+option to ``sqlite``; newly created repositories will then use SQLite storage.
+"""
+
+from __future__ import absolute_import
+
+import hashlib
+import sqlite3
+import threading
+
+from mercurial.i18n import _
+from mercurial.node import (
+    nullid,
+    nullrev,
+    short,
+)
+from mercurial import (
+    ancestor,
+    dagop,
+    error,
+    extensions,
+    localrepo,
+    mdiff,
+    pycompat,
+    repository,
+    util,
+    verify,
+    zstd,
+)
+from mercurial.thirdparty import (
+    attr,
+)
+from mercurial.utils import (
+    interfaceutil,
+    storageutil,
+)
+
+# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
+# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
+# be specifying the version(s) of Mercurial they are tested with, or
+# leave the attribute unspecified.
+testedwith = 'ships-with-hg-core'
+
+REQUIREMENT = b'exp-sqlite-001'
+
+CURRENT_SCHEMA_VERSION = 1
+
+COMPRESSION_NONE = 1
+COMPRESSION_ZSTD = 2
+
+CREATE_SCHEMA = [
+    # Deltas are stored as content-indexed blobs.
+    # compression column defines COMPRESSION_* constant for how the
+    # delta is encoded.
+
+    b'CREATE TABLE delta ('
+    b'    id INTEGER PRIMARY KEY, '
+    b'    compression INTEGER NOT NULL, '
+    b'    hash BLOB UNIQUE ON CONFLICT ABORT, '
+    b'    delta BLOB NOT NULL '
+    b')',
+
+    # Tracked paths are denormalized to integers to avoid redundant
+    # storage of the path name.
+    b'CREATE TABLE filepath ('
+    b'    id INTEGER PRIMARY KEY, '
+    b'    path BLOB NOT NULL '
+    b')',
+
+    b'CREATE UNIQUE INDEX filepath_path '
+    b'    ON filepath (path)',
+
+    # We have a single table for all file revision data.
+    # Each file revision is uniquely described by a (path, rev) and
+    # (path, node).
+    #
+    # Revision data is stored as a pointer to the delta producing this
+    # revision and the file revision whose delta should be applied before
+    # that one. One can reconstruct the delta chain by recursively following
+    # the delta base revision pointers until one encounters NULL.
+    b'CREATE TABLE fileindex ('
+    b'    id INTEGER PRIMARY KEY, '
+    b'    pathid INTEGER REFERENCES filepath(id), '
+    b'    revnum INTEGER NOT NULL, '
+    b'    p1rev INTEGER NOT NULL, '
+    b'    p2rev INTEGER NOT NULL, '
+    b'    linkrev INTEGER NOT NULL, '
+    b'    deltaid INTEGER REFERENCES delta(id), '
+    b'    deltabaseid INTEGER REFERENCES fileindex(id), '
+    b'    node BLOB NOT NULL '
+    b')',
+
+    b'CREATE UNIQUE INDEX fileindex_pathrevnum '
+    b'    ON fileindex (pathid, revnum)',
+
+    b'CREATE UNIQUE INDEX fileindex_pathnode '
+    b'    ON fileindex (pathid, node)',
+
+    # Provide a view to facilitate simpler querying.
+    b'CREATE VIEW filedata AS '
+    b'SELECT '
+    b'    fileindex.id AS id, '
+    b'    filepath.id AS pathid, '
+    b'    filepath.path AS path, '
+    b'    fileindex.revnum AS revnum, '
+    b'    fileindex.node AS node, '
+    b'    fileindex.p1rev AS p1rev, '
+    b'    fileindex.p2rev AS p2rev, '
+    b'    fileindex.linkrev AS linkrev, '
+    b'    fileindex.deltaid AS deltaid, '
+    b'    fileindex.deltabaseid AS deltabaseid '
+    b'FROM filepath, fileindex '
+    b'WHERE fileindex.pathid=filepath.id',
+
+    b'PRAGMA user_version=%d' % CURRENT_SCHEMA_VERSION,
+]
+
+class SQLiteStoreError(error.StorageError):
+    pass
+
+@attr.s
+class revisionentry(object):
+    rid = attr.ib()
+    rev = attr.ib()
+    node = attr.ib()
+    p1rev = attr.ib()
+    p2rev = attr.ib()
+    p1node = attr.ib()
+    p2node = attr.ib()
+    linkrev = attr.ib()
+
+@interfaceutil.implementer(repository.irevisiondelta)
+@attr.s(slots=True)
+class sqliterevisiondelta(object):
+    node = attr.ib()
+    p1node = attr.ib()
+    p2node = attr.ib()
+    basenode = attr.ib()
+    flags = attr.ib()
+    baserevisionsize = attr.ib()
+    revision = attr.ib()
+    delta = attr.ib()
+    linknode = attr.ib(default=None)
+
+@interfaceutil.implementer(repository.iverifyproblem)
+@attr.s(frozen=True)
+class sqliteproblem(object):
+    warning = attr.ib(default=None)
+    error = attr.ib(default=None)
+    node = attr.ib(default=None)
+
+@interfaceutil.implementer(repository.ifilestorage)
+class sqlitefilestore(object):
+    """Implements storage for an individual tracked path."""
+
+    def __init__(self, db, path):
+        self._db = db
+        self._path = path
+
+        self._pathid = None
+
+        # revnum -> node
+        self._revtonode = {}
+        # node -> revnum
+        self._nodetorev = {}
+        # node -> data structure
+        self._revisions = {}
+
+        self._revisioncache = util.lrucachedict(10)
+
+        self._refreshindex()
+
+        self._cctx = zstd.ZstdCompressor(level=3)
+        self._dctx = zstd.ZstdDecompressor()
+
+    def _refreshindex(self):
+        self._revtonode = {}
+        self._nodetorev = {}
+        self._revisions = {}
+
+        res = list(self._db.execute(
+            b'SELECT id FROM filepath WHERE path=?', (self._path,)))
+
+        if not res:
+            self._pathid = None
+            return
+
+        self._pathid = res[0][0]
+
+        res = self._db.execute(
+            b'SELECT id, revnum, node, p1rev, p2rev, linkrev '
+            b'FROM fileindex '
+            b'WHERE pathid=? '
+            b'ORDER BY revnum ASC',
+            (self._pathid,))
+
+        for i, row in enumerate(res):
+            rid, rev, node, p1rev, p2rev, linkrev = row
+
+            if i != rev:
+                raise SQLiteStoreError(_('sqlite database has inconsistent '
+                                         'revision numbers'))
+
+            if p1rev == nullrev:
+                p1node = nullid
+            else:
+                p1node = self._revtonode[p1rev]
+
+            if p2rev == nullrev:
+                p2node = nullid
+            else:
+                p2node = self._revtonode[p2rev]
+
+            entry = revisionentry(
+                rid=rid,
+                rev=rev,
+                node=node,
+                p1rev=p1rev,
+                p2rev=p2rev,
+                p1node=p1node,
+                p2node=p2node,
+                linkrev=linkrev)
+
+            self._revtonode[rev] = node
+            self._nodetorev[node] = rev
+            self._revisions[node] = entry
+
+    # Start of ifileindex interface.
+
+    def __len__(self):
+        return len(self._revisions)
+
+    def __iter__(self):
+        return iter(pycompat.xrange(len(self._revisions)))
+
+    def revs(self, start=0, stop=None):
+        return storageutil.iterrevs(len(self._revisions), start=start,
+                                    stop=stop)
+
+    def parents(self, node):
+        if node == nullid:
+            return nullid, nullid
+
+        if node not in self._revisions:
+            raise error.LookupError(node, self._path, _('no node'))
+
+        entry = self._revisions[node]
+        return entry.p1node, entry.p2node
+
+    def parentrevs(self, rev):
+        if rev == nullrev:
+            return nullrev, nullrev
+
+        if rev not in self._revtonode:
+            raise IndexError(rev)
+
+        entry = self._revisions[self._revtonode[rev]]
+        return entry.p1rev, entry.p2rev
+
+    def rev(self, node):
+        if node == nullid:
+            return nullrev
+
+        if node not in self._nodetorev:
+            raise error.LookupError(node, self._path, _('no node'))
+
+        return self._nodetorev[node]
+
+    def node(self, rev):
+        if rev == nullrev:
+            return nullid
+
+        if rev not in self._revtonode:
+            raise IndexError(rev)
+
+        return self._revtonode[rev]
+
+    def lookup(self, node):
+        return storageutil.fileidlookup(self, node, self._path)
+
+    def linkrev(self, rev):
+        if rev == nullrev:
+            return nullrev
+
+        if rev not in self._revtonode:
+            raise IndexError(rev)
+
+        entry = self._revisions[self._revtonode[rev]]
+        return entry.linkrev
+
+    def iscensored(self, rev):
+        if rev == nullrev:
+            return False
+
+        if rev not in self._revtonode:
+            raise IndexError(rev)
+
+        return False
+
+    def commonancestorsheads(self, node1, node2):
+        rev1 = self.rev(node1)
+        rev2 = self.rev(node2)
+
+        ancestors = ancestor.commonancestorsheads(self.parentrevs, rev1, rev2)
+        return pycompat.maplist(self.node, ancestors)
+
+    def descendants(self, revs):
+        # TODO we could implement this using a recursive SQL query, which
+        # might be faster.
+        return dagop.descendantrevs(revs, self.revs, self.parentrevs)
+
+    def heads(self, start=None, stop=None):
+        if start is None and stop is None:
+            if not len(self):
+                return [nullid]
+
+        startrev = self.rev(start) if start is not None else nullrev
+        stoprevs = {self.rev(n) for n in stop or []}
+
+        revs = dagop.headrevssubset(self.revs, self.parentrevs,
+                                    startrev=startrev, stoprevs=stoprevs)
+
+        return [self.node(rev) for rev in revs]
+
+    def children(self, node):
+        rev = self.rev(node)
+
+        res = self._db.execute(
+            b'SELECT'
+            b'  node '
+            b'  FROM filedata '
+            b'  WHERE path=? AND (p1rev=? OR p2rev=?) '
+            b'  ORDER BY revnum ASC',
+            (self._path, rev, rev))
+
+        return [row[0] for row in res]
+
+    # End of ifileindex interface.
+
+    # Start of ifiledata interface.
+
+    def size(self, rev):
+        if rev == nullrev:
+            return 0
+
+        if rev not in self._revtonode:
+            raise IndexError(rev)
+
+        node = self._revtonode[rev]
+
+        if self.renamed(node):
+            return len(self.read(node))
+
+        return len(self.revision(node))
+
+    def revision(self, node, raw=False):
+        if node in (nullid, nullrev):
+            return b''
+
+        if isinstance(node, int):
+            node = self.node(node)
+
+        if node not in self._nodetorev:
+            raise error.LookupError(node, self._path, _('no node'))
+
+        if node in self._revisioncache:
+            return self._revisioncache[node]
+
+        # Because we have a fulltext revision cache, we are able to
+        # short-circuit delta chain traversal and decompression as soon as
+        # we encounter a revision in the cache.
+
+        stoprids = {self._revisions[n].rid: n for n in self._revisioncache}
+
+        if not stoprids:
+            stoprids[-1] = None
+
+        # TODO the "not in ({stops})" here is likely slowing down the query
+        # because it needs to perform the lookup on every recursive invocation.
+        # This could probably be faster if we created a temporary query with
+        # baseid "poisoned" to null and limited the recursive filter to
+        # "is not null".
+        res = self._db.execute(
+            b'WITH RECURSIVE '
+            b'    deltachain(deltaid, baseid) AS ('
+            b'        SELECT deltaid, deltabaseid FROM fileindex '
+            b'            WHERE pathid=? AND node=? '
+            b'        UNION ALL '
+            b'        SELECT fileindex.deltaid, deltabaseid '
+            b'            FROM fileindex, deltachain '
+            b'            WHERE '
+            b'                fileindex.id=deltachain.baseid '
+            b'                AND deltachain.baseid IS NOT NULL '
+            b'                AND fileindex.id NOT IN (%s) '
+            b'    ) '
+            b'SELECT deltachain.baseid, compression, delta '
+            b'FROM deltachain, delta '
+            b'WHERE delta.id=deltachain.deltaid' % (
+                b','.join([b'?'] * len(stoprids))),
+            tuple([self._pathid, node] + list(stoprids)))
+
+        deltas = []
+        lastdeltabaseid = None
+
+        for deltabaseid, compression, delta in res:
+            lastdeltabaseid = deltabaseid
+
+            if compression == COMPRESSION_ZSTD:
+                delta = self._dctx.decompress(delta)
+            elif compression == COMPRESSION_NONE:
+                pass
+            else:
+                raise SQLiteStoreError('unhandled compression type: %d' %
+                                       compression)
+
+            deltas.append(delta)
+
+        if lastdeltabaseid in stoprids:
+            basetext = self._revisioncache[stoprids[lastdeltabaseid]]
+        else:
+            basetext = deltas.pop()
+
+        deltas.reverse()
+        fulltext = mdiff.patches(basetext, deltas)
+
+        # SQLite returns buffer instances for blob columns on Python 2. This
+        # type can propagate through the delta application layer. Because
+        # downstream callers assume revisions are bytes, cast as needed.
+        if not isinstance(fulltext, bytes):
+            fulltext = bytes(fulltext)
+
+        self._checkhash(fulltext, node)
+        self._revisioncache[node] = fulltext
+
+        return fulltext
+
+    def read(self, node):
+        return storageutil.filtermetadata(self.revision(node))
+
+    def renamed(self, node):
+        return storageutil.filerevisioncopied(self, node)
+
+    def cmp(self, node, fulltext):
+        return not storageutil.filedataequivalent(self, node, fulltext)
+
+    def emitrevisions(self, nodes, nodesorder=None, revisiondata=False,
+                      assumehaveparentrevisions=False, deltaprevious=False):
+        if nodesorder not in ('nodes', 'storage', None):
+            raise error.ProgrammingError('unhandled value for nodesorder: %s' %
+                                         nodesorder)
+
+        nodes = [n for n in nodes if n != nullid]
+
+        if not nodes:
+            return
+
+        # TODO perform in a single query.
+        res = self._db.execute(
+            b'SELECT revnum, deltaid FROM fileindex '
+            b'WHERE pathid=? '
+            b'    AND node in (%s)' % (b','.join([b'?'] * len(nodes))),
+            tuple([self._pathid] + nodes))
+
+        deltabases = {}
+
+        for rev, deltaid in res:
+            res = self._db.execute(
+                b'SELECT revnum from fileindex WHERE pathid=? AND deltaid=?',
+                (self._pathid, deltaid))
+            deltabases[rev] = res.fetchone()[0]
+
+        # TODO define revdifffn so we can use delta from storage.
+        for delta in storageutil.emitrevisions(
+            self, nodes, nodesorder, sqliterevisiondelta,
+            deltaparentfn=deltabases.__getitem__,
+            revisiondata=revisiondata,
+            assumehaveparentrevisions=assumehaveparentrevisions,
+            deltaprevious=deltaprevious):
+
+            yield delta
+
+    # End of ifiledata interface.
+
+    # Start of ifilemutation interface.
+
+    def add(self, filedata, meta, transaction, linkrev, p1, p2):
+        if meta or filedata.startswith(b'\x01\n'):
+            filedata = storageutil.packmeta(meta, filedata)
+
+        return self.addrevision(filedata, transaction, linkrev, p1, p2)
+
+    def addrevision(self, revisiondata, transaction, linkrev, p1, p2, node=None,
+                    flags=0, cachedelta=None):
+        if flags:
+            raise SQLiteStoreError(_('flags not supported on revisions'))
+
+        validatehash = node is not None
+        node = node or storageutil.hashrevisionsha1(revisiondata, p1, p2)
+
+        if validatehash:
+            self._checkhash(revisiondata, node, p1, p2)
+
+        if node in self._nodetorev:
+            return node
+
+        node = self._addrawrevision(node, revisiondata, transaction, linkrev,
+                                    p1, p2)
+
+        self._revisioncache[node] = revisiondata
+        return node
+
+    def addgroup(self, deltas, linkmapper, transaction, addrevisioncb=None):
+        nodes = []
+
+        for node, p1, p2, linknode, deltabase, delta, flags in deltas:
+            if flags:
+                raise SQLiteStoreError('cannot store revisions with flags')
+
+            linkrev = linkmapper(linknode)
+
+            nodes.append(node)
+
+            if node in self._revisions:
+                continue
+
+            if deltabase == nullid:
+                text = mdiff.patch(b'', delta)
+                storedelta = None
+            else:
+                text = None
+                storedelta = (deltabase, delta)
+
+            self._addrawrevision(node, text, transaction, linkrev, p1, p2,
+                                 storedelta=storedelta)
+
+            if addrevisioncb:
+                addrevisioncb(self, node)
+
+        return nodes
+
+    def censorrevision(self, tr, node, tombstone=b''):
+        raise SQLiteStoreError(_('SQLite store does not support censoring '
+                                 'revisions'))
+
+    def getstrippoint(self, minlink):
+        return storageutil.resolvestripinfo(minlink, len(self) - 1,
+                                            [self.rev(n) for n in self.heads()],
+                                            self.linkrev,
+                                            self.parentrevs)
+
+    def strip(self, minlink, transaction):
+        if not len(self):
+            return
+
+        rev, _ignored = self.getstrippoint(minlink)
+
+        if rev == len(self):
+            return
+
+        for rev in self.revs(rev):
+            self._db.execute(
+                b'DELETE FROM fileindex WHERE pathid=? AND node=?',
+                (self._pathid, self.node(rev)))
+
+        # TODO how should we garbage collect data in delta table?
+
+        self._refreshindex()
+
+    # End of ifilemutation interface.
+
+    # Start of ifilestorage interface.
+
+    def files(self):
+        return []
+
+    def storageinfo(self, exclusivefiles=False, sharedfiles=False,
+                    revisionscount=False, trackedsize=False,
+                    storedsize=False):
+        d = {}
+
+        if exclusivefiles:
+            d['exclusivefiles'] = []
+
+        if sharedfiles:
+            # TODO list sqlite file(s) here.
+            d['sharedfiles'] = []
+
+        if revisionscount:
+            d['revisionscount'] = len(self)
+
+        if trackedsize:
+            d['trackedsize'] = sum(len(self.revision(node))
+                                       for node in self._nodetorev)
+
+        if storedsize:
+            # TODO implement this?
+            d['storedsize'] = None
+
+        return d
+
+    def verifyintegrity(self, state):
+        state['skipread'] = set()
+
+        for rev in self:
+            node = self.node(rev)
+
+            try:
+                self.revision(node)
+            except Exception as e:
+                yield sqliteproblem(
+                    error=_('unpacking %s: %s') % (short(node), e),
+                    node=node)
+
+                state['skipread'].add(node)
+
+    # End of ifilestorage interface.
+
+    def _checkhash(self, fulltext, node, p1=None, p2=None):
+        if p1 is None and p2 is None:
+            p1, p2 = self.parents(node)
+
+        if node == storageutil.hashrevisionsha1(fulltext, p1, p2):
+            return
+
+        try:
+            del self._revisioncache[node]
+        except KeyError:
+            pass
+
+        raise SQLiteStoreError(_('integrity check failed on %s') %
+                               self._path)
+
+    def _addrawrevision(self, node, revisiondata, transaction, linkrev,
+                        p1, p2, storedelta=None):
+        if self._pathid is None:
+            res = self._db.execute(
+                b'INSERT INTO filepath (path) VALUES (?)', (self._path,))
+            self._pathid = res.lastrowid
+
+        # For simplicity, always store a delta against p1.
+        # TODO we need a lot more logic here to make behavior reasonable.
+
+        if storedelta:
+            deltabase, delta = storedelta
+        else:
+            assert revisiondata is not None
+            deltabase = p1
+
+            if deltabase == nullid:
+                delta = revisiondata
+            else:
+                delta = mdiff.textdiff(self.revision(self.rev(deltabase)),
+                                       revisiondata)
+
+        # File index stores a pointer to its delta and the parent delta.
+        # The parent delta is stored via a pointer to the fileindex PK.
+        if deltabase == nullid:
+            baseid = None
+        else:
+            baseid = self._revisions[deltabase].rid
+
+        # Deltas are stored with a hash of their content. This allows
+        # us to de-duplicate. The table is configured to ignore conflicts
+        # and it is faster to just insert and silently noop than to look
+        # first.
+        deltahash = hashlib.sha1(delta).digest()
+
+        deltablob = self._cctx.compress(delta)
+        compression = COMPRESSION_ZSTD
+
+        # Don't store compressed data if it isn't practical.
+        if len(deltablob) >= len(delta):
+            deltablob = delta
+            compression = COMPRESSION_NONE
+
+        try:
+            deltaid = self._db.execute(
+                b'INSERT INTO delta (compression, hash, delta) '
+                b'VALUES (?, ?, ?)',
+                (compression, deltahash, deltablob)).lastrowid
+        except sqlite3.IntegrityError:
+            deltaid = self._db.execute(
+                b'SELECT id FROM delta WHERE hash=?',
+                (deltahash,)).fetchone()[0]
+
+        rev = len(self)
+
+        if p1 == nullid:
+            p1rev = nullrev
+        else:
+            p1rev = self._nodetorev[p1]
+
+        if p2 == nullid:
+            p2rev = nullrev
+        else:
+            p2rev = self._nodetorev[p2]
+
+        rid = self._db.execute(
+            b'INSERT INTO fileindex ('
+            b'    pathid, revnum, node, p1rev, p2rev, linkrev, '
+            b'    deltaid, deltabaseid) '
+            b'    VALUES (?, ?, ?, ?, ?, ?, ?, ?)',
+            (self._pathid, rev, node, p1rev, p2rev, linkrev, deltaid, baseid)
+        ).lastrowid
+
+        entry = revisionentry(
+            rid=rid,
+            rev=rev,
+            node=node,
+            p1rev=p1rev,
+            p2rev=p2rev,
+            p1node=p1,
+            p2node=p2,
+            linkrev=linkrev)
+
+        self._nodetorev[node] = rev
+        self._revtonode[rev] = node
+        self._revisions[node] = entry
+
+        return node
+
+class sqliterepository(localrepo.localrepository):
+    def cancopy(self):
+        return False
+
+    def transaction(self, *args, **kwargs):
+        current = self.currenttransaction()
+
+        tr = super(sqliterepository, self).transaction(*args, **kwargs)
+
+        if current:
+            return tr
+
+        self._dbconn.execute('BEGIN TRANSACTION')
+
+        def committransaction(_):
+            self._dbconn.commit()
+
+        tr.addfinalize('sqlitestore', committransaction)
+
+        return tr
+
+    @property
+    def _dbconn(self):
+        # SQLite connections can only be used on the thread that created
+        # them. In most cases, this "just works." However, hgweb uses
+        # multiple threads.
+        tid = threading.current_thread().ident
+
+        if self._db:
+            if self._db[0] == tid:
+                return self._db[1]
+
+        db = makedb(self.svfs.join('db.sqlite'))
+        self._db = (tid, db)
+
+        return db
+
+def makedb(path):
+    """Construct a database handle for a database at path."""
+
+    db = sqlite3.connect(path)
+    db.text_factory = bytes
+
+    res = db.execute(b'PRAGMA user_version').fetchone()[0]
+
+    # New database.
+    if res == 0:
+        for statement in CREATE_SCHEMA:
+            db.execute(statement)
+
+        db.commit()
+
+    elif res == CURRENT_SCHEMA_VERSION:
+        pass
+
+    else:
+        raise error.Abort(_('sqlite database has unrecognized version'))
+
+    db.execute(b'PRAGMA journal_mode=WAL')
+
+    return db
+
+def featuresetup(ui, supported):
+    supported.add(REQUIREMENT)
+
+def newreporequirements(orig, ui, createopts):
+    if createopts['backend'] != 'sqlite':
+        return orig(ui, createopts)
+
+    # This restriction can be lifted once we have more confidence.
+    if 'sharedrepo' in createopts:
+        raise error.Abort(_('shared repositories not supported with SQLite '
+                            'store'))
+
+    # This filtering is out of an abundance of caution: we want to ensure
+    # we honor creation options and we do that by annotating exactly the
+    # creation options we recognize.
+    known = {
+        'narrowfiles',
+        'backend',
+    }
+
+    unsupported = set(createopts) - known
+    if unsupported:
+        raise error.Abort(_('SQLite store does not support repo creation '
+                            'option: %s') % ', '.join(sorted(unsupported)))
+
+    # Since we're a hybrid store that still relies on revlogs, we fall back
+    # to using the revlogv1 backend's storage requirements then adding our
+    # own requirement.
+    createopts['backend'] = 'revlogv1'
+    requirements = orig(ui, createopts)
+    requirements.add(REQUIREMENT)
+
+    return requirements
+
+@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
+class sqlitefilestorage(object):
+    """Repository file storage backed by SQLite."""
+    def file(self, path):
+        if path.startswith(b'/'):
+            path = path[1:]
+
+        return sqlitefilestore(self._dbconn, path)
+
+def makefilestorage(orig, requirements, **kwargs):
+    """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
+    if REQUIREMENT in requirements:
+        return sqlitefilestorage
+    else:
+        return orig(requirements=requirements, **kwargs)
+
+def makemain(orig, requirements, **kwargs):
+    if REQUIREMENT in requirements:
+        return sqliterepository
+
+    return orig(requirements=requirements, **kwargs)
+
+def verifierinit(orig, self, *args, **kwargs):
+    orig(self, *args, **kwargs)
+
+    # We don't care that files in the store don't align with what is
+    # advertised. So suppress these warnings.
+    self.warnorphanstorefiles = False
+
+def extsetup(ui):
+    localrepo.featuresetupfuncs.add(featuresetup)
+    extensions.wrapfunction(localrepo, 'newreporequirements',
+                            newreporequirements)
+    extensions.wrapfunction(localrepo, 'makefilestorage',
+                            makefilestorage)
+    extensions.wrapfunction(localrepo, 'makemain',
+                            makemain)
+    extensions.wrapfunction(verify.verifier, '__init__',
+                            verifierinit)
+
+def reposetup(ui, repo):
+    if isinstance(repo, sqliterepository):
+        repo._db = None
+
+    # TODO check for bundlerepository?



To: indygreg, #hg-reviewers
Cc: mjpieters, mercurial-devel


More information about the Mercurial-devel mailing list