Not a holy war - just some salient facts

Mike Meyer mwm at mired.org
Sat Apr 10 21:33:10 CDT 2010


On Thu, 08 Apr 2010 22:01:49 -0500
"Mark A. Flacy" <mflacy at verizon.net> wrote:

> On Thu, 2010-04-08 at 22:23 -0400, Mike Meyer wrote:
> > For this kind of thing, I'd say neither mercurial nor bazaar works
> > very well. Both of them store the repo with the files in question,
> > which means you need to keep backups of all this stuff. Using a
> > server-based VCS avoids that: you have the stuff that came with the
> > distribution, and then as you config the machine, you add/edit/commit
> > the changed files. You back up the central repo, and don't need to
> > worry about backing up the os/config parts of the clients.
> Put a *clone* of your repo on the "central server".  Push and pull to it
> as needed.

You've either 1) (if you set things up to automate the push) thrown
away the performance advantage mercurial gets from having a local repo
(bad); or 2) added another step to the sysadmin's job whenever
anything changes (even worse).
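
For the record, automating option 1 is a one-line hook in the
client repo's hgrc - a sketch only, where the `central` path alias
and its URL are my own illustrative assumptions:

```ini
[paths]
# hypothetical location of the clone on the "central server"
central = ssh://vcs-host//repos/etc-configs

[hooks]
# push to the central clone after every commit, so nothing
# depends on the sysadmin remembering to push
commit = hg push central
```

Which, of course, just demonstrates the point: every commit now
waits on the network round trip to the server.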

> Mercurial and bazaar will work just fine.

Of course they will. Perforce just works better in this case.

Given that you've thrown away mercurial's primary advantage by
requiring pushes to a central server, you're left comparing
server-based systems - even though some of them are DVCSs
implementing that workflow. For this one use case - tracking config
files in situ - the fact that the perforce server tracks what is
checked out on each client is a major win. Everything I need to know
to recreate any
client from scratch is available on the server, without having to back
up the client systems. That's not true with any VCS that keeps the
data about what's checked out in the workspace - which I believe is
pretty much every VCS *but* perforce.

A more reasonable approach to the entire problem might well be to keep
the various config files in hg on disk space that you think is
adequately protected, bundled with a makefile to install them on the
machines in question. I believe that doing this cleanly with
mercurial requires cooperation from subrepos, but I haven't spent
enough time with them to know whether they provide it. And while I
know this approach works well for the distributed application I build
for my clients, with dozens of machines sharing identical configs,
I'm not convinced it'll work as well for system config files when
each machine is doing a different job.
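
The install step of that approach is easy to sketch. Here's a
minimal Python version - the manifest format, file names, and paths
are my own assumptions for illustration, not anything from an actual
setup:

```python
#!/usr/bin/env python
"""Copy tracked config files from a checked-out repo to their live
locations - the job the makefile would do after an `hg update`."""
import os
import shutil


def install_configs(repo_dir, manifest, dest_root="/"):
    """Install files listed in `manifest`, which maps repo-relative
    paths to absolute destination paths. `dest_root` exists so a test
    (or a chroot build) can redirect the whole tree."""
    installed = []
    for src_rel, dest in manifest.items():
        src = os.path.join(repo_dir, src_rel)
        target = os.path.join(dest_root, dest.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(src, target)  # preserves mode/mtime, like install(1)
        installed.append(target)
    return installed
```

The catch the paragraph above points at: the manifest lives in the
workspace, so the server still doesn't know what any given client has
checked out - you've reimplemented the install, not the tracking.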

> Or even better: have all your machines use each other as clones!  You
> now have distributed backups on *any* machine that you use.

Of course, how good *that* is depends on how large a set of machines
you have. Me, with a handful of servers in a back room, the
"distributed" solution isn't so good, because losing that room loses
all the copies. On the other hand, the perforce server sits on a
zfs raid pool with hourly snapshots, daily backups, and weekly offsite
clones (perfectly affordable, so long as I only have to do it for
*one* pool!), so losing that room loses at most a week's worth of
work. If I were a larger organization, with servers in three states or
two continents, the distributed mercurial solution might well provide
better backup - except for the information about what's checked out on
the client.

> Don't forget to backup your server!  :-)

As with any other critical data.

I've converted all my active client source code to mercurial, because
it's faster/easier for most common operations than perforce - but
those repos are sitting in the same zfs pool as my perforce server
data. Most of my personal work is also now in mercurial, for the same
reason. But I have yet to find a mercurial setup that I'm as happy
with as the existing perforce one for in-situ config files.

     <mike
--
Mike Meyer <mwm at mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
