Commit 3a588dc4 authored by cvs2svn

This commit was manufactured by cvs2svn to create tag 'r0-13-5'.

git-svn-id: http://svn.savannah.nongnu.org/svn/rdiff-backup@567 2b77aa54-bcbc-44c9-a7ec-4f6cf2b41109
parent 40c0b79c
CVS README - Notes for people checking out of CVS
-------------------------------------------------
Getting rdiff-backup to run:
----------------------------
If you want to run a version of rdiff-backup checked out of CVS into
your $RDB_CVS directory, change to $RDB_CVS/rdiff_backup and run the
compilec.py script:
cd $RDB_CVS/rdiff_backup; python compilec.py
With any luck, the _librsync.so and C.so libraries will appear in that
directory. Then run rdiff-backup, making sure that all the files are
in your PYTHONPATH:
PYTHONPATH=$RDB_CVS $RDB_CVS/rdiff-backup <arguments>
Running the unit tests:
-----------------------
If you want to try some of the tests, you first have to get the
testfiles.tar.gz tarball. It is available at
http://rdiff-backup.stanford.edu/testfiles.tar.gz
Untarring it requires root because the tarball contains device
files, files with various uids/gids, etc. If you don't have root,
that's OK; all the tests except roottest.py may still work.
So, three steps:
1) Make sure the C modules are compiled as explained above:
cd $RDB_CVS/rdiff_backup; python compilec.py
2) Untar the testfiles tarball, as root if you have it:
cd $RDB_CVS/testing; tar -xvzf testfiles.tar.gz
3) In the testing directory, run each of the *test.py files as
desired. For instance,
cd $RDB_CVS/testing; python rpathtest.py
If python restoretest.py doesn't work, try running
./makerestoretest3
INSTALLATION:
Thank you for trying rdiff-backup. To install, run:
python setup.py install
The build process can also be run separately:
python setup.py build
The default prefix is /usr, so files are put in /usr/bin,
/usr/share/man/, etc. An alternate prefix can be specified using the
--prefix=<prefix> option. For example:
python setup.py install --prefix=/usr/local
The setup script expects to find librsync headers and libraries in the
default location, usually /usr/include and /usr/lib. If you want the
setup script to check different locations, use the --librsync-dir
switch or the LIBRSYNC_DIR environment variable. For instance,
python setup.py --librsync-dir=/usr/local build
instructs the setup program to look in /usr/local/include and
/usr/local/lib for the librsync files.
Finally, the --lflags and --libs options, and the LFLAGS and LIBS
environment variables are also recognized. Running setup.py with no
arguments will display some help.
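The environment variable forms are equivalent to the corresponding
switches; for example, the following should behave the same as the
--librsync-dir example above:
LIBRSYNC_DIR=/usr/local python setup.py build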
REQUIREMENTS:
Remember that you must have Python 2.2 or later and librsync 0.9.6
or later installed. For Python, see http://www.python.org. The
rdiff-backup homepage at http://rdiff-backup.stanford.edu/ should have
a recent version of librsync; otherwise see the librsync homepage at
http://sourceforge.net/projects/librsync/.
For remote operation, rdiff-backup should be installed and in the
PATH on the remote system(s) (see the man page for more information).
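For example, a typical remote invocation looks something like this
(an illustrative sketch; see the man page for the exact host::path
syntax and the available options):
rdiff-backup /some/local-dir user@host::/remote/backup-dir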
TROUBLESHOOTING:
If you have everything installed properly, and it still doesn't work,
see the enclosed FAQ.html, the web page at
http://rdiff-backup.stanford.edu, and/or the mailing list.
Change --exclude options with restore so excluded directories aren't
deleted. (Oliver Kaltenecker)
Check support/bug reports on Savannah occasionally.
Consider adding --datadir option (Jean-Sébastien GOETSCHY)
Write a test case for --calculate-statistics
Use ctime to check whether files have been changed. See message:
http://mail.gnu.org/archive/html/rdiff-backup-users/2003-06/msg00050.html
by Andrew Bressen.
Profile 0.13.x
Examine regress handling of acls/eas/resource forks.
Look into that strange regress error July 22nd on
prog/old/rdiff-backup/testing/testfiles/long...
---------[ Medium term ]---------------------------------------
Look into sparse file support (requested by Stelios K. Kyriacou)
Look into security.py code, do some sort of security audit.
Look at Kent Borg's suggestion for restore options and digests.
Add --list-files-changed-between or similar option, to list files that
have changed between two times
Add --dry-run option (target for v1.1.x)
Add # of increments option to --remove-older-than
---------[ Long term ]---------------------------------------
Think about adding Gaudet's idea for keeping track of renamed files.
Look into different inode generation techniques (see treescan, Dean
Gaudet's other post).
#!/usr/bin/env python
import os, re, shutil, time, sys, getopt
SourceDir = "rdiff_backup"
DistDir = "dist"
# Various details about the files must also be specified by the rpm
# spec template.
spec_template = "dist/rdiff-backup.spec.template"
# The Fedora distribution has its own spec template
fedora_spec_template = "dist/rdiff-backup.spec.template-fedora"
def CopyMan(destination, version):
"""Create updated man page at the specified location"""
fp = open(destination, "w")
date = time.strftime("%B %Y", time.localtime(time.time()))
version = "Version "+version
firstline = ('.TH RDIFF-BACKUP 1 "%s" "%s" "User Manuals"\n' %
(date, version))
fp.write(firstline)
infp = open("rdiff-backup.1", "r")
infp.readline()
fp.write(infp.read())
fp.close()
infp.close()
def MakeHTML(input_prefix, title):
"""Create html and wml versions from given html body
Input expected in <input_prefix>-body.html, and output will be put
in <input_prefix>.wml and <input_prefix>.html.
"""
body_fp = open("%s-body.html" % (input_prefix,), "r")
body_string = body_fp.read()
body_fp.close()
wml_fp = open("%s.wml" % (input_prefix,), "w")
wml_fp.write(
"""#include 'template.wml' home=. curpage=none title="%s"
<divert body>
<p><h2>%s</h2>
""" % (title, title))
wml_fp.write(body_string)
wml_fp.write("\n</divert>\n")
wml_fp.close()
html_fp = open("%s.html" % (input_prefix,), "w")
html_fp.write(
"""<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>%s</title>
</head>
<body>
<h1>%s</h1>
""" % (title, title))
html_fp.write(body_string)
html_fp.write("\n</body></html>")
html_fp.close()
def VersionedCopy(source, dest, munge_date = 0):
"""Copy source to dest, substituting $version with version
If munge_date is true, also replace $date with string like "August
8, 2003".
"""
fin = open(source, "rb")
inbuf = fin.read()
assert not fin.close()
outbuf = re.sub("\$version", Version, inbuf)
if outbuf == inbuf: assert 0, "No $version string replaced"
assert not re.search("\$version", outbuf), \
"Some $version strings not repleased in %s" % (source,)
if munge_date:
inbuf = outbuf
outbuf = re.sub("\$date", time.strftime("%B %d, %Y"), inbuf, 1)
if outbuf == inbuf: assert 0, "No $date string replaced"
assert not re.search("\$date", outbuf), "Two $date strings found"
fout = open(dest, "wb")
fout.write(outbuf)
assert not fout.close()
def MakeTar():
"""Create rdiff-backup tar file"""
tardir = "rdiff-backup-%s" % Version
tarfile = "rdiff-backup-%s.tar.gz" % Version
try:
os.lstat(tardir)
os.system("rm -rf " + tardir)
except OSError: pass
os.mkdir(tardir)
for filename in ["CHANGELOG", "COPYING", "README",
"FAQ.html", "examples.html",
SourceDir + "/cmodule.c",
SourceDir + "/_librsyncmodule.c",
DistDir + "/setup.py"]:
assert not os.system("cp %s %s" % (filename, tardir)), filename
os.mkdir(tardir+"/rdiff_backup")
for filename in ["eas_acls.py", "backup.py", "connection.py",
"FilenameMapping.py", "fs_abilities.py",
"Hardlink.py", "increment.py", "__init__.py",
"iterfile.py", "lazy.py", "librsync.py",
"log.py", "Main.py", "manage.py", "metadata.py",
"Rdiff.py", "regress.py", "restore.py",
"robust.py", "rorpiter.py", "rpath.py",
"Security.py", "selection.py",
"SetConnections.py", "static.py",
"statistics.py", "TempFile.py", "Time.py",
"user_group.py"]:
assert not os.system("cp %s/%s %s/rdiff_backup" %
(SourceDir, filename, tardir)), filename
VersionedCopy("%s/Globals.py" % (SourceDir,),
"%s/rdiff_backup/Globals.py" % (tardir,))
VersionedCopy("rdiff-backup", "%s/rdiff-backup" % (tardir,), 1)
VersionedCopy(DistDir + "/setup.py", "%s/setup.py" % (tardir,))
os.chmod(os.path.join(tardir, "setup.py"), 0755)
os.chmod(os.path.join(tardir, "rdiff-backup"), 0644)
CopyMan(os.path.join(tardir, "rdiff-backup.1"), Version)
os.system("tar -cvzf %s %s" % (tarfile, tardir))
shutil.rmtree(tardir)
return tarfile
def MakeSpecFile():
"""Create spec file using spec template"""
specfile, fedora_specfile= "rdiff-backup.spec", "rdiff-backup.spec-fedora"
VersionedCopy(spec_template, specfile)
VersionedCopy(fedora_spec_template, fedora_specfile)
return specfile, fedora_specfile
def parse_cmdline(arglist):
"""Returns action"""
global Version
def error():
print "Syntax: makedist [--html-only] [version_number]"
sys.exit(1)
optlist, args = getopt.getopt(arglist, "", ["html-only"])
if len(args) != 1: error()
else: Version = args[0]
for opt, arg in optlist:
if opt == "--html-only": return "HTML"
else: assert 0, "Bad argument"
return "All"
def Main():
action = parse_cmdline(sys.argv[1:])
print "Making HTML"
MakeHTML("FAQ", "rdiff-backup FAQ")
MakeHTML("examples", "rdiff-backup examples")
if action != "HTML":
assert action == "All"
print "Processing version " + Version
tarfile = MakeTar()
print "Made tar file " + tarfile
specfiles = MakeSpecFile()
print "Made specfiles ", specfiles
if __name__ == "__main__" and not globals().has_key('__no_execute__'):
Main()
#!/usr/bin/env python
#
# Builds two rpms, one for generic release, and the other for the
# Fedora project.
import os, sys
rpmroot = os.path.join(os.environ['HOME'], 'rpm')
if len(sys.argv) == 2:
Version = sys.argv[1]
specfile = "rdiff-backup.spec"
fedora_specfile = "rdiff-backup.spec-fedora"
print "Using specfiles %s, %s" % (specfile, fedora_specfile)
else:
print "Syntax: %s version_number" % sys.argv[0]
sys.exit(1)
base = "rdiff-backup-%s" % (Version,)
tarfile = base + ".tar.gz"
rpmbase = base + "-1"
i386_rpm = rpmbase + ".i386.rpm"
source_rpm = rpmbase + ".src.rpm"
fedora_rpmbase = base + "-0.fdr.1"
fedora_i386_rpm = fedora_rpmbase + ".i386.rpm"
fedora_source_rpm = fedora_rpmbase + ".src.rpm"
# These assume the rpm root directory is $HOME/rpm. The
# nonstandard location allows building by a non-root user.
assert not os.system("cp %s %s/SOURCES" % (tarfile, rpmroot))
assert not os.system("rpmbuild -ba -v --sign " + specfile)
assert not os.system("mv %s/RPMS/i386/%s ." % (rpmroot, i386_rpm))
assert not os.system("mv %s/SRPMS/%s ." % (rpmroot, source_rpm))
assert not os.system("rpmbuild -ba -v --sign " + fedora_specfile)
assert not os.system("mv %s/RPMS/i386/%s ." % (rpmroot, fedora_i386_rpm))
assert not os.system("mv %s/SRPMS/%s ." % (rpmroot, fedora_source_rpm))
#!/usr/bin/env python
import sys, os
def RunCommand(cmd, ignore_error = 0):
print cmd
if ignore_error: os.system(cmd)
else: assert not os.system(cmd)
webprefix = "/home/ben/misc/html/mirror/rdiff-backup/"
if not sys.argv[1:]:
print 'Call with version number, as in "./makeweb 0.3.1"'
print "to move new rpms and tarballs. Now just remaking FAQ and man page."
print
else:
version = sys.argv[1]
RunCommand("cp *%s* %s" % (version, webprefix))
RunCommand("rman -f html -r '' rdiff-backup.1 > %srdiff-backup.1.html"
% webprefix)
RunCommand("cp FAQ.wml CHANGELOG %s" % webprefix)
if sys.argv[1:]:
RunCommand("mkdir %s/OLD/%s" % (webprefix, version), ignore_error = 1)
RunCommand("cp rdiff-backup-%s.tar.gz rdiff-backup-%s*rpm %s/OLD/%s" %
(version, version, webprefix, version))
os.chdir(webprefix)
print "cd ", webprefix
if sys.argv[1:]:
for filename in os.listdir('OLD/' + version):
try: os.lstat(filename)
except OSError: pass
else: os.remove(filename)
os.symlink('OLD/%s/%s' % (version, filename), filename)
RunCommand("rm latest latest.src.rpm latest.tar.gz", ignore_error = 1)
RunCommand("ln -s rdiff-backup-%s-1.src.rpm latest.src.rpm" % (version,))
os.symlink("rdiff-backup-%s.tar.gz" % (version,), 'latest.tar.gz')
os.symlink('OLD/%s' % (version,), 'latest')
RunCommand("./Make")
--- rdiff-backup.old Sat Apr 6 10:05:18 2002
+++ rdiff-backup Sat Apr 6 10:05:25 2002
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2
#
# rdiff-backup -- Mirror files while keeping incremental changes
# Version 0.7.1 released March 25, 2002
Summary: Convenient and transparent local/remote incremental mirror/backup
Name: rdiff-backup
Release: 1
URL: http://www.stanford.edu/~bescoto/rdiff-backup/
Source: %{name}-%{version}.tar.gz
Copyright: GPL
Group: Applications/Archiving
BuildRoot: %{_tmppath}/%{name}-root
requires: librsync, python2 >= 2.2
Patch: rdiff-backup-rh7x.patch
%description
rdiff-backup is a script, written in Python, that backs up one
directory to another and is intended to be run periodically (nightly
from cron for instance). The target directory ends up a copy of the
source directory, but extra reverse diffs are stored in the target
directory, so you can still recover files lost some time ago. The idea
is to combine the best features of a mirror and an incremental
backup. rdiff-backup can also operate in a bandwidth efficient manner
over a pipe, like rsync. Thus you can use rdiff-backup and ssh to
securely back a hard drive up to a remote location, and only the
differences from the previous backup will be transmitted.
%prep
%setup
%patch
%build
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/bin
mkdir -p $RPM_BUILD_ROOT/usr/share/man/man1
install -m 755 rdiff-backup $RPM_BUILD_ROOT/usr/bin/rdiff-backup
install -m 644 rdiff-backup.1 $RPM_BUILD_ROOT/usr/share/man/man1/rdiff-backup.1
%clean
%files
%defattr(-,root,root)
/usr/bin/rdiff-backup
/usr/share/man/man1/rdiff-backup.1.gz
%doc CHANGELOG COPYING README FAQ.html
%changelog
* Sat Apr 6 2002 Ben Escoto <bescoto@stanford.edu>
- Made new version for Redhat 7.x series
* Sun Nov 4 2001 Ben Escoto <bescoto@stanford.edu>
- Initial RPM
%define PYTHON_NAME %((rpm -q --quiet python2 && echo python2) || echo python)
%define PYTHON_SEP %(rpm -q --quiet python2 && echo -)
%define PYTHON_VERSION %(python -c 'import sys; print sys.version[:3],')
%define NEXT_PYTHON_VERSION %(python -c 'import sys; print "%d.%d" % (sys.version_info[0], sys.version_info[1]+1),')
Version: $version
Summary: Convenient and transparent local/remote incremental mirror/backup
Name: rdiff-backup
Release: 1
Epoch: 0
URL: http://rdiff-backup.stanford.edu/
Source: http://rdiff-backup.stanford.edu/OLD/$version/rdiff-backup-$version.tar.gz
License: GPL
Group: Applications/Archiving
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
Requires: %{PYTHON_NAME} >= %{PYTHON_VERSION}, %{PYTHON_NAME} < %{NEXT_PYTHON_VERSION}
BuildPrereq: %{PYTHON_NAME}-devel >= 2.2, librsync-devel >= 0.9.6
%description
rdiff-backup is a script, written in Python, that backs up one
directory to another and is intended to be run periodically (nightly
from cron for instance). The target directory ends up a copy of the
source directory, but extra reverse diffs are stored in the target
directory, so you can still recover files lost some time ago. The idea
is to combine the best features of a mirror and an incremental
backup. rdiff-backup can also operate in a bandwidth efficient manner
over a pipe, like rsync. Thus you can use rdiff-backup and ssh to
securely back a hard drive up to a remote location, and only the
differences from the previous backup will be transmitted.
%prep
%setup -q
%build
%{PYTHON_NAME} setup.py build
%install
%{PYTHON_NAME} setup.py install --root $RPM_BUILD_ROOT
# Produce .pyo files for %ghost directive later
%{PYTHON_NAME} -Oc 'from compileall import *; compile_dir("'$RPM_BUILD_ROOT/%{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup'")'
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%{_bindir}/rdiff-backup
%{_mandir}/man1/rdiff-backup*
%dir %{_libdir}/%{PYTHON_NAME}%{PYTHON_SEP}%{PYTHON_VERSION}/site-packages/rdiff_backup
%{_libdir}/%{PYTHON_NAME}%{PYTHON_SEP}%{PYTHON_VERSION}/site-packages/rdiff_backup/*.py
%{_libdir}/%{PYTHON_NAME}%{PYTHON_SEP}%{PYTHON_VERSION}/site-packages/rdiff_backup/*.pyc
%{_libdir}/%{PYTHON_NAME}%{PYTHON_SEP}%{PYTHON_VERSION}/site-packages/rdiff_backup/*.so
%ghost %{_libdir}/%{PYTHON_NAME}%{PYTHON_SEP}%{PYTHON_VERSION}/site-packages/rdiff_backup/*.pyo
%doc CHANGELOG COPYING FAQ.html examples.html README
%changelog
* Sat Dec 15 2003 Ben Escoto <bescoto@stanford.edu> - 0.12.6-2
- Readded python2/python code; turns out not everyone calls it python
- A number of changes from Fedora rpm.
* Thu Sep 11 2003 Ben Escoto <bescoto@stanford.edu> - 0.12.4-1
- Removed code that selected between python2 and python; I think
everyone calls it python now.
* Thu Aug 8 2003 Ben Escoto <bescoto@stanford.edu>
- Set librsync >= 0.9.6, because rsync.h renamed to librsync.h
* Sun Jul 20 2003 Ben Escoto <bescoto@stanford.edu>
- Minor changes to comply with Fedora standards.
* Sun Jan 19 2002 Troels Arvin <troels@arvin.dk>
- Builds, no matter if Python 2.2 is called python2-2.2 or python-2.2.
* Sun Nov 4 2001 Ben Escoto <bescoto@stanford.edu>
- Initial RPM
%define PYTHON_VERSION %(python -c 'import sys; print sys.version[:3],')
%define NEXT_PYTHON_VERSION %(python -c 'import sys; print "%d.%d" % (sys.version_info[0], sys.version_info[1]+1),')
Version: $version
Summary: Convenient and transparent local/remote incremental mirror/backup
Name: rdiff-backup
Release: 0.fdr.1
Epoch: 0
URL: http://rdiff-backup.stanford.edu/
Source: http://rdiff-backup.stanford.edu/OLD/$version/rdiff-backup-$version.tar.gz
License: GPL
Group: Applications/Archiving
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
Requires: python >= 0:%{PYTHON_VERSION}, python < 0:%{NEXT_PYTHON_VERSION}
BuildRequires: python-devel >= 0:2.2, librsync-devel >= 0:0.9.6
%description
rdiff-backup is a script, written in Python, that backs up one
directory to another and is intended to be run periodically (nightly
from cron for instance). The target directory ends up a copy of the
source directory, but extra reverse diffs are stored in the target
directory, so you can still recover files lost some time ago. The idea
is to combine the best features of a mirror and an incremental
backup. rdiff-backup can also operate in a bandwidth efficient manner
over a pipe, like rsync. Thus you can use rdiff-backup and ssh to
securely back a hard drive up to a remote location, and only the
differences from the previous backup will be transmitted.
%prep
%setup -q
%build
python setup.py build
%install
python setup.py install --root $RPM_BUILD_ROOT
# Produce .pyo files for %ghost directive later
python -Oc 'from compileall import *; compile_dir("'$RPM_BUILD_ROOT/%{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup'")'
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%{_bindir}/rdiff-backup
%{_mandir}/man1/rdiff-backup*
%dir %{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup
%{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup/*.py
%{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup/*.pyc
%{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup/*.so
%ghost %{_libdir}/python%{PYTHON_VERSION}/site-packages/rdiff_backup/*.pyo
%doc CHANGELOG COPYING FAQ.html README
%changelog
* Sun Oct 05 2003 Ben Escoto <bescoto@stanford.edu> - 0:0.12.5-0.fdr.1
- Added epochs to python versions, more concise %%defines, %%ghost files
* Thu Aug 16 2003 Ben Escoto <bescoto@stanford.edu> - 0:0.12.3-0.fdr.4
- Implemented various suggestions of Fedora QA
* Sun Nov 4 2001 Ben Escoto <bescoto@stanford.edu>
- Initial RPM
#!/usr/bin/env python
import sys, os, getopt
from distutils.core import setup, Extension
version_string = "$version"
if sys.version_info[:2] < (2,2):
print "Sorry, rdiff-backup requires version 2.2 or later of python"
sys.exit(1)
# Defaults
lflags_arg = []
libname = ['rsync']
incdir_list = libdir_list = None
if os.name == 'posix':
LIBRSYNC_DIR = os.environ.get('LIBRSYNC_DIR', '')
# Split env values into lists, matching the --lflags/--libs handling below
LFLAGS = os.environ.get('LFLAGS', '').split()
LIBS = os.environ.get('LIBS', '').split()
# Handle --librsync-dir=[PATH] and --lflags=[FLAGS]
args = sys.argv[:]
for arg in args:
if arg.startswith('--librsync-dir='):
LIBRSYNC_DIR = arg.split('=')[1]
sys.argv.remove(arg)
elif arg.startswith('--lflags='):
LFLAGS = arg.split('=')[1].split()
sys.argv.remove(arg)
elif arg.startswith('--libs='):
LIBS = arg.split('=')[1].split()
sys.argv.remove(arg)
if LFLAGS or LIBS:
lflags_arg = LFLAGS + LIBS
if LIBRSYNC_DIR:
incdir_list = [os.path.join(LIBRSYNC_DIR, 'include')]
libdir_list = [os.path.join(LIBRSYNC_DIR, 'lib')]
if '-lrsync' in LIBS:
libname = []
setup(name="rdiff-backup",
version=version_string,
description="Local/remote mirroring+incremental backup",
author="Ben Escoto",
author_email="bescoto@stanford.edu",
url="http://rdiff-backup.stanford.edu",
packages = ['rdiff_backup'],
ext_modules = [Extension("rdiff_backup.C", ["cmodule.c"]),
Extension("rdiff_backup._librsync",
["_librsyncmodule.c"],
include_dirs=incdir_list,
library_dirs=libdir_list,
libraries=libname,
extra_link_args=lflags_arg)],
scripts = ['rdiff-backup'],
data_files = [('share/man/man1', ['rdiff-backup.1']),
('share/doc/rdiff-backup-%s' % (version_string,),
['CHANGELOG', 'COPYING', 'README', 'FAQ.html'])])
#!/usr/bin/env python
#
# Compresses old rdiff-backup increments. See
# http://www.stanford.edu/~bescoto/rdiff-backup for information on
# rdiff-backup.
from __future__ import nested_scopes, generators
import os, sys, getopt, re
rdiff_backup_location = "/usr/bin/rdiff-backup"
no_compression_regexp_string = None
__no_execute__ = 1
def print_help():
"""Print usage, exit"""
print """
Usage: compress-rdiff-backup-increments [options] mirror_directory
This script will compress the old rdiff-backup increments under
mirror_directory, in a format compatible with rdiff-backup version
0.7.1 and later. So for instance if you were using an old version
of rdiff-backup like this:
rdiff-backup /foo /backup
and now you want to take advantage of v0.7.1's space saving
compression, you can run:
compress-rdiff-backup-increments /backup
Options:
--rdiff-backup-location location
This script reads your rdiff-backup executable. The default
is "/usr/bin/rdiff-backup", so if your rdiff-backup is in a
different location, you must use this switch.
--no-compression-regexp regexp
Any increments whose base name matches this regular expression
won't be compressed. This is generally used to avoid
compressing already compressed files. See the rdiff-backup
man page for the default.
"""
sys.exit(1)
def parse_args(arglist):
"""Check and evaluate command line arguments, return dirname"""
global rdiff_backup_location
global no_compression_regexp_string
try: optlist, args = getopt.getopt(arglist, "v:",
["rdiff-backup-location=",
"no-compression-regexp="])
except getopt.error: print_help()
for opt, arg in optlist:
if opt == "--no-compression-regexp":
no_compression_regexp_string = arg
elif opt == "--rdiff-backup-location": rdiff_backup_location = arg
else:
print "Bad option: ", opt
print_help()
if len(args) != 1:
print "Wrong number of arguments"
print_help()
return args[0]
def exec_rdiff_backup():
"""Execs rdiff-backup"""
try: execfile(rdiff_backup_location, globals())
except IOError:
print "Unable to read", rdiff_backup_location
print "You may need to use the --rdiff-backup-location argument"
sys.exit(1)
if not map(int, Globals.version.split(".")) >= [0, 7, 1]:
print "This script requires rdiff-backup version 0.7.1 or later,",
print "found version", Globals.version
sys.exit(1)
def gzip_file(rp):
"""gzip rp, adding .gz to path and deleting original"""
newrp = RPath(rp.conn, rp.base, rp.index[:-1] + (rp.index[-1]+".gz",))
if newrp.lstat():
print "Warning: %s already exists, skipping" % newrp.path
return
print "gzipping ", rp.path
newrp.write_from_fileobj(rp.open("rb"), compress = 1)
RPath.copy_attribs(rp, newrp)
rp.delete()
def Main():
dirname = parse_args(sys.argv[1:])
exec_rdiff_backup()
if no_compression_regexp_string is not None:
no_compression_regexp = re.compile(no_compression_regexp_string, re.I)
else: no_compression_regexp = \
re.compile(Globals.no_compression_regexp_string, re.I)
Globals.change_source_perms = 1
Globals.change_ownership = (os.getuid() == 0)
# Check to make sure rbdir exists
root_rp = RPath(Globals.local_connection, dirname)
rbdir = root_rp.append("rdiff-backup-data")
if not rbdir.lstat():
print "Cannot find %s, exiting" % rbdir.path
sys.exit(1)
for dsrp in DestructiveStepping.Iterate_with_Finalizer(rbdir, 1):
if (dsrp.isincfile() and dsrp.isreg() and
not dsrp.isinccompressed() and
(dsrp.getinctype() == "diff" or dsrp.getinctype() == "snapshot")
and dsrp.getsize() != 0 and
not no_compression_regexp.match(dsrp.getincbase_str())):
gzip_file(dsrp)
Main()
#!/usr/bin/env python
from __future__ import generators
import sys, os, stat
def usage():
print "Usage: find2dirs dir1 dir2"
print
print "Given the name of two directories, list all the files in both, one"
print "per line, but don't repeat a file even if it is in both directories"
sys.exit(1)
def getlist(base, ext = ""):
"""Return iterator yielding filenames from directory"""
if ext: yield ext
else: yield "."
fullname = os.path.join(base, ext)
if stat.S_ISDIR(stat.S_IFMT(os.lstat(fullname)[stat.ST_MODE])):
for subfile in os.listdir(fullname):
for fn in getlist(base, os.path.join(ext, subfile)): yield fn
def main(dir1, dir2):
d = {}
for fn in getlist(dir1): d[fn] = 1
for fn in getlist(dir2): d[fn] = 1
for fn in d.keys(): print fn
if not len(sys.argv) == 3: usage()
else: main(sys.argv[1], sys.argv[2])
#!/usr/bin/env python
"""init_smallfiles.py
This program makes a number of files of the given size in the
specified directory.
"""
import os, stat, sys, math
if len(sys.argv) > 5 or len(sys.argv) < 4:
print "Usage: init_files [directory name] [file size] [file count] [base]"
print
print "Creates file_count files in directory_name of size file_size."
print "The created directory has a tree type structure where each level"
print "has at most base files or directories in it. Default is 50."
sys.exit(1)
dirname = sys.argv[1]
filesize = int(sys.argv[2])
filecount = int(sys.argv[3])
block_size = 16384
block = "." * block_size
block_change = "." * (filesize % block_size)
if len(sys.argv) == 4: base = 50
else: base = int(sys.argv[4])
def make_file(path):
"""Make the file at path"""
fp = open(path, "w")
for i in xrange(int(math.floor(filesize/block_size))): fp.write(block)
fp.write(block_change)
fp.close()
def find_sublevels(count):
"""Return number of sublevels required for count files"""
return int(math.ceil(math.log(count)/math.log(base)))
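# For example, with the default base of 50, find_sublevels(100) returns 2,
# since two directory levels are needed to hold 100 files at 50 entries each.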
def make_dir(dir, count):
"""Make count files in the directory, making subdirectories if necessary"""
print "Making directory %s with %d files" % (dir, count)
os.mkdir(dir)
level = find_sublevels(count)
assert count <= pow(base, level)
if level == 1:
for i in range(count): make_file(os.path.join(dir, "file%d" %i))
else:
files_per_subdir = pow(base, level-1)
full_dirs = int(count/files_per_subdir)
assert full_dirs <= base
for i in range(full_dirs):
make_dir(os.path.join(dir, "subdir%d" % i), files_per_subdir)
change = count - full_dirs*files_per_subdir
assert change >= 0
if change > 0:
make_dir(os.path.join(dir, "subdir%d" % full_dirs), change)
def start(dir):
try: os.stat(dir)
except os.error: pass
else:
print "Directory %s already exists, exiting." % dir
sys.exit(1)
make_dir(dirname, filecount)
start(dirname)
#!/usr/bin/env python
"""Use librsync to transform everything in one dir to another"""
import sys, os, librsync
dir1, dir2 = sys.argv[1:3]
for i in xrange(1000):
dir1fn = "%s/%s" % (dir1, i)
dir2fn = "%s/%s" % (dir2, i)
# Write signature file
f1 = open(dir1fn, "rb")
sigfile = open("sig", "wb")
librsync.filesig(f1, sigfile, 2048)
f1.close()
sigfile.close()
# Write delta file
f2 = open(dir2fn, "r")
sigfile = open("sig", "rb")
deltafile = open("delta", "wb")
librsync.filerdelta(sigfile, f2, deltafile)
f2.close()
sigfile.close()
deltafile.close()
# Write patched file
f1 = open(dir1fn, "rb")
newfile = open("%s/%s.out" % (dir1, i), "wb")
deltafile = open("delta", "rb")
librsync.filepatch(f1, deltafile, newfile)
f1.close()
deltafile.close()
newfile.close()
#!/usr/bin/env python
"""Make 10000 files consisting of data
Syntax: test.py directory_name number_of_files character filelength"""
import sys, os
dirname = sys.argv[1]
num_files = int(sys.argv[2])
character = sys.argv[3]
filelength = int(sys.argv[4])
os.mkdir(dirname)
for i in xrange(num_files):
fp = open("%s/%s" % (dirname, i), "w")
fp.write(character * filelength)
fp.close()
fp = open("%s.big" % dirname, "w")
fp.write(character * (filelength*num_files))
fp.close()
#!/usr/bin/python
import sys, os
#sys.path.insert(0, "../src")
from rdiff_backup.rpath import *
from rdiff_backup.connection import *
from rdiff_backup import Globals
lc = Globals.local_connection
for filename in sys.argv[1:]:
#print "Deleting %s" % filename
rp = RPath(lc, filename)
if rp.lstat(): rp.delete()
#os.system("rm -rf " + rp.path)
#!/usr/bin/env python
"""Run rdiff to transform everything in one dir to another"""
import sys, os
dir1, dir2 = sys.argv[1:3]
for i in xrange(1000):
assert not os.system("rdiff signature %s/%s sig" % (dir1, i))
assert not os.system("rdiff delta sig %s/%s diff" % (dir2, i))
assert not os.system("rdiff patch %s/%s diff %s/%s.out" %
(dir1, i, dir1, i))
#!/usr/bin/env python
"""remove-comments.py
Given a python program on standard input, spit one out on stdout that
should work the same, but has blank and comment lines removed.
"""
import sys, re
triple_regex = re.compile('"""')
def eattriple(initial_line_stripped):
"""Keep reading until end of doc string"""
assert initial_line_stripped.startswith('"""')
if triple_regex.search(initial_line_stripped[3:]): return
while 1:
line = sys.stdin.readline()
if not line or triple_regex.search(line): break
while 1:
line = sys.stdin.readline()
if not line: break
stripped = line.strip()
if not stripped: continue
if stripped[0] == "#": continue
if stripped.startswith('"""'):
eattriple(stripped)
continue
sys.stdout.write(line)
#!/usr/bin/env python
import sys
from rdiff_backup import librsync
blocksize = 64*1024 # just used in copying
librsync_blocksize = None # If not set, use defaults in _librsync module
def usage():
"""Print usage and then exit"""
print """
Usage: %(cmd)s "signature" basis_file signature_file
%(cmd)s "delta" signature_file new_file delta_file
%(cmd)s "patch" basis_file delta_file new_file
""" % {'cmd': sys.argv[0]}
sys.exit(1)
def copy_and_close(infp, outfp):
"""Copy file streams infp to outfp in blocks, closing when done"""
while 1:
buf = infp.read(blocksize)
if not buf: break
outfp.write(buf)
assert not infp.close() and not outfp.close()
def write_sig(input_path, output_path):
"""Open file at input_path, write signature to output_path"""
infp = open(input_path, "rb")
if librsync_blocksize: sigfp = librsync.SigFile(infp, librsync_blocksize)
else: sigfp = librsync.SigFile(infp)
copy_and_close(sigfp, open(output_path, "wb"))
def write_delta(sig_path, new_path, output_path):
"""Read signature and new file, write to delta to delta_path"""
deltafp = librsync.DeltaFile(open(sig_path, "rb"), open(new_path, "rb"))
copy_and_close(deltafp, open(output_path, "wb"))
def write_patch(basis_path, delta_path, out_path):
"""Patch file at basis_path with delta at delta_path, write to out_path"""
patchfp = librsync.PatchedFile(open(basis_path, "rb"),
open(delta_path, "rb"))
copy_and_close(patchfp, open(out_path, "wb"))
def check_different(filelist):
"""Make sure no files the same"""
d = {}
for file in filelist: d[file] = file
assert len(d) == len(filelist), "Error, must use all different filenames"
def Main():
"""Run program"""
if len(sys.argv) < 4: usage()
mode = sys.argv[1]
file_args = sys.argv[2:]
check_different(file_args)
if mode == "signature":
if len(file_args) != 2: usage()
write_sig(*file_args)
elif mode == "delta":
if len(file_args) != 3: usage()
write_delta(*file_args)
elif mode == "patch":
if len(file_args) != 3: usage()
write_patch(*file_args)
else: usage()
if __name__ == "__main__": Main()
#!/usr/bin/env python
# rdiff-backup -- Mirror files while keeping incremental changes
# Version $version released $date
# Copyright (C) 2001, 2002, 2003 Ben Escoto <bescoto@stanford.edu>
#
# This program is licensed under the GNU General Public License (GPL).
# you can redistribute it and/or modify it under the terms of the GNU
# General Public License as published by the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA; either
# version 2 of the License, or (at your option) any later version.
# Distributions of rdiff-backup should include a copy of the GPL in a
# file called COPYING. The GPL is also available online at
# http://www.gnu.org/copyleft/gpl.html.
#
# See http://rdiff-backup.stanford.edu/ for more information. Please
# send mail to me or the mailing list if you find bugs or have any
# suggestions.
import sys
import rdiff_backup.Main
if __name__ == "__main__" and not globals().has_key('__no_execute__'):
rdiff_backup.Main.Main(sys.argv[1:])
# Copyright 2002, 2003 Ben Escoto
#
# This file is part of rdiff-backup.
#
# rdiff-backup is free software; you can redistribute it and/or modify
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# rdiff-backup is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with rdiff-backup; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
"""Coordinate corresponding files with different names
For instance, some source filenames may contain characters not allowed
on the mirror end. These files must be called something different on
the mirror end, so we escape the offending characters with semicolons.
One problem/complication is that all this escaping may put files over
the 256 or whatever limit on the length of file names. (We just don't
handle that error.)
"""
import re, types
import Globals, log, rpath
max_filename_length = 255
# If true, enable character quoting, and set characters making
# regex-style range.
chars_to_quote = None
# These compiled regular expressions are used in quoting and unquoting
chars_to_quote_regexp = None
unquoting_regexp = None
# Use given char to quote. Default is set in Globals.
quoting_char = None
class QuotingException(Exception): pass
def set_init_quote_vals():
"""Set quoting value from Globals on all conns"""
for conn in Globals.connections:
conn.FilenameMapping.set_init_quote_vals_local()
def set_init_quote_vals_local():
"""Set value on local connection, initialize regexps"""
global chars_to_quote, quoting_char
chars_to_quote = Globals.chars_to_quote
if len(Globals.quoting_char) != 1:
log.Log.FatalError("Expected single character for quoting char,"
"got '%s' instead" % (Globals.quoting_char,))
quoting_char = Globals.quoting_char
init_quoting_regexps()
def init_quoting_regexps():
"""Compile quoting regular expressions"""
global chars_to_quote_regexp, unquoting_regexp
assert chars_to_quote and type(chars_to_quote) is types.StringType, \
"Chars to quote: '%s'" % (chars_to_quote,)
try:
chars_to_quote_regexp = \
re.compile("[%s]|%s" % (chars_to_quote, quoting_char), re.S)
unquoting_regexp = re.compile("%s[0-9]{3}" % quoting_char, re.S)
except re.error, exc:
log.Log.FatalError("Error '%s' when processing char quote list '%s'" %
(exc, chars_to_quote))
def quote(path):
"""Return quoted version of given path
Any characters quoted will be replaced by the quoting char and
the ascii number of the character. For instance, "10:11:12"
would go to "10;05811;05812" if ":" were quoted and ";" were
the quoting character.
"""
return chars_to_quote_regexp.sub(quote_single, path)
def quote_single(match):
"""Return replacement for a single character"""
return "%s%03d" % (quoting_char, ord(match.group()))
def unquote(path):
"""Return original version of quoted filename"""
return unquoting_regexp.sub(unquote_single, path)
def unquote_single(match):
"""Unquote a single quoted character"""
if not len(match.group()) == 4:
raise QuotingException("Quoted group wrong size: " + match.group())
try: return chr(int(match.group()[1:]))
except ValueError:
raise QuotingException("Quoted out of range: " + match.group())
class QuotedRPath(rpath.RPath):
"""RPath where the filename is quoted version of index
We use QuotedRPaths so we don't need to remember to quote RPaths
derived from this one (via append or new_index). Note that only
the index is quoted, not the base.
"""
def __init__(self, connection, base, index = (), data = None):
"""Make new QuotedRPath"""
quoted_index = tuple(map(quote, index))
rpath.RPath.__init__(self, connection, base, quoted_index, data)
self.index = index
def listdir(self):
"""Return list of unquoted filenames in current directory
We want them unquoted so that the results can be sorted
correctly and append()ed to the current QuotedRPath.
"""
return map(unquote, self.conn.os.listdir(self.path))
def __str__(self):
return "QuotedPath: %s\nIndex: %s\nData: %s" % \
(self.path, self.index, self.data)
def isincfile(self):
"""Return true if path indicates increment, sets various variables"""
if not self.index: # consider the last component as quoted
dirname, basename = self.dirsplit()
temp_rp = rpath.RPath(self.conn, dirname, (unquote(basename),))
result = temp_rp.isincfile()
if result:
self.inc_basestr = unquote(temp_rp.inc_basestr)
self.inc_timestr = unquote(temp_rp.inc_timestr)
else:
result = rpath.RPath.isincfile(self)
if result: self.inc_basestr = unquote(self.inc_basestr)
return result
def get_quotedrpath(rp, separate_basename = 0):
"""Return quoted version of rpath rp"""
assert not rp.index # Why would we start quoting "in the middle"?
if separate_basename:
dirname, basename = rp.dirsplit()
return QuotedRPath(rp.conn, dirname, (unquote(basename),), rp.data)
else: return QuotedRPath(rp.conn, rp.base, (), rp.data)
def get_quoted_sep_base(filename):
"""Get QuotedRPath from filename assuming last bit is quoted"""
return get_quotedrpath(rpath.RPath(Globals.local_connection, filename), 1)
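# Hypothetical sketch: with chars_to_quote set to "A-Z" and ";" as the
# quoting character, QuotedRPath(conn, "/backup", ("Mixed", "Case")) refers
# to /backup/;077ixed/;067ase on disk while its .index stays ("Mixed", "Case").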
# Copyright 2002 Ben Escoto
#
# This file is part of rdiff-backup.
#
# rdiff-backup is free software; you can redistribute it and/or modify
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# rdiff-backup is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with rdiff-backup; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
"""Preserve and restore hard links
If the preserve_hardlinks option is selected, linked files in the
source directory will be linked in the mirror directory. Linked files
are treated like any other with respect to incrementing, but their
link status can be retrieved because their device location and inode
number are written in the metadata file.
All these functions are meant to be executed on the mirror side. The
source side should only transmit inode information.
"""
from __future__ import generators
import cPickle
import Globals, Time, rpath, log, robust, errno
# The keys in this dictionary are (inode, devloc) pairs. The values
# are a pair (index, remaining_links, dest_key) where index is the
# rorp index of the first such linked file, remaining_links is the
# number of files hard linked to this one that we may still see, and
# dest_key is either (dest_inode, dest_devloc), "NA", or None, and
# represents the hardlink info of the existing file on the destination.
_inode_index = None
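# For illustration (hypothetical numbers), an entry might look like:
#   _inode_index[(137912, 2049)] = (('home', 'file1'), 3, None)
# i.e. a file with 3 links was first seen at index ('home', 'file1'), and
# no hardlink info was available yet for the corresponding destination file.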
def initialize_dictionaries():
"""Set all the hard link dictionaries to empty"""
global _inode_index
_inode_index = {}
def clear_dictionaries():
"""Delete all dictionaries"""
global _inode_index
_inode_index = None
def get_inode_key(rorp):
"""Return rorp's key for _inode_ dictionaries"""
return (rorp.getinode(), rorp.getdevloc())
def add_rorp(rorp, dest_rorp = None):
"""Process new rorp and update hard link dictionaries"""
if not rorp.isreg() or rorp.getnumlinks() < 2: return
rp_inode_key = get_inode_key(rorp)
if not _inode_index.has_key(rp_inode_key):
if not dest_rorp: dest_key = None
elif dest_rorp.getnumlinks() == 1: dest_key = "NA"
else: dest_key = get_inode_key(dest_rorp)
_inode_index[rp_inode_key] = (rorp.index, rorp.getnumlinks(), dest_key)
def del_rorp(rorp):
"""Remove rorp information from dictionary if seen all links"""
if not rorp.isreg() or rorp.getnumlinks() < 2: return
rp_inode_key = get_inode_key(rorp)
val = _inode_index.get(rp_inode_key)
if not val: return
index, remaining, dest_key = val
if remaining == 1: del _inode_index[rp_inode_key]
else: _inode_index[rp_inode_key] = (index, remaining-1, dest_key)
def rorp_eq(src_rorp, dest_rorp):
"""Compare hardlinked for equality
Return false if dest_rorp is linked differently, which can happen
if dest is linked more than source, or if it is represented by a
different inode.
"""
if (not src_rorp.isreg() or not dest_rorp.isreg() or
src_rorp.getnumlinks() == dest_rorp.getnumlinks() == 1):
return 1 # Hard links don't apply
if src_rorp.getnumlinks() < dest_rorp.getnumlinks(): return 0
src_key = get_inode_key(src_rorp)
index, remaining, dest_key = _inode_index[src_key]
if dest_key == "NA":
# Allow this to be ok for first comparison, but not any
# subsequent ones
_inode_index[src_key] = (index, remaining, None)
return 1
return dest_key == get_inode_key(dest_rorp)
def islinked(rorp):
"""True if rorp's index is already linked to something on src side"""
if not rorp.getnumlinks() > 1: return 0
dict_val = _inode_index.get(get_inode_key(rorp))
if not dict_val: return 0
return dict_val[0] != rorp.index # If equal, then rorp is first
def get_link_index(rorp):
"""Return first index on target side rorp is already linked to"""
return _inode_index[get_inode_key(rorp)][0]
def link_rp(diff_rorp, dest_rpath, dest_root = None):
"""Make dest_rpath into a link using link flag in diff_rorp"""
if not dest_root: dest_root = dest_rpath # use base of dest_rpath
dest_link_rpath = dest_root.new_index(diff_rorp.get_link_flag())
try: dest_rpath.hardlink(dest_link_rpath.path)
except EnvironmentError, exc:
# This can happen if the source of dest_link_rpath was deleted
# after its linking info was recorded but before
# dest_link_rpath was written.
if errno.errorcode[exc[0]] == 'ENOENT':
dest_rpath.touch() # This will cause an UpdateError later
else: raise Exception("EnvironmentError '%s' linking %s to %s" %
(exc, dest_rpath.path, dest_link_rpath.path))
# Copyright 2002 Ben Escoto
#
# This file is part of rdiff-backup.
#
# rdiff-backup is free software; you can redistribute it and/or modify
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# rdiff-backup is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with rdiff-backup; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
"""Invoke rdiff utility to make signatures, deltas, or patch"""
import os, librsync
import Globals, log, static, TempFile, rpath
def get_signature(rp, blocksize = None):
"""Take signature of rpin file and return in file object"""
if not blocksize: blocksize = find_blocksize(rp.getsize())
log.Log("Getting signature of %s with blocksize %s" %
(rp.get_indexpath(), blocksize), 7)
return librsync.SigFile(rp.open("rb"), blocksize)
def find_blocksize(file_len):
"""Return a reasonable block size to use on files of length file_len
If the block size is too big, deltas will be bigger than is
necessary. If the block size is too small, making deltas and
patching can take a really long time.
"""
if file_len < 1024000: return 512 # set minimum of 512 bytes
else: # Split file into about 2000 pieces, rounding to 512
return long((file_len/(2000*512))*512)
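# Worked example for find_blocksize (Python 2 integer division assumed):
# for a 100 MB file, find_blocksize(100000000) == (100000000/(2000*512))*512
# == 97*512 == 49664 bytes per block.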
def get_delta_sigfileobj(sig_fileobj, rp_new):
"""Like get_delta but signature is in a file object"""
log.Log("Getting delta of %s with signature stream" % (rp_new.path,), 7)
return librsync.DeltaFile(sig_fileobj, rp_new.open("rb"))
def get_delta_sigrp(rp_signature, rp_new):
"""Take signature rp and new rp, return delta file object"""
log.Log("Getting delta of %s with signature %s" %
(rp_new.path, rp_signature.get_indexpath()), 7)
return librsync.DeltaFile(rp_signature.open("rb"), rp_new.open("rb"))
def write_delta(basis, new, delta, compress = None):
"""Write rdiff delta which brings basis to new"""
log.Log("Writing delta %s from %s -> %s" %
(basis.path, new.path, delta.path), 7)
deltafile = librsync.DeltaFile(get_signature(basis), new.open("rb"))
delta.write_from_fileobj(deltafile, compress)
def write_patched_fp(basis_fp, delta_fp, out_fp):
"""Write patched file to out_fp given input fps. Closes input files"""
rpath.copyfileobj(librsync.PatchedFile(basis_fp, delta_fp), out_fp)
assert not basis_fp.close() and not delta_fp.close()
def write_via_tempfile(fp, rp):
"""Write fileobj fp to rp by writing to tempfile and renaming"""
tf = TempFile.new(rp)
tf.write_from_fileobj(fp)
rpath.rename(tf, rp)
def patch_local(rp_basis, rp_delta, outrp = None, delta_compressed = None):
"""Patch routine that must be run locally, writes to outrp
This should be run local to rp_basis because it needs to be a real
file (librsync may need to seek around in it). If outrp is None,
patch rp_basis instead.
"""
assert rp_basis.conn is Globals.local_connection
if delta_compressed: deltafile = rp_delta.open("rb", 1)
else: deltafile = rp_delta.open("rb")
sigfile = librsync.SigFile(rp_basis.open("rb"))
patchfile = librsync.PatchedFile(rp_basis.open("rb"), deltafile)
if outrp: outrp.write_from_fileobj(patchfile)
else: write_via_tempfile(patchfile, rp_basis)
def copy_local(rpin, rpout, rpnew = None):
"""Write rpnew == rpin using rpout as basis. rpout and rpnew local"""
assert rpout.conn is Globals.local_connection
deltafile = rpin.conn.librsync.DeltaFile(get_signature(rpout),
rpin.open("rb"))
patched_file = librsync.PatchedFile(rpout.open("rb"), deltafile)
if rpnew: rpnew.write_from_fileobj(patched_file)
else: write_via_tempfile(patched_file, rpout)
# Copyright 2002 Ben Escoto
#
# This file is part of rdiff-backup.
#
# rdiff-backup is free software; you can redistribute it and/or modify
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# rdiff-backup is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with rdiff-backup; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
"""Manage temp files
Earlier this had routines for keeping track of existing tempfiles.
Now we just use normal rpaths instead of the TempFile class.
"""
import os
import Globals, rpath, log
# To make collisions less likely, this gets put in the file name
# and incremented whenever a new file is requested.
_tfindex = 0
def new(rp_base):
"""Return new tempfile that isn't in use in same dir as rp_base"""
return new_in_dir(rp_base.get_parent_rp())
def new_in_dir(dir_rp):
"""Return new temp rpath in directory dir_rp"""
global _tfindex
assert dir_rp.conn is Globals.local_connection
while 1:
if _tfindex > 100000000:
Log("Warning: Resetting tempfile index", 2)
_tfindex = 0
tf = dir_rp.append('rdiff-backup.tmp.%d' % _tfindex)
_tfindex = _tfindex+1
if not tf.lstat(): return tf
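# For instance, successive calls in the same directory yield tempfiles named
# rdiff-backup.tmp.0, rdiff-backup.tmp.1, ..., skipping any name already in use.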
class TempFile(rpath.RPath):
"""Like an RPath, but keep track of which ones are still here"""
def rename(self, rp_dest):
"""Rename temp file to permanent location, possibly overwriting"""
if not self.lstat(): # "Moving" empty file, so just delete
if rp_dest.lstat(): rp_dest.delete()
remove_listing(self)
return
if self.isdir() and not rp_dest.isdir():
# Cannot move a directory directly over another file
rp_dest.delete()
rpath.rename(self, rp_dest)
# Sometimes this just seems to fail silently, as when one
# hardlinked twin is moved over the other. So check to make
# sure below.
self.setdata()
if self.lstat():
rp_dest.delete()
rpath.rename(self, rp_dest)
self.setdata()
if self.lstat(): raise OSError("Cannot rename tmp file correctly")
remove_listing(self)
def delete(self):
rpath.RPath.delete(self)
remove_listing(self)
#!/bin/sh
# This script will create the testing/restoretest3 directory as it
# needs to be for one of the tests in restoretest.py to work.
rm -rf testfiles/restoretest3
rdiff-backup --current-time 10000 testfiles/increment1 testfiles/restoretest3
rdiff-backup --current-time 20000 testfiles/increment2 testfiles/restoretest3
rdiff-backup --current-time 30000 testfiles/increment3 testfiles/restoretest3
rdiff-backup --current-time 40000 testfiles/increment4 testfiles/restoretest3