Commit 02a18650 authored by Christian Theune

- restified

 - added test for custom openDetached() class
parent d8a3b4d1
##############################################################################
#
-# Copyright (c) 2005 Zope Corporation and Contributors.
+# Copyright (c) 2005-2007 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
@@ -15,7 +15,7 @@
Transaction support for Blobs
=============================
-We need a database with a blob supporting storage:
+We need a database with a blob supporting storage::
>>> from ZODB.MappingStorage import MappingStorage
>>> from ZODB.Blobs.BlobStorage import BlobStorage
@@ -30,8 +30,8 @@ We need a database with a blob supporting storage:
>>> connection1 = database.open()
>>> root1 = connection1.root()
>>> from ZODB.Blobs.Blob import Blob
-Putting a Blob into a Connection works like any other Persistent object:
+Putting a Blob into a Connection works like any other Persistent object::
>>> blob1 = Blob()
>>> blob1.open('w').write('this is blob 1')
@@ -39,7 +39,7 @@ Putting a Blob into a Connection works like any other Persistent object:
>>> transaction.commit()
Aborting a transaction involving a blob write cleans up uncommitted
-file data:
+file data::
>>> dead_blob = Blob()
>>> dead_blob.open('w').write('this is a dead blob')
@@ -53,7 +53,7 @@ file data:
False
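ZODB itself is not importable from this excerpt, so here is a minimal standalone sketch (all names hypothetical, not the real Blob implementation) of the cleanup behaviour the doctest checks: uncommitted blob data lives in a scratch file on disk, and an abort simply deletes it.

```python
import os
import tempfile

# Hypothetical sketch, not ZODB code: uncommitted blob data is kept in
# a scratch file; aborting the "transaction" deletes it again.
class SketchUncommittedBlob:
    def __init__(self):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def write(self, data):
        with open(self.path, "ab") as f:
            f.write(data)

    def abort(self):
        # discard the uncommitted data entirely
        os.remove(self.path)
```

After `abort()`, the scratch file is gone, which is what the `False` from the existence check above corresponds to.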
Opening a blob gives us a filehandle. Getting data out of the
-resulting filehandle is accomplished via the filehandle's read method:
+resulting filehandle is accomplished via the filehandle's read method::
>>> connection2 = database.open()
>>> root2 = connection2.root()
@@ -70,7 +70,7 @@ resulting filehandle is accomplished via the filehandle's read method:
Let's make another filehandle for read only to blob1a, this should bump
up its refcount by one, and each file handle has a reference to the
-(same) underlying blob:
+(same) underlying blob::
>>> blob1afh2 = blob1a.open("r")
>>> blob1afh2.blob._p_blob_refcounts()
@@ -81,7 +81,7 @@ up its refcount by one, and each file handle has a reference to the
True
Let's close the first filehandle we got from the blob, this should decrease
-its refcount by one:
+its refcount by one::
>>> blob1afh1.close()
>>> blob1a._p_blob_refcounts()
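The `(readcount, writecount)` bookkeeping behind `_p_blob_refcounts()` can be illustrated without ZODB; a minimal stand-in (hypothetical names, not the real Blob class) might look like this:

```python
# Hypothetical sketch of the refcount bookkeeping: opening a handle
# bumps the matching counter, closing it drops the counter again, and
# every handle keeps a reference to the (same) underlying blob.
class SketchBlob:
    def __init__(self):
        self.readers = 0
        self.writers = 0

    def open(self, mode="r"):
        if "r" in mode:
            self.readers += 1
        else:
            self.writers += 1
        return SketchBlobFile(self, mode)

    def _p_blob_refcounts(self):
        # mirrors the (readcount, writecount) pair shown in the doctest
        return self.readers, self.writers

class SketchBlobFile:
    def __init__(self, blob, mode):
        self.blob = blob  # each handle references the same blob
        self.mode = mode

    def close(self):
        if "r" in self.mode:
            self.blob.readers -= 1
        else:
            self.blob.writers -= 1
```

Two read handles give a readcount of two; closing one brings it back to one, as in the doctest above.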
@@ -89,7 +89,7 @@ its refcount by one:
Let's abort this transaction, and ensure that the filehandles that we
opened are now closed and that the filehandle refcounts on the blob
-object are cleared.
+object are cleared::
>>> transaction.abort()
>>> blob1afh1.blob._p_blob_refcounts()
@@ -106,7 +106,7 @@ object are cleared.
If we open a blob for append, its write refcount should be nonzero.
Additionally, writing any number of bytes to the blobfile should
result in the blob being marked "dirty" in the connection (we just
-aborted above, so the object should be "clean" when we start):
+aborted above, so the object should be "clean" when we start)::
>>> bool(blob1a._p_changed)
False
@@ -120,7 +120,7 @@ aborted above, so the object should be "clean" when we start):
True
We can open more than one blob object during the course of a single
-transaction:
+transaction::
>>> blob2 = Blob()
>>> blob2.open('w').write('this is blob 3')
@@ -133,7 +133,7 @@ transaction:
Since we committed the current transaction above, the aggregate
changes we've made to blob, blob1a (these refer to the same object) and
-blob2 (a different object) should be evident:
+blob2 (a different object) should be evident::
>>> blob1.open('r').read()
'this is blob 1woot!'
@@ -145,7 +145,7 @@ blob2 (a different object) should be evident:
We shouldn't be able to persist a blob filehandle at commit time
(although the exception which is raised when an object cannot be
pickled appears to be particularly unhelpful for casual users at the
-moment):
+moment)::
>>> root1['wontwork'] = blob1.open('r')
>>> transaction.commit()
@@ -153,12 +153,12 @@ moment):
...
TypeError: coercing to Unicode: need string or buffer, BlobFile found
-Abort for good measure:
+Abort for good measure::
>>> transaction.abort()
Attempting to change a blob simultaneously from two different
-connections should result in a write conflict error.
+connections should result in a write conflict error::
>>> tm1 = transaction.TransactionManager()
>>> tm2 = transaction.TransactionManager()
@@ -179,7 +179,7 @@ connections should result in a write conflict error.
ConflictError: database conflict error (oid 0x01, class ZODB.Blobs.Blob.Blob)
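The conflict mechanism can be sketched without ZODB. In this hypothetical stand-in (not the real storage code), every stored value carries a serial number, and a store based on a stale serial is refused:

```python
# Hypothetical sketch of write-conflict detection: each "connection"
# remembers the serial it read; a store whose base serial no longer
# matches the current one raises a conflict error.
class SketchConflictError(Exception):
    pass

class SketchStore:
    def __init__(self):
        self.serial = 0
        self.value = None

    def load(self):
        # returns the value plus the serial the reader should remember
        return self.value, self.serial

    def store(self, value, base_serial):
        if base_serial != self.serial:
            raise SketchConflictError("database conflict error")
        self.serial += 1
        self.value = value
```

Whichever writer commits first wins; the second writer's store is rejected, matching the ConflictError shown above.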
After the conflict, the winning transaction's result is visible on both
-connections:
+connections::
>>> root3['blob1'].open('r').read()
'this is blob 1woot!this is from connection 3'
@@ -188,7 +188,7 @@ connections:
'this is blob 1woot!this is from connection 3'
BlobStorage's implementation of getSize() includes the blob data and adds it to
-the underlying storage's result of getSize():
+the underlying storage's result of getSize()::
>>> underlying_size = base_storage.getSize()
>>> blob_size = blob_storage.getSize()
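The size accounting can be sketched standalone (hypothetical names, not the real BlobStorage): `getSize()` reports the wrapped storage's size plus the bytes of every file in the blob directory.

```python
import os
import tempfile

# Hypothetical stand-in for the underlying storage.
class FakeBaseStorage:
    def getSize(self):
        return 1000  # pretend the underlying storage holds 1000 bytes

# Hypothetical sketch of the wrapper: its getSize() is the base size
# plus the total size of all blob files on disk.
class SketchBlobStorage:
    def __init__(self, base_storage, blob_dir):
        self.base_storage = base_storage
        self.blob_dir = blob_dir

    def getSize(self):
        blob_bytes = 0
        for dirpath, _dirs, files in os.walk(self.blob_dir):
            for name in files:
                blob_bytes += os.path.getsize(os.path.join(dirpath, name))
        return self.base_storage.getSize() + blob_bytes
```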
@@ -199,7 +199,7 @@ the underlying storages result of getSize():
Savepoints and Blobs
--------------------
-We do support optimistic savepoints :
+We do support optimistic savepoints ::
>>> connection5 = database.open()
>>> root5 = connection5.root()
@@ -211,7 +211,7 @@ We do support optimistic savepoints :
>>> transaction.commit()
>>> root5['blob'].open("rb").read()
"I'm a happy blob."
>>> blob_fh = root5['blob'].open("a")
>>> blob_fh.write(" And I'm singing.")
>>> blob_fh.close()
>>> root5['blob'].open("rb").read()
@@ -221,9 +221,9 @@ We do support optimistic savepoints :
"I'm a happy blob. And I'm singing."
>>> transaction.get().commit()
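The optimistic-savepoint idea for file data can be sketched standalone (hypothetical names, not the ZODB savepoint machinery): the savepoint snapshots the current file contents so a rollback can restore them; it takes no lock against other writers, which is what makes it "optimistic".

```python
import os
import tempfile

# Hypothetical sketch: a savepoint for a blob's file data that records
# the contents at creation time and restores them on rollback().
class SketchFileSavepoint:
    def __init__(self, path):
        self.path = path
        with open(path, "rb") as f:
            self.snapshot = f.read()

    def rollback(self):
        with open(self.path, "wb") as f:
            f.write(self.snapshot)
```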
-We do not support non-optimistic savepoints:
+We do not support non-optimistic savepoints::
>>> blob_fh = root5['blob'].open("a")
>>> blob_fh.write(" And the weather is beautiful.")
>>> blob_fh.close()
>>> root5['blob'].open("rb").read()
@@ -238,7 +238,7 @@ Reading Blobs outside of a transaction
--------------------------------------
If you want to read from a Blob outside of transaction boundaries (e.g. to
-stream a file to the browser), you can use the openDetached() method:
+stream a file to the browser), you can use the openDetached() method::
>>> connection6 = database.open()
>>> root6 = connection6.root()
@@ -251,7 +251,7 @@ stream a file to the browser), you can use the openDetached() method:
>>> blob.openDetached().read()
"I'm a happy blob."
-Of course, that doesn't work for empty blobs
+Of course, that doesn't work for empty blobs::
>>> blob = Blob()
>>> blob.openDetached()
@@ -259,7 +259,7 @@ Of course, that doesn't work for empty blobs
...
BlobError: Blob does not exist.
-nor when the Blob is already opened for writing:
+nor when the Blob is already opened for writing::
>>> blob = Blob()
>>> blob_fh = blob.open("wb")
@@ -268,7 +268,24 @@ nor when the Blob is already opened for writing:
...
BlobError: Already opened for writing.
-It does work when the transaction was aborted, though:
+You can also pass a factory to the openDetached method that will be used to
+instantiate the file. This is used for e.g. creating filestream iterators::
+>>> class customfile(file):
+... pass
+>>> blob_fh.write('Something')
+>>> blob_fh.close()
+>>> fh = blob.openDetached(customfile)
+>>> fh # doctest: +ELLIPSIS
+<open file '...', mode 'rb' at 0x...>
+>>> isinstance(fh, customfile)
+True
+Note: Nasty people could use a factory that opens the file for writing. This
+would be evil.
+It does work when the transaction was aborted, though::
>>> blob = Blob()
>>> blob_fh = blob.open("wb")
@@ -288,7 +305,7 @@ It does work when the transaction was aborted, though:
Teardown
--------
-We don't need the storage directory and databases anymore:
+We don't need the storage directory and databases anymore::
>>> import shutil
>>> shutil.rmtree(blob_dir)