ZEO-5.0.0a0/.bzrignore
.installed.cfg
bin
build
develop-eggs
eggs
parts
*.egg-info
ZEO-5.0.0a0/.gitignore
*.pyc
__pycache__
*.egg-info
.installed.cfg
.tox
bin
develop-eggs
eggs
parts
testing.log
.dir-locals.el
ZEO-5.0.0a0/.travis.yml
language: python
sudo: false
matrix:
  include:
    - os: linux
      python: 2.7
    - os: linux
      python: 3.4
    - os: linux
      python: 3.5
install:
  - pip install -U setuptools
  - python bootstrap.py
  - bin/buildout
script:
  - bin/test -v1j99
notifications:
  email: false
ZEO-5.0.0a0/CHANGES.rst
Changelog
=========
5.0.0a0 (2016-07-08)
--------------------
This is a major ZEO revision, which replaces the ZEO network protocol
implementation.
New features:
- SSL support
- Optional client-side conflict resolution.
- Lots of mostly-internal cleanups.
Dropped features:
- The ZEO authentication protocol.
This will be replaced by new authentication mechanisms leveraging SSL.
- The ZEO monitor server.
- Full cache verification.
- Client support for servers older than ZODB 3.9
- Server support for clients older than ZEO 4.2.0
4.2.0 (2016-06-15)
------------------
- Changed loadBefore to operate more like load behaved, especially
with regard to the load lock. This allows ZEO to work with the
upcoming ZODB 5, which uses loadBefore rather than load.
Reimplemented load using loadBefore, thus testing loadBefore
extensively via existing tests.
- Other changes to work with ZODB 5 (as well as ZODB 4)
- Fixed: the ZEO cache loadBefore method failed to utilize current data.
- Drop support for Python 2.6 and 3.2.
4.2.0b1 (2015-06-05)
--------------------
- Add support for PyPy.
4.1.0 (2015-01-06)
------------------
- Add support for Python 3.4.
- Added a new ``ruok`` client protocol for getting server status on
the ZEO port without creating a full-blown client connection and
without logging in the server log.
- Log errors on the server side even if using a multi-threaded delay.
4.0.0 (2013-08-18)
------------------
- Avoid reading excess random bytes when setting up an auth_digest session.
- Optimize socket address enumeration in ZEO client (avoid non-TCP types).
- Improve Travis CI testing support.
- Assign names to all threads for better runtime debugging.
- Fix "assignment to keyword" error under Py3k in 'ZEO.scripts.zeoqueue'.
4.0.0b1 (2013-05-20)
--------------------
- Depend on ZODB >= 4.0.0b2
- Add support for Python 3.2 / 3.3.
4.0.0a1 (2012-11-19)
--------------------
First (in a long time) separate ZEO release.
Since ZODB 3.10.5:
- Storage servers now emit Serving and Closed events so subscribers
can discover addresses when dynamic port assignment (bind to port 0)
is used. This could, for example, be used to update address
information in a ZooKeeper database.
- Client storages have a method, new_addr, that can be used to change
the server address(es). This can be used, for example, to update a
dynamically determined server address from information in a
ZooKeeper database.
ZEO-5.0.0a0/COPYING
See:
- the copyright notice in: COPYRIGHT.txt
- The Zope Public License in LICENSE.txt
ZEO-5.0.0a0/COPYRIGHT.txt
Zope Foundation and Contributors
ZEO-5.0.0a0/LICENSE.txt
Zope Public License (ZPL) Version 2.1
A copyright notice accompanies this license document that identifies the
copyright holders.
This license has been certified as open source. It has also been designated as
GPL compatible by the Free Software Foundation (FSF).
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions in source code must retain the accompanying copyright
notice, this list of conditions, and the following disclaimer.
2. Redistributions in binary form must reproduce the accompanying copyright
notice, this list of conditions, and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Names of the copyright holders must not be used to endorse or promote
products derived from this software without prior written permission from the
copyright holders.
4. The right to distribute this software or to use it for any purpose does not
give you the right to use Servicemarks (sm) or Trademarks (tm) of the
copyright
holders. Use of them is covered by separate agreement with the copyright
holders.
5. If any files are modified, you must cause the modified files to carry
prominent notices stating that you changed the files and the date of any
change.
Disclaimer
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESSED
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ZEO-5.0.0a0/MANIFEST.in
include *.rst
include *.txt
include *.py
include buildout.cfg
include tox.ini
recursive-include src *
global-exclude *.pyc
ZEO-5.0.0a0/README.rst
==========================
ZEO HOWTO
==========================
.. contents::
Introduction
============
ZEO (Zope Enterprise Objects) is a client-server system for sharing a
single storage among many clients. When you use ZEO, a lower-level
storage, typically a file storage, is opened in the ZEO server
process. Client programs connect to this process using a ZEO
ClientStorage. ZEO provides a consistent view of the database to all
clients. The ZEO client and server communicate using a custom
protocol layered on top of TCP.
There are several features that affect the behavior of
ZEO. This section describes how a few of these features
work. Subsequent sections describe how to configure every option.
Client cache
------------
Each ZEO client keeps an on-disk cache of recently used data records
to avoid fetching those records from the server each time they are
requested. It is usually faster to read the objects from disk than it
is to fetch them over the network. The cache can also provide
read-only copies of objects during server outages.
The cache may be persistent or transient. If the cache is persistent,
then the cache files are retained for use after process restarts. A
non-persistent cache uses temporary files that are removed when the
client storage is closed.
The client cache size is configured when the ClientStorage is created.
The default size is 20MB, but the right size depends entirely on the
particular database. Setting the cache size too small can hurt
performance, but in most cases making it too big just wastes disk
space.
ZEO uses invalidations for cache consistency. Every time an object is
modified, the server sends a message to each client informing it of
the change. The client will discard the object from its cache when it
receives an invalidation. (It's actually a little more complicated,
but we won't get into that here.)
Each time a client connects to a server, it must verify that its cache
contents are still valid. (It did not receive any invalidation
messages while it was disconnected.) This involves asking the server
to replay invalidations it missed. If it's been disconnected too long,
it discards its cache.
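The discard-on-invalidation behavior described above can be sketched with a plain dictionary standing in for the on-disk cache. This is an illustrative sketch only; ZEO's real cache is a more elaborate on-disk structure:

```python
# Illustrative sketch only: a dict stands in for ZEO's on-disk cache.
cache = {"oid-1": b"state-1", "oid-2": b"state-2"}

def invalidate(oids):
    """Drop cached states for objects changed on the server."""
    for oid in oids:
        cache.pop(oid, None)  # entries already absent are fine

# The server reports that oid-1 was modified by another client:
invalidate(["oid-1"])
print(sorted(cache))  # ['oid-2']
```

The next load of ``oid-1`` would then go to the server, and the fresh state would be cached again.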
Invalidation queue
------------------
The ZEO server keeps a queue of recent invalidation messages in
memory. When a client connects to the server, it sends the timestamp
of the most recent invalidation message it has received. If that
message is still in the invalidation queue, then the server sends the
client all the missing invalidations.
The default size of the invalidation queue is 100. If the
invalidation queue is larger, it will be more likely that a client
that reconnects will be able to verify its cache using the queue. On
the other hand, a large queue uses more memory on the server to store
the message. Invalidation messages tend to be small, perhaps a few
hundred bytes each on average; it depends on the number of objects
modified by a transaction.
You can also provide an invalidation age when configuring the
server. In this case, if the invalidation queue is too small, but a
client has been disconnected for a time interval that is less than the
invalidation age, then invalidations are replayed by iterating over
the lower-level storage on the server. If the age is too high, and
clients are disconneced for a long time, then this can put a lot of
load on the server.
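A minimal sketch of such a bounded queue, with made-up message shapes ``(tid, oids)``, shows why a reconnecting client either replays missed invalidations or flushes its cache:

```python
from collections import deque

# Bounded queue of recent invalidation messages. The message shape
# (tid, oids) is made up for illustration; the real server's
# bookkeeping differs.
queue = deque(maxlen=100)
for tid in range(1, 151):
    queue.append((tid, ["oid-%d" % tid]))

def replay(last_tid):
    """Return missed invalidations, or None if the queue no longer
    reaches back far enough and the client must discard its cache."""
    if queue and queue[0][0] > last_tid + 1:
        return None  # too old: full cache flush needed
    return [msg for msg in queue if msg[0] > last_tid]

print(len(replay(120)))  # 30 messages replayed
print(replay(10))        # None -> cache discarded
```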
Transaction timeouts
--------------------
A ZEO server can be configured to timeout a transaction if it takes
too long to complete. Only a single transaction can commit at a time;
so if one transaction takes too long, all other clients will be
delayed waiting for it. In the extreme, a client can hang during the
commit process. If the client hangs, the server will be unable to
commit other transactions until it restarts. A well-behaved client
will not hang, but the server can be configured with a transaction
timeout to guard against bugs that cause a client to hang.
If any transaction exceeds the timeout threshold, the client's
connection to the server will be closed and the transaction aborted.
Once the transaction is aborted, the server can start processing other
client's requests. Most transactions should take very little time to
commit. The timer begins for a transaction after all the data has
been sent to the server. At this point, the cost of commit should be
dominated by the cost of writing data to disk; it should be unusual
for a commit to take longer than 1 second. A transaction timeout of
30 seconds should tolerate heavy load and slow communications between
client and server, while guarding against hung servers.
When a transaction times out, the client can be left in an awkward
position. If the timeout occurs during the second phase of the two
phase commit, the client will log a panic message. This should only
cause problems if the client transaction involved multiple storages.
If it did, it is possible that some storages committed the client
changes and others did not.
Connection management
---------------------
A ZEO client manages its connection to the ZEO server. If it loses
the connection, it attempts to reconnect. While
it is disconnected, it can satisfy some reads by using its cache.
The client can be configured with multiple server addresses. In this
case, it assumes that each server has identical content and will use
any server that is available. It is possible to configure the client
to accept a read-only connection to one of these servers if no
read-write connection is available. If it has a read-only connection,
it will continue to poll for a read-write connection.
If a single address resolves to multiple IPv4 or IPv6 addresses,
the client will connect to one of these addresses, chosen arbitrarily.
SSL
---
ZEO supports the use of SSL connections between servers and clients,
including certificate authentication. We're still understanding use
cases for this, so details of operation may change.
Installing software
===================
ZEO is installed like any other Python package using pip, buildout, or
other Python packaging tools.
Running the server
==================
Typically, the ZEO server is run using the ``runzeo`` script that's
installed as part of a ZEO installation. The ``runzeo`` script
accepts command line options, the most important of which is the
``-C`` (``--configuration``) option. ZEO servers are best configured
via configuration files. The ``runzeo`` script also accepts some
command-line arguments for ad-hoc configurations, but there's an
easier way to run an ad-hoc server described below. For more on
configuring a ZEO server see `Server configuration`_ below.
Server quick-start/ad-hoc operation
-----------------------------------
You can quickly start a ZEO server from a Python prompt::
  import ZEO
  address, stop = ZEO.server()
This runs a ZEO server on a dynamic address and using an in-memory
storage.
We can then create a ZEO client connection using the address
returned::
  connection = ZEO.connection(address)
This is a ZODB connection for a database opened on a client storage
instance created on the fly. This is a shorthand for::
  db = ZEO.DB(address)
  connection = db.open()
Which is a short-hand for::
  client_storage = ZEO.client(address)
  import ZODB
  db = ZODB.DB(client_storage)
  connection = db.open()
If you exit the Python process, the storage exits as well, as it's run
in an in-process thread. It will leave behind its configuration and
log file. This provides a handy way to get a configuration example.
You can shut the server down more cleanly by calling the stop function
returned by the ``ZEO.server`` function. This will remove the
temporary configuration file.
To have data stored persistently, you can specify a file-storage path
name using a ``path`` parameter. If you want blob support, you can
specify a blob-file directory using the ``blob_dir`` parameter.
You can also supply a port to listen on, full storage configuration
and ZEO server configuration options to the ``ZEO.server``
function. See its docstring for more information.
Server configuration
--------------------
The ``runzeo`` script runs the ZEO server. The server can be
configured using command-line arguments or a config file. This
document only describes the config file. Run ``runzeo -h`` to see
the list of command-line arguments.
The configuration file specifies the underlying storage the server
uses, the address it binds to, and a few other optional parameters.
An example is::
  <zeo>
    address zeo.example.com:8090
  </zeo>

  <filestorage>
    path /var/tmp/Data.fs
  </filestorage>

  <eventlog>
    <logfile>
      path /var/tmp/zeo.log
      format %(asctime)s %(message)s
    </logfile>
  </eventlog>
The format is similar to the Apache configuration format. Individual
settings have a name, one or more spaces, and a value, as in::
  address zeo.example.com:8090
Settings are grouped into hierarchical sections.
The example above configures a server to use a file storage from
``/var/tmp/Data.fs``. The server listens on port ``8090`` of
``zeo.example.com``. The ZEO server writes its log file to
``/var/tmp/zeo.log`` and uses a custom format for each line. Assuming the
example configuration is stored in ``zeo.config``, you can run a server by
typing::
  runzeo -C zeo.config
A configuration file consists of a ``zeo`` section and a storage
section, where the storage section can use any of the valid ZODB
storage types. It may also contain an eventlog configuration. See
ZODB documentation for information on configuring storages. See
ZODB's documentation on configuring event logs for information on configuring
server logs.
An **easy way to get started** with configuration is to run an ad-hoc
server and copy the generated configuration.
The ``zeo`` section must list the address. All the other keys are
optional.
address
The address at which the server should listen. This can be in
the form 'host:port' to signify a TCP/IP connection or a
pathname string to signify a Unix domain socket connection (at
least one '/' is required). A hostname may be a DNS name or a
dotted IP address. If the hostname is omitted, the platform's
default behavior is used when binding the listening socket (''
is passed to socket.bind() as the hostname portion of the
address).
read-only
Flag indicating whether the server should operate in read-only
mode. Defaults to false. Note that even if the server is
operating in writable mode, individual storages may still be
read-only. But if the server is in read-only mode, no write
operations are allowed, even if the storages are writable. Note
that pack() is considered a read-only operation.
invalidation-queue-size
The storage server keeps a queue of the objects modified by the
last N transactions, where N == invalidation_queue_size. This
queue is used to support client cache verification when a client
disconnects for a short period of time.
invalidation-age
The maximum age of a client for which quick-verification
invalidations will be provided by iterating over the served
storage. This option should only be used if the served storage
supports efficient iteration from a starting point near the
end of the transaction history (e.g. end of file).
transaction-timeout
The maximum amount of time, in seconds, to wait for a
transaction to commit after acquiring the storage lock,
specified in seconds. If the transaction takes too long, the
client connection will be closed and the transaction aborted.
This defaults to 30 seconds.
client-conflict-resolution
Flag indicating that clients should perform conflict
resolution. This option defaults to false.
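Putting a few of these optional settings together, a ``zeo`` section might look like the following. The values are illustrative, not recommendations:

```
<zeo>
  address zeo.example.com:8090
  read-only false
  invalidation-queue-size 1000
  invalidation-age 100
  transaction-timeout 30
</zeo>
```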
Server SSL configuration
~~~~~~~~~~~~~~~~~~~~~~~~
A server can optionally support SSL. To do so, include an ``ssl``
subsection of the ``zeo`` section, as in::
  <zeo>
    address zeo.example.com:8090

    <ssl>
      certificate server_certificate.pem
      key server_certificate_key.pem
    </ssl>
  </zeo>

  <filestorage>
    path /var/tmp/Data.fs
  </filestorage>

  <eventlog>
    <logfile>
      path /var/tmp/zeo.log
      format %(asctime)s %(message)s
    </logfile>
  </eventlog>
The ``ssl`` section has settings:
certificate
The path to an SSL certificate file for the server. (required)
key
The path to the SSL key file for the server certificate (if not
included in certificate file).
password-function
The dotted name of an importable function that, when called, returns
the password needed to unlock the key (if the key requires a password).
authenticate
The path to a file or directory containing client certificates
to authenticate. (See the ``cafile`` and ``capath``
parameters in the Python documentation for
``ssl.SSLContext.load_verify_locations``.)
If this setting is used, then certificate authentication is
used to authenticate clients. A client must be configured
with one of the certificates supplied using this setting.
This option assumes that you're using self-signed certificates.
Running the ZEO server as a daemon
----------------------------------
In an operational setting, you will want to run the ZEO server as a
daemon process that is restarted when it dies. The zdaemon package
provides two tools for running daemons: zdrun.py and zdctl.py. You can
find zdaemon and its documentation at
http://pypi.python.org/pypi/zdaemon.
Note that ``runzeo`` makes no attempt to implement a well-behaved
daemon. It expects that functionality to be provided by a wrapper like
zdaemon or supervisord.
Rotating log files
------------------
``runzeo`` will re-initialize its logging subsystem when it receives a
SIGUSR2 signal. If you are using the standard event logger, you
should first rename the log file and then send the signal to the
server. The server will continue writing to the renamed log file
until it receives the signal. After it receives the signal, the
server will create a new file with the old name and write to it.
ZEO Clients
===========
To use a ZEO server, you need to connect to it using a ZEO client
storage. You create client storages either using a Python API or
using a client-storage section in a ZODB storage configuration.
Python API for creating a ZEO client storage
--------------------------------------------
To create a client storage from Python, use the ``ZEO.client``
function::
  import ZEO
  client = ZEO.client(8200)
In the example above, we created a client that connected to a storage
listening on port 8200 on the local host. The first argument is an
address, or list of addresses, to connect to. There are many additional
options, documented below, that should be given as keyword arguments.
Addresses can be:
- A host/port tuple
- An integer, which implies that the host is '127.0.0.1'
- A unix domain socket file name.
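The three accepted forms can be illustrated with a small normalization helper. This helper is hypothetical and not part of ZEO; the client performs an equivalent mapping internally:

```python
def normalize_address(addr):
    """Map the three accepted address forms to a connectable address.

    Hypothetical helper for illustration only.
    """
    if isinstance(addr, int):      # bare port: localhost is implied
        return ("127.0.0.1", addr)
    if isinstance(addr, str):      # unix domain socket path
        return addr
    host, port = addr              # (host, port) tuple
    return (host, port)

print(normalize_address(8200))                       # ('127.0.0.1', 8200)
print(normalize_address(("zeo.example.com", 8200)))  # ('zeo.example.com', 8200)
print(normalize_address("/var/run/zeo.sock"))        # '/var/run/zeo.sock'
```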
Options:
cache_size
The cache size in bytes. This defaults to 20MB.
cache
The ZEO cache to be used. This can be a file name, which will
cause a standard persistent ZEO cache to be used, stored in
the given file. This can also be an object that
implements ``ZEO.interfaces.ICache``.
If not specified, then a non-persistent cache will be used.
blob_dir
The name of a directory to hold/cache blob data downloaded from the
server. This must be provided if blobs are to be used. (Of
course, the server storage must be configured to use blobs as
well.)
shared_blob_dir
A client can use a network file system (or a local directory if
the server runs on the same machine) to share a blob directory with
the server. This allows downloading of blobs over the ZEO protocol
to be avoided; they are read from the shared directory instead.
blob_cache_size
The size of the blob cache in bytes. If unset, then blobs will
accumulate. If set, then blobs are removed when the total size
exceeds this amount. Blobs accessed least recently are removed
first.
blob_cache_size_check
The total size of data to be downloaded to trigger blob cache size
reduction. The default is 10 (percent). This controls how often to
remove blobs from the cache.
ssl
An ``ssl.SSLContext`` object used to make SSL connections.
ssl_server_hostname
Host name to use for SSL host name checks.
If using SSL and if host name checking is enabled in the given SSL
context then use this as the value to check. If an address is a
host/port pair, then this defaults to the host in the address.
read_only
Set to true for a read-only connection.
If false (the default), then request a read/write connection.
This option is ignored if ``read_only_fallback`` is set to a true value.
read_only_fallback
If set to true, then prefer a read/write connection, but be willing to
use a read-only connection. This defaults to a false value.
If ``read_only_fallback`` is set, then ``read_only`` is ignored.
wait_timeout
How long to wait for an initial connection, defaulting to 30
seconds. If an initial connection can't be made within this time
limit, then creation of the client storage will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
After the initial connection, if the client is disconnected:
- In-flight server requests will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
- New requests will block for up to ``wait_timeout`` waiting for a
connection to be established before failing with a
``ZEO.Exceptions.ClientDisconnected`` exception.
client_label
A short string to display in *server* logs for an event relating to
this client. This can be helpful when debugging.
disconnect_poll
The delay, in seconds, between attempts to connect to the
server. Defaults to 1 second.
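The interplay of ``blob_cache_size`` and ``blob_cache_size_check`` above reduces to simple arithmetic. The sketch below is illustrative of the sizing logic, not ZEO's code:

```python
blob_cache_size = 100 * 1024 * 1024  # 100MB cache limit (illustrative)
blob_cache_size_check = 10           # percent (the documented default)

# Bytes downloaded between blob-cache size checks: with these values
# the cache is examined for reduction after every ~10MB of downloads.
check_interval = blob_cache_size * blob_cache_size_check // 100
print(check_interval)  # 10485760, i.e. 10MB
```

A lower percentage checks more often (more overhead, tighter size control); a higher one checks less often but lets the cache overshoot further.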
Configuration strings/files
---------------------------
ZODB databases and storages can be configured using configuration
files, or strings (extracted from configuration files). They use the
same syntax as the server configuration files described above, but
with different sections and options.
An application that uses ZODB might configure its database using a
string like::
  <zodb>
    cache-size-bytes 1000MB

    <filestorage>
      path /var/lib/Data.fs
    </filestorage>
  </zodb>
In this example, we configured a ZODB database with an object cache
size of 1GB. Inside the database, we configured a file storage. The
``filestorage`` section provided file-storage parameters. We saw a
similar section in the storage-server configuration example in `Server
configuration`_.
To configure a client storage, you use a ``clientstorage`` section,
but first you have to import its definition, because ZEO isn't built
into ZODB. Here's an example::
  <zodb>
    cache-size-bytes 1000MB

    %import ZEO

    <clientstorage>
      server 8200
    </clientstorage>
  </zodb>
In this example, we defined a client storage that connected to a
server on port 8200.
The following settings are supported:
cache-size
The cache size in bytes, KB or MB. This defaults to 20MB.
Optional ``KB`` or ``MB`` suffixes can be (and usually are) used to
specify units other than bytes.
cache-path
The file path of a persistent cache file
blob-dir
The name of a directory to hold/cache blob data downloaded from the
server. This must be provided if blobs are to be used. (Of
course, the server storage must be configured to use blobs as
well.)
shared-blob-dir
A client can use a network file system (or a local directory if
the server runs on the same machine) to share a blob directory with
the server. This allows downloading of blobs over the ZEO protocol
to be avoided; they are read from the shared directory instead.
blob-cache-size
The size of the blob cache in bytes. If unset, then blobs will
accumulate. If set, then blobs are removed when the total size
exceeds this amount. Blobs accessed least recently are removed
first.
blob-cache-size-check
The total size of data to be downloaded to trigger blob cache size
reduction. The default is 10 (percent). This controls how often to
remove blobs from the cache.
read-only
Set to true for a read-only connection.
If false (the default), then request a read/write connection.
This option is ignored if ``read_only_fallback`` is set to a true value.
read-only-fallback
If set to true, then prefer a read/write connection, but be willing to
use a read-only connection. This defaults to a false value.
If ``read_only_fallback`` is set, then ``read_only`` is ignored.
wait_timeout
How long to wait for an initial connection, defaulting to 30
seconds. If an initial connection can't be made within this time
limit, then creation of the client storage will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
After the initial connection, if the client is disconnected:
- In-flight server requests will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
- New requests will block for up to ``wait_timeout`` waiting for a
connection to be established before failing with a
``ZEO.Exceptions.ClientDisconnected`` exception.
client_label
A short string to display in *server* logs for an event relating to
this client. This can be helpful when debugging.
disconnect_poll
The delay, in seconds, between attempts to connect to the
server. Defaults to 1 second.
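Putting several of the settings above together, a client-storage section might look like the following. The option names are those documented above; the values and paths are illustrative only:

```
<zodb>
  %import ZEO

  <clientstorage>
    server 8200
    cache-size 200MB
    blob-dir /var/lib/zeo/blobs
    read-only-fallback true
  </clientstorage>
</zodb>
```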
Client SSL configuration
~~~~~~~~~~~~~~~~~~~~~~~~
An ``ssl`` subsection can be used to enable and configure SSL, as in::
  %import ZEO

  <clientstorage>
    server zeo.example.com:8200

    <ssl>
    </ssl>
  </clientstorage>
In the example above, SSL is enabled in its simplest form:
- The client expects the server to have a signed certificate, which the
client validates.
- The server host name ``zeo.example.com`` is checked against
the server's certificate.
A number of settings can be provided to configure SSL:
certificate
The path to an SSL certificate file for the client. This is
needed to allow the server to authenticate the client.
key
The path to the SSL key file for the client certificate (if not
included in the certificate file).
password-function
The dotted name of an importable function that, when called, returns
the password needed to unlock the key (if the key requires a password).
authenticate
The path to a file or directory containing server certificates
to authenticate. (See the ``cafile`` and ``capath``
parameters in the Python documentation for
``ssl.SSLContext.load_verify_locations``.)
If this setting is used, then certificate authentication is
used to authenticate the server. The server must be configured
with one of the certificates supplied using this setting.
check-hostname
This is a boolean setting that defaults to true. Verify the
host name in the server certificate is as expected.
server-hostname
The expected server host name. This defaults to the host name
used in the server address. This option must be used when
``check-hostname`` is true and when a server address has no host
name (localhost, or unix domain socket) or when there is more
than one server and the server host names differ.
Using this setting implies a true value for the ``check-hostname`` setting.
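On the Python-API side, the ``ssl`` option described earlier takes a standard-library ``ssl.SSLContext``. A minimal sketch of building one that authenticates the server follows; the commented-out certificate path is hypothetical:

```python
import ssl

# Client-side context for authenticating the *server*.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_REQUIRED

# For a self-signed server certificate, trust it explicitly
# (hypothetical path, analogous to the ``authenticate`` setting):
# context.load_verify_locations("server_certificate.pem")

# Mirrors the ``check-hostname`` setting described above:
context.check_hostname = True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

The resulting object would be passed as the ``ssl`` keyword argument to ``ZEO.client``, with ``ssl_server_hostname`` supplying the expected name when the address itself has none.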
ZEO-5.0.0a0/asyncio-todo.rst
ZEO networking implementation based on asyncio to-dos
===================================================
First iteration, client only
----------------------------
- socketless tests for protocol and adapters
- Disconnect/reconnect strategy
- Integration with ClientStorage
Second iteration, server
------------------------
TBD after client release.
ZEO-5.0.0a0/bootstrap.py
##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Bootstrap a buildout-based project
Simply run this script in a directory containing a buildout.cfg.
The script accepts buildout command-line options, so you can
use the -c option to specify an alternate configuration file.
"""
import os
import shutil
import sys
import tempfile
from optparse import OptionParser
tmpeggs = tempfile.mkdtemp()
usage = '''\
[DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options]
Bootstraps a buildout-based project.
Simply run this script in a directory containing a buildout.cfg, using the
Python that you want bin/buildout to use.
Note that by using --find-links to point to local resources, you can keep
this script from going over the network.
'''
parser = OptionParser(usage=usage)
parser.add_option("-v", "--version", help="use a specific zc.buildout version")
parser.add_option("-t", "--accept-buildout-test-releases",
dest='accept_buildout_test_releases',
action="store_true", default=False,
help=("Normally, if you do not specify a --version, the "
"bootstrap script and buildout gets the newest "
"*final* versions of zc.buildout and its recipes and "
"extensions for you. If you use this flag, "
"bootstrap and buildout will get the newest releases "
"even if they are alphas or betas."))
parser.add_option("-c", "--config-file",
help=("Specify the path to the buildout configuration "
"file to be used."))
parser.add_option("-f", "--find-links",
help=("Specify a URL to search for buildout releases"))
parser.add_option("--allow-site-packages",
action="store_true", default=False,
help=("Let bootstrap.py use existing site packages"))
parser.add_option("--setuptools-version",
help="use a specific setuptools version")
options, args = parser.parse_args()
######################################################################
# load/install setuptools
try:
    if options.allow_site_packages:
        import setuptools
        import pkg_resources
    from urllib.request import urlopen
except ImportError:
    from urllib2 import urlopen
ez = {}
exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez)
if not options.allow_site_packages:
    # ez_setup imports site, which adds site packages
    # this will remove them from the path to ensure that incompatible versions
    # of setuptools are not in the path
    import site
    # inside a virtualenv, there is no 'getsitepackages'.
    # We can't remove these reliably
    if hasattr(site, 'getsitepackages'):
        for sitepackage_path in site.getsitepackages():
            sys.path[:] = [x for x in sys.path if sitepackage_path not in x]
setup_args = dict(to_dir=tmpeggs, download_delay=0)
if options.setuptools_version is not None:
setup_args['version'] = options.setuptools_version
ez['use_setuptools'](**setup_args)
import setuptools
import pkg_resources
# This does not (always?) update the default working set. We will
# do it.
for path in sys.path:
if path not in pkg_resources.working_set.entries:
pkg_resources.working_set.add_entry(path)
######################################################################
# Install buildout
ws = pkg_resources.working_set
cmd = [sys.executable, '-c',
'from setuptools.command.easy_install import main; main()',
'-mZqNxd', tmpeggs]
find_links = os.environ.get(
'bootstrap-testing-find-links',
options.find_links or
('http://downloads.buildout.org/'
if options.accept_buildout_test_releases else None)
)
if find_links:
cmd.extend(['-f', find_links])
setuptools_path = ws.find(
pkg_resources.Requirement.parse('setuptools')).location
requirement = 'zc.buildout'
version = options.version
if version is None and not options.accept_buildout_test_releases:
# Figure out the most recent final version of zc.buildout.
import setuptools.package_index
_final_parts = '*final-', '*final'
def _final_version(parsed_version):
try:
return not parsed_version.is_prerelease
except AttributeError:
# Older setuptools
for part in parsed_version:
if (part[:1] == '*') and (part not in _final_parts):
return False
return True
index = setuptools.package_index.PackageIndex(
search_path=[setuptools_path])
if find_links:
index.add_find_links((find_links,))
req = pkg_resources.Requirement.parse(requirement)
if index.obtain(req) is not None:
best = []
bestv = None
for dist in index[req.project_name]:
distv = dist.parsed_version
if _final_version(distv):
if bestv is None or distv > bestv:
best = [dist]
bestv = distv
elif distv == bestv:
best.append(dist)
if best:
best.sort()
version = best[-1].version
if version:
requirement = '=='.join((requirement, version))
cmd.append(requirement)
import subprocess
if subprocess.call(cmd, env=dict(os.environ, PYTHONPATH=setuptools_path)) != 0:
raise Exception(
"Failed to execute command:\n%s" % repr(cmd)[1:-1])
######################################################################
# Import and run buildout
ws.add_entry(tmpeggs)
ws.require(requirement)
import zc.buildout.buildout
if not [a for a in args if '=' not in a]:
args.append('bootstrap')
# if -c was provided, we push it back into args for buildout' main function
if options.config_file is not None:
args[0:0] = ['-c', options.config_file]
zc.buildout.buildout.main(args)
shutil.rmtree(tmpeggs)
ZEO-5.0.0a0/buildout.cfg 0000664 0000000 0000000 00000000550 12737730500 0014706 0 ustar 00root root 0000000 0000000 [buildout]
develop = .
parts =
test
scripts
versions = versions
[versions]
[test]
recipe = zc.recipe.testrunner
eggs =
ZEO [test]
initialization =
import os, tempfile
try: os.mkdir('tmp')
except: pass
tempfile.tempdir = os.path.abspath('tmp')
defaults = ['--all']
[scripts]
recipe = zc.recipe.egg
eggs =
ZEO [test]
interpreter = py
ZEO-5.0.0a0/doc/HOWTO-Blobs-NFS.txt

===========================================
How to use NFS to make Blobs more efficient
===========================================
:Author: Christian Theune
Overview
========
When handling blobs, the biggest goal is to avoid write operations that
require the blob data to be transferred, using up I/O resources.
When bringing a blob into the system, at least one O(N) operation has to
happen, e.g. when the blob is uploaded via a network server. The blob should
be extracted as a file on the final storage volume as early as possible,
avoiding further copies.
In a ZEO setup, all data is stored on a networked server and passed to it
using zrpc. This is a major problem for handling blobs, because storing a
single large blob blocks all other transactions from committing. By
default this mechanism works, but it is not recommended for high-volume
installations.
Shared filesystem
=================
The solution for the transfer problem is to setup various storage parameters
so that blobs are always handled on a single volume that is shared via network
between ZEO servers and clients.
Step 1: Set up a writable shared filesystem for ZEO server and client
----------------------------------------------------------------------
On the ZEO server, create two directories on the volume that will be used by
this setup (assume the volume is accessible via $SERVER/):
- $SERVER/blobs
- $SERVER/tmp
Then export the $SERVER directory using a shared network filesystem like NFS.
Make sure it's writable by the ZEO clients.
Assume the exported directory is available on the client as $CLIENT.
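On many Unix systems the export from Step 1 is a single line in
``/etc/exports`` on the ZEO server. This is only a sketch: the path and
client host names are placeholders, and the exact options depend on your
NFS implementation::

  # /etc/exports on the ZEO server (placeholders -- adjust to your site)
  /path/to/SERVER  zeoclient1(rw,sync) zeoclient2(rw,sync)

Remember to re-export (e.g. ``exportfs -r`` on Linux, or your platform's
equivalent) after editing the exports file.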
Step 2: Application temporary directories
-----------------------------------------
Applications (i.e. Zope) will put uploaded data in a temporary directory
first. Adjust your TMPDIR, TMP or TEMP environment variable to point to the
shared filesystem:
$ export TMPDIR=$CLIENT/tmp
Step 3: ZEO client caches
-------------------------
Edit the file `zope.conf` on the ZEO client and adjust the configuration of
the `zeoclient` storage with two new variables::
blob-dir = $CLIENT/blobs
blob-cache-writable = yes
Step 4: ZEO server
------------------
Edit the file `zeo.conf` on the ZEO server to configure the blob directory.
Assuming the published storage of the ZEO server is a file storage, then the
configuration should look like this::
path $INSTANCE/var/Data.fs
blob-dir $SERVER/blobs
(Remember to manually replace $SERVER and $CLIENT with the exported directory
as accessible by either the ZEO server or the ZEO client.)
Conclusion
----------
At this point, after restarting your ZEO server and clients, the blob
directory will be shared and a minimum amount of IO will occur when working
with blobs.
ZEO-5.0.0a0/doc/zeo-client-cache-tracing.txt

ZEO Client Cache Tracing
========================
An important question for ZEO users is: how large should the ZEO
client cache be? ZEO 2 (as of ZEO 2.0b2) has a new feature that lets
you collect a trace of cache activity and tools to analyze this trace,
enabling you to make an informed decision about the cache size.
Don't confuse the ZEO client cache with the Zope object cache. The
ZEO client cache is only used when an object is not in the Zope object
cache; the ZEO client cache avoids roundtrips to the ZEO server.
Enabling Cache Tracing
----------------------
To enable cache tracing, you must use a persistent cache (specify a ``client``
name), and set the environment variable ZEO_CACHE_TRACE to a non-empty
value. The path to the trace file is derived from the path to the persistent
cache file by appending ".trace". If the file doesn't exist, ZEO will try to
create it. If the file does exist, it's opened for appending (previous trace
information is not overwritten). If there are problems with the file, a
warning message is logged. To start or stop tracing, the ZEO client process
(typically a Zope application server) must be restarted.
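For example, the variable can be set from Python before the client process
creates its ClientStorage (the cache file name below is hypothetical; the
real name depends on your ``client`` and ``storage`` settings):

```python
import os

# Any non-empty value enables tracing; it must be set before the client
# process opens its persistent cache.
os.environ["ZEO_CACHE_TRACE"] = "1"

# The trace file path is the persistent cache path with ".trace" appended.
cache_path = "zeo1-1.zec"           # hypothetical persistent cache file
trace_path = cache_path + ".trace"  # -> "zeo1-1.zec.trace"
```

In practice the variable is usually exported in the shell or the service's
environment before starting the client, since it is read at startup.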
The trace file can grow pretty quickly; on a moderately loaded server, we
observed it growing by 7 MB per hour. The file consists of binary records,
each 34 bytes long if 8-byte oids are in use; a detailed description of the
record lay-out is given in stats.py. No sensitive data is logged: data
record sizes (but not data records), and binary object and transaction ids
are logged, but no object pickles, object types or names, user names,
transaction comments, access paths, or machine information (such as machine
name or IP address) are logged.
Analyzing a Cache Trace
-----------------------
The stats.py command-line tool is the first-line tool to analyze a cache
trace. Its default output consists of two parts: a one-line summary of
essential statistics for each segment of 15 minutes, interspersed with lines
indicating client restarts, followed by a more detailed summary of overall
statistics.
The most important statistic is the "hit rate", a percentage indicating how
many requests to load an object could be satisfied from the cache. Hit rates
around 70% are good. 90% is excellent. If you see a hit rate under 60% you
can probably improve the cache performance (and hence your Zope application
server's performance) by increasing the ZEO cache size. This is normally
configured using key ``cache_size`` in the ``zeoclient`` section of your
configuration file. The default cache size is 20 MB, which is small.
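In a ZConfig-style configuration file this might look like the following
sketch (the server address is a placeholder, and the spelling of the cache
size key may vary between ZEO versions -- check your schema)::

  <zeoclient>
    server localhost:8100
    cache-size 100MB
  </zeoclient>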
The stats.py tool shows its command line syntax when invoked without
arguments. The tracefile argument can be a gzipped file if it has a .gz
extension. It will be read from stdin (assuming uncompressed data) if the
tracefile argument is '-'.
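The input convention described above (``-`` for uncompressed data on stdin,
a ``.gz`` suffix for gzip) can be sketched in Python 3 as:

```python
import gzip
import sys

def open_trace(path):
    """Open a trace file argument following the convention the text describes."""
    if path == "-":
        return sys.stdin.buffer        # uncompressed trace data on stdin
    if path.endswith(".gz"):
        return gzip.open(path, "rb")   # transparently decompress
    return open(path, "rb")
```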
Simulating Different Cache Sizes
--------------------------------
Based on a cache trace file, you can make a prediction of how well the cache
might do with a different cache size. The simul.py tool runs a simulation of
the ZEO client cache implementation based upon the events read from a trace
file. A new simulation is started each time the trace file records a client
restart event; if a trace file contains more than one restart event, a
separate line is printed for each simulation, and a line with overall
statistics is added at the end.
Example, assuming the trace file is in /tmp/cachetrace.log::
$ python simul.py -s 4 /tmp/cachetrace.log
CircularCacheSimulation, cache size 4,194,304 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 1429329 24046 41517 44.4% 40776 99.8
This shows that with a 4 MB cache size, the cache hit rate is 44.4%, i.e. the
percentage that 1429329 (the number of cache hits) is of 3218856 (the number
of load requests). The simulation performed 40776 evictions to make room for new object
states. At the end, 99.8% of the bytes reserved for the cache file were in
use to hold object state (the remaining 0.2% consists of "holes", bytes freed
by object eviction and not yet reused to hold another object's state).
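The reported hit rate can be checked with a little arithmetic:

```python
# Figures from the 4 MB simulation above.
loads = 3218856   # load requests (LOADS column)
hits = 1429329    # cache hits (HITS column)

hit_rate = 100.0 * hits / loads
print(round(hit_rate, 1))  # -> 44.4
```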
Let's try this again with an 8 MB cache::
$ python simul.py -s 8 /tmp/cachetrace.log
CircularCacheSimulation, cache size 8,388,608 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 2182722 31315 41517 67.8% 40016 100.0
That's a huge improvement in hit rate, which isn't surprising since these are
very small cache sizes. The default cache size is 20 MB, which is still on
the small side::
$ python simul.py /tmp/cachetrace.log
CircularCacheSimulation, cache size 20,971,520 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 2982589 37922 41517 92.7% 37761 99.9
Again a very nice improvement in hit rate, and there's not a lot of room left
for improvement. Let's try 100 MB::
$ python simul.py -s 100 /tmp/cachetrace.log
CircularCacheSimulation, cache size 104,857,600 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 3218741 39572 41517 100.0% 22778 100.0
It's very unusual to see a hit rate so high. The application here frequently
modified a very large BTree, so given enough cache space to hold the entire
BTree it rarely needed to ask the ZEO server for data: this application
reused the same objects over and over.
More typical is that a substantial number of objects will be referenced only
once. Whenever an object turns out to be loaded only once, it's a pure loss
for the cache: the first (and only) load is a cache miss; storing the object
evicts other objects, possibly causing more cache misses; and the object is
never loaded again. If, for example, a third of the objects are loaded only
once, it's quite possible for the theoretical maximum hit rate to be 67%, no
matter how large the cache.
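To make the bound concrete, assume (a simplification, not something measured
from the trace) that those load-once objects account for a third of all load
requests; every such load is necessarily a miss:

```python
one_time_fraction = 1.0 / 3.0   # loads that can never be cache hits
max_hit_rate = 100.0 * (1.0 - one_time_fraction)
print(round(max_hit_rate))  # -> 67 (%), no matter how large the cache is
```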
The simul.py script also contains code to simulate different cache
strategies. Since none of these are implemented, and only the default cache
strategy's code has been updated to be aware of MVCC, these are not further
documented here.
Simulation Limitations
----------------------
The cache simulation is an approximation, and actual hit rate may be higher
or lower than the simulated result. These are some factors that inhibit
exact simulation:
- The simulator doesn't try to emulate versions. If the trace file contains
loads and stores of objects in versions, the simulator treats them as if
they were loads and stores of non-version data.
- Each time a load of an object O in the trace file was a cache hit, but the
simulated cache has evicted O, the simulated cache has no way to repair its
knowledge about O. This is more frequent when simulating caches smaller
than the cache used to produce the trace file. When a real cache suffers a
cache miss, it asks the ZEO server for the needed information about O, and
saves O in the client cache. The simulated cache doesn't have a ZEO server
to ask, and O continues to be absent in the simulated cache. Further
requests for O will continue to be simulated cache misses, although in a
real cache they'll likely be cache hits. On the other hand, the
simulated cache doesn't need to evict any objects to make room for O, so it
may enjoy further cache hits on objects a real cache would have evicted.
ZEO-5.0.0a0/doc/zeo-client-cache.txt 0000664 0000000 0000000 00000003602 12737730500 0017017 0 ustar 00root root 0000000 0000000 ZEO Client Cache
The client cache provides a disk based cache for each ZEO client. The
client cache allows reads to be done from local disk rather than by remote
access to the storage server.
The cache may be persistent or transient. If the cache is persistent, then
the cache file is retained for use after process restarts. A
non-persistent cache uses a temporary file.
The client cache is managed in a single file, of the specified size.
The life of the cache is as follows:
- The cache file is opened (if it already exists), or created and set to
the specified size.
- Cache records are written to the cache file, as transactions commit
locally, and as data are loaded from the server.
- Writes are to "the current file position". This is a pointer that
travels around the file, circularly. After a record is written, the
pointer advances to just beyond it. Objects starting at the current
file position are evicted, as needed, to make room for the next record
written.
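A toy model of this circular write-and-evict scheme may help. It is not
ZEO's actual implementation (which also tracks "holes" and richer per-record
metadata), and for simplicity eviction here is keyed on record start offsets
only:

```python
class CircularCache:
    """Toy model of a fixed-size cache file written circularly."""

    def __init__(self, size):
        self.size = size
        self.pos = 0        # "the current file position"
        self.records = {}   # start offset -> (oid, length)

    def _evict(self, start, end):
        # Evict records whose start offset falls in [start, end).
        for off in list(self.records):
            if start <= off < end:
                del self.records[off]

    def write(self, oid, length):
        assert length <= self.size
        if self.pos + length > self.size:
            # Wrap around: evict to the end of the file, continue at 0.
            self._evict(self.pos, self.size)
            self.pos = 0
        self._evict(self.pos, self.pos + length)   # make room
        self.records[self.pos] = (oid, length)
        self.pos += length                         # advance past the record
```

With a 100-byte cache, writing three 40-byte records wraps the pointer and
evicts the oldest record to make room for the third.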
A distinct index file is not created, although indexing structures are
maintained in memory while a ClientStorage is running. When a persistent
client cache file is reopened, these indexing structures are recreated
by analyzing the file contents.
Persistent cache files are created in the directory named in the ``var``
argument to the ClientStorage, or if ``var`` is None, in the current
working directory. Persistent cache files have names of the form::
client-storage.zec
where:
client -- the client name, as given by the ClientStorage's ``client``
argument
storage -- the storage name, as given by the ClientStorage's ``storage``
argument; this is typically a string denoting a small integer,
"1" by default
For example, the cache file for client '8881' and storage 'spam' is named
"8881-spam.zec".
ZEO-5.0.0a0/log.ini 0000664 0000000 0000000 00000001112 12737730500 0013653 0 ustar 00root root 0000000 0000000 # This file configures the logging module for the test harness:
# critical errors are logged to testing.log; everything else is
# ignored.
# Documentation for the file format is at
# http://www.red-dove.com/python_logging.html#config
[logger_root]
level=CRITICAL
handlers=normal
[handler_normal]
class=FileHandler
level=NOTSET
formatter=common
args=('testing.log', 'a')
filename=testing.log
mode=a
[formatter_common]
format=------
%(asctime)s %(levelname)s %(name)s %(message)s
datefmt=%Y-%m-%dT%H:%M:%S
[loggers]
keys=root
[handlers]
keys=normal
[formatters]
keys=common
ZEO-5.0.0a0/release.py 0000664 0000000 0000000 00000004461 12737730500 0014375 0 ustar 00root root 0000000 0000000 #! /usr/bin/env python
"""Update version numbers and release dates for the next release.
usage: release.py version date
version should be a string like "3.2.0c1"
date should be a string like "23-Sep-2003"
The following files are updated:
- setup.py
- NEWS.txt
- doc/guide/zodb.tex
- src/ZEO/__init__.py
- src/ZEO/version.txt
- src/ZODB/__init__.py
"""
import fileinput
import os
import re
# In file filename, replace the first occurrence of regexp pat with
# string repl.
def replace(filename, pat, repl):
from sys import stderr as e # fileinput hijacks sys.stdout
foundone = False
for line in fileinput.input([filename], inplace=True, backup="~"):
if foundone:
print line,
else:
match = re.search(pat, line)
if match is not None:
foundone = True
new = re.sub(pat, repl, line)
print new,
print >> e, "In %s, replaced:" % filename
print >> e, " ", repr(line)
print >> e, " ", repr(new)
else:
print line,
if not foundone:
print >> e, "*" * 60, "Oops!"
print >> e, " Failed to find %r in %r" % (pat, filename)
# Nothing in our codebase cares about ZEO/version.txt. Jeremy said
# someone asked for it so that a shell script could read up the ZEO
# version easily.
# Before ZODB 3.4, the ZEO version was one smaller than the ZODB version;
# e.g., ZEO 2.2.7 shipped with ZODB 3.2.7. Now ZEO and ZODB share their
# version number.
def write_zeoversion(path, version):
f = open(path, "w")
print >> f, version
f.close()
def main(args):
version, date = args
replace("setup.py",
r'^VERSION = "\S+"$',
'VERSION = "%s"' % version)
replace("src/ZODB/__init__.py",
r'__version__ = "\S+"',
'__version__ = "%s"' % version)
replace("src/ZEO/__init__.py",
r'version = "\S+"',
'version = "%s"' % version)
write_zeoversion("src/ZEO/version.txt", version)
replace("NEWS.txt",
r"^Release date: .*",
"Release date: %s" % date)
replace("doc/guide/zodb.tex",
r"release{\S+}",
"release{%s}" % version)
if __name__ == "__main__":
import sys
main(sys.argv[1:])
ZEO-5.0.0a0/setup.py 0000664 0000000 0000000 00000010720 12737730500 0014110 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2002, 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
version = '5.0.0a0'
from setuptools import setup, find_packages
import os
import sys
if sys.version_info < (2, 7):
print("This version of ZEO requires Python 2.7 or higher")
sys.exit(0)
if (3, 0) < sys.version_info < (3, 4):
print("This version of ZEO requires Python 3.4 or higher")
sys.exit(0)
install_requires = [
'ZODB >= 5.0.0a5',
'six',
'transaction >= 1.6.0',
'persistent >= 4.1.0',
'zc.lockfile',
'ZConfig',
'zdaemon',
'zope.interface',
]
tests_require = ['zope.testing', 'manuel', 'random2', 'mock']
if sys.version_info[:2] < (3, ):
install_requires.extend(('futures', 'trollius'))
classifiers = """
Intended Audience :: Developers
License :: OSI Approved :: Zope Public License
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
Programming Language :: Python :: 3.5
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: Implementation :: PyPy
Topic :: Database
Topic :: Software Development :: Libraries :: Python Modules
Operating System :: Microsoft :: Windows
Operating System :: Unix
Framework :: ZODB
""".strip().split('\n')
def _modname(path, base, name=''):
if path == base:
return name
dirname, basename = os.path.split(path)
return _modname(dirname, base, basename + '.' + name)
def _flatten(suite, predicate=lambda *x: True):
from unittest import TestCase
for suite_or_case in suite:
if predicate(suite_or_case):
if isinstance(suite_or_case, TestCase):
yield suite_or_case
else:
for x in _flatten(suite_or_case):
yield x
def _no_layer(suite_or_case):
return getattr(suite_or_case, 'layer', None) is None
def _unittests_only(suite, mod_suite):
for case in _flatten(mod_suite, _no_layer):
suite.addTest(case)
def alltests():
import logging
import pkg_resources
import unittest
import ZEO.ClientStorage
class NullHandler(logging.Handler):
level = 50
def emit(self, record):
pass
logging.getLogger().addHandler(NullHandler())
suite = unittest.TestSuite()
base = pkg_resources.working_set.find(
pkg_resources.Requirement.parse('ZEO')).location
for dirpath, dirnames, filenames in os.walk(base):
if os.path.basename(dirpath) == 'tests':
for filename in filenames:
if filename != 'testZEO.py': continue
if filename.endswith('.py') and filename.startswith('test'):
mod = __import__(
_modname(dirpath, base, os.path.splitext(filename)[0]),
{}, {}, ['*'])
_unittests_only(suite, mod.test_suite())
return suite
long_description = (
open('README.rst').read()
+ '\n' +
open('CHANGES.rst').read()
)
setup(name="ZEO",
version=version,
description = long_description.split('\n', 2)[1],
long_description = long_description,
url = 'https://pypi.python.org/pypi/ZEO',
maintainer="Zope Foundation and Contributors",
maintainer_email="zodb-dev@zope.org",
packages = find_packages('src'),
package_dir = {'': 'src'},
license = "ZPL 2.1",
platforms = ["any"],
classifiers = classifiers,
test_suite="__main__.alltests", # to support "setup.py test"
tests_require = tests_require,
extras_require = dict(test=tests_require),
install_requires = install_requires,
zip_safe = False,
entry_points = """
[console_scripts]
zeopack = ZEO.scripts.zeopack:main
runzeo = ZEO.runzeo:main
zeopasswd = ZEO.zeopasswd:main
zeoctl = ZEO.zeoctl:main
zeo-nagios = ZEO.nagios:main
""",
include_package_data = True,
)
ZEO-5.0.0a0/src/ 0000775 0000000 0000000 00000000000 12737730500 0013165 5 ustar 00root root 0000000 0000000 ZEO-5.0.0a0/src/ZEO/ 0000775 0000000 0000000 00000000000 12737730500 0013622 5 ustar 00root root 0000000 0000000 ZEO-5.0.0a0/src/ZEO/ClientStorage.py 0000664 0000000 0000000 00000127104 12737730500 0016744 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002, 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""The ClientStorage class and the exceptions that it may raise.
Public contents of this module:
ClientStorage -- the main class, implementing the Storage API
"""
import logging
import os
import re
import socket
import stat
import sys
import threading
import time
import weakref
from binascii import hexlify
import BTrees.OOBTree
import zc.lockfile
import ZODB
import ZODB.BaseStorage
import ZODB.ConflictResolution
import ZODB.interfaces
import zope.interface
import six
from persistent.TimeStamp import TimeStamp
from ZEO._compat import get_ident
from ZEO.Exceptions import ClientDisconnected
from ZEO.TransactionBuffer import TransactionBuffer
from ZODB import POSException
from ZODB import utils
import ZEO.asyncio.client
import ZEO.cache
logger = logging.getLogger(__name__)
# max signed 64-bit value ~ infinity :) Signed cuz LBTree and TimeStamp
m64 = b'\x7f\xff\xff\xff\xff\xff\xff\xff'
from ZODB.ConflictResolution import ResolvedSerial
def tid2time(tid):
return str(TimeStamp(tid))
def get_timestamp(prev_ts=None):
"""Internal helper to return a unique TimeStamp instance.
If the optional argument is not None, it must be a TimeStamp; the
return value is then guaranteed to be at least 1 microsecond later
than the argument.
"""
t = time.time()
t = TimeStamp(*time.gmtime(t)[:5] + (t % 60,))
if prev_ts is not None:
t = t.laterThan(prev_ts)
return t
MB = 1024**2
@zope.interface.implementer(ZODB.interfaces.IMultiCommitStorage)
class ClientStorage(ZODB.ConflictResolution.ConflictResolvingStorage):
"""A storage class that is a network client to a remote storage.
This is a faithful implementation of the Storage API.
This class is thread-safe; transactions are serialized in
tpc_begin().
"""
def __init__(self, addr, storage='1', cache_size=20 * MB,
name='', wait_timeout=None,
disconnect_poll=None,
read_only=0, read_only_fallback=0,
blob_dir=None, shared_blob_dir=False,
blob_cache_size=None, blob_cache_size_check=10,
client_label=None,
cache=None,
ssl = None, ssl_server_hostname=None,
# Mostly ignored backward-compatibility options
client=None, var=None,
min_disconnect_poll=1, max_disconnect_poll=None,
wait=True,
drop_cache_rather_verify=True,
username=None, password=None, realm=None,
# For tests:
_client_factory=ZEO.asyncio.client.ClientThread,
):
"""ClientStorage constructor.
This is typically invoked from a custom_zodb.py file.
All arguments except addr should be keyword arguments.
Arguments:
addr
The server address(es). This is either a list of
addresses or a single address. Each address can be a
(hostname, port) tuple to signify a TCP/IP connection or
a pathname string to signify a Unix domain socket
connection. A hostname may be a DNS name or a dotted IP
address. Required.
storage
The server storage name, defaulting to '1'. The name must
match one of the storage names supported by the server(s)
specified by the addr argument.
cache_size
The disk cache size, defaulting to 20 megabytes.
This is passed to the ClientCache constructor.
name
The storage name, defaulting to a combination of the
address and the server storage name. This is used to
construct the response to getName()
cache
A cache object or a name, relative to the current working
directory, used to construct persistent cache filenames.
Defaults to None, in which case the cache is not
persistent. See ClientCache for more info.
wait_timeout
Maximum time to wait for results, including connecting.
read_only
A flag indicating whether this should be a
read-only storage, defaulting to false (i.e. writing is
allowed by default).
read_only_fallback
A flag indicating whether a read-only
remote storage should be acceptable as a fallback when no
writable storages are available. Defaults to false. At
most one of read_only and read_only_fallback should be
true.
blob_dir
directory path for blob data. 'blob data' is data that
is retrieved via the loadBlob API.
shared_blob_dir
Flag whether the blob_dir is a server-shared filesystem
that should be used instead of transferring blob data over
ZEO protocol.
blob_cache_size
Maximum size of the ZEO blob cache, in bytes. If not set, then
the cache size isn't checked and the blob directory will
grow without bound.
This option is ignored if shared_blob_dir is true.
blob_cache_size_check
ZEO check size as percent of blob_cache_size. The ZEO
cache size will be checked when this many bytes have been
loaded into the cache. Defaults to 10% of the blob cache
size. This option is ignored if shared_blob_dir is true.
client_label
A label to include in server log messages for the client.
Note that the authentication protocol is defined by the server
and is detected by the ClientStorage upon connecting (see
testConnection() and doAuth() for details).
"""
if isinstance(addr, int):
addr = ('127.0.0.1', addr)
self.__name__ = name or str(addr) # Standard convention for storages
if isinstance(addr, str):
addr = [addr]
elif (isinstance(addr, tuple) and len(addr) == 2 and
isinstance(addr[0], str) and isinstance(addr[1], int)):
addr = [addr]
logger.info(
"%s %s (pid=%d) created %s/%s for storage: %r",
self.__name__,
self.__class__.__name__,
os.getpid(),
read_only and "RO" or "RW",
read_only_fallback and "fallback" or "normal",
storage,
)
self._is_read_only = read_only
self._read_only_fallback = read_only_fallback
self._addr = addr # For tests
self._iterators = weakref.WeakValueDictionary()
self._iterator_ids = set()
self._storage = storage
# _server_addr is used by sortKey()
self._server_addr = None
self._client_label = client_label
self._info = {'length': 0, 'size': 0, 'name': 'ZEO Client',
'supportsUndo': 0, 'interfaces': ()}
self._db = None
self._oids = [] # List of pre-fetched oids from server
cache = self._cache = open_cache(
cache, var, client, storage, cache_size)
# XXX need to check for POSIX-ness here
self.blob_dir = blob_dir
self.shared_blob_dir = shared_blob_dir
if blob_dir is not None:
# Avoid doing this import unless we need it, as it
# currently requires pywin32 on Windows.
import ZODB.blob
if shared_blob_dir:
self.fshelper = ZODB.blob.FilesystemHelper(blob_dir)
else:
if 'zeocache' not in ZODB.blob.LAYOUTS:
ZODB.blob.LAYOUTS['zeocache'] = BlobCacheLayout()
self.fshelper = ZODB.blob.FilesystemHelper(
blob_dir, layout_name='zeocache')
self.fshelper.create()
self.fshelper.checkSecure()
else:
self.fshelper = None
self._blob_cache_size = blob_cache_size
self._blob_data_bytes_loaded = 0
if blob_cache_size is not None:
assert blob_cache_size_check < 100
self._blob_cache_size_check = (
blob_cache_size * blob_cache_size_check // 100)
self._check_blob_size()
self._server = _client_factory(
addr, self, cache, storage,
ZEO.asyncio.client.Fallback if read_only_fallback else read_only,
wait_timeout or 30,
ssl = ssl, ssl_server_hostname=ssl_server_hostname,
)
self._call = self._server.call
self._async = self._server.async
self._async_iter = self._server.async_iter
self._wait = self._server.wait
self._commit_lock = threading.Lock()
if wait:
try:
self._wait()
except Exception:
# No point in keeping the server going if the storage
# creation fails
self._server.close()
raise
def new_addr(self, addr):
self._addr = addr
self._server.new_addrs(self._normalize_addr(addr))
def _normalize_addr(self, addr):
if isinstance(addr, int):
addr = ('127.0.0.1', addr)
if isinstance(addr, str):
addr = [addr]
elif (isinstance(addr, tuple) and len(addr) == 2 and
isinstance(addr[0], str) and isinstance(addr[1], int)):
addr = [addr]
return addr
def close(self):
"Storage API: finalize the storage, releasing external resources."
self._server.close()
if self._check_blob_size_thread is not None:
self._check_blob_size_thread.join()
_check_blob_size_thread = None
def _check_blob_size(self, bytes=None):
if self._blob_cache_size is None:
return
if self.shared_blob_dir or not self.blob_dir:
return
if (bytes is not None) and (bytes < self._blob_cache_size_check):
return
self._blob_data_bytes_loaded = 0
target = max(self._blob_cache_size - self._blob_cache_size_check, 0)
check_blob_size_thread = threading.Thread(
target=_check_blob_cache_size,
args=(self.blob_dir, target),
name="%s zeo client check blob size thread" % self.__name__,
)
check_blob_size_thread.setDaemon(True)
check_blob_size_thread.start()
self._check_blob_size_thread = check_blob_size_thread
def registerDB(self, db):
"""Storage API: register a database for invalidation messages.
This is called by ZODB.DB (and by some tests).
The storage isn't really ready to use until after this call.
"""
super(ClientStorage, self).registerDB(db)
self._db = db
def is_connected(self, test=False):
"""Return whether the storage is currently connected to a server."""
return self._server.is_connected()
def sync(self):
# The separate async thread should keep us up to date
pass
_connection_generation = 0
def notify_connected(self, conn, info):
reconnected = self._connection_generation
self.set_server_addr(conn.get_peername())
self.protocol_version = conn.protocol_version
self._is_read_only = conn.is_read_only()
# invalidate our db cache
if self._db is not None:
self._db.invalidateCache()
logger.info("%s %s to storage: %s",
self.__name__,
'Reconnected' if self._connection_generation
else 'Connected',
self._server_addr)
self._connection_generation += 1
if self._client_label:
conn.call_async_from_same_thread(
'set_client_label', self._client_label)
self._info.update(info)
for iface in (
ZODB.interfaces.IStorageRestoreable,
ZODB.interfaces.IStorageIteration,
ZODB.interfaces.IStorageUndoable,
ZODB.interfaces.IStorageCurrentRecordIteration,
ZODB.interfaces.IBlobStorage,
ZODB.interfaces.IExternalGC,
):
if (iface.__module__, iface.__name__) in self._info.get(
'interfaces', ()):
zope.interface.alsoProvides(self, iface)
def set_server_addr(self, addr):
# Normalize server address and convert to string
if isinstance(addr, str):
self._server_addr = addr
else:
assert isinstance(addr, tuple)
# If the server is on a remote host, we need to guarantee
            # that all clients use the same name for the server. If
# they don't, the sortKey() may be different for each client.
# The best solution seems to be the official name reported
# by gethostbyaddr().
host = addr[0]
try:
canonical, aliases, addrs = socket.gethostbyaddr(host)
except socket.error as err:
logger.debug("%s Error resolving host: %s (%s)",
self.__name__, host, err)
canonical = host
self._server_addr = str((canonical, addr[1]))
def sortKey(self):
# XXX sortKey should be explicit, possibly based on database name.
# If the client isn't connected to anything, it can't have a
# valid sortKey(). Raise an error to stop the transaction early.
if self._server_addr is None:
raise ClientDisconnected
else:
return '%s:%s' % (self._storage, self._server_addr)
def notify_disconnected(self):
"""Internal: notify that the server connection was terminated.
This is called by ConnectionManager when the connection is
closed or when certain problems with the connection occur.
"""
logger.info("%s Disconnected from storage: %r",
self.__name__, self._server_addr)
self._iterator_gc(True)
self._connection_generation += 1
self._is_read_only = self._server.is_read_only()
def __len__(self):
"""Return the size of the storage."""
# TODO: Is this method used?
return self._info['length']
def getName(self):
"""Storage API: return the storage name as a string.
The return value consists of two parts: the name as determined
        by the name and addr arguments to the ClientStorage
constructor, and the string 'connected' or 'disconnected' in
parentheses indicating whether the storage is (currently)
connected.
"""
return "%s (%s)" % (
self.__name__,
self.is_connected() and "connected" or "disconnected")
def getSize(self):
"""Storage API: an approximate size of the database, in bytes."""
return self._info['size']
def supportsUndo(self):
"""Storage API: return whether we support undo."""
return self._info['supportsUndo']
def is_read_only(self):
"""Storage API: return whether we are in read-only mode.
"""
return self._is_read_only or self._server.is_read_only()
isReadOnly = is_read_only
def _check_trans(self, trans, meth):
"""Internal helper to check a transaction argument for sanity."""
if self._is_read_only:
raise POSException.ReadOnlyError()
try:
buf = trans.data(self)
except KeyError:
buf = None
if buf is None:
raise POSException.StorageTransactionError(
"Transaction not committing", meth, trans)
if buf.connection_generation != self._connection_generation:
            # We were disconnected, so this one is poisoned
raise ClientDisconnected(meth, 'on a disconnected transaction')
return buf
def history(self, oid, size=1):
"""Storage API: return a sequence of HistoryEntry objects.
"""
return self._call('history', oid, size)
def record_iternext(self, next=None):
"""Storage API: get the next database record.
This is part of the conversion-support API.
"""
return self._call('record_iternext', next)
def getTid(self, oid):
# XXX deprecated: used by storage server for full cache verification.
return self._call('getTid', oid)
def loadSerial(self, oid, serial):
"""Storage API: load a historical revision of an object."""
return self._call('loadSerial', oid, serial)
def load(self, oid, version=''):
result = self.loadBefore(oid, m64)
if result is None:
raise POSException.POSKeyError(oid)
return result[:2]
def loadBefore(self, oid, tid):
return self._server.load_before(oid, tid)
def new_oid(self):
"""Storage API: return a new object identifier.
"""
if self._is_read_only:
raise POSException.ReadOnlyError()
while 1:
try:
return self._oids.pop()
except IndexError:
pass # We ran out. We need to get some more.
self._oids[:0] = reversed(self._call('new_oids'))
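new_oid() above amortizes server round trips by requesting a batch of oids at once and serving later calls from a local pool, refilling only when the pool runs dry. The pool pattern in isolation (class and callback names are illustrative stand-ins, not ZEO API):

```python
class OidPool(object):
    """Hand out object ids from a locally cached batch."""

    def __init__(self, fetch_batch):
        self._fetch = fetch_batch  # stand-in for the 'new_oids' server call
        self._oids = []

    def new_oid(self):
        while True:
            try:
                return self._oids.pop()
            except IndexError:
                # Ran out: refill, reversed so pop() hands oids out
                # in the order the server allocated them.
                self._oids[:0] = reversed(self._fetch())

calls = []
def fetch():
    calls.append(1)
    return [1, 2, 3]

pool = OidPool(fetch)
print([pool.new_oid() for _ in range(4)])  # [1, 2, 3, 1]
print(len(calls))                          # 2 server round trips
```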
def pack(self, t=None, referencesf=None, wait=1, days=0):
"""Storage API: pack the storage.
Deviations from the Storage API: the referencesf argument is
ignored; two additional optional arguments wait and days are
provided:
wait -- a flag indicating whether to wait for the pack to
complete; defaults to true.
days -- a number of days to subtract from the pack time;
defaults to zero.
"""
# TODO: Is it okay that read-only connections allow pack()?
# rf argument ignored; server will provide its own implementation
if t is None:
t = time.time()
t = t - (days * 86400)
return self._call('pack', t, wait)
def store(self, oid, serial, data, version, txn):
"""Storage API: store data for an object."""
assert not version
tbuf = self._check_trans(txn, 'store')
self._async('storea', oid, serial, data, id(txn))
tbuf.store(oid, data)
def checkCurrentSerialInTransaction(self, oid, serial, transaction):
self._check_trans(transaction, 'checkCurrentSerialInTransaction')
self._async(
'checkCurrentSerialInTransaction', oid, serial, id(transaction))
def storeBlob(self, oid, serial, data, blobfilename, version, txn):
"""Storage API: store a blob object."""
assert not version
tbuf = self._check_trans(txn, 'storeBlob')
# Grab the file right away. That way, if we don't have enough
# room for a copy, we'll know now rather than in tpc_finish.
        # Also, this relieves the client of having to manage the file
        # (or the directory containing it).
self.fshelper.getPathForOID(oid, create=True)
fd, target = self.fshelper.blob_mkstemp(oid, serial)
os.close(fd)
# It's a bit odd (and impossible on windows) to rename over
# an existing file. We'll use the temporary file name as a base.
target += '-'
ZODB.blob.rename_or_copy_blob(blobfilename, target)
os.remove(target[:-1])
serials = self.store(oid, serial, data, '', txn)
if self.shared_blob_dir:
self._async(
'storeBlobShared',
oid, serial, data, os.path.basename(target), id(txn))
else:
            # Store a blob to the server. We don't want to read all of
# the data into memory, so we use a message iterator. This
# allows us to read the blob data as needed.
def store():
yield ('storeBlobStart', ())
f = open(target, 'rb')
while 1:
chunk = f.read(59000)
if not chunk:
break
yield ('storeBlobChunk', (chunk, ))
f.close()
yield ('storeBlobEnd', (oid, serial, data, id(txn)))
self._async_iter(store())
tbuf.storeBlob(oid, target)
return serials
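The store() generator above streams the blob to the server in roughly 59 KB chunks so the file is never read into memory all at once. The chunking idea in isolation (the helper name is illustrative; the chunk size matches the one used above):

```python
import os
import tempfile

def iter_chunks(path, chunk_size=59000):
    # Yield successive byte chunks of a file without loading it whole.
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Write a small temp file and stream it back out.
fd, path = tempfile.mkstemp()
os.write(fd, b'x' * 100000)
os.close(fd)
chunks = list(iter_chunks(path))
print([len(c) for c in chunks])  # [59000, 41000]
os.remove(path)
```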
def receiveBlobStart(self, oid, serial):
blob_filename = self.fshelper.getBlobFilename(oid, serial)
assert not os.path.exists(blob_filename)
lockfilename = os.path.join(os.path.dirname(blob_filename), '.lock')
assert os.path.exists(lockfilename)
blob_filename += '.dl'
assert not os.path.exists(blob_filename)
f = open(blob_filename, 'wb')
f.close()
def receiveBlobChunk(self, oid, serial, chunk):
blob_filename = self.fshelper.getBlobFilename(oid, serial)+'.dl'
assert os.path.exists(blob_filename)
f = open(blob_filename, 'r+b')
f.seek(0, 2)
f.write(chunk)
f.close()
self._blob_data_bytes_loaded += len(chunk)
self._check_blob_size(self._blob_data_bytes_loaded)
def receiveBlobStop(self, oid, serial):
blob_filename = self.fshelper.getBlobFilename(oid, serial)
os.rename(blob_filename+'.dl', blob_filename)
os.chmod(blob_filename, stat.S_IREAD)
def deleteObject(self, oid, serial, txn):
tbuf = self._check_trans(txn, 'deleteObject')
self._async('deleteObject', oid, serial, id(txn))
tbuf.store(oid, None)
def loadBlob(self, oid, serial):
# Load a blob. If it isn't present and we have a shared blob
# directory, then assume that it doesn't exist on the server
# and return None.
if self.fshelper is None:
raise POSException.Unsupported("No blob cache directory is "
"configured.")
blob_filename = self.fshelper.getBlobFilename(oid, serial)
if self.shared_blob_dir:
if os.path.exists(blob_filename):
return blob_filename
else:
# We're using a server shared cache. If the file isn't
# here, it's not anywhere.
raise POSException.POSKeyError(
"No blob file at %s" % blob_filename, oid, serial)
if os.path.exists(blob_filename):
return _accessed(blob_filename)
# First, we'll create the directory for this oid, if it doesn't exist.
self.fshelper.createPathForOID(oid)
# OK, it's not here and we (or someone) needs to get it. We
# want to avoid getting it multiple times. We want to avoid
        # getting it multiple times even across separate client
# processes on the same machine. We'll use file locking.
lock = _lock_blob(blob_filename)
try:
# We got the lock, so it's our job to download it. First,
# we'll double check that someone didn't download it while we
# were getting the lock:
if os.path.exists(blob_filename):
return _accessed(blob_filename)
# Ask the server to send it to us. When this function
            # returns, it will have been sent. (The receiving will
# have been handled by the asyncore thread.)
self._call('sendBlob', oid, serial)
if os.path.exists(blob_filename):
return _accessed(blob_filename)
raise POSException.POSKeyError("No blob file", oid, serial)
finally:
lock.close()
def openCommittedBlobFile(self, oid, serial, blob=None):
blob_filename = self.loadBlob(oid, serial)
try:
if blob is None:
return open(blob_filename, 'rb')
else:
return ZODB.blob.BlobFile(blob_filename, 'r', blob)
except (IOError):
# The file got removed while we were opening.
# Fall through and try again with the protection of the lock.
pass
lock = _lock_blob(blob_filename)
try:
blob_filename = self.fshelper.getBlobFilename(oid, serial)
if not os.path.exists(blob_filename):
if self.shared_blob_dir:
# We're using a server shared cache. If the file isn't
# here, it's not anywhere.
raise POSException.POSKeyError("No blob file", oid, serial)
self._call('sendBlob', oid, serial)
if not os.path.exists(blob_filename):
raise POSException.POSKeyError("No blob file", oid, serial)
_accessed(blob_filename)
if blob is None:
return open(blob_filename, 'rb')
else:
return ZODB.blob.BlobFile(blob_filename, 'r', blob)
finally:
lock.close()
def temporaryDirectory(self):
return self.fshelper.temp_dir
def tpc_vote(self, txn):
"""Storage API: vote on a transaction.
"""
tbuf = self._check_trans(txn, 'tpc_vote')
try:
conflicts = True
vote_attempts = 0
while conflicts and vote_attempts < 9: # 9? Mainly avoid inf. loop
conflicts = False
for oid in self._call('vote', id(txn)) or ():
if isinstance(oid, dict):
# Conflict, let's try to resolve it
conflicts = True
conflict = oid
oid = conflict['oid']
committed, read = conflict['serials']
data = self.tryToResolveConflict(
oid, committed, read, conflict['data'])
self._async('storea', oid, committed, data, id(txn))
tbuf.resolve(oid, data)
else:
tbuf.serial(oid, ResolvedSerial)
vote_attempts += 1
except POSException.StorageTransactionError:
            # Hm, we got disconnected and reconnected between
            # _check_trans and voting. Let's check the transaction again:
self._check_trans(txn, 'tpc_vote')
raise
except POSException.ConflictError as err:
oid = getattr(err, 'oid', None)
if oid is not None:
# This is a band-aid to help recover from a situation
# that shouldn't happen. A Client somehow misses some
# invalidations and has out of date data in its
                # cache. We need some way to invalidate the cache
                # entry without invalidations. So, if we see an
# (unresolved) conflict error, we assume that the
# cache entry is bad and invalidate it.
self._cache.invalidate(oid, None)
raise
if tbuf.exception:
raise tbuf.exception
if tbuf.server_resolved or tbuf.client_resolved:
return list(tbuf.server_resolved) + list(tbuf.client_resolved)
else:
return None
def tpc_transaction(self):
return self._transaction
def tpc_begin(self, txn, tid=None, status=' '):
"""Storage API: begin a transaction."""
if self._is_read_only:
raise POSException.ReadOnlyError()
try:
tbuf = txn.data(self)
except AttributeError:
# Gaaaa. This is a recovery transaction. Work around this
# until we can think of something better. XXX
tb = {}
txn.data = tb.__getitem__
txn.set_data = tb.__setitem__
except KeyError:
pass
else:
if tbuf is not None:
raise POSException.StorageTransactionError(
"Duplicate tpc_begin calls for same transaction")
txn.set_data(self, TransactionBuffer(self._connection_generation))
# XXX we'd like to allow multiple transactions at a time at some point,
# but for now, due to server limitations, TCBOO.
self._commit_lock.acquire()
self._tbuf = txn.data(self)
try:
self._async(
'tpc_begin', id(txn),
txn.user, txn.description, txn._extension, tid, status)
except ClientDisconnected:
self.tpc_end(txn)
raise
def tpc_end(self, txn):
tbuf = txn.data(self)
if tbuf is not None:
tbuf.close()
txn.set_data(self, None)
self._commit_lock.release()
def lastTransaction(self):
return self._cache.getLastTid()
def tpc_abort(self, txn, timeout=None):
"""Storage API: abort a transaction.
        (The timeout keyword argument is for tests to wait longer than
they normally would.)
"""
try:
tbuf = txn.data(self)
except KeyError:
return
try:
# Caution: Are there any exceptions that should prevent an
# abort from occurring? It seems wrong to swallow them
# all, yet you want to be sure that other abort logic is
# executed regardless.
try:
# It's tempting to make an asynchronous call here, but
# it's useful for it to be synchronous because, if we
# failed due to a disconnect, synchronous calls will
# wait a little while in hopes of reconnecting. If
# we're able to reconnect and retry the transaction,
                # then it might succeed!
self._call('tpc_abort', id(txn), timeout=timeout)
except ClientDisconnected:
logger.debug("%s ClientDisconnected in tpc_abort() ignored",
self.__name__)
finally:
self._iterator_gc()
self.tpc_end(txn)
def tpc_finish(self, txn, f=lambda tid: None):
"""Storage API: finish a transaction."""
tbuf = self._check_trans(txn, 'tpc_finish')
try:
tid = self._server.tpc_finish(id(txn), tbuf, f)
finally:
self.tpc_end(txn)
self._iterator_gc()
self._update_blob_cache(tbuf, tid)
return tid
def _update_blob_cache(self, tbuf, tid):
"""Internal helper move blobs updated by a transaction to the cache.
"""
# Not sure why _update_cache() would be called on a closed storage.
if self._cache is None:
return
if self.fshelper is not None:
blobs = tbuf.blobs
had_blobs = False
while blobs:
oid, blobfilename = blobs.pop()
self._blob_data_bytes_loaded += os.stat(blobfilename).st_size
targetpath = self.fshelper.getPathForOID(oid, create=True)
target_blob_file_name = self.fshelper.getBlobFilename(oid, tid)
lock = _lock_blob(target_blob_file_name)
try:
ZODB.blob.rename_or_copy_blob(
blobfilename,
target_blob_file_name,
)
finally:
lock.close()
had_blobs = True
if had_blobs:
self._check_blob_size(self._blob_data_bytes_loaded)
def undo(self, trans_id, txn):
"""Storage API: undo a transaction.
This is executed in a transactional context. It has no effect
until the transaction is committed. It can be undone itself.
        Zope uses this to implement undo unless the underlying
        storage does not support it.
"""
self._check_trans(txn, 'undo')
self._async('undoa', trans_id, id(txn))
def undoInfo(self, first=0, last=-20, specification=None):
"""Storage API: return undo information."""
return self._call('undoInfo', first, last, specification)
def undoLog(self, first=0, last=-20, filter=None):
"""Storage API: return a sequence of TransactionDescription objects.
The filter argument should be None or left unspecified, since
it is impossible to pass the filter function to the server to
be executed there. If filter is not None, an empty sequence
is returned.
"""
if filter is not None:
return []
return self._call('undoLog', first, last)
# Recovery support
def copyTransactionsFrom(self, other, verbose=0):
"""Copy transactions from another storage.
This is typically used for converting data from one storage to
another. `other` must have an .iterator() method.
"""
ZODB.BaseStorage.copy(other, self, verbose)
def restore(self, oid, serial, data, version, prev_txn, transaction):
"""Write data already committed in a separate database."""
assert not version
self._check_trans(transaction, 'restore')
self._async('restorea', oid, serial, data, prev_txn, id(transaction))
# Below are methods invoked by the StorageServer
def serialnos(self, args):
"""Server callback to pass a list of changed (oid, serial) pairs.
"""
for oid, s in args:
self._tbuf.serial(oid, s)
def info(self, dict):
"""Server callback to update the info dictionary."""
self._info.update(dict)
def invalidateCache(self):
if self._db is not None:
self._db.invalidateCache()
def invalidateTransaction(self, tid, oids):
"""Server callback: Invalidate objects modified by tid."""
if self._db is not None:
self._db.invalidate(tid, oids)
# IStorageIteration
def iterator(self, start=None, stop=None):
"""Return an IStorageTransactionInformation iterator."""
# iids are "iterator IDs" that can be used to query an iterator whose
# status is held on the server.
iid = self._call('iterator_start', start, stop)
return self._setup_iterator(TransactionIterator, iid)
def _setup_iterator(self, factory, iid, *args):
self._iterators[iid] = iterator = factory(self, iid, *args)
self._iterator_ids.add(iid)
return iterator
def _forget_iterator(self, iid):
self._iterators.pop(iid, None)
self._iterator_ids.remove(iid)
def _iterator_gc(self, disconnected=False):
if not self._iterator_ids:
return
if disconnected:
for i in self._iterators.values():
i._iid = -1
self._iterators.clear()
self._iterator_ids.clear()
return
# Recall that self._iterators is a WeakValueDictionary. Under
# non-refcounted implementations like PyPy, this means that
# unreachable iterators (and their IDs) may still be in this
# map for some arbitrary period of time (until the next
# garbage collection occurs.) This is fine: the server
# supports being asked to GC the same iterator ID more than
# once. Iterator ids can be reused, but only after a server
# restart, after which we had already been called with
# `disconnected` True and so had cleared out our map anyway,
# plus we simply replace whatever is in the map if we get a
# duplicate id---and duplicates at that point would be dead
# objects waiting to be cleaned up. So there's never any risk
# of confusing TransactionIterator objects that are in use.
iids = self._iterator_ids - set(self._iterators)
# let tests know we've been called:
self._iterators._last_gc = time.time()
if iids:
try:
self._async('iterator_gc', list(iids))
except ClientDisconnected:
# If we get disconnected, all of the iterators on the
# server are thrown away. We should clear ours too:
return self._iterator_gc(True)
self._iterator_ids -= iids
def server_status(self):
return self._call('server_status')
class TransactionIterator(object):
def __init__(self, storage, iid, *args):
self._storage = storage
self._iid = iid
self._ended = False
def __iter__(self):
return self
def __next__(self):
if self._ended:
raise StopIteration()
if self._iid < 0:
raise ClientDisconnected("Disconnected iterator")
tx_data = self._storage._call('iterator_next', self._iid)
if tx_data is None:
            # The iterator is exhausted, and the server has already
            # disposed of it.
self._ended = True
self._storage._forget_iterator(self._iid)
raise StopIteration()
return ClientStorageTransactionInformation(
self._storage, self, *tx_data)
next = __next__
class ClientStorageTransactionInformation(ZODB.BaseStorage.TransactionRecord):
def __init__(self, storage, txiter, tid, status, user, description,
extension):
self._storage = storage
self._txiter = txiter
self._completed = False
self._riid = None
self.tid = tid
self.status = status
self.user = user
self.description = description
self.extension = extension
def __iter__(self):
riid = self._storage._call('iterator_record_start',
self._txiter._iid, self.tid)
return self._storage._setup_iterator(RecordIterator, riid)
class RecordIterator(object):
def __init__(self, storage, riid):
self._riid = riid
self._completed = False
self._storage = storage
def __iter__(self):
return self
def __next__(self):
if self._completed:
# We finished iteration once already and the server can't know
# about the iteration anymore.
raise StopIteration()
item = self._storage._call('iterator_record_next', self._riid)
if item is None:
            # The iterator is exhausted, and the server has already
            # disposed of it.
self._completed = True
raise StopIteration()
return ZODB.BaseStorage.DataRecord(*item)
next = __next__
class BlobCacheLayout(object):
size = 997
def oid_to_path(self, oid):
return str(utils.u64(oid) % self.size)
def getBlobFilePath(self, oid, tid):
base, rem = divmod(utils.u64(oid), self.size)
return os.path.join(
str(rem),
"%s.%s%s" % (base, hexlify(tid).decode('ascii'),
ZODB.blob.BLOB_SUFFIX)
)
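BlobCacheLayout spreads cached blobs over 997 directories keyed by the oid modulo the directory count, and names each file from the quotient plus the hex tid. The path arithmetic, sketched without ZODB (`u64` is a local stand-in for ZODB.utils.u64, and the '.blob' suffix and '/' separator are assumptions of this sketch):

```python
import struct

def u64(b):
    # Local stand-in for ZODB.utils.u64: 8-byte big-endian oid -> int.
    return struct.unpack('>Q', b)[0]

SIZE = 997  # number of top-level cache directories

def oid_to_path(oid):
    return str(u64(oid) % SIZE)

def blob_file_path(oid, tid_hex, suffix='.blob'):
    # Directory is the remainder; file name carries the quotient + tid.
    base, rem = divmod(u64(oid), SIZE)
    return '%s/%s.%s%s' % (rem, base, tid_hex, suffix)

oid = struct.pack('>Q', 1000)
print(oid_to_path(oid))           # '3'  (1000 % 997)
print(blob_file_path(oid, 'aa'))  # '3/1.aa.blob'
```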
def _accessed(filename):
try:
os.utime(filename, (time.time(), os.stat(filename).st_mtime))
except OSError:
pass # We tried. :)
return filename
cache_file_name = re.compile(r'\d+$').match
def _check_blob_cache_size(blob_dir, target):
logger = logging.getLogger(__name__+'.check_blob_cache')
with open(os.path.join(blob_dir, ZODB.blob.LAYOUT_MARKER)) as layout_file:
layout = layout_file.read().strip()
if not layout == 'zeocache':
logger.critical("Invalid blob directory layout %s", layout)
raise ValueError("Invalid blob directory layout", layout)
attempt_path = os.path.join(blob_dir, 'check_size.attempt')
try:
check_lock = zc.lockfile.LockFile(
os.path.join(blob_dir, 'check_size.lock'))
except zc.lockfile.LockError:
try:
time.sleep(1)
check_lock = zc.lockfile.LockFile(
os.path.join(blob_dir, 'check_size.lock'))
except zc.lockfile.LockError:
# Someone is already cleaning up, so don't bother
logger.debug("%s Another thread is checking the blob cache size.",
get_ident())
open(attempt_path, 'w').close() # Mark that we tried
return
logger.debug("%s Checking blob cache size. (target: %s)",
get_ident(), target)
try:
while 1:
size = 0
blob_suffix = ZODB.blob.BLOB_SUFFIX
files_by_atime = BTrees.OOBTree.BTree()
for dirname in os.listdir(blob_dir):
if not cache_file_name(dirname):
continue
base = os.path.join(blob_dir, dirname)
if not os.path.isdir(base):
continue
for file_name in os.listdir(base):
if not file_name.endswith(blob_suffix):
continue
file_path = os.path.join(base, file_name)
if not os.path.isfile(file_path):
continue
stat = os.stat(file_path)
size += stat.st_size
t = stat.st_atime
if t not in files_by_atime:
files_by_atime[t] = []
files_by_atime[t].append(os.path.join(dirname, file_name))
logger.debug("%s blob cache size: %s", get_ident(), size)
if size <= target:
if os.path.isfile(attempt_path):
try:
os.remove(attempt_path)
except OSError:
pass # Sigh, windows
continue
logger.debug("%s -->", get_ident())
break
while size > target and files_by_atime:
for file_name in files_by_atime.pop(files_by_atime.minKey()):
file_name = os.path.join(blob_dir, file_name)
lockfilename = os.path.join(os.path.dirname(file_name),
'.lock')
try:
lock = zc.lockfile.LockFile(lockfilename)
except zc.lockfile.LockError:
logger.debug("%s Skipping locked %s",
get_ident(),
os.path.basename(file_name))
continue # In use, skip
try:
fsize = os.stat(file_name).st_size
try:
ZODB.blob.remove_committed(file_name)
except OSError as v:
pass # probably open on windows
else:
size -= fsize
finally:
lock.close()
if size <= target:
break
logger.debug("%s reduced blob cache size: %s",
get_ident(), size)
finally:
check_lock.close()
def check_blob_size_script(args=None):
if args is None:
args = sys.argv[1:]
blob_dir, target = args
_check_blob_cache_size(blob_dir, int(target))
def _lock_blob(path):
lockfilename = os.path.join(os.path.dirname(path), '.lock')
n = 0
while 1:
try:
return zc.lockfile.LockFile(lockfilename)
        except zc.lockfile.LockError:
            time.sleep(0.01)
            n += 1
            if n > 60000:
                raise
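_lock_blob above polls for a sibling .lock file every 10 ms and gives up only after 60000 failed attempts (about ten minutes). The same retry shape with a generic acquire callable (a standalone sketch; none of these names are ZEO API):

```python
import time

def acquire_with_retry(try_acquire, delay=0.01, max_tries=60000):
    # Keep retrying an acquire callable that raises on contention;
    # re-raise the last error once the retry budget is exhausted.
    n = 0
    while True:
        try:
            return try_acquire()
        except Exception:
            n += 1
            if n > max_tries:
                raise
            time.sleep(delay)

attempts = []
def flaky():
    # Fails twice, then succeeds -- simulating a briefly held lock.
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError('still locked')
    return 'lock'

print(acquire_with_retry(flaky, delay=0))  # 'lock', on the third try
```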
def open_cache(cache, var, client, storage, cache_size):
if isinstance(cache, (None.__class__, str)):
from ZEO.cache import ClientCache
if cache is None:
if client:
cache = os.path.join(var or os.getcwd(),
"%s-%s.zec" % (client, storage))
else:
# ephemeral cache
return ClientCache(None, cache_size)
cache = ClientCache(cache, cache_size)
    return cache
ZEO-5.0.0a0/src/ZEO/Exceptions.py
##############################################################################
ZEO-5.0.0a0/src/ZEO/Exceptions.py 0000664 0000000 0000000 00000002324 12737730500 0016316 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Exceptions for ZEO."""
from ZODB.POSException import StorageError
class ClientStorageError(StorageError):
"""An error occurred in the ZEO Client Storage."""
class UnrecognizedResult(ClientStorageError):
"""A server call returned an unrecognized result."""
class ClientDisconnected(ClientStorageError):
"""The database storage is disconnected from the storage."""
class AuthError(StorageError):
"""The client provided invalid authentication credentials."""
class ProtocolError(ClientStorageError):
"""A client contacted a server with an incomparible protocol
"""
ZEO-5.0.0a0/src/ZEO/StorageServer.py 0000664 0000000 0000000 00000141204 12737730500 0016771 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002, 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""The StorageServer class and the exception that it may raise.
This server acts as a front-end for one or more real storages, like
file storage or Berkeley storage.
TODO: Need some basic access control-- a declaration of the methods
exported for invocation by the server.
"""
import codecs
import itertools
import logging
import os
import socket
import sys
import tempfile
import threading
import time
import transaction
import warnings
import ZEO.asyncio.server
import ZODB.blob
import ZODB.event
import ZODB.serialize
import ZODB.TimeStamp
import zope.interface
import six
from ZEO._compat import Pickler, Unpickler, PY3, BytesIO
from ZEO.Exceptions import AuthError
from ZEO.monitor import StorageStats
from ZEO.asyncio.server import Delay, MTDelay, Result
from ZODB.ConflictResolution import ResolvedSerial
from ZODB.loglevels import BLATHER
from ZODB.POSException import StorageError, StorageTransactionError
from ZODB.POSException import TransactionError, ReadOnlyError, ConflictError
from ZODB.serialize import referencesf
from ZODB.utils import oid_repr, p64, u64, z64
from .asyncio.server import Acceptor
logger = logging.getLogger('ZEO.StorageServer')
def log(message, level=logging.INFO, label='', exc_info=False):
"""Internal helper to log a message."""
if label:
message = "(%s) %s" % (label, message)
logger.log(level, message, exc_info=exc_info)
class StorageServerError(StorageError):
"""Error reported when an unpicklable exception is raised."""
registered_methods = set(( 'get_info', 'lastTransaction',
'getInvalidations', 'new_oids', 'pack', 'loadBefore', 'storea',
'checkCurrentSerialInTransaction', 'restorea', 'storeBlobStart',
'storeBlobChunk', 'storeBlobEnd', 'storeBlobShared',
'deleteObject', 'tpc_begin', 'vote', 'tpc_finish', 'tpc_abort',
'history', 'record_iternext', 'sendBlob', 'getTid', 'loadSerial',
'new_oid', 'undoa', 'undoLog', 'undoInfo', 'iterator_start',
'iterator_next', 'iterator_record_start', 'iterator_record_next',
'iterator_gc', 'server_status', 'set_client_label'))
class ZEOStorage:
"""Proxy to underlying storage for a single remote client."""
# A list of extension methods. A subclass with extra methods
# should override.
extensions = []
connected = connection = stats = storage = storage_id = transaction = None
blob_tempfile = None
log_label = 'unconnected'
locked = False # Don't have storage lock
verifying = 0
def __init__(self, server, read_only=0):
self.server = server
self.client_conflict_resolution = server.client_conflict_resolution
# timeout and stats will be initialized in register()
self.read_only = read_only
# The authentication protocol may define extra methods.
self._extensions = {}
for func in self.extensions:
self._extensions[func.__name__] = None
self._iterators = {}
self._iterator_ids = itertools.count()
# Stores the last item that was handed out for a
# transaction iterator.
self._txn_iterators_last = {}
def set_database(self, database):
self.database = database
def notify_connected(self, conn):
self.connection = conn
self.connected = True
assert conn.protocol_version is not None
self.log_label = _addr_label(conn.addr)
def notify_disconnected(self):
# When this storage closes, we must ensure that it aborts
# any pending transaction.
if self.transaction is not None:
self.log("disconnected during %s transaction"
% (self.locked and 'locked' or 'unlocked'))
self.tpc_abort(self.transaction.id)
else:
self.log("disconnected")
self.connected = False
self.server.close_conn(self)
def __repr__(self):
tid = self.transaction and repr(self.transaction.id)
if self.storage:
stid = (self.tpc_transaction() and
repr(self.tpc_transaction().id))
else:
stid = None
name = self.__class__.__name__
return "<%s %X trans=%s s_trans=%s>" % (name, id(self), tid, stid)
def log(self, msg, level=logging.INFO, exc_info=False):
log(msg, level=level, label=self.log_label, exc_info=exc_info)
def setup_delegation(self):
"""Delegate several methods to the storage
"""
# Called from register
storage = self.storage
info = self.get_info()
if not info['supportsUndo']:
self.undoLog = self.undoInfo = lambda *a,**k: ()
self.getTid = storage.getTid
self.load = storage.load
self.loadSerial = storage.loadSerial
record_iternext = getattr(storage, 'record_iternext', None)
if record_iternext is not None:
self.record_iternext = record_iternext
try:
fn = storage.getExtensionMethods
except AttributeError:
pass # no extension methods
else:
d = fn()
self._extensions.update(d)
for name in d:
assert not hasattr(self, name)
setattr(self, name, getattr(storage, name))
self.lastTransaction = storage.lastTransaction
try:
self.tpc_transaction = storage.tpc_transaction
except AttributeError:
if hasattr(storage, '_transaction'):
log("Storage %r doesn't have a tpc_transaction method.\n"
"See ZEO.interfaces.IServeable."
"Falling back to using _transaction attribute, which\n."
"is icky.",
logging.ERROR)
self.tpc_transaction = lambda : storage._transaction
else:
raise
self.connection.methods = registered_methods
    def history(self, tid, size=1):
        # This caters for storages which still accept
        # a version parameter.
        return self.storage.history(tid, size=size)
def _check_tid(self, tid, exc=None):
if self.read_only:
raise ReadOnlyError()
if self.transaction is None:
caller = sys._getframe().f_back.f_code.co_name
self.log("no current transaction: %s()" % caller,
level=logging.WARNING)
if exc is not None:
raise exc(None, tid)
else:
return 0
if self.transaction.id != tid:
caller = sys._getframe().f_back.f_code.co_name
self.log("%s(%s) invalid; current transaction = %s" %
(caller, repr(tid), repr(self.transaction.id)),
logging.WARNING)
if exc is not None:
raise exc(self.transaction.id, tid)
else:
return 0
return 1
def register(self, storage_id, read_only):
"""Select the storage that this client will use
This method must be the first one called by the client.
For authenticated storages this method will be called by the client
immediately after authentication is finished.
"""
if self.storage is not None:
self.log("duplicate register() call")
raise ValueError("duplicate register() call")
storage = self.server.storages.get(storage_id)
if storage is None:
self.log("unknown storage_id: %s" % storage_id)
raise ValueError("unknown storage: %s" % storage_id)
if not read_only and (self.read_only or storage.isReadOnly()):
raise ReadOnlyError()
self.read_only = self.read_only or read_only
self.storage_id = storage_id
self.storage = storage
self.setup_delegation()
self.stats = self.server.register_connection(storage_id, self)
def get_info(self):
storage = self.storage
supportsUndo = (getattr(storage, 'supportsUndo', lambda : False)()
and self.connection.protocol_version >= b'Z310')
# Communicate the backend storage interfaces to the client
storage_provides = zope.interface.providedBy(storage)
interfaces = []
for candidate in storage_provides.__iro__:
interfaces.append((candidate.__module__, candidate.__name__))
return {'length': len(storage),
'size': storage.getSize(),
'name': storage.getName(),
'supportsUndo': supportsUndo,
'extensionMethods': self.getExtensionMethods(),
'supports_record_iternext': hasattr(self, 'record_iternext'),
'interfaces': tuple(interfaces),
}
def get_size_info(self):
return {'length': len(self.storage),
'size': self.storage.getSize(),
}
def getExtensionMethods(self):
return self._extensions
def loadEx(self, oid):
self.stats.loads += 1
return self.storage.load(oid, '')
def loadBefore(self, oid, tid):
self.stats.loads += 1
return self.storage.loadBefore(oid, tid)
def getInvalidations(self, tid):
invtid, invlist = self.server.get_invalidations(self.storage_id, tid)
if invtid is None:
return None
self.log("Return %d invalidations up to tid %s"
% (len(invlist), u64(invtid)))
return invtid, invlist
def pack(self, time, wait=1):
# Yes, you can pack a read-only server or storage!
if wait:
return run_in_thread(self._pack_impl, time)
else:
# If the client isn't waiting for a reply, start a thread
# and forget about it.
t = threading.Thread(target=self._pack_impl, args=(time,))
t.setName("zeo storage packing thread")
t.start()
return None
def _pack_impl(self, time):
self.log("pack(time=%s) started..." % repr(time))
self.storage.pack(time, referencesf)
self.log("pack(time=%s) complete" % repr(time))
# Broadcast new size statistics
self.server.invalidate(0, self.storage_id, None,
(), self.get_size_info())
def new_oids(self, n=100):
"""Return a sequence of n new oids, where n defaults to 100"""
n = min(n, 100)
if self.read_only:
raise ReadOnlyError()
if n <= 0:
n = 1
return [self.storage.new_oid() for i in range(n)]
# undoLog and undoInfo are potentially slow methods
def undoInfo(self, first, last, spec):
return run_in_thread(self.storage.undoInfo, first, last, spec)
def undoLog(self, first, last):
return run_in_thread(self.storage.undoLog, first, last)
def tpc_begin(self, id, user, description, ext, tid=None, status=" "):
if self.read_only:
raise ReadOnlyError()
if self.transaction is not None:
if self.transaction.id == id:
self.log("duplicate tpc_begin(%s)" % repr(id))
return
else:
raise StorageTransactionError("Multiple simultaneous tpc_begin"
" requests from one client.")
t = transaction.Transaction()
t.id = id
t.user = user
t.description = description
t._extension = ext
self.serials = []
self.conflicts = {}
self.invalidated = []
self.txnlog = CommitLog()
self.blob_log = []
self.tid = tid
self.status = status
self.stats.active_txns += 1
# Assign the transaction attribute last. This is so we don't
# think we've entered TPC until everything is set. Why?
# Because if we have an error after this, the server will
# think it is in TPC and the client will think it isn't. At
# that point, the client will keep trying to enter TPC and
# server won't let it. Errors *after* the tpc_begin call will
# cause the client to abort the transaction.
# (Also see https://bugs.launchpad.net/zodb/+bug/374737.)
self.transaction = t
def tpc_finish(self, id):
if not self._check_tid(id):
return
assert self.locked, "tpc_finish called without the commit lock"
self.stats.commits += 1
self.storage.tpc_finish(self.transaction, self._invalidate)
self.connection.async('info', self.get_size_info())
# Note that the tid is still current because we still hold the
# commit lock. We'll relinquish it in _clear_transaction.
tid = self.storage.lastTransaction()
# Return the tid, for cache invalidation optimization
return Result(tid, self._clear_transaction)
def _invalidate(self, tid):
if self.invalidated:
self.server.invalidate(self, self.storage_id, tid, self.invalidated)
def tpc_abort(self, tid):
if not self._check_tid(tid):
return
self.stats.aborts += 1
self.storage.tpc_abort(self.transaction)
self._clear_transaction()
def _clear_transaction(self):
# Common code at end of tpc_finish() and tpc_abort()
if self.locked:
self.server.unlock_storage(self)
self.locked = 0
if self.transaction is not None:
self.server.stop_waiting(self)
self.transaction = None
self.stats.active_txns -= 1
if self.txnlog is not None:
self.txnlog.close()
self.txnlog = None
for oid, oldserial, data, blobfilename in self.blob_log:
ZODB.blob.remove_committed(blobfilename)
del self.blob_log
def vote(self, tid):
self._check_tid(tid, exc=StorageTransactionError)
if self.locked or self.server.already_waiting(self):
raise StorageTransactionError(
'Already voting (%s)' % (self.locked and 'locked' or 'waiting')
)
return self._try_to_vote()
def _try_to_vote(self, delay=None):
if not self.connected:
return # We're disconnected
if delay is not None and delay.sent:
# as a consequence of the unlocking strategy, _try_to_vote
# may be called multiple times for delayed
# transactions. The first call will mark the delay as
# sent. We should skip if the delay was already sent.
return
self.locked, delay = self.server.lock_storage(self, delay)
if self.locked:
result = None
try:
self.log(
"Preparing to commit transaction: %d objects, %d bytes"
% (self.txnlog.stores, self.txnlog.size()),
level=BLATHER)
if (self.tid is not None) or (self.status != ' '):
self.storage.tpc_begin(self.transaction,
self.tid, self.status)
else:
self.storage.tpc_begin(self.transaction)
for op, args in self.txnlog:
getattr(self, op)(*args)
# Blob support
while self.blob_log:
oid, oldserial, data, blobfilename = self.blob_log.pop()
self._store(oid, oldserial, data, blobfilename)
if not self.conflicts:
try:
serials = self.storage.tpc_vote(self.transaction)
except ConflictError as err:
if (self.client_conflict_resolution and
err.oid and err.serials and err.data
):
self.conflicts[err.oid] = dict(
oid=err.oid, serials=err.serials, data=err.data)
else:
raise
else:
if serials:
self.serials.extend(serials)
result = self.serials
if self.conflicts:
result = list(self.conflicts.values())
self.storage.tpc_abort(self.transaction)
self.server.unlock_storage(self)
self.locked = False
self.server.stop_waiting(self)
except Exception as err:
self.storage.tpc_abort(self.transaction)
self._clear_transaction()
if isinstance(err, ConflictError):
self.stats.conflicts += 1
self.log("conflict error %s" % err, BLATHER)
if not isinstance(err, TransactionError):
logger.exception("While voting")
if delay is not None:
delay.error(sys.exc_info())
else:
raise
else:
if delay is not None:
delay.reply(result)
else:
return result
else:
return delay
def _unlock_callback(self, delay):
if self.connected:
self.connection.call_soon_threadsafe(self._try_to_vote, delay)
else:
self.server.stop_waiting(self)
# The public methods of the ZEO client API do not do the real work.
# They defer work until after the storage lock has been acquired.
# Most of the real implementations are in methods beginning with
# an _.
def deleteObject(self, oid, serial, id):
self._check_tid(id, exc=StorageTransactionError)
self.stats.stores += 1
self.txnlog.delete(oid, serial)
def storea(self, oid, serial, data, id):
self._check_tid(id, exc=StorageTransactionError)
self.stats.stores += 1
self.txnlog.store(oid, serial, data)
def checkCurrentSerialInTransaction(self, oid, serial, id):
self._check_tid(id, exc=StorageTransactionError)
self.txnlog.checkread(oid, serial)
def restorea(self, oid, serial, data, prev_txn, id):
self._check_tid(id, exc=StorageTransactionError)
self.stats.stores += 1
self.txnlog.restore(oid, serial, data, prev_txn)
def storeBlobStart(self):
assert self.blob_tempfile is None
self.blob_tempfile = tempfile.mkstemp(
dir=self.storage.temporaryDirectory())
def storeBlobChunk(self, chunk):
os.write(self.blob_tempfile[0], chunk)
def storeBlobEnd(self, oid, serial, data, id):
self._check_tid(id, exc=StorageTransactionError)
assert self.txnlog is not None # effectively not allowed after undo
fd, tempname = self.blob_tempfile
self.blob_tempfile = None
os.close(fd)
self.blob_log.append((oid, serial, data, tempname))
def storeBlobShared(self, oid, serial, data, filename, id):
self._check_tid(id, exc=StorageTransactionError)
assert self.txnlog is not None # effectively not allowed after undo
# Reconstruct the full path from the filename in the OID directory
if (os.path.sep in filename
or not (filename.endswith('.tmp')
or filename[:-1].endswith('.tmp')
)
):
logger.critical(
"We're under attack! (bad filename to storeBlobShared, %r)",
filename)
raise ValueError(filename)
filename = os.path.join(self.storage.fshelper.getPathForOID(oid),
filename)
self.blob_log.append((oid, serial, data, filename))
def sendBlob(self, oid, serial):
blobfilename = self.storage.loadBlob(oid, serial)
def store():
yield ('receiveBlobStart', (oid, serial))
with open(blobfilename, 'rb') as f:
while 1:
chunk = f.read(59000)
if not chunk:
break
yield ('receiveBlobChunk', (oid, serial, chunk, ))
yield ('receiveBlobStop', (oid, serial))
self.connection.call_async_iter(store())
def undo(*a, **k):
raise NotImplementedError
def undoa(self, trans_id, tid):
self._check_tid(tid, exc=StorageTransactionError)
self.txnlog.undo(trans_id)
def _delete(self, oid, serial):
self.storage.deleteObject(oid, serial, self.transaction)
def _checkread(self, oid, serial):
self.storage.checkCurrentSerialInTransaction(
oid, serial, self.transaction)
def _store(self, oid, serial, data, blobfile=None):
try:
if blobfile is None:
self.storage.store(oid, serial, data, '', self.transaction)
else:
self.storage.storeBlob(
oid, serial, data, blobfile, '', self.transaction)
except ConflictError as err:
if self.client_conflict_resolution and err.serials:
self.conflicts[oid] = dict(
oid=oid, serials=err.serials, data=data)
else:
raise
else:
if oid in self.conflicts:
del self.conflicts[oid]
if serial != b"\0\0\0\0\0\0\0\0":
self.invalidated.append(oid)
def _restore(self, oid, serial, data, prev_txn):
self.storage.restore(oid, serial, data, '', prev_txn,
self.transaction)
def _undo(self, trans_id):
tid, oids = self.storage.undo(trans_id, self.transaction)
self.invalidated.extend(oids)
self.serials.extend(oids)
# IStorageIteration support
def iterator_start(self, start, stop):
iid = next(self._iterator_ids)
self._iterators[iid] = iter(self.storage.iterator(start, stop))
return iid
def iterator_next(self, iid):
iterator = self._iterators[iid]
try:
info = next(iterator)
except StopIteration:
del self._iterators[iid]
item = None
if iid in self._txn_iterators_last:
del self._txn_iterators_last[iid]
else:
item = (info.tid,
info.status,
info.user,
info.description,
info.extension)
# Keep a reference to the last iterator result to allow starting a
# record iterator off it.
self._txn_iterators_last[iid] = info
return item
def iterator_record_start(self, txn_iid, tid):
record_iid = next(self._iterator_ids)
txn_info = self._txn_iterators_last[txn_iid]
if txn_info.tid != tid:
raise Exception(
'Out-of-order request for record iterator for transaction %r'
% tid)
self._iterators[record_iid] = iter(txn_info)
return record_iid
def iterator_record_next(self, iid):
iterator = self._iterators[iid]
try:
info = next(iterator)
except StopIteration:
del self._iterators[iid]
item = None
else:
item = (info.oid,
info.tid,
info.data,
info.data_txn)
return item
def iterator_gc(self, iids):
for iid in iids:
self._iterators.pop(iid, None)
def server_status(self):
return self.server.server_status(self.storage_id)
def set_client_label(self, label):
self.log_label = str(label)+' '+_addr_label(self.connection.addr)
def ruok(self):
return self.server.ruok()
class StorageServerDB:
"""Adapter from StorageServerDB to ZODB.interfaces.IStorageWrapper
This is used in a ZEO fan-out situation, where a storage server
calls registerDB on a ClientStorage.
Note that this is called from the ClientStorage's IO thread, so
always a separate thread from the storage-server connections.
"""
def __init__(self, server, storage_id):
self.server = server
self.storage_id = storage_id
self.references = ZODB.serialize.referencesf
def invalidate(self, tid, oids, version=''):
if version:
raise StorageServerError("Versions aren't supported.")
storage_id = self.storage_id
self.server.invalidate(None, storage_id, tid, oids)
def invalidateCache(self):
self.server._invalidateCache(self.storage_id)
transform_record_data = untransform_record_data = lambda self, data: data
class StorageServer:
"""The server side implementation of ZEO.
The StorageServer is the 'manager' for incoming connections. Each
connection is associated with its own ZEOStorage instance (defined
below). The StorageServer may handle multiple storages; each
ZEOStorage instance only handles a single storage.
"""
def __init__(self, addr, storages,
read_only=0,
invalidation_queue_size=100,
invalidation_age=None,
transaction_timeout=None,
ssl=None,
client_conflict_resolution=False,
):
"""StorageServer constructor.
This is typically invoked from the start.py script.
Arguments (the first two are required and positional):
addr -- the address at which the server should listen. This
can be a tuple (host, port) to signify a TCP/IP connection
or a pathname string to signify a Unix domain socket
connection. A hostname may be a DNS name or a dotted IP
address.
storages -- a dictionary giving the storage(s) to handle. The
keys are the storage names, the values are the storage
instances, typically FileStorage or Berkeley storage
instances. By convention, storage names are typically
strings representing small integers starting at '1'.
read_only -- an optional flag saying whether the server should
operate in read-only mode. Defaults to false. Note that
even if the server is operating in writable mode,
individual storages may still be read-only. But if the
server is in read-only mode, no write operations are
allowed, even if the storages are writable. Note that
pack() is considered a read-only operation.
invalidation_queue_size -- The storage server keeps a queue
of the objects modified by the last N transactions, where
N == invalidation_queue_size. This queue is used to
speed client cache verification when a client disconnects
for a short period of time.
invalidation_age --
If the invalidation queue isn't big enough to support a
quick verification, but the last transaction seen by a
client is younger than the invalidation age, then
invalidations will be computed by iterating over
transactions later than the given transaction.
transaction_timeout -- The maximum amount of time to wait for
a transaction to commit after acquiring the storage lock.
If the transaction takes too long, the client connection
will be closed and the transaction aborted.
"""
self.storages = storages
msg = ", ".join(
["%s:%s:%s" % (name, storage.isReadOnly() and "RO" or "RW",
storage.getName())
for name, storage in storages.items()])
log("%s created %s with storages: %s" %
(self.__class__.__name__, read_only and "RO" or "RW", msg))
self._lock = threading.Lock()
self._commit_locks = {}
self._waiting = dict((name, []) for name in storages)
self.read_only = read_only
self.database = None
# A list, by server, of at most invalidation_queue_size invalidations.
# The list is kept in sorted order with the most recent
# invalidation at the front. The list never has more than
# self.invq_bound elements.
self.invq_bound = invalidation_queue_size
self.invq = {}
for name, storage in storages.items():
self._setup_invq(name, storage)
storage.registerDB(StorageServerDB(self, name))
if client_conflict_resolution:
# XXX this may go away later, when storages grow
# configuration for this.
storage.tryToResolveConflict = never_resolve_conflict
self.invalidation_age = invalidation_age
self.zeo_storages_by_storage_id = {} # {storage_id -> [ZEOStorage]}
self.client_conflict_resolution = client_conflict_resolution
if addr is not None:
self.acceptor = Acceptor(self, addr, ssl)
if isinstance(addr, tuple) and addr[0]:
self.addr = self.acceptor.addr
else:
self.addr = addr
self.loop = self.acceptor.loop
ZODB.event.notify(Serving(self, address=self.acceptor.addr))
self.stats = {}
self.timeouts = {}
for name in self.storages.keys():
self.zeo_storages_by_storage_id[name] = []
self.stats[name] = StorageStats(
self.zeo_storages_by_storage_id[name])
if transaction_timeout is None:
# An object with no-op methods
timeout = StubTimeoutThread()
else:
timeout = TimeoutThread(transaction_timeout)
timeout.setName("TimeoutThread for %s" % name)
timeout.start()
self.timeouts[name] = timeout
def create_client_handler(self):
return ZEOStorage(self, self.read_only)
def _setup_invq(self, name, storage):
lastInvalidations = getattr(storage, 'lastInvalidations', None)
if lastInvalidations is None:
# Using None below doesn't look right, but the first
# element in invq is never used. See get_invalidations.
# (If it was used, it would generate an error, which would
# be good. :) Doing this allows clients that were up to
# date when a server was restarted to pick up transactions
# it subsequently missed.
self.invq[name] = [(storage.lastTransaction() or z64, None)]
else:
self.invq[name] = list(lastInvalidations(self.invq_bound))
self.invq[name].reverse()
def register_connection(self, storage_id, zeo_storage):
"""Internal: register a ZEOStorage with a particular storage.
This is called by ZEOStorage.register().
The dictionary self.zeo_storages_by_storage_id maps each
storage name to a list of current ZEOStorages for that
storage; this information is needed to handle invalidation.
This function updates this dictionary.
Returns the timeout and stats objects for the appropriate storage.
"""
self.zeo_storages_by_storage_id[storage_id].append(zeo_storage)
return self.stats[storage_id]
def _invalidateCache(self, storage_id):
"""We need to invalidate any caches we have.
This basically means telling our clients to
invalidate/revalidate their caches. We do this by closing them
and making them reconnect.
"""
# This method is called from foreign threads. We have to
# worry about interaction with the main thread.
# 1. We modify self.invq which is read by get_invalidations
# below. This is why get_invalidations makes a copy of
# self.invq.
# 2. We access connections. There are two dangers:
#
# a. We miss a new connection. This is not a problem because
# if a client connects after we get the list of connections,
# then it will have to read the invalidation queue, which
# has already been reset.
#
# b. A connection is closed while we are iterating. This
# doesn't matter, because we can call should_close on a closed
# connection.
# Rebuild invq
self._setup_invq(storage_id, self.storages[storage_id])
# Make a copy since we are going to be mutating the
# connections indirectly by closing them. We don't care about
# later transactions since they will have to validate their
# caches anyway.
for zs in self.zeo_storages_by_storage_id[storage_id][:]:
zs.connection.call_soon_threadsafe(zs.connection.close)
def invalidate(
self, zeo_storage, storage_id, tid, invalidated=(), info=None):
"""Internal: broadcast info and invalidations to clients.
This is called from several ZEOStorage methods.
invalidated is a sequence of oids.
This can do three different things:
- If the invalidated argument is non-empty, it broadcasts
invalidateTransaction() messages to all clients of the given
storage except the current client (the zeo_storage argument).
- If the invalidated argument is empty and the info argument
is a non-empty dictionary, it broadcasts info() messages to
all clients of the given storage, including the current
client.
- If both the invalidated argument and the info argument are
non-empty, it broadcasts invalidateTransaction() messages to all
clients except the current, and sends an info() message to
the current client.
"""
# This method can be called from foreign threads. We have to
# worry about interaction with the main thread.
# 1. We modify self.invq which is read by get_invalidations
# below. This is why get_invalidations makes a copy of
# self.invq.
# 2. We access connections. There are two dangers:
#
# a. We miss a new connection. This is not a problem because
# we are called while the storage lock is held. A new
# connection that tries to read data won't read committed
# data without first receiving an invalidation. Also, if a
# client connects after getting the list of connections,
# then it will have to read the invalidation queue, which
# has been updated to reflect the invalidations.
#
# b. A connection is closed while we are iterating. We'll need
# to catch and ignore Disconnected errors.
if invalidated:
invq = self.invq[storage_id]
if len(invq) >= self.invq_bound:
invq.pop()
invq.insert(0, (tid, invalidated))
# serialize invalidation message, so we don't have to do
# it over and over
for zs in self.zeo_storages_by_storage_id[storage_id]:
connection = zs.connection
if invalidated and zs is not zeo_storage:
connection.call_soon_threadsafe(
connection.async, 'invalidateTransaction', tid, invalidated)
elif info is not None:
connection.call_soon_threadsafe(
connection.async, 'info', info)
def get_invalidations(self, storage_id, tid):
"""Return a tid and a list of all objects invalidated since tid.
The tid is the most recent transaction id seen by the client.
Returns a None tid if it is unable to provide a complete list
of invalidations for tid. In this case, the client should
do full cache verification.
XXX This API is stupid. It would be better to simply return a
list of oid-tid pairs. With this API, we can't really use the
tid returned and have to discard all versions for an OID. If
we used the max tid, then loadBefore results from the cache
might be incorrect.
"""
# We make a copy of invq because it might be modified by a
# foreign (other than main thread) calling invalidate above.
invq = self.invq[storage_id][:]
oids = set()
latest_tid = None
if invq and invq[-1][0] <= tid:
# We have needed data in the queue
for _tid, L in invq:
if _tid <= tid:
break
oids.update(L)
latest_tid = invq[0][0]
elif (self.invalidation_age and
(self.invalidation_age >
(time.time()-ZODB.TimeStamp.TimeStamp(tid).timeTime())
)
):
for t in self.storages[storage_id].iterator(p64(u64(tid)+1)):
for r in t:
oids.add(r.oid)
latest_tid = t.tid
elif not invq:
log("invq empty")
else:
log("tid too old for invq %s < %s" % (u64(tid), u64(invq[-1][0])))
return latest_tid, list(oids)
__thread = None
def start_thread(self, daemon=True):
self.__thread = thread = threading.Thread(target=self.loop)
thread.setName("StorageServer(%s)" % _addr_label(self.addr))
thread.setDaemon(daemon)
thread.start()
__closed = False
def close(self, join_timeout=1):
"""Close the dispatcher so that there are no new connections.
This is only called from the test suite, AFAICT.
"""
if self.__closed:
return
self.__closed = True
# Stop accepting connections
self.acceptor.close()
ZODB.event.notify(Closed(self))
# Close open client connections
for sid, zeo_storages in self.zeo_storages_by_storage_id.items():
for zs in zeo_storages[:]:
try:
logger.debug("Closing %s", zs.connection)
zs.connection.call_soon_threadsafe(
zs.connection.close)
except Exception:
logger.exception("closing connection %r", zs)
for name, storage in six.iteritems(self.storages):
logger.info("closing storage %r", name)
storage.close()
if self.__thread is not None:
self.__thread.join(join_timeout)
def close_conn(self, zeo_storage):
"""Remove the given zeo_storage from self.zeo_storages_by_storage_id.
This is the inverse of register_connection().
"""
for zeo_storages in self.zeo_storages_by_storage_id.values():
if zeo_storage in zeo_storages:
zeo_storages.remove(zeo_storage)
def lock_storage(self, zeostore, delay):
storage_id = zeostore.storage_id
waiting = self._waiting[storage_id]
with self._lock:
if storage_id in self._commit_locks:
# The lock is held by another zeostore
locked = self._commit_locks[storage_id]
assert locked is not zeostore, (storage_id, delay)
if not locked.connected:
locked.log("Still locked after disconnected. Unlocking.",
logging.CRITICAL)
if locked.transaction:
locked.storage.tpc_abort(locked.transaction)
del self._commit_locks[storage_id]
# yuck: have to release the lock so the recursive call's "with" can reacquire it :(
self._lock.release()
try:
return self.lock_storage(zeostore, delay)
finally:
self._lock.acquire()
if delay is None:
# New request, queue it
assert not [i for i in waiting if i[0] is zeostore
], "already waiting"
delay = Delay()
waiting.append((zeostore, delay))
zeostore.log("(%r) queue lock: transactions waiting: %s"
% (storage_id, len(waiting)),
_level_for_waiting(waiting)
)
return False, delay
else:
self._commit_locks[storage_id] = zeostore
self.timeouts[storage_id].begin(zeostore)
self.stats[storage_id].lock_time = time.time()
if delay is not None:
# we were waiting, stop
waiting[:] = [i for i in waiting if i[0] is not zeostore]
zeostore.log("(%r) lock: transactions waiting: %s"
% (storage_id, len(waiting)),
_level_for_waiting(waiting)
)
return True, delay
def unlock_storage(self, zeostore):
storage_id = zeostore.storage_id
waiting = self._waiting[storage_id]
with self._lock:
assert self._commit_locks[storage_id] is zeostore
del self._commit_locks[storage_id]
self.timeouts[storage_id].end(zeostore)
self.stats[storage_id].lock_time = None
callbacks = waiting[:]
if callbacks:
assert not [i for i in waiting if i[0] is zeostore
], "waiting while unlocking"
zeostore.log("(%r) unlock: transactions waiting: %s"
% (storage_id, len(callbacks)),
_level_for_waiting(callbacks)
)
for zeostore, delay in callbacks:
try:
zeostore._unlock_callback(delay)
except (SystemExit, KeyboardInterrupt):
raise
except Exception:
logger.exception("Calling unlock callback")
def stop_waiting(self, zeostore):
storage_id = zeostore.storage_id
waiting = self._waiting[storage_id]
with self._lock:
new_waiting = [i for i in waiting if i[0] is not zeostore]
if len(new_waiting) == len(waiting):
return
waiting[:] = new_waiting
zeostore.log("(%r) dequeue lock: transactions waiting: %s"
% (storage_id, len(waiting)),
_level_for_waiting(waiting)
)
def already_waiting(self, zeostore):
storage_id = zeostore.storage_id
waiting = self._waiting[storage_id]
with self._lock:
return bool([i for i in waiting if i[0] is zeostore])
def server_status(self, storage_id):
status = self.stats[storage_id].__dict__.copy()
status['connections'] = len(status['connections'])
status['waiting'] = len(self._waiting[storage_id])
status['timeout-thread-is-alive'] = self.timeouts[storage_id].isAlive()
last_transaction = self.storages[storage_id].lastTransaction()
last_transaction_hex = codecs.encode(last_transaction, 'hex_codec')
if PY3:
# doctests and maybe clients expect a str, not bytes
last_transaction_hex = str(last_transaction_hex, 'ascii')
status['last-transaction'] = last_transaction_hex
return status
def ruok(self):
return dict((storage_id, self.server_status(storage_id))
for storage_id in self.storages)
def _level_for_waiting(waiting):
if len(waiting) > 9:
return logging.CRITICAL
if len(waiting) > 3:
return logging.WARNING
else:
return logging.DEBUG
class StubTimeoutThread:
def begin(self, client):
pass
def end(self, client):
pass
isAlive = lambda self: 'stub'
class TimeoutThread(threading.Thread):
"""Monitors transaction progress and generates timeouts."""
# There is one TimeoutThread per storage, because there's one
# transaction lock per storage.
def __init__(self, timeout):
threading.Thread.__init__(self)
self.setName("TimeoutThread")
self.setDaemon(1)
self._timeout = timeout
self._client = None
self._deadline = None
self._cond = threading.Condition() # Protects _client and _deadline
def begin(self, client):
# Called from the restart code in the "main" thread, whenever the
# storage lock is being acquired. (Serialized by asyncore.)
with self._cond:
assert self._client is None
self._client = client
self._deadline = time.time() + self._timeout
self._cond.notify()
def end(self, client):
# Called from the "main" thread whenever the storage lock is
# being released. (Serialized by asyncore.)
with self._cond:
assert self._client is not None
assert self._client is client
self._client = None
self._deadline = None
def run(self):
# Code running in the thread.
while 1:
with self._cond:
while self._deadline is None:
self._cond.wait()
howlong = self._deadline - time.time()
if howlong <= 0:
# Prevent reporting timeout more than once
self._deadline = None
client = self._client # For the howlong <= 0 branch below
if howlong <= 0:
client.log("Transaction timeout after %s seconds" %
self._timeout, logging.CRITICAL)
try:
client.connection.call_soon_threadsafe(
client.connection.close)
except:
client.log("Timeout failure", logging.CRITICAL,
exc_info=sys.exc_info())
self.end(client)
else:
time.sleep(howlong)
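The deadline-tracking pattern used by TimeoutThread above (a condition variable protecting a client/deadline pair, a loop that sleeps until the deadline, and a one-shot timeout report) can be sketched standalone. The class and callback names here are illustrative, not part of the ZEO API:

```python
import threading
import time

class DeadlineWatcher(threading.Thread):
    # Minimal sketch: begin() registers a deadline, end() cancels it,
    # and run() fires on_timeout(client) once if the deadline passes.
    def __init__(self, timeout, on_timeout):
        threading.Thread.__init__(self, daemon=True)
        self._timeout = timeout
        self._on_timeout = on_timeout
        self._client = None
        self._deadline = None
        self._cond = threading.Condition()  # protects _client and _deadline

    def begin(self, client):
        with self._cond:
            self._client = client
            self._deadline = time.time() + self._timeout
            self._cond.notify()  # wake the run() loop

    def end(self, client):
        with self._cond:
            self._client = None
            self._deadline = None

    def run(self):
        while True:
            with self._cond:
                while self._deadline is None:
                    self._cond.wait()
                howlong = self._deadline - time.time()
                if howlong <= 0:
                    # Clear the deadline so the timeout fires only once.
                    client, self._deadline = self._client, None
            if howlong <= 0:
                self._on_timeout(client)
                self.end(client)
            else:
                # Sleep outside the lock so begin()/end() aren't blocked.
                time.sleep(howlong)
```

As in TimeoutThread, the sleep happens outside the condition's lock so that lock acquire/release notifications are never delayed by the watcher.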
def run_in_thread(method, *args):
t = SlowMethodThread(method, args)
t.start()
return t.delay
class SlowMethodThread(threading.Thread):
"""Thread to run potentially slow storage methods.
Clients can use the delay attribute to access the MTDelay object
used to send a zrpc response at the right time.
"""
# Some storage methods can take a long time to complete. If we
# run these methods via a standard asyncore read handler, they
# will block all other server activity until they complete. To
# avoid blocking, we spawn a separate thread, return an MTDelay()
# object, and have the thread reply() when it finishes.
def __init__(self, method, args):
threading.Thread.__init__(self)
self.setName("SlowMethodThread for %s" % method.__name__)
self._method = method
self._args = args
self.delay = MTDelay()
def run(self):
try:
result = self._method(*self._args)
except (SystemExit, KeyboardInterrupt):
raise
except Exception:
self.delay.error(sys.exc_info())
else:
self.delay.reply(result)
def _addr_label(addr):
if isinstance(addr, six.binary_type):
return addr.decode('ascii')
if isinstance(addr, six.string_types):
return addr
else:
host, port = addr
return str(host) + ":" + str(port)
class CommitLog:
def __init__(self):
self.file = tempfile.TemporaryFile(suffix=".commit-log")
self.pickler = Pickler(self.file, 1)
self.pickler.fast = 1
self.stores = 0
def size(self):
return self.file.tell()
def delete(self, oid, serial):
self.pickler.dump(('_delete', (oid, serial)))
self.stores += 1
def checkread(self, oid, serial):
self.pickler.dump(('_checkread', (oid, serial)))
self.stores += 1
def store(self, oid, serial, data):
self.pickler.dump(('_store', (oid, serial, data)))
self.stores += 1
def restore(self, oid, serial, data, prev_txn):
self.pickler.dump(('_restore', (oid, serial, data, prev_txn)))
self.stores += 1
def undo(self, transaction_id):
self.pickler.dump(('_undo', (transaction_id, )))
self.stores += 1
def __iter__(self):
self.file.seek(0)
unpickler = Unpickler(self.file)
for i in range(self.stores):
yield unpickler.load()
def close(self):
if self.file:
self.file.close()
self.file = None
class ServerEvent:
def __init__(self, server, **kw):
self.__dict__.update(kw)
self.server = server
class Serving(ServerEvent):
pass
class Closed(ServerEvent):
pass
def never_resolve_conflict(oid, committedSerial, oldSerial, newpickle,
committedData=b''):
raise ConflictError(oid=oid, serials=(committedSerial, oldSerial),
data=newpickle)
ZEO-5.0.0a0/src/ZEO/TransactionBuffer.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""A TransactionBuffer store transaction updates until commit or abort.
A transaction may generate enough data that it is not practical to
always hold pending updates in memory. Instead, a TransactionBuffer
is used to store the data until a commit or abort.
"""
# A faster implementation might store trans data in memory until it
# reaches a certain size.
import os
import tempfile
import ZODB.blob
from ZODB.ConflictResolution import ResolvedSerial
from ZEO._compat import Pickler, Unpickler
class TransactionBuffer:
# The TransactionBuffer is used by client storage to hold update
# data until the tpc_finish(). It is only used by a single
# thread, because only one thread can be in the two-phase commit
# at one time.
def __init__(self, connection_generation):
self.connection_generation = connection_generation
self.file = tempfile.TemporaryFile(suffix=".tbuf")
self.count = 0
self.size = 0
self.blobs = []
# It's safe to use a fast pickler because the only objects
# stored are builtin types -- strings or None.
self.pickler = Pickler(self.file, 1)
self.pickler.fast = 1
self.server_resolved = set() # {oid}
self.client_resolved = {} # {oid -> buffer_record_number}
self.exception = None
def close(self):
self.file.close()
def store(self, oid, data):
"""Store oid, version, data for later retrieval"""
self.pickler.dump((oid, data))
self.count += 1
# Estimate per-record cache size
self.size = self.size + (data and len(data) or 0) + 31
def resolve(self, oid, data):
"""Record client-resolved data
"""
self.store(oid, data)
self.client_resolved[oid] = self.count - 1
def serial(self, oid, serial):
if isinstance(serial, Exception):
self.exception = serial # This transaction will never be committed
elif serial == ResolvedSerial:
self.server_resolved.add(oid)
def storeBlob(self, oid, blobfilename):
self.blobs.append((oid, blobfilename))
def __iter__(self):
self.file.seek(0)
unpickler = Unpickler(self.file)
server_resolved = self.server_resolved
client_resolved = self.client_resolved
# Gaaaa, this is awkward. There can be entries in serials that
# aren't in the buffer, because undo. Entries can be repeated
# in the buffer, because ZODB. (Maybe this is a bug now, but
        # it may be a feature later.)
seen = set()
for i in range(self.count):
oid, data = unpickler.load()
if client_resolved.get(oid, i) == i:
seen.add(oid)
yield oid, data, oid in server_resolved
# We may have leftover oids because undo
for oid in server_resolved:
if oid not in seen:
yield oid, None, True
ZEO-5.0.0a0/src/ZEO/__init__.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""ZEO -- Zope Enterprise Objects.
See the file README.txt in this directory for an overview.
ZEO is now part of ZODB; ZODB's home on the web is
http://wiki.zope.org/ZODB
"""
def client(*args, **kw):
import ZEO.ClientStorage
return ZEO.ClientStorage.ClientStorage(*args, **kw)
def DB(*args, **kw):
s = client(*args, **kw)
try:
import ZODB
return ZODB.DB(s)
except Exception:
s.close()
raise
def connection(*args, **kw):
db = DB(*args, **kw)
try:
return db.open_then_close_db_when_connection_closes()
except Exception:
db.close()
        raise
def server(path=None, blob_dir=None, storage_conf=None, zeo_conf=None,
port=0, threaded=True, **kw):
"""Convenience function to start a server for interactive exploration
This fuction starts a ZEO server, given a storage configuration or
a file-storage path and blob directory. You can also supply a ZEO
    configuration string or a port. If neither a ZEO port nor
configuration is supplied, a port is chosen randomly.
The server address and a stop function are returned. The address
can be passed to ZEO.ClientStorage.ClientStorage or ZEO.DB to
create a client to the server. The stop function can be called
without arguments to stop the server.
Arguments:
path
A file-storage path. This argument is ignored if a storage
configuration is supplied.
blob_dir
A blob directory path. This argument is ignored if a storage
configuration is supplied.
storage_conf
A storage configuration string. If none is supplied, then at
least a file-storage path must be supplied and the storage
configuration will be generated from the file-storage path and
the blob directory.
zeo_conf
A ZEO server configuration string.
port
       If no ZEO configuration is supplied, one will be computed
       from the port. If no port is supplied, one will be chosen
       dynamically.
"""
import os, ZEO.tests.forker
if storage_conf is None and path is None:
        storage_conf = '<mappingstorage>\n</mappingstorage>'
return ZEO.tests.forker.start_zeo_server(
storage_conf, zeo_conf, port, keep=True, path=path,
blob_dir=blob_dir, suicide=False, threaded=threaded, **kw)
ZEO-5.0.0a0/src/ZEO/_compat.py
##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Python versions compatiblity
"""
import sys
import platform
PY3 = sys.version_info[0] >= 3
PY32 = sys.version_info[:2] == (3, 2)
PYPY = getattr(platform, 'python_implementation', lambda: None)() == 'PyPy'
if PY3:
from pickle import Pickler, Unpickler as _Unpickler, dump, dumps, loads
class Unpickler(_Unpickler):
# Py3: Python 3 doesn't allow assignments to find_global,
# instead, find_class can be overridden
find_global = None
def find_class(self, modulename, name):
if self.find_global is None:
return super(Unpickler, self).find_class(modulename, name)
return self.find_global(modulename, name)
else:
# Pickle support
from cPickle import Pickler, Unpickler, dump, dumps, loads
# String and Bytes IO
from ZODB._compat import BytesIO
if PY3:
import _thread as thread
if PY32:
from threading import _get_ident as get_ident
else:
from threading import get_ident
else:
import thread
from thread import get_ident
try:
from cStringIO import StringIO
except ImportError:
from io import StringIO
ZEO-5.0.0a0/src/ZEO/asyncio/README.rst
================================
asyncio-based networking for ZEO
================================
This package provides the networking interface for ZEO. It provides a
somewhat RPC-like API.
Notes
=====
Sending data immediately: asyncio vs asyncore
---------------------------------------------
The previous ZEO networking implementation used the ``asyncore`` library.
When writing with asyncore, writes were done only from the event loop.
This meant that when sending data, code would have to "wake up" the
event loop, typically after adding data to some sort of output buffer.
Asyncio takes an entirely different and saner approach. When an
application wants to send data, it writes to a transport. All
interactions with a transport (in a correct application) are from the
same thread, which is also the thread running any event loop.
Transports are always either idle or sending data. When idle, the
transport writes to the output socket immediately. If not all of the
data is sent, the transport buffers the remainder and becomes sending.
If a transport is sending, then we know that the socket isn't ready
for more data, so ``write`` can just buffer the data. There's no
point in waking up the event loop, because the socket will do so when
it's ready for more data.
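The idle/sending behavior described above can be sketched with
asyncio's flow-control callbacks. This is a minimal illustration under
stated assumptions: ``BufferingWriter`` and ``FakeTransport`` are
hypothetical names for this sketch, not ZEO's actual classes.

```python
class FakeTransport:
    """Stand-in for an asyncio transport (test double)."""
    def __init__(self):
        self.sent = []

    def write(self, message):
        self.sent.append(message)

class BufferingWriter:
    """Write through a transport, buffering while the socket is busy.

    The event loop calls pause_writing() when the socket's send buffer
    fills and resume_writing() when it drains; write() never blocks.
    """
    def __init__(self, transport):
        self.transport = transport
        self.paused = False
        self.buffer = []

    def write(self, message):
        if self.paused:
            # Transport is "sending"; the socket will tell us when it's
            # ready again, so just buffer -- no need to wake the loop.
            self.buffer.append(message)
        else:
            # Transport is idle; write to the socket immediately.
            self.transport.write(message)

    def pause_writing(self):
        self.paused = True

    def resume_writing(self):
        self.paused = False
        while self.buffer and not self.paused:
            self.transport.write(self.buffer.pop(0))

transport = FakeTransport()
writer = BufferingWriter(transport)
writer.write(b'a')          # idle: written immediately
writer.pause_writing()      # socket full: start buffering
writer.write(b'b')
writer.write(b'c')
writer.resume_writing()     # socket drained: flush the buffer
```

This mirrors the structure of ``connection_made``, ``pause_writing``,
and ``resume_writing`` in ``base.py``.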
An exception to this pattern occurs when operations cross threads, as
happens for most client operations and when a transaction commits on
the server and results have to be sent to other clients. In these
cases, a ``call_soon_threadsafe`` method is used, which queues an
operation and has to wake up an event loop to process it.
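For example, a thread that is not running the loop can hand work to it
with ``call_soon_threadsafe``, which both queues the callback and wakes
the loop. A generic asyncio sketch (the names here are illustrative,
not ZEO code):

```python
import asyncio
import threading

# Run an event loop in a dedicated thread, as ZEO's client does.
loop = asyncio.new_event_loop()
loop_thread = threading.Thread(target=loop.run_forever)
loop_thread.start()

done = threading.Event()
seen = []

def operation():
    # Runs in the loop's thread, even though it was queued from another.
    seen.append(threading.get_ident())
    done.set()

# Queue the operation and wake the (possibly sleeping) event loop.
loop.call_soon_threadsafe(operation)
done.wait(5)

loop.call_soon_threadsafe(loop.stop)
loop_thread.join(5)
loop.close()
```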
Server threading
----------------
There are currently two server implementations, an implementation that
uses a thread per client (and a thread to listen for connections),
``ZEO.asyncio.mtacceptor.Acceptor``, and an implementation that uses a
single networking thread, ``ZEO.asyncio.server.Acceptor``. The
implementation is selected by changing an import in
``ZEO.StorageServer``. The currently-used implementation is
``ZEO.asyncio.server.Acceptor``, although this sentence is likely to
rot, so check the import to be sure. (Maybe this should be configurable.)
ZEO switched to a multi-threaded implementation several years ago
because it was found to improve performance for large databases using
magnetic disks. Because client threads are always working on behalf of
a single client, there's not really an issue with making blocking
calls, such as executing slow I/O operations.
Initially, the asyncio-based implementation used a multi-threaded
server. A simple thread accepted connections and handed accepted
sockets to ``create_connection``. This became a problem when SSL was
added because ``create_connection`` sets up SSL connections as client
connections, and doesn't provide an option to create server
connections.
In response, I created an ``asyncio.Server``-based implementation.
This required using a single thread. This was a pretty trivial
change; however, it led to the tests becoming unstable to the point
that it was impossible to run all tests without some failing. One
test was broken due to an ``asyncio.Server`` bug.
It's unclear whether the test
instability is due to ``asyncio.Server`` problems or due to latent
test (or ZEO) bugs, but even after beating the tests mostly into
submission, test failures are more likely when using
``asyncio.Server``. Beatings will continue.
While fighting test failures using ``asyncio.Server``, the
multi-threaded implementation was updated to use a monkey patch to
allow it to create SSL server connections. Aside from the real risk of a
monkey patch, this works very well.
Both implementations seem to perform about the same.
ZEO-5.0.0a0/src/ZEO/asyncio/__init__.py
#
ZEO-5.0.0a0/src/ZEO/asyncio/base.py
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
import logging
import socket
from struct import unpack
import sys
from .marshal import encoder
logger = logging.getLogger(__name__)
INET_FAMILIES = socket.AF_INET, socket.AF_INET6
class Protocol(asyncio.Protocol):
"""asyncio low-level ZEO base interface
"""
# All of the code in this class runs in a single dedicated
# thread. Thus, we can mostly avoid worrying about interleaved
# operations.
# One place where special care was required was in cache setup on
# connect. See finish connect below.
transport = protocol_version = None
def __init__(self, loop, addr):
self.loop = loop
self.addr = addr
self.input = [] # Input buffer when assembling messages
self.output = [] # Output buffer when paused
self.paused = [] # Paused indicator, mutable to avoid attr lookup
# Handle the first message, the protocol handshake, differently
self.message_received = self.first_message_received
def __repr__(self):
return self.name
closed = False
def close(self):
if not self.closed:
self.closed = True
if self.transport is not None:
self.transport.close()
def connection_made(self, transport):
logger.info("Connected %s", self)
if sys.version_info < (3, 6):
sock = transport.get_extra_info('socket')
if sock is not None and sock.family in INET_FAMILIES:
# See https://bugs.python.org/issue27456 :(
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
self.transport = transport
paused = self.paused
output = self.output
append = output.append
writelines = transport.writelines
from struct import pack
def write(message):
if paused:
append(message)
else:
writelines((pack(">I", len(message)), message))
self._write = write
def writeit(data):
# Note, don't worry about combining messages. Iters
# will be used with blobs, in which case, the individual
# messages will be big to begin with.
data = iter(data)
for message in data:
writelines((pack(">I", len(message)), message))
if paused:
append(data)
break
self._writeit = writeit
got = 0
want = 4
getting_size = True
def data_received(self, data):
# Low-level input handler collects data into sized messages.
        # Note that the logic below assumes that when new data pushes
# us over what we want, we process it in one call until we
# need more, because we assume that excess data is all in the
# last item of self.input. This is why the exception handling
# in the while loop is critical. Without it, an exception
# might cause us to exit before processing all of the data we
        # should, which then causes the logic to be broken in
# subsequent calls.
self.got += len(data)
self.input.append(data)
while self.got >= self.want:
try:
extra = self.got - self.want
if extra == 0:
collected = b''.join(self.input)
self.input = []
else:
input = self.input
self.input = [input[-1][-extra:]]
input[-1] = input[-1][:-extra]
collected = b''.join(input)
self.got = extra
if self.getting_size:
                    # we were receiving the message size
assert self.want == 4
self.want = unpack(">I", collected)[0]
self.getting_size = False
else:
self.want = 4
self.getting_size = True
self.message_received(collected)
except Exception:
logger.exception("data_received %s %s %s",
self.want, self.got, self.getting_size)
def first_message_received(self, protocol_version):
# Handler for first/handshake message, set up in __init__
del self.message_received # use default handler from here on
self.encode = encoder()
self.finish_connect(protocol_version)
def call_async(self, method, args):
self._write(self.encode(0, True, method, args))
def call_async_iter(self, it):
self._writeit(self.encode(0, True, method, args)
for method, args in it)
def pause_writing(self):
self.paused.append(1)
def resume_writing(self):
paused = self.paused
del paused[:]
output = self.output
writelines = self.transport.writelines
from struct import pack
while output and not paused:
message = output.pop(0)
if isinstance(message, bytes):
writelines((pack(">I", len(message)), message))
else:
data = message
for message in data:
writelines((pack(">I", len(message)), message))
if paused: # paused again. Put iter back.
output.insert(0, data)
break
def get_peername(self):
return self.transport.get_extra_info('peername')
ZEO-5.0.0a0/src/ZEO/asyncio/client.py
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
from ZEO.Exceptions import ClientDisconnected
from ZODB.ConflictResolution import ResolvedSerial
import concurrent.futures
import logging
import random
import threading
import ZODB.event
import ZODB.POSException
import ZEO.Exceptions
import ZEO.interfaces
from . import base
from .marshal import decode
logger = logging.getLogger(__name__)
Fallback = object()
local_random = random.Random() # use separate generator to facilitate tests
class Protocol(base.Protocol):
"""asyncio low-level ZEO client interface
"""
# All of the code in this class runs in a single dedicated
# thread. Thus, we can mostly avoid worrying about interleaved
# operations.
# One place where special care was required was in cache setup on
# connect. See finish connect below.
protocols = b'Z309', b'Z310', b'Z3101', b'Z4', b'Z5'
def __init__(self, loop,
addr, client, storage_key, read_only, connect_poll=1,
heartbeat_interval=60, ssl=None, ssl_server_hostname=None):
"""Create a client interface
addr is either a host,port tuple or a string file name.
client is a ClientStorage. It must be thread safe.
cache is a ZEO.interfaces.IClientCache.
"""
super(Protocol, self).__init__(loop, addr)
self.storage_key = storage_key
self.read_only = read_only
self.name = "%s(%r, %r, %r)" % (
self.__class__.__name__, addr, storage_key, read_only)
self.client = client
self.connect_poll = connect_poll
self.heartbeat_interval = heartbeat_interval
self.futures = {} # { message_id -> future }
self.ssl = ssl
self.ssl_server_hostname = ssl_server_hostname
self.connect()
def close(self):
if not self.closed:
self.closed = True
self._connecting.cancel()
if self.transport is not None:
self.transport.close()
for future in self.pop_futures():
future.set_exception(ClientDisconnected("Closed"))
def pop_futures(self):
# Remove and return futures from self.futures. The caller
# will finalize them in some way and callbacks may modify
# self.futures.
futures = list(self.futures.values())
self.futures.clear()
return futures
def protocol_factory(self):
return self
def connect(self):
if isinstance(self.addr, tuple):
host, port = self.addr
cr = self.loop.create_connection(
self.protocol_factory, host or 'localhost', port,
ssl=self.ssl, server_hostname=self.ssl_server_hostname)
else:
cr = self.loop.create_unix_connection(
self.protocol_factory, self.addr, ssl=self.ssl)
        self._connecting = cr = asyncio.ensure_future(cr, loop=self.loop)
@cr.add_done_callback
def done_connecting(future):
if future.exception() is not None:
logger.info("Connection to %r failed, retrying, %s",
self.addr, future.exception())
# keep trying
if not self.closed:
self.loop.call_later(
self.connect_poll + local_random.random(),
self.connect,
)
def connection_made(self, transport):
super(Protocol, self).connection_made(transport)
self.heartbeat(write=False)
def connection_lost(self, exc):
logger.debug('connection_lost %r', exc)
self.heartbeat_handle.cancel()
if self.closed:
for f in self.pop_futures():
f.cancel()
else:
# We have to be careful processing the futures, because
            # exception callbacks might modify them.
for f in self.pop_futures():
f.set_exception(ClientDisconnected(exc or 'connection lost'))
self.closed = True
self.client.disconnected(self)
def finish_connect(self, protocol_version):
# We use a promise model rather than coroutines here because
# for the most part, this class is reactive and coroutines
        # aren't a good model of its activities. During
# initialization, however, we use promises to provide an
# imperative flow.
# The promise(/future) implementation we use differs from
# asyncio.Future in that callbacks are called immediately,
# rather than using the loops call_soon. We want to avoid a
# race between invalidations and cache initialization. In
# particular, after getting a response from lastTransaction or
# getInvalidations, we want to make sure we set the cache's
# lastTid before processing (and possibly missing) subsequent
# invalidations.
self.protocol_version = min(protocol_version, self.protocols[-1])
if self.protocol_version not in self.protocols:
self.client.register_failed(
self, ZEO.Exceptions.ProtocolError(protocol_version))
return
self._write(self.protocol_version)
register = self.promise(
'register', self.storage_key,
self.read_only if self.read_only is not Fallback else False,
)
if self.read_only is not Fallback:
# Get lastTransaction in flight right away to make
# successful connection quicker, but only if we're not
# doing read-only fallback. If we might need to retry, we
# can't send lastTransaction because if the registration
# fails, it will be seen as an invalid message and the
            # connection will close. :( It would be a lot better if
            # register returned the last transaction (and info while
# it's at it).
lastTransaction = self.promise('lastTransaction')
else:
lastTransaction = None # to make python happy
@register
def registered(_):
if self.read_only is Fallback:
self.read_only = False
r_lastTransaction = self.promise('lastTransaction')
else:
r_lastTransaction = lastTransaction
self.client.registered(self, r_lastTransaction)
@register.catch
def register_failed(exc):
if (isinstance(exc, ZODB.POSException.ReadOnlyError) and
self.read_only is Fallback):
# We tried a write connection, degrade to a read-only one
self.read_only = True
logger.info("%s write connection failed. Trying read-only",
self)
register = self.promise('register', self.storage_key, True)
# get lastTransaction in flight.
lastTransaction = self.promise('lastTransaction')
@register
def registered(_):
self.client.registered(self, lastTransaction)
@register.catch
def register_failed(exc):
self.client.register_failed(self, exc)
else:
self.client.register_failed(self, exc)
exception_type_type = type(Exception)
def message_received(self, data):
        msgid, async_, name, args = decode(data)
        if name == '.reply':
            future = self.futures.pop(msgid)
            if (isinstance(args, tuple) and len(args) > 1 and
                type(args[0]) == self.exception_type_type and
                issubclass(args[0], Exception)
                ):
                if not issubclass(
                    args[0], (
                        ZODB.POSException.POSKeyError,
                        ZODB.POSException.ConflictError,)
                    ):
                    logger.error("%s from server: %s.%s:%s",
                                 self.name,
                                 args[0].__module__,
                                 args[0].__name__,
                                 args[1])
                future.set_exception(args[1])
            else:
                future.set_result(args)
        else:
            assert async_  # clients only get async calls
if name in self.client_methods:
getattr(self.client, name)(*args)
else:
raise AttributeError(name)
message_id = 0
def call(self, future, method, args):
self.message_id += 1
self.futures[self.message_id] = future
self._write(self.encode(self.message_id, False, method, args))
return future
def promise(self, method, *args):
return self.call(Promise(), method, args)
# Methods called by the server.
# WARNING WARNING we can't call methods that call back to us
# syncronously, as that would lead to DEADLOCK!
client_methods = (
'invalidateTransaction', 'serialnos', 'info',
'receiveBlobStart', 'receiveBlobChunk', 'receiveBlobStop',
# plus: notify_connected, notify_disconnected
)
client_delegated = client_methods[2:]
def heartbeat(self, write=True):
if write:
self._write(b'(J\xff\xff\xff\xffK\x00U\x06.replyNt.')
self.heartbeat_handle = self.loop.call_later(
self.heartbeat_interval, self.heartbeat)
class Client(object):
"""asyncio low-level ZEO client interface
"""
# All of the code in this class runs in a single dedicated
# thread. Thus, we can mostly avoid worrying about interleaved
# operations.
# One place where special care was required was in cache setup on
# connect.
protocol = None
ready = None # Tri-value: None=Never connected, True=connected,
# False=Disconnected
def __init__(self, loop,
addrs, client, cache, storage_key, read_only, connect_poll,
register_failed_poll=9,
ssl=None, ssl_server_hostname=None):
"""Create a client interface
addr is either a host,port tuple or a string file name.
client is a ClientStorage. It must be thread safe.
cache is a ZEO.interfaces.IClientCache.
"""
self.loop = loop
self.addrs = addrs
self.storage_key = storage_key
self.read_only = read_only
self.connect_poll = connect_poll
self.register_failed_poll = register_failed_poll
self.client = client
self.ssl = ssl
self.ssl_server_hostname = ssl_server_hostname
for name in Protocol.client_delegated:
setattr(self, name, getattr(client, name))
self.cache = cache
self.protocols = ()
self.disconnected(None)
def new_addrs(self, addrs):
self.addrs = addrs
if self.trying_to_connect():
self.disconnected(None)
def trying_to_connect(self):
"""Return whether we're trying to connect
Either because we're disconnected, or because we're connected
read-only, but want a writable connection if we can get one.
"""
return (not self.ready or
self.is_read_only() and self.read_only is Fallback)
closed = False
def close(self):
if not self.closed:
self.closed = True
self.ready = False
if self.protocol is not None:
self.protocol.close()
self.cache.close()
self._clear_protocols()
def _clear_protocols(self, protocol=None):
for p in self.protocols:
if p is not protocol:
p.close()
self.protocols = ()
def disconnected(self, protocol=None):
logger.debug('disconnected %r %r', self, protocol)
if protocol is None or protocol is self.protocol:
if protocol is self.protocol and protocol is not None:
self.client.notify_disconnected()
if self.ready:
self.ready = False
self.connected = concurrent.futures.Future()
self.protocol = None
self._clear_protocols()
if all(p.closed for p in self.protocols):
self.try_connecting()
def upgrade(self, protocol):
self.ready = False
self.connected = concurrent.futures.Future()
self.protocol.close()
self.protocol = protocol
self._clear_protocols(protocol)
def try_connecting(self):
logger.debug('try_connecting')
if not self.closed:
self.protocols = [
Protocol(self.loop, addr, self,
self.storage_key, self.read_only, self.connect_poll,
ssl=self.ssl,
ssl_server_hostname=self.ssl_server_hostname,
)
for addr in self.addrs
]
def registered(self, protocol, last_transaction_promise):
if self.protocol is None:
self.protocol = protocol
if not (self.read_only is Fallback and protocol.read_only):
# We're happy with this protocol. Tell the others to
# stop trying.
self._clear_protocols(protocol)
self.verify(last_transaction_promise)
elif (self.read_only is Fallback and not protocol.read_only and
self.protocol.read_only):
self.upgrade(protocol)
self.verify(last_transaction_promise)
else:
protocol.close() # too late, we went home with another
def register_failed(self, protocol, exc):
# A protocol failed registration. That's weird. If they've all
# failed, we should try again in a bit.
if protocol is not self:
protocol.close()
logger.exception("Registration or cache validation failed, %s", exc)
if (self.protocol is None and not
any(not p.closed for p in self.protocols)
):
self.loop.call_later(
self.register_failed_poll + local_random.random(),
self.try_connecting)
verify_result = None # for tests
def verify(self, last_transaction_promise):
protocol = self.protocol
@last_transaction_promise
def finish_verify(server_tid):
cache = self.cache
if cache:
cache_tid = cache.getLastTid()
if not cache_tid:
self.verify_result = "Non-empty cache w/o tid"
logger.error("Non-empty cache w/o tid -- clearing")
cache.clear()
self.client.invalidateCache()
self.finished_verify(server_tid)
elif cache_tid > server_tid:
self.verify_result = "Cache newer than server"
logger.critical(
'Client has seen newer transactions than server!')
raise AssertionError("Server behind client, %r < %r, %s",
server_tid, cache_tid, protocol)
elif cache_tid == server_tid:
self.verify_result = "Cache up to date"
self.finished_verify(server_tid)
else:
@protocol.promise('getInvalidations', cache_tid)
def verify_invalidations(vdata):
if vdata:
self.verify_result = "quick verification"
tid, oids = vdata
for oid in oids:
cache.invalidate(oid, None)
self.client.invalidateTransaction(tid, oids)
return tid
else:
# cache is too old
self.verify_result = "cache too old, clearing"
try:
ZODB.event.notify(
ZEO.interfaces.StaleCache(self.client))
except Exception:
logger.exception("sending StaleCache event")
logger.critical(
"%s dropping stale cache",
getattr(self.client, '__name__', ''),
)
self.cache.clear()
self.client.invalidateCache()
return server_tid
verify_invalidations(
self.finished_verify,
self.connected.set_exception,
)
else:
self.verify_result = "empty cache"
self.finished_verify(server_tid)
@finish_verify.catch
def verify_failed(exc):
del self.protocol
self.register_failed(protocol, exc)
def finished_verify(self, server_tid):
# The cache is validated and the last tid we got from the server.
# Set ready so we apply any invalidations that follow.
# We've been ignoring them up to this point.
self.cache.setLastTid(server_tid)
self.ready = True
@self.protocol.promise('get_info')
def got_info(info):
self.client.notify_connected(self, info)
self.connected.set_result(None)
@got_info.catch
def failed_info(exc):
self.register_failed(self, exc)
def get_peername(self):
return self.protocol.get_peername()
def call_async_threadsafe(self, future, method, args):
if self.ready:
self.protocol.call_async(method, args)
future.set_result(None)
else:
future.set_exception(ClientDisconnected())
def call_async_from_same_thread(self, method, *args):
return self.protocol.call_async(method, args)
def call_async_iter_threadsafe(self, future, it):
if self.ready:
self.protocol.call_async_iter(it)
future.set_result(None)
else:
future.set_exception(ClientDisconnected())
def _when_ready(self, func, result_future, *args):
if self.ready is None:
# We started without waiting for a connection. (prob tests :( )
result_future.set_exception(ClientDisconnected("never connected"))
else:
@self.connected.add_done_callback
def done(future):
e = future.exception()
if e is not None:
                result_future.set_exception(e)
else:
if self.ready:
func(result_future, *args)
else:
self._when_ready(func, result_future, *args)
def call_threadsafe(self, future, method, args):
if self.ready:
self.protocol.call(future, method, args)
else:
self._when_ready(self.call_threadsafe, future, method, args)
# Special methods because they update the cache.
def load_before_threadsafe(self, future, oid, tid):
data = self.cache.loadBefore(oid, tid)
if data is not None:
future.set_result(data)
elif self.ready:
@self.protocol.promise('loadBefore', oid, tid)
def load_before(data):
future.set_result(data)
if data:
data, start, end = data
self.cache.store(oid, start, end, data)
load_before.catch(future.set_exception)
else:
self._when_ready(self.load_before_threadsafe, future, oid, tid)
def tpc_finish_threadsafe(self, future, tid, updates, f):
if self.ready:
@self.protocol.promise('tpc_finish', tid)
def committed(tid):
try:
cache = self.cache
for oid, data, resolved in updates:
cache.invalidate(oid, tid)
if data and not resolved:
cache.store(oid, tid, None, data)
cache.setLastTid(tid)
except Exception as exc:
future.set_exception(exc)
# At this point, our cache is in an inconsistent
# state. We need to reconnect in hopes of
# recovering to a consistent state.
self.protocol.close()
self.disconnected(self.protocol)
else:
f(tid)
future.set_result(tid)
committed.catch(future.set_exception)
else:
future.set_exception(ClientDisconnected())
def close_threadsafe(self, future):
self.close()
future.set_result(None)
def invalidateTransaction(self, tid, oids):
if self.ready:
for oid in oids:
self.cache.invalidate(oid, tid)
self.client.invalidateTransaction(tid, oids)
self.cache.setLastTid(tid)
def serialnos(self, serials):
# Before delegating, check for errors (likely ConflictErrors)
# and invalidate the oids they're associated with. In the
# past, this was done by the client, but now we control the
# cache and this is our last chance, as the client won't call
# back into us when there's an error.
for oid, serial in serials:
if isinstance(serial, Exception):
self.cache.invalidate(oid, None)
self.client.serialnos(serials)
@property
def protocol_version(self):
return self.protocol.protocol_version
def is_read_only(self):
try:
protocol = self.protocol
except AttributeError:
return self.read_only
else:
if protocol is None:
return self.read_only
else:
return protocol.read_only
class ClientRunner(object):
def set_options(self, addrs, wrapper, cache, storage_key, read_only,
timeout=30, disconnect_poll=1,
**kwargs):
self.__args = (addrs, wrapper, cache, storage_key, read_only,
disconnect_poll)
self.__kwargs = kwargs
self.timeout = timeout
def setup_delegation(self, loop):
self.loop = loop
self.client = Client(loop, *self.__args, **self.__kwargs)
self.call_threadsafe = self.client.call_threadsafe
self.call_async_threadsafe = self.client.call_async_threadsafe
from concurrent.futures import Future
call_soon_threadsafe = loop.call_soon_threadsafe
def call(meth, *args, **kw):
timeout = kw.pop('timeout', None)
assert not kw
result = Future()
call_soon_threadsafe(meth, result, *args)
return self.wait_for_result(result, timeout)
self.__call = call
def wait_for_result(self, future, timeout):
try:
return future.result(self.timeout if timeout is None else timeout)
except concurrent.futures.TimeoutError:
if not self.client.ready:
raise ClientDisconnected("timed out waiting for connection")
else:
raise
def call(self, method, *args, **kw):
return self.__call(self.call_threadsafe, method, args, **kw)
def call_future(self, method, *args):
# for tests
result = concurrent.futures.Future()
self.loop.call_soon_threadsafe(
self.call_threadsafe, result, method, args)
return result
def async(self, method, *args):
return self.__call(self.call_async_threadsafe, method, args)
def async_iter(self, it):
return self.__call(self.client.call_async_iter_threadsafe, it)
def load_before(self, oid, tid):
return self.__call(self.client.load_before_threadsafe, oid, tid)
def tpc_finish(self, tid, updates, f):
return self.__call(self.client.tpc_finish_threadsafe, tid, updates, f)
def is_connected(self):
return self.client.ready
def is_read_only(self):
try:
protocol = self.client.protocol
except AttributeError:
return True
else:
if protocol is None:
return True
else:
return protocol.read_only
def close(self):
self.__call(self.client.close_threadsafe)
# Short circuit from now on. We're closed.
def call_closed(*a, **k):
raise ClientDisconnected('closed')
self.__call = call_closed
def apply_threadsafe(self, future, func, *args):
try:
future.set_result(func(*args))
except Exception as exc:
future.set_exception(exc)
def new_addrs(self, addrs):
# This usually doesn't have an immediate effect, since the
# addrs aren't used until the client disconnects.
self.__call(self.apply_threadsafe, self.client.new_addrs, addrs)
def wait(self, timeout=None):
if timeout is None:
timeout = self.timeout
self.wait_for_result(self.client.connected, timeout)
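# Editor's note: a minimal, self-contained sketch (not part of ZEO) of the
# thread-safe call pattern ClientRunner uses above: wrap the operation in a
# concurrent.futures.Future, schedule it on the loop's thread with
# call_soon_threadsafe, and block on the future from the calling thread.
def _example_threadsafe_call():  # hypothetical helper, for illustration only
    import asyncio
    import threading
    from concurrent.futures import Future

    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()

    f = Future()

    def op(future):
        # Runs in the loop's thread; completes the future for the caller.
        future.set_result(6 * 7)

    loop.call_soon_threadsafe(op, f)
    result = f.result(timeout=5)
    loop.call_soon_threadsafe(loop.stop)
    return result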
class ClientThread(ClientRunner):
"""Thread wrapper for client interface
A ClientProtocol is run in a dedicated thread.
Calls to it are made in a thread-safe fashion.
"""
def __init__(self, addrs, client, cache,
storage_key='1', read_only=False, timeout=30,
disconnect_poll=1, ssl=None, ssl_server_hostname=None):
self.set_options(addrs, client, cache, storage_key, read_only,
timeout, disconnect_poll,
ssl=ssl, ssl_server_hostname=ssl_server_hostname)
self.thread = threading.Thread(
target=self.run,
name="%s zeo client networking thread" % client.__name__,
)
self.thread.setDaemon(True)
self.started = threading.Event()
self.thread.start()
self.started.wait()
if self.exception:
raise self.exception
exception = None
def run(self):
loop = None
try:
loop = asyncio.new_event_loop()
self.setup_delegation(loop)
self.started.set()
loop.run_forever()
except Exception as exc:
logger.exception("Client thread")
self.exception = exc
finally:
if not self.closed:
self.closed = True
try:
if self.client.ready:
self.client.ready = False
self.client.client.notify_disconnected()
except AttributeError:
pass
logger.critical("Client loop stopped unexpectedly")
if loop is not None:
loop.close()
logger.debug('Stopping client thread')
closed = False
def close(self):
if not self.closed:
self.closed = True
super(ClientThread, self).close()
self.loop.call_soon_threadsafe(self.loop.stop)
self.thread.join(9)
if self.exception:
raise self.exception
class Promise(object):
"""Lightweight future with a partial promise API.
These are lightweight because they call callbacks synchronously
rather than through an event loop, and because they only support
single callbacks.
"""
# Note that promises can only be completed after callbacks are
# set up, because they're used to make network requests.
# Requests are made by writing to a transport. Because we're used
# in a single-threaded protocol, we can't get a response and be
# completed if the callbacks are set in the same code that
# created the promise, which they are.
next = success_callback = error_callback = cancelled = None
def __call__(self, success_callback = None, error_callback = None):
"""Set the promise's success and error handlers and beget a new promise
The promise returned provides for promise chaining, providing
a sane imperative flow. Let's call this the "next" promise.
Any results or exceptions generated by the promise or its
callbacks are passed on to the next promise.
When the promise completes successfully, if a success callback
isn't set, then the next promise is completed with the
successful result. If a success callback is provided, it's
called. If the call succeeds, and the result is a promise,
then the result is called with the next promise's set_result
and set_exception methods, chaining the result and next
promise. If the result isn't a promise, then the next promise
is completed with it by calling set_result. If the success
callback fails, then its exception is passed to
next.set_exception.
If the promise completes with an error and the error callback
isn't set, then the exception is passed to the next promise's
set_exception. If an error handler is provided, it's called
and if it doesn't error, then the original exception is passed
to the next promise's set_exception. If the error handler
errors, then that exception is passed to the next promise's
set_exception.
"""
self.next = self.__class__()
self.success_callback = success_callback
self.error_callback = error_callback
return self.next
def cancel(self):
self.set_exception(concurrent.futures.CancelledError)
def catch(self, error_callback):
self.error_callback = error_callback
def set_exception(self, exc):
self._notify(None, exc)
def set_result(self, result):
self._notify(result, None)
def _notify(self, result, exc):
next = self.next
if exc is not None:
if self.error_callback is not None:
try:
result = self.error_callback(exc)
except Exception:
logger.exception("Exception handling error %s", exc)
if next is not None:
next.set_exception(exc)
else:
if next is not None:
next.set_result(result)
elif next is not None:
next.set_exception(exc)
else:
if self.success_callback is not None:
try:
result = self.success_callback(result)
except Exception as exc:
logger.exception("Exception in success callback")
if next is not None:
next.set_exception(exc)
else:
if next is not None:
if isinstance(result, Promise):
result(next.set_result, next.set_exception)
else:
next.set_result(result)
elif next is not None:
next.set_result(result)
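# Editor's note: a self-contained toy model (not the Promise class above) of
# the chaining semantics described in the docstring: each success callback's
# return value completes the next promise in the chain.
def _example_promise_chaining():  # illustration only
    class MiniPromise(object):
        next = success = None

        def __call__(self, success):
            self.next = MiniPromise()
            self.success = success
            return self.next

        def set_result(self, result):
            if self.success is not None:
                result = self.success(result)
            if self.next is not None:
                self.next.set_result(result)

    log = []
    p = MiniPromise()
    p(lambda v: v + 1)(log.append)  # chain two callbacks
    p.set_result(41)                # 41 -> 42 -> appended to log
    return log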
# ZEO-5.0.0a0/src/ZEO/asyncio/marshal.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Support for marshaling ZEO messages
Not to be confused with marshaling objects in ZODB.
We currently use pickle. In the future, we may use a
Python-independent format, or possibly a minimal pickle subset.
"""
import logging
from .._compat import Unpickler, Pickler, BytesIO, PY3, PYPY
from ..shortrepr import short_repr
logger = logging.getLogger(__name__)
def encoder():
"""Return a non-thread-safe encoder
"""
if PY3 or PYPY:
f = BytesIO()
getvalue = f.getvalue
seek = f.seek
truncate = f.truncate
pickler = Pickler(f, 3 if PY3 else 1)
pickler.fast = 1
dump = pickler.dump
def encode(*args):
seek(0)
truncate()
dump(args)
return getvalue()
else:
pickler = Pickler(1)
pickler.fast = 1
dump = pickler.dump
def encode(*args):
return dump(args, 2)
return encode
def encode(*args):
return encoder()(*args)
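# Editor's note: a self-contained approximation (stdlib pickle only, not the
# encoder above) of the wire format described in the module docstring: a
# pickled (message_id, async_flag, method, args) tuple, using protocol 3 as
# on Python 3.
def _example_encode_roundtrip():  # illustration only
    import pickle
    message = (1, False, 'register', ('TEST', False))
    wire = pickle.dumps(message, 3)
    return pickle.loads(wire)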
def decode(msg):
"""Decodes msg and returns its parts"""
unpickler = Unpickler(BytesIO(msg))
unpickler.find_global = find_global
try:
# PyPy, zodbpickle, the non-c-accelerated version
unpickler.find_class = find_global
except AttributeError:
pass
try:
return unpickler.load() # msgid, flags, name, args
except:
logger.error("can't decode message: %s" % short_repr(msg))
raise
def server_decode(msg):
"""Decodes msg and returns its parts"""
unpickler = Unpickler(BytesIO(msg))
unpickler.find_global = server_find_global
try:
# PyPy, zodbpickle, the non-c-accelerated version
unpickler.find_class = server_find_global
except AttributeError:
pass
try:
return unpickler.load() # msgid, flags, name, args
except:
logger.error("can't decode message: %s" % short_repr(msg))
raise
_globals = globals()
_silly = ('__doc__',)
exception_type_type = type(Exception)
def find_global(module, name):
"""Helper for message unpickler"""
try:
m = __import__(module, _globals, _globals, _silly)
except ImportError as msg:
raise ImportError("import error %s: %s" % (module, msg))
try:
r = getattr(m, name)
except AttributeError:
raise ImportError("module %s has no global %s" % (module, name))
safe = getattr(r, '__no_side_effects__', 0)
if safe:
return r
# TODO: is there a better way to do this?
if type(r) == exception_type_type and issubclass(r, Exception):
return r
raise ImportError("Unsafe global: %s.%s" % (module, name))
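# Editor's note: a self-contained sketch of the policy find_global implements
# above: only globals flagged __no_side_effects__ or exception classes may be
# resolved from incoming messages.
def _example_safe_global(module, name):  # illustration only
    import importlib
    r = getattr(importlib.import_module(module), name)
    if getattr(r, '__no_side_effects__', 0):
        return r
    if isinstance(r, type) and issubclass(r, Exception):
        return r
    raise ImportError("Unsafe global: %s.%s" % (module, name))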
def server_find_global(module, name):
"""Helper for message unpickler"""
try:
if module != 'ZopeUndo.Prefix':
raise ImportError
m = __import__(module, _globals, _globals, _silly)
except ImportError as msg:
raise ImportError("import error %s: %s" % (module, msg))
try:
r = getattr(m, name)
except AttributeError:
raise ImportError("module %s has no global %s" % (module, name))
return r
# ZEO-5.0.0a0/src/ZEO/asyncio/mtacceptor.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Multi-threaded server connection acceptor
Each connection is run in its own thread. Testing several years ago
suggested that this was a win, but ZODB shootout and other
lower-level tests suggest otherwise. It's really unclear, which is
why we're keeping this around for now.
Asyncio doesn't let you accept connections in one thread and handle
them in another. To get around this, we have a listener implemented
using asyncore, but when we get a connection, we hand the socket to
asyncio. This worked well until we added SSL support. (Even then, it
worked on Mac OS X for some reason.)
SSL + non-blocking sockets requires special care, which asyncio
provides. Unfortunately, create_connection assumes it's creating a
client connection. It would be easy to fix this,
http://bugs.python.org/issue27392, but it's hard to justify the fix to
get it accepted, so we won't bother for now. This currently uses a
horrible monkey patch to work with SSL.
To use this module, replace::
from .asyncio.server import Acceptor
with::
from .asyncio.mtacceptor import Acceptor
in ZEO.StorageServer.
"""
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
import asyncore
import socket
import threading
import time
from .server import ServerProtocol
# _has_dualstack: True if the dual-stack sockets are supported
try:
# Check whether IPv6 sockets can be created
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
except (socket.error, AttributeError):
_has_dualstack = False
else:
# Check whether enabling dualstack (disabling v6only) works
try:
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, False)
except (socket.error, AttributeError):
_has_dualstack = False
else:
_has_dualstack = True
s.close()
del s
import logging
logger = logging.getLogger(__name__)
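# Editor's note: a self-contained sketch (Python 3 asyncio assumed, not part
# of ZEO) of the handoff described in the module docstring: a socket accepted
# elsewhere is passed to loop.create_connection(..., sock=sock), and asyncio
# drives it from then on.
def _example_socket_handoff():  # illustration only
    import asyncio
    import socket as socket_module

    loop = asyncio.new_event_loop()
    a, b = socket_module.socketpair()
    received = []

    class Pong(asyncio.Protocol):
        def connection_made(self, transport):
            self.transport = transport

        def data_received(self, data):
            self.transport.write(data)  # echo back

    class Ping(asyncio.Protocol):
        def connection_made(self, transport):
            transport.write(b'ping')

        def data_received(self, data):
            received.append(data)
            loop.stop()

    # Hand both pre-existing sockets to asyncio:
    loop.run_until_complete(loop.create_connection(Pong, sock=a))
    loop.run_until_complete(loop.create_connection(Ping, sock=b))
    loop.run_forever()
    loop.close()
    return received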
class Acceptor(asyncore.dispatcher):
"""A server that accepts incoming RPC connections
And creates a separate thread for each.
"""
def __init__(self, storage_server, addr, ssl):
self.storage_server = storage_server
self.addr = addr
self.__socket_map = {}
asyncore.dispatcher.__init__(self, map=self.__socket_map)
self.ssl_context = ssl
self._open_socket()
def _open_socket(self):
addr = self.addr
if type(addr) == tuple:
if addr[0] == '' and _has_dualstack:
# Wildcard listen on all interfaces, both IPv4 and
# IPv6 if possible
self.create_socket(socket.AF_INET6, socket.SOCK_STREAM)
self.socket.setsockopt(
socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, False)
elif ':' in addr[0]:
self.create_socket(socket.AF_INET6, socket.SOCK_STREAM)
if _has_dualstack:
# On Linux, IPV6_V6ONLY is off by default.
# If the user explicitly asked for IPv6, don't bind to IPv4
self.socket.setsockopt(
socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, True)
else:
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
else:
self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.set_reuse_addr()
for i in range(25):
try:
self.bind(addr)
except Exception as exc:
logger.info("bind on %s failed; attempt %s, waiting", addr, i)
if i == 24:
raise
else:
time.sleep(5)
except:
logger.exception('binding')
raise
else:
break
if isinstance(addr, tuple) and addr[1] == 0:
self.addr = addr = self.socket.getsockname()
logger.info("listening on %s", str(addr))
self.listen(5)
def writable(self):
return 0
def readable(self):
return 1
def handle_accept(self):
try:
sock, addr = self.accept()
except socket.error as msg:
logger.info("accept failed: %s", msg)
return
# We could short-circuit the attempt below in some edge cases
# and avoid a log message by checking for addr being None.
# Unfortunately, our test for the code below,
# quick_close_doesnt_kill_server, causes addr to be None and
# we'd have to write a test for the non-None case, which is
# *even* harder to provoke. :/ So we'll leave things as they
# are for now.
# It might be better to check whether the socket has been
# closed, but I don't see a way to do that. :(
# Drop flow-info from IPv6 addresses
if addr: # Sometimes None on Mac. See above.
addr = addr[:2]
try:
logger.debug("new connection %s" % (addr,))
def run():
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
zs = self.storage_server.create_client_handler()
protocol = ServerProtocol(loop, self.addr, zs)
protocol.stop = loop.stop
if self.ssl_context is None:
cr = loop.create_connection((lambda : protocol), sock=sock)
else:
#######################################################
# XXX See http://bugs.python.org/issue27392 :(
_make_ssl_transport = loop._make_ssl_transport
def make_ssl_transport(*a, **kw):
kw['server_side'] = True
return _make_ssl_transport(*a, **kw)
loop._make_ssl_transport = make_ssl_transport
#
#######################################################
cr = loop.create_connection(
(lambda : protocol), sock=sock,
ssl=self.ssl_context,
server_hostname='fu' # http://bugs.python.org/issue27391
)
asyncio.async(cr, loop=loop)
loop.run_forever()
loop.close()
thread = threading.Thread(target=run, name='zeo_client_handler')
thread.setDaemon(True)
thread.start()
except Exception:
if sock.fileno() in self.__socket_map:
del self.__socket_map[sock.fileno()]
logger.exception("Error in handle_accept")
else:
logger.info("connect from %s", repr(addr))
def loop(self, timeout=30.0):
try:
asyncore.loop(map=self.__socket_map, timeout=timeout)
except Exception:
if not self.__closed:
raise # Unexpected exc
logger.debug('acceptor %s loop stopped', self.addr)
__closed = False
def close(self):
if not self.__closed:
self.__closed = True
asyncore.dispatcher.close(self)
logger.debug("Closed accepter, %s", len(self.__socket_map))
# ZEO-5.0.0a0/src/ZEO/asyncio/server.py
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
import json
import logging
import os
import random
import threading
import ZODB.POSException
logger = logging.getLogger(__name__)
from ..shortrepr import short_repr
from . import base
from .marshal import server_decode
class ServerProtocol(base.Protocol):
"""asyncio low-level ZEO server interface
"""
protocols = (b'Z5', )
name = 'server protocol'
methods = set(('register', ))
unlogged_exception_types = (
ZODB.POSException.POSKeyError,
)
def __init__(self, loop, addr, zeo_storage):
"""Create a server's client interface
"""
super(ServerProtocol, self).__init__(loop, addr)
self.zeo_storage = zeo_storage
closed = False
def close(self):
logger.debug("Closing server protocol")
if not self.closed:
self.closed = True
if self.transport is not None:
self.transport.close()
connected = None # for tests
def connection_made(self, transport):
self.connected = True
super(ServerProtocol, self).connection_made(transport)
self._write(best_protocol_version)
def connection_lost(self, exc):
self.connected = False
if exc:
logger.error("Disconnected %s:%s", exc.__class__.__name__, exc)
self.zeo_storage.notify_disconnected()
self.stop()
def stop(self):
pass # Might be replaced when running a thread per client
def finish_connect(self, protocol_version):
if protocol_version == b'ruok':
self._write(json.dumps(self.zeo_storage.ruok()).encode("ascii"))
self.close()
else:
if protocol_version in self.protocols:
logger.info("received handshake %r" %
str(protocol_version.decode('ascii')))
self.protocol_version = protocol_version
self.zeo_storage.notify_connected(self)
else:
logger.error("bad handshake %s" % short_repr(protocol_version))
self.close()
def call_soon_threadsafe(self, func, *args):
try:
self.loop.call_soon_threadsafe(func, *args)
except RuntimeError:
if self.connected:
logger.exception("call_soon_threadsafe failed while connected")
def message_received(self, message):
try:
message_id, async, name, args = server_decode(message)
except Exception:
logger.exception("Can't deserialize message")
self.close()
return
if message_id == -1:
return # keep-alive
if name not in self.methods:
logger.error('Invalid method, %r', name)
self.close()
return
try:
result = getattr(self.zeo_storage, name)(*args)
except Exception as exc:
if not isinstance(exc, self.unlogged_exception_types):
logger.exception(
"Bad %srequest, %r", 'async ' if async else '', name)
if async:
return self.close() # No way to recover/cry for help
else:
return self.send_error(message_id, exc)
if not async:
self.send_reply(message_id, result)
def send_reply(self, message_id, result, send_error=False):
try:
result = self.encode(message_id, 0, '.reply', result)
except Exception:
if isinstance(result, Delay):
result.set_sender(message_id, self)
return
else:
logger.exception("Unpicklable response %r", result)
if not send_error:
self.send_error(
message_id,
ValueError("Couldn't pickle response"),
True)
self._write(result)
def send_reply_threadsafe(self, message_id, result):
self.loop.call_soon_threadsafe(self.reply, message_id, result)
def send_error(self, message_id, exc, send_error=False):
"""Abstracting here so we can make this cleaner in the future
"""
self.send_reply(message_id, (exc.__class__, exc), send_error)
def async(self, method, *args):
self.call_async(method, args)
best_protocol_version = os.environ.get(
'ZEO_SERVER_PROTOCOL',
ServerProtocol.protocols[-1].decode('utf-8')).encode('utf-8')
assert best_protocol_version in ServerProtocol.protocols
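# Editor's note: a self-contained sketch (not ZEO's actual negotiation code)
# of the handshake implied above: the server announces its best protocol
# first, and a client answers with the announced version if it supports it,
# falling back to its own best otherwise. `supported` here is a hypothetical
# client's version list, oldest to newest.
def _example_pick_protocol(server_version, supported=(b'Z4', b'Z5')):
    # illustration only
    if server_version in supported:
        return server_version
    return supported[-1]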
def new_connection(loop, addr, socket, zeo_storage):
protocol = ServerProtocol(loop, addr, zeo_storage)
cr = loop.create_connection((lambda : protocol), sock=socket)
asyncio.async(cr, loop=loop)
class Delay(object):
"""Used to delay response to client for synchronous calls.
When a synchronous call is made and the original handler returns
without handling the call, it returns a Delay object that prevents
the mainloop from sending a response.
"""
msgid = protocol = sent = None
def set_sender(self, msgid, protocol):
self.msgid = msgid
self.protocol = protocol
def reply(self, obj):
self.sent = 'reply'
self.protocol.send_reply(self.msgid, obj)
def error(self, exc_info):
self.sent = 'error'
logger.error("Error raised in delayed method", exc_info=exc_info)
self.protocol.send_error(self.msgid, exc_info[1])
def __repr__(self):
return "%s[%s, %r, %r, %r]" % (
self.__class__.__name__, id(self),
self.msgid, self.protocol, self.sent)
def __reduce__(self):
raise TypeError("Can't pickle delays.")
class Result(Delay):
def __init__(self, *args):
self.args = args
def set_sender(self, msgid, protocol):
reply, callback = self.args
protocol.send_reply(msgid, reply)
callback()
class MTDelay(Delay):
def __init__(self):
self.ready = threading.Event()
def set_sender(self, *args):
Delay.set_sender(self, *args)
self.ready.set()
def reply(self, obj):
self.ready.wait()
self.protocol.call_soon_threadsafe(
self.protocol.send_reply, self.msgid, obj)
def error(self, exc_info):
self.ready.wait()
self.protocol.call_soon_threadsafe(Delay.error, self, exc_info)
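# Editor's note: a self-contained sketch of the pattern the Acceptor below
# relies on: create_server with port 0 binds an ephemeral port, and the real
# address is read back from the listening socket afterwards.
def _example_ephemeral_server():  # illustration only
    import asyncio

    loop = asyncio.new_event_loop()
    coro = loop.create_server(asyncio.Protocol, '127.0.0.1', 0)
    server = loop.run_until_complete(coro)
    host, port = server.sockets[0].getsockname()[:2]
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()
    return host, port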
class Acceptor(object):
def __init__(self, storage_server, addr, ssl):
self.storage_server = storage_server
self.addr = addr
self.ssl_context = ssl
self.event_loop = loop = asyncio.new_event_loop()
if isinstance(addr, tuple):
cr = loop.create_server(self.factory, addr[0], addr[1],
reuse_address=True, ssl=ssl)
else:
cr = loop.create_unix_server(self.factory, addr, ssl=ssl)
f = asyncio.async(cr, loop=loop)
server = loop.run_until_complete(f)
self.server = server
if isinstance(addr, tuple) and addr[1] == 0:
addrs = [s.getsockname() for s in server.sockets]
addrs = [a for a in addrs if len(a) == len(addr)]
if addrs:
self.addr = addrs[0]
else:
self.addr = server.sockets[0].getsockname()[:len(addr)]
logger.info("listening on %s", str(addr))
def factory(self):
try:
logger.debug("Accepted connection")
zs = self.storage_server.create_client_handler()
protocol = ServerProtocol(self.event_loop, self.addr, zs)
except Exception:
logger.exception("Failure in protocol factory")
return protocol
def loop(self, timeout=None):
self.event_loop.run_forever()
self.event_loop.close()
closed = False
def close(self):
if not self.closed:
self.closed = True
self.event_loop.call_soon_threadsafe(self._close)
def _close(self):
loop = self.event_loop
self.server.close()
f = asyncio.async(self.server.wait_closed(), loop=loop)
@f.add_done_callback
def server_closed(f):
# stop the loop when the server closes:
loop.call_soon(loop.stop)
def timeout():
logger.warning("Timed out closing asyncio.Server")
loop.call_soon(loop.stop)
# But if the server doesn't close in a second, stop the loop anyway.
loop.call_later(1, timeout)
# ZEO-5.0.0a0/src/ZEO/asyncio/testing.py
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
try:
ConnectionRefusedError
except NameError:
class ConnectionRefusedError(OSError):
pass
import pprint
class Loop(object):
protocol = transport = None
def __init__(self, addrs=(), debug=True):
self.addrs = addrs
self.get_debug = lambda : debug
self.connecting = {}
self.later = []
self.exceptions = []
def call_soon(self, func, *args):
func(*args)
def _connect(self, future, protocol_factory):
self.protocol = protocol = protocol_factory()
self.transport = transport = Transport(protocol)
protocol.connection_made(transport)
future.set_result((transport, protocol))
def connect_connecting(self, addr):
future, protocol_factory = self.connecting.pop(addr)
self._connect(future, protocol_factory)
def fail_connecting(self, addr):
future, protocol_factory = self.connecting.pop(addr)
if not future.cancelled():
future.set_exception(ConnectionRefusedError())
def create_connection(
self, protocol_factory, host=None, port=None, sock=None,
ssl=None, server_hostname=None
):
future = asyncio.Future(loop=self)
if sock is None:
addr = host, port
if addr in self.addrs:
self._connect(future, protocol_factory)
else:
self.connecting[addr] = future, protocol_factory
else:
self._connect(future, protocol_factory)
return future
def create_unix_connection(self, protocol_factory, path):
future = asyncio.Future(loop=self)
if path in self.addrs:
self._connect(future, protocol_factory)
else:
self.connecting[path] = future, protocol_factory
return future
def call_soon_threadsafe(self, func, *args):
func(*args)
return Handle()
def call_later(self, delay, func, *args):
handle = Handle()
self.later.append((delay, func, args, handle))
return handle
def call_exception_handler(self, context):
self.exceptions.append(context)
closed = False
def close(self):
self.closed = True
stopped = False
def stop(self):
self.stopped = True
class Handle(object):
cancelled = False
def cancel(self):
self.cancelled = True
class Transport(object):
capacity = 1 << 64
paused = False
extra = dict(peername='1.2.3.4', sockname=('127.0.0.1', 4200), socket=None)
def __init__(self, protocol):
self.data = []
self.protocol = protocol
def write(self, data):
self.data.append(data)
self.check_pause()
def writelines(self, lines):
self.data.extend(lines)
self.check_pause()
def check_pause(self):
if len(self.data) > self.capacity and not self.paused:
self.paused = True
self.protocol.pause_writing()
def pop(self, count=None):
if count:
r = self.data[:count]
del self.data[:count]
else:
r = self.data[:]
del self.data[:]
self.check_resume()
return r
def check_resume(self):
if len(self.data) < self.capacity and self.paused:
self.paused = False
self.protocol.resume_writing()
closed = False
def close(self):
self.closed = True
def get_extra_info(self, name):
return self.extra[name]
class AsyncRPC(object):
"""Adapt an asyncio API to an RPC to help hysterical tests
"""
def __init__(self, api):
self.api = api
def __getattr__(self, name):
return lambda *a, **kw: self.api.call(name, *a, **kw)
class ClientRunner(object):
def __init__(self, addr, client, cache, storage, read_only, timeout,
**kw):
self.addr = addr
self.client = client
self.cache = cache
self.storage = storage
self.read_only = read_only
self.timeout = timeout
for name in kw:
self.__dict__[name] = kw[name]
def start(self, wait=True):
pass
def call(self, method, *args, **kw):
return getattr(self, method)(*args)
async = async_iter = call
def wait(self, timeout=None):
pass
# ZEO-5.0.0a0/src/ZEO/asyncio/tests.py
from .._compat import PY3
if PY3:
import asyncio
else:
import trollius as asyncio
from zope.testing import setupstack
from concurrent.futures import Future
import mock
from ZODB.POSException import ReadOnlyError
import collections
import logging
import pdb
import struct
import unittest
from ..Exceptions import ClientDisconnected, ProtocolError
from ..ClientStorage import m64
from .testing import Loop
from .client import ClientRunner, Fallback
from .server import new_connection, best_protocol_version
from .marshal import encoder, decode
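# Editor's note: a self-contained sketch of the framing Base.unsized below
# undoes: each message travels as a 4-byte big-endian length prefix followed
# by the payload. This mirrors the `sized` helper the tests use, which is
# defined elsewhere.
def _example_sized(message):  # illustration only
    import struct
    return struct.pack(">I", len(message)) + message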
class Base(object):
def setUp(self):
super(Base, self).setUp()
self.encode = encoder()
def unsized(self, data, unpickle=False):
result = []
while data:
size, message = data[:2]
data = data[2:]
self.assertEqual(struct.unpack(">I", size)[0], len(message))
if unpickle:
message = decode(message)
result.append(message)
if len(result) == 1:
result = result[0]
return result
def parse(self, data):
return self.unsized(data, True)
target = None
def send(self, method, *args, **kw):
target = kw.pop('target', self.target)
called = kw.pop('called', True)
no_output = kw.pop('no_output', True)
self.assertFalse(kw)
self.loop.protocol.data_received(
sized(self.encode(0, True, method, args)))
if target is not None:
target = getattr(target, method)
if called:
target.assert_called_with(*args)
target.reset_mock()
else:
self.assertFalse(target.called)
if no_output:
self.assertFalse(self.loop.transport.pop())
def pop(self, count=None, parse=True):
return self.unsized(self.loop.transport.pop(count), parse)
class ClientTests(Base, setupstack.TestCase, ClientRunner):
def start(self,
addrs=(('127.0.0.1', 8200), ), loop_addrs=None,
read_only=False,
finish_start=False,
):
# To create a client, we need to specify an address, a client
# object and a cache.
wrapper = mock.Mock()
self.target = wrapper
cache = MemoryCache()
self.set_options(addrs, wrapper, cache, 'TEST', read_only, timeout=1)
# We can also provide an event loop. We'll use a testing loop
# so we don't have to actually make any network connection.
loop = Loop(addrs if loop_addrs is None else loop_addrs)
self.setup_delegation(loop)
self.assertFalse(wrapper.notify_disconnected.called)
protocol = loop.protocol
transport = loop.transport
if finish_start:
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.pop(2, False), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.respond(2, 'a'*8)
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
self.respond(3, dict(length=42))
return (wrapper, cache, self.loop, self.client, protocol, transport)
def respond(self, message_id, result):
self.loop.protocol.data_received(
sized(self.encode(message_id, False, '.reply', result)))
def wait_for_result(self, future, timeout):
return future
def testClientBasics(self):
# Here, we'll go through the basic usage of the asyncio ZEO
# network client. The client is responsible for the core
# functionality of a ZEO client storage. The client storage
# is largely just a wrapper around the asyncio client.
wrapper, cache, loop, client, protocol, transport = self.start()
self.assertFalse(wrapper.notify_disconnected.called)
# The client isn't connected until the server sends it some data.
self.assertFalse(client.connected.done() or transport.data)
# The server sends the client its protocol. In this case,
# it's a very high one. The client will send its highest that
# it can use.
protocol.data_received(sized(b'Z99999'))
# The client sends back a handshake, and registers the
# storage, and requests the last transaction.
self.assertEqual(self.pop(2, False), b'Z5')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
# Actually, the client isn't connected until it initializes its cache:
self.assertFalse(client.connected.done() or transport.data)
# If we try to make calls while the client is *initially*
# connecting, we get an error. This is because some dufus
# decided to create a client storage without waiting for it to
# connect.
f1 = self.call('foo', 1, 2)
self.assertTrue(isinstance(f1.exception(), ClientDisconnected))
# When the client is reconnecting, its ready flag is set to False and
# it queues calls:
client.ready = False
f1 = self.call('foo', 1, 2)
self.assertFalse(f1.done())
# If we try to make an async call, we get an immediate error:
f2 = self.async('bar', 3, 4)
self.assert_(isinstance(f2.exception(), ClientDisconnected))
# The wrapper object (ClientStorage) hasn't been notified:
self.assertFalse(wrapper.notify_connected.called)
# Let's respond to those first 2 calls:
self.respond(1, None)
self.respond(2, 'a'*8)
# After verification, the client requests info:
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
self.respond(3, dict(length=42))
# Now we're connected, the cache was initialized, and the
# queued message has been sent:
self.assert_(client.connected.done())
self.assertEqual(cache.getLastTid(), 'a'*8)
self.assertEqual(self.pop(), (4, False, 'foo', (1, 2)))
# The wrapper object (ClientStorage) has been notified:
wrapper.notify_connected.assert_called_with(client, {'length': 42})
self.respond(4, 42)
self.assertEqual(f1.result(), 42)
# Now we can make async calls:
f2 = self.async('bar', 3, 4)
self.assert_(f2.done() and f2.exception() is None)
self.assertEqual(self.pop(), (0, True, 'bar', (3, 4)))
# Loading objects gets special handling to leverage the cache.
loaded = self.load_before(b'1'*8, m64)
# The data wasn't in the cache, so we make a server call:
self.assertEqual(self.pop(), (5, False, 'loadBefore', (b'1'*8, m64)))
self.respond(5, (b'data', b'a'*8, None))
self.assertEqual(loaded.result(), (b'data', b'a'*8, None))
# If we make another request, it will be satisfied from the cache:
loaded = self.load_before(b'1'*8, m64)
self.assertEqual(loaded.result(), (b'data', b'a'*8, None))
self.assertFalse(transport.data)
# Let's send an invalidation:
self.send('invalidateTransaction', b'b'*8, [b'1'*8])
# Now, if we try to load current again, we'll make a server request.
loaded = self.load_before(b'1'*8, m64)
self.assertEqual(self.pop(), (6, False, 'loadBefore', (b'1'*8, m64)))
self.respond(6, (b'data2', b'b'*8, None))
self.assertEqual(loaded.result(), (b'data2', b'b'*8, None))
# Loading non-current data may also be satisfied from cache
loaded = self.load_before(b'1'*8, b'b'*8)
self.assertEqual(loaded.result(), (b'data', b'a'*8, b'b'*8))
self.assertFalse(transport.data)
loaded = self.load_before(b'1'*8, b'c'*8)
self.assertEqual(loaded.result(), (b'data2', b'b'*8, None))
self.assertFalse(transport.data)
loaded = self.load_before(b'1'*8, b'_'*8)
self.assertEqual(self.pop(),
(7, False, 'loadBefore', (b'1'*8, b'_'*8)))
self.respond(7, (b'data0', b'^'*8, b'_'*8))
self.assertEqual(loaded.result(), (b'data0', b'^'*8, b'_'*8))
# When committing transactions, we need to update the cache
# with committed data. To do this, we pass an (oid, data, resolved)
# iterable to tpc_finish_threadsafe.
tids = []
def finished_cb(tid):
tids.append(tid)
committed = self.tpc_finish(
b'd'*8,
[(b'2'*8, 'committed 2', False),
(b'1'*8, 'committed 3', True),
(b'4'*8, 'committed 4', False),
],
finished_cb)
self.assertFalse(committed.done() or
cache.load(b'2'*8) or
cache.load(b'4'*8))
self.assertEqual(cache.load(b'1'*8), (b'data2', b'b'*8))
self.assertEqual(self.pop(),
(8, False, 'tpc_finish', (b'd'*8,)))
self.respond(8, b'e'*8)
self.assertEqual(committed.result(), b'e'*8)
self.assertEqual(cache.load(b'1'*8), None)
self.assertEqual(cache.load(b'2'*8), ('committed 2', b'e'*8))
self.assertEqual(cache.load(b'4'*8), ('committed 4', b'e'*8))
self.assertEqual(tids.pop(), b'e'*8)
# If the protocol is disconnected, it will reconnect and will
# resolve outstanding requests with exceptions:
loaded = self.load_before(b'1'*8, m64)
f1 = self.call('foo', 1, 2)
self.assertFalse(loaded.done() or f1.done())
self.assertEqual(self.pop(), [(9, False, 'loadBefore', (b'1'*8, m64)),
(10, False, 'foo', (1, 2))],
)
exc = TypeError(43)
self.assertFalse(wrapper.notify_disconnected.called)
wrapper.notify_connected.reset_mock()
protocol.connection_lost(exc)
wrapper.notify_disconnected.assert_called_with()
self.assertTrue(isinstance(loaded.exception(), ClientDisconnected))
self.assertEqual(loaded.exception().args, (exc,))
self.assertTrue(isinstance(f1.exception(), ClientDisconnected))
self.assertEqual(f1.exception().args, (exc,))
# Because we reconnected, a new protocol and transport were created:
self.assert_(protocol is not loop.protocol)
self.assert_(transport is not loop.transport)
protocol = loop.protocol
transport = loop.transport
# and we have a new incomplete connect future:
self.assertFalse(client.connected.done() or transport.data)
# This time we'll send a lower protocol version. The client
# will send it back, because it's lower than the client's
# protocol:
protocol.data_received(sized(b'Z310'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z310')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.assertFalse(wrapper.notify_connected.called)
self.respond(1, None)
self.respond(2, b'e'*8)
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
self.respond(3, dict(length=42))
# Because the server tid matches the cache tid, we're done connecting
wrapper.notify_connected.assert_called_with(client, {'length': 42})
self.assert_(client.connected.done() and not transport.data)
self.assertEqual(cache.getLastTid(), b'e'*8)
# Because we were able to update the cache, we didn't have to
# invalidate the database cache:
self.assertFalse(wrapper.invalidateTransaction.called)
# The close method closes the connection and cache:
client.close()
self.assert_(transport.closed and cache.closed)
# The client doesn't reconnect
self.assertEqual(loop.protocol, protocol)
self.assertEqual(loop.transport, transport)
def test_cache_behind(self):
wrapper, cache, loop, client, protocol, transport = self.start()
cache.setLastTid(b'a'*8)
cache.store(b'4'*8, b'a'*8, None, '4 data')
cache.store(b'2'*8, b'a'*8, None, '2 data')
self.assertFalse(client.connected.done() or transport.data)
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.respond(2, b'e'*8)
# We have to verify the cache, so we're not done connecting:
self.assertFalse(client.connected.done())
self.assertEqual(self.pop(), (3, False, 'getInvalidations', (b'a'*8, )))
self.respond(3, (b'e'*8, [b'4'*8]))
self.assertEqual(self.pop(), (4, False, 'get_info', ()))
self.respond(4, dict(length=42))
# Now that verification is done, we're done connecting
self.assert_(client.connected.done() and not transport.data)
self.assertEqual(cache.getLastTid(), b'e'*8)
# And the cache has been updated:
self.assertEqual(cache.load(b'2'*8),
('2 data', b'a'*8)) # unchanged
self.assertEqual(cache.load(b'4'*8), None)
# Because we were able to update the cache, we didn't have to
# invalidate the database cache:
self.assertFalse(wrapper.invalidateCache.called)
def test_cache_way_behind(self):
wrapper, cache, loop, client, protocol, transport = self.start()
cache.setLastTid(b'a'*8)
cache.store(b'4'*8, b'a'*8, None, '4 data')
self.assertTrue(cache)
self.assertFalse(client.connected.done() or transport.data)
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.respond(2, b'e'*8)
# We have to verify the cache, so we're not done connecting:
self.assertFalse(client.connected.done())
self.assertEqual(self.pop(), (3, False, 'getInvalidations', (b'a'*8, )))
# We respond None, indicating that we're too far out of date:
self.respond(3, None)
self.assertEqual(self.pop(), (4, False, 'get_info', ()))
self.respond(4, dict(length=42))
# Now that verification is done, we're done connecting
self.assert_(client.connected.done() and not transport.data)
self.assertEqual(cache.getLastTid(), b'e'*8)
# But the cache is now empty and we invalidated the database cache
self.assertFalse(cache)
wrapper.invalidateCache.assert_called_with()
def test_multiple_addresses(self):
# We can pass multiple addresses to client constructor
addrs = [('1.2.3.4', 8200), ('2.2.3.4', 8200)]
wrapper, cache, loop, client, protocol, transport = self.start(
addrs, ())
# We haven't connected yet
self.assert_(protocol is None and transport is None)
# There are 2 connection attempts outstanding:
self.assertEqual(sorted(loop.connecting), addrs)
# We cause the first one to fail:
loop.fail_connecting(addrs[0])
self.assertEqual(sorted(loop.connecting), addrs[1:])
# The failed connection is attempted in the future:
delay, func, args, _ = loop.later.pop(0)
self.assert_(1 <= delay <= 2)
func(*args)
self.assertEqual(sorted(loop.connecting), addrs)
# Let's connect the second address
loop.connect_connecting(addrs[1])
self.assertEqual(sorted(loop.connecting), addrs[:1])
protocol = loop.protocol
transport = loop.transport
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.respond(1, None)
# Now, when the first connection fails, it won't be retried,
# because we're already connected.
# (first in later is heartbeat)
self.assertEqual(sorted(loop.later[1:]), [])
loop.fail_connecting(addrs[0])
self.assertEqual(sorted(loop.connecting), [])
self.assertEqual(sorted(loop.later[1:]), [])
def test_bad_server_tid(self):
# If during verification we get a server_tid behind the cache's, make
# sure we retry the connection later.
wrapper, cache, loop, client, protocol, transport = self.start()
cache.store(b'4'*8, b'a'*8, None, '4 data')
cache.setLastTid('b'*8)
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.respond(2, 'a'*8)
self.assertFalse(client.connected.done() or transport.data)
delay, func, args, _ = loop.later.pop(1) # first in later is heartbeat
self.assert_(8 < delay < 10)
self.assertEqual(len(loop.later), 1) # first in later is heartbeat
func(*args) # connect again
self.assertFalse(protocol is loop.protocol)
self.assertFalse(transport is loop.transport)
protocol = loop.protocol
transport = loop.transport
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.respond(2, 'b'*8)
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
self.respond(3, dict(length=42))
self.assert_(client.connected.done() and not transport.data)
self.assert_(client.ready)
def test_readonly_fallback(self):
addrs = [('1.2.3.4', 8200), ('2.2.3.4', 8200)]
wrapper, cache, loop, client, protocol, transport = self.start(
addrs, (), read_only=Fallback)
self.assertTrue(self.is_read_only())
# We'll treat the first address as read-only and we'll let it connect:
loop.connect_connecting(addrs[0])
protocol, transport = loop.protocol, loop.transport
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
# We see that the client tried a writable connection:
self.assertEqual(self.pop(),
(1, False, 'register', ('TEST', False)))
# We respond with a read-only exception:
self.respond(1, (ReadOnlyError, ReadOnlyError()))
self.assertTrue(self.is_read_only())
# The client tries for a read-only connection:
self.assertEqual(self.pop(),
[(2, False, 'register', ('TEST', True)),
(3, False, 'lastTransaction', ()),
])
# We respond successfully:
self.respond(2, None)
self.respond(3, 'b'*8)
self.assertTrue(self.is_read_only())
# At this point, the client is ready and using the protocol,
# and the protocol is read-only:
self.assert_(client.ready)
self.assertEqual(client.protocol, protocol)
self.assertEqual(protocol.read_only, True)
connected = client.connected
# The client asks for info, and we respond:
self.assertEqual(self.pop(), (4, False, 'get_info', ()))
self.respond(4, dict(length=42))
self.assert_(connected.done())
# We connect the second address:
loop.connect_connecting(addrs[1])
loop.protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(loop.transport.pop(2)), b'Z3101')
self.assertEqual(self.parse(loop.transport.pop()),
(1, False, 'register', ('TEST', False)))
self.assertTrue(self.is_read_only())
# We respond and the writable connection succeeds:
self.respond(1, None)
self.assertFalse(self.is_read_only())
# at this point, a lastTransaction request is emitted:
self.assertEqual(self.parse(loop.transport.pop()),
(2, False, 'lastTransaction', ()))
# Now, the original protocol is closed, and the client is
# no-longer ready:
self.assertFalse(client.ready)
self.assertFalse(client.protocol is protocol)
self.assertEqual(client.protocol, loop.protocol)
self.assertEqual(protocol.closed, True)
self.assert_(client.connected is not connected)
self.assertFalse(client.connected.done())
protocol, transport = loop.protocol, loop.transport
self.assertEqual(protocol.read_only, False)
# Now, we finish verification
self.respond(2, 'b'*8)
self.respond(3, dict(length=42))
self.assert_(client.ready)
self.assert_(client.connected.done())
def test_invalidations_while_verifying(self):
# While we're verifying, invalidations are ignored
wrapper, cache, loop, client, protocol, transport = self.start()
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.send('invalidateTransaction', b'b'*8, [b'1'*8], called=False)
self.respond(2, b'a'*8)
self.send('invalidateTransaction', b'c'*8, [b'1'*8], no_output=False)
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
# We'll disconnect:
protocol.connection_lost(Exception("lost"))
self.assert_(protocol is not loop.protocol)
self.assert_(transport is not loop.transport)
protocol = loop.protocol
transport = loop.transport
# Similarly, invalidations aren't processed while reconnecting:
protocol.data_received(sized(b'Z3101'))
self.assertEqual(self.unsized(transport.pop(2)), b'Z3101')
self.assertEqual(self.pop(),
[(1, False, 'register', ('TEST', False)),
(2, False, 'lastTransaction', ()),
])
self.respond(1, None)
self.send('invalidateTransaction', b'd'*8, [b'1'*8], called=False)
self.respond(2, b'c'*8)
self.send('invalidateTransaction', b'e'*8, [b'1'*8], no_output=False)
self.assertEqual(self.pop(), (3, False, 'get_info', ()))
def test_flow_control(self):
# When sending a lot of data (blobs), we don't want to fill up
# memory behind a slow socket. Asyncio's flow-control helpers
# seem a bit complicated. We'd rather pass an iterator that's
# consumed as the transport is able to send data.
wrapper, cache, loop, client, protocol, transport = self.start(
finish_start=True)
# Give the transport a small capacity:
transport.capacity = 2
self.async('foo')
self.async('bar')
self.async('baz')
self.async('splat')
# The first 2 were sent, but the remaining were queued.
self.assertEqual(self.pop(),
[(0, True, 'foo', ()), (0, True, 'bar', ())])
# But popping them allowed sending to resume:
self.assertEqual(self.pop(),
[(0, True, 'baz', ()), (0, True, 'splat', ())])
# This is especially handy with iterators:
self.async_iter((name, ()) for name in 'abcde')
self.assertEqual(self.pop(), [(0, True, 'a', ()), (0, True, 'b', ())])
self.assertEqual(self.pop(), [(0, True, 'c', ()), (0, True, 'd', ())])
self.assertEqual(self.pop(), (0, True, 'e', ()))
self.assertEqual(self.pop(), [])
def test_bad_protocol(self):
wrapper, cache, loop, client, protocol, transport = self.start()
with mock.patch("ZEO.asyncio.client.logger.error") as error:
self.assertFalse(error.called)
protocol.data_received(sized(b'Z200'))
self.assert_(isinstance(error.call_args[0][1], ProtocolError))
def test_get_peername(self):
wrapper, cache, loop, client, protocol, transport = self.start(
finish_start=True)
self.assertEqual(client.get_peername(), '1.2.3.4')
def test_call_async_from_same_thread(self):
# There are a few (1?) cases where we call into client storage
# where it needs to call back asynchronously. Because we're
# calling from the same thread, we don't need to use a future.
wrapper, cache, loop, client, protocol, transport = self.start(
finish_start=True)
client.call_async_from_same_thread('foo', 1)
self.assertEqual(self.pop(), (0, True, 'foo', (1, )))
def test_ClientDisconnected_on_call_timeout(self):
wrapper, cache, loop, client, protocol, transport = self.start()
self.wait_for_result = super(ClientTests, self).wait_for_result
self.assertRaises(ClientDisconnected, self.call, 'foo')
client.ready = False
self.assertRaises(ClientDisconnected, self.call, 'foo')
def test_errors_in_data_received(self):
# There was a bug in ZEO.asyncio.client.Protocol.data_received
# that caused it to fail badly if errors were raised while
# handling data.
wrapper, cache, loop, client, protocol, transport = self.start(
finish_start=True)
wrapper.receiveBlobStart.side_effect = ValueError('test')
chunk = 'x' * 99999
try:
loop.protocol.data_received(
sized(
self.encode(0, True, 'receiveBlobStart', ('oid', 'serial'))
) +
sized(
self.encode(
0, True, 'receiveBlobChunk', ('oid', 'serial', chunk))
)
)
except ValueError:
pass
loop.protocol.data_received(sized(
self.encode(0, True, 'receiveBlobStop', ('oid', 'serial'))
))
wrapper.receiveBlobChunk.assert_called_with('oid', 'serial', chunk)
wrapper.receiveBlobStop.assert_called_with('oid', 'serial')
def test_heartbeat(self):
# Protocols run heartbeats on a configurable (sort of)
# heartbeat interval, which defaults to every 60 seconds.
wrapper, cache, loop, client, protocol, transport = self.start(
finish_start=True)
delay, func, args, handle = loop.later.pop()
self.assertEqual(
(delay, func, args, handle),
(60, protocol.heartbeat, (), protocol.heartbeat_handle),
)
self.assertFalse(loop.later or handle.cancelled)
# The heartbeat function sends heartbeat data and reschedules itself.
func()
self.assertEqual(self.pop(), (-1, 0, '.reply', None))
self.assertTrue(protocol.heartbeat_handle != handle)
delay, func, args, handle = loop.later.pop()
self.assertEqual(
(delay, func, args, handle),
(60, protocol.heartbeat, (), protocol.heartbeat_handle),
)
self.assertFalse(loop.later or handle.cancelled)
# The heartbeat is cancelled when the protocol connection is lost:
protocol.connection_lost(None)
self.assertTrue(handle.cancelled)
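The reschedule-yourself pattern the heartbeat test exercises can be sketched on its own. Everything here (the `Heartbeat` class, the fake loop) is an illustrative assumption, not ZEO's actual implementation; only the interval default and the `(-1, 0, '.reply', None)` message shape come from the test above.

```python
class Heartbeat:
    """Minimal sketch: send a ping, then reschedule on the same interval."""

    def __init__(self, loop, send, interval=60):
        self.loop, self.send, self.interval = loop, send, interval
        # Schedule the first beat; loop is anything with call_later().
        self.handle = loop.call_later(interval, self.beat)

    def beat(self):
        self.send((-1, 0, '.reply', None))  # heartbeat message shape
        # Reschedule ourselves, replacing the old handle.
        self.handle = self.loop.call_later(self.interval, self.beat)

    def cancel(self):
        # Called when the connection is lost.
        self.handle.cancel()
```

The test above checks exactly this life cycle: a pending handle after connect, a fresh handle after each beat, and cancellation on `connection_lost`.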
class MemoryCache(object):
def __init__(self):
# { oid -> [(start, end, data)] }
self.data = collections.defaultdict(list)
self.last_tid = None
clear = __init__
closed = False
def close(self):
self.closed = True
def __len__(self):
return len(self.data)
def load(self, oid):
revisions = self.data[oid]
if revisions:
start, end, data = revisions[-1]
if not end:
return data, start
return None
def store(self, oid, start_tid, end_tid, data):
assert start_tid is not None
revisions = self.data[oid]
revisions.append((start_tid, end_tid, data))
revisions.sort()
def loadBefore(self, oid, tid):
for start, end, data in self.data[oid]:
if start < tid and (end is None or end >= tid):
return data, start, end
def invalidate(self, oid, tid):
revisions = self.data[oid]
if revisions:
if tid is None:
del revisions[:]
else:
start, end, data = revisions[-1]
if end is None:
revisions[-1] = start, tid, data
def getLastTid(self):
return self.last_tid
def setLastTid(self, tid):
self.last_tid = tid
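The visibility rule in `MemoryCache.loadBefore` above — a revision `[start, end)` satisfies `loadBefore(tid)` when `start < tid` and either the revision is current (`end is None`) or `tid <= end` — can be isolated as a small standalone function. This is a sketch mirroring the method above, not part of ZEO:

```python
def load_before(revisions, tid):
    """revisions: sorted [(start_tid, end_tid, data)]; end_tid None = current.

    Return (data, start, end) for the revision visible as of tid, or None.
    """
    for start, end, data in revisions:
        # Visible iff it was written before tid and not superseded before tid.
        if start < tid and (end is None or end >= tid):
            return data, start, end
    return None
```

With two revisions of one object, a request at the boundary tid returns the older record, while later tids see the current one — the same behavior `testClientBasics` relies on when loading non-current data from the cache.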
class ServerTests(Base, setupstack.TestCase):
# The server side of things is pretty simple compared to the
# client, because it's the client's job to make and keep
# connections. Servers are pretty passive.
def connect(self, finish=False):
protocol = server_protocol()
self.loop = protocol.loop
self.target = protocol.zeo_storage
if finish:
self.assertEqual(self.pop(parse=False), best_protocol_version)
protocol.data_received(sized(b'Z5'))
return protocol
message_id = 0
target = None
def call(self, meth, *args, **kw):
if kw:
expect = kw.pop('expect', self)
target = kw.pop('target', self.target)
self.assertFalse(kw)
if target is not None:
target = getattr(target, meth)
if expect is not self:
target.return_value = expect
self.message_id += 1
self.loop.protocol.data_received(
sized(self.encode(self.message_id, False, meth, args)))
if target is not None:
target.assert_called_once_with(*args)
target.reset_mock()
if expect is not self:
self.assertEqual(self.pop(),
(self.message_id, False, '.reply', expect))
def testServerBasics(self):
# A simple listening thread accepts connections. It creates
# asyncio connections by calling ZEO.asyncio.new_connection:
protocol = self.connect()
self.assertFalse(protocol.zeo_storage.notify_connected.called)
# The server sends its protocol.
self.assertEqual(self.pop(parse=False), best_protocol_version)
# The client sends its protocol:
protocol.data_received(sized(b'Z5'))
self.assertEqual(protocol.protocol_version, b'Z5')
protocol.zeo_storage.notify_connected.assert_called_once_with(protocol)
# The client registers:
self.call('register', False, expect=None)
# It does other things, like send heartbeats:
protocol.data_received(sized(b'(J\xff\xff\xff\xffK\x00U\x06.replyNt.'))
# The client can make async calls:
self.send('register')
# Let's close the connection
self.assertFalse(protocol.zeo_storage.notify_disconnected.called)
protocol.connection_lost(None)
protocol.zeo_storage.notify_disconnected.assert_called_once_with()
def test_invalid_methods(self):
protocol = self.connect(True)
protocol.zeo_storage.notify_connected.assert_called_once_with(protocol)
# If we try to call a method that isn't in the protocol's
# white list, it will disconnect:
self.assertFalse(protocol.loop.transport.closed)
self.call('foo', target=None)
self.assertTrue(protocol.loop.transport.closed)
def server_protocol(zeo_storage=None,
protocol_version=None,
addr=('1.2.3.4', '42'),
):
if zeo_storage is None:
zeo_storage = mock.Mock()
loop = Loop()
sock = () # anything not None
new_connection(loop, addr, sock, zeo_storage)
if protocol_version:
loop.protocol.data_received(sized(protocol_version))
return loop.protocol
def response(*data):
# Referencing ``self`` in this module-level function was a NameError
# bug; use the encode helper (assumed to live alongside ``sized``).
return sized(encode(*data))
def sized(message):
return struct.pack(">I", len(message)) + message
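The `sized` helper above frames every wire message with a 4-byte big-endian length prefix. A round trip of that framing, with an `unframe` helper of my own for illustration, looks like this:

```python
import struct

def frame(message):
    # Same scheme as sized() above: >I length prefix, then the payload.
    return struct.pack(">I", len(message)) + message

def unframe(data):
    # Return the first complete message and whatever bytes follow it.
    size, = struct.unpack(">I", data[:4])
    return data[4:4 + size], data[4 + size:]
```

Concatenated frames, as in the handshake tests above (`sized(b'Z5')` followed by more data), peel apart cleanly one message at a time.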
class Logging(object):
def __init__(self, level=logging.ERROR):
self.level = level
def __enter__(self):
self.handler = logging.StreamHandler()
logging.getLogger().addHandler(self.handler)
logging.getLogger().setLevel(self.level)
def __exit__(self, *args):
logging.getLogger().removeHandler(self.handler)
logging.getLogger().setLevel(logging.NOTSET)
def test_suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(ClientTests))
suite.addTest(unittest.makeSuite(ServerTests))
return suite
# ZEO-5.0.0a0/src/ZEO/cache.py
##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Disk-based client cache for ZEO.
ClientCache exposes an API used by the ZEO client storage. FileCache stores
objects on disk using a 2-tuple of oid and tid as key.
ClientCache's API is similar to a storage API, with methods like load(),
store(), and invalidate(). It manages in-memory data structures that allow
it to map this richer API onto the simple key-based API of the lower-level
FileCache.
"""
from __future__ import print_function
from struct import pack, unpack
import BTrees.LLBTree
import BTrees.LOBTree
import logging
import os
import tempfile
import time
import ZODB.fsIndex
import zc.lockfile
from ZODB.utils import p64, u64, z64, RLock
import six
from ._compat import PYPY
logger = logging.getLogger("ZEO.cache")
# A disk-based cache for ZEO clients.
#
# This class provides an interface to a persistent, disk-based cache
# used by ZEO clients to store copies of database records from the
# server.
#
# The details of the constructor are unspecified at this point.
#
# Each entry in the cache is valid for a particular range of transaction
# ids. The lower bound is the transaction that wrote the data. The
# upper bound is the next transaction that wrote a revision of the
# object. If the data is current, the upper bound is stored as None;
# the data is considered current until an invalidate() call is made.
#
# It is an error to call store() twice with the same object without an
# intervening invalidate() to set the upper bound on the first cache
# entry. Perhaps it will be necessary to have a call that removes
# something from the cache outright, without keeping a non-current
# entry.
# Cache verification
#
# When the client is connected to the server, it receives
# invalidations every time an object is modified. When the client is
# disconnected then reconnects, it must perform cache verification to make
# sure its cached data is synchronized with the storage's current state.
#
# quick verification
# full verification
#
# FileCache stores a cache in a single on-disk file.
#
# On-disk cache structure.
#
# The file begins with a 12-byte header. The first four bytes are the
# file's magic number - ZEC3 - indicating zeo cache version 3. The
# next eight bytes are the last transaction id.
magic = b"ZEC3"
ZEC_HEADER_SIZE = 12
# Maximum block size. Note that while we are doing a store, we may
# need to write a free block that is almost twice as big. If we die
# in the middle of a store, then we need to split the large free records
# while opening.
max_block_size = (1<<31) - 1
# After the header, the file contains a contiguous sequence of blocks. All
# blocks begin with a one-byte status indicator:
#
# b'a'
# Allocated. The block holds an object; the next 4 bytes are >I
# format total block size.
#
# b'f'
# Free. The block is free; the next 4 bytes are >I format total
# block size.
#
# b'1', b'2', b'3', b'4'
# The block is free, and consists of 1, 2, 3 or 4 bytes total.
#
# "Total" includes the status byte, and size bytes. There are no
# empty (size 0) blocks.
# Allocated blocks have more structure:
#
# 1 byte allocation status (b'a').
# 4 bytes block size, >I format.
# 8 byte oid
# 8 byte start_tid
# 8 byte end_tid
# 2 byte version length must be 0
# 4 byte data size
# data
# 8 byte redundant oid for error detection.
allocated_record_overhead = 43
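The 43-byte `allocated_record_overhead` follows directly from the layout listed above; a struct-format sanity check (the format strings are my own transcription of that comment, not taken from ZEO's code):

```python
import struct

# Per the layout comment: status byte, >I block size, 8-byte oid,
# 8-byte start_tid, 8-byte end_tid, >H version length, >I data size...
header_fmt = ">cI8s8s8sHI"
# ...then the data payload, then a redundant 8-byte oid for error detection.
trailer_fmt = ">8s"

overhead = struct.calcsize(header_fmt) + struct.calcsize(trailer_fmt)
```

So a record occupies `overhead + len(data)` bytes of the block it lives in.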
# The cache's currentofs goes around the file, circularly, forever.
# It's always the starting offset of some block.
#
# When a new object is added to the cache, it's stored beginning at
# currentofs, and currentofs moves just beyond it. As many contiguous
# blocks needed to make enough room for the new object are evicted,
# starting at currentofs. Exception: if currentofs is close enough
# to the end of the file that the new object can't fit in one
# contiguous chunk, currentofs is reset to ZEC_HEADER_SIZE first.
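The wrap-around rule described above is simple to state in code; a sketch under the stated rule (the function name is mine, not ZEO's):

```python
ZEC_HEADER_SIZE = 12  # size of the on-disk header, per the format notes

def write_offset(currentofs, needed, maxsize):
    """Start offset for the next record: reset to just past the header
    when the record can't fit contiguously before the end of the file."""
    if currentofs + needed > maxsize:
        return ZEC_HEADER_SIZE
    return currentofs
```

Eviction then proceeds from the returned offset, freeing as many contiguous blocks as the new record needs.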
# Under PyPy, the available dict specializations perform significantly
# better (faster) than the pure-Python BTree implementation. They may
# use less memory too. And we don't require any of the special BTree features...
_current_index_type = ZODB.fsIndex.fsIndex if not PYPY else dict
_noncurrent_index_type = BTrees.LOBTree.LOBTree if not PYPY else dict
# ...except at this leaf level
_noncurrent_bucket_type = BTrees.LLBTree.LLBucket
class ClientCache(object):
"""A simple in-memory cache."""
# The default size of 200MB makes a lot more sense than the traditional
# default of 20MB. The default here is misleading, though, since
# ClientStorage is the only user of ClientCache, and it always passes an
# explicit size of its own choosing.
def __init__(self, path=None, size=200*1024**2, rearrange=.8):
# - `path`: filepath for the cache file, or None (in which case
# a temp file will be created)
self.path = path
# - `maxsize`: total size of the cache file
# Clamp up to the minimum size if a smaller value was given.
size = max(size, ZEC_HEADER_SIZE)
self.maxsize = size
# rearrange: if we read a current record and it's more than
# rearrange*size from the end, then copy it forward to keep it
# from being evicted.
self.rearrange = rearrange * size
# The number of records in the cache.
self._len = 0
# {oid -> pos}
self.current = _current_index_type()
# {oid -> {tid->pos}}
# Note that caches in the wild seem to have very little non-current
# data, so this would seem to have little impact on memory consumption.
# I wonder if we even need to store non-current data in the cache.
self.noncurrent = _noncurrent_index_type()
# tid for the most recent transaction we know about. This is also
# stored near the start of the file.
self.tid = z64
# Always the offset into the file of the start of a block.
# New and relocated objects are always written starting at
# currentofs.
self.currentofs = ZEC_HEADER_SIZE
self._lock = RLock()
# self.f is the open file object.
# When we're not reusing an existing file, self.f is left None
# here -- the scan() method must be called then to open the file
# (and it sets self.f).
fsize = ZEC_HEADER_SIZE
if path:
self._lock_file = zc.lockfile.LockFile(path + '.lock')
if not os.path.exists(path):
# Create a small empty file. We'll make it bigger in _initfile.
self.f = open(path, 'wb+')
self.f.write(magic+z64)
logger.info("created persistent cache file %r", path)
else:
fsize = os.path.getsize(self.path)
self.f = open(path, 'rb+')
logger.info("reusing persistent cache file %r", path)
else:
# Create a small empty file. We'll make it bigger in _initfile.
self.f = tempfile.TemporaryFile()
self.f.write(magic+z64)
logger.info("created temporary cache file %r", self.f.name)
try:
self._initfile(fsize)
except:
self.f.close()
if not path:
raise # unrecoverable temp file error :(
badpath = path+'.bad'
if os.path.exists(badpath):
logger.critical(
'Removing bad cache file: %r (prev bad exists).',
path, exc_info=1)
os.remove(path)
else:
logger.critical('Moving bad cache file to %r.',
badpath, exc_info=1)
os.rename(path, badpath)
self.f = open(path, 'wb+')
self.f.write(magic+z64)
self._initfile(ZEC_HEADER_SIZE)
# Statistics: _n_adds, _n_added_bytes,
# _n_evicts, _n_evicted_bytes,
# _n_accesses
self.clearStats()
self._setup_trace(path)
# Backward compatibility. Client code used to have to use the fc
# attr to get to the file cache to get cache stats.
@property
def fc(self):
return self
def clear(self):
with self._lock:
self.f.seek(ZEC_HEADER_SIZE)
self.f.truncate()
self._initfile(ZEC_HEADER_SIZE)
##
# Scan the current contents of the cache file, calling `install`
# for each object found in the cache. This method should only
# be called once to initialize the cache from disk.
def _initfile(self, fsize):
maxsize = self.maxsize
f = self.f
read = f.read
seek = f.seek
write = f.write
seek(0)
if read(4) != magic:
seek(0)
raise ValueError("unexpected magic number: %r" % read(4))
self.tid = read(8)
if len(self.tid) != 8:
raise ValueError("cache file too small -- no tid at start")
# Populate .filemap and .key2entry to reflect what's currently in the
# file, and tell our parent about it too (via the `install` callback).
# Remember the location of the largest free block. That seems a
# decent place to start currentofs.
self.current = _current_index_type()
self.noncurrent = _noncurrent_index_type()
l = 0
last = ofs = ZEC_HEADER_SIZE
first_free_offset = 0
current = self.current
status = b' '
while ofs < fsize:
seek(ofs)
status = read(1)
if status == b'a':
size, oid, start_tid, end_tid, lver = unpack(
">I8s8s8sH", read(30))
if ofs+size <= maxsize:
if end_tid == z64:
assert oid not in current, (ofs, f.tell())
current[oid] = ofs
else:
assert start_tid < end_tid, (ofs, f.tell())
self._set_noncurrent(oid, start_tid, ofs)
assert lver == 0, "Versions aren't supported"
l += 1
else:
# free block
if first_free_offset == 0:
first_free_offset = ofs
if status == b'f':
size, = unpack(">I", read(4))
if size > max_block_size:
# Oops, we either have an old cache, or we
# crashed while storing. Split this block into two.
assert size <= max_block_size*2
seek(ofs+max_block_size)
write(b'f'+pack(">I", size-max_block_size))
seek(ofs)
write(b'f'+pack(">I", max_block_size))
sync(f)
elif status in b'1234':
size = int(status)
else:
raise ValueError("unknown status byte value %s in client "
"cache file" % hex(ord(status)))
last = ofs
ofs += size
if ofs >= maxsize:
# Oops, the file was bigger before.
if ofs > maxsize:
# The last record is too big. Replace it with a smaller
# free record
size = maxsize-last
seek(last)
if size > 4:
write(b'f'+pack(">I", size))
else:
write("012345"[size].encode())
sync(f)
ofs = maxsize
break
if fsize < maxsize:
assert ofs==fsize
# Make sure the OS really saves enough bytes for the file.
seek(self.maxsize - 1)
write(b'x')
# add as many free blocks as are needed to fill the space
seek(ofs)
nfree = maxsize - ZEC_HEADER_SIZE
for i in range(0, nfree, max_block_size):
block_size = min(max_block_size, nfree-i)
write(b'f' + pack(">I", block_size))
seek(block_size-5, 1)
sync(self.f)
# There is always data to read at this point, and the last
# status byte seen must be one of the known block types.
assert last and (status in b' f1234')
first_free_offset = last
else:
assert ofs==maxsize
if maxsize < fsize:
seek(maxsize)
f.truncate()
# We use the first_free_offset because it is most likely the
# place where we last wrote.
self.currentofs = first_free_offset or ZEC_HEADER_SIZE
self._len = l
def _set_noncurrent(self, oid, tid, ofs):
noncurrent_for_oid = self.noncurrent.get(u64(oid))
if noncurrent_for_oid is None:
noncurrent_for_oid = _noncurrent_bucket_type()
self.noncurrent[u64(oid)] = noncurrent_for_oid
noncurrent_for_oid[u64(tid)] = ofs
def _del_noncurrent(self, oid, tid):
try:
noncurrent_for_oid = self.noncurrent[u64(oid)]
del noncurrent_for_oid[u64(tid)]
if not noncurrent_for_oid:
del self.noncurrent[u64(oid)]
except KeyError:
logger.error("Couldn't find non-current %r", (oid, tid))
def clearStats(self):
self._n_adds = self._n_added_bytes = 0
self._n_evicts = self._n_evicted_bytes = 0
self._n_accesses = 0
def getStats(self):
return (self._n_adds, self._n_added_bytes,
self._n_evicts, self._n_evicted_bytes,
self._n_accesses
)
##
# The number of objects currently in the cache.
def __len__(self):
return self._len
##
# Close the underlying file. No methods accessing the cache should be
# used after this.
def close(self):
self._unsetup_trace()
f = self.f
self.f = None
if f is not None:
sync(f)
f.close()
if hasattr(self,'_lock_file'):
self._lock_file.close()
##
# Evict objects as necessary to free up at least nbytes bytes,
# starting at currentofs. If currentofs is closer than nbytes to
# the end of the file, currentofs is reset to ZEC_HEADER_SIZE first.
# The number of bytes actually freed may be (and probably will be)
# greater than nbytes, and is _makeroom's return value. The file is not
# altered by _makeroom. filemap and key2entry are updated to reflect the
# evictions, and it's the caller's responsibility both to fiddle
# the file, and to update filemap, to account for all the space
# freed (starting at currentofs when _makeroom returns, and
# spanning the number of bytes retured by _makeroom).
def _makeroom(self, nbytes):
assert 0 < nbytes <= self.maxsize - ZEC_HEADER_SIZE, (
nbytes, self.maxsize)
if self.currentofs + nbytes > self.maxsize:
self.currentofs = ZEC_HEADER_SIZE
ofs = self.currentofs
seek = self.f.seek
read = self.f.read
current = self.current
while nbytes > 0:
seek(ofs)
status = read(1)
if status == b'a':
size, oid, start_tid, end_tid = unpack(">I8s8s8s", read(28))
self._n_evicts += 1
self._n_evicted_bytes += size
if end_tid == z64:
del current[oid]
else:
self._del_noncurrent(oid, start_tid)
self._len -= 1
else:
if status == b'f':
size = unpack(">I", read(4))[0]
else:
assert status in b'1234'
size = int(status)
ofs += size
nbytes -= size
return ofs - self.currentofs
##
# Update our idea of the most recent tid. This is stored in the
# instance, and also written out near the start of the cache file. The
# new tid must be strictly greater than our current idea of the most
# recent tid.
def setLastTid(self, tid):
with self._lock:
if (not tid) or (tid == z64):
return
if (tid <= self.tid) and self._len:
if tid == self.tid:
return # Be a little forgiving
raise ValueError("new last tid (%s) must be greater than "
"previous one (%s)"
% (u64(tid), u64(self.tid)))
assert isinstance(tid, bytes) and len(tid) == 8, tid
self.tid = tid
self.f.seek(len(magic))
self.f.write(tid)
self.f.flush()
##
# Return the last transaction seen by the cache.
# @return a transaction id
# @defreturn string, or 8 nulls if no transaction is yet known
def getLastTid(self):
with self._lock:
return self.tid
##
# Return the current data record for oid.
# @param oid object id
# @return (data record, serial number, tid), or None if the object is not
# in the cache
# @defreturn 3-tuple: (string, string, string)
def load(self, oid, before_tid=None):
with self._lock:
ofs = self.current.get(oid)
if ofs is None:
self._trace(0x20, oid)
return None
self.f.seek(ofs)
read = self.f.read
status = read(1)
assert status == b'a', (ofs, self.f.tell(), oid)
size, saved_oid, tid, end_tid, lver, ldata = unpack(
">I8s8s8sHI", read(34))
assert saved_oid == oid, (ofs, self.f.tell(), oid, saved_oid)
assert end_tid == z64, (ofs, self.f.tell(), oid, tid, end_tid)
assert lver == 0, "Versions aren't supported"
if before_tid and tid >= before_tid:
return None
data = read(ldata)
assert len(data) == ldata, (
ofs, self.f.tell(), oid, len(data), ldata)
# WARNING: The following assert changes the file position.
# We must not depend on this below or we'll fail in optimized mode.
assert read(8) == oid, (ofs, self.f.tell(), oid)
self._n_accesses += 1
self._trace(0x22, oid, tid, end_tid, ldata)
ofsofs = self.currentofs - ofs
if ofsofs < 0:
ofsofs += self.maxsize
if (ofsofs > self.rearrange and
self.maxsize > 10*len(data) and
size > 4):
# The record is far back and might get evicted, but it's
# valuable, so move it forward.
# Remove from the old location:
del self.current[oid]
self.f.seek(ofs)
self.f.write(b'f'+pack(">I", size))
# Write to new location:
self._store(oid, tid, None, data, size)
return data, tid
##
# Return a non-current revision of oid that was current before tid.
# @param oid object id
# @param tid id of transaction that wrote next revision of oid
# @return data record, serial number, start tid, and end tid
# @defreturn 4-tuple: (string, string, string, string)
def loadBefore(self, oid, before_tid):
with self._lock:
noncurrent_for_oid = self.noncurrent.get(u64(oid))
if noncurrent_for_oid is None:
result = self.load(oid, before_tid)
if result:
return result[0], result[1], None
else:
self._trace(0x24, oid, "", before_tid)
return result
items = noncurrent_for_oid.items(None, u64(before_tid)-1)
if not items:
result = self.load(oid, before_tid)
if result:
return result[0], result[1], None
else:
self._trace(0x24, oid, "", before_tid)
return result
tid, ofs = items[-1]
self.f.seek(ofs)
read = self.f.read
status = read(1)
assert status == b'a', (ofs, self.f.tell(), oid, before_tid)
size, saved_oid, saved_tid, end_tid, lver, ldata = unpack(
">I8s8s8sHI", read(34))
assert saved_oid == oid, (ofs, self.f.tell(), oid, saved_oid)
assert saved_tid == p64(tid), (
ofs, self.f.tell(), oid, saved_tid, tid)
assert end_tid != z64, (ofs, self.f.tell(), oid)
assert lver == 0, "Versions aren't supported"
data = read(ldata)
assert len(data) == ldata, (ofs, self.f.tell())
# WARNING: The following assert changes the file position.
# We must not depend on this below or we'll fail in optimized mode.
assert read(8) == oid, (ofs, self.f.tell(), oid)
if end_tid < before_tid:
result = self.load(oid, before_tid)
if result:
return result[0], result[1], None
else:
self._trace(0x24, oid, "", before_tid)
return result
self._n_accesses += 1
self._trace(0x26, oid, "", saved_tid)
return data, saved_tid, end_tid
##
# Store a new data record in the cache.
# @param oid object id
# @param start_tid the id of the transaction that wrote this revision
# @param end_tid the id of the transaction that created the next
# revision of oid. If end_tid is None, the data is
# current.
# @param data the actual data
def store(self, oid, start_tid, end_tid, data):
with self._lock:
seek = self.f.seek
if end_tid is None:
ofs = self.current.get(oid)
if ofs:
seek(ofs)
read = self.f.read
status = read(1)
assert status == b'a', (ofs, self.f.tell(), oid)
size, saved_oid, saved_tid, end_tid = unpack(
">I8s8s8s", read(28))
assert saved_oid == oid, (
ofs, self.f.tell(), oid, saved_oid)
assert end_tid == z64, (ofs, self.f.tell(), oid)
if saved_tid == start_tid:
return
raise ValueError("already have current data for oid")
else:
noncurrent_for_oid = self.noncurrent.get(u64(oid))
if noncurrent_for_oid and (
u64(start_tid) in noncurrent_for_oid):
return
size = allocated_record_overhead + len(data)
# A number of cache simulation experiments all concluded that the
# 2nd-level ZEO cache got a much higher hit rate if "very large"
# objects simply weren't cached. For now, we ignore the request
# only if the entire cache file is too small to hold the object.
if size >= min(max_block_size, self.maxsize - ZEC_HEADER_SIZE):
return
self._n_adds += 1
self._n_added_bytes += size
self._len += 1
self._store(oid, start_tid, end_tid, data, size)
if end_tid:
self._trace(0x54, oid, start_tid, end_tid, dlen=len(data))
else:
self._trace(0x52, oid, start_tid, dlen=len(data))
def _store(self, oid, start_tid, end_tid, data, size):
# Low-level store used by store and load
# In the next line, we ask for an extra byte to make sure we always
# have a free block after the newly allocated block. This free
# block acts as a ring pointer, so that on restart, we start
# where we left off.
nfreebytes = self._makeroom(size+1)
assert size <= nfreebytes, (size, nfreebytes)
excess = nfreebytes - size
# If there's any excess (which is likely), we need to record a
# free block following the end of the data record. That isn't
# expensive -- it's all a contiguous write.
if excess == 0:
extra = b''
elif excess < 5:
extra = "01234"[excess].encode()
else:
extra = b'f' + pack(">I", excess)
ofs = self.currentofs
seek = self.f.seek
seek(ofs)
write = self.f.write
# Before writing data, we'll write a free block for the space freed.
# We'll come back with a last atomic write to rewrite the start of the
# allocated-block header.
write(b'f'+pack(">I", nfreebytes))
# Now write the rest of the allocation block header and object data.
write(pack(">8s8s8sHI", oid, start_tid, end_tid or z64, 0, len(data)))
write(data)
write(oid)
write(extra)
# Now, we'll go back and rewrite the beginning of the
# allocated block header.
seek(ofs)
write(b'a'+pack(">I", size))
if end_tid:
self._set_noncurrent(oid, start_tid, ofs)
else:
self.current[oid] = ofs
self.currentofs += size
##
# If `tid` is None,
# forget all knowledge of `oid`. (`tid` can be None only for
# invalidations generated by startup cache verification.) If `tid`
# isn't None, and we had current
# data for `oid`, stop believing we have current data, and mark the
# data we had as being valid only up to `tid`. In all other cases, do
# nothing.
#
# Parameters:
#
# - oid object id
# - tid the id of the transaction that wrote a new revision of oid,
# or None to forget all cached info about oid.
def invalidate(self, oid, tid):
with self._lock:
ofs = self.current.get(oid)
if ofs is None:
# 0x10 == invalidate (miss)
self._trace(0x10, oid, tid)
return
self.f.seek(ofs)
read = self.f.read
status = read(1)
assert status == b'a', (ofs, self.f.tell(), oid)
size, saved_oid, saved_tid, end_tid = unpack(">I8s8s8s", read(28))
assert saved_oid == oid, (ofs, self.f.tell(), oid, saved_oid)
assert end_tid == z64, (ofs, self.f.tell(), oid)
del self.current[oid]
if tid is None:
self.f.seek(ofs)
self.f.write(b'f'+pack(">I", size))
# 0x1E = invalidate (hit, discarding current or non-current)
self._trace(0x1E, oid, tid)
self._len -= 1
else:
if tid == saved_tid:
logger.warning(
"Ignoring invalidation with same tid as current")
return
self.f.seek(ofs+21)
self.f.write(tid)
self._set_noncurrent(oid, saved_tid, ofs)
# 0x1C = invalidate (hit, saving non-current)
self._trace(0x1C, oid, tid)
##
# Generates (oid, serial) pairs for all objects in the
# cache. This generator is used by cache verification.
def contents(self):
# May need to materialize list instead of iterating;
# depends on whether the caller may change the cache.
seek = self.f.seek
read = self.f.read
for oid, ofs in six.iteritems(self.current):
seek(ofs)
status = read(1)
assert status == b'a', (ofs, self.f.tell(), oid)
size, saved_oid, tid, end_tid = unpack(">I8s8s8s", read(28))
assert saved_oid == oid, (ofs, self.f.tell(), oid, saved_oid)
assert end_tid == z64, (ofs, self.f.tell(), oid)
yield oid, tid
def dump(self):
from ZODB.utils import oid_repr
print("cache size", len(self))
L = list(self.contents())
L.sort()
for oid, tid in L:
print(oid_repr(oid), oid_repr(tid))
print("dll contents")
L = list(self)
L.sort(lambda x, y: cmp(x.key, y.key))
for x in L:
end_tid = x.end_tid or z64
print(oid_repr(x.key[0]), oid_repr(x.key[1]), oid_repr(end_tid))
print()
# If `path` isn't None (== we're using a persistent cache file), and
# envar ZEO_CACHE_TRACE is set to a non-empty value, try to open
# path+'.trace' as a trace file, and store the file object in
# self._tracefile. If not, or we can't write to the trace file, disable
# tracing by setting self._trace to a dummy function, and set
# self._tracefile to None.
_tracefile = None
def _trace(self, *a, **kw):
pass
def _setup_trace(self, path):
_tracefile = None
if path and os.environ.get("ZEO_CACHE_TRACE"):
tfn = path + ".trace"
try:
_tracefile = open(tfn, "ab")
except IOError as msg:
logger.warning("cannot write tracefile %r (%s)", tfn, msg)
else:
logger.info("opened tracefile %r", tfn)
if _tracefile is None:
return
now = time.time
def _trace(code, oid=b"", tid=z64, end_tid=z64, dlen=0):
# The code argument is two hex digits; bits 0 and 7 must be zero.
# The first hex digit shows the operation, the second the outcome.
# This method has been carefully tuned to be as fast as possible.
# Note: when tracing is disabled, this method is hidden by a dummy.
encoded = (dlen << 8) + code
if tid is None:
tid = z64
if end_tid is None:
end_tid = z64
try:
_tracefile.write(
pack(">iiH8s8s",
int(now()), encoded, len(oid), tid, end_tid) + oid,
)
except:
print(repr(tid), repr(end_tid))
raise
self._trace = _trace
self._tracefile = _tracefile
_trace(0x00)
def _unsetup_trace(self):
if self._tracefile is not None:
del self._trace
self._tracefile.close()
del self._tracefile
def sync(f):
f.flush()
if hasattr(os, 'fsync'):
def sync(f):
f.flush()
os.fsync(f.fileno())
ZEO-5.0.0a0/src/ZEO/component.xml 0000664 0000000 0000000 00000012672 12737730500 0016356 0 ustar 00root root 0000000 0000000
The full path to an SSL certificate file.
The full path to an SSL key file for the client certificate.
Dotted name of importable function for retrieving a password
for the client certificate key.
Path to a file or directory containing server certificates to be
authenticated.
Verify the host name in the server certificate is as expected.
Host name to use for SSL host name checks.
If ``check-hostname`` is true then use this as the
value to check. If an address is a host/port pair, then this
defaults to the host in the address.
The cache size in bytes, KB or MB. This defaults to 20MB.
Optional ``KB`` or ``MB`` suffixes can (and usually are) used to
specify units other than bytes.
The file path of a persistent cache file
Path name to the blob cache directory.
Tells whether the cache is a shared writable directory
and that the ZEO protocol should not transfer the file
but only the filename when committing.
Maximum size of the ZEO blob cache, in bytes. If not set, then
the cache size isn't checked and the blob directory will
grow without bound.
This option is ignored if shared_blob_dir is true.
ZEO check size as percent of blob_cache_size. The ZEO
cache size will be checked when this many bytes have been
loaded into the cache. Defaults to 10% of the blob cache
size. This option is ignored if shared_blob_dir is true.
A flag indicating whether this should be a read-only storage,
defaulting to false (i.e. writing is allowed by default).
A flag indicating whether a read-only remote storage should be
acceptable as a fallback when no writable storages are
available. Defaults to false. At most one of read_only and
read_only_fallback should be true.
How long to wait for an initial connection, defaulting to 30
seconds. If an initial connection can't be made within this time
limit, then creation of the client storage will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
After the initial connection, if the client is disconnected:
- In-flight server requests will fail with a
``ZEO.Exceptions.ClientDisconnected`` exception.
- New requests will block for up to ``wait_timeout`` waiting for a
connection to be established before failing with a
``ZEO.Exceptions.ClientDisconnected`` exception.
A label for the client in server logs
The name of the storage that the client wants to use. If the
ZEO server serves more than one storage, the client selects
the storage it wants to use by name. The default name is '1',
which is also the default name for the ZEO server.
The storage name. If unspecified, the address of the server
will be used as the name.
ZEO-5.0.0a0/src/ZEO/hash.py 0000664 0000000 0000000 00000001704 12737730500 0015121 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2008 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""In Python 2.6, the "sha" and "md5" modules have been deprecated
in favor of using hashlib for both. This class allows for compatibility
between versions."""
try:
import hashlib
sha1 = hashlib.sha1
new = sha1
except ImportError:
import sha
sha1 = sha.new
new = sha1
digest_size = sha.digest_size
ZEO-5.0.0a0/src/ZEO/interfaces.py 0000664 0000000 0000000 00000006744 12737730500 0016332 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import zope.interface
class StaleCache(object):
"""A ZEO cache is stale and requires verification.
"""
def __init__(self, storage):
self.storage = storage
class IClientCache(zope.interface.Interface):
"""Client cache interface.
Note that caches need to be thread safe.
"""
def close():
"""Close the cache
"""
def load(oid):
"""Get current data for object
Returns data and serial, or None.
"""
def __len__():
"""Return the number of items in the cache.
"""
def store(oid, start_tid, end_tid, data):
"""Store data for the object
The start_tid is the transaction that committed this data.
The end_tid is the tid of the next transaction that modified
the objects, or None if this is the current version.
"""
def loadBefore(oid, tid):
"""Load the data for the object last modified before the tid
Returns the data, and start and end tids.
"""
def invalidate(oid, tid):
"""Invalidate data for the object
If ``tid`` is None, forget all knowledge of `oid`. (``tid``
can be None only for invalidations generated by startup cache
verification.)
If ``tid`` isn't None, and we had current data for ``oid``,
stop believing we have current data, and mark the data we had
as being valid only up to `tid`. In all other cases, do
nothing.
"""
def getLastTid():
"""Get the last tid seen by the cache
This is the cached last tid we've seen from the server.
This method may be called from multiple threads. (It's assumed
to be trivial.)
"""
def setLastTid(tid):
"""Save the last tid sent by the server
"""
def clear():
"""Clear/empty the cache
"""
class IServeable(zope.interface.Interface):
"""Interface provided by storages that can be served by ZEO
"""
def getTid(oid):
"""The last transaction to change an object
Return the transaction id of the last transaction that committed a
change to an object with the given object id.
"""
def tpc_transaction():
"""The current transaction being committed.
If a storage is participating in a two-phase commit, then
return the transaction (object) being committed. Otherwise
return None.
"""
def lastInvalidations(size):
"""Get recent transaction invalidations
This method is optional and is used to get invalidations
performed by the most recent transactions.
An iterable of up to size entries must be returned, where each
entry is a transaction id and a sequence of object-id/empty-string
pairs describing the objects written by the
transaction, in chronological order.
"""
ZEO-5.0.0a0/src/ZEO/monitor.py 0000664 0000000 0000000 00000007771 12737730500 0015677 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Monitor behavior of ZEO server and record statistics.
$Id$
"""
from __future__ import print_function
import asyncore
import socket
import time
import logging
zeo_version = 'unknown'
try:
import pkg_resources
except ImportError:
pass
else:
zeo_dist = pkg_resources.working_set.find(
pkg_resources.Requirement.parse('ZODB3')
)
if zeo_dist is not None:
zeo_version = zeo_dist.version
class StorageStats:
"""Per-storage usage statistics."""
def __init__(self, connections=None):
self.connections = connections
self.loads = 0
self.stores = 0
self.commits = 0
self.aborts = 0
self.active_txns = 0
self.lock_time = None
self.conflicts = 0
self.conflicts_resolved = 0
self.start = time.ctime()
@property
def clients(self):
return len(self.connections)
def parse(self, s):
# parse the dump format
lines = s.split("\n")
for line in lines:
field, value = line.split(":", 1)
if field == "Server started":
self.start = value
elif field == "Clients":
# Hack because we use this both on the server and on
# the client where there are no connections.
self.connections = [0] * int(value)
elif field == "Clients verifying":
self.verifying_clients = int(value)
elif field == "Active transactions":
self.active_txns = int(value)
elif field == "Commit lock held for":
# This assumes the value is the number of seconds
# the commit lock has been held.
self.lock_time = time.time() - int(value)
elif field == "Commits":
self.commits = int(value)
elif field == "Aborts":
self.aborts = int(value)
elif field == "Loads":
self.loads = int(value)
elif field == "Stores":
self.stores = int(value)
elif field == "Conflicts":
self.conflicts = int(value)
elif field == "Conflicts resolved":
self.conflicts_resolved = int(value)
def dump(self, f):
print("Server started:", self.start, file=f)
print("Clients:", self.clients, file=f)
print("Clients verifying:", self.verifying_clients, file=f)
print("Active transactions:", self.active_txns, file=f)
if self.lock_time:
howlong = time.time() - self.lock_time
print("Commit lock held for:", int(howlong), file=f)
print("Commits:", self.commits, file=f)
print("Aborts:", self.aborts, file=f)
print("Loads:", self.loads, file=f)
print("Stores:", self.stores, file=f)
print("Conflicts:", self.conflicts, file=f)
print("Conflicts resolved:", self.conflicts_resolved, file=f)
ZEO-5.0.0a0/src/ZEO/nagios.py 0000664 0000000 0000000 00000011321 12737730500 0015452 0 ustar 00root root 0000000 0000000 from __future__ import print_function
##############################################################################
#
# Copyright (c) 2011 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""%prog [options] address
Where the address is an IPV6 address of the form: [addr]:port, an IPV4
address of the form: addr:port, or the name of a unix-domain socket file.
"""
import json
import optparse
import os
import re
import socket
import struct
import sys
import time
NO_TRANSACTION = '0'*16
nodiff_names = 'active_txns connections waiting'.split()
diff_names = 'aborts commits conflicts conflicts_resolved loads stores'.split()
per_times = dict(seconds=1.0, minutes=60.0, hours=3600.0, days=86400.0)
def new_metric(metrics, storage_id, name, value):
if storage_id == '1':
label = name
else:
if ' ' in storage_id:
label = "'%s:%s'" % (storage_id, name)
else:
label = "%s:%s" % (storage_id, name)
metrics.append("%s=%s" % (label, value))
def result(messages, metrics=(), status=None):
if metrics:
messages[0] += '|' + metrics[0]
if len(metrics) > 1:
messages.append('| ' + '\n '.join(metrics[1:]))
print('\n'.join(messages))
return status
def error(message):
return result((message, ), (), 2)
def warn(message):
return result((message, ), (), 1)
def check(addr, output_metrics, status, per):
m = re.match(r'\[(\S+)\]:(\d+)$', addr)
if m:
addr = m.group(1), int(m.group(2))
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
else:
m = re.match(r'(\S+):(\d+)$', addr)
if m:
addr = m.group(1), int(m.group(2))
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
else:
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
s.connect(addr)
except socket.error as err:
return error("Can't connect %s" % err)
s.sendall(b'\x00\x00\x00\x04ruok')
proto = s.recv(struct.unpack(">I", s.recv(4))[0])
datas = s.recv(struct.unpack(">I", s.recv(4))[0])
s.close()
data = json.loads(datas.decode("ascii"))
if not data:
return warn("No storages")
metrics = []
messages = []
level = 0
if output_metrics:
for storage_id, sdata in sorted(data.items()):
for name in nodiff_names:
new_metric(metrics, storage_id, name, sdata[name])
if status:
now = time.time()
if os.path.exists(status):
dt = now - os.stat(status).st_mtime
if dt > 0: # sanity :)
with open(status) as f: # Read previous
old = json.loads(f.read())
dt /= per_times[per]
for storage_id, sdata in sorted(data.items()):
sdata['sample-time'] = now
if storage_id in old:
sold = old[storage_id]
for name in diff_names:
v = (sdata[name] - sold[name]) / dt
new_metric(metrics, storage_id, name, v)
with open(status, 'w') as f: # save current
f.write(json.dumps(data))
for storage_id, sdata in sorted(data.items()):
if sdata['last-transaction'] == NO_TRANSACTION:
messages.append("Empty storage %r" % storage_id)
level = max(level, 1)
if not messages:
messages.append('OK')
return result(messages, metrics, level or None)
def main(args=None):
if args is None:
args = sys.argv[1:]
parser = optparse.OptionParser(__doc__)
parser.add_option(
'-m', '--output-metrics', action="store_true",
help="Output metrics.",
)
parser.add_option(
'-s', '--status-path',
help="Path to status file, needed to get rate metrics",
)
parser.add_option(
'-u', '--time-units', type='choice', default='minutes',
choices=['seconds', 'minutes', 'hours', 'days'],
help="Time unit for rate metrics",
)
(options, args) = parser.parse_args(args)
[addr] = args
return check(
addr, options.output_metrics, options.status_path, options.time_units)
if __name__ == '__main__':
main()
ZEO-5.0.0a0/src/ZEO/nagios.rst 0000664 0000000 0000000 00000007334 12737730500 0015643 0 ustar 00root root 0000000 0000000 ZEO Nagios plugin
=================
ZEO includes a script that provides a nagios monitor plugin:
>>> import pkg_resources, time
>>> nagios = pkg_resources.load_entry_point(
... 'ZEO', 'console_scripts', 'zeo-nagios')
In its simplest form, the script just checks if it can get status:
>>> import ZEO
>>> addr, stop = ZEO.server('test.fs')
>>> saddr = ':'.join(map(str, addr)) # (host, port) -> host:port
>>> nagios([saddr])
Empty storage u'1'
1
The storage was empty. In that case, the monitor warned as much.
Let's add some data:
>>> ZEO.DB(addr).close()
>>> nagios([saddr])
OK
If we stop the server, we'll error:
>>> stop()
>>> nagios([saddr])
Can't connect [Errno 61] Connection refused
2
Metrics
-------
The monitor will optionally output server metric data. There are 2
kinds of metrics it can output: level and rate metrics. If we use the
-m/--output-metrics option, we'll just get level metrics:
>>> addr, stop = ZEO.server('test.fs')
>>> saddr = ':'.join(map(str, addr)) # (host, port) -> host:port
>>> nagios([saddr, '-m'])
OK|active_txns=0
| connections=0
waiting=0
We only got the metrics that are levels, like current number of
connections. If we want rate metrics, we need to be able to save
values from run to run. We need to use the -s/--status-path option to
specify the name of a file for status information:
>>> nagios([saddr, '-m', '-sstatus'])
OK|active_txns=0
| connections=0
waiting=0
We still didn't get any rate metrics, because we've only run once.
Let's actually do something with the database and then make another
sample.
>>> db = ZEO.DB(addr)
>>> nagios([saddr, '-m', '-sstatus'])
OK|active_txns=0
| connections=1
waiting=0
aborts=0.0
commits=0.0
conflicts=0.0
conflicts_resolved=0.0
loads=81.226297803
stores=0.0
Note that this time, we saw that there was a connection.
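The rate computation the plugin performs can be sketched as follows. This is
a simplified illustration, not ZEO code; the function name ``rate_metrics``
is made up for this example. The plugin saves cumulative counters in the
status file, then on the next run divides each counter's delta by the
elapsed time expressed in the chosen time unit:

```python
# Sketch of the nagios plugin's rate-metric computation (illustrative only).
def rate_metrics(old, new, elapsed, unit_seconds=60.0):
    """Compute per-unit rates from two cumulative counter samples.

    old, new: dicts of cumulative counters (e.g. loads, stores).
    elapsed: seconds between the two samples.
    unit_seconds: size of the rate unit (60.0 -> per minute).
    """
    dt = elapsed / unit_seconds
    return {name: (new[name] - old[name]) / dt for name in old}

old = {'loads': 100, 'stores': 10}
new = {'loads': 160, 'stores': 10}
# Samples taken two minutes apart, rates per minute:
rates = rate_metrics(old, new, elapsed=120.0)
print(rates['loads'])   # 30.0 loads per minute
print(rates['stores'])  # 0.0 stores per minute
```

This is why the very first run with ``-s`` prints no rate metrics: there is
no earlier sample yet to difference against.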
The ZEO.nagios module provides a check function that can be used by
other monitors (e.g. that get address data from ZooKeeper). It takes:
- Address string,
- Metrics flag.
- Status file name (or None), and
- Time units for rate metrics
::
>>> import ZEO.nagios
>>> ZEO.nagios.check(saddr, True, 'status', 'seconds')
OK|active_txns=0
| connections=1
waiting=0
aborts=0.0
commits=0.0
conflicts=0.0
conflicts_resolved=0.0
loads=0.0
stores=0.0
>>> db.close()
>>> stop()
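The rate metrics above are computed by comparing counter values
against those saved in the status file on the previous run. A minimal
sketch of that bookkeeping (the function name, JSON status format, and
``units`` parameter are illustrative assumptions, not ZEO.nagios
internals):

```python
import json
import os
import time

def rate_metrics(counters, status_path, units=1.0):
    """Sketch of rate-metric bookkeeping (illustrative only; the
    real ZEO.nagios status-file format differs)."""
    now = time.time()
    rates = {}
    if os.path.exists(status_path):
        # Compare against the counters saved by the previous run.
        with open(status_path) as f:
            prev = json.load(f)
        elapsed = now - prev["time"]
        if elapsed > 0:
            for name, value in counters.items():
                if name in prev["counters"]:
                    rates[name] = (
                        value - prev["counters"][name]) / elapsed * units
    # Save the current counters as the baseline for the next run.
    with open(status_path, "w") as f:
        json.dump({"time": now, "counters": counters}, f)
    return rates
```

The first call can only save a baseline, which is why the monitor
printed no rate metrics on its first run with ``-sstatus``.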
Multi-storage servers
---------------------
A ZEO server can host multiple storages. (This is a feature that will
likely be dropped in the future.) When this is the case, the monitor
prefixes metrics with a storage id.
>>> addr, stop = ZEO.server(
...     storage_conf = """
...     <mappingstorage first>
...     </mappingstorage>
...     <mappingstorage second>
...     </mappingstorage>
...     """)
>>> saddr = ':'.join(map(str, addr)) # (host, port) -> host:port
>>> nagios([saddr, '-m', '-sstatus'])
Empty storage u'first'|first:active_txns=0
Empty storage u'second'
| first:connections=0
first:waiting=0
second:active_txns=0
second:connections=0
second:waiting=0
1
>>> nagios([saddr, '-m', '-sstatus'])
Empty storage u'first'|first:active_txns=0
Empty storage u'second'
| first:connections=0
first:waiting=0
second:active_txns=0
second:connections=0
second:waiting=0
first:aborts=0.0
first:commits=0.0
first:conflicts=0.0
first:conflicts_resolved=0.0
first:loads=42.42
first:stores=0.0
second:aborts=0.0
second:commits=0.0
second:conflicts=0.0
second:conflicts_resolved=0.0
second:loads=42.42
second:stores=0.0
1
>>> stop()
ZEO-5.0.0a0/src/ZEO/ordering.rst 0000664 0000000 0000000 00000023347 12737730500 0016176 0 ustar 00root root 0000000 0000000 ==============================
Response ordering requirements
==============================
ZEO servers are logically concurrent because they serve multiple
clients in multiple threads. Because of this, we have to make sure
that information about object histories remains consistent.
An object history is a sequence of object revisions. Each revision has
a tid, which is essentially a time stamp.
We load objects using either ``load``, which returns the current
object, or ``loadBefore``, which returns the object before a specific
time/tid.
When we cache revisions, we record the tid and the next/end tid, which
may be None. The end tid is important for choosing a revision for
``loadBefore``, as well as for determining whether a cached value is
current, for ``load``.
Because the client and server are multi-threaded, the client may see
data out of order. Let's consider some scenarios.
Scenarios
=========
When considering ordering scenarios, we'll consider two different
client behaviors: traditional (T) and loadBefore (B).
The *traditional* behavior is the one used in ZODB 4. It uses the storage
``load(oid)`` method to load objects if it hasn't seen an invalidation
for the object. If it has seen an invalidation, it uses
``loadBefore(oid, START)``, where ``START`` is the transaction time of
the first invalidation it's seen. If it hasn't seen an invalidation
*for an object*, it uses ``load(oid)`` and then checks again for an
invalidation. If it sees an invalidation, then it retries using
``loadBefore``. This approach **assumes that invalidations for a tid
are returned before loads for a tid**.
The *loadBefore* behavior, used in ZODB5, always determines
transaction start time, ``START`` at the beginning of a transaction by
calling the storage's ``sync`` method and then querying the storage's
``lastTransaction`` method (and adding 1). It loads objects
exclusively using ``loadBefore(oid, START)``.
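The "adding 1" step operates on 8-byte big-endian tids. A minimal
sketch (using ``struct`` directly rather than ZODB's ``p64``/``u64``
helpers; the function name is illustrative):

```python
import struct

def next_tid(tid):
    # START = lastTransaction + 1, where a tid is an 8-byte
    # big-endian unsigned integer (essentially a time stamp).
    (value,) = struct.unpack(">Q", tid)
    return struct.pack(">Q", value + 1)
```

``loadBefore(oid, next_tid(last))`` then returns the revision that was
current as of transaction ``last``.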
Scenario 1, Invalidations seen after loads for transaction
----------------------------------------------------------
This scenario could occur because the commits are for a different
client, and a hypothetical server doesn't block loads while
committing, or sends invalidations in a way that might delay them (but
not send them out of order).
T1
- client starts a transaction
- client load(O1) gets O1-T1
- client load(O2)
- Server commits O2-T2
- Server loads (O2-T2)
- Client gets O2-T2, updates the client cache, and completes load
- Client sees invalidation for O2-T2. If the
client is smart, it doesn't update the cache.
The transaction now has inconsistent data, because it should have
loaded whatever O2 was before T2. Because the invalidation came
in after O2 was loaded, the load was unaffected.
B1
- client starts a transaction. Sets START to T1+1
- client loadBefore(O1, T1+1) gets O1-T1, T1, None
- client loadBefore(O2, T1+1)
- Server commits O2-T2
- Server loadBefore(O2, T1+1) -> O2-T0-T2
(assuming that the revision of O2 before T2 was T0)
- Client gets O2-T0-T2, updates cache.
- Client sees invalidation for O2-T2. No update to the cache is
necessary.
In this scenario, loadBefore prevents reading incorrect data.
A variation on this scenario is that the client sees invalidations
from ``tpc_finish`` in another thread after loads for the same
transaction.
Scenario 2, Client sees invalidations for later transaction before load result
------------------------------------------------------------------------------
T2
- client starts a transaction
- client load(O1) gets O1-T1
- client load(O2)
- Server loads (O2-T0)
- Server commits O2-T2
- Client sees invalidation for O2-T2. O2 isn't in the cache, so
nothing to do.
- Client gets O2-T0, updates the client cache, and completes load
The cache is now incorrect. It has O2-T0-None, meaning it thinks
O2-T0 is current.
The transaction is OK, because it got a consistent value for O2.
B2
- client starts a transaction. Sets START to T1+1
- client loadBefore(O1, T1+1) gets O1-T1, T1, None
- client loadBefore(O2, T1+1)
- Server loadBefore(O2, T1+1) -> O2-T0-None
- Server commits O2-T2
- Client sees invalidation for O2-T2. O2 isn't in the cache, so
nothing to do.
- Client gets O2-T0-None, and completes load
ZEO 4 doesn't cache loadBefore results with no ending transaction.
Assume ZEO 5 updates the client cache.
For ZEO 5, the cache is now incorrect. It has O2-T0-None, meaning
it thinks O2-T0 is current.
The transaction is OK, because it got a consistent value for O2.
In this case, ``loadBefore`` didn't prevent an invalid cache value.
Scenario 3, client sees invalidation after lastTransaction result
------------------------------------------------------------------
(This doesn't affect the traditional behavior.)
B3
- The client cache has a last tid of T1.
- ZODB calls sync() then calls lastTransaction. If so configured,
ZEO calls lastTransaction on the server. This is mainly to make a
round trip to get in-flight invalidations. We don't necessarily
need to use the value. In fact, in protocol 5, we could just add a
sync method that just makes a round trip, but does nothing else.
- Server commits O1-T2, O2-T2.
- Server reads and returns T2. (It doesn't matter what it returns.)
- client sets START to T1+1, because lastTransaction is based on
what's in the cache, which is based on invalidations.
- Client loadBefore(O1, T2+1), finds O1-T1-None in cache and uses
it.
- Client gets invalidation for O1-T2. Updates cache to O1-T1-T2.
- Client loadBefore(O2, T1+1), gets O2-T1-None
This is OK, as long as the client doesn't do anything with the
lastTransaction result in ``sync``.
Implementation notes
====================
ZEO 4
-----
The ZEO 4 server sends data to the client in correct order with
respect to loads and invalidations (or tpc_finish results). This is a
consequence of the fact that invalidations are sent in a callback
called when the storage lock is held, blocking loads while committing,
and the fact that requests for a particular client are
handled by a single thread on the server, and that all output for a
client goes through a thread-safe queue.
Invalidations are sent from different threads than clients. Outgoing
data is queued, however, using Python lists, which are protected by
the GIL. This means that the serialization provided through storage
locks is preserved by the way that server outputs are queued. The
queueing mechanism is in part a consequence of the way asyncore, used
by ZEO 4, works.
In ZEO 4 clients, invalidations and loads are handled by separate
threads. This means that even though data arrive in order, they may
not be processed in order.
T1
The existing servers mitigate this by blocking loads while
committing. On the client, this is still a potential issue because loads
and invalidations are handled by separate threads, however, locks are
used on the client to assure that invalidations are processed before
blocked loads complete.
T2
Existing storage servers serialize commits (and thus sending of
invalidations) and loads. As with scenario T1, threading on the
client can cause load results and invalidations to be processed out
of order. To mitigate this, the client uses a load lock to track
when loads are invalidated while in flight and doesn't save to the
cache when they are. This is bad on multiple levels. It serializes
loads even when there are multiple threads. It may prevent writing
to the cache unnecessarily, if the invalidation is for a revision
before the one that was loaded.
B2
Here, we avoid incorrect returned values and an incorrect cache at the
cost of caching nothing.
ZEO 4.2.0 addressed this by using the same locking strategy for
``loadBefore`` that was used for ``load``, thus mitigating B2 the
same way it mitigates T2.
ZEO 5
-----
In ZEO(/ZODB) 5, we want to get more concurrency, both on the client
and on the server. On the client, cache invalidations and loads are
done by the same thread, which makes things a bit simpler. This lets
us get rid of the client load lock and prevents the scenarios above
with existing servers.
On the client, we'd like to stop serializing loads and commits. We'd
like commits (tpc_finish calls) to be in flight with loads (and with
other commits). In the current protocol, tpc_finish, load and
loadBefore are all synchronous calls that are handled by a single
thread on the server, so these calls end up being serialized on the
server anyway.
The server-side handling of invalidations is a bit trickier in ZEO 5
because there isn't a thread-safe queue of outgoing messages in ZEO 5
as there was in ZEO 4. The natural approach in ZEO 5 would be to use
asyncio's ``call_soon_threadsafe`` to send invalidations in a client's
thread. This could easily cause invalidations to be sent after loads.
As shown above, this isn't a problem for ZODB 5, at least assuming
that invalidations arrive in order. This would be a problem for
ZODB 4. For this reason, we require ZODB 5 for ZEO 5.
Note that this approach can't cause invalidations to be sent early,
because they could only be sent by the thread that's busy loading, so
scenario 2 wouldn't happen.
B2
Because the server sends invalidations by calling
``call_soon_threadsafe``, it's impossible for invalidations to be
sent while a load request is being handled.
The main server opportunity is allowing commits for separate oids to
happen concurrently. This wouldn't affect the invalidation/load
ordering though.
It would be nice not to block loads while making tpc_finish calls, but
storages do this anyway now, so there's nothing to be done about it
now. Storage locking requirements aren't well specified, and probably
should be rethought in light of ZODB5/loadBefore.
ZEO-5.0.0a0/src/ZEO/protocol.txt 0000664 0000000 0000000 00000004433 12737730500 0016230 0 ustar 00root root 0000000 0000000 ZEO Network Protocol (sans authentication)
==========================================
This document describes the ZEO network protocol. It assumes that the
optional authentication protocol isn't used. At the lowest
level, the protocol consists of sized messages. All communication
between the client and server consists of sized messages. A sized
message consists of a 4-byte unsigned big-endian content length,
followed by the content. There are two subprotocols, for protocol
negotiation, and for normal operation. The normal operation protocol
is a basic RPC protocol.
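As a sketch of the framing described above (illustrative only, not
ZEO's actual I/O code; the function names are assumptions):

```python
import struct

def frame(content):
    # A sized message: 4-byte unsigned big-endian content length,
    # followed by the content bytes themselves.
    return struct.pack(">I", len(content)) + content

def unframe(stream):
    # Split a well-formed byte stream back into message contents.
    messages = []
    while stream:
        (size,) = struct.unpack(">I", stream[:4])
        messages.append(stream[4:4 + size])
        stream = stream[4 + size:]
    return messages
```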
In the protocol negotiation phase, the server sends a protocol
identifier to the client. The client then sends the protocol it
chooses to use back to the server. The client or the server can fail
if it doesn't like the
protocol string sent by the other party. After sending their protocol
strings, the client and server switch to RPC mode.
The RPC protocol uses messages that are pickled tuples consisting of:
message_id
The message id is used to match replies with requests, allowing
multiple outstanding synchronous requests.
async_flag
An integer 0 for a regular (2-way) request and 1 for a one-way
request. Two-way requests have a reply. One way requests don't.
ZEO tries to use as many one-way requests as possible to avoid
network round trips.
name
The name of a method to call. If this is the special string
".reply", then the message is interpreted as a return from a
synchronous call.
args
A tuple of positional arguments or returned values.
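An illustrative sketch of this message shape (not ZEO's exact encoder;
the function names and pickle settings are assumptions):

```python
import pickle

def encode_call(message_id, async_flag, name, args):
    # async_flag is 0 for a regular (2-way) call, 1 for one-way.
    return pickle.dumps((message_id, async_flag, name, args))

def encode_reply(message_id, *results):
    # A reply reuses the same tuple shape, with the special
    # name ".reply" and the returned values as args.
    return pickle.dumps((message_id, 0, ".reply", results))

request = encode_call(1, 0, "register", ("1", False))
message_id, async_flag, name, args = pickle.loads(request)
```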
After making a connection and negotiating the protocol, the following
interactions occur:
- The client requests the authentication protocol by calling
getAuthProtocol. For this discussion, we'll assume the server
returns None. Note that if the server doesn't require
authentication, this step is optional.
- The client calls register passing a storage identifier and a
read-only flag. The server doesn't return a value, but it may raise
an exception either if the storage doesn't exist, or if the
storage is readonly and the read-only flag passed by the client is
false.
At this point, the client and server send each other messages as
needed. The client may make regular or one-way calls to the
server. The server sends replies and one-way calls to the client.
ZEO-5.0.0a0/src/ZEO/runzeo.py 0000664 0000000 0000000 00000032417 12737730500 0015525 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002, 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Start the ZEO storage server.
Usage: %s [-C URL] [-a ADDRESS] [-f FILENAME] [-h]
Options:
-C/--configuration URL -- configuration file or URL
-a/--address ADDRESS -- server address of the form PORT, HOST:PORT, or PATH
(a PATH must contain at least one "/")
-f/--filename FILENAME -- filename for FileStorage
-t/--timeout TIMEOUT -- transaction timeout in seconds (default no timeout)
-h/--help -- print this usage message and exit
--pid-file PATH -- relative path to output file containing this process's pid;
default $(INSTANCE_HOME)/var/ZEO.pid but only if envar
INSTANCE_HOME is defined
Unless -C is specified, -a and -f are required.
"""
from __future__ import print_function
# The code here is designed to be reused by other, similar servers.
# For the foreseeable future, it must work under Python 2.1 as well as
# 2.2 and above.
import asyncore
import os
import sys
import signal
import socket
import logging
import ZConfig.datatypes
import ZEO
from zdaemon.zdoptions import ZDOptions
logger = logging.getLogger('ZEO.runzeo')
_pid = str(os.getpid())
def log(msg, level=logging.INFO, exc_info=False):
"""Internal: generic logging function."""
message = "(%s) %s" % (_pid, msg)
logger.log(level, message, exc_info=exc_info)
def parse_binding_address(arg):
# Caution: Not part of the official ZConfig API.
obj = ZConfig.datatypes.SocketBindingAddress(arg)
return obj.family, obj.address
def windows_shutdown_handler():
# Called by the signal mechanism on Windows to perform shutdown.
import asyncore
asyncore.close_all()
class ZEOOptionsMixin:
storages = None
def handle_address(self, arg):
self.family, self.address = parse_binding_address(arg)
def handle_filename(self, arg):
from ZODB.config import FileStorage # That's a FileStorage *opener*!
class FSConfig:
def __init__(self, name, path):
self._name = name
self.path = path
self.stop = None
def getSectionName(self):
return self._name
if not self.storages:
self.storages = []
name = str(1 + len(self.storages))
conf = FileStorage(FSConfig(name, arg))
self.storages.append(conf)
testing_exit_immediately = False
def handle_test(self, *args):
self.testing_exit_immediately = True
def add_zeo_options(self):
self.add(None, None, None, "test", self.handle_test)
self.add(None, None, "a:", "address=", self.handle_address)
self.add(None, None, "f:", "filename=", self.handle_filename)
self.add("family", "zeo.address.family")
self.add("address", "zeo.address.address",
required="no server address specified; use -a or -C")
self.add("read_only", "zeo.read_only", default=0)
self.add("client_conflict_resolution",
"zeo.client_conflict_resolution",
default=0)
self.add("invalidation_queue_size", "zeo.invalidation_queue_size",
default=100)
self.add("invalidation_age", "zeo.invalidation_age")
self.add("transaction_timeout", "zeo.transaction_timeout",
"t:", "timeout=", float)
self.add('pid_file', 'zeo.pid_filename',
None, 'pid-file=')
self.add("ssl", "zeo.ssl")
class ZEOOptions(ZDOptions, ZEOOptionsMixin):
__doc__ = __doc__
logsectionname = "eventlog"
schemadir = os.path.dirname(ZEO.__file__)
def __init__(self):
ZDOptions.__init__(self)
self.add_zeo_options()
self.add("storages", "storages",
required="no storages specified; use -f or -C")
def realize(self, *a, **k):
ZDOptions.realize(self, *a, **k)
nunnamed = [s for s in self.storages if s.name is None]
if nunnamed:
if len(nunnamed) > 1:
return self.usage("No more than one storage may be unnamed.")
if [s for s in self.storages if s.name == '1']:
return self.usage(
"Can't have an unnamed storage and a storage named 1.")
for s in self.storages:
if s.name is None:
s.name = '1'
break
class ZEOServer:
def __init__(self, options):
self.options = options
def main(self):
self.setup_default_logging()
self.check_socket()
self.clear_socket()
self.make_pidfile()
try:
self.open_storages()
self.setup_signals()
self.create_server()
self.loop_forever()
finally:
self.server.close()
self.clear_socket()
self.remove_pidfile()
def setup_default_logging(self):
if self.options.config_logger is not None:
return
# No log file is configured; default to stderr.
root = logging.getLogger()
root.setLevel(logging.INFO)
fmt = logging.Formatter(
"------\n%(asctime)s %(levelname)s %(name)s %(message)s",
"%Y-%m-%dT%H:%M:%S")
handler = logging.StreamHandler()
handler.setFormatter(fmt)
root.addHandler(handler)
def check_socket(self):
if (isinstance(self.options.address, tuple) and
self.options.address[1] is None):
self.options.address = self.options.address[0], 0
return
if self.can_connect(self.options.family, self.options.address):
self.options.usage("address %s already in use" %
repr(self.options.address))
def can_connect(self, family, address):
s = socket.socket(family, socket.SOCK_STREAM)
try:
s.connect(address)
except socket.error:
return 0
else:
s.close()
return 1
def clear_socket(self):
if isinstance(self.options.address, type("")):
try:
os.unlink(self.options.address)
except os.error:
pass
def open_storages(self):
self.storages = {}
for opener in self.options.storages:
log("opening storage %r using %s"
% (opener.name, opener.__class__.__name__))
self.storages[opener.name] = opener.open()
def setup_signals(self):
"""Set up signal handlers.
The signal handler for SIGFOO is a method handle_sigfoo().
If no handler method is defined for a signal, the signal
action is not changed from its initial value. The handler
method is called without additional arguments.
"""
if os.name != "posix":
if os.name == "nt":
self.setup_win32_signals()
return
if hasattr(signal, 'SIGXFSZ'):
signal.signal(signal.SIGXFSZ, signal.SIG_IGN) # Special case
init_signames()
for sig, name in signames.items():
method = getattr(self, "handle_" + name.lower(), None)
if method is not None:
def wrapper(sig_dummy, frame_dummy, method=method):
method()
signal.signal(sig, wrapper)
def setup_win32_signals(self):
# Borrow the Zope Signals package win32 support, if available.
# Signals does a check/log for the availability of pywin32.
try:
import Signals.Signals
except ImportError:
logger.debug("Signals package not found. "
"Windows-specific signal handler "
"will *not* be installed.")
return
SignalHandler = Signals.Signals.SignalHandler
if SignalHandler is not None: # may be None if no pywin32.
SignalHandler.registerHandler(signal.SIGTERM,
windows_shutdown_handler)
SignalHandler.registerHandler(signal.SIGINT,
windows_shutdown_handler)
SIGUSR2 = 12 # not in signal module on Windows.
SignalHandler.registerHandler(SIGUSR2, self.handle_sigusr2)
def create_server(self):
self.server = create_server(self.storages, self.options)
def loop_forever(self):
if self.options.testing_exit_immediately:
print("testing exit immediately")
else:
self.server.loop()
def handle_sigterm(self):
log("terminated by SIGTERM")
sys.exit(0)
def handle_sigint(self):
log("terminated by SIGINT")
sys.exit(0)
def handle_sighup(self):
log("restarted by SIGHUP")
sys.exit(1)
def handle_sigusr2(self):
# log rotation signal - do the same as Zope 2.7/2.8...
if self.options.config_logger is None or os.name not in ("posix", "nt"):
log("received SIGUSR2, but it was not handled!",
level=logging.WARNING)
return
loggers = [self.options.config_logger]
if os.name == "posix":
for l in loggers:
l.reopen()
log("Log files reopened successfully", level=logging.INFO)
else: # nt - same rotation code as in Zope's Signals/Signals.py
for l in loggers:
for f in l.handler_factories:
handler = f()
if hasattr(handler, 'rotate') and callable(handler.rotate):
handler.rotate()
log("Log files rotation complete", level=logging.INFO)
def _get_pidfile(self):
pidfile = self.options.pid_file
# 'pidfile' is marked as not required.
if not pidfile:
# Try to find a reasonable location if the pidfile is not
# set. If we are running in a Zope environment, we can
# safely assume INSTANCE_HOME.
instance_home = os.environ.get("INSTANCE_HOME")
if not instance_home:
# If all our attempts failed, just log a message and
# proceed.
logger.debug("'pidfile' option not set, and 'INSTANCE_HOME' "
"environment variable could not be found. "
"Cannot guess pidfile location.")
return
self.options.pid_file = os.path.join(instance_home,
"var", "ZEO.pid")
def make_pidfile(self):
if not self.options.read_only:
self._get_pidfile()
pidfile = self.options.pid_file
if pidfile is None:
return
pid = os.getpid()
try:
if os.path.exists(pidfile):
os.unlink(pidfile)
f = open(pidfile, 'w')
print(pid, file=f)
f.close()
log("created PID file '%s'" % pidfile)
except IOError:
logger.error("PID file '%s' cannot be opened" % pidfile)
def remove_pidfile(self):
if not self.options.read_only:
pidfile = self.options.pid_file
if pidfile is None:
return
try:
if os.path.exists(pidfile):
os.unlink(pidfile)
log("removed PID file '%s'" % pidfile)
except IOError:
logger.error("PID file '%s' could not be removed" % pidfile)
def create_server(storages, options):
from ZEO.StorageServer import StorageServer
return StorageServer(
options.address,
storages,
read_only = options.read_only,
client_conflict_resolution=options.client_conflict_resolution,
invalidation_queue_size = options.invalidation_queue_size,
invalidation_age = options.invalidation_age,
transaction_timeout = options.transaction_timeout,
ssl = options.ssl,
)
# Signal names
signames = None
def signame(sig):
"""Return a symbolic name for a signal.
Return "signal NNN" if there is no corresponding SIG name in the
signal module.
"""
if signames is None:
init_signames()
return signames.get(sig) or "signal %d" % sig
def init_signames():
global signames
signames = {}
for name, sig in signal.__dict__.items():
k_startswith = getattr(name, "startswith", None)
if k_startswith is None:
continue
if k_startswith("SIG") and not k_startswith("SIG_"):
signames[sig] = name
# Main program
def main(args=None):
options = ZEOOptions()
options.realize(args)
s = ZEOServer(options)
s.main()
def run(args):
options = ZEOOptions()
options.realize(args)
s = ZEOServer(options)
s.run()
if __name__ == "__main__":
main()
ZEO-5.0.0a0/src/ZEO/schema.xml 0000664 0000000 0000000 00000002254 12737730500 0015607 0 ustar 00root root 0000000 0000000
This schema describes the configuration of the ZEO storage server
process.
One or more storages that are provided by the ZEO server. The
section names are used as the storage names, and must be unique
within each ZEO storage server. Traditionally, these names
represent small integers starting at '1'.
ZEO-5.0.0a0/src/ZEO/scripts/ 0000775 0000000 0000000 00000000000 12737730500 0015311 5 ustar 00root root 0000000 0000000 ZEO-5.0.0a0/src/ZEO/scripts/README.txt 0000664 0000000 0000000 00000003613 12737730500 0017012 0 ustar 00root root 0000000 0000000 This directory contains a collection of utilities for working with
ZEO. Some are more useful than others. If you install ZODB using
distutils ("python setup.py install"), some of these will be
installed.
Unless otherwise noted, these scripts are invoked with the name of the
Data.fs file as their only argument. Example: checkbtrees.py data.fs.
parsezeolog.py -- parse BLATHER logs from ZEO server
This script may be obsolete. It has not been tested against the
current log output of the ZEO server.
Reports on the time and size of transactions committed by a ZEO
server, by inspecting log messages at BLATHER level.
timeout.py -- script to test transaction timeout
usage: timeout.py address delay [storage-name]
This script connects to a storage, begins a transaction, calls store()
and tpc_vote(), and then sleeps forever. This should trigger the
transaction timeout feature of the server.
zeopack.py -- pack a ZEO server
The script connects to a server and calls pack() on a specific
storage. See the script for usage details.
zeoreplay.py -- experimental script to replay transactions from a ZEO log
Like parsezeolog.py, this may be obsolete because it was written
against an earlier version of the ZEO server. See the script for
usage details.
zeoup.py
usage: zeoup.py [options]
The test will connect to a ZEO server, load the root object, and
attempt to update the zeoup counter in the root. It will report
success if it updates the counter or if it gets a ConflictError. A
ConflictError is considered a success, because the client was able to
start a transaction.
See the script for details about the options.
zeoserverlog.py -- analyze ZEO server log for performance statistics
See the module docstring for details; there are a large number of
options. New in ZODB3 3.1.4.
zeoqueue.py -- report number of clients currently waiting in the ZEO queue
See the module docstring for details.
ZEO-5.0.0a0/src/ZEO/scripts/__init__.py 0000664 0000000 0000000 00000000002 12737730500 0017412 0 ustar 00root root 0000000 0000000 #
ZEO-5.0.0a0/src/ZEO/scripts/cache_simul.py 0000775 0000000 0000000 00000050405 12737730500 0020146 0 ustar 00root root 0000000 0000000 #! /usr/bin/env python
##############################################################################
#
# Copyright (c) 2001-2005 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Cache simulation.
Usage: simul.py [-s size] tracefile
Options:
-s size: cache size in MB (default 20 MB)
-i: summarizing interval in minutes (default 15; max 60)
-r: rearrange factor
Note:
- The simulation isn't perfect.
- The simulation will be far off if the trace file
was created starting with a non-empty cache
"""
from __future__ import print_function
import bisect
import getopt
import struct
import re
import sys
import ZEO.cache
from ZODB.utils import z64, u64
# we assign ctime locally to facilitate test replacement!
from time import ctime
import six
def usage(msg):
print(msg, file=sys.stderr)
print(__doc__, file=sys.stderr)
def main(args=None):
if args is None:
args = sys.argv[1:]
# Parse options.
MB = 1<<20
cachelimit = 20*MB
rearrange = 0.8
simclass = CircularCacheSimulation
interval_step = 15
try:
opts, args = getopt.getopt(args, "s:i:r:")
except getopt.error as msg:
usage(msg)
return 2
for o, a in opts:
if o == '-s':
cachelimit = int(float(a)*MB)
elif o == '-i':
interval_step = int(a)
elif o == '-r':
rearrange = float(a)
else:
assert False, (o, a)
interval_step *= 60
if interval_step <= 0:
interval_step = 60
elif interval_step > 3600:
interval_step = 3600
if len(args) != 1:
usage("exactly one file argument required")
return 2
filename = args[0]
# Open file.
if filename.endswith(".gz"):
# Open gzipped file.
try:
import gzip
except ImportError:
print("can't read gzipped files (no module gzip)", file=sys.stderr)
return 1
try:
f = gzip.open(filename, "rb")
except IOError as msg:
print("can't open %s: %s" % (filename, msg), file=sys.stderr)
return 1
elif filename == "-":
# Read from stdin.
f = sys.stdin
else:
# Open regular file.
try:
f = open(filename, "rb")
except IOError as msg:
print("can't open %s: %s" % (filename, msg), file=sys.stderr)
return 1
# Create simulation object.
sim = simclass(cachelimit, rearrange)
interval_sim = simclass(cachelimit, rearrange)
# Print output header.
sim.printheader()
# Read trace file, simulating cache behavior.
f_read = f.read
unpack = struct.unpack
FMT = ">iiH8s8s"
FMT_SIZE = struct.calcsize(FMT)
assert FMT_SIZE == 26
last_interval = None
while 1:
# Read a record and decode it.
r = f_read(FMT_SIZE)
if len(r) < FMT_SIZE:
break
ts, code, oidlen, start_tid, end_tid = unpack(FMT, r)
if ts == 0:
# Must be a misaligned record caused by a crash; skip 8 bytes
# and try again. Why 8? Lost in the mist of history.
f.seek(f.tell() - FMT_SIZE + 8)
continue
oid = f_read(oidlen)
if len(oid) < oidlen:
break
# Decode the code.
dlen, version, code = ((code & 0x7fffff00) >> 8,
code & 0x80,
code & 0x7e)
# And pass it to the simulation.
this_interval = int(ts) // interval_step
if this_interval != last_interval:
if last_interval is not None:
interval_sim.report()
interval_sim.restart()
if not interval_sim.warm:
sim.restart()
last_interval = this_interval
sim.event(ts, dlen, version, code, oid, start_tid, end_tid)
interval_sim.event(ts, dlen, version, code, oid, start_tid, end_tid)
f.close()
# Finish simulation.
interval_sim.report()
sim.finish()
class Simulation(object):
"""Base class for simulations.
The driver program calls: event(), printheader(), finish().
The standard event() method calls these additional methods:
write(), load(), inval(), report(), restart(); the standard
finish() method also calls report().
"""
def __init__(self, cachelimit, rearrange):
self.cachelimit = cachelimit
self.rearrange = rearrange
# Initialize global statistics.
self.epoch = None
self.warm = False
self.total_loads = 0
self.total_hits = 0 # subclass must increment
self.total_invals = 0 # subclass must increment
self.total_writes = 0
if not hasattr(self, "extras"):
self.extras = (self.extraname,)
self.format = self.format + " %7s" * len(self.extras)
# Reset per-run statistics and set up simulation data.
self.restart()
def restart(self):
# Reset per-run statistics.
self.loads = 0
self.hits = 0 # subclass must increment
self.invals = 0 # subclass must increment
self.writes = 0
self.ts0 = None
def event(self, ts, dlen, _version, code, oid,
start_tid, end_tid):
# Record first and last timestamp seen.
if self.ts0 is None:
self.ts0 = ts
if self.epoch is None:
self.epoch = ts
self.ts1 = ts
# Simulate cache behavior. Caution: the codes in the trace file
# record whether the actual cache missed or hit on each load, but
# that bears no necessary relationship to whether the simulated cache
# will hit or miss. Relatedly, if the actual cache needed to store
# an object, the simulated cache may not need to (it may already
# have the data).
action = code & 0x70
if action & 0x20:
# Load.
self.loads += 1
self.total_loads += 1
# Asserting that dlen is 0 iff it's a load miss.
# assert (dlen == 0) == (code in (0x20, 0x24))
self.load(oid, dlen, start_tid, code)
elif action & 0x40:
# Store.
assert dlen
self.write(oid, dlen, start_tid, end_tid)
elif action & 0x10:
# Invalidate.
self.inval(oid, start_tid)
elif action == 0x00:
# Restart.
self.restart()
else:
raise ValueError("unknown trace code 0x%x" % code)
def write(self, oid, size, start_tid, end_tid):
pass
def load(self, oid, size, start_tid, code):
# Must increment .hits and .total_hits as appropriate.
pass
def inval(self, oid, start_tid):
# Must increment .invals and .total_invals as appropriate.
pass
format = "%12s %6s %7s %7s %6s %6s %7s"
# Subclass should override extraname to name known instance variables;
# if extraname is 'foo', both self.foo and self.total_foo must exist:
extraname = "*** please override ***"
def printheader(self):
print("%s, cache size %s bytes" % (self.__class__.__name__,
addcommas(self.cachelimit)))
self.extraheader()
extranames = tuple([s.upper() for s in self.extras])
args = ("START TIME", "DUR.", "LOADS", "HITS",
"INVALS", "WRITES", "HITRATE") + extranames
print(self.format % args)
def extraheader(self):
pass
nreports = 0
def report(self):
if not hasattr(self, 'ts1'):
return
self.nreports += 1
args = (ctime(self.ts0)[4:-8],
duration(self.ts1 - self.ts0),
self.loads, self.hits, self.invals, self.writes,
hitrate(self.loads, self.hits))
args += tuple([getattr(self, name) for name in self.extras])
print(self.format % args)
def finish(self):
# Make sure that the last line of output ends with "OVERALL". This
# makes it much easier for another program parsing the output to
# find summary statistics.
print('-'*74)
if self.nreports < 2:
self.report()
else:
self.report()
args = (
ctime(self.epoch)[4:-8],
duration(self.ts1 - self.epoch),
self.total_loads,
self.total_hits,
self.total_invals,
self.total_writes,
hitrate(self.total_loads, self.total_hits))
args += tuple([getattr(self, "total_" + name)
for name in self.extras])
print(self.format % args)
# For use in CircularCacheSimulation.
class CircularCacheEntry(object):
__slots__ = (
# object key: an (oid, start_tid) pair, where start_tid is the
# tid of the transaction that created this revision of oid
'key',
# tid of transaction that created the next revision; z64 iff
# this is the current revision
'end_tid',
# Offset from start of file to the object's data record; this
# includes all overhead bytes (status byte, size bytes, etc).
'offset',
)
def __init__(self, key, end_tid, offset):
self.key = key
self.end_tid = end_tid
self.offset = offset
from ZEO.cache import ZEC_HEADER_SIZE
class CircularCacheSimulation(Simulation):
"""Simulate the ZEO 3.0 cache."""
# The cache is managed as a single file with a pointer that
# goes around the file, circularly, forever. New objects
# are written at the current pointer, evicting whatever was
# there previously.
extras = "evicts", "inuse"
evicts = 0
def __init__(self, cachelimit, rearrange):
from ZEO import cache
Simulation.__init__(self, cachelimit, rearrange)
self.total_evicts = 0 # number of cache evictions
# Current offset in file.
self.offset = ZEC_HEADER_SIZE
# Map offset in file to (size, CircularCacheEntry) pair, or to
# (size, None) if the offset starts a free block.
self.filemap = {ZEC_HEADER_SIZE: (self.cachelimit - ZEC_HEADER_SIZE,
None)}
# Map key to CircularCacheEntry. A key is an (oid, tid) pair.
self.key2entry = {}
# Map oid to tid of current revision.
self.current = {}
# Map oid to list of (start_tid, end_tid) pairs in sorted order.
# Used to find matching key for load of non-current data.
self.noncurrent = {}
# The number of overhead bytes needed to store an object pickle
# on disk (all bytes beyond those needed for the object pickle).
self.overhead = ZEO.cache.allocated_record_overhead
# save evictions so we can replay them, if necessary
self.evicted = {}
def restart(self):
Simulation.restart(self)
if self.evicts:
self.warm = True
self.evicts = 0
self.evicted_hit = self.evicted_miss = 0
evicted_hit = evicted_miss = 0
def load(self, oid, size, tid, code):
if (code == 0x20) or (code == 0x22):
# Trying to load current revision.
if oid in self.current: # else it's a cache miss
self.hits += 1
self.total_hits += 1
tid = self.current[oid]
entry = self.key2entry[(oid, tid)]
offset_offset = self.offset - entry.offset
if offset_offset < 0:
offset_offset += self.cachelimit
assert offset_offset >= 0
if offset_offset > self.rearrange * self.cachelimit:
# we haven't accessed it in a while. Move it forward
size = self.filemap[entry.offset][0]
self._remove(*entry.key)
self.add(oid, size, tid)
elif oid in self.evicted:
size, e = self.evicted[oid]
self.write(oid, size, e.key[1], z64, 1)
self.evicted_hit += 1
else:
self.evicted_miss += 1
return
# May or may not be trying to load current revision.
cur_tid = self.current.get(oid)
if cur_tid == tid:
self.hits += 1
self.total_hits += 1
return
# It's a load for non-current data. Do we know about this oid?
L = self.noncurrent.get(oid)
if L is None:
return # cache miss
i = bisect.bisect_left(L, (tid, None))
if i == 0:
# This tid is smaller than any we know about -- miss.
return
lo, hi = L[i-1]
assert lo < tid
if tid > hi:
# No data in the right tid range -- miss.
return
# Cache hit.
self.hits += 1
self.total_hits += 1
# (oid, tid) is in the cache. Remove it: take it out of key2entry,
# and in `filemap` mark the space it occupied as being free. The
# caller is responsible for removing it from `current` or `noncurrent`.
def _remove(self, oid, tid):
key = oid, tid
e = self.key2entry.pop(key)
pos = e.offset
size, _e = self.filemap[pos]
assert e is _e
self.filemap[pos] = size, None
def _remove_noncurrent_revisions(self, oid):
noncurrent_list = self.noncurrent.get(oid)
if noncurrent_list:
self.invals += len(noncurrent_list)
self.total_invals += len(noncurrent_list)
for start_tid, end_tid in noncurrent_list:
self._remove(oid, start_tid)
del self.noncurrent[oid]
def inval(self, oid, tid):
if tid == z64:
# This is part of startup cache verification: forget everything
# about this oid.
self._remove_noncurrent_revisions(oid)
if oid in self.evicted:
del self.evicted[oid]
cur_tid = self.current.get(oid)
if cur_tid is None:
# We don't have current data, so nothing more to do.
return
# We had current data for oid, but no longer.
self.invals += 1
self.total_invals += 1
del self.current[oid]
if tid == z64:
# Startup cache verification: forget this oid entirely.
self._remove(oid, cur_tid)
return
# Our current data becomes non-current data.
# Add the validity range to the list of non-current data for oid.
assert cur_tid < tid
L = self.noncurrent.setdefault(oid, [])
bisect.insort_left(L, (cur_tid, tid))
# Update the end of oid's validity range in its CircularCacheEntry.
e = self.key2entry[oid, cur_tid]
assert e.end_tid == z64
e.end_tid = tid
def write(self, oid, size, start_tid, end_tid, evhit=0):
if end_tid == z64:
# Storing current revision.
if oid in self.current: # we already have it in cache
if evhit:
raise AssertionError(
"unexpected evicted-oid hit: oid is already current")
return
self.current[oid] = start_tid
self.writes += 1
self.total_writes += 1
self.add(oid, size, start_tid)
return
if evhit:
raise AssertionError(
"unexpected evicted-oid hit for a non-current store")
# Storing non-current revision.
L = self.noncurrent.setdefault(oid, [])
p = start_tid, end_tid
if p in L:
return # we already have it in cache
bisect.insort_left(L, p)
self.writes += 1
self.total_writes += 1
self.add(oid, size, start_tid, end_tid)
# Add `oid` to the cache, evicting objects as needed to make room.
# This updates `filemap` and `key2entry`; it's the caller's
# responsibilty to update `current` or `noncurrent` appropriately.
def add(self, oid, size, start_tid, end_tid=z64):
key = oid, start_tid
assert key not in self.key2entry
size += self.overhead
avail = self.makeroom(size+1) # see cache.py
e = CircularCacheEntry(key, end_tid, self.offset)
self.filemap[self.offset] = size, e
self.key2entry[key] = e
self.offset += size
# All the space made available must be accounted for in filemap.
excess = avail - size
if excess:
self.filemap[self.offset] = excess, None
# Evict enough objects to make at least `need` contiguous bytes, starting
# at `self.offset`, available. Evicted objects are removed from
# `filemap`, `key2entry`, `current` and `noncurrent`. The caller is
# responsible for adding new entries to `filemap` to account for all
# the freed bytes, and for advancing `self.offset`. The number of bytes
# freed is the return value, and will be >= need.
def makeroom(self, need):
if self.offset + need > self.cachelimit:
self.offset = ZEC_HEADER_SIZE
pos = self.offset
while need > 0:
assert pos < self.cachelimit
size, e = self.filemap.pop(pos)
if e: # there is an object here (else it's already free space)
self.evicts += 1
self.total_evicts += 1
assert pos == e.offset
_e = self.key2entry.pop(e.key)
assert e is _e
oid, start_tid = e.key
if e.end_tid == z64:
del self.current[oid]
self.evicted[oid] = size-self.overhead, e
else:
L = self.noncurrent[oid]
L.remove((start_tid, e.end_tid))
need -= size
pos += size
return pos - self.offset # total number of bytes freed
def report(self):
self.check()
free = used = total = 0
for size, e in six.itervalues(self.filemap):
total += size
if e:
used += size
else:
free += size
self.inuse = round(100.0 * used / total, 1)
self.total_inuse = self.inuse
Simulation.report(self)
#print self.evicted_hit, self.evicted_miss
def check(self):
oidcount = 0
pos = ZEC_HEADER_SIZE
while pos < self.cachelimit:
size, e = self.filemap[pos]
if e:
oidcount += 1
assert self.key2entry[e.key].offset == pos
pos += size
assert oidcount == len(self.key2entry)
assert pos == self.cachelimit
def dump(self):
print(len(self.filemap))
L = list(self.filemap)
L.sort()
for k in L:
v = self.filemap[k]
print(k, v[0], repr(v[1]))
def roundup(size):
k = MINSIZE
while k < size:
k += k
return k
def hitrate(loads, hits):
if loads < 1:
return 'n/a'
return "%5.1f%%" % (100.0 * hits / loads)
def duration(secs):
mm, ss = divmod(secs, 60)
hh, mm = divmod(mm, 60)
if hh:
return "%d:%02d:%02d" % (hh, mm, ss)
if mm:
return "%d:%02d" % (mm, ss)
return "%d" % ss
nre = re.compile(r'([-+]?)(\d+)([.]\d*)?').match
def addcommas(n):
sign, s, d = nre(str(n)).group(1, 2, 3)
if d == '.0':
d = ''
result = s[-3:]
s = s[:-3]
while s:
result = s[-3:]+','+result
s = s[:-3]
return (sign or '') + result + (d or '')
import random
def maybe(f, p=0.5):
if random.random() < p:
f()
if __name__ == "__main__":
sys.exit(main())
# ZEO-5.0.0a0/src/ZEO/scripts/cache_stats.py
from __future__ import print_function
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Trace file statistics analyzer.
Usage: stats.py [-h] [-i interval] [-q] [-s] [-S] [-v] [-X] tracefile
-h: print histogram of object load frequencies
-i: summarizing interval in minutes (default 15; max 60)
-q: quiet; don't print summaries
-s: print histogram of object sizes
-S: don't print statistics
-v: verbose; print each record
-X: enable heuristic checking for misaligned records: oids > 2**32
will be rejected; this requires the tracefile to be seekable
"""
"""File format:
Each record is 26 bytes, plus a variable number of bytes to store an oid,
with the following layout. Numbers are big-endian integers.
Offset Size Contents
0 4 timestamp (seconds since 1/1/1970)
4 3 data size, in 256-byte increments, rounded up
7 1 code (see below)
8 2 object id length
10 8 start tid
18 8 end tid
26 variable object id
The code at offset 7 packs three fields:
Mask bits Contents
0x80 1 set if there was a non-empty version string
0x7e 6 function and outcome code
0x01 1 current cache file (0 or 1)
The "current cache file" bit is no longer used; it refers to a 2-file
cache scheme used before ZODB 3.3.
The function and outcome codes are documented in detail at the end of
this file in the 'explain' dictionary. Note that the keys there (and
also the arguments to _trace() in ClientStorage.py) are 'code & 0x7e',
i.e. the low bit is always zero.
"""
import sys
import time
import getopt
import struct
# we assign ctime locally to facilitate test replacement!
from time import ctime
import six
def usage(msg):
print(msg, file=sys.stderr)
print(__doc__, file=sys.stderr)
def main(args=None):
if args is None:
args = sys.argv[1:]
# Parse options
verbose = False
quiet = False
dostats = True
print_size_histogram = False
print_histogram = False
interval = 15*60 # Every 15 minutes
heuristic = False
try:
opts, args = getopt.getopt(args, "hi:qsSvX")
except getopt.error as msg:
usage(msg)
return 2
for o, a in opts:
if o == '-h':
print_histogram = True
elif o == "-i":
interval = int(60 * float(a))
if interval <= 0:
interval = 60
elif interval > 3600:
interval = 3600
elif o == "-q":
quiet = True
verbose = False
elif o == "-s":
print_size_histogram = True
elif o == "-S":
dostats = False
elif o == "-v":
verbose = True
elif o == '-X':
heuristic = True
else:
assert False, (o, opts)
if len(args) != 1:
usage("exactly one file argument required")
return 2
filename = args[0]
# Open file
if filename.endswith(".gz"):
# Open gzipped file
try:
import gzip
except ImportError:
print("can't read gzipped files (no module gzip)", file=sys.stderr)
return 1
try:
f = gzip.open(filename, "rb")
except IOError as msg:
print("can't open %s: %s" % (filename, msg), file=sys.stderr)
return 1
elif filename == '-':
# Read from stdin
f = sys.stdin
else:
# Open regular file
try:
f = open(filename, "rb")
except IOError as msg:
print("can't open %s: %s" % (filename, msg), file=sys.stderr)
return 1
rt0 = time.time()
bycode = {} # map code to count of occurrences
byinterval = {} # map code to count in current interval
records = 0 # number of trace records read
versions = 0 # number of trace records with versions
datarecords = 0 # number of records with dlen set
datasize = 0 # sum of dlen across records with dlen set
oids = {} # map oid to number of times it was loaded
bysize = {} # map data size to number of loads
bysizew = {} # map data size to number of writes
total_loads = 0
t0 = None # first timestamp seen
te = None # most recent timestamp seen
h0 = None # timestamp at start of current interval
he = None # timestamp at end of current interval
thisinterval = None # generally te//interval
f_read = f.read
unpack = struct.unpack
FMT = ">iiH8s8s"
FMT_SIZE = struct.calcsize(FMT)
assert FMT_SIZE == 26
# Read file, gathering statistics, and printing each record if verbose.
print(' '*16, "%7s %7s %7s %7s" % ('loads', 'hits', 'inv(h)', 'writes'), end=' ')
print('hitrate')
try:
while 1:
r = f_read(FMT_SIZE)
if len(r) < FMT_SIZE:
break
ts, code, oidlen, start_tid, end_tid = unpack(FMT, r)
if ts == 0:
# Must be a misaligned record caused by a crash.
if not quiet:
print("Skipping 8 bytes at offset", f.tell() - FMT_SIZE)
f.seek(f.tell() - FMT_SIZE + 8)
continue
oid = f_read(oidlen)
if len(oid) < oidlen:
break
records += 1
if t0 is None:
t0 = ts
thisinterval = t0 // interval
h0 = he = ts
te = ts
if ts // interval != thisinterval:
if not quiet:
dumpbyinterval(byinterval, h0, he)
byinterval = {}
thisinterval = ts // interval
h0 = ts
he = ts
dlen, code = (code & 0x7fffff00) >> 8, code & 0xff
if dlen:
datarecords += 1
datasize += dlen
if code & 0x80:
version = 'V'
versions += 1
else:
version = '-'
code &= 0x7e
bycode[code] = bycode.get(code, 0) + 1
byinterval[code] = byinterval.get(code, 0) + 1
if dlen:
if code & 0x70 == 0x20: # All loads
bysize[dlen] = d = bysize.get(dlen) or {}
d[oid] = d.get(oid, 0) + 1
elif code & 0x70 == 0x50: # All stores
bysizew[dlen] = d = bysizew.get(dlen) or {}
d[oid] = d.get(oid, 0) + 1
if verbose:
print("%s %02x %s %016x %016x %c%s" % (
ctime(ts)[4:-5],
code,
oid_repr(oid),
U64(start_tid),
U64(end_tid),
version,
dlen and (' '+str(dlen)) or ""))
if code & 0x70 == 0x20:
oids[oid] = oids.get(oid, 0) + 1
total_loads += 1
elif code == 0x00: # restart
if not quiet:
dumpbyinterval(byinterval, h0, he)
byinterval = {}
thisinterval = ts // interval
h0 = he = ts
if not quiet:
print(ctime(ts)[4:-5], end=' ')
print('='*20, "Restart", '='*20)
except KeyboardInterrupt:
print("\nInterrupted. Stats so far:\n")
end_pos = f.tell()
f.close()
rte = time.time()
if not quiet:
dumpbyinterval(byinterval, h0, he)
# Error if nothing was read
if not records:
print("No records processed", file=sys.stderr)
return 1
# Print statistics
if dostats:
print()
print("Read %s trace records (%s bytes) in %.1f seconds" % (
addcommas(records), addcommas(end_pos), rte-rt0))
print("Versions: %s records used a version" % addcommas(versions))
print("First time: %s" % ctime(t0))
print("Last time: %s" % ctime(te))
print("Duration: %s seconds" % addcommas(te-t0))
print("Data recs: %s (%.1f%%), average size %d bytes" % (
addcommas(datarecords),
100.0 * datarecords / records,
datasize / datarecords))
print("Hit rate: %.1f%% (load hits / loads)" % hitrate(bycode))
print()
codes = sorted(bycode.keys())
print("%13s %4s %s" % ("Count", "Code", "Function (action)"))
for code in codes:
print("%13s %02x %s" % (
addcommas(bycode.get(code, 0)),
code,
explain.get(code) or "*** unknown code ***"))
# Print histogram.
if print_histogram:
print()
print("Histogram of object load frequency")
total = len(oids)
print("Unique oids: %s" % addcommas(total))
print("Total loads: %s" % addcommas(total_loads))
s = addcommas(total)
width = max(len(s), len("objects"))
fmt = "%5d %" + str(width) + "s %5.1f%% %5.1f%% %5.1f%%"
hdr = "%5s %" + str(width) + "s %6s %6s %6s"
print(hdr % ("loads", "objects", "%obj", "%load", "%cum"))
cum = 0.0
for binsize, count in histogram(oids):
obj_percent = 100.0 * count / total
load_percent = 100.0 * count * binsize / total_loads
cum += load_percent
print(fmt % (binsize, addcommas(count),
obj_percent, load_percent, cum))
# Print size histogram.
if print_size_histogram:
print()
print("Histograms of object sizes")
print()
dumpbysize(bysizew, "written", "writes")
dumpbysize(bysize, "loaded", "loads")
def dumpbysize(bysize, how, how2):
print()
print("Unique sizes %s: %s" % (how, addcommas(len(bysize))))
print("%10s %6s %6s" % ("size", "objs", how2))
sizes = sorted(bysize.keys())
for size in sizes:
loads = 0
for n in six.itervalues(bysize[size]):
loads += n
print("%10s %6d %6d" % (addcommas(size),
len(bysize.get(size, "")),
loads))
def dumpbyinterval(byinterval, h0, he):
loads = hits = invals = writes = 0
for code in byinterval:
if code & 0x20:
n = byinterval[code]
loads += n
if code in (0x22, 0x26):
hits += n
elif code & 0x40:
writes += byinterval[code]
elif code & 0x10:
if code != 0x10:
invals += byinterval[code]
if loads:
hr = "%5.1f%%" % (100.0 * hits / loads)
else:
hr = 'n/a'
print("%s-%s %7s %7s %7s %7s %7s" % (
ctime(h0)[4:-8], ctime(he)[14:-8],
loads, hits, invals, writes, hr))
def hitrate(bycode):
loads = hits = 0
for code in bycode:
if code & 0x70 == 0x20:
n = bycode[code]
loads += n
if code in (0x22, 0x26):
hits += n
if loads:
return 100.0 * hits / loads
else:
return 0.0
def histogram(d):
bins = {}
for v in six.itervalues(d):
bins[v] = bins.get(v, 0) + 1
L = sorted(bins.items())
return L
def U64(s):
return struct.unpack(">Q", s)[0]
def oid_repr(oid):
if isinstance(oid, six.binary_type) and len(oid) == 8:
return '%16x' % U64(oid)
else:
return repr(oid)
def addcommas(n):
sign, s = '', str(n)
if s[0] == '-':
sign, s = '-', s[1:]
i = len(s) - 3
while i > 0:
s = s[:i] + ',' + s[i:]
i -= 3
return sign + s
explain = {
# The first hex digit shows the operation, the second the outcome.
# If the second digit is in "02468" then it is a 'miss'.
# If it is in "ACE" then it is a 'hit'.
0x00: "_setup_trace (initialization)",
0x10: "invalidate (miss)",
0x1A: "invalidate (hit, version)",
0x1C: "invalidate (hit, saving non-current)",
# 0x1E can occur during startup verification.
0x1E: "invalidate (hit, discarding current or non-current)",
0x20: "load (miss)",
0x22: "load (hit)",
0x24: "load (non-current, miss)",
0x26: "load (non-current, hit)",
0x50: "store (version)",
0x52: "store (current, non-version)",
0x54: "store (non-current)",
}
if __name__ == "__main__":
sys.exit(main())
# ZEO-5.0.0a0/src/ZEO/scripts/parsezeolog.py
#!/usr/bin/env python
"""Parse the BLATHER logging generated by ZEO2.
An example of the log format is:
2002-04-15T13:05:29 BLATHER(-100) ZEO Server storea(3235680, [714], 235339406490168806) ('10.0.26.30', 45514)
"""
from __future__ import print_function
import re
import time
rx_time = re.compile(r'(\d\d\d\d-\d\d-\d\d)T(\d\d:\d\d:\d\d)')
def parse_time(line):
"""Return the time portion of a zLOG line in seconds or None."""
mo = rx_time.match(line)
if mo is None:
return None
date, time_ = mo.group(1, 2)
date_l = [int(elt) for elt in date.split('-')]
time_l = [int(elt) for elt in time_.split(':')]
return int(time.mktime(tuple(date_l + time_l + [0, 0, 0])))
rx_meth = re.compile("zrpc:\d+ calling (\w+)\((.*)")
def parse_method(line):
pass
def parse_line(line):
"""Parse a log entry and return time, method info, and client."""
t = parse_time(line)
if t is None:
return None, None
mo = rx_meth.search(line)
if mo is None:
return None, None
meth_name = mo.group(1)
meth_args = mo.group(2).strip()
if meth_args.endswith(')'):
meth_args = meth_args[:-1]
meth_args = [s.strip() for s in meth_args.split(",")]
m = meth_name, tuple(meth_args)
return t, m
class TStats:
counter = 1
def __init__(self):
self.id = TStats.counter
TStats.counter += 1
fields = ("time", "vote", "done", "user", "path")
fmt = "%-24s %5s %5s %-15s %s"
hdr = fmt % fields
def report(self):
"""Print a report about the transaction"""
t = time.ctime(self.begin)
if hasattr(self, "vote"):
d_vote = self.vote - self.begin
else:
d_vote = "*"
if hasattr(self, "finish"):
d_finish = self.finish - self.begin
else:
d_finish = "*"
print(self.fmt % (time.ctime(self.begin), d_vote, d_finish,
self.user, self.url))
class TransactionParser:
def __init__(self):
self.txns = {}
self.skipped = 0
def parse(self, line):
t, m = parse_line(line)
if t is None:
return
name = m[0]
meth = getattr(self, name, None)
if meth is not None:
meth(t, m[1])
def tpc_begin(self, time, args):
t = TStats()
t.begin = time
t.user = args[1]
t.url = args[2]
t.objects = []
tid = eval(args[0])
self.txns[tid] = t
def get_txn(self, args):
tid = eval(args[0])
try:
return self.txns[tid]
except KeyError:
print("uknown tid", repr(tid))
return None
def tpc_finish(self, time, args):
t = self.get_txn(args)
if t is None:
return
t.finish = time
def vote(self, time, args):
t = self.get_txn(args)
if t is None:
return
t.vote = time
def get_txns(self):
L = [(t.id, t) for t in self.txns.values()]
L.sort()
return [t for (id, t) in L]
if __name__ == "__main__":
import fileinput
p = TransactionParser()
i = 0
for line in fileinput.input():
i += 1
try:
p.parse(line)
except:
print("line", i)
raise
print("Transaction: %d" % len(p.txns))
print(TStats.hdr)
for txn in p.get_txns():
txn.report()
# ZEO-5.0.0a0/src/ZEO/scripts/tests.py
##############################################################################
#
# Copyright (c) 2004 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
from __future__ import print_function
import doctest, re, unittest
from zope.testing import renormalizing
def test_suite():
return unittest.TestSuite((
doctest.DocFileSuite(
'zeopack.test',
checker=renormalizing.RENormalizing([
(re.compile('usage: Usage: '), 'Usage: '), # Py 2.4
(re.compile('options:'), 'Options:'), # Py 2.4
]),
globs={'print_function': print_function},
),
))
# ZEO-5.0.0a0/src/ZEO/scripts/timeout.py
#!/usr/bin/env python
"""Transaction timeout test script.
This script connects to a storage, begins a transaction, calls store()
and tpc_vote(), and then sleeps forever. This should trigger the
transaction timeout feature of the server.
usage: timeout.py address delay [storage-name]
"""
from __future__ import print_function
import sys
import time
from ZODB.Transaction import Transaction
from ZODB.tests.MinPO import MinPO
from ZODB.tests.StorageTestBase import zodb_pickle
from ZEO.ClientStorage import ClientStorage
ZERO = '\0'*8
def main():
if len(sys.argv) not in (3, 4):
sys.stderr.write("Usage: timeout.py address delay [storage-name]\n" %
sys.argv[0])
sys.exit(2)
hostport = sys.argv[1]
delay = float(sys.argv[2])
if sys.argv[3:]:
name = sys.argv[3]
else:
name = "1"
if "/" in hostport:
address = hostport
else:
if ":" in hostport:
i = hostport.index(":")
host, port = hostport[:i], hostport[i+1:]
else:
host, port = "", hostport
port = int(port)
address = (host, port)
print("Connecting to %s..." % repr(address))
storage = ClientStorage(address, name)
print("Connected. Now starting a transaction...")
oid = storage.new_oid()
revid = ZERO
data = MinPO("timeout.py")
pickled_data = zodb_pickle(data)
t = Transaction()
t.user = "timeout.py"
storage.tpc_begin(t)
storage.store(oid, revid, pickled_data, '', t)
print("Stored. Now voting...")
storage.tpc_vote(t)
print("Voted; now sleeping %s..." % delay)
time.sleep(delay)
print("Done.")
if __name__ == "__main__":
main()
# ZEO-5.0.0a0/src/ZEO/scripts/zeopack.py
#!/usr/bin/env python
import logging
import optparse
import socket
import sys
import time
import traceback
import ZEO.ClientStorage
from six.moves import map
from six.moves import zip
usage = """Usage: %prog [options] [servers]
Pack one or more storages hosted by ZEO servers.
The positional arguments specify 0 or more tcp servers to pack, where
each is of the form:
host:port[:name]
"""
WAIT = 10 # wait no more than 10 seconds for client to connect
def _main(args=None, prog=None):
if args is None:
args = sys.argv[1:]
parser = optparse.OptionParser(usage, prog=prog)
parser.add_option(
"-d", "--days", dest="days", type='int', default=0,
help=("Pack objects that are older than this number of days")
)
parser.add_option(
"-t", "--time", dest="time",
help=("Time of day to pack to of the form: HH[:MM[:SS]]. "
"Defaults to current time.")
)
parser.add_option(
"-u", "--unix", dest="unix_sockets", action="append",
help=("A unix-domain-socket server to connect to, of the form: "
"path[:name]")
)
parser.remove_option('-h')
parser.add_option(
"-h", dest="host",
help=("Deprecated: "
"Used with the -p and -S options, specified the host to "
"connect to.")
)
parser.add_option(
"-p", type="int", dest="port",
help=("Deprecated: "
"Used with the -h and -S options, specifies "
"the port to connect to.")
)
parser.add_option(
"-S", dest="name", default='1',
help=("Deprecated: Used with the -h and -p, options, or with the "
"-U option specified the storage name to use. Defaults to 1.")
)
parser.add_option(
"-U", dest="unix",
help=("Deprecated: Used with the -S option, "
"Unix-domain socket to connect to.")
)
if not args:
parser.print_help()
return
def error(message):
sys.stderr.write("Error:\n%s\n" % message)
sys.exit(1)
options, args = parser.parse_args(args)
packt = time.time()
if options.time:
time_ = list(map(int, options.time.split(':')))
if len(time_) == 1:
time_ += (0, 0)
elif len(time_) == 2:
time_ += (0,)
elif len(time_) > 3:
error("Invalid time value: %r" % options.time)
packt = time.localtime(packt)
packt = time.mktime(packt[:3]+tuple(time_)+packt[6:])
packt -= options.days * 86400
servers = []
if options.host:
if not options.port:
error("If host (-h) is specified then a port (-p) must be "
"specified as well.")
servers.append(((options.host, options.port), options.name))
elif options.port:
servers.append(((socket.gethostname(), options.port), options.name))
if options.unix:
servers.append((options.unix, options.name))
for server in args:
data = server.split(':')
if len(data) in (2, 3):
host = data[0]
try:
port = int(data[1])
except ValueError:
error("Invalid port in server specification: %r" % server)
addr = host, port
if len(data) == 2:
name = '1'
else:
name = data[2]
else:
error("Invalid server specification: %r" % server)
servers.append((addr, name))
for server in options.unix_sockets or ():
data = server.split(':')
if len(data) == 1:
addr = data[0]
name = '1'
elif len(data) == 2:
addr = data[0]
name = data[1]
else:
error("Invalid server specification: %r" % server)
servers.append((addr, name))
if not servers:
error("No servers specified.")
for addr, name in servers:
try:
cs = ZEO.ClientStorage.ClientStorage(
addr, storage=name, wait=False, read_only=1)
for i in range(60):
if cs.is_connected():
break
time.sleep(1)
else:
sys.stderr.write("Couldn't connect to: %r\n"
% ((addr, name), ))
cs.close()
continue
cs.pack(packt, wait=True)
cs.close()
except:
traceback.print_exception(*(sys.exc_info()+(99, sys.stderr)))
error("Error packing storage %s in %r" % (name, addr))
def main(*args):
root_logger = logging.getLogger()
old_level = root_logger.getEffectiveLevel()
logging.getLogger().setLevel(logging.WARNING)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
"%(name)s %(levelname)s %(message)s"))
logging.getLogger().addHandler(handler)
try:
_main(*args)
finally:
logging.getLogger().setLevel(old_level)
logging.getLogger().removeHandler(handler)
if __name__ == "__main__":
main()
# ZEO-5.0.0a0/src/ZEO/scripts/zeopack.test
zeopack
=======
The zeopack script can be used to pack one or more storages. It uses
ClientStorage to do this. To test it's behavior, we'll replace the
normal ClientStorage with a fake one that echos information we'll want
for our test:
>>> class ClientStorage:
... connect_wait = 0
... def __init__(self, *args, **kw):
... if args[0] == 'bad':
... import logging
... logging.getLogger('test.ClientStorage').error(
... "I hate this address, %r", args[0])
... raise ValueError("Bad address")
... print("ClientStorage(%s %s)" % (
... repr(args)[1:-1],
... ', '.join("%s=%r" % i for i in sorted(kw.items())),
... ))
... def pack(self, t=None, *args, **kw):
... now = time.localtime(time.time())
... local_midnight = time.mktime(now[:3]+(0, 0, 0)+now[6:])
... t -= local_midnight # adjust for tz
... t += 86400*7 # add a week to make sure we're positive
... print("pack(%r,%s %s)" % (
... t, repr(args)[1:-1],
... ', '.join("%s=%r" % i for i in sorted(kw.items())),
... ))
... def is_connected(self):
... self.connect_wait -= 1
... print('is_connected', self.connect_wait < 0)
... return self.connect_wait < 0
... def close(self):
... print("close()")
>>> import ZEO
>>> ClientStorage_orig = ZEO.ClientStorage.ClientStorage
>>> ZEO.ClientStorage.ClientStorage = ClientStorage
Now, we're ready to try the script:
>>> from ZEO.scripts.zeopack import main
If we call it with no arguments, we get help:
>>> import os; os.environ['COLUMNS'] = '80' # for consistent optparse output
>>> main([], 'zeopack')
Usage: zeopack [options] [servers]
Pack one or more storages hosted by ZEO servers.
The positional arguments specify 0 or more tcp servers to pack, where
each is of the form:
host:port[:name]
Options:
-d DAYS, --days=DAYS Pack objects that are older than this number of days
-t TIME, --time=TIME Time of day to pack to of the form: HH[:MM[:SS]].
Defaults to current time.
-u UNIX_SOCKETS, --unix=UNIX_SOCKETS
A unix-domain-socket server to connect to, of the
form: path[:name]
-h HOST Deprecated: Used with the -p and -S options, specified
the host to connect to.
-p PORT Deprecated: Used with the -h and -S options, specifies
the port to connect to.
-S NAME Deprecated: Used with the -h and -p, options, or with
the -U option specified the storage name to use.
Defaults to 1.
-U UNIX Deprecated: Used with the -S option, Unix-domain
socket to connect to.
Since packing involves time, we'd better have our way with it. Replace
time.time() with a function that always returns the same value. The
value is timezone dependent.
>>> import time
>>> time_orig = time.time
>>> time.time = lambda : time.mktime((2009, 3, 24, 10, 55, 17, 1, 83, -1))
>>> sleep_orig = time.sleep
>>> def sleep(t):
... print('sleep(%r)' % t)
>>> time.sleep = sleep
Normally, we pass one or more TCP server specifications:
>>> main(["host1:8100", "host1:8100:2"])
ClientStorage(('host1', 8100), read_only=1, storage='1', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
We can also pass unix-domain-socket servers using the -u option:
>>> main(["-ufoo", "-ubar:spam", "host1:8100", "host1:8100:2"])
ClientStorage(('host1', 8100), read_only=1, storage='1', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
ClientStorage('foo', read_only=1, storage='1', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
ClientStorage('bar', read_only=1, storage='spam', wait=False)
is_connected True
pack(644117.0, wait=True)
close()
The -d option causes a pack time the given number of days earlier to
be used:
>>> main(["-ufoo", "-ubar:spam", "-d3", "host1:8100", "host1:8100:2"])
ClientStorage(('host1', 8100), read_only=1, storage='1', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
ClientStorage('foo', read_only=1, storage='1', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
ClientStorage('bar', read_only=1, storage='spam', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
The -t option allows us to control the time of day:
>>> main(["-ufoo", "-d3", "-t1:30", "host1:8100:2"])
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected True
pack(351000.0, wait=True)
close()
ClientStorage('foo', read_only=1, storage='1', wait=False)
is_connected True
pack(351000.0, wait=True)
close()
Connection timeout
------------------
The zeopack script tells ClientStorage not to wait for connections
before returning from the constructor, but will time out after 60
seconds of waiting for a connect.
>>> ClientStorage.connect_wait = 3
>>> main(["-d3", "-t1:30", "host1:8100:2"])
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected False
sleep(1)
is_connected False
sleep(1)
is_connected False
sleep(1)
is_connected True
pack(351000.0, wait=True)
close()
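The fake storage above exercises the same polling loop zeopack itself uses:
check ``is_connected()`` up to 60 times, sleeping a second between failed
checks.  A standalone sketch of that pattern (``storage`` and ``sleep`` are
stand-ins for the real ClientStorage and clock):

```python
import time

def wait_connected(storage, attempts=60, sleep=time.sleep):
    # Poll storage.is_connected() up to `attempts` times, sleeping one
    # second after each failed check.  Returns True as soon as the
    # storage reports a connection, False if it never connects.
    for _ in range(attempts):
        if storage.is_connected():
            return True
        sleep(1)
    return False

class FakeStorage:
    # Mirrors the fake ClientStorage above: connects after connect_wait checks.
    def __init__(self, connect_wait):
        self.connect_wait = connect_wait
    def is_connected(self):
        self.connect_wait -= 1
        return self.connect_wait < 0

slept = []
assert wait_connected(FakeStorage(2), attempts=5, sleep=slept.append) is True
assert slept == [1, 1]        # two failed checks before connecting
assert wait_connected(FakeStorage(99), attempts=3, sleep=slept.append) is False
```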
>>> def call_main(args):
... import sys
... old_stderr = sys.stderr
... sys.stderr = sys.stdout
... try:
... try:
... main(args)
... except SystemExit as v:
... print("Exited", v)
... finally:
... sys.stderr = old_stderr
>>> ClientStorage.connect_wait = 999
>>> call_main(["-d3", "-t1:30", "host1:8100", "host1:8100:2"])
... # doctest: +ELLIPSIS
ClientStorage(('host1', 8100), read_only=1, storage='1', wait=False)
is_connected False
sleep(1)
...
is_connected False
sleep(1)
Couldn't connect to: (('host1', 8100), '1')
close()
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected False
sleep(1)
...
is_connected False
sleep(1)
Couldn't connect to: (('host1', 8100), '2')
close()
>>> ClientStorage.connect_wait = 0
Legacy support
--------------
>>> main(["-d3", "-h", "host1", "-p", "8100", "-S", "2"])
ClientStorage(('host1', 8100), read_only=1, storage='2', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
>>> import socket
>>> old_gethostname = socket.gethostname
>>> socket.gethostname = lambda : 'test.host.com'
>>> main(["-d3", "-p", "8100"])
ClientStorage(('test.host.com', 8100), read_only=1, storage='1', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
>>> socket.gethostname = old_gethostname
>>> main(["-d3", "-U", "foo/bar", "-S", "2"])
ClientStorage('foo/bar', read_only=1, storage='2', wait=False)
is_connected True
pack(384917.0, wait=True)
close()
Error handling
--------------
>>> call_main(["-d3"])
Error:
No servers specified.
Exited 1
>>> call_main(["-d3", "a"])
Error:
Invalid server specification: 'a'
Exited 1
>>> call_main(["-d3", "a:b:c:d"])
Error:
Invalid server specification: 'a:b:c:d'
Exited 1
>>> call_main(["-d3", "a:b:2"])
Error:
Invalid port in server specification: 'a:b:2'
Exited 1
>>> call_main(["-d3", "-u", "a:b:2"])
Error:
Invalid server specification: 'a:b:2'
Exited 1
>>> call_main(["-d3", "-u", "bad"]) # doctest: +ELLIPSIS
test.ClientStorage ERROR I hate this address, 'bad'
Traceback (most recent call last):
...
ValueError: Bad address
Error:
Error packing storage 1 in 'bad'
Exited 1
Note that in the previous example, the first line was output through logging.
.. tear down
>>> ZEO.ClientStorage.ClientStorage = ClientStorage_orig
>>> time.time = time_orig
>>> time.sleep = sleep_orig
ZEO-5.0.0a0/src/ZEO/scripts/zeoqueue.py 0000775 0000000 0000000 00000025526 12737730500 0017542 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python2.3
"""Report on the number of currently waiting clients in the ZEO queue.
Usage: %(PROGRAM)s [options] logfile
Options:
-h / --help
Print this help text and exit.
-v / --verbose
Verbose output
-f file
--file file
Use the specified file to store the incremental state as a pickle. If
not given, %(STATEFILE)s is used.
-r / --reset
Reset the state of the tool. This blows away any existing state
pickle file and then exits -- it does not parse the file. Use this
when you rotate log files so that the next run will parse from the
beginning of the file.
"""
from __future__ import print_function
import os
import re
import sys
import time
import errno
import getopt
from ZEO._compat import load, dump
COMMASPACE = ', '
STATEFILE = 'zeoqueue.pck'
PROGRAM = sys.argv[0]
tcre = re.compile(r"""
    (?P<ymd>
\d{4}- # year
\d{2}- # month
\d{2}) # day
T # separator
    (?P<hms>
\d{2}: # hour
\d{2}: # minute
\d{2}) # second
""", re.VERBOSE)
ccre = re.compile(r"""
    zrpc-conn:(?P<addr>\d+.\d+.\d+.\d+:\d+)\s+
calling\s+
    (?P<method>
\w+) # the method
\( # args open paren
\' # string quote start
    (?P<tid>
\S+) # first argument -- usually the tid
\' # end of string
    (?P<rest>
.*) # rest of line
""", re.VERBOSE)
wcre = re.compile(r'Clients waiting: (?P<num>\d+)')
def parse_time(line):
"""Return the time portion of a zLOG line in seconds or None."""
mo = tcre.match(line)
if mo is None:
return None
date, time_ = mo.group('ymd', 'hms')
date_l = [int(elt) for elt in date.split('-')]
time_l = [int(elt) for elt in time_.split(':')]
    # time.mktime requires a tuple (or struct_time), not a list.
    return int(time.mktime(tuple(date_l + time_l + [0, 0, 0])))
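# For illustration, a self-contained version of the parsing done by
# parse_time above; the sample log lines are invented, modeled on the
# BLATHER entries quoted in the Status docstring.

```python
import re
import time

tcre = re.compile(r"(?P<ymd>\d{4}-\d{2}-\d{2})T(?P<hms>\d{2}:\d{2}:\d{2})")

def parse_time(line):
    """Return the time portion of a zLOG line in local epoch seconds, or None."""
    mo = tcre.match(line)
    if mo is None:
        return None
    date, time_ = mo.group('ymd', 'hms')
    date_l = [int(elt) for elt in date.split('-')]
    time_l = [int(elt) for elt in time_.split(':')]
    # time.mktime requires a tuple (or struct_time), not a list.
    return int(time.mktime(tuple(date_l + time_l + [0, 0, 0])))

t1 = parse_time("2002-12-16T06:16:05 BLATHER(-100) zrpc:12649 calling vote(...)")
t2 = parse_time("2002-12-16T06:17:05 BLATHER(-100) zrpc:12649 calling vote(...)")
assert t2 - t1 == 60                       # timestamps one minute apart
assert parse_time("no timestamp here") is None
```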
class Txn:
"""Track status of single transaction."""
def __init__(self, tid):
self.tid = tid
self.hint = None
self.begin = None
self.vote = None
self.abort = None
self.finish = None
self.voters = []
def isactive(self):
if self.begin and not (self.abort or self.finish):
return True
else:
return False
class Status:
"""Track status of ZEO server by replaying log records.
We want to keep track of several events:
- The last committed transaction.
- The last committed or aborted transaction.
- The last transaction that got the lock but didn't finish.
- The client address doing the first vote of a transaction.
- The number of currently active transactions.
- The number of reported queued transactions.
- Client restarts.
- Number of current connections (but this might not be useful).
We can observe these events by reading the following sorts of log
entries:
2002-12-16T06:16:05 BLATHER(-100) zrpc:12649 calling
tpc_begin('\x03I\x90((\xdbp\xd5', '', 'QueueCatal...
2002-12-16T06:16:06 BLATHER(-100) zrpc:12649 calling
vote('\x03I\x90((\xdbp\xd5')
2002-12-16T06:16:06 BLATHER(-100) zrpc:12649 calling
tpc_finish('\x03I\x90((\xdbp\xd5')
2002-12-16T10:46:10 INFO(0) ZSS:12649:1 Transaction blocked waiting
for storage. Clients waiting: 1.
2002-12-16T06:15:57 BLATHER(-100) zrpc:12649 connect from
('10.0.26.54', 48983):
2002-12-16T10:30:09 INFO(0) ZSS:12649:1 disconnected
"""
def __init__(self):
self.lineno = 0
self.pos = 0
self.reset()
def reset(self):
self.commit = None
self.commit_or_abort = None
self.last_unfinished = None
self.n_active = 0
self.n_blocked = 0
self.n_conns = 0
self.t_restart = None
self.txns = {}
def iscomplete(self):
# The status report will always be complete if we encounter an
# explicit restart.
if self.t_restart is not None:
return True
# If we haven't seen a restart, assume that seeing a finished
# transaction is good enough.
return self.commit is not None
def process_file(self, fp):
if self.pos:
if VERBOSE:
print('seeking to file position', self.pos)
fp.seek(self.pos)
while True:
line = fp.readline()
if not line:
break
self.lineno += 1
self.process(line)
self.pos = fp.tell()
def process(self, line):
if line.find("calling") != -1:
self.process_call(line)
elif line.find("connect") != -1:
self.process_connect(line)
# test for "locked" because word may start with "B" or "b"
elif line.find("locked") != -1:
self.process_block(line)
elif line.find("Starting") != -1:
self.process_start(line)
def process_call(self, line):
mo = ccre.search(line)
if mo is None:
return
called_method = mo.group('method')
# Exit early if we've got zeoLoad, because it's the most
# frequently called method and we don't use it.
if called_method == "zeoLoad":
return
t = parse_time(line)
meth = getattr(self, "call_%s" % called_method, None)
if meth is None:
return
client = mo.group('addr')
tid = mo.group('tid')
rest = mo.group('rest')
meth(t, client, tid, rest)
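# process_call dispatches dynamically: the parsed method name is mapped to
# a call_<name> handler via getattr, and methods without a handler are
# silently ignored.  A minimal sketch of that pattern (the Dispatcher class
# and its handlers are invented for illustration):

```python
class Dispatcher:
    def __init__(self):
        self.seen = []

    def process_call(self, method, *args):
        # Look up a call_<method> handler; unknown methods are ignored.
        handler = getattr(self, "call_%s" % method, None)
        if handler is None:
            return
        handler(*args)

    def call_vote(self, tid):
        self.seen.append(("vote", tid))

d = Dispatcher()
d.process_call("vote", "tid-1")
d.process_call("zeoLoad", "oid")     # no call_zeoLoad handler -> ignored
assert d.seen == [("vote", "tid-1")]
```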
def process_connect(self, line):
pass
def process_block(self, line):
mo = wcre.search(line)
if mo is None:
# assume that this was a restart message for the last blocked
# transaction.
self.n_blocked = 0
else:
self.n_blocked = int(mo.group('num'))
def process_start(self, line):
if line.find("Starting ZEO server") != -1:
self.reset()
self.t_restart = parse_time(line)
def call_tpc_begin(self, t, client, tid, rest):
txn = Txn(tid)
txn.begin = t
if rest[0] == ',':
i = 1
while rest[i].isspace():
i += 1
rest = rest[i:]
txn.hint = rest
self.txns[tid] = txn
self.n_active += 1
self.last_unfinished = txn
def call_vote(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.vote = t
txn.voters.append(client)
def call_tpc_abort(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.abort = t
txn.voters = []
self.n_active -= 1
if self.commit_or_abort:
# delete the old transaction
try:
del self.txns[self.commit_or_abort.tid]
except KeyError:
pass
self.commit_or_abort = txn
def call_tpc_finish(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.finish = t
txn.voters = []
self.n_active -= 1
if self.commit:
# delete the old transaction
try:
del self.txns[self.commit.tid]
except KeyError:
pass
if self.commit_or_abort:
# delete the old transaction
try:
del self.txns[self.commit_or_abort.tid]
except KeyError:
pass
self.commit = self.commit_or_abort = txn
def report(self):
print("Blocked transactions:", self.n_blocked)
if not VERBOSE:
return
if self.t_restart:
print("Server started:", time.ctime(self.t_restart))
if self.commit is not None:
t = self.commit_or_abort.finish
if t is None:
t = self.commit_or_abort.abort
print("Last finished transaction:", time.ctime(t))
# the blocked transaction should be the first one that calls vote
        # Sorting with a key avoids comparing Txn objects (or None begin
        # times) directly, which fails on Python 3.
        L = sorted(self.txns.values(), key=lambda txn: txn.begin or 0)
        for txn in L:
if txn.isactive():
began = txn.begin
if txn.voters:
print("Blocked client (first vote):", txn.voters[0])
print("Blocked transaction began at:", time.ctime(began))
print("Hint:", txn.hint)
print("Idle time: %d sec" % int(time.time() - began))
break
def usage(code, msg=''):
print(__doc__ % globals(), file=sys.stderr)
if msg:
print(msg, file=sys.stderr)
sys.exit(code)
def main():
global VERBOSE
VERBOSE = 0
file = STATEFILE
reset = False
# -0 is a secret option used for testing purposes only
seek = True
try:
opts, args = getopt.getopt(sys.argv[1:], 'vhf:r0',
['help', 'verbose', 'file=', 'reset'])
except getopt.error as msg:
usage(1, msg)
for opt, arg in opts:
if opt in ('-h', '--help'):
usage(0)
elif opt in ('-v', '--verbose'):
VERBOSE += 1
elif opt in ('-f', '--file'):
file = arg
elif opt in ('-r', '--reset'):
reset = True
elif opt == '-0':
seek = False
if reset:
# Blow away the existing state file and exit
try:
os.unlink(file)
if VERBOSE:
print('removing pickle state file', file)
except OSError as e:
if e.errno != errno.ENOENT:
raise
return
if not args:
usage(1, 'logfile is required')
if len(args) > 1:
usage(1, 'too many arguments: %s' % COMMASPACE.join(args))
path = args[0]
# Get the previous status object from the pickle file, if it is available
# and if the --reset flag wasn't given.
status = None
try:
statefp = open(file, 'rb')
try:
status = load(statefp)
if VERBOSE:
print('reading status from file', file)
finally:
statefp.close()
except IOError as e:
if e.errno != errno.ENOENT:
raise
if status is None:
status = Status()
if VERBOSE:
print('using new status')
if not seek:
status.pos = 0
fp = open(path, 'rb')
try:
status.process_file(fp)
finally:
fp.close()
# Save state
statefp = open(file, 'wb')
dump(status, statefp, 1)
statefp.close()
# Print the report and return the number of blocked clients in the exit
# status code.
status.report()
sys.exit(status.n_blocked)
if __name__ == "__main__":
main()
ZEO-5.0.0a0/src/ZEO/scripts/zeoreplay.py 0000664 0000000 0000000 00000021324 12737730500 0017677 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python2.3
"""Parse the BLATHER logging generated by ZEO, and optionally replay it.
Usage: zeointervals.py [options]
Options:
--help / -h
Print this message and exit.
--replay=storage
-r storage
Replay the parsed transactions through the new storage
--maxtxn=count
-m count
Parse no more than count transactions.
--report / -p
Print a report as we're parsing.
Unlike parsezeolog.py, this script generates timestamps for each transaction,
and sub-command in the transaction. We can use this to compare timings with
synthesized data.
"""
from __future__ import print_function
import re
import sys
import time
import getopt
import operator
# ZEO logs measure wall-clock time so for consistency we need to do the same
#from time import clock as now
from time import time as now
from ZODB.FileStorage import FileStorage
#from BDBStorage.BDBFullStorage import BDBFullStorage
#from Standby.primary import PrimaryStorage
#from Standby.config import RS_PORT
from ZODB.Transaction import Transaction
from ZODB.utils import p64
from functools import reduce
datecre = re.compile(r'(\d\d\d\d-\d\d-\d\d)T(\d\d:\d\d:\d\d)')
methcre = re.compile(r"ZEO Server (\w+)\((.*)\) \('(.*)', (\d+)")
class StopParsing(Exception):
pass
def usage(code, msg=''):
print(__doc__)
if msg:
print(msg)
sys.exit(code)
def parse_time(line):
"""Return the time portion of a zLOG line in seconds or None."""
mo = datecre.match(line)
if mo is None:
return None
date, time_ = mo.group(1, 2)
date_l = [int(elt) for elt in date.split('-')]
time_l = [int(elt) for elt in time_.split(':')]
    # time.mktime requires a tuple (or struct_time), not a list.
    return int(time.mktime(tuple(date_l + time_l + [0, 0, 0])))
def parse_line(line):
"""Parse a log entry and return time, method info, and client."""
t = parse_time(line)
if t is None:
return None, None, None
mo = methcre.search(line)
if mo is None:
return None, None, None
meth_name = mo.group(1)
meth_args = mo.group(2)
meth_args = [s.strip() for s in meth_args.split(',')]
m = meth_name, tuple(meth_args)
c = mo.group(3), mo.group(4)
return t, m, c
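# For illustration, the method/client extraction above can be exercised in
# isolation; the sample log line is invented, modeled on the BLATHER format
# this script parses.

```python
import re

methcre = re.compile(r"ZEO Server (\w+)\((.*)\) \('(.*)', (\d+)")

def parse_method(line):
    """Return ((method, args), (host, port)) for a log entry, or (None, None)."""
    mo = methcre.search(line)
    if mo is None:
        return None, None
    meth_name = mo.group(1)
    meth_args = tuple(s.strip() for s in mo.group(2).split(','))
    client = (mo.group(3), mo.group(4))
    return (meth_name, meth_args), client

line = ("2002-12-16T06:16:05 BLATHER(-100) "
        "ZEO Server storea(1, [24]) ('10.0.26.54', 48983)")
m, c = parse_method(line)
assert m == ('storea', ('1', '[24]'))
assert c == ('10.0.26.54', '48983')
```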
class StoreStat:
def __init__(self, when, oid, size):
self.when = when
self.oid = oid
self.size = size
# Crufty
def __getitem__(self, i):
if i == 0: return self.oid
if i == 1: return self.size
raise IndexError
class TxnStat:
def __init__(self):
self._begintime = None
self._finishtime = None
self._aborttime = None
self._url = None
self._objects = []
def tpc_begin(self, when, args, client):
self._begintime = when
# args are txnid, user, description (looks like it's always a url)
self._url = args[2]
def storea(self, when, args, client):
oid = int(args[0])
# args[1] is "[numbytes]"
size = int(args[1][1:-1])
s = StoreStat(when, oid, size)
self._objects.append(s)
def tpc_abort(self, when):
self._aborttime = when
def tpc_finish(self, when):
self._finishtime = when
# Mapping oid -> revid
_revids = {}
class ReplayTxn(TxnStat):
def __init__(self, storage):
self._storage = storage
self._replaydelta = 0
TxnStat.__init__(self)
def replay(self):
ZERO = '\0'*8
t0 = now()
t = Transaction()
self._storage.tpc_begin(t)
for obj in self._objects:
oid = obj.oid
revid = _revids.get(oid, ZERO)
# BAW: simulate a pickle of the given size
data = 'x' * obj.size
# BAW: ignore versions for now
newrevid = self._storage.store(p64(oid), revid, data, '', t)
_revids[oid] = newrevid
if self._aborttime:
self._storage.tpc_abort(t)
origdelta = self._aborttime - self._begintime
else:
self._storage.tpc_vote(t)
self._storage.tpc_finish(t)
origdelta = self._finishtime - self._begintime
t1 = now()
# Shows how many seconds behind (positive) or ahead (negative) of the
# original reply our local update took
self._replaydelta = t1 - t0 - origdelta
class ZEOParser:
def __init__(self, maxtxns=-1, report=1, storage=None):
self.__txns = []
self.__curtxn = {}
self.__skipped = 0
self.__maxtxns = maxtxns
self.__finishedtxns = 0
self.__report = report
self.__storage = storage
def parse(self, line):
t, m, c = parse_line(line)
if t is None:
# Skip this line
return
name = m[0]
meth = getattr(self, name, None)
if meth is not None:
meth(t, m[1], c)
def tpc_begin(self, when, args, client):
txn = ReplayTxn(self.__storage)
self.__curtxn[client] = txn
meth = getattr(txn, 'tpc_begin', None)
if meth is not None:
meth(when, args, client)
def storea(self, when, args, client):
txn = self.__curtxn.get(client)
if txn is None:
self.__skipped += 1
return
meth = getattr(txn, 'storea', None)
if meth is not None:
meth(when, args, client)
def tpc_finish(self, when, args, client):
txn = self.__curtxn.get(client)
if txn is None:
self.__skipped += 1
return
meth = getattr(txn, 'tpc_finish', None)
if meth is not None:
meth(when)
if self.__report:
self.report(txn)
self.__txns.append(txn)
self.__curtxn[client] = None
self.__finishedtxns += 1
if self.__maxtxns > 0 and self.__finishedtxns >= self.__maxtxns:
raise StopParsing
def report(self, txn):
"""Print a report about the transaction"""
if txn._objects:
bytes = reduce(operator.add, [size for oid, size in txn._objects])
else:
bytes = 0
print('%s %s %4d %10d %s %s' % (
txn._begintime, txn._finishtime - txn._begintime,
len(txn._objects),
bytes,
time.ctime(txn._begintime),
txn._url))
def replay(self):
for txn in self.__txns:
txn.replay()
# How many fell behind?
slower = []
faster = []
for txn in self.__txns:
if txn._replaydelta > 0:
slower.append(txn)
else:
faster.append(txn)
print(len(slower), 'laggards,', len(faster), 'on-time or faster')
# Find some averages
if slower:
sum = reduce(operator.add,
[txn._replaydelta for txn in slower], 0)
print('average slower txn was:', float(sum) / len(slower))
if faster:
sum = reduce(operator.add,
[txn._replaydelta for txn in faster], 0)
print('average faster txn was:', float(sum) / len(faster))
def main():
try:
opts, args = getopt.getopt(
sys.argv[1:],
'hr:pm:',
['help', 'replay=', 'report', 'maxtxns='])
except getopt.error as e:
usage(1, e)
if args:
usage(1)
replay = 0
maxtxns = -1
report = 0
storagefile = None
for opt, arg in opts:
if opt in ('-h', '--help'):
usage(0)
elif opt in ('-r', '--replay'):
replay = 1
storagefile = arg
elif opt in ('-p', '--report'):
report = 1
elif opt in ('-m', '--maxtxns'):
try:
maxtxns = int(arg)
except ValueError:
usage(1, 'Bad -m argument: %s' % arg)
if replay:
storage = FileStorage(storagefile)
#storage = BDBFullStorage(storagefile)
#storage = PrimaryStorage('yyz', storage, RS_PORT)
t0 = now()
p = ZEOParser(maxtxns, report, storage)
i = 0
while 1:
line = sys.stdin.readline()
if not line:
break
i += 1
try:
p.parse(line)
except StopParsing:
break
except:
print('input file line:', i)
raise
t1 = now()
print('total parse time:', t1-t0)
t2 = now()
if replay:
p.replay()
t3 = now()
print('total replay time:', t3-t2)
print('total time:', t3-t0)
if __name__ == '__main__':
main()
ZEO-5.0.0a0/src/ZEO/scripts/zeoserverlog.py 0000664 0000000 0000000 00000036673 12737730500 0020430 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python2.3
##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Tools for analyzing ZEO Server logs.
This script contains a number of commands, implemented by command
functions.  To run a command, give the command name and its arguments
as arguments to this script.
Commands:
blocked_times file threshold
Output a summary of episodes where transactions were blocked
when the episode lasted at least threshold seconds.
The file may be a file name or - to read from standard input.
The file may also be a command:
script blocked_times 'bunzip2 = 0
if blocking and waiting == 1:
t1 = time(line)
t2 = t1
if not blocking and last_blocking:
last_wait = 0
t2 = time(line)
cid = idre.search(line).group(1)
if waiting == 0:
d = sub(t1, time(line))
if d >= thresh:
print(t1, sub(t1, t2), cid, d)
t1 = t2 = cid = blocking = waiting = last_wait = max_wait = 0
last_blocking = blocking
connidre = re.compile(r' zrpc-conn:(\d+.\d+.\d+.\d+:\d+) ')
def time_calls(f):
f, thresh = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
thresh = float(thresh)
t1 = None
maxd = 0
for line in f:
line = line.strip()
if ' calling ' in line:
t1 = time(line)
elif ' returns ' in line and t1 is not None:
d = sub(t1, time(line))
if d >= thresh:
print(t1, d, connidre.search(line).group(1))
maxd = max(maxd, d)
t1 = None
print(maxd)
def xopen(f):
if f == '-':
return sys.stdin
if ' ' in f:
return os.popen(f, 'r')
return open(f)
def time_tpc(f):
f, thresh = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
thresh = float(thresh)
transactions = {}
for line in f:
line = line.strip()
if ' calling vote(' in line:
cid = connidre.search(line).group(1)
transactions[cid] = time(line),
elif ' vote returns None' in line:
cid = connidre.search(line).group(1)
transactions[cid] += time(line), 'n'
elif ' vote() raised' in line:
cid = connidre.search(line).group(1)
transactions[cid] += time(line), 'e'
elif ' vote returns ' in line:
# delayed, skip
cid = connidre.search(line).group(1)
transactions[cid] += time(line), 'd'
elif ' calling tpc_abort(' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
t1, t2, vs = transactions[cid]
t = time(line)
d = sub(t1, t)
if d >= thresh:
print('a', t1, cid, sub(t1, t2), vs, sub(t2, t))
del transactions[cid]
        elif ' calling tpc_finish(' in line:
            cid = connidre.search(line).group(1)
            if cid in transactions:
                transactions[cid] += time(line),
        elif ' tpc_finish returns ' in line:
            cid = connidre.search(line).group(1)
            if cid in transactions:
t1, t2, vs, t3 = transactions[cid]
t = time(line)
d = sub(t1, t)
if d >= thresh:
print('c', t1, cid, sub(t1, t2), vs, sub(t2, t3), sub(t3, t))
del transactions[cid]
newobre = re.compile(r"storea\(.*, '\\x00\\x00\\x00\\x00\\x00")
def time_trans(f):
f, thresh = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
thresh = float(thresh)
transactions = {}
for line in f:
line = line.strip()
if ' calling tpc_begin(' in line:
cid = connidre.search(line).group(1)
transactions[cid] = time(line), [0, 0]
if ' calling storea(' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
transactions[cid][1][0] += 1
if not newobre.search(line):
transactions[cid][1][1] += 1
elif ' calling vote(' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
transactions[cid] += time(line),
elif ' vote returns None' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
transactions[cid] += time(line), 'n'
elif ' vote() raised' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
transactions[cid] += time(line), 'e'
elif ' vote returns ' in line:
# delayed, skip
cid = connidre.search(line).group(1)
if cid in transactions:
transactions[cid] += time(line), 'd'
elif ' calling tpc_abort(' in line:
cid = connidre.search(line).group(1)
if cid in transactions:
try:
t0, (stores, old), t1, t2, vs = transactions[cid]
except ValueError:
pass
else:
t = time(line)
d = sub(t1, t)
if d >= thresh:
print(t1, cid, "%s/%s" % (stores, old), \
sub(t0, t1), sub(t1, t2), vs, \
sub(t2, t), 'abort')
del transactions[cid]
        elif ' calling tpc_finish(' in line:
            cid = connidre.search(line).group(1)
            if cid in transactions:
                transactions[cid] += time(line),
        elif ' tpc_finish returns ' in line:
            cid = connidre.search(line).group(1)
            if cid in transactions:
t0, (stores, old), t1, t2, vs, t3 = transactions[cid]
t = time(line)
d = sub(t1, t)
if d >= thresh:
print(t1, cid, "%s/%s" % (stores, old), \
sub(t0, t1), sub(t1, t2), vs, \
sub(t2, t3), sub(t3, t))
del transactions[cid]
def minute(f, slice=16, detail=1, summary=1):
f, = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
    cols = ["time", "clients", "reads", "stores", "commits", "aborts", "txns"]
    fmt = "%18s %7s %6s %6s %7s %6s %6s"
    print(fmt % tuple(cols))
    print(fmt % tuple("-" * len(col) for col in cols))
mlast = r = s = c = a = cl = None
rs = []
ss = []
cs = []
aborts = []
ts = []
cls = []
for line in f:
line = line.strip()
if (line.find('returns') > 0
or line.find('storea') > 0
or line.find('tpc_abort') > 0
):
client = connidre.search(line).group(1)
m = line[:slice]
if m != mlast:
if mlast:
if detail:
print(fmt % (mlast, len(cl), r, s, c, a, a+c))
cls.append(len(cl))
rs.append(r)
ss.append(s)
cs.append(c)
aborts.append(a)
ts.append(c+a)
mlast = m
r = s = c = a = 0
cl = {}
if line.find('zeoLoad') > 0:
r += 1
cl[client] = 1
elif line.find('storea') > 0:
s += 1
cl[client] = 1
elif line.find('tpc_finish') > 0:
c += 1
cl[client] = 1
elif line.find('tpc_abort') > 0:
a += 1
cl[client] = 1
if mlast:
if detail:
print(fmt % (mlast, len(cl), r, s, c, a, a+c))
cls.append(len(cl))
rs.append(r)
ss.append(s)
cs.append(c)
aborts.append(a)
ts.append(c+a)
if summary:
print()
print('Summary: \t', '\t'.join(('min', '10%', '25%', 'med',
'75%', '90%', 'max', 'mean')))
print("n=%6d\t" % len(cls), '-'*62)
print('Clients: \t', '\t'.join(map(str,stats(cls))))
print('Reads: \t', '\t'.join(map(str,stats(rs))))
print('Stores: \t', '\t'.join(map(str,stats(ss))))
print('Commits: \t', '\t'.join(map(str,stats(cs))))
print('Aborts: \t', '\t'.join(map(str,stats(aborts))))
print('Trans: \t', '\t'.join(map(str,stats(ts))))
def stats(s):
s.sort()
min = s[0]
max = s[-1]
n = len(s)
out = [min]
ni = n + 1
for p in .1, .25, .5, .75, .90:
lp = ni*p
l = int(lp)
if lp < 1 or lp > n:
out.append('-')
elif abs(lp-l) < .00001:
out.append(s[l-1])
else:
out.append(int(s[l-1] + (lp - l) * (s[l] - s[l-1])))
mean = 0.0
for v in s:
mean += v
out.extend([max, int(mean/n)])
return out
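# A standalone copy of stats (lightly renamed internally to avoid shadowing
# the min/max builtins) illustrates the percentile scheme above: an exact
# order statistic where the rank lands on an integer, linear interpolation
# otherwise.

```python
def stats(s):
    # Summarize a list of numbers as min, 10/25/50/75/90th percentiles,
    # max, and integer mean.  Sorts s in place, like the original.
    s.sort()
    n = len(s)
    out = [s[0]]
    ni = n + 1
    for p in (.1, .25, .5, .75, .90):
        lp = ni * p
        l = int(lp)
        if lp < 1 or lp > n:
            out.append('-')          # percentile undefined for this n
        elif abs(lp - l) < .00001:
            out.append(s[l - 1])     # rank falls exactly on an element
        else:
            # interpolate between neighboring order statistics
            out.append(int(s[l - 1] + (lp - l) * (s[l] - s[l - 1])))
    out.extend([s[-1], int(sum(s) / n)])
    return out

# columns: min, 10%, 25%, med, 75%, 90%, max, mean
assert stats(list(range(1, 10))) == [1, 1, 2, 5, 7, 9, 9, 5]
```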
def minutes(f):
minute(f, 16, detail=0)
def hour(f):
minute(f, 13)
def day(f):
minute(f, 10)
def hours(f):
minute(f, 13, detail=0)
def days(f):
minute(f, 10, detail=0)
new_connection_idre = re.compile(
r"new connection \('(\d+.\d+.\d+.\d+)', (\d+)\):")
def verify(f):
f, = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
t1 = None
nv = {}
for line in f:
if line.find('new connection') > 0:
m = new_connection_idre.search(line)
cid = "%s:%s" % (m.group(1), m.group(2))
nv[cid] = [time(line), 0]
elif line.find('calling zeoVerify(') > 0:
cid = connidre.search(line).group(1)
nv[cid][1] += 1
elif line.find('calling endZeoVerify()') > 0:
cid = connidre.search(line).group(1)
t1, n = nv[cid]
if n:
d = sub(t1, time(line))
print(cid, t1, n, d, n and (d*1000.0/n) or '-')
def recovery(f):
f, = f
if f == '-':
f = sys.stdin
else:
f = xopen(f)
last = ''
trans = []
n = 0
for line in f:
n += 1
if line.find('RecoveryServer') < 0:
continue
l = line.find('sending transaction ')
if l > 0 and last.find('sending transaction ') > 0:
trans.append(line[l+20:].strip())
else:
if trans:
if len(trans) > 1:
print(" ... %s similar records skipped ..." % (
len(trans) - 1))
print(n, last.strip())
trans=[]
print(n, line.strip())
last = line
if len(trans) > 1:
print(" ... %s similar records skipped ..." % (
len(trans) - 1))
print(n, last.strip())
if __name__ == '__main__':
globals()[sys.argv[1]](sys.argv[2:])
ZEO-5.0.0a0/src/ZEO/scripts/zeoup.py 0000775 0000000 0000000 00000010321 12737730500 0017025 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python2.3
"""Make sure a ZEO server is running.
usage: zeoup.py [options]
The test will connect to a ZEO server, load the root object, and attempt to
update the zeoup counter in the root. It will report success if it updates
the counter or if it gets a ConflictError. A ConflictError is considered a
success, because the client was able to start a transaction.
Options:
-p port -- port to connect to
-h host -- host to connect to (default is current host)
-S storage -- storage name (default '1')
-U path -- Unix-domain socket to connect to
--nowrite -- Do not update the zeoup counter.
-1 -- Connect to a ZEO 1.0 server.
You must specify either -p and -h or -U.
"""
from __future__ import print_function
import getopt
import logging
import socket
import sys
import time
from persistent.mapping import PersistentMapping
import transaction
import ZODB
from ZODB.POSException import ConflictError
from ZODB.tests.MinPO import MinPO
from ZEO.ClientStorage import ClientStorage
ZEO_VERSION = 2
def setup_logging():
# Set up logging to stderr which will show messages originating
# at severity ERROR or higher.
root = logging.getLogger()
root.setLevel(logging.ERROR)
fmt = logging.Formatter(
"------\n%(asctime)s %(levelname)s %(name)s %(message)s",
"%Y-%m-%dT%H:%M:%S")
handler = logging.StreamHandler()
handler.setFormatter(fmt)
root.addHandler(handler)
def check_server(addr, storage, write):
t0 = time.time()
if ZEO_VERSION == 2:
# TODO: should do retries w/ exponential backoff.
cs = ClientStorage(addr, storage=storage, wait=0,
read_only=(not write))
else:
cs = ClientStorage(addr, storage=storage, debug=1,
wait_for_server_on_startup=1)
# _startup() is an artifact of the way ZEO 1.0 works. The
# ClientStorage doesn't get fully initialized until registerDB()
# is called. The only thing we care about, though, is that
# registerDB() calls _startup().
if write:
db = ZODB.DB(cs)
cn = db.open()
root = cn.root()
try:
# We store the data in a special `monitor' dict under the root,
# where other tools may also store such heartbeat and bookkeeping
# type data.
monitor = root.get('monitor')
if monitor is None:
monitor = root['monitor'] = PersistentMapping()
obj = monitor['zeoup'] = monitor.get('zeoup', MinPO(0))
obj.value += 1
transaction.commit()
except ConflictError:
pass
cn.close()
db.close()
else:
        data, serial = cs.load(b"\0\0\0\0\0\0\0\0", "")
cs.close()
t1 = time.time()
print("Elapsed time: %.2f" % (t1 - t0))
def usage(exit=1):
print(__doc__)
print(" ".join(sys.argv))
sys.exit(exit)
def main():
    global ZEO_VERSION  # let the -1 option below rebind the module-level flag
    host = None
port = None
unix = None
write = 1
storage = '1'
try:
opts, args = getopt.getopt(sys.argv[1:], 'p:h:U:S:1',
['nowrite'])
for o, a in opts:
if o == '-p':
port = int(a)
elif o == '-h':
host = a
elif o == '-U':
unix = a
elif o == '-S':
storage = a
elif o == '--nowrite':
write = 0
elif o == '-1':
ZEO_VERSION = 1
except Exception as err:
s = str(err)
if s:
s = ": " + s
print(err.__class__.__name__ + s)
usage()
if unix is not None:
addr = unix
else:
if host is None:
host = socket.gethostname()
if port is None:
usage()
addr = host, port
setup_logging()
check_server(addr, storage, write)
if __name__ == "__main__":
try:
main()
except SystemExit:
raise
except Exception as err:
s = str(err)
if s:
s = ": " + s
print(err.__class__.__name__ + s)
sys.exit(1)
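The TODO in check_server() about retrying with exponential backoff could be sketched as a small stdlib-only helper. The names here (`retry_with_backoff`, `flaky`) are illustrative and not part of zeoup:

```python
import time

def retry_with_backoff(connect, attempts=5, base_delay=0.5):
    # Try connect() up to `attempts` times, doubling the delay after
    # each failure; re-raise the last error if all attempts fail.
    delay = base_delay
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2

calls = []
def flaky():
    # Fails twice, then succeeds -- simulating a server that is slow to start.
    calls.append(1)
    if len(calls) < 3:
        raise OSError("connection refused")
    return "connected"

print(retry_with_backoff(flaky, base_delay=0.01))  # succeeds on the 3rd try
```

Such a wrapper would replace the bare ClientStorage construction in check_server(), bounding how long the probe blocks when the server is down.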
ZEO-5.0.0a0/src/ZEO/server.xml 0000664 0000000 0000000 00000010303 12737730500 0015647 0 ustar 00root root 0000000 0000000
The full path to an SSL certificate file.
The full path to an SSL key file for the server certificate.
Dotted name of importable function for retrieving a password
for the client certificate key.
Path to a file or directory containing client certificates to
be authenticated. This can also be - or SIGNED to require
signed client certificates.
The content of a ZEO section describes operational parameters
of a ZEO server except for the storage(s) to be served.
The address at which the server should listen. This can be in
the form 'host:port' to signify a TCP/IP connection or a
pathname string to signify a Unix domain socket connection (at
least one '/' is required). A hostname may be a DNS name or a
dotted IP address. If the hostname is omitted, the platform's
default behavior is used when binding the listening socket (''
is passed to socket.bind() as the hostname portion of the
address).
Flag indicating whether the server should operate in read-only
mode. Defaults to false. Note that even if the server is
operating in writable mode, individual storages may still be
read-only. But if the server is in read-only mode, no write
operations are allowed, even if the storages are writable. Note
that pack() is considered a read-only operation.
The storage server keeps a queue of the objects modified by the
last N transactions, where N == invalidation_queue_size. This
queue is used to speed client cache verification when a client
disconnects for a short period of time.
The maximum age of a client for which quick-verification
invalidations will be provided by iterating over the served
storage. This option should only be used if the served storage
supports efficient iteration from a starting point near the
end of the transaction history (e.g. end of file).
The maximum amount of time to wait for a transaction to commit
after acquiring the storage lock, specified in seconds. If the
transaction takes too long, the client connection will be closed
and the transaction aborted.
The full path to the file in which to write the ZEO server's Process ID
at startup. If omitted, $INSTANCE/var/ZEO.pid is used.
$INSTANCE/var/ZEO.pid (or $clienthome/ZEO.pid)
Flag indicating whether the server should return conflict
errors to the client, for resolution there.
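The options described above combine into a ZConfig `zeo` section in the server configuration file. A minimal sketch, assuming the option names spelled in these descriptions (values are illustrative only):

```
<zeo>
  address 127.0.0.1:8100
  read-only false
  invalidation-queue-size 100
  transaction-timeout 30
  pid-filename /var/run/zeo.pid
  client-conflict-resolution true
</zeo>
```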
ZEO-5.0.0a0/src/ZEO/shortrepr.py 0000664 0000000 0000000 00000003667 12737730500 0016240 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
REPR_LIMIT = 60
def short_repr(obj):
"Return an object repr limited to REPR_LIMIT bytes."
# Some of the objects being repr'd are large strings. A lot of memory
# would be wasted to repr them and then truncate, so they are treated
# specially in this function.
# Also handle short repr of a tuple containing a long string.
# This strategy works well for arguments to StorageServer methods.
# The oid is usually first and will get included in its entirety.
# The pickle is near the beginning, too, and you can often fit the
# module name in the pickle.
if isinstance(obj, str):
if len(obj) > REPR_LIMIT:
r = repr(obj[:REPR_LIMIT])
else:
r = repr(obj)
if len(r) > REPR_LIMIT:
r = r[:REPR_LIMIT-4] + '...' + r[-1]
return r
elif isinstance(obj, (list, tuple)):
elts = []
size = 0
for elt in obj:
r = short_repr(elt)
elts.append(r)
size += len(r)
if size > REPR_LIMIT:
break
if isinstance(obj, tuple):
r = "(%s)" % (", ".join(elts))
else:
r = "[%s]" % (", ".join(elts))
else:
r = repr(obj)
if len(r) > REPR_LIMIT:
return r[:REPR_LIMIT] + '...'
else:
return r
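As a quick standalone check of the truncation behavior, the function can be restated in condensed form (so the snippet runs on its own) and exercised on a few sample values; the inputs are illustrative:

```python
REPR_LIMIT = 60

def short_repr(obj):
    # Condensed restatement of ZEO.shortrepr.short_repr for demonstration:
    # long strings are sliced before repr to avoid wasting memory, and
    # containers are repr'd element by element until the limit is reached.
    if isinstance(obj, str):
        r = repr(obj[:REPR_LIMIT]) if len(obj) > REPR_LIMIT else repr(obj)
        if len(r) > REPR_LIMIT:
            r = r[:REPR_LIMIT - 4] + '...' + r[-1]
        return r
    elif isinstance(obj, (list, tuple)):
        elts = []
        size = 0
        for elt in obj:
            r = short_repr(elt)
            elts.append(r)
            size += len(r)
            if size > REPR_LIMIT:
                break
        if isinstance(obj, tuple):
            r = "(%s)" % (", ".join(elts))
        else:
            r = "[%s]" % (", ".join(elts))
    else:
        r = repr(obj)
    if len(r) > REPR_LIMIT:
        return r[:REPR_LIMIT] + '...'
    return r

print(short_repr('x' * 1000))   # truncated, closing quote preserved
print(short_repr([1, 2, 3]))    # short containers come through unchanged
```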
ZEO-5.0.0a0/src/ZEO/tests/ 0000775 0000000 0000000 00000000000 12737730500 0014764 5 ustar 00root root 0000000 0000000 ZEO-5.0.0a0/src/ZEO/tests/Cache.py 0000664 0000000 0000000 00000003476 12737730500 0016353 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Tests of the ZEO cache"""
from ZODB.tests.MinPO import MinPO
from ZODB.tests.StorageTestBase import zodb_unpickle
from transaction import Transaction
class TransUndoStorageWithCache:
def checkUndoInvalidation(self):
oid = self._storage.new_oid()
revid = self._dostore(oid, data=MinPO(23))
revid = self._dostore(oid, revid=revid, data=MinPO(24))
revid = self._dostore(oid, revid=revid, data=MinPO(25))
info = self._storage.undoInfo()
if not info:
# Preserved this comment, but don't understand it:
# "Perhaps we have an old storage implementation that
# does do the negative nonsense."
info = self._storage.undoInfo(0, 20)
tid = info[0]['id']
# Now start an undo transaction
t = Transaction()
t.note('undo1')
oids = self._begin_undos_vote(t, tid)
# Make sure this doesn't load invalid data into the cache
self._storage.load(oid, '')
self._storage.tpc_finish(t)
assert len(oids) == 1
assert oids[0] == oid
data, revid = self._storage.load(oid, '')
obj = zodb_unpickle(data)
assert obj == MinPO(24)
ZEO-5.0.0a0/src/ZEO/tests/CommitLockTests.py 0000664 0000000 0000000 00000013037 12737730500 0020426 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Tests of the distributed commit lock."""
import threading
import time
from persistent.TimeStamp import TimeStamp
import transaction
from ZODB.tests.StorageTestBase import zodb_pickle, MinPO
import ZEO.ClientStorage
from ZEO.Exceptions import ClientDisconnected
from ZEO.tests.TestThread import TestThread
ZERO = b'\0'*8
class DummyDB:
def invalidate(self, *args, **kwargs):
pass
transform_record_data = untransform_record_data = lambda self, data: data
class WorkerThread(TestThread):
# run the entire test in a thread so that the blocking call for
# tpc_vote() doesn't hang the test suite.
def __init__(self, test, storage, trans):
self.storage = storage
self.trans = trans
self.ready = threading.Event()
TestThread.__init__(self, test)
def testrun(self):
try:
self.storage.tpc_begin(self.trans)
oid = self.storage.new_oid()
p = zodb_pickle(MinPO("c"))
self.storage.store(oid, ZERO, p, '', self.trans)
oid = self.storage.new_oid()
p = zodb_pickle(MinPO("c"))
self.storage.store(oid, ZERO, p, '', self.trans)
self.myvote()
self.storage.tpc_finish(self.trans)
except ClientDisconnected:
pass
def myvote(self):
# The vote() call is synchronous, which makes it difficult to
# coordinate the action of multiple threads that all call
# vote(). This method sends the vote call, then sets the
# event saying vote was called, then waits for the vote
# response.
future = self.storage._server.call_future('vote', id(self.trans))
self.ready.set()
future.result(9)
class CommitLockTests:
NUM_CLIENTS = 5
# The commit lock tests verify that the storage successfully
# blocks and restarts transactions when there is contention for a
# single storage. There are a lot of cases to cover.
# The general flow of these tests is to start a transaction by
# getting far enough into 2PC to acquire the commit lock. Then
# begin one or more other connections that also want to commit.
# This causes the commit lock code to be exercised. Once the
# other connections are started, the first transaction completes.
def _cleanup(self):
for store, trans in self._storages:
store.tpc_abort(trans)
store.close()
self._storages = []
def _start_txn(self):
txn = transaction.Transaction()
self._storage.tpc_begin(txn)
oid = self._storage.new_oid()
self._storage.store(oid, ZERO, zodb_pickle(MinPO(1)), '', txn)
return oid, txn
def _begin_threads(self):
# Start a second transaction on a different connection without
# blocking the test thread. Returns only after each thread has
        # set its ready event.
self._storages = []
self._threads = []
for i in range(self.NUM_CLIENTS):
storage = self._duplicate_client()
txn = transaction.Transaction()
tid = self._get_timestamp()
t = WorkerThread(self, storage, txn)
self._threads.append(t)
t.start()
t.ready.wait()
# Close one of the connections abnormally to test server response
if i == 0:
storage.close()
else:
self._storages.append((storage, txn))
def _finish_threads(self):
for t in self._threads:
t.cleanup()
def _duplicate_client(self):
"Open another ClientStorage to the same server."
# It's hard to find the actual address.
# The rpc mgr addr attribute is a list. Each element in the
# list is a socket domain (AF_INET, AF_UNIX, etc.) and an
# address.
addr = self._storage._addr
new = ZEO.ClientStorage.ClientStorage(
addr, wait=1, **self._client_options())
new.registerDB(DummyDB())
return new
def _get_timestamp(self):
t = time.time()
t = TimeStamp(*time.gmtime(t)[:5]+(t%60,))
return repr(t)
class CommitLockVoteTests(CommitLockTests):
def checkCommitLockVoteFinish(self):
oid, txn = self._start_txn()
self._storage.tpc_vote(txn)
self._begin_threads()
self._storage.tpc_finish(txn)
self._storage.load(oid, '')
self._finish_threads()
self._dostore()
self._cleanup()
def checkCommitLockVoteAbort(self):
oid, txn = self._start_txn()
self._storage.tpc_vote(txn)
self._begin_threads()
self._storage.tpc_abort(txn)
self._finish_threads()
self._dostore()
self._cleanup()
def checkCommitLockVoteClose(self):
oid, txn = self._start_txn()
self._storage.tpc_vote(txn)
self._begin_threads()
self._storage.close()
self._finish_threads()
self._cleanup()
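The vote/ready/result handshake used by WorkerThread.myvote() above can be sketched with stdlib primitives alone. A plain `threading.Lock` stands in for the server-side commit lock; the names are illustrative, not part of the ZEO API:

```python
import threading
from concurrent.futures import Future

commit_lock = threading.Lock()   # stands in for the server's commit lock

def fake_server_vote(future):
    # The "server" grants the vote only once the commit lock is free.
    with commit_lock:
        future.set_result('voted')

def worker(ready, results):
    # Mirrors myvote(): send the vote asynchronously, signal readiness,
    # then block on the future for the response.
    future = Future()
    threading.Thread(target=fake_server_vote, args=(future,)).start()
    ready.set()
    results.append(future.result(timeout=9))

ready = threading.Event()
results = []
commit_lock.acquire()            # the first "transaction" holds the lock
t = threading.Thread(target=worker, args=(ready, results))
t.start()
ready.wait(9)                    # the worker has sent its vote...
assert results == []             # ...but is still blocked on the lock
commit_lock.release()            # the first transaction finishes
t.join(9)
print(results)
```

This is the same ordering the tests rely on: `ready` guarantees the vote was sent before the main thread proceeds, while the future only resolves once the lock holder releases it.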
ZEO-5.0.0a0/src/ZEO/tests/ConnectionTests.py 0000664 0000000 0000000 00000121221 12737730500 0020457 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import concurrent.futures
import contextlib
import os
import time
import socket
import threading
import logging
from ZEO.ClientStorage import ClientStorage
from ZEO.Exceptions import ClientDisconnected
from ZEO.asyncio.marshal import encode
from ZEO.tests import forker
from ZODB.DB import DB
from ZODB.POSException import ReadOnlyError, ConflictError
from ZODB.tests.StorageTestBase import StorageTestBase
from ZODB.tests.MinPO import MinPO
from ZODB.tests.StorageTestBase \
import zodb_pickle, zodb_unpickle, handle_all_serials, handle_serials
import ZODB.tests.util
import transaction
from transaction import Transaction
from . import testssl
logger = logging.getLogger('ZEO.tests.ConnectionTests')
ZERO = '\0'*8
class TestClientStorage(ClientStorage):
test_connection = False
connection_count_for_tests = 0
def notify_connected(self, conn, info):
ClientStorage.notify_connected(self, conn, info)
self.connection_count_for_tests += 1
self.verify_result = conn.verify_result
class DummyDB:
def invalidate(self, *args, **kwargs):
pass
def invalidateCache(self):
pass
transform_record_data = untransform_record_data = lambda self, data: data
class CommonSetupTearDown(StorageTestBase):
"""Common boilerplate"""
__super_setUp = StorageTestBase.setUp
__super_tearDown = StorageTestBase.tearDown
keep = 0
invq = None
timeout = None
db_class = DummyDB
def setUp(self, before=None):
"""Test setup for connection tests.
This starts only one server; a test may start more servers by
calling self._newAddr() and then self.startServer(index=i)
for i in 1, 2, ...
"""
self.__super_setUp()
logging.info("setUp() %s", self.id())
self.file = 'storage_conf'
self._servers = []
self.caches = []
self.addr = [('localhost', 0)]
self.startServer()
def tearDown(self):
"""Try to cause the tests to halt"""
if getattr(self, '_storage', None) is not None:
self._storage.close()
if hasattr(self._storage, 'cleanup'):
logging.debug("cleanup storage %s" %
self._storage.__name__)
self._storage.cleanup()
for stop in self._servers:
stop()
for c in self.caches:
for i in 0, 1:
for ext in "", ".trace", ".lock":
path = "%s-%s.zec%s" % (c, "1", ext)
# On Windows before 2.3, we don't have a way to wait for
# the spawned server(s) to close, and they inherited
# file descriptors for our open files. So long as those
# processes are alive, we can't delete the files. Try
# a few times then give up.
need_to_delete = False
if os.path.exists(path):
need_to_delete = True
for dummy in range(5):
try:
os.unlink(path)
except:
time.sleep(0.5)
else:
need_to_delete = False
break
if need_to_delete:
os.unlink(path) # sometimes this is just gonna fail
self.__super_tearDown()
def _newAddr(self):
self.addr.append(self._getAddr())
def _getAddr(self):
return 'localhost', forker.get_port(self)
def getConfig(self, path, create, read_only):
raise NotImplementedError
cache_id = 1
def openClientStorage(self, cache=None, cache_size=200000, wait=1,
read_only=0, read_only_fallback=0,
username=None, password=None, realm=None):
if cache is None:
cache = str(self.__class__.cache_id)
self.__class__.cache_id += 1
self.caches.append(cache)
storage = TestClientStorage(self.addr,
client=cache,
var='.',
cache_size=cache_size,
wait=wait,
min_disconnect_poll=0.1,
read_only=read_only,
read_only_fallback=read_only_fallback,
**self._client_options())
storage.registerDB(DummyDB())
return storage
def _client_options(self):
return {}
def getServerConfig(self, addr, ro_svr):
zconf = forker.ZEOConfig(addr)
if ro_svr:
zconf.read_only = 1
if self.invq:
zconf.invalidation_queue_size = self.invq
if self.timeout:
zconf.transaction_timeout = self.timeout
return zconf
def startServer(self, create=1, index=0, read_only=0, ro_svr=0, keep=None,
path=None, **kw):
addr = self.addr[index]
logging.info("startServer(create=%d, index=%d, read_only=%d) @ %s" %
(create, index, read_only, addr))
if path is None:
path = "%s.%d" % (self.file, index)
sconf = self.getConfig(path, create, read_only)
zconf = self.getServerConfig(addr, ro_svr)
if keep is None:
keep = self.keep
zeoport, stop = forker.start_zeo_server(
sconf, zconf, addr[1], keep, **kw)
self._servers.append(stop)
if addr[1] == 0:
self.addr[index] = zeoport
def shutdownServer(self, index=0):
logging.info("shutdownServer(index=%d) @ %s" %
(index, self._servers[index]))
stop = self._servers[index]
if stop is not None:
stop()
self._servers[index] = lambda : None
def pollUp(self, timeout=30.0, storage=None):
if storage is None:
storage = self._storage
storage.server_status()
def pollDown(self, timeout=30.0):
# Poll until we're disconnected.
now = time.time()
giveup = now + timeout
while self._storage.is_connected():
now = time.time()
if now > giveup:
self.fail("timed out waiting for storage to disconnect")
time.sleep(0.1)
class ConnectionTests(CommonSetupTearDown):
"""Tests that explicitly manage the server process.
    To test the cache or re-connection, these test cases explicitly
start and stop a ZEO storage server.
"""
def checkMultipleAddresses(self):
for i in range(4):
self._newAddr()
self._storage = self.openClientStorage('test', 100000)
oid = self._storage.new_oid()
obj = MinPO(12)
self._dostore(oid, data=obj)
self._storage.close()
def checkReadOnlyClient(self):
# Open a read-only client to a read-write server; stores fail
# Start a read-only client for a read-write server
self._storage = self.openClientStorage(read_only=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
self._storage.close()
def checkReadOnlyServer(self):
# Open a read-only client to a read-only *server*; stores fail
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Start a read-only server
self.startServer(create=0, index=0, ro_svr=1)
# Start a read-only client
self._storage = self.openClientStorage(read_only=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
self._storage.close()
# Get rid of the 'test left new threads behind' warning
time.sleep(0.1)
def checkReadOnlyFallbackWritable(self):
# Open a fallback client to a read-write server; stores succeed
# Start a read-only-fallback client for a read-write server
self._storage = self.openClientStorage(read_only_fallback=1)
# Stores should succeed here
self._dostore()
self._storage.close()
def checkReadOnlyFallbackReadOnlyServer(self):
# Open a fallback client to a read-only *server*; stores fail
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Start a read-only server
self.startServer(create=0, index=0, ro_svr=1)
# Start a read-only-fallback client
self._storage = self.openClientStorage(read_only_fallback=1)
self.assert_(self._storage.isReadOnly())
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
self._storage.close()
def checkDisconnectionError(self):
# Make sure we get a ClientDisconnected when we try to read an
# object when we're not connected to a storage server and the
# object is not in the cache.
self.shutdownServer()
self._storage = self.openClientStorage('test', 1000, wait=0)
with short_timeout(self):
self.assertRaises(ClientDisconnected,
self._storage.load, b'fredwash', '')
self._storage.close()
def checkBasicPersistence(self):
# Verify cached data persists across client storage instances.
# To verify that the cache is being used, the test closes the
# server and then starts a new client with the server down.
# When the server is down, a load() gets the data from its cache.
self._storage = self.openClientStorage('test', 100000)
oid = self._storage.new_oid()
obj = MinPO(12)
revid1 = self._dostore(oid, data=obj)
self._storage.close()
self.shutdownServer()
self._storage = self.openClientStorage('test', 100000, wait=0)
data, revid2 = self._storage.load(oid, '')
self.assertEqual(zodb_unpickle(data), MinPO(12))
self.assertEqual(revid1, revid2)
self._storage.close()
def checkDisconnectedCacheWorks(self):
# Check that the cache works when the client is disconnected.
self._storage = self.openClientStorage('test')
oid1 = self._storage.new_oid()
obj1 = MinPO("1" * 500)
self._dostore(oid1, data=obj1)
oid2 = self._storage.new_oid()
obj2 = MinPO("2" * 500)
self._dostore(oid2, data=obj2)
expected1 = self._storage.load(oid1, '')
expected2 = self._storage.load(oid2, '')
# Shut it all down, and try loading from the persistent cache file
# without a server present.
self._storage.close()
self.shutdownServer()
self._storage = self.openClientStorage('test', wait=False)
self.assertEqual(expected1, self._storage.load(oid1, ''))
self.assertEqual(expected2, self._storage.load(oid2, ''))
self._storage.close()
def checkDisconnectedCacheFails(self):
# Like checkDisconnectedCacheWorks above, except the cache
# file is so small that only one object can be remembered.
self._storage = self.openClientStorage('test', cache_size=900)
oid1 = self._storage.new_oid()
obj1 = MinPO("1" * 500)
self._dostore(oid1, data=obj1)
oid2 = self._storage.new_oid()
obj2 = MinPO("2" * 500)
# The cache file is so small that adding oid2 will evict oid1.
self._dostore(oid2, data=obj2)
expected2 = self._storage.load(oid2, '')
# Shut it all down, and try loading from the persistent cache file
# without a server present.
self._storage.close()
self.shutdownServer()
self._storage = self.openClientStorage('test', cache_size=900,
wait=False)
# oid2 should still be in cache.
self.assertEqual(expected2, self._storage.load(oid2, ''))
# But oid1 should have been purged, so that trying to load it will
# try to fetch it from the (non-existent) ZEO server.
with short_timeout(self):
self.assertRaises(ClientDisconnected, self._storage.load, oid1, '')
self._storage.close()
def checkVerificationInvalidationPersists(self):
# This tests a subtle invalidation bug from ZODB 3.3:
# invalidations processed as part of ZEO cache verification acted
# kinda OK wrt the in-memory cache structures, but had no effect
# on the cache file. So opening the file cache again could
# incorrectly believe that a previously invalidated object was
# still current. This takes some effort to set up.
# First, using a persistent cache ('test'), create an object
# MinPO(13). We used to see this again at the end of this test,
# despite that we modify it, and despite that it gets invalidated
# in 'test', before the end.
self._storage = self.openClientStorage('test')
oid = self._storage.new_oid()
obj = MinPO(13)
self._dostore(oid, data=obj)
self._storage.close()
# Now modify obj via a temp connection. `test` won't learn about
# this until we open a connection using `test` again.
self._storage = self.openClientStorage()
pickle, rev = self._storage.load(oid, '')
newobj = zodb_unpickle(pickle)
self.assertEqual(newobj, obj)
newobj.value = 42 # .value *should* be 42 forever after now, not 13
self._dostore(oid, data=newobj, revid=rev)
self._storage.close()
# Open 'test' again. `oid` in this cache should be (and is)
# invalidated during cache verification. The bug was that it
# got invalidated (kinda) in memory, but not in the cache file.
self._storage = self.openClientStorage('test')
# The invalidation happened already. Now create and store a new
# object before closing this storage: this is so `test` believes
# it's seen transactions beyond the one that invalidated `oid`, so
# that the *next* time we open `test` it doesn't process another
# invalidation for `oid`. It's also important that we not try to
# load `oid` now: because it's been (kinda) invalidated in the
# cache's memory structures, loading it now would fetch the
# current revision from the server, thus hiding the bug.
obj2 = MinPO(666)
oid2 = self._storage.new_oid()
self._dostore(oid2, data=obj2)
self._storage.close()
# Finally, open `test` again and load `oid`. `test` believes
# it's beyond the transaction that modified `oid`, so its view
# of whether it has an up-to-date `oid` comes solely from the disk
# file, unaffected by cache verification.
self._storage = self.openClientStorage('test')
pickle, rev = self._storage.load(oid, '')
newobj_copy = zodb_unpickle(pickle)
# This used to fail, with
# AssertionError: MinPO(13) != MinPO(42)
# That is, `test` retained a stale revision of the object on disk.
self.assertEqual(newobj_copy, newobj)
self._storage.close()
def checkBadMessage1(self):
# not even close to a real message
self._bad_message(b"salty")
def checkBadMessage2(self):
# just like a real message, but with an unpicklable argument
global Hack
class Hack:
pass
msg = encode(1, 0, "foo", (Hack(),))
self._bad_message(msg)
del Hack
def _bad_message(self, msg):
# Establish a connection, then send the server an ill-formatted
# request. Verify that the connection is closed and that it is
# possible to establish a new connection.
self._storage = self.openClientStorage()
self._dostore()
generation = self._storage._connection_generation
future = concurrent.futures.Future()
def write():
try:
self._storage._server.client.protocol._write(msg)
except Exception as exc:
future.set_exception(exc)
else:
future.set_result(None)
# break into the internals to send a bogus message
self._storage._server.loop.call_soon_threadsafe(write)
future.result()
# If we manage to call _dostore before the server disconnects
# us, we'll get a ClientDisconnected error. When we retry, it
# will succeed. It will succeed because:
# - _dostore calls tpc_abort
# - tpc_abort makes a synchronous call to the server to abort
# the transaction
# - when disconnected, synchronous calls are blocked for a little
        # while while reconnecting (or they time out if it takes too long).
try:
self._dostore()
except ClientDisconnected:
self._dostore()
self.assertTrue(self._storage._connection_generation > generation)
# Test case for multiple storages participating in a single
# transaction. This is not really a connection test, but it needs
# about the same infrastructure (several storage servers).
# TODO: with the current ZEO code, this occasionally fails.
# That's the point of this test. :-)
def NOcheckMultiStorageTransaction(self):
# Configuration parameters (larger values mean more likely deadlocks)
N = 2
# These don't *have* to be all the same, but it's convenient this way
self.nservers = N
self.nthreads = N
self.ntrans = N
self.nobj = N
# Start extra servers
for i in range(1, self.nservers):
self._newAddr()
self.startServer(index=i)
# Spawn threads that each do some transactions on all storages
threads = []
try:
for i in range(self.nthreads):
t = MSTThread(self, "T%d" % i)
threads.append(t)
t.start()
# Wait for all threads to finish
for t in threads:
t.join(60)
self.failIf(t.isAlive(), "%s didn't die" % t.getName())
finally:
for t in threads:
t.closeclients()
def checkCrossDBInvalidations(self):
db1 = DB(self.openClientStorage())
c1 = db1.open()
r1 = c1.root()
r1["a"] = MinPO("a")
transaction.commit()
self.assertEqual(r1._p_state, 0) # up-to-date
db2 = DB(self.openClientStorage())
r2 = db2.open().root()
self.assertEqual(r2["a"].value, "a")
r2["b"] = MinPO("b")
transaction.commit()
# Make sure the invalidation is received in the other client.
# We've had problems with this timing out on "slow" and/or "very
# busy" machines, so we increase the sleep time on each trip, and
# are willing to wait quite a long time.
for i in range(20):
c1.sync()
if r1._p_state == -1:
break
time.sleep(i / 10.0)
self.assertEqual(r1._p_state, -1) # ghost
r1.keys() # unghostify
self.assertEqual(r1._p_serial, r2._p_serial)
self.assertEqual(r1["b"].value, "b")
db2.close()
db1.close()
def checkCheckForOutOfDateServer(self):
# We don't want to connect a client to a server if the client
# has seen newer transactions.
self._storage = self.openClientStorage()
self._dostore()
self.shutdownServer()
with short_timeout(self):
self.assertRaises(ClientDisconnected,
self._storage.load, b'\0'*8, '')
self.startServer()
# No matter how long we wait, the client won't reconnect:
time.sleep(2)
with short_timeout(self):
self.assertRaises(ClientDisconnected,
self._storage.load, b'\0'*8, '')
class SSLConnectionTests(ConnectionTests):
def getServerConfig(self, addr, ro_svr):
return testssl.server_config.replace(
'127.0.0.1:0',
            '{}:{}\nread-only {}'.format(
addr[0], addr[1], 'true' if ro_svr else 'false'))
def _client_options(self):
return {'ssl': testssl.client_ssl()}
class InvqTests(CommonSetupTearDown):
invq = 3
def checkQuickVerificationWith2Clients(self):
perstorage = self.openClientStorage(cache="test", cache_size=4000)
self._storage = self.openClientStorage()
oid = self._storage.new_oid()
oid2 = self._storage.new_oid()
# When we create a new storage, it should always do a full
# verification
self.assertEqual(self._storage.verify_result, "empty cache")
# do two storages of the object to make sure an invalidation
# message is generated
revid = self._dostore(oid)
revid = self._dostore(oid, revid)
# Create a second object and revision to guarantee it doesn't
# show up in the list of invalidations sent when perstore restarts.
revid2 = self._dostore(oid2)
revid2 = self._dostore(oid2, revid2)
forker.wait_until(
lambda :
perstorage.lastTransaction() == self._storage.lastTransaction())
perstorage.load(oid, '')
perstorage.close()
forker.wait_until(lambda : os.path.exists('test-1.zec'))
revid = self._dostore(oid, revid)
perstorage = self.openClientStorage(cache="test")
self.assertEqual(perstorage.verify_result, "quick verification")
self.assertEqual(perstorage.load(oid, ''),
self._storage.load(oid, ''))
perstorage.close()
def checkVerificationWith2ClientsInvqOverflow(self):
perstorage = self.openClientStorage(cache="test")
self.assertEqual(perstorage.verify_result, "empty cache")
self._storage = self.openClientStorage()
oid = self._storage.new_oid()
# When we create a new storage, it should always do a full
# verification
self.assertEqual(self._storage.verify_result, "empty cache")
# do two storages of the object to make sure an invalidation
# message is generated
revid = self._dostore(oid)
revid = self._dostore(oid, revid)
forker.wait_until(
"Client has seen all of the transactions from the server",
lambda :
perstorage.lastTransaction() == self._storage.lastTransaction()
)
perstorage.load(oid, '')
perstorage.close()
# the test code sets invq bound to 2
for i in range(5):
revid = self._dostore(oid, revid)
perstorage = self.openClientStorage(cache="test")
self.assertEqual(perstorage.verify_result, "cache too old, clearing")
self.assertEqual(self._storage.load(oid, '')[1], revid)
self.assertEqual(perstorage.load(oid, ''),
self._storage.load(oid, ''))
perstorage.close()
class ReconnectionTests(CommonSetupTearDown):
# The setUp() starts a server automatically. In order for its
# state to persist, we set the class variable keep to 1. In
# order for its state to be cleaned up, the last startServer()
# call in the test must pass keep=0.
keep = 1
invq = 2
def checkReadOnlyStorage(self):
# Open a read-only client to a read-only *storage*; stores fail
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Start a read-only server
self.startServer(create=0, index=0, read_only=1, keep=0)
# Start a read-only client
self._storage = self.openClientStorage(read_only=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
def checkReadOnlyFallbackReadOnlyStorage(self):
# Open a fallback client to a read-only *storage*; stores fail
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Start a read-only server
self.startServer(create=0, index=0, read_only=1, keep=0)
# Start a read-only-fallback client
self._storage = self.openClientStorage(read_only_fallback=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
# TODO: Compare checkReconnectXXX() here to checkReconnection()
# further down. Is the code here hopelessly naive, or is
# checkReconnection() overwrought?
def checkReconnectWritable(self):
# A read-write client reconnects to a read-write server
# Start a client
self._storage = self.openClientStorage()
# Stores should succeed here
self._dostore()
# Shut down the server
self.shutdownServer()
self._servers = []
# Poll until the client disconnects
self.pollDown()
# Stores should fail now
with short_timeout(self):
self.assertRaises(ClientDisconnected, self._dostore)
# Restart the server
self.startServer(create=0)
# Poll until the client connects
self.pollUp()
# Stores should succeed here
self._dostore()
self._storage.close()
def checkReconnectReadOnly(self):
# A read-only client reconnects from a read-write to a
# read-only server
# Start a client
self._storage = self.openClientStorage(read_only=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
# Shut down the server
self.shutdownServer()
self._servers = []
# Poll until the client disconnects
self.pollDown()
# Stores should still fail
self.assertRaises(ReadOnlyError, self._dostore)
# Restart the server
self.startServer(create=0, read_only=1, keep=0)
# Poll until the client connects
self.pollUp()
# Stores should still fail
self.assertRaises(ReadOnlyError, self._dostore)
def checkReconnectFallback(self):
# A fallback client reconnects from a read-write to a
# read-only server
# Start a client in fallback mode
self._storage = self.openClientStorage(read_only_fallback=1)
# Stores should succeed here
self._dostore()
# Shut down the server
self.shutdownServer()
self._servers = []
# Poll until the client disconnects
self.pollDown()
# Stores should fail now
with short_timeout(self):
self.assertRaises(ClientDisconnected, self._dostore)
# Restart the server
self.startServer(create=0, read_only=1, keep=0)
# Poll until the client connects
self.pollUp()
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
def checkReconnectUpgrade(self):
# A fallback client reconnects from a read-only to a
# read-write server
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Start a read-only server
self.startServer(create=0, read_only=1)
# Start a client in fallback mode
self._storage = self.openClientStorage(read_only_fallback=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
# Shut down the server
self.shutdownServer()
self._servers = []
# Poll until the client disconnects
self.pollDown()
# Accesses should fail now
with short_timeout(self):
self.assertRaises(ClientDisconnected, self._storage.history, ZERO)
# Restart the server, this time read-write
self.startServer(create=0, keep=0)
# Poll until the client connects
self.pollUp()
# Stores should now succeed
self._dostore()
def checkReconnectSwitch(self):
# A fallback client initially connects to a read-only server,
# then discovers a read-write server and switches to that
# We don't want the read-write server created by setUp()
self.shutdownServer()
self._servers = []
# Allocate a second address (for the second server)
self._newAddr()
# Start a read-only server
self.startServer(create=0, index=0, read_only=1, keep=0)
# Start a client in fallback mode
self._storage = self.openClientStorage(read_only_fallback=1)
# Stores should fail here
self.assertRaises(ReadOnlyError, self._dostore)
# Start a read-write server
self.startServer(index=1, read_only=0, keep=0)
# After a while, stores should work
for i in range(300): # Try for 30 seconds
try:
self._dostore()
break
except (ClientDisconnected, ReadOnlyError):
# If the client isn't connected at all, sync() returns
# quickly and the test fails because it doesn't wait
# long enough for the client.
time.sleep(0.1)
else:
self.fail("Couldn't store after starting a read-write server")
def checkNoVerificationOnServerRestart(self):
self._storage = self.openClientStorage()
# When we create a new storage, it should always do a full
# verification
self.assertEqual(self._storage.verify_result, "empty cache")
self._dostore()
self.shutdownServer()
self.pollDown()
self._storage.verify_result = None
self.startServer(create=0, keep=0)
self.pollUp()
# There were no transactions committed, so no verification
# should be needed.
self.assertEqual(self._storage.verify_result, "Cache up to date")
def checkNoVerificationOnServerRestartWith2Clients(self):
perstorage = self.openClientStorage(cache="test")
self.assertEqual(perstorage.verify_result, "empty cache")
self._storage = self.openClientStorage()
oid = self._storage.new_oid()
# When we create a new storage, it should always do a full
# verification
self.assertEqual(self._storage.verify_result, "empty cache")
# do two storages of the object to make sure an invalidation
# message is generated
revid = self._dostore(oid)
revid = self._dostore(oid, revid)
forker.wait_until(
"Client has seen all of the transactions from the server",
lambda :
perstorage.lastTransaction() == self._storage.lastTransaction()
)
perstorage.load(oid, '')
self.shutdownServer()
self.pollDown()
self._storage.verify_result = None
perstorage.verify_result = None
logging.info('2ALLBEEF')
self.startServer(create=0, keep=0)
self.pollUp()
self.pollUp(storage=perstorage)
# There were no transactions committed, so no verification
# should be needed.
self.assertEqual(self._storage.verify_result, "Cache up to date")
self.assertEqual(perstorage.verify_result, "Cache up to date")
perstorage.close()
self._storage.close()
def checkDisconnectedAbort(self):
self._storage = self.openClientStorage()
self._dostore()
oids = [self._storage.new_oid() for i in range(5)]
txn = Transaction()
self._storage.tpc_begin(txn)
for oid in oids:
data = zodb_pickle(MinPO(oid))
self._storage.store(oid, None, data, '', txn)
self.shutdownServer()
with short_timeout(self):
self.assertRaises(ClientDisconnected, self._storage.tpc_vote, txn)
self.startServer(create=0)
self._storage.tpc_abort(txn)
self._dostore()
# This test is supposed to cover the following error, although
# I don't have much confidence that it does. The likely
# explanation for the error is that the _tbuf contained
# objects that weren't in the _seriald, because the client was
# interrupted waiting for tpc_vote() to return. When the next
# transaction committed, it tried to do something with the
# bogus _tbuf entries. The explanation is wrong/incomplete,
# because tpc_begin() should clear the _tbuf.
# 2003-01-15T15:44:19 ERROR(200) ZODB A storage error occurred
# in the last phase of a two-phase commit. This shouldn't happen.
# Traceback (innermost last):
# Module ZODB.Transaction, line 359, in _finish_one
# Module ZODB.Connection, line 691, in tpc_finish
# Module ZEO.ClientStorage, line 679, in tpc_finish
# Module ZEO.ClientStorage, line 709, in _update_cache
# KeyError: ...
def checkReconnection(self):
# Check that the client reconnects when a server restarts.
self._storage = self.openClientStorage()
oid = self._storage.new_oid()
obj = MinPO(12)
self._dostore(oid, data=obj)
logging.info("checkReconnection(): About to shutdown server")
self.shutdownServer()
logging.info("checkReconnection(): About to restart server")
self.startServer(create=0)
forker.wait_until('reconnect', self._storage.is_connected)
oid = self._storage.new_oid()
obj = MinPO(12)
while 1:
try:
self._dostore(oid, data=obj)
break
except ClientDisconnected:
# Maybe the exception mess is better now
logging.info("checkReconnection(): Error after"
" server restart; retrying.", exc_info=True)
transaction.abort()
# Give the other thread a chance to run.
time.sleep(0.1)
logging.info("checkReconnection(): finished")
self._storage.close()
def checkMultipleServers(self):
# Crude test-- just start two servers and do a commit at each one.
self._newAddr()
self._storage = self.openClientStorage('test', 100000)
self._dostore()
self.shutdownServer(index=0)
# When we start the second server, we use the data file from
# the original server so that the new server is a replica of
# the original. We need this because ClientStorage won't use
# a server if the server's last transaction is earlier than
# what the client has seen.
self.startServer(index=1, path=self.file+'.0', create=False)
# If we can still store after shutting down one of the
# servers, we must be reconnecting to the other server.
did_a_store = 0
for i in range(10):
try:
self._dostore()
did_a_store = 1
break
except ClientDisconnected:
time.sleep(0.5)
self.assert_(did_a_store)
self._storage.close()
class TimeoutTests(CommonSetupTearDown):
timeout = 1
def checkTimeout(self):
self._storage = storage = self.openClientStorage()
txn = Transaction()
storage.tpc_begin(txn)
storage.tpc_vote(txn)
time.sleep(2)
with short_timeout(self):
self.assertRaises(ClientDisconnected, storage.tpc_finish, txn)
# Make sure it's logged as CRITICAL
for line in open("server-0.log"):
if (('Transaction timeout after' in line) and
('CRITICAL ZEO.StorageServer' in line)
):
break
else:
self.assert_(False, 'bad logging')
storage.close()
def checkTimeoutOnAbort(self):
storage = self.openClientStorage()
txn = Transaction()
storage.tpc_begin(txn)
storage.tpc_vote(txn)
storage.tpc_abort(txn)
storage.close()
def checkTimeoutOnAbortNoLock(self):
storage = self.openClientStorage()
txn = Transaction()
storage.tpc_begin(txn)
storage.tpc_abort(txn)
storage.close()
def checkTimeoutAfterVote(self):
self._storage = storage = self.openClientStorage()
# Assert that the zeo cache is empty
self.assert_(not list(storage._cache.contents()))
# Create the object
oid = storage.new_oid()
obj = MinPO(7)
# Now do a store, sleeping before the finish so as to cause a timeout
t = Transaction()
old_connection_count = storage.connection_count_for_tests
storage.tpc_begin(t)
revid1 = storage.store(oid, ZERO, zodb_pickle(obj), '', t)
storage.tpc_vote(t)
# Now sleep long enough for the storage to time out
time.sleep(3)
self.assert_(
(not storage.is_connected())
or
(storage.connection_count_for_tests > old_connection_count)
)
storage._wait()
self.assert_(storage.is_connected())
# We expect finish to fail
self.assertRaises(ClientDisconnected, storage.tpc_finish, t)
# The cache should still be empty
self.assert_(not list(storage._cache.contents()))
# Load should fail since the object should not be in either the cache
# or the server.
self.assertRaises(KeyError, storage.load, oid, '')
class MSTThread(threading.Thread):
__super_init = threading.Thread.__init__
def __init__(self, testcase, name):
self.__super_init(name=name)
self.testcase = testcase
self.clients = []
def run(self):
tname = self.getName()
testcase = self.testcase
# Create client connections to each server
clients = self.clients
for i in range(len(testcase.addr)):
c = testcase.openClientStorage(addr=testcase.addr[i])
c.__name = "C%d" % i
clients.append(c)
for i in range(testcase.ntrans):
# Because we want a transaction spanning all storages,
# we can't use _dostore(). This is several _dostore() calls
# expanded in-line (mostly).
# Create oid->serial mappings
for c in clients:
c.__oids = []
c.__serials = {}
# Begin a transaction
t = Transaction()
for c in clients:
#print("%s.%s.%s begin" % (tname, c.__name, i))
c.tpc_begin(t)
for j in range(testcase.nobj):
for c in clients:
# Create and store a new object on each server
oid = c.new_oid()
c.__oids.append(oid)
data = MinPO("%s.%s.t%d.o%d" % (tname, c.__name, i, j))
#print(data.value)
data = zodb_pickle(data)
s = c.store(oid, ZERO, data, '', t)
c.__serials.update(handle_all_serials(oid, s))
# Vote on all servers and handle serials
for c in clients:
#print("%s.%s.%s vote" % (tname, c.__name, i))
s = c.tpc_vote(t)
c.__serials.update(handle_all_serials(None, s))
# Finish on all servers
for c in clients:
#print("%s.%s.%s finish\n" % (tname, c.__name, i))
c.tpc_finish(t)
for c in clients:
# Check that we got serials for all oids
for oid in c.__oids:
testcase.failUnless(oid in c.__serials)
# Check that we got serials for no other oids
for oid in c.__serials.keys():
testcase.failUnless(oid in c.__oids)
def closeclients(self):
# Close clients opened by run()
for c in self.clients:
try:
c.close()
except:
pass
@contextlib.contextmanager
def short_timeout(self):
    # Temporarily shorten the server timeout; restore the old value
    # even if the body raises.
    old = self._storage._server.timeout
    self._storage._server.timeout = 1
    try:
        yield
    finally:
        self._storage._server.timeout = old
# Run IPv6 tests if V6 sockets are supported
try:
socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
except (socket.error, AttributeError):
pass
else:
class V6Setup:
def _getAddr(self):
return '::1', forker.get_port(self)
_g = globals()
for name, value in tuple(_g.items()):
if isinstance(value, type) and issubclass(value, CommonSetupTearDown):
_g[name+"V6"] = type(name+"V6", (V6Setup, value), {})
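The loop above builds an IPv6 variant of every test class at import time using the three-argument form of `type()`. A minimal, self-contained sketch of the mechanism, with stand-in classes (the names `Base`, `V6Mixin`, and `BaseV6` are illustrative, not part of ZEO):

```python
class Base(object):
    addr = '127.0.0.1'

class V6Mixin(object):
    addr = '::1'

# Equivalent to writing `class BaseV6(V6Mixin, Base): pass` by hand;
# the mixin comes first in the bases so its attributes win in the MRO.
BaseV6 = type('BaseV6', (V6Mixin, Base), {})
```

Because the mixin precedes the original class in the bases tuple, the generated class picks up the IPv6 address while inheriting everything else unchanged.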
ZEO-5.0.0a0/src/ZEO/tests/InvalidationTests.py
##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import threading
import time
from random import Random
import transaction
from BTrees.check import check, display
from BTrees.OOBTree import OOBTree
from ZEO.tests.TestThread import TestThread
from ZODB.DB import DB
from ZODB.POSException import ReadConflictError, ConflictError
# The tests here let several threads have a go at one or more database
# instances simultaneously. Each thread appends a disjoint (from the
# other threads) sequence of increasing integers to an OOBTree, one at
# at time (per thread). This provokes lots of conflicts, and BTrees
# work hard at conflict resolution too. An OOBTree is used because
# that flavor has the smallest maximum bucket size, and so splits buckets
# more often than other BTree flavors.
#
# When these tests were first written, they provoked an amazing number
# of obscure timing-related bugs in cache consistency logic, revealed
# by failure of the BTree to pass internal consistency checks at the end,
# and/or by failure of the BTree to contain all the keys the threads
# thought they added (i.e., the keys for which transaction.commit()
# did not raise any exception).
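A minimal sketch, independent of ZODB, of the key-space partitioning the comment above describes: each worker appends a disjoint arithmetic sequence `startnum, startnum + step, ...` to a shared mapping, so the threads never write the same key. All names here are illustrative (a plain dict with a lock stands in for the BTree and conflict resolution):

```python
import threading

def worker(tree, lock, threadnum, startnum, step, count):
    # Append `count` keys from a disjoint arithmetic sequence.
    key = startnum
    for _ in range(count):
        with lock:
            tree[key] = threadnum
        key += step

tree = {}
lock = threading.Lock()
threads = [
    threading.Thread(target=worker, args=(tree, lock, n, n, 2, 10))
    for n in (1, 2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Thread 1 wrote only odd keys, thread 2 only even keys.
odd_keys = sorted(k for k, v in tree.items() if v == 1)
even_keys = sorted(k for k, v in tree.items() if v == 2)
```

In the real tests the disjointness comes from the `startnum`/`step` arguments passed to each `StressThread`, which keeps cross-thread conflicts down to BTree bucket splits.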
class FailableThread(TestThread):
# mixin class
# subclass must provide
# - self.stop attribute (an event)
# - self._testrun() method
# TestThread.run() invokes testrun().
def testrun(self):
try:
self._testrun()
except:
# Report the failure here to all the other threads, so
# that they stop quickly.
self.stop.set()
raise
class StressTask:
# Append integers startnum, startnum + step, startnum + 2*step, ...
# to 'tree'. If sleep is given, sleep
# that long after each append. At the end, instance var .added_keys
# is a list of the ints the thread believes it added successfully.
def __init__(self, db, threadnum, startnum, step=2, sleep=None):
self.db = db
self.threadnum = threadnum
self.startnum = startnum
self.step = step
self.sleep = sleep
self.added_keys = []
self.tm = transaction.TransactionManager()
self.cn = self.db.open(transaction_manager=self.tm)
self.cn.sync()
def doStep(self):
tree = self.cn.root()["tree"]
key = self.startnum
tree[key] = self.threadnum
def commit(self):
cn = self.cn
key = self.startnum
self.tm.get().note("add key %s" % key)
try:
self.tm.get().commit()
except ConflictError as msg:
self.tm.abort()
else:
if self.sleep:
time.sleep(self.sleep)
self.added_keys.append(key)
self.startnum += self.step
def cleanup(self):
self.tm.get().abort()
self.cn.close()
def _runTasks(rounds, *tasks):
'''Run *tasks* interleaved for *rounds* rounds.'''
def commit(run, actions):
actions.append(':')
for t in run:
t.commit()
del run[:]
r = Random()
r.seed(1064589285) # make it deterministic
run = []
actions = []
try:
for i in range(rounds):
t = r.choice(tasks)
if t in run:
commit(run, actions)
run.append(t)
t.doStep()
actions.append(repr(t.startnum))
commit(run, actions)
# stderr.write(' '.join(actions)+'\n')
finally:
for t in tasks:
t.cleanup()
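`_runTasks` seeds its private `Random` instance so the task interleaving is identical on every run, which makes failures reproducible. A small sketch of that property (the helper name `choose_sequence` is illustrative):

```python
from random import Random

def choose_sequence(seed, options, rounds):
    # A seeded Random instance yields the same sequence of choices
    # every time, independent of the global random state.
    r = Random()
    r.seed(seed)
    return [r.choice(options) for _ in range(rounds)]

a = choose_sequence(1064589285, ['t1', 't2'], 20)
b = choose_sequence(1064589285, ['t1', 't2'], 20)
```

Using a private `Random()` rather than the module-level functions also keeps the test deterministic even if other code reseeds the global generator.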
class StressThread(FailableThread):
# Append integers startnum, startnum + step, startnum + 2*step, ...
# to 'tree' until Event stop is set. If sleep is given, sleep
# that long after each append. At the end, instance var .added_keys
# is a list of the ints the thread believes it added successfully.
def __init__(self, testcase, db, stop, threadnum, commitdict,
startnum, step=2, sleep=None):
TestThread.__init__(self, testcase)
self.db = db
self.stop = stop
self.threadnum = threadnum
self.startnum = startnum
self.step = step
self.sleep = sleep
self.added_keys = []
self.commitdict = commitdict
def _testrun(self):
tm = transaction.TransactionManager()
cn = self.db.open(transaction_manager=tm)
while not self.stop.isSet():
try:
tree = cn.root()["tree"]
break
except (ConflictError, KeyError):
tm.abort()
key = self.startnum
while not self.stop.isSet():
try:
tree[key] = self.threadnum
tm.get().note("add key %s" % key)
tm.commit()
self.commitdict[self] = 1
if self.sleep:
time.sleep(self.sleep)
except (ReadConflictError, ConflictError) as msg:
tm.abort()
else:
self.added_keys.append(key)
key += self.step
cn.close()
class LargeUpdatesThread(FailableThread):
# A thread that performs a lot of updates. It attempts to modify
# more than 25 objects so that it can test code that runs vote
# in a separate thread when it modifies more than 25 objects.
def __init__(self, test, db, stop, threadnum, commitdict, startnum,
step=2, sleep=None):
TestThread.__init__(self, test)
self.db = db
self.stop = stop
self.threadnum = threadnum
self.startnum = startnum
self.step = step
self.sleep = sleep
self.added_keys = []
self.commitdict = commitdict
def _testrun(self):
cn = self.db.open()
while not self.stop.isSet():
try:
tree = cn.root()["tree"]
break
except (ConflictError, KeyError):
# print("%d getting tree abort" % self.threadnum)
transaction.abort()
keys_added = {} # set of keys we commit
tkeys = []
while not self.stop.isSet():
# The test picks 50 keys spread across many buckets.
# self.startnum and self.step ensure that all threads use
# disjoint key sets, to minimize conflict errors.
nkeys = len(tkeys)
if nkeys < 50:
tkeys = list(range(self.startnum, 3000, self.step))
nkeys = len(tkeys)
step = max(int(nkeys / 50), 1)
keys = [tkeys[i] for i in range(0, nkeys, step)]
for key in keys:
try:
tree[key] = self.threadnum
except (ReadConflictError, ConflictError) as msg:
# print("%d setting key %s" % (self.threadnum, msg))
transaction.abort()
break
else:
# print("%d set #%d" % (self.threadnum, len(keys)))
transaction.get().note("keys %s" % ", ".join(map(str, keys)))
try:
transaction.commit()
self.commitdict[self] = 1
if self.sleep:
time.sleep(self.sleep)
except ConflictError as msg:
# print("%d commit %s" % (self.threadnum, msg))
transaction.abort()
continue
for k in keys:
tkeys.remove(k)
keys_added[k] = 1
self.added_keys = keys_added.keys()
cn.close()
class InvalidationTests:
# Minimum # of seconds the main thread lets the workers run. The
# test stops as soon as this much time has elapsed, and all threads
# have managed to commit a change.
MINTIME = 10
# Maximum # of seconds the main thread lets the workers run. We
# stop after this long has elapsed regardless of whether all threads
# have managed to commit a change.
MAXTIME = 300
StressThread = StressThread
def _check_tree(self, cn, tree):
# Make sure the BTree is sane at the C level.
retries = 3
while retries:
retries -= 1
try:
check(tree)
tree._check()
except ReadConflictError:
if retries:
transaction.abort()
else:
raise
except:
display(tree)
raise
def _check_threads(self, tree, *threads):
# Make sure the thread's view of the world is consistent with
# the actual database state.
expected_keys = []
errormsgs = []
err = errormsgs.append
for t in threads:
if not t.added_keys:
err("thread %d didn't add any keys" % t.threadnum)
expected_keys.extend(t.added_keys)
expected_keys.sort()
for i in range(100):
tree._p_jar.sync()
actual_keys = list(tree.keys())
if expected_keys == actual_keys:
break
time.sleep(.1)
else:
err("expected keys != actual keys")
for k in expected_keys:
if k not in actual_keys:
err("key %s expected but not in tree" % k)
for k in actual_keys:
if k not in expected_keys:
err("key %s in tree but not expected" % k)
if errormsgs:
    self.fail('\n'.join(errormsgs))
def go(self, stop, commitdict, *threads):
# Run the threads
for t in threads:
t.start()
delay = self.MINTIME
start = time.time()
while time.time() - start <= self.MAXTIME:
stop.wait(delay)
if stop.isSet():
# Some thread failed. Stop right now.
break
delay = 2.0
if len(commitdict) >= len(threads):
break
# Some thread still hasn't managed to commit anything.
stop.set()
# Give all the threads some time to stop before trying to clean up.
# cleanup() will cause the test to fail if some thread ended with
# an uncaught exception, and unittest will call the base class
# tearDown then immediately, but if other threads are still
# running that can lead to a cascade of spurious exceptions.
for t in threads:
t.join(30)
for t in threads:
t.cleanup(10)
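`go()` above combines a minimum run time with a hard deadline: it keeps polling until every thread has committed, but never waits longer than `MAXTIME`. A hedged, self-contained sketch of that bounded-wait pattern (the name `wait_until_sketch` is illustrative and is not ZEO's `forker.wait_until`):

```python
import time

def wait_until_sketch(condition, maxtime=5.0, interval=0.01):
    # Poll `condition` until it returns true, giving up after
    # `maxtime` seconds. Returns whether the condition was met.
    start = time.time()
    while time.time() - start <= maxtime:
        if condition():
            return True
        time.sleep(interval)
    return False

flags = iter([False, False, True])
succeeded = wait_until_sketch(lambda: next(flags), maxtime=1.0)
timed_out = wait_until_sketch(lambda: False, maxtime=0.05)
```

The hard deadline is what keeps a wedged worker thread from hanging the whole test run, at the cost of a spurious failure if the machine is extremely slow.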
def checkConcurrentUpdates2Storages_emulated(self):
self._storage = storage1 = self.openClientStorage()
db1 = DB(storage1)
storage2 = self.openClientStorage()
db2 = DB(storage2)
cn = db1.open()
tree = cn.root()["tree"] = OOBTree()
transaction.commit()
# DM: allow time for invalidations to come in and process them
time.sleep(0.1)
# Run two threads that update the BTree
t1 = StressTask(db1, 1, 1,)
t2 = StressTask(db2, 2, 2,)
_runTasks(100, t1, t2)
cn.sync()
self._check_tree(cn, tree)
self._check_threads(tree, t1, t2)
cn.close()
db1.close()
db2.close()
def checkConcurrentUpdates2Storages(self):
self._storage = storage1 = self.openClientStorage()
db1 = DB(storage1)
storage2 = self.openClientStorage()
db2 = DB(storage2)
stop = threading.Event()
cn = db1.open()
tree = cn.root()["tree"] = OOBTree()
transaction.commit()
cn.close()
# Run two threads that update the BTree
cd = {}
t1 = self.StressThread(self, db1, stop, 1, cd, 1)
t2 = self.StressThread(self, db2, stop, 2, cd, 2)
self.go(stop, cd, t1, t2)
while db1.lastTransaction() != db2.lastTransaction():
db1._storage.sync()
db2._storage.sync()
cn = db1.open()
tree = cn.root()["tree"]
self._check_tree(cn, tree)
self._check_threads(tree, t1, t2)
cn.close()
db1.close()
db2.close()
def checkConcurrentUpdates19Storages(self):
n = 19
dbs = [DB(self.openClientStorage()) for i in range(n)]
self._storage = dbs[0].storage
stop = threading.Event()
cn = dbs[0].open()
tree = cn.root()["tree"] = OOBTree()
transaction.commit()
cn.close()
# Run threads that update the BTree
cd = {}
threads = [self.StressThread(self, dbs[i], stop, i, cd, i, n)
for i in range(n)]
self.go(stop, cd, *threads)
while len(set(db.lastTransaction() for db in dbs)) > 1:
_ = [db._storage.sync() for db in dbs]
cn = dbs[0].open()
tree = cn.root()["tree"]
self._check_tree(cn, tree)
self._check_threads(tree, *threads)
cn.close()
_ = [db.close() for db in dbs]
def checkConcurrentUpdates1Storage(self):
self._storage = storage1 = self.openClientStorage()
db1 = DB(storage1)
stop = threading.Event()
cn = db1.open()
tree = cn.root()["tree"] = OOBTree()
transaction.commit()
cn.close()
# Run two threads that update the BTree
cd = {}
t1 = self.StressThread(self, db1, stop, 1, cd, 1, sleep=0.01)
t2 = self.StressThread(self, db1, stop, 2, cd, 2, sleep=0.01)
self.go(stop, cd, t1, t2)
cn = db1.open()
tree = cn.root()["tree"]
self._check_tree(cn, tree)
self._check_threads(tree, t1, t2)
cn.close()
db1.close()
def checkConcurrentUpdates2StoragesMT(self):
self._storage = storage1 = self.openClientStorage()
db1 = DB(storage1)
db2 = DB(self.openClientStorage())
stop = threading.Event()
cn = db1.open()
tree = cn.root()["tree"] = OOBTree()
transaction.commit()
cn.close()
# Run three threads that update the BTree.
# Two of the threads share a single storage so that it
# is possible for both threads to read the same object
# at the same time.
cd = {}
t1 = self.StressThread(self, db1, stop, 1, cd, 1, 3)
t2 = self.StressThread(self, db2, stop, 2, cd, 2, 3, 0.01)
t3 = self.StressThread(self, db2, stop, 3, cd, 3, 3, 0.01)
self.go(stop, cd, t1, t2, t3)
while db1.lastTransaction() != db2.lastTransaction():
time.sleep(.1)
time.sleep(.1)
cn = db1.open()
tree = cn.root()["tree"]
self._check_tree(cn, tree)
self._check_threads(tree, t1, t2, t3)
cn.close()
db1.close()
db2.close()
def checkConcurrentLargeUpdates(self):
# Use 3 threads like the 2StorageMT test above.
self._storage = storage1 = self.openClientStorage()
db1 = DB(storage1)
db2 = DB(self.openClientStorage())
stop = threading.Event()
cn = db1.open()
tree = cn.root()["tree"] = OOBTree()
for i in range(0, 3000, 2):
tree[i] = 0
transaction.commit()
cn.close()
# Run three threads that update the BTree.
# Two of the threads share a single storage so that it
# is possible for both threads to read the same object
# at the same time.
cd = {}
t1 = LargeUpdatesThread(self, db1, stop, 1, cd, 1, 3, 0.02)
t2 = LargeUpdatesThread(self, db2, stop, 2, cd, 2, 3, 0.01)
t3 = LargeUpdatesThread(self, db2, stop, 3, cd, 3, 3, 0.01)
self.go(stop, cd, t1, t2, t3)
while db1.lastTransaction() != db2.lastTransaction():
db1._storage.sync()
db2._storage.sync()
cn = db1.open()
tree = cn.root()["tree"]
self._check_tree(cn, tree)
# Purge the tree of the dummy entries mapping to 0.
losers = [k for k, v in tree.items() if v == 0]
for k in losers:
del tree[k]
transaction.commit()
self._check_threads(tree, t1, t2, t3)
cn.close()
db1.close()
db2.close()
ZEO-5.0.0a0/src/ZEO/tests/IterationTests.py
##############################################################################
#
# Copyright (c) 2008 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""ZEO iterator protocol tests."""
import transaction
import six
import gc
from ..asyncio.testing import AsyncRPC
class IterationTests:
def _assertIteratorIdsEmpty(self):
# Account for the need to run a GC collection
# under non-refcounted implementations like PyPy
# for storage._iterator_gc to fully do its job.
# First, confirm that it ran
self.assertTrue(self._storage._iterators._last_gc > 0)
gc_enabled = gc.isenabled()
gc.disable() # make sure there are no races cleaning out the weak refs
try:
self.assertEquals(0, len(self._storage._iterator_ids))
except AssertionError:
# Ok, we have ids. That should also mean that the
# weak dictionary has the same length.
self.assertEqual(len(self._storage._iterators), len(self._storage._iterator_ids))
# Now if we do a collection and re-ask for iterator_gc
# everything goes away as expected.
gc.enable()
gc.collect()
gc.collect() # sometimes PyPy needs it twice to clear weak refs
self._storage._iterator_gc()
self.assertEqual(len(self._storage._iterators), len(self._storage._iterator_ids))
self.assertEquals(0, len(self._storage._iterator_ids))
finally:
if gc_enabled:
gc.enable()
else:
gc.disable()
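The assertions above hinge on weak references being cleared once the last strong reference to an iterator goes away; CPython does this immediately via reference counting, while PyPy needs an explicit `gc.collect()`, which is why the method retries after collecting. A minimal sketch of the underlying behavior (the `FakeIterator` class is a stand-in, not a ZEO type):

```python
import gc
import weakref

class FakeIterator(object):
    """Stands in for a server-side iterator handle."""

registry = weakref.WeakValueDictionary()
it = FakeIterator()
registry[1] = it

before = len(registry)   # 1 while a strong reference exists
del it
gc.collect()             # needed on non-refcounted VMs such as PyPy
after = len(registry)    # 0 once the weak reference is cleared
```

ZEO's `_iterators` mapping works the same way: dropping the last client-side reference to an iterator is what eventually lets `_iterator_gc` tell the server to dispose of it.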
def checkIteratorGCProtocol(self):
# Test garbage collection on protocol level.
server = AsyncRPC(self._storage._server)
iid = server.iterator_start(None, None)
# None signals the end of iteration.
self.assertEquals(None, server.iterator_next(iid))
# The server has disposed the iterator already.
self.assertRaises(KeyError, server.iterator_next, iid)
iid = server.iterator_start(None, None)
# This time, we tell the server to throw the iterator away.
server.iterator_gc([iid])
self.assertRaises(KeyError, server.iterator_next, iid)
def checkIteratorExhaustionStorage(self):
# Test the storage's garbage collection mechanism.
self._dostore()
iterator = self._storage.iterator()
# At this point, a wrapping iterator might not have called the CS
# iterator yet. We'll consume one item to make sure this happens.
six.advance_iterator(iterator)
self.assertEquals(1, len(self._storage._iterator_ids))
iid = list(self._storage._iterator_ids)[0]
self.assertEquals([], list(iterator))
self.assertEquals(0, len(self._storage._iterator_ids))
# The iterator has run through, so the server has already disposed it.
self.assertRaises(KeyError, self._storage._call, 'iterator_next', iid)
def checkIteratorGCSpanTransactions(self):
# Keep a hard reference to the iterator so it won't be automatically
# garbage collected at the transaction boundary.
self._dostore()
iterator = self._storage.iterator()
self._dostore()
# As the iterator was not garbage collected, we can still use it. (We
# don't see the transaction we just wrote being picked up, because
# iterators only see the state from the point in time when they were
# created.)
self.assert_(list(iterator))
def checkIteratorGCStorageCommitting(self):
# We want the iterator to be garbage-collected, so we don't keep any
# hard references to it. The storage tracks its ID, though.
# The odd little jig we do below arises from the fact that the
# CS iterator may not be constructed right away if the CS is wrapped.
# We need to actually do some iteration to get the iterator created.
# We do a store to make sure the iterator isn't exhausted right away.
self._dostore()
six.advance_iterator(self._storage.iterator())
self.assertEquals(1, len(self._storage._iterator_ids))
iid = list(self._storage._iterator_ids)[0]
# GC happens at the transaction boundary. After that, both the storage
# and the server have forgotten the iterator.
self._storage._iterators._last_gc = -1
self._dostore()
self._assertIteratorIdsEmpty()
self.assertRaises(KeyError, self._storage._call, 'iterator_next', iid)
def checkIteratorGCStorageTPCAborting(self):
# The odd little jig we do below arises from the fact that the
# CS iterator may not be constructed right away if the CS is wrapped.
# We need to actually do some iteration to get the iterator created.
# We do a store to make sure the iterator isn't exhausted right away.
self._dostore()
six.advance_iterator(self._storage.iterator())
iid = list(self._storage._iterator_ids)[0]
t = transaction.Transaction()
self._storage._iterators._last_gc = -1
self._storage.tpc_begin(t)
self._storage.tpc_abort(t)
self._assertIteratorIdsEmpty()
self.assertRaises(KeyError, self._storage._call, 'iterator_next', iid)
def checkIteratorGCStorageDisconnect(self):
# The odd little jig we do below arises from the fact that the
# CS iterator may not be constructed right away if the CS is wrapped.
# We need to actually do some iteration to get the iterator created.
# We do a store to make sure the iterator isn't exhausted right away.
self._dostore()
six.advance_iterator(self._storage.iterator())
iid = list(self._storage._iterator_ids)[0]
t = transaction.Transaction()
self._storage.tpc_begin(t)
# Show that after disconnecting, the client side GCs the iterators
# as well. I'm calling this directly to avoid accidentally
# calling tpc_abort implicitly.
self._storage.notify_disconnected()
self.assertEquals(0, len(self._storage._iterator_ids))
def checkIteratorParallel(self):
self._dostore()
self._dostore()
iter1 = self._storage.iterator()
iter2 = self._storage.iterator()
txn_info1 = six.advance_iterator(iter1)
txn_info2 = six.advance_iterator(iter2)
self.assertEquals(txn_info1.tid, txn_info2.tid)
txn_info1 = six.advance_iterator(iter1)
txn_info2 = six.advance_iterator(iter2)
self.assertEquals(txn_info1.tid, txn_info2.tid)
self.assertRaises(StopIteration, next, iter1)
self.assertRaises(StopIteration, next, iter2)
def iterator_sane_after_reconnect():
r"""Make sure that iterators are invalidated on disconnect.
Start a server:
>>> addr, adminaddr = start_server(
... '<filestorage>\npath fs\n</filestorage>\n', keep=1)
Open a client storage to it and commit some transactions:
>>> import ZEO, ZODB, transaction
>>> client = ZEO.client(addr)
>>> db = ZODB.DB(client)
>>> conn = db.open()
>>> for i in range(10):
... conn.root().i = i
... transaction.commit()
Create an iterator:
>>> it = client.iterator()
>>> tid1 = it.next().tid
Restart the storage:
>>> stop_server(adminaddr)
>>> wait_disconnected(client)
>>> _ = start_server('<filestorage>\npath fs\n</filestorage>\n', addr=addr)
>>> wait_connected(client)
Now, we'll create a second iterator:
>>> it2 = client.iterator()
If we try to advance the first iterator, we should get an error:
>>> it.next().tid > tid1
Traceback (most recent call last):
...
ClientDisconnected: Disconnected iterator
The second iterator should be peachy:
>>> it2.next().tid == tid1
True
Cleanup:
>>> db.close()
"""
ZEO-5.0.0a0/src/ZEO/tests/TestThread.py
##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""A Thread base class for use with unittest."""
import threading
import sys
import six
class TestThread(threading.Thread):
"""Base class for defining threads that run from unittest.
The subclass should define a testrun() method instead of a run()
method.
Call cleanup() when the test is done with the thread, instead of join().
If the thread exits with an uncaught exception, it's captured and
re-raised when cleanup() is called. cleanup() should be called by
the main thread! Trying to tell unittest that a test failed from
another thread creates a nightmare of timing-depending cascading
failures and missed errors (tracebacks that show up on the screen,
but don't cause unittest to believe the test failed).
cleanup() also joins the thread. If the thread ended without raising
an uncaught exception, and the join doesn't succeed in the timeout
period, then the test is made to fail with a "Thread still alive"
message.
"""
def __init__(self, testcase):
threading.Thread.__init__(self)
# In case this thread hangs, don't stop Python from exiting.
self.setDaemon(1)
self._exc_info = None
self._testcase = testcase
def run(self):
try:
self.testrun()
except:
self._exc_info = sys.exc_info()
def cleanup(self, timeout=15):
self.join(timeout)
if self._exc_info:
six.reraise(self._exc_info[0], self._exc_info[1], self._exc_info[2])
if self.isAlive():
self._testcase.fail("Thread did not finish: %s" % self)
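The pattern above is easiest to see end-to-end. A minimal, self-contained sketch of the intended usage (`FailingThread` and its `testrun` body are hypothetical, and the `TestThread` here is a trimmed restatement of the class above so the example runs on its own):

```python
import sys
import threading

class TestThread(threading.Thread):
    # Trimmed restatement of the class above: capture an uncaught
    # exception in the worker thread and re-raise it from cleanup().
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True
        self._exc_info = None

    def run(self):
        try:
            self.testrun()
        except:
            self._exc_info = sys.exc_info()

    def cleanup(self, timeout=15):
        # Called from the main thread; surfaces the worker's failure
        # in the test that owns the thread.
        self.join(timeout)
        if self._exc_info:
            raise self._exc_info[1]

class FailingThread(TestThread):
    # Hypothetical subclass: testrun() is where the thread body goes.
    def testrun(self):
        raise ValueError("failure in worker thread")

t = FailingThread()
t.start()
try:
    t.cleanup()
except ValueError as e:
    print("re-raised in main thread:", e)
```

The failure raised inside `testrun` is caught by `run`, stored, and re-raised when the main thread calls `cleanup`, which is what lets unittest attribute it to the right test.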
ZEO-5.0.0a0/src/ZEO/tests/ThreadTests.py
##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Compromising positions involving threads."""
import threading
import transaction
from ZODB.tests.StorageTestBase import zodb_pickle, MinPO
import ZEO.Exceptions
ZERO = '\0'*8
class BasicThread(threading.Thread):
def __init__(self, storage, doNextEvent, threadStartedEvent):
self.storage = storage
self.trans = transaction.Transaction()
self.doNextEvent = doNextEvent
self.threadStartedEvent = threadStartedEvent
self.gotValueError = 0
self.gotDisconnected = 0
threading.Thread.__init__(self)
self.setDaemon(1)
def join(self):
threading.Thread.join(self, 10)
assert not self.isAlive()
class GetsThroughVoteThread(BasicThread):
# This thread gets partially through a transaction before it turns
# execution over to another thread. We're trying to establish that a
# tpc_finish() after a storage has been closed by another thread will get
a ClientStorageError.
#
# This class does a tpc_begin(), store(), and tpc_vote(), and is waiting
# to do the tpc_finish() when the other thread closes the storage.
def run(self):
self.storage.tpc_begin(self.trans)
oid = self.storage.new_oid()
self.storage.store(oid, ZERO, zodb_pickle(MinPO("c")), '', self.trans)
self.storage.tpc_vote(self.trans)
self.threadStartedEvent.set()
self.doNextEvent.wait(10)
try:
self.storage.tpc_finish(self.trans)
except ZEO.Exceptions.ClientStorageError:
self.gotValueError = 1
self.storage.tpc_abort(self.trans)
class GetsThroughBeginThread(BasicThread):
# This class is like the above except that it is intended to be run when
# another thread is already in a tpc_begin(). Thus, this thread will
# block in the tpc_begin until another thread closes the storage. When
# that happens, this one will get disconnected too.
def run(self):
try:
self.storage.tpc_begin(self.trans)
except ZEO.Exceptions.ClientStorageError:
self.gotValueError = 1
class ThreadTests:
# Thread 1 should start a transaction, but not get all the way through it.
# Main thread should close the connection. Thread 1 should then get
# disconnected.
def checkDisconnectedOnThread2Close(self):
doNextEvent = threading.Event()
threadStartedEvent = threading.Event()
thread1 = GetsThroughVoteThread(self._storage,
doNextEvent, threadStartedEvent)
thread1.start()
threadStartedEvent.wait(10)
self._storage.close()
doNextEvent.set()
thread1.join()
self.assertEqual(thread1.gotValueError, 1)
# Thread 1 should start a transaction, but not get all the way through
# it. While thread 1 is in the middle of the transaction, a second thread
# should start a transaction, and it will block in the tpc_begin() --
# because thread 1 has acquired the lock in its tpc_begin(). Now the main
# thread closes the storage and both sub-threads should get disconnected.
def checkSecondBeginFails(self):
doNextEvent = threading.Event()
threadStartedEvent = threading.Event()
thread1 = GetsThroughVoteThread(self._storage,
doNextEvent, threadStartedEvent)
thread2 = GetsThroughBeginThread(self._storage,
doNextEvent, threadStartedEvent)
thread1.start()
threadStartedEvent.wait(1)
thread2.start()
self._storage.close()
doNextEvent.set()
thread1.join()
thread2.join()
self.assertEqual(thread1.gotValueError, 1)
self.assertEqual(thread2.gotValueError, 1)
# Run a bunch of threads doing small and large stores in parallel
def checkMTStores(self):
threads = []
for i in range(5):
t = threading.Thread(target=self.mtstorehelper)
threads.append(t)
t.start()
for t in threads:
t.join(30)
for t in threads:
self.failUnless(not t.isAlive())
# Helper for checkMTStores
def mtstorehelper(self):
name = threading.currentThread().getName()
objs = []
for i in range(10):
objs.append(MinPO("X" * 200000))
objs.append(MinPO("X"))
for obj in objs:
self._dostore(data=obj)
ZEO-5.0.0a0/src/ZEO/tests/__init__.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
ZEO-5.0.0a0/src/ZEO/tests/client-config.test
ZEO Client Configuration
========================
Here we'll describe (and test) the various ZEO Client configuration
options. To facilitate this, we'l start a server that our client can
connect to:
>>> addr, _ = start_server(blob_dir='server-blobs')
The simplest client configuration specifies just a server address:
>>> import ZODB.config
>>> storage = ZODB.config.storageFromString("""
... <zeoclient>
... server %s:%s
... </zeoclient>
... """ % addr)
>>> storage.getName(), storage.__class__.__name__
... # doctest: +ELLIPSIS
("[('localhost', ...)] (connected)", 'ClientStorage')
>>> storage.blob_dir
>>> storage._storage
'1'
>>> storage._cache.maxsize
20971520
>>> storage._cache.path
>>> storage._is_read_only
False
>>> storage._read_only_fallback
False
>>> storage._blob_cache_size
>>> storage.close()
>>> storage = ZODB.config.storageFromString("""
... <zeoclient>
... server %s:%s
... blob-dir blobs
... storage 2
... cache-size 100
... name bob
... client cache
... read-only true
... drop-cache-rather-verify true
... blob-cache-size 1000MB
... blob-cache-size-check 10
... wait false
... </zeoclient>
... """ % addr)
>>> storage.getName(), storage.__class__.__name__
('bob (disconnected)', 'ClientStorage')
>>> storage.blob_dir
'blobs'
>>> storage._storage
'2'
>>> storage._cache.maxsize
100
>>> import os
>>> storage._cache.path == os.path.abspath('cache-2.zec')
True
>>> storage._is_read_only
True
>>> storage._read_only_fallback
False
>>> storage._blob_cache_size
1048576000
>>> print(storage._blob_cache_size_check)
104857600
>>> storage.close()
ZEO-5.0.0a0/src/ZEO/tests/client.pem
-----BEGIN CERTIFICATE-----
MIID/TCCAuWgAwIBAgIJAJWOSC4oLyp9MA0GCSqGSIb3DQEBBQUAMFwxCzAJBgNV
BAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9EQjERMA8GA1UEAxMIem9k
Yi5vcmcxHjAcBgkqhkiG9w0BCQEWD2NsaWVudEB6b2RiLm9yZzAeFw0xNjA2MjMx
NTA3MTNaFw0xNzA2MjMxNTA3MTNaMFwxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJW
QTENMAsGA1UEChMEWk9EQjERMA8GA1UEAxMIem9kYi5vcmcxHjAcBgkqhkiG9w0B
CQEWD2NsaWVudEB6b2RiLm9yZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBANqtLqtrzwDv0nakKiTX5ZtSHOaTmfwHzHsZJkcNf7kJUgGtE0oe3cIj4iAC
CCpPfUf9OfIA1NpmDsvPMqw80ho1o9g8QZWfW6QOTj2kbnanrRqMDmRvrWlzMQIe
+7EabYxbTjyduk76N2Sa4Tf/my6Yton3zyKtdjSJzoQ/SAKT+Nvjt5s47I4THEFY
gN7Njbg1FyihKwuwR64EUsyBanutKvXVT7gnB1V2cQhn+LW+NwgzsxKnptvyvBGr
Ago+uoxrHlIQu59xznS2vA3Ck2K3hIOnXpXYYGeRYKzplZLZnCfZ2uQheiO3ioEO
UCXICbPMxA7IEpTC75j1n9a3HKcCAwEAAaOBwTCBvjAdBgNVHQ4EFgQUCg1EhkHz
qYFOgbI/D6zR1tcwHBcwgY4GA1UdIwSBhjCBg4AUCg1EhkHzqYFOgbI/D6zR1tcw
HBehYKReMFwxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9E
QjERMA8GA1UEAxMIem9kYi5vcmcxHjAcBgkqhkiG9w0BCQEWD2NsaWVudEB6b2Ri
Lm9yZ4IJAJWOSC4oLyp9MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB
AJk3TviNGmjzCuDzLajeLEB9Iy285Fl3D/3TvThSDECS3cUq+i8fQm0cjOYnE/rK
6Lpg6soeQFetoWQsNT2uvxyv3iZEOtBwNhQKaKTkzIRTmMx/aWM0zG0c3psepC6c
1Fgp0HAts2JKC4Ni7zHFBDb8YZi87IUHYNKuJKcUWKiEgeHu2zCI1Q43FMSKoaG8
XwYi1Mxw6TQQtjZrMnNPSeO7zySBuCw10bCZSMC5xvsqckfREifRT//4A0/COWYK
/p6TZMTaMjrK8fOaPpap314QnLf80P6oLEZ7wkghaNuyq8IzgATuxYVy21132MNB
qZIUS+iblQAZDSHnQJoehQQ=
-----END CERTIFICATE-----
ZEO-5.0.0a0/src/ZEO/tests/client_key.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA2q0uq2vPAO/SdqQqJNflm1Ic5pOZ/AfMexkmRw1/uQlSAa0T
Sh7dwiPiIAIIKk99R/058gDU2mYOy88yrDzSGjWj2DxBlZ9bpA5OPaRudqetGowO
ZG+taXMxAh77sRptjFtOPJ26Tvo3ZJrhN/+bLpi2iffPIq12NInOhD9IApP42+O3
mzjsjhMcQViA3s2NuDUXKKErC7BHrgRSzIFqe60q9dVPuCcHVXZxCGf4tb43CDOz
Eqem2/K8EasCCj66jGseUhC7n3HOdLa8DcKTYreEg6deldhgZ5FgrOmVktmcJ9na
5CF6I7eKgQ5QJcgJs8zEDsgSlMLvmPWf1rccpwIDAQABAoIBAQDSju7hIG2x+Tou
AuSRlVEAvZAWdQlQJDJAVXcF83mIMfFEq+Jm/FGLHgIdz9cM5n07VBj3bNWHdb3J
gTjJn8audffNvjdoWoli7mNn92xl1A5aAYHaM65GWyRVZn/zh/7zpvcuZrF+Wm/7
7yXtRbGmrGUXdAV+3odzDz5LGKO91fTuM2nW0j+p7+q2Bzko7+rl9AVxaveco4pt
TtqXX2eOC3wTcNotBfJJD89/+/szg62K4CYCUAaetKMPcVrgQ4v0YHakOl8lJJxW
q7XiqhPyjeZp6h8e9dtVCkeZHa3xacuoftF2w3FslVEX/LOooAKf07PU1xxJezZN
trP11pgBAoGBAPoysqD9OW3C0WyIXm/Lx1Pncr/p/kKYlXNDWQRmizHmfLC4xrBI
n75mYSp7BJ7Gh2W7lW9lnyJ0uaqXdqOOswO/baTeSEvvhL+lLs4F1cQWemjTD4qy
KqCxCCbTf8gZPssiJXsXGmf6yWAdpjfYUxarxB0Ks6wHLBpQHctvVebJAoGBAN+/
W0+yAXssr7AWgKAyiHtqE/MiI8e0bDJWDjHxaPvnSVyDwVOARj/JKvw7jnhg8FKJ
1iWtl8ym2ktcAQygSl3HW4wggD59BbeXrUQr4nZi8wLMTpLxnSfITFrwXpwVShT4
8KJPR46W/Plkphm1fZn6Lr0uGJ12pV+iPAq/F+/vAoGALmRoKuHJXEjbfDxtBl3K
wAwSgvNoagDQ9WZvgxlghggu5rXcYaOVu0BQlAfre2VkhcCanOVC9KigJLmhDgLP
vsooEoIE9c+b1c1TOHBsiseAOx+nqhgPP2yUDl75Oqkzs4bJXGGUS+N8o43b3E8I
WRPQcXIijqtlyhtA6w/h5cECgYEA2M6DnGXQKZrTYr1rRc+xkGTpj9607P5XGS9p
8dsK74zd+VdyLYdOiuBTVrYfB2ZneJM3fqsHPLcxL3SnT6TCarySaOXVXremoo/G
xRgBCNY4w61VNe4JalMcKcJg6r12W3wdMCnCHNkRqFdu29qRKnLSd14DXBFrjY+W
vpMMjuECgYEAtNOm9WTCAlVbGTMplXcUJBWNwdOz+sHMi+PhP/fTNm0XH3fepFeQ
th7b/RYocTwAHGu8mArX2DgXAfDQAYMPFN8vmbh0XQsQV+iuja9MO2TagTyVSe3x
DBOTFCLg8oWtzwySZ6BKR/KsGI2ryRxyE1VojV0zNXRyZqDCm+G4Vs0=
-----END RSA PRIVATE KEY-----
ZEO-5.0.0a0/src/ZEO/tests/drop_cache_rather_than_verify.txt
Avoiding cache verification
=============================
For large databases it is common to also use very large ZEO cache
files. If a client has beed disconnected for too long, the server
can't play back missing invalidations. In this case, the cache is
cleared. When this happens, a ZEO.interfaces.StaleCache event is
published, largely for backward compatibility.
ClientStorage used to provide an option to drop it's cache rather than
doing verification. This is now the only behavior. Cache
verification is no longer supported.
- Invalidates all object caches
- Drops or clears it's client cache. (The end result is that the cache
is working but empty.)
- Logs a CRITICAL message.
Here's an example that shows that this is actually what happens.
Start a server, create a client to it and commit some data
>>> addr, admin = start_server(keep=1)
>>> import ZEO, transaction
>>> db = ZEO.DB(addr, client='cache', name='test')
>>> wait_connected(db.storage)
>>> conn = db.open()
>>> conn.root()[1] = conn.root().__class__()
>>> conn.root()[1].x = 1
>>> transaction.commit()
>>> len(db.storage._cache)
3
Now, we'll stop the server and restart with a different address:
>>> stop_server(admin)
>>> addr2, admin = start_server(keep=1)
And create another client and write some data to it:
>>> db2 = ZEO.DB(addr2)
>>> wait_connected(db2.storage)
>>> conn2 = db2.open()
>>> for i in range(5):
... conn2.root()[1].x += 1
... transaction.commit()
>>> db2.close()
>>> stop_server(admin)
Now, we'll restart the server. Before we do that, we'll capture
logging and event data:
>>> import logging, zope.testing.loggingsupport, ZODB.event
>>> handler = zope.testing.loggingsupport.InstalledHandler(
... 'ZEO', level=logging.ERROR)
>>> events = []
>>> def event_handler(e):
... if hasattr(e, 'storage'):
... events.append((
... len(e.storage._server.client.cache), str(handler), e.__class__.__name__))
>>> old_notify = ZODB.event.notify
>>> ZODB.event.notify = event_handler
Note that the event handler is saving away the length of the cache and
the state of the log handler. We'll use this to show that the event
is generated before the cache is dropped or the message is logged.
Now, we'll restart the server on the original address:
>>> _, admin = start_server(zeo_conf=dict(invalidation_queue_size=1),
... addr=addr, keep=1, threaded=True)
>>> wait_connected(db.storage)
Now, let's verify our assertions above:
- Publishes a stale-cache event.
>>> for e in events:
... print(e)
(3, '', 'StaleCache')
Note that the length of the cache when the event handler was
called was non-zero. This is because the cache wasn't cleared
yet. Similarly, the dropping-cache message hasn't been logged
yet.
>>> del events[:]
- Drops or clears its client cache. (The end result is that the cache
is working but empty.)
>>> len(db.storage._cache)
0
- Invalidates all object caches
>>> transaction.abort()
>>> conn.root()._p_changed
- Logs a CRITICAL message.
>>> print(handler) # doctest: +ELLIPSIS
ZEO... CRITICAL
test dropping stale cache
>>> handler.clear()
If we access the root object, it'll be loaded from the server:
>>> conn.root()[1].x
6
>>> len(db.storage._cache)
2
Similarly, if we simply disconnect the client, and write data from
another client:
>>> db.close()
>>> db2 = ZEO.DB(addr)
>>> wait_connected(db2.storage)
>>> conn2 = db2.open()
>>> for i in range(5):
... conn2.root()[1].x += 1
... transaction.commit()
>>> db2.close()
>>> db = ZEO.DB(addr, drop_cache_rather_verify=True, client='cache',
... name='test')
>>> wait_connected(db.storage)
- Drops or clears its client cache. (The end result is that the cache
is working but empty.)
>>> len(db.storage._cache)
1
(When a database is created, it checks to make sure the root object is
in the database, which is why we get 1, rather than 0 objects in the cache.)
- Publishes a stale-cache event.
>>> for e in events:
... print(e)
(2, '', 'StaleCache')
>>> del events[:]
- Logs a CRITICAL message.
>>> print(handler) # doctest: +ELLIPSIS
ZEO... CRITICAL
test dropping stale cache
>>> handler.clear()
If we access the root object, it'll be loaded from the server:
>>> conn = db.open()
>>> conn.root()[1].x
11
.. Cleanup
>>> db.close()
>>> handler.uninstall()
>>> ZODB.event.notify = old_notify
ZEO-5.0.0a0/src/ZEO/tests/dynamic_server_ports.test
The storage server can be told to bind to port 0, allowing the OS to
pick a port dynamically. For this to be useful, there needs to be a
way to tell someone. For this reason, the server posts events to
ZODB.notify.
>>> import ZODB.event
>>> old_notify = ZODB.event.notify
>>> last_event = None
>>> def notify(event):
... global last_event
... last_event = event
>>> ZODB.event.notify = notify
Now, let's start a server and verify that we get a serving event:
>>> import ZEO
>>> addr, stop = ZEO.server()
>>> isinstance(last_event, ZEO.StorageServer.Serving)
True
>>> last_event.address == addr
True
>>> server = last_event.server
>>> server.addr == addr
True
Let's make sure we can connect.
>>> client = ZEO.client(last_event.address).close()
If we close the server, we'll get a closed event:
>>> stop()
>>> isinstance(last_event, ZEO.StorageServer.Closed)
True
>>> last_event.server is server
True
If we pass an empty string as the host part of the server address, we
can't really assign a single address, so the server addr attribute is
left alone:
>>> addr, stop = ZEO.server(port=('', 0))
>>> isinstance(last_event, ZEO.StorageServer.Serving)
True
>>> last_event.address[1] > 0
True
>>> last_event.server.addr
('', 0)
>>> stop()
The runzeo module provides some process support, including getting the
server configuration via a ZConfig configuration file. To spell a
dynamic port using ZConfig, you'd use a hostname by itself. In this
case, ZConfig passes None as the port.
>>> import ZEO.runzeo
>>> with open('conf', 'w') as f:
... _ = f.write("""
... <zeo>
... address 127.0.0.1
... </zeo>
... <mappingstorage>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C conf'.split())
>>> options.address
('127.0.0.1', None)
>>> rs = ZEO.runzeo.ZEOServer(options)
>>> rs.check_socket()
>>> options.address
('127.0.0.1', 0)
.. cleanup
>>> ZODB.event.notify = old_notify
ZEO-5.0.0a0/src/ZEO/tests/forker.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Library for forking storage server and connecting client storage"""
from __future__ import print_function
import gc
import os
import random
import sys
import time
import errno
import socket
import subprocess
import logging
import tempfile
import six
import ZODB.tests.util
import zope.testing.setupstack
from ZEO._compat import StringIO
logger = logging.getLogger('ZEO.tests.forker')
class ZEOConfig:
"""Class to generate ZEO configuration file. """
def __init__(self, addr, **options):
if isinstance(addr, str):
self.logpath = addr+'.log'
else:
self.logpath = 'server-%s.log' % addr[1]
addr = '%s:%s' % addr
self.address = addr
self.read_only = None
self.loglevel = 'INFO'
self.__dict__.update(options)
def dump(self, f):
print("", file=f)
print("address " + self.address, file=f)
if self.read_only is not None:
print("read-only", self.read_only and "true" or "false", file=f)
for name in (
'invalidation_queue_size', 'invalidation_age',
'transaction_timeout', 'pid_filename',
'ssl_certificate', 'ssl_key', 'client_conflict_resolution',
):
v = getattr(self, name, None)
if v:
print(name.replace('_', '-'), v, file=f)
print("", file=f)
print("""
level %s
path %s
""" % (self.loglevel, self.logpath), file=f)
def __str__(self):
f = StringIO()
self.dump(f)
return f.getvalue()
def encode_format(fmt):
# The list of replacements mirrors
# ZConfig.components.logger.handlers._control_char_rewrites
for xform in (("\n", r"\n"), ("\t", r"\t"), ("\b", r"\b"),
("\f", r"\f"), ("\r", r"\r")):
fmt = fmt.replace(*xform)
return fmt
def runner(config, qin, qout, timeout=None,
debug=False, name=None,
keep=False, protocol=None):
if debug:
debug_logging()
old_protocol = None
if protocol:
import ZEO.asyncio.server
old_protocol = ZEO.asyncio.server.best_protocol_version
ZEO.asyncio.server.best_protocol_version = protocol
old_protocols = ZEO.asyncio.server.ServerProtocol.protocols
ZEO.asyncio.server.ServerProtocol.protocols = tuple(sorted(
set(old_protocols) | set([protocol])
))
try:
import ZEO.runzeo, threading
from six.moves.queue import Empty
options = ZEO.runzeo.ZEOOptions()
options.realize(['-C', config])
server = ZEO.runzeo.ZEOServer(options)
server.open_storages()
server.clear_socket()
server.create_server()
logger.debug('SERVER CREATED')
qout.put(server.server.acceptor.addr)
logger.debug('ADDRESS SENT')
thread = threading.Thread(
target=server.server.loop, kwargs=dict(timeout=.2),
name = None if name is None else name + '-server',
)
thread.setDaemon(True)
thread.start()
try:
qin.get(timeout=timeout)
except Empty:
pass
server.server.close()
thread.join(3)
if not keep:
# Try to cleanup storage files
for storage in server.server.storages.values():
try:
storage.cleanup()
except AttributeError:
pass
qout.put(thread.is_alive())
qin.get(timeout=11) # ack
if hasattr(qout, 'close'):
qout.close()
qout.cancel_join_thread()
except Exception:
logger.exception("In server thread")
finally:
if old_protocol:
ZEO.asyncio.server.best_protocol_version = old_protocol
ZEO.asyncio.server.ServerProtocol.protocols = old_protocols
def stop_runner(thread, config, qin, qout, stop_timeout=9, pid=None):
qin.put('stop')
dirty = qout.get(timeout=stop_timeout)
qin.put('ack')
if dirty:
print("WARNING SERVER DIDN'T STOP CLEANLY", file=sys.stderr)
# The runner thread didn't stop. If it was a process,
# give it some time to exit
if hasattr(thread, 'pid') and thread.pid:
os.waitpid(thread.pid, 0)
else:
# Gaaaa, force gc in hopes of maybe getting the unclosed
# sockets to get GCed
gc.collect()
thread.join(stop_timeout)
os.remove(config)
if hasattr(qin, 'close'):
qin.close()
qin.cancel_join_thread()
def start_zeo_server(storage_conf=None, zeo_conf=None, port=None, keep=False,
path='Data.fs', protocol=None, blob_dir=None,
suicide=True, debug=False,
threaded=False, start_timeout=33, name=None,
):
"""Start a ZEO server in a separate process.
Takes two positional arguments: a string containing the storage
configuration and a ZEOConfig object (or a dict of server options).
Returns the ZEO address and a callable that stops the server.
"""
if not storage_conf:
storage_conf = '<filestorage 1>\npath %s\n</filestorage>' % path
if blob_dir:
storage_conf = '<blobstorage 1>\nblob-dir %s\n%s\n</blobstorage>' % (
blob_dir, storage_conf)
if zeo_conf is None or isinstance(zeo_conf, dict):
if port is None:
raise AssertionError("The port wasn't specified")
if isinstance(port, int):
addr = 'localhost', port
else:
addr = port
z = ZEOConfig(addr)
if zeo_conf:
z.__dict__.update(zeo_conf)
zeo_conf = str(z)
# Store the config info in a temp file.
tmpfile = tempfile.mktemp(".conf", dir=os.getcwd())
fp = open(tmpfile, 'w')
fp.write(str(zeo_conf) + '\n\n')
fp.write(storage_conf)
fp.close()
if threaded:
from threading import Thread
from six.moves.queue import Queue
else:
from multiprocessing import Process as Thread
from multiprocessing import Queue
qin = Queue()
qout = Queue()
thread = Thread(
target=runner,
args=[tmpfile, qin, qout, 999 if suicide else None],
kwargs=dict(debug=debug, name=name, protocol=protocol, keep=keep),
name = None if name is None else name + '-server-runner',
)
thread.daemon = True
thread.start()
try:
addr = qout.get(timeout=start_timeout)
except Exception:
whine("SERVER FAILED TO START")
if thread.is_alive():
whine("Server thread/process is still running")
elif not threaded:
whine("Exit status", thread.exitcode)
raise
def stop(stop_timeout=99):
stop_runner(thread, tmpfile, qin, qout, stop_timeout)
return addr, stop
if sys.platform[:3].lower() == "win":
def _quote_arg(s):
return '"%s"' % s
else:
def _quote_arg(s):
return s
def shutdown_zeo_server(stop):
stop()
def get_port(test=None):
"""Return a port that is not in use.
Checks if a port is in use by trying to connect to it. Assumes it
is not in use if connect raises an exception. We actually look for
2 consecutive free ports because most of the clients of this
function will use the returned port and the next one.
Raises RuntimeError after 10 tries.
"""
if test is not None:
return get_port2(test)
for i in range(10):
port = random.randrange(20000, 30000)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
try:
s.connect(('localhost', port))
except socket.error:
pass # Perhaps we should check value of error too.
else:
continue
try:
s1.connect(('localhost', port+1))
except socket.error:
pass # Perhaps we should check value of error too.
else:
continue
return port
finally:
s.close()
s1.close()
raise RuntimeError("Can't find port")
def get_port2(test):
for i in range(10):
while 1:
port = random.randrange(20000, 30000)
if port%3 == 0:
break
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.bind(('localhost', port+2))
except socket.error as e:
if e.args[0] != errno.EADDRINUSE:
raise
s.close()
continue
if not (can_connect(port) or can_connect(port+1)):
zope.testing.setupstack.register(test, s.close)
return port
s.close()
raise RuntimeError("Can't find port")
def can_connect(port):
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
try:
c.connect(('localhost', port))
except socket.error:
return False # Perhaps we should check value of error too.
else:
return True
finally:
c.close()
def setUp(test):
ZODB.tests.util.setUp(test)
servers = []
def start_server(storage_conf=None, zeo_conf=None, port=None, keep=False,
addr=None, path='Data.fs', protocol=None, blob_dir=None,
suicide=True, debug=False, **kw):
"""Start a ZEO server.
Return the server and admin addresses.
"""
if port is None:
if addr is None:
port = get_port2(test)
else:
port = addr[1]
elif addr is not None:
raise TypeError("Can't specify port and addr")
addr, stop = start_zeo_server(
storage_conf=storage_conf,
zeo_conf=zeo_conf,
port=port,
keep=keep,
path=path,
protocol=protocol,
blob_dir=blob_dir,
suicide=suicide,
debug=debug,
**kw)
servers.append(stop)
return addr, stop
test.globs['start_server'] = start_server
def get_port():
return get_port2(test)
test.globs['get_port'] = get_port
def stop_server(stop):
stop()
servers.remove(stop)
test.globs['stop_server'] = stop_server
def cleanup_servers():
for stop in list(servers):
stop()
zope.testing.setupstack.register(test, cleanup_servers)
test.globs['wait_until'] = wait_until
test.globs['wait_connected'] = wait_connected
test.globs['wait_disconnected'] = wait_disconnected
def wait_until(label=None, func=None, timeout=30, onfail=None):
if label is None:
if func is not None:
label = func.__name__
elif not isinstance(label, six.string_types) and func is None:
func = label
label = func.__name__
if func is None:
def wait_decorator(f):
wait_until(label, f, timeout, onfail)
return wait_decorator
giveup = time.time() + timeout
while not func():
if time.time() > giveup:
if onfail is None:
raise AssertionError("Timed out waiting for: ", label)
else:
return onfail()
time.sleep(0.01)
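# wait_until supports two calling styles -- a plain call with a label and a
# predicate, and a decorator form where only a string label is given.  The
# decorator form is easy to miss, so here is a self-contained sketch (with a
# trimmed copy of the helper, using str instead of six.string_types):

```python
import time

def wait_until(label=None, func=None, timeout=30, onfail=None):
    # Trimmed copy of the helper: poll func() until it returns a true
    # value, or fail (or call onfail) after timeout seconds.
    if label is None:
        if func is not None:
            label = func.__name__
    elif not isinstance(label, str) and func is None:
        func = label
        label = func.__name__
    if func is None:
        # Decorator form: the decorated function is polled immediately.
        def wait_decorator(f):
            wait_until(label, f, timeout, onfail)
        return wait_decorator
    giveup = time.time() + timeout
    while not func():
        if time.time() > giveup:
            if onfail is None:
                raise AssertionError("Timed out waiting for: ", label)
            return onfail()
        time.sleep(0.01)

# Plain-call form: poll until the predicate is true.
state = {'n': 0}
def bump():
    state['n'] += 1
    return state['n'] >= 3
wait_until("counter reaches 3", bump)

# Decorator form: a string label, then the function to poll.
@wait_until("flag is set", timeout=1)
def flag_set():
    return True

print(state['n'])
```

# Note that in the decorator form the name is rebound to None (the
# decorator returns nothing); the point is the polling side effect.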
def wait_connected(storage):
wait_until("storage is connected", storage.is_connected)
def wait_disconnected(storage):
wait_until("storage is disconnected",
lambda : not storage.is_connected())
def debug_logging(logger='ZEO', stream='stderr', level=logging.DEBUG):
handler = logging.StreamHandler(getattr(sys, stream))
logger = logging.getLogger(logger)
logger.addHandler(handler)
logger.setLevel(level)
def stop():
logger.removeHandler(handler)
logger.setLevel(logging.NOTSET)
return stop
def whine(*message):
print(*message, file=sys.stderr)
sys.stderr.flush()
ZEO-5.0.0a0/src/ZEO/tests/invalidation-age.txt
Invalidation age
================
When a ZEO client with a non-empty cache connects to the server, it
needs to verify whether the data in its cache is current. It does
this in one of 2 ways:
quick verification
It gets a list of invalidations from the server since the last
transaction the client has seen and applies those to it's disk and
in-memory caches. This is only possible if there haven't been too
many transactions since the client was last connected.
full verification
If quick verification isn't possible, the client iterates through
its disk cache, asking the server to verify whether each current
entry is valid.
Unfortunately, for large caches, full verification is so slow that it
is impractical. Quick verification is highly desirable.
To support quick verification, the server keeps a list of recent
invalidations. The size of this list is controlled by the
invalidation_queue_size parameter. If there is a lot of database
activity, the size might need to be quite large to support having
clients be disconnected for more than a few minutes. A very large
invalidation queue size can use a lot of memory.
To supplement the invalidation queue, you can also specify an
invalidation_age parameter. When a client connects and presents the
last transaction id it has seen, we first check whether the
invalidation queue contains that transaction id. If it does, we send
all transactions since that id. Otherwise, we check whether the
difference between the storage's last transaction id and the given id
is less than or equal to the invalidation age. If it is, we iterate
over the storage, starting with the given id, to collect the
invalidations since the given id.
NOTE: This assumes that iterating from a point near the "end" of a
database is inexpensive. Don't use this option for a storage for which
that is not the case.
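The decision just described can be sketched as a small, illustrative
function. This is not the actual server code; the name
`choose_verification`, its parameters, and the use of plain integers
for transaction ids are all assumptions made for the example:

```python
def choose_verification(client_tid, queue_tids, last_tid, invalidation_age):
    """Pick a verification strategy for a client that last saw client_tid.

    queue_tids: transaction ids currently held in the invalidation queue.
    last_tid: the storage's most recent transaction id.
    invalidation_age: how far back we are willing to iterate the storage.
    """
    if client_tid in queue_tids:
        # Replay invalidations straight from the in-memory queue.
        return 'quick verification (from queue)'
    if last_tid - client_tid <= invalidation_age:
        # Iterate over the storage from client_tid to collect invalidations.
        return 'quick verification (from storage)'
    # The client is too far behind; it must drop its cache.
    return 'cache too old, clearing'
```

A real implementation compares 8-byte transaction ids rather than
subtracting integers, but the branch structure is the same.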
Here's an example. We set up a server, using an
invalidation-queue-size of 5:
>>> addr, admin = start_server(zeo_conf=dict(invalidation_queue_size=5),
... keep=True)
Now, we'll open a client with a persistent cache, set up some data,
and then close client:
>>> import ZEO, transaction
>>> db = ZEO.DB(addr, client='test')
>>> conn = db.open()
>>> for i in range(9):
... conn.root()[i] = conn.root().__class__()
... conn.root()[i].x = 0
>>> transaction.commit()
>>> db.close()
We'll open another client, and commit some transactions:
>>> db = ZEO.DB(addr)
>>> conn = db.open()
>>> import transaction
>>> for i in range(2):
... conn.root()[i].x = 1
... transaction.commit()
>>> db.close()
If we reopen the first client, we'll do quick verification.
>>> db = ZEO.DB(addr, client='test') # doctest: +ELLIPSIS
>>> db._storage._server.client.verify_result
'quick verification'
>>> [v.x for v in db.open().root().values()]
[1, 1, 0, 0, 0, 0, 0, 0, 0]
Now, if we disconnect and commit more than 5 transactions, we'll see
that we had to clear the cache:
>>> db.close()
>>> db = ZEO.DB(addr)
>>> conn = db.open()
>>> import transaction
>>> for i in range(9):
... conn.root()[i].x = 2
... transaction.commit()
>>> db.close()
>>> db = ZEO.DB(addr, client='test')
>>> db._storage._server.client.verify_result
'cache too old, clearing'
>>> [v.x for v in db.open().root().values()]
[2, 2, 2, 2, 2, 2, 2, 2, 2]
>>> db.close()
But if we restart the server with invalidation-age set, we can
do quick verification:
>>> stop_server(admin)
>>> addr, admin = start_server(zeo_conf=dict(invalidation_queue_size=5,
... invalidation_age=100))
>>> db = ZEO.DB(addr)
>>> conn = db.open()
>>> import transaction
>>> for i in range(9):
... conn.root()[i].x = 3
... transaction.commit()
>>> db.close()
>>> db = ZEO.DB(addr, client='test') # doctest: +ELLIPSIS
>>> db._storage._server.client.verify_result
'quick verification'
>>> [v.x for v in db.open().root().values()]
[3, 3, 3, 3, 3, 3, 3, 3, 3]
>>> db.close()
ZEO-5.0.0a0/src/ZEO/tests/new_addr.test 0000664 0000000 0000000 00000001757 12737730500 0017462 0 ustar 00root root 0000000 0000000 You can change the address(es) of a client storage.
We'll start by setting up a server and connecting to it:
>>> import ZEO, transaction
>>> addr, stop = ZEO.server(path='test.fs')
>>> conn = ZEO.connection(addr)
>>> client = conn.db().storage
>>> client.is_connected()
True
>>> conn.root()
{}
>>> conn.root.x = 1
>>> transaction.commit()
Now we'll close the server:
>>> stop()
And wait for the connection to notice it's disconnected:
>>> wait_until(lambda : not client.is_connected())
Now, we'll restart the server:
>>> addr, stop = ZEO.server(path='test.fs')
Update with another client:
>>> conn2 = ZEO.connection(addr)
>>> conn2.root.x += 1
>>> transaction.commit()
Update the connection and wait for connect:
>>> client.new_addr(addr)
>>> wait_until(lambda : client.is_connected())
>>> _ = transaction.begin()
>>> conn.root()
{'x': 2}
.. cleanup
>>> conn.close()
>>> conn2.close()
>>> stop()
ZEO-5.0.0a0/src/ZEO/tests/protocols.test 0000664 0000000 0000000 00000012006 12737730500 0017710 0 ustar 00root root 0000000 0000000 Test that multiple protocols are supported
==========================================
A full test of all protocols isn't practical. But we'll do a limited
test that at least the current and previous protocols are supported in
both directions.
Let's start a Z4 server
>>> storage_conf = '''
... <blobstorage>
...     blob-dir server-blobs
...     <filestorage>
...         path Data.fs
...     </filestorage>
... </blobstorage>
... '''
>>> addr, stop = start_server(
... storage_conf, dict(invalidation_queue_size=5), protocol=b'Z4')
A current client should be able to connect to an old server:
>>> import ZEO, ZODB.blob, transaction
>>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
>>> wait_connected(db.storage)
>>> str(db.storage.protocol_version.decode('ascii'))
'Z4'
>>> conn = db.open()
>>> conn.root().x = 0
>>> transaction.commit()
>>> len(db.history(conn.root()._p_oid, 99))
2
>>> conn.root()['blob1'] = ZODB.blob.Blob()
>>> with conn.root()['blob1'].open('w') as f:
... r = f.write(b'blob data 1')
>>> transaction.commit()
>>> db2 = ZEO.DB(addr, blob_dir='server-blobs', shared_blob_dir=True)
>>> wait_connected(db2.storage)
>>> conn2 = db2.open()
>>> for i in range(5):
... conn2.root().x += 1
... transaction.commit()
>>> conn2.root()['blob2'] = ZODB.blob.Blob()
>>> with conn2.root()['blob2'].open('w') as f:
... r = f.write(b'blob data 2')
>>> transaction.commit()
>>> @wait_until("Get the new data")
... def f():
... conn.sync()
... return conn.root().x == 5
>>> db.close()
>>> for i in range(2):
... conn2.root().x += 1
... transaction.commit()
>>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
>>> wait_connected(db.storage)
>>> conn = db.open()
>>> conn.root().x
7
>>> db.close()
>>> for i in range(10):
... conn2.root().x += 1
... transaction.commit()
>>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
>>> wait_connected(db.storage)
>>> conn = db.open()
>>> conn.root().x
17
>>> with conn.root()['blob1'].open() as f:
... f.read()
b'blob data 1'
>>> with conn.root()['blob2'].open() as f:
... f.read()
b'blob data 2'
>>> db2.close()
>>> db.close()
>>> stop_server(stop)
>>> import os, zope.testing.setupstack
>>> os.remove('client-1.zec')
>>> zope.testing.setupstack.rmtree('blobs')
>>> zope.testing.setupstack.rmtree('server-blobs')
#############################################################################
# Note that the ZEO 5.0 server only supports clients that use the Z5 protocol
# And the other way around:
# >>> addr, _ = start_server(storage_conf, dict(invalidation_queue_size=5))
# Note that we'll have to pull some hijinks:
# >>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
# >>> str(db.storage.protocol_version.decode('ascii'))
# 'Z4'
# >>> wait_connected(db.storage)
# >>> conn = db.open()
# >>> conn.root().x = 0
# >>> transaction.commit()
# >>> len(db.history(conn.root()._p_oid, 99))
# 2
# >>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
# >>> db.storage.protocol_version
# b'Z4'
# >>> wait_connected(db.storage)
# >>> conn = db.open()
# >>> conn.root().x = 0
# >>> transaction.commit()
# >>> len(db.history(conn.root()._p_oid, 99))
# 2
# >>> conn.root()['blob1'] = ZODB.blob.Blob()
# >>> with conn.root()['blob1'].open('w') as f:
# ... r = f.write(b'blob data 1')
# >>> transaction.commit()
# >>> db2 = ZEO.DB(addr, blob_dir='server-blobs', shared_blob_dir=True)
# >>> wait_connected(db2.storage)
# >>> conn2 = db2.open()
# >>> for i in range(5):
# ... conn2.root().x += 1
# ... transaction.commit()
# >>> conn2.root()['blob2'] = ZODB.blob.Blob()
# >>> with conn2.root()['blob2'].open('w') as f:
# ... r = f.write(b'blob data 2')
# >>> transaction.commit()
# >>> @wait_until()
# ... def x_to_be_5():
# ... conn.sync()
# ... return conn.root().x == 5
# >>> db.close()
# >>> for i in range(2):
# ... conn2.root().x += 1
# ... transaction.commit()
# >>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
# >>> wait_connected(db.storage)
# >>> conn = db.open()
# >>> conn.root().x
# 7
# >>> db.close()
# >>> for i in range(10):
# ... conn2.root().x += 1
# ... transaction.commit()
# >>> db = ZEO.DB(addr, client='client', blob_dir='blobs')
# >>> wait_connected(db.storage)
# >>> conn = db.open()
# >>> conn.root().x
# 17
# >>> with conn.root()['blob1'].open() as f:
# ... f.read()
# b'blob data 1'
# >>> with conn.root()['blob2'].open() as f:
# ... f.read()
# b'blob data 2'
# >>> db2.close()
# >>> db.close()
# Undo the hijinks:
# >>> ZEO.asyncio.client.Protocol.protocols = old_protocols
ZEO-5.0.0a0/src/ZEO/tests/server.pem 0000664 0000000 0000000 00000002644 12737730500 0017003 0 ustar 00root root 0000000 0000000 -----BEGIN CERTIFICATE-----
MIID/TCCAuWgAwIBAgIJAKpYWt7G+3R4MA0GCSqGSIb3DQEBBQUAMFwxCzAJBgNV
BAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9EQjERMA8GA1UEAxMIem9k
Yi5vcmcxHjAcBgkqhkiG9w0BCQEWD3NlcnZlckB6b2RiLm9yZzAeFw0xNjA2MjMx
NTA5MTZaFw0xNzA2MjMxNTA5MTZaMFwxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJW
QTENMAsGA1UEChMEWk9EQjERMA8GA1UEAxMIem9kYi5vcmcxHjAcBgkqhkiG9w0B
CQEWD3NlcnZlckB6b2RiLm9yZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBANqyMazB39wFKFh2nIspOw+e/7LPLQZpsd6BnYOxPbdlhb23Q930WqgW5qCA
aJjLqkmvQvZhElXdO2lOxGLR2/Cu71UgmYXkZbMKqPNtoYCPRPCCKm5EczAFXTy4
SUD40wAlndP7J8TjpIaKZjgdfy4GWnGQQAbXUR1eFZszQtU7pUvyjgXsY+BEgq4h
F4iIBarcf8k/6PTldTRLxEbRiTNZ4cdIEtTJL/LzSu5oBw8Z3J05aPg9DcWVl89P
XtemKxU8+Vm847xXJkuODkpSgOCqAPn959b/DGdn1fyx2CR0lSgC4n2rYE/oWxw0
hiyI1jeXTAfOtqvcg4XA5Cs3ivUCAwEAAaOBwTCBvjAdBgNVHQ4EFgQUoWgKmGKI
o2U9vFK48cyXuZrmPdgwgY4GA1UdIwSBhjCBg4AUoWgKmGKIo2U9vFK48cyXuZrm
PdihYKReMFwxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9E
QjERMA8GA1UEAxMIem9kYi5vcmcxHjAcBgkqhkiG9w0BCQEWD3NlcnZlckB6b2Ri
Lm9yZ4IJAKpYWt7G+3R4MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB
ACiGzPx1445EdyO4Em3M6MwPQD8KVLvxsYE5HQMk0A2BTJOs2Vzf0NPL2hn947D/
wFe/ubkJXmzPG2CBCbNA//exWj8x2rR8256qJuWDwFx0WUFpRmEVFUGK+bpvgxQV
DlLxnWV8z6Tq9vINRjKT3mcBX4OpDgbEL4O92pbJ7kZNn4Z2+v6/lsWzg3Eo6LVY
fj902gQD/6hjVenD6J5Dqftj4x9nsKjdMz8n5Ok5E1J02ghiWjlUp1PNUKwc2Elw
oFnjPiacbko0fSnD9Zf6qBbACAYyBkHvBc1ZMebnGepZn3E6V91X5kZl84hGzgsb
C+2aGtAqSnvL4/DlIyss3hc=
-----END CERTIFICATE-----
ZEO-5.0.0a0/src/ZEO/tests/server_key.pem 0000664 0000000 0000000 00000003217 12737730500 0017650 0 ustar 00root root 0000000 0000000 -----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA2rIxrMHf3AUoWHaciyk7D57/ss8tBmmx3oGdg7E9t2WFvbdD
3fRaqBbmoIBomMuqSa9C9mESVd07aU7EYtHb8K7vVSCZheRlswqo822hgI9E8IIq
bkRzMAVdPLhJQPjTACWd0/snxOOkhopmOB1/LgZacZBABtdRHV4VmzNC1TulS/KO
Bexj4ESCriEXiIgFqtx/yT/o9OV1NEvERtGJM1nhx0gS1Mkv8vNK7mgHDxncnTlo
+D0NxZWXz09e16YrFTz5WbzjvFcmS44OSlKA4KoA+f3n1v8MZ2fV/LHYJHSVKALi
fatgT+hbHDSGLIjWN5dMB862q9yDhcDkKzeK9QIDAQABAoIBACtwA0/V/jm8SIQx
ouw9Fz8GDLGeVsoUSkDwq7GRjbmUj5jcAr3eH/eM/OfaOWxH353dEsbPBw5I79j9
zSH3nuDSTjUxUWz3rX9/WYloOBDJ5B6FLBpUvDBIkHlT/TDLe1VnI08Mbpy7vlz+
tkjlCvLATkyKIz14nOLhYhc+ekLRxQZrVRgHIPW13c0F61drkc4uCs/UMbYRzZiZ
nnMeQLghIUP13xMtMMkNx0P1ampvV18kCJWuvHUqOMnCaPnTGCnGfOUtq98sTdui
vBnleePmBzF5VGT7M3e3pr63EKyVPD/bx2dbItcxOahBTgDbMQR4AQ65NvD5+B0+
d9XbfgECgYEA7wLsMvFAmj5IrXYGYnaqy1LsMiZll9zv4QB2tAKLSIy5z0Cns54R
ttI6Ni94CfjBOKmR5IZXjgXZg5ydu+tBLwqNMJISziFSyTxzwNsAEPczUHefFhwq
zhAkIVsISy42AxpuyHlz5EBSJiUULhguUhmIDxCbQnmlSK6X9MxYO3UCgYEA6j2c
dfU4RC2tzjNKoKs52mUoWmwFsM5CKs0hDeRn2KV80nNKn/mRa0cZ36u/x0hN3fBm
wrwNiZWOX+ot5SfnePx0dOiTOxfWYeXtzVqF6KhPUK8K5424zL+N1uw941gvkXYR
Cc79AxDI/S5y/2ZR8aW4H91LdCcvPlE4Dp1JoYECgYAuE2gpYezMT1l/ZxNQBARk
8fVqrZBEOGld/NLlXOAw+kAPvi0WKVDM57YlH/2KHpRRMg9X+LYEQQhvoM+fnHiS
cvxI8sABUNc+yBKgiRd4Lc+MoaLfhkqSMvZkH8J3i88Jxhy5NQCsbeHoTJmZUTwM
w7NBBDiKFh1Q56ePn50ayQKBgQCA8guoT6Z6uZ6dDVU+nyOI2vjc1exICTMZdrSE
fkDAXVEaVMc2y17G7GwM2fIHlQDwdP9ModLd80td937uUAo3atn85W7vL88fM0C2
M+fVTJnk84cQMs8RPz2om4HyHcCJ1bHJcX2Ma3gJD8HUYJIpcS2rtNlthoiWSIWQ
XfuDgQKBgQCYW+x52XSaTjE0SviYmeNuO30bELgjxPyPLge407EAp1h6Sr9Za5ap
2aQsClsuzpHcHDoWmzO+dTr70Qyt/YoY5FN3uZRG9m7X/OH7+ceBEU/BtQ0Whem4
YNE5lpuw4eGsJQGkhRFcqKy66gId7Sw5b6CIaJqbhQCogkWt5JQuWQ==
-----END RSA PRIVATE KEY-----
ZEO-5.0.0a0/src/ZEO/tests/serverpw.pem 0000664 0000000 0000000 00000002624 12737730500 0017350 0 ustar 00root root 0000000 0000000 -----BEGIN CERTIFICATE-----
MIID8DCCAtigAwIBAgIJALop9P9MBfzLMA0GCSqGSIb3DQEBBQUAMFgxCzAJBgNV
BAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9EQjERMA8GA1UEAxMIem9k
Yi5vcmcxGjAYBgkqhkiG9w0BCQEWC3B3QHpvZGIub3JnMB4XDTE2MDYyMzE1MTAz
MVoXDTE3MDYyMzE1MTAzMVowWDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlZBMQ0w
CwYDVQQKEwRaT0RCMREwDwYDVQQDEwh6b2RiLm9yZzEaMBgGCSqGSIb3DQEJARYL
cHdAem9kYi5vcmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDKw/iw
N1EPddU9QQQ+OnCJv9G3rbTOPt4zEbpfTROIHTME3krFKPALrGF2aK+oBpHx3/TZ
HN5UvWK/jmGtDL9jekKCAaeAaVIKlESUS6DIxZY+FaO3re/1fbmBNRz8Cnn1raAw
/4YZRDPvblooH4Nt5m7uooGAIIDPft3fInhmGboOoIpXc7nMGVGOWXlDN5I9oFmm
4vby4CUMy3A/0wnHgTuMNy7Tpjgz2E/1MRAOyWQ7PZYiASs4ycZfas8058O8DI+o
rSYyum/czecIz52P6jbx5LWvcKDWac8QbJoHPelthYtxcMHee2+Nh6MWW688CBzq
HSeFAdNO3d9kMiFpAgMBAAGjgbwwgbkwHQYDVR0OBBYEFDui1OC2+2z2rHADglk5
tGOndxhoMIGJBgNVHSMEgYEwf4AUO6LU4Lb7bPascAOCWTm0Y6d3GGihXKRaMFgx
CzAJBgNVBAYTAlVTMQswCQYDVQQIEwJWQTENMAsGA1UEChMEWk9EQjERMA8GA1UE
AxMIem9kYi5vcmcxGjAYBgkqhkiG9w0BCQEWC3B3QHpvZGIub3JnggkAuin0/0wF
/MswDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAiEYO8MZ3OG8sqy9t
AtUZbv0aTsIUzy/QTUKKDUo8qwKNOylqyqGAZV0tZ5eCoqIGFAwRJBBymIizU3zH
U1k2MnYZMVi7uYwSy+qwg52+X7GLl/kaAfx8kNvpr274CuZQnLojJS+K8HtH5Pom
YD3gTO3OxGS4IS6uf6DD+mf+C9OBnTl47P0HA0/eHBEXVSc2vsv30H/UoW5VbZ6z
6TWkoPwSMVhCNRRRif4/eqCLh24/h5b4uvAC+tsrIPQ9If7EsqVTNMCbAkv3ib6g
OmaCdbrGkqvD3UVn7i5ci96UZoF80EWNZiwhMdvQtMfOAR4jHQ1pTepJni6JwzZP
UMNDpQ==
-----END CERTIFICATE-----
ZEO-5.0.0a0/src/ZEO/tests/serverpw_key.pem 0000664 0000000 0000000 00000003327 12737730500 0020221 0 ustar 00root root 0000000 0000000 -----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,769B900D03925712
z5M/XkqEC1+PxJ1T3QrUhGG9fPTBsPxKy8WlwIytbMg0RXS0oJBiFgNYp1ktqzGo
yT+AdTCRR1hNVX8M5HbV3ksUjKxXKCL3+yaaB6JtGbNRr2qTNwosvxD92nKT/hvN
R6rHF6LcO05s8ubs9b9ON/ja7HCx69N5CjBuCbCFHUTlAXkwD9w0ScrxrtfP50EY
FOw6LAqhhzq6/KO7c1SJ7k9LYzakhL+nbw5KM9QgBk4WHlmKLbCZIZ5RWvu0F4s5
n4qk/BcuXIkbYuEv2kH0nDk5eDfA/dj7xZcMMgL5VFymQzaZLYyj4WuQYXu/7JW/
nM/ZWBkZOMaI3vnPTG1hJ9pgjLjQnjfNA/bGWwbLxjCsPmR8yvZS4v2iqdB6X3Vl
yJ9aV9r8KoU0PJk3x4v2Zp+RQxgrKSaQw59sXptaXAY3NCRR6ohvq4P5X6UB8r5S
vYdoMeVXhX1hzXeMguln7zQInwJhPZqk4wMIV3lTsCqh1eJ7NC2TGCwble+B/ClR
KtzuJBzoYPLw44ltnXPEMmVH1Fkh17+QZFgZRJrKGD9PGOAXmnzudsZ1xX9kNnOM
JLIT/mzKcqkd8S1n1Xi1x7zGA0Ng5xmKGFe8oDokPJucJO1Ou+hbLDmC0DZUGzr1
qqPJ3F/DzZZDTmD/rZF6doPJgFAZvgpVeiWS8/v1qbz/nz13uwXDLjRPgLfcKpmQ
4R3V4QlgviDilW61VTZnzV9qAOx4fG6+IwWIGBlrJnfsH/fSCDNlAStc6k12zdun
PIIRJBfbEprGig3vRWUoBASReqow1JCN9DaVCX1P27pDKY5oDe+7/HOrQpwhPoya
2HEwbKeyY0nCcCXbkWGL1bwEUs/PrJv+61rik4KxOWhKpHWkZLzbozELb44jXrJx
e8K8XKz4La2DEjsUYHc31u6T69GBQO9JDEvih15phUWq8ITvDnkHpAg+wYb1JAHD
QcqDtAulMvT/ZGN0h7qdwbHMggEsLgCCVPG4iZ5K4cXsMbePFvQqq+o4FTMF+cM5
2Dq0wir92U9cH+ooy80LIt5Kp5zqgQZzr73o9MEgwqJocCrx9ZrofKRUmTV+ZU0r
w5mfUM47Ctnqia0UNGx6SUs3CHFDPWPbzrAaqGzSvFhzR1MMoL1/rJzP1VSm3Fk3
ESWkPrg0J8dcQP/ch9MhH8eoQYyA+2q1vClUbeZLAs5KoHxgi6pSkGYqFhshrA+t
2AIrUPDPPDf0PgRoXJrzdVOiNNY1rzyql+0JqDH6DjCVcAADWY+48p9U2YFTd7Je
DvnZWihwe0qYGn1AKIkvJ4SR3bQg36etrxhMrMl/8lUn2dnT7GFrhjr9HwCpJwa7
8tv150SrQXt3FXZCHb+RMUgoWZDeksDohPiGzXkPU6kaSviZVnRMslyU4ahWp6vC
8tYUhb7K6N+is1hYkICNt6zLl2vBDuCDWmiIwopHtnH1kz8bYlp4/GBVaMIgZiCM
gM/7+p4YCc++s2sJiQ9+BqPo0zKm3bbSP+fPpeWefQVte9Jx4S36YXU52HsJxBTN
WUdHABC+aS2A45I12xMNzOJR6VfxnG6f3JLpt3MkUCEg+898vJGope+TJUhD+aJC
-----END RSA PRIVATE KEY-----
ZEO-5.0.0a0/src/ZEO/tests/servertesting.py 0000664 0000000 0000000 00000003727 12737730500 0020253 0 ustar 00root root 0000000 0000000 from __future__ import print_function
##############################################################################
#
# Copyright Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
# Testing the current ZEO implementation is rather hard due to the
# architecture, which mixes concerns, especially between application
# and networking. Still, it's not as bad as it could be.
# The 2 most important classes in the architecture are ZEOStorage and
# StorageServer. A ZEOStorage is created for each client connection.
# The StorageServer maintains data shared or needed for coordination
# among clients.
# The other important part of the architecture is connections.
# Connections are used by ZEOStorages to send messages or return data
# to clients.
# Here, we'll try to provide some testing infrastructure to isolate
# servers from the network.
import ZEO.asyncio.tests
import ZEO.StorageServer
import ZODB.MappingStorage
class StorageServer(ZEO.StorageServer.StorageServer):
def __init__(self, addr='test_addr', storages=None, **kw):
if storages is None:
storages = {'1': ZODB.MappingStorage.MappingStorage()}
ZEO.StorageServer.StorageServer.__init__(self, addr, storages, **kw)
def client(server, name='client'):
zs = ZEO.StorageServer.ZEOStorage(server)
protocol = ZEO.asyncio.tests.server_protocol(
zs, protocol_version=b'Z5', addr='test-addr-%s' % name)
zs.notify_connected(protocol)
zs.register('1', 0)
return zs
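The isolation idea described in the comments above can be shown in
miniature. The classes below (`FakeProtocol`, `Server`,
`connect_client`) are invented stand-ins for illustration, not the
real ZEO classes:

```python
class FakeProtocol:
    """Stands in for a network connection; records what the server sends."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)


class Server:
    """A toy server that greets clients as they register."""
    def __init__(self):
        self.clients = []

    def register(self, protocol):
        self.clients.append(protocol)
        protocol.send('registered')


def connect_client(server):
    # Wire a fake protocol to the server; no sockets are involved.
    protocol = FakeProtocol()
    server.register(protocol)
    return protocol
```

Because the fake protocol just records messages, a test can assert on
`client.sent` directly instead of reading from a socket.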
ZEO-5.0.0a0/src/ZEO/tests/speed.py 0000664 0000000 0000000 00000014403 12737730500 0016440 0 ustar 00root root 0000000 0000000 from __future__ import print_function
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
usage="""Test speed of a ZODB storage
Options:
-d file The data file to use as input.
The default is this script.
-n n The number of repetitions
-s module A module that defines a 'Storage'
attribute, which is an open storage.
If not specified, a FileStorage will be
used.
-z Test compressing data
-D Run in debug mode
-L Test loads as well as stores by minimizing
the cache after each run
-M Output means only
-C Run with a persistent client cache
-U Run ZEO using a Unix domain socket
-t n Number of concurrent threads to run.
"""
import asyncore
import sys, os, getopt, time
##sys.path.insert(0, os.getcwd())
import persistent
import transaction
import ZODB
from ZODB.POSException import ConflictError
from ZEO.tests import forker
class P(persistent.Persistent):
pass
fs_name = "zeo-speed.fs"
class ZEOExit(asyncore.file_dispatcher):
"""Used to exit ZEO.StorageServer when run is done"""
def writable(self):
return 0
def readable(self):
return 1
def handle_read(self):
buf = self.recv(4)
assert buf == "done"
self.delete_fs()
os._exit(0)
def handle_close(self):
print("Parent process exited unexpectedly")
self.delete_fs()
os._exit(0)
def delete_fs(self):
os.unlink(fs_name)
os.unlink(fs_name + ".lock")
os.unlink(fs_name + ".tmp")
def work(db, results, nrep, compress, data, detailed, minimize, threadno=None):
for j in range(nrep):
for r in 1, 10, 100, 1000:
t = time.time()
conflicts = 0
jar = db.open()
while 1:
try:
transaction.begin()
rt = jar.root()
key = 's%s' % r
if key in rt:
p = rt[key]
else:
rt[key] = p = P()
for i in range(r):
v = getattr(p, str(i), P())
if compress is not None:
v.d = compress(data)
else:
v.d = data
setattr(p, str(i), v)
transaction.commit()
except ConflictError:
conflicts = conflicts + 1
else:
break
jar.close()
t = time.time() - t
if detailed:
if threadno is None:
print("%s\t%s\t%.4f\t%d" % (j, r, t, conflicts))
else:
print("%s\t%s\t%.4f\t%d\t%d" % (j, r, t, conflicts,
threadno))
results[r].append((t, conflicts))
rt=d=p=v=None # release all references
if minimize:
time.sleep(3)
jar.cacheMinimize()
def main(args):
opts, args = getopt.getopt(args, 'zd:n:Ds:LMt:U')
s = None
compress = None
data=sys.argv[0]
nrep=5
minimize=0
detailed=1
cache = None
domain = 'AF_INET'
threads = 1
for o, v in opts:
if o=='-n': nrep = int(v)
elif o=='-d': data = v
elif o=='-s': s = v
elif o=='-z':
import zlib
compress = zlib.compress
elif o=='-L':
minimize=1
elif o=='-M':
detailed=0
elif o=='-D':
global debug
os.environ['STUPID_LOG_FILE']=''
os.environ['STUPID_LOG_SEVERITY']='-999'
debug = 1
elif o == '-C':
cache = 'speed'
elif o == '-U':
domain = 'AF_UNIX'
elif o == '-t':
threads = int(v)
zeo_pipe = None
if s:
s = __import__(s, globals(), globals(), ('__doc__',))
s = s.Storage
server = None
else:
s, server, pid = forker.start_zeo("FileStorage",
(fs_name, 1), domain=domain)
data=open(data).read()
db=ZODB.DB(s,
# disable cache deactivation
cache_size=4000,
cache_deactivate_after=6000,)
print("Beginning work...")
results={1:[], 10:[], 100:[], 1000:[]}
if threads > 1:
import threading
l = []
for i in range(threads):
t = threading.Thread(target=work,
args=(db, results, nrep, compress, data,
detailed, minimize, i))
l.append(t)
for t in l:
t.start()
for t in l:
t.join()
else:
work(db, results, nrep, compress, data, detailed, minimize)
if server is not None:
server.close()
os.waitpid(pid, 0)
if detailed:
print('-'*24)
print("num\tmean\tmin\tmax")
for r in 1, 10, 100, 1000:
times = []
for time, conf in results[r]:
times.append(time)
t = mean(times)
print("%d\t%.4f\t%.4f\t%.4f" % (r, t, min(times), max(times)))
def mean(l):
tot = 0
for v in l:
tot = tot + v
return tot / len(l)
##def compress(s):
## c = zlib.compressobj()
## o = c.compress(s)
## return o + c.flush()
if __name__=='__main__':
main(sys.argv[1:])
ZEO-5.0.0a0/src/ZEO/tests/stress.py 0000664 0000000 0000000 00000007320 12737730500 0016663 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""A ZEO client-server stress test to look for leaks.
The stress test should run in an infinite loop and should involve
multiple connections.
"""
from __future__ import print_function
# TODO: This code is currently broken.
import transaction
import ZODB
from ZODB.MappingStorage import MappingStorage
from ZODB.tests import MinPO
from ZEO.ClientStorage import ClientStorage
from ZEO.tests import forker
import os
import random
NUM_TRANSACTIONS_PER_CONN = 10
NUM_CONNECTIONS = 10
NUM_ROOTS = 20
MAX_DEPTH = 20
MIN_OBJSIZE = 128
MAX_OBJSIZE = 2048
def an_object():
"""Return an object suitable for a PersistentMapping key"""
size = random.randrange(MIN_OBJSIZE, MAX_OBJSIZE)
if os.path.exists("/dev/urandom"):
f = open("/dev/urandom")
buf = f.read(size)
f.close()
return buf
else:
f = open(MinPO.__file__)
l = list(f.read(size))
f.close()
random.shuffle(l)
return "".join(l)
def setup(cn):
"""Initialize the database with some objects"""
root = cn.root()
for i in range(NUM_ROOTS):
prev = an_object()
for j in range(random.randrange(1, MAX_DEPTH)):
o = MinPO.MinPO(prev)
prev = o
root[an_object()] = o
transaction.commit()
cn.close()
def work(cn):
"""Do some work with a transaction"""
cn.sync()
root = cn.root()
obj = random.choice(root.values())
# walk down to the bottom
while not isinstance(obj.value, str):
obj = obj.value
obj.value = an_object()
transaction.commit()
def main():
# Yuck! Need to cleanup forker so that the API is consistent
# across Unix and Windows, at least if that's possible.
if os.name == "nt":
zaddr, tport, pid = forker.start_zeo_server('MappingStorage', ())
def exitserver():
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(tport)
s.close()
else:
zaddr = '', random.randrange(20000, 30000)
pid, exitobj = forker.start_zeo_server(MappingStorage(), zaddr)
def exitserver():
exitobj.close()
while 1:
pid = start_child(zaddr)
print("started", pid)
os.waitpid(pid, 0)
exitserver()
def start_child(zaddr):
pid = os.fork()
if pid != 0:
return pid
try:
_start_child(zaddr)
finally:
os._exit(0)
def _start_child(zaddr):
storage = ClientStorage(zaddr, debug=1, min_disconnect_poll=0.5, wait=1)
db = ZODB.DB(storage, pool_size=NUM_CONNECTIONS)
setup(db.open())
conns = []
conn_count = 0
for i in range(NUM_CONNECTIONS):
c = db.open()
c.__count = 0
conns.append(c)
conn_count += 1
while conn_count < 25:
c = random.choice(conns)
if c.__count > NUM_TRANSACTIONS_PER_CONN:
conns.remove(c)
c.close()
conn_count += 1
c = db.open()
c.__count = 0
conns.append(c)
else:
c.__count += 1
work(c)
if __name__ == "__main__":
main()
ZEO-5.0.0a0/src/ZEO/tests/testConfig.py 0000664 0000000 0000000 00000007750 12737730500 0017454 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import unittest
from zope.testing import setupstack
from ZODB.config import storageFromString
from .forker import start_zeo_server
class ZEOConfigTestBase(setupstack.TestCase):
setUp = setupstack.setUpDirectory
def start_server(self, settings='', **kw):
for name, value in kw.items():
settings += '\n%s %s\n' % (name.replace('_', '-'), value)
zeo_conf = """
<zeo>
address 127.0.0.1:0
%s
</zeo>
""" % settings
return start_zeo_server("<mappingstorage>\n</mappingstorage>",
zeo_conf, threaded=True)
def start_client(self, addr, settings='', **kw):
settings += '\nserver %s:%s\n' % addr
for name, value in kw.items():
settings += '\n%s %s\n' % (name.replace('_', '-'), value)
return storageFromString(
"""
%import ZEO
<clientstorage>
{}
</clientstorage>
""".format(settings))
def _client_assertions(
self, client, addr,
connected=True,
cache_size=20 * (1<<20),
cache_path=None,
blob_dir=None,
shared_blob_dir=False,
blob_cache_size=None,
blob_cache_size_check=10,
read_only=False,
read_only_fallback=False,
wait_timeout=30,
client_label=None,
storage='1',
name=None,
):
self.assertEqual(client.is_connected(), connected)
self.assertEqual(client._addr, [addr])
self.assertEqual(client._cache.maxsize, cache_size)
self.assertEqual(client._cache.path, cache_path)
self.assertEqual(client.blob_dir, blob_dir)
self.assertEqual(client.shared_blob_dir, shared_blob_dir)
self.assertEqual(client._blob_cache_size, blob_cache_size)
if blob_cache_size:
self.assertEqual(client._blob_cache_size_check,
blob_cache_size * blob_cache_size_check // 100)
self.assertEqual(client._is_read_only, read_only)
self.assertEqual(client._read_only_fallback, read_only_fallback)
self.assertEqual(client._server.timeout, wait_timeout)
self.assertEqual(client._client_label, client_label)
self.assertEqual(client._storage, storage)
self.assertEqual(client.__name__,
name if name is not None else str(client._addr))
class ZEOConfigTest(ZEOConfigTestBase):
def test_default_zeo_config(self, **client_settings):
addr, stop = self.start_server()
client = self.start_client(addr, **client_settings)
self._client_assertions(client, addr, **client_settings)
client.close()
stop()
def test_client_variations(self):
for name, value in dict(
cache_size=4200,
cache_path='test',
blob_dir='blobs',
blob_cache_size=424242,
read_only=True,
read_only_fallback=True,
wait_timeout=33,
client_label='test_client',
name='Test'
).items():
params = {name: value}
self.test_default_zeo_config(**params)
def test_blob_cache_size_check(self):
self.test_default_zeo_config(blob_cache_size=424242,
blob_cache_size_check=50)
def test_suite():
return unittest.makeSuite(ZEOConfigTest)
ZEO-5.0.0a0/src/ZEO/tests/testConnection.py 0000664 0000000 0000000 00000017127 12737730500 0020345 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Test setup for ZEO connection logic.
The actual tests are in ConnectionTests.py; this file provides the
platform-dependent scaffolding.
"""
from __future__ import with_statement, print_function
from ZEO.tests import ConnectionTests, InvalidationTests
from zope.testing import setupstack
import os
if os.environ.get('USE_ZOPE_TESTING_DOCTEST'):
from zope.testing import doctest
else:
import doctest
import unittest
import ZEO.tests.forker
import ZODB.tests.util
class FileStorageConfig:
def getConfig(self, path, create, read_only):
return """\
<filestorage 1>
path %s
create %s
read-only %s
</filestorage>
""" % (path,
create and 'yes' or 'no',
read_only and 'yes' or 'no')
class MappingStorageConfig:
def getConfig(self, path, create, read_only):
return """<mappingstorage 1/>"""
class FileStorageConnectionTests(
FileStorageConfig,
ConnectionTests.ConnectionTests,
InvalidationTests.InvalidationTests
):
"""FileStorage-specific connection tests."""
class FileStorageReconnectionTests(
FileStorageConfig,
ConnectionTests.ReconnectionTests,
):
"""FileStorage-specific re-connection tests."""
# Run this at level 1 because MappingStorage can't do reconnection tests
class FileStorageInvqTests(
FileStorageConfig,
ConnectionTests.InvqTests
):
"""FileStorage-specific invalidation queue tests."""
class FileStorageTimeoutTests(
FileStorageConfig,
ConnectionTests.TimeoutTests
):
pass
class MappingStorageConnectionTests(
MappingStorageConfig,
ConnectionTests.ConnectionTests
):
"""Mapping storage connection tests."""
class SSLConnectionTests(
MappingStorageConfig,
ConnectionTests.SSLConnectionTests,
):
pass
# The ReconnectionTests can't work with MappingStorage because it's only an
# in-memory storage and has no persistent state.
class MappingStorageTimeoutTests(
MappingStorageConfig,
ConnectionTests.TimeoutTests
):
pass
test_classes = [FileStorageConnectionTests,
FileStorageReconnectionTests,
FileStorageInvqTests,
FileStorageTimeoutTests,
MappingStorageConnectionTests,
MappingStorageTimeoutTests,
SSLConnectionTests,
]
def invalidations_while_connecting():
r"""
    As soon as a client registers with a server, it will receive
invalidations from the server. The client must be careful to queue
these invalidations until it is ready to deal with them. At the time
of the writing of this test, clients weren't careful enough about
    queuing invalidations. This led to cache corruption in the form of
both low-level file corruption as well as out-of-date records marked
as current.
    This test tries to provoke this bug by:
- starting a server
>>> addr, _ = start_server()
- opening a client to the server that writes some objects, filling
    its cache at the same time,
>>> import ZODB.tests.MinPO, transaction
>>> db = ZEO.DB(addr, client='x')
>>> conn = db.open()
>>> nobs = 1000
>>> for i in range(nobs):
... conn.root()[i] = ZODB.tests.MinPO.MinPO(0)
>>> transaction.commit()
>>> import zope.testing.loggingsupport, logging
>>> handler = zope.testing.loggingsupport.InstalledHandler(
... 'ZEO', level=logging.INFO)
# >>> logging.getLogger('ZEO').debug(
# ... 'Initial tid %r' % conn.root()._p_serial)
- disconnecting the first client (closing it with a persistent cache),
>>> db.close()
- starting a second client that writes objects more or less
constantly,
>>> import random, threading, time
>>> stop = False
>>> db2 = ZEO.DB(addr)
>>> tm = transaction.TransactionManager()
>>> conn2 = db2.open(transaction_manager=tm)
>>> random = random.Random(0)
>>> lock = threading.Lock()
>>> def run():
... while 1:
... i = random.randint(0, nobs-1)
... if stop:
... return
... with lock:
... conn2.root()[i].value += 1
... tm.commit()
... #logging.getLogger('ZEO').debug(
... # 'COMMIT %s %s %r' % (
... # i, conn2.root()[i].value, conn2.root()[i]._p_serial))
... time.sleep(0)
>>> thread = threading.Thread(target=run)
>>> thread.setDaemon(True)
>>> thread.start()
- restarting the first client, and
- testing for cache validity.
>>> bad = False
>>> try:
... for c in range(10):
... time.sleep(.1)
... db = ZODB.DB(ZEO.ClientStorage.ClientStorage(addr, client='x'))
... with lock:
... #logging.getLogger('ZEO').debug('Locked %s' % c)
... @wait_until("connected and we have caught up", timeout=199)
... def _():
... if (db.storage.is_connected()
... and db.storage.lastTransaction()
... == db.storage._call('lastTransaction')
... ):
... #logging.getLogger('ZEO').debug(
... # 'Connected %r' % db.storage.lastTransaction())
... return True
...
... conn = db.open()
... for i in range(1000):
... if conn.root()[i].value != conn2.root()[i].value:
... print('bad', c, i, conn.root()[i].value, end=" ")
... print(conn2.root()[i].value)
... bad = True
... print('client debug log with lock held')
... while handler.records:
... record = handler.records.pop(0)
... print(record.name, record.levelname, end=' ')
... print(handler.format(record))
... if bad:
... with open('server-%s.log' % addr[1]) as f:
... print(f.read())
... #else:
... # logging.getLogger('ZEO').debug('GOOD %s' % c)
... db.close()
... finally:
... stop = True
... thread.join(10)
>>> thread.isAlive()
False
>>> for record in handler.records:
... if record.levelno < logging.ERROR:
... continue
... print(record.name, record.levelname)
... print(handler.format(record))
>>> handler.uninstall()
>>> db.close()
>>> db2.close()
"""
def test_suite():
suite = unittest.TestSuite()
for klass in test_classes:
sub = unittest.makeSuite(klass, 'check')
suite.addTest(sub)
suite.addTest(doctest.DocTestSuite(
setUp=ZEO.tests.forker.setUp, tearDown=setupstack.tearDown,
))
suite.layer = ZODB.tests.util.MininalTestLayer('ZEO Connection Tests')
return suite
# ZEO-5.0.0a0/src/ZEO/tests/testConversionSupport.py
##############################################################################
#
# Copyright (c) 2006 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import doctest
import unittest
import ZEO.asyncio.testing
class FakeStorageBase:
def __getattr__(self, name):
if name in ('getTid', 'history', 'load', 'loadSerial',
'lastTransaction', 'getSize', 'getName', 'supportsUndo',
'tpc_transaction'):
return lambda *a, **k: None
raise AttributeError(name)
def isReadOnly(self):
return False
def __len__(self):
return 4
class FakeStorage(FakeStorageBase):
def record_iternext(self, next=None):
        if next is None:
next = '0'
next = str(int(next) + 1)
oid = next
if next == '4':
next = None
return oid, oid*8, 'data ' + oid, next
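The cursor protocol `FakeStorage` implements above — each `record_iternext` call returns `(oid, serial, data, next)`, and a `next` of `None` marks the last record — can be driven generically. A minimal sketch (the class here re-states the fake's behavior for self-containment):

```python
class CursorStorage:
    # Same shape as FakeStorage above: a four-record cursor whose
    # `next` token is a stringified counter; None means exhausted.
    def record_iternext(self, next=None):
        next = str(int(next or '0') + 1)
        oid = next
        if next == '4':
            next = None
        return oid, oid * 8, 'data ' + oid, next


def drain(storage):
    # Collect all oids by threading the `next` token through calls
    # until the storage reports there is nothing left.
    oids, next = [], None
    while True:
        oid, serial, data, next = storage.record_iternext(next)
        oids.append(oid)
        if next is None:
            return oids


print(drain(CursorStorage()))  # ['1', '2', '3', '4']
```

This is the same loop the doctests below run against both the server and the client sides.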
class FakeServer:
storages = {
'1': FakeStorage(),
'2': FakeStorageBase(),
}
def register_connection(*args):
return None, None
client_conflict_resolution = False
class FakeConnection:
protocol_version = b'Z4'
addr = 'test'
def test_server_record_iternext():
"""
On the server, record_iternext calls are simply delegated to the
underlying storage.
>>> import ZEO.StorageServer
>>> zeo = ZEO.StorageServer.ZEOStorage(FakeServer(), False)
>>> zeo.notify_connected(FakeConnection())
>>> zeo.register('1', False)
>>> next = None
>>> while 1:
... oid, serial, data, next = zeo.record_iternext(next)
... print(oid)
... if next is None:
... break
1
2
3
4
The storage info also reflects the fact that record_iternext is supported.
>>> zeo.get_info()['supports_record_iternext']
True
>>> zeo = ZEO.StorageServer.ZEOStorage(FakeServer(), False)
>>> zeo.notify_connected(FakeConnection())
>>> zeo.register('2', False)
>>> zeo.get_info()['supports_record_iternext']
False
"""
def test_client_record_iternext():
"""Test client storage delegation to the network client
    The client simply delegates record_iternext calls to its server stub.
There's really no decent way to test ZEO without running too much crazy
stuff. I'd rather do a lame test than a really lame test, so here goes.
First, fake out the connection manager so we can make a connection:
>>> import ZEO
>>> class Client(ZEO.asyncio.testing.ClientRunner):
...
... def record_iternext(self, next=None):
... if next == None:
... next = '0'
... next = str(int(next) + 1)
... oid = next
... if next == '4':
... next = None
...
... return oid, oid*8, 'data ' + oid, next
>>> client = ZEO.client(
... '', wait=False, _client_factory=Client)
    Now we'll have our way with its private _server attr:
>>> next = None
>>> while 1:
... oid, serial, data, next = client.record_iternext(next)
... print(oid)
... if next is None:
... break
1
2
3
4
"""
def test_suite():
return doctest.DocTestSuite()
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')
# ZEO-5.0.0a0/src/ZEO/tests/testTransactionBuffer.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import random
import unittest
from ZODB.ConflictResolution import ResolvedSerial
from ZEO.TransactionBuffer import TransactionBuffer
def random_string(size):
"""Return a random string of size size."""
l = [chr(random.randrange(256)) for i in range(size)]
return "".join(l)
def new_store_data():
"""Return arbitrary data to use as argument to store() method."""
return random_string(8), random_string(random.randrange(1000))
def store(tbuf, serial=None):
data = new_store_data()
tbuf.store(*data)
tbuf.serial(data[0], serial)
return data
class TransBufTests(unittest.TestCase):
def checkTypicalUsage(self):
tbuf = TransactionBuffer(0)
store(tbuf)
store(tbuf)
for o in tbuf:
pass
def checkOrderPreserved(self):
tbuf = TransactionBuffer(0)
data = []
for i in range(10):
data.append((store(tbuf), False))
data.append((store(tbuf, ResolvedSerial), True))
for i, (oid, d, resolved) in enumerate(tbuf):
self.assertEqual((oid, d), data[i][0])
self.assertEqual(resolved, data[i][1])
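The interface these tests exercise — store `(oid, data)` pairs, mark some oids as conflict-resolved via `serial()`, then iterate `(oid, data, resolved)` triples in store order — can be modeled minimally. This is an illustrative sketch, not ZEO's `TransactionBuffer` (which buffers to disk and takes a connection generation argument):

```python
class MiniTransactionBuffer:
    """In-memory model of the TransactionBuffer iteration contract."""

    def __init__(self):
        self._stores = []        # (oid, data) in arrival order
        self._resolved = set()   # oids whose conflicts were resolved

    def store(self, oid, data):
        self._stores.append((oid, data))

    def serial(self, oid, serial):
        # In the real API a ResolvedSerial marker flags conflict
        # resolution; here any non-None serial counts as resolved.
        if serial is not None:
            self._resolved.add(oid)

    def __iter__(self):
        for oid, data in self._stores:
            yield oid, data, oid in self._resolved
```

`checkOrderPreserved` above asserts exactly this: iteration order matches store order, and the resolved flag follows the `serial()` calls.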
def test_suite():
return unittest.makeSuite(TransBufTests, 'check')
# ZEO-5.0.0a0/src/ZEO/tests/testZEO.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Test suite for ZEO based on ZODB.tests."""
from __future__ import print_function
import multiprocessing
from ZEO.ClientStorage import ClientStorage, m64
from ZEO.tests.forker import get_port
from ZEO.tests import forker, Cache, CommitLockTests, ThreadTests
from ZEO.tests import IterationTests
from ZEO._compat import PY3
from ZODB.tests import StorageTestBase, BasicStorage, \
TransactionalUndoStorage, \
PackableStorage, Synchronization, ConflictResolution, RevisionStorage, \
MTStorage, ReadOnlyStorage, IteratorStorage, RecoveryStorage
from ZODB.tests.MinPO import MinPO
from ZODB.tests.StorageTestBase import zodb_unpickle
from ZODB.utils import p64, u64, z64
from zope.testing import renormalizing
import doctest
import logging
import os
import persistent
import pprint
import re
import shutil
import signal
import stat
import ssl
import sys
import tempfile
import threading
import time
import transaction
import unittest
import ZEO.asyncio.server
import ZEO.StorageServer
import ZEO.tests.ConnectionTests
import ZODB
import ZODB.blob
import ZODB.tests.hexstorage
import ZODB.tests.testblob
import ZODB.tests.util
import ZODB.utils
import zope.testing.setupstack
from . import testssl
logger = logging.getLogger('ZEO.tests.testZEO')
class DummyDB:
def invalidate(self, *args):
pass
def invalidateCache(*unused):
pass
transform_record_data = untransform_record_data = lambda self, v: v
class CreativeGetState(persistent.Persistent):
def __getstate__(self):
self.name = 'me'
return super(CreativeGetState, self).__getstate__()
class MiscZEOTests:
"""ZEO tests that don't fit in elsewhere."""
def checkCreativeGetState(self):
# This test covers persistent objects that provide their own
# __getstate__ which modifies the state of the object.
# For details see bug #98275
db = ZODB.DB(self._storage)
cn = db.open()
rt = cn.root()
m = CreativeGetState()
m.attr = 'hi'
rt['a'] = m
        # This commit used to fail because of the `CreativeGetState` object
        # being put back into `changed` state although it was already stored,
        # causing the ZEO cache to bail out.
transaction.commit()
cn.close()
def checkLargeUpdate(self):
obj = MinPO("X" * (10 * 128 * 1024))
self._dostore(data=obj)
def checkZEOInvalidation(self):
addr = self._storage._addr
storage2 = self._wrap_client(
ClientStorage(addr, wait=1, **self._client_options()))
try:
oid = self._storage.new_oid()
ob = MinPO('first')
revid1 = self._dostore(oid, data=ob)
data, serial = storage2.load(oid, '')
self.assertEqual(zodb_unpickle(data), MinPO('first'))
self.assertEqual(serial, revid1)
revid2 = self._dostore(oid, data=MinPO('second'), revid=revid1)
# Now, storage 2 should eventually get the new data. It
# will take some time, although hopefully not much.
# We'll poll till we get it and whine if we time out:
for n in range(30):
time.sleep(.1)
data, serial = storage2.load(oid, '')
if (serial == revid2 and
zodb_unpickle(data) == MinPO('second')
):
break
else:
raise AssertionError('Invalidation message was not sent!')
finally:
storage2.close()
def checkVolatileCacheWithImmediateLastTransaction(self):
# Earlier, a ClientStorage would not have the last transaction id
# available right after successful connection, this is required now.
addr = self._storage._addr
storage2 = ClientStorage(addr, **self._client_options())
self.assert_(storage2.is_connected())
self.assertEquals(ZODB.utils.z64, storage2.lastTransaction())
storage2.close()
self._dostore()
storage3 = ClientStorage(addr, **self._client_options())
self.assert_(storage3.is_connected())
self.assertEquals(8, len(storage3.lastTransaction()))
self.assertNotEquals(ZODB.utils.z64, storage3.lastTransaction())
storage3.close()
class GenericTestBase(
# Base class for all ZODB tests
StorageTestBase.StorageTestBase):
shared_blob_dir = False
blob_cache_dir = None
def setUp(self):
StorageTestBase.StorageTestBase.setUp(self)
logger.info("setUp() %s", self.id())
zport, stop = forker.start_zeo_server(self.getConfig(),
self.getZEOConfig())
self._servers = [stop]
if not self.blob_cache_dir:
# This is the blob cache for ClientStorage
self.blob_cache_dir = tempfile.mkdtemp(
'blob_cache',
dir=os.path.abspath(os.getcwd()))
self._storage = self._wrap_client(
ClientStorage(
zport, '1', cache_size=20000000,
min_disconnect_poll=0.5, wait=1,
wait_timeout=60, blob_dir=self.blob_cache_dir,
shared_blob_dir=self.shared_blob_dir,
**self._client_options()),
)
self._storage.registerDB(DummyDB())
def getZEOConfig(self):
return forker.ZEOConfig(('127.0.0.1', 0))
def _wrap_client(self, client):
return client
def _client_options(self):
return {}
def tearDown(self):
self._storage.close()
for stop in self._servers:
stop()
StorageTestBase.StorageTestBase.tearDown(self)
class GenericTests(
GenericTestBase,
# ZODB test mixin classes (in the same order as imported)
BasicStorage.BasicStorage,
PackableStorage.PackableStorage,
Synchronization.SynchronizedStorage,
MTStorage.MTStorage,
ReadOnlyStorage.ReadOnlyStorage,
# ZEO test mixin classes (in the same order as imported)
CommitLockTests.CommitLockVoteTests,
ThreadTests.ThreadTests,
# Locally defined (see above)
MiscZEOTests,
):
"""Combine tests from various origins in one class.
"""
def open(self, read_only=0):
# Needed to support ReadOnlyStorage tests. Ought to be a
# cleaner way.
addr = self._storage._addr
self._storage.close()
self._storage = ClientStorage(
addr, read_only=read_only, wait=1, **self._client_options())
def checkWriteMethods(self):
# ReadOnlyStorage defines checkWriteMethods. The decision
# about where to raise the read-only error was changed after
# Zope 2.5 was released. So this test needs to detect Zope
# of the 2.5 vintage and skip the test.
# The __version__ attribute was not present in Zope 2.5.
if hasattr(ZODB, "__version__"):
ReadOnlyStorage.ReadOnlyStorage.checkWriteMethods(self)
def checkSortKey(self):
key = '%s:%s' % (self._storage._storage, self._storage._server_addr)
self.assertEqual(self._storage.sortKey(), key)
def _do_store_in_separate_thread(self, oid, revid, voted):
def do_store():
store = ZEO.ClientStorage.ClientStorage(
self._storage._addr, **self._client_options())
try:
t = transaction.get()
store.tpc_begin(t)
store.store(oid, revid, b'x', '', t)
store.tpc_vote(t)
store.tpc_finish(t)
except Exception as v:
import traceback
print('E'*70)
print(v)
traceback.print_exception(*sys.exc_info())
finally:
store.close()
thread = threading.Thread(name='T2', target=do_store)
thread.setDaemon(True)
thread.start()
thread.join(voted and .1 or 9)
return thread
class FullGenericTests(
GenericTests,
Cache.TransUndoStorageWithCache,
ConflictResolution.ConflictResolvingStorage,
ConflictResolution.ConflictResolvingTransUndoStorage,
PackableStorage.PackableUndoStorage,
RevisionStorage.RevisionStorage,
TransactionalUndoStorage.TransactionalUndoStorage,
IteratorStorage.IteratorStorage,
IterationTests.IterationTests,
):
"""Extend GenericTests with tests that MappingStorage can't pass."""
class FileStorageRecoveryTests(StorageTestBase.StorageTestBase,
RecoveryStorage.RecoveryStorage):
def getConfig(self):
return """\
path %s
""" % tempfile.mktemp(dir='.')
def _new_storage(self):
port = get_port(self)
zconf = forker.ZEOConfig(('', port))
zport, stop = forker.start_zeo_server(self.getConfig(),
zconf, port)
self._servers.append(stop)
blob_cache_dir = tempfile.mkdtemp(dir='.')
storage = ClientStorage(
zport, '1', cache_size=20000000,
min_disconnect_poll=0.5, wait=1,
wait_timeout=60, blob_dir=blob_cache_dir)
storage.registerDB(DummyDB())
return storage
def setUp(self):
StorageTestBase.StorageTestBase.setUp(self)
self._servers = []
self._storage = self._new_storage()
self._dst = self._new_storage()
def tearDown(self):
self._storage.close()
self._dst.close()
for stop in self._servers:
stop()
StorageTestBase.StorageTestBase.tearDown(self)
def new_dest(self):
return self._new_storage()
class FileStorageTests(FullGenericTests):
"""Test ZEO backed by a FileStorage."""
def getConfig(self):
return """\
path Data.fs
"""
_expected_interfaces = (
('ZODB.interfaces', 'IStorageRestoreable'),
('ZODB.interfaces', 'IStorageIteration'),
('ZODB.interfaces', 'IStorageUndoable'),
('ZODB.interfaces', 'IStorageCurrentRecordIteration'),
('ZODB.interfaces', 'IExternalGC'),
('ZODB.interfaces', 'IStorage'),
('zope.interface', 'Interface'),
)
def checkInterfaceFromRemoteStorage(self):
# ClientStorage itself doesn't implement IStorageIteration, but the
# FileStorage on the other end does, and thus the ClientStorage
# instance that is connected to it reflects this.
self.failIf(ZODB.interfaces.IStorageIteration.implementedBy(
ZEO.ClientStorage.ClientStorage))
self.failUnless(ZODB.interfaces.IStorageIteration.providedBy(
self._storage))
# This is communicated using ClientStorage's _info object:
self.assertEquals(self._expected_interfaces,
self._storage._info['interfaces']
)
class FileStorageSSLTests(FileStorageTests):
def getZEOConfig(self):
return testssl.server_config
def _client_options(self):
return {'ssl': testssl.client_ssl()}
class FileStorageHexTests(FileStorageTests):
_expected_interfaces = (
('ZODB.interfaces', 'IStorageRestoreable'),
('ZODB.interfaces', 'IStorageIteration'),
('ZODB.interfaces', 'IStorageUndoable'),
('ZODB.interfaces', 'IStorageCurrentRecordIteration'),
('ZODB.interfaces', 'IExternalGC'),
('ZODB.interfaces', 'IStorage'),
('ZODB.interfaces', 'IStorageWrapper'),
('zope.interface', 'Interface'),
)
def getConfig(self):
return """\
%import ZODB.tests
path Data.fs
"""
class FileStorageClientHexTests(FileStorageHexTests):
def getConfig(self):
return """\
%import ZODB.tests
path Data.fs
"""
def _wrap_client(self, client):
return ZODB.tests.hexstorage.HexStorage(client)
class ClientConflictResolutionTests(
GenericTestBase,
ConflictResolution.ConflictResolvingStorage,
):
def getConfig(self):
        return '<mappingstorage>\n</mappingstorage>\n'
def getZEOConfig(self):
return forker.ZEOConfig(('', 0), client_conflict_resolution=True)
class MappingStorageTests(GenericTests):
"""ZEO backed by a Mapping storage."""
def getConfig(self):
return """"""
def checkSimpleIteration(self):
# The test base class IteratorStorage assumes that we keep undo data
# to construct our iterator, which we don't, so we disable this test.
pass
def checkUndoZombie(self):
# The test base class IteratorStorage assumes that we keep undo data
# to construct our iterator, which we don't, so we disable this test.
pass
class DemoStorageTests(
GenericTests,
):
def getConfig(self):
return """
path Data.fs
"""
def checkUndoZombie(self):
# The test base class IteratorStorage assumes that we keep undo data
# to construct our iterator, which we don't, so we disable this test.
pass
def checkPackWithMultiDatabaseReferences(self):
pass # DemoStorage pack doesn't do gc
checkPackAllRevisions = checkPackWithMultiDatabaseReferences
class ZRPCConnectionTests(ZEO.tests.ConnectionTests.CommonSetupTearDown):
def getConfig(self, path, create, read_only):
return """"""
def checkCatastrophicClientLoopFailure(self):
# Test what happens when the client loop falls over
self._storage = self.openClientStorage()
import zope.testing.loggingsupport
handler = zope.testing.loggingsupport.InstalledHandler(
'ZEO.asyncio.client')
        # We no longer implement the event loop, so we no longer know
# how to break it. We'll just stop it instead for now.
self._storage._server.loop.call_soon_threadsafe(
self._storage._server.loop.stop)
forker.wait_until(
'disconnected',
lambda : not self._storage.is_connected()
)
log = str(handler)
handler.uninstall()
self.assert_("Client loop stopped unexpectedly" in log)
def checkExceptionLogsAtError(self):
# Test the exceptions are logged at error
self._storage = self.openClientStorage()
self._dostore(z64, data=MinPO("X" * (10 * 128 * 1024)))
from zope.testing.loggingsupport import InstalledHandler
handler = InstalledHandler('ZEO.asyncio.client')
import ZODB.POSException
self.assertRaises(TypeError, self._storage.history, z64, None)
self.assertTrue(re.search(" from server: .*TypeError", str(handler)))
# POSKeyErrors and ConflictErrors aren't logged:
handler.clear()
self.assertRaises(ZODB.POSException.POSKeyError,
self._storage.history, None, None)
handler.uninstall()
self.assertEquals(str(handler), '')
def checkConnectionInvalidationOnReconnect(self):
storage = ClientStorage(self.addr, min_disconnect_poll=0.1)
self._storage = storage
assert storage.is_connected()
class DummyDB:
_invalidatedCache = 0
def invalidateCache(self):
self._invalidatedCache += 1
def invalidate(*a, **k):
pass
transform_record_data = untransform_record_data = \
lambda self, data: data
db = DummyDB()
storage.registerDB(db)
base = db._invalidatedCache
# Now we'll force a disconnection and reconnection
storage._server.loop.call_soon_threadsafe(
storage._server.client.protocol.connection_lost,
ValueError('test'))
# and we'll wait for the storage to be reconnected:
for i in range(100):
if storage.is_connected():
if db._invalidatedCache > base:
break
time.sleep(0.1)
else:
raise AssertionError("Couldn't connect to server")
# Now, the root object in the connection should have been invalidated:
self.assertEqual(db._invalidatedCache, base+1)
class CommonBlobTests:
def getConfig(self):
return """
blob-dir blobs
path Data.fs
"""
blobdir = 'blobs'
blob_cache_dir = 'blob_cache'
def checkStoreBlob(self):
import transaction
from ZODB.blob import Blob
from ZODB.tests.StorageTestBase import handle_serials
from ZODB.tests.StorageTestBase import ZERO
from ZODB.tests.StorageTestBase import zodb_pickle
somedata = b'a' * 10
blob = Blob()
with blob.open('w') as bd_fh:
bd_fh.write(somedata)
tfname = bd_fh.name
oid = self._storage.new_oid()
data = zodb_pickle(blob)
self.assert_(os.path.exists(tfname))
t = transaction.Transaction()
try:
self._storage.tpc_begin(t)
self._storage.storeBlob(oid, ZERO, data, tfname, '', t)
self._storage.tpc_vote(t)
revid = self._storage.tpc_finish(t)
except:
self._storage.tpc_abort(t)
raise
self.assert_(not os.path.exists(tfname))
filename = self._storage.fshelper.getBlobFilename(oid, revid)
self.assert_(os.path.exists(filename))
with open(filename, 'rb') as f:
self.assertEqual(somedata, f.read())
def checkStoreBlob_wrong_partition(self):
os_rename = os.rename
try:
def fail(*a):
raise OSError
os.rename = fail
self.checkStoreBlob()
finally:
os.rename = os_rename
def checkLoadBlob(self):
from ZODB.blob import Blob
from ZODB.tests.StorageTestBase import zodb_pickle, ZERO, \
handle_serials
import transaction
somedata = b'a' * 10
blob = Blob()
with blob.open('w') as bd_fh:
bd_fh.write(somedata)
tfname = bd_fh.name
oid = self._storage.new_oid()
data = zodb_pickle(blob)
t = transaction.Transaction()
try:
self._storage.tpc_begin(t)
self._storage.storeBlob(oid, ZERO, data, tfname, '', t)
self._storage.tpc_vote(t)
serial = self._storage.tpc_finish(t)
except:
self._storage.tpc_abort(t)
raise
filename = self._storage.loadBlob(oid, serial)
with open(filename, 'rb') as f:
self.assertEqual(somedata, f.read())
self.assert_(not(os.stat(filename).st_mode & stat.S_IWRITE))
self.assert_((os.stat(filename).st_mode & stat.S_IREAD))
def checkTemporaryDirectory(self):
self.assertEquals(os.path.join(self.blob_cache_dir, 'tmp'),
self._storage.temporaryDirectory())
def checkTransactionBufferCleanup(self):
oid = self._storage.new_oid()
with open('blob_file', 'wb') as f:
f.write(b'I am a happy blob.')
t = transaction.Transaction()
self._storage.tpc_begin(t)
self._storage.storeBlob(
oid, ZODB.utils.z64, 'foo', 'blob_file', '', t)
self._storage.close()
class BlobAdaptedFileStorageTests(FullGenericTests, CommonBlobTests):
"""ZEO backed by a BlobStorage-adapted FileStorage."""
def checkStoreAndLoadBlob(self):
import transaction
from ZODB.blob import Blob
from ZODB.tests.StorageTestBase import handle_serials
from ZODB.tests.StorageTestBase import ZERO
from ZODB.tests.StorageTestBase import zodb_pickle
somedata_path = os.path.join(self.blob_cache_dir, 'somedata')
with open(somedata_path, 'w+b') as somedata:
for i in range(1000000):
somedata.write(("%s\n" % i).encode('ascii'))
def check_data(path):
self.assert_(os.path.exists(path))
f = open(path, 'rb')
somedata.seek(0)
d1 = d2 = 1
while d1 or d2:
d1 = f.read(8096)
d2 = somedata.read(8096)
self.assertEqual(d1, d2)
somedata.seek(0)
blob = Blob()
with blob.open('w') as bd_fh:
ZODB.utils.cp(somedata, bd_fh)
bd_fh.close()
tfname = bd_fh.name
oid = self._storage.new_oid()
data = zodb_pickle(blob)
self.assert_(os.path.exists(tfname))
t = transaction.Transaction()
try:
self._storage.tpc_begin(t)
self._storage.storeBlob(oid, ZERO, data, tfname, '', t)
self._storage.tpc_vote(t)
revid = self._storage.tpc_finish(t)
except:
self._storage.tpc_abort(t)
raise
# The uncommitted data file should have been removed
self.assert_(not os.path.exists(tfname))
# The file should be in the cache ...
filename = self._storage.fshelper.getBlobFilename(oid, revid)
check_data(filename)
# ... and on the server
server_filename = os.path.join(
self.blobdir,
ZODB.blob.BushyLayout().getBlobFilePath(oid, revid),
)
self.assert_(server_filename.startswith(self.blobdir))
check_data(server_filename)
# If we remove it from the cache and call loadBlob, it should
# come back. We can do this in many threads.
ZODB.blob.remove_committed(filename)
returns = []
threads = [
threading.Thread(
target=lambda :
returns.append(self._storage.loadBlob(oid, revid))
)
for i in range(10)
]
[thread.start() for thread in threads]
[thread.join() for thread in threads]
[self.assertEqual(r, filename) for r in returns]
check_data(filename)
class BlobWritableCacheTests(FullGenericTests, CommonBlobTests):
blob_cache_dir = 'blobs'
shared_blob_dir = True
class FauxConn:
addr = 'x'
protocol_version = ZEO.asyncio.server.best_protocol_version
peer_protocol_version = protocol_version
serials = []
def async(self, method, *args):
if method == 'serialnos':
self.serials.extend(args[0])
call_soon_threadsafe = async
class StorageServerWrapper:
def __init__(self, server, storage_id):
self.storage_id = storage_id
self.server = ZEO.StorageServer.ZEOStorage(server, server.read_only)
self.server.notify_connected(FauxConn())
self.server.register(storage_id, False)
def sortKey(self):
return self.storage_id
def __getattr__(self, name):
return getattr(self.server, name)
def registerDB(self, *args):
pass
def supportsUndo(self):
return False
def new_oid(self):
return self.server.new_oids(1)[0]
def tpc_begin(self, transaction):
self.server.tpc_begin(id(transaction), '', '', {}, None, ' ')
def tpc_vote(self, transaction):
result = self.server.vote(id(transaction))
assert result == self.server.connection.serials[:]
del self.server.connection.serials[:]
return result
def store(self, oid, serial, data, version_ignored, transaction):
self.server.storea(oid, serial, data, id(transaction))
def send_reply(self, _, result): # Masquerade as conn
self._result = result
def tpc_abort(self, transaction):
self.server.tpc_abort(id(transaction))
def tpc_finish(self, transaction, func = lambda: None):
self.server.tpc_finish(id(transaction)).set_sender(0, self)
return self._result
def multiple_storages_invalidation_queue_is_not_insane():
"""
>>> from ZEO.StorageServer import StorageServer, ZEOStorage
>>> from ZODB.FileStorage import FileStorage
>>> from ZODB.DB import DB
>>> from persistent.mapping import PersistentMapping
>>> from transaction import commit
>>> fs1 = FileStorage('t1.fs')
>>> fs2 = FileStorage('t2.fs')
>>> server = StorageServer(('', get_port()), dict(fs1=fs1, fs2=fs2))
>>> s1 = StorageServerWrapper(server, 'fs1')
>>> s2 = StorageServerWrapper(server, 'fs2')
>>> db1 = DB(s1); conn1 = db1.open()
>>> db2 = DB(s2); conn2 = db2.open()
>>> commit()
>>> o1 = conn1.root()
>>> for i in range(10):
... o1.x = PersistentMapping(); o1 = o1.x
... commit()
>>> last = fs1.lastTransaction()
>>> for i in range(5):
... o1.x = PersistentMapping(); o1 = o1.x
... commit()
>>> o2 = conn2.root()
>>> for i in range(20):
... o2.x = PersistentMapping(); o2 = o2.x
... commit()
>>> trans, oids = s1.getInvalidations(last)
>>> from ZODB.utils import u64
>>> sorted([int(u64(oid)) for oid in oids])
[10, 11, 12, 13, 14]
>>> server.close()
"""
def getInvalidationsAfterServerRestart():
"""
Clients were often forced to verify their caches after a server
restart even if there weren't many transactions between the server
restart and the client connect.
Let's create a file storage and stuff some data into it:
>>> from ZEO.StorageServer import StorageServer, ZEOStorage
>>> from ZODB.FileStorage import FileStorage
>>> from ZODB.DB import DB
>>> from persistent.mapping import PersistentMapping
>>> fs = FileStorage('t.fs')
>>> db = DB(fs)
>>> conn = db.open()
>>> from transaction import commit
>>> last = []
>>> for i in range(100):
... conn.root()[i] = PersistentMapping()
... commit()
... last.append(fs.lastTransaction())
>>> db.close()
Now we'll open a storage server on the data, simulating a restart:
>>> fs = FileStorage('t.fs')
>>> sv = StorageServer(('', get_port()), dict(fs=fs))
>>> s = ZEOStorage(sv, sv.read_only)
>>> s.notify_connected(FauxConn())
>>> s.register('fs', False)
If we ask for the last transaction, we should get the last transaction
we saved:
>>> s.lastTransaction() == last[-1]
True
If a storage implements the method lastInvalidations, as FileStorage
does, then the storage server will populate its invalidation data
    structure using lastInvalidations.
>>> tid, oids = s.getInvalidations(last[-10])
>>> tid == last[-1]
True
>>> from ZODB.utils import u64
>>> sorted([int(u64(oid)) for oid in oids])
[0, 92, 93, 94, 95, 96, 97, 98, 99, 100]
(Note that the fact that we get oids for 92-100 is actually an
artifact of the fact that the FileStorage lastInvalidations method
returns all OIDs written by transactions, even if the OIDs were
created and not modified. FileStorages don't record whether objects
were created rather than modified. Objects that are just created don't
need to be invalidated. This means we'll invalidate objects that
    don't need to be invalidated; however, that's better than verifying
caches.)
>>> sv.close()
>>> fs.close()
If a storage doesn't implement lastInvalidations, a client can still
avoid verifying its cache if it was up to date when the server
restarted. To illustrate this, we'll create a subclass of FileStorage
without this method:
>>> class FS(FileStorage):
... lastInvalidations = property()
>>> fs = FS('t.fs')
>>> sv = StorageServer(('', get_port()), dict(fs=fs))
>>> st = StorageServerWrapper(sv, 'fs')
>>> s = st.server
Now, if we ask for the invalidations since the last committed
transaction, we'll get a result:
>>> tid, oids = s.getInvalidations(last[-1])
>>> tid == last[-1]
True
>>> oids
[]
>>> db = DB(st); conn = db.open()
>>> ob = conn.root()
>>> for i in range(5):
... ob.x = PersistentMapping(); ob = ob.x
... commit()
... last.append(fs.lastTransaction())
>>> ntid, oids = s.getInvalidations(tid)
>>> ntid == last[-1]
True
>>> sorted([int(u64(oid)) for oid in oids])
[0, 101, 102, 103, 104]
>>> fs.close()
"""
def tpc_finish_error():
r"""Server errors in tpc_finish weren't handled properly.
If there are errors applying changes to the client cache, don't
leave the cache in an inconsistent state.
>>> addr, admin = start_server()
>>> client = ZEO.client(addr)
>>> db = ZODB.DB(client)
>>> conn = db.open()
>>> conn.root.x = 1
>>> t = conn.transaction_manager.get()
>>> conn._storage.tpc_begin(t)
>>> conn.commit(t)
>>> _ = client.tpc_vote(t)
Cause some breakage by messing with the client's transaction
buffer (sadly, using implementation details):
>>> tbuf = t.data(client)
>>> tbuf.client_resolved = None
tpc_finish will fail:
>>> client.tpc_finish(t) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError: ...
>>> client.tpc_abort(t)
>>> t.abort()
But we can still load the saved data:
>>> conn2 = db.open()
>>> conn2.root.x
1
And we can save new data:
>>> conn2.root.x += 1
>>> conn2.transaction_manager.commit()
>>> db.close()
>>> stop_server(admin)
"""
def client_has_newer_data_than_server():
"""It is bad if a client has newer data than the server.
>>> db = ZODB.DB('Data.fs')
>>> db.close()
>>> r = shutil.copyfile('Data.fs', 'Data.save')
>>> addr, admin = start_server(keep=1)
>>> db = ZEO.DB(addr, name='client', max_disconnect_poll=.01)
>>> wait_connected(db.storage)
>>> conn = db.open()
>>> conn.root().x = 1
>>> transaction.commit()
OK, we've added some data to the storage and the client cache has
the new data. Now, we'll stop the server, put back the old data, and
see what happens. :)
>>> stop_server(admin)
>>> r = shutil.copyfile('Data.save', 'Data.fs')
>>> import zope.testing.loggingsupport
>>> handler = zope.testing.loggingsupport.InstalledHandler(
... 'ZEO', level=logging.ERROR)
>>> formatter = logging.Formatter('%(name)s %(levelname)s %(message)s')
>>> _, admin = start_server(addr=addr)
>>> wait_until('got enough errors', lambda:
... len([x for x in handler.records
... if x.levelname == 'CRITICAL' and
... 'Client has seen newer transactions than server!' in x.msg
... ]) >= 2)
Note that the errors repeat because the client keeps on trying to connect.
>>> db.close()
>>> handler.uninstall()
>>> stop_server(admin)
"""
def history_over_zeo():
"""
>>> addr, _ = start_server()
>>> db = ZEO.DB(addr)
>>> wait_connected(db.storage)
>>> conn = db.open()
>>> conn.root().x = 0
>>> transaction.commit()
>>> len(db.history(conn.root()._p_oid, 99))
2
>>> db.close()
"""
def dont_log_poskeyerrors_on_server():
"""
>>> addr, admin = start_server()
>>> cs = ClientStorage(addr)
>>> cs.load(ZODB.utils.p64(1))
Traceback (most recent call last):
...
POSKeyError: 0x01
>>> cs.close()
>>> stop_server(admin)
>>> with open('server-%s.log' % addr[1]) as f:
... 'POSKeyError' in f.read()
False
"""
def open_convenience():
"""Often, we just want to open a single connection.
>>> addr, _ = start_server(path='data.fs')
>>> conn = ZEO.connection(addr)
>>> conn.root()
{}
>>> conn.root()['x'] = 1
>>> transaction.commit()
>>> conn.close()
Let's make sure the database was closed when we closed the
connection, and that the data is there.
>>> db = ZEO.DB(addr)
>>> conn = db.open()
>>> conn.root()
{'x': 1}
>>> db.close()
"""
def client_asyncore_thread_has_name():
"""
>>> addr, _ = start_server()
>>> db = ZEO.DB(addr)
>>> any(t for t in threading.enumerate()
... if ' zeo client networking thread' in t.getName())
True
>>> db.close()
"""
def runzeo_without_configfile():
"""
>>> with open('runzeo', 'w') as r:
... _ = r.write('''
... import sys
... sys.path[:] = %r
... import ZEO.runzeo
... ZEO.runzeo.main(sys.argv[1:])
... ''' % sys.path)
>>> import subprocess, re
>>> print(re.sub(b'\d\d+|[:]', b'', subprocess.Popen(
... [sys.executable, 'runzeo', '-a:%s' % get_port(), '-ft', '--test'],
... stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
... ).stdout.read()).decode('ascii'))
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
------
--T INFO ZEO.runzeo () opening storage '1' using FileStorage
------
--T INFO ZEO.StorageServer StorageServer created RW with storages 1RWt
------
--T INFO ZEO.asyncio... listening on ...
------
--T INFO ZEO.StorageServer closing storage '1'
testing exit immediately
"""
def close_client_storage_w_invalidations():
r"""
Invalidations could cause errors when closing client storages,
>>> addr, _ = start_server()
>>> writing = threading.Event()
>>> def mad_write_thread():
... global writing
... conn = ZEO.connection(addr)
... writing.set()
... while writing.isSet():
... conn.root.x = 1
... transaction.commit()
... conn.close()
>>> thread = threading.Thread(target=mad_write_thread)
>>> thread.setDaemon(True)
>>> thread.start()
>>> _ = writing.wait()
>>> time.sleep(.01)
>>> for i in range(10):
... conn = ZEO.connection(addr)
... _ = conn._storage.load(b'\0'*8)
... conn.close()
>>> writing.clear()
>>> thread.join(1)
"""
def convenient_to_pass_port_to_client_and_ZEO_dot_client():
"""Jim hates typing
>>> addr, _ = start_server()
>>> client = ZEO.client(addr[1])
>>> client.__name__ == "('127.0.0.1', %s)" % addr[1]
True
>>> client.close()
"""
def test_server_status():
"""
You can get server status using the server_status method.
>>> addr, _ = start_server(zeo_conf=dict(transaction_timeout=1))
>>> db = ZEO.DB(addr)
>>> pprint.pprint(db.storage.server_status(), width=40)
{'aborts': 0,
'active_txns': 0,
'commits': 1,
'conflicts': 0,
'conflicts_resolved': 0,
'connections': 1,
'last-transaction': '03ac11b771fa1c00',
'loads': 1,
'lock_time': None,
'start': 'Tue May 4 10:55:20 2010',
'stores': 1,
'timeout-thread-is-alive': True,
'waiting': 0}
>>> db.close()
"""
def test_ruok():
"""
You can also get server status using the ruok protocol.
>>> addr, _ = start_server(zeo_conf=dict(transaction_timeout=1))
>>> db = ZEO.DB(addr) # force a transaction :)
>>> import json, socket, struct
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(addr)
>>> writer = s.makefile(mode='wb')
>>> _ = writer.write(struct.pack(">I", 4)+b"ruok")
>>> writer.close()
>>> proto = s.recv(struct.unpack(">I", s.recv(4))[0])
>>> data = json.loads(
... s.recv(struct.unpack(">I", s.recv(4))[0]).decode("ascii"))
>>> pprint.pprint(data['1'])
{u'aborts': 0,
u'active_txns': 0,
u'commits': 1,
u'conflicts': 0,
u'conflicts_resolved': 0,
u'connections': 1,
u'last-transaction': u'03ac11cd11372499',
u'loads': 1,
u'lock_time': None,
u'start': u'Sun Jan 4 09:37:03 2015',
u'stores': 1,
u'timeout-thread-is-alive': True,
u'waiting': 0}
>>> db.close(); s.close()
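The messages above use simple length-prefixed framing: each message is
its payload preceded by a 4-byte big-endian length. Here is a minimal
standalone sketch of that framing (illustrative only, independent of
ZEO's actual protocol code):

```python
import struct

def frame(payload):
    # Prefix the payload with its length as a 4-byte big-endian
    # unsigned integer, matching the struct.pack(">I", ...) call above.
    return struct.pack(">I", len(payload)) + payload

def unframe(data):
    # Split one length-prefixed message off the front of a byte
    # string, returning the payload and the remaining bytes.
    (size,) = struct.unpack(">I", data[:4])
    return data[4:4 + size], data[4 + size:]
```

For example, frame(b"ruok") produces the same bytes the test writes to
the socket.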
"""
def client_labels():
"""
When looking at server logs, for servers with lots of clients coming
from the same machine, it can be very difficult to correlate server
log entries with actual clients. It's possible, sort of, but tedious.
You can make this easier by passing a label to the ClientStorage
constructor.
>>> addr, _ = start_server()
>>> db = ZEO.DB(addr, client_label='test-label-1')
>>> db.close()
>>> @wait_until
... def check_for_test_label_1():
... with open('server-%s.log' % addr[1]) as f:
... for line in f:
... if 'test-label-1' in line:
... print(line.split()[1:4])
... return True
['INFO', 'ZEO.StorageServer', '(test-label-1']
You can specify the client label via a configuration file as well:
>>> import ZODB.config
>>> db = ZODB.config.databaseFromString('''
... <zodb>
... <zeoclient>
... server :%s
... client-label test-label-2
... </zeoclient>
... </zodb>
... ''' % addr[1])
>>> db.close()
>>> @wait_until
... def check_for_test_label_2():
... with open('server-%s.log' % addr[1]) as f:
... for line in f:
... if 'test-label-2' in line:
... print(line.split()[1:4])
... return True
['INFO', 'ZEO.StorageServer', '(test-label-2']
"""
def invalidate_client_cache_entry_on_server_commit_error():
"""
When the serials returned during commit includes an error, typically a
conflict error, invalidate the cache entry. This is important when
the cache is messed up.
>>> addr, _ = start_server()
>>> conn1 = ZEO.connection(addr)
>>> conn1.root.x = conn1.root().__class__()
>>> transaction.commit()
>>> conn1.root.x
{}
>>> cs = ZEO.ClientStorage.ClientStorage(addr, client='cache')
>>> conn2 = ZODB.connection(cs)
>>> conn2.root.x
{}
>>> conn2.close()
>>> cs.close()
>>> conn1.root.x['x'] = 1
>>> transaction.commit()
>>> conn1.root.x
{'x': 1}
Now, let's screw up the cache by making it have a last tid that is later than
the root serial.
>>> import ZEO.cache
>>> cache = ZEO.cache.ClientCache('cache-1.zec')
>>> cache.setLastTid(p64(u64(conn1.root.x._p_serial)+1))
>>> cache.close()
We'll also update the server so that its last tid is newer than the cache's:
>>> conn1.root.y = 1
>>> transaction.commit()
>>> conn1.root.y = 2
>>> transaction.commit()
Now, if we reopen the client storage, we'll get the wrong root:
>>> cs = ZEO.ClientStorage.ClientStorage(addr, client='cache')
>>> conn2 = ZODB.connection(cs)
>>> conn2.root.x
{}
And, we'll get a conflict error if we try to modify it:
>>> conn2.root.x['y'] = 1
>>> transaction.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ConflictError: ...
But, if we abort, we'll get up to date data and we'll see the changes.
>>> transaction.abort()
>>> conn2.root.x
{'x': 1}
>>> conn2.root.x['y'] = 1
>>> transaction.commit()
>>> sorted(conn2.root.x.items())
[('x', 1), ('y', 1)]
>>> conn2.close()
>>> cs.close()
>>> conn1.close()
"""
script_template = """
import sys
sys.path[:] = %(path)r
%(src)s
"""
def generate_script(name, src):
with open(name, 'w') as f:
f.write(script_template % dict(
exe=sys.executable,
path=sys.path,
src=src,
))
def read(filename):
with open(filename) as f:
return f.read()
def runzeo_logrotate_on_sigusr2():
"""
>>> port = get_port()
>>> with open('c', 'w') as r:
... _ = r.write('''
... <zeo>
... address %s
... </zeo>
... <mappingstorage>
... </mappingstorage>
... <eventlog>
... <logfile>
... path l
... </logfile>
... </eventlog>
... ''' % port)
>>> generate_script('s', '''
... import ZEO.runzeo
... ZEO.runzeo.main()
... ''')
>>> import subprocess, signal
>>> p = subprocess.Popen([sys.executable, 's', '-Cc'], close_fds=True)
>>> wait_until('started',
... lambda : os.path.exists('l') and ('listening on' in read('l'))
... )
>>> oldlog = read('l')
>>> os.rename('l', 'o')
>>> os.kill(p.pid, signal.SIGUSR2)
>>> wait_until('new file', lambda : os.path.exists('l'))
XXX: if any of the previous commands failed, we'll hang here trying to
connect to ZEO, because doctest runs all the assertions...
>>> s = ClientStorage(port)
>>> s.close()
>>> wait_until('See logging', lambda : ('Log files ' in read('l')))
>>> read('o') == oldlog # No new data in old log
True
# Cleanup:
>>> os.kill(p.pid, signal.SIGKILL)
>>> _ = p.wait()
"""
def unix_domain_sockets():
"""Make sure unix domain sockets work
>>> addr, _ = start_server(port='./sock')
>>> c = ZEO.connection(addr)
>>> c.root.x = 1
>>> transaction.commit()
>>> c.close()
"""
def gracefully_handle_abort_while_storing_many_blobs():
r"""
>>> import logging, sys
>>> old_level = logging.getLogger().getEffectiveLevel()
>>> logging.getLogger().setLevel(logging.ERROR)
>>> handler = logging.StreamHandler(sys.stdout)
>>> logging.getLogger().addHandler(handler)
>>> addr, _ = start_server(blob_dir='blobs')
>>> client = ZEO.client(addr, blob_dir='cblobs')
>>> c = ZODB.connection(client)
>>> c.root.x = ZODB.blob.Blob(b'z'*(1<<20))
>>> c.root.y = ZODB.blob.Blob(b'z'*(1<<2))
>>> t = c.transaction_manager.get()
>>> c.tpc_begin(t)
>>> c.commit(t)
We've called commit, but the blob sends are queued. We'll call abort
right away, which will delete the temporary blob files. The queued
iterators will try to open these files.
>>> c.tpc_abort(t)
Now we'll try to use the connection, mainly to wait for everything to
get processed. Before we fixed this by making tpc_finish a synchronous
call to the server, we'd get some sort of error here.
>>> _ = client._call('loadBefore', b'\0'*8, m64)
>>> c.close()
>>> logging.getLogger().removeHandler(handler)
>>> logging.getLogger().setLevel(old_level)
"""
if sys.platform.startswith('win'):
del runzeo_logrotate_on_sigusr2
del unix_domain_sockets
def work_with_multiprocessing_process(name, addr, q):
conn = ZEO.connection(addr)
q.put((name, conn.root.x))
conn.close()
class MultiprocessingTests(unittest.TestCase):
layer = ZODB.tests.util.MininalTestLayer('work_with_multiprocessing')
def test_work_with_multiprocessing(self):
"Client storage should work with multi-processing."
# Gaaa, zope.testing.runner.FakeInputContinueGenerator has no close
if not hasattr(sys.stdin, 'close'):
sys.stdin.close = lambda : None
if not hasattr(sys.stdin, 'fileno'):
sys.stdin.fileno = lambda : -1
self.globs = {}
forker.setUp(self)
addr, adminaddr = self.globs['start_server']()
conn = ZEO.connection(addr)
conn.root.x = 1
transaction.commit()
q = multiprocessing.Queue()
processes = [multiprocessing.Process(
target=work_with_multiprocessing_process,
args=(i, addr, q))
for i in range(3)]
_ = [p.start() for p in processes]
self.assertEqual(sorted(q.get(timeout=300) for p in processes),
[(0, 1), (1, 1), (2, 1)])
_ = [p.join(30) for p in processes]
conn.close()
zope.testing.setupstack.tearDown(self)
def quick_close_doesnt_kill_server():
r"""
Start a server:
>>> from .testssl import server_config, client_ssl
>>> addr, _ = start_server(zeo_conf=server_config)
Now connect and immediately disconnect. This caused the server to
die in the past:
>>> import socket, struct
>>> for i in range(5):
... s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
... s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
... struct.pack('ii', 1, 0))
... s.connect(addr)
... s.close()
>>> print("\n\nXXX WARNING: running quick_close_doesnt_kill_server with ssl as hack pending http://bugs.python.org/issue27386\n", file=sys.stderr) # Intentional long line to be annoying till this is fixed
Now we should be able to connect as normal:
>>> db = ZEO.DB(addr, ssl=client_ssl())
>>> db.storage.is_connected()
True
>>> db.close()
"""
def can_use_empty_string_for_local_host_on_client():
"""We should be able to spell localhost with ''.
>>> (_, port), _ = start_server()
>>> conn = ZEO.connection(('', port))
>>> conn.root()
{}
>>> conn.root.x = 1
>>> transaction.commit()
>>> conn.close()
"""
slow_test_classes = [
BlobAdaptedFileStorageTests, BlobWritableCacheTests,
MappingStorageTests, DemoStorageTests,
FileStorageTests, FileStorageSSLTests,
FileStorageHexTests, FileStorageClientHexTests,
]
quick_test_classes = [FileStorageRecoveryTests, ZRPCConnectionTests]
class ServerManagingClientStorage(ClientStorage):
def __init__(self, name, blob_dir, shared=False, extrafsoptions=''):
if shared:
server_blob_dir = blob_dir
else:
server_blob_dir = 'server-'+blob_dir
self.globs = {}
port = forker.get_port2(self)
addr, stop = forker.start_zeo_server(
"""
blob-dir %s
path %s
%s
""" % (server_blob_dir, name+'.fs', extrafsoptions),
port=port,
)
zope.testing.setupstack.register(self, stop)
if shared:
ClientStorage.__init__(self, addr, blob_dir=blob_dir,
shared_blob_dir=True)
else:
ClientStorage.__init__(self, addr, blob_dir=blob_dir)
def close(self):
ClientStorage.close(self)
zope.testing.setupstack.tearDown(self)
def create_storage_shared(name, blob_dir):
return ServerManagingClientStorage(name, blob_dir, True)
class ServerManagingClientStorageForIExternalGCTest(
ServerManagingClientStorage):
def pack(self, t=None, referencesf=None):
ServerManagingClientStorage.pack(self, t, referencesf, wait=True)
# Packing doesn't clear old versions out of zeo client caches,
# so we'll clear the caches.
self._cache.clear()
ZEO.ClientStorage._check_blob_cache_size(self.blob_dir, 0)
def test_suite():
suite = unittest.TestSuite()
zeo = unittest.TestSuite()
zeo.addTest(unittest.makeSuite(ZODB.tests.util.AAAA_Test_Runner_Hack))
patterns = [
(re.compile(r"u?'start': u?'[^\n]+'"), 'start'),
(re.compile(r"u?'last-transaction': u?'[0-9a-f]+'"),
'last-transaction'),
(re.compile("ZODB.POSException.ConflictError"), "ConflictError"),
(re.compile("ZODB.POSException.POSKeyError"), "POSKeyError"),
(re.compile("ZEO.Exceptions.ClientStorageError"), "ClientStorageError"),
(re.compile(r"\[Errno \d+\]"), '[Errno N]'),
(re.compile(r"loads=\d+\.\d+"), 'loads=42.42'),
# Python 3 drops the u prefix
(re.compile("u('.*?')"), r"\1"),
(re.compile('u(".*?")'), r"\1")
]
if not PY3:
patterns.append((re.compile("^'(blob[^']*)'"), r"b'\1'"))
patterns.append((re.compile("^'Z308'"), "b'Z308'"))
zeo.addTest(doctest.DocTestSuite(
setUp=forker.setUp, tearDown=zope.testing.setupstack.tearDown,
checker=renormalizing.RENormalizing(patterns),
))
zeo.addTest(doctest.DocTestSuite(
ZEO.tests.IterationTests,
setUp=forker.setUp, tearDown=zope.testing.setupstack.tearDown,
checker=renormalizing.RENormalizing((
(re.compile("ZEO.Exceptions.ClientDisconnected"),
"ClientDisconnected"),
)),
))
zeo.addTest(unittest.makeSuite(ClientConflictResolutionTests, 'check'))
zeo.layer = ZODB.tests.util.MininalTestLayer('testZeo-misc')
suite.addTest(zeo)
zeo = unittest.TestSuite()
zeo.addTest(
doctest.DocFileSuite(
'zdoptions.test',
'drop_cache_rather_than_verify.txt', 'client-config.test',
'protocols.test', 'zeo_blob_cache.test', 'invalidation-age.txt',
'dynamic_server_ports.test', '../nagios.rst',
setUp=forker.setUp, tearDown=zope.testing.setupstack.tearDown,
checker=renormalizing.RENormalizing(patterns),
globs={'print_function': print_function},
),
)
zeo.addTest(PackableStorage.IExternalGC_suite(
lambda :
ServerManagingClientStorageForIExternalGCTest(
'data.fs', 'blobs', extrafsoptions='pack-gc false')
))
for klass in quick_test_classes:
zeo.addTest(unittest.makeSuite(klass, "check"))
zeo.layer = ZODB.tests.util.MininalTestLayer('testZeo-misc2')
suite.addTest(zeo)
# Tests that often fail; maybe they'll be less flaky in their own layers.
for name in 'zeo-fan-out.test', 'new_addr.test':
zeo = unittest.TestSuite()
zeo.addTest(
doctest.DocFileSuite(
name,
setUp=forker.setUp, tearDown=zope.testing.setupstack.tearDown,
checker=renormalizing.RENormalizing(patterns),
globs={'print_function': print_function},
),
)
zeo.layer = ZODB.tests.util.MininalTestLayer('testZeo-' + name)
suite.addTest(zeo)
suite.addTest(unittest.makeSuite(MultiprocessingTests))
# Put the heavyweights in their own layers
for klass in slow_test_classes:
sub = unittest.makeSuite(klass, "check")
sub.layer = ZODB.tests.util.MininalTestLayer(klass.__name__)
suite.addTest(sub)
suite.addTest(ZODB.tests.testblob.storage_reusable_suite(
'ClientStorageNonSharedBlobs', ServerManagingClientStorage))
suite.addTest(ZODB.tests.testblob.storage_reusable_suite(
'ClientStorageSharedBlobs', create_storage_shared))
return suite
if __name__ == "__main__":
unittest.main(defaultTest="test_suite")
ZEO-5.0.0a0/src/ZEO/tests/testZEO2.py
##############################################################################
#
# Copyright Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
from __future__ import print_function
from zope.testing import setupstack, renormalizing
import doctest
import logging
import pprint
import re
import sys
import transaction
import unittest
import ZEO.StorageServer
import ZEO.tests.servertesting
import ZODB.blob
import ZODB.FileStorage
import ZODB.tests.util
import ZODB.utils
def proper_handling_of_blob_conflicts():
r"""
Conflict errors weren't properly handled when storing blobs, the
result being that the storage was left in a transaction.
We originally saw this when restarting a blob transaction, although
it doesn't really matter.
Set up the storage with some initial blob data.
>>> fs = ZODB.FileStorage.FileStorage('t.fs', blob_dir='t.blobs')
>>> db = ZODB.DB(fs)
>>> conn = db.open()
>>> conn.root.b = ZODB.blob.Blob(b'x')
>>> transaction.commit()
Get the oid and first serial. We'll use the serial later to provide
out-of-date data.
>>> oid = conn.root.b._p_oid
>>> serial = conn.root.b._p_serial
>>> with conn.root.b.open('w') as file:
... _ = file.write(b'y')
>>> transaction.commit()
>>> data = fs.load(oid)[0]
Create the server:
>>> server = ZEO.tests.servertesting.StorageServer('x', {'1': fs})
And an initial client.
>>> zs1 = ZEO.tests.servertesting.client(server, 1)
>>> zs1.tpc_begin('0', '', '', {})
>>> zs1.storea(ZODB.utils.p64(99), ZODB.utils.z64, b'x', '0')
>>> _ = zs1.vote('0') # doctest: +ELLIPSIS
In a second client, we'll try to commit using the old serial. This
will conflict. It will be blocked at the vote call.
>>> zs2 = ZEO.tests.servertesting.client(server, 2)
>>> zs2.tpc_begin('1', '', '', {})
>>> zs2.storeBlobStart()
>>> zs2.storeBlobChunk(b'z')
>>> zs2.storeBlobEnd(oid, serial, data, '1')
>>> delay = zs2.vote('1')
>>> class Sender:
... def send_reply(self, id, reply):
... print('reply', id, reply)
... def send_error(self, id, err):
... print('error', id, err)
>>> delay.set_sender(1, Sender())
>>> logger = logging.getLogger('ZEO')
>>> handler = logging.StreamHandler(sys.stdout)
>>> logger.setLevel(logging.INFO)
>>> logger.addHandler(handler)
Now, when we abort the transaction for the first client, the second
client will be restarted. It will get a conflict error, which is
raised to the client:
>>> zs1.tpc_abort('0') # doctest: +ELLIPSIS
Error raised in delayed method
Traceback (most recent call last):
...ConflictError: ...
error 1 database conflict error ...
The transaction is aborted by the server:
>>> fs.tpc_transaction() is None
True
>>> zs2.connected
True
>>> logger.setLevel(logging.NOTSET)
>>> logger.removeHandler(handler)
>>> zs2.tpc_abort('1')
>>> fs.close()
"""
def proper_handling_of_errors_in_restart():
r"""
It's critical that if there is an error in vote that the
storage isn't left in tpc.
>>> fs = ZODB.FileStorage.FileStorage('t.fs', blob_dir='t.blobs')
>>> server = ZEO.tests.servertesting.StorageServer('x', {'1': fs})
And an initial client.
>>> zs1 = ZEO.tests.servertesting.client(server, 1)
>>> zs1.tpc_begin('0', '', '', {})
>>> zs1.storea(ZODB.utils.p64(99), ZODB.utils.z64, b'x', '0')
Intentionally break zs1:
>>> zs1._store = lambda : None
>>> _ = zs1.vote('0') # doctest: +ELLIPSIS +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: <lambda>() takes no arguments (3 given)
We're not in a transaction:
>>> fs.tpc_transaction() is None
True
We can start another client and get the storage lock.
>>> zs1 = ZEO.tests.servertesting.client(server, 1)
>>> zs1.tpc_begin('1', '', '', {})
>>> zs1.storea(ZODB.utils.p64(99), ZODB.utils.z64, b'x', '1')
>>> _ = zs1.vote('1') # doctest: +ELLIPSIS
>>> zs1.tpc_finish('1').set_sender(0, zs1.connection)
>>> fs.close()
"""
def errors_in_vote_should_clear_lock():
"""
So, we arrange to get an error in vote:
>>> import ZODB.MappingStorage
>>> vote_should_fail = True
>>> class MappingStorage(ZODB.MappingStorage.MappingStorage):
... def tpc_vote(*args):
... if vote_should_fail:
... raise ValueError
... return ZODB.MappingStorage.MappingStorage.tpc_vote(*args)
>>> server = ZEO.tests.servertesting.StorageServer(
... 'x', {'1': MappingStorage()})
>>> zs = ZEO.tests.servertesting.client(server, 1)
>>> zs.tpc_begin('0', '', '', {})
>>> zs.storea(ZODB.utils.p64(99), ZODB.utils.z64, 'x', '0')
>>> zs.vote('0')
Traceback (most recent call last):
...
ValueError
When we do, the storage server's transaction lock shouldn't be held:
>>> '1' in server._commit_locks
False
Of course, if vote succeeds, the lock will be held:
>>> vote_should_fail = False
>>> zs.tpc_begin('1', '', '', {})
>>> zs.storea(ZODB.utils.p64(99), ZODB.utils.z64, 'x', '1')
>>> _ = zs.vote('1') # doctest: +ELLIPSIS
>>> '1' in server._commit_locks
True
>>> zs.tpc_abort('1')
"""
def some_basic_locking_tests():
r"""
>>> itid = 0
>>> def start_trans(zs):
... global itid
... itid += 1
... tid = str(itid)
... zs.tpc_begin(tid, '', '', {})
... zs.storea(ZODB.utils.p64(99), ZODB.utils.z64, 'x', tid)
... return tid
>>> server = ZEO.tests.servertesting.StorageServer()
>>> handler = logging.StreamHandler(sys.stdout)
>>> handler.setFormatter(logging.Formatter(
... '%(name)s %(levelname)s\n%(message)s'))
>>> logging.getLogger('ZEO').addHandler(handler)
>>> logging.getLogger('ZEO').setLevel(logging.DEBUG)
Work around the fact that ZODB registers level names backwards, which
quit working in Python 3.4:
>>> import logging
>>> from ZODB.loglevels import BLATHER
>>> logging.addLevelName(BLATHER, "BLATHER")
We start a transaction and vote, this leads to getting the lock.
>>> zs1 = ZEO.tests.servertesting.client(server, '1')
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
>>> tid1 = start_trans(zs1)
>>> resolved1 = zs1.vote(tid1) # doctest: +ELLIPSIS
ZEO.StorageServer DEBUG
(test-addr-1) ('1') lock: transactions waiting: 0
ZEO.StorageServer BLATHER
(test-addr-1) Preparing to commit transaction: 1 objects, ... bytes
If another client tries to vote, its lock request will be queued and
a delay will be returned:
>>> zs2 = ZEO.tests.servertesting.client(server, '2')
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
>>> tid2 = start_trans(zs2)
>>> delay = zs2.vote(tid2)
ZEO.StorageServer DEBUG
(test-addr-2) ('1') queue lock: transactions waiting: 1
>>> delay.set_sender(0, zs2.connection)
When we end the first transaction, the queued vote gets the lock.
>>> zs1.tpc_abort(tid1) # doctest: +ELLIPSIS
ZEO.StorageServer DEBUG
(test-addr-1) ('1') unlock: transactions waiting: 1
ZEO.StorageServer DEBUG
(test-addr-2) ('1') lock: transactions waiting: 0
ZEO.StorageServer BLATHER
(test-addr-2) Preparing to commit transaction: 1 objects, ... bytes
Let's try again with the first client. The vote will be queued:
>>> tid1 = start_trans(zs1)
>>> delay = zs1.vote(tid1)
ZEO.StorageServer DEBUG
(test-addr-1) ('1') queue lock: transactions waiting: 1
If the queued transaction is aborted, it will be dequeued:
>>> zs1.tpc_abort(tid1) # doctest: +ELLIPSIS
ZEO.StorageServer DEBUG
(test-addr-1) ('1') dequeue lock: transactions waiting: 0
BTW, voting multiple times will error:
>>> zs2.vote(tid2)
Traceback (most recent call last):
...
StorageTransactionError: Already voting (locked)
>>> tid1 = start_trans(zs1)
>>> delay = zs1.vote(tid1)
ZEO.StorageServer DEBUG
(test-addr-1) ('1') queue lock: transactions waiting: 1
>>> delay.set_sender(0, zs1.connection)
>>> zs1.vote(tid1)
Traceback (most recent call last):
...
StorageTransactionError: Already voting (waiting)
Note that the locking activity is logged at debug level to avoid
cluttering log files; however, as the number of waiting votes
increases, so does the logging level:
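The escalation can be pictured as a small helper (an illustrative
sketch; the thresholds are inferred from the log output below, not
taken from ZEO's source):

```python
import logging

def lock_log_level(waiting):
    # Escalate the lock-activity log level as the number of waiting
    # lock requests grows (thresholds inferred from the log output).
    if waiting >= 10:
        return logging.CRITICAL
    if waiting >= 4:
        return logging.WARNING
    return logging.DEBUG
```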
>>> clients = []
>>> for i in range(9):
... client = ZEO.tests.servertesting.client(server, str(i+10))
... tid = start_trans(client)
... delay = client.vote(tid)
... clients.append(client)
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer DEBUG
(test-addr-10) ('1') queue lock: transactions waiting: 2
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer DEBUG
(test-addr-11) ('1') queue lock: transactions waiting: 3
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-12) ('1') queue lock: transactions waiting: 4
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-13) ('1') queue lock: transactions waiting: 5
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-14) ('1') queue lock: transactions waiting: 6
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-15) ('1') queue lock: transactions waiting: 7
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-16) ('1') queue lock: transactions waiting: 8
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer WARNING
(test-addr-17) ('1') queue lock: transactions waiting: 9
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
ZEO.StorageServer CRITICAL
(test-addr-18) ('1') queue lock: transactions waiting: 10
If a client with the transaction lock disconnects, it will abort and
release the lock and one of the waiting clients will get the lock.
>>> zs2.notify_disconnected() # doctest: +ELLIPSIS
ZEO.StorageServer INFO
(test-addr-2) disconnected during locked transaction
ZEO.StorageServer CRITICAL
(test-addr-2) ('1') unlock: transactions waiting: 10
ZEO.StorageServer WARNING
(test-addr-1) ('1') lock: transactions waiting: 9
ZEO.StorageServer BLATHER
(test-addr-1) Preparing to commit transaction: 1 objects, ... bytes
(In practice, waiting clients won't necessarily get the lock in order.)
We can find out about the current lock state, and get other server
statistics using the server_status method:
>>> pprint.pprint(zs1.server_status(), width=40)
{'aborts': 3,
'active_txns': 10,
'commits': 0,
'conflicts': 0,
'conflicts_resolved': 0,
'connections': 10,
'last-transaction': '0000000000000000',
'loads': 0,
'lock_time': 1272653598.693882,
'start': 'Fri Apr 30 14:53:18 2010',
'stores': 13,
'timeout-thread-is-alive': 'stub',
'waiting': 9}
If clients disconnect while waiting, they will be dequeued:
>>> for client in clients:
... client.notify_disconnected()
ZEO.StorageServer INFO
(test-addr-10) disconnected during unlocked transaction
ZEO.StorageServer WARNING
(test-addr-10) ('1') dequeue lock: transactions waiting: 8
ZEO.StorageServer INFO
(test-addr-11) disconnected during unlocked transaction
ZEO.StorageServer WARNING
(test-addr-11) ('1') dequeue lock: transactions waiting: 7
ZEO.StorageServer INFO
(test-addr-12) disconnected during unlocked transaction
ZEO.StorageServer WARNING
(test-addr-12) ('1') dequeue lock: transactions waiting: 6
ZEO.StorageServer INFO
(test-addr-13) disconnected during unlocked transaction
ZEO.StorageServer WARNING
(test-addr-13) ('1') dequeue lock: transactions waiting: 5
ZEO.StorageServer INFO
(test-addr-14) disconnected during unlocked transaction
ZEO.StorageServer WARNING
(test-addr-14) ('1') dequeue lock: transactions waiting: 4
ZEO.StorageServer INFO
(test-addr-15) disconnected during unlocked transaction
ZEO.StorageServer DEBUG
(test-addr-15) ('1') dequeue lock: transactions waiting: 3
ZEO.StorageServer INFO
(test-addr-16) disconnected during unlocked transaction
ZEO.StorageServer DEBUG
(test-addr-16) ('1') dequeue lock: transactions waiting: 2
ZEO.StorageServer INFO
(test-addr-17) disconnected during unlocked transaction
ZEO.StorageServer DEBUG
(test-addr-17) ('1') dequeue lock: transactions waiting: 1
ZEO.StorageServer INFO
(test-addr-18) disconnected during unlocked transaction
ZEO.StorageServer DEBUG
(test-addr-18) ('1') dequeue lock: transactions waiting: 0
>>> zs1.tpc_abort(tid1)
>>> logging.getLogger('ZEO').setLevel(logging.NOTSET)
>>> logging.getLogger('ZEO').removeHandler(handler)
"""
def lock_sanity_check():
r"""
On one occasion with 3.10.0a1 in production, we had a case where a
transaction lock wasn't released properly. One possibility, from
scant log information, is that the server and ZEOStorage had different
ideas about whether the ZEOStorage was locked. The timeout thread
properly closed the ZEOStorage's connection, but the ZEOStorage didn't
release it's lock, presumably because it thought it wasn't locked. I'm
not sure why this happened. I've refactored the logic quite a bit to
try to deal with this, but the consequences of this failure are so
severe, I'm adding some sanity checking when queueing lock requests.
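The sanity checking described above can be sketched as a small lock queue
that refuses obviously inconsistent requests. This is a hypothetical
illustration with invented names (``LockQueue``, ``request``, ``release``),
not ZEO's actual StorageServer code:

```python
# Hypothetical sketch, not ZEO's actual implementation: a FIFO lock
# queue that raises if a closed or already-queued connection asks to
# lock, instead of silently getting into an inconsistent state.
class LockQueue:

    def __init__(self):
        self.locked = None   # connection currently holding the lock
        self.waiting = []    # connections queued behind it, FIFO

    def request(self, conn):
        # Sanity checks: a closed connection, or one that is already
        # locked or queued, must never be queued (again).
        if getattr(conn, 'closed', False):
            raise AssertionError("closed connection requested lock")
        if conn is self.locked or conn in self.waiting:
            raise AssertionError("connection already locked or queued")
        if self.locked is None:
            self.locked = conn
            return True      # lock granted immediately
        self.waiting.append(conn)
        return False         # queued; granted later, on release

    def release(self, conn):
        assert conn is self.locked
        self.locked = self.waiting.pop(0) if self.waiting else None
```

On release, the lock is handed to the first waiter, matching the FIFO
dequeue behavior shown in the "transactions waiting" log messages above.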
Helper to manage transactions:
>>> itid = 0
>>> def start_trans(zs):
... global itid
... itid += 1
... tid = str(itid)
... zs.tpc_begin(tid, '', '', {})
... zs.storea(ZODB.utils.p64(99), ZODB.utils.z64, 'x', tid)
... return tid
Set up server and logging:
>>> server = ZEO.tests.servertesting.StorageServer()
>>> handler = logging.StreamHandler(sys.stdout)
>>> handler.setFormatter(logging.Formatter(
... '%(name)s %(levelname)s\n%(message)s'))
>>> logging.getLogger('ZEO').addHandler(handler)
>>> logging.getLogger('ZEO').setLevel(logging.DEBUG)
    Work around the fact that ZODB registers log-level names backwards,
    which stopped working in Python 3.4:
>>> import logging
>>> from ZODB.loglevels import BLATHER
>>> logging.addLevelName(BLATHER, "BLATHER")
Now, we'll start a transaction, get the lock and then mark the
ZEOStorage as closed and see if trying to get a lock cleans it up:
>>> zs1 = ZEO.tests.servertesting.client(server, '1')
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
>>> tid1 = start_trans(zs1)
>>> resolved1 = zs1.vote(tid1) # doctest: +ELLIPSIS
ZEO.StorageServer DEBUG
(test-addr-1) ('1') lock: transactions waiting: 0
ZEO.StorageServer BLATHER
(test-addr-1) Preparing to commit transaction: 1 objects, ... bytes
>>> zs1.connection.connection_lost(None)
ZEO.StorageServer INFO
(test-addr-1) disconnected during locked transaction
>>> zs2 = ZEO.tests.servertesting.client(server, '2')
ZEO.asyncio.base INFO
Connected server protocol
ZEO.asyncio.server INFO
received handshake 'Z5'
>>> tid2 = start_trans(zs2)
>>> resolved2 = zs2.vote(tid2) # doctest: +ELLIPSIS
ZEO.StorageServer DEBUG
(test-addr-2) ('1') lock: transactions waiting: 0
ZEO.StorageServer BLATHER
(test-addr-2) Preparing to commit transaction: 1 objects, ... bytes
>>> zs2.tpc_abort(tid2)
>>> logging.getLogger('ZEO').setLevel(logging.NOTSET)
>>> logging.getLogger('ZEO').removeHandler(handler)
"""
def test_suite():
return unittest.TestSuite((
doctest.DocTestSuite(
setUp=ZODB.tests.util.setUp, tearDown=setupstack.tearDown,
checker=renormalizing.RENormalizing([
(re.compile('\d+/test-addr'), ''),
(re.compile("'lock_time': \d+.\d+"), 'lock_time'),
(re.compile(r"'start': '[^\n]+'"), 'start'),
(re.compile('ZODB.POSException.StorageTransactionError'),
'StorageTransactionError'),
]),
),
))
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')
ZEO-5.0.0a0/src/ZEO/tests/testZEOOptions.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Test suite for ZEO.runzeo.ZEOOptions."""
import os
import tempfile
import unittest
import ZODB.config
from ZEO.runzeo import ZEOOptions
from zdaemon.tests.testzdoptions import TestZDOptions
# When a hostname isn't specified in a socket binding address, ZConfig
# supplies the empty string.
DEFAULT_BINDING_HOST = ""
class TestZEOOptions(TestZDOptions):
OptionsClass = ZEOOptions
input_args = ["-f", "Data.fs", "-a", "5555"]
output_opts = [("-f", "Data.fs"), ("-a", "5555")]
output_args = []
configdata = """
address 5555
path Data.fs
"""
    def setUp(self):
        # mkstemp avoids the race inherent in the deprecated mktemp.
        fd, self.tempfilename = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.write(self.configdata)
def tearDown(self):
try:
os.remove(self.tempfilename)
except os.error:
pass
def test_configure(self):
# Hide the base class test_configure
pass
    def test_default_help(self): pass  # disable base-class test with spurious failures
def test_defaults_with_schema(self):
options = self.OptionsClass()
options.realize(["-C", self.tempfilename])
self.assertEqual(options.address, (DEFAULT_BINDING_HOST, 5555))
self.assertEqual(len(options.storages), 1)
opener = options.storages[0]
self.assertEqual(opener.name, "fs")
self.assertEqual(opener.__class__, ZODB.config.FileStorage)
self.assertEqual(options.read_only, 0)
self.assertEqual(options.transaction_timeout, None)
self.assertEqual(options.invalidation_queue_size, 100)
def test_defaults_without_schema(self):
options = self.OptionsClass()
options.realize(["-a", "5555", "-f", "Data.fs"])
self.assertEqual(options.address, (DEFAULT_BINDING_HOST, 5555))
self.assertEqual(len(options.storages), 1)
opener = options.storages[0]
self.assertEqual(opener.name, "1")
self.assertEqual(opener.__class__, ZODB.config.FileStorage)
self.assertEqual(opener.config.path, "Data.fs")
self.assertEqual(options.read_only, 0)
self.assertEqual(options.transaction_timeout, None)
self.assertEqual(options.invalidation_queue_size, 100)
def test_commandline_overrides(self):
options = self.OptionsClass()
options.realize(["-C", self.tempfilename,
"-a", "6666", "-f", "Wisdom.fs"])
self.assertEqual(options.address, (DEFAULT_BINDING_HOST, 6666))
self.assertEqual(len(options.storages), 1)
opener = options.storages[0]
self.assertEqual(opener.__class__, ZODB.config.FileStorage)
self.assertEqual(opener.config.path, "Wisdom.fs")
self.assertEqual(options.read_only, 0)
self.assertEqual(options.transaction_timeout, None)
self.assertEqual(options.invalidation_queue_size, 100)
def test_suite():
suite = unittest.TestSuite()
for cls in [TestZEOOptions]:
suite.addTest(unittest.makeSuite(cls))
return suite
if __name__ == "__main__":
unittest.main(defaultTest='test_suite')
ZEO-5.0.0a0/src/ZEO/tests/test_cache.py
##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Basic unit tests for a client cache."""
from __future__ import print_function
from ZODB.utils import p64, repr_to_oid
import doctest
import os
import re
import string
import struct
import sys
import tempfile
import unittest
import ZEO.cache
import ZODB.tests.util
import zope.testing.setupstack
import zope.testing.renormalizing
from ZODB.utils import p64, u64, z64
n1 = p64(1)
n2 = p64(2)
n3 = p64(3)
n4 = p64(4)
n5 = p64(5)
def hexprint(file):
    # Hex-dump helper, made Python 2/3 compatible: iterate byte values
    # via bytearray (plain bytes iteration yields ints only on Python 3,
    # and str.encode('hex') no longer exists there).
    file.seek(0)
    data = file.read()
    offset = 0
    while data:
        line, data = data[:16], data[16:]
        printable = ""
        hex = ""
        for character in bytearray(line):
            if (chr(character) in string.printable
                and character not in (9, 12, 13)):
                printable += chr(character)
            else:
                printable += '.'
            hex += '%02x ' % character
        hex = hex[:24] + ' ' + hex[24:]
        hex = hex.ljust(49)
        printable = printable.ljust(16)
        print('%08x %s |%s|' % (offset, hex, printable))
        offset += 16
def oid(o):
repr = '%016x' % o
return repr_to_oid(repr)
tid = oid
class CacheTests(ZODB.tests.util.TestCase):
def setUp(self):
# The default cache size is much larger than we need here. Since
# testSerialization reads the entire file into a string, it's not
# good to leave it that big.
ZODB.tests.util.TestCase.setUp(self)
self.cache = ZEO.cache.ClientCache(size=1024**2)
def tearDown(self):
self.cache.close()
if self.cache.path:
os.remove(self.cache.path)
ZODB.tests.util.TestCase.tearDown(self)
def testLastTid(self):
self.assertEqual(self.cache.getLastTid(), z64)
self.cache.setLastTid(n2)
self.assertEqual(self.cache.getLastTid(), n2)
self.assertEqual(self.cache.getLastTid(), n2)
self.cache.setLastTid(n3)
self.assertEqual(self.cache.getLastTid(), n3)
# Check that setting tids out of order gives an error:
# the cache complains only when it's non-empty
self.cache.store(n1, n3, None, b'x')
self.assertRaises(ValueError, self.cache.setLastTid, n2)
def testLoad(self):
data1 = b"data for n1"
self.assertEqual(self.cache.load(n1), None)
self.cache.store(n1, n3, None, data1)
self.assertEqual(self.cache.load(n1), (data1, n3))
def testInvalidate(self):
data1 = b"data for n1"
self.cache.store(n1, n3, None, data1)
self.cache.invalidate(n2, n2)
self.cache.invalidate(n1, n4)
self.assertEqual(self.cache.load(n1), None)
self.assertEqual(self.cache.loadBefore(n1, n4),
(data1, n3, n4))
def testNonCurrent(self):
data1 = b"data for n1"
data2 = b"data for n2"
self.cache.store(n1, n4, None, data1)
self.cache.store(n1, n2, n3, data2)
# can't say anything about state before n2
self.assertEqual(self.cache.loadBefore(n1, n2), None)
# n3 is the upper bound of non-current record n2
self.assertEqual(self.cache.loadBefore(n1, n3), (data2, n2, n3))
# no data for between n2 and n3
self.assertEqual(self.cache.loadBefore(n1, n4), None)
self.cache.invalidate(n1, n5)
self.assertEqual(self.cache.loadBefore(n1, n5), (data1, n4, n5))
self.assertEqual(self.cache.loadBefore(n2, n4), None)
def testException(self):
self.cache.store(n1, n2, None, b"data")
self.cache.store(n1, n2, None, b"data")
self.assertRaises(ValueError,
self.cache.store,
n1, n3, None, b"data")
def testEviction(self):
# Manually override the current maxsize
cache = ZEO.cache.ClientCache(None, 3395)
# Trivial test of eviction code. Doesn't test non-current
# eviction.
data = [b"z" * i for i in range(100)]
for i in range(50):
n = p64(i)
cache.store(n, n, None, data[i])
self.assertEquals(len(cache), i + 1)
# The cache is now almost full. The next insert
# should delete some objects.
n = p64(50)
cache.store(n, n, None, data[51])
self.assert_(len(cache) < 51)
        # TODO: Need to make sure eviction of non-current data
        # is handled correctly.
def testSerialization(self):
self.cache.store(n1, n2, None, b"data for n1")
self.cache.store(n3, n3, n4, b"non-current data for n3")
self.cache.store(n3, n4, n5, b"more non-current data for n3")
        # mkstemp avoids the race inherent in the deprecated mktemp.
        fd, path = tempfile.mkstemp()
        # Copy data from self.cache into path, reaching into the cache
        # guts to make the copy.
        with os.fdopen(fd, "wb") as dst:
            src = self.cache.f
            src.seek(0)
            dst.write(src.read(self.cache.maxsize))
copy = ZEO.cache.ClientCache(path)
# Verify that internals of both objects are the same.
# Could also test that external API produces the same results.
eq = self.assertEqual
eq(copy.getLastTid(), self.cache.getLastTid())
eq(len(copy), len(self.cache))
eq(dict(copy.current), dict(self.cache.current))
eq(dict([(k, dict(v)) for (k, v) in copy.noncurrent.items()]),
dict([(k, dict(v)) for (k, v) in self.cache.noncurrent.items()]),
)
def testCurrentObjectLargerThanCache(self):
if self.cache.path:
os.remove(self.cache.path)
self.cache = ZEO.cache.ClientCache(size=50)
# We store an object that is a bit larger than the cache can handle.
        self.cache.store(n1, n2, None, b"x"*64)
# We can see that it was not stored.
self.assertEquals(None, self.cache.load(n1))
# If an object cannot be stored in the cache, it must not be
# recorded as current.
self.assert_(n1 not in self.cache.current)
# Regression test: invalidation must still work.
self.cache.invalidate(n1, n2)
def testOldObjectLargerThanCache(self):
if self.cache.path:
os.remove(self.cache.path)
cache = ZEO.cache.ClientCache(size=50)
# We store an object that is a bit larger than the cache can handle.
        cache.store(n1, n2, n3, b"x"*64)
# We can see that it was not stored.
self.assertEquals(None, cache.load(n1))
# If an object cannot be stored in the cache, it must not be
# recorded as non-current.
self.assert_(1 not in cache.noncurrent)
def testVeryLargeCaches(self):
cache = ZEO.cache.ClientCache('cache', size=(1<<32)+(1<<20))
cache.store(n1, n2, None, b"x")
cache.close()
cache = ZEO.cache.ClientCache('cache', size=(1<<33)+(1<<20))
self.assertEquals(cache.load(n1), (b'x', n2))
cache.close()
def testConversionOfLargeFreeBlocks(self):
with open('cache', 'wb') as f:
f.write(ZEO.cache.magic+
b'\0'*8 +
b'f'+struct.pack(">I", (1<<32)-12)
)
f.seek((1<<32)-1)
f.write(b'x')
cache = ZEO.cache.ClientCache('cache', size=1<<32)
cache.close()
cache = ZEO.cache.ClientCache('cache', size=1<<32)
cache.close()
with open('cache', 'rb') as f:
f.seek(12)
self.assertEquals(f.read(1), b'f')
self.assertEquals(struct.unpack(">I", f.read(4))[0],
ZEO.cache.max_block_size)
if not sys.platform.startswith('linux'):
# On platforms without sparse files, these tests are just way
# too hard on the disk and take too long (especially in a windows
# VM).
del testVeryLargeCaches
del testConversionOfLargeFreeBlocks
def test_clear_zeo_cache(self):
cache = self.cache
for i in range(10):
cache.store(p64(i), n2, None, str(i).encode())
cache.store(p64(i), n1, n2, str(i).encode()+b'old')
self.assertEqual(len(cache), 20)
self.assertEqual(cache.load(n3), (b'3', n2))
self.assertEqual(cache.loadBefore(n3, n2), (b'3old', n1, n2))
cache.clear()
self.assertEqual(len(cache), 0)
self.assertEqual(cache.load(n3), None)
self.assertEqual(cache.loadBefore(n3, n2), None)
def testChangingCacheSize(self):
# start with a small cache
data = b'x'
recsize = ZEO.cache.allocated_record_overhead+len(data)
for extra in (2, recsize-2):
cache = ZEO.cache.ClientCache(
'cache', size=ZEO.cache.ZEC_HEADER_SIZE+100*recsize+extra)
for i in range(100):
cache.store(p64(i), n1, None, data)
self.assertEquals(len(cache), 100)
self.assertEquals(os.path.getsize(
'cache'), ZEO.cache.ZEC_HEADER_SIZE+100*recsize+extra)
# Now make it smaller
cache.close()
small = 50
cache = ZEO.cache.ClientCache(
'cache', size=ZEO.cache.ZEC_HEADER_SIZE+small*recsize+extra)
self.assertEquals(len(cache), small)
self.assertEquals(os.path.getsize(
'cache'), ZEO.cache.ZEC_HEADER_SIZE+small*recsize+extra)
self.assertEquals(set(u64(oid) for (oid, tid) in cache.contents()),
set(range(small)))
for i in range(100, 110):
cache.store(p64(i), n1, None, data)
# We use small-1 below because an extra object gets
# evicted because of the optimization to assure that we
# always get a free block after a new allocated block.
expected_len = small - 1
self.assertEquals(len(cache), expected_len)
expected_oids = set(list(range(11, 50))+list(range(100, 110)))
self.assertEquals(
set(u64(oid) for (oid, tid) in cache.contents()),
expected_oids)
# Make sure we can reopen with same size
cache.close()
cache = ZEO.cache.ClientCache(
'cache', size=ZEO.cache.ZEC_HEADER_SIZE+small*recsize+extra)
self.assertEquals(len(cache), expected_len)
self.assertEquals(set(u64(oid) for (oid, tid) in cache.contents()),
expected_oids)
# Now make it bigger
cache.close()
large = 150
cache = ZEO.cache.ClientCache(
'cache', size=ZEO.cache.ZEC_HEADER_SIZE+large*recsize+extra)
self.assertEquals(len(cache), expected_len)
self.assertEquals(os.path.getsize(
'cache'), ZEO.cache.ZEC_HEADER_SIZE+large*recsize+extra)
self.assertEquals(set(u64(oid) for (oid, tid) in cache.contents()),
expected_oids)
for i in range(200, 305):
cache.store(p64(i), n1, None, data)
# We use large-2 for the same reason we used small-1 above.
expected_len = large-2
self.assertEquals(len(cache), expected_len)
expected_oids = set(list(range(11, 50)) +
list(range(106, 110)) +
list(range(200, 305)))
self.assertEquals(set(u64(oid) for (oid, tid) in cache.contents()),
expected_oids)
# Make sure we can reopen with same size
cache.close()
cache = ZEO.cache.ClientCache(
'cache', size=ZEO.cache.ZEC_HEADER_SIZE+large*recsize+extra)
self.assertEquals(len(cache), expected_len)
self.assertEquals(set(u64(oid) for (oid, tid) in cache.contents()),
expected_oids)
# Cleanup
cache.close()
os.remove('cache')
def testSetAnyLastTidOnEmptyCache(self):
self.cache.setLastTid(p64(5))
self.cache.setLastTid(p64(5))
self.cache.setLastTid(p64(3))
self.cache.setLastTid(p64(4))
def test_loadBefore_doesnt_miss_current(self):
        # Make sure that loadBefore gets current data if there
        # isn't non-current data
cache = self.cache
oid = n1
cache.store(oid, n1, None, b'first')
self.assertEqual(cache.loadBefore(oid, n1), None)
self.assertEqual(cache.loadBefore(oid, n2), (b'first', n1, None))
self.cache.invalidate(oid, n2)
cache.store(oid, n2, None, b'second')
self.assertEqual(cache.loadBefore(oid, n1), None)
self.assertEqual(cache.loadBefore(oid, n2), (b'first', n1, n2))
self.assertEqual(cache.loadBefore(oid, n3), (b'second', n2, None))
def kill_does_not_cause_cache_corruption():
r"""
If we kill a process while a cache is being written to, the cache
isn't corrupted. To see this, we'll write a little script that
writes records to a cache file repeatedly.
>>> import os, random, sys, time
>>> with open('t', 'w') as f:
... _ = f.write('''
... import os, random, sys, time
... try:
... import thread
... except ImportError:
... import _thread as thread
... sys.path = %r
...
... def suicide():
... time.sleep(random.random()/10)
... os._exit(0)
...
... import ZEO.cache
... from ZODB.utils import p64
... cache = ZEO.cache.ClientCache('cache')
... oid = 0
... t = 0
... thread.start_new_thread(suicide, ())
... while 1:
... oid += 1
... t += 1
... data = b'X' * random.randint(5000,25000)
... cache.store(p64(oid), p64(t), None, data)
...
... ''' % sys.path)
>>> for i in range(10):
... _ = os.spawnl(os.P_WAIT, sys.executable, sys.executable, 't')
... if os.path.exists('cache'):
... cache = ZEO.cache.ClientCache('cache')
... cache.close()
... os.remove('cache')
... os.remove('cache.lock')
"""
def full_cache_is_valid():
r"""
If we fill up the cache without any free space, the cache can
still be used.
>>> import ZEO.cache
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> data = b'X' * (1000 - ZEO.cache.ZEC_HEADER_SIZE - 41)
>>> cache.store(p64(1), p64(1), None, data)
>>> cache.close()
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> cache.store(p64(2), p64(2), None, b'XXX')
>>> cache.close()
"""
def cannot_open_same_cache_file_twice():
r"""
>>> import ZEO.cache
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> cache2 = ZEO.cache.ClientCache('cache', 1000) \
... # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
LockError: Couldn't lock 'cache.lock'
>>> cache.close()
"""
def broken_non_current():
r"""
    In production, we saw a situation where a _del_noncurrent call
    raised a KeyError when trying to free space, causing the cache to
    become unusable. I can't see why this would occur, but added a
    logging exception handler so that, in the future, we'll still see
    such cases in the log, but will ignore the error and keep going.
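The "log and keep going" approach described above amounts to the
following pattern. This is a hypothetical standalone sketch (the logger
name and ``del_noncurrent`` helper are invented), not the actual
ZEO.cache code:

```python
# Hypothetical sketch of the log-and-continue pattern: a failure while
# freeing one cache entry is logged rather than allowed to propagate
# and disable the whole cache.
import logging

logger = logging.getLogger('example.cache')  # hypothetical logger name

def del_noncurrent(noncurrent, key):
    # Free a non-current entry; if it is unexpectedly missing, log the
    # error (with traceback) and continue instead of raising.
    try:
        del noncurrent[key]
    except KeyError:
        logger.exception("Couldn't find non-current %r", key)
```

The second delete of the same key below simply logs and returns, which
is exactly the behavior the doctest that follows checks for the real
cache.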
>>> import ZEO.cache, ZODB.utils, logging, sys
>>> logger = logging.getLogger('ZEO.cache')
>>> logger.setLevel(logging.ERROR)
>>> handler = logging.StreamHandler(sys.stdout)
>>> logger.addHandler(handler)
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> cache.store(ZODB.utils.p64(1), ZODB.utils.p64(1), None, b'0')
>>> cache.invalidate(ZODB.utils.p64(1), ZODB.utils.p64(2))
>>> cache._del_noncurrent(ZODB.utils.p64(1), ZODB.utils.p64(2))
... # doctest: +NORMALIZE_WHITESPACE
Couldn't find non-current
('\x00\x00\x00\x00\x00\x00\x00\x01', '\x00\x00\x00\x00\x00\x00\x00\x02')
>>> cache._del_noncurrent(ZODB.utils.p64(1), ZODB.utils.p64(1))
>>> cache._del_noncurrent(ZODB.utils.p64(1), ZODB.utils.p64(1)) #
... # doctest: +NORMALIZE_WHITESPACE
Couldn't find non-current
('\x00\x00\x00\x00\x00\x00\x00\x01', '\x00\x00\x00\x00\x00\x00\x00\x01')
>>> logger.setLevel(logging.NOTSET)
>>> logger.removeHandler(handler)
>>> cache.close()
"""
# def bad_magic_number(): See rename_bad_cache_file
def cache_trace_analysis():
r"""
Check to make sure the cache analysis scripts work.
>>> import time
>>> timetime = time.time
>>> now = 1278864701.5
>>> time.time = lambda : now
>>> os.environ["ZEO_CACHE_TRACE"] = 'yes'
>>> import random2 as random
>>> random = random.Random(42)
>>> history = []
>>> serial = 1
>>> for i in range(1000):
... serial += 1
... oid = random.randint(i+1000, i+6000)
... history.append((b's', p64(oid), p64(serial),
... b'x'*random.randint(200,2000)))
... for j in range(10):
... oid = random.randint(i+1000, i+6000)
... history.append((b'l', p64(oid), p64(serial),
... b'x'*random.randint(200,2000)))
>>> def cache_run(name, size):
... serial = 1
... random.seed(42)
... global now
... now = 1278864701.5
... cache = ZEO.cache.ClientCache(name, size*(1<<20))
... for action, oid, serial, data in history:
... now += 1
... if action == b's':
... cache.invalidate(oid, serial)
... cache.store(oid, serial, None, data)
... else:
... v = cache.load(oid)
... if v is None:
... cache.store(oid, serial, None, data)
... cache.close()
>>> cache_run('cache', 2)
>>> import ZEO.scripts.cache_stats, ZEO.scripts.cache_simul
>>> def ctime(t):
... return time.asctime(time.gmtime(t-3600*4))
>>> ZEO.scripts.cache_stats.ctime = ctime
>>> ZEO.scripts.cache_simul.ctime = ctime
############################################################
Stats
>>> ZEO.scripts.cache_stats.main(['cache.trace'])
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 291 19 609 35.6%
Jul 11 13:00-14 818 295 36 605 36.1%
Jul 11 13:15-29 818 277 31 623 33.9%
Jul 11 13:30-44 819 276 29 624 33.7%
Jul 11 13:45-59 818 251 25 649 30.7%
Jul 11 14:00-14 818 295 27 605 36.1%
Jul 11 14:15-29 818 262 33 638 32.0%
Jul 11 14:30-44 818 297 32 603 36.3%
Jul 11 14:45-59 819 268 23 632 32.7%
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
>>> ZEO.scripts.cache_stats.main('-q cache.trace'.split())
loads hits inv(h) writes hitrate
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
>>> ZEO.scripts.cache_stats.main('-v cache.trace'.split())
... # doctest: +ELLIPSIS
loads hits inv(h) writes hitrate
Jul 11 12:11:41 00 '' 0000000000000000 0000000000000000 -
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11:42 10 1065 0000000000000002 0000000000000000 -
Jul 11 12:11:42 52 1065 0000000000000002 0000000000000000 - 245
Jul 11 12:11:43 20 947 0000000000000000 0000000000000000 -
Jul 11 12:11:43 52 947 0000000000000002 0000000000000000 - 602
Jul 11 12:11:44 20 124b 0000000000000000 0000000000000000 -
Jul 11 12:11:44 52 124b 0000000000000002 0000000000000000 - 1418
...
Jul 11 15:14:55 52 10cc 00000000000003e9 0000000000000000 - 1306
Jul 11 15:14:56 20 18a7 0000000000000000 0000000000000000 -
Jul 11 15:14:56 52 18a7 00000000000003e9 0000000000000000 - 1610
Jul 11 15:14:57 22 18b5 000000000000031d 0000000000000000 - 1636
Jul 11 15:14:58 20 b8a 0000000000000000 0000000000000000 -
Jul 11 15:14:58 52 b8a 00000000000003e9 0000000000000000 - 838
Jul 11 15:14:59 22 1085 0000000000000357 0000000000000000 - 217
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15:00 22 1072 000000000000037e 0000000000000000 - 204
Jul 11 15:15:01 20 16c5 0000000000000000 0000000000000000 -
Jul 11 15:15:01 52 16c5 00000000000003e9 0000000000000000 - 1712
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
>>> ZEO.scripts.cache_stats.main('-h cache.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 291 19 609 35.6%
Jul 11 13:00-14 818 295 36 605 36.1%
Jul 11 13:15-29 818 277 31 623 33.9%
Jul 11 13:30-44 819 276 29 624 33.7%
Jul 11 13:45-59 818 251 25 649 30.7%
Jul 11 14:00-14 818 295 27 605 36.1%
Jul 11 14:15-29 818 262 33 638 32.0%
Jul 11 14:30-44 818 297 32 603 36.3%
Jul 11 14:45-59 819 268 23 632 32.7%
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
Histogram of object load frequency
Unique oids: 4,585
Total loads: 10,000
loads objects %obj %load %cum
1 1,645 35.9% 16.4% 16.4%
2 1,465 32.0% 29.3% 45.8%
3 809 17.6% 24.3% 70.0%
4 430 9.4% 17.2% 87.2%
5 167 3.6% 8.3% 95.6%
6 49 1.1% 2.9% 98.5%
7 12 0.3% 0.8% 99.3%
8 7 0.2% 0.6% 99.9%
9 1 0.0% 0.1% 100.0%
>>> ZEO.scripts.cache_stats.main('-s cache.trace'.split())
... # doctest: +ELLIPSIS
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 291 19 609 35.6%
Jul 11 13:00-14 818 295 36 605 36.1%
Jul 11 13:15-29 818 277 31 623 33.9%
Jul 11 13:30-44 819 276 29 624 33.7%
Jul 11 13:45-59 818 251 25 649 30.7%
Jul 11 14:00-14 818 295 27 605 36.1%
Jul 11 14:15-29 818 262 33 638 32.0%
Jul 11 14:30-44 818 297 32 603 36.3%
Jul 11 14:45-59 819 268 23 632 32.7%
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
Histograms of object sizes
Unique sizes written: 1,782
size objs writes
200 5 5
201 4 4
202 4 4
203 1 1
204 1 1
205 6 6
206 8 8
...
1,995 1 2
1,996 2 2
1,997 1 1
1,998 2 2
1,999 2 4
2,000 1 1
>>> ZEO.scripts.cache_stats.main('-S cache.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 291 19 609 35.6%
Jul 11 13:00-14 818 295 36 605 36.1%
Jul 11 13:15-29 818 277 31 623 33.9%
Jul 11 13:30-44 819 276 29 624 33.7%
Jul 11 13:45-59 818 251 25 649 30.7%
Jul 11 14:00-14 818 295 27 605 36.1%
Jul 11 14:15-29 818 262 33 638 32.0%
Jul 11 14:30-44 818 297 32 603 36.3%
Jul 11 14:45-59 819 268 23 632 32.7%
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15-15 2 1 0 1 50.0%
>>> ZEO.scripts.cache_stats.main('-X cache.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 291 19 609 35.6%
Jul 11 13:00-14 818 295 36 605 36.1%
Jul 11 13:15-29 818 277 31 623 33.9%
Jul 11 13:30-44 819 276 29 624 33.7%
Jul 11 13:45-59 818 251 25 649 30.7%
Jul 11 14:00-14 818 295 27 605 36.1%
Jul 11 14:15-29 818 262 33 638 32.0%
Jul 11 14:30-44 818 297 32 603 36.3%
Jul 11 14:45-59 819 268 23 632 32.7%
Jul 11 15:00-14 818 291 30 609 35.6%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
>>> ZEO.scripts.cache_stats.main('-i 5 cache.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-19 272 19 2 281 7.0%
Jul 11 12:20-24 273 35 5 265 12.8%
Jul 11 12:25-29 273 53 2 247 19.4%
Jul 11 12:30-34 272 60 8 240 22.1%
Jul 11 12:35-39 273 68 6 232 24.9%
Jul 11 12:40-44 273 85 8 215 31.1%
Jul 11 12:45-49 273 84 6 216 30.8%
Jul 11 12:50-54 272 104 9 196 38.2%
Jul 11 12:55-59 273 103 4 197 37.7%
Jul 11 13:00-04 273 92 12 208 33.7%
Jul 11 13:05-09 273 103 8 197 37.7%
Jul 11 13:10-14 272 100 16 200 36.8%
Jul 11 13:15-19 273 91 11 209 33.3%
Jul 11 13:20-24 273 96 9 204 35.2%
Jul 11 13:25-29 272 90 11 210 33.1%
Jul 11 13:30-34 273 82 14 218 30.0%
Jul 11 13:35-39 273 102 9 198 37.4%
Jul 11 13:40-44 273 92 6 208 33.7%
Jul 11 13:45-49 272 82 6 218 30.1%
Jul 11 13:50-54 273 83 8 217 30.4%
Jul 11 13:55-59 273 86 11 214 31.5%
Jul 11 14:00-04 273 95 11 205 34.8%
Jul 11 14:05-09 272 91 10 209 33.5%
Jul 11 14:10-14 273 109 6 191 39.9%
Jul 11 14:15-19 273 89 9 211 32.6%
Jul 11 14:20-24 272 84 16 216 30.9%
Jul 11 14:25-29 273 89 8 211 32.6%
Jul 11 14:30-34 273 97 12 203 35.5%
Jul 11 14:35-39 273 93 10 207 34.1%
Jul 11 14:40-44 272 107 10 193 39.3%
Jul 11 14:45-49 273 80 8 220 29.3%
Jul 11 14:50-54 273 100 8 200 36.6%
Jul 11 14:55-59 273 88 7 212 32.2%
Jul 11 15:00-04 272 99 8 201 36.4%
Jul 11 15:05-09 273 95 11 205 34.8%
Jul 11 15:10-14 273 97 11 203 35.5%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 18,876 trace records (641,776 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (58.3%), average size 1108 bytes
Hit rate: 31.2% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
682 10 invalidate (miss)
318 1c invalidate (hit, saving non-current)
6,875 20 load (miss)
3,125 22 load (hit)
7,875 52 store (current, non-version)
>>> ZEO.scripts.cache_simul.main('-s 2 -i 5 cache.trace'.split())
CircularCacheSimulation, cache size 2,097,152 bytes
START TIME DUR. LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 11 12:11 3:17 180 1 2 197 0.6% 0 10.7
Jul 11 12:15 4:59 272 19 2 281 7.0% 0 26.4
Jul 11 12:20 4:59 273 35 5 265 12.8% 0 40.4
Jul 11 12:25 4:59 273 53 2 247 19.4% 0 54.8
Jul 11 12:30 4:59 272 60 8 240 22.1% 0 67.1
Jul 11 12:35 4:59 273 68 6 232 24.9% 0 79.8
Jul 11 12:40 4:59 273 85 8 215 31.1% 0 91.4
Jul 11 12:45 4:59 273 84 6 216 30.8% 77 99.1
Jul 11 12:50 4:59 272 104 9 196 38.2% 196 98.9
Jul 11 12:55 4:59 273 104 4 196 38.1% 188 99.1
Jul 11 13:00 4:59 273 92 12 208 33.7% 213 99.3
Jul 11 13:05 4:59 273 103 8 197 37.7% 190 99.0
Jul 11 13:10 4:59 272 100 16 200 36.8% 203 99.2
Jul 11 13:15 4:59 273 91 11 209 33.3% 222 98.7
Jul 11 13:20 4:59 273 96 9 204 35.2% 210 99.2
Jul 11 13:25 4:59 272 89 11 211 32.7% 212 99.1
Jul 11 13:30 4:59 273 82 14 218 30.0% 220 99.1
Jul 11 13:35 4:59 273 101 9 199 37.0% 191 99.5
Jul 11 13:40 4:59 273 92 6 208 33.7% 214 99.4
Jul 11 13:45 4:59 272 80 6 220 29.4% 217 99.3
Jul 11 13:50 4:59 273 81 8 219 29.7% 214 99.2
Jul 11 13:55 4:59 273 86 11 214 31.5% 208 98.8
Jul 11 14:00 4:59 273 95 11 205 34.8% 188 99.3
Jul 11 14:05 4:59 272 93 10 207 34.2% 207 99.3
Jul 11 14:10 4:59 273 110 6 190 40.3% 198 98.8
Jul 11 14:15 4:59 273 91 9 209 33.3% 209 99.1
Jul 11 14:20 4:59 272 85 16 215 31.2% 210 99.3
Jul 11 14:25 4:59 273 89 8 211 32.6% 226 99.3
Jul 11 14:30 4:59 273 96 12 204 35.2% 214 99.3
Jul 11 14:35 4:59 273 90 10 210 33.0% 213 99.3
Jul 11 14:40 4:59 272 106 10 194 39.0% 196 98.8
Jul 11 14:45 4:59 273 80 8 220 29.3% 230 99.0
Jul 11 14:50 4:59 273 99 8 201 36.3% 202 99.0
Jul 11 14:55 4:59 273 87 8 213 31.9% 205 99.4
Jul 11 15:00 4:59 272 98 8 202 36.0% 211 99.3
Jul 11 15:05 4:59 273 93 11 207 34.1% 198 99.2
Jul 11 15:10 4:59 273 96 11 204 35.2% 184 99.2
Jul 11 15:15 1 2 1 0 1 50.0% 1 99.2
--------------------------------------------------------------------------
Jul 11 12:45 2:30:01 8184 2794 286 6208 34.1% 6067 99.2
>>> cache_run('cache4', 4)
>>> ZEO.scripts.cache_stats.main('cache4.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 213 22 687 26.0%
Jul 11 12:45-59 818 322 23 578 39.4%
Jul 11 13:00-14 818 381 43 519 46.6%
Jul 11 13:15-29 818 450 44 450 55.0%
Jul 11 13:30-44 819 503 47 397 61.4%
Jul 11 13:45-59 818 496 49 404 60.6%
Jul 11 14:00-14 818 516 48 384 63.1%
Jul 11 14:15-29 818 532 59 368 65.0%
Jul 11 14:30-44 818 516 51 384 63.1%
Jul 11 14:45-59 819 529 53 371 64.6%
Jul 11 15:00-14 818 515 49 385 63.0%
Jul 11 15:15-15 2 2 0 0 100.0%
Read 16,918 trace records (575,204 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (65.0%), average size 1104 bytes
Hit rate: 50.8% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
501 10 invalidate (miss)
499 1c invalidate (hit, saving non-current)
4,917 20 load (miss)
5,083 22 load (hit)
5,917 52 store (current, non-version)
>>> ZEO.scripts.cache_simul.main('-s 4 cache.trace'.split())
CircularCacheSimulation, cache size 4,194,304 bytes
START TIME DUR. LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 11 12:11 3:17 180 1 2 197 0.6% 0 5.4
Jul 11 12:15 14:59 818 107 9 793 13.1% 0 27.4
Jul 11 12:30 14:59 818 213 22 687 26.0% 0 45.7
Jul 11 12:45 14:59 818 322 23 578 39.4% 0 61.4
Jul 11 13:00 14:59 818 381 43 519 46.6% 0 75.8
Jul 11 13:15 14:59 818 450 44 450 55.0% 0 88.2
Jul 11 13:30 14:59 819 503 47 397 61.4% 36 98.2
Jul 11 13:45 14:59 818 496 49 404 60.6% 388 98.5
Jul 11 14:00 14:59 818 515 48 385 63.0% 376 98.3
Jul 11 14:15 14:59 818 529 58 371 64.7% 391 98.1
Jul 11 14:30 14:59 818 511 51 389 62.5% 376 98.5
Jul 11 14:45 14:59 819 529 53 371 64.6% 410 97.9
Jul 11 15:00 14:59 818 512 49 388 62.6% 379 97.7
Jul 11 15:15 1 2 2 0 0 100.0% 0 97.7
--------------------------------------------------------------------------
Jul 11 13:30 1:45:01 5730 3597 355 2705 62.8% 2356 97.7
>>> cache_run('cache1', 1)
>>> ZEO.scripts.cache_stats.main('cache1.trace'.split())
loads hits inv(h) writes hitrate
Jul 11 12:11-11 0 0 0 0 n/a
Jul 11 12:11:41 ==================== Restart ====================
Jul 11 12:11-14 180 1 2 197 0.6%
Jul 11 12:15-29 818 107 9 793 13.1%
Jul 11 12:30-44 818 160 16 740 19.6%
Jul 11 12:45-59 818 158 8 742 19.3%
Jul 11 13:00-14 818 141 21 759 17.2%
Jul 11 13:15-29 818 128 17 772 15.6%
Jul 11 13:30-44 819 151 13 749 18.4%
Jul 11 13:45-59 818 120 17 780 14.7%
Jul 11 14:00-14 818 159 17 741 19.4%
Jul 11 14:15-29 818 141 13 759 17.2%
Jul 11 14:30-44 818 157 16 743 19.2%
Jul 11 14:45-59 819 133 13 767 16.2%
Jul 11 15:00-14 818 158 10 742 19.3%
Jul 11 15:15-15 2 1 0 1 50.0%
Read 20,286 trace records (689,716 bytes) in 0.0 seconds
Versions: 0 records used a version
First time: Sun Jul 11 12:11:41 2010
Last time: Sun Jul 11 15:15:01 2010
Duration: 11,000 seconds
Data recs: 11,000 (54.2%), average size 1105 bytes
Hit rate: 17.1% (load hits / loads)
Count Code Function (action)
1 00 _setup_trace (initialization)
828 10 invalidate (miss)
172 1c invalidate (hit, saving non-current)
8,285 20 load (miss)
1,715 22 load (hit)
9,285 52 store (current, non-version)
>>> ZEO.scripts.cache_simul.main('-s 1 cache.trace'.split())
CircularCacheSimulation, cache size 1,048,576 bytes
START TIME DUR. LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 11 12:11 3:17 180 1 2 197 0.6% 0 21.5
Jul 11 12:15 14:59 818 107 9 793 13.1% 96 99.6
Jul 11 12:30 14:59 818 160 16 740 19.6% 724 99.6
Jul 11 12:45 14:59 818 158 8 742 19.3% 741 99.2
Jul 11 13:00 14:59 818 140 21 760 17.1% 771 99.5
Jul 11 13:15 14:59 818 125 17 775 15.3% 781 99.6
Jul 11 13:30 14:59 819 147 13 753 17.9% 748 99.5
Jul 11 13:45 14:59 818 120 17 780 14.7% 763 99.5
Jul 11 14:00 14:59 818 159 17 741 19.4% 728 99.4
Jul 11 14:15 14:59 818 141 13 759 17.2% 787 99.6
Jul 11 14:30 14:59 818 150 15 750 18.3% 755 99.2
Jul 11 14:45 14:59 819 132 13 768 16.1% 771 99.5
Jul 11 15:00 14:59 818 154 10 746 18.8% 723 99.2
Jul 11 15:15 1 2 1 0 1 50.0% 0 99.3
--------------------------------------------------------------------------
Jul 11 12:15 3:00:01 9820 1694 169 9108 17.3% 8388 99.3
Cleanup:
>>> del os.environ["ZEO_CACHE_TRACE"]
>>> time.time = timetime
>>> ZEO.scripts.cache_stats.ctime = time.ctime
>>> ZEO.scripts.cache_simul.ctime = time.ctime
"""
def cache_simul_properly_handles_load_miss_after_eviction_and_inval():
r"""
Set up an oid that is evicted and then invalidated
>>> os.environ["ZEO_CACHE_TRACE"] = 'yes'
>>> cache = ZEO.cache.ClientCache('cache', 1<<21)
>>> cache.store(p64(1), p64(1), None, b'x')
>>> for i in range(10):
... cache.store(p64(2+i), p64(1), None, b'x'*(1<<19)) # Evict 1
>>> cache.store(p64(1), p64(1), None, b'x')
>>> cache.invalidate(p64(1), p64(2))
>>> cache.load(p64(1))
>>> cache.close()
Now try to do simulation:
>>> import ZEO.scripts.cache_simul
>>> ZEO.scripts.cache_simul.main('-s 1 cache.trace'.split())
... # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
CircularCacheSimulation, cache size 1,048,576 bytes
START TIME DUR. LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
... 1 0 1 12 0.0% 10 50.0
--------------------------------------------------------------------------
... 1 0 1 12 0.0% 10 50.0
>>> del os.environ["ZEO_CACHE_TRACE"]
"""
def invalidations_with_current_tid_dont_wreck_cache():
"""
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> cache.store(p64(1), p64(1), None, b'data')
>>> import logging, sys
>>> handler = logging.StreamHandler(sys.stdout)
>>> logging.getLogger().addHandler(handler)
>>> old_level = logging.getLogger().getEffectiveLevel()
>>> logging.getLogger().setLevel(logging.WARNING)
>>> cache.invalidate(p64(1), p64(1))
Ignoring invalidation with same tid as current
>>> cache.close()
>>> cache = ZEO.cache.ClientCache('cache', 1000)
>>> cache.close()
>>> logging.getLogger().removeHandler(handler)
>>> logging.getLogger().setLevel(old_level)
"""
def rename_bad_cache_file():
"""
An attempt to open a bad cache file will cause it to be dropped and recreated.
>>> with open('cache', 'w') as f:
... _ = f.write('x'*100)
>>> import logging, sys
>>> handler = logging.StreamHandler(sys.stdout)
>>> logging.getLogger().addHandler(handler)
>>> old_level = logging.getLogger().getEffectiveLevel()
>>> logging.getLogger().setLevel(logging.WARNING)
>>> cache = ZEO.cache.ClientCache('cache', 1000) # doctest: +ELLIPSIS
Moving bad cache file to 'cache.bad'.
Traceback (most recent call last):
...
ValueError: unexpected magic number: 'xxxx'
>>> cache.store(p64(1), p64(1), None, b'data')
>>> cache.close()
>>> with open('cache') as f:
... _ = f.seek(0, 2)
... print(f.tell())
1000
>>> with open('cache', 'w') as f:
... _ = f.write('x'*200)
>>> cache = ZEO.cache.ClientCache('cache', 1000) # doctest: +ELLIPSIS
Removing bad cache file: 'cache' (prev bad exists).
Traceback (most recent call last):
...
ValueError: unexpected magic number: 'xxxx'
>>> cache.store(p64(1), p64(1), None, b'data')
>>> cache.close()
>>> with open('cache') as f:
... _ = f.seek(0, 2)
... print(f.tell())
1000
>>> with open('cache.bad') as f:
... _ = f.seek(0, 2)
... print(f.tell())
100
>>> logging.getLogger().removeHandler(handler)
>>> logging.getLogger().setLevel(old_level)
"""
def test_suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(CacheTests))
suite.addTest(
doctest.DocTestSuite(
setUp=zope.testing.setupstack.setUpDirectory,
tearDown=zope.testing.setupstack.tearDown,
checker=ZODB.tests.util.checker + \
zope.testing.renormalizing.RENormalizing([
(re.compile(r'31\.3%'), '31.2%'),
]),
)
)
return suite
ZEO-5.0.0a0/src/ZEO/tests/test_client_side_conflict_resolution.py
import unittest
import zope.testing.setupstack
from BTrees.Length import Length
from ZODB import serialize
from ZODB.DemoStorage import DemoStorage
from ZODB.utils import p64, z64, maxtid
from ZODB.broken import find_global
import ZEO
from .utils import StorageServer
class Var(object):
def __eq__(self, other):
self.value = other
return True
class ClientSideConflictResolutionTests(zope.testing.setupstack.TestCase):
def test_server_side(self):
# First, verify default conflict resolution.
server = StorageServer(self, DemoStorage())
zs = server.zs
reader = serialize.ObjectReader(
factory=lambda conn, *args: find_global(*args))
writer = serialize.ObjectWriter()
ob = Length(0)
ob._p_oid = z64
# 2 non-conflicting transactions:
zs.tpc_begin(1, '', '', {})
zs.storea(ob._p_oid, z64, writer.serialize(ob), 1)
self.assertEqual(zs.vote(1), [])
tid1 = server.unpack_result(zs.tpc_finish(1))
server.assert_calls(self, ('info', {'length': 1, 'size': Var()}))
ob.change(1)
zs.tpc_begin(2, '', '', {})
zs.storea(ob._p_oid, tid1, writer.serialize(ob), 2)
self.assertEqual(zs.vote(2), [])
tid2 = server.unpack_result(zs.tpc_finish(2))
server.assert_calls(self, ('info', {'size': Var(), 'length': 1}))
# Now, a conflicting one:
zs.tpc_begin(3, '', '', {})
zs.storea(ob._p_oid, tid1, writer.serialize(ob), 3)
# Vote returns the object id, indicating that a conflict was resolved.
self.assertEqual(zs.vote(3), [ob._p_oid])
tid3 = server.unpack_result(zs.tpc_finish(3))
p, serial, next_serial = zs.loadBefore(ob._p_oid, maxtid)
self.assertEqual((serial, next_serial), (tid3, None))
self.assertEqual(reader.getClassName(p), 'BTrees.Length.Length')
self.assertEqual(reader.getState(p), 2)
# Now, we'll create a server that expects the client to
# resolve conflicts:
server = StorageServer(
self, DemoStorage(), client_conflict_resolution=True)
zs = server.zs
# 2 non-conflicting transactions:
zs.tpc_begin(1, '', '', {})
zs.storea(ob._p_oid, z64, writer.serialize(ob), 1)
self.assertEqual(zs.vote(1), [])
tid1 = server.unpack_result(zs.tpc_finish(1))
server.assert_calls(self, ('info', {'size': Var(), 'length': 1}))
ob.change(1)
zs.tpc_begin(2, '', '', {})
zs.storea(ob._p_oid, tid1, writer.serialize(ob), 2)
self.assertEqual(zs.vote(2), [])
tid2 = server.unpack_result(zs.tpc_finish(2))
server.assert_calls(self, ('info', {'length': 1, 'size': Var()}))
# Now, a conflicting one:
zs.tpc_begin(3, '', '', {})
zs.storea(ob._p_oid, tid1, writer.serialize(ob), 3)
# Vote returns an object, indicating that a conflict was not resolved.
self.assertEqual(
zs.vote(3),
[dict(oid=ob._p_oid,
serials=(tid2, tid1),
data=writer.serialize(ob),
)],
)
# Now, it's up to the client to resolve the conflict. It can
# do this by making another store call. In this call, we use
# tid2 as the starting tid:
ob.change(1)
zs.storea(ob._p_oid, tid2, writer.serialize(ob), 3)
self.assertEqual(zs.vote(3), [])
tid3 = server.unpack_result(zs.tpc_finish(3))
server.assert_calls(self, ('info', {'size': Var(), 'length': 1}))
p, serial, next_serial = zs.loadBefore(ob._p_oid, maxtid)
self.assertEqual((serial, next_serial), (tid3, None))
self.assertEqual(reader.getClassName(p), 'BTrees.Length.Length')
self.assertEqual(reader.getState(p), 3)
def test_client_side(self):
# First, traditional:
addr, stop = ZEO.server('data.fs')
db = ZEO.DB(addr)
with db.transaction() as conn:
conn.root.l = Length(0)
conn2 = db.open()
conn2.root.l.change(1)
with db.transaction() as conn:
conn.root.l.change(1)
conn2.transaction_manager.commit()
self.assertEqual(conn2.root.l.value, 2)
db.close(); stop()
# Now, do conflict resolution on the client.
addr2, stop = ZEO.server(
storage_conf='<mappingstorage>\n</mappingstorage>',
zeo_conf=dict(client_conflict_resolution=True),
)
db = ZEO.DB(addr2)
with db.transaction() as conn:
conn.root.l = Length(0)
conn2 = db.open()
conn2.root.l.change(1)
with db.transaction() as conn:
conn.root.l.change(1)
self.assertEqual(conn2.root.l.value, 1)
conn2.transaction_manager.commit()
self.assertEqual(conn2.root.l.value, 2)
db.close(); stop()
def test_suite():
return unittest.makeSuite(ClientSideConflictResolutionTests)
ZEO-5.0.0a0/src/ZEO/tests/testssl.py
from .._compat import PY3
import mock
import os
import ssl
import unittest
from ZODB.config import storageFromString
from ..Exceptions import ClientDisconnected
from .. import runzeo
from .testConfig import ZEOConfigTestBase
here = os.path.dirname(__file__)
server_cert = os.path.join(here, 'server.pem')
server_key = os.path.join(here, 'server_key.pem')
serverpw_cert = os.path.join(here, 'serverpw.pem')
serverpw_key = os.path.join(here, 'serverpw_key.pem')
client_cert = os.path.join(here, 'client.pem')
client_key = os.path.join(here, 'client_key.pem')
class SSLConfigTest(ZEOConfigTestBase):
def test_ssl_basic(self):
# This shows that configuring ssl has an actual effect on connections.
# Other SSL configuration tests will be Mockiavellian.
# Also test that an SSL connection mismatch doesn't kill
# the server loop.
# An SSL client can't talk to a non-SSL server:
addr, stop = self.start_server()
with self.assertRaises(ClientDisconnected):
self.start_client(
addr,
"""
<ssl>
certificate {}
key {}
</ssl>
""".format(client_cert, client_key), wait_timeout=1)
# But a non-ssl one can:
client = self.start_client(addr)
self._client_assertions(client, addr)
client.close()
stop()
# A non-SSL client can't talk to an SSL server:
addr, stop = self.start_server(
"""
<ssl>
certificate {}
key {}
authenticate {}
</ssl>
""".format(server_cert, server_key, client_cert)
)
with self.assertRaises(ClientDisconnected):
self.start_client(addr, wait_timeout=1)
# But an SSL one can:
client = self.start_client(
addr,
"""
<ssl>
certificate {}
key {}
authenticate {}
server-hostname zodb.org
</ssl>
""".format(client_cert, client_key, server_cert))
self._client_assertions(client, addr)
client.close()
stop()
def test_ssl_hostname_check(self):
addr, stop = self.start_server(
"""
<ssl>
certificate {}
key {}
authenticate {}
</ssl>
""".format(server_cert, server_key, client_cert)
)
# Connecting with a bad hostname fails:
with self.assertRaises(ClientDisconnected):
client = self.start_client(
addr,
"""
<ssl>
certificate {}
key {}
authenticate {}
server-hostname example.org
</ssl>
""".format(client_cert, client_key, server_cert),
wait_timeout=1)
# Connecting with a good hostname succeeds:
client = self.start_client(
addr,
"""
<ssl>
certificate {}
key {}
authenticate {}
server-hostname zodb.org
</ssl>
""".format(client_cert, client_key, server_cert))
self._client_assertions(client, addr)
client.close()
stop()
def test_ssl_pw(self):
addr, stop = self.start_server(
"""
<ssl>
certificate {}
key {}
authenticate {}
password-function ZEO.tests.testssl.pwfunc
</ssl>
""".format(serverpw_cert, serverpw_key, client_cert)
)
stop()
@mock.patch(('asyncio' if PY3 else 'trollius') + '.async')
@mock.patch(('asyncio' if PY3 else 'trollius') + '.set_event_loop')
@mock.patch(('asyncio' if PY3 else 'trollius') + '.new_event_loop')
class SSLConfigTestMockiavellian(ZEOConfigTestBase):
@mock.patch('ssl.create_default_context')
def test_ssl_mockiavellian_server_no_ssl(self, factory, *_):
server = create_server()
self.assertFalse(factory.called)
self.assertEqual(server.acceptor.ssl_context, None)
server.close()
def assert_context(
self, factory, context,
cert=(server_cert, server_key, None),
verify_mode=ssl.CERT_REQUIRED,
check_hostname=False,
cafile=None, capath=None,
):
factory.assert_called_with(
ssl.Purpose.CLIENT_AUTH, cafile=cafile, capath=capath)
context.load_cert_chain.assert_called_with(*cert)
self.assertEqual(context, factory.return_value)
self.assertEqual(context.verify_mode, verify_mode)
self.assertEqual(context.check_hostname, check_hostname)
@mock.patch('ssl.create_default_context')
def test_ssl_mockiavellian_server_ssl_no_auth(self, factory, *_):
with self.assertRaises(SystemExit):
# auth is required
create_server(certificate=server_cert, key=server_key)
@mock.patch('ssl.create_default_context')
def test_ssl_mockiavellian_server_ssl_auth_file(self, factory, *_):
server = create_server(
certificate=server_cert, key=server_key, authenticate=__file__)
context = server.acceptor.ssl_context
self.assert_context(factory, context, cafile=__file__)
server.close()
@mock.patch('ssl.create_default_context')
def test_ssl_mockiavellian_server_ssl_auth_dir(self, factory, *_):
server = create_server(
certificate=server_cert, key=server_key, authenticate=here)
context = server.acceptor.ssl_context
self.assert_context(factory, context, capath=here)
server.close()
@mock.patch('ssl.create_default_context')
def test_ssl_mockiavellian_server_ssl_pw(self, factory, *_):
server = create_server(
certificate=server_cert,
key=server_key,
password_function='ZEO.tests.testssl.pwfunc',
authenticate=here,
)
context = server.acceptor.ssl_context
self.assert_context(
factory, context, (server_cert, server_key, pwfunc), capath=here)
server.close()
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_no_ssl(self, ClientStorage, factory, *_):
client = ssl_client()
self.assertFalse('ssl' in ClientStorage.call_args[1])
self.assertFalse('ssl_server_hostname' in ClientStorage.call_args[1])
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_server_signed(
self, ClientStorage, factory, *_
):
client = ssl_client(certificate=client_cert, key=client_key)
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
None)
self.assert_context(
factory, context, (client_cert, client_key, None),
check_hostname=True)
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_auth_dir(
self, ClientStorage, factory, *_
):
client = ssl_client(
certificate=client_cert, key=client_key, authenticate=here)
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
None)
self.assert_context(
factory, context, (client_cert, client_key, None),
capath=here,
check_hostname=True,
)
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_auth_file(
self, ClientStorage, factory, *_
):
client = ssl_client(
certificate=client_cert, key=client_key, authenticate=server_cert)
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
None)
self.assert_context(
factory, context, (client_cert, client_key, None),
cafile=server_cert,
check_hostname=True,
)
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_pw(
self, ClientStorage, factory, *_
):
client = ssl_client(
certificate=client_cert, key=client_key,
password_function='ZEO.tests.testssl.pwfunc',
authenticate=server_cert)
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
None)
self.assert_context(
factory, context, (client_cert, client_key, pwfunc),
cafile=server_cert,
check_hostname=True,
)
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_server_hostname(
self, ClientStorage, factory, *_
):
client = ssl_client(
certificate=client_cert, key=client_key, authenticate=server_cert,
server_hostname='example.com')
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
'example.com')
self.assert_context(
factory, context, (client_cert, client_key, None),
cafile=server_cert,
check_hostname=True,
)
@mock.patch('ssl.create_default_context')
@mock.patch('ZEO.ClientStorage.ClientStorage')
def test_ssl_mockiavellian_client_check_hostname(
self, ClientStorage, factory, *_
):
client = ssl_client(
certificate=client_cert, key=client_key, authenticate=server_cert,
check_hostname=False)
context = ClientStorage.call_args[1]['ssl']
self.assertEqual(ClientStorage.call_args[1]['ssl_server_hostname'],
None)
self.assert_context(
factory, context, (client_cert, client_key, None),
cafile=server_cert,
check_hostname=False,
)
def args(*a, **kw):
return a, kw
def ssl_conf(**ssl_settings):
if ssl_settings:
ssl_conf = '<ssl>\n' + '\n'.join(
'{} {}'.format(name.replace('_', '-'), value)
for name, value in ssl_settings.items()
) + '\n</ssl>\n'
else:
ssl_conf = ''
return ssl_conf
def ssl_client(**ssl_settings):
return storageFromString(
"""%import ZEO
<clientstorage>
server localhost:0
{}
</clientstorage>
""".format(ssl_conf(**ssl_settings))
)
def create_server(**ssl_settings):
with open('conf', 'w') as f:
f.write(
"""
<zeo>
address localhost:0
{}
</zeo>
<mappingstorage>
</mappingstorage>
""".format(ssl_conf(**ssl_settings)))
options = runzeo.ZEOOptions()
options.realize(['-C', 'conf'])
s = runzeo.ZEOServer(options)
s.open_storages()
s.create_server()
return s.server
pwfunc = lambda : '1234'
def test_suite():
return unittest.TestSuite((
unittest.makeSuite(SSLConfigTest),
unittest.makeSuite(SSLConfigTestMockiavellian),
))
# Helpers for other tests:
server_config = """
<zeo>
address 127.0.0.1:0
<ssl>
certificate {}
key {}
authenticate {}
</ssl>
</zeo>
""".format(server_cert, server_key, client_cert)
def client_ssl():
context = ssl.create_default_context(
ssl.Purpose.CLIENT_AUTH, cafile=server_cert)
context.load_cert_chain(client_cert, client_key)
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = False
return context
ZEO-5.0.0a0/src/ZEO/tests/utils.py
"""Testing helpers
"""
import ZEO.StorageServer
from ..asyncio.server import best_protocol_version
class ServerProtocol:
method = ('register', )
def __init__(self, zs,
protocol_version=best_protocol_version,
addr='test-address'):
self.calls = []
self.addr = addr
self.zs = zs
self.protocol_version = protocol_version
zs.notify_connected(self)
closed = False
def close(self):
if not self.closed:
self.closed = True
self.zs.notify_disconnected()
def call_soon_threadsafe(self, func, *args):
func(*args)
def async(self, *args):
self.calls.append(args)
class StorageServer:
"""Create a client interface to a StorageServer.
This is for testing StorageServer. It interacts with the storage
server through its network interface, but without creating a
network connection.
"""
def __init__(self, test, storage,
protocol_version=best_protocol_version,
**kw):
self.test = test
self.storage_server = ZEO.StorageServer.StorageServer(
None, {'1': storage}, **kw)
self.zs = self.storage_server.create_client_handler()
self.protocol = ServerProtocol(self.zs,
protocol_version=protocol_version)
self.zs.register('1', kw.get('read_only', False))
def assert_calls(self, test, *argss):
if argss:
for args in argss:
test.assertEqual(self.protocol.calls.pop(0), args)
else:
test.assertEqual(self.protocol.calls, ())
def unpack_result(self, result):
"""For methods that return Result objects, unwrap the results
"""
result, callback = result.args
callback()
return result
ZEO-5.0.0a0/src/ZEO/tests/zdoptions.test
Minimal test of Server Options Handling
=======================================
This is initially motivated by a desire to remove the requirement of
specifying a storage name when there is only one storage.
Storage Names
-------------
It is an error not to specify any storages:
>>> import sys, ZEO.runzeo
>>> try:
... from StringIO import StringIO
... except ImportError:
... from io import StringIO
>>> stderr = sys.stderr
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... """)
>>> sys.stderr = StringIO()
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
Traceback (most recent call last):
...
SystemExit: 2
>>> print(sys.stderr.getvalue()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Error: not enough values for section type 'zodb.storage';
0 found, 1 required
...
But we can specify a storage without a name:
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... <mappingstorage>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
>>> [storage.name for storage in options.storages]
['1']
We can't have multiple unnamed storages:
>>> sys.stderr = StringIO()
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... <mappingstorage>
... </mappingstorage>
... <mappingstorage>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
Traceback (most recent call last):
...
SystemExit: 2
>>> print(sys.stderr.getvalue()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Error: No more than one storage may be unnamed.
...
Or an unnamed storage and one named '1':
>>> sys.stderr = StringIO()
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... <mappingstorage>
... </mappingstorage>
... <mappingstorage 1>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
Traceback (most recent call last):
...
SystemExit: 2
>>> print(sys.stderr.getvalue()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Error: Can't have an unnamed storage and a storage named 1.
...
But we can have multiple storages:
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... <mappingstorage x>
... </mappingstorage>
... <mappingstorage y>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
>>> [storage.name for storage in options.storages]
['x', 'y']
As long as the names are unique:
>>> sys.stderr = StringIO()
>>> with open('config', 'w') as f:
... _ = f.write("""
... <zeo>
...    address 8100
... </zeo>
... <mappingstorage 1>
... </mappingstorage>
... <mappingstorage 1>
... </mappingstorage>
... """)
>>> options = ZEO.runzeo.ZEOOptions()
>>> options.realize('-C config'.split())
Traceback (most recent call last):
...
SystemExit: 2
>>> print(sys.stderr.getvalue()) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
Error: section names must not be re-used within the same container:'1'
...
.. Cleanup =====================================================
>>> sys.stderr = stderr
ZEO-5.0.0a0/src/ZEO/tests/zeo-fan-out.test
ZEO Fan Out
===========
We should be able to set up ZEO servers with ZEO clients. Let's see
if we can make it work.
We'll use some helper functions. The first is a helper that starts
ZEO servers for us and another one that picks ports.
We'll start the first server:
>>> (_, port0), adminaddr0 = start_server(
... '<filestorage>\npath fs\nblob-dir blobs\n</filestorage>\n', keep=1)
Then we'll start 2 others that use this one:
>>> addr1, _ = start_server(
... '<zeoclient>\nserver %s\nblob-dir b1\n</zeoclient>\n' % port0)
>>> addr2, _ = start_server(
... '<zeoclient>\nserver %s\nblob-dir b2\n</zeoclient>\n' % port0)
Now, let's create some client storages that connect to these:
>>> import os, ZEO, ZODB.blob, ZODB.POSException, transaction
>>> db0 = ZEO.DB(port0, blob_dir='cb0')
>>> db1 = ZEO.DB(addr1, blob_dir='cb1')
>>> tm1 = transaction.TransactionManager()
>>> c1 = db1.open(transaction_manager=tm1)
>>> r1 = c1.root()
>>> r1
{}
>>> db2 = ZEO.DB(addr2, blob_dir='cb2')
>>> tm2 = transaction.TransactionManager()
>>> c2 = db2.open(transaction_manager=tm2)
>>> r2 = c2.root()
>>> r2
{}
If we update c1, we'll eventually see the change in c2:
>>> import persistent.mapping
>>> r1[1] = persistent.mapping.PersistentMapping()
>>> r1[1].v = 1000
>>> r1[2] = persistent.mapping.PersistentMapping()
>>> r1[2].v = -1000
>>> r1[3] = ZODB.blob.Blob(b'x'*4111222)
>>> for i in range(1000, 2000):
... r1[i] = persistent.mapping.PersistentMapping()
... r1[i].v = 0
>>> tm1.commit()
>>> blob_id = r1[3]._p_oid, r1[1]._p_serial
>>> import time
>>> for i in range(100):
... t = tm2.begin()
... if 1 in r2:
... break
... time.sleep(0.01)
>>> tm2.abort()
>>> r2[1].v
1000
>>> r2[2].v
-1000
Now, let's see if we can break it. :)
>>> def f():
... c = db1.open(transaction.TransactionManager())
... r = c.root()
... i = 0
... while i < 100:
... r[1].v -= 1
... r[2].v += 1
... try:
... c.transaction_manager.commit()
... i += 1
... except ZODB.POSException.ConflictError:
... c.transaction_manager.abort()
... c.close()
>>> import threading
>>> threadf = threading.Thread(target=f)
>>> threadg = threading.Thread(target=f)
>>> threadf.start()
>>> threadg.start()
>>> s2 = db2.storage
>>> start_time = time.time()
>>> import os
>>> from ZEO.ClientStorage import _lock_blob
>>> while time.time() - start_time < 999:
... t = tm2.begin()
... if r2[1].v + r2[2].v:
... print('oops', r2[1], r2[2])
... if r2[1].v == 800:
... break # we caught up
... path = s2.fshelper.getBlobFilename(*blob_id)
... if os.path.exists(path):
... ZODB.blob.remove_committed(path)
... _ = s2.fshelper.createPathForOID(blob_id[0])
... blob_lock = _lock_blob(path)
... s2._call('sendBlob', *blob_id, timeout=9999)
... blob_lock.close()
... else: print('Dang')
>>> threadf.join()
>>> threadg.join()
If we shutdown and restart the source server, the variables will be
invalidated:
>>> stop_server(adminaddr0)
>>> _ = start_server('<filestorage>\npath fs\n</filestorage>\n',
... port=port0)
>>> time.sleep(10) # get past startup / verification
>>> for i in range(1000):
... c1.sync()
... c2.sync()
... if (
... (r1[1]._p_changed is None)
... and
... (r1[2]._p_changed is None)
... and
... (r2[1]._p_changed is None)
... and
... (r2[2]._p_changed is None)
... ):
... print('Cool')
... break
... time.sleep(0.01)
... else:
... print('Dang')
Cool
Cleanup:
>>> db0.close()
>>> db1.close()
>>> db2.close()
ZEO-5.0.0a0/src/ZEO/tests/zeo_blob_cache.test
ZEO caching of blob data
========================
ZEO supports 2 modes for providing clients access to blob data:
shared
Blob data are shared via a network file system. The client shares
a common blob directory with the server.
non-shared
Blob data are loaded from the storage server and cached locally.
A maximum size for the blob data can be set and data are removed
when the size is exceeded.
In this test, we'll demonstrate that blob data are removed from a ZEO
cache when the amount of data stored exceeds a given limit.
Let's start by setting up some data:
>>> addr, _ = start_server(blob_dir='server-blobs')
We'll also create a client.
>>> import ZEO
>>> db = ZEO.DB(addr, blob_dir='blobs', blob_cache_size=3000,
...             blob_cache_size_check=10)
Here, we passed a blob_cache_size parameter, which specifies a target
blob cache size. This is not a hard limit, but rather a target. It
defaults to a very large value. We also passed a blob_cache_size_check
option. The blob_cache_size_check option specifies the number of
bytes, as a percent of the target that can be written or downloaded
from the server before the cache size is checked. The
blob_cache_size_check option defaults to 100. We passed 10, to check
after writing 10% of the target size.
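In other words (a small illustration, not ZEO code), the check interval
is the target size times the check percentage:

```python
# With the values used above, the client re-checks the cache after
# roughly every 300 bytes written to or downloaded from the server.
blob_cache_size = 3000        # target cache size, in bytes
blob_cache_size_check = 10    # percent of the target
check_interval = blob_cache_size * blob_cache_size_check // 100
print(check_interval)  # 300
```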
.. We're going to wait for any threads we started to finish, so...
>>> import threading
>>> old_threads = list(threading.enumerate())
We want to check for name collisions in the blob cache dir. We'll try
to provoke name collisions by reducing the number of cache directory
subdirectories.
>>> import ZEO.ClientStorage
>>> orig_blob_cache_layout_size = ZEO.ClientStorage.BlobCacheLayout.size
>>> ZEO.ClientStorage.BlobCacheLayout.size = 11
Now, let's write some data:
>>> import ZODB.blob, transaction, time
>>> conn = db.open()
>>> for i in range(1, 101):
... conn.root()[i] = ZODB.blob.Blob()
... with conn.root()[i].open('w') as f:
... w = f.write((chr(i)*100).encode('ascii'))
>>> transaction.commit()
We've committed 10000 bytes of data, but our target size is 3000. We
expect to have not much more than the target size in the cache blob
directory.
>>> import os
>>> def cache_size(d):
... size = 0
... for base, dirs, files in os.walk(d):
... for f in files:
... if f.endswith('.blob'):
... try:
... size += os.stat(os.path.join(base, f)).st_size
... except OSError:
... if os.path.exists(os.path.join(base, f)):
... raise
... return size
>>> def check():
... return cache_size('blobs') < 5000
>>> def onfail():
... return cache_size('blobs')
>>> from ZEO.tests.forker import wait_until
>>> wait_until("size is reduced", check, 99, onfail)
If we read all of the blobs, data will be downloaded again, as
necessary, but the cache size will remain not much bigger than the
target:
>>> for i in range(1, 101):
... with conn.root()[i].open() as f:
... data = f.read()
... if data != (chr(i)*100).encode('ascii'):
... print('bad data', repr(chr(i)), repr(data))
>>> wait_until("size is reduced", check, 99, onfail)
>>> for i in range(1, 101):
... with conn.root()[i].open() as f:
... data = f.read()
... if data != (chr(i)*100).encode('ascii'):
... print('bad data', repr(chr(i)), repr(data))
>>> for i in range(1, 101):
... with conn.root()[i].open('c') as f:
... data = f.read()
... if data != (chr(i)*100).encode('ascii'):
... print('bad data', repr(chr(i)), repr(data))
>>> wait_until("size is reduced", check, 99, onfail)
Now let's see if we can stress things a bit. We'll create many clients
and get them to pound on the blobs all at once to see if we can
provoke problems:
>>> import threading, random
>>> def run():
... db = ZEO.DB(addr, blob_dir='blobs', blob_cache_size=4000)
... conn = db.open()
... for i in range(300):
... time.sleep(0)
... i = random.randint(1, 100)
... with conn.root()[i].open() as f:
... data = f.read()
... if data != (chr(i)*100).encode('ascii'):
... print('bad data', repr(chr(i)), repr(data))
... i = random.randint(1, 100)
... with conn.root()[i].open('c') as f:
... data = f.read()
... if data != (chr(i)*100).encode('ascii'):
... print('bad data', repr(chr(i)), repr(data))
... db.close()
>>> threads = [threading.Thread(target=run) for i in range(10)]
>>> for thread in threads:
... thread.setDaemon(True)
>>> for thread in threads:
... thread.start()
>>> for thread in threads:
... thread.join(99)
... if thread.isAlive():
... print("Can't join thread.")
>>> wait_until("size is reduced", check, 99, onfail)
.. cleanup
>>> for thread in threading.enumerate():
... if thread not in old_threads:
... thread.join(33)
>>> db.close()
>>> ZEO.ClientStorage.BlobCacheLayout.size = orig_blob_cache_layout_size
ZEO-5.0.0a0/src/ZEO/util.py 0000664 0000000 0000000 00000003506 12737730500 0015155 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Utilities for setting up the server environment."""
import os
def parentdir(p, n=1):
"""Return the ancestor of p from n levels up."""
d = p
while n:
d = os.path.dirname(d)
if not d or d == '.':
d = os.getcwd()
n -= 1
return d
class Environment:
"""Determine location of the Data.fs & ZEO_SERVER.pid files.
Pass the argv[0] used to start ZEO to the constructor.
Use the zeo_pid and fs attributes to get the filenames.
"""
def __init__(self, argv0):
v = os.environ.get("INSTANCE_HOME")
if v is None:
# looking for a Zope/var directory assuming that this code
# is installed in Zope/lib/python/ZEO
p = parentdir(argv0, 4)
if os.path.isdir(os.path.join(p, "var")):
v = p
else:
v = os.getcwd()
self.home = v
self.var = os.path.join(v, "var")
if not os.path.isdir(self.var):
self.var = self.home
pid = os.environ.get("ZEO_SERVER_PID")
if pid is None:
pid = os.path.join(self.var, "ZEO_SERVER.pid")
self.zeo_pid = pid
self.fs = os.path.join(self.var, "Data.fs")
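As a quick illustration of parentdir() above (the path here is
hypothetical), four levels up from a script installed under
Zope/lib/python/ZEO lands at the Zope home:

```python
import os

# Stand-alone copy of the parentdir() helper above, for demonstration.
def parentdir(p, n=1):
    """Return the ancestor of p from n levels up."""
    d = p
    while n:
        d = os.path.dirname(d)
        if not d or d == '.':
            d = os.getcwd()
        n -= 1
    return d

print(parentdir('/opt/zope/lib/python/ZEO/runzeo.py', 4))  # /opt/zope
```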
ZEO-5.0.0a0/src/ZEO/version.txt 0000664 0000000 0000000 00000000010 12737730500 0016037 0 ustar 00root root 0000000 0000000 3.7.0b3
ZEO-5.0.0a0/src/ZEO/zconfig.py 0000664 0000000 0000000 00000005074 12737730500 0015641 0 ustar 00root root 0000000 0000000 """SSL configuration support
"""
import os
import sys
def ssl_config(section, server):
import ssl
cafile = capath = None
auth = section.authenticate
if auth:
if os.path.isdir(auth):
capath = auth
else:
cafile = auth
context = ssl.create_default_context(
ssl.Purpose.CLIENT_AUTH, cafile=cafile, capath=capath)
if section.certificate:
password = section.password_function
if password:
module, name = password.rsplit('.', 1)
module = __import__(module, globals(), locals(), ['*'], 0)
password = getattr(module, name)
context.load_cert_chain(section.certificate, section.key, password)
context.verify_mode = ssl.CERT_REQUIRED
if sys.version_info >= (3, 4):
context.verify_flags |= ssl.VERIFY_X509_STRICT | (
context.cert_store_stats()['crl'] and ssl.VERIFY_CRL_CHECK_LEAF)
if server:
context.check_hostname = False
return context
context.check_hostname = section.check_hostname
return context, section.server_hostname
def server_ssl(section):
return ssl_config(section, True)
def client_ssl(section):
return ssl_config(section, False)
class ClientStorageConfig:
def __init__(self, config):
self.config = config
self.name = config.getSectionName()
def open(self):
from ZEO.ClientStorage import ClientStorage
# config.server is a multikey of socket-connection-address values
# where the value is a socket family, address tuple.
config = self.config
addresses = [server.address for server in config.server]
options = {}
if config.blob_cache_size is not None:
options['blob_cache_size'] = config.blob_cache_size
if config.blob_cache_size_check is not None:
options['blob_cache_size_check'] = config.blob_cache_size_check
if config.client_label is not None:
options['client_label'] = config.client_label
ssl = config.ssl
if ssl:
options['ssl'] = ssl[0]
options['ssl_server_hostname'] = ssl[1]
return ClientStorage(
addresses,
blob_dir=config.blob_dir,
shared_blob_dir=config.shared_blob_dir,
storage=config.storage,
cache_size=config.cache_size,
cache=config.cache_path,
name=config.name,
read_only=config.read_only,
read_only_fallback=config.read_only_fallback,
wait_timeout=config.wait_timeout,
**options)
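Stripped of the ZConfig plumbing, the server-side context that
ssl_config() builds amounts to roughly this sketch (certificate loading
and the CRL-check flag omitted):

```python
import ssl

# Server purpose: validate certificates presented by clients.
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.verify_mode = ssl.CERT_REQUIRED   # clients must present a cert
context.verify_flags |= ssl.VERIFY_X509_STRICT
context.check_hostname = False            # hostname checks are client-side
```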
ZEO-5.0.0a0/src/ZEO/zeoctl.py 0000664 0000000 0000000 00000002002 12737730500 0015470 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
##############################################################################
#
# Copyright (c) 2005 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Wrapper script for zdctl.py that causes it to use the ZEO schema."""
import os
import ZEO
import zdaemon.zdctl
# Main program
def main(args=None):
options = zdaemon.zdctl.ZDCtlOptions()
options.schemadir = os.path.dirname(ZEO.__file__)
options.schemafile = "zeoctl.xml"
zdaemon.zdctl.main(args, options)
if __name__ == "__main__":
main()
ZEO-5.0.0a0/src/ZEO/zeoctl.xml 0000664 0000000 0000000 00000001505 12737730500 0015645 0 ustar 00root root 0000000 0000000
This schema describes the configuration of the ZEO storage server
controller. It differs from the schema for the storage server
only in that the "runner" section is required.
ZEO-5.0.0a0/src/ZEO/zeopasswd.py 0000664 0000000 0000000 00000010425 12737730500 0016215 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2003 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Update a user's authentication tokens for a ZEO server.
usage: python zeopasswd.py [options] username [password]
Specify either a configuration file:
-C/--configuration -- ZConfig configuration file
or the individual options:
-f/--filename -- authentication database filename
-p/--protocol -- authentication protocol name
-r/--realm -- authentication database realm
Additional options:
-d/--delete -- delete user instead of updating password
"""
from __future__ import print_function
import getopt
import getpass
import sys
import os
import ZConfig
import ZEO
def usage(msg):
print(__doc__)
print(msg)
sys.exit(2)
def options(args):
"""Password-specific options loaded from regular ZEO config file."""
try:
opts, args = getopt.getopt(args, "dr:p:f:C:", ["configure=",
"protocol=",
"filename=",
"realm="])
except getopt.error as msg:
usage(msg)
config = None
delete = 0
auth_protocol = None
auth_db = ""
auth_realm = None
for k, v in opts:
if k == '-C' or k == '--configure':
schemafile = os.path.join(os.path.dirname(ZEO.__file__),
"schema.xml")
schema = ZConfig.loadSchema(schemafile)
config, nil = ZConfig.loadConfig(schema, v)
if k == '-d' or k == '--delete':
delete = 1
if k == '-p' or k == '--protocol':
auth_protocol = v
if k == '-f' or k == '--filename':
auth_db = v
if k == '-r' or k == '--realm':
auth_realm = v
if config is not None:
if auth_protocol or auth_db:
usage("Error: Conflicting options; use either -C *or* -p and -f")
auth_protocol = config.zeo.authentication_protocol
auth_db = config.zeo.authentication_database
auth_realm = config.zeo.authentication_realm
elif not (auth_protocol and auth_db):
usage("Error: Must specify configuration file or protocol and database")
password = None
if delete:
if not args:
usage("Error: Must specify a username to delete")
elif len(args) > 1:
usage("Error: Too many arguments")
username = args[0]
else:
if not args:
usage("Error: Must specify a username")
elif len(args) > 2:
usage("Error: Too many arguments")
elif len(args) == 1:
username = args[0]
else:
username, password = args
return auth_protocol, auth_db, auth_realm, delete, username, password
def main(args=None, dbclass=None):
if args is None:
args = sys.argv[1:]
p, auth_db, auth_realm, delete, username, password = options(args)
if p is None:
usage("Error: configuration does not specify auth protocol")
if p == "digest":
from ZEO.auth.auth_digest import DigestDatabase as Database
elif p == "srp":
from ZEO.auth.auth_srp import SRPDatabase as Database
elif dbclass:
# dbclass is used for testing tests.auth_plaintext, see testAuth.py
Database = dbclass
else:
raise ValueError("Unknown database type %r" % p)
if auth_db is None:
usage("Error: configuration does not specify auth database")
db = Database(auth_db, auth_realm)
if delete:
db.del_user(username)
else:
if password is None:
password = getpass.getpass("Enter password: ")
db.add_user(username, password)
db.save()
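The command-line handling in options() above reduces to a getopt call
like the following (the argument values here are made up for
illustration):

```python
import getopt

# Mirrors the option spec used by options(); "realm" takes a value,
# so the long form needs a trailing "=".
argv = ['-p', 'digest', '-f', 'auth.db', 'alice']
opts, args = getopt.getopt(
    argv, "dr:p:f:C:",
    ["configure=", "protocol=", "filename=", "realm="])
print(opts)  # [('-p', 'digest'), ('-f', 'auth.db')]
print(args)  # ['alice']
```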
ZEO-5.0.0a0/tox.ini 0000664 0000000 0000000 00000001143 12737730500 0013710 0 ustar 00root root 0000000 0000000 [tox]
envlist =
py27,py34,py35,simple
[testenv]
commands =
# Run unit tests first.
zope-testrunner -u --test-path=src --auto-color --auto-progress
# Only run functional tests if unit tests pass.
zope-testrunner -f --test-path=src --auto-color --auto-progress
deps =
ZODB >= 4.2.0b1
random2
ZConfig
manuel
persistent >= 4.1.0
transaction
zc.lockfile
zdaemon
zope.interface
zope.testing
zope.testrunner
mock
[testenv:simple]
# Test that 'setup.py test' works
basepython =
python3.4
commands =
python setup.py -q test -q
deps = {[testenv]deps}