Commit 6ca9e449 authored by Stephan Richter

Made sure that all documentation files are ReST compliant. Wow, this is
some good documentation here; I added all those files as chapters to the
Zope 3 apidoc.

parent d19ade63
=======================
Collaboration Diagrams
=======================

This file contains several collaboration diagrams for the ZODB.


Simple fetch, modify, commit
============================

Participants
------------

- ``DB``: ``ZODB.DB.DB``
- ``C``: ``ZODB.Connection.Connection``
- ``S``: ``ZODB.FileStorage.FileStorage``
- ``T``: ``transaction.interfaces.ITransaction``
- ``TM``: ``transaction.interfaces.ITransactionManager``
- ``o1``, ``o2``, ...: pre-existing persistent objects

Scenario
--------

::

    DB.open()
        create C

@@ -50,16 +63,23 @@ Scenario

        # transactions.
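
At the application level, the collaboration sketched above is driven by code of
roughly the following shape (an illustrative sketch only; the storage file name
and the ``modify()`` call are made up)::

    import transaction
    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    db = DB(FileStorage('Data.fs'))   # S wrapped by DB
    conn = db.open()                  # creates C; C joins the current transaction
    o1 = conn.root()['o1']            # fetch: C loads o1's state from S
    o1.modify()                       # hypothetical change; o1 registers itself with C
    transaction.commit()              # TM/T drive the two-phase commit against S
    db.close()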

Simple fetch, modify, abort
===========================

Participants
------------

- ``DB``: ``ZODB.DB.DB``
- ``C``: ``ZODB.Connection.Connection``
- ``S``: ``ZODB.FileStorage.FileStorage``
- ``T``: ``transaction.interfaces.ITransaction``
- ``TM``: ``transaction.interfaces.ITransactionManager``
- ``o1``, ``o2``, ...: pre-existing persistent objects

Scenario
--------

::

    DB.open()
        create C

@@ -91,15 +111,22 @@ Scenario

        # transactions.
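
The abort variant differs only in the final call (again an illustrative
sketch; the attribute change is made up)::

    conn = db.open()
    o1 = conn.root()['o1']
    o1.modify()              # hypothetical change
    transaction.abort()      # discard the change; C invalidates the modified o1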

Rollback of a savepoint
=======================

Participants
------------

- ``T``: ``transaction.interfaces.ITransaction``
- ``o1``, ``o2``, ``o3``: some persistent objects
- ``C1``, ``C2``, ``C3``: resource managers
- ``S1``, ``S2``: Transaction savepoint objects
- ``s11``, ``s21``, ``s22``: resource-manager savepoints

Scenario
--------

::

    create T
    o1.modify()

@@ -140,8 +167,8 @@ Scenario

    o2.invalidate()
    # truncates temporary storage to beginning, because
    # s22 was the first savepoint. (Perhaps connection
    # savepoints record the log position before the
    # data were written, which is 0 in this case.)
    T.commit()
        C1.beforeCompletion(T)
        C2.beforeCompletion(T)

...

=========================
Cross-Database References
=========================

@@ -36,7 +37,7 @@ We'll have a reference to the first object:

>>> tm.commit()

Now, let's open a separate connection to database 2. We use it to
read `p2`, use `p2` to get to `p1`, and verify that it is in database 1:

>>> conn = db2.open()
>>> p2x = conn.root()['p']

@@ -77,8 +78,8 @@ happens. Consider:

>>> p1.p4 = p4
>>> p2.p4 = p4

In this example, the new object is reachable from both `p1` in database
1 and `p2` in database 2. If we commit, which database will `p4` end up
in? This sort of ambiguity can lead to subtle bugs. For that reason,
an error is generated if we commit changes when new objects are
reachable from multiple databases:

@@ -126,7 +127,7 @@ to explicitly say what database an object belongs to:

>>> p1.p5 = p5
>>> p2.p5 = p5
>>> conn1.add(p5)
>>> tm.commit()
>>> p5._p_jar.db().database_name
'1'
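
Putting the pieces together, a multi-database that permits such cross-database
references can be set up as in the following sketch (illustrative only; it uses
``MappingStorage`` and ``PersistentMapping`` rather than the objects from the
surrounding doctest)::

    import transaction
    from ZODB import DB
    from ZODB.MappingStorage import MappingStorage
    from persistent.mapping import PersistentMapping

    databases = {}
    db1 = DB(MappingStorage(), database_name='1', databases=databases)
    db2 = DB(MappingStorage(), database_name='2', databases=databases)

    tm = transaction.TransactionManager()
    conn1 = db1.open(transaction_manager=tm)
    conn2 = conn1.get_connection('2')   # same transaction manager, database '2'

    p1 = PersistentMapping()
    conn1.root()['p'] = p1
    tm.commit()                         # p1 now lives in database '1'

    p2 = PersistentMapping()
    conn2.root()['p'] = p2
    p2['p1'] = p1                       # a cross-database reference
    tm.commit()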

@@ -141,6 +142,7 @@ cross-database references, however, there are a number of facilities

missing:

cross-database garbage collection
    Garbage collection is done on a database by database basis.
    If an object on a database only has references to it from other
    databases, then the object will be garbage collected when its

@@ -148,11 +150,13 @@ cross-database garbage collection

    broken.

cross-database undo
    Undo is only applied to a single database. Fixing this for
    multiple databases is going to be extremely difficult. Undo
    currently poses consistency problems, so it is not (or should not
    be) widely used.

Cross-database aware (tolerant) export/import
    The export/import facility needs to be aware, at least, of cross-database
    references.

==================
Persistent Classes
==================

@@ -39,7 +40,7 @@ functions to make them picklable.

Also note that we explicitly set the module. Persistent classes don't
live in normal Python modules. Rather, they live in the database. We
use information in ``__module__`` to record where in the database. When
we want to use a database, we will need to supply a custom class
factory to load instances of the class.

@@ -189,7 +190,7 @@ database:

NOTE: If a non-persistent instance of a persistent class is copied,
the class may be copied as well. This is usually not the desired
result.


Persistent instances of persistent classes

@@ -228,10 +229,10 @@ Now, if we try to load it, we get a broken object:

>>> connection2.root()['obs']['p']
<persistent broken __zodb__.P instance '\x00\x00\x00\x00\x00\x00\x00\x04'>

because the module, `__zodb__` can't be loaded. We need to provide a
class factory that knows about this special module. Here we'll supply a
sample class factory that looks up a class name in the database root
if the module is `__zodb__`. It falls back to the normal class lookup
for other modules:

>>> from ZODB.broken import find_global

@@ -242,7 +243,7 @@ for other modules:

>>> some_database.classFactory = classFactory
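
The factory definition itself falls in the lines elided above; as a rough
sketch of the approach just described (not necessarily the file's exact code),
such a factory can look like::

    def classFactory(connection, modulename, globalname):
        # Classes stored in the database live under the special module
        # name '__zodb__'; look those up in the database root.
        if modulename == '__zodb__':
            return connection.root()[globalname]
        # Fall back to the normal import-based lookup for real modules.
        return find_global(modulename, globalname)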

Normally, the classFactory should be set before a database is opened.
We'll reopen the connections we're using. We'll assign the old
connections to a variable first to prevent getting them from the
connection pool:

@@ -250,7 +251,7 @@ connection pool:

>>> old = connection, connection2
>>> connection = some_database.open(transaction_manager=tm)
>>> connection2 = some_database.open(transaction_manager=tm2)

Now, we can read the object:

>>> connection2.root()['obs']['p'].color

...

@@ -8,44 +8,44 @@ subtransactions. When a transaction is committed, a flag is passed

indicating whether it is a subtransaction or a top-level transaction.

Consider the following example commit calls; a condensed call sequence for the
first scenario appears after the list:

- ``commit()``

  A regular top-level transaction is committed.

- ``commit(1)``

  A subtransaction is committed. There is now one subtransaction of
  the current top-level transaction.

- ``commit(1)``

  A subtransaction is committed. There are now two subtransactions of
  the current top-level transaction.

- ``abort(1)``

  A subtransaction is aborted. There are still two subtransactions of
  the current top-level transaction; work done since the last
  ``commit(1)`` call is discarded.

- ``commit()``

  We now commit a top-level transaction. The work done in the previous
  two subtransactions *plus* work done since the last ``abort(1)`` call
  is saved.

- ``commit(1)``

  A subtransaction is committed. There is now one subtransaction of
  the current top-level transaction.

- ``commit(1)``

  A subtransaction is committed. There are now two subtransactions of
  the current top-level transaction.

- ``abort()``

  We now abort a top-level transaction. We discard the work done in
  the previous two subtransactions *plus* work done since the last
  ``commit(1)`` call.
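
Condensed into a single call sequence, the first scenario above reads (purely
illustrative; ``txn`` stands for whatever object exposes the subtransaction
``commit``/``abort`` calls)::

    txn.commit(1)   # first subtransaction committed
    txn.commit(1)   # second subtransaction committed
    txn.abort(1)    # discard only the work done since the last commit(1)
    txn.commit()    # top-level commit: both subtransactions plus the work
                    # done since abort(1) are saved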

==================
Multiple Databases
==================

Multi-database support adds the ability to tie multiple databases into a
collection. The original proposal is in the fishbowl:

@@ -25,29 +12,29 @@ by Jim Fulton, Christian Theune, and Tim Peters. Overview:

No private attributes were added, and one new method was introduced.

``DB``:

- a new ``.database_name`` attribute holds the name of this database.

- a new ``.databases`` attribute maps from database name to ``DB`` object; all
  databases in a multi-database collection share the same ``.databases`` object

- the ``DB`` constructor has new optional arguments with the same names
  (``database_name=`` and ``databases=``).

``Connection``:

- a new ``.connections`` attribute maps from database name to a ``Connection``
  for the database with that name; the ``.connections`` mapping object is also
  shared among databases in a collection.

- a new ``.get_connection(database_name)`` method returns a ``Connection`` for
  a database in the collection; if a connection is already open, it's returned
  (this is the value ``.connections[database_name]``), else a new connection
  is opened (and stored as ``.connections[database_name]``)

Creating a multi-database starts with creating a named ``DB``:

>>> from ZODB.tests.test_storage import MinimalMemoryStorage
>>> from ZODB import DB

@@ -69,7 +56,8 @@ Adding another database to the collection works like this:

...     database_name='notroot',
...     databases=dbmap)

The new ``db2`` now shares the ``databases`` dictionary with db and has two
entries:

>>> db2.databases is db.databases is dbmap
True

@@ -87,7 +75,7 @@ It's an error to try to insert a database with a name already in use:

...
ValueError: database_name 'root' already in databases

Because that failed, ``db.databases`` wasn't changed:

>>> len(db.databases) # still 2
2

@@ -127,7 +115,7 @@ Now there are two connections in that collection:

>>> names = cn.connections.keys(); names.sort(); print names
['notroot', 'root']

So long as this database group remains open, the same ``Connection`` objects
are returned:

>>> cn.get_connection('root') is cn

@@ -152,6 +140,7 @@ Clean up:

>>> for a_db in dbmap.values():
...     a_db.close()


Configuration from File
-----------------------

@@ -171,8 +160,8 @@ ZODB 3.6:

>>> db.databases.keys()
['this_is_the_name']

However, the ``.databases`` attribute cannot be configured from file. It
can be passed to the `ZConfig` factory. I'm not sure of the clearest way
to test that here; this is ugly:

>>> from ZODB.config import getDbSchema

@@ -184,13 +173,13 @@ different database_name:

>>> config2 = config.replace("this_is_the_name", "another_name")

Now get a `ZConfig` factory from `config2`:

>>> f = StringIO(config2)
>>> zconfig, handle = ZConfig.loadConfigFile(getDbSchema(), f)
>>> factory = zconfig.database

The desired ``databases`` mapping can be passed to this factory:

>>> db2 = factory.open(databases=db.databases)
>>> print db2.database_name # has the right name

...

=============
Synchronizers
=============

Here are some tests that storage ``sync()`` methods get called at appropriate
times in the life of a transaction. The tested behavior is new in ZODB 3.4.

First define a lightweight storage with a ``sync()`` method:

>>> import ZODB
>>> from ZODB.MappingStorage import MappingStorage

@@ -27,14 +31,14 @@ Sync should not have been called yet.

False
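
For orientation, the storage set up in the elided lines above is, in outline, a
``MappingStorage`` subclass that merely records whether ``sync()`` was called
(a sketch based on the names used below, not necessarily the exact definition)::

    class SimpleStorage(MappingStorage):
        sync_called = False

        def sync(self, *args):
            # The tests reset and inspect this flag around commit, abort,
            # and begin to see when the Connection triggers a sync.
            self.sync_called = True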

``sync()`` is called by the Connection's ``afterCompletion()`` hook after the
commit completes.

>>> transaction.commit()
>>> st.sync_called # False before 3.4
True

``sync()`` is also called by the ``afterCompletion()`` hook after an abort.

>>> st.sync_called = False
>>> rt['b'] = 2

@@ -42,8 +46,8 @@ sync is also called by the afterCompletion() hook after an abort.

>>> st.sync_called # False before 3.4
True

And ``sync()`` is called whenever we explicitly start a new transaction, via
the ``newTransaction()`` hook.

>>> st.sync_called = False
>>> dummy = transaction.begin()

@@ -51,19 +55,19 @@ newTransaction() hook.

True

Clean up. Closing db isn't enough -- closing a DB doesn't close its
`Connections`. Leaving our `Connection` open here can cause the
``SimpleStorage.sync()`` method to get called later, during another test, and
our doctest-synthesized module globals no longer exist then. You get a weird
traceback then ;-)

>>> cn.close()

One more, very obscure. It was the case that if the first action a new
threaded transaction manager saw was a ``begin()`` call, then synchronizers
registered after that in the same transaction weren't communicated to the
`Transaction` object, and so the synchronizers' ``afterCompletion()`` hooks
weren't called when the transaction committed. None of the test suites
(ZODB's, Zope 2.8's, or Zope3's) caught that, but apparently Zope 3 takes this
path at some point when serving pages.

>>> tm = transaction.ThreadTransactionManager()

@@ -75,14 +79,14 @@ path at some point when serving pages.

>>> st.sync_called
False

Now ensure that ``cn.afterCompletion() -> st.sync()`` gets called by commit
despite that the `Connection` registered after the transaction began:

>>> tm.commit()
>>> st.sync_called
True

And try the same thing with a non-threaded transaction manager:

>>> cn.close()
>>> tm = transaction.TransactionManager()

...

==========
Savepoints
==========

Savepoints provide a way to save to disk intermediate work done during a
transaction allowing:

- partial transaction (subtransaction) rollback (abort)

- state of saved objects to be freed, freeing on-line memory for other
  uses

Savepoints make it possible to write atomic subroutines that don't make
top-level transaction commitments.
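
In its simplest form, savepoint use looks like the following sketch
(``risky_subroutine`` is a hypothetical helper)::

    import transaction

    savepoint = transaction.savepoint()   # checkpoint the work done so far
    try:
        risky_subroutine()
    except Exception:
        savepoint.rollback()              # undo only the work done since the savepoint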

Applications
------------

@@ -39,13 +41,13 @@ and abort changes:

>>> root['name']
'bob'

Now, let's look at an application that manages funds for people. It allows
deposits and debits to be entered for multiple people. It accepts a sequence
of entries and generates a sequence of status messages. For each entry, it
applies the change and then validates the user's account. If the user's
account is invalid, we roll back the change for that entry. The success or
failure of an entry is indicated in the output status. First we'll initialize
some accounts:

>>> root['bob-balance'] = 0.0
>>> root['bob-credit'] = 0.0

@@ -59,8 +61,8 @@ Now, we'll define a validation function to validate an account:

...     if root[name+'-balance'] + root[name+'-credit'] < 0:
...         raise ValueError('Overdrawn', name)

And a function to apply entries. If the function fails in some unexpected
way, it rolls back all of its changes and prints the error:

>>> def apply_entries(entries):
...     savepoint = transaction.savepoint()

@@ -114,9 +116,9 @@ If we provide entries that cause an unexpected error:

Updated sally
Unexpected exception unsupported operand type(s) for +=: 'float' and 'str'

Because ``apply_entries`` used a savepoint for the entire function, it was
able to roll back the partial changes without rolling back changes made in the
previous call to ``apply_entries``:

>>> root['bob-balance']
30.0

@@ -135,6 +137,7 @@ away:

>>> root['sally-balance']
0.0


Savepoint invalidation
----------------------

...