- 18 Dec, 2017 2 commits
  - Kirill Smelkov authored
  - Kirill Smelkov authored
    also: DataTid -> DataTidHint, which is 0 if there is no such hint.
- 13 Dec, 2017 3 commits
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
    So now we benchmark the disk for sizes 4K and 2M, which are the usual sizes that come up with e.g. wendelin.core.
- 11 Dec, 2017 4 commits
  - Kirill Smelkov authored
    * origin/master:
        client: account for cache hit/miss statistics
        client: remove redundant information from cache's __repr__
        cache: fix possible endless loop in __repr__/_iterQueue
        storage: speed up replication by not getting object next_serial for nothing
        storage: speed up replication by sending bigger network packets
        neoctl: remove ignored option
        client: bug found, add log to collect more information
        client: new 'cache-size' Storage option
        doc: mention HTTPS URLs when possible
        doc: update comment in neolog about Python issue 13773
        neolog: add support for xz-compressed logs, using external xzcat commands
        neolog: --from option now also tries to parse with dateutil
        importer: do not crash if a backup cluster tries to replicate
        storage: disable data deduplication by default
        Release version 1.8.1
  - Kirill Smelkov authored
    This information is handy to see how well the cache performs.
    Amended by Julien Muchembled:
      - do not abbreviate some existing field names in the repr result (asking the user to look at the source code in order to decipher logs is not nice)
      - hit: change from %.1f to %.3g
      - hit: hide it completely if nload is 0
      - use __future__.division instead of adding more casts to float
  - Julien Muchembled authored
  - Julien Muchembled authored
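The amended hit-ratio formatting can be sketched as follows. This is an illustrative sketch with assumed names (`repr_cache_stats`, `nhit`, `nmiss`), not NEO's actual cache code; it only demonstrates the three amendment rules: %.3g for the hit percentage, hiding it entirely when there were no loads, and __future__.division instead of float casts.

```python
from __future__ import division  # '/' is true division even on Python 2 ints


def repr_cache_stats(nhit, nmiss):
    # Hypothetical helper mirroring the amended __repr__ rules:
    # - field names are spelled out, not abbreviated
    # - hit ratio printed with %.3g
    # - hit ratio omitted completely when nload is 0
    nload = nhit + nmiss
    s = 'nhit=%r nmiss=%r' % (nhit, nmiss)
    if nload:
        s += ' hit=%.3g%%' % (100 * nhit / nload)
    return s
```

For example, `repr_cache_stats(3, 1)` yields `nhit=3 nmiss=1 hit=75%`, while `repr_cache_stats(0, 0)` yields just `nhit=0 nmiss=0` with the ratio hidden.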
- 08 Dec, 2017 1 commit
  - Kirill Smelkov authored
    On my disk it gives:

      name                                  time/op
      deco/disk/randread/direct/4K-min      98.0µs ± 1%
      deco/disk/randread/direct/4K-avg       104µs ± 0%
      deco/disk/randread/direct/1M-min      2.90ms ±17%
      deco/disk/randread/direct/1M-avg      3.55ms ± 0%
      deco/disk/randread/pagecache/4K-min    227ns ± 1%
      deco/disk/randread/pagecache/4K-avg    629ns ± 0%
      deco/disk/randread/pagecache/1M-min   70.8µs ± 7%
      deco/disk/randread/pagecache/1M-avg   99.4µs ± 1%
- 05 Dec, 2017 2 commits
  - Julien Muchembled authored
  - Julien Muchembled authored
- 04 Dec, 2017 1 commit
  - Julien Muchembled authored
- 22 Nov, 2017 1 commit
  - Kirill Smelkov authored
- 21 Nov, 2017 1 commit
  - Julien Muchembled authored
    INFO Z2 Log files reopened successfully
    INFO SignalHandler Caught signal SIGTERM
    INFO Z2 Shutting down fast
    INFO ZServer closing HTTP to new connections
    ERROR ZODB.Connection Couldn't load state for BTrees.LOBTree.LOBucket 0xc12e29
    Traceback (most recent call last):
      File "ZODB/Connection.py", line 909, in setstate
        self._setstate(obj, oid)
      File "ZODB/Connection.py", line 953, in _setstate
        p, serial = self._storage.load(oid, '')
      File "neo/client/Storage.py", line 81, in load
        return self.app.load(oid)[:2]
      File "neo/client/app.py", line 355, in load
        data, tid, next_tid, _ = self._loadFromStorage(oid, tid, before_tid)
      File "neo/client/app.py", line 387, in _loadFromStorage
        askStorage)
      File "neo/client/app.py", line 297, in _askStorageForRead
        self.sync()
      File "neo/client/app.py", line 898, in sync
        self._askPrimary(Packets.Ping())
      File "neo/client/app.py", line 163, in _askPrimary
        return self._ask(self._getMasterConnection(), packet,
      File "neo/client/app.py", line 177, in _getMasterConnection
        result = self.master_conn = self._connectToPrimaryNode()
      File "neo/client/app.py", line 202, in _connectToPrimaryNode
        index = (index + 1) % len(master_list)
    ZeroDivisionError: integer division or modulo by zero
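The crash in the traceback above boils down to a round-robin step over an empty list of master nodes: `(index + 1) % len(master_list)` raises ZeroDivisionError when `len(master_list)` is 0. A minimal illustration of the failing pattern with an explicit guard; `next_master_index` is a hypothetical helper for demonstration, not NEO's actual code:

```python
def next_master_index(index, master_list):
    """Advance a round-robin index over the known master nodes.

    Hypothetical sketch of the pattern in the traceback: with an empty
    master_list, len(master_list) == 0 and the modulo below would raise
    ZeroDivisionError, so we fail with a clearer error instead.
    """
    if not master_list:
        raise RuntimeError('cannot reconnect: no known master nodes')
    return (index + 1) % len(master_list)
```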
- 20 Nov, 2017 1 commit
  - Kirill Smelkov authored
- 19 Nov, 2017 1 commit
  - Julien Muchembled authored
- 17 Nov, 2017 4 commits
  - Julien Muchembled authored
  - Julien Muchembled authored
  - Julien Muchembled authored
  - Julien Muchembled authored
- 15 Nov, 2017 1 commit
  - Julien Muchembled authored
    It is not yet possible to replicate a node that is importing data. One must wait until the migration is finished.
- 09 Nov, 2017 6 commits
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
- 08 Nov, 2017 6 commits
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
  - Kirill Smelkov authored
- 07 Nov, 2017 5 commits
  - Kirill Smelkov authored
  - Kirill Smelkov authored
    This would prevent e.g. eth1 from going before eth0, as was the case in 55a64368.
  - Julien Muchembled authored
  - Kirill Smelkov authored
  - Julien Muchembled authored
- 06 Nov, 2017 1 commit
  - Kirill Smelkov authored
    ; NEO/py log to no-log:

      $ ./benchstat-neopy-lognolog 20171106-time-rio-Cenabled.txt
      name                                    old µs/object  new µs/object  delta
      dataset:wczblk1-8
      rio/neo/py/sqlite/zhash.py                304 ± 6%       291 ± 2%       ~       (p=0.056 n=5+5)
      rio/neo/py/sqlite/zhash.py-P16          2.19k ± 0%     2.01k ± 2%     -8.20%    (p=0.000 n=13+16)
      rio/neo/py/sqlite/zhash.go                248 ± 1%       231 ± 1%     -7.19%    (p=0.008 n=5+5)
      rio/neo/py/sqlite/zhash.go+prefetch128    125 ± 1%       110 ± 2%    -11.57%    (p=0.008 n=5+5)
      rio/neo/py/sqlite/zhash.go-P16          1.76k ±13%     1.62k ± 7%     -8.06%    (p=0.015 n=16+16)
      rio/neo/py/sql/zhash.py                   325 ± 4%       313 ± 4%       ~       (p=0.114 n=4+4)
      rio/neo/py/sql/zhash.py-P16             2.88k ± 1%     2.56k ± 1%    -11.05%    (p=0.000 n=15+15)
      rio/neo/py/sql/zhash.go                   275 ± 2%       258 ± 1%     -6.03%    (p=0.008 n=5+5)
      rio/neo/py/sql/zhash.go+prefetch128       154 ± 3%       139 ± 1%     -9.29%    (p=0.008 n=5+5)
      rio/neo/py/sql/zhash.go-P16             2.30k ± 8%     2.21k ± 5%       ~       (p=0.072 n=16+16)
      dataset:prod1-1024
      rio/neo/py/sqlite/zhash.py                269 ± 1%       259 ± 4%     -3.49%    (p=0.032 n=5+5)
      rio/neo/py/sqlite/zhash.py-P16          2.19k ± 0%     1.89k ± 1%    -13.62%    (p=0.000 n=16+15)
      rio/neo/py/sqlite/zhash.go                158 ± 1%       142 ± 1%    -10.36%    (p=0.008 n=5+5)
      rio/neo/py/sqlite/zhash.go+prefetch128    116 ± 3%       101 ± 2%    -13.22%    (p=0.008 n=5+5)
      rio/neo/py/sqlite/zhash.go-P16          1.90k ± 0%     1.57k ± 0%    -17.14%    (p=0.000 n=14+13)
      rio/neo/py/sql/zhash.py                   337 ±43%       293 ± 4%       ~       (p=0.286 n=5+4)
      rio/neo/py/sql/zhash.py-P16             2.73k ± 0%     2.47k ± 0%     -9.45%    (p=0.000 n=15+15)
      rio/neo/py/sql/zhash.go                   186 ± 3%       168 ± 1%     -9.39%    (p=0.008 n=5+5)
      rio/neo/py/sql/zhash.go+prefetch128       145 ± 2%       130 ± 2%    -10.24%    (p=0.008 n=5+5)
      rio/neo/py/sql/zhash.go-P16             2.29k ± 6%     2.08k ± 3%     -9.20%    (p=0.000 n=16+16)

    --------
    ; Full summary

      $ benchstat -split dataset 20171106-time-rio-Cenabled.txt
      name                                          pystone/s
      rio/pystone                                   178k ± 2%

      name                                          µs/op
      rio/sha1/py/1024B                             1.40 ± 0%
      rio/sha1/go/1024B                             1.79 ± 1%
      rio/sha1/py/4096B                             5.08 ± 2%
      rio/sha1/go/4096B                             7.14 ± 0%

      name                                          us/op
      rio/disk/randread/direct/4K-min               34.0 ± 1%
      rio/disk/randread/direct/4K-avg               92.9 ± 0%

      name                                          time/op
      rio/disk/randread/pagecache/4K-min            221ns ± 0%
      rio/disk/randread/pagecache/4K-avg            637ns ± 0%

      name                                          µs/object
      dataset:wczblk1-8
      rio/fs1/zhash.py                              22.3 ± 2%
      rio/fs1/zhash.py-P16                          51.7 ±72%
      rio/fs1/zhash.go                              2.40 ± 0%
      rio/fs1/zhash.go+prefetch128                  4.34 ± 8%
      rio/fs1/zhash.go-P16                          3.58 ±24%
      rio/zeo/zhash.py                               336 ± 2%
      rio/zeo/zhash.py-P16                         1.61k ±19%
      rio/neo/py/sqlite/zhash.py                     304 ± 6%
      rio/neo/py/sqlite/zhash.py-P16               2.19k ± 0%
      rio/neo/py/sqlite/zhash.go                     248 ± 1%
      rio/neo/py/sqlite/zhash.go+prefetch128         125 ± 1%
      rio/neo/py/sqlite/zhash.go-P16               1.76k ±13%
      rio/neo/py(!log)/sqlite/zhash.py               291 ± 2%
      rio/neo/py(!log)/sqlite/zhash.py-P16         2.01k ± 2%
      rio/neo/py(!log)/sqlite/zhash.go               231 ± 1%
      rio/neo/py(!log)/sqlite/zhash.go+prefetch128   110 ± 2%
      rio/neo/py(!log)/sqlite/zhash.go-P16         1.62k ± 7%
      rio/neo/py/sql/zhash.py                        325 ± 4%
      rio/neo/py/sql/zhash.py-P16                  2.88k ± 1%
      rio/neo/py/sql/zhash.go                        275 ± 2%
      rio/neo/py/sql/zhash.go+prefetch128            154 ± 3%
      rio/neo/py/sql/zhash.go-P16                  2.30k ± 8%
      rio/neo/py(!log)/sql/zhash.py                  313 ± 4%
      rio/neo/py(!log)/sql/zhash.py-P16            2.56k ± 1%
      rio/neo/py(!log)/sql/zhash.go                  258 ± 1%
      rio/neo/py(!log)/sql/zhash.go+prefetch128      139 ± 1%
      rio/neo/py(!log)/sql/zhash.go-P16            2.21k ± 5%
      rio/neo/go/zhash.py                            190 ± 3%
      rio/neo/go/zhash.py-P16                        784 ± 9%
      rio/neo/go/zhash.go                           52.0 ± 1%
      rio/neo/go/zhash.go+prefetch128               26.6 ± 5%
      rio/neo/go/zhash.go-P16                        256 ± 6%
      rio/neo/go(!sha1)/zhash.go                    35.3 ± 4%
      rio/neo/go(!sha1)/zhash.go+prefetch128        17.3 ± 2%
      rio/neo/go(!sha1)/zhash.go-P16                 152 ±13%
      dataset:prod1-1024
      rio/fs1/zhash.py                              18.9 ± 1%
      rio/fs1/zhash.py-P16                          58.0 ±52%
      rio/fs1/zhash.go                              1.30 ± 0%
      rio/fs1/zhash.go+prefetch128                  2.78 ±14%
      rio/fs1/zhash.go-P16                          2.21 ± 9%
      rio/zeo/zhash.py                               302 ± 7%
      rio/zeo/zhash.py-P16                         1.44k ±11%
      rio/neo/py/sqlite/zhash.py                     269 ± 1%
      rio/neo/py/sqlite/zhash.py-P16               2.19k ± 0%
      rio/neo/py/sqlite/zhash.go                     158 ± 1%
      rio/neo/py/sqlite/zhash.go+prefetch128         116 ± 3%
      rio/neo/py/sqlite/zhash.go-P16               1.90k ± 0%
      rio/neo/py(!log)/sqlite/zhash.py               259 ± 4%
      rio/neo/py(!log)/sqlite/zhash.py-P16         1.89k ± 1%
      rio/neo/py(!log)/sqlite/zhash.go               142 ± 1%
      rio/neo/py(!log)/sqlite/zhash.go+prefetch128   101 ± 2%
      rio/neo/py(!log)/sqlite/zhash.go-P16         1.57k ± 0%
      rio/neo/py/sql/zhash.py                        337 ±43%
      rio/neo/py/sql/zhash.py-P16                  2.73k ± 0%
      rio/neo/py/sql/zhash.go                        186 ± 3%
      rio/neo/py/sql/zhash.go+prefetch128            145 ± 2%
      rio/neo/py/sql/zhash.go-P16                  2.29k ± 6%
      rio/neo/py(!log)/sql/zhash.py                  293 ± 4%
      rio/neo/py(!log)/sql/zhash.py-P16            2.47k ± 0%
      rio/neo/py(!log)/sql/zhash.go                  168 ± 1%
      rio/neo/py(!log)/sql/zhash.go+prefetch128      130 ± 2%
      rio/neo/py(!log)/sql/zhash.go-P16            2.08k ± 3%
      rio/neo/go/zhash.py                            181 ± 5%
      rio/neo/go/zhash.py-P16                        714 ± 6%
      rio/neo/go/zhash.go                           36.9 ± 3%
      rio/neo/go/zhash.go+prefetch128               16.5 ± 1%
      rio/neo/go/zhash.go-P16                        239 ± 4%
      rio/neo/go(!sha1)/zhash.go                    32.7 ± 7%
      rio/neo/go(!sha1)/zhash.go+prefetch128        13.5 ± 1%
      rio/neo/go(!sha1)/zhash.go-P16                 190 ± 7%