Kirill Smelkov authored
With zwrk being for ZODB what wrk is for HTTP.

Rationale: simulating multiple clients is:

1. noisy - the timings vary from run to run, sometimes by up to 50%;
2. burdened with significant additional overhead - the constant OS-level
   switches between client processes prevent the load from actually being
   created;
3. and the overhead from "2" takes resources away from the server in the
   localhost case.

So let's switch to simulating many requests in a lightweight way,
similarly to how it is done in wrk: in one process with not so many
threads (it can be just 1), with many connections opened to the server,
loading it in an epoll way - with Go providing the epoll-goroutine
matching (a rough sketch of this pattern is given below, after the
benchmark results).

Example summarized zbench-local output:

    x/src/lab.nexedi.com/kirr/neo/go/neo/t$ benchstat -split node,cluster,dataset x.txt
    name                                 time/object
    cluster:rio dataset:wczblk1-8
    fs1-zhash.py                          23.7µs ± 5%
    fs1-zhash.go                          5.68µs ± 8%
    fs1-zhash.go+prefetch128              6.44µs ±16%
    zeo/py/fs1-zhash.py                    376µs ± 4%
    zeo/py/fs1-zhash.go                    130µs ± 3%
    zeo/py/fs1-zhash.go+prefetch128       72.3µs ± 4%
    neo/py(!log)/sqlite·P1-zhash.py        565µs ± 4%
    neo/py(!log)/sql·P1-zhash.py           491µs ± 8%

    cluster:rio dataset:prod1-1024
    fs1-zhash.py                          19.5µs ± 2%
    fs1-zhash.go                          3.92µs ±12%
    fs1-zhash.go+prefetch128              4.42µs ± 6%
    zeo/py/fs1-zhash.py                    365µs ± 9%
    zeo/py/fs1-zhash.go                    120µs ± 1%
    zeo/py/fs1-zhash.go+prefetch128       68.4µs ± 3%
    neo/py(!log)/sqlite·P1-zhash.py        560µs ± 5%
    neo/py(!log)/sql·P1-zhash.py           482µs ± 8%

    name                                 req/s
    cluster:rio dataset:wczblk1-8
    fs1-zwrk.go·1                          380k ± 2%
    fs1-zwrk.go·2                          666k ± 3%
    fs1-zwrk.go·3                          948k ± 1%
    fs1-zwrk.go·4                         1.24M ± 1%
    fs1-zwrk.go·8                         1.62M ± 0%
    fs1-zwrk.go·12                        1.70M ± 0%
    fs1-zwrk.go·16                        1.71M ± 0%
    zeo/py/fs1-zwrk.go·1                  8.29k ± 1%
    zeo/py/fs1-zwrk.go·2                  10.4k ± 2%
    zeo/py/fs1-zwrk.go·3                  11.2k ± 1%
    zeo/py/fs1-zwrk.go·4                  11.7k ± 1%
    zeo/py/fs1-zwrk.go·8                  12.1k ± 2%
    zeo/py/fs1-zwrk.go·12                 12.3k ± 1%
    zeo/py/fs1-zwrk.go·16                 12.3k ± 2%

    cluster:rio dataset:prod1-1024
    fs1-zwrk.go·1                          594k ± 7%
    fs1-zwrk.go·2                         1.14M ± 4%
    fs1-zwrk.go·3                         1.60M ± 2%
    fs1-zwrk.go·4                         2.09M ± 1%
    fs1-zwrk.go·8                         2.74M ± 1%
    fs1-zwrk.go·12                        2.76M ± 0%
    fs1-zwrk.go·16                        2.76M ± 1%
    zeo/py/fs1-zwrk.go·1                  9.42k ± 9%
    zeo/py/fs1-zwrk.go·2                  10.4k ± 1%
    zeo/py/fs1-zwrk.go·3                  11.4k ± 1%
    zeo/py/fs1-zwrk.go·4                  11.7k ± 2%
    zeo/py/fs1-zwrk.go·8                  12.4k ± 1%
    zeo/py/fs1-zwrk.go·12                 12.5k ± 1%
    zeo/py/fs1-zwrk.go·16                 13.4k ±11%

    name                                 latency-time/object
    cluster:rio dataset:wczblk1-8
    fs1-zwrk.go·1                         2.63µs ± 2%
    fs1-zwrk.go·2                         3.00µs ± 3%
    fs1-zwrk.go·3                         3.16µs ± 1%
    fs1-zwrk.go·4                         3.23µs ± 1%
    fs1-zwrk.go·8                         4.94µs ± 0%
    fs1-zwrk.go·12                        7.06µs ± 0%
    fs1-zwrk.go·16                        9.36µs ± 0%
    zeo/py/fs1-zwrk.go·1                   121µs ± 1%
    zeo/py/fs1-zwrk.go·2                   192µs ± 2%
    zeo/py/fs1-zwrk.go·3                   267µs ± 1%
    zeo/py/fs1-zwrk.go·4                   343µs ± 1%
    zeo/py/fs1-zwrk.go·8                   660µs ± 2%
    zeo/py/fs1-zwrk.go·12                  977µs ± 1%
    zeo/py/fs1-zwrk.go·16                 1.30ms ± 2%

    cluster:rio dataset:prod1-1024
    fs1-zwrk.go·1                         1.69µs ± 7%
    fs1-zwrk.go·2                         1.76µs ± 4%
    fs1-zwrk.go·3                         1.88µs ± 2%
    fs1-zwrk.go·4                         1.91µs ± 1%
    fs1-zwrk.go·8                         2.92µs ± 1%
    fs1-zwrk.go·12                        4.34µs ± 0%
    fs1-zwrk.go·16                        5.80µs ± 1%
    zeo/py/fs1-zwrk.go·1                   107µs ± 9%
    zeo/py/fs1-zwrk.go·2                   192µs ± 1%
    zeo/py/fs1-zwrk.go·3                   263µs ± 1%
    zeo/py/fs1-zwrk.go·4                   342µs ± 2%
    zeo/py/fs1-zwrk.go·8                   648µs ± 1%
    zeo/py/fs1-zwrk.go·12                  957µs ± 1%
    zeo/py/fs1-zwrk.go·16                 1.20ms ±10%

The scalability graphs in http://navytux.spb.ru/~kirr/neo.html were made
by simulating the client load with zwrk, not with many client OS
processes. http://navytux.spb.ru/~kirr/neo.html#performance-tests has
some additional notes on zwrk.
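For orientation, here is a minimal, illustrative sketch of the wrk-style
load pattern described above - not zwrk itself: one process opens many
TCP connections, drives each connection from its own goroutine, and lets
the Go runtime's netpoller (epoll on Linux) multiplex the goroutines that
are blocked on network I/O. The address flag and the "req\n" round-trip
are hypothetical placeholders; a real load generator speaks the server's
actual protocol and also tracks latency.

    // Illustrative sketch only (NOT zwrk): wrk-style load generation from a
    // single process. Each connection is driven by its own goroutine; the Go
    // runtime multiplexes blocked goroutines over epoll.
    package main

    import (
        "flag"
        "fmt"
        "net"
        "sync"
        "sync/atomic"
        "time"
    )

    func main() {
        addr := flag.String("addr", "127.0.0.1:5555", "server address (placeholder)")
        nconn := flag.Int("c", 16, "number of concurrent connections")
        dur := flag.Duration("d", 10*time.Second, "benchmark duration")
        flag.Parse()

        var nreq int64              // completed request/response round-trips
        done := make(chan struct{}) // closed when the benchmark duration elapses
        time.AfterFunc(*dur, func() { close(done) })

        var wg sync.WaitGroup
        for i := 0; i < *nconn; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                conn, err := net.Dial("tcp", *addr)
                if err != nil {
                    return
                }
                defer conn.Close()
                buf := make([]byte, 4096)
                for {
                    select {
                    case <-done:
                        return
                    default:
                    }
                    // Hypothetical request/response exchange; a real load
                    // generator would speak the server's protocol here.
                    if _, err := conn.Write([]byte("req\n")); err != nil {
                        return
                    }
                    if _, err := conn.Read(buf); err != nil {
                        return
                    }
                    atomic.AddInt64(&nreq, 1)
                }
            }()
        }
        wg.Wait()

        fmt.Printf("c=%d: %.0f req/s\n", *nconn, float64(atomic.LoadInt64(&nreq))/dur.Seconds())
    }

The point of this shape is the same as in wrk: a goroutine blocked in
conn.Read costs only a small stack plus an epoll registration, so one
process can keep many connections saturated without OS-level process
switching, and the measured req/s reflects the server rather than the
load generator.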
Some draft history related to this patch:

    lab.nexedi.com/kirr/neo/commit/ca0d828b    X neotest: Tzwrk1 - place to control running time of 1 zwrk iteration
    lab.nexedi.com/kirr/neo/commit/bbfb5006    X zwrk: Make sure we warm up connections to all NEO storages when cluster is partitioned
    lab.nexedi.com/kirr/neo/commit/7f22bba6    X zwrk: New tool to simulate paralell load from multiple clients