- 15 Sep, 2017 3 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 14 Sep, 2017 16 commits
-
Kirill Smelkov authored
BenchmarkSyncChanRTT-4       5000000       350 ns/op
BenchmarkBufChanRTT-4        5000000       352 ns/op
BenchmarkBufChanAXRXRTT-4    3000000       407 ns/op
BenchmarkNetPipeRTT-4        2000000       938 ns/op
BenchmarkNetPipeRTTsr-4      1000000      1594 ns/op   <-- here
BenchmarkTCPlo-4              300000      4814 ns/op
BenchmarkTCPlosr-4            100000     12261 ns/op   <-- here
BenchmarkLinkNetPipeRTT-4     500000      3017 ns/op
BenchmarkLinkTCPRTT-4         100000     15650 ns/op

The δ between TCPlo + serveRecv-style RX and a full link over TCPlo is ~3µs.
-> need to find out why TCPlosr = TCPlo + 8µs
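For orientation, here is a minimal sketch of how a round-trip benchmark such as BenchmarkSyncChanRTT above can be written; this is an illustration only (hypothetical package and function names), not the project's actual benchmark code. One goroutine echoes every value back, and each b.N iteration measures one full request/reply round trip over unbuffered channels.

package neonet_test // hypothetical package name

import "testing"

// BenchmarkSyncChanRTTSketch measures one request/reply round trip over a
// pair of unbuffered channels served by a single echo goroutine.
func BenchmarkSyncChanRTTSketch(b *testing.B) {
	req := make(chan int) // unbuffered: every send synchronizes with its receive
	rep := make(chan int)

	go func() {
		for v := range req {
			rep <- v // echo the value back
		}
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		req <- i
		<-rep
	}
	close(req)
}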
-
Kirill Smelkov authored
without runtime.Gosched:

null:00 ; oid=0..16995 nread=68269354 t=536.560158ms (31.569µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=532.416867ms (31.326µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=536.958977ms (31.593µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=534.170594ms (31.429µs / object) x=zsha1.go

with runtime.Gosched:

null:00 ; oid=0..16995 nread=68269354 t=594.966346ms (35.006µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=597.510359ms (35.155µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=598.251026ms (35.199µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=596.02138ms (35.068µs / object) x=zsha1.go

----

-> The trace shows that runtime.Gosched indeed switches to the woken-up G without a second syscall in serveRecv, but serveRecv then migrates to a different M (and thus a different CPU).
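To illustrate what is being toggled in the measurement above, a hypothetical sketch of yielding with runtime.Gosched right after handing a received packet to the goroutine blocked on the connection's rxq. The types and names below are placeholders, not the project's real API.

package neonet // hypothetical package name

import "runtime"

// pkt and Conn are placeholder types standing in for the real packet and
// connection types; only the shape of the dispatch loop matters here.
type pkt []byte

type Conn struct {
	rxq chan pkt // packets queued for this connection's Recv
}

// serveRecvSketch dispatches packets read from the link to their Conn and
// yields with runtime.Gosched so the scheduler can switch to the woken
// goroutine immediately -- the variant whose cost is compared above.
func serveRecvSketch(rx <-chan pkt, conn *Conn) {
	for p := range rx {
		conn.rxq <- p     // wake up the goroutine blocked in Recv
		runtime.Gosched() // yield to let the woken G run right away
	}
}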
-
Kirill Smelkov authored
LinkNetPipeRTT-4    3.05µs ± 1%    2.99µs ± 0%    -2.05%  (p=0.008 n=5+5)
LinkTCPRTT-4        15.9µs ± 1%    14.3µs ± 2%   -10.11%  (p=0.008 n=5+5)
-
Kirill Smelkov authored
- BenchmarkLinkNetPipeRTT-4    500000    3189 ns/op    225 B/op    5 allocs/op
+ BenchmarkLinkNetPipeRTT-4    500000    3035 ns/op    225 B/op    5 allocs/op
-
Kirill Smelkov authored
- BenchmarkLinkNetPipeRTT-4    500000    3555 ns/op    225 B/op    5 allocs/op
+ BenchmarkLinkNetPipeRTT-4    500000    3189 ns/op    225 B/op    5 allocs/op
-
Kirill Smelkov authored
- BenchmarkLinkNetPipeRTT-4    500000    3768 ns/op    225 B/op    5 allocs/op
+ BenchmarkLinkNetPipeRTT-4    500000    3555 ns/op    225 B/op    5 allocs/op
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
- BenchmarkLinkNetPipeRTT-4    300000    4825 ns/op    225 B/op    5 allocs/op
+ BenchmarkLinkNetPipeRTT-4    500000    3807 ns/op    225 B/op    5 allocs/op
-
Kirill Smelkov authored
- BenchmarkLinkNetPipeRTT-4    300000    5668 ns/op    225 B/op    5 allocs/op
+ BenchmarkLinkNetPipeRTT-4    300000    4825 ns/op    225 B/op    5 allocs/op
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 13 Sep, 2017 8 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
automatically removes select serveRecv.
-
Kirill Smelkov authored
null:00 ; oid=0..16995 nread=68269354 t=815.489603ms (47.981µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=809.428095ms (47.624µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=815.088024ms (47.957µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=803.725121ms (47.289µs / object) x=zsha1.go
-
Kirill Smelkov authored
before:

null:00 ; oid=0..16995 nread=68269354 t=481.582632ms (28.335µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=473.499859ms (27.859µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=471.996668ms (27.771µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=478.029272ms (28.125µs / object) x=zsha1.go

after:

null:00 ; oid=0..16995 nread=68269354 t=709.761334ms (41.76µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=704.768088ms (41.466µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=720.756186ms (42.407µs / object) x=zsha1.go
null:00 ; oid=0..16995 nread=68269354 t=693.688744ms (40.814µs / object) x=zsha1.go

Now we'll be teaching Recv1 & friends to do things in an optimized way, but the Conn functionality must stay working.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 12 Sep, 2017 13 commits
-
Kirill Smelkov authored
If rxq is buffered, it adds only ~50ns to SyncChanRTT (~350ns -> ~400ns), and, importantly, no extra goroutine switches are introduced (verified by analyzing the trace).
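A rough sketch of what this measurement compares (illustrative names only, not the project's benchmark code): the reply takes one extra hop through a buffered channel standing in for a per-connection rxq, and because that send does not block, the serving goroutine is not forced to switch out.

package neonet_test // hypothetical package name

import "testing"

// BenchmarkBufRxqRTTSketch adds one hop through a buffered channel (a stand-in
// for a per-connection rxq) on top of a plain channel round trip. The buffered
// send does not block, so no extra goroutine switch is needed.
func BenchmarkBufRxqRTTSketch(b *testing.B) {
	req := make(chan int)    // request channel, unbuffered
	rxq := make(chan int, 1) // buffered "rxq": forwarding into it does not block

	go func() {
		for v := range req {
			rxq <- v // extra buffered hop before the caller sees the reply
		}
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		req <- i
		<-rxq
	}
	close(req)
}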
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-