- 19 Mar, 2018 40 commits
-
David Wilson authored
Needed to make large range allocations (1000 per ALLOCATE_ID roundtrip) feasible.
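A minimal sketch of the idea (IdAllocator and fetch_block are hypothetical names, not Mitogen's actual API): one ALLOCATE_ID roundtrip reserves a block of 1000 IDs, and subsequent allocations are served locally until the block is exhausted.

    import threading

    class IdAllocator(object):
        BLOCK_SIZE = 1000

        def __init__(self, fetch_block):
            # fetch_block() performs one ALLOCATE_ID roundtrip and
            # returns the first ID of a freshly reserved range.
            self._fetch_block = fetch_block
            self._lock = threading.Lock()
            self._next_id = 0
            self._limit = 0  # exclusive end of the current block

        def allocate(self):
            with self._lock:
                if self._next_id == self._limit:
                    self._next_id = self._fetch_block()
                    self._limit = self._next_id + self.BLOCK_SIZE
                allocated = self._next_id
                self._next_id += 1
                return allocated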
-
David Wilson authored
Fixes support for virtualenv. Closes #152.
-
David Wilson authored
It knows how to dispatch messages from multiple receivers (associated with multiple services) to multiple threads, where the service implementation is invoked on the message. It wakes a maximum of one thread per received message. It knows how to shut down gracefully. Implication: due to the latch use, 2 file descriptors are burned for every thread. We don't need interruptibility here, so in future it might be nice to allow swapping a different queueing primitive into Select (maybe a subclass?) just for this case.
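A rough sketch of that dispatch loop, with hypothetical names (ServicePool, service.on_message); the queue's get()/close() protocol is sketched under a commit below. Each worker blocks in get(), so a message wakes at most one thread, and close() lets every worker exit for graceful shutdown.

    import threading

    class ServicePool(object):
        def __init__(self, queue, size=4):
            # `queue` yields (service, msg) pairs; see the ClosableQueue
            # sketch below for the get()/close() behaviour assumed here.
            self._queue = queue
            self._threads = []
            for _ in range(size):
                thread = threading.Thread(target=self._worker)
                thread.start()
                self._threads.append(thread)

        def _worker(self):
            while True:
                try:
                    service, msg = self._queue.get()
                except self._queue.Closed:
                    return  # graceful shutdown
                service.on_message(msg)  # hypothetical service entry point

        def shutdown(self):
            self._queue.close()  # wakes all sleeping workers
            for thread in self._threads:
                thread.join()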
-
David Wilson authored
Causes all threads sleeping on the select to wake.
-
David Wilson authored
This is to allow Select() to be used as a generic queueing primitive that supports graceful shutdown.
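A minimal standalone sketch of that behaviour (ClosableQueue is a hypothetical name, not Mitogen's Select API): get() blocks until an item arrives, and close() wakes every sleeping thread so each can observe shutdown.

    import collections
    import threading

    class ClosableQueue(object):
        class Closed(Exception):
            pass

        def __init__(self):
            self._cond = threading.Condition()
            self._items = collections.deque()
            self._closed = False

        def put(self, item):
            with self._cond:
                self._items.append(item)
                self._cond.notify()  # wake at most one sleeper

        def close(self):
            with self._cond:
                self._closed = True
                self._cond.notify_all()  # wake every sleeper

        def get(self):
            with self._cond:
                while not self._items:
                    if self._closed:
                        raise self.Closed()
                    self._cond.wait()
                return self._items.popleft()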
-
David Wilson authored
Will be used more heavily for CI later, but it's already in use by gcloud-ansible-playbook.py.
-
David Wilson authored
This is a work in progress.
-
David Wilson authored
Usage:
- insert a call to mitogen.debug() in the desired process
- kill -USR2 that process
- observe its controlling TTY produce thread stack dumps
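The underlying trick can be sketched in a few lines; this is a standalone approximation of the technique, not mitogen.debug()'s actual implementation.

    import signal
    import sys
    import traceback

    def _dump_stacks(signum, frame):
        # Print one stack trace per running thread to stderr, which for
        # a foreground process is usually the controlling TTY.
        for thread_id, stack in sys._current_frames().items():
            sys.stderr.write('Thread %#x:\n' % thread_id)
            traceback.print_stack(stack, file=sys.stderr)

    def install_handler():
        # After this, `kill -USR2 <pid>` dumps every thread's stack.
        signal.signal(signal.SIGUSR2, _dump_stacks)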
-
David Wilson authored
Need to re-test with the lock held, else more than one thread can end up waiting for the lock, then reopening the log repeatedly.
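This is the classic double-checked pattern; a hypothetical sketch of the fix:

    import threading

    _lock = threading.Lock()
    _log_fp = None

    def reopen_log(path):
        # Cheap unlocked check first, then re-test with the lock held,
        # so only the first waiter reopens the log; later waiters see
        # the work already done and return.
        global _log_fp
        if _log_fp is None:
            with _lock:
                if _log_fp is None:
                    _log_fp = open(path, 'a')
        return _log_fp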
-
David Wilson authored
This allows us to write 128KB at a time towards SSH, but it doesn't help with sudo, where the ancient TTY layer is always used.
-
David Wilson authored
Implication: the entire message remains buffered until its last byte is transmitted. Not wasting time on it, as there are pieces of work like issue #6 that might eliminate these problems on the transmit path entirely.
-
David Wilson authored
Rather than slowly building up a Python string over time, we just store a deque of chunks (which, as of a later commit, will be around 128KB each) and track the total buffer size in a separate integer.

The tricky loop is there to ensure the header does not need to be sliced off the full message (which may be huge, causing yet another spike and copy), but rather only off the much smaller first 128KB-sized chunk received.

There is one more problem with this code: the ''.join() causes RAM usage to temporarily double, but that was true of the old solution too. Shall wait for bug reports before fixing this, as it gets very ugly very fast.
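A simplified sketch of the scheme (hypothetical names; the actual receive path differs in detail): chunks are appended as they arrive, total length is tracked separately, and take() slices only the chunk that straddles the requested boundary, so peeling a header off touches the small first chunk rather than the whole message.

    import collections

    class ReceiveBuffer(object):
        def __init__(self):
            self._chunks = collections.deque()
            self._len = 0

        def append(self, chunk):
            self._chunks.append(chunk)
            self._len += len(chunk)

        def take(self, n):
            # Remove and return the first `n` buffered bytes.
            assert n <= self._len
            pieces = []
            while n:
                chunk = self._chunks.popleft()
                if len(chunk) > n:
                    # Slice only the straddling chunk, never the
                    # whole buffered message.
                    self._chunks.appendleft(chunk[n:])
                    chunk = chunk[:n]
                pieces.append(chunk)
                n -= len(chunk)
                self._len -= len(chunk)
            # The join still doubles RAM transiently, as noted above.
            return b''.join(pieces)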
-
David Wilson authored
There is no penalty for just passing as much data to the OS as possible: it is not copied, and for a non-blocking socket, the OS will simply buffer as much as it can and tell us how much that was. Also avoids a rather pointless string slice.
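Sketched, assuming a non-blocking file descriptor (illustrative helper, not the actual stream code):

    import os

    def transmit(fd, buf):
        # Offer the OS the entire pending buffer in one syscall; the
        # kernel copies what fits and reports how much it accepted.
        written = os.write(fd, buf)
        return buf[written:]  # unsent remainder stays queued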
-
David Wilson authored
Reduces the number of IO loop iterations required to receive large messages, at a small cost in RAM usage. Note that when calling read() with a large buffer value like this, Python must zero-allocate that much RAM up front. In other words, even for a single byte received, 128KB of RAM might need to be written. Consequently CHUNK_SIZE is quite a sensitive value that might need further tuning.
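As an illustration of the trade-off (read_chunk is a hypothetical helper):

    import os

    CHUNK_SIZE = 131072  # 128KB; sensitive value, may need tuning

    def read_chunk(fd):
        # One syscall may now return up to 128KB, so large messages
        # need far fewer loop iterations; the cost is that Python
        # allocates (and zeroes) a CHUNK_SIZE buffer even when only a
        # single byte is available.
        return os.read(fd, CHUNK_SIZE)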