bigfile/virtmem: Big Virtmem lock
At present several threads running can corrupt internal virtmem datastructures (e.g. ram->lru_list, fileh->pagemap, etc).

This can happen even if we have zope instances with only 1 worker thread - because there are other "system" threads, and Python garbage collection can trigger in any thread. So if a virtmem object, e.g. a VMA or FileH, is sitting in the GC queue waiting to be collected, its collection - and thus e.g. vma_unmap() and fileh_close() - will be called from a thread different from the worker thread.

Because of that, virtmem simply has to be thread-aware so that its internal datastructures cannot be corrupted.

On the other hand, the idea of introducing a userspace virtual memory manager turned out to be not so good from a performance and complexity point of view, and the plan is thus to try to move it back into the kernel. Given that, it does not make sense to do a well-optimised locking implementation for the userspace version.

So we do just a simple single "protect-all" big lock for virtmem.

Of particular note is the interaction with Python's GIL - any long-lived lock has to be taken with the GIL released, because otherwise it can deadlock (G = take GIL, V = take virtmem lock, !G = release GIL):

    t1                          t2

    G
                                V
    V   (blocks: t2 holds V)    G   (blocks: t1 holds G)    -> deadlock

If t1 instead does !G before trying to take V, the cycle cannot form. So we introduce helpers to make sure the GIL is not taken, and to retake it afterwards if we were holding it initially. Those helpers (py_gil_ensure_unlocked / py_gil_retake_if_waslocked) are symmetrical opposites to what Python provides to make sure the GIL is locked (via PyGILState_Ensure / PyGILState_Release).

Otherwise, the patch is a more-or-less straightforward application of the one-big-lock-to-protect-everything idea.
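As a rough illustration only (not the actual patch code), the two helpers could be built on top of CPython's C API along the following lines; PyGILState_Check() is assumed to be available (CPython >= 3.4), and the pthread mutex and example_virtmem_op() caller are hypothetical stand-ins for the real big virtmem lock and its users:

```c
#include <Python.h>
#include <pthread.h>

/* hypothetical stand-in for the big "protect-all" virtmem lock */
static pthread_mutex_t virtmem_big_lock = PTHREAD_MUTEX_INITIALIZER;

/* ensure the GIL is not held by the calling thread;
 * returns the saved thread state if we had to release it, NULL otherwise. */
static PyThreadState *py_gil_ensure_unlocked(void)
{
    if (PyGILState_Check())
        return PyEval_SaveThread();     /* releases the GIL */
    return NULL;
}

/* retake the GIL, but only if py_gil_ensure_unlocked() released it for us. */
static void py_gil_retake_if_waslocked(PyThreadState *ts)
{
    if (ts)
        PyEval_RestoreThread(ts);       /* reacquires the GIL */
}

/* example caller: touch virtmem structures under the big lock, with the GIL
 * guaranteed to be released for the whole critical section, so the
 * GIL-vs-virtmem deadlock shown above cannot occur. */
static void example_virtmem_op(void)
{
    PyThreadState *ts = py_gil_ensure_unlocked();

    pthread_mutex_lock(&virtmem_big_lock);
    /* ... ram->lru_list, fileh->pagemap, vma_unmap(), fileh_close(), ... */
    pthread_mutex_unlock(&virtmem_big_lock);

    py_gil_retake_if_waslocked(ts);
}
```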
bigfile/tests/test_thread.py (new file, 0 → 100644)