Commit db4df481 authored by Mauro Carvalho Chehab, committed by Jonathan Corbet

futex-requeue-pi.txt: standardize document format

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markup so that it can be parsed by Sphinx:

- promote the level of the document title;
- mark literal blocks.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
parent af7175bc
Documentation/futex-requeue-pi.txt

+================
 Futex Requeue PI
-----------------
+================
 
 Requeueing of tasks from a non-PI futex to a PI futex requires
 special handling in order to ensure the underlying rt_mutex is never
@@ -20,11 +21,11 @@ implementation would wake the highest-priority waiter, and leave the
 rest to the natural wakeup inherent in unlocking the mutex
 associated with the condvar.
 
-Consider the simplified glibc calls:
+Consider the simplified glibc calls::
 
-/* caller must lock mutex */
-pthread_cond_wait(cond, mutex)
-{
-	lock(cond->__data.__lock);
-	unlock(mutex);
-	do {
+	/* caller must lock mutex */
+	pthread_cond_wait(cond, mutex)
+	{
+		lock(cond->__data.__lock);
+		unlock(mutex);
+		do {
@@ -34,14 +35,14 @@ pthread_cond_wait(cond, mutex)
-	} while(...)
-	unlock(cond->__data.__lock);
-	lock(mutex);
-}
+		} while(...)
+		unlock(cond->__data.__lock);
+		lock(mutex);
+	}
 
-pthread_cond_broadcast(cond)
-{
-	lock(cond->__data.__lock);
-	unlock(cond->__data.__lock);
-	futex_requeue(cond->data.__futex, cond->mutex);
-}
+	pthread_cond_broadcast(cond)
+	{
+		lock(cond->__data.__lock);
+		unlock(cond->__data.__lock);
+		futex_requeue(cond->data.__futex, cond->mutex);
+	}
 
 Once pthread_cond_broadcast() requeues the tasks, the cond->mutex
 has waiters. Note that pthread_cond_wait() attempts to lock the
@@ -53,12 +54,12 @@ In order to support PI-aware pthread_condvar's, the kernel needs to
 be able to requeue tasks to PI futexes. This support implies that
 upon a successful futex_wait system call, the caller would return to
 user space already holding the PI futex. The glibc implementation
-would be modified as follows:
+would be modified as follows::
 
-/* caller must lock mutex */
-pthread_cond_wait_pi(cond, mutex)
-{
-	lock(cond->__data.__lock);
-	unlock(mutex);
-	do {
+	/* caller must lock mutex */
+	pthread_cond_wait_pi(cond, mutex)
+	{
+		lock(cond->__data.__lock);
+		unlock(mutex);
+		do {
@@ -68,14 +69,14 @@ pthread_cond_wait_pi(cond, mutex)
-	} while(...)
-	unlock(cond->__data.__lock);
-	/* the kernel acquired the mutex for us */
-}
+		} while(...)
+		unlock(cond->__data.__lock);
+		/* the kernel acquired the mutex for us */
+	}
 
-pthread_cond_broadcast_pi(cond)
-{
-	lock(cond->__data.__lock);
-	unlock(cond->__data.__lock);
-	futex_requeue_pi(cond->data.__futex, cond->mutex);
-}
+	pthread_cond_broadcast_pi(cond)
+	{
+		lock(cond->__data.__lock);
+		unlock(cond->__data.__lock);
+		futex_requeue_pi(cond->data.__futex, cond->mutex);
+	}
 
 The actual glibc implementation will likely test for PI and make the
 necessary changes inside the existing calls rather than creating new
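The kernel-side support that these glibc changes rely on is exposed to user
space through two futex operations, FUTEX_WAIT_REQUEUE_PI (waiter side) and
FUTEX_CMP_REQUEUE_PI (waker side), which the portions of the document not
shown in these hunks describe. As a rough sketch of how the simplified
pseudocode above might map onto the raw futex() system call (the wrapper
names below, and the omitted timeout and error handling, are illustrative
simplifications, not part of this patch or of glibc)::

	/*
	 * Illustrative sketch only: driving the requeue-PI futex operations
	 * directly.  FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI come from
	 * <linux/futex.h>; everything else here is simplified.
	 */
	#include <linux/futex.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdint.h>
	#include <limits.h>

	static long sys_futex(uint32_t *uaddr, int op, uint32_t val,
			      uintptr_t val2_or_timeout, uint32_t *uaddr2,
			      uint32_t val3)
	{
		/* The 4th argument is a timeout pointer or a plain count,
		 * depending on the operation. */
		return syscall(SYS_futex, uaddr, op, val, val2_or_timeout,
			       uaddr2, val3);
	}

	/* Waiter side: block on the condvar futex; when requeued and then
	 * woken, the kernel has already acquired the PI mutex for us. */
	static long cond_wait_requeue_pi(uint32_t *cond_futex, uint32_t seen_val,
					 uint32_t *pi_mutex_futex)
	{
		return sys_futex(cond_futex, FUTEX_WAIT_REQUEUE_PI, seen_val,
				 0 /* no timeout */, pi_mutex_futex, 0);
	}

	/* Broadcast side: wake exactly one waiter (which takes the PI mutex)
	 * and requeue the rest onto the PI mutex futex, to be woken later by
	 * normal PI unlock operations. */
	static long cond_broadcast_requeue_pi(uint32_t *cond_futex,
					      uint32_t expected_val,
					      uint32_t *pi_mutex_futex)
	{
		return sys_futex(cond_futex, FUTEX_CMP_REQUEUE_PI,
				 1 /* nr_wake, must be 1 */,
				 INT_MAX /* nr_requeue */, pi_mutex_futex,
				 expected_val);
	}

As the document itself notes, glibc would likely fold this behaviour into the
existing pthread_cond_wait()/pthread_cond_broadcast() entry points when the
associated mutex is a PI mutex, rather than exporting new *_pi calls.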