    ipc/mqueue: improve performance of send/recv · d6629859
    Doug Ledford authored
    The existing implementation of the POSIX message queue send and recv
    functions is, well, abysmal.  Even worse than abysmal.  I submitted a
    patch to increase the maximum POSIX message queue limit to 65536 due to
    customer needs; however, upon looking over the send/recv implementation, I
    realized that my customer needs help with that code too, even if they
    don't know it.  The basic problem is that, given the fairly typical use
    case for a large queue, namely queueing lots of messages all at the same
    priority (I verified with my customer that this is indeed what their app
    does), the msg_insert routine is basically a frikkin' bubble sort.  I
    mean, whoa, that's *so* middle school.
    
    OK, OK, to not slam the original author too much, I'm sure they didn't
    envision a queue depth of 50,000+ messages.  No one would think that
    moving elements in an array, one at a time, and dereferencing each pointer
    in that array to check the priority of the message being pointed to,
    again one at a time, 50,000+ times over would be good.  So let's assume
    that, as is typical, the users have found a way to break our code simply
    by using it in a way we didn't envision.  Fair enough.
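
    To make that concrete, the old insertion pattern boils down to something
    like the sketch below.  This is a simplified illustration, not the actual
    old mqueue.c code; the array, function, and field names are just
    stand-ins:

        static void sorted_array_insert(struct msg_msg **messages, int *count,
                                        struct msg_msg *msg)
        {
                int k = *count - 1;     /* caller already checked queue not full */

                /*
                 * Walk back from the tail, dereferencing every pointer to
                 * compare priorities and shifting entries one slot at a
                 * time.  With 50,000+ same-priority messages already queued,
                 * every insert repeats this walk: O(n) per message, O(n^2)
                 * to fill the queue, and a cache footprint to match.
                 */
                while (k >= 0 && messages[k]->m_type >= msg->m_type) {
                        messages[k + 1] = messages[k];
                        k--;
                }
                messages[k + 1] = msg;
                (*count)++;
        }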
    
    "So, just how broken is it?", you ask.  I wondered the same thing, so I
    wrote an app to let me know.  It's my next patch.  It gave me some
    interesting results.  Here's what it tested:
    
    Interference with other apps - In continuous mode, the app just sits there
    and hits a message queue forever, while you go do something productive on
    another terminal using other CPUs.  You then measure how long it takes you
    to do that something productive.  Then you restart the app in fake
    continuous mode, and it sits in a tight loop on a CPU while you repeat
    your tests.  The whole point of this is to keep one CPU tied up (so it
    can't be used in your other work), but in one case tied up hitting the
    mqueue code, so we can see the effect that walking that 65,528-element
    array one pointer at a time has on the global CPU cache.  If it's bad,
    then it will slow down your app on the other CPUs just by polluting cache
    mercilessly.  In the fake case, it will be in a tight loop, but not
    polluting cache.
    
    Testing the mqueue subsystem directly - Here we just run a number of tests
    to see how the mqueue subsystem performs under different conditions.  A
    couple of the conditions are known to be worst cases for the old system
    and some of its routines, so this tests all of them.
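
    For a flavor of what those direct tests look like, here's a stripped-down
    userspace sketch of the timing loop.  It is *not* the actual test app from
    the next patch; the queue name, message size, and iteration count are
    arbitrary, and a deep-queue run would also need /proc/sys/fs/mqueue/msg_max
    raised and the queue pre-filled at the priorities of interest (build with
    -lrt):

        #include <fcntl.h>
        #include <mqueue.h>
        #include <stdio.h>
        #include <time.h>

        #define ITERS 1000000

        int main(void)
        {
                struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
                struct timespec start, end;
                char buf[64] = "x";
                long long ns;
                mqd_t q;
                int i;

                mq_unlink("/mq_perf_sketch");
                q = mq_open("/mq_perf_sketch", O_CREAT | O_RDWR, 0600, &attr);
                if (q == (mqd_t)-1) {
                        perror("mq_open");
                        return 1;
                }

                /* empty-queue send/recv pairs at a constant priority of 0 */
                clock_gettime(CLOCK_MONOTONIC, &start);
                for (i = 0; i < ITERS; i++) {
                        mq_send(q, buf, sizeof(buf), 0);
                        mq_receive(q, buf, sizeof(buf), NULL);
                }
                clock_gettime(CLOCK_MONOTONIC, &end);

                ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
                     (end.tv_nsec - start.tv_nsec);
                printf("avg send+recv: %lld ns\n", ns / ITERS);

                mq_close(q);
                mq_unlink("/mq_perf_sketch");
                return 0;
        }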
    
    So, on to the results already:
    
    Subsystem/Test                  Old                         New
    
    Time to compile linux
    kernel (make -j12 on a
    6 core CPU)
      Running mqueue test     user  49m10.744s            user  45m26.294s
                              sys    5m51.924s            sys    4m59.894s
                              total 55m02.668s            total 50m26.188s
    
      Running fake test       user  45m32.686s            user  45m18.552s
                              sys    5m12.465s            sys    4m56.468s
                              total 50m45.151s            total 50m15.020s
    
      % slowdown from mqueue
        cache thrashing            ~8%                         ~.5%
    
    Avg time to send/recv (in nanoseconds per message)
      when queue empty            305/288                    349/318
      when queue full (65528 messages)
        constant priority      526589/823                    362/314
        increasing priority    403105/916                    495/445
        decreasing priority     73420/594                    482/409
        random priority        280147/920                    546/436
    
    Time to fill/drain queue (65528 messages, in seconds)
      constant priority         17.37/.12                    .13/.12
      increasing priority        4.14/.14                    .21/.18
      decreasing priority       12.93/.13                    .21/.18
      random priority            8.88/.16                    .22/.17
    
    So, I think the results speak for themselves.  It's possible this
    implementation could be improved by caching at least one priority level
    in the node tree (that would bring the queue-empty performance more in
    line with the old implementation), but this works and is *so* much better
    than what we had, especially for the common case of a single priority in
    use, that further refinements can be in follow-on patches.
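
    For anyone curious about the shape of the new code, the basic idea is one
    tree node per priority level, each holding a FIFO list of the messages at
    that priority, so a same-priority insert never rescans what's already
    queued.  A rough sketch follows, assuming an rbtree keyed by priority; the
    struct and function names here are illustrative rather than copied from
    mqueue.c, and the right allocation flags depend on the calling context
    (see the gfp fix noted below):

        #include <linux/list.h>
        #include <linux/msg.h>
        #include <linux/rbtree.h>
        #include <linux/slab.h>

        struct prio_node {                      /* one node per priority level */
                struct rb_node          rb;
                long                    priority;
                struct list_head        msg_list;  /* FIFO of msgs at this prio */
        };

        static int prio_tree_insert(struct msg_msg *msg, struct rb_root *root)
        {
                struct rb_node **p = &root->rb_node, *parent = NULL;
                struct prio_node *node;

                while (*p) {
                        parent = *p;
                        node = rb_entry(parent, struct prio_node, rb);

                        if (msg->m_type == node->priority) {
                                /* same priority: O(1) tail append, FIFO kept */
                                list_add_tail(&msg->m_list, &node->msg_list);
                                return 0;
                        }
                        if (msg->m_type < node->priority)
                                p = &(*p)->rb_left;
                        else
                                p = &(*p)->rb_right;
                }

                /* first message at this priority: add a new tree node */
                node = kmalloc(sizeof(*node), GFP_ATOMIC);
                if (!node)
                        return -ENOMEM;
                node->priority = msg->m_type;
                INIT_LIST_HEAD(&node->msg_list);
                list_add_tail(&msg->m_list, &node->msg_list);
                rb_link_node(&node->rb, parent, p);
                rb_insert_color(&node->rb, root);
                return 0;
        }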
    
    [akpm@linux-foundation.org: fix typo in comment, remove stray semicolon]
    [levinsasha928@gmail.com: use correct gfp flags in msg_insert]
    Signed-off-by: Doug Ledford <dledford@redhat.com>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: Manfred Spraul <manfred@colorfullife.com>
    Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>