1. 13 Jun, 2014 5 commits
    • net: sctp: test if association is dead in sctp_wake_up_waiters · f8f74420
      Daniel Borkmann authored
      [ Upstream commit 1e1cdf8a ]
      
      In sctp_wake_up_waiters(), we need to test whether the
      association has been declared dead. If so, we no longer hold
      any reference to a possible sibling association and need to
      invoke sctp_write_space() instead, which walks the socket's
      associations and notifies them of new wmem space. The reason
      for this special case is that otherwise we could run into the
      following issue when an sctp_primitive_SEND() call from
      sctp_sendmsg() fails and tries to flush the association's
      outq, i.e. in the following way:
      
      sctp_association_free()
      `-> list_del(&asoc->asocs)         <-- poisons list pointer
          asoc->base.dead = true
          sctp_outq_free(&asoc->outqueue)
          `-> __sctp_outq_teardown()
           `-> sctp_chunk_free()
            `-> consume_skb()
             `-> sctp_wfree()
              `-> sctp_wake_up_waiters() <-- dereferences poisoned pointers
                                             if asoc->ep->sndbuf_policy=0
      
      Therefore, only walk the list in an 'optimized' way if we find
      that the current association is still active, as sketched
      below. We could also use list_del_init() when calling
      sctp_association_free(), but as Vlad suggests, we want to trap
      such bugs and thus leave the pointers poisoned as they are.
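
      A minimal sketch of the resulting guard in sctp_wake_up_waiters(),
      reconstructed from the description above (the exact upstream hunk
      may differ; the fair sibling walk itself is sketched under the
      next entry):

      static void sctp_wake_up_waiters(struct sock *sk,
                                       struct sctp_association *asoc)
      {
              /* Per-association accounting: only our own waiters can be
               * blocked on this space, so wake just this association.
               */
              if (asoc->ep->sndbuf_policy)
                      return __sctp_write_space(asoc);

              /* The association is being torn down: asoc->base.dead was
               * set and list_del() already poisoned asoc->asocs, so do
               * not walk the sibling list from here. Notify waiters per
               * socket via sctp_write_space() instead.
               */
              if (asoc->base.dead)
                      return sctp_write_space(sk);

              /* Otherwise walk the sibling associations fairly (see the
               * sketch under the next commit entry).
               */
      }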
      
      Why is it safe to resolve the issue by testing for asoc->base.dead?
      Parallel calls to sctp_sendmsg() are protected under socket lock,
      that is lock_sock()/release_sock(). Only within that path under
      lock held, we're setting skb/chunk owner via sctp_set_owner_w().
      Eventually, chunks are freed directly by an association still
      under that lock. So when traversing association list on destruction
      time from sctp_wake_up_waiters() via sctp_wfree(), a different
      CPU can't be running sctp_wfree() while another one calls
      sctp_association_free(), as both happen under the same lock.
      Therefore, this also can't race with setting or testing
      asoc->base.dead, since these are guaranteed to happen in
      order, under the lock. Further, as Vlad notes, the times we
      check asoc->base.dead are when we've cached an association
      pointer for later processing. Between caching and processing,
      the association may have been
      freed and is simply still around due to reference counts. We check
      asoc->base.dead under a lock, so it should always be safe to check
      and not race against sctp_association_free(). Stress-testing seems
      fine now, too.
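
      As a rough outline only (hypothetical helper name, heavily
      simplified from the sctp_sendmsg() path the message refers to),
      the serialization argument looks like this:

      /* Hypothetical outline, not the real sctp_sendmsg(): everything
       * between lock_sock() and release_sock() is serialized per socket.
       */
      static int sctp_send_outline(struct sock *sk,
                                   struct sctp_association *asoc,
                                   struct sctp_chunk *chunk)
      {
              int err;

              lock_sock(sk);
              /* Owner/destructor is set under the lock (per chunk in reality). */
              sctp_set_owner_w(chunk);        /* destructor = sctp_wfree */
              err = sctp_primitive_SEND(sock_net(sk), asoc, chunk->msg);
              if (err)
                      /* Frees the queued chunks: sctp_wfree() and thus
                       * sctp_wake_up_waiters() run right here, under the
                       * same lock that ordered asoc->base.dead and the
                       * list_del().
                       */
                      sctp_association_free(asoc);
              release_sock(sk);
              return err;
      }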
      
      Fixes: cd253f9f357d ("net: sctp: wake up all assocs if sndbuf policy is per socket")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • net: sctp: wake up all assocs if sndbuf policy is per socket · cb329759
      Daniel Borkmann authored
      [ Upstream commit 52c35bef ]
      
      SCTP charges chunks against wmem accounting via skb->truesize
      in sctp_set_owner_w(), with sctp_wfree() as the reverse
      operation. If a sender runs out of wmem, it has to wait via
      sctp_wait_for_sndbuf() and is woken up by a call to
      __sctp_write_space(), mostly via sctp_wfree().
      
      __sctp_write_space() is called per association. Although we
      assign sk->sk_write_space to sctp_write_space(), which would
      operate per socket, that callback is only used when the send
      space is increased via the SO_SNDBUF socket option; since
      SOCK_USE_WRITE_QUEUE is set, it is not invoked from
      sock_wfree().
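
      A simplified sketch of that charge/release pair (the real kernel
      code also accounts per-chunk overhead, tracks asoc->sndbuf_used
      and takes an association reference; the skb->cb stashing shown
      here is an assumption about how the destructor finds its chunk):

      static void sctp_set_owner_w(struct sctp_chunk *chunk)
      {
              struct sock *sk = chunk->asoc->base.sk;

              skb_set_owner_w(chunk->skb, sk);
              /* Stash the chunk so the destructor can find its
               * association again (assumption: skb->cb is used).
               */
              *(struct sctp_chunk **)chunk->skb->cb = chunk;
              chunk->skb->destructor = sctp_wfree;    /* reverse operation */
              sk->sk_wmem_queued += chunk->skb->truesize;
      }

      static void sctp_wfree(struct sk_buff *skb)
      {
              struct sctp_chunk *chunk = *(struct sctp_chunk **)skb->cb;
              struct sctp_association *asoc = chunk->asoc;
              struct sock *sk = asoc->base.sk;

              /* Hand the charged space back and wake blocked senders. */
              sk->sk_wmem_queued -= skb->truesize;
              sock_wfree(skb);
              sctp_wake_up_waiters(sk, asoc);  /* ends in __sctp_write_space() */
      }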
      
      Commit 4c3a5bda ("sctp: Don't charge for data in sndbuf
      again when transmitting packet") fixed an issue where, if
      sctp_packet_transmit() manages to queue up more than sndbuf
      bytes, sctp_wait_for_sndbuf() is never woken up again unless
      it is interrupted by a signal. However, a remaining issue is
      that with net.sctp.sndbuf_policy=0, that is, accounting per
      socket, and one-to-many sockets in use, the write space
      reclaimed by sctp_wfree() is 'unfairly' handed back on the
      server to whichever association is lucky enough to be woken
      up again via __sctp_write_space(), while the remaining
      associations are never woken up again (unless interrupted
      by a signal).
      
      The effect disappears with net.sctp.sndbuf_policy=1, that
      is wmem accounting per association, as it guarantees a fair
      share of wmem among associations.
      
      Therefore, if we have reclaimed memory under per-socket
      accounting, wake all associations belonging to the socket in
      a fair manner: traverse the socket's association list starting
      from the current association's neighbour and issue a
      __sctp_write_space() to everyone until we end up waking
      ourselves, as sketched below. This guarantees that no
      association is preferred over another, and even if more
      associations are added to the one-to-many session, all
      receivers will get messages from the server and are not
      stalled forever under high load. This still leaves the
      advantage of per-socket accounting intact, as an association
      can use up the global limit if it is unused by others.
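
      A sketch of that fair traversal in sctp_wake_up_waiters(),
      reconstructed from the description above (list-helper usage is
      illustrative and may differ from the upstream hunk):

      static void sctp_wake_up_waiters(struct sock *sk,
                                       struct sctp_association *asoc)
      {
              struct sctp_association *tmp = asoc;

              /* net.sctp.sndbuf_policy=1: accounting is per association,
               * so only our own waiters can be blocked on this space.
               */
              if (asoc->ep->sndbuf_policy)
                      return __sctp_write_space(asoc);

              /* Per-socket accounting: let the sibling associations have
               * a go first, and wake ourselves last so nobody starves.
               * We get here only while freeing queued chunks, i.e. under
               * the socket lock, so the list cannot change underneath us.
               */
              for (tmp = list_next_entry(tmp, asocs); 1;
                   tmp = list_next_entry(tmp, asocs)) {
                      /* Skip the list head of the endpoint's asoc list. */
                      if (&tmp->asocs == &sctp_sk(sk)->ep->asocs)
                              continue;
                      __sctp_write_space(tmp);
                      if (tmp == asoc)        /* wrapped back to ourselves */
                              break;
              }
      }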
      
      Fixes: 4eb701df ("[SCTP] Fix SCTP sendbuffer accouting.")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • SUNRPC: Ensure call_connect_status() deals correctly with SOFTCONN tasks · 917be78c
      Steve Dickson authored
      commit 1fa3e2eb upstream.
      
      Don't schedule an rpc_delay before checking whether the task
      is a SOFTCONN task, because the tk_callback from the delay
      (__rpc_atrun) clears the task status before rpc_exit_task()
      can be run.
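
      A simplified sketch of the resulting ordering in
      call_connect_status() (not the verbatim upstream hunk;
      surrounding error cases are omitted):

      static void call_connect_status(struct rpc_task *task)
      {
              int status = task->tk_status;

              switch (status) {
              case -ECONNREFUSED:
              case -ENETUNREACH:      /* other connection errors handled alike */
                      /* Check SOFTCONN first: such tasks must exit now with
                       * the original error instead of being delayed, since
                       * __rpc_atrun() would clear tk_status before
                       * rpc_exit_task() gets to see it.
                       */
                      if (RPC_IS_SOFTCONN(task))
                              break;
                      /* Everyone else retries with the existing socket
                       * after a delay.
                       */
                      rpc_delay(task, 3*HZ);
                      return; /* simplified: real code continues to the retry path */
              default:
                      return; /* remaining cases omitted in this sketch */
              }
              rpc_exit(task, status);
      }
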
      Signed-off-by: Steve Dickson <steved@redhat.com>
      Fixes: 561ec160 (SUNRPC: call_connect_status should recheck...)
      Link: http://lkml.kernel.org/r/5329CF7C.7090308@RedHat.com
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      [ Stefan Bader: backport to 3.13-stable: context ]
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • SUNRPC: Ensure that call_connect times out correctly · be63d02c
      Trond Myklebust authored
      commit 485f2251 upstream.
      
      When the server is unavailable due to a networking error, etc., we want
      the RPC client to respect the timeout delays when attempting to reconnect.
      Reported-by: Neil Brown <neilb@suse.de>
      Fixes: 561ec160 (SUNRPC: call_connect_status should recheck bind..)
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Cc: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    • Linux 3.13.11.3 · 51b4d117
      Kamal Mostafa authored
      Signed-off-by: Kamal Mostafa <kamal@canonical.com>
  2. 11 Jun, 2014 29 commits
  3. 10 Jun, 2014 6 commits