1. 17 Jun, 2013 9 commits
    • tipc: convert topology server to use new server facility · 13a2e898
      Ying Xue authored
      As the new TIPC server infrastructure has been introduced, we can
      now convert the TIPC topology server to it.  We get two benefits
      from doing this:
      
      1) It simplifies the topology server locking policy.  In the
      original locking policy, we placed a spin lock pointer in the
      tipc_subscriber structure so that it could reuse the lock of the
      subscriber's server port to control access to the members of the
      tipc_subscriber instance.  That is, a single lock was used to
      ensure that both tipc_port and tipc_subscriber members were
      safely accessed.
      
      Now we introduce a separate spin lock in the tipc_subscriber
      structure that protects only its own members, giving us a
      finer-grained locking policy.  Moreover, the change allows us to
      make the topology server code more readable and maintainable.
      
      2) It fixes a bug where sent subscription events may be lost when
      the topology port is congested.  Using the new service, the
      topology server now queues outgoing events into an outgoing
      buffer, and then wakes up a sender running in workqueue context.
      The sender keeps picking events from the buffer and sending them
      to their respective subscribers, using the kernel socket
      interface, until the buffer is empty.  Even if the socket is
      congested during transmission, there is no risk that events will
      be dropped, since the sender may block when needed.
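      
      A minimal sketch of this send-side pattern (the structure and
      function names below are illustrative assumptions, not the actual
      TIPC server code):
      
          #include <linux/list.h>
          #include <linux/spinlock.h>
          #include <linux/workqueue.h>
          #include <linux/net.h>
          #include <linux/uio.h>
          #include <linux/slab.h>
      
          /* Hypothetical per-subscriber connection with an outgoing queue. */
          struct event_conn {
              spinlock_t outq_lock;
              struct list_head outq;
              struct socket *sock;        /* kernel socket to the subscriber */
              struct work_struct swork;   /* send work */
          };
      
          struct outq_entry {
              struct list_head list;
              struct kvec iov;            /* event payload */
          };
      
          /* Assumed to be created with alloc_workqueue() at server init. */
          static struct workqueue_struct *send_wq;
      
          /* Called from BH context: never blocks, only queues and kicks the worker. */
          static void conn_queue_event(struct event_conn *con, struct outq_entry *e)
          {
              spin_lock_bh(&con->outq_lock);
              list_add_tail(&e->list, &con->outq);
              spin_unlock_bh(&con->outq_lock);
              queue_work(send_wq, &con->swork);
          }
      
          /* Runs in process context: may block in kernel_sendmsg() on
           * congestion, so no event ever has to be dropped. */
          static void conn_send_work(struct work_struct *work)
          {
              struct event_conn *con = container_of(work, struct event_conn, swork);
              struct outq_entry *e;
              struct msghdr msg = {};
      
              spin_lock_bh(&con->outq_lock);
              while (!list_empty(&con->outq)) {
                  e = list_first_entry(&con->outq, struct outq_entry, list);
                  list_del(&e->list);
                  spin_unlock_bh(&con->outq_lock);
      
                  kernel_sendmsg(con->sock, &msg, &e->iov, 1, e->iov.iov_len);
                  kfree(e->iov.iov_base);
                  kfree(e);
      
                  spin_lock_bh(&con->outq_lock);
              }
              spin_unlock_bh(&con->outq_lock);
          }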
      
      Some minor reordering of initialization is done, since we now
      have a scenario where the topology server must be started after
      socket initialization has taken place, as the former depends
      on the latter.  Overall, this changeover also simplifies the
      TIPC subscriber code.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: introduce new TIPC server infrastructure · c5fa7b3c
      Ying Xue authored
      TIPC has two internal servers, one providing a subscription
      service for topology events, and another providing the
      configuration interface. These servers have previously been running
      in BH context, accessing the TIPC-port (aka native) API directly.
      Apart from these servers, even the TIPC socket implementation is
      partially built on this API.
      
      As this API may be called simultaneously via different paths and in
      different contexts, a complex and costly locking policy is required
      in order to protect TIPC internal resources.
      
      To eliminate the need for this complex locking policy, we introduce
      a new, generic service API that uses kernel sockets for message
      passing instead of the native API. Once the topology and configuration
      servers are converted to use this new service, all code pertaining
      to the native API can be removed. This entails a significant
      reduction in code volume and complexity, and opens the way for a
      complete rework of the locking policy in TIPC.
      
      The new service also solves another problem:
      
      As the current topology server works in BH context, it cannot easily
      be blocked when sending of events fails due to congestion. In such
      cases events may have to be silently dropped, something that is
      unacceptable. Therefore, the new service keeps a dedicated outbound
      queue that receives messages from BH context. Once messages are
      inserted into this queue, we immediately schedule work on a dedicated
      workqueue. This way, messages/events from the topology server
      are in reality sent in process context, and the server can block
      if necessary.
      
      Analogously, there is a new workqueue for receiving messages. Once a
      notification about an arriving message is received in BH context, we
      schedule work on the receive workqueue so that the message is
      received in process context.
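      
      A minimal sketch of the receive side (illustrative names; the
      two-argument data-ready callback signature matches kernels of this
      era, and handle_request() is a hypothetical handler):
      
          #include <linux/net.h>
          #include <linux/uio.h>
          #include <linux/workqueue.h>
          #include <net/sock.h>
      
          struct conn {
              struct socket *sock;
              struct work_struct rwork;   /* receive work */
          };
      
          static struct workqueue_struct *rcv_wq;   /* created at server init */
      
          /* Runs in BH context when data arrives: only schedules work. */
          static void conn_data_ready(struct sock *sk, int bytes)
          {
              struct conn *con = sk->sk_user_data;   /* assumed set at accept time */
      
              queue_work(rcv_wq, &con->rwork);
          }
      
          /* Runs in process context: the actual receive may sleep if needed. */
          static void conn_recv_work(struct work_struct *work)
          {
              struct conn *con = container_of(work, struct conn, rwork);
              struct msghdr msg = {};
              struct kvec iov;
              char buf[256];
              int ret;
      
              iov.iov_base = buf;
              iov.iov_len = sizeof(buf);
      
              ret = kernel_recvmsg(con->sock, &msg, &iov, 1, iov.iov_len, MSG_DONTWAIT);
              if (ret > 0)
                  handle_request(con, buf, ret);   /* hypothetical */
          }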
      
      As both sending and receiving of messages now take place in process
      context, subscribed events can no longer be dropped.
      
      As of this commit, the new server infrastructure is built but not
      yet called by the existing TIPC code. Since the conversion changes
      required in order to use it are significant, the addition is kept
      here as a separate commit.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: allow implicit connect for stream sockets · 5d21cb70
      Erik Hugne authored
      TIPC's implied connect feature, aka piggyback connect, allows
      applications to save one syscall and all SYN/SYN-ACK signalling
      overhead when setting up a connection.  Until now, this has only
      been supported for SEQPACKET sockets.  Here, we make it possible
      to use this feature even with stream sockets.
      
      At the connecting side, the connection is completed when the
      first data message arrives from the accepting peer.  This means
      that we must allow the connecting user to call blocking recv()
      before the socket has reached state SS_CONNECTED.  So we must
      relax the state machine check in recv_stream(), and allow the
      recv() call even if the socket is in state SS_CONNECTING.
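      
      A simplified sketch of the relaxed state check (not the exact TIPC
      code; stream_recv_check_state() is an illustrative helper):
      
          #include <linux/net.h>
          #include <linux/errno.h>
      
          /* Previously only SS_CONNECTED sockets were accepted here.  Now
           * only a socket that was never connected is rejected, so an
           * SS_CONNECTING socket (implicit connect still in progress) may
           * block in recv() until the first data message from the accepting
           * peer completes the connection.
           */
          static int stream_recv_check_state(struct socket *sock)
          {
              if (sock->state == SS_UNCONNECTED)
                  return -ENOTCONN;
      
              /* SS_CONNECTING or SS_CONNECTED: let the receive proceed */
              return 0;
          }
      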
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: change socket buffer overflow control to respect sk_rcvbuf · cc79dd1b
      Ying Xue authored
      As per feedback from the netdev community, we change the buffer
      overflow protection algorithm in receiving sockets so that it
      always respects the nominal upper limit set in sk_rcvbuf.
      
      Instead of scaling up from a small sk_rcvbuf value, which led to
      violations of the configured sk_rcvbuf limit, we now calculate the
      weighted per-message limit by scaling down from a much bigger value,
      still in the same field, according to the importance priority of the
      received message.
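      
      A minimal sketch of the scaled-down, per-importance limit (the shift
      arithmetic is illustrative of the idea; msg_importance() and buf_msg()
      are TIPC-internal helpers):
      
          /* sk_rcvbuf is initialized to the value needed by the highest
           * priority (TIPC_CRITICAL_IMPORTANCE) messages; lower-importance
           * messages get a proportionally smaller share of it:
           * LOW 1/8, MEDIUM 1/4, HIGH 1/2, CRITICAL the full value.
           */
          static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *skb)
          {
              int imp = msg_importance(buf_msg(skb));   /* 0..TIPC_CRITICAL_IMPORTANCE */
      
              return sk->sk_rcvbuf >> (TIPC_CRITICAL_IMPORTANCE - imp);
          }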
      
      To allow for administrative tunability of the socket receive buffer
      size, we create a tipc_rmem sysctl variable so that the user can
      configure an even bigger value via the sysctl command.  It is an
      array of three values (min/default/max), consistent with tunables
      such as tcp_rmem.
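      
      A minimal sketch of how such a three-element (min/default/max) tunable
      is typically exposed through sysctl (illustrative; not necessarily the
      exact table entry added by this patch):
      
          #include <linux/sysctl.h>
      
          int sysctl_tipc_rmem[3];   /* min, default, max */
      
          static struct ctl_table tipc_table[] = {
              {
                  .procname     = "tipc_rmem",
                  .data         = &sysctl_tipc_rmem,
                  .maxlen       = sizeof(sysctl_tipc_rmem),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec,
              },
              { }
          };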
      
      By default, the value initialized in tipc_rmem[1] is equal to the
      receive socket size needed by a TIPC_CRITICAL_IMPORTANCE message.
      This value is also set as the default value of sk_rcvbuf.
      Originally-by: Jon Maloy <jon.maloy@ericsson.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Jon Maloy <jon.maloy@ericsson.com>
      [Ying: added sysctl variation to Jon's original patch]
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      [PG: don't compile sysctl.c if not config'd; add Documentation]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: update code comments to reflect new uapi header path · 8941bbcd
      Ying Xue authored
      Files tipc.h and tipc_config.h were moved to uapi directory, but
      the corresponding comments were not updated at the same time.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add socket option for low latency polling · dafcc438
      Eliezer Tamir authored
      Add a socket option for low latency polling.
      This allows overriding the global sysctl value with a per-socket one.
      Unexport sysctl_net_ll_poll since, for now, it is not needed in modules.
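      
      For illustration, from the application side the per-socket override is
      set with setsockopt(); the option is known as SO_BUSY_POLL in current
      kernels (the name used at the time of this patch may have differed):
      
          #include <stdio.h>
          #include <sys/socket.h>
      
          /* Enable up to 'usecs' microseconds of busy polling on one socket,
           * overriding the global sysctl value for that socket only.
           */
          static int enable_busy_poll(int fd, unsigned int usecs)
          {
              if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                             &usecs, sizeof(usecs)) < 0) {
                  perror("setsockopt(SO_BUSY_POLL)");
                  return -1;
              }
              return 0;
          }
      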
      Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: remove NET_LL_RX_POLL config menue · 89bf1b5a
      Eliezer Tamir authored
      Remove NET_LL_RX_POLL from the config menu.
      Change default to y.
      Busy polling still needs to be enabled at run time.
      Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: convert low latency sockets to sched_clock() · 9a3c71aa
      Eliezer Tamir authored
      Use sched_clock() instead of get_cycles().
      We can use sched_clock() because we don't care much about accuracy.
      Remove the dependency on X86_TSC.
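      
      A rough sketch of why coarse accuracy is sufficient: sched_clock()
      only bounds a short busy-poll window in nanoseconds (napi_poll_device()
      is a hypothetical stand-in for the driver poll hook):
      
          #include <linux/sched.h>
          #include <linux/netdevice.h>
      
          static bool busy_poll_for(struct napi_struct *napi, u64 poll_ns)
          {
              /* sched_clock(): fast, nanosecond-resolution, but not exact;
               * good enough to cap a polling window of a few microseconds. */
              u64 end = sched_clock() + poll_ns;
      
              do {
                  if (napi_poll_device(napi))   /* hypothetical helper */
                      return true;              /* data arrived */
                  cpu_relax();
              } while (sched_clock() < end);
      
              return false;
          }
      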
      Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: change sysctl_net_ll_poll into an unsigned int · eb6db622
      Eliezer Tamir authored
      There is no reason for sysctl_net_ll_poll to be an unsigned long.
      Change it into an unsigned int.
      Fix the proc handler.
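      
      A small sketch of the kind of type/handler mismatch being fixed
      (illustrative table entry): an unsigned long needs
      proc_doulongvec_minmax, whereas an unsigned int can use proc_dointvec,
      and maxlen must match the variable's type:
      
          unsigned int sysctl_net_ll_poll;   /* was: unsigned long */
      
          static struct ctl_table ll_table[] = {
              {
                  .procname     = "low_latency_poll",
                  .data         = &sysctl_net_ll_poll,
                  .maxlen       = sizeof(unsigned int),   /* was sizeof(unsigned long) */
                  .mode         = 0644,
                  .proc_handler = proc_dointvec,          /* was proc_doulongvec_minmax */
              },
              { }
          };
      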
      Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 14 Jun, 2013 18 commits
  3. 13 Jun, 2013 10 commits
  4. 12 Jun, 2013 3 commits