1. 27 Jul, 2019 6 commits
  2. 26 Jul, 2019 3 commits
  3. 25 Jul, 2019 8 commits
    • Merge branch 'tipc-link-changeover-issues' · b591c6f6
      David S. Miller authored
      Tuong Lien says:
      
      ====================
      tipc: link changeover issues
      
      This patch series resolves some issues found with the current link
      changeover mechanism; it also includes an optimization for the link
      synching.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: fix changeover issues due to large packet · 2320bcda
      Tuong Lien authored
      When the interfaces' MTU is changed (especially in the case of a
      bonding) and the TIPC links are brought up and down within a short
      time, a couple of issues were detected in the current link changeover
      mechanism:
      
      1) When one link comes up but is immediately forced down again, the
      failover procedure is carried out to fail over all the messages in
      the link's transmq queue onto the other working link. The link and
      node state is also set to FAILINGOVER as part of the process. Each
      message is transmitted as a FAILOVER_MSG, so its size grows by 40
      bytes (the tunnel message header size). This is no problem as long as
      the original message is not larger than the link's MTU - 40, which is
      indeed the maximum size of a normal payload message. In the situation
      above, however, because the link has only just come up, the messages
      in its transmq are mostly SYNCH_MSGs generated by the link synching
      procedure, so their size may already be at that maximum. When a
      FAILOVER_MSG is built on top of such a SYNCH_MSG, its size exceeds
      the link's MTU. As a result, the messages are dropped silently, the
      failover procedure never completes, and the link cannot leave the
      FAILINGOVER state, so it cannot be re-established.
      
      2) The same scenario can occur even more easily when the links' MTUs
      are configured differently or are being changed. In that case, as
      soon as a large message in the failed link's transmq queue has been
      built and fragmented with an MTU larger than the working link's, the
      issue appears (no prior link synching is needed).
      
      3) The link synching procedure faces the same issue, but since
      synching only starts upon receipt of a SYNCH_MSG, dropping such a
      message does not lead to a state deadlock; it is still not the
      intended behaviour.
      
      Issues 1) and 3) are resolved by the preceding commit in this series
      ("tipc: optimize link synching mechanism"), which generates only a
      dummy SYNCH_MSG (i.e. without data) at link synching, so the size of
      any subsequent FAILOVER_MSG never exceeds the link's MTU.
      
      For issue 2), the only solution is to fragment the messages in the
      failed link's transmq queue according to the working link's MTU so
      that they can then be failed over. A new function is added to
      accomplish this: the result is still a TUNNEL_PROTOCOL/FAILOVER_MSG,
      but if the original message is too large it is fragmented and
      reassembled at the receiving side (the size constraint involved is
      sketched after this entry).
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Tuong Lien <tuong.t.lien@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
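      To illustrate the size constraint described above, here is a minimal,
      self-contained sketch (not TIPC's actual code; the constant and helper
      names are hypothetical) of the check that decides whether an original
      message still fits into a single FAILOVER_MSG on the working link or
      must be fragmented first:

      #include <stdbool.h>
      #include <stdint.h>

      /* Assumption from the commit message: the tunnel (FAILOVER_MSG)
       * header adds 40 bytes on top of the original message. */
      #define TUNNEL_HDR_SIZE 40u

      /* Hypothetical helper: true when the original message, once wrapped
       * in a FAILOVER_MSG, would exceed the working link's MTU and must
       * therefore be fragmented and reassembled at the receiver. */
      static bool needs_fragmentation(uint32_t orig_msg_size,
                                      uint32_t working_link_mtu)
      {
              return orig_msg_size + TUNNEL_HDR_SIZE > working_link_mtu;
      }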
    • tipc: optimize link synching mechanism · 4929a932
      Tuong Lien authored
      This commit, along with the next one, resolves the issues with the
      link changeover mechanism. See that commit for details.

      Basically, for link synching we now send only a single ("dummy")
      SYNCH message to the peer. The SYNCH message does not contain any
      data, just a header conveying the synch point to the peer (a
      hypothetical header layout is sketched after this entry).

      A new node capability flag ("TIPC_TUNNEL_ENHANCED") is introduced to
      preserve backward compatibility.
      Acked-by: Ying Xue <ying.xue@windriver.com>
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Suggested-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Tuong Lien <tuong.t.lien@dektech.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
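      As an illustration only (these structure and constant names are
      hypothetical, not TIPC's real message layout), a "dummy" SYNCH
      message is essentially a header-only message whose size field equals
      the header size and which carries nothing but the synch point:

      #include <stdint.h>

      enum { SYNCH_MSG = 1 };               /* hypothetical type value */

      struct synch_hdr {                    /* hypothetical header layout */
              uint16_t msg_type;            /* SYNCH_MSG */
              uint16_t size;                /* header only: no payload */
              uint32_t synch_point;         /* sequence number to sync to */
      };

      static void build_dummy_synch(struct synch_hdr *h, uint32_t syncpt)
      {
              h->msg_type    = SYNCH_MSG;
              h->size        = sizeof(*h);  /* "dummy": size == header size */
              h->synch_point = syncpt;
      }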
    • ptp: ptp_dte: remove redundant dev_err message · 37f7c66f
      Ding Xiang authored
      devm_ioremap_resource() already prints an error message on failure,
      so remove the redundant dev_err() call (the resulting pattern is
      sketched after this entry).
      Signed-off-by: Ding Xiang <dingxiang@cmss.chinamobile.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
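      A minimal sketch of the resulting probe() pattern (illustrative only,
      not the driver's exact code): devm_ioremap_resource() logs its own
      diagnostic on failure, so the caller simply propagates the error:

      #include <linux/platform_device.h>
      #include <linux/io.h>
      #include <linux/err.h>

      static int example_probe(struct platform_device *pdev)
      {
              struct resource *res;
              void __iomem *regs;

              res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
              regs = devm_ioremap_resource(&pdev->dev, res);
              if (IS_ERR(regs))
                      return PTR_ERR(regs); /* no dev_err(): already logged */

              /* ... rest of probe ... */
              return 0;
      }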
    • Merge branch 'mlxsw-Two-small-updates' · f2ad83af
      David S. Miller authored
      Ido Schimmel says:
      
      ====================
      mlxsw: Two small updates
      
      Patch #1, from Amit, exposes the size of the key-value database (KVD)
      where different entries (e.g., routes, neighbours) are stored in the
      device. This allows users to understand how many entries can be
      offloaded and is also useful for writing scale tests.
      
      Patch #2 increases the number of IPv6 nexthop groups mlxsw can offload.
      The problem and solution are explained in detail in the commit message.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_router: Increase scale of IPv6 nexthop groups · fc25996e
      Ido Schimmel authored
      Unlike IPv4, the kernel does not consolidate IPv6 nexthop groups. To
      avoid exhausting the device's adjacency table - where nexthops are
      stored - the driver does this consolidation instead.
      
      Each nexthop group is hashed by XOR-ing the interface indexes of all
      the member nexthop devices. However, the individual ifindexes are not
      themselves hashed before being XOR-ed, which can result in identical
      keys for different groups and, eventually, an -EBUSY error from
      rhashtable because too many objects end up chained on the same key.

      Improve the situation by hashing the ifindex itself (see the sketch
      after this entry).
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
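      A rough userspace sketch of the idea (the helper names are
      hypothetical and the toy mixer stands in for the kernel's jhash):
      hashing each ifindex before XOR-ing it into the group key makes
      collisions between different groups far less likely than XOR-ing the
      raw ifindexes directly:

      #include <stdint.h>
      #include <stddef.h>

      /* Toy 32-bit integer mixer (stand-in for jhash). */
      static uint32_t mix32(uint32_t x)
      {
              x ^= x >> 16;
              x *= 0x7feb352dU;
              x ^= x >> 15;
              x *= 0x846ca68bU;
              x ^= x >> 16;
              return x;
      }

      /* Hypothetical key derivation for a nexthop group. */
      static uint32_t nh_group_key(const int *ifindexes, size_t count)
      {
              uint32_t key = 0;

              for (size_t i = 0; i < count; i++)
                      key ^= mix32((uint32_t)ifindexes[i]);
              return key;
      }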
    • mlxsw: spectrum: Expose KVD size for Spectrum-2 · b06689cc
      Amit Cohen authored
      Unlike Spectrum-1, the KVD (Key-value database) of Spectrum-2 is not
      partitioned, so only expose the entire KVD size. This enables users to
      query the total size of the KVD.
      Signed-off-by: Amit Cohen <amitc@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: sfc: falcon: convert to i2c_new_dummy_device · c93496e9
      Wolfram Sang authored
      Move from i2c_new_dummy() to i2c_new_dummy_device(). We now get an
      ERR_PTR on failure, which we use in the error handling (the pattern
      is sketched after this entry).
      Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
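      A minimal sketch of the new pattern (illustrative only, not the
      driver's exact code): i2c_new_dummy_device() returns an ERR_PTR on
      failure instead of NULL, so the result is checked with
      IS_ERR()/PTR_ERR():

      #include <linux/i2c.h>
      #include <linux/err.h>

      static int attach_dummy_client(struct i2c_adapter *adap, u16 addr,
                                     struct i2c_client **out)
      {
              struct i2c_client *client = i2c_new_dummy_device(adap, addr);

              if (IS_ERR(client))
                      return PTR_ERR(client); /* propagate the encoded error */

              *out = client;
              return 0;
      }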
  4. 24 Jul, 2019 23 commits