1. 11 Nov, 2018 18 commits
      nfp: flower: offload tunnel decap rules via indirect TC blocks · 3166dd07
      John Hurley authored
      Previously, TC block tunnel decap rules were only offloaded when a
      callback was triggered through registration of the rule's egress device.
      This meant that the driver had no access to the ingress netdev and so
      could not verify it was the same tunnel type that the rule implied.
      
      Register tunnel devices for indirect TC block offloads in NFP, giving
      access to new rules based on the ingress device rather than egress. Use
      this to verify the netdev type of VXLAN and Geneve based rules and offload
      the rules to HW if applicable.
      
      Tunnel registration is done via a netdev notifier. On notifier
      registration, this is triggered for already existing netdevs. This means
      that NFP can register for offloads from devices that exist before it is
      loaded (filter rules will be replayed from the TC core). Similarly, on
      notifier unregister, a call is triggered for each currently active netdev.
      This allows the driver to unregister any indirect block callbacks that may
      still be active.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3166dd07
      nfp: flower: increase scope of netdev checking functions · 65b7970e
      John Hurley authored
      Both the actions and tunnel_conf files contain local functions that check
      the type of an input netdev. In preparation for re-use with tunnel offload
      via indirect blocks, move these to static inline functions in a header
      file.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      65b7970e
      nfp: flower: allow non repr netdev offload · 7885b4fc
      John Hurley authored
      Previously the offload functions in NFP assumed that the ingress (or
      egress) netdev passed to them was an nfp repr.
      
      Modify the driver to permit the passing of non repr netdevs as the ingress
      device for an offload rule candidate. This may include devices such as
      tunnels. The driver should then base its offload decision on a combination
      of ingress device and egress port for a rule.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7885b4fc
      net: sched: register callbacks for indirect tc block binds · 7f76fa36
      John Hurley authored
      Currently drivers can register to receive TC block bind/unbind callbacks
      by implementing the setup_tc ndo in any of their given netdevs. However,
      drivers may also be interested in binds to higher level devices (e.g.
      tunnel drivers) to potentially offload filters applied to them.
      
      Introduce indirect block devs, which allow drivers to register callbacks
      for block binds on other devices. The callback is triggered when the
      device is bound to a block, allowing the driver to register for rules
      applied to that block using already available functions.
      
      Freeing an indirect block callback will trigger an unbind event (if
      necessary) to direct the driver to remove any offloaded rules and
      unregister any block rule callbacks. It is the responsibility of the
      implementing driver to clean up any registered indirect block callbacks
      before exiting, if the block is still active at such a time.
      
      Allow registering an indirect block dev callback for a device that is
      already bound to a block. In this case (if it is an ingress block),
      register and also trigger the callback meaning that any already installed
      rules can be replayed to the calling driver.
      Signed-off-by: John Hurley <john.hurley@netronome.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7f76fa36
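The register/replay and unregister/unbind behaviour described in this commit can be sketched in plain C. This is a userspace simulation of the idea only, not the kernel API: the names `indir_dev`, `indir_register`, `indir_unregister`, and `count_events` are all illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

/* Userspace sketch of the "indirect block" registration idea.
 * All names here are illustrative, not kernel API. */

typedef void (*block_cb_t)(void *cb_priv, bool bind);

struct indir_dev {
    bool bound;        /* is the device currently bound to a block? */
    block_cb_t cb;     /* driver's indirect callback, or NULL */
    void *cb_priv;
};

/* Register a callback for block binds on another device. If the
 * device is already bound, fire the callback right away so that
 * already installed rules can be replayed to the calling driver. */
void indir_register(struct indir_dev *dev, block_cb_t cb, void *cb_priv)
{
    dev->cb = cb;
    dev->cb_priv = cb_priv;
    if (dev->bound)
        cb(cb_priv, true);
}

/* Unregister: if the block is still active, fire an unbind event so
 * the driver removes any offloaded rules before going away. */
void indir_unregister(struct indir_dev *dev)
{
    if (dev->cb && dev->bound)
        dev->cb(dev->cb_priv, false);
    dev->cb = NULL;
    dev->cb_priv = NULL;
}

/* Tiny demo callback: counts bind/unbind events in an int[2]
 * ([0] = unbinds, [1] = binds). */
void count_events(void *cb_priv, bool bind)
{
    ((int *)cb_priv)[bind ? 1 : 0]++;
}
```

Registering against an already-bound device thus delivers one immediate bind event, which is the replay hook the commit describes.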
      Merge branch 'PHYID-matching-macros' · d1ce0114
      David S. Miller authored
      Heiner Kallweit says:
      
      ====================
      net: phy: add macros for PHYID matching in PHY driver config
      
      Add macros for PHYID matching to be used in PHY driver configs.
      By using these macros some boilerplate code can be avoided.
      
      Use them initially in the Realtek PHY drivers.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d1ce0114
      net: phy: realtek: use new PHYID matching macros · ca494936
      Heiner Kallweit authored
      Use new macros for PHYID matching to avoid boilerplate code.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ca494936
      net: phy: add macros for PHYID matching · aa2af2eb
      Heiner Kallweit authored
      Add macros for PHYID matching to be used in PHY driver configs.
      By using these macros some boilerplate code can be avoided.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      aa2af2eb
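The idea behind such matching macros can be sketched in userspace C. `GENMASK` mirrors the kernel helper from `linux/bits.h`; the three `PHY_ID_MATCH_*` granularities follow the spirit of the patch (exact ID, model, vendor), but the exact in-tree definitions may differ, and `phy_id_matches` is a hypothetical helper added here purely for illustration.

```c
#include <stdint.h>

/* Build a 32-bit mask with bits h..l set, like the kernel's GENMASK. */
#define GENMASK(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

/* Sketches of mask-based PHYID matching: each expands to an
 * (id, mask) pair at a different granularity. Illustrative only. */
#define PHY_ID_MATCH_EXACT(id)  (id), GENMASK(31, 0)
#define PHY_ID_MATCH_MODEL(id)  (id), GENMASK(31, 4)
#define PHY_ID_MATCH_VENDOR(id) (id), GENMASK(31, 10)

/* A driver entry matches a device when the masked IDs agree. */
int phy_id_matches(uint32_t dev_id, uint32_t drv_id, uint32_t mask)
{
    return (dev_id & mask) == (drv_id & mask);
}
```

A driver config can then spell out one macro per entry instead of repeating a `phy_id`/`phy_id_mask` pair by hand, which is the boilerplate the series removes.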
      Merge branch 'phylib-simplifications' · fa28a2b2
      David S. Miller authored
      Heiner Kallweit says:
      
      ====================
      net: phy: further phylib simplifications after recent changes to the state machine
      
      After the recent changes to the state machine phylib can be further
      simplified (w/o having to make any assumptions).
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fa28a2b2
      net: phy: improve and inline phy_change · 34d884e3
      Heiner Kallweit authored
      Now that phy_mac_interrupt() no longer calls phy_change(), it is
      called from phy_interrupt() only. Therefore phy_interrupt_is_valid()
      always returns true and the check can be removed.
      In case of PHY_HALTED phy_interrupt() bails out immediately,
      therefore the second check for PHY_HALTED including the call to
      phy_disable_interrupts() can be removed.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      34d884e3
      net: phy: simplify phy_mac_interrupt and related functions · d73a2156
      Heiner Kallweit authored
      When using phy_mac_interrupt() the irq number is set to
      PHY_IGNORE_INTERRUPT, therefore phy_interrupt_is_valid() returns false.
      As a result phy_change() effectively just calls phy_trigger_machine()
      when called from phy_mac_interrupt() via phy_change_work(). So we can
      call phy_trigger_machine() from phy_mac_interrupt() directly and
      remove some now unneeded code.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d73a2156
      net: phy: don't set state PHY_CHANGELINK in phy_change · 8deeb630
      Heiner Kallweit authored
      State PHY_CHANGELINK isn't needed here; we can call the state machine
      directly. We just have to remove the check for phy_polling_mode() to
      make this also work in interrupt mode. Removing this check doesn't
      cause any overhead because, when not polling, the state machine is
      called only when required by some event.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8deeb630
      Merge branch 'remove-PHY_HAS_INTERRUPT' · d79e26a7
      David S. Miller authored
      Heiner Kallweit says:
      
      ====================
      net: phy: replace PHY_HAS_INTERRUPT with a check for config_intr and ack_interrupt
      
      Flag PHY_HAS_INTERRUPT is used only here, for this small check. I think
      using interrupts isn't possible if a driver defines neither the
      config_intr nor the ack_interrupt callback. So we can replace checking
      flag PHY_HAS_INTERRUPT with checking for these callbacks.
      This allows the flag to be removed from all driver configs.
      
      v2:
      - add helper for check in patch 1
      - remove PHY_HAS_INTERRUPT from all drivers, not only Realtek
      - remove flag PHY_HAS_INTERRUPT completely
      
      v3:
      - rebase patch 2
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d79e26a7
      net: phy: remove flag PHY_HAS_INTERRUPT from driver configs · a4307c0e
      Heiner Kallweit authored
      Now that flag PHY_HAS_INTERRUPT has been replaced with a check for
      callbacks config_intr and ack_interrupt, we can remove setting this
      flag from all driver configs.
      Last but not least remove flag PHY_HAS_INTERRUPT completely.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4307c0e
      net: phy: replace PHY_HAS_INTERRUPT with a check for config_intr and ack_interrupt · 0d2e778e
      Heiner Kallweit authored
      Flag PHY_HAS_INTERRUPT is used only here, for this small check. I think
      using interrupts isn't possible if a driver defines neither the
      config_intr nor the ack_interrupt callback. So we can replace checking
      flag PHY_HAS_INTERRUPT with checking for these callbacks.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0d2e778e
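A minimal sketch of the replacement check, assuming a helper that tests for both callbacks. The trimmed-down struct, the `void *` parameter type, and `phy_noop` are illustrative; only the helper's name follows the patch description, and the exact in-tree signature may differ.

```c
#include <stdbool.h>
#include <stddef.h>

/* Trimmed-down driver struct: only the two callbacks relevant here. */
struct phy_driver {
    int (*config_intr)(void *phydev);
    int (*ack_interrupt)(void *phydev);
};

/* Interrupts are only usable when a driver implements both the
 * config_intr and ack_interrupt callbacks; a check like this is
 * what replaces the old PHY_HAS_INTERRUPT flag. */
bool phy_drv_supports_irq(const struct phy_driver *drv)
{
    return drv->config_intr != NULL && drv->ack_interrupt != NULL;
}

/* Dummy callback used in the usage example below. */
int phy_noop(void *phydev)
{
    (void)phydev;
    return 0;
}
```

The point of the capability check over the flag: it cannot drift out of sync with what the driver actually implements.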
      sctp: Fix SKB list traversal in sctp_intl_store_ordered(). · e15e067d
      David S. Miller authored
      Same change as made to sctp_intl_store_reasm().
      
      To be fully correct, an iterator has an undefined value when something
      like skb_queue_walk() naturally terminates.
      
      This will actually matter when SKB queues are converted over to
      list_head.
      
      Formalize what this code ends up doing with the current
      implementation.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e15e067d
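The pattern being formalized can be sketched with a plain circular doubly linked list: record the insertion point explicitly instead of using the walk cursor after the loop has naturally terminated. All names here (`node`, `insert_ordered`, ...) are illustrative stand-ins, not the sctp code.

```c
#include <stddef.h>

/* Minimal stand-in for an skb queue: a circular doubly linked list
 * with a sentinel head, as in the kernel. Names are illustrative. */
struct node { struct node *next, *prev; int seq; };

void list_init(struct node *head)
{
    head->next = head->prev = head;
}

void insert_before(struct node *pos, struct node *n)
{
    n->prev = pos->prev;
    n->next = pos;
    pos->prev->next = n;
    pos->prev = n;
}

/* Insert n in ascending seq order. The pre-fix pattern used the walk
 * cursor after the loop ended, which is undefined when the walk
 * terminates naturally. Here the insertion point is recorded
 * explicitly, defaulting to the sentinel head (i.e. append at the
 * tail) when the walk finishes without finding a larger element. */
void insert_ordered(struct node *head, struct node *n)
{
    struct node *pos = head;     /* default: append at tail */
    struct node *it;

    for (it = head->next; it != head; it = it->next) {
        if (it->seq > n->seq) {  /* first larger element found */
            pos = it;
            break;
        }
    }
    insert_before(pos, n);
}
```

This is exactly the shape that survives a conversion of the queue to `list_head`: nothing outside the loop ever reads the cursor.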
      sctp: Fix SKB list traversal in sctp_intl_store_reasm(). · 348bbc25
      David S. Miller authored
      To be fully correct, an iterator has an undefined value when something
      like skb_queue_walk() naturally terminates.
      
      This will actually matter when SKB queues are converted over to
      list_head.
      
      Formalize what this code ends up doing with the current
      implementation.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      348bbc25
      iucv: Remove SKB list assumptions. · 9e733177
      David S. Miller authored
      Eliminate the assumption that SKBs and SKB list heads can
      be cast to each other in SKB list handling code.
      
      This change also appears to fix a bug since the list->next pointer is
      sampled outside of holding the SKB queue lock.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9e733177
      brcmfmac: Use standard SKB list accessors in brcmf_sdiod_sglist_rw. · 4a5a553d
      David S. Miller authored
      Instead of direct SKB list pointer accesses.
      
      The loops in this function had to be rewritten to accommodate this
      more easily.
      
      The first loop iterates now over the target list in the outer loop,
      and triggers an mmc data operation when the per-operation limits are
      hit.
      
      Then after the loops, if we have any residue, we trigger the last
      and final operation.
      
      For the page aligned workaround, where we have to copy the read data
      back into the original list of SKBs, we use a two-tiered loop.  The
      outer loop stays the same and iterates over pktlist, and then we have
      an inner loop which uses skb_peek_next().  The break logic has been
      simplified because we know that the aggregate length of the SKBs in
      the source and destination lists are the same.
      
      This change also ends up fixing a bug, having to do with the
      maintenance of the seg_sz variable and how it drove the outermost
      loop.  It begins as:
      
      	seg_sz = target_list->qlen;
      
      ie. the number of packets in the target_list queue.  The loop
      structure was then:
      
      	while (seg_sz) {
      		...
      		while (not at end of target_list) {
      			...
      			sg_cnt++
      			...
      		}
      		...
      		seg_sz -= sg_cnt;
      
      The assumption built into that last statement is that sg_cnt counts
      how many packets from target_list have been fully processed by the
      inner loop.  But this is not true.
      
      If we hit one of the limits, such as the max segment size or the max
      request size, we will break and copy a partial packet, then continue
      back up to the top of the outermost loop.
      
      With the new loops we don't have this problem as we don't guard the
      loop exit with a packet count, but instead use the progression of the
      pkt_next SKB through the list to the end.  The general structure is:
      
      	sg_cnt = 0;
      	skb_queue_walk(target_list, pkt_next) {
      		pkt_offset = 0;
      		...
      		sg_cnt++;
      		...
      		while (pkt_offset < pkt_next->len) {
      			pkt_offset += sg_data_size;
      			if (queued up max per request)
      				mmc_submit_one();
      		}
      	}
      	if (sg_cnt)
      		mmc_submit_one();
      
      The variables that maintain where we are in the MMC command state such
      as req_sz, sg_cnt, and sgl are reset when we emit one of these full
      sized requests.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4a5a553d
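The restructured loop described above can be sketched as a userspace simulation: the outer loop walks the packet list, the inner loop carves each packet into scatter segments, and a request is emitted whenever the per-request budget fills, plus one final request for any residue. The limits, `struct pkt`, and the function names are all illustrative, not the driver's.

```c
#include <stddef.h>

#define MAX_SEG_SZ  64   /* max bytes per scatter segment (illustrative) */
#define MAX_REQ_SZ 256   /* max bytes per mmc request (illustrative) */

/* Stand-in for an SKB: just a next pointer and a length. */
struct pkt { const struct pkt *next; int len; };

/* Demo sink: accumulates total bytes submitted across requests. */
int total_submitted;

void record_submit(int bytes)
{
    total_submitted += bytes;
}

/* Walk the packet list; exit is driven by reaching the end of the
 * list, not by a separately maintained packet count. Returns the
 * number of requests submitted via the callback. */
int sg_submit_all(const struct pkt *pktlist, void (*submit)(int bytes))
{
    int req_sz = 0;      /* bytes queued in the current request */
    int sg_cnt = 0;      /* segments queued in the current request */
    int nreq = 0;
    const struct pkt *pkt_next;

    for (pkt_next = pktlist; pkt_next; pkt_next = pkt_next->next) {
        int pkt_offset = 0;

        while (pkt_offset < pkt_next->len) {
            int sg_data_sz = pkt_next->len - pkt_offset;

            if (sg_data_sz > MAX_SEG_SZ)
                sg_data_sz = MAX_SEG_SZ;
            if (sg_data_sz > MAX_REQ_SZ - req_sz)
                sg_data_sz = MAX_REQ_SZ - req_sz;

            req_sz += sg_data_sz;
            sg_cnt++;
            pkt_offset += sg_data_sz;

            if (req_sz == MAX_REQ_SZ) {  /* request full: submit it */
                submit(req_sz);
                nreq++;
                req_sz = 0;              /* reset per-request state */
                sg_cnt = 0;
            }
        }
    }
    if (sg_cnt) {                        /* last and final operation */
        submit(req_sz);
        nreq++;
    }
    return nreq;
}
```

Note how a packet split across two requests is handled for free: the inner loop simply resumes at `pkt_offset` after a submit, which is the case the old `seg_sz -= sg_cnt` accounting got wrong.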
  2. 10 Nov, 2018 22 commits