1. 25 Aug, 2021 4 commits
    • Juergen Gross's avatar
      xen/netfront: don't read data from request on the ring page · 162081ec
      Juergen Gross authored
      In order to avoid a malicious backend being able to influence the local
      processing of a request, build the request locally first and then copy
      it to the ring page. Any read of the request that influences the
      processing in the frontend needs to be done on the local instance.
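      A minimal sketch of the idea (simplified; the variable names are
      placeholders, not the exact diff): build the request in a local struct,
      publish it to the shared ring with a single assignment, and never read
      it back from the ring afterwards.

      	struct xen_netif_tx_request local = {
      		.id     = id,
      		.gref   = gref,
      		.offset = offset,
      		.size   = len,
      		.flags  = flags,
      	};

      	/* single store to the shared ring page */
      	*RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt) = local;

      	/* any later decision uses 'local', never the ring copy */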
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      162081ec
    • Juergen Gross's avatar
      xen/netfront: read response from backend only once · 8446066b
      Juergen Gross authored
      In order to avoid problems in case the backend is modifying a response
      on the ring page while the frontend has already seen it, just read the
      response into a local buffer in one go and then operate on that buffer
      only.
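      The receive side follows the same pattern; a hedged sketch:

      	struct xen_netif_rx_response rx;

      	/* one copy out of the shared ring page */
      	memcpy(&rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(rx));

      	/* all validation below looks only at the local copy */
      	if (rx.status < 0)
      		/* handle the error using rx, not the ring slot */;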
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8446066b
    • Alok Prasad's avatar
      qed: Enable automatic recovery on error condition. · 755f9053
      Alok Prasad authored
      This patch enables automatic recovery by default in case of various
      error conditions like FW assert, hardware error, etc.
      It also ensures the driver can handle multiple iterations of assertion
      conditions.
      Signed-off-by: Ariel Elior <aelior@marvell.com>
      Signed-off-by: Shai Malin <smalin@marvell.com>
      Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
      Signed-off-by: Alok Prasad <palok@marvell.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      755f9053
    • Gilad Naaman's avatar
      net-next: When a bond has a massive amount of VLANs with IPv6 addresses,... · 406f42fa
      Gilad Naaman authored
      net-next: When a bond has a massive amount of VLANs with IPv6 addresses, performance of changing link state, attaching a VRF, changing an IPv6 address, etc. goes down dramatically.
      
      The source of most of the slowdown is the `dev_addr_lists.c` module,
      which maintains a linked list of HW addresses.
      When using IPv6, this list grows for each IPv6 address added on a
      VLAN, since each IPv6 address has a multicast HW address associated with
      it.
      
      When performing any modification to the involved links, this list is
      traversed many times, often for nothing, all while holding the RTNL
      lock.
      
      Instead, this patch adds an auxiliary rbtree which cuts down
      traversal time significantly.
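      A hedged sketch of the lookup this enables (illustrative names; it assumes
      an rb_node member added to struct netdev_hw_addr by this patch and a tree
      keyed on the address bytes):

      	static struct netdev_hw_addr *ha_rbtree_find(struct rb_root *root,
      						     const unsigned char *addr,
      						     int addr_len)
      	{
      		struct rb_node *n = root->rb_node;

      		while (n) {
      			struct netdev_hw_addr *ha =
      				rb_entry(n, struct netdev_hw_addr, node);
      			int diff = memcmp(addr, ha->addr, addr_len);

      			if (diff < 0)
      				n = n->rb_left;
      			else if (diff > 0)
      				n = n->rb_right;
      			else
      				return ha;	/* O(log n), no list walk */
      		}
      		return NULL;
      	}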
      
      Performance can be seen with the following script:
      
      	#!/bin/bash
      	ip netns del test 2>/dev/null || true
      	ip netns add test
      
      	echo 1 | ip netns exec test tee /proc/sys/net/ipv6/conf/all/keep_addr_on_down > /dev/null
      
      	set -e
      
      	ip -n test link add foo type veth peer name bar
      	ip -n test link add b1 type bond
      	ip -n test link add florp type vrf table 10
      
      	ip -n test link set bar master b1
      	ip -n test link set foo up
      	ip -n test link set bar up
      	ip -n test link set b1 up
      	ip -n test link set florp up
      
      	VLAN_COUNT=1500
      	BASE_DEV=b1
      
      	echo Creating vlans
      	ip netns exec test time -p bash -c "for i in \$(seq 1 $VLAN_COUNT);
      	do ip -n test link add link $BASE_DEV name foo.\$i type vlan id \$i; done"
      
      	echo Bringing them up
      	ip netns exec test time -p bash -c "for i in \$(seq 1 $VLAN_COUNT);
      	do ip -n test link set foo.\$i up; done"
      
      	echo Assigning IPv6 Addresses
      	ip netns exec test time -p bash -c "for i in \$(seq 1 $VLAN_COUNT);
      	do ip -n test address add dev foo.\$i 2000::\$i/64; done"
      
      	echo Attaching to VRF
      	ip netns exec test time -p bash -c "for i in \$(seq 1 $VLAN_COUNT);
      	do ip -n test link set foo.\$i master florp; done"
      
      On an Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz machine, the performance
      before the patch is (truncated):
      
      	Creating vlans
      	real 108.35
      	Bringing them up
      	real 4.96
      	Assigning IPv6 Addresses
      	real 19.22
      	Attaching to VRF
      	real 458.84
      
      After the patch:
      
      	Creating vlans
      	real 5.59
      	Bringing them up
      	real 5.07
      	Assigning IPv6 Addresses
      	real 5.64
      	Attaching to VRF
      	real 25.37
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Jakub Kicinski <kuba@kernel.org>
      Cc: Lu Wei <luwei32@huawei.com>
      Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
      Cc: Taehee Yoo <ap420073@gmail.com>
      Signed-off-by: Gilad Naaman <gnaaman@drivenets.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      406f42fa
  2. 24 Aug, 2021 24 commits
    • Kangmin Park's avatar
      net: bridge: change return type of br_handle_ingress_vlan_tunnel · a37c5c26
      Kangmin Park authored
      br_handle_ingress_vlan_tunnel() is only referenced in
      br_handle_frame(). If br_handle_ingress_vlan_tunnel() is called and
      returns a non-zero value, br_handle_frame() jumps to the drop label.
      
      But br_handle_ingress_vlan_tunnel() always returns 0, so the code
      that checks the return value and jumps to drop has no effect.
      
      Therefore, change the return type of br_handle_ingress_vlan_tunnel() to
      void and remove the if statement in br_handle_frame().
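      The call site change in br_handle_frame() then boils down to (sketch):

      	/* before: the return value was checked although it is always 0 */
      	if (br_handle_ingress_vlan_tunnel(skb, p, nbp_vlan_group_rcu(p)))
      		goto drop;

      	/* after: the helper returns void, so the call site is simply */
      	br_handle_ingress_vlan_tunnel(skb, p, nbp_vlan_group_rcu(p));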
      Signed-off-by: Kangmin Park <l4stpr0gr4m@gmail.com>
      Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
      Link: https://lore.kernel.org/r/20210823102118.17966-1-l4stpr0gr4m@gmail.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      a37c5c26
    • Po-Hsu Lin's avatar
      selftests/net: Use kselftest skip code for skipped tests · 7844ec21
      Po-Hsu Lin authored
      There are several test cases in the net directory that still use
      exit 0 or exit 1 when they need to be skipped. Use the kselftest
      framework skip code instead so it can help us distinguish the
      return status.
      
      Criterion to filter out what should be fixed in net directory:
        grep -r "exit [01]" -B1 | grep -i skip
      
      This change might cause some false-positives if people are running
      these test scripts directly and only checking their return codes,
      which will change from 0 to 4. However I think the impact should be
      small as most of our scripts here are already using this skip code.
      And there will be no such issue if running them with the kselftest
      framework.
      Signed-off-by: Po-Hsu Lin <po-hsu.lin@canonical.com>
      Reviewed-by: Ido Schimmel <idosch@nvidia.com>
      Tested-by: Ido Schimmel <idosch@nvidia.com>
      Link: https://lore.kernel.org/r/20210823085854.40216-1-po-hsu.lin@canonical.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      7844ec21
    • Jakub Kicinski's avatar
      Merge branch 'ethtool-extend-coalesce-uapi' · 3a62c333
      Jakub Kicinski authored
      Yufeng Mo says:
      
      ====================
      ethtool: extend coalesce uAPI
      
      In order to support some configuration in the coalesce uAPI, this series
      extends the coalesce uAPI and adds support for CQE mode.
      
      Below is some test result with HNS3 driver:
      1. old ethtool(ioctl) + new kernel:
      estuary:/$ ethtool -c eth0
      Coalesce parameters for eth0:
      Adaptive RX: on  TX: on
      stats-block-usecs: 0
      sample-interval: 0
      pkt-rate-low: 0
      pkt-rate-high: 0
      
      rx-usecs: 20
      rx-frames: 0
      rx-usecs-irq: 0
      rx-frames-irq: 0
      
      tx-usecs: 20
      tx-frames: 0
      tx-usecs-irq: 0
      tx-frames-irq: 0
      
      rx-usecs-low: 0
      rx-frame-low: 0
      tx-usecs-low: 0
      tx-frame-low: 0
      
      rx-usecs-high: 0
      rx-frame-high: 0
      tx-usecs-high: 0
      tx-frame-high: 0
      
      2. ethtool(netlink with cqe mode) + kernel without cqe mode:
      estuary:/$ ethtool -c eth0
      Coalesce parameters for eth0:
      Adaptive RX: on  TX: on
      stats-block-usecs: n/a
      sample-interval: n/a
      pkt-rate-low: n/a
      pkt-rate-high: n/a
      
      rx-usecs: 20
      rx-frames: 0
      rx-usecs-irq: n/a
      rx-frames-irq: n/a
      
      tx-usecs: 20
      tx-frames: 0
      tx-usecs-irq: n/a
      tx-frames-irq: n/a
      
      rx-usecs-low: n/a
      rx-frame-low: n/a
      tx-usecs-low: n/a
      tx-frame-low: n/a
      
      rx-usecs-high: 0
      rx-frame-high: n/a
      tx-usecs-high: 0
      tx-frame-high: n/a
      
      CQE mode RX: n/a  TX: n/a
      
      3. ethtool(netlink with cqe mode) + kernel with cqe mode:
      estuary:/$ ethtool -c eth0
      Coalesce parameters for eth0:
      Adaptive RX: on  TX: on
      stats-block-usecs: n/a
      sample-interval: n/a
      pkt-rate-low: n/a
      pkt-rate-high: n/a
      
      rx-usecs: 20
      rx-frames: 0
      rx-usecs-irq: n/a
      rx-frames-irq: n/a
      
      tx-usecs: 20
      tx-frames: 0
      tx-usecs-irq: n/a
      tx-frames-irq: n/a
      
      rx-usecs-low: n/a
      rx-frame-low: n/a
      tx-usecs-low: n/a
      tx-frame-low: n/a
      
      rx-usecs-high: 0
      rx-frame-high: n/a
      tx-usecs-high: 0
      tx-frame-high: n/a
      
      CQE mode RX: off  TX: off
      
      4. ethtool(netlink without cqe mode) + kernel with cqe mode:
      estuary:/$ ethtool -c eth0
      Coalesce parameters for eth0:
      Adaptive RX: on  TX: on
      stats-block-usecs: n/a
      sample-interval: n/a
      pkt-rate-low: n/a
      pkt-rate-high: n/a
      
      rx-usecs: 20
      rx-frames: 0
      rx-usecs-irq: n/a
      rx-frames-irq: n/a
      
      tx-usecs: 20
      tx-frames: 0
      tx-usecs-irq: n/a
      tx-frames-irq: n/a
      
      rx-usecs-low: n/a
      rx-frame-low: n/a
      tx-usecs-low: n/a
      tx-frame-low: n/a
      
      rx-usecs-high: 0
      rx-frame-high: n/a
      tx-usecs-high: 0
      tx-frame-high: n/a
      
      Change log:
      V2 -> V3:
               fix some warning on W=1 builds in #2
      
      V1 -> V2:
               1. fix compile error using allmodconfig in #2
               2. move some property-related modifications from #2 to #1
                  for better review suggested by Jakub Kicinski.
      
      Change log from RFC:
      V3 -> V4:
               add document explaining the difference between CQE and EQE
               in #1 suggested by Jakub Kicinski.
      
      V2 -> V3:
               1. split #1 into adding new parameter and adding new attributes.
               2. use NLA_POLICY_MAX(NLA_U8, 1) instead of NLA_U8.
               3. modify the description of CQE in Document.
      
      V1 -> V2:
               refactor #1&#2 in V1 suggested by Jakub Kicinski.
      ====================
      
      Link: https://lore.kernel.org/r/1629444920-25437-1-git-send-email-moyufeng@huawei.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      3a62c333
    • Yufeng Mo's avatar
      net: hns3: add ethtool support for CQE/EQE mode configuration · cce1689e
      Yufeng Mo authored
      Add support in ethtool for switching EQE/CQE mode.
      Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
      Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      cce1689e
    • Yufeng Mo's avatar
      net: hns3: add support for EQE/CQE mode configuration · 9f0c6f4b
      Yufeng Mo authored
      For devices whose version is V3 or above, the GL can
      select EQE or CQE mode, so add support for it.
      
      In CQE mode, the coalesce timer will restart when the first new
      completion occurs, while in EQE mode, the timer will not restart.
      Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
      Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      9f0c6f4b
    • Yufeng Mo's avatar
      ethtool: extend coalesce setting uAPI with CQE mode · f3ccfda1
      Yufeng Mo authored
      In order to support more coalesce parameters through netlink,
      add two new parameters, kernel_coal and extack, to .set_coalesce
      and .get_coalesce, so that extra info can be returned to user space
      via the netlink API.
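      The resulting callbacks in struct ethtool_ops look roughly like this
      (a sketch; see include/linux/ethtool.h for the authoritative prototypes):

      	int	(*get_coalesce)(struct net_device *dev,
      				struct ethtool_coalesce *coal,
      				struct kernel_ethtool_coalesce *kernel_coal,
      				struct netlink_ext_ack *extack);
      	int	(*set_coalesce)(struct net_device *dev,
      				struct ethtool_coalesce *coal,
      				struct kernel_ethtool_coalesce *kernel_coal,
      				struct netlink_ext_ack *extack);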
      Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
      Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      f3ccfda1
    • Yufeng Mo's avatar
      ethtool: add two coalesce attributes for CQE mode · 029ee6b1
      Yufeng Mo authored
      Currently, many drivers support CQE mode configuration: some configure
      it as fixed at initialization time, others provide an interface to
      change it via ethtool private flags. In order to make it more generic,
      add two new coalesce attributes, 'ETHTOOL_A_COALESCE_USE_CQE_TX' and
      'ETHTOOL_A_COALESCE_USE_CQE_RX', so that these parameters can be
      accessed through the ethtool netlink coalesce uAPI.
      
      Also add a new structure, kernel_ethtool_coalesce, so that the
      new parameters can be added to this struct.
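      The new structure is tiny for now and only carries the CQE mode knobs
      (a sketch of its shape; field names are assumed from the attribute names
      above):

      	struct kernel_ethtool_coalesce {
      		u8 use_cqe_mode_tx;
      		u8 use_cqe_mode_rx;
      	};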
      Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
      Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      029ee6b1
    • Jakub Kicinski's avatar
      netdevice: move xdp_rxq within netdev_rx_queue · 95d1d249
      Jakub Kicinski authored
      Both struct netdev_rx_queue and struct xdp_rxq_info are cacheline
      aligned. This causes extra padding before and after the xdp_rxq
      member. Move the member upfront, so that it's naturally aligned.
      
      Before:
      	/* size: 256, cachelines: 4, members: 6 */
      	/* sum members: 160, holes: 1, sum holes: 40 */
      	/* padding: 56 */
      	/* paddings: 1, sum paddings: 36 */
      	/* forced alignments: 1, forced holes: 1, sum forced holes: 40 */
      
      After:
      	/* size: 192, cachelines: 3, members: 6 */
      	/* padding: 32 */
      	/* paddings: 1, sum paddings: 36 */
      	/* forced alignments: 1 */
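      Illustrative shape of the struct after the move (simplified): with the
      ____cacheline_aligned member at offset 0, its alignment no longer creates
      a hole in the middle of the structure.

      	struct netdev_rx_queue {
      		struct xdp_rxq_info	xdp_rxq;	/* cacheline aligned, now first */
      		/* rps_map, rps_flow_table, kobj, dev, ... follow */
      	} ____cacheline_aligned_in_smp;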
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Link: https://lore.kernel.org/r/20210823180135.1153608-1-kuba@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      95d1d249
    • Heiner Kallweit's avatar
      r8169: enable ASPM L0s state · 18a9eae2
      Heiner Kallweit authored
      ASPM is disabled completely because we've seen different types of
      problems in the past. However it seems these problems occurred with
      L1 or L1 sub-states only. On all the chip versions I've seen the
      acceptable L0s exit latency is 512ns. This should be short enough not
      to cause problems. If the actual L0s exit latency of the PCIe link
      is bigger than 512ns then the PCI core will disable L0s anyway.
      So let's give it a try and disable L1 and L1 sub-states only.
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      18a9eae2
    • Yunsheng Lin's avatar
      page_pool: use relaxed atomic for release side accounting · 7fb9b66d
      Yunsheng Lin authored
      There is no need to synchronize the accounting updates, so
      use relaxed atomics to avoid memory barriers in the
      data path.
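      A sketch of the difference (pages_state_release_cnt is the release-side
      counter in struct page_pool; this is illustrative, not the exact diff):

      	/* ordered flavour: implies memory barriers we do not need here */
      	release_cnt = atomic_inc_return(&pool->pages_state_release_cnt);

      	/* relaxed flavour: still atomic, but no extra barriers on weakly
      	 * ordered architectures
      	 */
      	release_cnt = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);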
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
      Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7fb9b66d
    • David S. Miller's avatar
      Merge branch 'dsa-sw-bridging' · 669f047e
      David S. Miller authored
      Vladimir Oltean says:
      
      ====================
      Plug holes in DSA's software bridging support
      
      Changes in v2:
      - Make sure that leaving an unoffloaded bridge works well too
      - Remove a set but unused variable
      - Tweak a commit message
      
      This series addresses some oddities reported by Alvin while he was
      working on the new rtl8365mb driver (a driver which does not implement
      bridge offloading for now, and relies on software bridging).
      
      First is that DSA behaves, in the absence of a .port_bridge_join method, as
      if the operation succeeds, and does not kick off its internal procedures
      for software bridging (the same procedures that were written for indirect
      software bridging, meaning bridging with an unoffloaded software LAG).
      
      Second is that even after being patched to treat ports with software
      bridging as standalone, we still don't get rid of bridge VLANs: even
      though we have code to ignore them, that code manages to get bypassed.
      This is in fact a recurring issue which was brought up by Tobias
      Waldekranz a while ago, but the solution never made it to the git tree.
      
      After debugging with Florian the last time:
      https://patchwork.kernel.org/project/netdevbpf/patch/20210320225928.2481575-3-olteanv@gmail.com/
      I became very concerned about sending these patches to stable kernels.
      They are relatively large reworks, and they are only tested properly on
      net-next.
      
      A few commands on my test vehicle which has ds->vlan_filtering_is_global
      set to true:
      
      | Nothing is committed to hardware when we add VLAN 100 on a standalone
      | port
      $ ip link add link sw0p2 name sw0p2.100 type vlan id 100
      | When a neighbor port joins a VLAN-aware bridge, VLAN filtering gets
      | enabled globally on the switch. This replays the VLAN 100 from
      | sw0p2.100 and also installs VLAN 1 from the bridge on sw0p0.
      $ ip link add br0 type bridge vlan_filtering 1 && ip link set sw0p0 master br0
      [   97.948087] sja1105 spi2.0: Reset switch and programmed static config. Reason: VLAN filtering
      [   97.957989] sja1105 spi2.0: sja1105_bridge_vlan_add: port 2 vlan 100
      [   97.964442] sja1105 spi2.0: sja1105_bridge_vlan_add: port 4 vlan 100
      [   97.971202] device sw0p0 entered promiscuous mode
      [   97.976129] sja1105 spi2.0: sja1105_bridge_vlan_add: port 0 vlan 1
      [   97.982640] sja1105 spi2.0: sja1105_bridge_vlan_add: port 4 vlan 1
      | We can see that sw0p2, the standalone port, is now filtering because
      | of the bridge
      $ ethtool -k sw0p2 | grep vlan
      rx-vlan-filter: on [fixed]
      | When we make the bridge VLAN-unaware, the 8021q upper sw0p2.100 is
      | uncommitted from hardware. The VLANs managed by the bridge still remain
      | committed to hardware, because they are managed by the bridge.
      $ ip link set br0 type bridge vlan_filtering 0
      [  134.218869] sja1105 spi2.0: Reset switch and programmed static config. Reason: VLAN filtering
      [  134.228913] sja1105 spi2.0: sja1105_bridge_vlan_del: port 2 vlan 100
      | And now the standalone port is not filtering anymore.
      $ ethtool -k sw0p2 | grep vlan
      rx-vlan-filter: off [fixed]
      
      The same test with .port_bridge_join and .port_bridge_leave commented
      out from this driver:
      
      | Not a flinch
      $ ip link add link sw0p2 name sw0p2.100 type vlan id 100
      $ ip link add br0 type bridge vlan_filtering 1 && ip link set sw0p0 master br0
      Warning: dsa_core: Offloading not supported.
      $ ethtool -k sw0p2 | grep vlan
      rx-vlan-filter: off [fixed]
      $ ip link set br0 type bridge vlan_filtering 0
      $ ethtool -k sw0p2 | grep vlan
      rx-vlan-filter: off [fixed]
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      669f047e
    • Vladimir Oltean's avatar
      net: dsa: let drivers state that they need VLAN filtering while standalone · 58adf9dc
      Vladimir Oltean authored
      As explained in commit e358bef7 ("net: dsa: Give drivers the chance
      to veto certain upper devices"), the hellcreek driver uses some tricks
      to comply with the network stack expectations: it enforces port
      separation in standalone mode using VLANs. For untagged traffic,
      bridging between ports is prevented by using different PVIDs, and for
      VLAN-tagged traffic, it never accepts 8021q uppers with the same VID on
      two ports, so packets with one VLAN cannot leak from one port to another.
      
      That is almost fine*, and has worked because hellcreek relied on an
      implicit behavior of the DSA core that was changed by the previous
      patch: the standalone ports declare the 'rx-vlan-filter' feature as 'on
      [fixed]'. Since most of the DSA drivers are actually VLAN-unaware in
      standalone mode, that feature was actually incorrectly reflecting the
      hardware/driver state, so there was a desire to fix it. This leaves the
      hellcreek driver in a situation where it has to explicitly request this
      behavior from the DSA framework.
      
      We configure the ports as follows:
      
      - Standalone: 'rx-vlan-filter' is on. An 8021q upper on top of a
        standalone hellcreek port will go through dsa_slave_vlan_rx_add_vid
        and will add a VLAN to the hardware tables, giving the driver the
        opportunity to refuse it through .port_prechangeupper.
      
      - Bridged with vlan_filtering=0: 'rx-vlan-filter' is off. An 8021q upper
        on top of a bridged hellcreek port will not go through
        dsa_slave_vlan_rx_add_vid, because there will not be any attempt to
        offload this VLAN. The driver already disables VLAN awareness, so that
        upper should receive the traffic it needs.
      
      - Bridged with vlan_filtering=1: 'rx-vlan-filter' is on. An 8021q upper
        on top of a bridged hellcreek port will call dsa_slave_vlan_rx_add_vid,
        and can again be vetoed through .port_prechangeupper.
      
      *It is not actually completely fine, because if I follow through
      correctly, we can have the following situation:
      
      ip link add br0 type bridge vlan_filtering 0
      ip link set lan0 master br0 # lan0 now becomes VLAN-unaware
      ip link set lan0 nomaster # lan0 fails to become VLAN-aware again, therefore breaking isolation
      
      This patch fixes that corner case by extending the DSA core logic, based
      on this requested attribute, to change the VLAN awareness state of the
      switch (port) when it leaves the bridge.
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Acked-by: Kurt Kanzenbach <kurt@linutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58adf9dc
    • Vladimir Oltean's avatar
      net: dsa: don't advertise 'rx-vlan-filter' when not needed · 06cfb2df
      Vladimir Oltean authored
      There have been multiple independent reports about
      dsa_slave_vlan_rx_add_vid being called (and consequently calling the
      drivers' .port_vlan_add) when it isn't needed, and sometimes (not
      always) causing problems in the process.
      
      Case 1:
      mv88e6xxx_port_vlan_prepare is stubborn and only accepts VLANs on
      bridged ports. That is understandably so, because standalone mv88e6xxx
      ports are VLAN-unaware, and VTU entries are said to be a scarce
      resource.
      
      Otherwise said, the following fails lamentably on mv88e6xxx:
      
      ip link add br0 type bridge vlan_filtering 1
      ip link set lan3 master br0
      ip link add link lan10 name lan10.1 type vlan id 1
      [485256.724147] mv88e6085 d0032004.mdio-mii:12: p10: hw VLAN 1 already used by port 3 in br0
      RTNETLINK answers: Operation not supported
      
      This has become a worse issue since commit 9b236d2a ("net: dsa:
      Advertise the VLAN offload netdev ability only if switch supports it").
      Up to that point, the driver was returning -EOPNOTSUPP and DSA was
      reconverting that error to 0, making the 8021q upper think all is ok
      (but obviously the error message was there even prior to this change).
      After that change the -EOPNOTSUPP is propagated to vlan_vid_add, and it
      is a hard error.
      
      Case 2:
      Ports that don't offload the Linux bridge (have a dp->bridge_dev = NULL
      because they don't implement .port_bridge_{join,leave}). Understandably,
      a standalone port should not offload VLANs either, it should remain VLAN
      unaware and any VLAN should be a software VLAN (as long as the hardware
      is not quirky, that is).
      
      In fact, dsa_slave_port_obj_add does do the right thing and rejects
      switchdev VLAN objects coming from the bridge when that bridge is not
      offloaded:
      
      	case SWITCHDEV_OBJ_ID_PORT_VLAN:
      		if (!dsa_port_offloads_bridge_port(dp, obj->orig_dev))
      			return -EOPNOTSUPP;
      
      		err = dsa_slave_vlan_add(dev, obj, extack);
      
      But it seems that the bridge is able to trick us. The __vlan_vid_add
      from br_vlan.c has:
      
      	/* Try switchdev op first. In case it is not supported, fallback to
      	 * 8021q add.
      	 */
      	err = br_switchdev_port_vlan_add(dev, v->vid, flags, extack);
      	if (err == -EOPNOTSUPP)
      		return vlan_vid_add(dev, br->vlan_proto, v->vid);
      
      So it says "no, no, you need this VLAN in your life!". And we, naive as
      we are, say "oh, this comes from the vlan_vid_add code path, it must be
      an 8021q upper, sure, I'll take that". And we end up with that bridge
      VLAN installed on our port anyway. But this time, it has the wrong flags:
      if the bridge was trying to install VLAN 1 as a pvid/untagged VLAN,
      failed via switchdev, retried via vlan_vid_add, we have this comment:
      
      	/* This API only allows programming tagged, non-PVID VIDs */
      
      So what we do makes absolutely no sense.
      
      Backtracing a bit, we see the common pattern. We allow the network stack
      to think that our standalone ports are VLAN-aware, but they aren't, for
      the vast majority of switches. The quirky ones should not dictate the
      norm. The dsa_slave_vlan_rx_add_vid and dsa_slave_vlan_rx_kill_vid
      methods exist for drivers that need the 'rx-vlan-filter: on' feature in
      ethtool -k, which can be due to any of the following reasons:
      
      1. vlan_filtering_is_global = true, and some ports are under a
         VLAN-aware bridge while others are standalone, and the standalone
         ports would otherwise drop VLAN-tagged traffic. This is described in
         commit 061f6a50 ("net: dsa: Add ndo_vlan_rx_{add, kill}_vid
         implementation").
      
      2. the ports that are under a VLAN-aware bridge should also set this
         feature, for 8021q uppers having a VID not claimed by the bridge.
         In this case, the driver will essentially not even know that the VID
         is coming from the 8021q layer and not the bridge.
      
      3. Hellcreek. This driver needs it because in standalone mode, it uses
         unique VLANs per port to ensure separation. For separation of untagged
         traffic, it uses different PVIDs for each port, and for separation of
         VLAN-tagged traffic, it never accepts 8021q uppers with the same vid
         on two ports.
      
      If a driver does not fall under any of the above 3 categories, there is
      no reason why it should advertise the 'rx-vlan-filter' feature, therefore
      no reason why it should offload the VLANs added through vlan_vid_add.
      
      This commit fixes the problem by removing the 'rx-vlan-filter' feature
      from the slave devices when they operate in standalone mode, and when
      they offload a VLAN-unaware bridge.
      
      The way it works is that vlan_vid_add will now stop its processing here:
      
      vlan_add_rx_filter_info:
      	if (!vlan_hw_filter_capable(dev, proto))
      		return 0;
      
      So the VLAN will still be saved in the interface's VLAN RX filtering
      list, but because it does not declare VLAN filtering in its features,
      the 8021q module will return zero without committing that VLAN to
      hardware.
      
      This gives the drivers what they want, since it keeps the 8021q VLANs
      away from the VLAN table until VLAN awareness is enabled (point at which
      the ports are no longer standalone, hence in the mv88e6xxx case, the
      check in mv88e6xxx_port_vlan_prepare passes).
      
      Since the issue predates the existence of the hellcreek driver, case 3
      will be dealt with in a separate patch.
      
      The main change that this patch makes is to no longer set
      NETIF_F_HW_VLAN_CTAG_FILTER unconditionally, but toggle it dynamically
      (for most switches, never).
      
      The second part of the patch addresses an issue that the first part
      introduces: because the 'rx-vlan-filter' feature is now dynamically
      toggled, and our .ndo_vlan_rx_add_vid does not get called when
      'rx-vlan-filter' is off, we need to avoid bugs such as the following by
      replaying the VLANs from 8021q uppers every time we enable VLAN
      filtering:
      
      ip link add link lan0 name lan0.100 type vlan id 100
      ip addr add 192.168.100.1/24 dev lan0.100
      ping 192.168.100.2 # should work
      ip link add br0 type bridge vlan_filtering 0
      ip link set lan0 master br0
      ping 192.168.100.2 # should still work
      ip link set br0 type bridge vlan_filtering 1
      ping 192.168.100.2 # should still work but doesn't
      
      As reported by Florian, some drivers look at ds->vlan_filtering in
      their .port_vlan_add() implementation. So this patch also makes sure
      that ds->vlan_filtering is committed before calling the driver. This is
      the reason why it is first committed, then restored on the failure path.
      Reported-by: Tobias Waldekranz <tobias@waldekranz.com>
      Reported-by: Alvin Šipraga <alsi@bang-olufsen.dk>
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Tested-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      06cfb2df
    • Vladimir Oltean's avatar
      net: dsa: properly fall back to software bridging · 67b5fb5d
      Vladimir Oltean authored
      If the driver does not implement .port_bridge_{join,leave}, then we must
      fall back to standalone operation on that port, and trigger the error
      path of dsa_port_bridge_join. This sets dp->bridge_dev = NULL.
      
      In turn, having a non-NULL dp->bridge_dev when there is no offloading
      support makes the following things go wrong:
      
      - dsa_default_offload_fwd_mark makes the wrong decision in setting
        skb->offload_fwd_mark. It should set skb->offload_fwd_mark = 0 for
        ports that don't offload the bridge, which should instruct the bridge
        to forward in software. But this does not happen, dp->bridge_dev is
        incorrectly set to point to the bridge, so the bridge is told that
        packets have been forwarded in hardware, which they haven't.
      
      - switchdev objects (MDBs, VLANs) should not be offloaded by ports that
        don't offload the bridge. Standalone ports should behave as packet-in,
        packet-out and the bridge should not be able to manipulate the pvid of
        the port, or tag stripping on egress, or ingress filtering. This
        should already work fine because dsa_slave_port_obj_add has:
      
      	case SWITCHDEV_OBJ_ID_PORT_VLAN:
      		if (!dsa_port_offloads_bridge_port(dp, obj->orig_dev))
      			return -EOPNOTSUPP;
      
      		err = dsa_slave_vlan_add(dev, obj, extack);
      
        but since dsa_port_offloads_bridge_port works based on dp->bridge_dev,
        this is again sabotaging us.
      
      All the above work in case the port has an unoffloaded LAG interface, so
      this is well exercised code, we should apply it for plain unoffloaded
      bridge ports too.
      Reported-by: Alvin Šipraga <alsi@bang-olufsen.dk>
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      67b5fb5d
    • Vladimir Oltean's avatar
      net: dsa: don't call switchdev_bridge_port_unoffload for unoffloaded bridge ports · 09dba21b
      Vladimir Oltean authored
      For ports that have a NULL dp->bridge_dev, dsa_port_to_bridge_port()
      also returns NULL as expected.
      
      Issue #1 is that we are performing a NULL pointer dereference on brport_dev.
      
      Issue #2 is that these are ports on which switchdev_bridge_port_offload
      has not been called, so we should not call switchdev_bridge_port_unoffload
      on them either.
      
      Both issues are addressed by checking against a NULL brport_dev in
      dsa_port_pre_bridge_leave and exiting early.
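      Roughly, as a sketch of the early return (not the exact diff):

      	static void dsa_port_pre_bridge_leave(struct dsa_port *dp,
      					      struct net_device *br)
      	{
      		struct net_device *brport_dev = dsa_port_to_bridge_port(dp);

      		/* Nothing was offloaded for this port, so there is nothing
      		 * to unoffload (and brport_dev is NULL anyway).
      		 */
      		if (!brport_dev)
      			return;

      		switchdev_bridge_port_unoffload(brport_dev, dp,
      						&dsa_slave_switchdev_notifier,
      						&dsa_slave_switchdev_blocking_notifier);
      	}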
      
      Fixes: 2f5dc00f ("net: bridge: switchdev: let drivers inform which bridge ports are offloaded")
      Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      09dba21b
    • David S. Miller's avatar
      Merge branch 'mptcp-refactor' · 0384dd9d
      David S. Miller authored
      Mat Martineau says:
      
      ====================
      mptcp: Refactor ADD_ADDR/RM_ADDR handling
      
      This patch set changes the way MPTCP ADD_ADDR and RM_ADDR options are
      handled to improve the reliability of sending and updating address
      advertisements. The information used to populate outgoing advertisement
      option headers is now stored separately to avoid rare cases where a more
      recent request would overwrite something that had not been sent
      yet. While the peers would recover from this, it's better to avoid the
      problem in the first place.
      
      Patch 1 moves an advertisement option check under a lock so the changes
      made in the next several patches will not introduce a race.
      
      Patches 2-4 make sure ADD_ADDR, ADD_ADDR echo, and RM_ADDR options use
      separate flags and data.
      
      Patch 5 removes some now-redundant flags.
      
      Patch 6 adds a selftest that confirms the advertisement reliability
      improvements.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0384dd9d
    • Yonglong Li's avatar
      selftests: mptcp: add_addr and echo race test · 33c563ad
      Yonglong Li authored
      This patch adds an extra test to signal_address_tests() to do the
      ADD_ADDR and ADD_ADDR_ECHO race test.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      33c563ad
    • Yonglong Li's avatar
      mptcp: remove MPTCP_ADD_ADDR_IPV6 and MPTCP_ADD_ADDR_PORT · c233ef13
      Yonglong Li authored
      MPTCP_ADD_ADDR_IPV6 and MPTCP_ADD_ADDR_PORT are not necessary; we can get
      this info from pm.local or pm.remote.
      
      Drop mptcp_pm_should_add_signal_ipv6 and mptcp_pm_should_add_signal_port
      too.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c233ef13
    • Yonglong Li's avatar
      mptcp: build ADD_ADDR/echo-ADD_ADDR option according to pm.add_signal · f462a446
      Yonglong Li authored
      According to the MPTCP_ADD_ADDR_SIGNAL or MPTCP_ADD_ADDR_ECHO flag, build
      the ADD_ADDR/ADD_ADDR_ECHO option.
      
      In mptcp_pm_add_addr_signal(), use opts->addr to save the announced
      ADD_ADDR or ADD_ADDR_ECHO address.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Co-developed-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f462a446
    • Yonglong Li's avatar
      mptcp: fix ADD_ADDR and RM_ADDR flushing each other's addr_signal · 119c0220
      Yonglong Li authored
      ADD_ADDR shares pm.addr_signal with RM_ADDR, so after RM_ADDR/ADD_ADDR
      is done, we should not clear ADD_ADDR/RM_ADDR's addr_signal.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      119c0220
    • Yonglong Li's avatar
      mptcp: make MPTCP_ADD_ADDR_SIGNAL and MPTCP_ADD_ADDR_ECHO separate · 18fc1a92
      Yonglong Li authored
      Use MPTCP_ADD_ADDR_SIGNAL only for the action of sending ADD_ADDR, and
      use MPTCP_ADD_ADDR_ECHO only for the action of sending ADD_ADDR echo.
      
      Use msk->pm.local to save the announced ADD_ADDR address only, and reuse
      msk->pm.remote to save the announced ADD_ADDR_ECHO address.
      
      To prepare for the next patch.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      18fc1a92
    • Yonglong Li's avatar
      mptcp: move drop_other_suboptions check under pm lock · 1f5e9e2f
      Yonglong Li authored
      This patch moves the drop_other_suboptions check from
      mptcp_established_options_add_addr() into mptcp_pm_add_addr_signal(),
      doing it under the PM lock to avoid a race between this check and
      mptcp_pm_add_addr_signal().
      
      For this, add a new parameter to mptcp_pm_add_addr_signal() to get
      the drop_other_suboptions value, and drop the other suboptions after the
      option length check if drop_other_suboptions is true.
      
      Additionally, always drop the other suboptions for a TCP pure ack:
      that makes both the code simpler and the MPTCP behaviour more
      consistent.
      Co-developed-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Co-developed-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Yonglong Li <liyonglong@chinatelecom.cn>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1f5e9e2f
    • Yajun Deng's avatar
      net: ipv4: Move ip_options_fragment() out of loop · faf482ca
      Yajun Deng authored
      ip_options_fragment() is only called when iter->offset is equal to zero,
      so move it out of the loop, and inline 'Copy the flags to each fragment.'
      Also, remove the unused parameter in ip_frag_ipcb().
      Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      faf482ca
    • Heiner Kallweit's avatar
      cxgb4: improve printing NIC information · 1bb39cb6
      Heiner Kallweit authored
      Currently the interface name and PCI address are printed twice, because
      netdev_info() already prints this information implicitly. This results
      in messages like the following. Remove the duplicated information.
      
      cxgb4 0000:81:00.4 eth3: eth3: Chelsio T6225-OCP-SO (0000:81:00.4) 1G/10G/25GBASE-SFP28
      Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1bb39cb6
  3. 23 Aug, 2021 12 commits
    • Tang Bin's avatar
      via-velocity: Use of_device_get_match_data to simplify code · f6a4e0e8
      Tang Bin authored
      To retrieve OF match data, it's better and cleaner to use
      'of_device_get_match_data' than 'of_match_device'.
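      As a hedged illustration (the table and variable names are hypothetical):

      	const void *info;
      	const struct of_device_id *of_id;

      	/* before: two steps, plus a check on the returned table entry */
      	of_id = of_match_device(velocity_of_ids, &pdev->dev);
      	if (of_id)
      		info = of_id->data;

      	/* after: one call that returns the ->data pointer directly */
      	info = of_device_get_match_data(&pdev->dev);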
      Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f6a4e0e8
    • Tang Bin's avatar
      via-rhine: Use of_device_get_match_data to simplify code · b708a96d
      Tang Bin authored
      To retrieve OF match data, it's better and cleaner to use
      'of_device_get_match_data' than 'of_match_device'.
      Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b708a96d
    • Christophe JAILLET's avatar
      hinic: switch from 'pci_' to 'dma_' API · 609c1308
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
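      The hand-modified part typically looks like this (illustrative, not the
      exact hinic diff):

      	/* before */
      	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
      	if (!err)
      		err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));

      	/* after: one call, whose return value can be used directly */
      	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));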
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      609c1308
    • Christophe JAILLET's avatar
      qlcnic: switch from 'pci_' to 'dma_' API · a14e3904
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a14e3904
    • Christophe JAILLET's avatar
      net/mellanox: switch from 'pci_' to 'dma_' API · eb9c5c0d
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eb9c5c0d
    • Christophe JAILLET's avatar
      net: 8139cp: switch from 'pci_' to 'dma_' API · a0991bf4
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a0991bf4
    • Christophe JAILLET's avatar
      vmxnet3: switch from 'pci_' to 'dma_' API · bf7bec46
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      The explicit 'err = -EIO;' has been removed because
      'dma_set_mask_and_coherent()' returns 0 or -EIO, so its return code can be
      used directly.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: default avatarChristophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      bf7bec46
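      The 'dma_set_mask_and_coherent()' hand modification mentioned in this
      commit collapses the two mask calls into one and lets the return value be
      used directly, which is why the explicit 'err = -EIO;' can go away. A
      hedged sketch of the pattern follows; the probe helper and the 64-bit mask
      are illustrative assumptions, not the driver's actual code.

      	#include <linux/pci.h>
      	#include <linux/dma-mapping.h>

      	/* Before:
      	 *	err = -EIO;
      	 *	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) ||
      	 *	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)))
      	 *		goto err_out;
      	 *
      	 * After: one call, whose return code (0 or -EIO) is used directly.
      	 */
      	static int example_probe_dma_setup(struct pci_dev *pdev)
      	{
      		int err;

      		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
      		if (err)
      			return err;	/* already -EIO on failure */
      		return 0;
      	}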
    • Christophe JAILLET's avatar
      myri10ge: switch from 'pci_' to 'dma_' API · 75bacb6d
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      A log message that was split across two lines has been merged.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: default avatarChristophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      75bacb6d
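      Note that the script above deliberately leaves a bare 'GFP_' in the
      dma_alloc_coherent() call; it has to be completed by hand. The old
      pci_alloc_consistent() compat wrapper implied GFP_ATOMIC, so the hand
      edit typically picks GFP_KERNEL where the caller may sleep, or GFP_ATOMIC
      to keep the old behaviour exactly. A hypothetical completion (the ring
      buffer name and size are illustrative):

      	#include <linux/pci.h>
      	#include <linux/dma-mapping.h>
      	#include <linux/gfp.h>

      	/* Before:
      	 *	ring = pci_alloc_consistent(pdev, size, &ring_dma);
      	 *
      	 * After: the GFP_ placeholder is filled in by hand.
      	 */
      	static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
      					dma_addr_t *ring_dma)
      	{
      		return dma_alloc_coherent(&pdev->dev, size, ring_dma,
      					  GFP_KERNEL);
      	}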
    • David S. Miller's avatar
      Merge tag 'wireless-drivers-next-2021-08-22' of... · e6a70a02
      David S. Miller authored
      Merge tag 'wireless-drivers-next-2021-08-22' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
      
      Kalle Valo says:
      
      ====================
      wireless-drivers-next patches for v5.15
      
      First set of patches for v5.15. This got delayed as I have been mostly
      offline for the last few weeks. The biggest change is removal of
      prism54 driver, otherwise just smaller changes.
      
      Major changes:
      
      ath5k, ath9k, ath10k, ath11k:
      
      * switch from 'pci_' to 'dma_' API
      
      brcmfmac:
      
      * allow per-board firmware binaries
      
      * add support for the 43752 SDIO device
      
      prism54:
      
      * remove the obsolete driver; everyone should be using the p54 driver instead
      ====================
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      e6a70a02
    • Christophe JAILLET's avatar
      net: sunhme: Remove unused macros · 056b29ae
      Christophe JAILLET authored
      These macros have been unused since commit db1a8611
      ("sunhme: Convert to pure OF driver."), so they can be removed.
      
      This simplifies code and helps for removing the wrappers in
      include/linux/pci-dma-compat.h.
      Signed-off-by: default avatarChristophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      056b29ae
    • Christophe JAILLET's avatar
      qtnfmac: switch from 'pci_' to 'dma_' API · 06e1359c
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: default avatarChristophe JAILLET <christophe.jaillet@wanadoo.fr>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      06e1359c
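      The direction constants are a pure rename (PCI_DMA_TODEVICE becomes
      DMA_TO_DEVICE, and so on), and the scatter-gather helpers only change
      their first argument. A hypothetical sketch of the converted form; the
      sg list and nents names are assumptions for illustration only.

      	#include <linux/pci.h>
      	#include <linux/dma-mapping.h>
      	#include <linux/scatterlist.h>

      	/* After conversion: pci_map_sg(pdev, ...) / PCI_DMA_FROMDEVICE
      	 * become dma_map_sg(&pdev->dev, ...) / DMA_FROM_DEVICE.
      	 */
      	static int example_map_rx_sg(struct pci_dev *pdev,
      				     struct scatterlist *sg, int nents)
      	{
      		int mapped;

      		mapped = dma_map_sg(&pdev->dev, sg, nents, DMA_FROM_DEVICE);
      		if (!mapped)
      			return -ENOMEM;

      		/* ... hand the buffers to the hardware ... */

      		/* Unmap with the original nents, not the mapped count. */
      		dma_unmap_sg(&pdev->dev, sg, nents, DMA_FROM_DEVICE);
      		return 0;
      	}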
    • Christophe JAILLET's avatar
      forcedeth: switch from 'pci_' to 'dma_' API · e5c88bc9
      Christophe JAILLET authored
      The wrappers in include/linux/pci-dma-compat.h should go away.
      
      The patch has been generated with the coccinelle script below.
      
      It has been hand modified to use 'dma_set_mask_and_coherent()' instead of
      'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable.
      This is less verbose.
      
      It has been compile tested.
      
      @@
      @@
      -    PCI_DMA_BIDIRECTIONAL
      +    DMA_BIDIRECTIONAL
      
      @@
      @@
      -    PCI_DMA_TODEVICE
      +    DMA_TO_DEVICE
      
      @@
      @@
      -    PCI_DMA_FROMDEVICE
      +    DMA_FROM_DEVICE
      
      @@
      @@
      -    PCI_DMA_NONE
      +    DMA_NONE
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_alloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3;
      @@
      -    pci_zalloc_consistent(e1, e2, e3)
      +    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_free_consistent(e1, e2, e3, e4)
      +    dma_free_coherent(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_single(e1, e2, e3, e4)
      +    dma_map_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_single(e1, e2, e3, e4)
      +    dma_unmap_single(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4, e5;
      @@
      -    pci_map_page(e1, e2, e3, e4, e5)
      +    dma_map_page(&e1->dev, e2, e3, e4, e5)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_page(e1, e2, e3, e4)
      +    dma_unmap_page(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_map_sg(e1, e2, e3, e4)
      +    dma_map_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_unmap_sg(e1, e2, e3, e4)
      +    dma_unmap_sg(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
      +    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_single_for_device(e1, e2, e3, e4)
      +    dma_sync_single_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
      +    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2, e3, e4;
      @@
      -    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
      +    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
      
      @@
      expression e1, e2;
      @@
      -    pci_dma_mapping_error(e1, e2)
      +    dma_mapping_error(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_dma_mask(e1, e2)
      +    dma_set_mask(&e1->dev, e2)
      
      @@
      expression e1, e2;
      @@
      -    pci_set_consistent_dma_mask(e1, e2)
      +    dma_set_coherent_mask(&e1->dev, e2)
      Signed-off-by: default avatarChristophe JAILLET <christophe.jaillet@wanadoo.fr>
      Reviewed-by: default avatarZhu Yanjun <zyjzyj2000@gmail.com>
      Signed-off-by: default avatarDavid S. Miller <davem@davemloft.net>
      e5c88bc9
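      The sync helpers in the script follow the same pattern: only the first
      argument changes, from the pci_dev to its embedded struct device. A
      hedged sketch of a converted receive-buffer peek; the buffer handle and
      length names are illustrative assumptions.

      	#include <linux/pci.h>
      	#include <linux/dma-mapping.h>

      	/* After conversion: pci_dma_sync_single_for_cpu/for_device(pdev, ...)
      	 * become dma_sync_single_for_cpu/for_device(&pdev->dev, ...).
      	 */
      	static void example_peek_rx_buffer(struct pci_dev *pdev,
      					   dma_addr_t dma, size_t len)
      	{
      		/* Give the buffer to the CPU before reading it ... */
      		dma_sync_single_for_cpu(&pdev->dev, dma, len, DMA_FROM_DEVICE);

      		/* ... inspect the data here ... */

      		/* ... then hand it back to the device for reuse. */
      		dma_sync_single_for_device(&pdev->dev, dma, len,
      					   DMA_FROM_DEVICE);
      	}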