- 28 Jan, 2022 10 commits
-
-
Jakub Kicinski authored
Ido Schimmel says:

====================
mlxsw: Various updates

This patchset contains miscellaneous updates for mlxsw. No user visible
changes that I am aware of.

Patches #1-#5 rework registration of internal traps in preparation for
line card support.

Patch #6 improves driver resilience against a misbehaving device.

Patch #7 prevents the driver from overwriting device-internal actions.
See the commit message for more details.
====================

Link: https://lore.kernel.org/r/20220127090226.283442-1-idosch@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
In Spectrum-2 and later ASICs, each TCAM region has a default action that
is executed in case a packet did not match any rule in the region. The
location of the action in the database (KVDL) is computed by adding the
region's index to a base value.

Some TCAM regions are not exposed to the host and are used internally by
the device. Allocate KVDL entries for the default actions of these
regions to prevent the host from overwriting them.

With mlxsw, lookups in the internal regions are not currently performed,
but it is good practice not to overwrite their default actions.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
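For illustration, a minimal sketch of the reservation logic; the helper
and type names here are assumptions, not the driver's actual symbols:

struct example_kvdl;
int example_kvdl_reserve(struct example_kvdl *kvdl, u32 index);

/* The default-action KVDL index of region i is base + i, so reserving
 * one entry per internal region keeps the host from handing those
 * slots out for its own actions. */
static int example_reserve_internal_defaults(struct example_kvdl *kvdl,
                                             u32 base_index,
                                             unsigned int num_internal)
{
        unsigned int i;
        int err;

        for (i = 0; i < num_internal; i++) {
                err = example_kvdl_reserve(kvdl, base_index + i);
                if (err)
                        return err;
        }
        return 0;
}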
-
Amit Cohen authored
When processing events generated by the device's firmware, the driver
protects itself from events reported for non-existent local ports, but
not for the CPU port (local port 0), which exists but does not have all
the fields of a regular local port. This can result in a NULL pointer
dereference when trying to access 'struct mlxsw_sp_port' fields that are
not initialized for the CPU port.

Commit 63b08b1f ("mlxsw: spectrum: Protect driver from buggy firmware")
already handled such an issue by bailing early when processing a PUDE
event reported for the CPU port. Generalize the approach by moving the
check to a common function and making use of it in all relevant places.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
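For illustration, the kind of common guard this adds; the helper name
and struct layout are assumptions, not the driver's actual code:

#define EXAMPLE_CPU_PORT 0

struct example_port;    /* stands in for struct mlxsw_sp_port */

/* Local port 0 is the CPU port: it exists, but its port struct is
 * never fully initialized, so treat it like a non-existent port when
 * resolving firmware events. */
static struct example_port *
example_port_lookup(struct example_port **ports, u16 local_port,
                    u16 max_ports)
{
        if (local_port == EXAMPLE_CPU_PORT || local_port >= max_ports)
                return NULL;
        return ports[local_port];
}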
-
Jiri Pirko authored
For event traps which are used in core, avoid having a separate trap
group for each event. Instead, introduce a single core event trap group
and use it for all event traps.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
These functions belong in core.c, alongside the functions that
register/unregister a single trap. Move them there and make them
available to other parts of the mlxsw code.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
Instead of initializing the trap groups used by the core from spectrum.c
via an op, do it directly in core.c.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The call initializes not only the EMAD trap group, but other groups as
well. Therefore, move it out of the EMAD init code and call it
beforehand.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
Instead of calling the same code four times, do it in a loop over an
array that contains the trap groups to be set.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
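For illustration, the shape of the refactor; the enum values and the
setter are assumptions patterned on the commit, not the exact mlxsw
symbols:

enum example_trap_group {
        EXAMPLE_TRAP_GROUP_EMAD,
        EXAMPLE_TRAP_GROUP_CORE_EVENT,
        EXAMPLE_TRAP_GROUP_PORT_EVENT,
        EXAMPLE_TRAP_GROUP_FW_EVENT,
};

static const enum example_trap_group example_groups[] = {
        EXAMPLE_TRAP_GROUP_EMAD,
        EXAMPLE_TRAP_GROUP_CORE_EVENT,
        EXAMPLE_TRAP_GROUP_PORT_EVENT,
        EXAMPLE_TRAP_GROUP_FW_EVENT,
};

static int example_trap_groups_set(struct example_core *core)
{
        int i, err;

        /* One loop replaces four copies of the same set call. */
        for (i = 0; i < ARRAY_SIZE(example_groups); i++) {
                err = example_trap_group_set(core, example_groups[i]);
                if (err)
                        return err;
        }
        return 0;
}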
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Jakub Kicinski authored
Saeed Mahameed says:

====================
mlx5-updates-2022-01-27

1) Dima adds an internal mlx5 steering callback per steering provider
(FW vs. SW steering) to advertise the steering capabilities implemented
by each module. This helps upper modules in mlx5 know what is supported
and what is not, without needing to know the underlying steering mode.
The second patch is the use case, where this interface is used to
implement VLAN push/pop for the uplink with SW steering, where FW mode
does not support it yet.

2) Roi Dayan improves code readability and maintainability as a
preparation step for multiple attribute instances per flow in the mlx5
TC module.

Currently the mlx5_flow object contains a single mlx5_attr instance.
However, multi table actions (e.g. CT) instantiate multiple attr
instances. This is a refactoring series in preparation for supporting
multiple attribute instances per flow. The commits prepare functions to
take an attr instance instead of using flow->attr, and also use
attr->flags when a flag is more relevant as an attr flag than a flow
flag, considering there will be multiple attr instances (i.e. the CT
and SAMPLE flags).

* tag 'mlx5-updates-2022-01-27' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: VLAN push on RX, pop on TX
  net/mlx5: Introduce software defined steering capabilities
  net/mlx5: Remove unused TIR modify bitmask enums
  net/mlx5e: CT, Remove redundant flow args from tc ct calls
  net/mlx5e: TC, Store mapped tunnel id on flow attr
  net/mlx5e: Test CT and SAMPLE on flow attr
  net/mlx5e: Refactor eswitch attr flags to just attr flags
  net/mlx5e: CT, Don't set flow flag CT for ct clear flow
  net/mlx5e: TC, Hold sample_attr on stack instead of pointer
  net/mlx5e: TC, Reject rules with multiple CT actions
  net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attr
  net/mlx5e: TC, Pass attr to tc_act can_offload()
  net/mlx5e: TC, Split pedit offloads verify from alloc_tc_pedit_action()
  net/mlx5e: TC, Move pedit_headers_action to parse_attr
  net/mlx5e: Move counter creation call to alloc_flow_attr_counter()
  net/mlx5e: Pass attr arg for attaching/detaching encaps
  net/mlx5e: Move code chunk setting encap dests into its own function
====================

Link: https://lore.kernel.org/r/20220127204007.146300-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Jakub Kicinski authored
Tony Nguyen says:

====================
1GbE Intel Wired LAN Driver Updates 2022-01-27

Christophe Jaillet removes useless DMA-32 fallback calls from applicable
Intel drivers and simplifies code as a result of the removal.

* '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  igbvf: Remove useless DMA-32 fallback configuration
  igb: Remove useless DMA-32 fallback configuration
  igc: Remove useless DMA-32 fallback configuration
  ice: Remove useless DMA-32 fallback configuration
  iavf: Remove useless DMA-32 fallback configuration
  e1000e: Remove useless DMA-32 fallback configuration
  i40e: Remove useless DMA-32 fallback configuration
  ixgbevf: Remove useless DMA-32 fallback configuration
  ixgbe: Remove useless DMA-32 fallback configuration
  ixgb: Remove useless DMA-32 fallback configuration
====================

Link: https://lore.kernel.org/r/20220127215224.422113-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 27 Jan, 2022 30 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski authored
No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Dima Chumak authored
Some older NIC hardware isn't capable of doing VLAN push on RX and pop
on TX. A software workaround was added to support it, but it has a
performance penalty since it requires a hairpin + loopback. There is no
such limitation with newer NICs, so there is no need to pay the price of
the workaround. With this change the software workaround is disabled for
HW versions and steering modes that support the native operation.

Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Dima Chumak authored
There are two different internal steering modes, abstracted from the
rest of the driver. In order to keep the upper layers of the driver
agnostic to the differences in capabilities between the steering modes,
this patch introduces the mlx5_fs_get_capabilities() API to check
whether a certain software-defined capability is supported. It differs
from the capabilities exposed by the hardware, as it takes into account
the flow steering mode (SMFS/DMFS) currently enabled.

This implementation supports only two capability flags:

  MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX
  MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX

They map to the DR_ACTION_STATE_PUSH_VLAN and DR_ACTION_STATE_POP_VLAN
actions, implemented in SW steering earlier in commit f5e22be5
("net/mlx5: DR, Split modify VLAN state to separate pop/push states").
This enables using VLAN pop/push without restrictions, e.g. doing VLAN
pop on both TX and RX, compared to FW steering, which supports only VLAN
pop on RX and push on TX. Other capabilities can be added in the future.

Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
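For illustration, how an upper layer might consult such an API; the
capability flag names come from the commit, while the query function's
exact signature and the surrounding helper are assumptions:

/* Returns true only when the active steering mode (SMFS/DMFS) can pop
 * a VLAN header on TX -- a software-defined capability, not a raw
 * hardware one. */
static bool example_can_pop_vlan_on_tx(struct mlx5_core_dev *dev)
{
        u32 caps = mlx5_fs_get_capabilities(dev,
                                            MLX5_FLOW_NAMESPACE_FDB);

        return caps & MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX;
}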
-
Tariq Toukan authored
struct mlx5_ifc_modify_tir_bitmask_bits is used for the bitmask of
MODIFY_TIR operations. Remove the unused bitmask enums.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
The flow arg is not being used, so remove it.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
In preparation for multiple attr instances, the tunnel_id should be
attr-specific and not flow-specific.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Currently the mlx5_flow object contains a single mlx5_attr instance.
However, multi table actions (e.g. CT) instantiate multiple attr
instances. Prepare for multiple attr instances by testing for the CT or
SAMPLE flag on the attr flags instead of the flow flags.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Chris Mi <cmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
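For illustration, the change in predicate; the flag and struct names
are assumptions:

#define EXAMPLE_ATTR_FLAG_CT     (1U << 0)
#define EXAMPLE_ATTR_FLAG_SAMPLE (1U << 1)

struct example_attr {
        u32 flags;
};

/* With several attrs per flow, "is this a CT attr?" must be asked of
 * the attr itself, not of a flow-wide flag. */
static bool example_attr_is_ct(const struct example_attr *attr)
{
        return attr->flags & EXAMPLE_ATTR_FLAG_CT;
}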
-
Roi Dayan authored
The flags are flow attr flags, not eswitch-specific attr flags. Refactor
to remove the esw prefix and move them from eswitch.h to en_tc.h, where
struct mlx5_flow_attr exists.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
A CT clear action is a normal flow with a modify header that sets the
registers to 0, so there is no need for any special handling in tc_ct.c.
Parsing of the CT clear action still allocates mod acts to set 0 on the
registers, and the driver continues to add a normal rule with a modify
header context.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
In a later commit we are going to instantiate multiple attr instances
per flow instead of a single attr. Parsing a TC sample action allocates
new memory, but there is no symmetric cleanup in the infrastructure. To
avoid an asymmetric alloc/free, hold sample_attr as part of the flow
attr instead of allocating it and holding a pointer. This avoids a
cleanup leak when the sample action is not on the first attr.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
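For illustration, the struct change this amounts to; the field names
are assumptions:

struct example_sample_attr {
        u32 rate;
        u32 group_num;
};

struct example_flow_attr {
        /* before: struct example_sample_attr *sample_attr;
         * allocated during parsing, with no symmetric free path.
         * after: held by value, so it is released together with the
         * attr itself. */
        struct example_sample_attr sample_attr;
};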
-
Roi Dayan authored
The driver doesn't support multiple CT actions. Multiple CT clear
actions are OK, as they are redundant, also when combined with other CT
actions.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
In a later commit we are going to instantiate multiple attr instances
per flow instead of a single attr. Make sure mlx5e_tc_add_flow_mod_hdr()
uses the correct attr and not flow->attr.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
In a later commit we are going to instantiate multiple attr instances
per flow instead of a single attr. Make sure the parsing uses the
correct attr and not flow->attr.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Split the pedit verification part into a new subfunction for better
maintainability.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
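For illustration, the shape of the split; the function names and bodies
are assumptions:

struct example_pedit {
        unsigned int nkeys;
};

/* Validation only: nothing has been allocated yet, so failures here
 * need no unwind. */
static int example_verify_pedit(const struct example_pedit *pedit)
{
        if (!pedit->nkeys)
                return -EINVAL;
        return 0;
}

static int example_alloc_pedit_action(struct example_pedit *pedit)
{
        int err = example_verify_pedit(pedit);

        if (err)
                return err;
        /* ...allocation proper follows... */
        return 0;
}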
-
Roi Dayan authored
Move pedit_headers_action from the flow parse_state to the flow
parse_attr. In a follow-up commit we are going to have multiple attrs
per flow, and pedit_headers_action is unique per attr.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Move the shared code to alloc_flow_attr_counter() for reuse by the next
patches.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
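For illustration, the kind of helper this hoists out; mlx5_fc_create()
is the real fs counters API, while the helper body is an assumption:

/* One place creates the flow counter instead of several duplicated
 * call sites; 'true' requests an aging counter. */
static struct mlx5_fc *
example_alloc_flow_attr_counter(struct mlx5_core_dev *dev)
{
        return mlx5_fc_create(dev, true);
}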
-
Roi Dayan authored
In a later commit we will have multiple attr instances per flow, and we
would like to pass a specific attr instance when setting encaps.
Currently the mlx5_flow object contains a single mlx5_attr instance, but
multi table actions (e.g. CT) instantiate multiple attr instances, and
mlx5e_attach/detach_encap() reads the first attr instance from the flow
instance. Modify the functions to receive the attr instance as a
parameter, set by the calling function.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Roi Dayan authored
Split the code chunk that sets encap dests out of mlx5e_tc_add_fdb_flow()
to make the function smaller for maintainability and reuse. For symmetry,
do the same for mlx5e_tc_del_fdb_flow(). While at it, refactor the
cleanup to first check for the encap flag, as done when setting encap
dests.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
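For illustration, the shape of the extraction; the names and the
encap-flag check are assumptions patterned on the commit:

static int example_set_encap_dests(struct example_priv *priv,
                                   struct example_flow *flow,
                                   struct example_attr *attr)
{
        /* Bail out early when the attr carries no encap action,
         * mirroring the check the cleanup path now does too. */
        if (!(attr->action & EXAMPLE_ACTION_ENCAP))
                return 0;
        /* ...attach each encap destination here... */
        return 0;
}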
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter and can.

  Current release - new code bugs:

   - tcp: add a missing sk_defer_free_flush() in tcp_splice_read()

   - tcp: add a stub for sk_defer_free_flush(), fix CONFIG_INET=n

   - nf_tables: set last expression in register tracking area

   - nft_connlimit: fix memleak if nf_ct_netns_get() fails

   - mptcp: fix removing ids bitmap setting

   - bonding: use rcu_dereference_rtnl when getting active slave

   - fix three cases of sleep in atomic context in drivers: lan966x, gve

   - handful of build fixes for esoteric drivers after netdev->dev_addr
     was made const

  Previous releases - regressions:

   - revert "ipv6: Honor all IPv6 PIO Valid Lifetime values", it broke
     Linux compatibility with USGv6 tests

   - procfs: show net device bound packet types

   - ipv4: fix ip option filtering for locally generated fragments

   - phy: broadcom: hook up soft_reset for BCM54616S

  Previous releases - always broken:

   - ipv4: raw: lock the socket in raw_bind()

   - ipv4: decrease the use of shared IPID generator to decrease the
     chance of attackers guessing the values

   - procfs: fix cross-netns information leakage in /proc/net/ptype

   - ethtool: fix link extended state for big endian

   - bridge: vlan: fix single net device option dumping

   - ping: fix the sk_bound_dev_if match in ping_lookup"

* tag 'net-5.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (86 commits)
  net: bridge: vlan: fix memory leak in __allowed_ingress
  net: socket: rename SKB_DROP_REASON_SOCKET_FILTER
  ipv4: remove sparse error in ip_neigh_gw4()
  ipv4: avoid using shared IP generator for connected sockets
  ipv4: tcp: send zero IPID in SYNACK messages
  ipv4: raw: lock the socket in raw_bind()
  MAINTAINERS: add missing IPv4/IPv6 header paths
  MAINTAINERS: add more files to eth PHY
  net: stmmac: dwmac-sun8i: use return val of readl_poll_timeout()
  net: bridge: vlan: fix single net device option dumping
  net: stmmac: skip only stmmac_ptp_register when resume from suspend
  net: stmmac: configure PTP clock source prior to PTP initialization
  Revert "ipv6: Honor all IPv6 PIO Valid Lifetime values"
  connector/cn_proc: Use task_is_in_init_pid_ns()
  pid: Introduce helper task_is_in_init_pid_ns()
  gve: Fix GFP flags when allocing pages
  net: lan966x: Fix sleep in atomic context when updating MAC table
  net: lan966x: Fix sleep in atomic context when injecting frames
  ethernet: seeq/ether3: don't write directly to netdev->dev_addr
  ethernet: 8390/etherh: don't write directly to netdev->dev_addr
  ...
-
Tim Yi authored
When using per-vlan state, if vlan snooping and stats are disabled, an
untagged or priority-tagged ingress frame will go through the pvid state
check. If the port state is forwarding and the pvid state is not
learning/forwarding, the untagged or priority-tagged frame will be
dropped, but the skb memory is not freed. Free the skb when
__allowed_ingress() returns false.

Fixes: a580c76d ("net: bridge: vlan: add per-vlan state")
Signed-off-by: Tim Yi <tim.yi@pica8.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Link: https://lore.kernel.org/r/20220127074953.12632-1-tim.yi@pica8.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
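For illustration, the pattern of the fix; kfree_skb() is the real
kernel API, while the surrounding code is a simplified assumption:

bool example_allowed_ingress(struct sk_buff *skb);

static int example_handle_frame(struct sk_buff *skb)
{
        if (!example_allowed_ingress(skb)) {
                /* The drop path consumed the skb but previously never
                 * freed it -- release it here. */
                kfree_skb(skb);
                return NET_RX_DROP;
        }
        /* ...normal forwarding path... */
        return NET_RX_SUCCESS;
}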
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
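For illustration, a sketch of the simplification in a generic PCI probe
path; the function is illustrative, not any specific Intel driver's
code:

static int example_probe_dma(struct pci_dev *pdev)
{
        int err;

        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (err) {
                /* The removed fallback retried here with
                 * DMA_BIT_MASK(32), but that is dead code: a 64-bit
                 * mask only fails when dev->dma_mask is NULL, and then
                 * the 32-bit mask fails for the same reason. */
                dev_err(&pdev->dev, "DMA configuration failed\n");
                return err;
        }

        /* 'pci_using_dac' would be known to be 1 here, so the variable
         * and its conditional paths can be dropped. */
        return 0;
}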
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason. Simplify code and remove some dead code
accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason. Simplify code and remove some dead code
accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason. Simplify code and remove some dead code
accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Christophe JAILLET authored
As stated in [1], dma_set_mask() with a 64-bit mask never fails if
dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also
fail for the same reason.

So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to
be 1. Simplify code and remove some dead code accordingly.

[1]: https://lkml.org/lkml/2021/6/7/398

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-