Commit 75d6d8b5 authored by Jakub Kicinski

Merge branch 'devlink-mlx5-add-port-function-attributes-for-ipsec'

Saeed Mahameed says:

====================
{devlink,mlx5}: Add port function attributes for ipsec

From Dima:

Introduce hypervisor-level control knobs to set the functionality of PCI
VF devices passed through to guests. The administrator of a hypervisor
host may choose to change the settings of a port function from the
defaults configured by the device firmware.

The software stack has two types of IPsec offload - crypto and packet.
Specifically, the ip xfrm command has sub-commands for "state" and
"policy" that have an "offload" parameter. With ip xfrm state, both
crypto and packet offload types are supported, while ip xfrm policy can
only be offloaded in packet mode.
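
As an illustrative sketch (not part of this series; the addresses, SPI, key,
and device name below are placeholders, and the syntax assumes a recent
iproute2), the two offload types map to commands like:

  # Crypto offload: only the state's crypto operations go to the device.
  $ ip xfrm state add src 192.168.1.1 dst 192.168.1.2 \
        proto esp spi 0x1000 reqid 0x1000 mode transport \
        aead 'rfc4106(gcm(aes))' 0x0102030405060708090a0b0c0d0e0f1011121314 128 \
        offload crypto dev eth0 dir out

  # Packet offload: the same state with "offload packet", and additionally
  # the policy itself can be offloaded.
  $ ip xfrm policy add src 192.168.1.1 dst 192.168.1.2 dir out \
        tmpl src 192.168.1.1 dst 192.168.1.2 proto esp reqid 0x1000 mode transport \
        offload packet dev eth0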

The series introduces two new boolean attributes of a port function:
ipsec_crypto and ipsec_packet. The goal is to provide a similar level of
granularity for controlling VF IPsec offload capabilities, which would
be aligned with the software model. This will allow users to decide if
they want both types of offload enabled for a VF, just one of them, or
none at all (which is the default).

At a high level, the difference between the two knobs is that with
ipsec_crypto, only XFRM state can be offloaded. Specifically, only the
crypto operation (Encrypt/Decrypt) is offloaded. With ipsec_packet, both
XFRM state and policy can be offloaded. Furthermore, in addition to
crypto operation offload, IPsec encapsulation is also offloaded. For
XFRM state, choosing between crypto and packet offload types is
possible. From the HW perspective, different resources may be required
for each offload type.

Example of enabling IPsec packet offload for a VF while in switchdev mode:

  $ devlink port show pci/0000:06:00.0/1
      pci/0000:06:00.0/1: type eth netdev enp6s0pf0vf0 flavour pcivf pfnum 0 vfnum 0
          function:
          hw_addr 00:00:00:00:00:00 roce enable migratable disable ipsec_crypto disable ipsec_packet disable

  $ devlink port function set pci/0000:06:00.0/1 ipsec_packet enable

  $ devlink port show pci/0000:06:00.0/1
      pci/0000:06:00.0/1: type eth netdev enp6s0pf0vf0 flavour pcivf pfnum 0 vfnum 0
          function:
          hw_addr 00:00:00:00:00:00 roce enable migratable disable ipsec_crypto disable ipsec_packet enable

This enables the corresponding IPsec capability of the function before
it's enumerated, so when the driver reads the capability from the device
firmware, it is enabled. The driver is then able to configure
corresponding features and ops of the VF net device to support IPsec
state and policy offloading.
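
Sketching the resulting end-to-end flow (the PCI address and netdev name are
placeholders, and the xfrm parameters are as in the illustrative commands
above):

  # On the hypervisor, in switchdev mode, before the VF is probed or
  # passed through:
  $ devlink port function set pci/0000:06:00.0/1 ipsec_packet enable

  # Attach the VF to the VM; inside the guest, states and policies can
  # now be offloaded to the VF netdev:
  $ ip xfrm state add ... offload packet dev eth0 dir out
  $ ip xfrm policy add ... offload packet dev eth0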

v2: https://lore.kernel.org/netdev/20230421104901.897946-1-dchumak@nvidia.com/
====================

Link: https://lore.kernel.org/r/20230825062836.103744-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents aa05346d b691b111

Documentation/networking/device_drivers/ethernet/mellanox/mlx5/switchdev.rst:

@@ -190,6 +190,26 @@ explicitly enable the VF migratable capability.
The mlx5 driver supports the devlink port function attr mechanism to set up the
migratable capability (refer to Documentation/networking/devlink/devlink-port.rst).

IPsec crypto capability setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Users who want mlx5 PCI VFs to be able to perform IPsec crypto offloading need
to explicitly enable the VF ipsec_crypto capability. Enabling the IPsec
capability for VFs is supported starting with ConnectX6dx devices and above.
When a VF has the IPsec capability enabled, any IPsec offloading is blocked on
the PF.

The mlx5 driver supports the devlink port function attr mechanism to set up the
ipsec_crypto capability (refer to Documentation/networking/devlink/devlink-port.rst).

IPsec packet capability setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Users who want mlx5 PCI VFs to be able to perform IPsec packet offloading need
to explicitly enable the VF ipsec_packet capability. Enabling the IPsec
capability for VFs is supported starting with ConnectX6dx devices and above.
When a VF has the IPsec capability enabled, any IPsec offloading is blocked on
the PF.

The mlx5 driver supports the devlink port function attr mechanism to set up the
ipsec_packet capability (refer to Documentation/networking/devlink/devlink-port.rst).

SF state setup
--------------

Documentation/networking/devlink/devlink-port.rst:

@@ -128,6 +128,12 @@ Users may also set the RoCE capability of the function using
Users may also set the function as migratable using
'devlink port function set migratable' command.

Users may also set the IPsec crypto capability of the function using
`devlink port function set ipsec_crypto` command.

Users may also set the IPsec packet capability of the function using
`devlink port function set ipsec_packet` command.

Function attributes
===================

@@ -240,6 +246,55 @@ Attach VF to the VM.
Start the VM.
Perform live migration.

IPsec crypto capability setup
-----------------------------
When a user enables the IPsec crypto capability for a VF, the user application
can offload XFRM state crypto operations (Encrypt/Decrypt) to this VF.

When the IPsec crypto capability is disabled (the default) for a VF, the XFRM
state is processed in software by the kernel.

- Get the IPsec crypto capability of the VF device::

    $ devlink port show pci/0000:06:00.0/2
        pci/0000:06:00.0/2: type eth netdev enp6s0pf0vf1 flavour pcivf pfnum 0 vfnum 1
            function:
            hw_addr 00:00:00:00:00:00 ipsec_crypto disabled

- Set the IPsec crypto capability of the VF device::

    $ devlink port function set pci/0000:06:00.0/2 ipsec_crypto enable

    $ devlink port show pci/0000:06:00.0/2
        pci/0000:06:00.0/2: type eth netdev enp6s0pf0vf1 flavour pcivf pfnum 0 vfnum 1
            function:
            hw_addr 00:00:00:00:00:00 ipsec_crypto enabled

IPsec packet capability setup
-----------------------------
When a user enables the IPsec packet capability for a VF, the user application
can offload XFRM state and policy crypto operations (Encrypt/Decrypt), as well
as IPsec encapsulation, to this VF.

When the IPsec packet capability is disabled (the default) for a VF, the XFRM
state and policy are processed in software by the kernel.

- Get the IPsec packet capability of the VF device::

    $ devlink port show pci/0000:06:00.0/2
        pci/0000:06:00.0/2: type eth netdev enp6s0pf0vf1 flavour pcivf pfnum 0 vfnum 1
            function:
            hw_addr 00:00:00:00:00:00 ipsec_packet disabled

- Set the IPsec packet capability of the VF device::

    $ devlink port function set pci/0000:06:00.0/2 ipsec_packet enable

    $ devlink port show pci/0000:06:00.0/2
        pci/0000:06:00.0/2: type eth netdev enp6s0pf0vf1 flavour pcivf pfnum 0 vfnum 1
            function:
            hw_addr 00:00:00:00:00:00 ipsec_packet enabled

Subfunction
===========

drivers/net/ethernet/mellanox/mlx5/core/Makefile:

@@ -69,7 +69,7 @@ mlx5_core-$(CONFIG_MLX5_TC_SAMPLE) += en/tc/sample.o
#
mlx5_core-$(CONFIG_MLX5_ESWITCH)   += eswitch.o eswitch_offloads.o eswitch_offloads_termtbl.o \
                                      ecpf.o rdma.o esw/legacy.o \
                                      esw/devlink_port.o esw/vporttbl.o esw/qos.o esw/ipsec.o
mlx5_core-$(CONFIG_MLX5_ESWITCH)   += esw/acl/helper.o \
                                      esw/acl/egress_lgcy.o esw/acl/egress_ofld.o \

drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c:

@@ -38,6 +38,7 @@
#include <net/netevent.h>

#include "en.h"
#include "eswitch.h"
#include "ipsec.h"
#include "ipsec_rxtx.h"
#include "en_rep.h"

@@ -670,6 +671,11 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
        if (err)
                goto err_xfrm;

        if (!mlx5_eswitch_block_ipsec(priv->mdev)) {
                err = -EBUSY;
                goto err_xfrm;
        }

        /* check esn */
        if (x->props.flags & XFRM_STATE_ESN)
                mlx5e_ipsec_update_esn_state(sa_entry);

@@ -678,7 +684,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
        err = mlx5_ipsec_create_work(sa_entry);
        if (err)
                goto unblock_ipsec;

        err = mlx5e_ipsec_create_dwork(sa_entry);
        if (err)

@@ -735,6 +741,8 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
        if (sa_entry->work)
                kfree(sa_entry->work->data);
        kfree(sa_entry->work);
unblock_ipsec:
        mlx5_eswitch_unblock_ipsec(priv->mdev);
err_xfrm:
        kfree(sa_entry);
        NL_SET_ERR_MSG_WEAK_MOD(extack, "Device failed to offload this state");

@@ -764,6 +772,7 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
static void mlx5e_xfrm_free_state(struct xfrm_state *x)
{
        struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
        struct mlx5e_ipsec *ipsec = sa_entry->ipsec;

        if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
                goto sa_entry_free;

@@ -780,6 +789,7 @@ static void mlx5e_xfrm_free_state(struct xfrm_state *x)
        if (sa_entry->work)
                kfree(sa_entry->work->data);
        kfree(sa_entry->work);
        mlx5_eswitch_unblock_ipsec(ipsec->mdev);

sa_entry_free:
        kfree(sa_entry);
}

@@ -1055,6 +1065,11 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
        pol_entry->x = x;
        pol_entry->ipsec = priv->ipsec;

        if (!mlx5_eswitch_block_ipsec(priv->mdev)) {
                err = -EBUSY;
                goto ipsec_busy;
        }

        mlx5e_ipsec_build_accel_pol_attrs(pol_entry, &pol_entry->attrs);
        err = mlx5e_accel_ipsec_fs_add_pol(pol_entry);
        if (err)

@@ -1064,6 +1079,8 @@ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
        return 0;

err_fs:
        mlx5_eswitch_unblock_ipsec(priv->mdev);
ipsec_busy:
        kfree(pol_entry);
        NL_SET_ERR_MSG_MOD(extack, "Device failed to offload this policy");
        return err;

@@ -1074,6 +1091,7 @@ static void mlx5e_xfrm_del_policy(struct xfrm_policy *x)
        struct mlx5e_ipsec_pol_entry *pol_entry = to_ipsec_pol_entry(x);

        mlx5e_accel_ipsec_fs_del_pol(pol_entry);
        mlx5_eswitch_unblock_ipsec(pol_entry->ipsec->mdev);
}

static void mlx5e_xfrm_free_policy(struct xfrm_policy *x)

drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c:

@@ -254,6 +254,8 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
        mlx5_del_flow_rules(rx->sa.rule);
        mlx5_destroy_flow_group(rx->sa.group);
        mlx5_destroy_flow_table(rx->ft.sa);
        if (rx->allow_tunnel_mode)
                mlx5_eswitch_unblock_encap(mdev);
        if (rx == ipsec->rx_esw) {
                mlx5_esw_ipsec_rx_status_destroy(ipsec, rx);
        } else {

@@ -357,6 +359,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
                goto err_add;

        /* Create FT */
        if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
                rx->allow_tunnel_mode = mlx5_eswitch_block_encap(mdev);
        if (rx->allow_tunnel_mode)
                flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
        ft = ipsec_ft_create(attr.ns, attr.sa_level, attr.prio, 2, flags);

@@ -411,6 +415,8 @@ static int rx_create(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
err_fs:
        mlx5_destroy_flow_table(rx->ft.sa);
err_fs_ft:
        if (rx->allow_tunnel_mode)
                mlx5_eswitch_unblock_encap(mdev);
        mlx5_del_flow_rules(rx->status.rule);
        mlx5_modify_header_dealloc(mdev, rx->status.modify_hdr);
err_add:

@@ -428,26 +434,19 @@ static int rx_get(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
        if (rx->ft.refcnt)
                goto skip;

        err = mlx5_eswitch_block_mode(mdev);
        if (err)
                return err;

        err = rx_create(mdev, ipsec, rx, family);
        if (err) {
                mlx5_eswitch_unblock_mode(mdev);
                return err;
        }

skip:
        rx->ft.refcnt++;
        return 0;
}

static void rx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_rx *rx,

@@ -456,12 +455,8 @@ static void rx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_rx *rx,
        if (--rx->ft.refcnt)
                return;

        rx_destroy(ipsec->mdev, ipsec, rx, family);
        mlx5_eswitch_unblock_mode(ipsec->mdev);
}

static struct mlx5e_ipsec_rx *rx_ft_get(struct mlx5_core_dev *mdev,

@@ -581,6 +576,8 @@ static void tx_destroy(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx,
                mlx5_destroy_flow_group(tx->sa.group);
        }
        mlx5_destroy_flow_table(tx->ft.sa);
        if (tx->allow_tunnel_mode)
                mlx5_eswitch_unblock_encap(ipsec->mdev);
        mlx5_del_flow_rules(tx->status.rule);
        mlx5_destroy_flow_table(tx->ft.status);
}

@@ -621,6 +618,8 @@ static int tx_create(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx,
        if (err)
                goto err_status_rule;

        if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_TUNNEL)
                tx->allow_tunnel_mode = mlx5_eswitch_block_encap(mdev);
        if (tx->allow_tunnel_mode)
                flags = MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT;
        ft = ipsec_ft_create(tx->ns, attr.sa_level, attr.prio, 4, flags);

@@ -687,6 +686,8 @@ static int tx_create(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx,
err_sa_miss:
        mlx5_destroy_flow_table(tx->ft.sa);
err_sa_ft:
        if (tx->allow_tunnel_mode)
                mlx5_eswitch_unblock_encap(mdev);
        mlx5_del_flow_rules(tx->status.rule);
err_status_rule:
        mlx5_destroy_flow_table(tx->ft.status);

@@ -720,32 +721,22 @@ static int tx_get(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
        if (tx->ft.refcnt)
                goto skip;

        err = mlx5_eswitch_block_mode(mdev);
        if (err)
                return err;

        err = tx_create(ipsec, tx, ipsec->roce);
        if (err) {
                mlx5_eswitch_unblock_mode(mdev);
                return err;
        }

        if (tx == ipsec->tx_esw)
                ipsec_esw_tx_ft_policy_set(mdev, tx->ft.pol);

skip:
        tx->ft.refcnt++;
        return 0;
}

static void tx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx)

@@ -753,19 +744,13 @@ static void tx_put(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx)
        if (--tx->ft.refcnt)
                return;

        if (tx == ipsec->tx_esw) {
                mlx5_esw_ipsec_restore_dest_uplink(ipsec->mdev);
                ipsec_esw_tx_ft_policy_set(ipsec->mdev, NULL);
        }

        tx_destroy(ipsec, tx, ipsec->roce);
        mlx5_eswitch_unblock_mode(ipsec->mdev);
}

static struct mlx5_flow_table *tx_ft_get_policy(struct mlx5_core_dev *mdev,

drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c:

@@ -92,6 +92,12 @@ static const struct devlink_port_ops mlx5_esw_pf_vf_dl_port_ops = {
        .port_fn_roce_set = mlx5_devlink_port_fn_roce_set,
        .port_fn_migratable_get = mlx5_devlink_port_fn_migratable_get,
        .port_fn_migratable_set = mlx5_devlink_port_fn_migratable_set,
#ifdef CONFIG_XFRM_OFFLOAD
        .port_fn_ipsec_crypto_get = mlx5_devlink_port_fn_ipsec_crypto_get,
        .port_fn_ipsec_crypto_set = mlx5_devlink_port_fn_ipsec_crypto_set,
        .port_fn_ipsec_packet_get = mlx5_devlink_port_fn_ipsec_packet_get,
        .port_fn_ipsec_packet_set = mlx5_devlink_port_fn_ipsec_packet_set,
#endif /* CONFIG_XFRM_OFFLOAD */
};

static void mlx5_esw_offloads_sf_devlink_port_attrs_set(struct mlx5_eswitch *esw,

drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec.c (new file):

// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

#include <linux/mlx5/device.h>
#include <linux/mlx5/vport.h>
#include "mlx5_core.h"
#include "eswitch.h"

static int esw_ipsec_vf_query_generic(struct mlx5_core_dev *dev, u16 vport_num, bool *result)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        void *hca_cap, *query_cap;
        int err;

        if (!MLX5_CAP_GEN(dev, vhca_resource_manager))
                return -EOPNOTSUPP;

        if (!mlx5_esw_ipsec_vf_offload_supported(dev)) {
                *result = false;
                return 0;
        }

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        if (!query_cap)
                return -ENOMEM;

        err = mlx5_vport_get_other_func_general_cap(dev, vport_num, query_cap);
        if (err)
                goto free;

        hca_cap = MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability);
        *result = MLX5_GET(cmd_hca_cap, hca_cap, ipsec_offload);
free:
        kvfree(query_cap);
        return err;
}

enum esw_vport_ipsec_offload {
        MLX5_ESW_VPORT_IPSEC_CRYPTO_OFFLOAD,
        MLX5_ESW_VPORT_IPSEC_PACKET_OFFLOAD,
};

int mlx5_esw_ipsec_vf_offload_get(struct mlx5_core_dev *dev, struct mlx5_vport *vport)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        void *hca_cap, *query_cap;
        bool ipsec_enabled;
        int err;

        /* Querying IPsec caps only makes sense when generic ipsec_offload
         * HCA cap is enabled
         */
        err = esw_ipsec_vf_query_generic(dev, vport->vport, &ipsec_enabled);
        if (err)
                return err;
        if (!ipsec_enabled) {
                vport->info.ipsec_crypto_enabled = false;
                vport->info.ipsec_packet_enabled = false;
                return 0;
        }

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        if (!query_cap)
                return -ENOMEM;

        err = mlx5_vport_get_other_func_cap(dev, vport->vport, query_cap, MLX5_CAP_IPSEC);
        if (err)
                goto free;

        hca_cap = MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability);
        vport->info.ipsec_crypto_enabled =
                MLX5_GET(ipsec_cap, hca_cap, ipsec_crypto_offload);
        vport->info.ipsec_packet_enabled =
                MLX5_GET(ipsec_cap, hca_cap, ipsec_full_offload);
free:
        kvfree(query_cap);
        return err;
}

static int esw_ipsec_vf_set_generic(struct mlx5_core_dev *dev, u16 vport_num, bool ipsec_ofld)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
        void *hca_cap, *query_cap, *cap;
        int ret;

        if (!MLX5_CAP_GEN(dev, vhca_resource_manager))
                return -EOPNOTSUPP;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        hca_cap = kvzalloc(set_sz, GFP_KERNEL);
        if (!hca_cap || !query_cap) {
                ret = -ENOMEM;
                goto free;
        }

        ret = mlx5_vport_get_other_func_general_cap(dev, vport_num, query_cap);
        if (ret)
                goto free;

        cap = MLX5_ADDR_OF(set_hca_cap_in, hca_cap, capability);
        memcpy(cap, MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability),
               MLX5_UN_SZ_BYTES(hca_cap_union));
        MLX5_SET(cmd_hca_cap, cap, ipsec_offload, ipsec_ofld);

        MLX5_SET(set_hca_cap_in, hca_cap, opcode, MLX5_CMD_OP_SET_HCA_CAP);
        MLX5_SET(set_hca_cap_in, hca_cap, other_function, 1);
        MLX5_SET(set_hca_cap_in, hca_cap, function_id, vport_num);
        MLX5_SET(set_hca_cap_in, hca_cap, op_mod,
                 MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE << 1);
        ret = mlx5_cmd_exec_in(dev, set_hca_cap, hca_cap);
free:
        kvfree(hca_cap);
        kvfree(query_cap);
        return ret;
}

static int esw_ipsec_vf_set_bytype(struct mlx5_core_dev *dev, struct mlx5_vport *vport,
                                   bool enable, enum esw_vport_ipsec_offload type)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
        void *hca_cap, *query_cap, *cap;
        int ret;

        if (!MLX5_CAP_GEN(dev, vhca_resource_manager))
                return -EOPNOTSUPP;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        hca_cap = kvzalloc(set_sz, GFP_KERNEL);
        if (!hca_cap || !query_cap) {
                ret = -ENOMEM;
                goto free;
        }

        ret = mlx5_vport_get_other_func_cap(dev, vport->vport, query_cap, MLX5_CAP_IPSEC);
        if (ret)
                goto free;

        cap = MLX5_ADDR_OF(set_hca_cap_in, hca_cap, capability);
        memcpy(cap, MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability),
               MLX5_UN_SZ_BYTES(hca_cap_union));

        switch (type) {
        case MLX5_ESW_VPORT_IPSEC_CRYPTO_OFFLOAD:
                MLX5_SET(ipsec_cap, cap, ipsec_crypto_offload, enable);
                break;
        case MLX5_ESW_VPORT_IPSEC_PACKET_OFFLOAD:
                MLX5_SET(ipsec_cap, cap, ipsec_full_offload, enable);
                break;
        default:
                ret = -EOPNOTSUPP;
                goto free;
        }

        MLX5_SET(set_hca_cap_in, hca_cap, opcode, MLX5_CMD_OP_SET_HCA_CAP);
        MLX5_SET(set_hca_cap_in, hca_cap, other_function, 1);
        MLX5_SET(set_hca_cap_in, hca_cap, function_id, vport->vport);
        MLX5_SET(set_hca_cap_in, hca_cap, op_mod,
                 MLX5_SET_HCA_CAP_OP_MOD_IPSEC << 1);
        ret = mlx5_cmd_exec_in(dev, set_hca_cap, hca_cap);
free:
        kvfree(hca_cap);
        kvfree(query_cap);
        return ret;
}

static int esw_ipsec_vf_crypto_aux_caps_set(struct mlx5_core_dev *dev, u16 vport_num, bool enable)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        int set_sz = MLX5_ST_SZ_BYTES(set_hca_cap_in);
        struct mlx5_eswitch *esw = dev->priv.eswitch;
        void *hca_cap, *query_cap, *cap;
        int ret;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        hca_cap = kvzalloc(set_sz, GFP_KERNEL);
        if (!hca_cap || !query_cap) {
                ret = -ENOMEM;
                goto free;
        }

        ret = mlx5_vport_get_other_func_cap(dev, vport_num, query_cap, MLX5_CAP_ETHERNET_OFFLOADS);
        if (ret)
                goto free;

        cap = MLX5_ADDR_OF(set_hca_cap_in, hca_cap, capability);
        memcpy(cap, MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability),
               MLX5_UN_SZ_BYTES(hca_cap_union));
        MLX5_SET(per_protocol_networking_offload_caps, cap, insert_trailer, enable);
        MLX5_SET(set_hca_cap_in, hca_cap, opcode, MLX5_CMD_OP_SET_HCA_CAP);
        MLX5_SET(set_hca_cap_in, hca_cap, other_function, 1);
        MLX5_SET(set_hca_cap_in, hca_cap, function_id, vport_num);
        MLX5_SET(set_hca_cap_in, hca_cap, op_mod,
                 MLX5_SET_HCA_CAP_OP_MOD_ETHERNET_OFFLOADS << 1);
        ret = mlx5_cmd_exec_in(esw->dev, set_hca_cap, hca_cap);
free:
        kvfree(hca_cap);
        kvfree(query_cap);
        return ret;
}

static int esw_ipsec_vf_offload_set_bytype(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
                                           bool enable, enum esw_vport_ipsec_offload type)
{
        struct mlx5_core_dev *dev = esw->dev;
        int err;

        if (vport->vport == MLX5_VPORT_PF)
                return -EOPNOTSUPP;

        if (type == MLX5_ESW_VPORT_IPSEC_CRYPTO_OFFLOAD) {
                err = esw_ipsec_vf_crypto_aux_caps_set(dev, vport->vport, enable);
                if (err)
                        return err;
        }

        if (enable) {
                err = esw_ipsec_vf_set_generic(dev, vport->vport, enable);
                if (err)
                        return err;
                err = esw_ipsec_vf_set_bytype(dev, vport, enable, type);
                if (err)
                        return err;
        } else {
                err = esw_ipsec_vf_set_bytype(dev, vport, enable, type);
                if (err)
                        return err;
                err = mlx5_esw_ipsec_vf_offload_get(dev, vport);
                if (err)
                        return err;

                /* The generic ipsec_offload cap can be disabled only if both
                 * ipsec_crypto_offload and ipsec_full_offload aren't enabled.
                 */
                if (!vport->info.ipsec_crypto_enabled &&
                    !vport->info.ipsec_packet_enabled) {
                        err = esw_ipsec_vf_set_generic(dev, vport->vport, enable);
                        if (err)
                                return err;
                }
        }

        switch (type) {
        case MLX5_ESW_VPORT_IPSEC_CRYPTO_OFFLOAD:
                vport->info.ipsec_crypto_enabled = enable;
                break;
        case MLX5_ESW_VPORT_IPSEC_PACKET_OFFLOAD:
                vport->info.ipsec_packet_enabled = enable;
                break;
        default:
                return -EINVAL;
        }

        return 0;
}

static int esw_ipsec_offload_supported(struct mlx5_core_dev *dev, u16 vport_num)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        void *hca_cap, *query_cap;
        int ret;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        if (!query_cap)
                return -ENOMEM;

        ret = mlx5_vport_get_other_func_cap(dev, vport_num, query_cap, MLX5_CAP_GENERAL);
        if (ret)
                goto free;

        hca_cap = MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability);
        if (!MLX5_GET(cmd_hca_cap, hca_cap, log_max_dek))
                ret = -EOPNOTSUPP;
free:
        kvfree(query_cap);
        return ret;
}

bool mlx5_esw_ipsec_vf_offload_supported(struct mlx5_core_dev *dev)
{
        /* Old firmware doesn't support ipsec_offload capability for VFs. This
         * can be detected by checking reformat_add_esp_trasport capability -
         * when this cap isn't supported it means firmware cannot be trusted
         * about what it reports for ipsec_offload cap.
         */
        return MLX5_CAP_FLOWTABLE_NIC_TX(dev, reformat_add_esp_trasport);
}

int mlx5_esw_ipsec_vf_crypto_offload_supported(struct mlx5_core_dev *dev,
                                               u16 vport_num)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        void *hca_cap, *query_cap;
        int err;

        if (!mlx5_esw_ipsec_vf_offload_supported(dev))
                return -EOPNOTSUPP;

        err = esw_ipsec_offload_supported(dev, vport_num);
        if (err)
                return err;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        if (!query_cap)
                return -ENOMEM;

        err = mlx5_vport_get_other_func_cap(dev, vport_num, query_cap, MLX5_CAP_ETHERNET_OFFLOADS);
        if (err)
                goto free;

        hca_cap = MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability);
        if (!MLX5_GET(per_protocol_networking_offload_caps, hca_cap, swp))
                goto free;

free:
        kvfree(query_cap);
        return err;
}

int mlx5_esw_ipsec_vf_packet_offload_supported(struct mlx5_core_dev *dev,
                                               u16 vport_num)
{
        int query_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
        void *hca_cap, *query_cap;
        int ret;

        if (!mlx5_esw_ipsec_vf_offload_supported(dev))
                return -EOPNOTSUPP;

        ret = esw_ipsec_offload_supported(dev, vport_num);
        if (ret)
                return ret;

        query_cap = kvzalloc(query_sz, GFP_KERNEL);
        if (!query_cap)
                return -ENOMEM;

        ret = mlx5_vport_get_other_func_cap(dev, vport_num, query_cap, MLX5_CAP_FLOW_TABLE);
        if (ret)
                goto out;

        hca_cap = MLX5_ADDR_OF(query_hca_cap_out, query_cap, capability);
        if (!MLX5_GET(flow_table_nic_cap, hca_cap, flow_table_properties_nic_receive.decap)) {
                ret = -EOPNOTSUPP;
                goto out;
        }

out:
        kvfree(query_cap);
        return ret;
}

int mlx5_esw_ipsec_vf_crypto_offload_set(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
                                         bool enable)
{
        return esw_ipsec_vf_offload_set_bytype(esw, vport, enable,
                                               MLX5_ESW_VPORT_IPSEC_CRYPTO_OFFLOAD);
}

int mlx5_esw_ipsec_vf_packet_offload_set(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
                                         bool enable)
{
        return esw_ipsec_vf_offload_set_bytype(esw, vport, enable,
                                               MLX5_ESW_VPORT_IPSEC_PACKET_OFFLOAD);
}

drivers/net/ethernet/mellanox/mlx5/core/eswitch.c:

@@ -48,6 +48,7 @@
#include "devlink.h"
#include "ecpf.h"
#include "en/mod_hdr.h"
#include "en_accel/ipsec.h"

enum {
        MLX5_ACTION_NONE = 0,

@@ -831,6 +832,8 @@ static int mlx5_esw_vport_caps_get(struct mlx5_eswitch *esw, struct mlx5_vport *
        hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability);
        vport->info.mig_enabled = MLX5_GET(cmd_hca_cap_2, hca_caps, migratable);

        err = mlx5_esw_ipsec_vf_offload_get(esw->dev, vport);
out_free:
        kfree(query_ctx);
        return err;

@@ -913,6 +916,9 @@ int mlx5_esw_vport_enable(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
        /* Sync with current vport context */
        vport->enabled_events = enabled_events;
        vport->enabled = true;
        if (vport->vport != MLX5_VPORT_PF &&
            (vport->info.ipsec_crypto_enabled || vport->info.ipsec_packet_enabled))
                esw->enabled_ipsec_vf_count++;

        /* Esw manager is trusted by default. Host PF (vport 0) is trusted as well
         * in smartNIC as it's a vport group manager.

@@ -969,6 +975,10 @@ void mlx5_esw_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vport)
            MLX5_CAP_GEN(esw->dev, vhca_resource_manager))
                mlx5_esw_vport_vhca_id_clear(esw, vport_num);

        if (vport->vport != MLX5_VPORT_PF &&
            (vport->info.ipsec_crypto_enabled || vport->info.ipsec_packet_enabled))
                esw->enabled_ipsec_vf_count--;

        /* We don't assume VFs will cleanup after themselves.
         * Calling vport change handler while vport is disabled will cleanup
         * the vport resources.

@@ -2336,3 +2346,34 @@ struct mlx5_core_dev *mlx5_eswitch_get_core_dev(struct mlx5_eswitch *esw)
        return mlx5_esw_allowed(esw) ? esw->dev : NULL;
}
EXPORT_SYMBOL(mlx5_eswitch_get_core_dev);

bool mlx5_eswitch_block_ipsec(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;

        if (!mlx5_esw_allowed(esw))
                return true;

        mutex_lock(&esw->state_lock);
        if (esw->enabled_ipsec_vf_count) {
                mutex_unlock(&esw->state_lock);
                return false;
        }

        dev->num_ipsec_offloads++;
        mutex_unlock(&esw->state_lock);
        return true;
}

void mlx5_eswitch_unblock_ipsec(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;

        if (!mlx5_esw_allowed(esw))
                /* Failure means no eswitch => core dev is not a PF */
                return;

        mutex_lock(&esw->state_lock);
        dev->num_ipsec_offloads--;
        mutex_unlock(&esw->state_lock);
}

drivers/net/ethernet/mellanox/mlx5/core/eswitch.h:

@@ -163,6 +163,8 @@ struct mlx5_vport_info {
        u8 trusted: 1;
        u8 roce_enabled: 1;
        u8 mig_enabled: 1;
        u8 ipsec_crypto_enabled: 1;
        u8 ipsec_packet_enabled: 1;
};

/* Vport context events */

@@ -380,6 +382,7 @@ struct mlx5_eswitch {
        struct blocking_notifier_head n_head;
        struct xarray paired;
        struct mlx5_devcom_comp_dev *devcom;
        u16 enabled_ipsec_vf_count;
};

void esw_offloads_disable(struct mlx5_eswitch *esw);

@@ -558,6 +561,16 @@ int mlx5_devlink_port_fn_migratable_get(struct devlink_port *port, bool *is_enab
                                        struct netlink_ext_ack *extack);
int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable,
                                        struct netlink_ext_ack *extack);
#ifdef CONFIG_XFRM_OFFLOAD
int mlx5_devlink_port_fn_ipsec_crypto_get(struct devlink_port *port, bool *is_enabled,
                                          struct netlink_ext_ack *extack);
int mlx5_devlink_port_fn_ipsec_crypto_set(struct devlink_port *port, bool enable,
                                          struct netlink_ext_ack *extack);
int mlx5_devlink_port_fn_ipsec_packet_get(struct devlink_port *port, bool *is_enabled,
                                          struct netlink_ext_ack *extack);
int mlx5_devlink_port_fn_ipsec_packet_set(struct devlink_port *port, bool enable,
                                          struct netlink_ext_ack *extack);
#endif /* CONFIG_XFRM_OFFLOAD */
void *mlx5_eswitch_get_uplink_priv(struct mlx5_eswitch *esw, u8 rep_type);

int __mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,

@@ -829,10 +842,8 @@ int mlx5_eswitch_reload_reps(struct mlx5_eswitch *esw);
bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev);
void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev);
int mlx5_eswitch_block_mode(struct mlx5_core_dev *dev);
void mlx5_eswitch_unblock_mode(struct mlx5_core_dev *dev);

static inline int mlx5_eswitch_num_vfs(struct mlx5_eswitch *esw)
{

@@ -857,6 +868,22 @@ mlx5_eswitch_get_slow_fdb(struct mlx5_eswitch *esw)
int mlx5_eswitch_restore_ipsec_rule(struct mlx5_eswitch *esw, struct mlx5_flow_handle *rule,
                                    struct mlx5_esw_flow_attr *esw_attr, int attr_idx);
bool mlx5_eswitch_block_ipsec(struct mlx5_core_dev *dev);
void mlx5_eswitch_unblock_ipsec(struct mlx5_core_dev *dev);
bool mlx5_esw_ipsec_vf_offload_supported(struct mlx5_core_dev *dev);
int mlx5_esw_ipsec_vf_offload_get(struct mlx5_core_dev *dev,
                                  struct mlx5_vport *vport);
int mlx5_esw_ipsec_vf_crypto_offload_supported(struct mlx5_core_dev *dev,
                                               u16 vport_num);
int mlx5_esw_ipsec_vf_crypto_offload_set(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
                                         bool enable);
int mlx5_esw_ipsec_vf_packet_offload_set(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
                                         bool enable);
int mlx5_esw_ipsec_vf_packet_offload_supported(struct mlx5_core_dev *dev,
                                               u16 vport_num);
void mlx5_esw_vport_ipsec_offload_enable(struct mlx5_eswitch *esw);
void mlx5_esw_vport_ipsec_offload_disable(struct mlx5_eswitch *esw);

#else /* CONFIG_MLX5_ESWITCH */
/* eswitch API stubs */
static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }

@@ -916,13 +943,14 @@ static inline void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
{
}

static inline int mlx5_eswitch_block_mode(struct mlx5_core_dev *dev) { return 0; }
static inline void mlx5_eswitch_unblock_mode(struct mlx5_core_dev *dev) {}
static inline bool mlx5_eswitch_block_ipsec(struct mlx5_core_dev *dev)
{
        return false;
}
static inline void mlx5_eswitch_unblock_ipsec(struct mlx5_core_dev *dev) {}
#endif /* CONFIG_MLX5_ESWITCH */
#endif /* __MLX5_ESWITCH_H__ */

drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:

@@ -3641,65 +3641,32 @@ static bool esw_offloads_devlink_ns_eq_netdev_ns(struct devlink *devlink)
        return net_eq(devl_net, netdev_net);
}

int mlx5_eswitch_block_mode(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;
        int err;

        if (!mlx5_esw_allowed(esw))
                return 0;

        /* Take TC into account */
        err = mlx5_esw_try_lock(esw);
        if (err < 0)
                return err;

        esw->offloads.num_block_mode++;
        mlx5_esw_unlock(esw);
        return 0;
}

void mlx5_eswitch_unblock_mode(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;

        if (!mlx5_esw_allowed(esw))
                return;

        down_write(&esw->mode_lock);
        esw->offloads.num_block_mode--;
        up_write(&esw->mode_lock);
}

@@ -3903,38 +3870,28 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
bool mlx5_eswitch_block_encap(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;

        if (!mlx5_esw_allowed(esw))
                return true;

        down_write(&esw->mode_lock);
        if (esw->mode != MLX5_ESWITCH_LEGACY &&
            esw->offloads.encap != DEVLINK_ESWITCH_ENCAP_MODE_NONE) {
                up_write(&esw->mode_lock);
                return false;
        }

        esw->offloads.num_block_encap++;
        up_write(&esw->mode_lock);
        return true;
}

void mlx5_eswitch_unblock_encap(struct mlx5_core_dev *dev)
{
        struct mlx5_eswitch *esw = dev->priv.eswitch;

        if (!mlx5_esw_allowed(esw))
                return;

        down_write(&esw->mode_lock);

@@ -4410,3 +4367,172 @@ mlx5_eswitch_restore_ipsec_rule(struct mlx5_eswitch *esw, struct mlx5_flow_handl
        return mlx5_modify_rule_destination(rule, &new_dest, &old_dest);
}

#ifdef CONFIG_XFRM_OFFLOAD
int mlx5_devlink_port_fn_ipsec_crypto_get(struct devlink_port *port, bool *is_enabled,
                                          struct netlink_ext_ack *extack)
{
        struct mlx5_eswitch *esw;
        struct mlx5_vport *vport;
        int err = 0;

        esw = mlx5_devlink_eswitch_get(port->devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);

        if (!mlx5_esw_ipsec_vf_offload_supported(esw->dev)) {
                NL_SET_ERR_MSG_MOD(extack, "Device doesn't support IPsec crypto");
                return -EOPNOTSUPP;
        }

        vport = mlx5_devlink_port_vport_get(port);

        mutex_lock(&esw->state_lock);
        if (!vport->enabled) {
                err = -EOPNOTSUPP;
                goto unlock;
        }

        *is_enabled = vport->info.ipsec_crypto_enabled;
unlock:
        mutex_unlock(&esw->state_lock);
        return err;
}

int mlx5_devlink_port_fn_ipsec_crypto_set(struct devlink_port *port, bool enable,
                                          struct netlink_ext_ack *extack)
{
        struct mlx5_eswitch *esw;
        struct mlx5_vport *vport;
        u16 vport_num;
        int err;

        esw = mlx5_devlink_eswitch_get(port->devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);

        vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index);
        err = mlx5_esw_ipsec_vf_crypto_offload_supported(esw->dev, vport_num);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack,
                                   "Device doesn't support IPsec crypto");
                return err;
        }

        vport = mlx5_devlink_port_vport_get(port);

        mutex_lock(&esw->state_lock);
        if (!vport->enabled) {
                err = -EOPNOTSUPP;
                NL_SET_ERR_MSG_MOD(extack, "Eswitch vport is disabled");
                goto unlock;
        }

        if (vport->info.ipsec_crypto_enabled == enable)
                goto unlock;

        if (!esw->enabled_ipsec_vf_count && esw->dev->num_ipsec_offloads) {
                err = -EBUSY;
                goto unlock;
        }

        err = mlx5_esw_ipsec_vf_crypto_offload_set(esw, vport, enable);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack, "Failed to set IPsec crypto");
                goto unlock;
        }

        vport->info.ipsec_crypto_enabled = enable;
        if (enable)
                esw->enabled_ipsec_vf_count++;
        else
                esw->enabled_ipsec_vf_count--;
unlock:
        mutex_unlock(&esw->state_lock);
        return err;
}

int mlx5_devlink_port_fn_ipsec_packet_get(struct devlink_port *port, bool *is_enabled,
                                          struct netlink_ext_ack *extack)
{
        struct mlx5_eswitch *esw;
        struct mlx5_vport *vport;
        int err = 0;

        esw = mlx5_devlink_eswitch_get(port->devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);

        if (!mlx5_esw_ipsec_vf_offload_supported(esw->dev)) {
                NL_SET_ERR_MSG_MOD(extack, "Device doesn't support IPsec packet");
                return -EOPNOTSUPP;
        }

        vport = mlx5_devlink_port_vport_get(port);

        mutex_lock(&esw->state_lock);
        if (!vport->enabled) {
                err = -EOPNOTSUPP;
                goto unlock;
        }

        *is_enabled = vport->info.ipsec_packet_enabled;
unlock:
        mutex_unlock(&esw->state_lock);
        return err;
}

int mlx5_devlink_port_fn_ipsec_packet_set(struct devlink_port *port,
                                          bool enable,
                                          struct netlink_ext_ack *extack)
{
        struct mlx5_eswitch *esw;
        struct mlx5_vport *vport;
        u16 vport_num;
        int err;

        esw = mlx5_devlink_eswitch_get(port->devlink);
        if (IS_ERR(esw))
                return PTR_ERR(esw);

        vport_num = mlx5_esw_devlink_port_index_to_vport_num(port->index);
        err = mlx5_esw_ipsec_vf_packet_offload_supported(esw->dev, vport_num);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack,
                                   "Device doesn't support IPsec packet mode");
                return err;
        }

        vport = mlx5_devlink_port_vport_get(port);

        mutex_lock(&esw->state_lock);
        if (!vport->enabled) {
                err = -EOPNOTSUPP;
                NL_SET_ERR_MSG_MOD(extack, "Eswitch vport is disabled");
                goto unlock;
        }

        if (vport->info.ipsec_packet_enabled == enable)
                goto unlock;

        if (!esw->enabled_ipsec_vf_count && esw->dev->num_ipsec_offloads) {
                err = -EBUSY;
                goto unlock;
        }

        err = mlx5_esw_ipsec_vf_packet_offload_set(esw, vport, enable);
        if (err) {
                NL_SET_ERR_MSG_MOD(extack,
                                   "Failed to set IPsec packet mode");
                goto unlock;
        }

        vport->info.ipsec_packet_enabled = enable;
        if (enable)
                esw->enabled_ipsec_vf_count++;
        else
                esw->enabled_ipsec_vf_count--;
unlock:
        mutex_unlock(&esw->state_lock);
        return err;
}
#endif /* CONFIG_XFRM_OFFLOAD */

include/linux/mlx5/driver.h:

@@ -813,6 +813,7 @@ struct mlx5_core_dev {
        /* MACsec notifier chain to sync MACsec core and IB database */
        struct blocking_notifier_head macsec_nh;
#endif
        u64 num_ipsec_offloads;
};

struct mlx5_db {

include/linux/mlx5/mlx5_ifc.h:

@@ -65,9 +65,11 @@ enum {
enum {
        MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE        = 0x0,
        MLX5_SET_HCA_CAP_OP_MOD_ETHERNET_OFFLOADS     = 0x1,
        MLX5_SET_HCA_CAP_OP_MOD_ODP                   = 0x2,
        MLX5_SET_HCA_CAP_OP_MOD_ATOMIC                = 0x3,
        MLX5_SET_HCA_CAP_OP_MOD_ROCE                  = 0x4,
        MLX5_SET_HCA_CAP_OP_MOD_IPSEC                 = 0x15,
        MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE2       = 0x20,
        MLX5_SET_HCA_CAP_OP_MOD_PORT_SELECTION        = 0x25,
};

@@ -3451,6 +3453,7 @@ union mlx5_ifc_hca_cap_union_bits {
        struct mlx5_ifc_virtio_emulation_cap_bits virtio_emulation_cap;
        struct mlx5_ifc_macsec_cap_bits macsec_cap;
        struct mlx5_ifc_crypto_cap_bits crypto_cap;
        struct mlx5_ifc_ipsec_cap_bits ipsec_cap;
        u8 reserved_at_0[0x8000];
};

include/net/devlink.h:

@@ -1583,6 +1583,24 @@ void devlink_free(struct devlink *devlink);
 *                        Should be used by device drivers to set
 *                        the admin state of a function managed
 *                        by the devlink port.
 * @port_fn_ipsec_crypto_get: Callback used to get port function's ipsec_crypto
 *                            capability. Should be used by device drivers
 *                            to report the current state of ipsec_crypto
 *                            capability of a function managed by the devlink
 *                            port.
 * @port_fn_ipsec_crypto_set: Callback used to set port function's ipsec_crypto
 *                            capability. Should be used by device drivers to
 *                            enable/disable ipsec_crypto capability of a
 *                            function managed by the devlink port.
 * @port_fn_ipsec_packet_get: Callback used to get port function's ipsec_packet
 *                            capability. Should be used by device drivers
 *                            to report the current state of ipsec_packet
 *                            capability of a function managed by the devlink
 *                            port.
 * @port_fn_ipsec_packet_set: Callback used to set port function's ipsec_packet
 *                            capability. Should be used by device drivers to
 *                            enable/disable ipsec_packet capability of a
 *                            function managed by the devlink port.
 *
 * Note: Driver should return -EOPNOTSUPP if it doesn't support
 * port function (@port_fn_*) handling for a particular port.

@@ -1620,6 +1638,18 @@ struct devlink_port_ops {
        int (*port_fn_state_set)(struct devlink_port *port,
                                 enum devlink_port_fn_state state,
                                 struct netlink_ext_ack *extack);
        int (*port_fn_ipsec_crypto_get)(struct devlink_port *devlink_port,
                                        bool *is_enable,
                                        struct netlink_ext_ack *extack);
        int (*port_fn_ipsec_crypto_set)(struct devlink_port *devlink_port,
                                        bool enable,
                                        struct netlink_ext_ack *extack);
        int (*port_fn_ipsec_packet_get)(struct devlink_port *devlink_port,
                                        bool *is_enable,
                                        struct netlink_ext_ack *extack);
        int (*port_fn_ipsec_packet_set)(struct devlink_port *devlink_port,
                                        bool enable,
                                        struct netlink_ext_ack *extack);
};

void devlink_port_init(struct devlink *devlink,

include/uapi/linux/devlink.h:

@@ -661,6 +661,8 @@ enum devlink_resource_unit {
enum devlink_port_fn_attr_cap {
        DEVLINK_PORT_FN_ATTR_CAP_ROCE_BIT,
        DEVLINK_PORT_FN_ATTR_CAP_MIGRATABLE_BIT,
        DEVLINK_PORT_FN_ATTR_CAP_IPSEC_CRYPTO_BIT,
        DEVLINK_PORT_FN_ATTR_CAP_IPSEC_PACKET_BIT,

        /* Add new caps above */
        __DEVLINK_PORT_FN_ATTR_CAPS_MAX,

@@ -669,6 +671,8 @@ enum devlink_port_fn_attr_cap {
#define DEVLINK_PORT_FN_CAP_ROCE _BITUL(DEVLINK_PORT_FN_ATTR_CAP_ROCE_BIT)
#define DEVLINK_PORT_FN_CAP_MIGRATABLE \
        _BITUL(DEVLINK_PORT_FN_ATTR_CAP_MIGRATABLE_BIT)
#define DEVLINK_PORT_FN_CAP_IPSEC_CRYPTO _BITUL(DEVLINK_PORT_FN_ATTR_CAP_IPSEC_CRYPTO_BIT)
#define DEVLINK_PORT_FN_CAP_IPSEC_PACKET _BITUL(DEVLINK_PORT_FN_ATTR_CAP_IPSEC_PACKET_BIT)

enum devlink_port_function_attr {
        DEVLINK_PORT_FUNCTION_ATTR_UNSPEC,

net/devlink/port.c:

@@ -492,6 +492,50 @@ static int devlink_port_fn_migratable_fill(struct devlink_port *devlink_port,
        return 0;
}

static int devlink_port_fn_ipsec_crypto_fill(struct devlink_port *devlink_port,
                                             struct nla_bitfield32 *caps,
                                             struct netlink_ext_ack *extack)
{
        bool is_enable;
        int err;

        if (!devlink_port->ops->port_fn_ipsec_crypto_get ||
            devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_PCI_VF)
                return 0;

        err = devlink_port->ops->port_fn_ipsec_crypto_get(devlink_port, &is_enable, extack);
        if (err) {
                if (err == -EOPNOTSUPP)
                        return 0;
                return err;
        }

        devlink_port_fn_cap_fill(caps, DEVLINK_PORT_FN_CAP_IPSEC_CRYPTO, is_enable);
        return 0;
}

static int devlink_port_fn_ipsec_packet_fill(struct devlink_port *devlink_port,
                                             struct nla_bitfield32 *caps,
                                             struct netlink_ext_ack *extack)
{
        bool is_enable;
        int err;

        if (!devlink_port->ops->port_fn_ipsec_packet_get ||
            devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_PCI_VF)
                return 0;

        err = devlink_port->ops->port_fn_ipsec_packet_get(devlink_port, &is_enable, extack);
        if (err) {
                if (err == -EOPNOTSUPP)
                        return 0;
                return err;
        }

        devlink_port_fn_cap_fill(caps, DEVLINK_PORT_FN_CAP_IPSEC_PACKET, is_enable);
        return 0;
}

static int devlink_port_fn_caps_fill(struct devlink_port *devlink_port,
                                     struct sk_buff *msg,
                                     struct netlink_ext_ack *extack,

@@ -508,6 +552,14 @@ static int devlink_port_fn_caps_fill(struct devlink_port *devlink_port,
        if (err)
                return err;

        err = devlink_port_fn_ipsec_crypto_fill(devlink_port, &caps, extack);
        if (err)
                return err;

        err = devlink_port_fn_ipsec_packet_fill(devlink_port, &caps, extack);
        if (err)
                return err;

        if (!caps.selector)
                return 0;
        err = nla_put_bitfield32(msg, DEVLINK_PORT_FN_ATTR_CAPS, caps.value,

@@ -838,6 +890,20 @@ devlink_port_fn_roce_set(struct devlink_port *devlink_port, bool enable,
                                 extack);
}

static int
devlink_port_fn_ipsec_crypto_set(struct devlink_port *devlink_port, bool enable,
                                 struct netlink_ext_ack *extack)
{
        return devlink_port->ops->port_fn_ipsec_crypto_set(devlink_port, enable, extack);
}

static int
devlink_port_fn_ipsec_packet_set(struct devlink_port *devlink_port, bool enable,
                                 struct netlink_ext_ack *extack)
{
        return devlink_port->ops->port_fn_ipsec_packet_set(devlink_port, enable, extack);
}

static int devlink_port_fn_caps_set(struct devlink_port *devlink_port,
                                    const struct nlattr *attr,
                                    struct netlink_ext_ack *extack)

@@ -862,6 +928,20 @@ static int devlink_port_fn_caps_set(struct devlink_port *devlink_port,
                if (err)
                        return err;
        }
        if (caps.selector & DEVLINK_PORT_FN_CAP_IPSEC_CRYPTO) {
                err = devlink_port_fn_ipsec_crypto_set(devlink_port, caps_value &
                                                       DEVLINK_PORT_FN_CAP_IPSEC_CRYPTO,
                                                       extack);
                if (err)
                        return err;
        }
        if (caps.selector & DEVLINK_PORT_FN_CAP_IPSEC_PACKET) {
                err = devlink_port_fn_ipsec_packet_set(devlink_port, caps_value &
                                                       DEVLINK_PORT_FN_CAP_IPSEC_PACKET,
                                                       extack);
                if (err)
                        return err;
        }
        return 0;
}

@@ -1226,6 +1306,30 @@ static int devlink_port_function_validate(struct devlink_port *devlink_port,
                        return -EOPNOTSUPP;
                }
        }
        if (caps.selector & DEVLINK_PORT_FN_CAP_IPSEC_CRYPTO) {
                if (!ops->port_fn_ipsec_crypto_set) {
                        NL_SET_ERR_MSG_ATTR(extack, attr,
                                            "Port doesn't support ipsec_crypto function attribute");
                        return -EOPNOTSUPP;
                }
                if (devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_PCI_VF) {
                        NL_SET_ERR_MSG_ATTR(extack, attr,
                                            "ipsec_crypto function attribute supported for VFs only");
                        return -EOPNOTSUPP;
                }
        }
        if (caps.selector & DEVLINK_PORT_FN_CAP_IPSEC_PACKET) {
                if (!ops->port_fn_ipsec_packet_set) {
                        NL_SET_ERR_MSG_ATTR(extack, attr,
                                            "Port doesn't support ipsec_packet function attribute");
                        return -EOPNOTSUPP;
                }
                if (devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_PCI_VF) {
                        NL_SET_ERR_MSG_ATTR(extack, attr,
                                            "ipsec_packet function attribute supported for VFs only");
                        return -EOPNOTSUPP;
                }
        }
        return 0;
}