    ice: introduce XDP_TX fallback path · 22bf877e
    Maciej Fijalkowski authored
    Under rare circumstances the requirement of having one XDP Tx queue
    per CPU cannot be fulfilled and some of the Tx resources have to be
    shared between CPUs. This creates a need to place accesses to
    xdp_ring inside a critical section protected by a spinlock. These
    accesses happen to be in the hot path, so introduce a static branch
    that is enabled from the control plane when the driver cannot
    provide a dedicated XDP Tx queue for each CPU.
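
    Below is a minimal sketch of the hot-path pattern described above.
    It assumes illustrative names (ice_xdp_locking_key, struct
    ice_tx_ring, __ice_xmit_xdp_ring) rather than the driver's exact
    identifiers; the real implementation may differ.

        #include <linux/jump_label.h>
        #include <linux/spinlock.h>
        #include <net/xdp.h>

        /* Flipped from the control plane only when XDP Tx rings have to
         * be shared between CPUs; stays disabled (patched to a nop)
         * otherwise.
         */
        DEFINE_STATIC_KEY_FALSE(ice_xdp_locking_key);

        /* __ice_xmit_xdp_ring() stands in for the lock-free transmit
         * helper that actually places the frame on the ring.
         */
        static int ice_xmit_xdp_ring_sketch(struct xdp_frame *xdpf,
                                            struct ice_tx_ring *xdp_ring)
        {
            int ret;

            /* the lock is only taken on the fallback path */
            if (static_branch_unlikely(&ice_xdp_locking_key))
                spin_lock(&xdp_ring->tx_lock);

            ret = __ice_xmit_xdp_ring(xdpf, xdp_ring);

            if (static_branch_unlikely(&ice_xdp_locking_key))
                spin_unlock(&xdp_ring->tx_lock);

            return ret;
        }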
    
    Currently, the chosen design is to allow any number of XDP Tx queues
    that is at least half the number of CPUs the platform has. For a
    lower count the driver bails out and tells the user that there are
    not enough Tx resources to configure XDP. Ring sharing is signalled
    by enabling the static branch, which in turn indicates that the lock
    for xdp_ring accesses needs to be taken in the hot path.
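
    A control-plane sketch of that queue-count policy follows, reusing
    the hypothetical ice_xdp_locking_key from the sketch above; the
    helper name and exact checks are illustrative, not the driver's
    code.

        #include <linux/cpumask.h>

        /* Decide how many XDP Tx queues to use given the number of Tx
         * queues still available. Reject XDP setup below half of the
         * CPU count, enable the shared-ring fallback in between.
         */
        static int ice_pick_xdp_txq_count_sketch(unsigned int avail_txq)
        {
            unsigned int cpus = num_possible_cpus();

            if (avail_txq >= cpus)
                return cpus;        /* one dedicated ring per CPU */

            if (avail_txq < cpus / 2)
                return -ENOMEM;     /* bail out: not enough Tx resources */

            /* Rings will be shared; flip the global switch so the hot
             * path starts taking xdp_ring->tx_lock. static_branch_inc()
             * is reference counted, so several PFs can enable it.
             */
            static_branch_inc(&ice_xdp_locking_key);
            return avail_txq;
        }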
    
    The approach based on a static branch has no impact on the
    performance of the non-fallback path. One thing worth mentioning is
    that the static branch acts as a global driver switch, meaning that
    if one PF runs out of Tx resources, the other PFs serviced by the
    ice driver will suffer as well. However, given that the HW handled
    by the ice driver has 1024 Tx queues per PF, this is currently an
    unlikely scenario.
    Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
    Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
    Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>