Commit 432e629e authored by Saeed Mahameed, committed by David S. Miller

net/mlx4_en: Don't reuse RX page when XDP is set

When a new rx packet arrives, the rx path will decide whether to reuse
the remainder of the page or not according to one of the below conditions:
1. frag_info->frag_stride == PAGE_SIZE / 2
2. frags->page_offset + frag_info->frag_size > PAGE_SIZE;

The first condition is not met when XDP is set.
For XDP, page_offset is always set to priv->rx_headroom, which is
XDP_PACKET_HEADROOM, and frag_info->frag_size is around MTU size plus
some padding. The 2nd release condition does not hold either, since
XDP_PACKET_HEADROOM + 1536 < PAGE_SIZE; as a result the page is not
released and is _wrongly_ reused for the next free rx descriptor.

XDP assumes one page per packet; reuse breaks that assumption and can
cause packet data corruption.

Fix this by adding an extra condition (!priv->rx_headroom) to the 2nd
case, avoiding page reuse when XDP is set: rx_headroom is 0 for a
non-XDP setup and XDP_PACKET_HEADROOM for an XDP setup.

No additional cache line is required for the new condition.

Fixes: 34db548b ("mlx4: add page recycling in receive path")
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Suggested-by: Martin KaFai Lau <kafai@fb.com>
CC: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent c1334597
@@ -474,10 +474,10 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 {
 	const struct mlx4_en_frag_info *frag_info = priv->frag_info;
 	unsigned int truesize = 0;
+	bool release = true;
 	int nr, frag_size;
 	struct page *page;
 	dma_addr_t dma;
-	bool release;
 
 	/* Collect used fragments while replacing them in the HW descriptors */
 	for (nr = 0;; frags++) {
@@ -500,7 +500,11 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 			release = page_count(page) != 1 ||
 				  page_is_pfmemalloc(page) ||
 				  page_to_nid(page) != numa_mem_id();
-		} else {
+		} else if (!priv->rx_headroom) {
+			/* rx_headroom for non XDP setup is always 0.
+			 * When XDP is set, the above condition will
+			 * guarantee page is always released.
+			 */
 			u32 sz_align = ALIGN(frag_size, SMP_CACHE_BYTES);
 
 			frags->page_offset += sz_align;