Commit c37d6e3f authored by Shannon Nelson, committed by David S. Miller

ionic: restrict received packets to mtu size

Make sure the NIC drops packets that are larger than the
specified MTU.

The front end of the NIC will accept packets larger than the MTU and
will copy all the data it can to fill up the driver's posted
buffers; if the buffers are not long enough, the packet will
then get dropped.  With the Rx SG buffers allocated as full
pages, we are currently posting more buffer space than the MTU
requires and end up receiving some packets that are larger
than the MTU, up to the size of the buffers posted.  To be sure the
NIC doesn't waste our time with oversized packets, we need to
lie a little in the SG descriptor about how long the last
SG element is.

At dealloc time, we know the allocation was a page, so the
deallocation doesn't care what length we put in the
descriptor.
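
For illustration only, here is a minimal user-space sketch of the
length-splitting arithmetic described above.  split_rx_len() is a
hypothetical helper, not driver code, and PAGE_SIZE is assumed to be
4096; the real change lives in ionic_rx_fill() in the diff below.

/* Sketch: carve a posted receive length into page-sized segments,
 * shortening the final segment to the bytes that remain, so the
 * descriptors only advertise MTU-worth of buffer space.
 */
#include <stdio.h>

#define PAGE_SIZE 4096u

static void split_rx_len(unsigned int len)
{
	unsigned int remain_len = len;
	unsigned int seg_len;
	unsigned int seg = 0;

	while (remain_len) {
		/* each segment is at most one page, but never more
		 * than the bytes still owed for this packet
		 */
		seg_len = remain_len < PAGE_SIZE ? remain_len : PAGE_SIZE;
		printf("segment %u: %u bytes\n", seg++, seg_len);
		remain_len -= seg_len;
	}
}

int main(void)
{
	/* e.g. a jumbo-frame sized buffer: three segments, the last
	 * one shorter than a full page
	 */
	split_rx_len(9216);
	return 0;
}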
Signed-off-by: Shannon Nelson <snelson@pensando.io>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 24cfa8c7
@@ -343,6 +343,8 @@ void ionic_rx_fill(struct ionic_queue *q)
 	struct ionic_rxq_sg_desc *sg_desc;
 	struct ionic_rxq_sg_elem *sg_elem;
 	struct ionic_rxq_desc *desc;
+	unsigned int remain_len;
+	unsigned int seg_len;
 	unsigned int nfrags;
 	bool ring_doorbell;
 	unsigned int i, j;
@@ -352,6 +354,7 @@ void ionic_rx_fill(struct ionic_queue *q)
 	nfrags = round_up(len, PAGE_SIZE) / PAGE_SIZE;
 
 	for (i = ionic_q_space_avail(q); i; i--) {
+		remain_len = len;
 		desc_info = q->head;
 		desc = desc_info->desc;
 		sg_desc = desc_info->sg_desc;
@@ -375,7 +378,9 @@ void ionic_rx_fill(struct ionic_queue *q)
 			return;
 		}
 		desc->addr = cpu_to_le64(page_info->dma_addr);
-		desc->len = cpu_to_le16(PAGE_SIZE);
+		seg_len = min_t(unsigned int, PAGE_SIZE, len);
+		desc->len = cpu_to_le16(seg_len);
+		remain_len -= seg_len;
 		page_info++;
 
 		/* fill sg descriptors - pages[1..n] */
@@ -391,7 +396,9 @@ void ionic_rx_fill(struct ionic_queue *q)
 				return;
 			}
 			sg_elem->addr = cpu_to_le64(page_info->dma_addr);
-			sg_elem->len = cpu_to_le16(PAGE_SIZE);
+			seg_len = min_t(unsigned int, PAGE_SIZE, remain_len);
+			sg_elem->len = cpu_to_le16(seg_len);
+			remain_len -= seg_len;
 			page_info++;
 		}