Nick Child authored
When the length of a packet is under the rx_copybreak threshold, the buffer is copied into a new skb and sent up the stack. This allows the DMA-mapped memory to be recycled back to FW.

Previously, the reuse of the DMA space was handled immediately, which meant that further packet processing had to wait until h_add_logical_lan finished for this packet. Therefore, when reusing a packet, offload the hcall to the replenish function. As a result, much of the shared logic between the recycle and replenish functions can be removed.

This change increases the TCP_RR packet rate by another 15% (370k to 430k txns). The ftrace data supports this:

PREV: ibmveth_poll = 8078553.0 us / 190999.0 hits = AVG 42.3 us
NEW:  ibmveth_poll = 7632787.0 us / 224060.0 hits = AVG 34.07 us

Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Reviewed-by: Shannon Nelson <shannon.nelson@amd.com>
Link: https://patch.msgid.link/20240801211215.128101-3-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
b5381a55
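
To make the described flow concrete, here is a minimal, hypothetical C sketch of the rx_copybreak decision the commit talks about: small frames are copied into a fresh skb, and instead of issuing the recycle hcall inline, the original buffer is only counted as pending so the replenish routine can hand it back to FW later. This is not the ibmveth driver's actual code; the sketch_adapter struct, its fields, and the pending_recycle counter are assumptions for illustration, and only the standard skb/NAPI helpers (napi_alloc_skb, skb_put_data, eth_type_trans, napi_gro_receive) are real kernel APIs.

```c
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* Hypothetical per-adapter state, for this sketch only. */
struct sketch_adapter {
	struct net_device *netdev;
	unsigned int rx_copybreak;	/* copy threshold, in bytes */
	unsigned int pending_recycle;	/* buffers waiting for replenish */
};

/*
 * Small-packet receive path: copy the frame into a new skb so the
 * original DMA-mapped buffer can be returned to firmware.  Rather than
 * issuing h_add_logical_lan here per packet (the old behaviour), the
 * buffer is only marked as pending and the hcall is left to the
 * replenish routine, so the poll loop is not stalled on the hcall.
 *
 * Returns true if the frame was copied and the original buffer can be
 * recycled later; false if the caller should use the original buffer.
 */
static bool sketch_rx_copybreak(struct sketch_adapter *adapter,
				struct napi_struct *napi,
				const void *buf, unsigned int len)
{
	struct sk_buff *skb;

	if (len >= adapter->rx_copybreak)
		return false;

	skb = napi_alloc_skb(napi, len);
	if (!skb)
		return false;	/* allocation failed, keep the original */

	skb_put_data(skb, buf, len);	/* copy payload into the new skb */
	skb->protocol = eth_type_trans(skb, adapter->netdev);
	napi_gro_receive(napi, skb);

	/* Defer the recycle hcall to the replenish function. */
	adapter->pending_recycle++;
	return true;
}
```

Under these assumptions, the replenish function would later walk the pending buffers and batch the h_add_logical_lan calls outside the per-packet receive path, which is where the reported latency win comes from.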