Commit 5933dd2f authored by Eric Dumazet, committed by David S. Miller

net: NET_SKB_PAD should depend on L1_CACHE_BYTES

In old kernels, NET_SKB_PAD was defined to 16.

Then commit d6301d3d (net: Increase default NET_SKB_PAD to 32) and
commit 18e8c134 (net: Increase NET_SKB_PAD to 64 bytes) raised it,
first to 32 and then to 64.

While the first patch was governed by network stack needs, the second
was driven more by performance issues on current hardware. The real
intent was to align data on a cache line boundary.

So use max(32, L1_CACHE_BYTES) instead of 64, to be more generic.

Remove microblaze and powerpc own NET_SKB_PAD definitions.

Thanks to Alexander Duyck and David Miller for their comments.
Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent a95d8c88
@@ -101,10 +101,7 @@ extern struct dentry *of_debugfs_root;
  * MicroBlaze doesn't handle unaligned accesses in hardware.
  *
  * Based on this we force the IP header alignment in network drivers.
- * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
- * cacheline alignment of buffers.
  */
 #define NET_IP_ALIGN	2
-#define NET_SKB_PAD	L1_CACHE_BYTES
 
 #endif /* _ASM_MICROBLAZE_SYSTEM_H */
@@ -515,11 +515,8 @@ __cmpxchg_local(volatile void *ptr, unsigned long old, unsigned long new,
  * powers of 2 writes until it reaches sufficient alignment).
  *
  * Based on this we disable the IP header alignment in network drivers.
- * We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
- * cacheline alignment of buffers.
  */
 #define NET_IP_ALIGN	0
-#define NET_SKB_PAD	L1_CACHE_BYTES
 
 #define cmpxchg64(ptr, o, n)					\
   ({								\
...
@@ -1414,12 +1414,14 @@ static inline int skb_network_offset(const struct sk_buff *skb)
  *
  * Various parts of the networking layer expect at least 32 bytes of
  * headroom, you should not reduce this.
- * With RPS, we raised NET_SKB_PAD to 64 so that get_rps_cpus() fetches span
- * a 64 bytes aligned block to fit modern (>= 64 bytes) cache line sizes
+ *
+ * Using max(32, L1_CACHE_BYTES) makes sense (especially with RPS)
+ * to reduce average number of cache lines per packet.
+ * get_rps_cpus() for example only access one 64 bytes aligned block :
  * NET_IP_ALIGN(2) + ethernet_header(14) + IP_header(20/40) + ports(8)
  */
 #ifndef NET_SKB_PAD
-#define NET_SKB_PAD	64
+#define NET_SKB_PAD	max(32, L1_CACHE_BYTES)
 #endif
 
 extern int ___pskb_trim(struct sk_buff *skb, unsigned int len);
...