- 02 May, 2017 23 commits
-
-
Sunil Goutham authored
Adds support for XDP_DROP. Also, since in XDP mode there is just a single buffer per page, changes were made to recycle the DMA mapping info along with the pages. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
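A minimal sketch of how a driver Rx path typically dispatches on the XDP verdict; the helper nicvf_recycle_page() and the function boundaries here are assumptions standing in for the driver's own routines, not the actual nicvf code:

    /* hypothetical helper called from the Rx CQE handler */
    static bool nicvf_xdp_rx(struct nicvf *nic, struct bpf_prog *prog,
                             struct page *page, unsigned int pad,
                             unsigned int len)
    {
            struct xdp_buff xdp;
            u32 act;

            xdp.data_hard_start = page_address(page);
            xdp.data = xdp.data_hard_start + pad;
            xdp.data_end = xdp.data + len;

            act = bpf_prog_run_xdp(prog, &xdp);
            switch (act) {
            case XDP_PASS:
                    return false;                  /* hand the packet to the stack */
            case XDP_DROP:
                    nicvf_recycle_page(nic, page); /* hypothetical: keep page and DMA mapping */
                    return true;                   /* packet consumed here */
            default:
                    bpf_warn_invalid_xdp_action(act);
                    nicvf_recycle_page(nic, page);
                    return true;
            }
    }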
-
Sunil Goutham authored
Adds basic XDP support, i.e. attaching a BPF program to an interface. Also takes care of allocating separate Tx queues for the XDP path and for network stack packet transmission. This patch doesn't support handling of any of the XDP actions; all are treated as XDP_PASS, i.e. packets will be handed over to the network stack. Changes also involve allocating one receive buffer per page in XDP mode and multiple in normal mode, i.e. when no BPF program is attached. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sunil Goutham authored
Get rid of unnecessary double pointer references and type casting in receive buffer allocation code. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sunil Goutham authored
Optimized CQE handling with the below changes:
- Freeing descriptors back to the SQ in bulk, i.e. once per NAPI instance instead of for every CQE_TX; this reduces the number of atomic updates to 'sq->free_cnt'.
- Checking errors in CQE_TX and CQE_RX before calling the appropriate fn()s to update error stats, i.e. less branching.
Also removed debug messages in the packet handling path which otherwise cause issues if DEBUG is enabled. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
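A minimal sketch of the batching idea, with hypothetical names (next_cqe(), desc_count(), CQE_TYPE_TX) rather than the driver's own: count completed Tx descriptors across the whole NAPI poll and give them back to the SQ once, instead of touching the atomic free counter per CQE.

    int tx_done = 0;

    while ((cqe = next_cqe(cq)) != NULL) {
            switch (cqe->type) {
            case CQE_TYPE_TX:
                    tx_done += desc_count(cqe);   /* just count, no atomics yet */
                    break;
            /* ... CQE_TYPE_RX handling ... */
            }
    }

    if (tx_done)
            atomic_add(tx_done, &sq->free_cnt);   /* one update per NAPI poll */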
-
Sunil Goutham authored
The receive buffer's physical address or IOVA will anyway not go beyond 49 bits, since that is the maximum supported HW address. As per perf, updating bitfields, i.e. buf_addr:42 in the RBDR descriptor entry, consumes lots of CPU cycles; hence changed it to a 64-bit field with the alignment requirements taken care of. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
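A purely illustrative sketch of why the change helps (the field names and widths here are assumptions, not the driver's actual descriptor layout): storing into a bitfield forces a read-modify-write of the containing word, while a plain aligned 64-bit member is a single store.

    #include <stdint.h>

    /* Before: a 42-bit bitfield; every refill does load + mask + or + store. */
    struct rbdr_entry_bitfield {
            uint64_t rsvd     : 15;
            uint64_t buf_addr : 42;
            uint64_t extra    :  7;
    };

    /* After: one 64-bit store; addresses fit in 49 bits and the alignment
     * guarantees keep the spare low/high bits zero. */
    struct rbdr_entry_plain {
            uint64_t buf_addr;
    };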
-
Sunil Goutham authored
Adds support for page recycling when allocating receive buffers, to reduce the cost of refilling the RBDR ring. Also got rid of using compound pages when the page size is 4K; only order-0 pages are used now. Only the page is recycled; DMA mapping still needs to be done for every receive buffer allocated, due to the following constraints:
- Cannot have just one receive buffer per 64KB page.
- There is just one buffer ring shared across 8 Rx queues, so buffers of the same page can go to any Rx queue.
- HW gives the buffer address where the packet has been DMA'ed, not the index into the buffer ring. This makes it impossible to reuse the DMA mapping info.
So unfortunately we have to go through the costly mapping route for every buffer. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We should call ipxitf_put() if the copy_to_user() fails. Reported-by: 李强 <liqiang6-s@360.cn> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
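A hedged sketch of the pattern being fixed, using generic names rather than the actual ipx ioctl code: the interface reference taken earlier must be dropped on the copy_to_user() failure branch as well as on success.

    rc = -EFAULT;
    if (copy_to_user(arg, &ifr, sizeof(ifr)))
            goto out_put;           /* previously this path leaked the reference */
    rc = 0;
    out_put:
            ipxitf_put(ipxif);
            return rc;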
-
Jiri Pirko authored
TC_ACT_JUMP is currently the only action opcode that carries a value; this is going to change soon, so introduce helpers to work with such opcodes and convert TC_ACT_JUMP to use them. This also fixes the TC_ACT_JUMP check, which was incorrectly done as a bit check rather than a value check. Fixes: e0ee84de ("net sched actions: Complete the JUMPX opcode") Signed-off-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
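A small illustration of the difference; the macro names below are made up, not the ones from pkt_cls.h. The jump count lives in the low bits of the verdict, so the opcode must be compared as a masked value: a plain bit test would also match any future opcode that happens to share that bit.

    #include <stdbool.h>

    #define ACT_EXT_SHIFT     28
    #define ACT_EXT_VAL_MASK  ((1u << ACT_EXT_SHIFT) - 1)
    #define ACT_JUMP          (1u << ACT_EXT_SHIFT)

    /* Wrong: true for any verdict whose opcode shares this bit. */
    static inline bool is_jump_bitcheck(unsigned int verdict)
    {
            return verdict & ACT_JUMP;
    }

    /* Right: compare the opcode field as a value, ignoring the jump count. */
    static inline bool is_jump_valuecheck(unsigned int verdict)
    {
            return (verdict & ~ACT_EXT_VAL_MASK) == ACT_JUMP;
    }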
-
David S. Miller authored
Sudarsana Reddy Kalluru says: ==================== qed*: PTP bug fixes. The series addresses a couple of issues in the PTP implementation. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
sudarsana.kalluru@cavium.com authored
The PTP hardware filter configuration performed by the driver for a given user-requested config is not correct for some of the PTP modes. The following changes are needed for the PTP config-filter implementation:
1. NIG_REG_TX_PTP_EN register - bits 0/1/2 respectively enable TimeSync/"V1 frame format support"/"V2 frame format support" on the TX side. Set the associated bits based on the user request.
2. The ptp4l application fails to operate in Peer Delay mode. The following changes are needed to fix this:
   a. The driver should enable (set to 0) DA #1-related bits for IPv4, IPv6 and MAC destination addresses in these registers: NIG_REG_TX_LLH_PTP_RULE_MASK, NIG_REG_LLH_PTP_RULE_MASK.
   b. NIG_REG_LLH_PTP_PARAM_MASK/NIG_REG_TX_LLH_PTP_PARAM_MASK should be set to 0x0 in all modes.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
sudarsana.kalluru@cavium.com authored
PTP Tx timestamping data structures are not protected against concurrent access in the Tx paths. Protect them using atomic bit locks. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
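A minimal sketch of the bit-lock pattern; the flags word and bit name (PTP_TX_IN_PROGRESS) are hypothetical rather than the driver's actual fields. Only one Tx path at a time gets to own the single timestamp slot.

    if (test_and_set_bit_lock(PTP_TX_IN_PROGRESS, &ptp->state))
            return;                 /* another Tx already owns the slot */

    skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
    ptp->tx_skb = skb_get(skb);     /* consumed later by the Tx timestamp worker */
    /* ... on completion or timeout: ... */
    clear_bit_unlock(PTP_TX_IN_PROGRESS, &ptp->state);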
-
Jan Kiszka authored
The IOT2000 is an industrial controller platform, derived from the Intel Galileo Gen2 board. The IOT2020 variant comes with one LAN port, the IOT2040 has two of them. They can be told apart based on the board asset tag in the DMI table. Based on a patch by Sascha Weisenberger. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Sascha Weisenberger <sascha.weisenberger@siemens.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Timmy Li authored
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated is not enough for ethtool_get_strings(), which will cause random memory corruption. When SLAB and DEBUG_SLAB are both enabled, memory corruption like the following can be observed without this patch:

    [ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
    [ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
    [ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
    [ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b  ppe1_tx_pkt.kkkk
    [ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b  ppe1_tx_pkt_ok.k
    [ 43.115218] Next obj: start=ffff801fb0b69098, len=80
    [ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
    [ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
    [ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f  ty.kkkkkppe1_tx_
    [ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69  pkt_err_csum_fai

Signed-off-by: Timmy Li <lixiaoping3@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
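The underlying contract, as a hedged sketch with generic names (not the hns driver's code): .get_strings() must write exactly .get_sset_count() entries of ETH_GSTRING_LEN bytes each, since ethtool sizes its buffer from the count. Deriving both from the same table makes the invariant structural.

    static const char foo_stat_names[][ETH_GSTRING_LEN] = {
            "rx_pkts", "tx_pkts", "rx_err", /* ... */
    };

    static int foo_get_sset_count(struct net_device *dev, int sset)
    {
            return sset == ETH_SS_STATS ? ARRAY_SIZE(foo_stat_names) : -EOPNOTSUPP;
    }

    static void foo_get_strings(struct net_device *dev, u32 sset, u8 *data)
    {
            if (sset == ETH_SS_STATS)
                    memcpy(data, foo_stat_names, sizeof(foo_stat_names));
    }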
-
Eric Dumazet authored
Be careful when comparing tcp_time_stamp to some u32 quantity, otherwise the result can be surprising. Fixes: 7c106d7e ("[TCP]: TCP Low Priority congestion control") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
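A self-contained illustration of the pitfall (plain C, not the tcp_lp code): once a 32-bit jiffies-based timestamp wraps, a naive '>' compare gives the wrong answer, whereas casting the difference to a signed 32-bit value (the same idea as the kernel's time_after()-style macros) stays correct.

    #include <stdint.h>
    #include <stdbool.h>

    static bool after_naive(uint32_t a, uint32_t b)
    {
            return a > b;                   /* wrong across a wraparound */
    }

    static bool after_wrap_safe(uint32_t a, uint32_t b)
    {
            return (int32_t)(a - b) > 0;    /* correct across a wraparound */
    }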
-
Daniel Borkmann authored
When the instruction right before the branch destination is a 64-bit load immediate, we currently calculate the wrong jump offset in the ctx->offset[] array, as we only account for one instruction slot for the 64-bit load immediate although it uses two BPF instructions. Fix it up by setting the offset into the right slot after we have incremented the index.

Before (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 54ffff82 b.cs 0x00000020
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

After (ldimm64 test 1):

    [...]
    00000020: 52800007 mov w7, #0x0 // #0
    00000024: d2800060 mov x0, #0x3 // #3
    00000028: d2800041 mov x1, #0x2 // #2
    0000002c: eb01001f cmp x0, x1
    00000030: 540000a2 b.cs 0x00000044
    00000034: d29fffe7 mov x7, #0xffff // #65535
    00000038: f2bfffe7 movk x7, #0xffff, lsl #16
    0000003c: f2dfffe7 movk x7, #0xffff, lsl #32
    00000040: f2ffffe7 movk x7, #0xffff, lsl #48
    00000044: d29dddc7 mov x7, #0xeeee // #61166
    00000048: f2bdddc7 movk x7, #0xeeee, lsl #16
    0000004c: f2ddddc7 movk x7, #0xeeee, lsl #32
    00000050: f2fdddc7 movk x7, #0xeeee, lsl #48
    [...]

Also, add a couple of test cases to make sure JITs pass this test. Tested on Cavium ThunderX ARMv8. The added test cases all pass after the fix. Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()") Reported-by: David S. Miller <davem@davemloft.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Cc: Xi Wang <xi.wang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and therefore completes JITing of all BPF instructions, meaning we can thus also remove the 'notyet' label and do not need to fall back to the interpreter when BPF_XADD is used in a program! This now also brings arm64 JIT in line with x86_64, s390x, ppc64, sparc64, where all current eBPF features are supported.

BPF_W example from test_bpf:

    .u.insns_int = {
        BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
        BPF_ST_MEM(BPF_W, R10, -40, 0x10),
        BPF_STX_XADD(BPF_W, R10, R0, -40),
        BPF_LDX_MEM(BPF_W, R0, R10, -40),
        BPF_EXIT_INSN(),
    },

    [...]
    00000020: 52800247 mov w7, #0x12 // #18
    00000024: 928004eb mov x11, #0xffffffffffffffd8 // #-40
    00000028: d280020a mov x10, #0x10 // #16
    0000002c: b82b6b2a str w10, [x25,x11]
    // start of xadd mapping:
    00000030: 928004ea mov x10, #0xffffffffffffffd8 // #-40
    00000034: 8b19014a add x10, x10, x25
    00000038: f9800151 prfm pstl1strm, [x10]
    0000003c: 885f7d4b ldxr w11, [x10]
    00000040: 0b07016b add w11, w11, w7
    00000044: 880b7d4b stxr w11, w11, [x10]
    00000048: 35ffffab cbnz w11, 0x0000003c
    // end of xadd mapping:
    [...]

BPF_DW example from test_bpf:

    .u.insns_int = {
        BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
        BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
        BPF_STX_XADD(BPF_DW, R10, R0, -40),
        BPF_LDX_MEM(BPF_DW, R0, R10, -40),
        BPF_EXIT_INSN(),
    },

    [...]
    00000020: 52800247 mov w7, #0x12 // #18
    00000024: 928004eb mov x11, #0xffffffffffffffd8 // #-40
    00000028: d280020a mov x10, #0x10 // #16
    0000002c: f82b6b2a str x10, [x25,x11]
    // start of xadd mapping:
    00000030: 928004ea mov x10, #0xffffffffffffffd8 // #-40
    00000034: 8b19014a add x10, x10, x25
    00000038: f9800151 prfm pstl1strm, [x10]
    0000003c: c85f7d4b ldxr x11, [x10]
    00000040: 8b07016b add x11, x11, x7
    00000044: c80b7d4b stxr w11, x11, [x10]
    00000048: 35ffffab cbnz w11, 0x0000003c
    // end of xadd mapping:
    [...]

Tested on Cavium ThunderX ARMv8, test suite results after the patch:

    No JIT:   [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
    With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
I say: ==================== Fix some bpf program testing framework bugs This series fixes two issues: 1) Accidental user pointer dereference in bpf_test_finish() 2) The packet data given to the test programs is not aligned correctly The first issue is fixed simply because we have a kernel-side copy of the data structure in question already. The second bug is a simple matter of applying NET_IP_ALIGN where needed. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Miller authored
Make sure we apply NET_IP_ALIGN when reserving headroom for SKB and XDP test runs, just like a real driver would. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Daniel Borkmann <daniel@iogearbox.net>
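A hedged sketch of the driver-like pattern (generic skb construction, not the actual bpf/test_run.c code): reserving NET_IP_ALIGN of headroom shifts the Ethernet header so the IP header that follows it lands on a naturally aligned boundary.

    struct sk_buff *skb;

    skb = alloc_skb(NET_SKB_PAD + NET_IP_ALIGN + size, GFP_KERNEL);
    if (!skb)
            return ERR_PTR(-ENOMEM);

    /* headroom a real driver would leave before the packet data */
    skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
    memcpy(skb_put(skb, size), data, size);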
-
David Miller authored
Instead, pass the kattr in which has a kernel side copy of this data structure from userspace already. Fix based upon a suggestion from Alexei Starovoitov. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Daniel Borkmann <daniel@iogearbox.net>
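A hedged sketch of the pattern (simplified, not the exact bpf_test_finish() signature): fields are read from the kernel-side kattr copy, and the __user pointer is only ever handed to copy_to_user().

    static int test_finish(const union bpf_attr *kattr,
                           union bpf_attr __user *uattr,
                           const void *data, u32 size)
    {
            void __user *data_out = u64_to_user_ptr(kattr->test.data_out);

            if (data_out && copy_to_user(data_out, data, size))
                    return -EFAULT;
            if (copy_to_user(&uattr->test.data_size_out, &size, sizeof(size)))
                    return -EFAULT;
            return 0;
    }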
-
David S. Miller authored
This fixes the testcase on big-endian. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Fix kdoc parameter spelling from extact to extack. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
Fix the following warnings triggered by 51570a5a ("A Sample of using socket cookie and uid for traffic monitoring"):

    In file included from /home/foo/net-next/samples/bpf/cookie_uid_helper_example.c:54:0:
    /home/foo/net-next/samples/bpf/cookie_uid_helper_example.c: In function 'prog_load':
    /home/foo/net-next/samples/bpf/cookie_uid_helper_example.c:119:27: warning: overflow in implicit constant conversion [-Woverflow]
        -32 + offsetof(struct stats, uid)),
        ^
    /home/foo/net-next/samples/bpf/libbpf.h:135:12: note: in definition of macro 'BPF_STX_MEM'
        .off = OFF, \
        ^
    /home/foo/net-next/samples/bpf/cookie_uid_helper_example.c:121:27: warning: overflow in implicit constant conversion [-Woverflow]
        -32 + offsetof(struct stats, packets), 1),
        ^
    /home/foo/net-next/samples/bpf/libbpf.h:155:12: note: in definition of macro 'BPF_ST_MEM'
        .off = OFF, \
        ^
    /home/foo/net-next/samples/bpf/cookie_uid_helper_example.c:129:27: warning: overflow in implicit constant conversion [-Woverflow]
        -32 + offsetof(struct stats, bytes)),
        ^
    /home/foo/net-next/samples/bpf/libbpf.h:135:12: note: in definition of macro 'BPF_STX_MEM'
        .off = OFF, \
        ^
    HOSTLD /home/foo/net-next/samples/bpf/per_socket_stats_example

Fixes: 51570a5a ("A Sample of using socket cookie and uid for traffic monitoring") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Like other JITs, sparc64 maintains an array of instruction offsets but stores the entries off by one. This is done because jumps to the exit block are indexed to one past the last BPF instruction. So if we size the array by the program length, we need to record the previous instruction in order to stay within the array bounds. This is explained in ARM JIT commit 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()"). But this scheme requires a little bit of careful handling when the instruction before the branch destination is a 64-bit load immediate. It takes up 2 BPF instruction slots. Therefore, we have to fill in the array entry for the second half of the 64-bit load immediate instruction rather than for the one for the beginning of that instruction. Fixes: 7a12b503 ("sparc64: Add eBPF JIT.") Signed-off-by: David S. Miller <davem@davemloft.net>
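A minimal sketch of the bookkeeping described above, with generic names (build_insn(), ctx->offset[], ctx->idx) rather than the actual arm64 or sparc64 JIT code: when the builder reports that it consumed a 64-bit load immediate (two BPF slots), the recorded native offset goes into the entry for the second slot.

    for (i = 0; i < prog->len; i++) {
            int consumed_extra = build_insn(&prog->insnsi[i], ctx);

            if (consumed_extra > 0) {
                    /* ldimm64 spans two BPF slots: advance first, then
                     * record the offset against the second half. */
                    i++;
                    ctx->offset[i] = ctx->idx;
                    continue;
            }
            ctx->offset[i] = ctx->idx;
    }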
-
- 01 May, 2017 17 commits
-
-
Florian Westphal authored
By using smaller datatypes this (rather large) struct shrinks considerably (80 -> 48 bytes on x86_64). As this is embedded in other structs, this also reduces the size of several others, e.g. cls_fl_head or nft_hash. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
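An illustrative before/after (field names and widths are invented for the example, not taken from the flow dissector): narrowing members whose value range allows it, and grouping same-size members, removes both raw bytes and padding.

    #include <stdint.h>

    struct key_meta_before {                /* 8-byte members everywhere */
            unsigned long offset;
            unsigned long used_keys;
            int           n_keys;
    };                                      /* 24 bytes on x86_64 after padding */

    struct key_meta_after {
            uint32_t used_keys;             /* a bitmap of a few dozen keys */
            uint16_t offset;                /* offsets are small */
            uint16_t n_keys;
    };                                      /* 8 bytes, no padding */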
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
We do not want to include things like stdio.h and friends into eBPF program builds. bpf_util.h is for host compiled programs, so eBPF C-code helpers don't really belong there. Add a new bpf_endian.h as a quick fix for this for now. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Since that change also made the nfrag function not necessary for exports, remove it. Fixes: 89a23c8b ("ip6_tunnel: Fix missing tunnel encapsulation limit option") Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vivien Didelot says: ==================== net: dsa: mv88e6xxx: 802.1s and 88E6390 VTU

This patch series adds support for the VLAN Table Unit (a.k.a. the VTU) to the 88E6390 family of Marvell Ethernet switch chips. The plumbing for the per-VLAN Spanning Tree support is added as a side effect of the necessary refactoring. The patchset is split up so that no duplication of code is introduced. With this patchset applied, the mv88e6xxx driver has 2 new function pointers for the VTU GetNext and VTU Load/Purge operations (with 3 implementations), both handling programming of 802.1q and 802.1s.

On a ZII Rev C board (featuring 2 88E6390X chips) with all ports bridged together, we obtain the following hardware VLAN configuration:

    # cat /sys/class/net/br0/bridge/vlan_filtering
    1
    # cat /sys/class/net/br0/bridge/default_pvid
    42
    # bridge vlan add dev lan3 vid 666
    # bridge vlan show
    port    vlan ids
    lan1    42 PVID Egress Untagged
    lan1    42 PVID Egress Untagged
    lan2    42 PVID Egress Untagged
    lan2    42 PVID Egress Untagged
    lan3    42 PVID Egress Untagged
            666
    lan3    42 PVID Egress Untagged
            666
    lan4    42 PVID Egress Untagged
    lan4    42 PVID Egress Untagged
    lan5    42 PVID Egress Untagged
    lan5    42 PVID Egress Untagged
    lan6    42 PVID Egress Untagged
    lan6    42 PVID Egress Untagged
    lan7    42 PVID Egress Untagged
    lan7    42 PVID Egress Untagged
    lan8    42 PVID Egress Untagged
    lan8    42 PVID Egress Untagged
    br0     42 PVID Egress Untagged

Below are the technical details for the different implementations. All switch families have up to 3 dedicated VTU Data registers used to program 802.1q and 802.1s, both using 2-bit values. On the 88E6185 and 88E6352 families, port membership and state are adjacent, while the 88E6390 family shares the same bits:

    Bits   88E6185/88E6352    88E6390
    -----  -----------------  --------------------------
    0-1    Port 0 membership  Port 0 membership or state
    2-3    Port 0 state       Port 1 membership or state
    4-5    Port 1 membership  Port 2 membership or state
    6-7    Port 1 state       Port 3 membership or state
    8-9    Port 2 membership  Port 4 membership or state
    10-11  Port 2 state       Port 5 membership or state
    ...    ...                ...

The 88E6185 family programs all ports' membership and state in a single VTU GetNext or Load/Purge operation. The 88E6352 family introduced an indirect Spanning Tree Unit table (a.k.a. STU) which requires additional STU GetNext and Load/Purge operations to read and write the port state bits. The 88E6390 family also has an STU and requires data bits to be accessed before and after every single VTU or STU operation. Finally, the 88E6390 family introduced a 13th bit for the VLAN ID, which must be taken care of regardless of the VTU operating mode. This means that iterating over the VTU now starts or ends with value 8191, not 4095.

Patch 1 adds a max_vid field to the chip info structure. Patch 2 adds 802.1q and 802.1s data to the generic VTU entry structure. Patches 3 to 10 move helpers to a dedicated file (later made static). Patches 11 and 12 abstract handling of the STU behind VTU operations. Patches 13 and 14 add the new function pointers for VTU operations. Patches 15 and 18 polish the VTU code and add VTU support for 88E6390.

Changes in v2:
- add Reviewed-by tags
- fix comments in 8/18
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
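A hedged sketch of the 2-bit packing described in the table above; the helper name and layout constants are illustrative, not the driver's. Eight ports fit per 16-bit data register, each contributing a 2-bit membership or state value.

    #include <stdint.h>

    /* Pack 2-bit per-port values (membership or state) into one 16-bit
     * VTU Data register: port (base + i) lands at bits [2*i+1 : 2*i]. */
    static uint16_t vtu_data_pack(const uint8_t *vals, int base, int nports)
    {
            uint16_t reg = 0;
            int i;

            for (i = 0; i < 8 && base + i < nports; i++)
                    reg |= (uint16_t)(vals[base + i] & 0x3) << (2 * i);
            return reg;
    }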
-
Vivien Didelot authored
The 6390 family of chips uses only 2 of the 3 VTU Data registers to pack the MemberTag and PortState VLAN data. This means that they must be written or read before or after each VTU/STU operation. Implement this variant to add support for the VTU with such chips. These chips have a 13th bit for the VID, thus set their max_vid to 8191. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Newer chips such as the 88E6390 have a VTU Page bit in the VTU VID register to specify a 13th bit for the VID. This can be used to support 8K VLANs. When dumping the whole VTU, all VID bits must be set to one, including this VTU Page bit. Add support for VID greater than 4095. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Make the code which fetches or initializes a new VTU entry more concise. This allows us to get rid of the old underscore-prefix naming. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Now that we have chip operations for VTU accesses, mark all helpers from global1_vtu.c as static. Only the various implementations of the GetNext, LoadPurge and Flush operations need to be exposed. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add a new vtu_loadpurge operation to the chip info structure to distinguish the various implementations of the VTU accesses. Now that the STU handling is abstracted behind VTU operations, kill the obsolete MV88E6XXX_FLAG_STU flag. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add a new vtu_getnext operation to the chip info structure to distinguish the various implementations of the VTU accesses. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Now that the code writes both VTU and STU data when loading a VTU entry, load the corresponding STU entry at the same time. This allows us to get rid of the STU management in the _mv88e6xxx_vtu_new helper and thus remove the separate implementations of STU Load/Purge and STU GetNext, as well as the unused family checks. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Now that the code reads both VTU and STU data on VTU GetNext operation, fetch the STU entry data of a VTU entry at the same time. The STU data bits are masked with the VTU data bits and they are now all read at the same time a VTU GetNext operation is issued. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Extract the generic portion of code to issue an STU GetNext operation, which will be used in other implementations. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The code to access the VTU Data registers currently only supports the 88E6185 family and the like: 2-bit membership adjacent to 2-bit port state. Even though the 88E6352 family introduced an indirect table to program the VLAN Spanning Tree states, the usage of the VTU Data registers remains the same regardless of the VTU or STU operation. Now that the mv88e6xxx_vtu_entry structure contains both port membership and state data, factorize the code to access them in global1_vtu.c. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Even though every switch model has a different way to access the VTU Data bits, the base implementation of the VTU GetNext operation remains the same: wait, write the first VID to iterate from, start the operation, and read the next VID. Move this generic implementation into global1_vtu.c and abstract the handling of the start VID (similarly to the ATU GetNext implementation), before introducing a new chip operation for specific chips. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add helpers to access the VTU VID register in the global1_vtu.c file. At the same time, move mv88e6xxx_g1_vtu_vid_write at the beginning of _mv88e6xxx_vtu_loadpurge, which adds no functional changes but makes future patches simpler. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-