- 06 Feb, 2017 23 commits
-
-
Ido Schimmel authored
Up until now we had two interfaces for neighbour related configuration: ndo_neigh_{construct,destroy} and NEIGH_UPDATE netevents. The ndos were used to add and remove neighbours from the driver's cache, whereas the netevent was used to reflect the neighbours into the device's tables. However, if the NUD state of a neighbour isn't NUD_VALID or if the neighbour is dead, then there's really no reason for us to keep it inside our cache. The only exception to this rule are neighbours that are also used for nexthops, which we periodically refresh to get them resolved. We can therefore eliminate the ndo entry point into the driver and simplify the code, making it similar to the FIB reflection, which is based solely on events. This also helps us avoid a locking issue, in which the RIF cache was traversed without proper locking during insertion into the neigh entry cache. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
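A minimal sketch of the event-only model described above, assuming a netevent notifier reacting to NEIGH_UPDATE; the example_* helpers are hypothetical stand-ins, not the actual mlxsw functions, and a real handler would sample nud_state/dead under the neighbour lock from deferred context:

```c
#include <linux/notifier.h>
#include <net/netevent.h>
#include <net/neighbour.h>

/* Hypothetical helpers that defer the device-table update to a work item. */
static void example_schedule_neigh_add(struct neighbour *n) { /* queue work */ }
static void example_schedule_neigh_remove(struct neighbour *n) { /* queue work */ }

static int example_netevent_event(struct notifier_block *nb,
				  unsigned long event, void *ptr)
{
	struct neighbour *n = ptr;

	if (event != NETEVENT_NEIGH_UPDATE)
		return NOTIFY_DONE;

	/* Only resolved, live neighbours are worth keeping in the driver cache;
	 * a real handler samples nud_state/dead under the neighbour lock. */
	if ((n->nud_state & NUD_VALID) && !n->dead)
		example_schedule_neigh_add(n);
	else
		example_schedule_neigh_remove(n);

	return NOTIFY_DONE;
}

static struct notifier_block example_netevent_nb = {
	.notifier_call = example_netevent_event,
};

/* Registered once at init: register_netevent_notifier(&example_netevent_nb); */
```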
-
Ido Schimmel authored
Since commit 33b1341c ("mlxsw: spectrum_router: Fix handling of neighbour structure") we no longer use destination IP for neighbour lookup, so remove it. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
We currently associate each neighbour entry with a work item, so it's not possible to have multiple events queued for the same neighbour entry. However, this is about to be changed so that the neighbour entry is only resolved when the work item is scheduled. The above can result in a mismatch between the kernel's and the device's neighbour table, unless the associated work items are processed in the order in which they were submitted. Do that by migrating the NEIGH_UPDATE work items to be processed in the ordered workqueue which was recently introduced in mlxsw in commit a3832b31 ("mlxsw: core: Create an ordered workqueue for FIB offload"). Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
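The ordering guarantee relied on here can be sketched as follows; the queue and struct names are illustrative (the driver's actual queue is the 'mlxsw_owq' mentioned in the referenced commit). An ordered workqueue executes at most one item at a time, in submission order:

```c
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_owq;

struct example_neigh_work {
	struct work_struct work;
	/* snapshot of the neighbour event to process */
};

static void example_neigh_work_handler(struct work_struct *work)
{
	struct example_neigh_work *nw =
		container_of(work, struct example_neigh_work, work);

	/* resolve the neighbour entry and reflect it into the device table */
	kfree(nw);
}

static int example_router_init(void)
{
	/* an ordered workqueue runs at most one item at a time, in the
	 * order the items were queued */
	example_owq = alloc_ordered_workqueue("example_ordered", 0);
	if (!example_owq)
		return -ENOMEM;
	return 0;
}

static void example_queue_neigh_update(struct example_neigh_work *nw)
{
	INIT_WORK(&nw->work, example_neigh_work_handler);
	queue_work(example_owq, &nw->work);
}
```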
-
Ido Schimmel authored
We always use zero delay before queueing a work on the ordered workqueue ('mlxsw_owq'), so use a work_struct directly instead of a delayed work. Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
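An illustrative before/after of that simplification, with placeholder names: when the delay is always zero, a plain work_struct queued with queue_work() replaces the delayed work:

```c
#include <linux/workqueue.h>

struct example_fib_event_work {
	struct work_struct work;	/* was: struct delayed_work dw; */
	/* copy of the FIB event to process */
};

static void example_fib_event_work_func(struct work_struct *work)
{
	/* process the event */
}

static void example_queue_fib_event(struct workqueue_struct *owq,
				    struct example_fib_event_work *few)
{
	INIT_WORK(&few->work, example_fib_event_work_func);
	queue_work(owq, &few->work);
	/* was: INIT_DELAYED_WORK(&few->dw, ...);
	 *      queue_delayed_work(owq, &few->dw, 0); -- the delay was always 0 */
}
```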
-
David S. Miller authored
Merge tag 'linux-can-next-for-4.11-20170206' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2017-02-06 this is a pull request of 16 patches for net-next/master. The first two patches by David Jander and me add the rx-offload framework for CAN devices to the kernel. The remaining 14 patches convert the flexcan driver to make use of it. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Elad Raz authored
HTGT register length is limited to 32 bytes and not 256 bytes. Signed-off-by: Elad Raz <eladr@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jingju Hou authored
The mvneta itself does not support WOL, but the PHY might. So pass the calls to the PHY. Signed-off-by: Jingju Hou <houjingj@marvell.com> Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
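A hedged sketch of the pass-through, assuming the attached PHY is reachable via dev->phydev; the function names are placeholders, not the exact mvneta ethtool ops:

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

static void example_ethtool_get_wol(struct net_device *dev,
				    struct ethtool_wolinfo *wol)
{
	wol->supported = 0;
	wol->wolopts = 0;

	/* the MAC has no WOL support of its own, so report what the PHY offers */
	if (dev->phydev)
		phy_ethtool_get_wol(dev->phydev, wol);
}

static int example_ethtool_set_wol(struct net_device *dev,
				   struct ethtool_wolinfo *wol)
{
	if (!dev->phydev)
		return -EOPNOTSUPP;

	/* hand the request straight to the PHY */
	return phy_ethtool_set_wol(dev->phydev, wol);
}
```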
-
Marc Kleine-Budde authored
This patch switches the imx6- and vf610-based SoCs from the hardware FIFO to timestamp-based rx offloading. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
The flexcan IP core has 64 mailboxes. For now they are configured for RX as a hardware FIFO. This FIFO has a fixed depth of 6 CAN frames. In some high-load scenarios it turns out that this buffer is too small. In order to have a buffer larger than the 6-frame FIFO, this patch adds support for timestamp based offloading via the generic rx-offload infrastructure. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
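A sketch of how the RX-path choice could look at setup time, in the style of the flexcan quirk flags; the quirk name, priv fields and NAPI weight are assumptions, while can_rx_offload_add_timestamp()/can_rx_offload_add_fifo() are the rx-offload registration helpers:

```c
#include <linux/bitops.h>
#include <linux/can/rx-offload.h>
#include <linux/netdevice.h>

#define EXAMPLE_QUIRK_USE_OFF_TIMESTAMP	BIT(0)	/* assumed quirk bit */
#define EXAMPLE_NAPI_WEIGHT		32	/* assumed NAPI weight */

struct example_flexcan_priv {
	struct can_rx_offload offload;
	u32 quirks;
	/* ... */
};

static int example_flexcan_offload_add(struct net_device *dev)
{
	struct example_flexcan_priv *priv = netdev_priv(dev);

	/* priv->offload.mailbox_read must already point at a driver callback
	 * that reads one mailbox (and, in timestamp mode, its RX timestamp) */
	if (priv->quirks & EXAMPLE_QUIRK_USE_OFF_TIMESTAMP)
		/* timestamp mode: use the RX mailboxes, a much deeper buffer
		 * than the fixed 6-frame hardware FIFO */
		return can_rx_offload_add_timestamp(dev, &priv->offload);

	/* otherwise keep using the 6-frame hardware FIFO */
	return can_rx_offload_add_fifo(dev, &priv->offload,
				       EXAMPLE_NAPI_WEIGHT);
}
```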
-
Marc Kleine-Budde authored
In order to receive RTR frames in the non-HW-FIFO mode, the RRS and EACEN bits of reg_ctrl2 have to be activated. As this has no side effect in FIFO mode, we do this unconditionally on cores that have reg_ctrl2. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Modern flexcan IP cores support two RX modes. One uses the 6-frame-deep hardware FIFO, the other uses up to 64 mailboxes (in non-FIFO mode). For now only the HW FIFO mode is activated. In order to make use of the RX mailboxes, the individual RX masking feature has to be activated, otherwise matching mailboxes are overwritten during the reception process. Enabling this feature switches on individual RX masking, which uses the reg_rximr registers for masking. This patch activates the individual RX masking feature unconditionally and initializes the mask registers (reg_rximr) with 0x0 == "don't care", which switches off any filtering. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
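A sketch of the initialization described; the register layout, field names and the IRMQ bit are assumptions modeled on the flexcan register map, and plain readl()/writel() stand in for the driver's own accessors. Every mask bit cleared means "don't care", i.e. no filtering on any RX mailbox:

```c
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/kernel.h>

/* assumed fragment of the flexcan register layout */
struct example_flexcan_regs {
	u32 mcr;		/* module configuration register */
	/* ... other registers ... */
	u32 rximr[64];		/* individual RX mask registers, one per mailbox */
};

#define EXAMPLE_MCR_IRMQ	BIT(16)	/* assumed "individual RX masking" bit */

/* called while the controller is held in freeze/configuration mode */
static void example_flexcan_init_rx_masks(struct example_flexcan_regs __iomem *regs)
{
	u32 reg_mcr;
	int i;

	/* switch on individual RX masking unconditionally */
	reg_mcr = readl(&regs->mcr);
	writel(reg_mcr | EXAMPLE_MCR_IRMQ, &regs->mcr);

	/* 0x0 == "don't care" for every bit: no filtering on any mailbox */
	for (i = 0; i < ARRAY_SIZE(regs->rximr); i++)
		writel(0x0, &regs->rximr[i]);
}
```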
-
Marc Kleine-Budde authored
This patch converts the flexcan driver to make use of the rx-offload can_rx_offload_irq_offload_fifo() helper function. The idea is to read the CAN frames already in interrupt context, as the depth of the flexcan HW FIFO is too shallow, resulting in too many missed frames. During a normal NAPI poll the frames are then pushed into the upper layers. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
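A hedged sketch of the converted interrupt path: when the FIFO-available flag is set, the frames are drained via can_rx_offload_irq_offload_fifo() right in the hardirq handler and handed to the stack later from the rx-offload NAPI poll. The flag bit, priv layout and register-read helper are simplified assumptions in the style of the flexcan driver:

```c
#include <linux/bitops.h>
#include <linux/can/rx-offload.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>

#define EXAMPLE_IFLAG_RX_FIFO_AVAILABLE	BIT(5)	/* assumed flag bit */

struct example_flexcan_priv {
	struct can_rx_offload offload;
	/* ... */
};

/* hypothetical register read returning the IFLAG1 (interrupt flag) bits */
static u32 example_read_iflag1(struct net_device *dev)
{
	return 0;	/* read the IFLAG1 register here */
}

static irqreturn_t example_flexcan_irq(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	struct example_flexcan_priv *priv = netdev_priv(dev);
	irqreturn_t handled = IRQ_NONE;
	u32 reg_iflag1 = example_read_iflag1(dev);

	/* drain the shallow 6-frame HW FIFO right away, in hardirq context;
	 * the queued skbs reach the stack from the rx-offload NAPI poll */
	if (reg_iflag1 & EXAMPLE_IFLAG_RX_FIFO_AVAILABLE) {
		handled = IRQ_HANDLED;
		can_rx_offload_irq_offload_fifo(&priv->offload);
	}

	return handled;
}
```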
-
Marc Kleine-Budde authored
This patch makes the TX mailbox selectable during runtime. This is a preparation patch to make the use of the hardware FIFO selectable at runtime, as the TX mailbox number is different in HW FIFO and normal mode. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch converts the define FLEXCAN_IFLAG_DEFAULT into the runtime calculated value priv->reg_imask1_default. This is a preparation patch to make the TX mailbox selectable during runtime, too. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch changes the flexcan_irq() function to only return IRQ_HANDLED if the interrupt really has been handled; otherwise IRQ_NONE is returned. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch folds the do_bus_err() function into flexcan_poll_bus_err(). Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch removes the unneeded initialisation of the new_state, rx_state and tx_state variables. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch converts the rx_errors and tx_errors from int into bool values, to reflect their actual meaning. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch changes the declaration of the devtype data to const. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch removes the write only member pdata from the struct flexcan_priv. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch adds some missing register definitions, which are needed in an upcoming patch. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
Some CAN controllers don't implement a FIFO in hardware, but fill their mailboxes in a particular order (from lowest to highest or from highest to lowest). This makes it hard to read the frames from the hardware in the correct order, as new frames might be filled into already-read (low) mailboxes. This gets worse when subsequent new frames are received into not-yet-read (higher) mailboxes. On the bright side, some of these CAN controllers put a timestamp on each received CAN frame. This patch adds support to offload CAN frames in interrupt context, order them by timestamp and then deliver them to the networking stack in a NAPI context. Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
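A hedged usage sketch of the framework in timestamp mode: the driver embeds a struct can_rx_offload, provides a mailbox_read callback (omitted here), registers with can_rx_offload_add_timestamp(), and from the interrupt handler hands the pending-mailbox bits to can_rx_offload_irq_offload_timestamp(); the framework reads the flagged mailboxes, sorts the frames by timestamp and delivers them from its NAPI poll. The example_* names are placeholders:

```c
#include <linux/can/rx-offload.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct example_priv {
	struct can_rx_offload offload;	/* embedded in the driver's priv */
	/* ... */
};

/* hypothetical register read: one bit per mailbox with a pending frame */
static u64 example_read_pending_mailbox_bits(struct net_device *dev)
{
	return 0;
}

static int example_offload_setup(struct net_device *dev)
{
	struct example_priv *priv = netdev_priv(dev);

	/* priv->offload.mailbox_read = ...; -- driver callback that reads one
	 * mailbox and reports the frame's RX timestamp */
	return can_rx_offload_add_timestamp(dev, &priv->offload);
}

static irqreturn_t example_irq(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	struct example_priv *priv = netdev_priv(dev);
	u64 pending = example_read_pending_mailbox_bits(dev);

	if (!pending)
		return IRQ_NONE;

	/* read the flagged mailboxes now; the framework sorts the frames by
	 * timestamp and delivers them to the stack from its NAPI poll */
	can_rx_offload_irq_offload_timestamp(&priv->offload, pending);

	return IRQ_HANDLED;
}
```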
-
David Jander authored
Some CAN controllers have a usable FIFO already, but can still benefit from offloading the CAN controller FIFO. The CAN frames of the FIFO are read in interrupt context, put into an skb queue, and then delivered to the networking stack in a NAPI context. Signed-off-by: David Jander <david@protonic.nl> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
- 05 Feb, 2017 17 commits
-
-
David S. Miller authored
Eric Dumazet says: ==================== net: get rid of __napi_complete() This patch series removes __napi_complete() calls, in an effort to make the NAPI API simpler and to generalize GRO and napi_complete_done() ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
All __napi_complete() callers have been converted to use the more standard napi_complete_done(), so we can now remove this NAPI method for good. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We plan to remove __napi_complete() soon; this driver is the last user. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete(). We plan to remove __napi_complete() to reduce NAPI complexity. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
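The generic shape of these conversions, sketched for a hypothetical driver (helper names are placeholders): napi_complete_done() reports how much work was done, which is what enables gro_flush_timeout and busy-poll awareness, while __napi_complete() carried no such information:

```c
#include <linux/netdevice.h>

struct example_priv {
	struct napi_struct napi;
	/* ... */
};

static int example_clean_rx(struct example_priv *priv, int budget)
{
	return 0;	/* placeholder: number of packets processed */
}

static void example_enable_rx_irq(struct example_priv *priv)
{
	/* placeholder: re-arm the device's RX interrupt */
}

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_priv *priv = container_of(napi, struct example_priv, napi);
	int work_done = example_clean_rx(priv, budget);

	if (work_done < budget) {
		/* was: __napi_complete(napi);
		 * napi_complete_done() tells the core how much work was done,
		 * so gro_flush_timeout and busy polling can be honoured */
		napi_complete_done(napi, work_done);
		example_enable_rx_irq(priv);
	}

	return work_done;
}
```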
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. Note that rx_lock seems to be useless; NAPI logic should not need this extra care. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API and get rid of napi_gro_flush(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. 4) Get rid of baroque code and ease maintenance. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. 4) Get rid of baroque code and ease maintenance. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. 4) Get rid of baroque code and ease maintenance. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. 4) Eventually get rid of napi_gro_flush(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use napi_complete_done() instead of __napi_complete() to: 1) Get support of gro_flush_timeout if opted in. 2) Not rearm interrupts for busy-polling users. 3) Use the standard NAPI API. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Ahern says: ==================== net: ipv6: Improve user experience with multipath routes

This series closes a couple of gaps between IPv4 and IPv6 with respect to multipath routes:

1. IPv4 allows all nexthops of multipath routes to be deleted using just the prefix and length; IPv6 only deletes the first nexthop for the route if only the prefix and length are given.

2. IPv4 returns multipath routes encoded in the RTA_MULTIPATH attribute. IPv6 returns a series of routes with the same prefix and length - one for each nexthop. This happens for both dumps and notifications. IPv6 does accept RTA_MULTIPATH encoded routes, but installs them as a series of routes.

Patch 1 addresses the first item by allowing IPv6 multipath routes to be deleted using just the prefix and length.

Patch 2 addresses the second, allowing IPv6 multipath routes to be returned encoded in the RTA_MULTIPATH attribute.

Patches 3 and 4 update the RTM_{NEW,DEL}ROUTE notifications to generate 1 notification with RTA_MULTIPATH where applicable.

Patch 5 prints IPv6 addresses in compressed format when showing route replace errors. This was noticed testing REPLACE failures.

The end result for multipath routes:

1. Dump - RTA_MULTIPATH used for multipath routes

    $ ip -6 ro ls vrf red
    2001:db8:1::/120 dev eth1 proto kernel metric 256 pref medium
    2001:db8:2::/120 dev eth2 proto kernel metric 256 pref medium
    2001:db8:200::/120 metric 1024
        nexthop via 2001:db8:1::2 dev eth1 weight 1
        nexthop via 2001:db8:2::2 dev eth2 weight 1
    ...

2. Route Add - one notification with RTA_MULTIPATH attribute

    $ ip -6 ro add vrf red 2001:db8:200::/120 nexthop via 2001:db8:1::2 nexthop via 2001:db8:2::2
    $ ip mon route
    2001:db8:200::/120 table red metric 1024
        nexthop via 2001:db8:1::2 dev eth1 weight 1
        nexthop via 2001:db8:2::2 dev eth2 weight 1

3. Route Replace - one notification with RTA_MULTIPATH attribute

    $ ip -6 ro replace vrf red 2001:db8:200::/120 nexthop via 2001:db8:1::16 nexthop via 2001:db8:2::16
    $ ip mon route
    Replaced 2001:db8:200::/120 table red metric 1024
        nexthop via 2001:db8:1::16 dev eth1 weight 1
        nexthop via 2001:db8:2::16 dev eth2 weight 1

    - on a failure after the insertion of the first nexthop (which means the original route has been replaced in the FIB), a notification is sent with the successful nexthops and then the nexthops are deleted with one notification per hop. This is consistent with how it works today except the successful additions are coalesced into 1 notification.

4. Route Delete - delete of an entire multipath route using prefix/length only; 1 notification is generated:

    $ ip -6 ro del vrf red 2001:db8:200::/120
    $ ip mon route
    Deleted 2001:db8:200::/120 table red metric 1024
        nexthop via 2001:db8:1::16 dev eth1 weight 1
        nexthop via 2001:db8:2::16 dev eth2 weight 1

    - if a delete request contains nexthops, one notification is generated per nexthop deleted. This is unavoidable since IPv6 allows a single nexthop to be deleted within a multipath route.

5. Route Appends - IPv6 allows nexthops to be appended to an existing route. In this case one notification is sent for the new route with the append flag set.

    $ ip -6 ro append vrf red 2001:db8:200::/120 nexthop via 2001:db8:2::20 nexthop via 2001:db8:1::20
    $ ip mon route
    Append 2001:db8:200::/120 table red metric 1024
        nexthop via 2001:db8:1::2 dev eth1 weight 1
        nexthop via 2001:db8:2::2 dev eth2 weight 1
        nexthop via 2001:db8:2::20 dev eth2 weight 1
        nexthop via 2001:db8:1::20 dev eth1 weight 1

    - on failure of an append, a notification is sent with the route containing all of the nexthops successfully added, and it is followed by delete notifications as the hops are removed, returning the route to its prior state. This is consistent with how it works today except the successful additions are coalesced into 1 notification.

Addresses some of the inconsistencies also noted by Roopa at netdev0.1: https://www.netdev01.org/docs/prabhu-linux_ipv4_ipv6_inconsistencies_talk_slides.pdf

v4
- changed series to do encoding in 1 patch and updating notifications in separate patches to make it easier to review and understand
- 1 notification for delete when using prefix/length; 1 notification for append
- handle delete of a single nexthop without RTA_MULTIPATH in delete request
- updated commit messages and cover letter

v3
- removed the need for a user API to opt-in to change. Requiring an API just shifts the difference from same API with different behavior to different API to achieve equivalent behavior
- route notifications changed to use RTA_MULTIPATH for add and replace
- updated commit messages and cover letter

v2
- fixed locking in patch 1 as noted by DaveM
- changed user API for patch 2 to require an rtmsg with RTM_F_ALL_NEXTHOPS set in rtm_flags
- revamped explanation of patch 2 and cover letter
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
ip6_print_replace_route_err logs an error if a route replace fails, with the IPv6 addresses in the full format, e.g.: IPv6: IPV6: multipath route replace failed (check consistency of installed routes): 2001:0db8:0200:0000:0000:0000:0000:0000 nexthop 2001:0db8:0001:0000:0000:0000:0000:0016 ifi 0 Change the message to dump the addresses in the compressed format. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
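The change boils down to the printk format specifier used for the IPv6 addresses; a hedged illustration (not the exact function body), where %pI6 prints the full form and %pI6c the compressed form:

```c
#include <linux/in6.h>
#include <linux/printk.h>

static void example_print_replace_route_err(const struct in6_addr *dst,
					    const struct in6_addr *gw)
{
	/* %pI6  prints the full form:       2001:0db8:0200:0000:... ,
	 * %pI6c prints the compressed form: 2001:db8:200:: */
	pr_warn("IPV6: multipath route replace failed (check consistency of installed routes): %pI6c nexthop %pI6c\n",
		dst, gw);
}
```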
-
David Ahern authored
If an entire multipath route is deleted using prefix and len (without any nexthops), send a single RTM_DELROUTE notification with the full route using RTA_MULTIPATH. This is done by generating the skb before the route delete when all of the sibling routes are still present but sending it after the route has been removed from the FIB. The skip_notify flag is used to tell the lower fib code not to send notifications for the individual nexthop routes. If a route is deleted using RTA_MULTIPATH for any nexthops or a single nexthop entry is deleted, then the nexthops are deleted one at a time with notifications sent as each hop is deleted. This is necessary given that IPv6 allows individual hops within a route to be deleted. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
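A sketch of the flow described, assuming hypothetical helpers around the real primitives (struct nl_info's skip_notify bit, rtnl_notify()): the notification skb is built while the sibling routes still exist, the per-nexthop notifications are suppressed via skip_notify, and one RTM_DELROUTE message goes out after the delete:

```c
#include <linux/rtnetlink.h>
#include <net/ip6_fib.h>
#include <net/netlink.h>

/* hypothetical: build an RTM_DELROUTE skb describing rt and all of its
 * siblings with RTA_MULTIPATH */
static struct sk_buff *example_build_multipath_notification(struct rt6_info *rt);

/* hypothetical: remove rt and its sibling routes from the FIB */
static void example_fib6_del_siblings(struct rt6_info *rt, struct nl_info *info);

static void example_del_multipath_route(struct rt6_info *rt, struct nl_info *info)
{
	struct sk_buff *skb;

	/* build the notification while every sibling route still exists */
	skb = example_build_multipath_notification(rt);

	/* tell the lower FIB code not to notify for each individual nexthop */
	info->skip_notify = 1;
	example_fib6_del_siblings(rt, info);

	/* one RTM_DELROUTE notification for the whole route, sent afterwards */
	rtnl_notify(skb, info->nl_net, info->portid, RTNLGRP_IPV6_ROUTE,
		    info->nlh, GFP_ATOMIC);
}
```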
-
David Ahern authored
Change ip6_route_multipath_add to send one notification with the full route encoded with RTA_MULTIPATH instead of a series of individual routes. This is done by adding a skip_notify flag to the nl_info struct. The flag is used to skip sending of the notification in the fib code that actually inserts the route. Once the full route has been added, a notification is generated with all nexthops. ip6_route_multipath_add handles 3 use cases: new routes, route replace, and route append. The multipath notification generated needs to be consistent with the order of the nexthops, and it should be consistent with the order in a FIB dump, which means the route with the first nexthop needs to be used as the route reference. For the first 2 cases (new and replace), a reference to the route used to send the notification is obtained by saving the first route added. For the append case, the last route added is used to loop back to its first sibling route, which is the first nexthop in the multipath route. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-