- 13 Mar, 2017 40 commits
-
Arkadi Sharshevsky authored
Introduce a periodic task for dumping the activity status of the ACL rule TCAM entries. This is done in order to emulate last-use statistics. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
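A minimal sketch of how such a periodic dump could be scheduled with the kernel's delayed-work API; the struct, callback, and interval names below are illustrative assumptions, not the actual mlxsw identifiers.

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define ACL_ACTIVITY_INTERVAL_MS 1000	/* assumed polling interval */

struct acl_ruleset {
	struct delayed_work activity_dw;
};

static void acl_rule_activity_work(struct work_struct *work)
{
	struct delayed_work *dw = to_delayed_work(work);
	struct acl_ruleset *ruleset =
		container_of(dw, struct acl_ruleset, activity_dw);

	/* Walk the rules here and latch each entry's TCAM activity bit. */

	schedule_delayed_work(&ruleset->activity_dw,
			      msecs_to_jiffies(ACL_ACTIVITY_INTERVAL_MS));
}

static void acl_rule_activity_start(struct acl_ruleset *ruleset)
{
	INIT_DELAYED_WORK(&ruleset->activity_dw, acl_rule_activity_work);
	schedule_delayed_work(&ruleset->activity_dw,
			      msecs_to_jiffies(ACL_ACTIVITY_INTERVAL_MS));
}
```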
-
Arkadi Sharshevsky authored
Currently the ACL rules can be accessed only by hashing. In order to dump the activity the rules are also placed in a list. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
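The pattern being described is keeping each rule in both a hash table (for lookup) and a plain list (for the activity walk); all struct and field names in this sketch are placeholders, not the mlxsw ones.

```c
#include <linux/rhashtable.h>
#include <linux/list.h>

struct acl_rule {
	struct rhash_head ht_node;	/* lookup via rhashtable */
	struct list_head list;		/* linkage for the activity dump walk */
	unsigned long cookie;
};

struct acl_rule_set {
	struct rhashtable ht;
	struct list_head rule_list;	/* every rule is also kept here */
};

/* On insert, the rule is added to both containers. */
static int acl_rule_insert(struct acl_rule_set *ruleset, struct acl_rule *rule,
			   const struct rhashtable_params *params)
{
	int err;

	err = rhashtable_insert_fast(&ruleset->ht, &rule->ht_node, *params);
	if (err)
		return err;
	list_add_tail(&rule->list, &ruleset->rule_list);
	return 0;
}
```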
-
Arkadi Sharshevsky authored
Add support for retrieving TCAM entry activity. In order to support ACL rule activity corresponding TCAM entry should be queried. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arkadi Sharshevsky authored
Add support for allocating generic flow counters. A generic flow counter can count packets, or packets and bytes, and can be assigned to different hardware processes. The first use will be counting packets and bytes of ACL rules, introduced in the following patches. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arkadi Sharshevsky authored
The MGPC register retrieves generic flow counter value. It will be used to query ACL counters. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arkadi Sharshevsky authored
Add an implementation of the counter allocator. The ASIC has a special memory pool for various counting purposes. Counter memory is distributed between equal-size banks. The static sub-pool configuration should specify, for each sub-pool, the number of required banks and the maximum entry size. Each module can add a dedicated sub-pool or use an existing one. Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
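As a rough illustration of what such a static sub-pool table might look like; the struct, field names, and values here are assumptions, not the actual mlxsw definitions.

```c
struct counter_sub_pool {
	unsigned int bank_count;	/* number of equally sized banks claimed */
	unsigned int entry_size;	/* maximum size of one counter entry */
};

static const struct counter_sub_pool counter_sub_pools[] = {
	{ .bank_count = 2, .entry_size = 12 },	/* e.g. ACL flow counters */
};
```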
-
Geliang Tang authored
Use setup_timer() instead of init_timer() to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
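The conversion is the standard one for the pre-4.15 timer API; a sketch, with the adapter struct and callback being made-up names for illustration.

```c
#include <linux/timer.h>

static void watchdog_callback(unsigned long data);

struct my_adapter {
	struct timer_list watchdog_timer;
};

static void timer_init_old(struct my_adapter *adapter)
{
	/* Before: three statements to wire up the timer. */
	init_timer(&adapter->watchdog_timer);
	adapter->watchdog_timer.function = watchdog_callback;
	adapter->watchdog_timer.data = (unsigned long)adapter;
}

static void timer_init_new(struct my_adapter *adapter)
{
	/* After: the same wiring in a single call. */
	setup_timer(&adapter->watchdog_timer, watchdog_callback,
		    (unsigned long)adapter);
}
```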
-
Geliang Tang authored
Use setup_timer() instead of init_timer() to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jakub Kicinski says: ==================== nfp: XDP adjust head support This series adds support for XDP adjust head. The bulk of the code is actually just paying off technical debt. On a reconfiguration request, nfp was allocating new resources separately, leaving the device running with the existing set of rings. We used to manage the new resources in special ring set structures. This set simply separates the datapath part of the device structure from the control information, allowing the new datapath structure to be allocated with all new memory and rings. The swap operation is now greatly simplified. We also save a lot of parameter passing this way. Hopefully the churn is worth the negative diffstat. Support for XDP adjust head is done in a pretty standard way. NFP is a bit special because it prepends metadata before packet data, so we have to do a bit of memcpying in case XDP will run. We also luck out a little bit because the fact that we already have prepend space allocated means that one byte is enough to store the extra XDP space (256 bytes of standard prepend space is a bit inconvenient, since it would normally require 16 bits or a boolean with additional shifts). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Support prepending data from XDP. We are already always allocating some headroom because the FW may prepend metadata to packets. xdp_adjust_head() can be supported by making sure that the headroom is big enough for XDP. In case the FW had prepended metadata to the packet, however, we have to move it out of the way before we run XDP. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
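From the program side, what this enables is the bpf_xdp_adjust_head() helper; a minimal sketch of an XDP program that uses the new headroom to push a 16-byte header (the header and its size are arbitrary, for illustration only).

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_push_header(struct xdp_md *ctx)
{
	unsigned char *data, *data_end;

	/* Negative delta moves the data pointer back, reserving room
	 * in front of the packet. */
	if (bpf_xdp_adjust_head(ctx, -16))
		return XDP_DROP;

	data = (void *)(long)ctx->data;
	data_end = (void *)(long)ctx->data_end;
	if (data + 16 > data_end)
		return XDP_DROP;

	__builtin_memset(data, 0, 16);	/* fill in the new header here */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```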
-
Jakub Kicinski authored
XDP may require us to move metadata to make room for pushing headers. Track the metadata location with a pointer and pass it explicitly to functions. While at it, validate that meta_len from the descriptor is not bogus. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
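A sanity check of this kind could look roughly as follows; the function and parameter names are illustrative, not the nfp driver's.

```c
#include <linux/types.h>

/* Metadata lives in the prepend area, so a sane meta_len can never exceed
 * the reserved headroom, nor the frame itself. */
static bool meta_len_is_sane(unsigned int meta_len, unsigned int headroom,
			     unsigned int frame_len)
{
	return meta_len <= headroom && meta_len < frame_len;
}
```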
-
Jakub Kicinski authored
Rename the pkt_off variable to dma_off, since it holds the data offset counted from the beginning of the DMA mapping. Compute the value only in the XDP context. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
NFP_NET_CFG_RX_OFFSET is 32 bits wide; make sure what we read from there is reasonable for packet headroom. This allows us to store the rx_offset in an 8-bit variable. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Instead of testing whether xdp_prog is present, store the DMA direction in the data path structure. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
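The idea boils down to caching a dma_data_direction value once instead of re-deriving it from the XDP program pointer per packet; a sketch under assumed struct and field names.

```c
#include <linux/dma-mapping.h>
#include <linux/types.h>

struct dp_dma_example {
	enum dma_data_direction rx_dma_dir;
};

static void set_rx_dma_dir(struct dp_dma_example *dp, bool has_xdp_prog)
{
	/* With XDP the CPU may rewrite the buffer in place before handing
	 * it back to the device, so the mapping must be bidirectional;
	 * without XDP the device only ever writes into it. */
	dp->rx_dma_dir = has_xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
}
```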
-
Jakub Kicinski authored
Instead of passing around sets of rings and their parameters, just store all information in the data path structure. We no longer use xchg() on XDP programs, since we swap programs only while traffic is guaranteed not to be flowing. This allows us to simply assign the entire data path structure instead of copying it field by field. The optimization to reallocate only the rings on the side (RX/TX) which has been changed is also removed, since it does not seem worth the code complexity. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
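With everything behind one struct, the swap itself becomes a plain structure assignment; a sketch with placeholder type and field names.

```c
/* Placeholder datapath struct; the real nfp struct has many more members. */
struct dp_swap_example {
	unsigned int mtu;
	unsigned int fl_bufsz;
	void *xdp_prog;
};

static void swap_data_path(struct dp_swap_example *cur,
			   const struct dp_swap_example *next)
{
	/* Rings are stopped at this point, so a whole-struct assignment
	 * replaces field-by-field copying and the xchg() on xdp_prog. */
	*cur = *next;
}
```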
-
Jakub Kicinski authored
Use the xdp_prog member of the data path struct to carry the xdp_prog to the alloc/free functions. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Move the mtu member from ring set to data path struct. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Use fl_bufsz member of data path struct to carry desired size of free list entries. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Instead of passing variables around use dp to store number of tx rings for the stack and number of IRQ vectors. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Make callers of nfp_net_ring_reconfig() pass a newly allocated data path structure. We will gradually make use of that structure instead of passing parameters around to all the allocation functions. This commit adds allocation and propagation of the new data path struct; no parameters are converted yet. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
The control BAR pointer is used to unmask interrupts, so it should be in the first cacheline of the adapter structure. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
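A sketch of the layout idea (not the actual struct nfp_net definition): hot, per-interrupt members are placed at the top so they land in the structure's first cacheline.

```c
#include <linux/cache.h>
#include <linux/types.h>

struct nfp_net_example {
	/* Members touched on every interrupt come first so they share the
	 * structure's first cacheline. */
	u8 __iomem *ctrl_bar;		/* written to unmask interrupts */

	/* ... colder configuration state follows ... */
	unsigned long reconfig_state;
} ____cacheline_aligned;
```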
-
Jakub Kicinski authored
Move all data path information into a separate structure. This way we will be able to allocate new data path with all new rings etc. and swap it in easily. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Joao Pinto says: ==================== prepare mac operations for multiple queues As agreed with David Miller, this patch-set is the first of 3 to enable multiple queues in stmmac. This first one concentrates on mac operations, adding functionality such as: a) Configuration through DT b) RX and TX scheduling algorithms programming c) TX queues weight programming (essential in weighted algorithms) d) RX enable as DCB or AVB (preparing for future AVB support) e) Mapping RX queue to DMA channel f) IRQ treatment prepared for multiple queues g) Debug dump prepared for multiple queues h) CBS configuration In the v3 patch-set version I included a new patch to enable CBS configuration (Patch 9). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adds the configuration of the AVB Credit-Based Shaper. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch prepares mac debug dump for multiple queues. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch prepares mac irq status treatment for multiple queues. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adapts the flow_ctrl function to prepare it for multiple queues. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adds the functionality of RX queue to dma channel mapping based on configuration. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch introduces the enabling of RX queues as DCB or as AVB based on configuration. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adds TX queues weight programming. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adds the RX and TX scheduling algorithms programming. It introduces the multiple queues configuration function (stmmac_mtl_configuration) in stmmac_main. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joao Pinto authored
This patch adds the multiple queues configuration in the Device Tree. A set of structures was also created to hold the RX and TX queue configurations used in the driver. Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
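A rough sketch of the shape this can take; the property string and struct fields below are assumptions for illustration, not the binding text added by the patch.

```c
#include <linux/of.h>
#include <linux/types.h>

/* Per-queue configuration kept in the driver (illustrative fields). */
struct rxq_cfg_example {
	u32 mode;	/* DCB or AVB */
	u32 chan;	/* DMA channel this RX queue maps to */
};

static void parse_mtl_rx_count(struct device_node *np, u32 *rx_queues_to_use)
{
	/* Fall back to a single queue when the property is absent. */
	if (of_property_read_u32(np, "snps,rx-queues-to-use", rx_queues_to_use))
		*rx_queues_to_use = 1;
}
```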
-
David S. Miller authored
Thierry Reding says: ==================== net: stmmac: Fixes and Tegra186 support This series of patches starts with a few cleanups that I ran across while adding Tegra186 support to the stmmac driver. It then adds code for FIFO size parsing from feature registers and finally enables support for the incarnation of the Synopsys DWC QOS IP found on NVIDIA Tegra186 SoCs. This is based on next-20170310. Changes in v2: - address review comments by Mikko and Joao - add two additional cleanup patches ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thierry Reding authored
The NVIDIA Tegra186 SoC contains an instance of the Synopsys DWC ethernet QOS IP core. The binding that it uses is slightly different from existing ones because of the integration (clocks, resets, ...). Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thierry Reding authored
Split out the binding-specific parts of ->probe() and ->remove() to enable the driver to support variants of the binding. This is useful in order to keep backwards compatibility while making it easy for a sub-driver to deal only with the updated bindings rather than having to add compatibility quirks all over the place. Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com> Reviewed-By: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thierry Reding authored
Program the receive queue size based on the RX FIFO size and enable hardware flow control for large FIFOs. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thierry Reding authored
Newer versions of this core encode the FIFO sizes in one of the feature registers. Use these sizes as the default, but still allow the device tree to override them for backwards compatibility. Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
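The precedence being described is simply "device tree value if given, hardware-reported value otherwise"; a minimal sketch with illustrative names.

```c
/* A non-zero device tree value wins, otherwise the size reported by the
 * hardware feature register is used. */
static unsigned int pick_fifo_size(unsigned int dt_size, unsigned int hw_size)
{
	return dt_size ? dt_size : hw_size;
}
```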
-
Thierry Reding authored
When DMA mapping an SKB fragment, the mapping must be checked for errors, otherwise the DMA debug code will complain upon unmap. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
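The general pattern is checking the result of the mapping call with dma_mapping_error() before handing the address to hardware; a sketch (the real patch applies this to SKB fragment mappings, the helper below only shows the check).

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int map_buf(struct device *dev, void *vaddr, size_t len,
		   dma_addr_t *out)
{
	dma_addr_t mapping = dma_map_single(dev, vaddr, len, DMA_TO_DEVICE);

	/* Without this check, CONFIG_DMA_API_DEBUG complains at unmap time. */
	if (dma_mapping_error(dev, mapping))
		return -ENOMEM;

	*out = mapping;
	return 0;
}
```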
-
Thierry Reding authored
clk_prepare_enable() and clk_disable_unprepare() for this clock aren't properly balanced, which can trigger a WARN_ON() in the common clock framework. Reviewed-By: Joao Pinto <jpinto@synopsys.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
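Balanced usage means each clk_prepare_enable() is paired with exactly one clk_disable_unprepare(); a sketch with illustrative function names.

```c
#include <linux/clk.h>

static int example_open(struct clk *ptp_ref)
{
	int err = clk_prepare_enable(ptp_ref);

	if (err)
		return err;
	/* ... bring the interface up ... */
	return 0;
}

static void example_close(struct clk *ptp_ref)
{
	/* ... tear the interface down ... */
	clk_disable_unprepare(ptp_ref);
}
```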
-
Thierry Reding authored
If an error occurs while opening the device, make sure to disable the PTP reference clock. Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-