- 30 Mar, 2016 31 commits
-
-
Liad Kaufman authored
Change the CMD queue to be queue #0 (rather than queue #9) when working in DQA mode. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Liad Kaufman authored
In DQA mode, allocate a dedicated queue (#3) for content after beacon (AKA "CaB"). Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Liad Kaufman authored
Set the correct sta_id in the SCD_QUEUE_CONFIG command sent to the FW when enabling/disabling queues. This is needed in DQA-mode to allow the FW to associate between queue and STA. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Liad Kaufman authored
Use the reserved BSS Client queue when connecting to an AP in DQA mode. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Oren Givon authored
Edit some of the 9560 series and 5165 series PCI IDs. These devices have not been released yet. Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
Improve the current RSS configuration: * Use netdev_rss_key instead of keeping a local copy. * Also configure UDP hashing, so UDP traffic is spread across queues. * Do not direct RSS traffic to our fallback queue. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
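For context, a minimal sketch of what such an RSS configuration could look like; the command layout, flag names, and table sizes below are illustrative assumptions rather than the actual iwlwifi structures (only netdev_rss_key_fill() is the real kernel helper):

```c
#include <linux/netdevice.h>

#define RSS_HASH_TCP (1 << 0)	/* illustrative flag, not the driver's */
#define RSS_HASH_UDP (1 << 1)	/* illustrative flag, not the driver's */

/* Hypothetical RSS configuration command, for illustration only. */
struct rss_config_cmd {
	u8 hash_mask;
	u8 secret_key[40];
	u8 indirection_table[128];
};

static void build_rss_config(struct rss_config_cmd *cmd, int num_rx_queues)
{
	int i;

	/* Use the kernel's global RSS key rather than a private copy. */
	netdev_rss_key_fill(cmd->secret_key, sizeof(cmd->secret_key));

	/* Hash UDP as well as TCP so UDP flows also spread across queues. */
	cmd->hash_mask = RSS_HASH_TCP | RSS_HASH_UDP;

	/* Spread flows over queues 1..N-1; queue 0 stays the fallback
	 * queue and never receives RSS-steered traffic. */
	for (i = 0; i < (int)sizeof(cmd->indirection_table); i++)
		cmd->indirection_table[i] = 1 + (i % (num_rx_queues - 1));
}
```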
-
Sara Sharon authored
We want to request one interrupt vector per CPU for RSS queues, one vector for the fallback queue, and one for non-RX interrupts. A future patch will make sure that no RSS traffic is directed to the fallback queue. This will let us enable the fast path for traffic that would otherwise have been received on the fallback queue. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
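A sketch of the vector-allocation arithmetic this describes, using the generic pci_enable_msix_range() API; the function and its surrounding context are assumptions, not the driver's actual code:

```c
#include <linux/pci.h>
#include <linux/cpumask.h>

/* One vector per CPU for RSS queues, plus one for the fallback queue and
 * one for non-RX (command/alive) interrupts. Sketch only. */
static int request_rx_vectors(struct pci_dev *pdev, struct msix_entry *entries,
			      int max_supported)
{
	int i;
	int nvec = min_t(int, num_online_cpus() + 2, max_supported);

	for (i = 0; i < nvec; i++)
		entries[i].entry = i;

	/* Returns the number of vectors actually granted, or a negative
	 * errno; the driver would fall back to MSI/INTx on failure. */
	return pci_enable_msix_range(pdev, entries, 1, nvec);
}
```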
-
Chaya Rachel Ivgi authored
The driver can read the current state during D0I3; therefore, there is no reason not to do it. Signed-off-by: Chaya Rachel Ivgi <chaya.rachel.ivgi@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Liad Kaufman authored
For some reason, this was defined as a signed variable. Make it unsigned. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
David Spinadel authored
The auxiliary station ID flag in the scan config command wasn't set, although we set the station ID. Add the flag. Signed-off-by: David Spinadel <david.spinadel@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
Currently the code checks whether the hardware reported both the L4 and L3 checksums as valid, and only then reports the packet as validated to the stack. However, IPv6 has no L3 checksum at all, so the L3 checksum valid bit is always off for IPv6 packets, with the result that the stack re-validates the L4 checksum. Fix the code to set CHECKSUM_UNNECESSARY also for IPv6 packets whose TCP/UDP checksum was verified. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
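A sketch of the corrected condition; the status-bit names are placeholders for the hardware's RX metadata flags, while CHECKSUM_UNNECESSARY is the real skb field value:

```c
#include <linux/skbuff.h>

#define L3_CSUM_OK (1U << 0)	/* placeholder for the HW L3-valid bit */
#define L4_CSUM_OK (1U << 1)	/* placeholder for the HW L4-valid bit */

static void rx_csum(struct sk_buff *skb, u32 status, bool is_ipv6)
{
	/* IPv6 has no L3 checksum, so the L3-valid bit is never set for
	 * IPv6 frames; trust the L4 bit alone in that case instead of
	 * requiring both bits as the old code did. */
	if ((status & L4_CSUM_OK) && (is_ipv6 || (status & L3_CSUM_OK)))
		skb->ip_summed = CHECKSUM_UNNECESSARY;
}
```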
-
Eva Rachel Retuya authored
Use alloc_ordered_workqueue() to allocate the workqueue instead of create_singlethread_workqueue(), since the latter is deprecated and scheduled for removal. There are work items doing related operations that shouldn't be swapped when queued in a certain order, so preserve the strict execution ordering of a single-threaded (ST) workqueue by switching to alloc_ordered_workqueue(). The WQ_MEM_RECLAIM flag is not needed since the workqueue is not depended upon during memory reclaim. Signed-off-by: Eva Rachel Retuya <eraretuya@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
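The switch itself is a one-line change; a before/after sketch, with the queue name as a placeholder:

```c
#include <linux/workqueue.h>

static struct workqueue_struct *make_wq(void)
{
	/* Before (deprecated, scheduled for removal):
	 *	return create_singlethread_workqueue("iwl_wq");
	 *
	 * After: same strict one-at-a-time execution ordering; no
	 * WQ_MEM_RECLAIM since nothing depends on this queue during
	 * memory reclaim. "iwl_wq" is a placeholder name. */
	return alloc_ordered_workqueue("iwl_wq", 0);
}
```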
-
Sara Sharon authored
API versions lower than 16 are not supported anymore - don't load older ucode. Remove the code handling older versions. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Ayala Beker authored
GSCAN capabilities were updated with new capabilities supported by the device. Update the GSCAN capabilities TLV accordingly. Signed-off-by: Ayala Beker <ayala.beker@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Emmanuel Grumbach authored
We have a module parameter; this is enough. Per-platform customizations will be done through the platform's init script. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
Due to a hardware bug, upon any shadow free-queue register write access, a legacy RBD shadow register must be written as well. This is required in order to trigger a copy of the shadow registers' values after the MAC exits sleep state. Specifically, the driver has to write (any value) to the legacy RBD register each time FRBDCB is accessed. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Liad Kaufman authored
"DQA" is shorthand for "dynamic queue allocation". This enables on-demand allocation of queues per RA/TID rather than statically allocating per vif, thus allowing a potential benefit of various factors. Please refer to the DOC section this patch adds to sta.h to see a more in-depth explanation of this feature. There are many things to take into consideration when working in DQA mode, and this patch is only one in a series. Note that default operation mode is non-DQA mode, unless the FW indicates that it supports DQA mode. This patch enables support of DQA for a station connected to an AP, and works in a non-aggregated mode. When a frame for an unused RA/TID arrives at the driver, it isn't TXed immediately, but deferred first until a suitable queue is first allocated for it, and then TXed by a worker that both allocates the queues and TXes deferred traffic. When a STA is removed, its queues goes back into the queue pools for reuse as needed. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Johannes Berg authored
"is_data_qos == true" is equivalent to "tid < IWL_MAX_TID_COUNT" since tid is only assigned (and range-checked) in that case. This removes a (harmless) smatch warning that occurs because it can't seem to follow the above logic from the code. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Haim Dreyfuss authored
Update the device IDs and FW serial number for 2X2 antenna devices in the 9000 generation product line. These will be available on the market in the coming year. Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Emmanuel Grumbach authored
This allows disabling uapsd for the BSS only, or for the P2P client, separately. Remove the now-unneeded IWL_MVM_P2P_UAPSD_STANDALONE constant. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Emmanuel Grumbach authored
iwlwifi / iwlmvm didn't destroy their mutexes. Fix that. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
The TX CMD API has changed to support offload assist. We do not enable checksum yet, but must set the padding indication to avoid FW errors. Set the AMSDU flag as well. The rest of the flags will be configured only if HW csum is enabled and will be set in future patches. This change is backward compatible. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
We insert padding if the MAC header's size is not a multiple of 4, to ensure that the SNAP header is DWORD aligned. When we do so, we let the firmware know by setting a bit in the Tx command (TX_CMD_FLG_MH_PAD), which instructs the firmware to drop those 2 bytes before sending the frame. However, this is not needed for AMSDU, as the subframe header (14B) complements the MAC header (26B) so that the SNAP header is DWORD aligned without adding any pad. Until the 9000 series, the firmware didn't check the TX_CMD_FLG_MH_PAD bit but rather checked the length of the MAC header itself and assumed the entity that enqueued the frame (driver or internal firmware code) added the pad. Since the driver inserted the pad even for AMSDU, this logic applied. Note that the padding is a DMA optimization and is not strictly needed, so we could pad even when it is not required. However, the CSUM hardware introduced for the 9000 devices requires that AMSDUs not be padded, since the pad is not needed there, and will fail if such a pad exists. Because older FW doesn't check the padding bit but checks the MAC header size itself, we cannot make this adjustment for older generations. Do not align the size if the frame is an AMSDU and HW checksum is enabled, which will only happen on 9000 devices and on (see the sketch below). Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
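A sketch of the resulting alignment decision; ieee80211_hdrlen() is the real mac80211 helper, TX_CMD_FLG_MH_PAD mirrors the driver's flag value, and the surrounding variables are assumptions:

```c
#include <linux/ieee80211.h>

#define TX_CMD_FLG_MH_PAD cpu_to_le32(1U << 20)	/* mirrors the driver's flag */

static unsigned int tx_hdr_len(struct ieee80211_hdr *hdr, bool amsdu,
			       bool csum_enabled, __le32 *tx_flags)
{
	unsigned int hdrlen = ieee80211_hdrlen(hdr->frame_control);

	/* Pad to a DWORD boundary so the SNAP header is aligned, except
	 * for AMSDU with HW checksum enabled (9000+): there the 14-byte
	 * subframe header already complements the 26-byte MAC header and
	 * the CSUM hardware rejects a pad. */
	if ((hdrlen & 3) && !(amsdu && csum_enabled)) {
		hdrlen += 2;
		*tx_flags |= TX_CMD_FLG_MH_PAD;
	}
	return hdrlen;
}
```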
-
Emmanuel Grumbach authored
This makes u-APSD work with more peers. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Emmanuel Grumbach authored
Bjorn pointed out that printing an error value as hexadecimal isn't very convenient. Change that. Reported-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Chaya Rachel Ivgi authored
Use RX_HANDLER_ASYNC_UNLOCKED instead of unlocking and re-locking the mutex independently. Signed-off-by: Chaya Rachel Ivgi <chaya.rachel.ivgi@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Luca Coelho authored
We don't use the refcount value anymore, all the refcounting is done in the runtime PM usage_count value. Remove it. Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Sara Sharon authored
When entering suspend the driver calls iwl_disable_interrupts() and then iwl_pcie_disable_ict(). On resume the driver calls only iwl_pcie_reset_ict(), without explicitly calling iwl_enable_interrupts(). This mostly works, since iwl_pcie_reset_ict() calls iwl_enable_interrupts(), but it doesn't work in MSI-X mode, where there is no ict_table. The result is that the driver tries to resume but fails, since it doesn't get the RX interrupt from the FW indicating that d0i3 exit was completed. Fix it by adding an explicit call to enable interrupts. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
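A sketch of the fixed resume path; the function names follow the commit message and are declared here as stubs so the sketch is self-contained:

```c
struct iwl_trans;	/* opaque transport handle */
void iwl_pcie_reset_ict(struct iwl_trans *trans);
void iwl_enable_interrupts(struct iwl_trans *trans);

static void pcie_resume_irqs(struct iwl_trans *trans)
{
	/* Re-arms the ICT table and, as a side effect, enables interrupts,
	 * but only when an ict_table exists; in MSI-X mode it does not. */
	iwl_pcie_reset_ict(trans);

	/* Explicit enable, so MSI-X mode also gets the RX interrupt that
	 * signals d0i3 exit completion. */
	iwl_enable_interrupts(trans);
}
```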
-
Sara Sharon authored
My patch resized the pool, but neglected to resize the global table, which is obviously wrong since the global table maps the pool's rxbs to vids one to one. This results in a panic on 9000 devices. Add a build-time check to avoid such a case in the future. Fixes: 7b542436 ("iwlwifi: pcie: fine tune number of rxbs") Reported-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
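The guard is a compile-time assertion tying the two sizes together; a sketch with illustrative constants in place of the driver's:

```c
#include <linux/bug.h>

#define RX_POOL_SIZE		512	/* illustrative */
#define RX_GLOBAL_TABLE_SIZE	512	/* illustrative: the vid -> rxb map */

static void check_rx_pool_mapping(void)
{
	/* Every rxb in the pool needs a slot in the global table; catch a
	 * mismatched resize at build time instead of with a runtime panic. */
	BUILD_BUG_ON(RX_POOL_SIZE > RX_GLOBAL_TABLE_SIZE);
}
```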
-
Aviya Erenfeld authored
Add a debugfs entry named lqm_send_cmd for kicking off a measurement. This hook takes the duration and the timeout as parameters. Signed-off-by: Aviya Erenfeld <aviya.erenfeld@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Aviya Erenfeld authored
LQM stands for Link Quality Measurement. The firmware will collect a defined set of statistics (see the notification for details) that make it possible to know how busy the medium is. The driver issues a request to the firmware that includes the duration of the measurement (the firmware needs to be on channel for that amount of time) and the timeout (in case the firmware has a lot of offchannel activities). If the timeout elapses, the firmware will send partial results, which are still valuable. In case of disassociation / channel switch and the like, the driver is in charge of stopping the measurement, and the firmware will reply with partial results. The user space API for now is debugfs only and will be implemented in an upcoming patch. Signed-off-by: Aviya Erenfeld <aviya.erenfeld@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 20 Mar, 2016 4 commits
-
-
Golan Ben-Ami authored
In case of a FW error, support dumping the UMAC internal txfifos. To do so, support version 2 of the shared memory cfg command, which contains the sizes of the internal txfifos, and move the command to the system group. Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
Matti Gottlieb authored
Paging contains 3 sections in the FW image: the first for the paging separator, the second for the CSS block, and the third with the paging data. Currently, if the driver finds the paging separator and there is only one section left (the CSS), then after reading the CSS section the driver will attempt to read the paging data and will go out of the array's bounds. Make sure that the FW image contains the right number of sections for paging. Signed-off-by: Matti Gottlieb <matti.gottlieb@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
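A sketch of the bounds check; the image/section field names are assumptions:

```c
#include <linux/errno.h>

struct fw_image {
	int num_sections;	/* hypothetical field */
};

/* After the separator at sep_idx, two more sections must exist:
 * the CSS block and the paging data. */
static int check_paging_sections(const struct fw_image *img, int sep_idx)
{
	if (sep_idx + 2 >= img->num_sections)
		return -EINVAL;
	return 0;
}
```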
-
Matti Gottlieb authored
Currently the driver has 2 buffers for paging: 1. The paging db - this contains all of the pages that were in the FW image, which the driver stores for the FW. This is allocated for each block separately (not contiguous). 2. The download buffer - we need to provide this empty buffer for the iwl_sdio_load_fw_chunk function to copy the requested pages to the shared memory. This is one big buffer of contiguous memory, whose size is the size of all the blocks that the FW paging section can contain. This download buffer size is too big and sometimes causes the allocation to fail. Since the driver allocates memory for each block separately, it is not possible for the FW to request all of the pages in one request (the FW gives an address and a size, so blocks would need to be contiguous for this to happen); therefore the FW is limited to requesting only one block. Decrease the size of the paging download buffer to the size of one paging block. Signed-off-by: Matti Gottlieb <matti.gottlieb@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
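The size change itself, sketched with illustrative constants standing in for the driver's paging geometry:

```c
#include <linux/slab.h>

#define PAGE_SZ		4096				/* illustrative */
#define PAGES_PER_BLOCK	8				/* illustrative */
#define BLOCK_SZ	(PAGES_PER_BLOCK * PAGE_SZ)
#define MAX_BLOCKS	32				/* illustrative */

static void *alloc_download_buf(void)
{
	/* Before: room for every block the paging section may hold, i.e.
	 *	kzalloc(MAX_BLOCKS * BLOCK_SZ, GFP_KERNEL);
	 * a large contiguous allocation that sometimes fails.
	 *
	 * After: the FW can only request one (contiguous) block at a time,
	 * so one block's worth of contiguous memory is enough. */
	return kzalloc(BLOCK_SZ, GFP_KERNEL);
}
```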
-
Sara Sharon authored
Currently, when the stop flow is performed, there might be transport TX RTPM references that are not freed if we unmap a queue that still has unreclaimed packets. Fix that. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
-
- 19 Mar, 2016 5 commits
-
-
Linus Torvalds authored
Pull networking updates from David Miller: "Highlights:

1) Support more Realtek wireless chips, from Jes Sorenson.

2) New BPF types for per-cpu hash and array maps, from Alexei Starovoitov.

3) Make several TCP sysctls per-namespace, from Nikolay Borisov.

4) Allow the use of SO_REUSEPORT in order to do per-thread processing of incoming TCP/UDP connections. The muxing can be done using a BPF program which hashes the incoming packet. From Craig Gallek.

5) Add a multiplexer for TCP streams, to provide a message-based interface. BPF programs can be used to determine the message boundaries. From Tom Herbert.

6) Add 802.1AE MACSEC support, from Sabrina Dubroca.

7) Avoid factorial complexity when taking down an inetdev interface with lots of configured addresses. We were doing things like traversing the entire address list for each address removed, and flushing the entire netfilter conntrack table for every address as well.

8) Add and use SKB bulk free infrastructure, from Jesper Brouer.

9) Allow offloading u32 classifiers to hardware, and implement for ixgbe, from John Fastabend.

10) Allow configuring IRQ coalescing parameters on a per-queue basis, from Kan Liang.

11) Extend ethtool so that larger link mode masks can be supported. From David Decotigny.

12) Introduce devlink, which can be used to configure port link types (ethernet vs Infiniband, etc.), port splitting, and switch device level attributes as a whole. From Jiri Pirko.

13) Hardware offload support for flower classifiers, from Amir Vadai.

14) Add "Local Checksum Offload". Basically, for a tunneled packet the checksum of the outer header is 'constant' (because with the checksum field filled into the inner protocol header, the payload of the outer frame checksums to 'zero'), and we can take advantage of that in various ways. From Edward Cree"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1548 commits)
  bonding: fix bond_get_stats()
  net: bcmgenet: fix dma api length mismatch
  net/mlx4_core: Fix backward compatibility on VFs
  phy: mdio-thunder: Fix some Kconfig typos
  lan78xx: add ndo_get_stats64
  lan78xx: handle statistics counter rollover
  RDS: TCP: Remove unused constant
  RDS: TCP: Add sysctl tunables for sndbuf/rcvbuf on rds-tcp socket
  net: smc911x: convert pxa dma to dmaengine
  team: remove duplicate set of flag IFF_MULTICAST
  bonding: remove duplicate set of flag IFF_MULTICAST
  net: fix a comment typo
  ethernet: micrel: fix some error codes
  ip_tunnels, bpf: define IP_TUNNEL_OPTS_MAX and use it
  bpf, dst: add and use dst_tclassid helper
  bpf: make skb->tc_classid also readable
  net: mvneta: bm: clarify dependencies
  cls_bpf: reset class and reuse major in da
  ldmvsw: Checkpatch sunvnet.c and sunvnet_common.c
  ldmvsw: Add ldmvsw.c driver code
  ...
-
Linus Torvalds authored
Pull cgroup updates from Tejun Heo: "cgroup changes for v4.6-rc1. No userland visible behavior changes in this pull request. I'll send out a separate pull request for the addition of cgroup namespace support.

- The biggest change is the revamping of cgroup core task migration and controller handling logic. There are quite a few places where controllers and tasks are manipulated. Previously, many of those places implemented custom operations for each specific use case assuming specific starting conditions. While this worked, it makes the code fragile and difficult to follow. The bulk of this pull request restructures these operations so that most related operations are performed through common helpers which are recursive (subtrees are always processed consistently) and idempotent (they make the cgroup hierarchy converge to the target state rather than performing operations assuming specific starting conditions). This makes the code a lot easier to understand, verify and extend.

- Implicit controller support is added. This is primarily for using perf_event on the v2 hierarchy so that perf can match cgroup v2 paths without requiring the user to do anything special. The kernel portion of the perf_event changes is acked but userland changes are still pending review.

- The cgroup_no_v1= boot parameter was added to ease testing cgroup v2 in certain environments.

- There is a regression introduced during the v4.4 devel cycle where attempts to migrate zombie tasks can mess up internal object management. This was fixed earlier this week and is included in this pull request w/ stable cc'd.

- Misc non-critical fixes and improvements"

* 'for-4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (44 commits)
  cgroup: avoid false positive gcc-6 warning
  cgroup: ignore css_sets associated with dead cgroups during migration
  Documentation: cgroup v2: Trivial heading correction.
  cgroup: implement cgroup_subsys->implicit_on_dfl
  cgroup: use css_set->mg_dst_cgrp for the migration target cgroup
  cgroup: make cgroup[_taskset]_migrate() take cgroup_root instead of cgroup
  cgroup: move migration destination verification out of cgroup_migrate_prepare_dst()
  cgroup: fix incorrect destination cgroup in cgroup_update_dfl_csses()
  cgroup: Trivial correction to reflect controller.
  cgroup: remove stale item in cgroup-v1 document INDEX file.
  cgroup: update css iteration in cgroup_update_dfl_csses()
  cgroup: allocate 2x cgrp_cset_links when setting up a new root
  cgroup: make cgroup_calc_subtree_ss_mask() take @this_ss_mask
  cgroup: reimplement rebind_subsystems() using cgroup_apply_control() and friends
  cgroup: use cgroup_apply_enable_control() in cgroup creation path
  cgroup: combine cgroup_mutex locking and offline css draining
  cgroup: factor out cgroup_{apply|finalize}_control() from cgroup_subtree_control_write()
  cgroup: introduce cgroup_{save|propagate|restore}_control()
  cgroup: make cgroup_drain_offline() and cgroup_apply_control_{disable|enable}() recursive
  cgroup: factor out cgroup_apply_control_enable() from cgroup_subtree_control_write()
  ...
-
Eric Dumazet authored
bond_get_stats() can be called from rtnetlink (with RTNL held) or from the /proc/net/dev seq handler (with RCU held). The logic added in commit 5f0c5f73 ("bonding: make global bonding stats more reliable") kind of assumed only one cpu could run there. If multiple threads are reading /proc/net/dev, stats can be really messed up after a while. A second problem is that some fields are 32bit, so we need to properly handle the wrap-around problem. Given that RTNL is not always held, we need to use bond_for_each_slave_rcu(). Fixes: 5f0c5f73 ("bonding: make global bonding stats more reliable") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Andy Gospodarek <gospo@cumulusnetworks.com> Cc: Jay Vosburgh <j.vosburgh@gmail.com> Cc: Veaceslav Falico <vfalico@gmail.com> Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
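The 32-bit wrap-around handling comes down to unsigned subtraction; a sketch of the accumulation pattern, with names invented for illustration:

```c
#include <linux/types.h>

/* Fold a possibly-wrapping 32-bit device counter into a 64-bit total.
 * Unsigned subtraction yields the right delta even across a wrap:
 * cur = 5, prev = 0xfffffffb gives a delta of 10. */
static inline void add_stat_delta(u64 *total, u32 cur, u32 prev)
{
	*total += (u32)(cur - prev);
}
```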
-
Eric Dumazet authored
When un-mapping skb->data in __bcmgenet_tx_reclaim(), we must use the length that was used in the original dma_map_single(), instead of skb->len, which might be bigger (it includes the frags). We can simply store skb_len into tx_cb_ptr->dma_len and use it at unmap time. Fixes: 1c1008c7 ("net: bcmgenet: add main driver file") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
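A sketch of the map/unmap pairing described above; the control-block fields follow the message, while the helpers around them are assumed context:

```c
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

struct tx_ctl_block {			/* sketch of bcmgenet's tx_cb_ptr */
	dma_addr_t dma_addr;
	unsigned int dma_len;
};

/* Map: record the exact length handed to dma_map_single(); skb->len may
 * be bigger (it includes the frags) and must not be used at unmap time. */
static int tx_map(struct device *dev, struct tx_ctl_block *cb,
		  struct sk_buff *skb, unsigned int skb_len)
{
	cb->dma_addr = dma_map_single(dev, skb->data, skb_len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, cb->dma_addr))
		return -ENOMEM;
	cb->dma_len = skb_len;
	return 0;
}

/* Unmap: use the stored length, not skb->len. */
static void tx_unmap(struct device *dev, struct tx_ctl_block *cb)
{
	dma_unmap_single(dev, cb->dma_addr, cb->dma_len, DMA_TO_DEVICE);
}
```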
-
Eli Cohen authored
Commit 85743f1e ("net/mlx4_core: Set UAR page size to 4KB regardless of system page size") introduced a dependency where old VF drivers without this fix fail to load if the PF driver runs with this commit. To resolve this, add a module parameter which disables that functionality by default. If both the PF and VFs are running with a driver with that commit, the administrator may set the module param to true. The module parameter is called enable_4k_uar. Fixes: 85743f1e ('net/mlx4_core: Set UAR page size to 4KB ...') Signed-off-by: Eli Cohen <eli@mellanox.com> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
-