- 18 Jan, 2014 14 commits
-
sfeldma@cumulusnetworks.com authored
If a link is IFF_SLAVE, extend the link dev netlink attributes to include slave attributes under a new IFLA_SLAVE nest. Add a netlink notification (RTM_NEWLINK) when slave status changes from backup to active, or vice versa. Add a new ndo_get_slave op to net_device_ops to fill the skb with IFLA_SLAVE attributes. Currently only the bonding driver uses this, but it could be used by other aggregating devices with slaves. Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
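A minimal sketch of how a driver's ndo_get_slave might fill the new IFLA_SLAVE nest; the op signature, the IFLA_SLAVE_STATE sub-attribute and the helper are assumptions for illustration, not the bonding driver's exact code:

static int example_ndo_get_slave(struct net_device *slave_dev, struct sk_buff *skb)
{
	struct nlattr *nest;
	u8 state = example_slave_state(slave_dev);	/* hypothetical helper */

	nest = nla_nest_start(skb, IFLA_SLAVE);
	if (!nest)
		return -EMSGSIZE;

	/* one attribute per exported slave property */
	if (nla_put_u8(skb, IFLA_SLAVE_STATE, state))
		goto nla_put_failure;

	nla_nest_end(skb, nest);
	return 0;

nla_put_failure:
	nla_nest_cancel(skb, nest);
	return -EMSGSIZE;
}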
-
sfeldma@cumulusnetworks.com authored
Add a sub-directory under /sys/class/net/<interface>/slave with read-only attributes for the slave. The directory only appears when <interface> is a slave.

$ tree /sys/class/net/eth2/slave/
/sys/class/net/eth2/slave/
├── ad_aggregator_id
├── link_failure_count
├── mii_status
├── perm_hwaddr
├── queue_id
└── state

$ cat /sys/class/net/eth2/slave/*
2
0
up
40:02:10:ef:06:01
0
active

Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
Jesse Brandeburg reported that commit acaf4e70 caused a panic when adding a network namespace while the vxlan module was present in the system:

[<ffffffff814d0865>] vxlan_lowerdev_event+0xf5/0x100
[<ffffffff816e9e5d>] notifier_call_chain+0x4d/0x70
[<ffffffff810912be>] __raw_notifier_call_chain+0xe/0x10
[<ffffffff810912d6>] raw_notifier_call_chain+0x16/0x20
[<ffffffff815d9610>] call_netdevice_notifiers_info+0x40/0x70
[<ffffffff815d9656>] call_netdevice_notifiers+0x16/0x20
[<ffffffff815e1bce>] register_netdevice+0x1be/0x3a0
[<ffffffff815e1dce>] register_netdev+0x1e/0x30
[<ffffffff814cb94a>] loopback_net_init+0x4a/0xb0
[<ffffffffa016ed6e>] ? lockd_init_net+0x6e/0xb0 [lockd]
[<ffffffff815d6bac>] ops_init+0x4c/0x150
[<ffffffff815d6d23>] setup_net+0x73/0x110
[<ffffffff815d725b>] copy_net_ns+0x7b/0x100
[<ffffffff81090e11>] create_new_namespaces+0x101/0x1b0
[<ffffffff81090f45>] copy_namespaces+0x85/0xb0
[<ffffffff810693d5>] copy_process.part.26+0x935/0x1500
[<ffffffff811d5186>] ? mntput+0x26/0x40
[<ffffffff8106a15c>] do_fork+0xbc/0x2e0
[<ffffffff811b7f2e>] ? ____fput+0xe/0x10
[<ffffffff81089c5c>] ? task_work_run+0xac/0xe0
[<ffffffff8106a406>] SyS_clone+0x16/0x20
[<ffffffff816ee689>] stub_clone+0x69/0x90
[<ffffffff816ee329>] ? system_call_fastpath+0x16/0x1b

Apparently the loopback device is registered first, and thus we receive an event notification while vxlan_net is not yet ready. Hence, when we call net_generic() and request vxlan_net_id, we end up accessing garbage at that point in time. In setup_net(), where we set up a newly allocated network namespace, we traverse the list of pernet ops ...

list_for_each_entry(ops, &pernet_list, list) {
	error = ops_init(ops, net);
	if (error < 0)
		goto out_undo;
}

... and loopback_net_init() is invoked first here, so in the middle of setup_net() we get this notification in vxlan. As we currently only care about devices that unregister, move the access through net_generic() there. The fix is based on Cong Wang's proposal, but only changes what is needed here. It is not ideal, as it works around rather than cures the underlying issue: right now the only way to check whether a netns has actually finished traversing all init ops seems to be to check whether it is part of net_namespace_list, and doing that on every notifier callback would be quite expensive. Anyway, a couple of tests suggest it is good for now. Fixes: acaf4e70 ("net: vxlan: when lower dev unregisters remove vxlan dev as well") Reported-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Tested-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
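A rough sketch of the resulting notifier shape, with the net_generic() lookup moved under the unregister check (the unregister helper name is illustrative):

static int vxlan_lowerdev_event(struct notifier_block *unused,
				unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (event == NETDEV_UNREGISTER) {
		/* only now is it safe to look up vxlan's per-netns state */
		struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);

		vxlan_handle_lowerdev_unregister(vn, dev);	/* illustrative helper */
	}

	return NOTIFY_DONE;
}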
-
David S. Miller authored
Aaron Brown says: ==================== Intel Wired LAN Driver Updates This series contains updates to ixgbe from Ethan Zhao. The first patch replaces the magic number "63" with a macro, IXGBE_MAX_VFS_DRV_LIMIT; the second moves the call to set driver_max_VFs to before SRIOV is enabled. The code of these patches matches the v3 (1/2) and v2 (2/2) versions sent to the e1000-devel and netdev mailing lists. The intermediate versions (v4, v5) only sorted out style issues, mostly tabs vs. spaces and split lines, probably introduced via the mailer. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
ethan.zhao authored
Commit 43dc4e01 ("Limit number of reported VFs to device specific value") doesn't work and always returns -EBUSY, because the VFs are already enabled by the time pci_sriov_set_totalvfs() is called:

ixgbe_enable_sriov()
  pci_enable_sriov()
    sriov_enable()
    {
        ...
        iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
        pci_cfg_access_lock(dev);
        ...
    }

pci_sriov_set_totalvfs()
{
        ...
        if (dev->sriov->ctrl & PCI_SRIOV_CTRL_VFE)
                return -EBUSY;
        ...
}

So driver_max_VFs should be set with pci_sriov_set_totalvfs() before the VFs are enabled with ixgbe_enable_sriov(). V2: revised for net-next tree. Signed-off-by: Ethan Zhao <ethan.kernel@gmail.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
ethan.zhao authored
The ixgbe driver limits the maximum number of VF functions that can be enabled to 63, so define a macro, IXGBE_MAX_VFS_DRV_LIMIT, and clean up the hard-coded 63 in the code. v3: revised for net-next tree. Signed-off-by: Ethan Zhao <ethan.kernel@gmail.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
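Illustrative only; the exact definition and call sites in the driver may differ:

/* driver-imposed VF limit, previously a bare 63 scattered through the code */
#define IXGBE_MAX_VFS_DRV_LIMIT	63

/* call sites can then read, e.g.:
 *	num_vfs = min_t(unsigned int, num_vfs, IXGBE_MAX_VFS_DRV_LIMIT);
 * instead of comparing against a literal 63.
 */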
-
Eric Dumazet authored
This patch:

1) Removes a dst leak when DST_NOCACHE was set on the dst. Fix this by holding a reference only if the dst is really cached.
2) Removes a lockdep warning in __tunnel_dst_set(), reported by Cong Wang.
3) Removes usage of a spinlock where xchg() is enough.
4) Removes some spurious inline keywords. Let the compiler decide for us.

Fixes: 7d442fab ("ipv4: Cache dst in tunnels") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Cong Wang <cwang@twopensource.com> Cc: Tom Herbert <therbert@google.com> Cc: Maciej Żenczykowski <maze@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
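A hedged sketch of item 3, swapping the cached dst with xchg() instead of taking a spinlock; the struct field and function name are illustrative, not the exact tunnel code:

static void tunnel_dst_set_sketch(struct ip_tunnel *t, struct dst_entry *dst)
{
	struct dst_entry *old;

	old = xchg(&t->dst_cache, dst);	/* atomically publish the new cached dst */
	if (old)
		dst_release(old);	/* drop the reference to the previous entry */
}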
-
Simon Horman authored
The r7s72100 SoC includes a fast ethernet controller. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Simon Horman authored
Return a boolean from sh_eth_is_gether() and refactor it as a one-liner. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Flavio Leitner authored
RFC 3810 defines two types of messages for multicast listeners. The "Current State Report" message, as the name implies, refreshes the *current* state with the querier. Since the querier sends Query messages periodically, there is no need to retransmit the report. On the other hand, any change should be reported immediately using "State Change Report" messages. Since such a report is triggered by a change and can be affected by packet loss, the RFC states it should be retransmitted [RobVar] times to make sure routers receive it in a timely manner. Currently, we send "Current State Reports" after DAD is completed. Before that, we send messages using the unspecified address (::), which should be silently discarded by routers. This patch changes the code to send "State Change Report" messages after DAD is completed, fixing the behavior to be RFC compliant and also to pass the TAHI IPv6 test suite. Signed-off-by: Flavio Leitner <fbl@redhat.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Remove function qlcnic_enable_eswitch which was defined but never used in current code. Compile tested only. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Functions only used in one file should be static. Found by running make namespacecheck. Compile tested only. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florent Fourcot authored
This patch follows commit b903d324 ("ipv6: tcp: fix TCLASS value in ACK messages sent from TIME_WAIT"). For the same reason as tclass, we have to store the flow label in the inet_timewait_sock to provide a consistent flow label on the last ACK. Signed-off-by: Florent Fourcot <florent.fourcot@enst-bretagne.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next

John W. Linville says:
====================
Please pull this batch of updates for the 3.14 stream!

For the mac80211 bits, Johannes says: "This time I have uAPSD fixes since I was working on that, hwsim improvements to make dynamic radios possible for the test suite, the evidently long-overdue channel_change_time removal and a few other small collected fixes and improvements."

For the iwlwifi bits, Emmanuel says: "Besides a few trivial patches, I have an important workaround for a HW issue that has kept me busy for a long time. Along with it, a fix that prevents an error from being printed. Eyal fixes our behavior against SISO APs and Ilan fixes an issue with multiple-interface scenarios. Eliad fixes an error path in our init flow. We also have a few 'static analyzer' fixes."

For the NFC bits, Samuel says: "It includes: * A new NFC driver for Marvell's 8897, and a few NCI fixes and improvements needed to support this chipset. * An LLCP fix for how we were setting the default MIU on a p2p link. If there is no explicit MIU extension announced at connection time, we must use the default one and not the one announced at LLCP link establishment time. * A pn544 EEPROM config update. Some of the currently EEPROM-configured values were overwriting the firmware ones while others should not be set by the driver itself. * Some NFC digital stack fixes and improvements. Asynchronous functions are better documented, RF technologies and CRC functions are set upon PSL_REQ reception, and a few minor bugs are fixed. * Minor and miscellaneous pn533, mei_phy and port100 fixes."

For the ath bits, Kalle says: "Janusz added a Kconfig option for DFS. The DFS code was there already, but after fixes to mac80211 we can now enable it. Bartosz added a runtime firmware feature flag to disable P2P; our 10.1 firmware branch doesn't support P2P and ath10k can now disable it. He also added a limit on how many clients can connect to an ath10k AP. Michal fixed WEP shared authentication, in case someone still uses it. And I added a firmware debug log to help the firmware engineers."

Along with that is a small batch of ath9k updates and a few other bits here and there.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jan, 2014 26 commits
-
John W. Linville authored
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next into for-davem
-
David S. Miller authored
Michael Dalton says: ==================== virtio-net: mergeable rx buffer size auto-tuning The virtio-net device currently uses aligned MTU-sized mergeable receive packet buffers. Network throughput for workloads with large average packet size can be improved by posting larger receive packet buffers. However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE) buffers reduces the throughput of workloads that do not benefit from GRO and have no large inbound packets. This patchset introduces virtio-net mergeable buffer size auto-tuning, with buffer sizes ranging from aligned MTU-size to PAGE_SIZE. Packet buffer size is chosen based on a per-receive queue EWMA of incoming packet size. To unify mergeable receive buffer memory allocation and improve SKB frag coalescing, all mergeable buffer memory allocation is migrated to per-receive queue page frag allocators. The per-receive queue mergeable packet buffer size is exported via sysfs, and the network device sysfs layer has been extended to add support for device-specific per-receive queue sysfs attribute groups. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Dalton authored
Add initial support for per-rx queue sysfs attributes to virtio-net. If mergeable packet buffers are enabled, adds a read-only mergeable packet buffer size sysfs attribute for each RX queue. Suggested-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Dalton authored
To ensure ewma_read() without a lock returns a valid but possibly out of date average, modify ewma_add() by using ACCESS_ONCE to prevent intermediate wrong values from being written to avg->internal. Suggested-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
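The change amounts to something like the following (simplified from lib/average.c; treat it as a sketch rather than the exact committed code):

struct ewma *ewma_add(struct ewma *avg, unsigned long val)
{
	unsigned long internal = ACCESS_ONCE(avg->internal);

	/* compute the new average locally, then publish it with a single store
	 * so a lockless ewma_read() never sees a half-updated value
	 */
	ACCESS_ONCE(avg->internal) = internal ?
		(((internal << avg->weight) - internal) +
			(val << avg->factor)) >> avg->weight :
		(val << avg->factor);
	return avg;
}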
-
Michael Dalton authored
Extend existing support for netdevice receive queue sysfs attributes to permit a device-specific attribute group. Initial use case for this support will be to allow the virtio-net device to export per-receive queue mergeable receive buffer size. Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Dalton authored
Commit 2613af0e ("virtio_net: migrate mergeable rx buffers to page frag allocators") changed the mergeable receive buffer size from PAGE_SIZE to MTU-size, introducing a single-stream regression for benchmarks with large average packet size. There is no single optimal buffer size for all workloads. For workloads with packet size <= MTU bytes, MTU + virtio-net header-sized buffers are preferred, as larger buffers reduce the TCP window due to SKB truesize. However, single-stream workloads with large average packet sizes have higher throughput if larger (e.g., PAGE_SIZE) buffers are used. This commit auto-tunes the mergeable receive buffer packet size by choosing the buffer size based on an EWMA of the recent packet sizes for the receive queue. Packet buffer sizes range from MTU_SIZE + virtio-net header len to PAGE_SIZE. This improves throughput for large packet workloads, as any workload with average packet size >= PAGE_SIZE will use PAGE_SIZE buffers. These optimizations interact positively with recent commit ba275241 ("virtio-net: coalesce rx frags when possible during rx"), which coalesces adjacent RX SKB fragments in virtio_net. The coalescing optimizations benefit buffers of any size. Benchmarks are taken from an average of 5 netperf 30-second TCP_STREAM runs between two QEMU VMs on a single physical machine. Each VM has two VCPUs with all offloads & vhost enabled. All VMs and vhost threads run in a single 4-CPU cgroup cpuset, using cgroups to ensure that other processes in the system will not be scheduled on the benchmark CPUs. Trunk includes SKB rx frag coalescing.

net-next w/ virtio_net before 2613af0e (PAGE_SIZE bufs): 14642.85Gb/s
net-next (MTU-size bufs): 13170.01Gb/s
net-next + auto-tune: 14555.94Gb/s

Jason Wang also reported a throughput increase on mlx4 from 22Gb/s using MTU-sized buffers to about 26Gb/s using auto-tuning. Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
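A sketch of the size choice described above, assuming the receive queue keeps a struct ewma of recent packet lengths; GOOD_PACKET_LEN stands in for the MTU-sized lower bound and the function name is illustrative:

static unsigned int mergeable_buf_len_sketch(struct ewma *avg_pkt_len,
					     unsigned int hdr_len)
{
	unsigned int len;

	/* clamp the averaged packet length between MTU-sized and PAGE_SIZE buffers */
	len = hdr_len + clamp_t(unsigned int, ewma_read(avg_pkt_len),
				GOOD_PACKET_LEN, PAGE_SIZE - hdr_len);

	return ALIGN(len, L1_CACHE_BYTES);
}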
-
Michael Dalton authored
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC mergeable rx buffer allocations. This commit migrates virtio-net to use per-receive queue page frags for GFP_ATOMIC allocation. This change unifies mergeable rx buffer memory allocation, which now uses skb_page_frag_refill() for both atomic and GFP_WAIT buffer allocations. To address fragmentation concerns, if after buffer allocation there is too little space left in the page frag to allocate a subsequent buffer, the remaining space is added to the current allocated buffer so that it can be used to store packet data. Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Dalton authored
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation strategy employed by netdev_alloc_frag, which attempts higher-order page allocations whether or not GFP_WAIT is set, falling back to successively lower-order page allocations on failure. Part of migration of virtio-net to per-receive queue page frag allocators. Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Michael Dalton <mwdalton@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
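The allocation strategy reads roughly like this sketch; the order constant and function name are hypothetical, and the real helper also fills a struct page_frag and handles refcounting:

#define FRAG_PAGE_MAX_ORDER	3	/* hypothetical upper bound */

static struct page *frag_alloc_pages_sketch(gfp_t gfp, unsigned int *order)
{
	struct page *page;
	int o;

	/* try higher orders first, regardless of whether __GFP_WAIT is set */
	for (o = FRAG_PAGE_MAX_ORDER; o > 0; o--) {
		page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY, o);
		if (page) {
			*order = o;
			return page;
		}
	}

	/* final fallback: a plain order-0 allocation */
	*order = 0;
	return alloc_pages(gfp, 0);
}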
-
Wei Yongjun authored
The error code was not set if changing the indev fails, so the error condition wasn't reflected in the return value. Fix this to return a negative error code from that error-handling path instead of 0. Fixes: 2519a602 ('net_sched: optimize tcf_match_indev()') Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ying Xue says: ==================== tipc: align TIPC behaviours of waiting for events with other stacks Comparing the current implementations of waiting for events in the TIPC socket layer with other stacks, TIPC's behaviour is very different: wait_event_interruptible_timeout()/wait_event_interruptible() are always used to wait for events, with relevant socket or port variables fed to them as arguments. As the socket lock has to be released temporarily before these wait routines are called, the arguments associated with socket or port structures sit outside socket lock protection. This can cause serious issues: a process calling a socket syscall such as sendmsg(), connect(), accept() or recvmsg() may not be woken up at all even when the proper event arrives, or may be woken up improperly although the condition for waking it is not satisfied in practice. Therefore, aligning these behaviours with similar functions implemented in other stacks, for instance sk_stream_wait_connect() and inet_csk_wait_for_connect(), avoids the above risks for us. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
Standardize the behaviour of waiting for events in TIPC recvmsg() so that all variables of socket or port structures are protected within the socket lock, allowing the process calling recvmsg() to be woken up at the appropriate time. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
Standardize the behaviour of waiting for events in TIPC send_packet() so that all variables of socket or port structures are protected within the socket lock, allowing the process calling sendmsg() to be woken up at the appropriate time. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
Comparing the behaviour of how to wait for events in TIPC sendmsg() with other stacks, the TIPC implementation might be perceived as different, and sometimes even incorrect. For instance, sk_sleep() and the tport->congested variable associated with the socket are accessed by wait_event_interruptible_timeout() without socket lock protection. Standardizing this on the similar implementations in other stacks helps us correct these errors, where the process calling sendmsg() cannot be woken up even if an expected event arrives at the socket, or is improperly woken up although the wake condition does not match. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
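The standardized pattern looks roughly like the sketch below, modelled on sk_stream_wait_connect(); the tport->congested test stands in for whichever condition sendmsg() waits on, and is re-evaluated by sk_wait_event() with the socket lock held:

static int tipc_wait_for_sndmsg_sketch(struct sock *sk,
				       struct tipc_port *tport, long *timeo_p)
{
	DEFINE_WAIT(wait);
	int done;

	do {
		if (!*timeo_p)
			return -EAGAIN;
		if (signal_pending(current))
			return sock_intr_errno(*timeo_p);

		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
		/* sk_wait_event() releases and re-takes the socket lock around the sleep */
		done = sk_wait_event(sk, timeo_p, !tport->congested);
		finish_wait(sk_sleep(sk), &wait);
	} while (!done);

	return 0;
}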
-
Ying Xue authored
Comparing the behaviour of how to wait for events in TIPC accept() with other stacks, the TIPC implementation might be perceived as different, and sometimes even incorrect. As sk_sleep() and the sk->sk_receive_queue variable associated with the socket are not protected by the socket lock, the process calling accept() may be woken up improperly, or sometimes cannot be woken up at all. After standardizing it on the inet_csk_wait_for_connect() routine, we gain several benefits: the 'thundering herd' phenomenon is avoided, accept() gets a timeout mechanism and copes with a pending signal, and sk_sleep() and sk->sk_receive_queue are always protected within the socket lock scope. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
Comparing the behaviour of how to wait for events in TIPC connect() with other stacks, the TIPC implementation might be perceived as different, and sometimes even incorrect. For instance, both sock->state and sk_sleep() are fed directly to wait_event_interruptible_timeout() as its arguments, and the socket lock has to be released before we call wait_event_interruptible_timeout(), so the two variables associated with the socket are exposed outside socket lock protection and may hold stale values. As a result, the process calling connect() may not be woken up even when the correct event arrives, or may be woken up improperly although the wake condition is not satisfied in practice. Therefore, standardizing its behaviour on the sk_stream_wait_connect() routine avoids these risks. Additionally, the implementation of the connect routine is simplified as a whole, allowing it to return correct values in all the different cases. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
wangweidong authored
When we take the right path, the status is already 0, so there is no need to assign it again. Just remove the assignment. Signed-off-by: Wang Weidong <wangweidong1@huawei.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jason Wang authored
It looks like there's no need for those two fields:

- Unless there is a failure on the first refill try, rq->max should always be equal to the vring size.
- rq->num is only used to determine whether we need to do a refill; we could check vq->num_free instead.
- rq->num had to be increased or decreased explicitly after each get/put, which results in a bad API.

So this patch removes them both to make the code simpler. Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lad, Prabhakar authored
This patch fixes the following sparse warning: davinci_mdio.c:85:27: warning: symbol 'default_pdata' was not declared. Should it be static? It also makes default_pdata constant. Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com> Acked-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
Currently, if a slave's name changes, we just pass it by. However, if the slave is the current primary_slave, we end up using a slave whose name != params.primary as the primary_slave. And vice versa, if we don't have a primary_slave but params.primary is set, we will not detect a new primary_slave. Fix this by catching the NETDEV_CHANGENAME event and setting primary_slave accordingly. Also, if the primary_slave was changed, issue a reselection of the active slave, because the priorities have changed. Reported-by: Ding Tianhong <dingtianhong@huawei.com> CC: Ding Tianhong <dingtianhong@huawei.com> CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Acked-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
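A hedged sketch of the NETDEV_CHANGENAME handling described above; locking is omitted and the function name is illustrative, not the committed bonding code:

static void bond_primary_name_changed_sketch(struct bonding *bond,
					     struct slave *slave)
{
	if (!bond->params.primary[0])
		return;

	if (slave == bond->primary_slave) {
		/* the current primary no longer matches the configured name */
		if (strcmp(slave->dev->name, bond->params.primary))
			bond->primary_slave = NULL;
	} else if (!strcmp(slave->dev->name, bond->params.primary)) {
		/* this slave just took the configured primary name */
		bond->primary_slave = slave;
	}

	bond_select_active_slave(bond);	/* priorities may have changed */
}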
-
WANG Cong authored
In tcf_register_action() we check either ->type or ->kind to see if there is an existing action registered, but the ipt action registers two actions with the same type but different kinds. They should have different types too. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge git://git.open-mesh.org/linux-merge
Included change:
- properly format already existing kerneldoc
Signed-off-by: David S. Miller <davem@davemloft.net>
-
WANG Cong authored
Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Refinements to cloud support in the Firmware API. Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com> Signed-off-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Check that the descriptors were allocated before trying to dump them to the logfile. While we're there, de-trick-ify the code so it is easier to read and no longer abuses the types and unions. Change-ID: I22898f4b22cecda3582d4d9e4018da9cd540f177 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com> Signed-off-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
Currently it catches the NETDEV_CHANGEMTU notification, which is signaled after the actual change has happened on the device, and returns NOTIFY_BAD so that the change on the device is reverted. This can be quite costly and messy, so use the new NETDEV_PRECHANGEMTU instead to catch the MTU change before the actual change happens and signal that it is forbidden. CC: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
Currently, if a device changes its MTU, first the change happens (involving all the side effects), and only after that is NETDEV_CHANGEMTU sent so that other devices can catch up with the new MTU. However, if any of them returns NOTIFY_BAD, the change is reverted and an error returned. This can be a really long and costly operation. To fix this, add a NETDEV_PRECHANGEMTU notification which is called prior to any change actually happening; if any callee returns NOTIFY_BAD, the change is aborted. This way we skip all the playing with applying and then reverting the MTU. CC: "David S. Miller" <davem@davemloft.net> CC: Jiri Pirko <jiri@resnulli.us> CC: Eric Dumazet <edumazet@google.com> CC: Nicolas Dichtel <nicolas.dichtel@6wind.com> CC: Cong Wang <amwang@redhat.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
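The resulting flow would look roughly like this sketch; it is simplified (the real dev_set_mtu() also invokes the driver's ndo_change_mtu and handles its errors):

int dev_set_mtu_sketch(struct net_device *dev, int new_mtu)
{
	int err;

	/* ask listeners up front; nothing has been touched yet, so no revert is needed */
	err = call_netdevice_notifiers(NETDEV_PRECHANGEMTU, dev);
	err = notifier_to_errno(err);
	if (err)
		return err;

	dev->mtu = new_mtu;

	/* the usual post-change notification still goes out */
	call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
	return 0;
}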
-