- 01 Sep, 2016 8 commits
-
Robert Foss authored
From: Grant Grundler <grundler@chromium.org> The mii_nway_restart() causes a PHY link change activity and ax88772_link_reset will be called. link_reset will set the AX_CMD_WRITE_MEDIUM_MODE register correctly. The asix_write_medium_mode in reset() fills in a default value to the register which may be different from the negotiation result, so do this first. Ignore the ret value since it's ignored in XXX_link_reset() functions. Signed-off-by: Grant Grundler <grundler@google.com> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Robert Foss authored
From: Grant Grundler <grundler@chromium.org> https://lkml.org/lkml/2014/11/11/947 Ben Hutchings is correct. IEEE 802.3 spec section "22.2.4.1.1 Reset" requires up to 500ms delay. Mitigate the "max" delay by polling the phy until the BMCR_RESET bit is clear. Signed-off-by: Grant Grundler <grundler@chromium.org> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
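A minimal sketch of the polling approach described above, assuming the driver's asix_mdio_read()/asix_mdio_write() helpers; the function name and the split of the 500 ms budget into 10 ms steps are illustrative, not the actual driver code.

/* Illustrative only: poll the PHY until the BMCR reset bit self-clears,
 * bounded by the 500ms maximum allowed by IEEE 802.3 22.2.4.1.1. */
static int example_phy_reset_poll(struct usbnet *dev, int phy_id)
{
	int i, bmcr;

	asix_mdio_write(dev->net, phy_id, MII_BMCR, BMCR_RESET);

	for (i = 0; i < 50; i++) {	/* 50 * 10ms = 500ms budget */
		bmcr = asix_mdio_read(dev->net, phy_id, MII_BMCR);
		if (bmcr >= 0 && !(bmcr & BMCR_RESET))
			return 0;	/* reset completed early, stop waiting */
		msleep(10);
	}
	return -ETIMEDOUT;
}

In the common case the bit clears long before the budget is exhausted, so the fixed worst-case sleep is avoided.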
-
Robert Foss authored
From: Allan Chou <allan@asix.com.tw>

The change fixes AX88772x resume failure by:
- Restore incorrect AX88772A PHY registers when resetting
- Need to stop MAC operation when suspending
- Need to restart MII when restoring PHY

Signed-off-by: Allan Chou <allan@asix.com.tw>
Signed-off-by: Robert Foss <robert.foss@collabora.com>
Tested-by: Robert Foss <robert.foss@collabora.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Robert Foss authored
From: Vincent Palatin <vpalatin@chromium.org> Check the answers from the USB stack and avoid re-sending multiple times the request if the device has disappeared. Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Robert Foss authored
From: Freddy Xin <freddy@asix.com.tw> In order to R/W registers in suspend/resume functions, in_pm flags are added to some functions to determine whether the nopm version of usb functions is called. Save BMCR and ANAR PHY registers in suspend function and restore them in resume function. Reset HW in resume function to ensure the PHY works correctly. Signed-off-by: Freddy Xin <freddy@asix.com.tw> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
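Roughly, the save/restore of the PHY state described above could look like the sketch below; the structure, field names, and the use of the regular (non-nopm) MDIO helpers are assumptions for illustration, not the driver's exact code, which uses in_pm-aware accessors.

/* Sketch: preserve PHY state across suspend so resume can restore it
 * after the HW reset mentioned in the change log. */
struct example_pm_state {
	u16 saved_bmcr;		/* basic mode control register */
	u16 saved_anar;		/* auto-negotiation advertisement */
};

static void example_save_phy_state(struct usbnet *dev, struct example_pm_state *st)
{
	st->saved_bmcr = asix_mdio_read(dev->net, dev->mii.phy_id, MII_BMCR);
	st->saved_anar = asix_mdio_read(dev->net, dev->mii.phy_id, MII_ADVERTISE);
}

static void example_restore_phy_state(struct usbnet *dev, struct example_pm_state *st)
{
	asix_mdio_write(dev->net, dev->mii.phy_id, MII_BMCR, st->saved_bmcr);
	asix_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, st->saved_anar);
	mii_nway_restart(&dev->mii);	/* restart MII, as the change log notes */
}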
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
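In plain C, the kind of change the semantic patch produces looks like the following; the driver and function names are hypothetical.

/* Before this change the structure was declared without const:
 *     static struct ethtool_ops foo_ethtool_ops = { ... };
 * Since it is only ever assigned to the const ethtool_ops pointer of a
 * net_device, it can safely be made const: */
static const struct ethtool_ops foo_ethtool_ops = {
	.get_link = ethtool_op_get_link,
};

static void foo_setup_ethtool(struct net_device *dev)
{
	dev->ethtool_ops = &foo_ethtool_ops;	/* the field is already const-qualified */
}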
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 Aug, 2016 32 commits
-
David S. Miller authored
Guillaume Nault says:

====================
ppp: fix deadlock upon recursive xmit

This series fixes the issue reported by Feng where packets looping through a ppp device make the module deadlock: https://marc.info/?l=linux-netdev&m=147134567319038&w=2

The problem can occur on virtual interfaces (e.g. PPP over L2TP, or PPPoE on vxlan devices), when a PPP packet is routed back to the PPP interface. PPP's xmit path isn't reentrant, so patch #1 uses a per-cpu variable to detect and break recursion. Patch #2 sets the NETIF_F_LLTX flag to avoid lock inversion issues between ppp and txqueue locks.

There are multiple entry points to the PPP xmit path. This series has been tested with lockdep and should address recursion issues no matter how the packet entered the path.

A similar issue in L2TP is not covered by this series: l2tp_xmit_skb() also isn't reentrant, and it can be called as part of PPP's xmit path (pppol2tp_xmit()), or directly from the L2TP socket (l2tp_ppp_sendmsg()). If a packet is sent by l2tp_ppp_sendmsg() and routed to the parent PPP interface, then it's going to hit l2tp_xmit_skb() again. Breaking recursion as done in ppp_generic is not enough, because we'd still have a lock inversion issue (locking in l2tp_xmit_skb() can happen before or after locking in ppp_generic). The best approach would be to use the ip_tunnel functions and remove the socket locking in l2tp_xmit_skb(). But that'd be something for net-next.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guillaume Nault authored
ppp_xmit_process() already locks the xmit path. If HARD_TX_LOCK() tries to hold the _xmit_lock we can get lock inversion. [ 973.726130] ====================================================== [ 973.727311] [ INFO: possible circular locking dependency detected ] [ 973.728546] 4.8.0-rc2 #1 Tainted: G O [ 973.728986] ------------------------------------------------------- [ 973.728986] accel-pppd/1806 is trying to acquire lock: [ 973.728986] (&qdisc_xmit_lock_key){+.-...}, at: [<ffffffff8146f6fe>] sch_direct_xmit+0x8d/0x221 [ 973.728986] [ 973.728986] but task is already holding lock: [ 973.728986] (l2tp_sock){+.-...}, at: [<ffffffffa0202c4a>] l2tp_xmit_skb+0x1e8/0x5d7 [l2tp_core] [ 973.728986] [ 973.728986] which lock already depends on the new lock. [ 973.728986] [ 973.728986] [ 973.728986] the existing dependency chain (in reverse order) is: [ 973.728986] -> #3 (l2tp_sock){+.-...}: [ 973.728986] [<ffffffff810b3130>] lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff815752f4>] _raw_spin_lock+0x2d/0x3c [ 973.728986] [<ffffffffa0202c4a>] l2tp_xmit_skb+0x1e8/0x5d7 [l2tp_core] [ 973.728986] [<ffffffffa01b2466>] pppol2tp_xmit+0x1f2/0x25e [l2tp_ppp] [ 973.728986] [<ffffffffa0184f59>] ppp_channel_push+0xb5/0x14a [ppp_generic] [ 973.728986] [<ffffffffa01853ed>] ppp_write+0x104/0x11c [ppp_generic] [ 973.728986] [<ffffffff811b2ec6>] __vfs_write+0x56/0x120 [ 973.728986] [<ffffffff811b3f4c>] vfs_write+0xbd/0x11b [ 973.728986] [<ffffffff811b4cb2>] SyS_write+0x5e/0x96 [ 973.728986] [<ffffffff81575ba5>] entry_SYSCALL_64_fastpath+0x18/0xa8 [ 973.728986] -> #2 (&(&pch->downl)->rlock){+.-...}: [ 973.728986] [<ffffffff810b3130>] lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff81575334>] _raw_spin_lock_bh+0x31/0x40 [ 973.728986] [<ffffffffa01808e2>] ppp_push+0xa7/0x82d [ppp_generic] [ 973.728986] [<ffffffffa0184675>] __ppp_xmit_process+0x48/0x877 [ppp_generic] [ 973.728986] [<ffffffffa018505b>] ppp_xmit_process+0x4b/0xaf [ppp_generic] [ 973.728986] [<ffffffffa01853f7>] ppp_write+0x10e/0x11c [ppp_generic] [ 973.728986] [<ffffffff811b2ec6>] __vfs_write+0x56/0x120 [ 973.728986] [<ffffffff811b3f4c>] vfs_write+0xbd/0x11b [ 973.728986] [<ffffffff811b4cb2>] SyS_write+0x5e/0x96 [ 973.728986] [<ffffffff81575ba5>] entry_SYSCALL_64_fastpath+0x18/0xa8 [ 973.728986] -> #1 (&(&ppp->wlock)->rlock){+.-...}: [ 973.728986] [<ffffffff810b3130>] lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff81575334>] _raw_spin_lock_bh+0x31/0x40 [ 973.728986] [<ffffffffa0184654>] __ppp_xmit_process+0x27/0x877 [ppp_generic] [ 973.728986] [<ffffffffa018505b>] ppp_xmit_process+0x4b/0xaf [ppp_generic] [ 973.728986] [<ffffffffa01852da>] ppp_start_xmit+0x21b/0x22a [ppp_generic] [ 973.728986] [<ffffffff8143f767>] dev_hard_start_xmit+0x1a9/0x43d [ 973.728986] [<ffffffff8146f747>] sch_direct_xmit+0xd6/0x221 [ 973.728986] [<ffffffff814401e4>] __dev_queue_xmit+0x62a/0x912 [ 973.728986] [<ffffffff814404d7>] dev_queue_xmit+0xb/0xd [ 973.728986] [<ffffffff81449978>] neigh_direct_output+0xc/0xe [ 973.728986] [<ffffffff8150e62b>] ip6_finish_output2+0x5a9/0x623 [ 973.728986] [<ffffffff81512128>] ip6_output+0x15e/0x16a [ 973.728986] [<ffffffff8153ef86>] dst_output+0x76/0x7f [ 973.728986] [<ffffffff8153f737>] mld_sendpack+0x335/0x404 [ 973.728986] [<ffffffff81541c61>] mld_send_initial_cr.part.21+0x99/0xa2 [ 973.728986] [<ffffffff8154441d>] ipv6_mc_dad_complete+0x42/0x71 [ 973.728986] [<ffffffff8151c4bd>] addrconf_dad_completed+0x1cf/0x2ea [ 973.728986] [<ffffffff8151e4fa>] addrconf_dad_work+0x453/0x520 [ 973.728986] [<ffffffff8107a393>] 
process_one_work+0x365/0x6f0 [ 973.728986] [<ffffffff8107aecd>] worker_thread+0x2de/0x421 [ 973.728986] [<ffffffff810816fb>] kthread+0x121/0x130 [ 973.728986] [<ffffffff81575dbf>] ret_from_fork+0x1f/0x40 [ 973.728986] -> #0 (&qdisc_xmit_lock_key){+.-...}: [ 973.728986] [<ffffffff810b28d6>] __lock_acquire+0x1118/0x1483 [ 973.728986] [<ffffffff810b3130>] lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff815752f4>] _raw_spin_lock+0x2d/0x3c [ 973.728986] [<ffffffff8146f6fe>] sch_direct_xmit+0x8d/0x221 [ 973.728986] [<ffffffff814401e4>] __dev_queue_xmit+0x62a/0x912 [ 973.728986] [<ffffffff814404d7>] dev_queue_xmit+0xb/0xd [ 973.728986] [<ffffffff81449978>] neigh_direct_output+0xc/0xe [ 973.728986] [<ffffffff81487811>] ip_finish_output2+0x5db/0x609 [ 973.728986] [<ffffffff81489590>] ip_finish_output+0x152/0x15e [ 973.728986] [<ffffffff8148a0d4>] ip_output+0x8c/0x96 [ 973.728986] [<ffffffff81489652>] ip_local_out+0x41/0x4a [ 973.728986] [<ffffffff81489e7d>] ip_queue_xmit+0x5a5/0x609 [ 973.728986] [<ffffffffa0202fe4>] l2tp_xmit_skb+0x582/0x5d7 [l2tp_core] [ 973.728986] [<ffffffffa01b2466>] pppol2tp_xmit+0x1f2/0x25e [l2tp_ppp] [ 973.728986] [<ffffffffa0184f59>] ppp_channel_push+0xb5/0x14a [ppp_generic] [ 973.728986] [<ffffffffa01853ed>] ppp_write+0x104/0x11c [ppp_generic] [ 973.728986] [<ffffffff811b2ec6>] __vfs_write+0x56/0x120 [ 973.728986] [<ffffffff811b3f4c>] vfs_write+0xbd/0x11b [ 973.728986] [<ffffffff811b4cb2>] SyS_write+0x5e/0x96 [ 973.728986] [<ffffffff81575ba5>] entry_SYSCALL_64_fastpath+0x18/0xa8 [ 973.728986] [ 973.728986] other info that might help us debug this: [ 973.728986] [ 973.728986] Chain exists of: &qdisc_xmit_lock_key --> &(&pch->downl)->rlock --> l2tp_sock [ 973.728986] Possible unsafe locking scenario: [ 973.728986] [ 973.728986] CPU0 CPU1 [ 973.728986] ---- ---- [ 973.728986] lock(l2tp_sock); [ 973.728986] lock(&(&pch->downl)->rlock); [ 973.728986] lock(l2tp_sock); [ 973.728986] lock(&qdisc_xmit_lock_key); [ 973.728986] [ 973.728986] *** DEADLOCK *** [ 973.728986] [ 973.728986] 6 locks held by accel-pppd/1806: [ 973.728986] #0: (&(&pch->downl)->rlock){+.-...}, at: [<ffffffffa0184efa>] ppp_channel_push+0x56/0x14a [ppp_generic] [ 973.728986] #1: (l2tp_sock){+.-...}, at: [<ffffffffa0202c4a>] l2tp_xmit_skb+0x1e8/0x5d7 [l2tp_core] [ 973.728986] #2: (rcu_read_lock){......}, at: [<ffffffff81486981>] rcu_lock_acquire+0x0/0x20 [ 973.728986] #3: (rcu_read_lock_bh){......}, at: [<ffffffff81486981>] rcu_lock_acquire+0x0/0x20 [ 973.728986] #4: (rcu_read_lock_bh){......}, at: [<ffffffff814340e3>] rcu_lock_acquire+0x0/0x20 [ 973.728986] #5: (dev->qdisc_running_key ?: &qdisc_running_key#2){+.....}, at: [<ffffffff8144011e>] __dev_queue_xmit+0x564/0x912 [ 973.728986] [ 973.728986] stack backtrace: [ 973.728986] CPU: 2 PID: 1806 Comm: accel-pppd Tainted: G O 4.8.0-rc2 #1 [ 973.728986] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014 [ 973.728986] ffff7fffffffffff ffff88003436f850 ffffffff812a20f4 ffffffff82156e30 [ 973.728986] ffffffff82156920 ffff88003436f890 ffffffff8115c759 ffff88003344ae00 [ 973.728986] ffff88003344b5c0 0000000000000002 0000000000000006 ffff88003344b5e8 [ 973.728986] Call Trace: [ 973.728986] [<ffffffff812a20f4>] dump_stack+0x67/0x90 [ 973.728986] [<ffffffff8115c759>] print_circular_bug+0x22e/0x23c [ 973.728986] [<ffffffff810b28d6>] __lock_acquire+0x1118/0x1483 [ 973.728986] [<ffffffff810b3130>] lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff810b3130>] ? lock_acquire+0x150/0x217 [ 973.728986] [<ffffffff8146f6fe>] ? 
sch_direct_xmit+0x8d/0x221 [ 973.728986] [<ffffffff815752f4>] _raw_spin_lock+0x2d/0x3c [ 973.728986] [<ffffffff8146f6fe>] ? sch_direct_xmit+0x8d/0x221 [ 973.728986] [<ffffffff8146f6fe>] sch_direct_xmit+0x8d/0x221 [ 973.728986] [<ffffffff814401e4>] __dev_queue_xmit+0x62a/0x912 [ 973.728986] [<ffffffff814404d7>] dev_queue_xmit+0xb/0xd [ 973.728986] [<ffffffff81449978>] neigh_direct_output+0xc/0xe [ 973.728986] [<ffffffff81487811>] ip_finish_output2+0x5db/0x609 [ 973.728986] [<ffffffff81486853>] ? dst_mtu+0x29/0x2e [ 973.728986] [<ffffffff81489590>] ip_finish_output+0x152/0x15e [ 973.728986] [<ffffffff8148a0bc>] ? ip_output+0x74/0x96 [ 973.728986] [<ffffffff8148a0d4>] ip_output+0x8c/0x96 [ 973.728986] [<ffffffff81489652>] ip_local_out+0x41/0x4a [ 973.728986] [<ffffffff81489e7d>] ip_queue_xmit+0x5a5/0x609 [ 973.728986] [<ffffffff814c559e>] ? udp_set_csum+0x207/0x21e [ 973.728986] [<ffffffffa0202fe4>] l2tp_xmit_skb+0x582/0x5d7 [l2tp_core] [ 973.728986] [<ffffffffa01b2466>] pppol2tp_xmit+0x1f2/0x25e [l2tp_ppp] [ 973.728986] [<ffffffffa0184f59>] ppp_channel_push+0xb5/0x14a [ppp_generic] [ 973.728986] [<ffffffffa01853ed>] ppp_write+0x104/0x11c [ppp_generic] [ 973.728986] [<ffffffff811b2ec6>] __vfs_write+0x56/0x120 [ 973.728986] [<ffffffff8124c11d>] ? fsnotify_perm+0x27/0x95 [ 973.728986] [<ffffffff8124d41d>] ? security_file_permission+0x4d/0x54 [ 973.728986] [<ffffffff811b3f4c>] vfs_write+0xbd/0x11b [ 973.728986] [<ffffffff811b4cb2>] SyS_write+0x5e/0x96 [ 973.728986] [<ffffffff81575ba5>] entry_SYSCALL_64_fastpath+0x18/0xa8 [ 973.728986] [<ffffffff810ae0fa>] ? trace_hardirqs_off_caller+0x121/0x12f Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
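The change this report motivates boils down to marking the PPP net_device as lockless-TX, which the series cover letter names explicitly (NETIF_F_LLTX); the setup helper below is illustrative, not the exact ppp_generic.c code.

/* Hypothetical setup helper: PPP already serializes its xmit path with
 * ppp->wlock, so declaring the device lockless-TX keeps the core from
 * taking the per-queue _xmit_lock around ndo_start_xmit(), avoiding the
 * inversion with the l2tp socket lock shown in the lockdep report above. */
static void example_ppp_dev_setup(struct net_device *dev)
{
	dev->features |= NETIF_F_LLTX;
}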
-
Guillaume Nault authored
In case of misconfiguration, a virtual PPP channel might send packets back to their parent PPP interface. This typically happens in misconfigured L2TP setups, where PPP's peer IP address is set with the IP of the L2TP peer. When that happens the system hangs due to PPP trying to recursively lock its xmit path. [ 243.332155] BUG: spinlock recursion on CPU#1, accel-pppd/926 [ 243.333272] lock: 0xffff880033d90f18, .magic: dead4ead, .owner: accel-pppd/926, .owner_cpu: 1 [ 243.334859] CPU: 1 PID: 926 Comm: accel-pppd Not tainted 4.8.0-rc2 #1 [ 243.336010] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014 [ 243.336018] ffff7fffffffffff ffff8800319a77a0 ffffffff8128de85 ffff880033d90f18 [ 243.336018] ffff880033ad8000 ffff8800319a77d8 ffffffff810ad7c0 ffffffff0000039e [ 243.336018] ffff880033d90f18 ffff880033d90f60 ffff880033d90f18 ffff880033d90f28 [ 243.336018] Call Trace: [ 243.336018] [<ffffffff8128de85>] dump_stack+0x4f/0x65 [ 243.336018] [<ffffffff810ad7c0>] spin_dump+0xe1/0xeb [ 243.336018] [<ffffffff810ad7f0>] spin_bug+0x26/0x28 [ 243.336018] [<ffffffff810ad8b9>] do_raw_spin_lock+0x5c/0x160 [ 243.336018] [<ffffffff815522aa>] _raw_spin_lock_bh+0x35/0x3c [ 243.336018] [<ffffffffa01a88e2>] ? ppp_push+0xa7/0x82d [ppp_generic] [ 243.336018] [<ffffffffa01a88e2>] ppp_push+0xa7/0x82d [ppp_generic] [ 243.336018] [<ffffffff810adada>] ? do_raw_spin_unlock+0xc2/0xcc [ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7 [ 243.336018] [<ffffffff81552438>] ? _raw_spin_unlock_irqrestore+0x34/0x49 [ 243.336018] [<ffffffffa01ac657>] ppp_xmit_process+0x48/0x877 [ppp_generic] [ 243.336018] [<ffffffff81084962>] ? preempt_count_sub+0x13/0xc7 [ 243.336018] [<ffffffff81408cd3>] ? skb_queue_tail+0x71/0x7c [ 243.336018] [<ffffffffa01ad1c5>] ppp_start_xmit+0x21b/0x22a [ppp_generic] [ 243.336018] [<ffffffff81426af1>] dev_hard_start_xmit+0x15e/0x32c [ 243.336018] [<ffffffff81454ed7>] sch_direct_xmit+0xd6/0x221 [ 243.336018] [<ffffffff814273a8>] __dev_queue_xmit+0x52a/0x820 [ 243.336018] [<ffffffff814276a9>] dev_queue_xmit+0xb/0xd [ 243.336018] [<ffffffff81430a3c>] neigh_direct_output+0xc/0xe [ 243.336018] [<ffffffff8146b5d7>] ip_finish_output2+0x4d2/0x548 [ 243.336018] [<ffffffff8146a8e6>] ? dst_mtu+0x29/0x2e [ 243.336018] [<ffffffff8146d49c>] ip_finish_output+0x152/0x15e [ 243.336018] [<ffffffff8146df84>] ? ip_output+0x74/0x96 [ 243.336018] [<ffffffff8146df9c>] ip_output+0x8c/0x96 [ 243.336018] [<ffffffff8146d55e>] ip_local_out+0x41/0x4a [ 243.336018] [<ffffffff8146dd15>] ip_queue_xmit+0x531/0x5c5 [ 243.336018] [<ffffffff814a82cd>] ? udp_set_csum+0x207/0x21e [ 243.336018] [<ffffffffa01f2f04>] l2tp_xmit_skb+0x582/0x5d7 [l2tp_core] [ 243.336018] [<ffffffffa01ea458>] pppol2tp_xmit+0x1eb/0x257 [l2tp_ppp] [ 243.336018] [<ffffffffa01acf17>] ppp_channel_push+0x91/0x102 [ppp_generic] [ 243.336018] [<ffffffffa01ad2d8>] ppp_write+0x104/0x11c [ppp_generic] [ 243.336018] [<ffffffff811a3c1e>] __vfs_write+0x56/0x120 [ 243.336018] [<ffffffff81239801>] ? fsnotify_perm+0x27/0x95 [ 243.336018] [<ffffffff8123ab01>] ? 
security_file_permission+0x4d/0x54 [ 243.336018] [<ffffffff811a4ca4>] vfs_write+0xbd/0x11b [ 243.336018] [<ffffffff811a5a0a>] SyS_write+0x5e/0x96 [ 243.336018] [<ffffffff81552a1b>] entry_SYSCALL_64_fastpath+0x13/0x94

The main entry points for sending packets over a PPP unit are the .write() and .ndo_start_xmit() callbacks (simplified view):

.write(unit fd) or .ndo_start_xmit()
 \
  CALL ppp_xmit_process()
   \
    LOCK unit's xmit path (ppp->wlock)
    |
    CALL ppp_push()
     \
      LOCK channel's xmit path (chan->downl)
      |
      CALL lower layer's .start_xmit() callback
       \
        ... might recursively call .ndo_start_xmit() ...
       /
      RETURN from .start_xmit()
      |
      UNLOCK channel's xmit path
     /
    RETURN from ppp_push()
    |
    UNLOCK unit's xmit path
   /
  RETURN from ppp_xmit_process()

Packets can also be directly sent on channels (e.g. LCP packets):

.write(channel fd) or ppp_output_wakeup()
 \
  CALL ppp_channel_push()
   \
    LOCK channel's xmit path (chan->downl)
    |
    CALL lower layer's .start_xmit() callback
     \
      ... might call .ndo_start_xmit() ...
     /
    RETURN from .start_xmit()
    |
    UNLOCK channel's xmit path
   /
  RETURN from ppp_channel_push()

Key points about the lower layer's .start_xmit() callback:

  * It can be called directly by a channel fd .write() or by ppp_output_wakeup() or indirectly by a unit fd .write() or by .ndo_start_xmit().

  * In any case, it's always called with chan->downl held.

  * It might route the packet back to its parent unit using .ndo_start_xmit() as entry point.

This patch detects and breaks recursion in ppp_xmit_process(). This function is a good candidate for the task because it's called early enough after .ndo_start_xmit(), it's always part of the recursion loop and it's on the path of whatever entry point is used to send a packet on a PPP unit.

Recursion detection is done using the per-cpu ppp_xmit_recursion variable.

Since ppp_channel_push() too locks the channel's xmit path and calls the lower layer's .start_xmit() callback, we need to also increment ppp_xmit_recursion there. However there's no need to check for recursion, as it's out of the recursion loop.

Reported-by: Feng Gao <gfree.wind@gmail.com>
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
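A minimal sketch of the guard described above, built around the per-cpu ppp_xmit_recursion counter the message names; the surrounding ppp_generic.c details and the drop path are simplified here, so treat this as an outline rather than the committed code.

static DEFINE_PER_CPU(int, ppp_xmit_recursion);

static void ppp_xmit_process(struct ppp *ppp)
{
	local_bh_disable();

	if (unlikely(__this_cpu_read(ppp_xmit_recursion))) {
		/* The packet was routed back into its own unit: drop it
		 * instead of deadlocking on ppp->wlock. */
		local_bh_enable();
		return;
	}

	__this_cpu_inc(ppp_xmit_recursion);
	__ppp_xmit_process(ppp);	/* takes ppp->wlock, may re-enter via .ndo_start_xmit() */
	__this_cpu_dec(ppp_xmit_recursion);

	local_bh_enable();
}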
-
stephen hemminger authored
Casting away const is bad practice. Since this is an ARM-specific driver, I don't have hardware to actually test this. Having getter functions for ops is really unnecessary code bloat, but I'm not going to touch that. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vivien Didelot says:

====================
net: dsa: add MDB support

This patchset adds the switchdev MDB object support to the DSA layer.

The MDB support for the mv88e6xxx driver is very similar to the FDB support. The FDB operations care about unicast addresses while the MDB operations care about multicast addresses. Both operation sets load/purge/dump the Address Translation Unit (ATU), thus common code is used.

Changes in v2 based on Andrew's comments:
- drop "group" in multicast database related doc and comment
- change _one for more relevant _fid in mv88e6xxx_port_db_dump_one
- return -EOPNOTSUPP if switchdev obj ID is neither _FDB nor _MDB
====================

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add support for the MDB operations. This consists of loading/purging/dumping multicast addresses for a given port in the ATU. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The MDB support for the mv88e6xxx driver will be very similar to the FDB support, since it consists of loading/purging/dumping addresses to/from the Address Translation Unit (ATU). Prepare the support for MDB by making the FDB code that accesses the ATU generic. The FDB operations now provide access to the unicast addresses while the MDB operations will provide access to the multicast addresses. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Add SWITCHDEV_OBJ_ID_PORT_MDB support to the DSA layer. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
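In outline, the DSA slave code gains an extra case in its switchdev object dispatch, falling back to -EOPNOTSUPP for anything it does not offload (as the cover letter notes). The switchdev object IDs and container macros below are the real ones; the helper function names are placeholders.

/* Sketch of the dispatch this adds; example_* helpers are hypothetical. */
static int example_port_obj_add(struct net_device *dev,
				const struct switchdev_obj *obj,
				struct switchdev_trans *trans)
{
	switch (obj->id) {
	case SWITCHDEV_OBJ_ID_PORT_FDB:
		return example_port_fdb_add(dev, SWITCHDEV_OBJ_PORT_FDB(obj), trans);
	case SWITCHDEV_OBJ_ID_PORT_MDB:
		return example_port_mdb_add(dev, SWITCHDEV_OBJ_PORT_MDB(obj), trans);
	default:
		return -EOPNOTSUPP;	/* object type not offloaded by DSA */
	}
}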
-
David S. Miller authored
Raghu Vatsavayi says:

====================
liquidio CN23XX support

Following patchset adds support for new device "CN23XX" in liquidio family of adapters. As advised by you I have split the previous V3 patch of 18 patches into two halves. This first patchset has first 10 patches, which are tested against net-next. I will post the second half after this one.

This V4 patch also addressed all the comments from previous submission:
1) Avoid busy loop while reading registers.
2) Other minor comments about debug messages and constants.

Please apply patches in following order as some of the patches depend on earlier patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Add firmware download support for cn23xx device. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch adds support msix interrupt for cn23xx device. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch adds support for cn23xx queue manipulation. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Adds support for initializing cn23xx device registers related to mac, input/output and pf global config. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Add support for cn23xx device init and sriov queue config. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Add support for cn23xx specific queue definitions and features. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch adds register definitions and structures for new device cn23xx. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Add support of common irq enable functionality for both iq(instruction queue) and oq(output queue). Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch contains changes for firmware version management. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Consolidate common functionality of various devices from different files into lio_core.c/octeon_console.c. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Geert Uytterhoeven authored
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
Check for ethtool_ops structures that are only stored in the ethtool_ops field of a net_device structure or passed as the second argument to netdev_set_default_ethtool_ops. These contexts are declared const, so ethtool_ops structures that have these properties can be declared as const also.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r disable optional_qualifier@
identifier i;
position p;
@@
static struct ethtool_ops i@p = { ... };

@ok1@
identifier r.i;
struct net_device e;
position p;
@@
e.ethtool_ops = &i@p;

@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)

@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p

@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
 struct ethtool_ops i = { ... };
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Mark Einon <mark.einon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnd Bergmann authored
The addition of the per-queue statistics introduced a harmless warning on all 32-bit architectures:

drivers/net/ethernet/qlogic/qede/qede_ethtool.c: In function 'qede_get_ethtool_stats':
drivers/net/ethernet/qlogic/qede/qede_ethtool.c:244:31: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
    buf[cnt++] = QEDE_TQSTATS_DATA(edev,
                               ^
drivers/net/ethernet/qlogic/qede/qede_ethtool.c:244:22: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    buf[cnt++] = QEDE_TQSTATS_DATA(edev,
                      ^

This changes the cast to 'void *' to shut up the warning, which avoids the assumptions on the size of the pointer type.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 68db9ec2 ("qede: Add support for per-queue stats.")
Signed-off-by: David S. Miller <davem@davemloft.net>
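The pattern behind the fix, condensed and not the exact qede macro: route the pointer arithmetic through void * so no pointer/integer conversion is implied on 32-bit builds.

/* Reading a u64 counter at a byte offset inside a stats block.
 * Stepping through 'void *' keeps the arithmetic pointer-only, so
 * 32-bit targets no longer see pointer<->integer size-mismatch casts. */
struct example_stats { u64 rx_packets, rx_bytes; };

static u64 example_stat_at(const struct example_stats *stats, size_t offset)
{
	return *(const u64 *)((const void *)stats + offset);
}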
-
Arnd Bergmann authored
When CONFIG_PM_SLEEP is disabled, we get a couple of harmless warnings:

drivers/net/ethernet/renesas/ravb_main.c:2117:12: error: 'ravb_resume' defined but not used [-Werror=unused-function]
drivers/net/ethernet/renesas/ravb_main.c:2104:12: error: 'ravb_suspend' defined but not used [-Werror=unused-function]

The simplest solution here is to replace the #ifdef with __maybe_unused annotations, which lets the compiler do the right thing by itself.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 0184165b ("ravb: add sleep PM suspend/resume support")
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
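In pattern form, the change replaces the #ifdef CONFIG_PM_SLEEP guard with __maybe_unused annotations; the names below are generic, not the ravb code verbatim.

/* Marking the callbacks __maybe_unused lets SIMPLE_DEV_PM_OPS drop the
 * references when PM sleep is disabled, and the compiler silently
 * discards the unused functions instead of warning about them. */
static int __maybe_unused example_suspend(struct device *dev)
{
	/* ... stop the interface, save state ... */
	return 0;
}

static int __maybe_unused example_resume(struct device *dev)
{
	/* ... restore state, restart the interface ... */
	return 0;
}

static SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);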
-
David S. Miller authored
David Ahern says:

====================
net: mpls: fragmentation and gso fixes for locally originated traffic

This series fixes mtu and fragmentation for tunnels using lwtunnel output redirect, and fixes GSO for MPLS for locally originated traffic reported by Lennert Buytenhek. A follow on series will address fragmentation and GSO for forwarded MPLS traffic. Hardware offload of GSO with MPLS also needs to be addressed.

Simon: Can you verify this works with OVS for single and multiple labels?

v4
- more updates to mpls_gso_segment per Alex's comments (thanks, Alex)
- updates to teaching OVS about marking MPLS labels as the network header

v3
- updates to mpls_gso_segment per Alex's comments
- dropped skb->encapsulation = 1 from mpls_xmit per Alex's comment

v2
- consistent use of network_header in skb to fix GSO for MPLS
- update MPLS code in OVS to network_header and inner_network_header
====================

Tested-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
veth does not really transmit packets; it only moves the skb from one netdev to another, so gso and checksum offload are not really needed. Add the features to mpls_features to get the same benefit and performance with MPLS as without it. Reported-by: Lennert Buytenhek <buytenh@wantstofly.org> Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
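The change described amounts to a one-liner in veth's setup path; the exact feature mask shown is an assumption, not copied from the patch.

static void example_veth_setup(struct net_device *dev)
{
	/* veth only moves skbs between netdevs, so advertise the same
	 * software GSO/checksum abilities for MPLS-tagged traffic as for
	 * plain IP; real segmentation/checksumming happens later, if ever. */
	dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
}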
-
David Ahern authored
As reported by Lennert the MPLS GSO code is failing to properly segment large packets. There are a couple of problems:

1. The inner protocol is not set so the gso segment functions for inner protocol layers are not getting run, and
2. MPLS labels for packets that use the "native" (non-OVS) MPLS code are not properly accounted for in mpls_gso_segment.

The MPLS GSO code was added for OVS. It is re-using skb_mac_gso_segment to call the gso segment functions for the higher layer protocols. That means skb_mac_gso_segment is called twice -- once with the network protocol set to MPLS and again with the network protocol set to the inner protocol.

This patch sets the inner skb protocol, addressing item 1 above, and sets the network_header and inner_network_header to mark where the MPLS labels start and end. The MPLS code in OVS is also updated to set the two network markers.

From there the MPLS GSO code uses the difference between the network header and the inner network header to know the size of the MPLS header that was pushed. It then pulls the MPLS header, resets the mac_len and protocol for the inner protocol and then calls skb_mac_gso_segment to segment the skb. After the inner protocol segmentation is done, the skb protocol is set to mpls for each segment and the network and mac headers are restored.

Reported-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
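The header bookkeeping described above, reduced to its essentials on the label-push side; this is a sketch of the approach, not the committed mpls_xmit/OVS code.

/* Sketch: mark where the MPLS stack starts and ends before handing the
 * skb to the GSO machinery, so mpls_gso_segment() can size the pushed
 * label stack as inner_network_header - network_header. */
static void example_push_mpls(struct sk_buff *skb, unsigned int mpls_hlen,
			      __be16 inner_proto)
{
	/* remember the encapsulated L3 protocol so the right inner
	 * gso_segment callback runs later */
	skb_set_inner_protocol(skb, inner_proto);

	/* inner_network_header: start of the encapsulated IP header */
	skb_reset_inner_network_header(skb);

	skb_push(skb, mpls_hlen);		/* prepend the label stack */
	skb_reset_network_header(skb);		/* network_header: first MPLS label */
	skb->protocol = htons(ETH_P_MPLS_UC);
}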
-
Roopa Prabhu authored
Today mpls iptunnel lwtunnel_output redirect expects the tunnel output function to handle fragmentation. This is ok but can be avoided if we did not do the mpls output redirect too early, i.e. we could wait until ip fragmentation is done and then call mpls output for each ip fragment.

To make this work we will need:
1) the lwtunnel state to carry encap headroom
2) to do the redirect to the encap output handler on the ip fragment (essentially do the output redirect after fragmentation)

This patch adds tunnel headroom in lwtstate to make sure we account for tunnel data in mtu calculations during fragmentation, and adds a new xmit redirect handler to redirect to the lwtunnel xmit func after ip fragmentation. This includes IPV6 and some mtu fixes and testing from David Ahern.

Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
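Conceptually, the MTU used by the IP fragmentation code now subtracts the encap headroom carried in the lwtunnel state, and the encap output runs once per fragment. A rough sketch of the MTU side, with the headroom field named per the description above (it may not match the final API exactly):

/* Sketch: account for the tunnel header before fragmentation, so each
 * fragment still fits on the wire after the encap bytes are added. */
static unsigned int example_path_mtu(const struct dst_entry *dst)
{
	unsigned int mtu = dst_mtu(dst);

	if (dst->lwtstate)
		mtu -= dst->lwtstate->headroom;	/* encap bytes added after fragmentation */

	return mtu;
}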
-
Eric Dumazet authored
After commit 145dd5f9 ("net: flush the softnet backlog in process context"), we can easily batch calls to flush_all_backlogs() for all devices processed in rollback_registered_many().

Tested: Before patch, on an idle host:

modprobe dummy numdummies=10000
perf stat -e context-switches -a rmmod dummy

Performance counter stats for 'system wide':
     1,211,798 context-switches
   1.302137465 seconds time elapsed

After patch:

perf stat -e context-switches -a rmmod dummy

Performance counter stats for 'system wide':
       225,523 context-switches
   0.721623566 seconds time elapsed

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-