- 09 Apr, 2015 40 commits
-
Malcolm Priestley authored
commit 40c8790b upstream. When the driver sets this rate, a power value of zero is applied, stopping data flow until another rate is tried. Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Peter Zijlstra authored
commit d525211f upstream. Vince reported a watchdog lockup like:

  [<ffffffff8115e114>] perf_tp_event+0xc4/0x210
  [<ffffffff810b4f8a>] perf_trace_lock+0x12a/0x160
  [<ffffffff810b7f10>] lock_release+0x130/0x260
  [<ffffffff816c7474>] _raw_spin_unlock_irqrestore+0x24/0x40
  [<ffffffff8107bb4d>] do_send_sig_info+0x5d/0x80
  [<ffffffff811f69df>] send_sigio_to_task+0x12f/0x1a0
  [<ffffffff811f71ce>] send_sigio+0xae/0x100
  [<ffffffff811f72b7>] kill_fasync+0x97/0xf0
  [<ffffffff8115d0b4>] perf_event_wakeup+0xd4/0xf0
  [<ffffffff8115d103>] perf_pending_event+0x33/0x60
  [<ffffffff8114e3fc>] irq_work_run_list+0x4c/0x80
  [<ffffffff8114e448>] irq_work_run+0x18/0x40
  [<ffffffff810196af>] smp_trace_irq_work_interrupt+0x3f/0xc0
  [<ffffffff816c99bd>] trace_irq_work_interrupt+0x6d/0x80

Which is caused by an irq_work generating new irq_work and therefore not allowing forward progress. This happens because processing the perf irq_work triggers another perf event (tracepoint stuff) which in turn generates an irq_work ad infinitum. Avoid this by raising the recursion counter in the irq_work -- which effectively disables all software events (including tracepoints) from actually triggering again. Reported-by: Vince Weaver <vincent.weaver@maine.edu> Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20150219170311.GH21418@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Bob Copeland authored
commit d0c22119 upstream. The mesh forwarding path was not checking that data frames were protected when running an encrypted network; add the necessary check. Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Bob Copeland <me@bobcopeland.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Michal Kazior authored
commit aa75ebc2 upstream. Some APs experience problems when working with U-APSD. Decreasing the probability of that happening by using legacy mode for all ACs but VO isn't enough. Cisco 4410N originally forced us to enable VO by default only because it treated non-VO ACs as legacy. However, some APs (notably Netgear R7000) silently reclassify packets to different ACs. Since u-APSD ACs require trigger frames for frame retrieval, clients would never see some frames (e.g. ARP responses) or would fetch them accidentally after a long time. It makes little sense to enable u-APSD queues by default because it needs userspace applications to be aware of it to actually take advantage of the possible additional powersavings. Implicitly depending on driver autotrigger frame support doesn't make much sense. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Johannes Berg authored
commit 496fcc29 upstream. As HT/VHT depend heavily on QoS/WMM, it's not a good idea to let userspace add clients that have HT/VHT but not QoS/WMM. Since it does so in certain cases we've observed (client is using HT IEs but not QoS/WMM), just ignore the HT/VHT info at this point and don't pass it down to the drivers, which might unconditionally use it. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Bart Van Assche authored
commit 75c3d0bf upstream. This patch fixes the incorrect use of __transport_register_session() in tcm_qla2xxx_check_initiator_node_acl() code, which does not take se_tpg->session_lock explicitly when accessing se_tpg->tpg_sess_list to add new se_sess nodes. Given that tcm_qla2xxx_check_initiator_node_acl() is not called with qla_hw->hardware_lock held for all accesses of ->tpg_sess_list, the code should be using transport_register_session() instead. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Giridhar Malavali <giridhar.malavali@qlogic.com> Cc: Quinn Tran <quinn.tran@qlogic.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Dan Carpenter authored
commit d556546e upstream. This patch adds a missing set of conditional check braces in ft_invl_hw_context() originally introduced by commit dcd998cc when handling DDP failures in ft_recv_write_data() code:

  commit dcd998cc
  Author: Kiran Patil <kiran.patil@intel.com>
  Date:   Wed Aug 3 09:20:01 2011 +0000

      tcm_fc: Handle DDP/SW fc_frame_payload_get failures in ft_recv_write_data

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Kiran Patil <kiran.patil@intel.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Lars-Peter Clausen authored
commit 328f494d upstream. When inserting a new register into a block at the lower end, the present bitmap is currently shifted in the wrong direction. The effect of this is that the bitmap becomes corrupted and registers which are present might be reported as not present and vice versa. Fix this by shifting left rather than right. Fixes: 472fdec7 ("regmap: rbtree: Reduce number of nodes, take 2") Reported-by: Daniel Baluta <daniel.baluta@gmail.com> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 07892b10 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
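A minimal sketch of the pattern this series of fixes corrects; the callback and private struct below are hypothetical and not taken from any of the patched drivers:

  #include <sound/control.h>

  struct my_codec_priv {               /* hypothetical driver state */
          int deemph;                  /* a boolean "switch" control */
  };

  static int my_deemph_get(struct snd_kcontrol *kcontrol,
                           struct snd_ctl_elem_value *ucontrol)
  {
          struct my_codec_priv *priv = snd_kcontrol_chip(kcontrol);

          /* Wrong for a boolean control (unsigned int field):
           *   ucontrol->value.enumerated.item[0] = priv->deemph;
           * Correct (long field, which is what the ALSA core reads): */
          ucontrol->value.integer.value[0] = priv->deemph;
          return 0;
  }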
-
Takashi Iwai authored
commit 2bf4c1d4 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 08641d9b upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit eaddf6fd upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 24cc883c upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 00a14c29 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit bd14016f upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit 4c523ef6 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit b4a18c8b upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit e8371aa0 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Acked-by: Paul Handrigan <Paul.Handrigan@cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Takashi Iwai authored
commit d7f58db4 upstream. The correct values referred by a boolean control are value.integer.value[], not value.enumerated.item[]. The former is long while the latter is int, so it's even incompatible on 64bit architectures. Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Eric Nelson authored
commit c7d910b8 upstream. The SGTL5000_CHIP_ANA_POWER register is cached. Update the cached value instead of writing it directly. Patch inspired by Russell King's more colorful remarks in this patch: https://github.com/SolidRun/linux-imx6-3.14/commit/dd4bf6a Signed-off-by: Eric Nelson <eric.nelson@boundarydevices.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
willy tarreau authored
commit 71f6d1b3 upstream. Right now the mvneta driver doesn't handle the Tx IRQ, and relies on two mechanisms to flush Tx descriptors: a flush at the end of mvneta_tx() and a timer. If a burst of packets is emitted faster than the device can send them, then the queue is stopped until the next wake-up of the timer 10 ms later. This causes jerky output traffic with bursts and pauses, making it difficult to reach line rate with very few streams. A test on UDP traffic shows that it's not possible to go beyond 134 Mbps / 12 kpps of outgoing traffic with 1500-byte IP packets. Routed traffic tends to observe pauses as well if the traffic is bursty, making it even burstier after the wake-up. It seems that this feature was inherited from the original driver but nothing there mentions any reason for not using the interrupt instead, which the chip supports. Thus, this patch enables Tx interrupts and removes the timer. It does the two at once because it's not really possible to make the two mechanisms coexist, so a split patch doesn't make sense. First tests performed on a Mirabox (Armada 370) show that less CPU seems to be used when sending traffic. One reason might be that we now call mvneta_tx_done_gbe() with a mask indicating which queues have been done instead of looping over all of them. The same UDP test above now happily reaches 987 Mbps / 87.7 kpps. Single-stream TCP traffic can now more easily reach line rate. HTTP transfers of 1 MB objects over a single connection went from 730 to 840 Mbps. It is even possible to go significantly higher (>900 Mbps) by tweaking tcp_tso_win_divisor. Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Cc: Arnaud Ebalard <arno@natisbad.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Tested-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
willy tarreau authored
commit 40ba35e7 upstream. Marvell has not published the chip's datasheet yet, so it's very hard to find the relevant bits to manipulate to change the IRQ behaviour. Fortunately, these bits are described in the proprietary LSP patch set, which is publicly available here: http://www.plugcomputer.org/downloads/mirabox/ So let's put them back in the driver in order to reduce the burden of current and future maintenance. Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
willy tarreau authored
commit 29021366 upstream. If a queue timeout is reported, we can oops because of some schedules while the caller is atomic, as shown below:

  mvneta d0070000.ethernet eth0: tx timeout
  BUG: scheduling while atomic: bash/1528/0x00000100
  Modules linked in: slhttp_ethdiv(C) [last unloaded: slhttp_ethdiv]
  CPU: 2 PID: 1528 Comm: bash Tainted: G WC 3.13.0-rc4-mvebu-nf #180
  [<c0011bd9>] (unwind_backtrace+0x1/0x98) from [<c000f1ab>] (show_stack+0xb/0xc)
  [<c000f1ab>] (show_stack+0xb/0xc) from [<c02ad323>] (dump_stack+0x4f/0x64)
  [<c02ad323>] (dump_stack+0x4f/0x64) from [<c02abe67>] (__schedule_bug+0x37/0x4c)
  [<c02abe67>] (__schedule_bug+0x37/0x4c) from [<c02ae261>] (__schedule+0x325/0x3ec)
  [<c02ae261>] (__schedule+0x325/0x3ec) from [<c02adb97>] (schedule_timeout+0xb7/0x118)
  [<c02adb97>] (schedule_timeout+0xb7/0x118) from [<c0020a67>] (msleep+0xf/0x14)
  [<c0020a67>] (msleep+0xf/0x14) from [<c01dcbe5>] (mvneta_stop_dev+0x21/0x194)
  [<c01dcbe5>] (mvneta_stop_dev+0x21/0x194) from [<c01dcfe9>] (mvneta_tx_timeout+0x19/0x24)
  [<c01dcfe9>] (mvneta_tx_timeout+0x19/0x24) from [<c024afc7>] (dev_watchdog+0x18b/0x1c4)
  [<c024afc7>] (dev_watchdog+0x18b/0x1c4) from [<c0020b53>] (call_timer_fn.isra.27+0x17/0x5c)
  [<c0020b53>] (call_timer_fn.isra.27+0x17/0x5c) from [<c0020cad>] (run_timer_softirq+0x115/0x170)
  [<c0020cad>] (run_timer_softirq+0x115/0x170) from [<c001ccb9>] (__do_softirq+0xbd/0x1a8)
  [<c001ccb9>] (__do_softirq+0xbd/0x1a8) from [<c001cfad>] (irq_exit+0x61/0x98)
  [<c001cfad>] (irq_exit+0x61/0x98) from [<c000d4bf>] (handle_IRQ+0x27/0x60)
  [<c000d4bf>] (handle_IRQ+0x27/0x60) from [<c000843b>] (armada_370_xp_handle_irq+0x33/0xc8)
  [<c000843b>] (armada_370_xp_handle_irq+0x33/0xc8) from [<c000fba9>] (__irq_usr+0x49/0x60)

Ben Hutchings attempted to propose a better fix consisting in using a scheduled work for this, but while it fixed this panic, it caused other random freezes and panics proving that the reset sequence in the driver is unreliable and that additional fixes should be investigated. When sending multiple streams over a link limited to 100 Mbps, Tx timeouts happen from time to time, and the driver correctly recovers only when the function is disabled. Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Cc: Ben Hutchings <ben@decadent.org.uk> Tested-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
willy tarreau authored
commit 74c41b04 upstream. Stats writers are mvneta_rx() and mvneta_tx(). They don't lock anything when they update the stats, and as a result, it randomly happens that the stats freeze on SMP if two updates happen during stats retrieval. This is very easily reproducible by starting two HTTP servers and binding each of them to a different CPU, then consulting /proc/net/dev in loops during transfers; the interface should immediately lock up. This issue also randomly happens upon link state changes during transfers, because the stats are collected in this situation, but it takes more attempts to reproduce it. The comments in netdevice.h suggest using per_cpu stats instead to get rid of this issue. This patch implements this. It merges both rx_stats and tx_stats into a single "stats" member with a single syncp. Both mvneta_rx() and mvneta_tx() now only update a single CPU's counters. In turn, mvneta_get_stats64() does the summing by iterating over all CPUs to get their respective stats. With this change, stats are still correct and no more lockup is encountered. Note that this bug was present since the first import of the mvneta driver. It might make sense to backport it to some stable trees. If so, it depends on "d33dc73 net: mvneta: increase the 64-bit rx/tx stats out of the hot path". Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Tested-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net> [wt: port to 3.10: u64_stats_init() does not exist in 3.10 and is not needed] Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
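A hedged sketch of the general per-CPU stats pattern described above (struct and function names are illustrative, not the actual mvneta code):

  #include <linux/percpu.h>
  #include <linux/u64_stats_sync.h>
  #include <linux/netdevice.h>

  struct pcpu_port_stats {                  /* illustrative per-CPU block */
          u64                     rx_packets;
          u64                     rx_bytes;
          struct u64_stats_sync   syncp;
  };

  /* Hot path: only the local CPU's counters are touched, no shared lock. */
  static void port_count_rx(struct pcpu_port_stats __percpu *stats, u32 bytes)
  {
          struct pcpu_port_stats *s = this_cpu_ptr(stats);

          u64_stats_update_begin(&s->syncp);
          s->rx_packets++;
          s->rx_bytes += bytes;
          u64_stats_update_end(&s->syncp);
  }

  /* ndo_get_stats64-style reader: sum a consistent snapshot from every CPU. */
  static void port_sum_stats(struct pcpu_port_stats __percpu *stats,
                             struct rtnl_link_stats64 *out)
  {
          int cpu;

          for_each_possible_cpu(cpu) {
                  const struct pcpu_port_stats *s = per_cpu_ptr(stats, cpu);
                  unsigned int start;
                  u64 packets, bytes;

                  do {
                          start = u64_stats_fetch_begin(&s->syncp);
                          packets = s->rx_packets;
                          bytes = s->rx_bytes;
                  } while (u64_stats_fetch_retry(&s->syncp, start));

                  out->rx_packets += packets;
                  out->rx_bytes += bytes;
          }
  }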
-
willy tarreau authored
commit dc4277dd upstream. It is better to count packets and bytes in 32-bit local variables on the stack and accumulate them into the stats only once at the end. This saves two memory writes and two memory barriers per packet. The incoming packet rate was increased by 4.7% on the Openblocks AX3 thanks to this. Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Tested-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Wei Yang authored
[ Upstream commit befdf897 ] pci_match_id() just matches the static pci_device_id table, which may return NULL if someone binds the driver to a device manually using /sys/bus/pci/drivers/.../new_id. This patch wraps up a helper function, __mlx4_remove_one(), which does the teardown but preserves the drv_data. Functions like mlx4_pci_err_detected() and mlx4_restart_one() will call this one without releasing drvdata. Fixes: 97a5221f ("net/mlx4_core: pass pci_device_id.driver_data to __mlx4_init_one during reset") CC: Bjorn Helgaas <bhelgaas@google.com> CC: Amir Vadai <amirv@mellanox.com> CC: Jack Morgenstein <jackm@dev.mellanox.co.il> CC: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com> Acked-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Wei Yang authored
commit 97a5221f upstream. The second parameter of __mlx4_init_one() is used to identify whether the pci_dev is a PF or VF. Currently, when it is invoked in mlx4_pci_slot_reset(), this information is missing. This patch matches the pci_dev against mlx4_pci_table and passes the pci_device_id.driver_data to __mlx4_init_one() in mlx4_pci_slot_reset(). Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Eric Dumazet authored
commit 5925a055 upstream. sk_dst_cache has an __rcu annotation, so we need a cast to avoid the following sparse error:

  include/net/sock.h:1774:19: warning: incorrect type in initializer (different address spaces)
  include/net/sock.h:1774:19:    expected struct dst_entry [noderef] <asn:4>*__ret
  include/net/sock.h:1774:19:    got struct dst_entry *dst

Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: kbuild test robot <fengguang.wu@intel.com> Fixes: 7f502361 ("ipv4: irq safe sk_dst_[re]set() and ipv4_sk_update_pmtu() fix") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
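A simplified illustration of the kind of cast involved (hypothetical types, not the actual sk_dst_set() hunk): sparse tracks the __rcu address space, so handing an __rcu-annotated pointer to a plain-pointer primitive such as xchg() needs a __force cast to keep the checker quiet.

  #include <linux/rcupdate.h>
  #include <linux/atomic.h>

  struct dst_like;                              /* opaque placeholder type */

  struct holder {
          struct dst_like __rcu *cached;        /* __rcu-annotated field   */
  };

  static struct dst_like *holder_swap(struct holder *h, struct dst_like *new)
  {
          /* Without the (__force ...) cast sparse complains about mixing
           * address spaces, much like the warning quoted above. */
          return xchg((__force struct dst_like **)&h->cached, new);
  }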
-
Thomas Fitzsimmons authored
commit 0a198587 upstream. This commit fixes the command value generated for CSUM calculation when running in big endian mode. The Ethernet protocol ID for IP was being unconditionally byte-swapped in the layer 3 protocol check (with swab16), which caused the mvneta driver to not function correctly in big endian mode. This patch byte-swaps the ID conditionally with htons. Cc: <stable@vger.kernel.org> # v3.13+ Signed-off-by: Thomas Fitzsimmons <fitzsim@fitzsim.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
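A small stand-alone illustration of the behaviour the fix relies on (user-space, names made up): htons() is a no-op on big-endian hosts while an unconditional 16-bit byte swap is not, so a comparison built with swab16() only matches network byte order on little-endian machines.

  #include <stdio.h>
  #include <stdint.h>
  #include <arpa/inet.h>

  static const uint16_t eth_p_ip = 0x0800;  /* Ethernet protocol ID for IPv4 */

  static uint16_t swab16(uint16_t v)        /* unconditional byte swap */
  {
          return (uint16_t)((v << 8) | (v >> 8));
  }

  int main(void)
  {
          /* On little endian both print 0x0008 and agree; on big endian
           * htons() leaves 0x0800 untouched while swab16() still swaps it,
           * so a check built with swab16() no longer matches. */
          printf("htons(eth_p_ip)  = 0x%04x\n", htons(eth_p_ip));
          printf("swab16(eth_p_ip) = 0x%04x\n", swab16(eth_p_ip));
          return 0;
  }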
-
Ben Hutchings authored
commit 640d7efe upstream. *_result[len] is parsed as *(_result[len]) which is not at all what we want to touch here. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Fixes: 84a7c0b1 ("dns_resolver: assure that dns_query() result is null-terminated") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
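A tiny stand-alone C example of the precedence issue (names made up): [] binds tighter than unary *, so *_result[len] is parsed as *(_result[len]), whereas (*_result)[len] is the byte actually meant.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
          size_t len = 5;
          char *buf = malloc(len + 1);
          char **_result = &buf;          /* out-parameter style pointer */

          memcpy(buf, "hello", len);

          (*_result)[len] = '\0';  /* correct: terminate the returned string */
          /* *_result[len] = '\0';    wrong: parsed as *(_result[len]); it
           *                          reads the pointer at _result + len (out
           *                          of bounds here) and writes through it. */

          printf("%s\n", *_result);
          free(buf);
          return 0;
  }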
-
Gao feng authored
commit 33d99113 upstream. commit 25fb6ca4 "net IPv6 : Fix broken IPv6 routing table after loopback down-up" allocates an addrconf router for the ipv6 address when the lo device comes up, but commit a881ae1f "ipv6:don't call addrconf_dst_alloc again when enable lo" breaks this behavior. Since the addrconf router is moved to the garbage list when the lo device goes down, we should release this router and reallocate a new one for the ipv6 address when the lo device comes up. This patch solves bug 67951 on bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=67951 Changes from v1: use ip6_rt_put() to replace ip6_del_rt(), thanks Hannes! Change code style, suggested by Sergei. CC: Sabrina Dubroca <sd@queasysnail.net> CC: Hannes Frederic Sowa <hannes@stressinduktion.org> Reported-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Zoltan Kiss authored
commit 36d5fe6a upstream. skb_zerocopy can copy elements of the frags array between skbs, but it doesn't orphan them. Also, it doesn't handle errors, so this patch takes care of that as well, and modifies the callers accordingly. skb_tx_error() is also added to the callers so they will signal the failed delivery to the creator of the skb. Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net> [bwh: Backported to 3.13: skb_zerocopy() is new in 3.14, but was moved from a static function in nfnetlink_queue. We need to patch that and its caller, but not openvswitch.] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Joerg Roedel authored
commit d97eb896 upstream. When an interrupt is migrated away from a cpu it will stay in its vector_irq array until smp_irq_move_cleanup_interrupt succeeded. The cfg->move_in_progress flag is cleared already when the IPI was sent. When the interrupt is destroyed after migration its 'struct irq_desc' is freed and the vector_irq arrays are cleaned up. But since cfg->move_in_progress is already 0 the references at cpus before the last migration will not be cleared. So this would leave a reference to an already destroyed irq alive. When the cpu is taken down at this point, the check_irq_vectors_for_cpu_disable() function finds a valid irq number in the vector_irq array, but gets NULL for its descriptor and dereferences it, causing a kernel panic. This has been observed on real systems at shutdown. Add a check to check_irq_vectors_for_cpu_disable() for a valid 'struct irq_desc' to prevent this issue. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <JBeulich@suse.com> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Yinghai Lu <yinghai@kernel.org> Cc: alnovak@suse.com Cc: joro@8bytes.org Link: http://lkml.kernel.org/r/20150204132754.GA10078@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
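A hedged sketch of the shape of the added guard (illustrative only, not the exact x86 hunk): before trusting an irq number found in a per-CPU vector table, confirm its descriptor still exists.

  #include <linux/irq.h>
  #include <linux/irqdesc.h>

  /* Illustrative walk over a per-CPU vector table: skip entries whose
   * 'struct irq_desc' has already been freed after migration + teardown. */
  static unsigned int count_live_vectors(const int *vector_irq, int nr_vectors)
  {
          unsigned int used = 0;
          int vector;

          for (vector = 0; vector < nr_vectors; vector++) {
                  int irq = vector_irq[vector];
                  struct irq_desc *desc;

                  if (irq < 0)
                          continue;

                  desc = irq_to_desc(irq);
                  if (!desc)              /* stale entry, descriptor gone */
                          continue;

                  used++;
          }
          return used;
  }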
-
Joerg Roedel authored
commit 9db4ad91 upstream. Check for the ->map and not the ->unmap pointer. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Jan Kiszka authored
commit acead1b0 upstream. Add CPU ID for Atom N2600/N2800 processors. Datasheets indicate support for this; detailed information about potential quirks or limitations is missing, though. So we just reuse the definition for the previous ATOM series. Tests on N2800 systems showed that this addition is fine and can reduce power consumption by about 0.25 W (personally confirmed on Intel DN2800MT). Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Romeo Cane authored
commit 1028ccf5 upstream. Declaring sys_call_table as a pointer causes the compiler to generate the wrong lookup code in arch_syscall_addr().

  <arch_syscall_addr>:
       lis     r9,-16384
       rlwinm  r3,r3,2,0,29
  -    lwz     r11,30640(r9)
  -    lwzx    r3,r11,r3
  +    addi    r9,r9,30640
  +    lwzx    r3,r9,r3
       blr

The actual sys_call_table symbol, declared in assembler, is an array. If we lie about that to the compiler we get the wrong code generated, as above. This definition seems only to be used by the syscall tracing code in kernel/trace/trace_syscalls.c. With this patch I can successfully use the syscall tracepoints:

  bash-3815  [002] ....   333.239082: sys_write -> 0x2
  bash-3815  [002] ....   333.239087: sys_dup2(oldfd: a, newfd: 1)
  bash-3815  [002] ....   333.239088: sys_dup2 -> 0x1
  bash-3815  [002] ....   333.239092: sys_fcntl(fd: a, cmd: 1, arg: 0)
  bash-3815  [002] ....   333.239093: sys_fcntl -> 0x1
  bash-3815  [002] ....   333.239094: sys_close(fd: a)
  bash-3815  [002] ....   333.239094: sys_close -> 0x0

Signed-off-by: Romeo Cane <romeo.cane.ext@coriant.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Quentin Casasnovas authored
commit f84598bd upstream. mc_saved_tmp is a static array allocated on the stack; we need to make sure mc_saved_count stays within its bounds, otherwise we're overflowing the stack in _save_mc(). A specially crafted microcode header could lead to a kernel crash or potentially kernel execution. Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Link: http://lkml.kernel.org/r/1422964824-22056-1-git-send-email-quentin.casasnovas@oracle.com Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
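A generic, hedged sketch of the kind of bound that closes this class of bug (illustrative names, not the actual microcode loader code): a count derived from parsed input must never index past the end of a fixed-size array.

  #include <string.h>

  #define MAX_SAVED 128                    /* illustrative capacity */

  struct ucode_blob {                      /* illustrative element type */
          const void *data;
          unsigned int size;
  };

  /* Returns 0 on success, -1 if the caller-supplied count would overflow
   * the fixed-size destination array. */
  static int save_blobs(struct ucode_blob dst[MAX_SAVED],
                        const struct ucode_blob *src, unsigned int count)
  {
          if (count > MAX_SAVED)           /* reject attacker-controlled sizes */
                  return -1;

          memcpy(dst, src, count * sizeof(*src));
          return 0;
  }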
-
Andy Lutomirski authored
commit ff7e0055 upstream. Commit 4982223e ("module: set nx before marking module MODULE_STATE_COMING") introduced a regression: if a module fails to parse its arguments or if mod_sysfs_setup fails, then the module's memory will be freed while still read-only. Anything that reuses that memory will crash as soon as it tries to write to it. Cc: stable@vger.kernel.org # v3.16 Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Rusty Russell authored
commit 4982223e upstream. We currently set RO & NX on modules very late: after we move them from MODULE_STATE_UNFORMED to MODULE_STATE_COMING, and after we call parse_args() (which can exec code in the module). Much better is to do it in complete_formation() and then call the notifier. This means that the notifiers will be called on a module which is already RO & NX, so that may cause problems (ftrace already changed so they're unaffected). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
-
Michael Ellerman authored
commit 875ebe94 upstream. Anton has a busy ppc64le KVM box where guests sometimes hit the infamous "kernel BUG at kernel/smpboot.c:134!" issue during boot:

  BUG_ON(td->cpu != smp_processor_id());

Basically a per-CPU hotplug thread scheduled on the wrong CPU. The oops output confirms it:

  CPU: 0 Comm: watchdog/130

The problem is that we aren't ensuring the CPU active bit is set for the secondary before allowing the master to continue on. The master unparks the secondary CPU's kthreads and the scheduler looks for a CPU to run on. It calls select_task_rq() and realises the suggested CPU is not in the cpus_allowed mask. It then ends up in select_fallback_rq(), and since the active bit isn't set we choose some other CPU to run on. This seems to have been introduced by 6acbfb96 ("sched: Fix hotplug vs. set_cpus_allowed_ptr()"), which changed from setting active before online to setting active after online. However that was in turn fixing a bug where other code assumed an active CPU was also online, so we can't just revert that fix. The simplest fix is just to spin waiting for both active & online to be set. We already have a barrier prior to set_cpu_online() (which also sets active), to ensure all other setup is completed before online & active are set. Fixes: 6acbfb96 ("sched: Fix hotplug vs. set_cpus_allowed_ptr()") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
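A hedged sketch of the approach described above (illustrative, not the literal powerpc hunk): the boot CPU spins until the secondary is marked both online and active before the unparked kthreads can be scheduled there.

  #include <linux/cpumask.h>
  #include <asm/processor.h>     /* cpu_relax() */

  /* Illustrative wait used by the CPU bringing up 'cpu': returning only
   * once both bits are set means select_task_rq() will accept the new
   * CPU for its per-CPU kthreads. */
  static void wait_until_cpu_usable(int cpu)
  {
          while (!cpu_online(cpu) || !cpu_active(cpu))
                  cpu_relax();
  }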
-