- 27 Apr, 2015 14 commits
-
-
Alexei Starovoitov authored
[ Upstream commit c3de6317 ] Due to a missing bounds check, the DAG pass of the BPF verifier can corrupt memory, which can cause random crashes during program loading:

[8.449451] BUG: unable to handle kernel paging request at ffffffffffffffff
[8.451293] IP: [<ffffffff811de33d>] kmem_cache_alloc_trace+0x8d/0x2f0
[8.452329] Oops: 0000 [#1] SMP
[8.452329] Call Trace:
[8.452329] [<ffffffff8116cc82>] bpf_check+0x852/0x2000
[8.452329] [<ffffffff8116b7e4>] bpf_prog_load+0x1e4/0x310
[8.452329] [<ffffffff811b190f>] ? might_fault+0x5f/0xb0
[8.452329] [<ffffffff8116c206>] SyS_bpf+0x806/0xa30

Fixes: f1bca824 ("bpf: add search pruning optimization to verifier") Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Eric Dumazet authored
[ Upstream commit 074975d0 ] Commit 9a2620c8 ("bnx2x: prevent WARN during driver unload") switched the napi/busy_lock locking mechanism from spin_lock() into spin_lock_bh(), breaking interoperability with netconsole, as netpoll disables interrupts prior to calling our napi mechanism. This switches the driver into using atomic assignments instead of the spinlock mechanisms previously employed. Based on an initial patch from Yuval Mintz & Ariel Elior, I basically added softirq starvation avoidance and a mixture of atomic operations, plain writes and barriers. Note this slightly reduces the overhead for this driver when no busy_poll sockets are in use. Fixes: 9a2620c8 ("bnx2x: prevent WARN during driver unload") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Eric Dumazet authored
[ Upstream commit b50edd78 ] I noticed tcpdump was giving funky timestamps for locally generated SYNACK messages on the loopback interface.

11:42:46.938990 IP 127.0.0.1.48245 > 127.0.0.2.23850: S 945476042:945476042(0) win 43690 <mss 65495,nop,nop,sackOK,nop,wscale 7>
20:28:58.502209 IP 127.0.0.2.23850 > 127.0.0.1.48245: S 3160535375:3160535375(0) ack 945476043 win 43690 <mss 65495,nop,nop,sackOK,nop,wscale 7>

This is because we need to clear skb->tstamp before entering the lower stack; otherwise net_timestamp_check() does not set skb->tstamp. Fixes: 7faee5c0 ("tcp: remove TCP_SKB_CB(skb)->when") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
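As a rough userspace illustration of the behaviour described above (the struct and function are stand-ins, not the kernel's net_timestamp_check()): the lower layer only stamps an skb whose tstamp is still zero, so a stale value from SYN processing survives unless it is cleared first.

    #include <stdio.h>

    /* Toy model, not kernel code: the lower layer only fills in a
     * timestamp when none is set, so a leftover value sticks unless the
     * skb's tstamp is cleared before it is handed down. */
    struct skb_model {
        long long tstamp;
    };

    static void net_timestamp_check_model(struct skb_model *skb, long long now)
    {
        if (skb->tstamp == 0)           /* only stamp packets without one */
            skb->tstamp = now;
    }

    int main(void)
    {
        struct skb_model synack = { .tstamp = 1000 };  /* leftover from SYN time */

        synack.tstamp = 0;                     /* the fix: clear before transmit */
        net_timestamp_check_model(&synack, 2000);
        printf("tstamp = %lld\n", synack.tstamp);      /* 2000, not the stale 1000 */
        return 0;
    }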
-
Jack Morgenstein authored
[ Upstream commit fde913e2 ] Commit 1daa4303 ("net/mlx4_core: Deprecate error message at ConnectX-2 cards startup to debug") did the deprecation only for port 1 of the card. Need to deprecate for port 2 as well. Fixes: 1daa4303 ("net/mlx4_core: Deprecate error message at ConnectX-2 cards startup to debug") Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
hannes@stressinduktion.org authored
[ Upstream commit f60e5990 ] We should not consult skb->sk for output decisions in xmit recursion levels > 0 in the stack. Otherwise local socket settings could influence the result of e.g. the tunnel encapsulation process. ipv6 does not conform with this in three places:

1) ip6_fragment: we do consult ipv6_npinfo for frag_size
2) sk_mc_loop in ipv6 uses skb->sk and checks if we should loop the packet back to the local socket
3) ip6_skb_dst_mtu could query the settings from the user socket and force a wrong MTU

Furthermore: in sk_mc_loop we could potentially land in WARN_ON(1) if we use a PF_PACKET socket on top of an IPv6-backed vxlan device. Reuse xmit_recursion as we are currently only interested in protecting tunnel devices. Cc: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Neal Cardwell authored
[ Upstream commit 666b8051 ] On processing cumulative ACKs, the FRTO code was not checking the SACKed bit, meaning that there could be a spurious FRTO undo on a cumulative ACK of a previously SACKed skb. The FRTO code should only consider a cumulative ACK to indicate that an original/unretransmitted skb is newly ACKed if the skb was not yet SACKed. The effect of the spurious FRTO undo would typically be to make the connection think that all previously-sent packets were in flight when they really weren't, leading to a stall and an RTO. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Fixes: e33099f9 ("tcp: implement RFC5682 F-RTO") Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
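A toy restatement of the tightened check (names are illustrative; the real test lives in the TCP ACK-processing path and uses the skb's SACKed/retransmit flags):

    #include <stdbool.h>
    #include <stdio.h>

    /* A cumulatively ACKed skb only counts as "original data newly
     * delivered" (the FRTO-undo signal) if it was never retransmitted
     * and had not already been SACKed. */
    static bool newly_acked_original(bool ever_retransmitted, bool already_sacked)
    {
        return !ever_retransmitted && !already_sacked;
    }

    int main(void)
    {
        /* previously SACKed skb being cumulatively ACKed: must NOT count */
        printf("%d\n", newly_acked_original(false, true));   /* 0 */
        /* never-retransmitted, never-SACKed skb: genuine evidence */
        printf("%d\n", newly_acked_original(false, false));  /* 1 */
        return 0;
    }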
-
Jonathan Davies authored
[ Upstream commit 0c36820e ] xen-netfront limits transmitted skbs to be at most 44 segments in size. However, GSO permits up to 65536 bytes, which means a maximum of 45 segments of 1448 bytes each. This slight reduction in the size of packets means a slight loss in efficiency. Since c/s 9ecd1a75, xen-netfront sets gso_max_size to XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER, where XEN_NETIF_MAX_TX_SIZE is 65535 bytes. The calculation used by tcp_tso_autosize (and also tcp_xmit_size_goal since c/s 6c09fa09) in determining when to split an skb into two is sk->sk_gso_max_size - 1 - MAX_TCP_HEADER. So the maximum permitted size of an skb is calculated to be (XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER) - 1 - MAX_TCP_HEADER. Intuitively, this looks like the wrong formula -- we don't need two TCP headers. Instead, there is no need to deviate from the default gso_max_size of 65536 as this already accommodates the size of the header. Currently, the largest skb transmitted by netfront is 63712 bytes (44 segments of 1448 bytes each), as observed via tcpdump. This patch makes netfront send skbs of up to 65160 bytes (45 segments of 1448 bytes each). Similarly, the maximum allowable mtu does not need to subtract MAX_TCP_HEADER as it relates to the size of the whole packet, including the header. Fixes: 9ecd1a75 ("xen-netfront: reduce gso_max_size to account for max TCP header") Signed-off-by: Jonathan Davies <jonathan.davies@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
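The arithmetic can be reproduced with a few lines of C; MAX_TCP_HEADER is kernel-configuration dependent, so 320 below is only a representative value that happens to reproduce the 44-vs-45 segment figures quoted above.

    #include <stdio.h>

    #define MSS                   1448
    #define MAX_TCP_HEADER        320     /* assumed representative value */
    #define XEN_NETIF_MAX_TX_SIZE 65535

    int main(void)
    {
        /* old: gso_max_size was lowered to XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER */
        int old_goal = (XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER) - 1 - MAX_TCP_HEADER;
        /* new: the default gso_max_size of 65536 is kept */
        int new_goal = 65536 - 1 - MAX_TCP_HEADER;

        printf("old: %d segments, %d bytes\n", old_goal / MSS, (old_goal / MSS) * MSS);
        printf("new: %d segments, %d bytes\n", new_goal / MSS, (new_goal / MSS) * MSS);
        return 0;   /* prints 44 / 63712 and 45 / 65160 */
    }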
-
Anton Nayshtut authored
[ Upstream commit f5e2dc5d ] Before commit 3900f290 ("bonding: slight optimizztion for bond_slave_override()") the override logic was to send packets with non-zero queue_id through the slave with corresponding queue_id, under two conditions only - if the slave can transmit and it's up. The above mentioned commit changed this logic by introducing an additional condition - whether the bond is active (indirectly, using the slave_can_tx and later - bond_is_active_slave), that prevents the user from implementing more complex policies according to the Documentation/networking/bonding.txt. Signed-off-by: Anton Nayshtut <anton@swortex.com> Signed-off-by: Alexey Bogoslavsky <alexey@swortex.com> Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alexey Kodanev authored
[ Upstream commit 4ad19de8 ] tcp_v6_fill_cb() will be called twice if socket's state changes from TCP_TIME_WAIT to TCP_LISTEN. That can result in control buffer data corruption because in the second tcp_v6_fill_cb() call it's not copying IP6CB(skb) anymore, but 'seq', 'end_seq', etc., so we can get weird and unpredictable results. Performance loss of up to 1200% has been observed in LTP/vxlan03 test. This can be fixed by copying inet6_skb_parm to the beginning of 'cb' only if xfrm6_policy_check() and tcp_v6_fill_cb() are going to be called again. Fixes: 2dc49d16 ("tcp6: don't move IP6CB before xfrm6_policy_check()") Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
D.S. Ljungmark authored
[ Upstream commit 6fd99094 ] A local route may have a lower hop_limit set than global routes do. RFC 3756, Section 4.2.7, "Parameter Spoofing"

> 1. The attacker includes a Current Hop Limit of one or another small
>    number which the attacker knows will cause legitimate packets to
>    be dropped before they reach their destination.
>
> As an example, one possible approach to mitigate this threat is to
> ignore very small hop limits. The nodes could implement a
> configurable minimum hop limit, and ignore attempts to set it below
> said limit.

Signed-off-by: D.S. Ljungmark <ljungmark@modio.se> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
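A minimal sketch of the mitigation suggested by the RFC text (the function name and the min_hopcount knob are illustrative, not the kernel's):

    #include <stdio.h>

    /* Accept a router-advertised hop limit only if it is not below a
     * configurable minimum; suspiciously small values are ignored. */
    static int apply_advertised_hop_limit(int current_limit, int advertised,
                                          int min_hopcount)
    {
        if (advertised != 0 && advertised < min_hopcount)
            return current_limit;   /* ignore the spoofed/small value */
        return advertised ? advertised : current_limit;
    }

    int main(void)
    {
        printf("%d\n", apply_advertised_hop_limit(64, 1, 10));   /* keeps 64 */
        printf("%d\n", apply_advertised_hop_limit(64, 32, 10));  /* accepts 32 */
        return 0;
    }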
-
Ido Shamay authored
[ Upstream commit e5eda89d ] Netdevice registration should be performed at the end of the driver initialization flow. If we don't do that, after calling register_netdevice, device callbacks may be issued by higher layers of the stack before final configuration of the device is done. For example (VXLAN configuration race), mlx4_SET_PORT_VXLAN was issued after the register_netdev command. System network scripts may configure the interface (UP) right after the registration, which also attaches a unicast VXLAN steering rule, before mlx4_SET_PORT_VXLAN was called, causing the firmware to fail the rule attachment. Fixes: 837052d0 ("net/mlx4_en: Add netdev support for TCP/IP offloads of vxlan tunneling") Signed-off-by: Ido Shamay <idos@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Michal Kubeček authored
[ Upstream commit d0c294c5 ] On s390x, gcc 4.8 compiles this part of tcp_v6_early_demux()

    struct dst_entry *dst = sk->sk_rx_dst;

    if (dst)
        dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);

to code reading sk->sk_rx_dst twice, once for the test and once for the argument of ip6_dst_check() (dst_check() is inline). This allows ip6_dst_check() to be called with a NULL first argument, causing a crash. Protect sk->sk_rx_dst access by READ_ONCE() both in IPv4 and IPv6 TCP early demux code. Fixes: 41063e9d ("ipv4: Early TCP socket demux.") Fixes: c7109986 ("ipv6: Early TCP socket demux") Signed-off-by: Michal Kubecek <mkubecek@suse.cz> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
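A simplified userspace model of the pattern the fix enforces: take one volatile snapshot of the pointer and only ever use that snapshot (the types and the cookie comparison are stand-ins for the real dst_check() path).

    #include <stddef.h>
    #include <stdio.h>

    /* stand-in types for illustration only */
    struct dst_entry { unsigned int cookie; };
    struct sock { struct dst_entry *sk_rx_dst; };

    /* simplified READ_ONCE: force a single volatile load of the pointer */
    #define READ_ONCE_SKETCH(x)  (*(volatile __typeof__(x) *)&(x))

    static struct dst_entry *early_demux_dst(struct sock *sk, unsigned int cookie)
    {
        /* one snapshot; the miscompiled code effectively re-read
         * sk->sk_rx_dst for the argument, which could have become NULL */
        struct dst_entry *dst = READ_ONCE_SKETCH(sk->sk_rx_dst);

        if (dst && dst->cookie != cookie)   /* stand-in for dst_check() */
            dst = NULL;
        return dst;
    }

    int main(void)
    {
        struct dst_entry d = { .cookie = 42 };
        struct sock sk = { .sk_rx_dst = &d };

        printf("%p\n", (void *)early_demux_dst(&sk, 42));
        return 0;
    }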
-
Christian Borntraeger authored
[ Upstream commit 43239cbe ] Feedback has shown that WRITE_ONCE(x, val) is easier to use than ASSIGN_ONCE(val,x). There are no in-tree users yet, so let's change it for 3.19. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Davidlohr Bueso <dave@stgolabs.net> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Christian Borntraeger authored
[ Upstream commit 230fa253 ] ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Let's provide READ_ONCE/ASSIGN_ONCE that will do all accesses via scalar types as suggested by Linus Torvalds. Accesses larger than the machine's word size cannot be guaranteed to be atomic. These macros will use memcpy and emit a build warning. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
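A simplified userspace sketch of the idea, assuming GCC extensions (statement expressions, __typeof__): route the access through a volatile pointer of matching scalar width, and fall back to memcpy for anything larger, which, as noted above, cannot be atomic.

    #include <stdio.h>
    #include <string.h>

    /* perform the access through a volatile scalar of the right width so
     * the compiler can neither tear nor elide it */
    static void __read_once_size_sketch(const volatile void *p, void *res, int size)
    {
        switch (size) {
        case 1: *(char *)res = *(volatile char *)p; break;
        case 2: *(short *)res = *(volatile short *)p; break;
        case 4: *(int *)res = *(volatile int *)p; break;
        case 8: *(long long *)res = *(volatile long long *)p; break;
        default:
            memcpy(res, (const void *)p, size);   /* not atomic */
        }
    }

    #define READ_ONCE_SKETCH(x)                                      \
        ({ __typeof__(x) __val;                                      \
           __read_once_size_sketch(&(x), &__val, sizeof(__val));     \
           __val; })

    int main(void)
    {
        int shared = 7;

        printf("%d\n", READ_ONCE_SKETCH(shared));
        return 0;
    }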
-
- 24 Apr, 2015 26 commits
-
-
Igor Mammedov authored
[ Upstream commit 74496134 ] A KVM guest can fail to start up with the following trace on the host:

qemu-system-x86: page allocation failure: order:4, mode:0x40d0
Call Trace:
 dump_stack+0x47/0x67
 warn_alloc_failed+0xee/0x150
 __alloc_pages_direct_compact+0x14a/0x150
 __alloc_pages_nodemask+0x776/0xb80
 alloc_kmem_pages+0x3a/0x110
 kmalloc_order+0x13/0x50
 kmemdup+0x1b/0x40
 __kvm_set_memory_region+0x24a/0x9f0 [kvm]
 kvm_set_ioapic+0x130/0x130 [kvm]
 kvm_set_memory_region+0x21/0x40 [kvm]
 kvm_vm_ioctl+0x43f/0x750 [kvm]

The failure happens when attempting to allocate pages for 'struct kvm_memslots'; however, it doesn't have to be present in physically contiguous (kmalloc-ed) address space. Change the allocation to kvm_kvzalloc() so that it will be vmalloc-ed when its size is more than a page. Signed-off-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dave Chinner authored
[ Upstream commit 5885ebda ] A new fsync vs power fail test in xfstests indicated that XFS can have unreliable data consistency when doing extending truncates that require block zeroing. The blocks beyond EOF get zeroed in memory, but we never force those changes to disk before we run the transaction that extends the file size and exposes those blocks to userspace. This can result in the blocks not being correctly zeroed after a crash. Because in-memory behaviour is correct, tools like fsx don't pick up any coherency problems - it's not until the filesystem is shut down or the system crashes after writing the truncate transaction to the journal but before the zeroed data in the page cache is flushed that the issue is exposed. Fix this by also flushing the dirty data in the memory region between the old size and new size when we've found blocks that need zeroing in the truncate process. Reported-by: Liu Bo <bo.li.liu@oracle.com> cc: <stable@vger.kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Omar Sandoval authored
[ Upstream commit 6f30b7e3 ] Commit 4f579ae7 (ext4: fix punch hole on files with indirect mapping) rewrote FALLOC_FL_PUNCH_HOLE for ext4 files with indirect mapping. However, there are bugs in several corner cases. This fixes 5 distinct bugs:

1. When there is at least one entire level of indirection between the start and end of the punch range and the end of the punch range is the first block of its level, we can't return early; we have to free the intervening levels.

2. When the end is at a higher level of indirection than the start and ext4_find_shared returns a top branch for the end, we still need to free the rest of the shared branch it returns; we can't decrement partial2.

3. When a punch happens within one level of indirection, we need to converge on an indirect block that contains the start and end. However, because the branches returned from ext4_find_shared do not necessarily start at the same level (e.g., the partial2 chain will be shallower if the last block occurs at the beginning of an indirect group), the walk of the two chains can end up "missing" each other and freeing a bunch of extra blocks in the process. This mismatch can be handled by first making sure that the chains are at the same level, then walking them together until they converge.

4. When the punch happens within one level of indirection and ext4_find_shared returns a top branch for the start, we must free it, but only if the end does not occur within that branch.

5. When the punch happens within one level of indirection and ext4_find_shared returns a top branch for the end, then we shouldn't free the block referenced by the end of the returned chain (this mirrors the different levels case).

Signed-off-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Preeti U Murthy authored
[ Upstream commit a127d2bc ] The hrtimer mode of broadcast queues hrtimers in the idle entry path so as to wake up cpus in deep idle states. The associated call graph is:

    cpuidle_idle_call()
    |____ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, ....))
          |_____tick_broadcast_set_event()
                |____clockevents_program_event()
                      |____bc_set_next()

The hrtimer_{start/cancel} functions call into tracing which uses RCU. But it is not legal to call into RCU in cpuidle because it is one of the quiescent states. Hence protect this region with RCU_NONIDLE which informs RCU that the cpu is momentarily non-idle. As an aside it is helpful to point out that the clock event device that is programmed here is not a per-cpu clock device; it is a pseudo clock device, used by the broadcast framework alone. The per-cpu clock device programming never goes through bc_set_next(). Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: linuxppc-dev@ozlabs.org Cc: mpe@ellerman.id.au Cc: tglx@linutronix.de Link: http://lkml.kernel.org/r/20150318104705.17763.56668.stgit@preeti.in.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Majd Dibbiny authored
[ Upstream commit 61a3855b ] For RoCE ports, we set the u32 PMA values based on u64 HCA counters. In case of overflow, according to the IB spec, we have to saturate a counter to its max value; do that. Fixes: c3779134 ('IB/mlx4: Support PMA counters for IBoE') Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
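A minimal sketch of the saturation rule (the helper name is illustrative): a u64 hardware counter reported through a u32 PMA field is pinned at the field's maximum instead of wrapping.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* pin an overflowing 64-bit counter at the 32-bit field's maximum */
    static uint32_t pma_saturate_u32(uint64_t hw_counter)
    {
        return hw_counter > UINT32_MAX ? UINT32_MAX : (uint32_t)hw_counter;
    }

    int main(void)
    {
        printf("%" PRIu32 "\n", pma_saturate_u32(1000ULL));         /* 1000 */
        printf("%" PRIu32 "\n", pma_saturate_u32(0x123456789ULL));  /* 4294967295 */
        return 0;
    }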
-
Uwe Kleine-König authored
[ Upstream commit da321133 ] The rate provided at the output of a clk-divider is calculated as:

    DIV_ROUND_UP(parent_rate, div)

since commit b11d282d (clk: divider: fix rate calculation for fractional rates). So to yield a rate not bigger than r, parent_rate must be <= r * div. The effect of choosing a parent rate that is too big, as was done before this patch, is that good dividers are wrongly ruled out. Note that this is not a complete fix as __clk_round_rate might return a value >= its 2nd parameter. Also for dividers with CLK_DIVIDER_ROUND_CLOSEST set the calculation is not accurate. But this fixes the test case by Sascha Hauer that uses a chain of three dividers under a fixed clock. Fixes: b11d282d (clk: divider: fix rate calculation for fractional rates) Suggested-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Michael Turquette <mturquette@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Uwe Kleine-König authored
[ Upstream commit 26bac95a ] It's an invalid approach to assume that, of two divider values, the one nearer the exact divider is the better one. Assume a parent rate of 1000 Hz, a divider with CLK_DIVIDER_POWER_OF_TWO and a target rate of 89 Hz. The exact divider is ~11.236, so 8 and 16 are the candidates to choose from, yielding rates of 125 Hz and 62.5 Hz respectively. While 8 is nearer to 11.236 than 16 is, the latter is still the better divider as 62.5 is nearer to 89 than 125 is. Fixes: 774b5143 (clk: divider: Add round to closest divider) Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Sascha Hauer <s.hauer@pengutronix.de> Acked-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Michael Turquette <mturquette@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
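The example above can be checked with a short C program that picks the divider by comparing achieved rates to the target, rather than divider values to the exact divider:

    #include <stdio.h>

    /* parent = 1000 Hz, target = 89 Hz: among power-of-two dividers,
     * 16 (62 Hz with integer division) is closer to the target than
     * 8 (125 Hz), even though 8 is nearer to the exact divider ~11.236 */
    int main(void)
    {
        unsigned long parent = 1000, target = 89;
        unsigned long best_div = 1, best_err = (unsigned long)-1;

        for (unsigned long div = 1; div <= 64; div <<= 1) {
            unsigned long rate = parent / div;
            unsigned long err = rate > target ? rate - target : target - rate;

            if (err < best_err) {
                best_err = err;
                best_div = div;
            }
        }
        printf("best divider: %lu (rate %lu Hz)\n", best_div, parent / best_div);
        return 0;   /* prints 16, not 8 */
    }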
-
Hans Verkuil authored
[ Upstream commit 0e661006 ] Stopping the vb2 thread (as used by several DVB devices) can result in an 'UNBALANCED' warning such as this:

vb2: counters for queue ffff880407ee9828: UNBALANCED!
vb2: setup: 1 start_streaming: 1 stop_streaming: 1
vb2: wait_prepare: 249333 wait_finish: 249334

This is due to a race condition between stopping the thread and calling vb2_internal_streamoff(). While I have not been able to deduce the exact mechanism how this race condition can produce this warning, I can see that the way the stream is stopped is likely to lead to a race somewhere. This patch simplifies how this is done by first ensuring that the thread is completely stopped before cleaning up the vb2 queue. It does that by setting threadio->stop to true, followed by a call to vb2_queue_error() which will wake up the thread. The thread sees that 'stop' is true and it will exit. The call to kthread_stop() waits until the thread has exited, and only then is the queue cleaned up by calling __vb2_cleanup_fileio(). This is a much cleaner sequence and the warning has now disappeared. Reported-by: Jurgen Kramer <gtmkramer@xs4all.nl> Tested-by: Jurgen Kramer <gtmkramer@xs4all.nl> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Cc: <stable@vger.kernel.org> # for v3.18 and up Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Geert Uytterhoeven authored
[ Upstream commit 8e48a2d5 ] Unlike scan_async_group(), soc_of_bind() doesn't allocate its soc_camera_async_client structure using devm_kzalloc(), but has it embedded inside the soc_of_info structure. Hence on failure, it must free the whole soc_of_info structure, and not just the embedded soc_camera_async_client structure, as the latter causes a warning, and may cause slab corruption:

soc-camera-pdrv soc-camera-pdrv.0: Probing soc-camera-pdrv.0
------------[ cut here ]------------
WARNING: CPU: 0 PID: 1 at drivers/base/devres.c:887 devm_kfree+0x30/0x40()
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-shmobile-08386-g37feb0d093cb2d8e #128
Hardware name: Generic R8A7791 (Flattened Device Tree)
Backtrace:
[<c0011e7c>] (dump_backtrace) from [<c0012024>] (show_stack+0x18/0x1c)
 r6:c05a923b r5:00000009 r4:00000000 r3:00204140
[<c001200c>] (show_stack) from [<c048ed30>] (dump_stack+0x78/0x94)
[<c048ecb8>] (dump_stack) from [<c002687c>] (warn_slowpath_common+0x8c/0xb8)
 r4:00000000 r3:00000000
[<c00267f0>] (warn_slowpath_common) from [<c0026980>] (warn_slowpath_null+0x24/0x2c)
 r8:ee7d8214 r7:ed83b810 r6:ed83bc20 r5:fffffffa r4:ed83e510
[<c002695c>] (warn_slowpath_null) from [<c025e0cc>] (devm_kfree+0x30/0x40)
[<c025e09c>] (devm_kfree) from [<c032bbf4>] (soc_of_bind.isra.14+0x194/0x1d4)
[<c032ba60>] (soc_of_bind.isra.14) from [<c032c6b8>] (soc_camera_host_register+0x208/0x31c)
 r9:00000070 r8:ee7e05d0 r7:ee153210 r6:00000000 r5:ee7e0218 r4:ed83bc20
[<c032c4b0>] (soc_camera_host_register) from [<c032e80c>] (rcar_vin_probe+0x1f4/0x238)
 r8:ee153200 r7:00000008 r6:ee153210 r5:ed83bc10 r4:c066319c r3:000000c0
[<c032e618>] (rcar_vin_probe) from [<c025c334>] (platform_drv_probe+0x50/0xa0)
 r10:00000000 r9:c0662fa8 r8:00000000 r7:c06a3700 r6:c0662fa8 r5:ee153210 r4:00000000
[<c025c2e4>] (platform_drv_probe) from [<c025af08>] (driver_probe_device+0xc4/0x208)
 r6:c06a36f4 r5:00000000 r4:ee153210 r3:c025c2e4
[<c025ae44>] (driver_probe_device) from [<c025b108>] (__driver_attach+0x70/0x94)
 r9:c066f9c0 r8:c0624a98 r7:c065b790 r6:c0662fa8 r5:ee153244 r4:ee153210
[<c025b098>] (__driver_attach) from [<c025984c>] (bus_for_each_dev+0x74/0x98)
 r6:c025b098 r5:c0662fa8 r4:00000000 r3:00000001
[<c02597d8>] (bus_for_each_dev) from [<c025b1dc>] (driver_attach+0x20/0x28)
 r6:ed83c200 r5:00000000 r4:c0662fa8
[<c025b1bc>] (driver_attach) from [<c025a00c>] (bus_add_driver+0xdc/0x1c4)
[<c0259f30>] (bus_add_driver) from [<c025b8f4>] (driver_register+0xa4/0xe8)
 r7:c0624a98 r6:00000000 r5:c060b010 r4:c0662fa8
[<c025b850>] (driver_register) from [<c025ccd0>] (__platform_driver_register+0x50/0x64)
 r5:c060b010 r4:ed8394c0
[<c025cc80>] (__platform_driver_register) from [<c060b028>] (rcar_vin_driver_init+0x18/0x20)
[<c060b010>] (rcar_vin_driver_init) from [<c05edde8>] (do_one_initcall+0x108/0x1b8)
[<c05edce0>] (do_one_initcall) from [<c05edfb4>] (kernel_init_freeable+0x11c/0x1e4)
 r9:c066f9c0 r8:c066f9c0 r7:c062eab0 r6:c06252c4 r5:000000ad r4:00000006
[<c05ede98>] (kernel_init_freeable) from [<c048c3d0>] (kernel_init+0x10/0xec)
 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:c048c3c0 r4:00000000
[<c048c3c0>] (kernel_init) from [<c000eba0>] (ret_from_fork+0x14/0x34)
 r4:00000000 r3:ee04e000
---[ end trace e3a984cc0335c8a0 ]---
rcar_vin e6ef1000.video: group probe failed: -6

Fixes: 1ddc6a6c ("[media] soc_camera: add support for dt binding soc_camera drivers") Cc: <stable@vger.kernel.org> # 3.17+ Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Marek Szyprowski authored
[ Upstream commit 05b676ab ] TASK_SIZE depends on the system's architecture (32 or 64 bits) and should not be used for defining the offset boundary for mmapping buffers for CAPTURE and OUTPUT queues. This patch fixes support for MMAP calls on the CAPTURE queue on 64-bit architectures (like ARM64). Cc: stable@vger.kernel.org Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kamil Debski <k.debski@samsung.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hans Verkuil authored
[ Upstream commit ab312030 ] The v4l2_dev field of struct video_device must be set correctly. This was never done for this driver, so no video nodes were created anymore. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Cc: <stable@vger.kernel.org> # for v3.11 and up Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mike Christie authored
[ Upstream commit b815fc12 ] This fixes an oops due to a double list add when adding a reject PDU for iscsit_allocate_iovecs allocation failures. The cmd has already been added to the conn_cmd_list in iscsit_setup_scsi_cmd, so this has us call iscsit_reject_cmd. Note that for ERL0 the reject PDU is not actually sent, so this patch is not completely tested. Just verified we do not oops. The problem is that the add-reject functions return -1, which is returned all the way up to iscsi_target_rx_thread, which for ERL0 will drop the connection. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Cc: <stable@vger.kernel.org> # v3.10+ Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Al Viro authored
[ Upstream commit deeb8525 ] If we fail past the aio_setup_ring(), we need to destroy the mapping. We don't need to care about anybody having found ctx, or added requests to it, since the last failure exit is exactly the failure to make ctx visible to lookups. Reproducer (based on one by Joe Mario <jmario@redhat.com>):

    void count(char *p)
    {
        char s[80];
        printf("%s: ", p);
        fflush(stdout);
        sprintf(s, "/bin/cat /proc/%d/maps|/bin/fgrep -c '/[aio] (deleted)'", getpid());
        system(s);
    }

    int main()
    {
        io_context_t *ctx;
        int created, limit, i, destroyed;
        FILE *f;

        count("before");
        if ((f = fopen("/proc/sys/fs/aio-max-nr", "r")) == NULL)
            perror("opening aio-max-nr");
        else if (fscanf(f, "%d", &limit) != 1)
            fprintf(stderr, "can't parse aio-max-nr\n");
        else if ((ctx = calloc(limit, sizeof(io_context_t))) == NULL)
            perror("allocating aio_context_t array");
        else {
            for (i = 0, created = 0; i < limit; i++) {
                if (io_setup(1000, ctx + created) == 0)
                    created++;
            }
            for (i = 0, destroyed = 0; i < created; i++)
                if (io_destroy(ctx[i]) == 0)
                    destroyed++;
            printf("created %d, failed %d, destroyed %d\n",
                   created, limit - created, destroyed);
            count("after");
        }
    }

Found-by: Joe Mario <jmario@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
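For what it's worth, the reproducer above is plain C against libaio, which provides io_setup()/io_destroy() and the io_context_t type; it should build with something like "gcc repro.c -laio" once the usual <libaio.h>, <stdio.h>, <stdlib.h> and <unistd.h> includes (omitted in the commit message) are added.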
-
Al Viro authored
[ Upstream commit 64b4e252 ] "ocfs2 syncs the wrong range" had been broken; prior to it the code was doing the wrong thing in case of O_APPEND, all right, but _after_ it we were syncing the wrong range in 100% cases. *ppos, aka iocb->ki_pos is incremented prior to that point, so we are always doing sync on the area _after_ the one we'd written to. Spotted by Joseph Qi <joseph.qi@huawei.com> back in January; unfortunately, I'd missed his mail back then ;-/ Cc: stable@vger.kernel.org Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
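A toy illustration of the range mix-up (variable names only, not the ocfs2 code): ki_pos has already been advanced past the freshly written bytes when the sync range is computed, so the range must be taken behind the current position, not ahead of it.

    #include <stdio.h>

    int main(void)
    {
        long long written = 512;
        long long ki_pos = 4096 + written;   /* position after the write */

        /* broken: syncs the region after the data that was written */
        printf("wrong range: [%lld, %lld]\n", ki_pos, ki_pos + written - 1);
        /* fixed: syncs the region that was actually written */
        printf("right range: [%lld, %lld]\n", ki_pos - written, ki_pos - 1);
        return 0;
    }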
-
John Soni Jose authored
[ Upstream commit 2e7cee02 ] Kernel panic was happening as iscsi_host_remove() was called on a host which was not yet added. Signed-off-by: John Soni Jose <sony.john-n@emulex.com> Reviewed-by: Mike Christie <michaelc@cs.wisc.edu> Cc: <stable@vger.kernel.org> Signed-off-by: James Bottomley <JBottomley@Odin.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Rafael J. Wysocki authored
[ Upstream commit f82daee4 ] Commit 84c91b7a (PM / hibernate: avoid unsafe pages in e820 reserved regions) is reported to make resume from hibernation on Lenovo x230 unreliable, so revert it. We will revisit the issue that the commit in question was supposed to fix in the future. Link: https://bugzilla.kernel.org/show_bug.cgi?id=96111 Reported-by: rhn <kebuac.rhn@porcupinefactory.org> Cc: 3.17+ <stable@vger.kernel.org> # 3.17+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Benjamin Herrenschmidt authored
[ Upstream commit 41d94893 ] The "sdc" node is missing the ranges property; it needs to be treated as having an empty one, otherwise translation fails for its children. Fixes 746c9e9f, "of/base: Fix PowerPC address parsing hack" Tested-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Grant Likely <grant.likely@linaro.org> Cc: Stable <stable@vger.kernel.org> # v3.18+ Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Larry Finger authored
[ Upstream commit be0b5e63 ] Transmission of an AP beacon does not call the TX interrupt service routine, which usually does the cleanup. Instead, cleanup is handled in a tasklet completion routine. Unfortunately, this routine has a serious bug in that it does not release the DMA mapping before it frees the skb, thus one IOMMU mapping is leaked for each beacon. The test system failed with no free IOMMU mapping slots approximately one hour after hostapd was used to start an AP. This issue was reported and tested at https://github.com/lwfinger/rtlwifi_new/issues/30. Reported-and-tested-by: Kevin Mullican <kevin@mullican.com> Cc: Kevin Mullican <kevin@mullican.com> Signed-off-by: Shao Fu <shaofu@realtek.com> Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Cc: Stable <stable@vger.kernel.org> [3.18+] Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Alex Williamson authored
[ Upstream commit 71684406 ] Device domains never span IOMMU hardware units, which allows the domain ID space for each IOMMU to be an independent address space. Therefore we can have multiple, independent domains, each with the same domain->id, but attached to different hardware units. This is also why we need to do a heavy-weight search for VM domains since they can span multiple IOMMU hardware units and we don't require a single global ID to use for all hardware units. Therefore, if we call iommu_detach_domain() across all active IOMMU hardware units for a non-VM domain, the result is that we clear domain IDs that are not associated with our domain, allowing them to be re-allocated and causing apparent coherency issues when the device cannot access IOVAs for the intended domain. This bug was introduced in commit fb170fb4 ("iommu/vt-d: Introduce helper functions to make code symmetric for readability"), but is significantly exacerbated by the more recent commit 62c22167 ("iommu/vt-d: Fix dmar_domain leak in iommu_attach_device") which calls domain_exit() more frequently to resolve a domain leak. Fixes: fb170fb4 ("iommu/vt-d: Introduce helper functions to make code symmetric for readability") Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: stable@vger.kernel.org # v3.17+ Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
David Disseldorp authored
[ Upstream commit e1e9bda2 ] Under intermittent network outages, find_writable_file() is susceptible to the following race condition, which results in a use-after-free in the cifs_writepages code-path:

Thread 1                                    Thread 2
========                                    ========
inv_file = NULL
refind = 0
spin_lock(&cifs_file_list_lock)
// invalidHandle found on openFileList
inv_file = open_file
// inv_file->count currently 1
cifsFileInfo_get(inv_file)
// inv_file->count = 2
spin_unlock(&cifs_file_list_lock);
cifs_reopen_file()                          cifs_close()
// fails (rc != 0)                          ->cifsFileInfo_put()
                                            spin_lock(&cifs_file_list_lock)
                                            // inv_file->count = 1
                                            spin_unlock(&cifs_file_list_lock)
spin_lock(&cifs_file_list_lock);
list_move_tail(&inv_file->flist, &cifs_inode->openFileList);
spin_unlock(&cifs_file_list_lock);
cifsFileInfo_put(inv_file);
->spin_lock(&cifs_file_list_lock)
  // inv_file->count = 0
  list_del(&cifs_file->flist);
  // cleanup!!
  kfree(cifs_file);
  spin_unlock(&cifs_file_list_lock);
spin_lock(&cifs_file_list_lock);
++refind;
// refind = 1
goto refind_writable;

At this point we loop back through with an invalid inv_file pointer and a refind value of 1. On second pass, inv_file is not overwritten on openFileList traversal, and is subsequently dereferenced. Signed-off-by: David Disseldorp <ddiss@suse.de> Reviewed-by: Jeff Layton <jlayton@samba.org> CC: <stable@vger.kernel.org> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sachin Prabhu authored
[ Upstream commit 2477bc58 ] While attempting to clone a file on a samba server, we receive a STATUS_INVALID_DEVICE_REQUEST. This is mapped to -EOPNOTSUPP, which isn't handled in smb2_clone_range(). We end up looping in the while loop making the same call to the samba server over and over again. The proposed fix is to exit and return the error value when an unhandled error is encountered. Cc: <stable@vger.kernel.org> Signed-off-by: Sachin Prabhu <sprabhu@redhat.com> Signed-off-by: Steve French <steve.french@primarydata.com> Signed-off-by: Steve French <smfrench@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Agner authored
[ Upstream commit 8e4934c6 ] If the receiver was enabled during startup, a character could already be in the FIFO when the UART is first used. The driver configures the (receive) watermark level, and flushes the FIFO. However, the receive flag (RDRF) could still be set at that stage (as mentioned in the register description of UARTx_RWFIFO). This leads to an interrupt which won't be handled properly in interrupt mode: the receive interrupt function lpuart_rxint checks the FIFO count, which is 0 at that point (due to the flush during initialization). The problem does not manifest when using DMA to receive characters. Fix this situation by explicitly reading the status register, which leads to clearing of the RDRF flag. Due to the flush just after the status flag read, an explicit data read is not required. Signed-off-by: Stefan Agner <stefan@agner.ch> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Stefan Agner authored
[ Upstream commit 4e8f2459 ] Specify the transmit FIFO size, which might differ depending on the LPUART instance. This makes sure uart_wait_until_sent in the serial core gets called, which in turn waits and checks if the FIFO is really empty on shutdown by using the tx_empty callback. Without the call of this callback, the last several characters might not yet be transmitted when closing the serial port. This can be reproduced by simply using echo and redirecting the output to a ttyLP device. Signed-off-by: Stefan Agner <stefan@agner.ch> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lu Baolu authored
[ Upstream commit 227a4fd8 ] When a device with an isochronous endpoint is plugged into the Intel xHCI host controller, and the driver submits multiple frames per URB, the xHCI driver will set the Block Event Interrupt (BEI) flag on all but the last TD for the URB. This causes the host controller to place an event on the event ring, but not send an interrupt. When the last TD for the URB completes, BEI is cleared, and we get an interrupt for the whole URB. However, under Intel xHCI host controllers, if the event ring is full of events from transfers with BEI set, an "Event Ring is Full" event will be posted to the last entry of the event ring, but no interrupt is generated. The host will cease all transfer and command executions and wait until software completes handling the pending events in the event ring. That means the xHC stops, but the "event ring is full" event is never notified. As a result, the xHC looks dead to the user. This patch applies the XHCI_AVOID_BEI quirk to Intel xHC devices. It should be backported to kernels as old as 3.0 that contain commit 69e848c2 ("Intel xhci: Support EHCI/xHCI port switching."). Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Alistair Grant <akgrant0710@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lu Baolu authored
[ Upstream commit 9425183d ] The Linux xHCI driver doesn't report and handle port config error changes. If a Port Configure Error occurs for a root hub port, the CEC bit in PORTSC is set by the xHC and remains 1. This happens when the root port fails to configure its link partner, e.g. the port fails to exchange port capabilities information using Port Capability LMPs. Then the Port Status Change Events will be blocked until all status change bits (CEC is one of the change bits) are cleared ('0') (refer to xHCI spec 4.19.2). Otherwise, the port status change event for this root port will not be generated anymore, and the root port would look dead to the user and can't be recovered until a Host Controller Reset (HCRST). This patch checks the CEC bit in PORTSC in xhci_get_port_status() and sets a Config Error in the return status if CEC is set. This will cause a ClearPortFeature request, where the CEC bit is cleared in xhci_clear_port_change_bit(). [The commit log is based on the initial Marvell patch posted at http://marc.info/?l=linux-kernel&m=142323612321434&w=2] Reported-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Cc: stable <stable@vger.kernel.org> # v3.2+ Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Thomas Schlichter authored
[ Upstream commit c7e8bdf5 ] Fix a bug that leads to showing the name and description of C-state C0 as "<null>" in sysfs after the ACPI C-states changed (e.g. after an AC->DC or DC->AC transition). The function poll_idle_init() in drivers/cpuidle/driver.c initializes state 0 during cpuidle_register_driver(), so we had better not overwrite it again with '\0' during acpi_processor_cst_has_changed(). Signed-off-by: Thomas Schlichter <thomas.schlichter@web.de> Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: 3.13+ <stable@vger.kernel.org> # 3.13+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-