- 16 Jul, 2014 40 commits
-
Peter Ujfalusi authored
commit e6c111fa upstream. For some unknown reason the parameters for snd_soc_test_bits() were in the wrong order. It was:

    snd_soc_test_bits(codec, val, mask, reg); /* WRONG!!! */

while it should be:

    snd_soc_test_bits(codec, reg, mask, val);

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Christoph Hellwig authored
commit 12337901 upstream. Note nobody's ever noticed because the typical client probably never requests FILES_AVAIL without also requesting something else on the list.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Ming Lei authored
commit e8edca6f upstream. Firstly, it isn't necessary to hold the vblk->vq_lock when notifying the hypervisor about queued I/O. Secondly, virtqueue_notify() will cause a world switch and may take a long time on some hypervisors (such as qemu-arm), so it isn't good to hold the lock and block other vCPUs.

On an arm64 quad-core VM (qemu-kvm), the patch increases I/O performance a lot with VIRTIO_RING_F_EVENT_IDX enabled:
- without the patch: 14K IOPS
- with the patch: 34K IOPS

fio script:

    [global]
    direct=1
    bsrange=4k-4k
    timeout=10
    numjobs=4
    ioengine=libaio
    iodepth=64
    filename=/dev/vdc
    group_reporting=1

    [f1]
    rw=randread

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
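A minimal sketch of the locking pattern described above, assuming the driver's vblk->vq_lock and vblk->vq fields; it is illustrative only, not the exact patch:

    unsigned long flags;
    bool notify;

    spin_lock_irqsave(&vblk->vq_lock, flags);
    /* ... add the request's buffers to the virtqueue ... */
    notify = virtqueue_kick_prepare(vblk->vq);
    spin_unlock_irqrestore(&vblk->vq_lock, flags);

    /* the potentially slow hypervisor notification happens unlocked */
    if (notify)
            virtqueue_notify(vblk->vq);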
-
James Hogan authored
commit 7006e2df upstream. Each MIPS KVM guest has its own copy of the KVM exception vector. This contains the TLB refill exception handler at offset 0x000, the general exception handler at offset 0x180, and interrupt exception handlers at offset 0x200 in case Cause_IV=1. A common handler is copied to offset 0x2000 and offset 0x3000 is used for temporarily storing k1 during entry from the guest. However the amount of memory allocated for this purpose is calculated as 0x200 rounded up to the next page boundary, which is insufficient if 4KB pages are in use. This can lead to the common handler at offset 0x2000 being overwritten and infinitely recursive exceptions on the next exit from the guest. Increase the minimum size from 0x200 to 0x4000 to cover the full use of the page.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Sanjay Lal <sanjayl@kymasys.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Yann Droneaud authored
commit b6f04d3d upstream. The i386 ABI disagrees with most other ABIs regarding alignment of data types larger than 4 bytes: on most ABIs padding must be added at the end of the structure, while it is not required on i386. So for most ABIs struct c4iw_create_cq_resp gets implicitly padded to be aligned on an 8-byte multiple, while for i386 such padding is not added.

The tool pahole can be used to find such implicit padding:

    $ pahole --anon_include \
             --nested_anon_include \
             --recursive \
             --class_name c4iw_create_cq_resp \
             drivers/infiniband/hw/cxgb4/iw_cxgb4.o

Then the structure layout can be compared between i386 and x86_64:

    +++ obj-i386/drivers/infiniband/hw/cxgb4/iw_cxgb4.o.pahole.txt   2014-03-28 11:43:05.547432195 +0100
    --- obj-x86_64/drivers/infiniband/hw/cxgb4/iw_cxgb4.o.pahole.txt 2014-03-28 10:55:10.990133017 +0100
    @@ -14,9 +13,8 @@ struct c4iw_create_cq_resp {
             __u32 size;       /*    28     4 */
             __u32 qid_mask;   /*    32     4 */
    -        /* size: 36, cachelines: 1, members: 6 */
    -        /* last cacheline: 36 bytes */
    +        /* size: 40, cachelines: 1, members: 6 */
    +        /* padding: 4 */
    +        /* last cacheline: 40 bytes */
     };

This ABI disagreement will make an x86_64 kernel try to write past the buffer provided by an i386 binary. When boundary checking is implemented, the x86_64 kernel will refuse to write past the i386 userspace provided buffer and the uverb will fail. If the structure is on a page boundary and the next page is not mapped, ib_copy_to_udata() will fail and the uverb will fail.

This patch adds explicit padding at the end of structure c4iw_create_cq_resp and, like 92b0ca7c ("IB/mlx5: Fix stack info leak in mlx5_ib_alloc_ucontext()"), makes function c4iw_create_cq() not write this padding field to userspace. This way, the x86_64 kernel will be able to write struct c4iw_create_cq_resp as expected by unpatched and patched i386 libcxgb4.

Link: http://marc.info/?i=cover.1399309513.git.ydroneaud@opteya.com
Fixes: cfdda9d7 ("RDMA/cxgb4: Add driver for Chelsio T4 RNIC")
Fixes: e24a72a3 ("RDMA/cxgb4: Fix four byte info leak in c4iw_create_cq()")
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
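The underlying ABI difference can be reproduced in isolation with a small standalone program (illustrative only; the struct and field names are placeholders, not the driver structure). Building it once with gcc -m32 and once with gcc -m64 prints 12 and 16 respectively:

    #include <stdio.h>
    #include <stdint.h>

    /* A 32-bit member following a 64-bit one, as in the response structure
     * discussed above. On i386 the struct is 12 bytes; most 64-bit ABIs add
     * 4 bytes of tail padding so the size is a multiple of 8. */
    struct demo_resp {
            uint64_t key;
            uint32_t size;
    };

    int main(void)
    {
            printf("sizeof(struct demo_resp) = %zu\n", sizeof(struct demo_resp));
            return 0;
    }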
-
Bart Van Assche authored
commit 8ec0a0e6 upstream. Avoid leaking a kref count in ib_umad_open() if port->ib_dev == NULL or if nonseekable_open() fails. Avoid leaking a kref count, that sm_sem is kept down and also that the IB_PORT_SM capability mask is not cleared in ib_umad_sm_open() if nonseekable_open() fails. Since container_of() never returns NULL, remove the code that tests whether container_of() returns NULL. Moving the kref_get() call from the start of ib_umad_*open() to the end is safe since it is the responsibility of the caller of these functions to ensure that the cdev pointer remains valid until at least when these functions return. Signed-off-by:
Bart Van Assche <bvanassche@acm.org> [ydroneaud@opteya.com: rework a bit to reduce the amount of code changed] Signed-off-by:
Yann Droneaud <ydroneaud@opteya.com> [ nonseekable_open() can't actually fail, but.... - Roland ] Signed-off-by:
Roland Dreier <roland@purestorage.com> Signed-off-by:
Kamal Mostafa <kamal@canonical.com>
-
Aleksander Morgado authored
commit 0ce5fb58 upstream. A set of new VID/PIDs retrieved from the out-of-tree GobiNet/GobiSerial Sierra Wireless drivers.
Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
Link: http://marc.info/?l=linux-usb&m=140136310027293&w=2
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Aleksander: backport to 3.13-stable ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Aleksander Morgado authored
commit ff1fcd50 upstream.
Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Aleksander: backport to 3.13-stable ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Radim Krčmář authored
commit a100d88d upstream. We try to free two pages when only one has been allocated. Cleanup path is unlikely, so I haven't found any trace that would fit, but I hope that free_pages_prepare() does catch it.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Amos Kong <akong@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Thomas Petazzoni authored
commit b7e46062 upstream. The pxa3xx_nand driver currently uses __raw_writel() and __raw_readl() to access I/O registers. However, those functions do not do any endianness swapping, which means that they won't work when the CPU runs in big-endian but the I/O registers are little-endian, which is the common situation for ARM systems running big-endian. Since __raw_writel() and __raw_readl() do not include any memory barriers and the pxa3xx_nand driver can only be compiled for ARM platforms, the closest I/O accessor functions that do endianness swapping are writel_relaxed() and readl_relaxed(). This patch has been verified to work on Armada XP GP: without the patch, the NAND is not detected when the kernel runs big-endian while it is properly detected when the kernel runs little-endian. With the patch applied, the NAND is properly detected in both situations (little and big endian).
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
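A minimal before/after sketch of the accessor substitution described above; the register names and the info->mmio_base base pointer here are placeholders, not the actual driver lines:

    /* before: no byte swapping, breaks big-endian kernels on LE hardware */
    __raw_writel(ctrl, info->mmio_base + DEMO_CTRL_REG);
    status = __raw_readl(info->mmio_base + DEMO_STATUS_REG);

    /* after: writel_relaxed()/readl_relaxed() swap to little-endian I/O
     * order but, like the __raw_* variants, omit memory barriers */
    writel_relaxed(ctrl, info->mmio_base + DEMO_CTRL_REG);
    status = readl_relaxed(info->mmio_base + DEMO_STATUS_REG);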
-
Arik Nemtsov authored
commit 923eaf36 upstream. Doing so will lead to an oops for a p2p-dev interface, since it has no netdev.
Signed-off-by: Arik Nemtsov <arikx.nemtsov@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Felix Fietkau authored
commit 53d04525 upstream. If the rate control algorithm uses a selection table, it is leaked when the station is destroyed - fix that.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Reported-by: Christophe Prévotaux <cprevotaux@nltinc.com>
Fixes: 0d528d85 ("mac80211: improve the rate control API")
[add commit log entry, remove pointless NULL check]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Christian Borntraeger authored
commit 993072ee upstream. The IRB might be 96 bytes if the extended-I/O-measurement facility is used. This feature is currently not used by Linux, but struct irb already has the emw defined. So let's make the irb in lowcore match the size of the internal data structure to be future proof. We also have to add a pad, to correctly align the paste. The bigger irb field also circumvents a bug in some QEMU versions that always write the emw field on test subchannel and therefore destroy the paste definitions of this CPU. Running under these QEMU versions broke some timing functions in the VDSO and all users of these functions, e.g. some JREs.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Huang Rui authored
commit e4d58f5d upstream. TEST 12 and TEST 24 unlink the URB write request N times. When host and gadget both initialize the pattern 1 (mod 63) data series for the transfer, the gadget side complains about unexpected data, because on the host side usbtest doesn't fill the data buffer with the mod 63 pattern. This patch fixes that.

    [20285.488974] dwc3 dwc3.0.auto: ep1out-bulk: Transfer Not Ready
    [20285.489181] dwc3 dwc3.0.auto: ep1out-bulk: reason Transfer Not Active
    [20285.489423] dwc3 dwc3.0.auto: ep1out-bulk: req ffff8800aa6cb480 dma aeb50800 length 512 last
    [20285.489727] dwc3 dwc3.0.auto: ep1out-bulk: cmd 'Start Transfer' params 00000000 a9eaf000 00000000
    [20285.490055] dwc3 dwc3.0.auto: Command Complete --> 0
    [20285.490281] dwc3 dwc3.0.auto: ep1out-bulk: Transfer Not Ready
    [20285.490492] dwc3 dwc3.0.auto: ep1out-bulk: reason Transfer Active
    [20285.490713] dwc3 dwc3.0.auto: ep1out-bulk: endpoint busy
    [20285.490909] dwc3 dwc3.0.auto: ep1out-bulk: Transfer Complete
    [20285.491117] dwc3 dwc3.0.auto: request ffff8800aa6cb480 from ep1out-bulk completed 512/512 ===> 0
    [20285.491431] zero gadget: bad OUT byte, buf[1] = 0
    [20285.491605] dwc3 dwc3.0.auto: ep1out-bulk: cmd 'Set Stall' params 00000000 00000000 00000000
    [20285.491915] dwc3 dwc3.0.auto: Command Complete --> 0
    [20285.492099] dwc3 dwc3.0.auto: queing request ffff8800aa6cb480 to ep1out-bulk length 512
    [20285.492387] dwc3 dwc3.0.auto: ep1out-bulk: Transfer Not Ready
    [20285.492595] dwc3 dwc3.0.auto: ep1out-bulk: reason Transfer Not Active
    [20285.492830] dwc3 dwc3.0.auto: ep1out-bulk: req ffff8800aa6cb480 dma aeb51000 length 512 last
    [20285.493135] dwc3 dwc3.0.auto: ep1out-bulk: cmd 'Start Transfer' params 00000000 a9eaf000 00000000
    [20285.493465] dwc3 dwc3.0.auto: Command Complete --> 0

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
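A minimal sketch of the mod-63 fill described above; it is assumed to mirror usbtest's "pattern 1" series, and the buffer and length names are placeholders:

    unsigned i;

    /* pattern 1: each byte carries its offset modulo 63, so a missing,
     * stale or corrupted packet is detected as a data mismatch on the
     * receiving side */
    for (i = 0; i < len; i++)
            buf[i] = (u8)(i % 63);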
-
Tomas Winkler authored
commit c40765d9 upstream. According to the spec, the host should read H_CSR again after asserting reset H_RST to ensure that the reset was read by the firmware.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ kamal: backport to 3.13-stable: also includes mei_me_hw_reset() change from 33ec0826 ("mei: revamp mei reset state machine") ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Tomas Winkler authored
commit 07cd7be3 upstream. It may take time until ME_RDY is cleared after the reset, so we cannot check the bit before we get the interrupt.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Tomas Winkler authored
commit b04ada92 upstream. We cleared H_RST in H_CSR on the spurious interrupt generated while ME_RDY was cleared, and not while ME_RDY is set. The spurious interrupt is not delivered on all platforms, in which case the driver may fail to initialize.
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ kamal: backport to 3.13-stable: context ]
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Dennis Dalessandro authored
commit 7e6d3e5c upstream. This patch addresses an issue where the legacy diagpacket is sent in from the user, but the driver operates on only the extended diagpkt. This patch specifically initializes the extended diagpkt based on the legacy packet.
Reported-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Mike Marciniszyn authored
commit 911eccd2 upstream. The code used a literal 1 in dispatching an IB_EVENT_PKEY_CHANGE. As of the dual port qib QDR card, this is not necessarily correct. Change to use the port as specified in the call.
Reported-by: Alex Estrin <alex.estrin@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Yann Droneaud authored
commit 43bc8893 upstream. The i386 ABI disagrees with most other ABIs regarding alignment of data types larger than 4 bytes: on most ABIs padding must be added at the end of the structure, while it is not required on i386. So for most ABIs struct mlx5_ib_create_srq gets implicitly padded to be aligned on an 8-byte multiple, while for i386 such padding is not added.

The tool pahole can be used to find such implicit padding:

    $ pahole --anon_include \
             --nested_anon_include \
             --recursive \
             --class_name mlx5_ib_create_srq \
             drivers/infiniband/hw/mlx5/mlx5_ib.o

Then the structure layout can be compared between i386 and x86_64:

    +++ obj-i386/drivers/infiniband/hw/mlx5/mlx5_ib.o.pahole.txt   2014-03-28 11:43:07.386413682 +0100
    --- obj-x86_64/drivers/infiniband/hw/mlx5/mlx5_ib.o.pahole.txt 2014-03-27 13:06:17.788472721 +0100
    @@ -69,7 +68,6 @@ struct mlx5_ib_create_srq {
             __u64 db_addr;    /*     8     8 */
             __u32 flags;      /*    16     4 */
    -        /* size: 20, cachelines: 1, members: 3 */
    -        /* last cacheline: 20 bytes */
    +        /* size: 24, cachelines: 1, members: 3 */
    +        /* padding: 4 */
    +        /* last cacheline: 24 bytes */
     };

This ABI disagreement will make an x86_64 kernel try to read past the buffer provided by an i386 binary. When boundary checking is implemented, the x86_64 kernel will refuse to read past the i386 userspace provided buffer and the uverb will fail. Anyway, if the structure lies in memory on a page boundary and the next page is not mapped, ib_copy_from_udata() will fail and the uverb will fail. This patch makes create_srq_user() take care of the input data size to handle the case where no padding was provided. This way, the x86_64 kernel will be able to handle struct mlx5_ib_create_srq as sent by unpatched and patched i386 libmlx5.

Link: http://marc.info/?i=cover.1399309513.git.ydroneaud@opteya.com
Fixes: e126ba97 ("mlx5: Add driver for Mellanox Connect-IB adapter")
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
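A minimal sketch of the input-size handling described above (illustrative of the general pattern, not the exact driver code; the exact length computation in the patch may differ):

    struct mlx5_ib_create_srq ucmd = {};
    size_t ucmdlen = min(udata->inlen, sizeof(ucmd));

    /* copy only what userspace actually provided, so both the unpadded
     * i386 layout and the padded 64-bit layout are accepted */
    if (ib_copy_from_udata(&ucmd, udata, ucmdlen))
            return -EFAULT;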
-
Yann Droneaud authored
commit a8237b32 upstream. The i386 ABI disagrees with most other ABIs regarding alignment of data types larger than 4 bytes: on most ABIs padding must be added at the end of the structure, while it is not required on i386. So for most ABIs struct mlx5_ib_create_cq gets implicitly padded to be aligned on an 8-byte multiple, while for i386 such padding is not added.

The tool pahole can be used to find such implicit padding:

    $ pahole --anon_include \
             --nested_anon_include \
             --recursive \
             --class_name mlx5_ib_create_cq \
             drivers/infiniband/hw/mlx5/mlx5_ib.o

Then the structure layout can be compared between i386 and x86_64:

    +++ obj-i386/drivers/infiniband/hw/mlx5/mlx5_ib.o.pahole.txt   2014-03-28 11:43:07.386413682 +0100
    --- obj-x86_64/drivers/infiniband/hw/mlx5/mlx5_ib.o.pahole.txt 2014-03-27 13:06:17.788472721 +0100
    @@ -34,9 +34,8 @@ struct mlx5_ib_create_cq {
             __u64 db_addr;    /*     8     8 */
             __u32 cqe_size;   /*    16     4 */
    -        /* size: 20, cachelines: 1, members: 3 */
    -        /* last cacheline: 20 bytes */
    +        /* size: 24, cachelines: 1, members: 3 */
    +        /* padding: 4 */
    +        /* last cacheline: 24 bytes */
     };

This ABI disagreement will make an x86_64 kernel try to read past the buffer provided by an i386 binary. When boundary checking is implemented, an x86_64 kernel will refuse to read past the i386 userspace provided buffer and the uverb will fail. Anyway, if the structure lies in memory on a page boundary and the next page is not mapped, ib_copy_from_udata() will fail when trying to read the 4 bytes of padding and the uverb will fail. This patch makes create_cq_user() take care of the input data size to handle the case where no padding is provided. This way, the x86_64 kernel will be able to handle struct mlx5_ib_create_cq as sent by unpatched and patched i386 libmlx5.

Link: http://marc.info/?i=cover.1399309513.git.ydroneaud@opteya.com
Fixes: e126ba97 ("mlx5: Add driver for Mellanox Connect-IB adapter")
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Maurizio Lombardi authored
commit b5b60778 upstream. The variable "size" is expressed as a number of blocks and not as a number of clusters; this could trigger a kernel panic when using ext4 with a cluster size different from the block size.
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jan Kara authored
commit eeece469 upstream. Tail of a page straddling inode size must be zeroed when being written out due to POSIX requirement that modifications of mmaped page beyond inode size must not be written to the file. ext4_bio_write_page() did this only for blocks fully beyond inode size but didn't properly zero blocks partially beyond inode size. Fix this. The problem has been uncovered by mmap_11-4 test in openposix test suite (part of LTP).
Reported-by: Xiaoguang Wang <wangxg.fnst@cn.fujitsu.com>
Fixes: 5a0dc736
Fixes: bd2d0210
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Andreas Schrägle authored
commit 754a292f upstream. Add support for the Marvell Technology Group Ltd. 88SE91A0 SATA 6Gb/s Controller by adding its PCI ID.
Signed-off-by: Andreas Schrägle <ajs124.ajs124@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
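For illustration, an ahci_pci_tbl entry for this kind of addition might look as follows. Vendor 0x1b4b is Marvell; the device ID 0x91a0 is inferred from the "88SE91A0" name and the board flag is an assumption, not taken from the patch itself:

    /* illustrative only: PCI ID table entry for the new controller */
    { PCI_DEVICE(0x1b4b, 0x91a0), .driver_data = board_ahci_yes_fbs },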
-
Martin Schwidefsky authored
commit b6f42962 upstream. Analogous to git commit 28b92e09, first cast tk->wall_to_monotonic.tv_nsec to u64 before doing the shift with tk->shift to avoid losing relevant bits on a 32-bit kernel.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
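A minimal before/after sketch of the fix described above; the nsec variable is a placeholder and the surrounding VDSO update code is omitted:

    /* before: tv_nsec (a long) is shifted first, losing high bits on 32-bit */
    nsec = tk->wall_to_monotonic.tv_nsec << tk->shift;

    /* after: widen to u64 before the shift, as in commit 28b92e09 */
    nsec = (u64) tk->wall_to_monotonic.tv_nsec << tk->shift;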
-
Paolo Bonzini authored
commit fc57ac2c upstream. When Hyper-V enlightenments are in effect, Windows prefers to issue a Hyper-V MSR write to issue an EOI rather than an x2apic MSR write. The Hyper-V MSR write is not handled by the processor, and besides being slower, this also causes bugs with APIC virtualization. The reason is that on EOI the processor will modify the highest in-service interrupt (SVI) field of the VMCS, as explained in section 29.1.4 of the SDM; every other step in EOI virtualization is already done by apic_send_eoi or on VM entry, but this one is missing. We need to do the same, and be careful not to muck with the isr_count and highest_isr_cache fields that are unused when virtual interrupt delivery is enabled.
Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Krzysztof Hałasa authored
commit c7d37a66 upstream. Without this fix, freshly rebooted Linux creates a new IBSS instead of joining an existing one. Only when the jiffies counter overflows, after 5 minutes, can the IBSS be successfully joined.
Signed-off-by: Krzysztof Hałasa <khalasa@piap.pl>
[edit commit message slightly]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Stephen Boyd authored
commit ffd07de6 upstream. Failure to terminate this match table can lead to boot failures depending on where the compiler places the match table.
Cc: Kamlakant Patel <kamlakant.patel@broadcom.com>
Cc: Mona Anonuevo <manonuevo@micron.com>
Cc: linux-mtd@lists.infradead.org
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
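For reference, the general shape of a properly terminated of_device_id table; this is a generic sketch with placeholder compatible strings, not the driver's actual table:

    static const struct of_device_id demo_nand_of_match[] = {
            { .compatible = "vendor,demo-nand" },
            { /* sentinel: the empty entry terminates the table */ }
    };
    MODULE_DEVICE_TABLE(of, demo_nand_of_match);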
-
Jonathan Cameron authored
commit a91a73c8 upstream.
Reported-by: Erik Habbinga <Erik.Habbinga@schneider-electric.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Acked-by: Hartmut Knaack <knaack.h@gmx.de>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Pavel Shilovsky authored
commit 663a9621 upstream.
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Reviewed-by: Shirish Pargaonkar <spargaonkar@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Takashi Iwai authored
commit deb29e90 upstream. When the ivtv PCM device is accessed at the state where no firmware is loaded, it oopses like:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
    IP: [<ffffffffa049a881>] try_mailbox.isra.0+0x11/0x50 [ivtv]
    Call Trace:
      [<ffffffffa049aa20>] ivtv_api_call+0x160/0x6b0 [ivtv]
      [<ffffffffa049af86>] ivtv_api+0x16/0x40 [ivtv]
      [<ffffffffa049b10c>] ivtv_vapi+0xac/0xc0 [ivtv]
      [<ffffffffa049d40d>] ivtv_start_v4l2_encode_stream+0x19d/0x630 [ivtv]
      [<ffffffffa0530653>] snd_ivtv_pcm_capture_open+0x173/0x1c0 [ivtv_alsa]
      [<ffffffffa04526f1>] snd_pcm_open_substream+0x51/0x100 [snd_pcm]
      [<ffffffffa0452853>] snd_pcm_open+0xb3/0x260 [snd_pcm]
      [<ffffffffa0452a37>] snd_pcm_capture_open+0x37/0x50 [snd_pcm]
      [<ffffffffa033f557>] snd_open+0xa7/0x1e0 [snd]
      [<ffffffff8118a628>] chrdev_open+0x88/0x1d0
      [<ffffffff811840be>] do_dentry_open+0x1de/0x270
      [<ffffffff81193a73>] do_last+0x1c3/0xec0
      [<ffffffff81194826>] path_openat+0xb6/0x670
      [<ffffffff81195b65>] do_filp_open+0x35/0x80
      [<ffffffff81185449>] do_sys_open+0x129/0x210
      [<ffffffff815b782d>] system_call_fastpath+0x1a/0x1f

This patch adds the check of firmware at the PCM open callback like other open callbacks of this driver.
Bugzilla: https://apibugzilla.novell.com/show_bug.cgi?id=875440
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Olivier Langlois authored
commit 3b35fc81 upstream. Timestamps in v4l2 buffers returned to userspace are updated in uvc_video_clock_update(), which uses timestamps fetched from uvc_video_clock_decode(), where ktime_get_ts() is called unconditionally. Hence setting the module clock parameter to realtime has no effect before this patch. This has been tested with ffmpeg:

    ffmpeg -y -f v4l2 -input_format yuyv422 -video_size 640x480 -framerate 30 -i /dev/video0 \
           -f alsa -acodec pcm_s16le -ar 16000 -ac 1 -i default \
           -c:v libx264 -preset ultrafast \
           -c:a libfdk_aac \
           out.mkv

and inspecting the v4l2 input starting timestamp.
Signed-off-by: Olivier Langlois <olivier@trillion01.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Trond Myklebust authored
commit c789102c upstream. If the accept() call fails, we need to put the module reference.
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Russell King authored
commit 3683f44c upstream. While debugging the FEC ethernet driver using stacktrace, it was noticed that the stacktraces always begin as follows:

    [<c00117b4>] save_stack_trace_tsk+0x0/0x98
    [<c0011870>] save_stack_trace+0x24/0x28
    ...

This is because the stack trace code includes the stack frames for itself. This is incorrect behaviour, and also leads to "skip" doing the wrong thing (which is the number of stack frames to avoid recording.) Perversely, it does the right thing when passed a non-current thread. Fix this by ensuring that we have a known constant number of frames above the main stack trace function, and always skip these.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Jeff Mahoney authored
commit 22e7478d upstream. Prior to commit 0e4f6a79 (Fix reiserfs_file_release()), reiserfs truncates serialized on i_mutex. They mostly still do, with the exception of reiserfs_file_release. That blocks out other writers via the tailpack mutex and the inode openers counter adjusted in reiserfs_file_open. However, NFS will call reiserfs_setattr without having called ->open, so we end up with a race when nfs is calling ->setattr while another process is releasing the file. Ultimately, it triggers the BUG_ON(inode->i_size != new_file_size) check in maybe_indirect_to_direct. The solution is to pull the lock into reiserfs_setattr to encompass the truncate_setsize call as well.
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Pekon Gupta authored
commit f034d87d upstream. As subpage write is enabled by default for all drivers, nand_write_subpage_hwecc() causes a crash if the driver did not register ecc->hwctl or ecc->calculate. This behavior was introduced in commit 837a6ba4 ("mtd: nand: subpage write support for hardware based ECC schemes"). This fixes the crash by emulating subpage write support: the sub-page data is padded with 0xff on either side to make it full-page compatible.
Reported-by: Helmut Schaa <helmut.schaa@googlemail.com>
Tested-by: Helmut Schaa <helmut.schaa@googlemail.com>
Signed-off-by: Pekon Gupta <pekon@ti.com>
Reviewed-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
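A minimal sketch of the emulation described above; the buffer and variable names are placeholders and this is not the nand_base code itself:

    /* build a full-page image: 0xff (the erased state) around the sub-page
     * payload, then use the ordinary full-page hardware-ECC write path */
    memset(pagebuf, 0xff, mtd->writesize);
    memcpy(pagebuf + column, subpage_buf, data_len);
    status = chip->ecc.write_page(mtd, chip, pagebuf, oob_required);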
-
pekon gupta authored
commit f306e8c3 upstream. Fixes: commit 62116e51 ("mtd: nand: omap2: Support for hardware BCH error correction"). In omap_elm_correct_data(), if the bitflip_count in an erased page is within the correctable limit (< ecc.strength), then it is not indicated back to the caller ecc->read_page(). This misguides upper layers like MTD and UBIFS into assuming the erased page is perfectly clean and using it for writing even if the actual bitflip_count was dangerously high (bitflip_count > mtd->bitflip_threshold). This patch fixes the issue by returning 'stats' to the caller ecc->read_page() under all scenarios.
Reported-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Pekon Gupta <pekon@ti.com>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Sagi Grimberg authored
commit f5ebec96 upstream. disconnected_handler works are scheduled on system_wq. When attempting to unload, first make sure all works have cleaned up.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-
Sagi Grimberg authored
commit 88c4015f upstream. There are 4 RDMA_CM events that all basically mean that the user should tear down the IB connection:
- DISCONNECTED
- ADDR_CHANGE
- DEVICE_REMOVAL
- TIMEWAIT_EXIT

Only in DISCONNECTED/ADDR_CHANGE does it make sense to call rdma_disconnect (send DREQ/DREP to our initiator). So we keep the same teardown handler for all of them but only indicate calling rdma_disconnect for the relevant events. This patch also removes redundant debug prints for each single event.

v2 changes:
- Call isert_disconnected_handler() for DEVICE_REMOVAL (Or + Sag)

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
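A sketch of the dispatch described above; it is illustrative only, and the boolean "disconnect" parameter on isert_disconnected_handler() is an assumption, not the driver's actual signature:

    switch (event->event) {
    case RDMA_CM_EVENT_DISCONNECTED:
    case RDMA_CM_EVENT_ADDR_CHANGE:
            /* teardown and send DREQ/DREP to the initiator */
            ret = isert_disconnected_handler(cma_id, true);
            break;
    case RDMA_CM_EVENT_DEVICE_REMOVAL:
    case RDMA_CM_EVENT_TIMEWAIT_EXIT:
            /* teardown only: rdma_disconnect() makes no sense here */
            ret = isert_disconnected_handler(cma_id, false);
            break;
    default:
            break;
    }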
-
Bart Van Assche authored
commit 024ca901 upstream. Avoid that the loops that iterate over the request ring can encounter a pointer to a SCSI command in req->scmnd that is no longer associated with that request. If the function srp_unmap_data() is invoked twice for a SCSI command that is not in flight then that would cause ib_fmr_pool_unmap() to be invoked with an invalid pointer as argument, resulting in a kernel oops.
Reported-by: Sagi Grimberg <sagig@mellanox.com>
Reference: http://thread.gmane.org/gmane.linux.drivers.rdma/19068/focus=19069
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
-