- 25 May, 2017 35 commits
-
-
Dennis Yang authored
commit 583da48e upstream. When growing raid5 device on machine with small memory, there is chance that mdadm will be killed and the following bug report can be observed. The same bug could also be reproduced in linux-4.10.6. [57600.075774] BUG: unable to handle kernel NULL pointer dereference at (null) [57600.083796] IP: [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20 [57600.110378] PGD 421cf067 PUD 4442d067 PMD 0 [57600.114678] Oops: 0002 [#1] SMP [57600.180799] CPU: 1 PID: 25990 Comm: mdadm Tainted: P O 4.2.8 #1 [57600.187849] Hardware name: To be filled by O.E.M. To be filled by O.E.M./MAHOBAY, BIOS QV05AR66 03/06/2013 [57600.197490] task: ffff880044e47240 ti: ffff880043070000 task.ti: ffff880043070000 [57600.204963] RIP: 0010:[<ffffffff81a6aa87>] [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20 [57600.213057] RSP: 0018:ffff880043073810 EFLAGS: 00010046 [57600.218359] RAX: 0000000000000000 RBX: 000000000000000c RCX: ffff88011e296dd0 [57600.225486] RDX: 0000000000000001 RSI: ffffe8ffffcb46c0 RDI: 0000000000000000 [57600.232613] RBP: ffff880043073878 R08: ffff88011e5f8170 R09: 0000000000000282 [57600.239739] R10: 0000000000000005 R11: 28f5c28f5c28f5c3 R12: ffff880043073838 [57600.246872] R13: ffffe8ffffcb46c0 R14: 0000000000000000 R15: ffff8800b9706a00 [57600.253999] FS: 00007f576106c700(0000) GS:ffff88011e280000(0000) knlGS:0000000000000000 [57600.262078] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [57600.267817] CR2: 0000000000000000 CR3: 00000000428fe000 CR4: 00000000001406e0 [57600.274942] Stack: [57600.276949] ffffffff8114ee35 ffff880043073868 0000000000000282 000000000000eb3f [57600.284383] ffffffff81119043 ffff880043073838 ffff880043073838 ffff88003e197b98 [57600.291820] ffffe8ffffcb46c0 ffff88003e197360 0000000000000286 ffff880043073968 [57600.299254] Call Trace: [57600.301698] [<ffffffff8114ee35>] ? cache_flusharray+0x35/0xe0 [57600.307523] [<ffffffff81119043>] ? __page_cache_release+0x23/0x110 [57600.313779] [<ffffffff8114eb53>] kmem_cache_free+0x63/0xc0 [57600.319344] [<ffffffff81579942>] drop_one_stripe+0x62/0x90 [57600.324915] [<ffffffff81579b5b>] raid5_cache_scan+0x8b/0xb0 [57600.330563] [<ffffffff8111b98a>] shrink_slab.part.36+0x19a/0x250 [57600.336650] [<ffffffff8111e38c>] shrink_zone+0x23c/0x250 [57600.342039] [<ffffffff8111e4f3>] do_try_to_free_pages+0x153/0x420 [57600.348210] [<ffffffff8111e851>] try_to_free_pages+0x91/0xa0 [57600.353959] [<ffffffff811145b1>] __alloc_pages_nodemask+0x4d1/0x8b0 [57600.360303] [<ffffffff8157a30b>] check_reshape+0x62b/0x770 [57600.365866] [<ffffffff8157a4a5>] raid5_check_reshape+0x55/0xa0 [57600.371778] [<ffffffff81583df7>] update_raid_disks+0xc7/0x110 [57600.377604] [<ffffffff81592b73>] md_ioctl+0xd83/0x1b10 [57600.382827] [<ffffffff81385380>] blkdev_ioctl+0x170/0x690 [57600.388307] [<ffffffff81195238>] block_ioctl+0x38/0x40 [57600.393525] [<ffffffff811731c5>] do_vfs_ioctl+0x2b5/0x480 [57600.399010] [<ffffffff8115e07b>] ? 
vfs_write+0x14b/0x1f0 [57600.404400] [<ffffffff811733cc>] SyS_ioctl+0x3c/0x70 [57600.409447] [<ffffffff81a6ad97>] entry_SYSCALL_64_fastpath+0x12/0x6a [57600.415875] Code: 00 00 00 00 55 48 89 e5 8b 07 85 c0 74 04 31 c0 5d c3 ba 01 00 00 00 f0 0f b1 17 85 c0 75 ef b0 01 5d c3 90 31 c0 ba 01 00 00 00 <f0> 0f b1 17 85 c0 75 01 c3 55 89 c6 48 89 e5 e8 85 d1 63 ff 5d [57600.435460] RIP [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20 [57600.441208] RSP <ffff880043073810> [57600.444690] CR2: 0000000000000000 [57600.448000] ---[ end trace cbc6b5cc4bf9831d ]--- The problem is that resize_stripes() releases new stripe_heads before assigning the new slab cache to conf->slab_cache. If the shrinker function raid5_cache_scan() gets called after resize_stripes() starts releasing new stripes but right before the new slab cache is assigned, it is possible that these new stripe_heads will be freed with the old slab_cache, which has already been destroyed, and that triggers this bug. Signed-off-by: Dennis Yang <dennisyang@qnap.com> Fixes: edbe83ab ("md/raid5: allow the stripe_cache to grow and shrink.") Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
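For illustration, a minimal sketch of the corrected ordering (names loosely follow drivers/md/raid5.c; the function body and the "raid5-new" cache name are illustrative, not the actual upstream diff):

    static int resize_stripes_sketch(struct r5conf *conf, size_t new_obj_size)
    {
        struct kmem_cache *sc;

        sc = kmem_cache_create("raid5-new", new_obj_size, 0, 0, NULL);
        if (!sc)
            return -ENOMEM;

        /* ... allocate replacement stripe_heads from sc and copy state over ... */

        mutex_lock(&conf->cache_size_mutex);
        kmem_cache_destroy(conf->slab_cache);   /* old cache goes away */
        conf->slab_cache = sc;                  /* publish the new cache first ... */
        mutex_unlock(&conf->cache_size_mutex);

        /*
         * ... and only then release the new stripes, so that a concurrent
         * raid5_cache_scan() -> drop_one_stripe() -> kmem_cache_free() always
         * frees them with the cache they were actually allocated from.
         */
        return 0;
    }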
-
Joe Thornber authored
commit 0377a07c upstream. When decrementing the reference count for a block, the free count wasn't being updated if the reference count went to zero. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Joe Thornber authored
commit 91bcdb92 upstream. These calls were the wrong way round in __write_initial_superblock. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit 13840d38 upstream. Change the type of the parameter "retain_bytes" from unsigned to unsigned long, so that on 64-bit machines the user can set more than 4GiB of data to be retained. Also, change the type of the variable "count" in the function "__evict_old_buffers" to unsigned long. The assignment "count = c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY];" could result in unsigned long to unsigned overflow and that could result in buffers not being freed when they should. While at it, avoid division in get_retain_buffers(). Division is slow, we can change it to shift because we have precalculated the log2 of block size. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
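As a rough sketch (close to, but not necessarily identical with, the resulting dm-bufio code), the widened type and the shift look like this:

    /* module parameter; unsigned long so more than 4GiB can be retained on 64-bit */
    static unsigned long dm_bufio_retain_bytes = 256 * 1024 * 1024;

    static unsigned long get_retain_buffers(struct dm_bufio_client *c)
    {
        unsigned long retain_bytes = READ_ONCE(dm_bufio_retain_bytes);

        /* shift by the precalculated log2 of the block size instead of dividing */
        return retain_bytes >> (c->sectors_per_block_bits + SECTOR_SHIFT);
    }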
-
Mike Snitzer authored
commit 10add84e upstream. Otherwise it is possible to trigger crashes when the metadata is inaccessible, because without these checks these methods don't safely account for that possibility. Reported-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit c1d7ecf7 upstream. Requeuing a request immediately while path initialization is ongoing causes high CPU usage, something that is undesired. Hence delay requeuing while path initialization is in progress. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit 7083abbb upstream. If blk_get_request() fails, check whether the failure is due to a path being removed. If that is the case, fail the path by triggering a call to fail_path(). This prevents the following scenario from being encountered while removing paths: * CPU usage of a kworker thread jumps to 100%. * Removing the DM device becomes impossible. Delay requeueing if blk_get_request() returns -EBUSY or -EWOULDBLOCK, and the queue is not dying, because in these cases immediate requeuing is inappropriate. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit 89bfce76 upstream. activate_path() is renamed to activate_path_work() which now calls activate_or_offline_path(). activate_or_offline_path() will be used by the next commit. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
commit 06eb061f upstream. If blk_get_request() returns ENODEV then multipath_clone_and_map() causes a request to be requeued immediately. This can cause a kworker thread to spend 100% of the CPU time of a single core in __blk_mq_run_hw_queue() and also can cause device removal to never finish. Avoid this by only requeuing after a delay if blk_get_request() fails. Additionally, reduce the requeue delay. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit 390020ad upstream. dm-bufio checks a watermark when it allocates a new buffer in __bufio_new(). However, it doesn't check the watermark when the user changes /sys/module/dm_bufio/parameters/max_cache_size_bytes. This may result in a problem - if the watermark is high enough so that all possible buffers are allocated and if the user lowers the value of "max_cache_size_bytes", the watermark will never be checked against the new value because no new buffer would be allocated. To fix this, change __evict_old_buffers() so that it checks the watermark. __evict_old_buffers() is called every 30 seconds, so if the user reduces "max_cache_size_bytes", dm-bufio will react to this change within 30 seconds and decrease memory consumption. Depends-on: 1b0fb5a5 ("dm bufio: avoid a possible ABBA deadlock") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mikulas Patocka authored
commit 1b0fb5a5 upstream. __get_memory_limit() tests if dm_bufio_cache_size changed and calls __cache_size_refresh() if it did. It takes dm_bufio_clients_lock while it already holds the client lock. However, lock ordering is violated because in cleanup_old_buffers() dm_bufio_clients_lock is taken before the client lock. This results in a possible deadlock and lockdep engine warning. Fix this deadlock by changing mutex_lock() to mutex_trylock(). If the lock can't be taken, it will be re-checked next time when a new buffer is allocated. Also add "unlikely" to the if condition, so that the optimizer assumes that the condition is false. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
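A simplified sketch of the trylock pattern described here (names follow dm-bufio.c loosely; this is not the verbatim hunk):

    /* called with the per-client lock (c->lock) already held */
    if (unlikely(dm_bufio_cache_size != dm_bufio_cache_size_latch)) {
        if (mutex_trylock(&dm_bufio_clients_lock)) {
            __cache_size_refresh();
            mutex_unlock(&dm_bufio_clients_lock);
        }
        /* else: the limit is re-checked the next time a buffer is allocated */
    }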
-
Mikulas Patocka authored
commit 7b81ef8b upstream. Since the commit 0cf45031 ("dm raid: add support for the MD RAID0 personality"), the dm-raid subsystem can activate a RAID-0 array. Therefore, add MD_RAID0 to the dependencies of DM_RAID, so that MD_RAID0 will be selected when DM_RAID is selected. Fixes: 0cf45031 ("dm raid: add support for the MD RAID0 personality") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vinothkumar Raja authored
commit 7d1fedb6 upstream. dm_btree_find_lowest_key() is giving incorrect results. find_key() traverses the btree correctly for finding the highest key, but there is an error in the way it traverses the btree for retrieving the lowest key. dm_btree_find_lowest_key() fetches the first key of the rightmost block of the btree instead of fetching the first key from the leftmost block. Fix this by conditionally passing the correct parameter to value64() based on the @find_highest flag. Signed-off-by: Erez Zadok <ezk@fsl.cs.sunysb.edu> Signed-off-by: Vinothkumar Raja <vinraja@cs.stonybrook.edu> Signed-off-by: Nidhi Panpalia <npanpalia@cs.stonybrook.edu> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
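A toy illustration of the traversal fix (the structure and names here are made up for clarity and are not the real persistent-data code):

    #define TOY_MAX_ENTRIES 126

    struct toy_node {
        unsigned int nr_entries;
        u64 keys[TOY_MAX_ENTRIES];
        u64 children[TOY_MAX_ENTRIES];  /* block numbers of child nodes */
    };

    static u64 toy_next_child(const struct toy_node *n, bool find_highest)
    {
        /* the buggy path effectively always descended to the rightmost child */
        unsigned int i = find_highest ? n->nr_entries - 1 : 0;

        return n->children[i];
    }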
-
Paolo Abeni authored
commit eea40b8f upstream. The infiniband address handle can be triggered to resolve an ipv6 address in response to MAD packets, regardless of the ipv6 module being disabled via the kernel command line argument. That will cause a call into the ipv6 routing code, which is not initialized, and a consequent oops. This commit addresses the above issue by replacing the direct lookup call with an indirect one via the ipv6 stub, which is properly initialized according to the ipv6 status (e.g. if ipv6 is disabled, the routing lookup fails gracefully). Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sagi Grimberg authored
commit 0a49f2c3 upstream. In case we got an initial sg_offset, we need to account for it in the mr length. Fixes: ff2ba993 ("IB/core: Add passing an offset into the SG to ib_map_mr_sg") Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Tested-by: Israel Rukshin <israelr@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Sverdlin authored
commit 49b2e27a upstream. During reset "refactoring" the output configuration was lost. This commit repairs sound on EDB93XX boards. Fixes: 9a397f47 ("ASoC: cs4271: add regulator consumer support") Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Petr Vandrovec authored
commit fd5c7869 upstream. When the TPM2 log has entries with more than 3 digests, or with digests not listed in the log header, the log gets misparsed, eventually leading to a kernel complaint that code tried to vmalloc 512MB of memory (I have no idea what would happen on a bigger system). So the code should not parse only the first 3 digests: both the event header and the event itself are already in memory, so we can parse any number of digests, as long as we do not try to parse the whole memory when given a count of 0xFFFFFFFF. So this change: * Rejects an event entry with more digests than the log header describes. Digest types should be unique, and all should be described in the log header, so there cannot be more digests in the event than in the header. * Rejects an event entry with a digest that is not described in the log header. In theory the code could hardcode information about digest IDs already assigned by TCG, but if firmware authors cannot get the event log format right, why should anyone believe that they got the event log content right. Fixes: 4d23cc32 ("tpm: add securityfs support for TPM 2.0 firmware event log") Signed-off-by: Petr Vandrovec <petr@vmware.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
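A toy sketch of the two rejection rules (parameter names here are hypothetical; the real code works on the TCG event structures in tpm2_eventlog.c):

    static bool event_digests_valid(u32 event_digest_count, const u16 *event_alg_ids,
                                    u32 header_alg_count, const u16 *header_alg_ids)
    {
        u32 i, j;

        /* digest types are unique, so an event cannot carry more than the header lists */
        if (event_digest_count > header_alg_count)
            return false;

        for (i = 0; i < event_digest_count; i++) {
            bool found = false;

            for (j = 0; j < header_alg_count; j++)
                if (event_alg_ids[i] == header_alg_ids[j])
                    found = true;
            if (!found)
                return false;   /* digest type not described in the log header */
        }
        return true;
    }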
-
Hon Ching (Vicky) Lo authored
commit 31574d32 upstream. The current code passes the address of tpm_chip as the argument to dev_get_drvdata() without a prior NULL check in tpm_ibmvtpm_get_desired_dma. This resulted in an oops during kernel boot when vTPM is enabled in a Power partition configured in active memory sharing mode. The vio_driver's get_desired_dma() is called before the probe(), which for vtpm is tpm_ibmvtpm_probe, and it's this latter function that initializes the driver and sets the data. Attempting to get the data before the probe() caused the problem. This patch adds a NULL check to tpm_ibmvtpm_get_desired_dma. Fixes: 9e0d39d8 ("tpm: Remove useless priv field in struct tpm_vendor_specific") Signed-off-by: Hon Ching (Vicky) Lo <honclo@linux.vnet.ibm.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
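A sketch of the guard (close to, but not guaranteed to match, the upstream hunk; CRQ_RES_BUF_SIZE and rtce_size are the driver's own names):

    static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev)
    {
        struct tpm_chip *chip = dev_get_drvdata(&vdev->dev);
        struct ibmvtpm_dev *ibmvtpm;

        /*
         * The vio core may call this before tpm_ibmvtpm_probe() has set the
         * driver data, so fall back to a conservative estimate in that case.
         */
        if (!chip)
            return CRQ_RES_BUF_SIZE + PAGE_SIZE;

        ibmvtpm = dev_get_drvdata(&chip->dev);
        return CRQ_RES_BUF_SIZE + ibmvtpm->rtce_size;
    }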
-
Jerry Snitselaar authored
commit 8569defd upstream. Make sure size of response buffer is at least 6 bytes, or we will underflow and pass large size_t to memcpy_fromio(). This was encountered while testing earlier version of locality patchset. Fixes: 30fc8d13 ("tpm: TPM 2.0 CRB Interface") Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
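The shape of the check, following the crb_recv() pattern (a sketch, field names approximate): the length read from the response header must cover at least the 6 bytes already copied, otherwise "expected - 6" underflows:

    static int crb_recv_sketch(struct crb_priv *priv, u8 *buf, size_t count)
    {
        unsigned int expected;

        if (count < 6)
            return -EIO;

        memcpy_fromio(buf, priv->rsp, 6);
        expected = be32_to_cpup((__be32 *)&buf[2]);

        /* reject short (or oversized) lengths before computing expected - 6 */
        if (expected > count || expected < 6)
            return -EIO;

        memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6);
        return expected;
    }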
-
Nayna Jain authored
commit 0afb7118 upstream. Currently, there is an unnecessary 1 msec delay added in i2c_nuvoton_write_status() for the successful case. This function is called multiple times during send() and recv(), which implies adding multiple extra delays for every TPM operation. This patch calls usleep_range() only if retry is to be done. Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com> Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
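A sketch of the restructured retry loop (simplified; the constants and the i2c_nuvoton_write_buf() helper are the driver's, but the body is illustrative): sleep only when a retry is actually needed, never after a successful write:

    static int i2c_nuvoton_write_status_sketch(struct i2c_client *client, u8 data)
    {
        int rc = -EIO;
        int i;

        for (i = 0; i < TPM_RETRY && rc < 0; i++) {
            if (i > 0)  /* delay only between attempts, not before returning success */
                usleep_range(TPM_I2C_BUS_DELAY, TPM_I2C_BUS_DELAY + 100);
            rc = i2c_nuvoton_write_buf(client, TPM_STS, 1, &data);
        }
        return rc;
    }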
-
Nayna Jain authored
commit a233a028 upstream. Commit 500462a9 "timers: Switch to a non-cascading wheel" replaced the 'classic' timer wheel, which aimed for near 'exact' expiry of the timers. Their analysis was that the vast majority of timeout timers are used as safeguards, not as real timers, and are cancelled or rearmed before expiration. The only exception noted to this was networking timers with a small expiry time. Not included in the analysis was the TPM polling timer, which resulted in a longer normal delay and, every so often, a very long delay. The non-cascading wheel delay is based on CONFIG_HZ. For a description of the different rings and their delays, refer to the comments in kernel/time/timer.c. Below are the delays given for rings 0 - 2, which explains the longer "normal" delays and the very long delays as seen on systems with CONFIG_HZ 250. * HZ 1000 steps * Level Offset Granularity Range * 0 0 1 ms 0 ms - 63 ms * 1 64 8 ms 64 ms - 511 ms * 2 128 64 ms 512 ms - 4095 ms (512ms - ~4s) * HZ 250 * Level Offset Granularity Range * 0 0 4 ms 0 ms - 255 ms * 1 64 32 ms 256 ms - 2047 ms (256ms - ~2s) * 2 128 256 ms 2048 ms - 16383 ms (~2s - ~16s) Below is a comparison of extending the TPM with 1000 measurements, using msleep() vs. usleep_range() when configured for 1000 hz vs. 250 hz, before and after commit 500462a9. linux-4.7 | msleep() usleep_range() 1000 hz: 0m44.628s | 1m34.497s 29.243s 250 hz: 1m28.510s | 4m49.269s 32.386s linux-4.7 | min-max (msleep) min-max (usleep_range) 1000 hz: 0:017 - 2:760s | 0:015 - 3:967s 0:014 - 0:418s 250 hz: 0:028 - 1:954s | 0:040 - 4:096s 0:016 - 0:816s This patch replaces the msleep() calls in the i2c nuvoton driver with usleep_range() calls using a consistent max range value. Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
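A minimal before/after illustration of the substitution (the 2 ms value is just an example, not the driver's exact constant):

    /* before: subject to timer-wheel granularity; on HZ=250 a 2 ms sleep can
     * be rounded up to a much longer delay */
    msleep(2);

    /* after: hrtimer based, wakes up within the requested window */
    usleep_range(2000, 2300);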
-
Peter Huewe authored
commit 5cc0101d upstream. Testing the implementation with a Raspberry Pi 2 showed that under some circumstances its SPI master erroneously releases the CS line before the transfer is complete, i.e. before the end of the last clock. In this case the TPM ignores the transfer and misses for example the GO command. The driver is unable to detect this communication problem and will wait for a command response that is never going to arrive, timing out eventually. As a workaround, the small delay ensures that the CS line is held long enough, even with a faulty SPI master. Other SPI masters are not affected, except for a negligible performance penalty. Fixes: 0edbfea5 ("tpm/tpm_tis_spi: Add support for spi phy") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Benoit Houyere <benoit.houyere@st.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Peter Huewe authored
commit 591e48c2 upstream. Limiting transfers to MAX_SPI_FRAMESIZE was not expected by the upper layers, as tpm_tis has no such limitation. Add a loop to hide that limitation. v2: Moved scope of spi_message to the top as requested by Jarkko Fixes: 0edbfea5 ("tpm/tpm_tis_spi: Add support for spi phy") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Benoit Houyere <benoit.houyere@st.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
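A sketch of the chunking loop (struct tpm_tis_spi_phy and MAX_SPI_FRAMESIZE are the driver's; tpm_tis_spi_one_frame() is a hypothetical helper standing in for one header-plus-data SPI exchange):

    static int tpm_tis_spi_xfer_sketch(struct tpm_tis_spi_phy *phy, u32 addr,
                                       u16 len, u8 *buf, bool do_write)
    {
        int ret;

        while (len) {
            u8 chunk = min_t(u16, len, MAX_SPI_FRAMESIZE);

            /* hypothetical helper: one header + data exchange of up to 64 bytes */
            ret = tpm_tis_spi_one_frame(phy, addr, chunk, buf, do_write);
            if (ret < 0)
                return ret;

            len -= chunk;
            buf += chunk;
            addr += chunk;
        }
        return 0;
    }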
-
Peter Huewe authored
commit e110cc69 upstream. Wait states are signaled in the last byte received from the TPM in response to the header, not the first byte. Check rx_buf[3] instead of rx_buf[0]. Fixes: 0edbfea5 ("tpm/tpm_tis_spi: Add support for spi phy") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Benoit Houyere <benoit.houyere@st.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
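A fragment showing where the flag actually lives (variable names follow tpm_tis_spi.c; this is a sketch of the header exchange only, not the complete transfer function):

    spi_xfer.tx_buf = phy->tx_buf;
    spi_xfer.rx_buf = phy->rx_buf;
    spi_xfer.len = 4;                       /* command byte + 24-bit address */

    spi_message_init(&m);
    spi_message_add_tail(&spi_xfer, &m);
    ret = spi_sync_locked(phy->spi_device, &m);
    if (ret < 0)
        return ret;

    /* flow control: the wait-state bit is in the last byte clocked in */
    if (!(phy->rx_buf[3] & 0x01)) {         /* was rx_buf[0] before the fix */
        /* TPM inserted a wait state: poll the flow-control byte before the data phase */
    }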
-
Peter Huewe authored
commit 975094dd upstream. Abort the transfer with ETIMEDOUT when the TPM signals more than TPM_RETRY wait states. Continuing with the transfer in this state will only lead to arbitrary failures in other parts of the code. Fixes: 0edbfea5 ("tpm/tpm_tis_spi: Add support for spi phy") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Benoit Houyere <benoit.houyere@st.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
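A sketch of the bounded polling loop (TPM_RETRY is the core TPM retry limit; the single-byte poll transfer is set up as in the driver): give up with -ETIMEDOUT instead of continuing after too many wait states:

    int i;

    for (i = 0; i < TPM_RETRY; i++) {
        spi_xfer.len = 1;                   /* poll one flow-control byte */
        spi_message_init(&m);
        spi_message_add_tail(&spi_xfer, &m);
        ret = spi_sync_locked(phy->spi_device, &m);
        if (ret < 0)
            return ret;
        if (phy->rx_buf[0] & 0x01)          /* TPM ready, proceed with the data phase */
            break;
    }

    if (i == TPM_RETRY)
        return -ETIMEDOUT;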
-
Peter Huewe authored
commit f848f214 upstream. The algorithm for sending data to the TPM is mostly identical to the algorithm for receiving data from the TPM, so a single function is sufficient to handle both cases. This is a prerequisite for all the other fixes, so we don't have to fix everything twice (send/receive). v2: u16 instead of u8 for the length. Fixes: 0edbfea5 ("tpm/tpm_tis_spi: Add support for spi phy") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Benoit Houyere <benoit.houyere@st.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Amir Goldstein authored
commit 4ff33aaf upstream. When delivering an event to userspace for a file on an NFS share, if the file is deleted on server side before user reads the event, user will not get the event. If the event queue contained several events, the stale event is quietly dropped and read() returns to user with events read so far in the buffer. If the event queue contains a single stale event or if the stale event is a permission event, read() returns to user with the kernel internal error code 518 (EOPENSTALE), which is not a POSIX error code. Check the internal return value -EOPENSTALE in fanotify_read(), just the same as it is checked in path_openat() and drop the event in the cases that it is not already dropped. This is a reproducer from Marko Rauhamaa: Just take the example program listed under "man fanotify" ("fantest") and follow these steps: ============================================================== NFS Server NFS Client(1) NFS Client(2) ============================================================== # echo foo >/nfsshare/bar.txt # cat /nfsshare/bar.txt foo # ./fantest /nfsshare Press enter key to terminate. Listening for events. # rm -f /nfsshare/bar.txt # cat /nfsshare/bar.txt read: Unknown error 518 cat: /nfsshare/bar.txt: Operation not permitted ============================================================== where NFS Client (1) and (2) are two terminal sessions on a single NFS Client machine. Reported-by: Marko Rauhamaa <marko.rauhamaa@f-secure.com> Tested-by: Marko Rauhamaa <marko.rauhamaa@f-secure.com> Cc: <linux-api@vger.kernel.org> Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
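The core of the fix in fanotify_read()'s event loop looks roughly like this (a sketch of the described check, simplified from fs/notify/fanotify/fanotify_user.c):

    ret = copy_event_to_user(group, kevent, buf);
    if (unlikely(ret == -EOPENSTALE)) {
        /*
         * The file behind the event went stale (e.g. deleted on the NFS
         * server). -EOPENSTALE is kernel-internal and must never reach
         * userspace, so drop the event and keep processing the queue.
         */
        ret = 0;
    }
    if (ret < 0)
        break;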
-
Jeeja KP authored
commit 96001376 upstream. Using jiffies in hdac_wait_for_cmd_dmas() to determine when to time out when interrupts are off (snd_hdac_bus_stop_cmd_io()/spin_lock_irq()) causes hard lockup so unlock while waiting using jiffies. ---<-snip->--- <0>[ 1211.603046] NMI watchdog: Watchdog detected hard LOCKUP on cpu 3 <4>[ 1211.603047] Modules linked in: snd_hda_intel i915 vgem <4>[ 1211.603053] irq event stamp: 13366 <4>[ 1211.603053] hardirqs last enabled at (13365): ... <4>[ 1211.603059] Call Trace: <4>[ 1211.603059] ? delay_tsc+0x3d/0xc0 <4>[ 1211.603059] __delay+0xa/0x10 <4>[ 1211.603060] __const_udelay+0x31/0x40 <4>[ 1211.603060] snd_hdac_bus_stop_cmd_io+0x96/0xe0 [snd_hda_core] <4>[ 1211.603060] ? azx_dev_disconnect+0x20/0x20 [snd_hda_intel] <4>[ 1211.603061] snd_hdac_bus_stop_chip+0xb1/0x100 [snd_hda_core] <4>[ 1211.603061] azx_stop_chip+0x9/0x10 [snd_hda_codec] <4>[ 1211.603061] azx_suspend+0x72/0x220 [snd_hda_intel] <4>[ 1211.603061] pci_pm_suspend+0x71/0x140 <4>[ 1211.603062] dpm_run_callback+0x6f/0x330 <4>[ 1211.603062] ? pci_pm_freeze+0xe0/0xe0 <4>[ 1211.603062] __device_suspend+0xf9/0x370 <4>[ 1211.603062] ? dpm_watchdog_set+0x60/0x60 <4>[ 1211.603063] async_suspend+0x1a/0x90 <4>[ 1211.603063] async_run_entry_fn+0x34/0x160 <4>[ 1211.603063] process_one_work+0x1f4/0x6d0 <4>[ 1211.603063] ? process_one_work+0x16e/0x6d0 <4>[ 1211.603064] worker_thread+0x49/0x4a0 <4>[ 1211.603064] kthread+0x107/0x140 <4>[ 1211.603064] ? process_one_work+0x6d0/0x6d0 <4>[ 1211.603065] ? kthread_create_on_node+0x40/0x40 <4>[ 1211.603065] ret_from_fork+0x2e/0x40 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100419 Fixes: 38b19ed7 ("ALSA: hda: fix to wait for RIRB & CORB DMA to set") Reported-by: Marta Lofstedt <marta.lofstedt@intel.com> Suggested-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jeeja KP <jeeja.kp@intel.com> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
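A sketch of the reordering in snd_hdac_bus_stop_cmd_io() (simplified; register names follow sound/hda/hdac_controller.c): the jiffies-based wait cannot make progress with local interrupts disabled, so the lock is dropped around it:

    spin_lock_irq(&bus->reg_lock);
    /* stop the CORB/RIRB DMA engines */
    snd_hdac_chip_writeb(bus, RIRBCTL, 0);
    snd_hdac_chip_writeb(bus, CORBCTL, 0);
    spin_unlock_irq(&bus->reg_lock);

    hdac_wait_for_cmd_dmas(bus);            /* polls against jiffies; needs irqs on */

    spin_lock_irq(&bus->reg_lock);
    /* disable unsolicited responses, clear ring-buffer pointers, etc. */
    spin_unlock_irq(&bus->reg_lock);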
-
Alexander Steffen authored
commit 302a6ad7 upstream. TIS v1.3 for TPM 1.2 and PTP for TPM 2.0 disagree about which timeout value applies to reading a valid burstcount. It is TIMEOUT_D according to TIS, but TIMEOUT_A according to PTP, so choose the appropriate value depending on whether we deal with a TPM 1.2 or a TPM 2.0. This is important since according to the PTP TIMEOUT_D is much smaller than TIMEOUT_A. So the previous implementation could run into timeouts with a TPM 2.0, even though the TPM was behaving perfectly fine. During tpm2_probe TIMEOUT_D will be used even with a TPM 2.0, because TPM_CHIP_FLAG_TPM2 is not yet set. This is fine, since the timeout values will only be changed afterwards by tpm_get_timeouts. Until then TIS_TIMEOUT_D_MAX applies, which is large enough. Fixes: aec04cbd ("tpm: TPM 2.0 FIFO Interface") Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com> Signed-off-by: Peter Huewe <peter.huewe@infineon.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
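The relevant hunk in get_burstcount() looks approximately like this (a sketch; chip->timeout_a and chip->timeout_d are the generic TPM timeout fields):

    if (chip->flags & TPM_CHIP_FLAG_TPM2)
        stop = jiffies + chip->timeout_a;   /* PTP: TIMEOUT_A */
    else
        stop = jiffies + chip->timeout_d;   /* TIS 1.3: TIMEOUT_D */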
-
Vamsi Krishna Samavedam authored
commit 2f964780 upstream. Format specifier %p can leak kernel addresses while ignoring the kptr_restrict system setting. When kptr_restrict is set to (1), kernel pointers printed using the %pK format specifier will be replaced with zeros. Debugging note: %pK prints only zeros as the address. If you need actual address information, write 0 to kptr_restrict. echo 0 > /proc/sys/kernel/kptr_restrict [Found by poking around in a random vendor kernel tree, it would be nice if someone would actually send these types of patches upstream - gkh] Signed-off-by: Vamsi Krishna Samavedam <vskrishn@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
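A minimal illustration of the substitution (the dev_dbg() line is a generic example, not a specific driver's code):

    /* before: prints the raw kernel address regardless of kptr_restrict */
    dev_dbg(dev, "urb %p submitted\n", urb);

    /* after: honours kptr_restrict (zeroed/hidden when restricted) */
    dev_dbg(dev, "urb %pK submitted\n", urb);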
-
Willy Tarreau authored
commit 3e21f4af upstream. The lp_setup() code doesn't apply any bounds checking when passing "lp=none", and only in this case, resulting in an overflow of the parport_nr[] array. All versions in Git history are affected. Reported-By: Roee Hay <roee.hay@hcl.com> Cc: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
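The shape of the fix in lp_setup() (a fragment; LP_NO is the size of the parport_nr[] array, and the bound mirrors the one already applied to the numbered "lp=parportN" case):

    } else if (!strcmp(str, "none")) {
        if (parport_ptr < LP_NO)
            parport_nr[parport_ptr++] = LP_PARPORT_NONE;
        else
            printk(KERN_INFO "lp: too many ports, lp=none ignored\n");
    }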
-
Johan Hovold authored
commit 46c319b8 upstream. Make sure to check the number of endpoints to avoid dereferencing a NULL-pointer should a malicious device lack endpoints. Fixes: 1da177e4 ("Linux-2.6.12-rc2") Signed-off-by: Johan Hovold <johan@kernel.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
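The usual guard for this class of bug, as a sketch of what the watchdog driver's probe() gains (the descriptor walk follows the standard USB driver pattern):

    iface_desc = interface->cur_altsetting;
    if (iface_desc->desc.bNumEndpoints < 1)
        return -ENODEV;     /* device lacks the expected interrupt-in endpoint */

    endpoint = &iface_desc->endpoint[0].desc;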
-
Alan Stern authored
commit 628c2893 upstream. The ene_usb6250 sub-driver in usb-storage does USB I/O to buffers on the stack, which doesn't work with vmapped stacks. This patch fixes the problem by allocating a separate 512-byte buffer at probe time and using it for all of the offending I/O operations. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-and-tested-by: Andreas Hartmann <andihartmann@01019freenet.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maksim Salau authored
commit 0bd193d6 upstream. get_version_reply is not freed if the function returns with success. Fixes: 942a4873 ("usb: misc: legousbtower: Fix buffers on stack") Reported-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Maksim Salau <maksim.salau@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Maksim Salau authored
commit 942a4873 upstream. Allocate buffers on HEAP instead of STACK for local structures that are to be received using usb_control_msg(). Signed-off-by: Maksim Salau <maksim.salau@gmail.com> Tested-by: Alfredo Rafael Vicente Boix <alviboi@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
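A sketch of the pattern for one of the affected requests (names follow the legousbtower driver; the error handling is condensed): the reply lands in a kmalloc'ed buffer rather than on the stack, and is freed on both the success and the error path, which is what the leak fix listed above ensures:

    struct tower_get_version_reply *reply;
    int result;

    reply = kmalloc(sizeof(*reply), GFP_KERNEL);
    if (!reply)
        return -ENOMEM;

    result = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                             LEGO_USB_TOWER_REQUEST_GET_VERSION,
                             USB_TYPE_VENDOR | USB_DIR_IN | USB_RECIP_DEVICE,
                             0, 0, reply, sizeof(*reply), 1000);
    if (result < 0)
        dev_err(&udev->dev, "get version request failed: %d\n", result);

    kfree(reply);   /* freed on both the error and the success path */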
-
- 20 May, 2017 5 commits
-
-
Greg Kroah-Hartman authored
-
Kees Cook authored
commit 6330d553 upstream. When built as a module and running with update_ms >= 0, pstore will Oops during module unload since the work timer is still running. This makes sure the worker is stopped before unloading. Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kees Cook authored
commit e9a330c4 upstream. The per-prz spinlock should be using the dynamic initializer so that lockdep can correctly track it. Without this, under lockdep, we get a warning at boot that the lock is in non-static memory. Fixes: 10970449 ("pstore: Make spinlock per zone instead of global") Fixes: 76d5692a ("pstore: Correctly initialize spinlock and flags") Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
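The general pattern (a sketch, not the exact ramoops hunk): a spinlock embedded in dynamically allocated memory needs the runtime initializer so lockdep can register a lock class for it:

    prz = kzalloc(sizeof(*prz), GFP_KERNEL);
    if (!prz)
        return ERR_PTR(-ENOMEM);

    spin_lock_init(&prz->buffer_lock);  /* instead of a static __SPIN_LOCK_UNLOCKED() */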
-
Ankit Kumar authored
commit 041939c1 upstream. After commit c950fd6f the kernel registers the pstore write function based on the flags set. Pstore write for powerpc is broken as the flag (PSTORE_FLAGS_DMESG) is not set for the powerpc architecture. On panic, the kernel doesn't write messages to /fs/pstore/dmesg* (the entry doesn't get created at all). This patch enables pstore write for the powerpc architecture by setting the PSTORE_FLAGS_DMESG flag. Fixes: c950fd6f ("pstore: Split pstore fragile flags") Signed-off-by: Ankit Kumar <ankit@linux.vnet.ibm.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dan Williams authored
commit d5483fed upstream. Fix failures to create namespaces due to the vmem_altmap not advertising enough free space to store the memmap. WARNING: CPU: 15 PID: 8022 at arch/x86/mm/init_64.c:656 arch_add_memory+0xde/0xf0 [..] Call Trace: dump_stack+0x63/0x83 __warn+0xcb/0xf0 warn_slowpath_null+0x1d/0x20 arch_add_memory+0xde/0xf0 devm_memremap_pages+0x244/0x440 pmem_attach_disk+0x37e/0x490 [nd_pmem] nd_pmem_probe+0x7e/0xa0 [nd_pmem] nvdimm_bus_probe+0x71/0x120 [libnvdimm] driver_probe_device+0x2bb/0x460 bind_store+0x114/0x160 drv_attr_store+0x25/0x30 In commit 658922e5 "libnvdimm, pfn: fix memmap reservation sizing" we arranged for the capacity to be allocated, but failed to also update the 'npfns' parameter. This leads to cases where there is enough capacity reserved to hold all the allocated sections, but vmemmap_populate_hugepages() still encounters -ENOMEM from altmap_alloc_block_buf(). This fix is a stop-gap until we can teach the core memory hotplug implementation to permit sub-section hotplug. Fixes: 658922e5 ("libnvdimm, pfn: fix memmap reservation sizing") Reported-by: Anisha Allada <anisha.allada@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-