- 02 Nov, 2017 23 commits
-
-
Eric Biggers authored
commit f66665c0 upstream. In eCryptfs, we failed to verify that the authentication token keys are not revoked before dereferencing their payloads, which is problematic because the payload of a revoked key is NULL. request_key() *does* skip revoked keys, but there is still a window where the key can be revoked before we acquire the key semaphore. Fix it by updating ecryptfs_get_key_payload_data() to return -EKEYREVOKED if the key payload is NULL. For completeness we check this for "encrypted" keys as well as "user" keys, although encrypted keys cannot be revoked currently. Alternatively we could use key_validate(), but since we'll also need to fix ecryptfs_get_key_payload_data() to validate the payload length, it seems appropriate to just check the payload pointer. Fixes: 237fead6 ("[PATCH] ecryptfs: fs/Makefile and fs/Kconfig") Reviewed-by:
James Morris <james.l.morris@oracle.com> Cc: Michael Halcrow <mhalcrow@google.com> Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
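A minimal sketch of the resulting check (the helper name is from the commit; the exact signature here is illustrative, not the verbatim eCryptfs code):

    /* A revoked key has a NULL payload: fail with -EKEYREVOKED
     * instead of dereferencing it. */
    static int ecryptfs_get_key_payload_data(struct key *key,
                                             struct ecryptfs_auth_tok **auth_tok)
    {
            *auth_tok = (struct ecryptfs_auth_tok *)key->payload.data[0];
            if (!*auth_tok)         /* revoked after request_key() returned */
                    return -EKEYREVOKED;
            return 0;
    }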
-
Guillaume Tucker authored
commit fc1111b8 upstream. The device tree nodes all correctly describe the regulators as syr827 or syr828, but the I2C device id is currently set to the wildcard value of syr82x in the driver. This causes udev to fail to match the driver module with the modalias data from sysfs. Fix this by replacing the I2C device ids with ones that match the device tree descriptions: syr827 and syr828. Tested on a Firefly rk3288 board. The syr82x id was not used anywhere. Fixes: e80c47bd ("regulator: fan53555: Export I2C module alias information") Signed-off-by:
Guillaume Tucker <guillaume.tucker@collabora.com> Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
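The shape of the change as a sketch (the vendor constant follows the upstream driver; treat the surrounding table as illustrative):

    static const struct i2c_device_id fan53555_id[] = {
            { "fan53555", FAN53555_VENDOR_FAIRCHILD },
            { "syr827", FAN53555_VENDOR_SILERGY },  /* replaces the "syr82x" wildcard */
            { "syr828", FAN53555_VENDOR_SILERGY },
            { }
    };
    MODULE_DEVICE_TABLE(i2c, fan53555_id);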
-
Herbert Xu authored
commit 1137b5e2 upstream. An independent security researcher, Mohamed Ghannam, has reported this vulnerability to Beyond Security's SecuriTeam Secure Disclosure program. The xfrm_dump_policy_done function expects xfrm_dump_policy to have been called at least once or it will crash. This can be triggered if a dump fails because the target socket's receive buffer is full. This patch fixes it by using the cb->start mechanism to ensure that the initialisation is always done regardless of the buffer situation. Fixes: 12a169e7 ("ipsec: Put dumpers on the dump list") Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
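A sketch of the cb->start mechanism (the callback name matches the fix; the walker setup shown is a simplified assumption). Netlink calls ->start exactly once, before any ->dump, so ->done can rely on the walker being initialised even if the dump itself never ran:

    static int xfrm_dump_policy_start(struct netlink_callback *cb)
    {
            struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *)cb->args;

            BUILD_BUG_ON(sizeof(*walk) > sizeof(cb->args));

            /* unconditional one-time init, regardless of buffer state */
            xfrm_policy_walk_init(walk, XFRM_POLICY_TYPE_ANY);
            return 0;
    }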
-
Johannes Berg authored
commit 51e13359 upstream. If we try to connect while already connected/connecting, but this fails, we set ssid_len=0 but leave current_bss hanging, leading to errors. Check all of this better, first of all ensuring that we can't try to connect to a different SSID while connected/ing; ensure that prev_bssid is set for re-association attempts even in the case of the driver supporting the connect() method, and don't reset ssid_len in the failure cases. While at it, also reset ssid_len while disconnecting unless we were connected and expect a disconnected event, and warn on a successful connection without ssid_len being set. Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jimmy Assarsson authored
commit e1d2d132 upstream. To avoid the kernel warning "Unhandled message (68)", ignore the CMD_FLUSH_QUEUE_REPLY message for now. As of Leaf v2 firmware version v4.1.844 (2017-02-15), flushing the tx queue is synchronous. There is a capability bit indicating whether flushing the tx queue is synchronous or asynchronous. A proper solution would be to query the device for capabilities. If the synchronous tx flush capability bit is set, we should wait for the CMD_FLUSH_QUEUE_REPLY message while flushing the tx queue. Signed-off-by:
Jimmy Assarsson <jimmyassarsson@gmail.com> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
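In the device's message-dispatch switch this amounts to one case that consumes the reply (the surrounding names are assumptions; only CMD_FLUSH_QUEUE_REPLY is from the commit):

    switch (msg->id) {
    case CMD_FLUSH_QUEUE_REPLY:
            /* Leaf v2 firmware >= v4.1.844 flushes synchronously;
             * consume the reply quietly for now. */
            break;
    default:
            dev_warn(dev->udev->dev.parent,
                     "Unhandled message (%d)\n", msg->id);
            break;
    }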
-
Jimmy Assarsson authored
commit 8f65a923 upstream. If the return value from kvaser_usb_send_simple_msg() was non-zero, the kernel warning incorrectly printed the return value from kvaser_usb_flush_queue() instead. Signed-off-by:
Jimmy Assarsson <jimmyassarsson@gmail.com> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gerhard Bertelsmann authored
commit 3a379f5b upstream. Fix loopback mode by setting the right flag and remove presume mode. Signed-off-by:
Gerhard Bertelsmann <info@gerhard-bertelsmann.de> Signed-off-by:
Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rex Zhu authored
commit 8b95f4f7 upstream. refresh_rate was not initialized when programming the display gap. This fixes the VCE ring test failure when doing S3 on Polaris10. bug: https://bugs.freedesktop.org/show_bug.cgi?id=103102 bug: https://bugzilla.kernel.org/show_bug.cgi?id=196615 Reviewed-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Rex Zhu <Rex.Zhu@amd.com> Signed-off-by:
Alex Deucher <alexander.deucher@amd.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
commit 587c3c9f upstream. Commit 109bade9 ("scsi: sg: use standard lists for sg_requests") introduced an off-by-one error in sg_ioctl(), which was fixed by commit bd46fc40 ("scsi: sg: off by one in sg_ioctl()"). Unfortunately commit 4759df90 ("scsi: sg: factor out sg_fill_request_table()") moved that code, and reintroduced the bug (perhaps due to a botched rebase). Fix it again. Fixes: 4759df90 ("scsi: sg: factor out sg_fill_request_table()") Signed-off-by:
Ben Hutchings <ben.hutchings@codethink.co.uk> Acked-by:
Douglas Gilbert <dgilbert@interlog.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
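The recurring bug class, sketched loosely after the sg code (field names are assumptions): when copying a linked list into a fixed-size table, the index must be bounds-checked before each store, and ">" where ">=" belongs is exactly the off-by-one:

    val = 0;
    list_for_each_entry(srp, &sfp->rq_list, entry) {
            if (val >= SG_MAX_QUEUE)        /* ">" here overruns by one */
                    break;
            rinfo[val].req_state = srp->done + 1;
            val++;
    }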
-
Steffen Maier authored
commit ab31fd0c upstream. v4.10 commit 6f2ce1c6 ("scsi: zfcp: fix rport unblock race with LUN recovery") extended accessing parent pointer fields of struct zfcp_erp_action for tracing. If an erp_action has never been enqueued before, these parent pointer fields are uninitialized and NULL. Examples are zfcp objects freshly added to the parent object's children list, before enqueueing their first recovery subsequently. In zfcp_erp_try_rport_unblock(), we iterate such list. Accessing erp_action fields can cause a NULL pointer dereference. Since the kernel can read from lowcore on s390, it does not immediately cause a kernel page fault. Instead it can cause hangs on trying to acquire the wrong erp_action->adapter->dbf->rec_lock in zfcp_dbf_rec_action_lvl() ^bogus^ while holding already other locks with IRQs disabled. Real life example from attaching lots of LUNs in parallel on many CPUs: crash> bt 17723 PID: 17723 TASK: ... CPU: 25 COMMAND: "zfcperp0.0.1800" LOWCORE INFO: -psw : 0x0404300180000000 0x000000000038e424 -function : _raw_spin_lock_wait_flags at 38e424 ... #0 [fdde8fc90] zfcp_dbf_rec_action_lvl at 3e0004e9862 [zfcp] #1 [fdde8fce8] zfcp_erp_try_rport_unblock at 3e0004dfddc [zfcp] #2 [fdde8fd38] zfcp_erp_strategy at 3e0004e0234 [zfcp] #3 [fdde8fda8] zfcp_erp_thread at 3e0004e0a12 [zfcp] #4 [fdde8fe60] kthread at 173550 #5 [fdde8feb8] kernel_thread_starter at 10add2 zfcp_adapter zfcp_port zfcp_unit <address>, 0x404040d600000000 scsi_device NULL, returning early! zfcp_scsi_dev.status = 0x40000000 0x40000000 ZFCP_STATUS_COMMON_RUNNING crash> zfcp_unit <address> struct zfcp_unit { erp_action = { adapter = 0x0, port = 0x0, unit = 0x0, }, } zfcp_erp_action is always fully embedded into its container object. Such container object is never moved in its object tree (only add or delete). Hence, erp_action parent pointers can never change. To fix the issue, initialize the erp_action parent pointers before adding the erp_action container to any list and thus before it becomes accessible from outside of its initializing function. In order to also close the time window between zfcp_erp_setup_act() memsetting the entire erp_action to zero and setting the parent pointers again, drop the memset and instead explicitly initialize individually all erp_action fields except for parent pointers. To be extra careful not to introduce any other unintended side effect, even keep zeroing the erp_action fields for list and timer. Also double-check with WARN_ON_ONCE that erp_action parent pointers never change, so we get to know when we would deviate from previous behavior. Signed-off-by:
Steffen Maier <maier@linux.vnet.ibm.com> Fixes: 6f2ce1c6 ("scsi: zfcp: fix rport unblock race with LUN recovery") Reviewed-by:
Benjamin Block <bblock@linux.vnet.ibm.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Howells authored
commit ea678998 upstream. This fixes CVE-2017-12193. Fix a case in the assoc_array implementation in which a new leaf is added that needs to go into a node that happens to be full, where the existing leaves in that node cluster together at that level to the exclusion of the new leaf. What needs to happen is that the existing leaves get moved out to a new node, N1, at level + 1 and the existing node needs replacing with one, N0, that has pointers to the new leaf and to N1. The code that tries to do this gets this wrong in two ways: (1) The pointer that should've pointed from N0 to N1 is set to point recursively to N0 instead. (2) The backpointer from N0 needs to be set correctly in the case where N0 is either the root node or reached through a shortcut. Fix this by removing this path and using the split_node path instead, which achieves the same end, but in a more general way (thanks to Eric Biggers for spotting the redundancy). The problem manifests itself as: BUG: unable to handle kernel NULL pointer dereference at 0000000000000010 IP: assoc_array_apply_edit+0x59/0xe5 Fixes: 3cb98950 ("Add a generic associative array implementation.") Reported-and-tested-by:
WU Fan <u3536072@connect.hku.hk> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dmitry Torokhov authored
commit a5082947 upstream. parse_hid_report_descriptor() has a while (i < length) loop, which only guarantees that there's at least 1 byte in the buffer, but the loop body can read multiple bytes which causes out-of-bounds access. Reported-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
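A bounds-checked version of that loop shape (a generic sketch following the HID short-item encoding, not the driver's exact code): each item is one prefix byte plus a payload whose size sits in the prefix's low two bits, so the whole item must be validated against the buffer length before it is read:

    static void parse_items(const u8 *report, unsigned int length)
    {
            unsigned int i = 0;

            while (i < length) {
                    u8 size = report[i] & 0x03;     /* HID short-item size bits */

                    if (size == 3)
                            size = 4;               /* encoding: 3 means 4 bytes */
                    if (i + 1 + size > length)
                            break;                  /* item would overrun the buffer */
                    /* interpret report[i]; payload starts at report[i + 1] */
                    i += 1 + size;
            }
    }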
-
Kai-Heng Feng authored
commit 57a95b41 upstream. The ELAN0611 touchpad uses elan_i2c as its driver. It can be found on the Lenovo ideapad 320-15IKB. So add it to the ACPI table to enable the touchpad. [Ido Adiv <idoad123@gmail.com> reports that the same ACPI ID is used for the Elan touchpad in the ideapad 520]. BugLink: https://bugs.launchpad.net/bugs/1723736 Signed-off-by:
Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
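The change is a one-line addition to the driver's ACPI match table (the neighbouring entries shown for context are illustrative):

    static const struct acpi_device_id elan_acpi_id[] = {
            { "ELAN0000", 0 },
            { "ELAN0600", 0 },
            { "ELAN0611", 0 },      /* Lenovo ideapad 320-15IKB / ideapad 520 */
            { }
    };
    MODULE_DEVICE_TABLE(acpi, elan_acpi_id);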
-
Juergen Gross authored
commit 298d275d upstream. In case gntdev_mmap() succeeds only partially in mapping grant pages it will leave some vital information uninitialized needed later for cleanup. This will lead to an out of bounds array access when unmapping the already mapped pages. So just initialize the data needed for unmapping the pages a little bit earlier. Reported-by:
Arthur Borsboom <arthurborsboom@gmail.com> Signed-off-by:
Juergen Gross <jgross@suse.com> Reviewed-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Miklos Szeredi authored
commit c6cdd514 upstream. Marios Titas, running a Haskell program, noticed a problem with fuse's readdirplus: when it is interrupted by a signal, it skips one directory entry. The reason is that fuse erroneously updates ctx->pos after a failed dir_emit(). The issue originates from the patch adding readdirplus support. Reported-by:
Jakob Unterwurzacher <jakobunt@gmail.com> Tested-by:
Marios Titas <redneb@gmx.com> Signed-off-by:
Miklos Szeredi <mszeredi@redhat.com> Fixes: 0b05b183 ("fuse: implement NFS-like readdirplus support") Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
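The invariant the fix restores, sketched (struct fuse_dirent fields are from the FUSE ABI; the loop shape and the next_entry() helper are hypothetical): ctx->pos may only advance once dir_emit() has accepted the entry, so an interrupted emit is retried on the next call rather than skipped:

    for (;;) {
            struct fuse_dirent *dirent = next_entry(buf);   /* hypothetical helper */

            if (!dirent)
                    break;
            if (!dir_emit(ctx, dirent->name, dirent->namelen,
                          dirent->ino, dirent->type))
                    break;                  /* interrupted: do NOT advance ctx->pos */
            ctx->pos = dirent->off;         /* advance only after a successful emit */
    }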
-
Florian Fainelli authored
commit c0368e4d upstream. The error path in bcm_qspi_probe() was ordered incorrectly, which would make us trip over a KASAN use-after-free report. Turns out that qspi->dev_ids does not get allocated until later in the probe process. Fix this by introducing a new label, qspi_resource_err, which takes care of cleaning up the SPI master instance. Fixes: fa236a7e ("spi: bcm-qspi: Add Broadcom MSPI driver") Signed-off-by:
Florian Fainelli <f.fainelli@gmail.com> Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Baruch Siach authored
commit a2b4a79b upstream. The SPI_IOC_MESSAGE() macro references _IOC_SIZEBITS. Add linux/ioctl.h to make sure this macro is defined. This fixes the following build failure of lcdproc with the musl libc: In file included from .../sysroot/usr/include/sys/ioctl.h:7:0, from hd44780-spi.c:31: hd44780-spi.c: In function 'spi_transfer': hd44780-spi.c:89:24: error: '_IOC_SIZEBITS' undeclared (first use in this function) status = ioctl(p->fd, SPI_IOC_MESSAGE(1), &xfer); ^ Signed-off-by:
Baruch Siach <baruch@tkos.co.il> Signed-off-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
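A self-contained user-space illustration (file and function names hypothetical): include <linux/ioctl.h> explicitly so _IOC_SIZEBITS is defined before SPI_IOC_MESSAGE() expands, rather than relying on the libc's <sys/ioctl.h> to drag it in, which musl does not:

    #include <linux/ioctl.h>        /* _IOC_SIZEBITS, even on musl */
    #include <linux/spi/spidev.h>   /* SPI_IOC_MESSAGE(), struct spi_ioc_transfer */
    #include <sys/ioctl.h>

    int spi_send(int fd, struct spi_ioc_transfer *xfer)
    {
            return ioctl(fd, SPI_IOC_MESSAGE(1), xfer);
    }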
-
Greg Kurz authored
commit ac64115a upstream. The following program causes a kernel oops: #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <sys/ioctl.h> #include <linux/kvm.h> main() { int fd = open("/dev/kvm", O_RDWR); ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HTM); } This happens because when using the global KVM fd with KVM_CHECK_EXTENSION, kvm_vm_ioctl_check_extension() gets called with a NULL kvm argument, which gets dereferenced in is_kvmppc_hv_enabled(). Spotted while reading the code. Let's use the hv_enabled fallback variable, like everywhere else in this function. Fixes: 23528bb2 ("KVM: PPC: Introduce KVM_CAP_PPC_HTM") Signed-off-by:
Greg Kurz <groug@kaod.org> Reviewed-by:
David Gibson <david@gibson.dropbear.id.au> Reviewed-by:
Thomas Huth <thuth@redhat.com> Signed-off-by:
Paul Mackerras <paulus@ozlabs.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
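The shape of the fix (the hv_enabled substitution is what the commit describes; the CPU-feature test shown alongside it is an assumption): the capability check must not dereference kvm, which is NULL for ioctls on the global /dev/kvm fd:

    case KVM_CAP_PPC_HTM:
            /* was: ... && is_kvmppc_hv_enabled(kvm), which oopses
             * when kvm == NULL on the global fd */
            r = hv_enabled && cpu_has_feature(CPU_FTR_TM_COMP);
            break;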
-
Mayank Rana authored
commit b3207c65 upstream. xhci_stop_device() calls xhci_queue_stop_endpoint() multiple times without checking the return value. xhci_queue_stop_endpoint() can return error if the HC is already halted or unable to queue commands. This can cause a deadlock condition as xhci_stop_device() would end up waiting indefinitely for a completion for the command that didn't get queued. Fix this by checking the return value and bailing out of xhci_stop_device() in case of error. This patch happens to fix potential memory leaks of the allocated command structures as well. Fixes: c311e391 ("xhci: rework command timeout and cancellation,") Signed-off-by:
Mayank Rana <mrana@codeaurora.org> Signed-off-by:
Jack Pham <jackp@codeaurora.org> Signed-off-by:
Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
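Sketched control flow of the fix (the lock and cleanup details are assumptions based on the surrounding xhci code): check the queueing result and bail out instead of waiting for a completion that will never fire:

    ret = xhci_queue_stop_endpoint(xhci, cmd, slot_id, i, suspend);
    if (ret) {
            spin_unlock_irqrestore(&xhci->lock, flags);
            xhci_free_command(xhci, cmd);   /* also plugs the command leak */
            return ret;
    }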
-
Jeff Layton authored
commit 6c2838fb upstream. sparse warns: fs/ceph/caps.c:2042:9: warning: context imbalance in 'try_flush_caps' - wrong count at exit We need to exit this function with the lock unlocked, but a couple of cases leave it locked. Signed-off-by:
Jeff Layton <jlayton@redhat.com> Reviewed-by:
"Yan, Zheng" <zyan@redhat.com> Reviewed-by:
Ilya Dryomov <idryomov@gmail.com> Signed-off-by:
Ilya Dryomov <idryomov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hui Wang authored
commit f265788c upstream. We have several Dell laptops which use the codec alc236, and the headset mic doesn't work on these machines. Following commit 736f20a7, we add the pin cfg table entries to make the headset mic work. Signed-off-by:
Hui Wang <hui.wang@canonical.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kailang Yang authored
commit 736f20a7 upstream. Add support for ALC236/ALC3204. Add headset mode support for ALC236/ALC3204. Signed-off-by:
Kailang Yang <kailang@realtek.com> Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tejun Heo authored
commit 692b4825 upstream. Josef reported a HARDIRQ-safe -> HARDIRQ-unsafe lock order detected by lockdep: [ 1270.472259] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected [ 1270.472783] 4.14.0-rc1-xfstests-12888-g76833e8 #110 Not tainted [ 1270.473240] ----------------------------------------------------- [ 1270.473710] kworker/u5:2/5157 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire: [ 1270.474239] (&(&lock->wait_lock)->rlock){+.+.}, at: [<ffffffff8da253d2>] __mutex_unlock_slowpath+0xa2/0x280 [ 1270.474994] [ 1270.474994] and this task is already holding: [ 1270.475440] (&pool->lock/1){-.-.}, at: [<ffffffff8d2992f6>] worker_thread+0x366/0x3c0 [ 1270.476046] which would create a new lock dependency: [ 1270.476436] (&pool->lock/1){-.-.} -> (&(&lock->wait_lock)->rlock){+.+.} [ 1270.476949] [ 1270.476949] but this new dependency connects a HARDIRQ-irq-safe lock: [ 1270.477553] (&pool->lock/1){-.-.} ... [ 1270.488900] to a HARDIRQ-irq-unsafe lock: [ 1270.489327] (&(&lock->wait_lock)->rlock){+.+.} ... [ 1270.494735] Possible interrupt unsafe locking scenario: [ 1270.494735] [ 1270.495250] CPU0 CPU1 [ 1270.495600] ---- ---- [ 1270.495947] lock(&(&lock->wait_lock)->rlock); [ 1270.496295] local_irq_disable(); [ 1270.496753] lock(&pool->lock/1); [ 1270.497205] lock(&(&lock->wait_lock)->rlock); [ 1270.497744] <Interrupt> [ 1270.497948] lock(&pool->lock/1); , which will cause an irq inversion deadlock if the above lock scenario happens. The root cause of this safe -> unsafe lock order is the mutex_unlock(pool->manager_arb) in manage_workers() with pool->lock held. Unlocking a mutex while holding an irq spinlock was never safe and this problem has been around forever, but it never got noticed because the mutex is usually only trylocked while holding the irq lock, making actual failures very unlikely, and the lockdep annotation missed the condition until the recent b9c16a0e ("locking/mutex: Fix lockdep_assert_held() fail"). Using a mutex for pool->manager_arb has always been a bit of a stretch. It primarily is a mechanism to arbitrate managership between workers, which can easily be done with a pool flag. The only reason it became a mutex is that the pool destruction path wants to exclude parallel managing operations. This patch replaces the mutex with a new pool flag POOL_MANAGER_ACTIVE and makes the destruction path wait for the current manager on a wait queue. v2: Drop unnecessary flag clearing before pool destruction as suggested by Boqun. Signed-off-by:
Tejun Heo <tj@kernel.org> Reported-by:
Josef Bacik <josef@toxicpanda.com> Reviewed-by:
Lai Jiangshan <jiangshanlai@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Boqun Feng <boqun.feng@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
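The flag-based arbitration, sketched after the description (POOL_MANAGER_ACTIVE and the wait queue are from the commit; details simplified): managership is claimed and released entirely under pool->lock, so no mutex is ever unlocked while the irq spinlock is held:

    static bool manage_workers(struct worker *worker)
    {
            struct worker_pool *pool = worker->pool;

            if (pool->flags & POOL_MANAGER_ACTIVE)  /* someone else manages */
                    return false;

            pool->flags |= POOL_MANAGER_ACTIVE;     /* claimed under pool->lock */
            pool->manager = worker;

            maybe_create_worker(pool);

            pool->manager = NULL;
            pool->flags &= ~POOL_MANAGER_ACTIVE;
            wake_up(&wq_manager_wait);              /* unblock pool destruction */
            return true;
    }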
-
- 27 Oct, 2017 17 commits
-
-
Greg Kroah-Hartman authored
-
Eric Biggers authored
commit d124b2c5 upstream. When the file /proc/fs/fscache/objects (available with CONFIG_FSCACHE_OBJECT_LIST=y) is opened, we request a user key with description "fscache:objlist", then access its payload. However, a revoked key has a NULL payload, and we failed to check for this. request_key() *does* skip revoked keys, but there is still a window where the key can be revoked before we access its payload. Fix it by checking for a NULL payload, treating it like a key which was already revoked at the time it was requested. Fixes: 4fbf4291 ("FS-Cache: Allow the current state of all objects to be dumped") Reviewed-by:
James Morris <james.l.morris@oracle.com> Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
David Howells authored
commit 363b02da upstream. Consolidate KEY_FLAG_INSTANTIATED, KEY_FLAG_NEGATIVE and the rejection error into one field such that: (1) The instantiation state can be modified/read atomically. (2) The error can be accessed atomically with the state. (3) The error isn't stored unioned with the payload pointers. This deals with the problem that the state is spread over three different objects (two bits and a separate variable) and reading or updating them atomically isn't practical, given that not only can uninstantiated keys change into instantiated or rejected keys, but rejected keys can also turn into instantiated keys - and someone accessing the key might not be using any locking. The main side effect of this problem is that what was held in the payload may change, depending on the state. For instance, you might observe the key to be in the rejected state. You then read the cached error, but if the key semaphore wasn't locked, the key might've become instantiated between the two reads - and you might now have something in hand that isn't actually an error code. The state is now KEY_IS_UNINSTANTIATED, KEY_IS_POSITIVE or a negative error code if the key is negatively instantiated. The key_is_instantiated() function is replaced with key_is_positive() to avoid confusion as negative keys are also 'instantiated'. Additionally, barriering is included: (1) Order payload-set before state-set during instantiation. (2) Order state-read before payload-read when using the key. Further separate barriering is necessary if RCU is being used to access the payload content after reading the payload pointers. Fixes: 146aa8b1 ("KEYS: Merge the type-specific data with the payload data") Reported-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
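The consolidated accessors, sketched from the description (names and barrier pairing are as the commit describes): one signed state word encodes uninstantiated/positive/negative-error and is read with acquire semantics, ordering the state read before any payload read:

    static inline short key_read_state(const struct key *key)
    {
            /* pairs with smp_store_release() at instantiation time */
            return smp_load_acquire(&key->state);
    }

    static inline bool key_is_positive(const struct key *key)
    {
            return key_read_state(key) == KEY_IS_POSITIVE;
    }

    static inline bool key_is_negative(const struct key *key)
    {
            return key_read_state(key) < 0; /* the cached rejection error */
    }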
-
Eric Biggers authored
commit d60b5b78 upstream. When an fscrypt-encrypted file is opened, we request the file's master key from the keyrings service as a logon key, then access its payload. However, a revoked key has a NULL payload, and we failed to check for this. request_key() *does* skip revoked keys, but there is still a window where the key can be revoked before we acquire its semaphore. Fix it by checking for a NULL payload, treating it like a key which was already revoked at the time it was requested. Fixes: 88bd6ccd ("ext4 crypto: add encryption key management facilities") Reviewed-by:
James Morris <james.l.morris@oracle.com> Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Brian Foster authored
commit 40214d12 upstream. The writeback rework in commit fbcc0256 ("xfs: Introduce writeback context for writepages") introduced a subtle change in behavior with regard to the block mapping used across the ->writepages() sequence. The previous xfs_cluster_write() code would only flush pages up to EOF at the time of the writepage, thus ensuring that any pages due to file-extending writes would be handled on a separate cycle and with a new, updated block mapping. The updated code establishes a block mapping in xfs_writepage_map() that could extend beyond EOF if the file has post-eof preallocation. Because we now use the generic writeback infrastructure and pass the cached mapping to each writepage call, there is no implicit EOF limit in place. If eofblocks trimming occurs during ->writepages(), any post-eof portion of the cached mapping becomes invalid. The eofblocks code has no means to serialize against writeback because there are no pages associated with post-eof blocks. Therefore if an eofblocks trim occurs and is followed by a file-extending buffered write, not only has the mapping become invalid, but we could end up writing a page to disk based on the invalid mapping. Consider the following sequence of events: - A buffered write creates a delalloc extent and post-eof speculative preallocation. - Writeback starts and on the first writepage cycle, the delalloc extent is converted to real blocks (including the post-eof blocks) and the mapping is cached. - The file is closed and xfs_release() trims post-eof blocks. The cached writeback mapping is now invalid. - Another buffered write appends the file with a delalloc extent. - The concurrent writeback cycle picks up the just written page because the writeback range end is LLONG_MAX. xfs_writepage_map() attributes it to the (now invalid) cached mapping and writes the data to an incorrect location on disk (and where the file offset is still backed by a delalloc extent). This problem is reproduced by xfstests test generic/464, which triggers racing writes, appends, open/closes and writeback requests. To address this problem, trim the mapping used during writeback to within EOF when the mapping is validated. This ensures the mapping is revalidated for any pages encountered beyond EOF as of the time the current mapping was cached or last validated. Reported-by:
Eryu Guan <eguan@redhat.com> Diagnosed-by:
Eryu Guan <eguan@redhat.com> Signed-off-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Chinner authored
commit 793d7dbe upstream. Recently we've had warnings arise from the vm handing us pages without bufferheads attached to them. This should not ever occur in XFS, but we don't defend against it properly if it does. The only place where we remove bufferheads from a page is in xfs_vm_releasepage(), but we can't tell the difference here between "page is dirty so don't release" and "page is dirty but is being invalidated so release it". Some of the places that invalidate pages ask for the pages to be released via ->releasepage, then check whether the page was dirty and abort the invalidation if so. This is a possible vector for releasing buffers from a page but then leaving it in the mapping, so we really do need to avoid dirty pages in xfs_vm_releasepage(). To differentiate between invalidated pages and normal pages, we need to clear the page dirty flag when invalidating the pages. This can be done through xfs_vm_invalidatepage(), and will result in xfs_vm_releasepage() seeing the page as clean, which matches the bufferhead state on the page after calling block_invalidatepage(). Hence we can re-add the page dirty check in xfs_vm_releasepage to catch the case where we might be releasing a page that is actually dirty and so should not have the bufferheads on it removed. This will remove one possible vector of "dirty page with no bufferheads" and so help narrow down the search for the root cause of that problem. Signed-Off-By:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Sandeen authored
commit 93e8befc upstream. Jason reported that a corrupted filesystem failed to replay the log with a metadata block out of bounds warning: XFS (dm-2): _xfs_buf_find: Block out of range: block 0x80270fff8, EOFS 0x9c40000 _xfs_buf_find() and xfs_btree_get_bufs() return NULL if that happens, and then when xfs_alloc_fix_freelist() calls xfs_trans_binval() on that NULL bp, we oops with: BUG: unable to handle kernel NULL pointer dereference at 00000000000000f8 We don't handle _xfs_buf_find errors very well; every caller higher up the stack gets to guess at why it failed. But we should at least handle it somehow, so return EFSCORRUPTED here. Reported-by:
Jason L Tibbitts III <tibbs@math.uh.edu> Signed-off-by:
Eric Sandeen <sandeen@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
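The minimal defensive shape (a sketch; the call site and error label are assumptions): turn the NULL buffer into -EFSCORRUPTED instead of passing it on:

    bp = xfs_btree_get_bufs(mp, tp, args->agno, bno, 0);
    if (!bp) {
            error = -EFSCORRUPTED;  /* block out of range: corrupt fs */
            goto error0;
    }
    xfs_trans_binval(tp, bp);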
-
Brian Foster authored
commit f35c5e10 upstream. xfs_attr3_root_inactive() walks the attr fork tree to invalidate the associated blocks. xfs_attr3_node_inactive() recursively descends from internal blocks to leaf blocks, caching block address values along the way to revisit parent blocks, locate the next entry and descend down that branch of the tree. The code that attempts to reread the parent block is unsafe because it assumes that the local xfs_da_node_entry pointer remains valid after an xfs_trans_brelse() and re-read of the parent buffer. Under heavy memory pressure, it is possible that the buffer has been reclaimed and reallocated by the time the parent block is reread. This means that 'btree' can point to an invalid memory address, lead to a random/garbage value for child_fsb and cause the subsequent read of the attr fork to go off the rails and return a NULL buffer for an attr fork offset that is most likely not allocated. Note that this problem can be manufactured by setting XFS_ATTR_BTREE_REF to 0 to prevent LRU caching of attr buffers, creating a file with a multi-level attr fork and removing it to trigger inactivation. To address this problem, reinit the node/btree pointers to the parent buffer after it has been re-read. This ensures btree points to a valid record and allows the walk to proceed. Signed-off-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Chinner authored
commit 67f2ffe3 upstream. If we get ENOSPC half way through setting the ACL, the inode mode can still be changed even though the ACL does not exist. Reorder the operation to only change the mode of the inode if the ACL is set correctly. Whilst this does not fix the problem with crash consistency (that requires attribute addition to be a deferred op), it does allow ENOSPC and other non-fatal errors encountered while setting an xattr to be handled sanely. This fixes xfstests generic/449. Signed-Off-By:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
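The reordering, sketched (helper names assumed): set the ACL first, and only change the mode once that has succeeded, so ENOSPC cannot leave the mode changed with no ACL behind it:

    error = __xfs_set_acl(inode, acl, type);        /* may fail with ENOSPC */
    if (error)
            return error;
    if (set_mode)
            error = xfs_set_mode(inode, mode);      /* only after ACL is in place */
    return error;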
-
Dave Chinner authored
commit bb9c2e54 upstream. Various utility functions and interfaces that iterate internal devices try to reference the realtime device even when RT support is not compiled into the kernel. Make sure this code is excluded from the CONFIG_XFS_RT=n build, and, where appropriate, add stub functions that return fatal errors if they ever get called when RT support is not present. Signed-Off-By:
Dave Chinner <dchinner@redhat.com> Reviewed-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dave Chinner authored
commit 20413e37 upstream. Prevent kmemcheck from throwing warnings about reading uninitialised memory when formatting inodes into the incore log buffer. There are several issues here - we don't always log all the fields in the inode log format item, and we never log the inode's di_next_unlinked field. In the case of the inode log format item, this is exacerbated by the old xfs_inode_log_format structure padding issue. Hence make the padded, 64 bit aligned version of the structure the one we always use for formatting the log and get rid of the 64 bit variant. This means we'll always log the 64-bit version and so recovery only needs to convert from the unpadded 32 bit version from older 32 bit kernels. Signed-Off-By:
Dave Chinner <dchinner@redhat.com> Tested-by:
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Reviewed-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christoph Hellwig authored
commit e12199f8 upstream. If we get two AIO writes into a COW area, the second one might not have any COW extents left to convert. Handle that case gracefully instead of triggering an assert or accessing beyond the bounds of the extent list. Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Darrick J. Wong authored
commit 52bfcdd7 upstream. Since the CoW fork exists as a secondary data structure to the data fork, we must always swap cow forks during swapext. We also need to swap the extent counts and reset the cowblocks tags. Reviewed-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Carlos Maiolino authored
commit 842f6e9f upstream. My previous patch d3a304b6 checks for the XFS_LI_FAILED flag in xfs_iflush_done(), so the failed item can be properly resubmitted. In the loop scanning other inodes being completed, it should check the current item for the XFS_LI_FAILED flag, and not the initial one. The state of the initial inode is checked after the loop ends. Kudos to Eric for catching this. Signed-off-by:
Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
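Sketched (iterator and flag handling assumed from the description): inside the loop over the other completed inodes, test the iterator, not the initial item:

    list_for_each_entry(blip, &tmp, li_bio_list) {
            if (blip->li_flags & XFS_LI_FAILED)     /* was testing lip here */
                    xfs_clear_li_failed(blip);
    }
    /* the initial item's (lip's) state is checked after the loop */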
-
Darrick J. Wong authored
commit 9789dd9e upstream. We call __xfs_ag_resv_init to make a per-AG reservation for each AG. This makes the reservation per-AG, not per-filesystem. Therefore, it is incorrect to adjust m_ag_max_usable for each AG. Adjust it only when we're reserving AG 0's blocks so that we only do it once per fs. Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by:
Brian Foster <bfoster@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eryu Guan authored
commit ee70daab upstream. Since commit d531d91d ("xfs: always use unwritten extents for direct I/O writes"), we start allocating unwritten extents for all direct writes to allow appending aio in XFS. But for dio writes that could extend file size we update the in-core inode size first, then convert the unwritten extents to real allocations at dio completion time in xfs_dio_write_end_io(). Thus a racing direct read could see the new i_size and find the unwritten extents first and read zeros instead of actual data, if the direct writer also takes a shared iolock. Fix it by updating the in-core inode size after the unwritten extent conversion. To do this, introduce a new boolean argument to xfs_iomap_write_unwritten() to tell if we want to update in-core i_size or not. Suggested-by:
Brian Foster <bfoster@redhat.com> Reviewed-by:
Brian Foster <bfoster@redhat.com> Signed-off-by:
Eryu Guan <eguan@redhat.com> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> [hch: backported to the old direct I/O code before Linux 4.10] Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
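The interface change, sketched (the parameter name is an assumption; the commit describes adding a boolean): extending callers ask the conversion itself to update the in-core size only after the extents have become real:

    int xfs_iomap_write_unwritten(struct xfs_inode *ip, xfs_off_t offset,
                                  xfs_off_t count, bool update_isize);

    /* dio write completion: i_size moves after conversion, closing the
     * window where a reader sees the new size over unwritten extents */
    error = xfs_iomap_write_unwritten(ip, offset, size, true);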
-
Eryu Guan authored
commit d20a5e38 upstream. The 'did_zero' param of xfs_zero_range() was not passed to iomap_zero_range() correctly. This was introduced by commit 7bb41db3 ("xfs: handle 64-bit length in xfs_iozero"), and found by code inspection. Signed-off-by:
Eryu Guan <eguan@redhat.com> Reviewed-by:
Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
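The one-line fix, sketched as the resulting wrapper (shape assumed; the commit says did_zero simply was not forwarded):

    int
    xfs_zero_range(struct xfs_inode *ip, xfs_off_t pos, xfs_off_t count,
                   bool *did_zero)
    {
            /* was: iomap_zero_range(..., NULL, &xfs_iomap_ops) */
            return iomap_zero_range(VFS_I(ip), pos, count, did_zero,
                                    &xfs_iomap_ops);
    }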
-