- 26 Jul, 2019 40 commits
-
Luis Henriques authored
commit d31d07b9 upstream. Commit e450f4d1 ("ceph: pass inclusive lend parameter to filemap_write_and_wait_range()") fixed the end offset parameter used to call filemap_write_and_wait_range and invalidate_inode_pages2_range. Unfortunately it missed truncate_inode_pages_range, introducing a regression that is easily detected by xfstest generic/130. The problem is that when doing direct IO it is possible that an extra page is truncated from the page cache when the end offset is page aligned. This can cause data loss if that page hasn't been sync'ed to the OSDs. While there, change code to use PAGE_ALIGN macro instead. Cc: stable@vger.kernel.org Fixes: e450f4d1 ("ceph: pass inclusive lend parameter to filemap_write_and_wait_range()") Signed-off-by: Luis Henriques <lhenriques@suse.com> Reviewed-by: Jeff Layton <jlayton@kernel.org> Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
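For illustration, a small standalone sketch (not the fs/ceph code itself; names and numbers are made up) of why passing the exclusive end offset instead of an inclusive lend truncates one page too many when the end is page aligned:

    #include <stdio.h>

    #define EX_PAGE_SIZE  4096UL
    #define EX_PAGE_ALIGN(x) (((x) + EX_PAGE_SIZE - 1) & ~(EX_PAGE_SIZE - 1))

    /* Pages covered by an *inclusive* byte range [start, lend]. */
    static unsigned long pages_in_range(unsigned long start, unsigned long lend)
    {
            return lend / EX_PAGE_SIZE - start / EX_PAGE_SIZE + 1;
    }

    int main(void)
    {
            unsigned long start = 0, count = 8192;  /* page-aligned direct write */

            /* Exclusive end (start + count): 3 pages, one page too many. */
            printf("%lu\n", pages_in_range(start, start + count));
            /* Inclusive lend (start + count - 1): 2 pages, as expected. */
            printf("%lu\n", pages_in_range(start, start + count - 1));
            /* PAGE_ALIGN-style rounding of a length up to the next page boundary. */
            printf("%lu\n", EX_PAGE_ALIGN(count));
            return 0;
    }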
-
Takashi Iwai authored
commit 3140aafb upstream. The recent fix for Icelake HDMI codec introduced the mapping from pin NID to the i915 gfx port number. However, it forgot the reverse mapping from the port number to the pin NID that is used in the ELD notifier callback. As a result, it's processed to a wrong widget and gives a warning like snd_hda_codec_hdmi hdaudioC0D2: HDMI: pin nid 5 not registered This patch corrects it with a proper reverse mapping function. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204133 Fixes: b0d8bc50 ("ALSA: hda: hdmi - add Icelake support") Reviewed-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
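A rough sketch of the missing piece (the table contents and helper names here are hypothetical, not the actual patch_hdmi.c code): the ELD notifier receives a gfx port number from i915, so the driver needs the port-to-pin direction as well as the pin-to-port one.

    /* Hypothetical pin<->port table, for illustration only. */
    static const struct { int pin_nid; int port; } icl_map[] = {
            { 0x04, 1 }, { 0x06, 2 }, { 0x08, 3 },
    };

    static int example_pin_to_port(int pin_nid)
    {
            int i;

            for (i = 0; i < 3; i++)
                    if (icl_map[i].pin_nid == pin_nid)
                            return icl_map[i].port;
            return -1;
    }

    /* The fix adds this reverse lookup for the ELD notifier callback,
     * so the notification is applied to the right pin widget. */
    static int example_port_to_pin(int port)
    {
            int i;

            for (i = 0; i < 3; i++)
                    if (icl_map[i].port == port)
                            return icl_map[i].pin_nid;
            return -1;
    }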
-
Takashi Iwai authored
commit eb417711 upstream. INTEL_GET_VENDOR_VERB is defined twice identically. Let's remove a superfluous line. Fixes: b0d8bc50 ("ALSA: hda: hdmi - add Icelake support") Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hui Wang authored
commit 4b4e0e32 upstream. Without this patch, the headset-mic and headphone-mic don't work. Cc: <stable@vger.kernel.org> Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kailang Yang authored
commit fbc57129 upstream. It was assigned to the wrong model, so the headphone mic can't work. Fixes: 3f640970 ("ALSA: hda - Fix headset mic detection problem for several Dell laptops") Signed-off-by: Kailang Yang <kailang@realtek.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit 4914da2f upstream. We apply the codec resume forcibly in the system resume callback for updating and syncing the jack detection state that may have changed during sleeping. This is, however, superfluous for codecs like Intel HDMI/DP, where the jack detection is managed via the audio component notification; i.e. the jack state change shall be reported sooner or later from the graphics side at mode change. This patch changes the codec resume callback to avoid the forcible resume conditionally with a new flag, codec->relaxed_resume, for reducing the resume time. The flag is set in the codec probe. Although this doesn't fix the entire bug mentioned in the bugzilla entry below, it's still a good optimization and some improvements are seen. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=201901 Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Takashi Iwai authored
commit ede34f39 upstream. The fix for the racy writes and ioctls to the sequencer widened the application of client->ioctl_mutex to the whole write loop. Although it does unlock/relock for lengthy operations like the event dup, the loop keeps the ioctl_mutex for the whole time in other situations. This may take quite a long time if user-space gives a huge buffer, and this is a likely cause of some weird behavior spotted by the syzkaller fuzzer. This patch puts a simple workaround, just adding a mutex break in the loop when a large number of events have been processed. This shouldn't hit any performance drop because the threshold is set high enough for usual operations. Fixes: 7bd80091 ("ALSA: seq: More protection for concurrent write and ioctl races") Reported-by: syzbot+97aae04ce27e39cbfca9@syzkaller.appspotmail.com Reported-by: syzbot+4c595632b98bb8ffcc66@syzkaller.appspotmail.com Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
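A sketch of the workaround pattern (the helper names and the threshold below are illustrative assumptions, not the exact sound/core/seq code): periodically drop and retake the ioctl mutex so a huge write cannot hold it for the whole loop.

    /* Illustration only: client, events_remaining() and deliver_one_event()
     * are stand-ins, not the real snd_seq symbols. */
    #define EXAMPLE_EVENTS_PER_HOLD 1000    /* high enough not to slow normal writes */

            int processed = 0;

            while (events_remaining(client)) {
                    deliver_one_event(client);
                    if (++processed % EXAMPLE_EVENTS_PER_HOLD == 0) {
                            /* Give concurrent ioctls a chance, then resume. */
                            mutex_unlock(&client->ioctl_mutex);
                            mutex_lock(&client->ioctl_mutex);
                    }
            }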
-
Xiao Ni authored
commit d9771f5e upstream. Commit d5d885fd ("md: introduce new personality funciton start()") splits the init job into two parts. The first part, run(), does the jobs that do not require the md threads. The second part, start(), does the jobs that require the md threads. Currently, adding a new journal device only does run(); it needs to do the second part, start(), too. Fixes: d5d885fd ("md: introduce new personality funciton start()") Cc: stable@vger.kernel.org #v4.9+ Reported-by: Michal Soltys <soltys@ziu.info> Signed-off-by: Xiao Ni <xni@redhat.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Mark Brown authored
commit c2c928c9 upstream. Back in ff9fb72b (debugfs: return error values, not NULL) the debugfs APIs were changed to return error pointers rather than NULL pointers on error, breaking the error checking in ASoC. Update the code to use IS_ERR() and log the codes that are returned as part of the error messages. Fixes: ff9fb72b (debugfs: return error values, not NULL) Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
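The shape of the updated check, sketched with a generic directory (variable names are illustrative; the real ASoC code operates on its own debugfs roots and error messages):

    #include <linux/debugfs.h>
    #include <linux/err.h>
    #include <linux/printk.h>

            struct dentry *dir = debugfs_create_dir("example", NULL);

            /* Since ff9fb72b, debugfs returns ERR_PTR() on failure instead of
             * NULL, so the old "if (!dir)" test never catches the error. */
            if (IS_ERR(dir)) {
                    pr_warn("example: failed to create debugfs directory: %ld\n",
                            PTR_ERR(dir));
                    dir = NULL;     /* treat "no debugfs" as non-fatal */
            }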
-
Mark Brown authored
commit ceaea851 upstream. Back in ff9fb72b (debugfs: return error values, not NULL) the debugfs APIs were changed to return error pointers rather than NULL pointers on error, breaking the error checking in ASoC. Update the code to use IS_ERR() and log the codes that are returned as part of the error messages. Fixes: ff9fb72b (debugfs: return error values, not NULL) Signed-off-by: Mark Brown <broonie@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christophe Leroy authored
commit aeb87246 upstream. All mapping iterator logic is based on the assumption that sg->offset is always lower than PAGE_SIZE. But there are situations where sg->offset is such that the SG item is on the second page. In that case sg_copy_to_buffer() fails to properly copy the data into the buffer. One of the reasons is that the data will be outside the kmapped area used to access it. This patch fixes the issue by adjusting the mapping iterator offset and pgoffset fields such that offset is always lower than PAGE_SIZE. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Fixes: 4225fc85 ("lib/scatterlist: use page iterator in the mapping iterator") Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
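The adjustment can be shown with plain arithmetic (a standalone illustration, not the lib/scatterlist.c code): fold whole pages out of the byte offset into the page index so the remaining offset is always below PAGE_SIZE.

    #include <stdio.h>

    #define EX_PAGE_SIZE  4096UL
    #define EX_PAGE_SHIFT 12

    int main(void)
    {
            unsigned long sg_offset = 5000;                           /* lands in the second page */
            unsigned long pgoffset  = sg_offset >> EX_PAGE_SHIFT;     /* 1: advance one whole page */
            unsigned long offset    = sg_offset & (EX_PAGE_SIZE - 1); /* 904: now < PAGE_SIZE */

            printf("pgoffset=%lu offset=%lu\n", pgoffset, offset);
            return 0;
    }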
-
Trond Myklebust authored
commit 75369089 upstream. The bvec tracks the list of pages, so if the number of pages changes due to a re-encode, we need to reset the bvec as well. Fixes: 277e4ab7 ("SUNRPC: Simplify TCP receive code by switching...") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 58bbeab4 upstream. If the client has to stop in pnfs_update_layout() to wait for another layoutget to complete, it currently exits and defaults to I/O through the MDS if the layoutget was successful. Fixes: d03360aa ("pNFS: Ensure we return the error if someone kills...") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 8e04fdfa upstream. mirror->mirror_ds can be NULL if uninitialised, but can contain a PTR_ERR() if call to GETDEVICEINFO failed. Fixes: 65990d1a ("pNFS/flexfiles: Fix a deadlock on LAYOUTGET") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: stable@vger.kernel.org # 4.10+ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Max Kellermann authored
commit db531db9 upstream. This reverts commit be4c2d47. That commit caused a severe memory leak in nfs_readdir_make_qstr(). When listing a directory with more than 100 files (this is how many struct nfs_cache_array_entry elements fit in one 4kB page), all allocated file name strings past those 100 leak. The root of the leakage is that those string pointers are managed in pages which are never linked into the page cache. fs/nfs/dir.c puts pages into the page cache by calling read_cache_page(); the callback function nfs_readdir_filler() will then fill the given page struct which was passed to it, which is already linked in the page cache (by do_read_cache_page() calling add_to_page_cache_lru()). Commit be4c2d47 added another (local) array of allocated pages, to be filled with more data, instead of discarding excess items received from the NFS server. Those additional pages can be used by the next nfs_readdir_filler() call (from within the same nfs_readdir() call). The leak happens when some of those additional pages are never used (copied to the page cache using copy_highpage()). The pages will be freed by nfs_readdir_free_pages(), but their contents will not. The commit did not invoke nfs_readdir_clear_array() (and doing so would have been dangerous, because it did not track which of those pages were already copied to the page cache, risking double free bugs).
How to reproduce the leak:
- Use a kernel with CONFIG_SLUB_DEBUG_ON.
- Create a directory on a NFS mount with more than 100 files with names long enough to use the "kmalloc-32" slab (so we can easily look up the allocation counts):
  for i in `seq 110`; do touch ${i}_0123456789abcdef; done
- Drop all caches:
  echo 3 >/proc/sys/vm/drop_caches
- Check the allocation counter:
  grep nfs_readdir /sys/kernel/slab/kmalloc-32/alloc_calls
  30564391 nfs_readdir_add_to_array+0x73/0xd0 age=534558/4791307/6540952 pid=370-1048386 cpus=0-47 nodes=0-1
- Request a directory listing and check the allocation counters again:
  ls
  [...]
  grep nfs_readdir /sys/kernel/slab/kmalloc-32/alloc_calls
  30564511 nfs_readdir_add_to_array+0x73/0xd0 age=207/4792999/6542663 pid=370-1048386 cpus=0-47 nodes=0-1
  There are now 120 new allocations.
- Drop all caches and check the counters again:
  echo 3 >/proc/sys/vm/drop_caches
  grep nfs_readdir /sys/kernel/slab/kmalloc-32/alloc_calls
  30564401 nfs_readdir_add_to_array+0x73/0xd0 age=735/4793524/6543176 pid=370-1048386 cpus=0-47 nodes=0-1
  110 allocations are gone, but 10 have leaked and will never be freed.
Unhelpfully, those allocations are explicitly excluded from KMEMLEAK, that's why my initial attempts with KMEMLEAK were not successful:
  /*
   * Avoid a kmemleak false positive. The pointer to the name is stored
   * in a page cache page which kmemleak does not scan.
   */
  kmemleak_not_leak(string->name);
It would be possible to solve this bug without reverting the whole commit:
- keep track of which pages were not used, and call nfs_readdir_clear_array() on them, or
- manually link those pages into the page cache
But for now I have decided to just revert the commit, because the real fix would require complex considerations, risking more dangerous (crash) bugs, which may seem unsuitable for the stable branches. Signed-off-by: Max Kellermann <mk@cm4all.com> Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Trond Myklebust authored
commit 44942b4e upstream. According to the open() manpage, Linux reserves the access mode 3 to mean "check for read and write permission on the file and return a file descriptor that can't be used for reading or writing." Currently, the NFSv4 code will ask the server to open the file, and will use an incorrect share access mode of 0. Since it has an incorrect share access mode, the client later forgets to send a corresponding close, meaning it can leak stateids on the server. Fixes: ce4ef7c0 ("NFS: Split out NFS v4 file operations") Cc: stable@vger.kernel.org # 3.6+ Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
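A user-space illustration of the corner case (a throwaway temp file; error handling kept minimal): access mode 3 is O_RDWR|O_WRONLY, and the resulting descriptor can be used for neither reading nor writing, yet the NFS client still has to open, and later close, the file on the server.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char c;
            int fd = open("mode3-demo.tmp", O_CREAT | O_WRONLY, 0600);

            if (fd >= 0)
                    close(fd);

            /* Access mode 3: check read and write permission, but return an
             * fd usable for neither (see open(2)). */
            fd = open("mode3-demo.tmp", O_RDWR | O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (read(fd, &c, 1) < 0)
                    perror("read");         /* expected to fail (EBADF) */
            if (write(fd, "x", 1) < 0)
                    perror("write");        /* expected to fail (EBADF) */
            close(fd);
            unlink("mode3-demo.tmp");
            return 0;
    }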
-
Julien Thierry authored
commit 17ce302f upstream. In the presence of any form of instrumentation, nmi_enter() should be done before calling any traceable code and any instrumentation code. Currently, nmi_enter() is done in handle_domain_nmi(), which is much too late as instrumentation code might get called before. Move the nmi_enter/exit() calls to the arch IRQ vector handler. On arm64, it is not possible to know if the IRQ vector handler was called because of an NMI before acknowledging the interrupt. However, it is possible to know whether normal interrupts could be taken in the interrupted context (i.e. if taking an NMI in that context could introduce a potential race condition). When interrupting a context with IRQs disabled, call nmi_enter() as soon as possible. In contexts with IRQs enabled, defer this to the interrupt controller, which is in a better position to know if an interrupt taken is an NMI. Fixes: bc3c03cc ("arm64: Enable the support of pseudo-NMIs") Cc: <stable@vger.kernel.org> # 5.1.x- Cc: Will Deacon <will.deacon@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Dmitry Osipenko authored
commit 560d1bca upstream. _set_opp_custom() receives a set of OPP supplies as its arguments, and the caller passes NULL when the supplies are not valid. But _set_opp_custom(), by mistake, checks for an error by performing IS_ERR(old_supply) on it, which will always evaluate to false. The problem was spotted during testing of an upcoming update for the NVIDIA Tegra CPUFreq driver. Cc: stable <stable@vger.kernel.org> Fixes: 7e535993 ("OPP: Separate out custom OPP handler specific code") Reported-by: Marc Dietrich <marvin24@gmx.de> Signed-off-by: Dmitry Osipenko <digetx@gmail.com> [ Viresh: Massaged changelog ] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
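A tiny illustration of the bug class (generic pointer, not the actual drivers/opp code): IS_ERR() only matches pointers in the error-code range, so IS_ERR(NULL) is false and the "no valid supplies" NULL sails straight past the check.

    #include <linux/err.h>
    #include <linux/printk.h>

            void *old_supply = NULL;        /* caller signalled "supplies not valid" */

            /* Buggy: IS_ERR(NULL) evaluates to false, so this never triggers. */
            if (IS_ERR(old_supply))
                    old_supply = NULL;

            /* What the code actually needs to detect is the plain NULL. */
            if (!old_supply)
                    pr_debug("no old OPP supply to restore\n");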
-
Emmanuel Grumbach authored
commit 94022562 upstream. Otherwise it'll stay set forever which is clearly buggy. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johannes Berg authored
commit c56e00a3 upstream. In AP (and IBSS) mode, we can only set GTKs to firmware after we have sent down the multicast station, but this we can only do after we've enabled beaconing, etc. However, during rfkill exit, hostapd will configure the keys before starting the AP, and cfg80211/mac80211 accept it happily. On earlier devices, this didn't bother us as GTK TX wasn't really handled in firmware, we just put the key material into the TX cmd and thus it only mattered when we actually transmitted a frame. On newer devices, however, the firmware needs to track all of this and that doesn't work if we add the key before the (multicast) sta it belongs to. To fix this, keep a list of keys to add during AP enable, and call the function there. Cc: stable@vger.kernel.org Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit ed3e4c6d upstream. Newest devices have a new firmware load mechanism. This mechanism is called the context info. It means that the driver doesn't need to load the sections of the firmware. The driver rather prepares a place in DRAM, with pointers to the relevant sections of the firmware, and the firmware loads itself. At the end of the process, the firmware sends the ALIVE interrupt. This is different from the previous scheme in which the driver expected the FH_TX interrupt after each section being transferred over the DMA. In order to support this new flow, we enabled all the interrupts. This broke the assumption that we have in the code that the RF-Kill interrupt can't interrupt the firmware load flow. Change the context info flow to enable only the ALIVE interrupt, and re-enable all the other interrupts only after the firmware is alive. Then, we won't see the RF-Kill interrupt until then. Getting the RF-Kill interrupt while loading the firmware made us kill the firmware while it is loading and we ended up dumping garbage instead of the firmware state. Re-enable the ALIVE | RX interrupts from the ISR when we get the ALIVE interrupt to be able to get the RX interrupt that comes immediately afterwards for the ALIVE notification. This is needed for non MSI-X only. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit 0d53cfd0 upstream. iwl_mvm_send_cmd returns 0 when the command won't be sent because RF-Kill is asserted. Do the same when we call iwl_get_shared_mem_conf since it is not sent through iwl_mvm_send_cmd but directly calls the transport layer. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit ec46ae30 upstream. We added code to restock the buffer upon ALIVE interrupt when MSI-X is disabled. This was added as part of the context info code. This code was added only if the ISR debug level is set which is very unlikely to be related. Move this code to run even when the ISR debug level is not set. Note that gen2 devices work with MSI-X in most cases so that this path is seldom used. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Emmanuel Grumbach authored
commit 3b57a10c upstream. Sometimes the register status can include interrupts that were masked. We can, for example, get the RF-Kill bit set in the interrupt status register although this interrupt was masked. Then if we get the ALIVE interrupt (for example) that was not masked, we need to *not* service the RF-Kill interrupt. Fix this in the MSI-X interrupt handler. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
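A generic sketch of the rule the fix enforces (register and cause names are placeholders, not the actual iwlwifi MSI-X handler): only service causes that are both reported in the status register and currently unmasked.

            u32 inta = read_irq_status(trans);      /* raw causes; may include masked bits */
            u32 mask = read_irq_mask(trans);        /* causes we actually enabled */
            u32 handled = inta & mask;

            if (handled & EXAMPLE_CAUSE_ALIVE)
                    handle_alive(trans);
            /* RF-Kill may be set in "inta" while masked; it is skipped here. */
            if (handled & EXAMPLE_CAUSE_RF_KILL)
                    handle_rf_kill(trans);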
-
Oren Givon authored
commit 498d3eb5 upstream. The 22000 series FW that was meant to be used with hr is also the FW that is used for hr1 and has a different RF ID. Add support to load the hr FW when hr1 RF ID is detected. Cc: stable@vger.kernel.org # 5.1+ Signed-off-by: Oren Givon <oren.givon@intel.com> Signed-off-by: Luciano Coelho <luciano.coelho@intel.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jon Hunter authored
commit ece6031e upstream. The GPU regulator enable ramp delay for Jetson TX1 is set to 1ms, which is not sufficient because the enable ramp delay has been measured to be greater than 1ms. Furthermore, the downstream kernels released by NVIDIA for Jetson TX1 are using an enable ramp delay of 2ms and a settling delay of 160us. Update the GPU regulator enable ramp delay for Jetson TX1 to 2ms and add a settling delay of 160us. Cc: stable@vger.kernel.org Signed-off-by: Jon Hunter <jonathanh@nvidia.com> Fixes: 5e6b9a89 ("arm64: tegra: Add VDD_GPU regulator to Jetson TX1") Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Krzysztof Kozlowski authored
commit 16da0eb5 upstream. On the S2MPS11 device, the buck7 and buck8 regulator voltages start at 750 mV, not 600 mV. Using the wrong minimal value caused these regulator values to be shifted by 150 mV (e.g. buck7, usually configured to 1.35 V, was reported as 1.2 V). On most boards these regulators are left in their default state, so this only affected the reported voltage. However, if any driver wanted to change them, it would effectively set a voltage 150 mV higher than intended. Cc: <stable@vger.kernel.org> Fixes: cb74685e ("regulator: s2mps11: Add samsung s2mps11 regulator driver") Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
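A worked example of the 150 mV shift using the usual linear-range formula reported_uV = min_uV + selector * step_uV (the step size below is only for illustration, not taken from the S2MPS11 datasheet):

    #include <stdio.h>

    int main(void)
    {
            unsigned int step_uV = 12500;                    /* assumed step size */
            unsigned int sel = (1350000 - 750000) / step_uV; /* hw really at 1.35 V -> sel 48 */

            printf("wrong min: %u uV\n", 600000 + sel * step_uV);   /* 1200000 -> 1.2 V  */
            printf("right min: %u uV\n", 750000 + sel * step_uV);   /* 1350000 -> 1.35 V */
            return 0;
    }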
-
Krzysztof Kozlowski authored
commit 70ca117b upstream. If the devm_gpiod_get_from_of_node() call returns an ERR_PTR, it is assigned into an array of GPIO descriptors and used later, because such an error is not treated as critical and thus is not propagated back to the probe function. All later code expects such a GPIO descriptor to be either NULL or a proper value. This might later lead to an ERR_PTR dereference. Only devices of the S2MPS14 flavor are affected (others do not control regulators with GPIOs). Fixes: 1c984942 ("regulator: s2mps11: Pass descriptor instead of GPIO number") Cc: <stable@vger.kernel.org> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
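A sketch of the fixed shape (simplified; the real s2mps11 probe code and its handling of a "GPIO not present" case differ): never leave an ERR_PTR in a descriptor array that later code expects to hold NULL or a valid descriptor.

            /* gpio[i] was just filled by devm_gpiod_get_from_of_node() and
             * may therefore be an ERR_PTR rather than NULL on failure. */
            if (IS_ERR(gpio[i])) {
                    ret = PTR_ERR(gpio[i]);
                    gpio[i] = NULL;         /* keep the array NULL-or-valid */
                    return ret;             /* propagate the error to probe */
            }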
-
Hui Wang authored
commit 771a081e upstream. In the function alps_is_cs19_trackpoint(), we check if param[1] is in the 0x20~0x2f range, but the check we wrote is not correct: (param[1] & 0x20) does not mean param[1] is in the range of 0x20~0x2f; it is also true when param[1] is in the range of 0x30~0x3f, 0x60~0x6f... Now fix it with the new condition check ((param[1] & 0xf0) == 0x20). Fixes: 7e4935cc ("Input: alps - don't handle ALPS cs19 trackpoint-only device") Cc: stable@vger.kernel.org Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
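A standalone comparison of the two checks (plain C, separate from the driver code): the old test is true for any value with bit 0x20 set, while the new test is true only for 0x20~0x2f.

    #include <stdio.h>

    int main(void)
    {
            unsigned int v;

            for (v = 0; v <= 0xff; v++) {
                    int old_check = (v & 0x20) != 0;     /* also true for 0x30~0x3f, 0x60~0x6f, ... */
                    int new_check = (v & 0xf0) == 0x20;  /* true only for 0x20~0x2f */

                    if (old_check != new_check)
                            printf("0x%02x: old=%d new=%d\n", v, old_check, new_check);
            }
            return 0;
    }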
-
Nick Black authored
commit 1976d7d2 upstream. Adds the Lenovo T580 to the SMBus intertouch list for Synaptics touchpads. I've tested with this for a week now, and it seems a great improvement. It's also nice to have the complaint gone from dmesg. Signed-off-by: Nick Black <dankamongmen@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hui Wang authored
commit 7e4935cc upstream. On a recent Lenovo laptop, the trackpoint and the 3 buttons below it don't work at all; when we move the trackpoint or press those 3 buttons, the kernel prints out: "Rejected trackstick packet from non DualPoint device". This device is identified as an alps touchpad but the packet has trackpoint format, so alps.c drops the packet and prints out the message above. According to XiaoXiao's explanation, this device is named cs19 and is a trackpoint-only device; its firmware is only for the trackpoint, it is independent of the touchpad and is a device completely different from DualPoint ones. To drive this device with minimal changes to the existing driver, we just let the alps driver not handle this device; then trackpoint.c will be the driver of this device if the trackpoint driver is enabled (if not, this device will fall back to a bare PS/2 device). With trackpoint.c, this trackpoint and the 3 buttons all work well; they have all the features that the trackpoint should have, like scrolling-screen, drag-and-drop and frame-selection. Signed-off-by: XiaoXiao Liu <sliuuxiaonxiao@gmail.com> Signed-off-by: Hui Wang <hui.wang@canonical.com> Reviewed-by: Pali Rohár <pali.rohar@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Grant Hernandez authored
commit 2a017fd8 upstream. The GTCO tablet input driver configures itself from an HID report sent via USB during the initial enumeration process. Some debugging messages are generated during the parsing. A debugging message indentation counter is not bounds checked, leading to the ability for a specially crafted HID report to cause '-' and null bytes to be written past the end of the indentation array. As long as the kernel has CONFIG_DYNAMIC_DEBUG enabled, this code will not be optimized out. This was discovered during code review after a previous syzkaller bug was found in this driver. Signed-off-by: Grant Hernandez <granthernandez@google.com> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Coly Li authored
commit f54d801d upstream. Commit 9baf3097 ("bcache: fix for gc and write-back race") added a new work queue dc->writeback_write_wq, but forgot to destroy it in the error condition when creating dc->writeback_thread failed. This patch destroys dc->writeback_write_wq if kthread_create() returns an error pointer to dc->writeback_thread, so the memory leak is avoided. Fixes: 9baf3097 ("bcache: fix for gc and write-back race") Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
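The shape of the fixed error path, sketched from the description above (simplified; not the literal bcache code, and the workqueue name is an assumption):

            dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq",
                                                     WQ_MEM_RECLAIM, 0);
            if (!dc->writeback_write_wq)
                    return -ENOMEM;

            dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
                                                  "bcache_writeback");
            if (IS_ERR(dc->writeback_thread)) {
                    /* The fix: don't leak the workqueue created just above. */
                    destroy_workqueue(dc->writeback_write_wq);
                    return PTR_ERR(dc->writeback_thread);
            }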
-
Coly Li authored
commit 54619998 upstream. In bch_cached_dev_files[] from drivers/md/bcache/sysfs.c, sysfs_errors is incorrectly inserted; the correct entry should be sysfs_io_errors. This patch fixes the problem, and now I/O errors of the cached device can be read from /sys/block/bcache<N>/bcache/io_errors. Fixes: c7b7bd07 ("bcache: add io_disable to struct cached_dev") Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Coly Li authored
commit 578df99b upstream. When an md raid device (e.g. raid456) is used as the backing device, read-ahead requests on a degrading and recovering md raid device might be failed immediately by the md raid code, while the array can still serve normal read and write I/O requests. Therefore such failed read-ahead requests are not real hardware failures. Furthermore, once degrading and recovery are finished, read-ahead requests will be handled by the md raid array again. Under such conditions, I/O failures of read-ahead requests don't indicate the real health status (because normal I/O is still served) and should not be counted into the I/O error counter dc->io_errors. Since there is no simple way to detect whether the backing device is an md raid device, this patch simply ignores I/O failures for read-ahead bios on the backing device, to avoid bogus backing device failures on a degrading md raid array. Suggested-and-tested-by: Thorsten Knabe <linux@thorsten-knabe.de> Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
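A sketch of the resulting rule in bcache's backing-device error accounting (simplified; apart from bio->bi_opf and REQ_RAHEAD, the surrounding context is taken from the description above):

            /* Read-ahead bios carry REQ_RAHEAD in bio->bi_opf. */
            if (bio->bi_opf & REQ_RAHEAD) {
                    /* Not a real hardware failure on a degrading md raid
                     * backing device: don't count it into dc->io_errors. */
                    return;
            }
            /* Real I/O errors still bump the backing device error counter. */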
-
Coly Li authored
commit ba82c1ac upstream. This reverts commit 6268dc2c. The reverted commit depends on commit c4dc2497 ("bcache: fix high CPU occupancy during journal"), which is reverted in the previous patch, so revert this one too. Fixes: 6268dc2c ("bcache: free heap cache_set->flush_btree in bch_journal_free") Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Cc: Shenghui Wang <shhuiw@foxmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Coly Li authored
commit 249a5f6d upstream. This reverts commit c4dc2497. The reverted commit enlarges a race between the normal btree flush code path and flush_btree_write(), which causes a deadlock when journal space is exhausted. Reverting it shrinks the race window from 128 btree nodes to only 1 btree node. Fixes: c4dc2497 ("bcache: fix high CPU occupancy during journal") Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Cc: Tang Junhui <tang.junhui.linux@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Coly Li authored
commit 695277f1 upstream. This reverts commit 6147305c. Although the reverted patch helps the failed bcache device stop faster when too many I/O errors are detected on the corresponding cached device, setting the CACHE_SET_IO_DISABLE bit in cache set c->flags was not a good idea. This operation will disable all I/Os on the cache set, which means other attached bcache devices won't work either. Without that patch, the failed bcache device can still be stopped eventually once internal I/O (e.g. writeback) has completed. Therefore revert it here. Fixes: 6147305c ("bcache: set CACHE_SET_IO_DISABLE in bch_cached_dev_error()") Reported-by: Yong Li <mr.liyong@qq.com> Signed-off-by: Coly Li <colyli@suse.de> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aurelien Aptel authored
commit 7e5a70ad upstream. Prevent deadlock between open_shroot() and cifs_mark_open_files_invalid() by releasing the lock before entering SMB2_open, taking it again after and checking if we still need to use the result. Link: https://lore.kernel.org/linux-cifs/684ed01c-cbca-2716-bc28-b0a59a0f8521@prodrive-technologies.com/T/#u Fixes: 3d4ef9a1 ("smb3: fix redundant opens on root") Signed-off-by: Aurelien Aptel <aaptel@suse.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> CC: Stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
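A generic sketch of the deadlock-avoidance pattern described above (all names hypothetical, not the smb2ops.c code): drop the cached-root lock around the blocking open, retake it, and re-check whether another thread already populated the cache.

            mutex_unlock(&cache->lock);
            rc = blocking_open_root(server, &fid);  /* may trigger open-file invalidation */
            mutex_lock(&cache->lock);

            if (cache->is_valid) {
                    /* Another thread won the race while we were unlocked:
                     * keep its cached handle and close the one we opened. */
                    close_remote_handle(server, &fid);
            } else if (rc == 0) {
                    cache->fid = fid;
                    cache->is_valid = true;
            }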
-
Ronnie Sahlberg authored
commit aa081859 upstream. Servers can defer destaging any data and updating the mtime until close(). This means that if we do a setinfo to modify the mtime while other handles are open for write, the server may overwrite our setinfo timestamps when it flushes the file on close() of the writeable handle. To solve this, we add an explicit flush when the mtime is about to be updated. This fixes "cp -p" to preserve mtime when copying a file onto an SMB2 share. CC: Stable <stable@vger.kernel.org> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com> Signed-off-by: Steve French <stfrench@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-