- 08 May, 2016 10 commits
-
Tom Lendacky authored
[ Upstream commit f709b45e ] Prevent information from leaking to userspace by zeroing the export state structure with memset before setting the structure values and copying it. This prevents uninitialized padding areas from being copied into the export area. Cc: <stable@vger.kernel.org> # 3.14.x- Reported-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
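A minimal sketch of that pattern, not the actual ccp code; the context structure, field names and layout below are invented for illustration:

    #include <linux/string.h>
    #include <linux/types.h>

    /* Hypothetical hash context and export layout (illustrative only). */
    struct example_ctx {
            u32 type;
            u64 msg_bits;
            u8 digest[32];
    };

    struct example_export_state {
            u32 type;                       /* compiler may pad after this */
            u8 digest[32];
            u64 msg_bits;
    };

    static int example_export(const struct example_ctx *ctx, void *out)
    {
            struct example_export_state state;

            /* Zero the whole structure first so padding bytes never carry
             * stale kernel memory into the export area. */
            memset(&state, 0, sizeof(state));

            state.type = ctx->type;
            state.msg_bits = ctx->msg_bits;
            memcpy(state.digest, ctx->digest, sizeof(state.digest));

            memcpy(out, &state, sizeof(state));
            return 0;
    }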
-
Xiaodong Liu authored
[ Upstream commit 0851561d ] In sha_complete_job, an incorrect mcryptd_hash_request_ctx pointer is used when checking and completing other jobs. If the memory of the first completed request is freed while other jobs are still being completed in the function, the kernel will crash since a NULL pointer ends up in RIP. Cc: <stable@vger.kernel.org> Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Yingjoe Chen authored
[ Upstream commit 5fedbb92 ] The debounce time unit for gpio_chip.set_debounce is microseconds, but mtk_gpio_set_debounce treats it as milliseconds. Fix this by correcting the debounce time array dbnc_arr so the correct debounce setting is found. The debounce time for the first debounce setting is 500us; correct this as well. While I'm at it, also change the debounce time array name to "debounce_time" for readability. Cc: stable@vger.kernel.org Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com> Reviewed-by: Daniel Kurtz <djkurtz@chromium.org> Acked-by: Hongzhou Yang <hongzhou.yang@mediatek.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
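The unit mix-up is easiest to see with a lookup table keyed in microseconds; a sketch (only the 500us value comes from the changelog, the other entries and the helper name are placeholders):

    #include <linux/kernel.h>
    #include <linux/errno.h>

    /* Debounce settings in microseconds, matching what
     * gpio_chip.set_debounce() passes in. */
    static const unsigned int debounce_time[] = {
            500, 1000, 16000, 32000, 64000, 128000, 256000,
    };

    static int example_debounce_to_setting(unsigned int debounce_us)
    {
            unsigned int i;

            for (i = 0; i < ARRAY_SIZE(debounce_time); i++) {
                    if (debounce_us <= debounce_time[i])
                            return i;       /* index programmed into the register */
            }

            return -EINVAL;                 /* longer than the hardware supports */
    }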
-
Alex Deucher authored
[ Upstream commit 7403c515 ] This got lost somewhere along the way. This fixes audio not working until set_property was called. Noticed-by: Hyungwon Hwang <hyungwon.hwang7@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Dmitry Ivanov authored
[ Upstream commit 8f815cdd ] A non-privileged user can create a netlink socket with the same port_id as used by an existing open nl80211 netlink socket (e.g. as used by a hostapd process) with a different protocol number. Closing this socket will then lead to the notification going to nl80211's socket release notification handler, and possibly cause an action such as removing a virtual interface. Fix this issue by checking that the netlink protocol is NETLINK_GENERIC. Since generic netlink has no notifier chain of its own, we can't fix the problem more generically. Fixes: 026331c4 ("cfg80211/mac80211: allow registering for and sending action frames") Cc: stable@vger.kernel.org Signed-off-by: Dmitry Ivanov <dima@ubnt.com> [rewrite commit message] Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
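The check described above boils down to filtering the notifier on both the event and the protocol; a sketch using the standard netlink notifier types (the nl80211-specific teardown is elided):

    #include <linux/notifier.h>
    #include <linux/netlink.h>

    static int example_netlink_notify(struct notifier_block *nb,
                                      unsigned long state, void *_notify)
    {
            struct netlink_notify *notify = _notify;

            /* Releases of sockets from other netlink protocols must not be
             * mistaken for the generic-netlink socket we registered. */
            if (state != NETLINK_URELEASE ||
                notify->protocol != NETLINK_GENERIC)
                    return NOTIFY_DONE;

            /* ... tear down resources keyed on notify->portid ... */
            return NOTIFY_DONE;
    }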
-
Dmitry Ivanov authored
[ Upstream commit e2726020 ] All existing users of NETLINK_URELEASE use it to clean up resources that were previously allocated to a socket via some command. As a result, no users require getting this notification for unbound sockets. Sending it for unbound sockets, however, is a problem because any user (including unprivileged users) can create a socket that uses the same ID as an existing socket. Binding this new socket will fail, but if the NETLINK_URELEASE notification is generated for such sockets, the users thereof will be tricked into thinking the socket that they allocated the resources for is closed. In the nl80211 case, this will cause destruction of virtual interfaces that still belong to an existing hostapd process; this is the case that Dmitry noticed. In the NFC case, it will cause a poll abort. In the case of netlink log/queue it will cause them to stop reporting events, as if NFULNL_CFG_CMD_UNBIND/NFQNL_CFG_CMD_UNBIND had been called. Fix this problem by checking that the socket is bound before generating the NETLINK_URELEASE notification. Cc: stable@vger.kernel.org Signed-off-by: Dmitry Ivanov <dima@ubnt.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
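On the af_netlink side, the guard amounts to also requiring a successful bind before firing the notifier chain; a rough sketch with the internals simplified (netlink_chain and struct netlink_sock are af_netlink internals, shown only to illustrate the condition):

    static void example_notify_release(struct sock *sk, struct netlink_sock *nlk)
    {
            struct netlink_notify n = {
                    .net      = sock_net(sk),
                    .protocol = sk->sk_protocol,
                    .portid   = nlk->portid,
            };

            /* A socket that never completed a bind cannot own resources
             * keyed on its port id, so nobody needs the notification. */
            if (!nlk->portid || !nlk->bound)
                    return;

            atomic_notifier_call_chain(&netlink_chain, NETLINK_URELEASE, &n);
    }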
-
Sebastian Ott authored
[ Upstream commit 9d89d9e6 ] Newer machines might use a different (larger) format for function measurement blocks. To ensure that we comply with the alignment requirement on these machines and prevent memory corruption (when firmware writes more data than we expect) add 16 padding bytes at the end of the fmb. Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
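An illustrative structure only (the real layout is the s390 FMB definition); the point is simply that the trailing pad keeps a larger firmware-written block inside the allocation:

    #include <linux/types.h>
    #include <linux/compiler.h>

    struct example_fmb {
            u32 format;             /* measurement block format */
            u32 samples;
            u64 last_update;
            u64 ld_ops;
            u64 st_ops;
            /* ... further counters ... */
            u8 pad[16];             /* room for newer, larger FMB formats */
    } __packed __aligned(16);       /* alignment value is illustrative */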
-
Vladis Dronov authored
[ Upstream commit 162f98de ] The gtco driver expects at least one valid endpoint. If given malicious descriptors that specify 0 for the number of endpoints, it will crash in the probe function. Ensure there is at least one endpoint on the interface before using it. Also fix a minor coding style issue. The full report of this issue can be found in the public Red Hat Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1283385 Reported-by: Ralf Spenneberg <ralf@spenneberg.net> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
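A sketch of the defensive check (the USB core accessors are the standard ones; the function and the error handling around them are simplified):

    #include <linux/usb.h>
    #include <linux/errno.h>

    static int example_probe(struct usb_interface *usbinterface,
                             const struct usb_device_id *id)
    {
            struct usb_endpoint_descriptor *endpoint;

            /* Reject descriptors that declare zero endpoints before
             * touching endpoint[0]. */
            if (usbinterface->cur_altsetting->desc.bNumEndpoints < 1)
                    return -ENODEV;

            endpoint = &usbinterface->cur_altsetting->endpoint[0].desc;

            /* ... continue probing with a known-valid endpoint ... */
            return 0;
    }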
-
Emmanuel Grumbach authored
[ Upstream commit 9fc515bc ] IWL_INFO is not an error but still printed by default. "can't access the RSA semaphore it is write protected" seems worrisome but it is not really a problem. CC: <stable@vger.kernel.org> [4.1+] Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sasha Levin authored
This reverts commit 79b768de. Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
- 25 Apr, 2016 3 commits
-
Sasha Levin authored
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Mike Galbraith authored
Backport of 81ad4276 failed to adjust for an intervening ->get_trip_temp() argument type change, thus causing the stack protector to panic.

drivers/thermal/thermal_core.c: In function ‘thermal_zone_device_register’:
drivers/thermal/thermal_core.c:1569:41: warning: passing argument 3 of ‘tz->ops->get_trip_temp’ from incompatible pointer type [-Wincompatible-pointer-types]
   if (tz->ops->get_trip_temp(tz, count, &trip_temp))
                                         ^
drivers/thermal/thermal_core.c:1569:41: note: expected ‘long unsigned int *’ but argument is of type ‘int *’

CC: <stable@vger.kernel.org> #3.18,#4.1 Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
-
Eric Dumazet authored
[ Upstream commit c2e7204d ] Tracking idle time in bictcp_cwnd_event() is imprecise, as epoch_start is normally set at ACK processing time, not at send time. Doing a proper fix would need to add an additional state variable, and does not seem worth the trouble, given CUBIC bug has been there forever before Jana noticed it. Let's simply not set epoch_start in the future, otherwise bictcp_update() could overflow and CUBIC would again grow cwnd too fast. This was detected thanks to a packetdrill test Neal wrote that was flaky before applying this fix. Fixes: 30927520 ("tcp_cubic: better follow cubic curve after idle period") Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Cc: Jana Iyengar <jri@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
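The shape of the clamp described above, as a sketch (struct bictcp is the private per-socket state inside tcp_cubic.c; only the fields needed for the illustration are assumed):

    /* Shift the epoch forward by the idle time, but never past "now":
     * an epoch_start in the future makes bictcp_update() compute a huge
     * elapsed time and inflate cwnd. */
    static void example_tx_start_event(struct bictcp *ca, u32 now)
    {
            s32 delta = now - ca->last_ack;

            if (ca->epoch_start && delta > 0) {
                    ca->epoch_start += delta;
                    if (after(ca->epoch_start, now))
                            ca->epoch_start = now;  /* clamp to the present */
            }
    }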
-
- 20 Apr, 2016 27 commits
-
Eric Dumazet authored
[ Upstream commit 30927520 ] Jana Iyengar found an interesting issue in CUBIC: the epoch is only updated/reset initially and when experiencing losses. The delta "t" of now - epoch_start can be arbitrarily large after app idle, as can the bic_target. Consequently the slope (inverse of ca->cnt) would be really large, and eventually ca->cnt would be lower-bounded in the end to 2 to have delayed-ACK slow-start behavior. This particularly shows up when slow_start_after_idle is disabled, as a dangerous cwnd inflation (1.5 x RTT) after a few seconds of idle time. Jana's initial fix was to reset epoch_start if app limited, but Neal pointed out it would ask the CUBIC algorithm to recalculate the curve so that we again start growing steeply upward from where cwnd is now (as CUBIC does just after a loss). Ideally we'd want the cwnd growth curve to be the same shape, just shifted later in time by the amount of the idle period. Reported-by: Jana Iyengar <jri@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Cc: Stephen Hemminger <stephen@networkplumber.org> Cc: Sangtae Ha <sangtae.ha@gmail.com> Cc: Lawrence Brakmo <lawrence@brakmo.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Robert Dobrowolski authored
[ Upstream commit e86103a7 ] On the BXT platform the Host Controller and the Device Controller appear as the same PCI device but with different device functions. The HCD should not pass data to the Device Controller, only to Host Controllers. Check whether the companion device is a Host Controller, otherwise skip it. Cc: <stable@vger.kernel.org> Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hans de Goede authored
[ Upstream commit 13630746 ] Add a new NO_REPORT_LUNS quirk and set it for Seagate drives with a USB ID of 0bc2:331a, as these fail to respond to a REPORT_LUNS command. Cc: stable@vger.kernel.org Reported-and-tested-by: David Webb <djw@noc.ac.uk> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
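Such quirks are normally wired up as an unusual_devs.h entry plus a flag bit; a sketch using the ID quoted above (the bcdDevice range and the vendor/product strings are placeholders):

    /* drivers/usb/storage/unusual_devs.h style entry (illustrative) */
    UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
                    "Seagate",
                    "External drive",
                    USB_SC_DEVICE, USB_PR_DEVICE, NULL,
                    US_FL_NO_REPORT_LUNS),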
-
Mathias Nyman authored
[ Upstream commit 98d74f9c ] PCI hotpluggable xhci controllers such as some Alpine Ridge solutions will remove the xhci controller from the PCI bus when the last USB device is disconnected. Add a flag to indicate that the host is being removed to avoid queueing configure_endpoint commands for the dropped endpoints. For PCI hotplugged controllers this will prevent 5 second command timeouts. For static xhci controllers the configure_endpoint command is not needed in the removal case, as everything will be returned, freed, and the controller is reset. For now the flag is only set for PCI connected host controllers. Cc: <stable@vger.kernel.org> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Roger Quadros authored
[ Upstream commit ad6b1d91 ] The problem seems to be that if a new device is detected while we have already removed the shared HCD, then many of the xhci operations (e.g. xhci_alloc_dev(), xhci_setup_device()) hang as command never completes. I don't think XHCI can operate without the shared HCD as we've already called xhci_halt() in xhci_only_stop_hcd() when shared HCD goes away. We need to prevent new commands from being queued not only when HCD is dying but also when HCD is halted. The following lockup was detected while testing the otg state machine. [ 178.199951] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.205799] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1 [ 178.214458] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220f04c hci version 0x100 quirks 0x00010010 [ 178.223619] xhci-hcd xhci-hcd.0.auto: irq 400, io mem 0x48890000 [ 178.230677] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 178.237796] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.245358] usb usb1: Product: xHCI Host Controller [ 178.250483] usb usb1: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.257783] usb usb1: SerialNumber: xhci-hcd.0.auto [ 178.267014] hub 1-0:1.0: USB hub found [ 178.272108] hub 1-0:1.0: 1 port detected [ 178.278371] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller [ 178.284171] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2 [ 178.294038] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003 [ 178.301183] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 178.308776] usb usb2: Product: xHCI Host Controller [ 178.313902] usb usb2: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd [ 178.321222] usb usb2: SerialNumber: xhci-hcd.0.auto [ 178.329061] hub 2-0:1.0: USB hub found [ 178.333126] hub 2-0:1.0: 1 port detected [ 178.567585] dwc3 48890000.usb: usb_otg_start_host 0 [ 178.572707] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 178.578064] usb usb2: USB disconnect, device number 1 [ 178.586565] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered [ 178.592585] xhci-hcd xhci-hcd.0.auto: remove, state 1 [ 178.597924] usb usb1: USB disconnect, device number 1 [ 178.603248] usb 1-1: new high-speed USB device number 2 using xhci-hcd [ 190.597337] INFO: task kworker/u4:0:6 blocked for more than 10 seconds. [ 190.604273] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.610228] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[ 190.618443] kworker/u4:0 D c05c0ac0 0 6 2 0x00000000 [ 190.625120] Workqueue: usb_otg usb_otg_work [ 190.629533] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98) [ 190.636915] [<c05c10ac>] (schedule) from [<c05c1318>] (schedule_preempt_disabled+0xc/0x10) [ 190.645591] [<c05c1318>] (schedule_preempt_disabled) from [<c05c23d0>] (mutex_lock_nested+0x1ac/0x3fc) [ 190.655353] [<c05c23d0>] (mutex_lock_nested) from [<c046cf8c>] (usb_disconnect+0x3c/0x208) [ 190.664043] [<c046cf8c>] (usb_disconnect) from [<c0470cf0>] (_usb_remove_hcd+0x98/0x1d8) [ 190.672535] [<c0470cf0>] (_usb_remove_hcd) from [<c0485da8>] (usb_otg_start_host+0x50/0xf4) [ 190.681299] [<c0485da8>] (usb_otg_start_host) from [<c04849a4>] (otg_set_protocol+0x5c/0xd0) [ 190.690153] [<c04849a4>] (otg_set_protocol) from [<c0484b88>] (otg_set_state+0x170/0xbfc) [ 190.698735] [<c0484b88>] (otg_set_state) from [<c0485740>] (otg_statemachine+0x12c/0x470) [ 190.707326] [<c0485740>] (otg_statemachine) from [<c0053c84>] (process_one_work+0x1b4/0x4a0) [ 190.716162] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c) [ 190.724742] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0) [ 190.732328] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24) [ 190.739898] 5 locks held by kworker/u4:0/6: [ 190.744274] #0: ("%s""usb_otg"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.752799] #1: ((&otgd->work)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.761326] #2: (&otgd->fsm.lock){+.+.+.}, at: [<c048562c>] otg_statemachine+0x18/0x470 [ 190.769934] #3: (usb_bus_list_lock){+.+.+.}, at: [<c0470ce8>] _usb_remove_hcd+0x90/0x1d8 [ 190.778635] #4: (&dev->mutex){......}, at: [<c046cf8c>] usb_disconnect+0x3c/0x208 [ 190.786700] INFO: task kworker/1:0:14 blocked for more than 10 seconds. [ 190.793633] Not tainted 4.0.0-rc1-00024-g6111320 #1058 [ 190.799567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[ 190.807783] kworker/1:0 D c05c0ac0 0 14 2 0x00000000 [ 190.814457] Workqueue: usb_hub_wq hub_event [ 190.818866] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98) [ 190.826252] [<c05c10ac>] (schedule) from [<c05c4e40>] (schedule_timeout+0x13c/0x1ec) [ 190.834377] [<c05c4e40>] (schedule_timeout) from [<c05c19f0>] (wait_for_common+0xbc/0x150) [ 190.843062] [<c05c19f0>] (wait_for_common) from [<bf068a3c>] (xhci_setup_device+0x164/0x5cc [xhci_hcd]) [ 190.852986] [<bf068a3c>] (xhci_setup_device [xhci_hcd]) from [<c046b7f4>] (hub_port_init+0x3f4/0xb10) [ 190.862667] [<c046b7f4>] (hub_port_init) from [<c046eb64>] (hub_event+0x704/0x1018) [ 190.870704] [<c046eb64>] (hub_event) from [<c0053c84>] (process_one_work+0x1b4/0x4a0) [ 190.878919] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c) [ 190.887503] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0) [ 190.895076] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24) [ 190.902650] 5 locks held by kworker/1:0/14: [ 190.907023] #0: ("usb_hub_wq"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.915454] #1: ((&hub->events)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0 [ 190.924070] #2: (&dev->mutex){......}, at: [<c046e490>] hub_event+0x30/0x1018 [ 190.931768] #3: (&port_dev->status_lock){+.+.+.}, at: [<c046eb50>] hub_event+0x6f0/0x1018 [ 190.940558] #4: (&bus->usb_address0_mutex){+.+.+.}, at: [<c046b458>] hub_port_init+0x58/0xb10 Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Lu Baolu authored
[ Upstream commit 71504062 ] This patch fixes some wild pointers produced by xhci_mem_cleanup. These wild pointers will cause system crash if xhci_mem_cleanup() is called twice. Reported-and-tested-by: Pengcheng Li <lpc.li@hisilicon.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Yoshihiro Shimoda authored
[ Upstream commit 0a380be8 ] On some xHCI controllers (e.g. R-Car SoCs), the AC64 bit (bit 0) of HCCPARAMS1 is set to 1. However, these xHCs don't actually support 64-bit address memory pointers. So, in this case, the driver should call dma_set_coherent_mask(dev, DMA_BIT_MASK(32)) in xhci_gen_setup(). Otherwise, the xHCI controller will die after a USB device is connected when running on a system with more than 4GB of physical memory. So, this patch adds a new quirk XHCI_NO_64BIT_SUPPORT to resolve such an issue. Cc: <stable@vger.kernel.org> Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
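A sketch of how such a quirk typically gates the DMA mask selection (HCC_64BIT_ADDR and the xhci structures are the real ones; the helper wrapping them is illustrative):

    #include <linux/dma-mapping.h>

    static void example_set_dma_mask(struct xhci_hcd *xhci, struct device *dev)
    {
            /* Even when HCCPARAMS1 advertises AC64, honour the quirk and
             * stay on 32-bit DMA so all descriptors land below 4GB. */
            if (HCC_64BIT_ADDR(xhci->hcc_params) &&
                !(xhci->quirks & XHCI_NO_64BIT_SUPPORT))
                    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
            else
                    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
    }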
-
Mathias Nyman authored
[ Upstream commit 671ffdff ] Give USB3 devices a better chance to enumerate at USB 3 speeds if they are connected to a suspended host. Solves an issue with NEC uPD720200 host hanging when partially enumerating a USB3 device as USB2 after host controller runtime resume. Cc: <stable@vger.kernel.org> Tested-by: Mike Murdoch <main.haarp@gmail.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Rafal Redzimski authored
[ Upstream commit 0d46faca ] Broxton B0 also requires XHCI_PME_STUCK_QUIRK. Add the PCI device ID for Broxton B and apply the quirk to it. Cc: <stable@vger.kernel.org> Signed-off-by: Rafal Redzimski <rafal.f.redzimski@intel.com> Signed-off-by: Robert Dobrowolski <robert.dobrowolski@linux.intel.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Rui Salvaterra authored
[ Upstream commit 3e26a691 ] Based on Sergey's test patch [1], this fixes zram with lz4 compression on big endian cpus. Note that the 64-bit preprocessor test is not a cleanup, it's part of the fix, since those identifiers are bogus (for example, __ppc64__ isn't defined anywhere else in the kernel, which means we'd fall into the 32-bit definitions on ppc64). Tested on ppc64 with no regression on x86_64. [1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4 Cc: stable@vger.kernel.org Suggested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Rui Salvaterra <rsalvaterra@gmail.com> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
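The portable pattern looks roughly like this (sketch; the EXAMPLE_ names are invented here, the helpers are the standard kernel ones):

    #include <linux/types.h>
    #include <asm/unaligned.h>

    /* Key 64-bit builds off CONFIG_64BIT, which Kbuild defines on every
     * 64-bit architecture, instead of ad-hoc macros like __ppc64__. */
    #ifdef CONFIG_64BIT
    #define EXAMPLE_LZ4_ARCH64 1
    #else
    #define EXAMPLE_LZ4_ARCH64 0
    #endif

    /* Read on-disk little-endian values through the endian-aware helper
     * rather than casting and dereferencing raw pointers. */
    static inline u16 example_read_le16(const void *p)
    {
            return get_unaligned_le16(p);
    }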
-
Andy Shevchenko authored
[ Upstream commit 3fe6409c ] The commit 89500520 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage") cleaned up the code to avoid usage of the deprecated slave_id member of the generic slave configuration. Meanwhile it broke master selection by removing an important call to dwc_set_masters() in ->device_alloc_chan_resources(), which copied masters from the custom slave configuration to the internal channel structure. Everything worked until now since there is no customized connection of the DesignWare DMA IP to the bus, i.e. one bus and one or more masters are in use. The configurations where 2 masters are connected to the different masters are not working anymore. We are expecting one user of such a configuration and need to select masters properly. Besides that, it is obviously a performance regression since only one master is in use in a multi-master configuration. Select masters in accordance with what the user asked for. Keep this patch in a form more suitable for backporting. We are safe to take the necessary data in ->device_alloc_chan_resources() because we don't support a generic slave configuration embedded into a custom one, and thus the only way to provide such is to use the parameter to a filter function which is called exactly before channel resource allocation. While here, replace BUG_ON with a less noisy dev_warn() and prevent channel allocation in case of error. Fixes: 89500520 ("dmaengine: dw: apply both HS interfaces and remove slave_id usage") Cc: stable@vger.kernel.org Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Seth Forshee authored
[ Upstream commit 87243deb ] Starting with 4.1 the tracing subsystem has its own filesystem which is automounted in the tracing subdirectory of debugfs. Prior to this debugfs could be bind mounted in a cloned mount namespace, but if tracefs has been mounted under debugfs this now fails because there is a locked child mount. This creates a regression for container software which bind mounts debugfs to satisfy the assumption of some userspace software. In other pseudo filesystems such as proc and sysfs we're already creating mountpoints like this in such a way that no dirents can be created in the directories, allowing them to be exceptions to some MNT_LOCKED tests. In fact we already do this for the tracefs mountpoint in sysfs. Do the same in debugfs_create_automount(), since the intention here is clearly to create a mountpoint. This fixes the regression, as locked child mounts on permanently empty directories do not cause a bind mount to fail. Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Acked-by: Serge Hallyn <serge.hallyn@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Kailang Yang authored
[ Upstream commit adcdd0d5 ] This is a workaround for Dell USB dock audio. It fixes the master volume being kept too low. [Some background: the patch essentially skips the controls of a couple of FU volumes. Although the firmware exposes the dB and the value information via the usb descriptor, changing the values (we set the min volume as default) screws up the device. Although this has been fixed in the newer firmware, the devices are shipped with the old firmware, thus we need the workaround in the driver side. -- tiwai] Signed-off-by: Kailang Yang <kailang@realtek.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Sven Eckelmann authored
[ Upstream commit c636b95e ] The Lenovo Thinkpad T460s requires the alc_fixup_tpt440_dock as well in order to get working sound output on the docking stations headphone jack. Patch tested on a Thinkpad T460s (20F9CT01WW) using a ThinkPad Ultradock on kernel 4.4.6. Signed-off-by: Sven Eckelmann <sven@narfation.org> Tested-by: Simon Wunderlich <sw@simonwunderlich.de> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Hui Wang authored
[ Upstream commit e549d190 ] The front mic jack (pink color) can't detect any plug or unplug. After applying this fix, both detecting function and recording function work well. BugLink: https://bugs.launchpad.net/bugs/1564712 Cc: stable@vger.kernel.org Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
David Matlack authored
[ Upstream commit fc5b7f3b ] An interrupt handler that uses the fpu can kill a KVM VM, if it runs under the following conditions: - the guest's xcr0 register is loaded on the cpu - the guest's fpu context is not loaded - the host is using eagerfpu Note that the guest's xcr0 register and fpu context are not loaded as part of the atomic world switch into "guest mode". They are loaded by KVM while the cpu is still in "host mode". Usage of the fpu in interrupt context is gated by irq_fpu_usable(). The interrupt handler will look something like this: if (irq_fpu_usable()) { kernel_fpu_begin(); [... code that uses the fpu ...] kernel_fpu_end(); } As long as the guest's fpu is not loaded and the host is using eager fpu, irq_fpu_usable() returns true (interrupted_kernel_fpu_idle() returns true). The interrupt handler proceeds to use the fpu with the guest's xcr0 live. kernel_fpu_begin() saves the current fpu context. If this uses XSAVE[OPT], it may leave the xsave area in an undesirable state. According to the SDM, during XSAVE bit i of XSTATE_BV is not modified if bit i is 0 in xcr0. So it's possible that XSTATE_BV[i] == 1 and xcr0[i] == 0 following an XSAVE. kernel_fpu_end() restores the fpu context. Now if any bit i in XSTATE_BV == 1 while xcr0[i] == 0, XRSTOR generates a #GP. The fault is trapped and SIGSEGV is delivered to the current process. Only pre-4.2 kernels appear to be vulnerable to this sequence of events. Commit 653f52c3 ("kvm,x86: load guest FPU context more eagerly") from 4.2 forces the guest's fpu to always be loaded on eagerfpu hosts. This patch fixes the bug by keeping the host's xcr0 loaded outside of the interrupts-disabled region where KVM switches into guest mode. Cc: stable@vger.kernel.org Suggested-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: David Matlack <dmatlack@google.com> [Move load after goto cancel_injection. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Helge Deller authored
[ Upstream commit 2ef4dfd9 ] Handling exceptions from modules never worked on parisc. It was just masked by the fact that exceptions from modules don't happen during normal use. When a module triggers an exception in get_user() we need to load the main kernel dp value before accessing the exception_data structure, and afterwards restore the original dp value of the module on exit. Noticed-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Helge Deller authored
[ Upstream commit ef72f311 ] The kernel module testcase (lib/test_user_copy.c) exhibited a kernel crash on parisc if the parameters for copy_from_user were reversed ("illegal reversed copy_to_user" testcase). Fix this potential crash by checking the fault handler if the faulting address is in the exception table. Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Helge Deller authored
[ Upstream commit e3893027 ] We want to avoid the kernel module loader to create function pointers for the kernel fixup routines of get_user() and put_user(). Changing the external reference from function type to int type fixes this. This unbreaks exception handling for get_user() and put_user() when called from a kernel module. Signed-off-by: Helge Deller <deller@gmx.de> Cc: stable@vger.kernel.org Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Yong Li authored
[ Upstream commit 9b8e3ec3 ] The current implementation only uses the first byte in val, the second byte is always 0. Change it to use cpu_to_le16 to write the two bytes into the register Cc: stable@vger.kernel.org Signed-off-by: Yong Li <sdliyong@gmail.com> Reviewed-by: Phil Reid <preid@electromag.com.au> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
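Roughly the shape of the fix, as a sketch (an SMBus block write is one way to push both bytes; the real driver may use a different accessor):

    #include <linux/kernel.h>
    #include <linux/i2c.h>

    static int example_write_reg16(struct i2c_client *client, u8 reg, u16 val)
    {
            __le16 le_val = cpu_to_le16(val);       /* both bytes, LE order */

            return i2c_smbus_write_i2c_block_data(client, reg,
                                                  sizeof(le_val),
                                                  (u8 *)&le_val);
    }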
-
Michael S. Tsirkin authored
[ Upstream commit 05dbcb43 ] The spec says: after writing 0 to device_status, the driver MUST wait for a read of device_status to return 0 before reinitializing the device. Cc: stable@vger.kernel.org Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
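A sketch of the wait the spec sentence above calls for, using the generic virtio config ops (the polling interval is arbitrary):

    #include <linux/delay.h>
    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    static void example_reset_and_wait(struct virtio_device *vdev)
    {
            vdev->config->reset(vdev);              /* writes 0 to device_status */

            /* Don't reinitialize until the device reads back status 0. */
            while (vdev->config->get_status(vdev) != 0)
                    msleep(1);
    }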
-
Bjørn Mork authored
[ Upstream commit d48d5691 ] Thomas reports:

"Windows:
00 diagnostics
01 modem
02 at-port
03 nmea
04 nic

Linux:
T:  Bus=02 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#=  4 Spd=480 MxCh= 0
D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=2001 ProdID=7e19 Rev=02.32
S:  Manufacturer=Mobile Connect
S:  Product=Mobile Connect
S:  SerialNumber=0123456789ABCDEF
C:  #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA
I:  If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
I:  If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I:  If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I:  If#= 5 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage"

Reported-by: Thomas Schäfer <tschaefer@t-online.de> Cc: <stable@vger.kernel.org> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
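Adding such a modem usually comes down to a new entry in the driver's ID table; a sketch with the IDs from the log above (the EXAMPLE_ names are invented; the class filter excludes the storage interface, and a real driver would also need to avoid binding the qmi_wwan network interface):

    #include <linux/usb.h>
    #include <linux/module.h>

    #define EXAMPLE_DLINK_VENDOR_ID     0x2001
    #define EXAMPLE_DLINK_DWM221_B1     0x7e19

    static const struct usb_device_id example_serial_ids[] = {
            { USB_DEVICE_INTERFACE_CLASS(EXAMPLE_DLINK_VENDOR_ID,
                                         EXAMPLE_DLINK_DWM221_B1, 0xff) },
            { }     /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, example_serial_ids);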
-
Martyn Welch authored
[ Upstream commit cddc9434 ] The CP2105 is used in the GE Healthcare Remote Alarm Box, with the Manufacturer ID of 0x1901 and Product ID of 0x0194. Signed-off-by: Martyn Welch <martyn.welch@collabora.co.uk> Cc: stable <stable@vger.kernel.org> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Josh Boyer authored
[ Upstream commit ea6db90e ] A Fedora user reports that the ftdi_sio driver works properly for the ICP DAS I-7561U device. Further, the user manual for these devices instructs users to load the driver and add the ids using the sysfs interface. Add support for these in the driver directly so that the devices work out of the box instead of needing manual configuration. Reported-by: <thesource@mail.ru> CC: stable <stable@vger.kernel.org> Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Filipe Manana authored
[ Upstream commit 56f23fdb ] If we rename an inode A (be it a file or a directory), create a new inode B with the old name of inode A and under the same parent directory, fsync inode B and then power fail, at log tree replay time we end up removing inode A completely. If inode A is a directory then all its files are gone too. Example scenarios where this happens: This is reproducible with the following steps, taken from a couple of test cases written for fstests which are going to be submitted upstream soon: # Scenario 1 mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir -p /mnt/a/x echo "hello" > /mnt/a/x/foo echo "world" > /mnt/a/x/bar sync mv /mnt/a/x /mnt/a/y mkdir /mnt/a/x xfs_io -c fsync /mnt/a/x <power failure happens> The next time the fs is mounted, log tree replay happens and the directory "y" does not exist nor do the files "foo" and "bar" exist anywhere (neither in "y" nor in "x", nor the root nor anywhere). # Scenario 2 mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir /mnt/a echo "hello" > /mnt/a/foo sync mv /mnt/a/foo /mnt/a/bar echo "world" > /mnt/a/foo xfs_io -c fsync /mnt/a/foo <power failure happens> The next time the fs is mounted, log tree replay happens and the file "bar" does not exists anymore. A file with the name "foo" exists and it matches the second file we created. Another related problem that does not involve file/data loss is when a new inode is created with the name of a deleted snapshot and we fsync it: mkfs.btrfs -f /dev/sdc mount /dev/sdc /mnt mkdir /mnt/testdir btrfs subvolume snapshot /mnt /mnt/testdir/snap btrfs subvolume delete /mnt/testdir/snap rmdir /mnt/testdir mkdir /mnt/testdir xfs_io -c fsync /mnt/testdir # or fsync some file inside /mnt/testdir <power failure> The next time the fs is mounted the log replay procedure fails because it attempts to delete the snapshot entry (which has dir item key type of BTRFS_ROOT_ITEM_KEY) as if it were a regular (non-root) entry, resulting in the following error that causes mount to fail: [52174.510532] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257 [52174.512570] ------------[ cut here ]------------ [52174.513278] WARNING: CPU: 12 PID: 28024 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x178/0x351 [btrfs]() [52174.514681] BTRFS: Transaction aborted (error -2) [52174.515630] Modules linked in: btrfs dm_flakey dm_mod overlay crc32c_generic ppdev xor raid6_pq acpi_cpufreq parport_pc tpm_tis sg parport tpm evdev i2c_piix4 proc [52174.521568] CPU: 12 PID: 28024 Comm: mount Tainted: G W 4.5.0-rc6-btrfs-next-27+ #1 [52174.522805] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014 [52174.524053] 0000000000000000 ffff8801df2a7710 ffffffff81264e93 ffff8801df2a7758 [52174.524053] 0000000000000009 ffff8801df2a7748 ffffffff81051618 ffffffffa03591cd [52174.524053] 00000000fffffffe ffff88015e6e5000 ffff88016dbc3c88 ffff88016dbc3c88 [52174.524053] Call Trace: [52174.524053] [<ffffffff81264e93>] dump_stack+0x67/0x90 [52174.524053] [<ffffffff81051618>] warn_slowpath_common+0x99/0xb2 [52174.524053] [<ffffffffa03591cd>] ? __btrfs_unlink_inode+0x178/0x351 [btrfs] [52174.524053] [<ffffffff81051679>] warn_slowpath_fmt+0x48/0x50 [52174.524053] [<ffffffffa03591cd>] __btrfs_unlink_inode+0x178/0x351 [btrfs] [52174.524053] [<ffffffff8118f5e9>] ? 
iput+0xb0/0x284 [52174.524053] [<ffffffffa0359fe8>] btrfs_unlink_inode+0x1c/0x3d [btrfs] [52174.524053] [<ffffffffa038631e>] check_item_in_log+0x1fe/0x29b [btrfs] [52174.524053] [<ffffffffa0386522>] replay_dir_deletes+0x167/0x1cf [btrfs] [52174.524053] [<ffffffffa038739e>] fixup_inode_link_count+0x289/0x2aa [btrfs] [52174.524053] [<ffffffffa038748a>] fixup_inode_link_counts+0xcb/0x105 [btrfs] [52174.524053] [<ffffffffa038a5ec>] btrfs_recover_log_trees+0x258/0x32c [btrfs] [52174.524053] [<ffffffffa03885b2>] ? replay_one_extent+0x511/0x511 [btrfs] [52174.524053] [<ffffffffa034f288>] open_ctree+0x1dd4/0x21b9 [btrfs] [52174.524053] [<ffffffffa032b753>] btrfs_mount+0x97e/0xaed [btrfs] [52174.524053] [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf [52174.524053] [<ffffffff8117bafa>] mount_fs+0x67/0x131 [52174.524053] [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde [52174.524053] [<ffffffffa032af81>] btrfs_mount+0x1ac/0xaed [btrfs] [52174.524053] [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf [52174.524053] [<ffffffff8108c262>] ? lockdep_init_map+0xb9/0x1b3 [52174.524053] [<ffffffff8117bafa>] mount_fs+0x67/0x131 [52174.524053] [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde [52174.524053] [<ffffffff8119590f>] do_mount+0x8a6/0x9e8 [52174.524053] [<ffffffff811358dd>] ? strndup_user+0x3f/0x59 [52174.524053] [<ffffffff81195c65>] SyS_mount+0x77/0x9f [52174.524053] [<ffffffff814935d7>] entry_SYSCALL_64_fastpath+0x12/0x6b [52174.561288] ---[ end trace 6b53049efb1a3ea6 ]--- Fix this by forcing a transaction commit when such cases happen. This means we check in the commit root of the subvolume tree if there was any other inode with the same reference when the inode we are fsync'ing is a new inode (created in the current transaction). Test cases for fstests, covering all the scenarios given above, were submitted upstream for fstests: * fstests: generic test for fsync after renaming directory https://patchwork.kernel.org/patch/8694281/ * fstests: generic test for fsync after renaming file https://patchwork.kernel.org/patch/8694301/ * fstests: add btrfs test for fsync after snapshot deletion https://patchwork.kernel.org/patch/8670671/ Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Filipe Manana authored
[ Upstream commit a89ca6f2 ] When we have the no_holes feature enabled, if a we truncate a file to a smaller size, truncate it again but to a size greater than or equals to its original size and fsync it, the log tree will not have any information about the hole covering the range [truncate_1_offset, new_file_size[. Which means if the fsync log is replayed, the file will remain with the state it had before both truncate operations. Without the no_holes feature this does not happen, since when the inode is logged (full sync flag is set) it will find in the fs/subvol tree a leaf with a generation matching the current transaction id that has an explicit extent item representing the hole. Fix this by adding an explicit extent item representing a hole between the last extent and the inode's i_size if we are doing a full sync. The issue is easy to reproduce with the following test case for fstests: . ./common/rc . ./common/filter . ./common/dmflakey _need_to_be_root _supported_fs generic _supported_os Linux _require_scratch _require_dm_flakey # This test was motivated by an issue found in btrfs when the btrfs # no-holes feature is enabled (introduced in kernel 3.14). So enable # the feature if the fs being tested is btrfs. if [ $FSTYP == "btrfs" ]; then _require_btrfs_fs_feature "no_holes" _require_btrfs_mkfs_feature "no-holes" MKFS_OPTIONS="$MKFS_OPTIONS -O no-holes" fi rm -f $seqres.full _scratch_mkfs >>$seqres.full 2>&1 _init_flakey _mount_flakey # Create our test files and make sure everything is durably persisted. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K" \ -c "pwrite -S 0xbb 64K 61K" \ $SCRATCH_MNT/foo | _filter_xfs_io $XFS_IO_PROG -f -c "pwrite -S 0xee 0 64K" \ -c "pwrite -S 0xff 64K 61K" \ $SCRATCH_MNT/bar | _filter_xfs_io sync # Now truncate our file foo to a smaller size (64Kb) and then truncate # it to the size it had before the shrinking truncate (125Kb). Then # fsync our file. If a power failure happens after the fsync, we expect # our file to have a size of 125Kb, with the first 64Kb of data having # the value 0xaa and the second 61Kb of data having the value 0x00. $XFS_IO_PROG -c "truncate 64K" \ -c "truncate 125K" \ -c "fsync" \ $SCRATCH_MNT/foo # Do something similar to our file bar, but the first truncation sets # the file size to 0 and the second truncation expands the size to the # double of what it was initially. $XFS_IO_PROG -c "truncate 0" \ -c "truncate 253K" \ -c "fsync" \ $SCRATCH_MNT/bar _load_flakey_table $FLAKEY_DROP_WRITES _unmount_flakey # Allow writes again, mount to trigger log replay and validate file # contents. _load_flakey_table $FLAKEY_ALLOW_WRITES _mount_flakey # We expect foo to have a size of 125Kb, the first 64Kb of data all # having the value 0xaa and the remaining 61Kb to be a hole (all bytes # with value 0x00). echo "File foo content after log replay:" od -t x1 $SCRATCH_MNT/foo # We expect bar to have a size of 253Kb and no extents (any byte read # from bar has the value 0x00). 
echo "File bar content after log replay:" od -t x1 $SCRATCH_MNT/bar status=0 exit The expected file contents in the golden output are: File foo content after log replay: 0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa * 0200000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0372000 File bar content after log replay: 0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0772000 Without this fix, their contents are: File foo content after log replay: 0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa * 0200000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb * 0372000 File bar content after log replay: 0000000 ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee * 0200000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff * 0372000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 * 0772000 A test case submission for fstests follows soon. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-
Filipe Manana authored
[ Upstream commit 36283bf7 ] After commit 4f764e51 ("Btrfs: remove deleted xattrs on fsync log replay"), we can end up in a situation where during log replay we end up deleting xattrs that were never deleted when their file was last fsynced. This happens in the fast fsync path (flag BTRFS_INODE_NEEDS_FULL_SYNC is not set in the inode) if the inode has the flag BTRFS_INODE_COPY_EVERYTHING set, the xattr was added in a past transaction and the leaf where the xattr is located was not updated (COWed or created) in the current transaction. In this scenario the xattr item never ends up in the log tree and therefore at log replay time, which makes the replay code delete the xattr from the fs/subvol tree as it thinks that xattr was deleted prior to the last fsync. Fix this by always logging all xattrs, which is the simplest and most reliable way to detect deleted xattrs and replay the deletes at log replay time. This issue is reproducible with the following test case for fstests: seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" here=`pwd` tmp=/tmp/$$ status=1 # failure is the default! _cleanup() { _cleanup_flakey rm -f $tmp.* } trap "_cleanup; exit \$status" 0 1 2 3 15 # get standard environment, filters and checks . ./common/rc . ./common/filter . ./common/dmflakey . ./common/attr # real QA test starts here # We create a lot of xattrs for a single file. Only btrfs and xfs are currently # able to store such a large mount of xattrs per file, other filesystems such # as ext3/4 and f2fs for example, fail with ENOSPC even if we attempt to add # less than 1000 xattrs with very small values. _supported_fs btrfs xfs _supported_os Linux _need_to_be_root _require_scratch _require_dm_flakey _require_attrs _require_metadata_journaling $SCRATCH_DEV rm -f $seqres.full _scratch_mkfs >> $seqres.full 2>&1 _init_flakey _mount_flakey # Create the test file with some initial data and make sure everything is # durably persisted. $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" $SCRATCH_MNT/foo | _filter_xfs_io sync # Add many small xattrs to our file. # We create such a large amount because it's needed to trigger the issue found # in btrfs - we need to have an amount that causes the fs to have at least 3 # btree leafs with xattrs stored in them, and it must work on any leaf size # (maximum leaf/node size is 64Kb). num_xattrs=2000 for ((i = 1; i <= $num_xattrs; i++)); do name="user.attr_$(printf "%04d" $i)" $SETFATTR_PROG -n $name -v "val_$(printf "%04d" $i)" $SCRATCH_MNT/foo done # Sync the filesystem to force a commit of the current btrfs transaction, this # is a necessary condition to trigger the bug on btrfs. sync # Now update our file's data and fsync the file. # After a successful fsync, if the fsync log/journal is replayed we expect to # see all the xattrs we added before with the same values (and the updated file # data of course). Btrfs used to delete some of these xattrs when it replayed # its fsync log/journal. $XFS_IO_PROG -c "pwrite -S 0xbb 8K 16K" \ -c "fsync" \ $SCRATCH_MNT/foo | _filter_xfs_io # Simulate a crash/power loss. _load_flakey_table $FLAKEY_DROP_WRITES _unmount_flakey # Allow writes again and mount. This makes the fs replay its fsync log. 
_load_flakey_table $FLAKEY_ALLOW_WRITES _mount_flakey echo "File content after crash and log replay:" od -t x1 $SCRATCH_MNT/foo echo "File xattrs after crash and log replay:" for ((i = 1; i <= $num_xattrs; i++)); do name="user.attr_$(printf "%04d" $i)" echo -n "$name=" $GETFATTR_PROG --absolute-names -n $name --only-values $SCRATCH_MNT/foo echo done status=0 exit The golden output expects all xattrs to be available, and with the correct values, after the fsync log is replayed. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
-