- 28 Oct, 2016 40 commits
-
Steffen Maier authored
commit 771bf035 upstream.

With commit 2c55b750 ("[SCSI] zfcp: Redesign of the debug tracing for SAN records.") we lost the N_Port-ID where an ELS response comes from. With commit 7c7dc196 ("[SCSI] zfcp: Simplify handling of ct and els requests") we lost the N_Port-ID where a CT response comes from. This is especially useful if the request SAN trace record with the D_ID was already lost due to trace buffer wrap.

GS uses an open WKA port handle and ELS just a D_ID, and only for ELS could we get the D_ID from the QTCB bottom via zfcp_fsf_req. To cover both cases, add a new field to zfcp_fsf_ct_els and fill it in on request, to be used in the SAN response trace.

Strictly speaking, the D_ID on SAN response is the FC frame's S_ID. We don't need a field for the other end, which is always us.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 2c55b750 ("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
Fixes: 7c7dc196 ("[SCSI] zfcp: Simplify handling of ct and els requests")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
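For reference, a minimal sketch of the change described above (struct layout abbreviated; treat the member names other than the new d_id as assumptions, not the literal patch):

    /* drivers/s390/scsi/zfcp_fsf.h, abbreviated sketch */
    struct zfcp_fsf_ct_els {
            struct scatterlist *req;
            struct scatterlist *resp;
            void (*handler)(void *);
            void *handler_data;
            u32 d_id;   /* new: peer N_Port-ID remembered at request time,
                         * reused for the SAN response trace record */
    };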
-
Steffen Maier authored
commit 7c964ffe upstream.

This information was lost with commit a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.") but is required to debug e.g. invalid handle situations.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit d27a7cb9 upstream.

Since commit a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.") HBA records no longer contain WWPN, D_ID, or LUN, to reduce duplicate information which is already in REC records.

In contrast to "regular" target ports, we don't use recovery to open WKA ports such as directory/nameserver, so we don't get REC records. Therefore, introduce pseudo REC running records without any actual recovery action but including the D_ID of the WKA port on open/close.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit 0102a30a upstream.

Bring back commit d21e9daa ("[SCSI] zfcp: Dont use 0 to indicate invalid LUN in rec trace"), which was lost with commit ae0904f6 ("[SCSI] zfcp: Redesign of the debug tracing for recovery actions.").

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: ae0904f6 ("[SCSI] zfcp: Redesign of the debug tracing for recovery actions.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit 35f040df upstream.

While retaining the actual filtering according to trace level, the following commits started to write such filtered records with a hardcoded record level of 1 instead of the actual record level:

commit 250a1352 ("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
commit a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")

Now we can distinguish written records again for offline level filtering.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 250a1352 ("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
Fixes: a54ca0f6 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit 4eeaa4f3 upstream.

On a successful end of reopen port forced, zfcp_erp_strategy_followup_success() re-uses the port erp_action, and the subsequent zfcp_erp_action_cleanup() now sees ZFCP_ERP_SUCCEEDED with erp_action->action == ZFCP_ERP_ACTION_REOPEN_PORT instead of ZFCP_ERP_ACTION_REOPEN_PORT_FORCED, but must not perform zfcp_scsi_schedule_rport_register(). We can detect this because the fresh port reopen erp_action is in its very first step, ZFCP_ERP_STEP_UNINITIALIZED.

Otherwise this opens a time window with an unblocked rport (until the followup port reopen recovery would block it again). If a scsi_cmnd timeout occurs during this time window, fc_timed_out() cannot work as desired and such a command would indeed time out and trigger scsi_eh. This prevents a clean and timely path failover. This should not happen if the path issue can be recovered on the FC transport layer, such as path issues involving RSCNs. Also, unnecessary and repeated DID_IMM_RETRY for pending and undesired new requests occurs, because internally zfcp still has its zfcp_port blocked.

As follow-on errors with scsi_eh, this can cause, in the worst case, permanently lost paths due to one of:

sd <scsidev>: [<scsidisk>] Medium access timeout failure. Offlining disk!
sd <scsidev>: Device offlined - not ready after error recovery

For fix validation and to aid future debugging with other recoveries, we now also trace (un)blocking of rports.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 5767620c ("[SCSI] zfcp: Do not unblock rport from REOPEN_PORT_FORCED")
Fixes: a2fa0aed ("[SCSI] zfcp: Block FC transport rports early on errors")
Fixes: 5f852be9 ("[SCSI] zfcp: Fix deadlock between zfcp ERP and SCSI")
Fixes: 338151e0 ("[SCSI] zfcp: make use of fc_remote_port_delete when target port is unavailable")
Fixes: 3859f6a2 ("[PATCH] zfcp: add rports to enable scsi_add_device to work again")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit 70369f8e upstream.

In the hardware data router case, introduced with kernel 3.2 commit 86a9668a ("[SCSI] zfcp: support for hardware data router"), the ELS/GS request & response length needs to be initialized as in the chained SBAL case. Otherwise, the FCP channel rejects ELS requests with FSF_REQUEST_SIZE_TOO_LARGE.

Such ELS requests can be issued by user space through BSG / HBA API, or zfcp itself uses ADISC ELS for the remote port link test on RSCN. The latter can cause a short path outage due to unnecessary remote target port recovery, because the always-failing ADISC cannot detect extremely short path interruptions beyond the local FCP channel.

Below example is decoded with zfcpdbf from s390-tools:

Timestamp      : ...
Area           : SAN
Subarea        : 00
Level          : 1
Exception      : -
CPU id         : ..
Caller         : zfcp_dbf_san_req+0408
Record id      : 1
Tag            : fssels1
Request id     : 0x<reqid>
Destination ID : 0x00<target d_id>
Payload info   : 52000000 00000000 <our wwpn >       [ADISC]
                 <our wwnn >       00<s_id> 00000000
                 00000000 00000000 00000000 00000000
                 00000000

Timestamp      : ...
Area           : HBA
Subarea        : 00
Level          : 1
Exception      : -
CPU id         : ..
Caller         : zfcp_dbf_hba_fsf_res+0740
Record id      : 1
Tag            : fs_ferr
Request id     : 0x<reqid>
Request status : 0x00000010
FSF cmnd       : 0x0000000b            [FSF_QTCB_SEND_ELS]
FSF sequence no: 0x...
FSF issued     : ...
FSF stat       : 0x00000061            [FSF_REQUEST_SIZE_TOO_LARGE]
FSF stat qual  : 00000000 00000000 00000000 00000000
Prot stat      : 0x00000100
Prot stat qual : 00000000 00000000 00000000 00000000

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 86a9668a ("[SCSI] zfcp: support for hardware data router")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Steffen Maier authored
commit bd77befa upstream.

For an NPIV-enabled FCP device, zfcp can erroneously show "NPort (fabric via point-to-point)" instead of "NPIV VPORT" for the port_type sysfs attribute of the corresponding fc_host. s390-tools that can be affected are dbginfo.sh and ziomon.

zfcp_fsf_exchange_config_evaluate() ignores fsf_qtcb_bottom_config.connection_features indicating NPIV and only sets fc_host_port_type to FC_PORTTYPE_NPORT if fsf_qtcb_bottom_config.fc_topology is FSF_TOPO_FABRIC. Only the independent zfcp_fsf_exchange_port_evaluate() evaluates connection_features to overwrite fc_host_port_type to FC_PORTTYPE_NPIV in case of NPIV. The code was introduced with upstream kernel 2.6.30 commit 0282985d ("[SCSI] zfcp: Report fc_host_port_type as NPIV").

This works during FCP device recovery (such as set online) because it performs FSF_QTCB_EXCHANGE_CONFIG_DATA followed by FSF_QTCB_EXCHANGE_PORT_DATA in sequence. However, the zfcp-specific scsi host sysfs attributes "requests", "megabytes", or "seconds_active" trigger only zfcp_fsf_exchange_config_evaluate(), resetting fc_host port_type to FC_PORTTYPE_NPORT despite NPIV. The zfcp-specific scsi host sysfs attribute "utilization" triggers only zfcp_fsf_exchange_port_evaluate(), correcting the fc_host port_type again in case of NPIV.

Evaluate fsf_qtcb_bottom_config.connection_features in zfcp_fsf_exchange_config_evaluate(), where it belongs.

Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 0282985d ("[SCSI] zfcp: Report fc_host_port_type as NPIV")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Richard Weinberger authored
commit 23654188 upstream.

When Fastmap is used we can face an -EBADMSG here, since Fastmap cannot know about unmaps. If the erasure was interrupted, the PEB may show ECC errors, and UBI would go into ro-mode as it assumes that the PEB was checked at attach time, which is not the case with Fastmap.

Fixes: dbb7d2a8 ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Laurent Dufour authored
commit 05af40e8 upstream.

This commit fixes a stack corruption in the pseries-specific code dealing with huge pages. In __pSeries_lpar_hugepage_invalidate() the buffer used to pass arguments to the hypervisor is not large enough. This leads to a stack corruption where a previously saved register could be corrupted, leading to an unexpected result in the caller, like the following panic:

Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: virtio_balloon ip_tables x_tables autofs4 virtio_blk 8139too virtio_pci virtio_ring 8139cp virtio
CPU: 11 PID: 1916 Comm: mmstress Not tainted 4.8.0 #76
task: c000000005394880 task.stack: c000000005570000
NIP: c00000000027bf6c LR: c00000000027bf64 CTR: 0000000000000000
REGS: c000000005573820 TRAP: 0300 Not tainted (4.8.0)
MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 84822884 XER: 20000000
CFAR: c00000000010a924 DAR: 420000000014e5e0 DSISR: 40000000 SOFTE: 1
GPR00: c00000000027bf64 c000000005573aa0 c000000000e02800 c000000004447964
GPR04: c00000000404de18 c000000004d38810 00000000042100f5 00000000f5002104
GPR08: e0000000f5002104 0000000000000001 042100f5000000e0 00000000042100f5
GPR12: 0000000000002200 c00000000fe02c00 c00000000404de18 0000000000000000
GPR16: c1ffffffffffe7ff 00003fff62000000 420000000014e5e0 00003fff63000000
GPR20: 0008000000000000 c0000000f7014800 0405e600000000e0 0000000000010000
GPR24: c000000004d38810 c000000004447c10 c00000000404de18 c000000004447964
GPR28: c000000005573b10 c000000004d38810 00003fff62000000 420000000014e5e0
NIP [c00000000027bf6c] zap_huge_pmd+0x4c/0x470
LR [c00000000027bf64] zap_huge_pmd+0x44/0x470
Call Trace:
[c000000005573aa0] [c00000000027bf64] zap_huge_pmd+0x44/0x470 (unreliable)
[c000000005573af0] [c00000000022bbd8] unmap_page_range+0xcf8/0xed0
[c000000005573c30] [c00000000022c2d4] unmap_vmas+0x84/0x120
[c000000005573c80] [c000000000235448] unmap_region+0xd8/0x1b0
[c000000005573d80] [c0000000002378f0] do_munmap+0x2d0/0x4c0
[c000000005573df0] [c000000000237be4] SyS_munmap+0x64/0xb0
[c000000005573e30] [c000000000009560] system_call+0x38/0x108
Instruction dump:
fbe1fff8 fb81ffe0 7c7f1b78 7ca32b78 7cbd2b78 f8010010 7c9a2378 f821ffb1
7cde3378 4bfffea9 7c7b1b79 41820298 <e87f0000> 48000130 7fa5eb78 7fc4f378

Most of the time, the bug surfaces in a caller further up the stack from __pSeries_lpar_hugepage_invalidate(), which is quite confusing.

This bug has been pending since v3.11 but was hidden if a caller of the caller of __pSeries_lpar_hugepage_invalidate() had pushed the corrupted register (r18 in this case) onto the stack and did not use it until restoring it. GCC 6.2.0 seems to expose it more frequently.

This commit also changes the definition of the parameter buffer in pSeries_lpar_flush_hash_range() to rely on the global define PLPAR_HCALL9_BUFSIZE (no functional change here).

Fixes: 1a527286 ("powerpc: Optimize hugepage invalidate")
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
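For reference, the shape of the fix as described (a hedged sketch, not the literal diff; PLPAR_HCALL9_BUFSIZE is the global define the commit message names):

    /* An on-stack buffer handed to a 9-argument hypervisor call must
     * provide every slot the hypervisor may write; a smaller literal
     * lets the h-call scribble over an adjacent saved-register slot. */
    unsigned long param[PLPAR_HCALL9_BUFSIZE];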
-
Paul Mackerras authored
commit 1a34439e upstream.

Debugging a data corruption issue with virtio-net/vhost-net led to the observation that __copy_tofrom_user was occasionally returning a value 16 larger than it should. Since the return value from __copy_tofrom_user is the number of bytes not copied, this means that __copy_tofrom_user can occasionally return a value larger than the number of bytes it was asked to copy. In turn this can cause higher-level copy functions such as copy_page_to_iter_iovec to corrupt memory by copying data into the wrong memory locations.

It turns out that the failing case involves a fault on the store at label 79, and at that point the first unmodified byte of the destination is at R3 + 16. Consequently the exception handler for that store needs to add 16 to R3 before using it to work out how many bytes were not copied, but in this one case it was not adding the offset to R3. To fix it, this moves the label 179 to the point where we add 16 to R3. I have manually checked all the exception handlers for the loads and stores in this code, and the rest of them are correct (it would be excellent to have an automated test of all the exception cases).

This bug has been present since this code was initially committed in May 2002 to Linux version 2.5.20.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gavin Shan authored
commit 5adaf862 upstream.

This fixes the warnings reported from sparse:

pci.c:312:33: warning: restricted __be64 degrades to integer
pci.c:313:33: warning: restricted __be64 degrades to integer

Fixes: cee72d5b ("powerpc/powernv: Display diag data on p7ioc EEH errors")
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
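The general pattern behind this and the following two endianness fixes, as a hedged sketch (field name hypothetical):

    __be64 raw = diag_data->some_counter;   /* big-endian from firmware */
    pr_info("counter: %llu\n",
            (unsigned long long)be64_to_cpu(raw));  /* convert before use */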
-
Gavin Shan authored
commit a7032132 upstream.

The hub diag-data type is filled with big-endian data by the OPAL call opal_pci_get_hub_diag_data(). We need to convert it to a CPU-endian value before using it. The issue is reported by sparse, as pointed out by Michael Ellerman:

eeh-powernv.c:1309:21: warning: restricted __be16 degrades to integer

This converts the hub diag-data type to CPU-endian before using it in pnv_eeh_get_and_dump_hub_diag().

Fixes: 2a485ad7 ("powerpc/powernv: Drop PHB operation next_error()")
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Gavin Shan authored
commit d63e51b3 upstream.

The PE number (@frozen_pe_no), filled by opal_pci_next_error(), is in big-endian format. It should be converted to CPU-endian before it is passed to opal_pci_eeh_freeze_clear() when clearing the frozen state, if the PE is an invalid one. As Michael Ellerman pointed out, the issue is also detected by sparse:

eeh-powernv.c:1541:41: warning: incorrect type in argument 2 (different base types)

This passes the CPU-endian PE number to opal_pci_eeh_freeze_clear(); it should have been part of commit <0f36db77> ("powerpc/eeh: Fix wrong printed PE number"), which was merged in the 4.3 kernel.

Fixes: 71b540ad ("powerpc/powernv: Don't escalate non-existing frozen PE")
Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Anton Blanchard authored
commit 5045ea37 upstream.

__kernel_get_syscall_map() and __kernel_clock_getres() use cmpli to check if the passed-in pointer is non-zero. cmpli maps to a 32-bit compare on binutils, so we ignore the top 32 bits.

A simple test case can be created by passing in a bogus pointer with the bottom 32 bits clear. Using a clk_id that is handled by the VDSO, then one that is handled by the kernel, shows the problem:

printf("%d\n", clock_getres(CLOCK_REALTIME, (void *)0x100000000));
printf("%d\n", clock_getres(CLOCK_BOOTTIME, (void *)0x100000000));

And we get:

0
-1

The bigger issue is if we pass a valid pointer with the bottom 32 bits clear: in this case we will return success but won't write any data to the pointer.

I stumbled across this issue because the LLVM integrated assembler doesn't accept cmpli with 3 arguments. Fix this by converting them to cmpldi.

Fixes: a7f290da ("[PATCH] powerpc: Merge vdso's and add vdso support to 32 bits kernel")
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rabin Vincent authored
commit f659b100 upstream.

As the documentation for kthread_stop() says, "if threadfn() may call do_exit() itself, the caller must ensure task_struct can't go away". dm-crypt does not ensure this and therefore crashes when crypt_dtr() calls kthread_stop(). The crash is trivially reproducible by adding a delay before the call to kthread_stop() and just opening and closing a dm-crypt device.

general protection fault: 0000 [#1] PREEMPT SMP
CPU: 0 PID: 533 Comm: cryptsetup Not tainted 4.8.0-rc7+ #7
task: ffff88003bd0df40 task.stack: ffff8800375b4000
RIP: 0010: kthread_stop+0x52/0x300
Call Trace:
 crypt_dtr+0x77/0x120
 dm_table_destroy+0x6f/0x120
 __dm_destroy+0x130/0x250
 dm_destroy+0x13/0x20
 dev_remove+0xe6/0x120
 ? dev_suspend+0x250/0x250
 ctl_ioctl+0x1fc/0x530
 ? __lock_acquire+0x24f/0x1b10
 dm_ctl_ioctl+0x13/0x20
 do_vfs_ioctl+0x91/0x6a0
 ? ____fput+0xe/0x10
 ? entry_SYSCALL_64_fastpath+0x5/0xbd
 ? trace_hardirqs_on_caller+0x151/0x1e0
 SyS_ioctl+0x41/0x70
 entry_SYSCALL_64_fastpath+0x1f/0xbd

This problem was introduced by bcbd94ff ("dm crypt: fix a possible hang due to race condition on exit").

Looking at the description of that patch (excerpted below), it seems like the problem it addresses can be solved by just using set_current_state instead of __set_current_state, since we obviously need the memory barrier.

| dm crypt: fix a possible hang due to race condition on exit
|
| A kernel thread executes __set_current_state(TASK_INTERRUPTIBLE),
| __add_wait_queue, spin_unlock_irq and then tests kthread_should_stop().
| It is possible that the processor reorders memory accesses so that
| kthread_should_stop() is executed before __set_current_state(). If
| such reordering happens, there is a possible race on thread
| termination: [...]

So this patch just reverts the aforementioned patch and changes the __set_current_state(TASK_INTERRUPTIBLE) to set_current_state(...). This fixes the crash and should also fix the potential hang.

Fixes: bcbd94ff ("dm crypt: fix a possible hang due to race condition on exit")
Cc: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
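For reference, a minimal sketch of the canonical kthread wait loop this reasoning relies on (a generic illustration, not the dm-crypt code itself; set_current_state() supplies the memory barrier that the __-prefixed variant omits):

    static int example_thread(void *data)
    {
            /* Publish our sleeping state before testing the stop flag;
             * set_current_state() orders the store against the read. */
            set_current_state(TASK_INTERRUPTIBLE);
            while (!kthread_should_stop()) {
                    schedule();
                    set_current_state(TASK_INTERRUPTIBLE);
            }
            /* No barrier needed here: we are the only writer now. */
            __set_current_state(TASK_RUNNING);
            return 0;
    }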
-
Mike Snitzer authored
commit f10e06b7 upstream.

If pg_init_retries is set and a request is queued against a multipath device with all underlying block device request_queues in the "dying" state, then an infinite loop is triggered because activate_path() never succeeds and hence never calls pg_init_done().

This change avoids that device removal triggers an infinite loop by failing the activate_path(), which causes the "dying" path to be failed.

Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Minfei Huang authored
commit 8dc23658 upstream.

dm_resume() will return success (0) rather than -EINVAL if !dm_suspended_md() upon retry within dm_resume(). Reset the error code at the start of dm_resume()'s retry loop. Also, remove a useless assignment at the end of dm_resume().

Fixes: ffcc3936 ("dm: enhance internal suspend and resume interface")
Signed-off-by: Minfei Huang <mnghuan@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
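A hedged sketch of the retry-loop shape being fixed (helper name hypothetical, locking abbreviated):

    static int dm_resume_sketch(struct mapped_device *md)
    {
            int r;

    retry:
            r = -EINVAL;    /* the fix: reset on every retry pass */
            mutex_lock(&md->suspend_lock);

            if (!dm_suspended_md(md))
                    goto out;       /* now correctly reports -EINVAL */

            if (dm_suspended_internally_md(md)) {
                    /* wait for internal suspend to finish, then retry */
                    mutex_unlock(&md->suspend_lock);
                    goto retry;
            }

            r = do_resume_sketch(md);       /* hypothetical helper */
    out:
            mutex_unlock(&md->suspend_lock);
            return r;
    }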
-
Bart Van Assche authored
commit 3b785fbc upstream.

This avoids that new requests are queued while __dm_destroy() is in progress.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Adrian Hunter authored
commit 3bccbe20 upstream.

The MTC packet provides an 8-bit slice of CTC, which is related to TSC by the TMA packet; however, the TMA packet only provides the lower 16 bits of CTC. If mtc_shift > 8 then some of the MTC bits are not in the CTC provided by the TMA packet. Fix up the last_mtc calculated from the TMA packet by copying the missing bits from the current MTC, assuming the least difference between the two and that the current MTC comes after last_mtc.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1475062896-22274-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Adrian Hunter authored
commit 51ee6481 upstream.

In cycle-accurate mode, timestamps can be calculated from CYC packets. The decoder also estimates timestamps based on the number of instructions since the last timestamp. For that to work in cycle-accurate mode, the instruction count needs to be reset to zero when a timestamp is calculated from a CYC packet, but that wasn't happening, so fix it.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1475062896-22274-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Adrian Hunter authored
commit 810c398b upstream.

Fix occasional decoder errors decoding trace data collected in snapshot mode.

Snapshot mode can take successive snapshots of trace which might overlap. The decoder checks whether there is an overlap, but only looks at the current and previous buffer. However, buffers that do not contain synchronization (i.e. PSB) packets cannot be decoded or used for overlap checking. That means the decoder actually needs to check overlaps between the current buffer and the previous buffer that contained usable data. Make that change.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Link: http://lkml.kernel.org/r/1474641528-18776-10-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andrew Bresticker authored
commit d771fdf9 upstream.

The ramoops buffer may be mapped as either I/O memory or uncached memory. On ARM64, this results in a device-type (strongly-ordered) mapping. Since unaligned accesses to device-type memory will generate an alignment fault (regardless of whether or not strict alignment checking is enabled), it is not safe to use memcpy(). memcpy_fromio() is guaranteed to only use aligned accesses, so use that instead.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Puneet Kumar <puneetster@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
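A hedged sketch of the pattern (names hypothetical, not the patched function):

    static void pstore_copy_sketch(void *dst, const void __iomem *src,
                                   size_t count)
    {
            /* memcpy_fromio() uses only aligned accesses, so it is safe
             * on a device-type ARM64 mapping where an unaligned plain
             * memcpy() would take an alignment fault. */
            memcpy_fromio(dst, src, count);
    }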
-
Furquan Shaikh authored
commit 7e75678d upstream.

persistent_ram_update uses vmap / iomap based on whether the buffer is in a memory region or a reserved region. However, both map it as non-cacheable memory. For armv8 specifically, non-cacheable mapping requests use a memory type that has to be accessed aligned to the request size. memcpy() doesn't guarantee that.

Signed-off-by: Furquan Shaikh <furquan@google.com>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
Reviewed-by: Olof Johansson <olofj@chromium.org>
Tested-by: Furquan Shaikh <furquan@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sebastian Andrzej Siewior authored
commit d5a9bf0b upstream.

I have here an FPGA behind PCIe which exports SRAM which I use for pstore. Now it seems that the FPGA no longer supports cmpxchg-based updates: it writes back 0xff…ff and returns the same. This leads to a crash during a crash, rendering pstore useless. Since I doubt that there is much benefit from using cmpxchg() here, drop the atomic access and use the spinlock-based version instead.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Rabin Vincent <rabinv@axis.com>
Tested-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
[kees: remove "_locked" suffix since it's the only option now]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
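A hedged sketch of the spinlock-based position update that replaces the cmpxchg() loop (names approximate the persistent_ram internals):

    static DEFINE_RAW_SPINLOCK(buffer_lock);

    /* Advance the ring-buffer start atomically with respect to other
     * writers, without relying on cmpxchg() working on the backing
     * memory (which the FPGA-exported SRAM above does not support). */
    static size_t buffer_start_add(struct persistent_ram_zone *prz, size_t a)
    {
            unsigned long flags;
            int old, new;

            raw_spin_lock_irqsave(&buffer_lock, flags);
            old = atomic_read(&prz->buffer->start);
            new = old + a;
            while (unlikely(new >= prz->buffer_size))
                    new -= prz->buffer_size;
            atomic_set(&prz->buffer->start, new);
            raw_spin_unlock_irqrestore(&buffer_lock, flags);

            return old;
    }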
-
Sebastian Andrzej Siewior authored
commit 4407de74 upstream.

A basic rmmod ramoops segfaults. Let's see why.

Since commit 34f0ec82 ("pstore: Correct the max_dump_cnt clearing of ramoops") we set ->max_dump_cnt to zero before looping over ->przs, but we didn't use it before that either. And since commit ee1d2674 ("pstore: add pstore unregister") we free that memory on rmmod.

But even then, we looped until a NULL pointer or ERR. I don't see where it is ensured that the last member is NULL. Let's try this instead: simple error recovery and free. Clean up in the error case where resources were allocated, and then, in the free path, rely on ->max_dump_cnt.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit 65bf34f5 upstream.

Increase the initial kernel default page mapping size for 64-bit kernels to 64 MB and for 32-bit kernels to 32 MB. Due to the additional support of ftrace, tracepoints and huge pages, the kernel size can exceed the sizes we used up to now.

Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit f8850abb upstream.

Architecturally we need to keep __gp below 0x1000000. But because of ftrace and tracepoint support, the RO_DATA_SECTION now gets much bigger than it was before. By moving the linkage tables before RO_DATA_SECTION we can avoid __gp being positioned at too high an address.

Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Helge Deller authored
commit 690d097c upstream.

Increase the initial kernel default page mapping size for SMP kernels to 32MB, and add a runtime check which panics early if the kernel is bigger than the initial mapping size. This fixes boot crashes of 32bit SMP kernels.

Due to the introduction of huge page support in kernel 4.4 and its required initial kernel layout in memory, a 32bit SMP kernel usually got bigger (in layout, not size) than 16MB.

Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Srinivas Pandruvada authored
commit f9f4872d upstream.

It is a requirement that the MSR MSR_PM_ENABLE be set to 0x01 before reading MSR_HWP_CAPABILITIES on a given CPU. If cpufreq init() is scheduled on a CPU that is not the same as policy->cpu, or migrates to a different CPU before the MSR read of MSR_HWP_CAPABILITIES, it is possible that MSR_PM_ENABLE was not set to 0x01 on that CPU. This will cause a GP fault. So, like other places in this path, rdmsrl_on_cpu should be used instead of rdmsrl. Moreover, the scope of MSR_HWP_CAPABILITIES is per thread, so it should be read from the same CPU for which MSR_HWP_REQUEST is being set.

dmesg dump or warning:

[ 22.014488] WARNING: CPU: 139 PID: 1 at arch/x86/mm/extable.c:50 ex_handler_rdmsr_unsafe+0x68/0x70
[ 22.014492] unchecked MSR access error: RDMSR from 0x771
[ 22.014493] Modules linked in:
[ 22.014507] CPU: 139 PID: 1 Comm: swapper/0 Not tainted 4.7.5+ #1
...
...
[ 22.014516] Call Trace:
[ 22.014542] [<ffffffff813d7dd1>] dump_stack+0x63/0x82
[ 22.014558] [<ffffffff8107bc8b>] __warn+0xcb/0xf0
[ 22.014561] [<ffffffff8107bcff>] warn_slowpath_fmt+0x4f/0x60
[ 22.014563] [<ffffffff810676f8>] ex_handler_rdmsr_unsafe+0x68/0x70
[ 22.014564] [<ffffffff810677d9>] fixup_exception+0x39/0x50
[ 22.014604] [<ffffffff8102e400>] do_general_protection+0x80/0x150
[ 22.014610] [<ffffffff817f9ec8>] general_protection+0x28/0x30
[ 22.014635] [<ffffffff81687940>] ? get_target_pstate_use_performance+0xb0/0xb0
[ 22.014642] [<ffffffff810600c7>] ? native_read_msr+0x7/0x40
[ 22.014657] [<ffffffff81688123>] intel_pstate_hwp_set+0x23/0x130
[ 22.014660] [<ffffffff81688406>] intel_pstate_set_policy+0x1b6/0x340
[ 22.014662] [<ffffffff816829bb>] cpufreq_set_policy+0xeb/0x2c0
[ 22.014664] [<ffffffff81682f39>] cpufreq_init_policy+0x79/0xe0
[ 22.014666] [<ffffffff81682cb0>] ? cpufreq_update_policy+0x120/0x120
[ 22.014669] [<ffffffff816833a6>] cpufreq_online+0x406/0x820
[ 22.014671] [<ffffffff8168381f>] cpufreq_add_dev+0x5f/0x90
[ 22.014717] [<ffffffff81530ac8>] subsys_interface_register+0xb8/0x100
[ 22.014719] [<ffffffff816821bc>] cpufreq_register_driver+0x14c/0x210
[ 22.014749] [<ffffffff81fe1d90>] intel_pstate_init+0x39d/0x4d5
[ 22.014751] [<ffffffff81fe13f2>] ? cpufreq_gov_dbs_init+0x12/0x12

Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
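A hedged sketch of the corrected read (surrounding code elided):

    u64 cap;

    /* MSR_HWP_CAPABILITIES is per-thread and only valid once
     * MSR_PM_ENABLE is set on that CPU, so read it on policy->cpu
     * instead of whatever CPU the init path happens to run on. */
    rdmsrl_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);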
-
Sergei Shtylyov authored
commit e330b9a6 upstream.

of_irq_get[_byname]() return 0 iff the irq_create_of_mapping() call fails. Returning both an error code and 0 on failure is a sign of a misdesigned API; it makes the failure check unnecessarily complex and error-prone. We should rely on the platform IRQ resource in this case, not return 0, especially as 0 can be a valid IRQ resource too...

Fixes: aff008ad ("platform_get_irq: Revert to platform_get_resource if of_irq_get fails")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
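A hedged sketch of the fixed fallback in platform_get_irq() (abbreviated; only the shape of the check is the point):

    struct resource *r;
    int ret;

    ret = of_irq_get(dev->dev.of_node, num);
    if (ret > 0 || ret == -EPROBE_DEFER)
            return ret;             /* valid virq, or defer probing */

    /* 0 or another error: fall back to the platform resource */
    r = platform_get_resource(dev, IORESOURCE_IRQ, num);
    return r ? r->start : -ENXIO;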
-
Maik Broemme authored
commit 8e2e0317 upstream.

Similar to the AR93xx and the AR94xx series, the AR95xx also has the same quirk for the Bus Reset. It will lead to an instant system reset if the device is assigned via VFIO to a KVM VM. I've been able to reproduce this behavior with a MikroTik R11e-2HnD.

Fixes: c3e59ee4 ("PCI: Mark Atheros AR93xx to avoid bus reset")
Signed-off-by: Maik Broemme <mbroemme@libmpq.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Haibo Chen authored
commit 02265cd6 upstream.

The potentially overflowing expression 1000000 * data->timeout_clks with type unsigned int is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type unsigned long long. To avoid overflow, cast 1000000U to type unsigned long long. Special thanks to Coverity.

Fixes: 7f05538a ("mmc: sdhci: fix data timeout (part 2)")
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
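The fix in a nutshell, as a hedged sketch (names approximate):

    /* 1000000 * timeout_clks evaluates in 32 bits and can overflow;
     * the ULL literal forces 64-bit arithmetic before the multiply. */
    unsigned long long val = 1000000ULL * data->timeout_clks;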
-
Daniel Glöckner authored
commit 0ed50abb upstream.

CMD23, aka SET_BLOCK_COUNT, was introduced with MMC v3.1. Older versions of the specification allowed multi-block transfers to be terminated only with CMD12. The patch fixes the following problem:

mmc0: new MMC card at address 0001
mmcblk0: mmc0:0001 SDMB-16 15.3 MiB
mmcblk0: timed out sending SET_BLOCK_COUNT command, card status 0x400900
...
blk_update_request: I/O error, dev mmcblk0, sector 0
Buffer I/O error on dev mmcblk0, logical block 0, async page read
mmcblk0: unable to read partition table

Signed-off-by: Daniel Glöckner <dg@emlix.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
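Roughly the shape of the guard this adds, as a hedged sketch of the block driver's capability check:

    /* Only advertise CMD23 for MMC cards new enough to have it
     * (spec v3.1+), or for SD cards that report CMD23 support. */
    if ((mmc_card_mmc(card) &&
         card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
        (mmc_card_sd(card) &&
         card->scr.cmds & SD_SCR_CMD23_SUPPORT))
            md->flags |= MMC_BLK_CMD23;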
-
Larry Finger authored
commit 0c9d3491 upstream.

Some RTL8821AE devices sold in Great Britain have the country code of 0x25 encoded in their EEPROM. This value is not tested in the routine that establishes the regulatory info for the chip. The fix is to set this code to have the same capabilities as the EU countries.

In addition, the channels allowed for COUNTRY_CODE_ETSI were more properly suited for China and Israel, not the EU. This problem has also been fixed.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lin Huang authored
commit c8a9a6da upstream.

There are two definitions of the devfreq_event_get_drvdata() function in devfreq-event.h; when CONFIG_PM_DEVFREQ_EVENT is disabled, this leads to a build failure. So remove the duplicate devfreq_event_get_drvdata() function.

Fixes: f262f28c ("PM / devfreq: event: Add devfreq_event class")
Signed-off-by: Lin Huang <hl@rock-chips.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
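A hedged sketch of the intended header shape after the fix: exactly one definition of the inline per configuration branch (return values assumed from the usual stub convention, not quoted from the header):

    #ifdef CONFIG_PM_DEVFREQ_EVENT
    static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
    {
            return edev->driver_data;
    }
    #else
    static inline void *devfreq_event_get_drvdata(struct devfreq_event_dev *edev)
    {
            return ERR_PTR(-EINVAL);        /* stub when the class is disabled */
    }
    #endif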
-
Lucas Stach authored
commit d8846023 upstream.

Initialize the GPU clock muxes to sane inputs. Until now they have not been changed from their default values, which means that both the GPU3D shader and GPU2D core were fed by clock inputs whose rates exceed the maximum allowed frequency of the cores by as much as 200MHz.

This fixes a severe GPU stability issue on i.MX6DL.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jan Remmet authored
commit 8f9165c9 upstream.

http://www.ti.com/lit/pdf/SWCZ010:

DCDC o/p voltage can go higher than programmed value

Impact: VDD1, VDD2, and VIO output programmed voltage level can go higher than expected or crash, when coming out of PFM to PWM mode or using DVFS.

Description: When DCDC CLK SYNC bits are 11/01:
* VIO 3-MHz oscillator is the source clock of the digital core and input clock of VDD1 and VDD2
* Turn-on of VDD1 and VDD2 HSD PFET is synchronized or at a constant phase shift
* Current pulled through VCC1+VCC2 is Iload(VDD1) + Iload(VDD2)
* The 3 HSD PFETs will be turned on at the same time, causing the highest possible switching noise on the application. This noise level depends on the layout, the VBAT level, and the load current. The noise level increases with improper layout.

When DCDC CLK SYNC bits are 00:
* VIO 3-MHz oscillator is the source clock of the digital core
* VDD1 and VDD2 are running on their own 3-MHz oscillator
* Current pulled through VCC1+VCC2 is the average of Iload(VDD1) + Iload(VDD2)
* The switching noise of the 3 SMPS will be randomly spread over time, causing lower overall switching noise.

Workaround: Set DCDCCTRL_REG[1:0] = 00.

Signed-off-by: Jan Remmet <j.remmet@phytec.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Alexander Usyskin authored
commit ac182e8a upstream.

Add device ids for the Intel Kabypoint PCH (Kabylake).

Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Liu Gang authored
commit d71cf15b upstream.

From the beginning of gpio-mpc8xxx.c, "handle_level_irq" has been used to handle GPIO interrupts on the PowerPC/Layerscape platforms. But actually, almost all PowerPC/Layerscape platforms assert an interrupt request upon either a high-to-low change or any change in the state of the signal.

So "handle_level_irq" is not reasonable for PowerPC/Layerscape GPIO interrupts; it should be "handle_edge_irq". Otherwise the system may lose some interrupts from the pin's state changes.

Signed-off-by: Liu Gang <Gang.Liu@nxp.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
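A hedged sketch of the mapping change (chip and variable names approximate):

    /* Register the GPIO IRQ with the edge-type flow handler so a state
     * change that latches while the interrupt is being serviced is
     * re-delivered instead of silently lost, as it can be with
     * handle_level_irq on these controllers. */
    irq_set_chip_and_handler(virq, &mpc8xxx_irq_chip, handle_edge_irq);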
-