- 03 Aug, 2022 2 commits
-
-
Pawan Gupta authored
The RSB fill sequence does not have any protection for misprediction of the
conditional branch at the end of the sequence. The CPU can speculatively
execute code immediately after the sequence, while RSB filling hasn't
completed yet.

  #define __FILL_RETURN_BUFFER(reg, nr, sp)	\
	mov	$(nr/2), reg;			\
  771:						\
	ANNOTATE_INTRA_FUNCTION_CALL;		\
	call	772f;				\
  773:	/* speculation trap */			\
	UNWIND_HINT_EMPTY;			\
	pause;					\
	lfence;					\
	jmp	773b;				\
  772:						\
	ANNOTATE_INTRA_FUNCTION_CALL;		\
	call	774f;				\
  775:	/* speculation trap */			\
	UNWIND_HINT_EMPTY;			\
	pause;					\
	lfence;					\
	jmp	775b;				\
  774:						\
	add	$(BITS_PER_LONG/8) * 2, sp;	\
	dec	reg;				\
	jnz	771b;	<----- CPU can mispredict here.

Before the RSB is filled, RETs that come in program order after this macro
can be executed speculatively, making them vulnerable to RSB-based attacks.

Mitigate it by adding an LFENCE after the conditional branch to prevent
speculation while the RSB is being filled.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
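Concretely, the guard lands right after the backward conditional branch. The
snippet below is a minimal sketch based on the sequence quoted above, not the
exact upstream hunk:

  774:						\
	add	$(BITS_PER_LONG/8) * 2, sp;	\
	dec	reg;				\
	jnz	771b;				\
	/* Block speculation until the RSB fill above has retired. */	\
	lfence;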
-
Daniel Sneddon authored
tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.

== Background ==

Indirect Branch Restricted Speculation (IBRS) was designed to help mitigate
Branch Target Injection and Speculative Store Bypass, i.e. Spectre, attacks.
IBRS prevents software run in less privileged modes from affecting branch
prediction in more privileged modes. IBRS requires the MSR to be written on
every privilege level change.

To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn it on
once instead of writing the MSR on every privilege level change. When eIBRS
is enabled, more privileged modes should be protected from less privileged
modes, including protecting VMMs from guests.

== Problem ==

Here's a simplification of how guests are run on Linux' KVM:

  void run_kvm_guest(void)
  {
	// Prepare to run guest
	VMRESUME();
	// Clean up after guest runs
  }

The execution flow for that would look something like this to the processor:

  1. Host-side: call run_kvm_guest()
  2. Host-side: VMRESUME
  3. Guest runs, does "CALL guest_function"
  4. VM exit, host runs again
  5. Host might make some "cleanup" function calls
  6. Host-side: RET from run_kvm_guest()

Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:

  * on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
    touched and Linux has to do a 32-entry stuffing.

  * on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
    IBRS=1 shortly after VM exit, has a documented side effect of flushing
    the RSB except in this PBRSB situation where the software needs to stuff
    the last RSB entry "by hand".

IOW, with eIBRS supported, host RET instructions should no longer be
influenced by guest behavior after the host retires a single CALL
instruction.

However, if the RET instructions are "unbalanced" with CALLs after a VM exit
as is the RET in #6, it might speculatively use the address for the
instruction after the CALL in #3 as an RSB prediction. This is a problem
since the (untrusted) guest controls this address.

Balanced CALL/RET instruction pairs such as in step #5 are not affected.

== Solution ==

The PBRSB issue affects a wide variety of Intel processors which support
eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RSB_VMEXIT
triggers an RSB filling sequence that mitigates PBRSB. Systems setting
RSB_VMEXIT need no further mitigation - i.e., eIBRS systems which enable
legacy IBRS explicitly.

However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT and
most of them need a new mitigation.

Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which
triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT.

The lighter-weight mitigation performs a CALL instruction which is
immediately followed by a speculative execution barrier (INT3). This steers
speculative execution to the barrier -- just like a retpoline -- which
ensures that speculation can never reach an unbalanced RET. Then, ensure
this CALL is retired before continuing execution with an LFENCE.

In other words, the window of exposure is opened at VM exit where RET
behavior is troublesome. While the window is open, force RSB predictions
sampling for RET targets to a dead end at the INT3. Close the window with
the LFENCE.
There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add
these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems
that aren't vulnerable will set ARCH_CAP_PBRSB_NO.

  [ bp: Massage, incorporate review comments from Andy Cooper. ]

Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
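A sketch of the one-entry stuffing described in the Solution section: a
single CALL whose fall-through is a dead-end INT3, followed by an LFENCE that
retires the CALL. The macro name and exact layout below are illustrative, not
the actual upstream code:

  /* Illustrative sketch of the "lite" VM-exit mitigation described above. */
  #define __FILL_ONE_RETURN_SKETCH		\
	ANNOTATE_INTRA_FUNCTION_CALL;		\
	call	1f;	/* pushes one benign entry onto the RSB */	\
	int3;		/* a speculative RET using that entry dies here */ \
  1:						\
	add	$(BITS_PER_LONG/8), %_ASM_SP;	/* drop the pushed return address */ \
	lfence;		/* ensure the CALL above has retired */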
-
- 31 Jul, 2022 6 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Linus Torvalds authored
Pull clk fix from Stephen Boyd:
 "One-liner fix of a NULL pointer deref in the Allwinner clk driver"

* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: sunxi-ng: Fix H6 RTC clock definition
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Borislav Petkov:

 - Update the 'mitigations=' kernel param documentation

 - Check the IBPB feature flag before enabling IBPB in firmware calls
   because cloud vendors' fantasy when it comes to creating guest
   configurations is unlimited

 - Unexport sev_es_ghcb_hv_call() before 5.19 releases now that HyperV
   doesn't need it anymore

 - Remove dead CONFIG_* items

* tag 'x86_urgent_for_v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  docs/kernel-parameters: Update descriptions for "mitigations=" param with retbleed
  x86/bugs: Do not enable IBPB at firmware entry when IBPB is not available
  Revert "x86/sev: Expose sev_es_ghcb_hv_call() for use by HyperV"
  x86/configs: Update configs in x86_debug.config
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull locking fix from Borislav Petkov:

 - Avoid rwsem lockups in certain situations when handling the handoff bit

* tag 'locking_urgent_for_v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/rwsem: Allow slowpath writer to ignore handoff bit if not set by first waiter
-
git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Linus Torvalds authored
Pull EDAC fixes from Borislav Petkov:

 - Relax the condition under which the DIMM label in ghes_edac is set in
   order to accommodate an HPE BIOS which sets only the device but not
   the bank

 - Two forgotten fixes to synopsys_edac when handling error interrupts

* tag 'edac_urgent_for_v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/ghes: Set the DIMM label unconditionally
  EDAC/synopsys: Re-enable the error interrupts on v3 hw
  EDAC/synopsys: Use the correct register to disable the error interrupt on v3 hw
-
git://git.armlinux.org.uk/~rmk/linux-arm
Linus Torvalds authored
Pull ARM fixes from Russell King:
 "Last set of ARM fixes for 5.19:

   - fix for MAX_DMA_ADDRESS overflow

   - fix for find_*_bit performing an out of bounds memory access"

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: findbit: fix overflowing offset
  ARM: 9216/1: Fix MAX_DMA_ADDRESS overflow
-
- 30 Jul, 2022 2 commits
-
-
Waiman Long authored
With commit d257cc8c ("locking/rwsem: Make handoff bit handling more
consistent"), the writer that sets the handoff bit can be interrupted out
without clearing the bit if the wait queue isn't empty. This disables reader
and writer optimistic lock spinning and stealing.

Now if a non-first writer in the queue is somehow woken up or a new waiter
enters the slowpath, it can't acquire the lock. This is not the case before
commit d257cc8c as the writer that set the handoff bit will clear it when
exiting out via the out_nolock path. This is less efficient as the busy
rwsem stays in an unlocked state for a longer time.

In some cases, this new behavior may cause lockups as shown in [1] and [2].

This patch allows a non-first writer to ignore the handoff bit if it is not
originally set or initiated by the first waiter. This patch is shown to be
effective in fixing the lockup problem reported in [1].

[1] https://lore.kernel.org/lkml/20220617134325.GC30825@techsingularity.net/
[2] https://lore.kernel.org/lkml/3f02975c-1a9d-be20-32cf-f1d8e3dfafcc@oracle.com/

Fixes: d257cc8c ("locking/rwsem: Make handoff bit handling more consistent")
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: John Donnelly <john.p.donnelly@oracle.com>
Tested-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lore.kernel.org/r/20220622200419.778799-1-longman@redhat.com
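As a rough illustration of the rule being added (field and helper names here
are hypothetical, not the actual kernel/locking/rwsem.c code), a slowpath
writer would only treat the handoff bit as binding when the first waiter set
it:

  /* Hypothetical sketch -- not the real rwsem implementation. */
  static bool must_respect_handoff(struct rw_semaphore *sem)
  {
	struct rwsem_waiter *first = first_waiter(sem);	/* hypothetical helper */

	/* Only back off if the handoff bit was set/initiated by the first waiter. */
	return (atomic_long_read(&sem->count) & RWSEM_FLAG_HANDOFF) &&
	       first->handoff_set;
  }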
-
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Linus Torvalds authored
Pull misc fixes from Andrew Morton:
 "Two hotfixes, both cc:stable"

* tag 'mm-hotfixes-stable-2022-07-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  mm/hmm: fault non-owner device private entries
  page_alloc: fix invalid watermark check on a negative value
-
- 29 Jul, 2022 30 commits
-
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fix from Jens Axboe:
 "Just a single fix for NVMe, yet another quirk addition"

* tag 'block-5.19-2022-07-29' of git://git.kernel.dk/linux-block:
  nvme-pci: Crucial P2 has bogus namespace ids
-
git://anongit.freedesktop.org/drm/drm
Linus Torvalds authored
Pull more drm fixes from Dave Airlie:
 "Maxime had the dog^Wmailing list server eat his homework^Wmisc pull
  request.

  Two more small fixes, one in nouveau svm code and the other in simpledrm.

  nouveau:
   - page migration fix

  simpledrm:
   - fix mode_valid return value"

* tag 'drm-fixes-2022-07-30' of git://anongit.freedesktop.org/drm/drm:
  nouveau/svm: Fix to migrate all requested pages
  drm/simpledrm: Fix return type of simpledrm_simple_display_pipe_mode_valid()
-
git://anongit.freedesktop.org/drm/drm-misc
Dave Airlie authored
One fix for the simpledrm mode_valid return value, and one for page
migration in nouveau.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maxime Ripard <maxime@cerno.tech>
Link: https://patchwork.freedesktop.org/patch/msgid/20220729094514.sfzhc3gqjgwgal62@penduick
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fixes from James Bottomley:
 "Four fixes, three in drivers. The two biggest fixes are in ufs, and the
  remaining driver and core fixes are small and obvious (and the core fix
  is low risk)"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: ufs: core: Fix a race condition related to device management
  scsi: core: Fix warning in scsi_alloc_sgtables()
  scsi: ufs: host: Hold reference returned by of_parse_phandle()
  scsi: mpt3sas: Stop fw fault watchdog work item during system shutdown
-
Eiichi Tsukata authored
Updates descriptions for "mitigations=off" and "mitigations=auto,nosmt"
with the respective retbleed= settings.

Signed-off-by: Eiichi Tsukata <eiichi.tsukata@nutanix.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: corbet@lwn.net
Link: https://lore.kernel.org/r/20220728043907.165688-1-eiichi.tsukata@nutanix.com
-
Ralph Campbell authored
If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a device
private PTE is found, the hmm_range::dev_private_owner field is used to
determine if the device private page should not be faulted in. However, if
the device private page is not owned by the caller, hmm_range_fault()
returns an error instead of calling migrate_to_ram() to fault in the page.

For example, if a page is migrated to GPU private memory and a RDMA fault
capable NIC tries to read the migrated page, without this patch it will get
an error. With this patch, the page will be migrated back to system memory
and the NIC will be able to read the data.

Link: https://lkml.kernel.org/r/20220727000837.4128709-2-rcampbell@nvidia.com
Link: https://lkml.kernel.org/r/20220725183615.4118795-2-rcampbell@nvidia.com
Fixes: 08ddddda ("mm/hmm: check the device private page owner in hmm_range_fault()")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reported-by: Felix Kuehling <felix.kuehling@amd.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Cc: Philip Yang <Philip.Yang@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
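A hedged sketch of the behaviour described above (helper names such as
page_owner_of() are hypothetical; the real check lives in mm/hmm.c):

  /* Illustrative only: a device private entry is found while walking the range. */
  if (is_device_private_entry(entry)) {
	struct page *page = pfn_swap_entry_to_page(entry);

	if (page_owner_of(page) == range->dev_private_owner) {
		/* Caller owns the page: report its device-private PFN directly. */
	} else {
		/* Not owned by the caller: take the fault path instead of
		 * returning an error, which ends up in the pgmap's
		 * migrate_to_ram() callback and brings the data back to
		 * system memory. */
	}
  }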
-
Jaewon Kim authored
There was a report that a task is waiting at the throttle_direct_reclaim.
The pgscan_direct_throttle in vmstat was increasing.

This is a bug where zone_watermark_fast returns true even when the free is
very low. The commit f27ce0e1 ("page_alloc: consider highatomic reserve in
watermark fast") changed the watermark fast to consider highatomic reserve.
But it did not handle a negative value case which can happen when the
reserved_highatomic pageblock is bigger than the actual free.

If the watermark is considered as ok for the negative value, allocating
contexts for order-0 will consume all free pages without direct reclaim, and
finally free pages may become depleted except highatomic free. Then
allocating contexts may fall into throttle_direct_reclaim. This symptom may
easily happen in a system where wmark min is low and other reclaimers like
kswapd do not make free pages quickly.

Handle the negative case by using MIN.

Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com
Fixes: f27ce0e1 ("page_alloc: consider highatomic reserve in watermark fast")
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Reported-by: GyeongHwan Hong <gh21.hong@samsung.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yong-Taek Lee <ytk.lee@samsung.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
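A sketch of the idea behind "using MIN": never subtract a larger highatomic
reserve than there are free pages, so the fast watermark check cannot be
fooled by a negative intermediate value. Variable names are illustrative,
not the exact mm/page_alloc.c hunk:

  /* Illustrative sketch: clamp the subtracted reserve so the usable-free
   * estimate can never go negative. */
  long usable_free = free_pages;
  long reserved    = highatomic_reserve;	/* may overestimate what is really free */

  usable_free -= min(usable_free, reserved);
  if (usable_free > watermark + lowmem_reserve)
	return true;	/* order-0 fast path: watermark genuinely OK */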
-
Linus Torvalds authored
Merge tag 'perf-tools-fixes-for-v5.19-2022-07-29' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux

Pull perf tools fixes from Arnaldo Carvalho de Melo:

 - Fix addresses for bss symbols, describing variables used in resolving
   data access in tools such as 'perf c2c' and 'perf mem'.

 - Skip symbols if the SHF_ALLOC flag is not set, a technique used for
   listing deprecated symbols; their addresses are zeros, so not useful.

 - Remove undefined behavior from bpf_perf_object__next() when dealing
   with an empty bpf_objects_list list.

 - Make an ARM CoreSight disasm script work with both python2 and python3.

 - Sync x86's cpufeatures header with the kernel sources.

* tag 'perf-tools-fixes-for-v5.19-2022-07-29' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
  perf bpf: Remove undefined behavior from bpf_perf_object__next()
  perf symbol: Skip symbols if SHF_ALLOC flag is not set
  perf symbol: Correct address for bss symbols
  perf scripts python: Let script to be python2 compliant
  tools headers cpufeatures: Sync with the kernel sources
-
git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Linus Torvalds authored
Pull workqueue fix from Tejun Heo:
 "Just one commit to suppress a spurious warning added during the 5.19
  cycle"

* tag 'wq-for-5.19-rc8-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: Avoid a false warning in unbind_workers()
-
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Linus Torvalds authored
Pull power management fix from Rafael Wysocki:
 "Make some false positive RCU splats resulting from a recent intel_idle
  driver change go away (Waiman Long)"

* tag 'pm-5.19-rc9' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  intel_idle: Fix false positive RCU splats due to incorrect hardirqs state
-
Lai Jiangshan authored
Doing set_cpus_allowed_ptr() with wq_unbound_cpumask can possibly fail and
trigger the false warning. Use cpu_possible_mask instead when
wq_unbound_cpumask has no active CPUs.

It is very easy to trigger the warning:

  1. Set wq_unbound_cpumask to a small set of CPUs.
  2. Offline all the CPUs of wq_unbound_cpumask.
  3. Offline an extra CPU and trigger the warning.

Fixes: 10a5a651 ("workqueue: Restrict kworker in the offline CPU pool running on housekeeping CPUs")
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
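A sketch of the fallback described above, assuming the unbind_workers()
context; the conditional below is illustrative rather than the exact
upstream hunk:

  /* Illustrative sketch: fall back to cpu_possible_mask when the unbound
   * cpumask has no active CPU left, so set_cpus_allowed_ptr() cannot fail
   * and trip the warning. */
  const struct cpumask *mask = wq_unbound_cpumask;

  if (!cpumask_intersects(mask, cpu_active_mask))
	mask = cpu_possible_mask;

  WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, mask) < 0);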
-
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Linus Torvalds authored
Pull RISC-V fix from Palmer Dabbelt:
 "A build fix for 'make vdso_install' that avoids an issue trying to
  install the compat VDSO"

* tag 'riscv-for-linus-5.19-rc9' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: compat: vdso: Fix vdso_install target
-
Linus Torvalds authored
Merge tag 'loongarch-fixes-5.19-5' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson

Pull LoongArch fixes from Huacai Chen:

 - Fix cache size calculation, stack protection attributes, ptrace's
   fpr_set and "ROM Size" in boardinfo

 - Some cleanups and improvements of assembly

 - Some cleanups of unused code and useless code

* tag 'loongarch-fixes-5.19-5' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  LoongArch: Fix wrong "ROM Size" of boardinfo
  LoongArch: Fix missing fcsr in ptrace's fpr_set
  LoongArch: Fix shared cache size calculation
  LoongArch: Disable executable stack by default
  LoongArch: Remove unused variables
  LoongArch: Remove clock setting during cpu hotplug stage
  LoongArch: Remove useless header compiler.h
  LoongArch: Remove several syntactic sugar macros for branches
  LoongArch: Re-tab the assembly files
  LoongArch: Simplify "BGT foo, zero" with BGTZ
  LoongArch: Simplify "BLT foo, zero" with BLTZ
  LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZ
  LoongArch: Use the "move" pseudo-instruction where applicable
  LoongArch: Use the "jr" pseudo-instruction where applicable
  LoongArch: Use ABI names of registers where appropriate
-
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman:

 - Re-enable the new amdgpu display engine for powerpc, as long as the
   compiler is correctly configured.

 - Disable stack variable initialisation in prom_init to fix GCC 12
   allmodconfig.

Thanks to Dan Horák and Sudip Mukherjee.

* tag 'powerpc-5.19-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  drm/amdgpu: Re-enable DCN for 64-bit powerpc
  powerpc/64s: Disable stack variable initialisation for prom_init
-
Tiezhu Yang authored
We can see the "ROM Size" is different in the following outputs:

  [root@linux loongson]# cat /sys/firmware/loongson/boardinfo
  BIOS Information
  Vendor		: Loongson
  Version		: vUDK2018-LoongArch-V2.0.pre-beta8
  ROM Size		: 63 KB
  Release Date		: 06/15/2022

  Board Information
  Manufacturer		: Loongson
  Board Name		: Loongson-LS3A5000-7A1000-1w-A2101
  Family		: LOONGSON64

  [root@linux loongson]# dmidecode | head -11
  ...
  Handle 0x0000, DMI type 0, 26 bytes
  BIOS Information
	Vendor: Loongson
	Version: vUDK2018-LoongArch-V2.0.pre-beta8
	Release Date: 06/15/2022
	ROM Size: 4 MB

According to the "BIOS Information (Type 0) structure" in the SMBIOS
Reference Specification [1], 64K * (n+1) is the size of the physical device
containing the BIOS if the size is less than 16M.

Additionally, we can see the related code in dmidecode [2]:

  u64 s = { .l = (code1 + 1) << 6 };

So the output of dmidecode is correct, the output of boardinfo is wrong,
fix it.

By the way, at present there is no need to consider sizes of 16M or greater
on LoongArch, because the ROM is usually 4M or 8M, which is enough.

[1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.6.0.pdf
[2] https://git.savannah.nongnu.org/cgit/dmidecode.git/tree/dmidecode.c#n347

Fixes: 628c3bb4 ("LoongArch: Add boot and setup routines")
Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
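For reference, the corrected calculation implied above boils down to the
SMBIOS rule quoted from the spec; the helper below is an illustrative
sketch, not the actual arch/loongarch code:

  /* Decode the SMBIOS Type 0 "BIOS ROM Size" byte: 64K * (n + 1), in KB.
   * Sizes of 16M and above use the extended-size field, not handled here. */
  static unsigned int bios_rom_size_kb(unsigned char code1)
  {
	return (code1 + 1) * 64;
  }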
-
Qi Hu authored
In file ptrace.c, function fpr_set does not copy fcsr data from ubuf to
kbuf. That's the reason why the fcsr cannot be modified by ptrace. This
patch fixes the problem and allows users to modify the fcsr via ptrace.

Co-developed-by: Xu Li <lixu@loongson.cn>
Signed-off-by: Qi Hu <huqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
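In a regset .set handler, such a fix typically adds one more copy-in for the
fcsr word; the offsets and struct layout below are assumptions, not the
actual arch/loongarch/kernel/ptrace.c code:

  /* Illustrative only: also copy the fcsr from the user buffer, in addition
   * to the FP registers. FCSR_START/FCSR_END are hypothetical offsets into
   * the NT_PRFPREG layout. */
  err = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
			   &target->thread.fpu.fcsr,
			   FCSR_START, FCSR_END);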
-
Huacai Chen authored
The current calculation of shared cache size is from the node (die) scope,
but we want 'lscpu' to show the shared cache size of the whole package for
multi-die chips (e.g., Loongson-3C5000L, which contains 4 dies in one
package). So fix it by multiplying nodes_per_package.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
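The scaling this describes is a single multiplication; the variable names
below are illustrative rather than the actual cacheinfo code:

  /* Illustrative sketch: report the package-wide shared cache size instead
   * of the per-node (per-die) size, so tools like lscpu show the total. */
  unsigned int package_cache_size = per_node_cache_size * nodes_per_package;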
-
Huacai Chen authored
Disable executable stack for LoongArch by default, as all modern
architectures do.

Reported-by: Andreas Schwab <schwab@suse.de>
Suggested-by: WANG Xuerui <git@xen0n.name>
Link: https://sourceware.org/pipermail/binutils/2022-July/121992.html
Tested-by: WANG Xuerui <git@xen0n.name>
Tested-by: Xi Ruoyao <xry111@xry111.site>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Bibo Mao authored
There are some variables never used or referenced; this patch removes these
variables and makes the code cleaner.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Bibo Mao authored
On physical machines we can save power by disabling the clock of a hot
removed cpu. However, as different platforms require different methods to
configure clocks, the code is platform-specific, and probably belongs to
firmware/pmu or a cpu regulator, rather than generic arch/loongarch code.

Also, there is no such register on the QEMU virt machine since the
clock/frequency regulation is not emulated.

This patch removes the hard-coded clock register accesses in the generic
LoongArch cpu hotplug flow.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Jun Yi authored
The content of LoongArch's compiler.h is trivial, with some definitions not
used anywhere, so inline the definitions and remove the header.

Signed-off-by: Jun Yi <yijun@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
These syntactic sugars have been supported by upstream binutils from the
beginning, so no need to patch them locally.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Reflow the *.S files for better stylistic consistency, namely hard tabs
after the mnemonic position, and vertical alignment of the first operand
with hard tabs. Tab width is obviously 8. Some pre-existing intra-block
vertical alignments are preserved.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Support for the syntactic sugar is present in the upstream binutils port
from the beginning. Use it for shorter lines and better consistency.
Generated code should be identical.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Support for the syntactic sugar is present in the upstream binutils port
from the beginning. Use it for shorter lines and better consistency.
Generated code should be identical.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
While B{EQ,NE}Z and B{EQ,NE} are different instructions, and the vastly
expanded range for the branch destination does not really matter in the few
cases touched, use B{EQ,NE}Z where possible for shorter lines and better
consistency (e.g. some places used "BEQ foo, zero", while some used
"BEQ zero, foo").

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Some of the assembly code in the LoongArch port likely originated from a
time when the assembler did not support pseudo-instructions like "move" or
"jr", so the desugared form was used and readability suffers (to a minor
degree) as a result. As the upstream toolchain supports these
pseudo-instructions from the beginning, migrate the existing few usages to
them for better readability.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Some of the assembly code in the LoongArch port likely originated from a
time when the assembler did not support pseudo-instructions like "move" or
"jr", so the desugared form was used and readability suffers (to a minor
degree) as a result. As the upstream toolchain supports these
pseudo-instructions from the beginning, migrate the existing few usages to
them for better readability.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
WANG Xuerui authored
Some of the assembly in the LoongArch port seems to come from a prehistoric
time, when the assembler didn't even have support for the ABI names we all
came to know and love, and thus used raw register numbers, which hampered
readability. The usages are found with a regex match inside arch/loongarch,
then manually adjusted for those non-definitions.

Signed-off-by: WANG Xuerui <git@xen0n.name>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Russell King (Oracle) authored
When offset is larger than the size of the bit array, we should not attempt
to access the array as we can perform an access beyond the end of the array.
Fix this by changing the pre-condition.

Using "cmp r2, r1; bhs ..." covers us for the size == 0 case, since this
will always take the branch when r1 is zero, irrespective of the value of
r2. This means we can fix this bug without adding any additional code!

Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
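In C terms, the changed pre-condition amounts to the check below; this is a
sketch of the behaviour, not the actual ARM assembly in
arch/arm/lib/findbit.S:

  /* Illustrative sketch: bail out before touching the array when the
   * starting offset is already at or past the end (covers size == 0 too,
   * since offset >= 0 == size always holds for unsigned values). */
  unsigned long find_next_bit_sketch(const unsigned long *addr,
				     unsigned long size, unsigned long offset)
  {
	if (offset >= size)		/* "cmp r2, r1; bhs" in the assembly */
		return size;

	/* ... the actual bit search would continue here ... */
	return size;
  }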
-