- 16 Jan, 2024 11 commits
-
-
Palmer Dabbelt authored
Andy Chiu <andy.chiu@sifive.com> says: This series provides support for running Vector in kernel mode. Additionally, kernel-mode Vector can be configured to run without turning off preemption on a CONFIG_PREEMPT kernel. Along with the support, we add Vector-optimized copy_{to,from}_user, and we provide a simple threshold to decide when to run the vectorized functions. We decided to drop vectorized memcpy/memset/memmove for the moment due to the concern of memory side-effects in kernel_vector_begin(). The detailed description can be found at v9[0]. This series is composed of 4 parts: patches 1-4 add basic support for kernel-mode Vector; patch 5 includes vectorized copy_{to,from}_user into the kernel; patch 6 refactors the context switch code in fpu [1]; patches 7-10 provide some code refactors and support for preemptible kernel-mode Vector. This series can be merged if we feel any part of {1~4, 5, 6, 7~10} is mature enough. This series is tested on QEMU with V and verified that booting and normal userspace operations all work as usual with thresholds set to 0. Also, we test by launching multiple kernel threads which continuously execute and verify Vector operations in the background. The module that tests these operations is expected to be upstreamed later.

* b4-shazam-merge:
  riscv: vector: allow kernel-mode Vector with preemption
  riscv: vector: use kmem_cache to manage vector context
  riscv: vector: use a mask to write vstate_ctrl
  riscv: vector: do not pass task_struct into riscv_v_vstate_{save,restore}()
  riscv: fpu: drop SR_SD bit checking
  riscv: lib: vectorize copy_to_user/copy_from_user
  riscv: sched: defer restoring Vector context for user
  riscv: Add vector extension XOR implementation
  riscv: vector: make Vector always available for softirq context
  riscv: Add support for kernel mode vector

Link: https://lore.kernel.org/r/20240115055929.4736-1-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
Add kernel_vstate to keep track of kernel-mode Vector registers when a trap-introduced context switch happens. Also, provide riscv_v_flags to let the context save/restore routines track context status. Context tracking happens whenever the core starts its in-kernel Vector execution. An active (dirty) kernel task's V context will be saved to memory whenever a trap-introduced context switch happens, or when a softirq, which happens to nest on top of it, uses Vector. Context restoring happens when execution transfers back to the original kernel context where it first enabled preempt_v. Also, provide a config, CONFIG_RISCV_ISA_V_PREEMPTIVE, to give users an option to disable preemptible kernel-mode Vector at build time. Users with constrained memory may want to disable this config, as preemptible kernel-mode Vector needs extra space to track each thread's kernel-mode V context. Users may also want to disable it if all kernel-mode Vector code is time-sensitive and cannot tolerate context switch overhead. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-11-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
The allocation size of thread.vstate.datap is always riscv_v_vsize, so it is possible to use kmem_cache_* to manage the allocation. This gives users more information regarding the allocation of vector contexts via /proc/slabinfo, and it potentially reduces the latency of the first-use trap because of the allocation caches. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-10-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
riscv_v_ctrl_set() should only touch bits within PR_RISCV_V_VSTATE_CTRL_MASK, so apply the mask when we actually set a task's vstate_ctrl. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-9-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
riscv_v_vstate_{save,restore}() can operate solely on the knowledge of struct __riscv_v_ext_state and struct pt_regs, so let the caller decide which should be passed into the function. Meanwhile, kernel-mode Vector is going to introduce another vstate, so this also makes the functions potentially reusable. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Acked-by: Conor Dooley <conor.dooley@microchip.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-8-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
SR_SD summarizes the dirty status of FS/VS/XS. However, the current code structure does not fully utilize it because each extension-specific code path is divided into an individual segment, so remove the SR_SD check for now. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Reviewed-by: Song Shuai <songshuaishuai@tinylab.org> Reviewed-by: Guo Ren <guoren@kernel.org> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-7-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andy Chiu authored
This patch utilizes Vector to perform copy_to_user/copy_from_user. If Vector is available and the size of the copy is large enough for Vector to perform better than scalar code, then direct the kernel to do Vector copies for userspace. Though the best programming practice for users is to reduce copies, this provides a faster variant when copies are inevitable. The optimal size for using Vector, copy_to_user_thres, is only a heuristic for now; we can add DT parsing if people feel the need to customize it. The exception fixup code of __asm_vector_usercopy must fall back to the scalar version because accessing user pages might fault and must be sleepable. The current kernel-mode Vector does not allow tasks to be preemptible, so we must deactivate Vector and perform a scalar fallback in such a case. The original implementation of the Vector operations comes from https://github.com/sifive/sifive-libc, which we agreed to contribute to the Linux kernel. Co-developed-by: Jerry Shih <jerry.shih@sifive.com> Signed-off-by: Jerry Shih <jerry.shih@sifive.com> Co-developed-by: Nick Knight <nick.knight@sifive.com> Signed-off-by: Nick Knight <nick.knight@sifive.com> Suggested-by: Guo Ren <guoren@kernel.org> Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-6-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
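As an illustration of the threshold dispatch described above, a minimal sketch follows; riscv_v_usercopy_threshold, its value, and the exact call site are hypothetical stand-ins, though has_vector() and __asm_vector_usercopy are the symbols named in the series:

    /* Sketch only: dispatch to the Vector usercopy above a size
     * threshold, falling back to the scalar routine otherwise. */
    static unsigned long riscv_v_usercopy_threshold = 768;  /* hypothetical */

    unsigned long raw_copy_to_user(void __user *to, const void *from,
                                   unsigned long n)
    {
            if (has_vector() && n >= riscv_v_usercopy_threshold)
                    return __asm_vector_usercopy(to, from, n);

            return __asm_copy_to_user(to, from, n);
    }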
-
Andy Chiu authored
A user process will use its Vector registers only after the kernel actually returns to userspace, so we can delay restoring Vector registers as long as we are still running in kernel mode. Add a thread flag to indicate the need for restoring Vector, and do the restore at the last arch-specific exit-to-user hook. This saves the context-restoring cost when we switch between multiple processes that run V in kernel mode. For example, if the kernel performs a context switch from A->B->C and returns to C's userspace, then there is no need to restore B's V-registers. Besides, this also prevents us from repeatedly restoring the V context when executing kernel-mode Vector multiple times. The cost of this is that we must disable preemption and mark Vector as busy during vstate_{save,restore}, because the V context will not get restored back immediately when a trap-causing context switch happens in the middle of vstate_{save,restore}. Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Acked-by: Conor Dooley <conor.dooley@microchip.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-5-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
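A rough sketch of the deferred-restore idea; the helper name, flag handling, and hook placement are simplified illustrations, not the patch's exact code:

    /* On a context switch, do not touch the V registers; just mark
     * the incoming task as needing a restore before it hits userspace. */
    static inline void riscv_v_defer_restore(struct task_struct *task,
                                             struct pt_regs *regs)
    {
            if (riscv_v_vstate_query(regs))
                    set_tsk_thread_flag(task, TIF_RISCV_V_DEFER_RESTORE);
    }

    /* At the last arch-specific exit-to-user hook: restore at most once,
     * no matter how many kernel-mode switches happened in between. */
    if (test_and_clear_thread_flag(TIF_RISCV_V_DEFER_RESTORE))
            riscv_v_vstate_restore(&current->thread.vstate,
                                   task_pt_regs(current));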
-
Greentime Hu authored
This patch adds support for a vector-optimized XOR implementation; it has been tested in QEMU. Co-developed-by: Han-Kuan Chen <hankuan.chen@sifive.com> Signed-off-by: Han-Kuan Chen <hankuan.chen@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-4-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
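The in-kernel implementation is hand-written assembly, but the idea can be illustrated in C with the RVV intrinsics — a userspace sketch, not the kernel code, assuming a toolchain that ships <riscv_vector.h>:

    #include <riscv_vector.h>
    #include <stddef.h>

    /* dst ^= src, processed vl bytes per iteration at LMUL=8 */
    void xor_vector(unsigned char *dst, const unsigned char *src, size_t n)
    {
            while (n > 0) {
                    size_t vl = __riscv_vsetvl_e8m8(n);
                    vuint8m8_t a = __riscv_vle8_v_u8m8(dst, vl);
                    vuint8m8_t b = __riscv_vle8_v_u8m8(src, vl);

                    __riscv_vse8_v_u8m8(dst, __riscv_vxor_vv_u8m8(a, b, vl), vl);
                    dst += vl;
                    src += vl;
                    n -= vl;
            }
    }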
-
Andy Chiu authored
The goal of this patch is to provide full support for Vector in kernel softirq context, so that some of the crypto algorithms won't need scalar fallbacks. By disabling bottom halves in active kernel-mode Vector, softirqs will not be able to nest on top of any kernel-mode Vector, so softirq context is able to use Vector whenever it runs. After this patch, a Vector context cannot be started with irqs disabled; otherwise local_bh_enable() may run in the wrong context. Disabling bh is not enough for an RT kernel to prevent preemption, so we must disable preemption, which also implies disabling bh on RT. Related-to: commit 696207d4 ("arm64/sve: Make kernel FPU protection RT friendly") Related-to: commit 66c3ec5a ("arm64: neon: Forbid when irqs are disabled") Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-3-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
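A condensed sketch of the resulting protection logic; the function name is illustrative, and the real code lives in the kernel-mode Vector entry path:

    /* Sketch: what must be masked before touching V in kernel mode. */
    static void riscv_v_kernel_context_protect(void)
    {
            /* A Vector context cannot start with irqs disabled, since
             * local_bh_enable() would later run in the wrong context. */
            WARN_ON(irqs_disabled());

            if (IS_ENABLED(CONFIG_PREEMPT_RT))
                    preempt_disable();      /* on RT this also keeps bh off */
            else
                    local_bh_disable();     /* forbid softirq nesting on this CPU */
    }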
-
Greentime Hu authored
Add kernel_vector_begin() and kernel_vector_end() function declarations and corresponding definitions in kernel_mode_vector.c. These are needed to wrap uses of Vector in kernel mode. Co-developed-by: Vincent Chen <vincent.chen@sifive.com> Signed-off-by: Vincent Chen <vincent.chen@sifive.com> Signed-off-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-2-andy.chiu@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
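The intended usage pattern, per the commit message, looks roughly like this caller sketch; the header location is an assumption:

    #include <asm/vector.h>         /* assumed home of the declarations */

    void do_some_vector_work(void)
    {
            kernel_vector_begin();  /* enable V for this kernel context */
            /* ... code that executes vector instructions ... */
            kernel_vector_end();    /* disable V again */
    }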
-
- 11 Jan, 2024 20 commits
-
-
Palmer Dabbelt authored
guoren@kernel.org <guoren@kernel.org> says: From: Guo Ren <guoren@linux.alibaba.com> When the task is in COMPAT mode, TASK_SIZE should be 2GB, so STACK_TOP_MAX and arch_get_mmap_end must be limited to 2GB. This series fixes the problem introduced by commit add2cc6b ("RISC-V: mm: Restrict address space for sv39,sv48,sv57") and optimizes the related coding convention of TASK_SIZE.

* b4-shazam-merge:
  riscv: mm: Fixup compat arch_get_mmap_end
  riscv: mm: Fixup compat mode boot failure

Link: https://lore.kernel.org/r/20231222115703.2404036-1-guoren@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Guo Ren authored
When the task is in COMPAT mode, arch_get_mmap_end should be 2GB, not TASK_SIZE_64. TASK_SIZE already contains the is_compat_mode() detection, so change the definition of STACK_TOP_MAX to TASK_SIZE directly. Cc: stable@vger.kernel.org Fixes: add2cc6b ("RISC-V: mm: Restrict address space for sv39,sv48,sv57") Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231222115703.2404036-3-guoren@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
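A sketch of the resulting definitions — simplified, and the compat check is spelled with the generic is_compat_task() here; the real macros in the riscv headers carry more detail per paging mode:

    #ifdef CONFIG_COMPAT
    #define TASK_SIZE       (is_compat_task() ? \
                             TASK_SIZE_32 : TASK_SIZE_64)
    #else
    #define TASK_SIZE       TASK_SIZE_64
    #endif

    #define STACK_TOP_MAX   TASK_SIZE       /* previously TASK_SIZE_64 */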
-
Guo Ren authored
In COMPAT mode, STACK_TOP is DEFAULT_MAP_WINDOW (0x80000000), but TASK_SIZE is 0x7fff000. When the user stack is above 0x7fff000, it causes a user segment fault. Sometimes it can cause boot failure when the whole rootfs is rv32:

Freeing unused kernel image (initmem) memory: 2236K
Run /sbin/init as init process
Starting init: /sbin/init exists but couldn't execute it (error -14)
Run /etc/init as init process
...

Increase TASK_SIZE to cover STACK_TOP. Cc: stable@vger.kernel.org Fixes: add2cc6b ("RISC-V: mm: Restrict address space for sv39,sv48,sv57") Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Leonardo Bras <leobras@redhat.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231222115703.2404036-2-guoren@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Christophe JAILLET authored
The terminating NUL is not taken into account by strncat()'s size argument, so switch to strlcat() to correctly compute the size of the available memory when appending CONFIG_CMDLINE to 'early_cmdline'. Fixes: 26e7aacb ("riscv: Allow to downgrade paging mode from the command line") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/9f66d2b58c8052d4055e90b8477ee55d9a0914f9.1698564026.git.christophe.jaillet@wanadoo.fr Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
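The semantic difference is easy to demonstrate: strncat()'s size argument bounds the bytes appended (and it then always writes a NUL on top), while strlcat()'s bounds the whole destination buffer. A self-contained userspace demo, with a local strlcat() work-alike since glibc does not ship one (the kernel's lives in lib/string.c):

    #include <stdio.h>
    #include <string.h>

    /* minimal strlcat work-alike for the demo */
    static size_t my_strlcat(char *dst, const char *src, size_t size)
    {
            size_t dlen = strlen(dst);

            if (dlen + 1 < size)
                    strncat(dst, src, size - dlen - 1);
            return dlen + strlen(src);
    }

    int main(void)
    {
            char buf[8] = "abc";

            /* WRONG: strncat(buf, "defghij", sizeof(buf)) could write
             * up to sizeof(buf) + 1 bytes starting after "abc". */
            my_strlcat(buf, "defghij", sizeof(buf));  /* truncates safely */
            printf("%s\n", buf);                      /* prints "abcdefg" */
            return 0;
    }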
-
Palmer Dabbelt authored
Christoph Muellner <christoph.muellner@vrull.eu> says: From: Christoph Müllner <christoph.muellner@vrull.eu> When building the RISC-V selftests with a riscv32 compiler I ran into a couple of compiler warnings. While riscv32 support for these tests is questionable, the fixes are so trivial that it is probably best to simply apply them. Note that the missing-include patch and some format string warnings are also relevant for riscv64.

* b4-shazam-merge:
  tools: selftests: riscv: Fix compile warnings in mm tests
  tools: selftests: riscv: Fix compile warnings in vector tests
  tools: selftests: riscv: Add missing include for vector test
  tools: selftests: riscv: Fix compile warnings in cbo
  tools: selftests: riscv: Fix compile warnings in hwprobe

Link: https://lore.kernel.org/r/20231123185821.2272504-1-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Christoph Müllner authored
When building the mm tests with a riscv32 compiler, we see a range of shift-count-overflow errors from shifting 1UL by more than 32 bits in do_mmaps(). Since the relevant code is only called from code that is gated by `__riscv_xlen == 64`, we can just apply the same gating to do_mmaps(). Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231123185821.2272504-6-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
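The shape of the fix, sketched (the function body is illustrative, not the actual selftest code):

    #if __riscv_xlen == 64
    static void do_mmaps(void)
    {
            /* fine here: unsigned long is 64 bits on riscv64 */
            unsigned long hint = 1UL << 40;

            /* ... mmap() calls using high hint addresses ... */
    }
    #endif  /* callers are gated the same way, so nothing dangles */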
-
Christoph Müllner authored
GCC prints a couple of format string warnings when compiling the vector tests. Let's follow the recommendation in Documentation/printk-formats.txt to fix these warnings. Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231123185821.2272504-5-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
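The class of fix, illustrated as a userspace sketch (the variable names are illustrative): match the format to the argument type on both riscv32 and riscv64 via a length modifier or a cast, as printk-formats.txt advises:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
            uint64_t vlenb = 128;           /* 64-bit even on riscv32 */
            unsigned long addr = 0x1000;

            printf("vlenb: %" PRIu64 "\n", vlenb);  /* not %lu: long is 32-bit on rv32 */
            printf("addr:  0x%lx\n", addr);         /* %lx matches unsigned long */
            return 0;
    }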
-
Christoph Müllner authored
GCC raises the following warning:

warning: 'status' may be used uninitialized

The warning comes from the fact that the signature of waitpid() is unknown, so GCC cannot tell that 'status' gets initialized by the call. Let's add the relevant header to address this warning. Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andy Chiu <andy.chiu@sifive.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231123185821.2272504-4-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
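For reference, the pattern being fixed, as a self-contained example:

    #include <sys/types.h>
    #include <sys/wait.h>   /* the missing include: declares waitpid() */
    #include <unistd.h>

    int main(void)
    {
            int status;     /* no longer "may be used uninitialized" */
            pid_t pid = fork();

            if (pid == 0)
                    _exit(0);

            waitpid(pid, &status, 0);       /* prototype now known to GCC */
            return WEXITSTATUS(status);
    }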
-
Christoph Müllner authored
GCC prints a couple of format string warnings when compiling the cbo test. Let's follow the recommendation in Documentation/printk-formats.txt to fix these warnings. Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231123185821.2272504-3-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Christoph Müllner authored
GCC prints a couple of format string warnings when compiling the hwprobe test. Let's follow the recommendation in Documentation/printk-formats.txt to fix these warnings. Signed-off-by: Christoph Müllner <christoph.muellner@vrull.eu> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231123185821.2272504-2-christoph.muellner@vrull.eu Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
Allow deferring the flushing of the TLB when unmapping pages, which reduces the number of IPIs and the number of sfence.vma instructions. The microbenchmark used in commit 43b3dfdd ("arm64: support batched/deferred tlb shootdown during page reclamation/migration"), made multithreaded here to force the usage of IPIs, shows a good performance improvement on all platforms:

* Unmatched: ~34%
* TH1520   : ~78%
* Qemu     : ~81%

In addition, perf on qemu reports an important decrease in time spent dealing with IPIs:

Before: 68.17% main [kernel.kallsyms] [k] __sbi_rfence_v02_call
After :  8.64% main [kernel.kallsyms] [k] __sbi_rfence_v02_call

* Benchmark:

int stick_this_thread_to_core(int core_id) {
        int num_cores = sysconf(_SC_NPROCESSORS_ONLN);
        if (core_id < 0 || core_id >= num_cores)
                return EINVAL;

        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(core_id, &cpuset);

        pthread_t current_thread = pthread_self();
        return pthread_setaffinity_np(current_thread,
                                      sizeof(cpu_set_t), &cpuset);
}

static void *fn_thread(void *p_data)
{
        int ret;
        pthread_t thread;

        stick_this_thread_to_core((int)p_data);

        while (1) {
                sleep(1);
        }

        return NULL;
}

int main()
{
        volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        pthread_t threads[4];
        int ret;

        for (int i = 0; i < 4; ++i) {
                ret = pthread_create(&threads[i], NULL, fn_thread, (void *)i);
                if (ret) {
                        printf("%s", strerror(ret));
                }
        }

        memset(p, 0x88, SIZE);

        for (int k = 0; k < 10000; k++) {
                /* swap in */
                for (int i = 0; i < SIZE; i += 4096) {
                        (void)p[i];
                }

                /* swap out */
                madvise(p, SIZE, MADV_PAGEOUT);
        }

        for (int i = 0; i < 4; i++) {
                pthread_cancel(threads[i]);
        }

        for (int i = 0; i < 4; i++) {
                pthread_join(threads[i], NULL);
        }

        return 0;
}

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Tested-by: Jisheng Zhang <jszhang@kernel.org> # Tested on TH1520 Tested-by: Nam Cao <namcao@linutronix.de> Link: https://lore.kernel.org/r/20240108193640.344929-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Alexandre Ghiti authored
This allows better TLB utilization and should therefore be more performant.

Before:
---[ vmemmap start ]---
0xffff8d8002000000-0xffff8d8012000000 0x000000046ec00000 256M PTE . .. .. D A G . . W R V
---[ vmemmap end ]---

After:
---[ vmemmap start ]---
0xffff8d8002000000-0xffff8d8012000000 0x000000046ec00000 256M PMD . .. .. D A G . . W R V
---[ vmemmap end ]---

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20231214132935.212864-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Daniel Henrique Barboza authored
Following the examples of cbom-block-size and cboz-block-size, cbop-block-size is the cache block size of Zicbop (cbo.prefetch) operations. The most common case is for all cache block sizes to be the same (e.g. profiles such as rva22u64 mandate a 64-byte size for all cache operations), but there's no specification requirement for that, and an implementation can have a different cache block size for each operation. Cc: Rob Herring <robh@kernel.org> Cc: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Acked-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231029123500.739409-1-dbarboza@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Jisheng Zhang <jszhang@kernel.org> says: Previously, we used the alternative mechanism to dynamically patch the CMO operations for the T-HEAD C906/C910 during boot, for performance reasons. But as pointed out by Arnd, "there is already a significant cost in accessing the invalidated cache lines afterwards, which is likely going to be much higher than the cost of an indirect branch". And indeed, there's no performance difference with GMAC and EMMC per my test on a Sipeed Lichee Pi 4A board. Use riscv_nonstd_cache_ops for the T-HEAD C906/C910 CMO to simplify the alternative code, and to achieve Arnd's goal -- "I think moving the THEAD ops at the same level as all nonstandard operations makes sense, but I'd still leave CMO as an explicit fast path that avoids the indirect branch. This seems like the right thing to do both for readability and for platforms on which the indirect branch has a noticeable overhead." To make bisecting easy, I use two patches here: patch 1 does the conversion, which just mimics the current CMO behavior via riscv_nonstd_cache_ops; I assume no functional changes. Patch 2 uses T-HEAD PA-based CMO instructions so that we don't need to convert PA to VA.

* b4-shazam-merge:
  riscv: errata: thead: use pa based instructions for CMO
  riscv: errata: thead: use riscv_nonstd_cache_ops for CMO

Link: https://lore.kernel.org/r/20231114143338.2406-1-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Conor Dooley authored
There are some extensions that contain numbers, such as Zve32f, which are enabled by the "max" cpu type in QEMU. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20231208-uncolored-oxidant-5ab37dd3ab84@spud Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Samuel Holland authored
The current description implies that only a single address translation mode is available to the operating system. However, some implementations support multiple address translation modes, and the operating system is free to choose between them. Per the RISC-V privileged specification, Sv48 implementations must also implement Sv39, and likewise Sv57 implies support for Sv48. This means it is possible to describe all supported address translation modes using a single value, by naming the largest supported mode. This appears to have been the intended usage of the property, so note it explicitly. Fixes: 4fd669a8 ("dt-bindings: riscv: convert cpu binding to json-schema") Signed-off-by: Samuel Holland <samuel.holland@sifive.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231227175739.1453782-1-samuel.holland@sifive.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Anup Patel <apatel@ventanamicro.com> says: The SBI v2.0 specification is now frozen. The SBI v2.0 specification defines the SBI debug console (DBCN) extension, which replaces the legacy SBI v0.1 functions sbi_console_putchar() and sbi_console_getchar(). (Refer to v2.0-rc5 at https://github.com/riscv-non-isa/riscv-sbi-doc/releases) This series adds support for the SBI debug console (DBCN) extension in Linux RISC-V. To try these patches with KVM RISC-V, use KVMTOOL from the riscv_zbx_zicntr_smstateen_condops_v1 branch at: https://github.com/avpatel/kvmtool.git

* b4-shazam-merge:
  RISC-V: Enable SBI based earlycon support
  tty: Add SBI debug console support to HVC SBI driver
  tty/serial: Add RISC-V SBI debug console based earlycon
  RISC-V: Add SBI debug console helper routines
  RISC-V: Add stubs for sbi_console_putchar/getchar()

Link: https://lore.kernel.org/r/20231124070905.1043092-1-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Andrew Jones authored
When the SUSP SBI extension is present it implies that the standard "suspend to RAM" type is available. Wire it up to the generic platform suspend support, also applying the already present support for non-retentive CPU suspend. When the kernel is built with CONFIG_SUSPEND, one can do 'echo mem > /sys/power/state' to suspend. Resumption will occur when a platform-specific wake-up event arrives. Signed-off-by: Andrew Jones <ajones@ventanamicro.com> Tested-by: Samuel Holland <samuel.holland@sifive.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231206110807.35882-4-ajones@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Charlie Jenkins <charlie@rivosinc.com> says: When modules are loaded while there is not ample allocatable memory, there was previously no proper error handling. This series fixes a use-after-free error and a different issue that caused a non-graceful exit after memory was not properly allocated.

* b4-shazam-merge:
  riscv: Fix relocation_hashtable size
  riscv: Correctly free relocation hashtable on error
  riscv: Fix module loading free order

Link: https://lore.kernel.org/r/20240104-module_loading_fix-v3-0-a71f8de6ce0f@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Palmer Dabbelt authored
Jisheng Zhang <jszhang@kernel.org> says: Some riscv implementations such as T-HEAD's C906, C908, C910 and C920 support efficient unaligned access; for performance reasons we want to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To avoid performance regressions on platforms without efficient unaligned access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally. To solve this problem, runtime code patching based on the detected speed would be a good solution. But that's not easy: it involves lots of work to modify various subsystems such as net, mm, lib and so on. This can be done step by step. So let's take an easier solution: add support for efficient unaligned access and hide the support under NONPORTABLE. Patch 1 introduces RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on NONPORTABLE; if users know at config time that the kernel will only be run on HW platforms with efficient unaligned access, they can enable it. Obviously, a generic unified kernel Image shouldn't enable it. Patch 2 adds support for DCACHE_WORD_ACCESS when both MMU and RISCV_EFFICIENT_UNALIGNED_ACCESS are enabled. The test program and steps below show how much performance can be improved:

$ cat tt.c
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#define ITERATIONS 1000000
#define PATH "123456781234567812345678123456781"

int main(void)
{
        unsigned long i;
        struct stat buf;

        for (i = 0; i < ITERATIONS; i++)
                stat(PATH, &buf);

        return 0;
}

$ gcc -O2 tt.c
$ touch 123456781234567812345678123456781
$ time ./a.out

Per my test on T-HEAD C910 platforms, the above test's performance is improved by about 7.5%.

* b4-shazam-merge:
  riscv: select DCACHE_WORD_ACCESS for efficient unaligned access HW
  riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS

Link: https://lore.kernel.org/r/20231225044207.3821-1-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
- 10 Jan, 2024 9 commits
-
-
Jisheng Zhang authored
T-HEAD CPUs such as the C906/C910/C920 support physical-address-based CMO instructions; use them so that we don't need to convert to virtual addresses. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20231114143338.2406-3-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Jisheng Zhang authored
Previously, we used the alternative mechanism to dynamically patch the CMO operations for the T-HEAD C906/C910 during boot, for performance reasons. But as pointed out by Arnd, "there is already a significant cost in accessing the invalidated cache lines afterwards, which is likely going to be much higher than the cost of an indirect branch". And indeed, there's no performance difference with GMAC and EMMC per my test on a Sipeed Lichee Pi 4A board. Use riscv_nonstd_cache_ops for the T-HEAD C906/C910 CMO to simplify the alternative code, and to achieve Arnd's goal -- "I think moving the THEAD ops at the same level as all nonstandard operations makes sense, but I'd still leave CMO as an explicit fast path that avoids the indirect branch. This seems like the right thing to do both for readability and for platforms on which the indirect branch has a noticeable overhead." Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231114143338.2406-2-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
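A sketch of the registration side: struct riscv_nonstd_cache_ops and riscv_noncoherent_register_cache_ops() are the kernel's non-standard cache-ops hooks, while the function bodies and the init hook name here are illustrative:

    static void thead_cmo_wback(phys_addr_t paddr, size_t size)
    {
            /* loop issuing T-HEAD dcache-clean ops over [paddr, paddr + size) */
    }

    static const struct riscv_nonstd_cache_ops thead_cmo_ops = {
            .wback = thead_cmo_wback,
            /* .inv and .wback_inv are wired up the same way */
    };

    static void __init thead_register_cmo_ops(void)  /* hypothetical hook */
    {
            riscv_noncoherent_register_cache_ops(&thead_cmo_ops);
    }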
-
Anup Patel authored
Let us enable SBI based earlycon support in defconfig for both RV32 and RV64 so that "earlycon=sbi" can be used again. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231124070905.1043092-6-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Atish Patra authored
The RISC-V SBI specification provides advanced debug console support via the SBI DBCN extension. Extend the HVC SBI driver to support it. Signed-off-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20231124070905.1043092-5-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Anup Patel authored
We extend the existing RISC-V SBI earlycon support to use the new RISC-V SBI debug console extension. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20231124070905.1043092-4-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Anup Patel authored
Let us provide SBI debug console helper routines which can be shared by serial/earlycon-riscv-sbi.c and hvc/hvc_riscv_sbi.c. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231124070905.1043092-3-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
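A sketch of what such a helper can look like on top of sbi_ecall(); the DBCN extension ID (0x4442434E, "DBCN") and the console_write calling convention come from the frozen SBI v2.0 spec, while the helper name and error mapping here are simplified assumptions:

    #define SBI_EXT_DBCN                    0x4442434E
    #define SBI_EXT_DBCN_CONSOLE_WRITE      0

    static int sbi_dbcn_console_write(const char *bytes, unsigned int num_bytes)
    {
            phys_addr_t pa = __pa(bytes);
            struct sbiret ret;

            /* args: num_bytes, base_addr_lo, base_addr_hi; the hi word
             * only matters on rv32 with physical addresses above 32 bits */
            ret = sbi_ecall(SBI_EXT_DBCN, SBI_EXT_DBCN_CONSOLE_WRITE,
                            num_bytes, pa, 0, 0, 0, 0);

            return ret.error ? -EIO : ret.value;  /* simplified error mapping */
    }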
-
Anup Patel authored
The functions sbi_console_putchar() and sbi_console_getchar() are not defined when CONFIG_RISCV_SBI_V01 is disabled, so let us add stubs for these functions to avoid "#ifdef" on the user side. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231124070905.1043092-2-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
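The usual shape of such stubs, sketched (the exact return value in the real patch may differ):

    #ifdef CONFIG_RISCV_SBI_V01
    void sbi_console_putchar(int ch);
    int sbi_console_getchar(void);
    #else
    static inline void sbi_console_putchar(int ch) { }
    static inline int sbi_console_getchar(void) { return -ENOENT; }
    #endif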
-
Charlie Jenkins authored
A second dereference is needed to get the accurate size of the relocation_hashtable. Signed-off-by: Charlie Jenkins <charlie@rivosinc.com> Fixes: d8792a57 ("riscv: Safely remove entries from relocation list") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Julia Lawall <julia.lawall@inria.fr> Closes: https://lore.kernel.org/r/202312120044.wTI1Uyaa-lkp@intel.com/ Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org> Link: https://lore.kernel.org/r/20240104-module_loading_fix-v3-3-a71f8de6ce0f@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
-
Charlie Jenkins authored
When there is not enough allocatable memory for the relocation hashtable, module loading should exit gracefully. Previously, this was attempted by checking whether an unsigned number is less than zero, which does not work. Instead, have the caller check whether the hashtable was correctly allocated, and add a comment explaining that a hashtable_bits of 0 is valid. Signed-off-by: Charlie Jenkins <charlie@rivosinc.com> Fixes: d8792a57 ("riscv: Safely remove entries from relocation list") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Closes: https://lore.kernel.org/r/202312132019.iYGTwW0L-lkp@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Reported-by: Julia Lawall <julia.lawall@inria.fr> Closes: https://lore.kernel.org/r/202312120044.wTI1Uyaa-lkp@intel.com/ Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org> Link: https://lore.kernel.org/r/20240104-module_loading_fix-v3-2-a71f8de6ce0f@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
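The underlying C pitfall, as a self-contained demo:

    #include <stdio.h>

    int main(void)
    {
            unsigned int hashtable_bits = 0;

            /* Always false: an unsigned value can never be negative,
             * so this error check is dead code (GCC's -Wtype-limits
             * flags it). */
            if (hashtable_bits < 0)
                    printf("never reached\n");

            /* The fix: the caller tests the returned pointer instead,
             * and hashtable_bits == 0 is documented as valid. */
            return 0;
    }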
-