- 05 Dec, 2022 2 commits
-
Juergen Gross authored
Instead of just saying "Disabled" when MTRRs are disabled for any reason, state what is disabled and why. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20221205080433.16643-3-jgross@suse.com
-
Juergen Gross authored
With the decoupling of PAT and MTRR initialization, PAT will be used even with MTRRs disabled. This seems to break booting as a TDX guest, as the recommended sequence for setting the PAT MSR across CPUs can't work in TDX guests because disabling caches by setting CR0.CD isn't allowed in TDX mode. This is an inconsistency in the Intel documentation between the SDM and the TDX specification. For now, handle TDX mode the same way as Xen PV guest mode by just accepting the current PAT MSR setting without trying to modify it. [ bp: Align conditions for better readability. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20221205080433.16643-2-jgross@suse.com
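A hedged sketch of what "accepting the current PAT MSR setting" can look like (kernel context assumed; the exact patch may differ, and __init_cache_modes() is taken from the surrounding x86 PAT code):

        /*
         * Sketch: in TDX or Xen PV guests, accept the PAT MSR value set up
         * by the hypervisor and only derive the kernel's cache modes from
         * it, instead of running the CR0.CD-based programming sequence.
         */
        if (cpu_feature_enabled(X86_FEATURE_XENPV) ||
            cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
                u64 pat;

                rdmsrl(MSR_IA32_CR_PAT, pat);   /* accept current setting */
                __init_cache_modes(pat);        /* no MSR write attempted */
                return;
        }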
-
- 29 Nov, 2022 1 commit
-
Borislav Petkov authored
Carve it out into a special header, where it belongs. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221124164150.3040-1-bp@alien8.de
-
- 22 Nov, 2022 5 commits
-
Juergen Gross authored
Convert the remaining cases of static_cpu_has(X86_FEATURE_XENPV) and boot_cpu_has(X86_FEATURE_XENPV) to use cpu_feature_enabled(), allowing more efficient code in case the kernel is configured without CONFIG_XEN_PV. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-6-jgross@suse.com
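A hedged sketch of the resulting pattern (the callee name is illustrative): with CONFIG_XEN_PV unset, X86_FEATURE_XENPV is part of the build-time disabled-features mask, so cpu_feature_enabled() folds to a compile-time constant zero and the guarded code is discarded entirely:

        /* sketch: compiled out completely when CONFIG_XEN_PV is not set */
        if (cpu_feature_enabled(X86_FEATURE_XENPV))
                do_xen_pv_setup();      /* illustrative callee */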
-
Juergen Gross authored
Testing of X86_FEATURE_XENPV in setup_cpu_entry_area() can be removed, as this code path is 32-bit only, and Xen PV guests are 64-bit only. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-5-jgross@suse.com
-
Juergen Gross authored
Testing for Xen PV guest mode in a 32-bit only code section can be dropped, as Xen PV guests are supported in 64-bit mode only. While at it, switch from boot_cpu_has() to cpu_feature_enabled() in the 64-bit part of the code. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-4-jgross@suse.com
-
Juergen Gross authored
The check for 64-bit mode when testing X86_FEATURE_XENPV isn't needed, as Xen PV guests are no longer supported in 32-bit mode, see a13f2ef1 ("x86/xen: remove 32-bit Xen PV guest support"). While at it, switch from boot_cpu_has() to cpu_feature_enabled(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-3-jgross@suse.com
-
Juergen Gross authored
Add X86_FEATURE_XENPV to the features handled specially in disabled-features.h. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20221104072701.20283-2-jgross@suse.com
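A minimal sketch of the disabled-features.h pattern this refers to (the DISABLED_MASK word shown is illustrative; the correct word depends on which cpufeatures word X86_FEATURE_XENPV lives in):

        #ifdef CONFIG_XEN_PV
        # define DISABLE_XENPV  0
        #else
        # define DISABLE_XENPV  (1 << (X86_FEATURE_XENPV & 31))
        #endif

        /* the bit is ORed into the DISABLED_MASK word of its feature word */
        #define DISABLED_MASK8  (DISABLE_XENPV)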
-
- 11 Nov, 2022 1 commit
-
Tony W Wang-oc authored
On all recent Centaur platforms, ARB_DISABLE is handled by the PMU automatically while entering a C3 type state. There is no need for the OS to issue ARB_DISABLE, so set bm_control to zero to indicate that. Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/all/1667792089-4904-1-git-send-email-TonyWWang-oc%40zhaoxin.com
-
- 10 Nov, 2022 13 commits
-
Juergen Gross authored
The way mtrr_if is initialized with the correct mtrr_ops structure is quite weird. Simplify that by dropping the vendor specific init functions and the mtrr_ops[] array. Replace those with direct assignments of the related vendor specific ops array to mtrr_if. Note that a direct assignment is okay even for 64-bit builds, where the symbol isn't present, as the related code will be subject to "dead code elimination" due to how cpu_feature_enabled() is implemented. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-17-jgross@suse.com
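A hedged sketch of the direct assignments (ops table names as used in the MTRR code; the exact feature checks are assumptions):

        /* sketch: pick the ops table directly instead of via mtrr_ops[] */
        if (cpu_feature_enabled(X86_FEATURE_MTRR))
                mtrr_if = &generic_mtrr_ops;
        else if (cpu_feature_enabled(X86_FEATURE_K6_MTRR))
                mtrr_if = &amd_mtrr_ops;        /* 32-bit only vendors: */
        else if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR))
                mtrr_if = &cyrix_mtrr_ops;      /* dead-code-eliminated   */
        else if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR))
                mtrr_if = &centaur_mtrr_ops;    /* on 64-bit builds       */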
-
Juergen Gross authored
Instead of explicitly calling cache_ap_init() in identify_secondary_cpu(), use a CPU hotplug callback. By registering the callback only after having started the non-boot CPUs and initializing cache_aps_delayed_init with "true", calling set_cache_aps_delayed_init() at boot time can be dropped. It should be noted that this change results in cache_ap_init() being called a little bit later when hotplugging CPUs. By using a new hotplug slot right at the start of the low-level bringup this is not problematic, as no operations requiring a specific caching mode are performed that early in CPU initialization. Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-15-jgross@suse.com
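A hedged sketch of the registration (the hotplug slot and state name are assumptions for illustration; cpuhp_setup_state_nocalls() is the standard API for this):

        static int cache_ap_online(unsigned int cpu)
        {
                cache_ap_init();
                return 0;
        }

        static int __init cache_ap_register(void)
        {
                /* new slot right at the start of low-level AP bringup */
                cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
                                          "x86/cachectrl:starting",
                                          cache_ap_online, NULL);
                return 0;
        }
        core_initcall(cache_ap_register);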
-
Juergen Gross authored
Today, PAT is usable only with MTRR being active, with some nasty tweaks to make PAT usable when running as a Xen PV guest, which doesn't support MTRR. The reason for this coupling is that both PAT MSR changes and MTRR changes require a similar sequence, and so full PAT support was added using the already available MTRR handling. Xen PV PAT handling can work without MTRR, as it just needs to consume the PAT MSR setting done by the hypervisor, without the ability or need to change it. This in turn has resulted in a convoluted initialization sequence and wrong decisions regarding cache mode availability due to misguiding PAT availability flags. Fix all of that by allowing PAT to be used without MTRR and by reworking the current PAT initialization sequence to better match the newly introduced generic cache initialization. This removes the need for the recently added pat_force_disabled flag, so remove the remnants of the patch adding it. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-14-jgross@suse.com
-
Juergen Gross authored
Instead of having a stop_machine() handler for either a specific MTRR register or all state at once, add a handler just for calling cache_cpu_init() if appropriate. Add functions for calling stop_machine() with this handler as well. Add a generic replacement for mtrr_bp_restore() and a wrapper for mtrr_bp_init(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-13-jgross@suse.com
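A hedged sketch of such a handler and its stop_machine() wrapper (the condition and the memory_caching_control name are illustrative, kept consistent with the control-flags sketch further down):

        static int cache_rendezvous_handler(void *unused)
        {
                /* (re)program MTRR/PAT on this CPU if the system needs it */
                if (memory_caching_control)     /* illustrative condition */
                        cache_cpu_init();
                return 0;
        }

        void cache_aps_init(void)
        {
                /* run the handler on all online CPUs simultaneously */
                stop_machine(cache_rendezvous_handler, NULL, cpu_online_mask);
        }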
-
Juergen Gross authored
In order to prepare for decoupling MTRR and PAT, replace the MTRR-specific mtrr_aps_delayed_init flag with a more generic cache_aps_delayed_init one. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-12-jgross@suse.com
-
Juergen Gross authored
There is no need to keep __mtrr_enabled, as it can easily be replaced by testing whether mtrr_if is not NULL. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-11-jgross@suse.com
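That is, roughly (a sketch of what the message describes):

        /* sketch: the separate flag collapses into a NULL check */
        static bool mtrr_enabled(void)
        {
                return !!mtrr_if;
        }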
-
Juergen Gross authored
In case of the generic cache interface being used (Intel CPUs or a 64-bit system), the initialization sequence of the boot CPU is more complicated than necessary:
- check if MTRR is enabled; if yes, call mtrr_bp_pat_init(), which will disable caching, set the PAT MSR, and reenable caching
- call mtrr_cleanup(); in case that changed anything, call cache_cpu_init(), doing the same caching disable/enable dance as above, but this time setting the (modified) MTRR state (even if MTRR was disabled) AND setting the PAT MSR (again, even with MTRR disabled)
The sequence can be simplified a lot while removing potential inconsistencies:
- check if MTRR is enabled; if yes, call mtrr_cleanup() and then cache_cpu_init()
This ensures that:
- caching is no longer disabled/enabled more than once
- MTRRs and/or the PAT MSR are not set on the boot processor during MTRR cleanups even if MTRRs are meant to be disabled
With that, mtrr_bp_pat_init() can be removed. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-10-jgross@suse.com
-
Juergen Gross authored
Instead of using an indirect call to mtrr_if->set_all, just call the only possible target, cache_cpu_init(), directly. Remove the set_all function pointer from struct mtrr_ops. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-9-jgross@suse.com
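The change amounts to this kind of substitution (sketch):

        /* before: indirect call with only one possible target */
        mtrr_if->set_all();

        /* after: direct call, cheaper and friendlier to dead code elimination */
        cache_cpu_init();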
-
Juergen Gross authored
Add a main cache_cpu_init() init routine which initializes MTRR and/or PAT support, depending on what has been detected on the system. Leave the MTRR-specific initialization in an MTRR-specific init function, where the smp_changes_mask setting now happens with caches disabled. This global mask update was done with caches enabled before, probably because atomic operations while running uncached might have been quite expensive. But since only systems with a broken BIOS should ever need to set any bit in smp_changes_mask, hurting those devices with a penalty of a few microseconds during boot shouldn't be a real issue. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-8-jgross@suse.com
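A hedged sketch of such a routine (flag and helper names are illustrative, matching the MTRR/PAT control bits described in the related commit below):

        void cache_cpu_init(void)
        {
                unsigned long flags;

                local_irq_save(flags);
                cache_disable();

                if (memory_caching_control & CACHE_MTRR)
                        mtrr_generic_set_state();       /* MTRR-specific part */

                if (memory_caching_control & CACHE_PAT)
                        pat_cpu_init();                 /* PAT MSR setup */

                cache_enable();
                local_irq_restore(flags);
        }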
-
Juergen Gross authored
Prepare making PAT and MTRR support independent from each other by moving some code needed by both out of the MTRR-specific sources. [ bp: Massage commit message. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-7-jgross@suse.com
-
Juergen Gross authored
Split the MTRR-specific actions from cache_disable() and cache_enable() into new functions mtrr_disable() and mtrr_enable(). Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-6-jgross@suse.com
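A hedged sketch of the split (TLB flushing and CR4 handling omitted; the memory_caching_control/CACHE_MTRR names are illustrative, as above):

        void cache_disable(void)
        {
                write_cr0(read_cr0() | X86_CR0_CD);     /* disable caching */
                wbinvd();                               /* flush caches */
                if (memory_caching_control & CACHE_MTRR)
                        mtrr_disable();                 /* MTRR-specific part */
        }

        void cache_enable(void)
        {
                if (memory_caching_control & CACHE_MTRR)
                        mtrr_enable();
                write_cr0(read_cr0() & ~X86_CR0_CD);    /* re-enable caching */
        }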
-
Juergen Gross authored
Rename the currently MTRR-specific functions prepare_set() and post_set() in preparation for moving them. Make them non-static and put their prototypes into cacheinfo.h, where they will end up after being moved to their final position anyway. Expand the comment before the functions with an introductory line, and rename two related static variables, too. [ bp: Massage commit message. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-5-jgross@suse.com
-
Juergen Gross authored
In the MTRR code, use_intel() is only used in one source file, and the relevant use_intel_if member of struct mtrr_ops is set only in generic_mtrr_ops. Replace use_intel() with a single flag in cacheinfo.c which can be set when assigning generic_mtrr_ops to mtrr_if. This allows dropping use_intel_if from mtrr_ops while preparing to decouple PAT from MTRR. As another preparation for the PAT/MTRR decoupling, use one bit for MTRR control and one for PAT control. For now, set both bits together; this can be changed later. As the new flag will be set only if mtrr_enabled is set, the test for mtrr_enabled can be dropped in some places. [ bp: Massage commit message. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221102074713.21493-4-jgross@suse.com
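A hedged sketch of the flag and its bits (names illustrative, consistent with the earlier sketches in this log):

        /* set in cacheinfo.c when the generic (Intel) cache handling is used */
        #define CACHE_MTRR      0x01
        #define CACHE_PAT       0x02

        unsigned int memory_caching_control;

        /* where generic_mtrr_ops is assigned to mtrr_if: */
        memory_caching_control |= CACHE_MTRR | CACHE_PAT;  /* together for now */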
-
- 03 Nov, 2022 1 commit
-
Srinivas Pandruvada authored
Intel processors support an additional software hint called EPB ("Energy Performance Bias") to guide the hardware heuristic of power management features to favor increasing dynamic performance or conserving energy consumption. Since this EPB hint is processor specific, the same hint value can result in different behavior across generations of processors. Commit 4ecc933b ("x86: intel_epb: Allow model specific normal EPB value") introduced the capability to update the default power-up EPB based on the CPU model and updated the default EPB to 7 for Alder Lake mobile CPUs. The same change is required for Alder Lake-N and Raptor Lake-P mobile CPUs, as the current default of 6 results in higher uncore power consumption. This increase in power is related to the memory clock frequency setting based on the EPB value. Depending on the EPB, the minimum memory frequency is set by the firmware. At EPB = 7, the minimum memory frequency is 1/4th of that at EPB = 6. This results in significant power savings for idle and semi-idle workloads on a Chrome platform. For example, the change in power and performance from changing EPB from 6 to 7 on Alder Lake-N:

Workload        Performance diff (%)    Power diff
----------------------------------------------------
VP9 FHD30       0 (FPS)                 -218 mW
Google meet     0 (FPS)                 -385 mW

This 200+ mW power saving is very significant for mobile platforms for battery life and thermal reasons. But as the workload demands more memory bandwidth, the memory frequency will be increased very fast; there are no power savings for such busy workloads. For example:

Workload                    Performance diff (%) from EPB 6 to 7
-------------------------------------------------------
Speedometer 2.0             -0.8
WebGL Aquarium 10K Fish     -0.5
Unity 3D 2018                0.2
WebXPRT3                    -0.5

There are run-to-run variations in performance scores for such busy workloads, so the difference is not significant. Add a new define, ENERGY_PERF_BIAS_NORMAL_POWERSAVE, for EPB 7 and use it for Alder Lake-N and Raptor Lake-P mobile CPUs. This modification was done originally by Jeremy Compostella <jeremy.compostella@intel.com>. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/all/20221027220056.1534264-1-srinivas.pandruvada%40linux.intel.com
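A hedged sketch of what such a change can look like (the match-table style follows arch/x86 conventions; treat table name and placement as illustrative):

        #define ENERGY_PERF_BIAS_NORMAL_POWERSAVE       7

        /* per-model overrides of the default power-up EPB (sketch) */
        static const struct x86_cpu_id intel_epb_normal[] = {
                X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L,
                                           ENERGY_PERF_BIAS_NORMAL_POWERSAVE),
                X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N,
                                           ENERGY_PERF_BIAS_NORMAL_POWERSAVE),
                X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P,
                                           ENERGY_PERF_BIAS_NORMAL_POWERSAVE),
                {}
        };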
-
- 20 Oct, 2022 1 commit
-
Juergen Gross authored
The Cyrix CPU specific MTRR function cyrix_set_all() will never be called as the mtrr_ops->set_all() callback will only be called in the use_intel() case, which would require the use_intel_if member of struct mtrr_ops to be set, which isn't the case for Cyrix. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20221004081023.32402-3-jgross@suse.com
-
- 19 Oct, 2022 1 commit
-
Juergen Gross authored
Add a comment about set_mtrr_state() needing serialization. [ bp: Touchups. ] Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220820092533.29420-2-jgross@suse.com
-
- 17 Oct, 2022 1 commit
-
Borislav Petkov authored
Those mitigations are very talkative; use the printing helper which pays attention to the buffer size. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220809153419.10182-1-bp@alien8.de
-
- 16 Oct, 2022 10 commits
-
Linus Torvalds authored
Merge tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull more random number generator updates from Jason Donenfeld:
"This time with some large scale treewide cleanups. The intent of this pull is to clean up the way callers fetch random integers. The current rules for doing this right are:
- If you want a secure or an insecure random u64, use get_random_u64()
- If you want a secure or an insecure random u32, use get_random_u32(). The old function prandom_u32() has been deprecated for a while now and is just a wrapper around get_random_u32(). Same for get_random_int().
- If you want a secure or an insecure random u16, use get_random_u16()
- If you want a secure or an insecure random u8, use get_random_u8()
- If you want secure or insecure random bytes, use get_random_bytes(). The old function prandom_bytes() has been deprecated for a while now and has long been a wrapper around get_random_bytes()
- If you want a non-uniform random u32, u16, or u8 bounded by a certain open interval maximum, use prandom_u32_max(). I say "non-uniform", because it doesn't do any rejection sampling or divisions. Hence, it stays within the prandom_*() namespace, not the get_random_*() namespace. I'm currently investigating a "uniform" function for 6.2. We'll see what comes of that.
By applying these rules uniformly, we get several benefits:
- By using prandom_u32_max() with an upper-bound that the compiler can prove at compile-time is ≤65536 or ≤256, internally get_random_u16() or get_random_u8() is used, which wastes fewer batched random bytes, and hence has higher throughput.
- By using prandom_u32_max() instead of %, when the upper-bound is not a constant, division is still avoided, because prandom_u32_max() uses a faster multiplication-based trick instead.
- By using get_random_u16() or get_random_u8() in cases where the return value is intended to indeed be a u16 or a u8, we waste fewer batched random bytes, and hence have higher throughput.
This series was originally done by hand while I was on an airplane without Internet. Later, Kees and I worked on retroactively figuring out what could be done with Coccinelle and what had to be done manually, and then we split things up based on that. So while this touches a lot of files, the actual amount of code that's hand fiddled is comfortably small"
* tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  prandom: remove unused functions
  treewide: use get_random_bytes() when possible
  treewide: use get_random_u32() when possible
  treewide: use get_random_{u8,u16}() when possible, part 2
  treewide: use get_random_{u8,u16}() when possible, part 1
  treewide: use prandom_u32_max() when possible, part 2
  treewide: use prandom_u32_max() when possible, part 1
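The "faster multiplication-based trick" mentioned above works roughly like this (a sketch of the helper as commonly implemented at the time; kernel context assumed):

        /*
         * Map a full-range 32-bit random value into [0, ceil) without a
         * division: the high 32 bits of the 64-bit product are close enough
         * to uniform for non-cryptographic bounding (no rejection sampling).
         */
        static inline u32 prandom_u32_max(u32 ceil)
        {
                return (u32)(((u64)get_random_u32() * ceil) >> 32);
        }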
-
Linus Torvalds authored
Merge tag 'perf-tools-for-v6.1-2-2022-10-16' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull more perf tools updates from Arnaldo Carvalho de Melo:
- Use BPF CO-RE (Compile Once, Run Everywhere) to support old kernels when using bperf (perf BPF based counters) with cgroups.
- Support HiSilicon PCIe Performance Monitoring Unit (PMU), that monitors bandwidth, latency, bus utilization and buffer occupancy. Documented in Documentation/admin-guide/perf/hisi-pcie-pmu.rst.
- User space tasks can migrate between CPUs, so when tracing selected CPUs, system-wide sideband is still needed, fix it in the setup of Intel PT on hybrid systems.
- Fix metricgroups title message in 'perf list', it should state that the metrics groups are to be used with the '-M' option, not '-e'.
- Sync the msr-index.h copy with the kernel sources, adding support for using "AMD64_TSC_RATIO" in filter expressions in 'perf trace' as well as decoding it when printing the MSR tracepoint arguments.
- Fix program header size and alignment when generating a JIT ELF in 'perf inject'.
- Add multiple new Intel PT 'perf test' entries, including a jitdump one.
- Fix the 'perf test' entries for 'perf stat' CSV and JSON output when running on PowerPC due to an invalid topology number in that arch.
- Fix the 'perf test' for arm_coresight failures on the ARM Juno system.
- Fix the 'perf test' attr entry for PERF_FORMAT_LOST, adding this option to the or expression expected in the intercepted perf_event_open() syscall.
- Add missing condition flags ('hs', 'lo', 'vc', 'vs') for arm64 in the 'perf annotate' asm parser.
- Fix 'perf mem record -C' option processing, it was being chopped up when preparing the underlying 'perf record -e mem-events' and thus being ignored, requiring using '-- -C CPUs' as a workaround.
- Improvements and tidy ups for 'perf test' shell infra.
- Fix Intel PT information printing segfault in uClibc, where a NULL format was being passed to fprintf.
* tag 'perf-tools-for-v6.1-2-2022-10-16' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (23 commits)
  tools arch x86: Sync the msr-index.h copy with the kernel sources
  perf auxtrace arm64: Add support for parsing HiSilicon PCIe Trace packet
  perf auxtrace arm64: Add support for HiSilicon PCIe Tune and Trace device driver
  perf auxtrace arm: Refactor event list iteration in auxtrace_record__init()
  perf tests stat+json_output: Include sanity check for topology
  perf tests stat+csv_output: Include sanity check for topology
  perf intel-pt: Fix system_wide dummy event for hybrid
  perf intel-pt: Fix segfault in intel_pt_print_info() with uClibc
  perf test: Fix attr tests for PERF_FORMAT_LOST
  perf test: test_intel_pt.sh: Add 9 tests
  perf inject: Fix GEN_ELF_TEXT_OFFSET for jit
  perf test: test_intel_pt.sh: Add jitdump test
  perf test: test_intel_pt.sh: Tidy some alignment
  perf test: test_intel_pt.sh: Print a message when skipping kernel tracing
  perf test: test_intel_pt.sh: Tidy some perf record options
  perf test: test_intel_pt.sh: Fix return checking again
  perf: Skip and warn on unknown format 'configN' attrs
  perf list: Fix metricgroups title message
  perf mem: Fix -C option behavior for perf mem record
  perf annotate: Add missing condition flags for arm64
  ...
-
Linus Torvalds authored
Merge tag 'kbuild-fixes-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- Fix CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y compile error for the combination of Clang >= 14 and GAS <= 2.35.
- Drop vmlinux.bz2 from the rpm package as it just annoyingly increased the package size.
- Fix modpost error under build environments using musl.
- Make *.ll files keep value names for easier debugging
- Fix single directory build
- Prevent RISC-V from selecting the broken DWARF5 support when Clang and GAS are used together.
* tag 'kbuild-fixes-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  lib/Kconfig.debug: Add check for non-constant .{s,u}leb128 support to DWARF5
  kbuild: fix single directory build
  kbuild: add -fno-discard-value-names to cmd_cc_ll_c
  scripts/clang-tools: Convert clang-tidy args to list
  modpost: put modpost options before argument
  kbuild: Stop including vmlinux.bz2 in the rpm's
  Kconfig.debug: add toolchain checks for DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT
  Kconfig.debug: simplify the dependency of DEBUG_INFO_DWARF4/5
-
Linus Torvalds authored
Merge tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull more clk updates from Stephen Boyd:
"This is the final part of the clk patches for this merge window. The clk rate range series needed another week to fully bake. Maxime fixed the bug that broke clk notifiers and prevented this from being included in the first pull request. He also added a unit test on top to make sure it doesn't break so easily again. The majority of the series fixes up how the clk_set_rate_*() APIs work, particularly around when the rate constraints are dropped and how they move around when reparenting clks. Overall it's a much needed improvement to the clk rate range APIs that used to be pretty broken if you looked sideways.
Beyond the core changes there are a few driver fixes for a compilation issue or improper data causing clks to fail to register or have the wrong parents. These are good to get in before the first -rc so that the system actually boots on the affected devices"
* tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: (31 commits)
  clk: tegra: Fix Tegra PWM parent clock
  clk: at91: fix the build with binutils 2.27
  clk: qcom: gcc-msm8660: Drop hardcoded fixed board clocks
  clk: mediatek: clk-mux: Add .determine_rate() callback
  clk: tests: Add tests for notifiers
  clk: Update req_rate on __clk_recalc_rates()
  clk: tests: Add missing test case for ranges
  clk: qcom: clk-rcg2: Take clock boundaries into consideration for gfx3d
  clk: Introduce the clk_hw_get_rate_range function
  clk: Zero the clk_rate_request structure
  clk: Stop forwarding clk_rate_requests to the parent
  clk: Constify clk_has_parent()
  clk: Introduce clk_core_has_parent()
  clk: Switch from __clk_determine_rate to clk_core_round_rate_nolock
  clk: Add our request boundaries in clk_core_init_rate_req
  clk: Introduce clk_hw_init_rate_request()
  clk: Move clk_core_init_rate_req() from clk_core_round_rate_nolock() to its caller
  clk: Change clk_core_init_rate_req prototype
  clk: Set req_rate on reparenting
  clk: Take into account uncached clocks in clk_set_rate_range()
  ...
-
Linus Torvalds authored
Merge tag '6.1-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6
Pull more cifs updates from Steve French:
- fix a regression in guest mounts to old servers
- improvements to directory leasing (caching directory entries safely beyond the root directory)
- symlink improvement (reducing roundtrips needed to process symlinks)
- an lseek fix (to problem where some dir entries could be skipped)
- improved ioctl for returning more detailed information on directory change notifications
- clarify multichannel interface query warning
- cleanup fix (for better aligning buffers using ALIGN and round_up)
- a compounding fix
- fix some uninitialized variable bugs found by Coverity and the kernel test robot
* tag '6.1-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: improve SMB3 change notification support
  cifs: lease key is uninitialized in two additional functions when smb1
  cifs: lease key is uninitialized in smb1 paths
  smb3: must initialize two ACL struct fields to zero
  cifs: fix double-fault crash during ntlmssp
  cifs: fix static checker warning
  cifs: use ALIGN() and round_up() macros
  cifs: find and use the dentry for cached non-root directories also
  cifs: enable caching of directories for which a lease is held
  cifs: prevent copying past input buffer boundaries
  cifs: fix uninitialised var in smb2_compound_op()
  cifs: improve symlink handling for smb2+
  smb3: clarify multichannel warning
  cifs: fix regression in very old smb1 mounts
  cifs: fix skipping to incorrect offset in emit_cached_dirents
-
Tetsuo Handa authored
This reverts commit 78e5a339 ("cpumask: fix checking valid cpu range"). syzbot is hitting WARN_ON_ONCE(cpu >= nr_cpumask_bits) warning at cpu_max_bits_warn() [1], for commit 78e5a339 ("cpumask: fix checking valid cpu range") is broken. Obviously that patch hits WARN_ON_ONCE() when e.g. reading /proc/cpuinfo because passing "cpu + 1" instead of "cpu" will trivially hit cpu == nr_cpumask_bits condition. Although syzbot found this problem in linux-next.git on 2022/09/27 [2], this problem was not fixed immediately. As a result, that patch was sent to linux.git before the patch author recognizes this problem, and syzbot started failing to test changes in linux.git since 2022/10/10 [3]. Andrew Jones proposed a fix for x86 and riscv architectures [4]. But [2] and [5] indicate that affected locations are not limited to arch code. More delay before we find and fix affected locations, less tested kernel (and more difficult to bisect and fix) before release. We should have inspected and fixed basically all cpumask users before applying that patch. We should not crash kernels in order to ask existing cpumask users to update their code, even if limited to CONFIG_DEBUG_PER_CPU_MAPS=y case. Link: https://syzkaller.appspot.com/bug?extid=d0fd2bf0dd6da72496dd [1] Link: https://syzkaller.appspot.com/bug?extid=21da700f3c9f0bc40150 [2] Link: https://syzkaller.appspot.com/bug?extid=51a652e2d24d53e75734 [3] Link: https://lkml.kernel.org/r/20221014155845.1986223-1-ajones@ventanamicro.com [4] Link: https://syzkaller.appspot.com/bug?extid=4d46c43d81c3bd155060 [5] Reported-by: Andrew Jones <ajones@ventanamicro.com> Reported-by: syzbot+d0fd2bf0dd6da72496dd@syzkaller.appspotmail.com Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Yury Norov <yury.norov@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Nathan Chancellor authored
When building a RISC-V kernel with DWARF5 debug info using clang and the GNU assembler, several instances of the following error appear: /tmp/vgettimeofday-48aa35.s:2963: Error: non-constant .uleb128 is not supported
Dumping the .s file reveals these .uleb128 directives come from .debug_loc and .debug_ranges:

.Ldebug_loc0:
        .byte   4                               # DW_LLE_offset_pair
        .uleb128 .Lfunc_begin0-.Lfunc_begin0    # starting offset
        .uleb128 .Ltmp1-.Lfunc_begin0           # ending offset
        .byte   1                               # Loc expr size
        .byte   90                              # DW_OP_reg10
        .byte   0                               # DW_LLE_end_of_list

.Ldebug_ranges0:
        .byte   4                               # DW_RLE_offset_pair
        .uleb128 .Ltmp6-.Lfunc_begin0           # starting offset
        .uleb128 .Ltmp27-.Lfunc_begin0          # ending offset
        .byte   4                               # DW_RLE_offset_pair
        .uleb128 .Ltmp28-.Lfunc_begin0          # starting offset
        .uleb128 .Ltmp30-.Lfunc_begin0          # ending offset
        .byte   0                               # DW_RLE_end_of_list

There is an outstanding binutils issue to support a non-constant operand to .sleb128 and .uleb128 in GAS for RISC-V, but there does not appear to be any movement on it, due to concerns over how it would work with linker relaxation. To avoid these build errors, prevent DWARF5 from being selected when using clang and an assembler that does not have support for these symbol deltas, which can be easily checked in Kconfig with as-instr plus the small test program from the dwz test suite from the binutils issue. Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27215 Link: https://github.com/ClangBuiltLinux/linux/issues/1719 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
Masahiro Yamada authored
Commit f110e5a2 ("kbuild: refactor single builds of *.ko") was wrong. KBUILD_MODULES _is_ needed for single builds. Otherwise, "make foo/bar/baz/" does not build module objects at all. Fixes: f110e5a2 ("kbuild: refactor single builds of *.ko") Reported-by: David Sterba <dsterba@suse.cz> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Tested-by: David Sterba <dsterba@suse.com>
-
Linus Torvalds authored
Merge tag 'slab-for-6.1-rc1-hotfix' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab hotfix from Vlastimil Babka:
"A single fix for the common-kmalloc series, for warnings on mips and sparc64 reported by Guenter Roeck"
* tag 'slab-for-6.1-rc1-hotfix' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
-
- 15 Oct, 2022 4 commits
-
Linus Torvalds authored
Merge tag 'for-linus' of https://github.com/openrisc/linux
Pull OpenRISC updates from Stafford Horne:
"I have relocated to London so not much work from me while I get settled. Still, OpenRISC picked up two patches in this window:
- Fix for kernel page table walking from Jann Horn
- MAINTAINER entry cleanup from Palmer Dabbelt"
* tag 'for-linus' of https://github.com/openrisc/linux:
  MAINTAINERS: git://github -> https://github.com for openrisc
  openrisc: Fix pagewalk usage in arch_dma_{clear, set}_uncached
-
Linus Torvalds authored
Merge tag 'pci-v6.1-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI fix from Bjorn Helgaas:
"Revert the attempt to distribute spare resources to unconfigured hotplug bridges at boot time. This fixed some dock hot-add scenarios, but Jonathan Cameron reported that it broke a topology with a multi-function device where one function was a Switch Upstream Port and the other was an Endpoint"
* tag 'pci-v6.1-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
  Revert "PCI: Distribute available resources for root buses, too"
-
Hyeonggon Yoo authored
After commit d6a71648 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2) requests to the buddy allocator like SLUB does. SLAB has been using kmalloc caches to allocate the freelist_idx_t array for off-slab caches. But after the commit, freelist_size can be bigger than KMALLOC_MAX_CACHE_SIZE. Instead of using a pointer to a kmalloc cache, use kmalloc_node() and only check if the kmalloc cache is off-slab during calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE, no looping condition happens, as the freelist_idx_t array is allocated directly from buddy. Link: https://lore.kernel.org/all/20221014205818.GA1428667@roeck-us.net/ Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net> Fixes: d6a71648 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator") Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
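A hedged sketch of the allocation after the change (the wrapper name is invented for illustration; kmalloc_node() itself forwards sizes above KMALLOC_MAX_CACHE_SIZE to the page allocator):

        /* illustrative: the off-slab freelist array is plain kmalloc memory */
        static void *alloc_off_slab_freelist(struct kmem_cache *cachep,
                                             gfp_t flags, int nodeid)
        {
                return kmalloc_node(cachep->freelist_size, flags, nodeid);
        }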
-
Palmer Dabbelt authored
MAINTAINERS: git://github -> https://github.com for openrisc
Github deprecated the git:// links about a year ago, so let's move to the https:// URLs instead. Reported-by: Conor Dooley <conor.dooley@microchip.com> Link: https://github.blog/2021-09-01-improving-git-protocol-security-github/ Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by: Stafford Horne <shorne@gmail.com>
-