- 06 Apr, 2023 8 commits
-
Sean Christopherson authored
Now that KVM disallows changing feature MSRs, i.e. PERF_CAPABILITIES, after running a vCPU, WARN and bug the VM if the PMU is refreshed after the vCPU has run.

Note, KVM has disallowed CPUID updates after running a vCPU since commit feb627e8 ("KVM: x86: Forbid KVM_SET_CPUID{,2} after KVM_RUN"), i.e. PERF_CAPABILITIES was the only remaining way to trigger a PMU refresh after KVM_RUN.

Cc: Like Xu <like.xu.linux@gmail.com>
Link: https://lore.kernel.org/r/20230311004618.920745-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
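A minimal sketch of the guard described above, assuming it lands at the top of kvm_pmu_refresh() (kvm_vcpu_has_run() comes from earlier in this series; the exact placement is an assumption):

  /* arch/x86/kvm/pmu.c (sketch) */
  void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
  {
  	/*
  	 * WARN and mark the VM as bugged if a PMU refresh is triggered
  	 * after the vCPU has run; CPUID and feature MSRs are supposed
  	 * to be immutable by this point.
  	 */
  	if (KVM_BUG_ON(kvm_vcpu_has_run(vcpu), vcpu->kvm))
  		return;

  	static_call(kvm_x86_pmu_refresh)(vcpu);
  }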
-
Sean Christopherson authored
Disallow writes to feature MSRs after KVM_RUN to prevent userspace from changing the vCPU model after running the vCPU. Similar to guest CPUID, KVM uses feature MSRs to configure intercepts, determine what operations are/aren't allowed, etc. Changing the capabilities while the vCPU is active will at best yield unpredictable guest behavior, and at worst could be dangerous to KVM.

Allow writing the current value, e.g. so that userspace can blindly set all MSRs when emulating RESET, and unconditionally allow writes to MSR_IA32_UCODE_REV so that userspace can emulate patch loads.

Special case the VMX MSRs to keep the generic list small, i.e. so that KVM can do a linear walk of the generic list without incurring meaningful overhead.

Cc: Like Xu <like.xu.linux@gmail.com>
Cc: Yu Zhang <yu.c.zhang@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20230311004618.920745-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
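A hedged sketch of the check described above (kvm_is_immutable_feature_msr() is named by the series; routing it through do_set_msr() and the do_get_msr() comparison are assumptions):

  /* arch/x86/kvm/x86.c (sketch) */
  static int do_set_msr(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
  {
  	u64 val;

  	/*
  	 * Once the vCPU has run, only allow "writes" of the current
  	 * value to immutable feature MSRs, e.g. so userspace can
  	 * blindly restore all MSRs when emulating RESET.
  	 */
  	if (kvm_vcpu_has_run(vcpu) && kvm_is_immutable_feature_msr(index)) {
  		if (do_get_msr(vcpu, index, &val) || *data != val)
  			return -EINVAL;

  		return 0;
  	}

  	return kvm_set_msr_ignored_check(vcpu, index, *data, false);
  }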
-
Sean Christopherson authored
Split the PERF_CAPABILITIES subtests into two parts so that the LBR format testcases don't execute after KVM_RUN. Similar to the guest CPUID model, KVM will soon disallow changing PERF_CAPABILITIES after KVM_RUN, at which point attempting to set the MSR after KVM_RUN will yield false positives and/or false negatives depending on what the test is trying to do.

Land the LBR format test in a more generic "immutable features" test in anticipation of expanding its scope to other immutable features.

Link: https://lore.kernel.org/r/20230311004618.920745-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
-
Sean Christopherson authored
Add VMX MSRs to the runtime list of feature MSRs by iterating over the range of emulated MSRs instead of manually defining each MSR in the "all" list. Using the range definition reduces the cost of emulating a new VMX MSR, e.g. prevents forgetting to add an MSR to the list.

Extracting the VMX MSRs from the "all" list, which is a compile-time constant, also shrinks the list to the point where the compiler can heavily optimize code that iterates over the list.

No functional change intended.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20230311004618.920745-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
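A sketch of the range-based probing this enables (kvm_probe_feature_msr() as a helper that validates an MSR and appends it to the runtime list is an assumption, as is the list/array naming):

  /* arch/x86/kvm/x86.c (sketch) */
  static void kvm_init_feature_msr_list(void)
  {
  	unsigned int i;
  	u32 msr;

  	num_msr_based_features = 0;

  	/* Statically defined feature MSRs, minus the VMX MSRs. */
  	for (i = 0; i < ARRAY_SIZE(msr_based_features_all_except_vmx); i++)
  		kvm_probe_feature_msr(msr_based_features_all_except_vmx[i]);

  	/*
  	 * Walk the emulated VMX MSRs as a contiguous range so that a
  	 * newly emulated MSR can't be forgotten.
  	 */
  	for (msr = KVM_FIRST_EMULATED_VMX_MSR;
  	     msr <= KVM_LAST_EMULATED_VMX_MSR; msr++)
  		kvm_probe_feature_msr(msr);
  }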
-
Sean Christopherson authored
Add macros to track the range of VMX feature MSRs that are emulated by KVM to reduce the maintenance cost of extending the set of emulated MSRs. Note, KVM doesn't necessarily emulate all known/consumed VMX MSRs, e.g. PROCBASED_CTLS3 is consumed by KVM to enable IPI virtualization, but is not emulated as KVM doesn't emulate/virtualize IPI virtualization for nested guests.

No functional change intended.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20230311004618.920745-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
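The macros are presumably simple range bounds; a sketch consistent with the description (choosing MSR_IA32_VMX_BASIC and MSR_IA32_VMX_VMFUNC as the endpoints is an assumption based on the existing VMX MSR numbering):

  /* arch/x86/kvm/vmx/vmx.h (sketch) */
  #define KVM_FIRST_EMULATED_VMX_MSR	MSR_IA32_VMX_BASIC
  #define KVM_LAST_EMULATED_VMX_MSR	MSR_IA32_VMX_VMFUNC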
-
Sean Christopherson authored
Add a helper to query if a vCPU has run so that KVM doesn't have to open code the check on last_vmentry_cpu being set to a magic value.

No functional change intended.

Suggested-by: Xiaoyao Li <xiaoyao.li@intel.com>
Cc: Like Xu <like.xu.linux@gmail.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20230311004618.920745-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
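The helper is likely a one-line wrapper over the existing sentinel; a sketch (KVM initializes last_vmentry_cpu to -1 and records the CPU on first VM-Enter):

  /* arch/x86/kvm/x86.h (sketch) */
  static inline bool kvm_vcpu_has_run(struct kvm_vcpu *vcpu)
  {
  	/* -1 is the "never entered the guest" sentinel. */
  	return vcpu->arch.last_vmentry_cpu != -1;
  }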
-
Sean Christopherson authored
Rename kvm_init_msr_list() to kvm_init_msr_lists() to clarify that it initializes multiple lists: MSRs to save, emulated MSRs, and feature MSRs.

No functional change intended.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20230311004618.920745-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
-
Sean Christopherson authored
Disallow enabling LBR support if the CPU supports architectural LBRs. Traditional LBR support is absent on CPU models that have architectural LBRs, and KVM doesn't yet support arch LBRs, i.e. KVM will pass through non-existent MSRs if userspace enables LBRs for the guest.

Cc: stable@vger.kernel.org
Cc: Yang Weijiang <weijiang.yang@intel.com>
Cc: Like Xu <like.xu.linux@gmail.com>
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Fixes: be635e34 ("KVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES")
Tested-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20230128001427.2548858-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
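A hedged sketch of the guard: skip the LBR-format bits entirely when the CPU enumerates arch LBRs (the function and variable names here are assumptions):

  /* arch/x86/kvm/vmx (sketch, names assumed) */
  static u64 vmx_get_perf_capabilities(void)
  {
  	u64 perf_cap = PMU_CAP_FW_WRITES;
  	u64 host_perf_cap = 0;
  	struct x86_pmu_lbr lbr;

  	if (boot_cpu_has(X86_FEATURE_PDCM))
  		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);

  	/*
  	 * Legacy LBR MSRs don't exist on CPUs with architectural LBRs,
  	 * and KVM doesn't virtualize arch LBRs, so don't advertise any
  	 * LBR format in that case.
  	 */
  	if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) {
  		x86_perf_get_lbr(&lbr);
  		if (lbr.nr)
  			perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
  	}

  	return perf_cap;
  }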
-
- 05 Apr, 2023 1 commit
-
Like Xu authored
kvm_pmu_refresh() may be called repeatedly (e.g. when userspace configures guest CPUID multiple times or updates MSR_IA32_PERF_CAPABILITIES), and each call starts from the last pmu->all_valid_pmc_idx value, with the residual bits introducing additional overhead later in the vPMU emulation.

Fixes: b35e5548 ("KVM: x86/vPMU: Add lazy mechanism to release perf_event per vPMC")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20230404071759.75376-1-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
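The fix implied here is to start each refresh from a clean bitmap; a sketch (whether the zeroing lives in common code or the vendor refresh hook is an assumption):

  /* arch/x86/kvm/pmu.c (sketch) */
  void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
  {
  	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

  	/*
  	 * Zero the bitmap so bits valid only under a previous guest
  	 * CPUID/PERF_CAPABILITIES configuration don't linger.
  	 */
  	bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);

  	static_call(kvm_x86_pmu_refresh)(vcpu);
  }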
-
- 23 Mar, 2023 1 commit
-
Mathias Krause authored
Move the 'version' member to the beginning of the structure to reuse an existing hole instead of introducing another one. This allows us to save 8 bytes for 64-bit builds.

Signed-off-by: Mathias Krause <minipli@grsecurity.net>
Link: https://lore.kernel.org/r/20230217193336.15278-2-minipli@grsecurity.net
Signed-off-by: Sean Christopherson <seanjc@google.com>
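The saving follows from C struct padding rules; a generic illustration of the principle (not the actual KVM struct layout):

  #include <stdint.h>

  /* x86-64: 4-byte hole after 'x', 7 bytes of tail padding. */
  struct before {
  	uint32_t x;	/* 4 bytes + 4-byte hole */
  	void *p;	/* 8 bytes */
  	uint8_t version;/* 1 byte + 7 bytes padding */
  };			/* sizeof == 24 */

  /* Moving 'version' into the existing hole drops 8 bytes. */
  struct after {
  	uint32_t x;	/* 4 bytes */
  	uint8_t version;/* 1 byte + 3 bytes padding */
  	void *p;	/* 8 bytes */
  };			/* sizeof == 16 */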
-
- 16 Mar, 2023 9 commits
-
Thomas Huth authored
All kvm_arch_vm_ioctl() implementations now only deal with "int" types as return values, so we can change the return type of these functions to use "int" instead of "long".

Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Anup Patel <anup@brainfault.org>
Message-Id: <20230208140105.655814-7-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Thomas Huth authored
KVM functions use "long" return values for functions that are wired up to "struct file_operations", but otherwise use "int" return values for functions that can return 0/-errno in order to avoid unintentional divergences between 32-bit and 64-bit kernels. Some code still uses "long" in unnecessary spots, though, which can cause a little bit of confusion and unnecessary size casts. Let's change these spots to use "int" types, too.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230208140105.655814-6-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Thomas Huth authored
In case of success, this function returns the number of bytes handled. However, this does not work for large values: the function is called from kvm_arch_vm_ioctl() (which still returns a long), which in turn is called from kvm_vm_ioctl() in virt/kvm/kvm_main.c, and that function stores the return value in an "int r" variable, so the upper 32 bits of the "long" return value are lost there.

KVM ioctl functions should only return "int" values, so let's limit the number of bytes that can be requested here to INT_MAX to avoid the problem with the truncated return value. We can then also change the return type of the function to "int" to make it clearer that it is not possible to return a "long" here.

Fixes: f0376edb ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Message-Id: <20230208140105.655814-5-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
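A sketch of the clamp (the struct and field names follow the KVM_ARM_COPY_MTE_TAGS uapi; the rest of the function body is reduced to a comment):

  /* arch/arm64/kvm/guest.c (sketch) */
  int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
  			       struct kvm_arm_copy_mte_tags *copy_tags)
  {
  	unsigned long length = copy_tags->length;

  	/*
  	 * The "bytes handled" return value is eventually stored in an
  	 * "int r" in kvm_vm_ioctl(), so larger requests would have
  	 * their result truncated; refuse them outright.
  	 */
  	if (length > INT_MAX)
  		return -EINVAL;

  	/* ... validate the IPA range and copy the MTE tags as before ... */

  	return length;
  }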
-
Thomas Huth authored
The KVM_GET_NR_MMU_PAGES ioctl is quite questionable on 64-bit hosts since it fails to return the full 64 bits of the value that can be set with the corresponding KVM_SET_NR_MMU_PAGES call: its "long" return value is truncated into an "int" in the kvm_arch_vm_ioctl() function. Since this ioctl also has never been used by userspace applications (QEMU, Google's internal VMM, kvmtool and CrosVM have been checked), it's likely best to remove this badly designed ioctl before anybody really tries to use it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230208140105.655814-4-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Thomas Huth authored
These two functions only return normal integers, so it does not make sense to declare the return type as "long" here.

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230208140105.655814-3-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Thomas Huth authored
Most functions that are related to kvm_arch_vm_ioctl() already use "int" as return type to pass error values back to the caller. Some outlier functions use "long" instead for no good reason (they do not really require long values here). Let's standardize on "int" here to avoid casting the values back and forth between the two types.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230208140105.655814-2-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Emanuele Giuseppe Esposito authored
FLUSH_L1D was already added in 11e34e64, but the feature is not yet visible to userspace. The bit definition is CPUID.(EAX=7,ECX=0):EDX[bit 28]. If the feature is supported by the host, KVM should support it too so that userspace can choose whether to expose it to the guest or not. One disadvantage of not exposing it is that the guest will report a nonexistent vulnerability in /sys/devices/system/cpu/vulnerabilities/mmio_stale_data, because the mitigation is present only if the guest supports (FLUSH_L1D and MD_CLEAR) or FB_CLEAR.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201132905.549148-4-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
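Advertising a CPUID bit like this is typically a one-word addition to the relevant capability mask; a sketch (the set of neighboring flags shown is abbreviated and assumed):

  /* arch/x86/kvm/cpuid.c (sketch, neighboring flags abbreviated) */
  kvm_cpu_cap_mask(CPUID_7_EDX,
  	F(SPEC_CTRL) | F(ARCH_CAPABILITIES) | F(MD_CLEAR) |
  	F(SPEC_CTRL_SSBD) | F(SERIALIZE) | F(FLUSH_L1D)
  );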
-
Emanuele Giuseppe Esposito authored
Expose IA32_FLUSH_CMD to the guest if the guest CPUID enumerates support for this MSR. As with IA32_PRED_CMD, permission for unintercepted writes to this MSR will be granted to the guest after the first non-zero write.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201132905.549148-3-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Emanuele Giuseppe Esposito authored
Expose IA32_FLUSH_CMD to the guest if the guest CPUID enumerates support for this MSR. As with IA32_PRED_CMD, permission for unintercepted writes to this MSR will be granted to the guest after the first non-zero write.

Co-developed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230201132905.549148-2-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
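A hedged sketch of the MSR-write handling pattern described, modeled on KVM's existing IA32_PRED_CMD handling (this is a fragment of a vendor set_msr switch; the helper names are real VMX ones but their use here is an assumption):

  	/* arch/x86/kvm/vmx/vmx.c (sketch) */
  	case MSR_IA32_FLUSH_CMD:
  		if (!msr_info->host_initiated &&
  		    !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
  			return 1;

  		if (data & ~L1D_FLUSH)	/* only bit 0 is defined */
  			return 1;
  		if (!data)
  			break;

  		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);

  		/*
  		 * Drop the write intercept after the first non-zero
  		 * write; the guest has proven it uses the MSR.
  		 */
  		vmx_disable_intercept_for_msr(vcpu, MSR_IA32_FLUSH_CMD,
  					      MSR_TYPE_W);
  		break;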
-
- 14 Mar, 2023 21 commits
-
Sean Christopherson authored
Rename enable_evmcs to __kvm_is_using_evmcs to match its wrapper, and to avoid confusion with enabling eVMCS for nested virtualization, i.e. have "enable eVMCS" be reserved for "enable eVMCS support for L1".

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230211003534.564198-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Wrap enable_evmcs in a helper and stub it out when CONFIG_HYPERV=n in order to eliminate the static branch nop placeholders. clang-14 is clever enough to elide the nop, but gcc-12 is not. Stubbing out the key reduces the size of kvm-intel.ko by ~7.5% (200KiB) when compiled with gcc-12 (there are a _lot_ of VMCS accesses throughout KVM).

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230211003534.564198-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
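A sketch of the helper-plus-stub pattern (the names match the rename in the adjacent patch; exact header placement is an assumption):

  /* arch/x86/kvm/vmx (sketch) */
  #if IS_ENABLED(CONFIG_HYPERV)
  DECLARE_STATIC_KEY_FALSE(__kvm_is_using_evmcs);

  static __always_inline bool kvm_is_using_evmcs(void)
  {
  	return static_branch_unlikely(&__kvm_is_using_evmcs);
  }
  #else
  /* Constant false lets the compiler drop the static-branch NOPs. */
  static __always_inline bool kvm_is_using_evmcs(void)
  {
  	return false;
  }
  #endif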
-
Sean Christopherson authored
Move the macros that define the set of VMCS controls that are supported by eVMCS1 from hyperv.h to hyperv.c, i.e. make them "private". The macros should never be consumed directly by KVM at-large since the "final" set of supported controls depends on guest CPUID.

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230211003534.564198-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Lai Jiangshan authored
Drop FNAME(is_self_change_mapping) and instead rely on kvm_mmu_hugepage_adjust() to adjust the hugepage accordingly. Prior to commit 4cd071d1 ("KVM: x86/mmu: Move calls to thp_adjust() down a level"), the hugepage adjustment was done before allocating new shadow pages, i.e. failed to restrict the hugepage sizes if a new shadow page resulted in account_shadowed() changing the disallowed hugepage tracking.

Removing FNAME(is_self_change_mapping) fixes a bug reported by Huang Hang where KVM unnecessarily forces a 4KiB page. FNAME(is_self_change_mapping) has a defect in that it blindly disables _all_ hugepage mappings rather than trying to reduce the size of the hugepage. If the guest is writing to a 1GiB page and the 1GiB is self-referential but a 2MiB page is not, then KVM can and should create a 2MiB mapping.

Add a comment above the call to kvm_mmu_hugepage_adjust() to call out the new dependency on adjusting the hugepage size after walking indirect PTEs.

Reported-by: Huang Hang <hhuang@linux.alibaba.com>
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://lore.kernel.org/r/20221213125538.81209-1-jiangshanlai@gmail.com
[sean: rework changelog after separating out the emulator change]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230202182817.407394-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Lai Jiangshan authored
Move the detection of write #PF to shadow pages, i.e. a fault on a write to a page table that is being shadowed by KVM that is used to translate the write itself, from FNAME(is_self_change_mapping) to FNAME(fetch). There is no need to detect the self-referential write before kvm_faultin_pfn() as KVM does not consume EMULTYPE_WRITE_PF_TO_SP for accesses that resolve to "error or no-slot" pfns, i.e. KVM doesn't allow retrying MMIO accesses or writes to read-only memslots.

Detecting the EMULTYPE_WRITE_PF_TO_SP scenario in FNAME(fetch) will allow dropping FNAME(is_self_change_mapping) entirely, as the hugepage interaction can be deferred to kvm_mmu_hugepage_adjust().

Cc: Huang Hang <hhuang@linux.alibaba.com>
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://lore.kernel.org/r/20221213125538.81209-1-jiangshanlai@gmail.com
[sean: split to separate patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230202182817.407394-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use a new EMULTYPE flag, EMULTYPE_WRITE_PF_TO_SP, to track page faults on self-changing writes to shadowed page tables instead of propagating that information to the emulator via a semi-persistent vCPU flag. Using a flag in "struct kvm_vcpu_arch" is confusing, especially as implemented, as it's not at all obvious that clearing the flag only when emulation actually occurs is correct. E.g. if KVM sets the flag and then retries the fault without ever getting to the emulator, the flag will be left set for future calls into the emulator.

But because the flag is consumed if and only if both EMULTYPE_PF and EMULTYPE_ALLOW_RETRY_PF are set, and because EMULTYPE_ALLOW_RETRY_PF is deliberately not set for direct MMUs, emulated MMIO, or while L2 is active, KVM avoids false positives on a stale flag since FNAME(page_fault) is guaranteed to be run and refresh the flag before it's ultimately consumed by the tail end of reexecute_instruction().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230202182817.407394-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
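For context, EMULTYPE_* values are bit flags in kvm_host.h; a sketch with the new flag appended (the neighboring flags are the in-tree ones, the exact bit position of the new flag is an assumption):

  /* arch/x86/include/asm/kvm_host.h (sketch) */
  #define EMULTYPE_NO_DECODE	    (1 << 0)
  #define EMULTYPE_TRAP_UD	    (1 << 1)
  #define EMULTYPE_SKIP		    (1 << 2)
  #define EMULTYPE_ALLOW_RETRY_PF    (1 << 3)
  #define EMULTYPE_TRAP_UD_FORCED    (1 << 4)
  #define EMULTYPE_VMWARE_GP	    (1 << 5)
  #define EMULTYPE_PF		    (1 << 6)
  #define EMULTYPE_COMPLETE_USER_EXIT (1 << 7)
  #define EMULTYPE_WRITE_PF_TO_SP    (1 << 8)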
-
Vipin Sharma authored
Add missing KVM_EXIT_* reasons in KVM selftests from include/uapi/linux/kvm.h.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Message-Id: <20230204014547.583711-5-vipinsh@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add and use a macro to generate the KVM exit reason strings array instead of relying on developers to correctly copy+paste+edit each string.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230204014547.583711-4-vipinsh@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
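The macro presumably pairs each KVM_EXIT_* value with its stringified name via token pasting; a sketch with a representative subset of entries:

  /* tools/testing/selftests/kvm/lib/kvm_util.c (sketch) */
  #define KVM_EXIT_STRING(x) { KVM_EXIT_##x, #x }

  static const struct {
  	unsigned int reason;
  	const char *name;
  } exit_reasons_known[] = {
  	KVM_EXIT_STRING(UNKNOWN),
  	KVM_EXIT_STRING(IO),
  	KVM_EXIT_STRING(MMIO),
  	KVM_EXIT_STRING(SHUTDOWN),
  	KVM_EXIT_STRING(INTERNAL_ERROR),
  };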
-
Vipin Sharma authored
Print what KVM exit reason a test was expecting and what it actually got in TEST_ASSERT_KVM_EXIT_REASON().

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Message-Id: <20230204014547.583711-3-vipinsh@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Vipin Sharma authored
Add a TEST_ASSERT_KVM_EXIT_REASON() macro and replace all exit reason test assert statements with it. No functional changes intended.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Message-Id: <20230204014547.583711-2-vipinsh@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
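A sketch of the macro with the expected-vs-actual message from the follow-up patch folded in (exit_reason_str() is the existing selftests helper; the exact message formatting is an assumption):

  /* tools/testing/selftests/kvm (sketch) */
  #define TEST_ASSERT_KVM_EXIT_REASON(vcpu, expected)			\
  do {									\
  	__u32 exit_reason = (vcpu)->run->exit_reason;			\
  									\
  	TEST_ASSERT(exit_reason == (expected),				\
  		    "Wanted KVM exit reason: %u (%s), got: %u (%s)",	\
  		    (expected), exit_reason_str(expected),		\
  		    exit_reason, exit_reason_str(exit_reason));		\
  } while (0)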
-
David Woodhouse authored
When kvm_xen_evtchn_send() takes the slow path because the shinfo GPC needs to be revalidated, it used to violate the SRCU vs. kvm->lock locking rules and potentially cause a deadlock. Now that lockdep is learning to catch such things, make sure that code path is exercised by the selftest.

Link: https://lore.kernel.org/all/20230113124606.10221-2-dwmw2@infradead.org
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230204024151.1373296-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
David Woodhouse authored
The xen_shinfo_test started off with very few iterations, and the numbers we used in GUEST_SYNC() were precisely mapped to the RUNSTATE_xxx values anyway to start with. It has since grown quite a few more tests, and it's kind of awful to be handling them all as bare numbers. Especially when I want to add a new test in the middle. Define an enum for the test stages, and use it both in the guest code and the host switch statement.

No functional change, if I can count to 24.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230204024151.1373296-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
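A sketch of the approach with hypothetical stage names (the real enum constants aren't shown in this log; the point is that guest GUEST_SYNC() calls and the host switch share one set of names):

  /* xen_shinfo_test.c (sketch, hypothetical names) */
  enum test_stage {
  	/* The first stages deliberately keep the RUNSTATE_* values. */
  	TST_RUNSTATE_RUNNING = 0,
  	TST_RUNSTATE_RUNNABLE,
  	TST_RUNSTATE_BLOCKED,
  	TST_RUNSTATE_OFFLINE,
  	TST_EVTCHN_SEND,	/* new tests get names, not bare numbers */
  	TST_TIMER_SETUP,
  };

  /* Guest: GUEST_SYNC(TST_EVTCHN_SEND);  Host: case TST_EVTCHN_SEND: */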
-
Sean Christopherson authored
Add wrappers to do hypercalls using VMCALL/VMMCALL and Xen's register ABI (as opposed to full Xen-style hypercalls through a hypervisor provided page). Using the common helpers dedups a pile of code, and uses the native hypercall instruction when running on AMD.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230204024151.1373296-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Extract the guts of kvm_hypercall() to a macro so that Xen hypercalls, which have a different register ABI, can reuse the VMCALL vs. VMMCALL logic.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230204024151.1373296-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
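A hedged sketch covering this patch and the Xen wrappers in the patch above: one macro holds the VMCALL-vs-VMMCALL selection, and thin wrappers supply each ABI's register constraints (host_cpu_is_amd is the existing selftests global; the remaining names and constraints are assumptions):

  /* tools/testing/selftests/kvm/lib/x86_64/processor.c (sketch) */
  #define X86_HYPERCALL(inputs...)					\
  ({									\
  	uint64_t r;							\
  									\
  	asm volatile("test %[use_vmmcall], %[use_vmmcall]\n\t"		\
  		     "jnz 1f\n\t"					\
  		     "vmcall\n\t"					\
  		     "jmp 2f\n\t"					\
  		     "1: vmmcall\n\t"					\
  		     "2:"						\
  		     : "=a"(r)						\
  		     : inputs, [use_vmmcall] "r" (host_cpu_is_amd));	\
  	r;								\
  })

  /* KVM ABI: nr/args in rax, rbx, rcx, rdx, rsi. */
  uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1,
  		       uint64_t a2, uint64_t a3)
  {
  	return X86_HYPERCALL("a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
  }

  /* Xen register ABI: nr in rax, args in rdi/rsi. */
  uint64_t xen_hypercall(uint64_t nr, uint64_t a0, void *a1)
  {
  	return X86_HYPERCALL("a"(nr), "D"(a0), "S"(a1));
  }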
-
Sean Christopherson authored
WARN if generating a GATag given a VM ID and vCPU ID doesn't yield the same IDs when pulling the IDs back out of the tag. Don't bother adding error handling to callers, this is very much a paranoid sanity check as KVM fully controls the VM ID and is supposed to reject too-big vCPU IDs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-Id: <20230207002156.521736-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
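A sketch of the self-checking macro (the __AVIC_GATAG/AVIC_GATAG_TO_* names follow the masks discussed in the adjacent patches; treating the WARNs as side checks inside the builder is an assumption):

  /* arch/x86/kvm/svm/avic.c (sketch) */
  #define AVIC_GATAG(vm_id, vcpu_id)					\
  ({									\
  	u32 ga_tag = __AVIC_GATAG(vm_id, vcpu_id);			\
  									\
  	WARN_ON_ONCE(AVIC_GATAG_TO_VCPUID(ga_tag) != (vcpu_id));	\
  	WARN_ON_ONCE(AVIC_GATAG_TO_VMID(ga_tag) != (vm_id));		\
  	ga_tag;								\
  })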
-
Suravee Suthikulpanit authored
Define AVIC_VCPU_ID_MASK based on AVIC_PHYSICAL_MAX_INDEX, i.e. the mask that effectively controls the largest guest physical APIC ID supported by x2AVIC, instead of hardcoding the number of bits to 8 (and the number of VM bits to 24).

The AVIC GATag is programmed into the AMD IOMMU IRTE to provide a reference back to KVM in case the IOMMU cannot inject an interrupt into a non-running vCPU. In such a case, the IOMMU notifies software by creating a GALog entry with the corresponding GATag, and KVM then uses the GATag to find the correct VM+vCPU to kick. Dropping bit 8 from the GATag results in kicking the wrong vCPU when targeting vCPUs with x2APIC ID > 255.

Fixes: 4d1d7942 ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode")
Cc: stable@vger.kernel.org
Reported-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-Id: <20230207002156.521736-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
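A sketch of deriving the field widths from the physical-max-index mask instead of hardcoding 8/24 (HWEIGHT32() and GENMASK() are in-tree helpers; the exact expressions are assumptions):

  /* arch/x86/kvm/svm/avic.c (sketch) */
  #define AVIC_VCPU_ID_MASK	AVIC_PHYSICAL_MAX_INDEX_MASK

  #define AVIC_VM_ID_SHIFT	HWEIGHT32(AVIC_PHYSICAL_MAX_INDEX_MASK)
  #define AVIC_VM_ID_MASK		(GENMASK(31, AVIC_VM_ID_SHIFT) >> AVIC_VM_ID_SHIFT)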
-
Sean Christopherson authored
Define the "physical table max index mask" as bits 8:0, not 9:0. x2AVIC currently supports a max of 512 entries, i.e. the max index is 511, and the inputs to GENMASK_ULL() are inclusive. The bug is benign as bit 9 is reserved and never set by KVM, i.e. KVM is just clearing bits that are guaranteed to be zero. Note, as of this writing, APM "Rev. 3.39-October 2022" incorrectly states that bits 11:8 are reserved in Table B-1. VMCB Layout, Control Area. I.e. that table wasn't updated when x2AVIC support was added. Opportunistically fix the comment for the max AVIC ID to align with the code, and clean up comment formatting too. Fixes: 4d1d7942 ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode") Cc: stable@vger.kernel.org Cc: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by:
Sean Christopherson <seanjc@google.com> Reviewed-by:
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Tested-by:
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Message-Id: <20230207002156.521736-2-seanjc@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
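The fix itself is presumably a one-character mask change; a sketch:

  /* arch/x86/include/asm/svm.h (sketch) */
  /* 512 entries max => valid indices are 0..511, i.e. bits 8:0. */
  #define AVIC_PHYSICAL_MAX_INDEX_MASK	GENMASK_ULL(8, 0)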
-
Paolo Bonzini authored
Right now, if KVM memory stress tests are run with hugetlb sources but hugetlb is not available (either in the kernel or because /proc/sys/vm/nr_hugepages is 0) the test will fail with a memory allocation error. This makes it impossible to add tests that default to hugetlb-backed memory, because on a machine with a default configuration they will fail. Therefore, check HugePages_Total as well and, if zero, direct the user to enable hugepages in procfs. Furthermore, return KSFT_SKIP whenever hugetlb is not available.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
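A sketch of the check with a hypothetical helper that parses HugePages_Total from /proc/meminfo and skips rather than fails (TEST_ASSERT, print_skip and KSFT_SKIP are existing selftests facilities; the helper name and exact message are assumptions):

  /* tools/testing/selftests/kvm (sketch, hypothetical helper) */
  #include <stdio.h>
  #include <stdlib.h>

  #include "test_util.h"
  #include "kselftest.h"

  static void require_hugepages(void)
  {
  	unsigned long total = 0;
  	char line[128];
  	FILE *f = fopen("/proc/meminfo", "r");

  	TEST_ASSERT(f, "Failed to open /proc/meminfo");
  	while (fgets(line, sizeof(line), f)) {
  		if (sscanf(line, "HugePages_Total: %lu", &total) == 1)
  			break;
  	}
  	fclose(f);

  	if (!total) {
  		print_skip("Hugepages not available; try 'echo N > /proc/sys/vm/nr_hugepages'");
  		exit(KSFT_SKIP);
  	}
  }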
-
Rong Tao authored
Code indentation should use tabs where possible; also add a missing '*'.

Signed-off-by: Rong Tao <rongtao@cestc.cn>
Message-Id: <tencent_A492CB3F9592578451154442830EA1B02C07@qq.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Rong Tao authored
Code indentation should use tabs where possible.

Signed-off-by: Rong Tao <rongtao@cestc.cn>
Message-Id: <tencent_31E6ACADCB6915E157CF5113C41803212107@qq.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
nested_vmx_check_controls() has already run by the time KVM checks host state, so the "host address space size" exit control can only be set on x86-64 hosts. Simplify the condition at the cost of adding some dead code to 32-bit kernels.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
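A sketch of the simplification (variable name assumed; a one-line fragment of the host-state check): derive "host is 64-bit" directly from the exit control, with no CONFIG_X86_64 guard, since nested_vmx_check_controls() has already rejected the control on 32-bit hosts.

  /* arch/x86/kvm/vmx/nested.c (sketch) */
  	ia32e = !!(vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE);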
-