1. 05 Dec, 2022 13 commits
    • Merge branch kvm-arm64/misc-6.2 into kvmarm-master/next · 86f27d84
      Marc Zyngier authored
      * kvm-arm64/misc-6.2:
        : .
        : Misc fixes for 6.2:
        :
        : - Fix formatting for the pvtime documentation
        :
        : - Fix a comment in the VHE-specific Makefile
        : .
        KVM: arm64: Fix typo in comment
        KVM: arm64: Fix pvtime documentation
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/pmu-unchained into kvmarm-master/next · 118bc846
      Marc Zyngier authored
      * kvm-arm64/pmu-unchained:
        : .
        : PMUv3 fixes and improvements:
        :
        : - Make the CHAIN event handling strictly follow the architecture
        :
        : - Add support for PMUv3p5 (64bit counters all the way)
        :
        : - Various fixes and cleanups
        : .
        KVM: arm64: PMU: Fix period computation for 64bit counters with 32bit overflow
        KVM: arm64: PMU: Sanitise PMCR_EL0.LP on first vcpu run
        KVM: arm64: PMU: Simplify PMCR_EL0 reset handling
        KVM: arm64: PMU: Replace version number '0' with ID_AA64DFR0_EL1_PMUVer_NI
        KVM: arm64: PMU: Make kvm_pmc the main data structure
        KVM: arm64: PMU: Simplify vcpu computation on perf overflow notification
        KVM: arm64: PMU: Allow PMUv3p5 to be exposed to the guest
        KVM: arm64: PMU: Implement PMUv3p5 long counter support
        KVM: arm64: PMU: Allow ID_DFR0_EL1.PerfMon to be set from userspace
        KVM: arm64: PMU: Allow ID_AA64DFR0_EL1.PMUver to be set from userspace
        KVM: arm64: PMU: Move the ID_AA64DFR0_EL1.PMUver limit to VM creation
        KVM: arm64: PMU: Do not let AArch32 change the counters' top 32 bits
        KVM: arm64: PMU: Simplify setting a counter to a specific value
        KVM: arm64: PMU: Add counter_index_to_*reg() helpers
        KVM: arm64: PMU: Only narrow counters that are not 64bit wide
        KVM: arm64: PMU: Narrow the overflow checking when required
        KVM: arm64: PMU: Distinguish between 64bit counter and 64bit overflow
        KVM: arm64: PMU: Always advertise the CHAIN event
        KVM: arm64: PMU: Align chained counter implementation with architecture pseudocode
        arm64: Add ID_DFR0_EL1.PerfMon values for PMUv3p7 and IMP_DEF
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/mte-map-shared into kvmarm-master/next · 382b5b87
      Marc Zyngier authored
      * kvm-arm64/mte-map-shared:
        : .
        : Update the MTE support to allow the VMM to use shared mappings
        : to back the memslots exposed to MTE-enabled guests.
        :
        : This fixes a number of issues with MTE, such as races between
        : the tags being initialised and the PG_mte_tagged flag being set,
        : as well as the lack of support for VM_SHARED when KVM is
        : involved.
        :
        : Patches courtesy of Catalin Marinas and Peter Collingbourne.
        :
        : (The tag-initialisation locking pattern is sketched after this
        : entry.)
        : .
        Documentation: document the ABI changes for KVM_CAP_ARM_MTE
        KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
        KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
        arm64: mte: Lock a page for MTE tag initialisation
        mm: Add PG_arch_3 page flag
        KVM: arm64: Simplify the sanitise_mte_tags() logic
        arm64: mte: Fix/clarify the PG_mte_tagged semantics
        mm: Do not enable PG_arch_2 for all 64-bit architectures
      Signed-off-by: Marc Zyngier <maz@kernel.org>
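      A minimal sketch of the locking pattern mentioned above; the
      helpers are the ones this branch introduces, but treat the exact
      flow as illustrative:

        static void sanitise_tags_sketch(struct page *page)
        {
                /*
                 * try_page_mte_tagging() uses a separate lock flag
                 * (backed by the new PG_arch_3 bit) so that only one
                 * racing initialiser clears the tags; PG_mte_tagged is
                 * only published once the tags are actually valid.
                 */
                if (try_page_mte_tagging(page)) {
                        mte_clear_page_tags(page_address(page));
                        set_page_mte_tagged(page);
                }
        }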
    • Merge branch kvm-arm64/pkvm-vcpu-state into kvmarm-master/next · cfa72993
      Marc Zyngier authored
      * kvm-arm64/pkvm-vcpu-state: (25 commits)
        : .
        : Large drop of pKVM patches from Will Deacon and co, adding
        : a private vm/vcpu state at EL2, managed independently from
        : the EL1 state. From the cover letter:
        :
        : "This is version six of the pKVM EL2 state series, extending the pKVM
        : hypervisor code so that it can dynamically instantiate and manage VM
        : data structures without the host being able to access them directly.
        : These structures consist of a hyp VM, a set of hyp vCPUs and the stage-2
        : page-table for the MMU. The pages used to hold the hypervisor structures
        : are returned to the host when the VM is destroyed."
        :
        : (A simplified sketch of the teardown memcache idea follows this
        : entry.)
        : .
        KVM: arm64: Use the pKVM hyp vCPU structure in handle___kvm_vcpu_run()
        KVM: arm64: Don't unnecessarily map host kernel sections at EL2
        KVM: arm64: Explicitly map 'kvm_vgic_global_state' at EL2
        KVM: arm64: Maintain a copy of 'kvm_arm_vmid_bits' at EL2
        KVM: arm64: Unmap 'kvm_arm_hyp_percpu_base' from the host
        KVM: arm64: Return guest memory from EL2 via dedicated teardown memcache
        KVM: arm64: Instantiate guest stage-2 page-tables at EL2
        KVM: arm64: Consolidate stage-2 initialisation into a single function
        KVM: arm64: Add generic hyp_memcache helpers
        KVM: arm64: Provide I-cache invalidation by virtual address at EL2
        KVM: arm64: Initialise hypervisor copies of host symbols unconditionally
        KVM: arm64: Add per-cpu fixmap infrastructure at EL2
        KVM: arm64: Instantiate pKVM hypervisor VM and vCPU structures from EL1
        KVM: arm64: Add infrastructure to create and track pKVM instances at EL2
        KVM: arm64: Rename 'host_kvm' to 'host_mmu'
        KVM: arm64: Add hyp_spinlock_t static initializer
        KVM: arm64: Include asm/kvm_mmu.h in nvhe/mem_protect.h
        KVM: arm64: Add helpers to pin memory shared with the hypervisor at EL2
        KVM: arm64: Prevent the donation of no-map pages
        KVM: arm64: Implement do_donate() helper for donating memory
        ...
      Signed-off-by: Marc Zyngier <maz@kernel.org>
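      A simplified sketch of the teardown memcache idea described above;
      the names are illustrative (the series itself uses kvm_hyp_memcache
      and friends), not verbatim kernel code:

        struct hyp_memcache_sketch {
                phys_addr_t   head;             /* most recently donated page */
                unsigned long nr_pages;         /* pages awaiting return */
        };

        /*
         * Thread a donated page onto the list: each free page stores the
         * physical address of the next one, so no extra memory is needed
         * to track what must be handed back at VM destruction.
         */
        static void memcache_push(struct hyp_memcache_sketch *mc,
                                  phys_addr_t page,
                                  void *(*to_va)(phys_addr_t))
        {
                *(phys_addr_t *)to_va(page) = mc->head;
                mc->head = page;
                mc->nr_pages++;
        }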
    • Merge branch kvm-arm64/parallel-faults into kvmarm-master/next · fe8e3f44
      Marc Zyngier authored
      * kvm-arm64/parallel-faults:
        : .
        : Parallel stage-2 fault handling, courtesy of Oliver Upton.
        : From the cover letter:
        :
        : "Presently KVM only takes a read lock for stage 2 faults if it believes
        : the fault can be fixed by relaxing permissions on a PTE (write unprotect
        : for dirty logging). Otherwise, stage 2 faults grab the write lock, which
        : predictably can pile up all the vCPUs in a sufficiently large VM.
        :
        : Like the TDP MMU for x86, this series loosens the locking around
        : manipulations of the stage 2 page tables to allow parallel faults. RCU
        : and atomics are exploited to safely build/destroy the stage 2 page
        : tables in light of multiple software observers."
        :
        : (The atomic update at the core of the series is sketched after
        : this entry.)
        : .
        KVM: arm64: Reject shared table walks in the hyp code
        KVM: arm64: Don't acquire RCU read lock for exclusive table walks
        KVM: arm64: Take a pointer to walker data in kvm_dereference_pteref()
        KVM: arm64: Handle stage-2 faults in parallel
        KVM: arm64: Make table->block changes parallel-aware
        KVM: arm64: Make leaf->leaf PTE changes parallel-aware
        KVM: arm64: Make block->table PTE changes parallel-aware
        KVM: arm64: Split init and set for table PTE
        KVM: arm64: Atomically update stage 2 leaf attributes in parallel walks
        KVM: arm64: Protect stage-2 traversal with RCU
        KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make
        KVM: arm64: Use an opaque type for pteps
        KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees
        KVM: arm64: Don't pass kvm_pgtable through kvm_pgtable_walk_data
        KVM: arm64: Pass mm_ops through the visitor context
        KVM: arm64: Stash observed pte value in visitor context
        KVM: arm64: Combine visitor arguments into a context structure
      Signed-off-by: Marc Zyngier <maz@kernel.org>
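      A hedged sketch of the atomic update underpinning the series; the
      function is illustrative, not the kernel's:

        /*
         * Stage-2 PTEs are updated with cmpxchg so that concurrently
         * faulting vCPUs never clobber each other's changes; a loser
         * simply replays the fault.
         */
        static int stage2_try_set_pte_sketch(u64 *ptep, u64 old, u64 new)
        {
                if (cmpxchg64(ptep, old, new) != old)
                        return -EAGAIN; /* raced with another walker */
                return 0;
        }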
    • Merge branch kvm-arm64/dirty-ring into kvmarm-master/next · a937f37d
      Marc Zyngier authored
      * kvm-arm64/dirty-ring:
        : .
        : Add support for the "per-vcpu dirty-ring tracking with a bitmap
        : and sprinkles on top", courtesy of Gavin Shan.
        :
        : This branch drags the kvmarm-fixes-6.1-3 tag which was already
        : merged in 6.1-rc4 so that the branch is in a working state.
        :
        : (A userspace enablement sketch follows this entry.)
        : .
        KVM: Push dirty information unconditionally to backup bitmap
        KVM: selftests: Automate choosing dirty ring size in dirty_log_test
        KVM: selftests: Clear dirty ring states between two modes in dirty_log_test
        KVM: selftests: Use host page size to map ring buffer in dirty_log_test
        KVM: arm64: Enable ring-based dirty memory tracking
        KVM: Support dirty ring in conjunction with bitmap
        KVM: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h
        KVM: x86: Introduce KVM_REQ_DIRTY_RING_SOFT_FULL
      Signed-off-by: Marc Zyngier <maz@kernel.org>
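      A hedged userspace sketch of enabling the ring together with the
      backup bitmap; vm_fd and ring_bytes are assumed to exist, and
      KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP is the capability this series
      adds:

        struct kvm_enable_cap ring = {
                .cap     = KVM_CAP_DIRTY_LOG_RING_ACQ_REL,
                .args[0] = ring_bytes,  /* power-of-2 ring size in bytes */
        };
        struct kvm_enable_cap bitmap = {
                .cap = KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP,
        };

        ioctl(vm_fd, KVM_ENABLE_CAP, &ring);    /* per-vcpu dirty rings */
        ioctl(vm_fd, KVM_ENABLE_CAP, &bitmap);  /* catches dirtying done
                                                 * outside a vcpu context */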
    • Merge branch kvm-arm64/52bit-fixes into kvmarm-master/next · 3bbcc8cc
      Marc Zyngier authored
      * kvm-arm64/52bit-fixes:
        : .
        : 52bit PA fixes, courtesy of Ryan Roberts. From the cover letter:
        :
        : "I've been adding support for FEAT_LPA2 to KVM and as part of that work have been
        : testing various (84) configurations of HW, host and guest kernels on FVP. This
        : has thrown up a couple of pre-existing bugs, for which the fixes are provided."
        : .
        KVM: arm64: Fix benign bug with incorrect use of VA_BITS
        KVM: arm64: Fix PAR_TO_HPFAR() to work independently of PA_BITS.
        KVM: arm64: Fix kvm init failure when mode!=vhe and VA_BITS=52.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: Fix benign bug with incorrect use of VA_BITS · 219072c0
      Ryan Roberts authored
      get_user_mapping_size() uses kvm's pgtable library to walk a user space
      page table created by the kernel, and in doing so, passes metadata
      that the library needs, including ia_bits, which defines the size of the
      input address.
      
      For the case where the kernel is compiled for 52 VA bits but runs on HW
      that does not support LVA, it will fall back to 48 VA bits at runtime.
      Therefore we must use vabits_actual rather than VA_BITS to get the true
      address size.
      
      This is benign in the current code base because the pgtable library only
      uses it for error checking.
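      
      An abbreviated sketch of the corrected initialisation (other
      fields trimmed):

        struct kvm_pgtable pgt = {
                .pgd     = (kvm_pte_t *)kvm->mm->pgd,
                .ia_bits = vabits_actual,       /* was VA_BITS: wrong once
                                                 * 52-bit VA falls back to
                                                 * 48 bits at runtime */
        };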
      
      Fixes: 6011cf68 ("KVM: arm64: Walk userspace page tables to compute the THP mapping size")
      Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221205114031.3972780-1-ryan.roberts@arm.com
    • Merge branch kvm-arm64/selftest/access-tracking into kvmarm-master/next · b1d10ee1
      Marc Zyngier authored
      * kvm-arm64/selftest/access-tracking:
        : .
        : Small series to add support for arm64 to access_tracking_perf_test
        : and correct a couple of bugs along the way.
        :
        : Patches courtesy of Oliver Upton.
        : .
        KVM: selftests: Build access_tracking_perf_test for arm64
        KVM: selftests: Have perf_test_util signal when to stop vCPUs
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/selftest/s2-faults into kvmarm-master/next · adde0476
      Marc Zyngier authored
      * kvm-arm64/selftest/s2-faults:
        : .
        : New KVM/arm64 selftests exercising various sorts of S2 faults, courtesy
        : of Ricardo Koller. From the cover letter:
        :
        : "This series adds a new aarch64 selftest for testing stage 2 fault handling
        : for various combinations of guest accesses (e.g., write, S1PTW), backing
        : sources (e.g., anon), and types of faults (e.g., read on hugetlbfs with a
        : hole, write on a readonly memslot). Each test tries a different combination
        : and then checks that the access results in the right behavior (e.g., uffd
        : faults with the right address and write/read flag). [...]"
        : .
        KVM: selftests: aarch64: Add mix of tests into page_fault_test
        KVM: selftests: aarch64: Add readonly memslot tests into page_fault_test
        KVM: selftests: aarch64: Add dirty logging tests into page_fault_test
        KVM: selftests: aarch64: Add userfaultfd tests into page_fault_test
        KVM: selftests: aarch64: Add aarch64/page_fault_test
        KVM: selftests: Use the right memslot for code, page-tables, and data allocations
        KVM: selftests: Fix alignment in virt_arch_pgd_alloc() and vm_vaddr_alloc()
        KVM: selftests: Add vm->memslots[] and enum kvm_mem_region_type
        KVM: selftests: Stash backing_src_type in struct userspace_mem_region
        tools: Copy bitfield.h from the kernel sources
        KVM: selftests: aarch64: Construct DEFAULT_MAIR_EL1 using sysreg.h macros
        KVM: selftests: Add missing close and munmap in __vm_mem_region_delete()
        KVM: selftests: aarch64: Add virt_get_pte_hva() library function
        KVM: selftests: Add a userfaultfd library
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/selftest/linked-bps into kvmarm-master/next · 02f6fdd4
      Marc Zyngier authored
      * kvm-arm64/selftest/linked-bps:
        : .
        : Additional selftests for the arm64 breakpoints/watchpoints,
        : courtesy of Reiji Watanabe. From the cover letter:
        :
        : "This series adds test cases for linked {break,watch}points to the
        : debug-exceptions test, and expands {break,watch}point tests to
        : use non-zero {break,watch}points (the current test always uses
        : {break,watch}point#0)."
        : .
        KVM: arm64: selftests: Test with every breakpoint/watchpoint
        KVM: arm64: selftests: Add a test case for a linked watchpoint
        KVM: arm64: selftests: Add a test case for a linked breakpoint
        KVM: arm64: selftests: Change debug_version() to take ID_AA64DFR0_EL1
        KVM: arm64: selftests: Stop unnecessary test stage tracking of debug-exceptions
        KVM: arm64: selftests: Add helpers to enable debug exceptions
        KVM: arm64: selftests: Remove the hard-coded {b,w}pn#0 from debug-exceptions
        KVM: arm64: selftests: Add write_dbg{b,w}{c,v}r helpers in debug-exceptions
        KVM: arm64: selftests: Use FIELD_GET() to extract ID register fields
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/selftest/memslot-fixes into kvmarm-master/next · f8faf02f
      Marc Zyngier authored
      * kvm-arm64/selftest/memslot-fixes:
        : .
        : KVM memslot selftest fixes for non-4kB page sizes, courtesy
        : of Gavin Shan. From the cover letter:
        :
        : "kvm/selftests/memslots_perf_test doesn't work with 64KB-page-size-host
        : and 4KB-page-size-guest on aarch64. In the implementation, the host and
        : guest page sizes have been hardcoded to 4KB. It's obviously not
        : working on aarch64, which supports 4KB, 16KB and 64KB page sizes
        : independently on host and guest.
        :
        : This series tries to fix it. After the series is applied, the test runs
        : successfully with 64KB-page-size-host and 4KB-page-size-guest."
        :
        : (A sketch of the runtime page-size derivation follows this entry.)
        : .
        KVM: selftests: memslot_perf_test: Report optimal memory slots
        KVM: selftests: memslot_perf_test: Consolidate memory
        KVM: selftests: memslot_perf_test: Support variable guest page size
        KVM: selftests: memslot_perf_test: Probe memory slots for once
        KVM: selftests: memslot_perf_test: Consolidate loop conditions in prepare_vm()
        KVM: selftests: memslot_perf_test: Use data->nslots in prepare_vm()
      Signed-off-by: Marc Zyngier <maz@kernel.org>
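      A hedged sketch of the runtime derivation; vm_guest_mode_params is
      the selftest framework's per-mode table, and the data->... fields
      are illustrative:

        data->host_page_size  = getpagesize();
        data->guest_page_size = vm_guest_mode_params[mode].page_size;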
    • KVM: arm64: PMU: Fix period computation for 64bit counters with 32bit overflow · 58ff6569
      Marc Zyngier authored
      Fix the bogus masking when computing the period of a 64bit counter
      with 32bit overflow. It really should be treated like a 32bit counter
      for the purpose of the period.
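      
      A hedged sketch of the corrected computation (helper and parameter
      names are illustrative, not the kernel's):

        static u64 compute_period_sketch(u64 counter, bool wide_counter,
                                         bool wide_overflow)
        {
                if (wide_counter && wide_overflow)
                        return (-counter) & GENMASK(63, 0);

                /* 32bit counter, or 64bit counter with 32bit overflow */
                return (-counter) & GENMASK(31, 0);
        }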
      Reported-by: Ricardo Koller <ricarkol@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/Y4jbosgHbUDI0WF4@google.com
  2. 29 Nov, 2022 10 commits
  3. 28 Nov, 2022 3 commits
    • KVM: arm64: PMU: Sanitise PMCR_EL0.LP on first vcpu run · 64d6820d
      Marc Zyngier authored
      Userspace can play some dirty tricks on us by selecting a given
      PMU version (such as PMUv3p5), restoring a PMCR_EL0 value that
      has PMCR_EL0.LP set, and then switching the PMU version to
      PMUv3p1, for example. In this situation, we end up with
      PMCR_EL0.LP set, spreading havoc in the PMU emulation.
      
      This is especially hard to handle, as the first two steps can be
      done on one vcpu and the third on another, meaning that we need
      to sanitise *all* vcpus when the PMU version is changed.
      
      In order to avoid a pretty complicated locking situation, defer
      the sanitisation of PMCR_EL0 to the point where the vcpu is
      actually run for the first time, using the existing
      KVM_REQ_RELOAD_PMU request that calls into kvm_pmu_handle_pmcr().
      
      There is still an obscure corner case where userspace could
      do the above trick, and then save the VM without running it.
      They would then observe an inconsistent state (PMUv3.1 + LP set),
      but that state will be fixed on the first run anyway whenever
      the guest gets restored on a host.
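      
      A hedged sketch of the deferred fix-up, as applied from
      kvm_pmu_handle_pmcr(); kvm_pmu_is_3p5() is the helper introduced
      elsewhere in this series:

        /* PMCR_EL0.LP is RES0 before PMUv3p5: strip it on first run */
        if (!kvm_pmu_is_3p5(vcpu))
                val &= ~ARMV8_PMU_PMCR_LP;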
      Reported-by: Reiji Watanabe <reijiw@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: PMU: Simplify PMCR_EL0 reset handling · 292e8f14
      Marc Zyngier authored
      Resetting PMCR_EL0 is a pretty involved process that includes
      poisoning some of the writable bits, just because we can.
      
      It makes it hard to reason about what gets configured, and just
      resetting things to 0 seems like a much saner option.
      
      Reduce reset_pmcr() to just preserving PMCR_EL0.N from the host,
      and setting PMCR_EL0.LC if we don't support AArch32.
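      
      A hedged sketch of the reduced reset logic (close to, but not
      verbatim, the patch):

        u64 pmcr = 0;

        /* Preserve only the number of implemented counters (PMCR_EL0.N) */
        pmcr |= read_sysreg(pmcr_el0) &
                (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
        if (!kvm_supports_32bit_el0())
                pmcr |= ARMV8_PMU_PMCR_LC;      /* long cycle counter */
        __vcpu_sys_reg(vcpu, PMCR_EL0) = pmcr;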
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: PMU: Replace version number '0' with ID_AA64DFR0_EL1_PMUVer_NI · 86815735
      Anshuman Khandual authored
      kvm_host_pmu_init() returns early when the detected PMU is either
      not implemented or implementation defined. kvm_pmu_probe_armpmu()
      has a similar situation.
      
      The ID_AA64DFR0_EL1_PMUVer value extracted when the PMU is not
      implemented is '0', which can be replaced with
      ID_AA64DFR0_EL1_PMUVer_NI, defined as '0b0000'.
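      
      A hedged sketch of the resulting early-return:

        pmuver = cpuid_feature_extract_unsigned_field(dfr0,
                                ID_AA64DFR0_EL1_PMUVer_SHIFT);
        /* Bail out for "not implemented" (NI) or IMP DEF PMUs */
        if (pmuver == ID_AA64DFR0_EL1_PMUVer_NI ||
            pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
                return;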
      
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: linux-perf-users@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221128135629.118346-1-anshuman.khandual@arm.com
  4. 22 Nov, 2022 3 commits
  5. 19 Nov, 2022 8 commits
  6. 17 Nov, 2022 3 commits