Commit 9fef276f authored by Michael Kelley, committed by Wei Liu

x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback

In preparation for temporarily marking pages not present during a
transition between encrypted and decrypted, use slow_virt_to_phys()
in the hypervisor callback. As long as the PFN is correct,
slow_virt_to_phys() works even if the leaf PTE is not present.
The existing functions that depend on vmalloc_to_page() all
require that the leaf PTE be marked present, so they don't work.

Update the comments for slow_virt_to_phys() to note this broader usage
and the requirement to work even if the PTE is not marked present.
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Link: https://lore.kernel.org/r/20240116022008.1023398-2-mhklinux@outlook.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <20240116022008.1023398-2-mhklinux@outlook.com>
parent 04ed680e
@@ -515,6 +515,8 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
 	enum hv_mem_host_visibility visibility = enc ?
 			VMBUS_PAGE_NOT_VISIBLE : VMBUS_PAGE_VISIBLE_READ_WRITE;
 	u64 *pfn_array;
+	phys_addr_t paddr;
+	void *vaddr;
 	int ret = 0;
 	bool result = true;
 	int i, pfn;
@@ -524,7 +526,15 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
 		return false;

 	for (i = 0, pfn = 0; i < pagecount; i++) {
-		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
+		/*
+		 * Use slow_virt_to_phys() because the PRESENT bit has been
+		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
+		 * without the PRESENT bit while virt_to_hvpfn() or similar
+		 * does not.
+		 */
+		vaddr = (void *)kbuffer + (i * HV_HYP_PAGE_SIZE);
+		paddr = slow_virt_to_phys(vaddr);
+		pfn_array[pfn] = paddr >> HV_HYP_PAGE_SHIFT;
 		pfn++;
 		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
@@ -755,10 +755,14 @@ pmd_t *lookup_pmd_address(unsigned long address)
  * areas on 32-bit NUMA systems. The percpu areas can
  * end up in this kind of memory, for instance.
  *
- * This could be optimized, but it is only intended to be
- * used at initialization time, and keeping it
- * unoptimized should increase the testing coverage for
- * the more obscure platforms.
+ * Note that as long as the PTEs are well-formed with correct PFNs, this
+ * works without checking the PRESENT bit in the leaf PTE. This is unlike
+ * the similar vmalloc_to_page() and derivatives. Callers may depend on
+ * this behavior.
+ *
+ * This could be optimized, but it is only used in paths that are not perf
+ * sensitive, and keeping it unoptimized should increase the testing coverage
+ * for the more obscure platforms.
  */
 phys_addr_t slow_virt_to_phys(void *__virt_addr)
 {