1. 24 Mar, 2020 7 commits
    • drm: Add a drm_get_unmapped_area() helper · b1823416
      Thomas Hellstrom (VMware) authored
      Unaligned virtual addresses make it unlikely that huge page-table entries
      can be used.
      So align virtual buffer object addresses to huge page boundaries that match
      the underlying physical address huge page boundaries, taking buffer object
      sizes into account to determine when it might be possible to use huge
      page-table entries (a sketch of the alignment arithmetic follows this entry).
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      b1823416
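
      A minimal sketch of the alignment arithmetic described above, assuming a
      2 MiB (PMD) huge-page size; the helper name and parameters are
      illustrative only, not the kernel's drm_get_unmapped_area() implementation.

      #include <stdint.h>

      #define HUGE_SIZE (2UL * 1024 * 1024)   /* PMD huge-page size on x86-64 */

      /*
       * Illustrative only: pick a virtual base whose offset within a huge
       * page matches the offset of the backing physical region, so that
       * PMD-sized page-table entries become possible for large objects.
       */
      static uint64_t align_for_huge(uint64_t hint, uint64_t phys_off,
                                     uint64_t len)
      {
              uint64_t addr;

              /* Objects smaller than a huge page can never use a huge entry. */
              if (len < HUGE_SIZE)
                      return hint;

              /* Round the hint up to a huge-page boundary ... */
              addr = (hint + HUGE_SIZE - 1) & ~(HUGE_SIZE - 1);

              /* ... then reproduce the physical offset's misalignment so that
               * addr and phys_off agree in their low HUGE_SIZE bits. */
              addr += phys_off & (HUGE_SIZE - 1);

              return addr;
      }
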
    • drm/vmwgfx: Support huge page faults · 75390281
      Thomas Hellstrom (VMware) authored
      With vmwgfx dirty-tracking we need a specialized huge_fault
      callback. Implement and hook it up (a sketch follows this entry).
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      75390281
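
      A hedged, kernel-style sketch of the sort of huge_fault hook added here;
      vmw_vma_needs_dirty_tracking() and vmw_common_huge_fault() are
      hypothetical stand-ins rather than the real vmwgfx functions, and only the
      huge_fault prototype and VM_FAULT_FALLBACK semantics are taken as given.

      #include <linux/mm.h>

      /*
       * Sketch only, not the actual vmwgfx code: dirty tracking works at
       * 4K-page granularity, so a huge entry would hide which pages were
       * written; fall back to PTEs in that case, otherwise take the common
       * huge-fault path.
       */
      static vm_fault_t vmw_huge_fault_sketch(struct vm_fault *vmf,
                                              enum page_entry_size pe_size)
      {
              if (vmw_vma_needs_dirty_tracking(vmf->vma))  /* hypothetical */
                      return VM_FAULT_FALLBACK;            /* PTE level */

              return vmw_common_huge_fault(vmf, pe_size);  /* hypothetical */
      }

      /* Hooked up through the vma ops, e.g.:
       *      static const struct vm_operations_struct vmw_vm_ops = {
       *              .fault          = ...,
       *              .huge_fault     = vmw_huge_fault_sketch,
       *      };
       */
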
    • drm/ttm, drm/vmwgfx: Support huge TTM pagefaults · 314b6580
      Thomas Hellstrom (VMware) authored
      Support huge (PMD-size and PUD-size) page-table entries by providing a
      huge_fault() callback.
      We still support private mappings and write-notify by splitting the huge
      page-table entries on write-access.
      
      Note that for huge page-faults to occur, the kernel needs to be built with
      transparent huge pages either always enabled, or enabled using madvise; in
      the latter case the user-space app also needs to call madvise() to enable
      transparent huge pages on a per-mapping basis.
      
      Furthermore, huge page-faults will not succeed unless buffer objects and
      user-space addresses are aligned on huge-page-size boundaries (a user-space
      sketch follows this entry).
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      314b6580
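
      As noted above, with THP built in madvise mode user space must opt in per
      mapping, and both the address and the buffer object must be huge-page
      aligned. A minimal user-space sketch, assuming drm_fd and mmap_offset come
      from the driver's usual mmap-offset query:

      #include <stddef.h>
      #include <stdint.h>
      #include <sys/mman.h>

      #define HUGE_SIZE (2UL * 1024 * 1024)   /* PMD huge-page size on x86-64 */

      /* Map a buffer object and ask for transparent huge pages on the mapping;
       * mmap_offset is assumed to come from the driver's mmap-offset ioctl. */
      static void *map_bo_with_thp(int drm_fd, uint64_t mmap_offset, size_t size)
      {
              void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_SHARED, drm_fd, (off_t)mmap_offset);
              if (addr == MAP_FAILED)
                      return NULL;

              /* Opt in to transparent huge pages for this mapping; needed when
               * THP is built with the "madvise" policy, harmless otherwise.
               * Huge faults still require size >= HUGE_SIZE and suitable
               * alignment of both the address and the buffer object. */
              (void)madvise(addr, size, MADV_HUGEPAGE);

              return addr;
      }
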
    • mm: Add vmf_insert_pfn_xxx_prot() for huge page-table entries · 9a9731b1
      Thomas Hellstrom (VMware) authored
      For graphics drivers that need to modify the page protection, add huge
      page-table entry counterparts to vmf_insert_pfn_prot() (a usage sketch
      follows this entry).
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Acked-by: Christian König <christian.koenig@amd.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      9a9731b1
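
      A hedged sketch of how a driver's huge_fault handler might use the new
      PMD-level helper; it assumes vmf_insert_pfn_pmd_prot() mirrors
      vmf_insert_pfn_pmd() with an extra pgprot argument, and my_bo_prot() /
      my_bo_pfn() are hypothetical driver helpers.

      #include <linux/huge_mm.h>
      #include <linux/mm.h>
      #include <linux/pfn_t.h>

      /* Sketch: insert a PMD-sized entry with a driver-chosen protection
       * (e.g. write-combined for VRAM); fall back for other entry sizes. */
      static vm_fault_t my_huge_fault(struct vm_fault *vmf,
                                      enum page_entry_size pe_size)
      {
              pgprot_t prot;
              pfn_t pfn;

              if (pe_size != PE_SIZE_PMD)
                      return VM_FAULT_FALLBACK;

              prot = my_bo_prot(vmf->vma);  /* hypothetical driver helper */
              pfn = my_bo_pfn(vmf);         /* hypothetical: PFN backing vmf->address */

              return vmf_insert_pfn_pmd_prot(vmf, pfn, prot,
                                             vmf->flags & FAULT_FLAG_WRITE);
      }
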
    • mm: Split huge pages on write-notify or COW · 327e9fd4
      Thomas Hellstrom (VMware) authored
      The functions wp_huge_pmd() and wp_huge_pud() currently rely on the
      huge_fault() callback to split huge page-table entries if needed.
      However, for module users that requires exporting the split_huge_xxx()
      functionality, which may be undesired. Instead, split pre-existing huge
      page-table entries on a VM_FAULT_FALLBACK return.
      
      We currently only do COW and write-notify on the PTE level, so if the
      huge_fault() handler returns VM_FAULT_FALLBACK on wp faults, split the
      huge pages and page-table entries. Also do this for huge PUDs if there is
      no huge_fault() handler and the vma is not anonymous, similar to how it's
      done for PMDs (a sketch of the resulting flow follows this entry).
      
      Note that fs/dax.c still does the splitting in its huge_fault() handler;
      a follow-up patch can remove the dax.c split_huge_pmd() call if needed.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Acked-by: Christian König <christian.koenig@amd.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      327e9fd4
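
      A simplified sketch of the resulting PMD write-protect flow in core mm
      (locking and details elided); the PUD case follows the same pattern.

      #include <linux/huge_mm.h>
      #include <linux/mm.h>

      /* Simplified sketch of the PMD write-protect fault path after this
       * change; not the verbatim mm/memory.c code. */
      static vm_fault_t wp_huge_pmd_sketch(struct vm_fault *vmf, pmd_t orig_pmd)
      {
              if (vma_is_anonymous(vmf->vma))
                      return do_huge_pmd_wp_page(vmf, orig_pmd);

              if (vmf->vma->vm_ops->huge_fault) {
                      vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf,
                                                                    PE_SIZE_PMD);

                      if (!(ret & VM_FAULT_FALLBACK))
                              return ret;
              }

              /* COW and write-notify are handled at the PTE level: split the
               * PMD so the fault is retried with ordinary PTEs. */
              __split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);

              return VM_FAULT_FALLBACK;
      }
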
    • mm: Introduce vma_is_special_huge · 2484ca9b
      Thomas Hellstrom (VMware) authored
      For VM_PFNMAP and VM_MIXEDMAP vmas that want to support transhuge pages
      and transhuge page-table entries, introduce vma_is_special_huge(), which
      takes the same code paths as vma_is_dax() (a sketch follows this entry).
      
      The use of "special" follows the definition in memory.c, vm_normal_page():
      "Special" mappings do not wish to be associated with a "struct page"
      (either it doesn't exist, or it exists but they don't want to touch it)
      
      For PAGE_SIZE pages, "special" is determined per page table entry to be
      able to deal with COW pages. But since we don't have huge COW pages,
      we can classify a vma as either "special huge" or "normal huge".
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Acked-by: Christian König <christian.koenig@amd.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      2484ca9b
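
      A sketch of the per-vma classification described above, assuming it builds
      directly on vma_is_dax() and the VM_PFNMAP/VM_MIXEDMAP flags; the exact
      kernel definition may differ slightly.

      #include <linux/mm.h>

      /* Sketch: a vma is "special huge" if its huge entries never map normal
       * struct pages, i.e. DAX vmas and file-backed PFNMAP/MIXEDMAP vmas.
       * Decided per vma rather than per entry because huge COW pages do not
       * exist. */
      static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
      {
              return vma_is_dax(vma) ||
                     (vma->vm_file &&
                      (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
      }
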
    • fs: Constify vma argument to vma_is_dax · f05a3849
      Thomas Hellstrom (VMware) authored
      The function is used by the upcoming vma_is_special_huge(), which we want
      to take a const vma argument. Since vma_is_dax() only dereferences the vma
      argument for reading, constify it (a sketch follows this entry).
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Acked-by: Christian König <christian.koenig@amd.com>
      f05a3849
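
      The change amounts to taking the vma by const pointer, since it is only
      read. A sketch, assuming the usual IS_DAX()-based definition:

      #include <linux/fs.h>
      #include <linux/mm_types.h>

      /* Only reads the vma, so a const pointer is sufficient; this lets the
       * const-taking vma_is_special_huge() call it without a cast. */
      static inline bool vma_is_dax(const struct vm_area_struct *vma)
      {
              return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
      }
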
  2. 20 Mar, 2020 2 commits
  3. 19 Mar, 2020 31 commits