Commit f29f18eb authored by Ben Skeggs

drm/nouveau: avoid GPU page sizes > PAGE_SIZE for buffer objects in host memory

While the Tegra (GK20A, GM20B, GP10B) MMUs support large pages in host
memory, we're currently lacking IOMMU support for merging system pages
into large enough chunks to be mapped as such by the GPU.

The core VMM code actually supports automatically determining the best
page size to map with, which is intended for these situations, but for
various complicated reasons the DRM is currently forcing the page size
selection on a per-BO basis.

This should fix breakage reported on Tegra GPUs in the meantime, until
one or both of the above issues are resolved properly.
Reported-by: Mikko Perttunen <cyndis@kapsi.fi>
Fixes: 7dc6a446 ("drm/nouveau: improve selection of GPU page size")
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Tested-by: Thierry Reding <treding@nvidia.com>
parent 6cb0f2a3
@@ -262,7 +262,8 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
 		if (cli->device.info.family > NV_DEVICE_INFO_V0_CURIE &&
 		    (flags & TTM_PL_FLAG_VRAM) && !vmm->page[i].vram)
 			continue;
-		if ((flags & TTM_PL_FLAG_TT ) && !vmm->page[i].host)
+		if ((flags & TTM_PL_FLAG_TT) &&
+		    (!vmm->page[i].host || vmm->page[i].shift > PAGE_SHIFT))
 			continue;
 		/* Select this page size if it's the first that supports
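For context, the hunk above sits in the loop in nouveau_bo_new() that picks a GPU page size for each buffer object up front. Below is a minimal, self-contained userspace sketch of that selection logic, not the driver code itself: the mock_vmm_page structure, pick_page_shift() helper, MOCK_PAGE_SHIFT constant, and the example page-size table are all invented for illustration, and the sketch ignores the driver's buffer-size and compression considerations. It only demonstrates the effect of the new test: host-memory (TTM_PL_FLAG_TT) placements are restricted to GPU page sizes no larger than the CPU's PAGE_SIZE.

#include <stdio.h>
#include <stdbool.h>

#define MOCK_PAGE_SHIFT 12   /* assume a 4KiB CPU page size (PAGE_SHIFT) */

struct mock_vmm_page {
	int  shift;          /* GPU page size, as a power of two */
	bool vram;           /* page size usable for VRAM mappings */
	bool host;           /* page size usable for host-memory mappings */
};

/* Ordered largest to smallest, like the driver's vmm->page[] array. */
static const struct mock_vmm_page pages[] = {
	{ 17, true, true },  /* 128KiB large page */
	{ 16, true, true },  /*  64KiB large page */
	{ 12, true, true },  /*   4KiB small page */
};

/* Return the first (largest) page shift valid for the requested placement. */
static int pick_page_shift(bool want_vram, bool want_host)
{
	for (unsigned int i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		if (want_vram && !pages[i].vram)
			continue;
		/* The fix: host-memory BOs may not use GPU pages larger than
		 * the CPU page size, since the IOMMU cannot yet merge system
		 * pages into large GPU pages.
		 */
		if (want_host &&
		    (!pages[i].host || pages[i].shift > MOCK_PAGE_SHIFT))
			continue;
		return pages[i].shift;
	}
	return -1;
}

int main(void)
{
	/* Host (GART) placement now falls back to 4KiB GPU pages... */
	printf("host BO page shift: %d\n", pick_page_shift(false, true));
	/* ...while a VRAM placement can still use 128KiB large pages. */
	printf("vram BO page shift: %d\n", pick_page_shift(true, false));
	return 0;
}

Compiled and run as ordinary userspace C, this prints a shift of 12 for the host-memory placement and 17 for the VRAM placement, which mirrors the intent of the fix: host mappings fall back to PAGE_SIZE-sized GPU pages, while VRAM mappings may still use large pages.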