- 10 Oct, 2017 5 commits
-
-
Jani Nikula authored
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/4a95fb9d23d980830e3158d3c57258e6539965ce.1506586821.git.jani.nikula@intel.com
-
Jani Nikula authored
Prepare for merging parse_device_mapping() into parse_general_definitions(). No functional changes. Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/f1c3621e2622f4debdfb4a2f5c1959845754ac04.1506586821.git.jani.nikula@intel.com
-
Jani Nikula authored
In theory, these might clobber information for older VBT versions. We might have to store the BDB version for later parsing, but currently all code accessing these fields will only use them on newer platforms with new enough BDB versions. Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/0232d9cb258e8f83c4180cdb8aad1459a312ec2a.1506586821.git.jani.nikula@intel.com
-
Jani Nikula authored
Early return on failures. Rename the variable for later merging with parse_device_mappings(). Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/785abb904a572752fec68d90d34efeb67774dc1f.1506586821.git.jani.nikula@intel.com
-
Jani Nikula authored
While technically CHV isn't DDI, we do look at the VBT based DDI port info for HDMI DDC pin and DP AUX channel. (We call these "alternate", but they're really just something that aren't platform defaults.) In commit e4ab73a1 ("drm/i915: Respect alternate_ddc_pin for all DDI ports") Ville writes, "IIRC there may be CHV system that might actually need this." I'm not sure why there couldn't be even more platforms that need this, but start conservative, and parse the info for CHV in addition to DDI. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100553 Reported-by: Marek Wilczewski <mw@3cte.pl> Cc: stable@vger.kernel.org Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/d0815082cb98487618429b62414854137049b888.1506586821.git.jani.nikula@intel.com
-
- 09 Oct, 2017 18 commits
-
-
Paulo Zanoni authored
If for some unexpected reason the registers all read zero it's better to WARN and return instead of dividing by zero and completely freezing the machine. I don't expect this to happen in the wild with the current code, but I accidentally triggered the division by zero while doing some debugging in an unusual environment. Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171005213842.11423-2-paulo.r.zanoni@intel.com
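As a rough illustration of the guard described above (the function and variable names below are made up for the sketch, not the actual i915 code), the fix boils down to warning and bailing out before the divisor is ever used:

	/* Sketch only: "busy" and "total" stand in for the counter registers. */
	static u32 busy_ratio(u32 busy, u32 total)
	{
		if (WARN_ON(total == 0))
			return 0;	/* warn and bail instead of dividing by zero */

		return DIV_ROUND_UP(busy * 100, total);
	}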
-
Paulo Zanoni authored
Looks like we were missing them. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20170922205343.16006-2-paulo.r.zanoni@intel.com
-
Matthew Auld authored
It's a little unclear what the sg_mask actually is, so prefer the more meaningful name of sg_page_sizes. Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110024.29114-1-matthew.auld@intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Chris Wilson authored
Currently, we reject attempting to pin a large bo into the mappable aperture, but only after trying to create the vma. Under debug kernels, repeatedly creating and freeing that vma for an oversized bo consumes one-third of the runtime for pwrite/pread tests as it is spent on kmalloc/kfree tracking. If we move the rejection to before creating that vma, we lose some accuracy of checking against the fence_size as opposed to object size, though the fence can never be smaller than the object. Note that the vma creation itself will reject an attempt to create a vma larger than the GTT so we can remove one redundant test. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-7-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
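A sketch of the kind of up-front rejection described here, assuming the GGTT's mappable_end field as the limit (illustrative, not the literal patch):

	/*
	 * Sketch: an object larger than the mappable aperture can never be
	 * pinned there, so reject it before creating (and freeing) a vma.
	 */
	if (obj->base.size > dev_priv->ggtt.mappable_end)
		return ERR_PTR(-E2BIG);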
-
Chris Wilson authored
Both pread/pwrite GTT paths provide a fast fallback in case we cannot map the whole object at a time. Currently, we use the fallback for very large objects and for active objects that would require remapping, but we can also add active fault-mappable objects to the list that we want to avoid evicting. The rationale is that such fault-mappable objects are in active use and evicting them requires tearing down the CPU PTE and forcing a page fault on the next access; that is more costly, and interferes with other processes, than our per-page GTT fallback. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-6-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Chris Wilson authored
As we have a lightweight fallback to insert a single page into the aperture, try to avoid any heavier evictions when attempting to insert the entire object. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-5-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Chris Wilson authored
If the caller says that he doesn't want to evict any other faulting vma, honour that flag. The logic was used in evict_something, but not the more specific evict_for_node, now being used as a preliminary probe since commit 606fec95 ("drm/i915: Prefer random replacement before eviction search"). Fixes: 606fec95 ("drm/i915: Prefer random replacement before eviction search") Fixes: 82118877 ("drm/i915: Choose not to evict faultable objects from the GGTT") References: https://bugs.freedesktop.org/show_bug.cgi?id=102490 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-4-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Chris Wilson authored
We don't wish to refault the entire object (other vma) when unbinding one partial vma. To do this, track which vma have been faulted into the user's address space. v2: Use a local vma_offset to tidy up a multiline unmap_mapping_range(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-3-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
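unmap_mapping_range() works on a byte range within the file mapping, so revoking just one partial vma looks roughly like the sketch below (the vma_offset local is the tidy-up mentioned in v2; the surrounding field accesses are assumptions about the structures involved):

	/* Sketch: zap only the user PTEs that cover this vma. */
	u64 vma_offset = vma->ggtt_view.partial.offset << PAGE_SHIFT;

	unmap_mapping_range(vma->obj->base.filp->f_mapping,
			    drm_vma_node_offset_addr(&vma->obj->base.vma_node) +
			    vma_offset,
			    vma->size,
			    1);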
-
Chris Wilson authored
Following the pattern now used for obj->mm.pages, use just pin_fence and unpin_fence to control access to the fence registers. I.e. instead of calling get_fence(); pin_fence(), we now just need to call pin_fence(). This will make it easier to reduce the locking requirements around fence registers. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-2-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
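The calling convention after this change is, roughly, a single pin/unpin pair (a sketch, with the surrounding error handling trimmed; the helper name here is illustrative):

	static int use_fenced_access(struct i915_vma *vma)
	{
		int err;

		/* pin_fence now both looks up a fence register and pins it. */
		err = i915_vma_pin_fence(vma);
		if (err)
			return err;

		/* ... access the object through the fenced GTT mapping ... */

		i915_vma_unpin_fence(vma);
		return 0;
	}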
-
Chris Wilson authored
Acquire the fence register for the iomap in i915_vma_pin_iomap() on behalf of the caller. We probably want the caller to specify whether the fence should be pinned for their usage, but at the moment all callers do want the associated fence, or none, so take it on their behalf. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-1-chris@chris-wilson.co.uk
-
Chris Wilson authored
Add assert_forcewakes_active() (the complementary function to assert_forcewakes_inactive) that documents the requirement of a function for its callers to be holding the forcewake ref (i.e. the function is part of a sequence over which RC6 must be prevented). One such example is during ringbuffer reset, where RC6 must be held across the whole reinitialisation sequence. v2: Include debug information in the WARN so we know which fw domain is missing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> #v1 Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-5-chris@chris-wilson.co.uk
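A sketch of what such an assert might look like: warn about any requested forcewake domain that is not currently held (the uncore field names here are from memory and should be treated as assumptions):

	void assert_forcewakes_active(struct drm_i915_private *dev_priv,
				      enum forcewake_domains fw_domains)
	{
		unsigned int missing;

		if (!dev_priv->uncore.funcs.force_wake_get)
			return;	/* no forcewake on this platform */

		/* Warn, naming the domains, if the caller isn't holding them all. */
		missing = fw_domains & ~dev_priv->uncore.fw_domains_active;
		WARN(missing,
		     "Expected %08x fw_domains to be active, but %08x are off\n",
		     fw_domains, missing);
	}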
-
Chris Wilson authored
The lowlevel reset functions expect the caller to be holding the rpm wakeref for the device access across the reset. We were not explicitly doing this in the selftest, so for simplicity acquire the wakeref for the duration of all subtests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-4-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
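In practice this is just a wakeref bracket around the subtest run (a sketch; run_reset_subtests() is a hypothetical stand-in for the selftest runner):

	int err;

	/* The low-level reset functions assume the device is awake. */
	intel_runtime_pm_get(i915);
	err = run_reset_subtests(i915);		/* hypothetical runner */
	intel_runtime_pm_put(i915);

	return err;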
-
Chris Wilson authored
Resetting the engine requires us to hold the forcewake wakeref to prevent RC6 trying to happen in the middle of the reset sequence. The consequence of an unwanted RC6 event in the middle is that random state is then saved to the powercontext and restored later, which may overwrite the mmio state we need to preserve (e.g. PD_DIR_BASE in the legacy ringbuffer reset_ring_common()). This was noticed in the live_hangcheck selftests when Haswell would sporadically fail to restart during igt_reset_queue(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-3-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
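Expressed as code, the fix is a forcewake bracket around the whole reset sequence (a sketch; the reset body itself is elided):

	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);

	/*
	 * ... perform the engine reset and ring reinitialisation here;
	 * with forcewake held, RC6 cannot save/restore random state
	 * (e.g. PD_DIR_BASE) in the middle of the sequence ...
	 */

	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);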
-
Chris Wilson authored
During hangcheck testing, we try to execute requests following the GPU reset, and in particular want to try and debug when those fail. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-2-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
-
Chris Wilson authored
We can use drm_printer to hide the differences between printk and seq_printf, and so make the i915_engine_info pretty printer able to be called from different contexts and not just debugfs. For instance, I want to use the pretty printer to debug kselftests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-1-chris@chris-wilson.co.uk Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
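The drm_printer abstraction lets the same dump routine feed either a seq_file or printk; a usage sketch (dump_engine() here is illustrative, not the actual function name):

	static void dump_engine(struct intel_engine_cs *engine, struct seq_file *m)
	{
		struct drm_printer p;

		if (m)		/* debugfs: print into the seq_file */
			p = drm_seq_file_printer(m);
		else		/* e.g. kselftests: route the output through printk */
			p = drm_info_printer(engine->i915->drm.dev);

		drm_printf(&p, "%s\n", engine->name);
	}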
-
Chris Wilson authored
We only apply the hugepage PD redirection inside the ppGTT, so during i915_vma_insert() we want to exclude the GGTT from the additional alignment constraints (thereby avoiding the extra GTT pressure from fragmentation). Add an assert to document that intention alongside the comment. v2: After discussion with Matthew, make it a blanket GGTT ban (previously we allowed the expansion for appgtt, and so indirectly ggtt). There are issues we need to fix before allowing the current appgtt to be used with hugepages, and if we do, we probably want more care over when to expand/align, as the mappable aperture inside the ggtt is precious. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.william.auld@gmail.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171009092019.20747-1-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Ville Syrjälä authored
Eliminate the duplicate code for pipe timing readout in intel_crtc_mode_get() by using the functions we use for the normal state readout. v2: Store dotclock in adjusted_mode instead of the final mode Cc: dri-devel@lists.freedesktop.org Cc: Rob Kramer <rob@solution-space.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/1459536530-17754-1-git-send-email-ville.syrjala@linux.intel.com
-
Ville Syrjälä authored
intel_crtc->config->cpu_transcoder isn't yet filled out when intel_crtc_mode_get() gets called during output probing, so we should not use it there. Instead intel_crtc_mode_get() figures out the correct transcoder on its own, and that's what we should use. If the BIOS boots LVDS on pipe B, intel_crtc_mode_get() would actually end up reading the timings from pipe A instead (since PIPE_A==0), which clearly isn't what we want. It looks to me like this may have been broken by commit eccb140b ("drm/i915: hw state readout&check support for cpu_transcoder") as that one removed the early initialization of cpu_transcoder from intel_crtc_init(). Cc: stable@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: Rob Kramer <rob@solution-space.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reported-by: Rob Kramer <rob@solution-space.com> Fixes: eccb140b ("drm/i915: hw state readout&check support for cpu_transcoder") References: https://lists.freedesktop.org/archives/dri-devel/2016-April/104142.html Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/1459525046-19425-1-git-send-email-ville.syrjala@linux.intel.com
-
- 07 Oct, 2017 17 commits
-
-
Matthew Auld authored
For gen8+ platforms which support the 48b PPGTT, enable platform level support for 2M pages. Also enable for mock testing. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-22-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-21-chris@chris-wilson.co.uk
-
Matthew Auld authored
For gen9+ enable platform level support for 64K pages. Also enable for mock testing. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-21-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-20-chris@chris-wilson.co.uk
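Both this and the previous entry boil down to extending a per-platform page-size mask in the device info; conceptually something like the following (the macro and field names are assumptions based on this series, not verified against the tree):

	/* gen8 with 48b PPGTT: 4K and 2M pages in the ppgtt. */
	.page_sizes = I915_GTT_PAGE_SIZE_4K |
		      I915_GTT_PAGE_SIZE_2M,

	/* gen9+ additionally gains 64K pages. */
	.page_sizes = I915_GTT_PAGE_SIZE_4K |
		      I915_GTT_PAGE_SIZE_64K |
		      I915_GTT_PAGE_SIZE_2M,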
-
Matthew Auld authored
Currently gvt gtt handling doesn't support huge page entries, so disable for now. v2: remove useless 48b PPGTT check Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-20-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-19-chris@chris-wilson.co.uk
-
Matthew Auld authored
Try to mix sg page sizes for 4K, 64K and 2M pages. v2: s/BIT(x) >> 12/BIT(x) >> PAGE_SHIFT/ Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-19-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-18-chris@chris-wilson.co.uk
-
Matthew Auld authored
v2: mock test page support configurations and add MI_STORE_DWORD test. v3: run all mockable huge page tests on all platforms via the mock_device. v4: add pin_update regression test; various improvements suggested by Chris. v5: fix issues reported by kbuild test; single sg spanning multiple page sizes; don't explode when running the live-tests through the appgtt. v6: lots of improvements from Chris. v7: run on each engine for igt_write_huge; add simple tmpfs fallback test. v8: size_t is bad; don't break the i386 build. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-18-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-17-chris@chris-wilson.co.uk
-
Matthew Auld authored
Good to know, mostly for debugging purposes. v2: some improvements from Chris Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-17-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-16-chris@chris-wilson.co.uk
-
Matthew Auld authored
Now that we support multiple page sizes for the ppgtt, it would be useful to track the real usage for debugging purposes. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-16-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-15-chris@chris-wilson.co.uk
-
Matthew Auld authored
Support inserting 64K pages into the 48b PPGTT. v2: check for 64K scratch v3: we should only have to re-adjust maybe_64K at every sg interval Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-15-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-14-chris@chris-wilson.co.uk
-
Matthew Auld authored
Before we can fully enable 64K pages, we need to first support a 64K scratch page if we intend to support the case where we have object sizes < 2M, since any scratch PTE must also point to a 64K region. Without this our 64K usage is limited to objects which completely fill the page-table, and therefore don't need any scratch. v2: add reminder about why 48b PPGTT Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-14-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-13-chris@chris-wilson.co.uk
-
Matthew Auld authored
Support inserting 2M gtt pages into the 48b PPGTT. v2: sanity check sg->length against page_size v3: don't recalculate rem on each loop; whitespace breakup Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-13-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-12-chris@chris-wilson.co.uk
-
Matthew Auld authored
When SW enables the use of 2M/1G pages, it must disable the GTT cache. v2: don't disable for Cherryview which doesn't even support 48b PPGTT! v3: explicitly check that the system does support 2M/1G pages v4: split WA and decision logic Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-12-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-11-chris@chris-wilson.co.uk
-
Matthew Auld authored
Before we can enable 64K pages through the IPS bit, we must first enable it through MMIO, otherwise the page-walker will simply ignore it. v2: add comment mentioning that 64K is BDW+ v3: move to more suitable home Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-11-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-10-chris@chris-wilson.co.uk
-
Matthew Auld authored
We can't mix 64K and 4K pte's in the same page-table, so for now we align 64K objects to 2M to avoid any potential mixing. This is potentially wasteful but in reality shouldn't be too bad since this only applies to the virtual address space of a 48b PPGTT. v2: don't separate logically connected ops Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-10-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-9-chris@chris-wilson.co.uk
-
Matthew Auld authored
For the 48b PPGTT try to align the vma start address to the required page size boundary to guarantee we use said page size in the gtt. If we are dealing with multiple page sizes, we can't guarantee anything and just align to the largest. For soft pinning and objects which need to be tightly packed into the lower 32bits we don't force any alignment. v2: various improvements suggested by Chris v3: use set_pages and better placement of page_sizes v4: prefer upper_32_bits() v5: assign vma->page_sizes = vma->obj->page_sizes directly; prefer sizeof(vma->page_sizes) v6: fixup checking of end to exclude GGTT (which are assumed to be limited to 4G). Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-9-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-8-chris@chris-wilson.co.uk
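The alignment rule described above amounts to bumping the requested alignment up to the largest page size present in the object, but only for the 48b ppgtt and only when the caller hasn't fixed or constrained the placement; a simplified sketch (the flag check stands in for "soft-pinned or 32bit-constrained" and is illustrative):

	if (i915_vm_is_48bit(vma->vm) &&
	    !(flags & (PIN_OFFSET_FIXED | PIN_ZONE_4G))) {
		/* Align to the largest (power-of-two) page size available. */
		u64 page_alignment =
			rounddown_pow_of_two(vma->page_sizes.sg |
					     I915_GTT_PAGE_SIZE_4K);

		alignment = max(alignment, page_alignment);
	}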
-
Matthew Auld authored
Move the setting/clearing of the vma->pages to a vm operation. Doing so neatens things up a little, but more importantly gives us a sane place to also set/clear the vma->pages_sizes, which we introduce later in preparation for supporting huge-pages. v2: remove redundant vma->pages check v3: GEM_BUG_ON(vma->pages) following i915_vma_remove Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-8-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-7-chris@chris-wilson.co.uk
-
Matthew Auld authored
In preparation for supporting huge gtt pages for the ppgtt, we introduce page size members for gem objects. We fill in the page sizes by scanning the sg table. v2: pass the sg_mask to set_pages v3: calculate the sg_mask inline with populating the sg_table where possible, and pass to set_pages along with the pages. v4: bunch of improvements from Joonas v5: fix num_pages blunder; introduce i915_sg_page_sizes helper v6: prefer GEM_BUG_ON(sizes == 0) Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-7-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-6-chris@chris-wilson.co.uk
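The i915_sg_page_sizes() helper mentioned in v5 presumably just ORs together the length of every sg chunk, so the result is a mask with a bit set for every page size present in the object (a sketch under that assumption, using the generic sg_next() iterator):

	static unsigned int i915_sg_page_sizes(struct scatterlist *sg)
	{
		unsigned int page_sizes = 0;

		for (; sg; sg = sg_next(sg)) {
			GEM_BUG_ON(sg->offset);
			GEM_BUG_ON(!IS_ALIGNED(sg->length, PAGE_SIZE));
			page_sizes |= sg->length;	/* accumulate chunk sizes */
		}

		return page_sizes;
	}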
-
Matthew Auld authored
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-6-matthew.auld@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-5-chris@chris-wilson.co.uk
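With this change the tail of every get_pages() backend looks roughly the same (a sketch; the page-size mask argument corresponds to the sg_mask that, per the message, arrives in a later patch in the series):

	/* Sketch: tail of a get_pages() backend once the sg_table is built. */
	sg_page_sizes = i915_sg_page_sizes(st->sgl);

	__i915_gem_object_set_pages(obj, st, sg_page_sizes);

	return 0;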
-