- 27 Jul, 2015 2 commits
-
-
Daniel Vetter authored
Chris rightfully suggested that documenting fences without documenting the BO tiling tracking doesn't make much sense, so fix that. The important bit to stress here (since it led to some confusion) is that GEM doesn't really care about tiling. Except for a few select cases where the kernel needs to manage something that userspace can't take care of: namely the limited number of fences and fixing up swizzling, although we still fail at the latter. v2: Move the low-level tiling/swizzling functions and kerneldoc to i915_gem_fence.c and leave only the userspace interface here. Suggested by Chris. Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Daniel Vetter authored
It fits more with the low-level fence code, and this move leaves only the userspace tiling ioctl handling in i915_gem_tiling.c. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 14 Jul, 2015 1 commit
-
-
Daniel Vetter authored
This reverts commit 19ee835c. It breaks existing old userspace which doesn't handle UNKNOWN swizzling correctly. Yes, UNKNOWN was a thing back in 2009 and probably still is on some other platforms, but it still pretty clearly broke the tester's machine. If we want this we need to extend the ioctl with new parameters that only new userspace looks at. Cc: Harald Arnesen <harald@skogtun.org> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reported-by: Harald Arnesen <harald@skogtun.org> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 09 Jul, 2015 1 commit
-
-
Chris Wilson authored
The old style of memory interleaving swizzled up to the end of the first even bank of memory, and then used the remainder as unswizzled on the unpaired bank - i.e. swizzling is not constant for all memory. This causes problems when we try to migrate memory, and so the kernel prevents migration at all when we detect L-shaped inconsistent swizzling. However, this issue also extends to userspace that tries to manually detile into memory: since the swizzling for an individual page is unknown (it depends on its physical address, known only to the kernel), userspace cannot correctly swizzle objects. v2: Mark the global swizzling as unknown rather than adjust the value reported to userspace. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=91105 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
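As a reminder of why per-page physical addresses matter here, below is a minimal sketch (illustration only, not the kernel's code) of the common I915_BIT_6_SWIZZLE_9_10 mode: bit 6 of an access is flipped based on physical address bits 9 and 10, which userspace working on a CPU mapping simply cannot know.

/*
 * Illustration only: I915_BIT_6_SWIZZLE_9_10 flips bit 6 of an address
 * depending on bits 9 and 10.  Because those are *physical* address
 * bits, userspace cannot compute this without help from the kernel -
 * hence L-shaped (inconsistent) configurations must be reported as
 * UNKNOWN swizzling.
 */
static unsigned long swizzle_9_10(unsigned long phys_offset)
{
	unsigned long bit9 = (phys_offset >> 9) & 1;
	unsigned long bit10 = (phys_offset >> 10) & 1;

	return phys_offset ^ ((bit9 ^ bit10) << 6);
}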
-
- 16 Apr, 2015 1 commit
-
-
Chris Wilson authored
Since the removal of the user pin_ioctl, the only means for pinning an object is either through binding to the scanout or during execbuf reservation. As the latter prevents a call to set-tiling, we need only check if the obj is pinned into the display plane to see if we need to reject the set-tiling ioctl. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Feb, 2015 1 commit
-
-
Chris Wilson authored
When we walk the list of vma, or even for protecting against concurrent framebuffer creation, we must hold the struct_mutex or else a second thread can corrupt the list as we walk it. Fixes regression from commit d7f46fc4 Author: Ben Widawsky <benjamin.widawsky@intel.com> Date: Fri Dec 6 14:10:55 2013 -0800 drm/i915: Make pin count per VMA References: https://bugs.freedesktop.org/show_bug.cgi?id=89085 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Signed-off-by: Jani Nikula <jani.nikula@intel.com>
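A rough sketch of the rule this fix enforces (field names such as vma_list, vma_link and pin_count are assumptions based on the driver of this era, not taken from the patch itself): any walk of an object's VMA list must happen under struct_mutex.

/*
 * Sketch only: walking obj->vma_list without struct_mutex races with
 * concurrent bind/unbind.  Field names are assumed for illustration.
 */
static bool obj_has_pinned_vma(struct drm_device *dev,
			       struct drm_i915_gem_object *obj)
{
	struct i915_vma *vma;
	bool pinned = false;

	mutex_lock(&dev->struct_mutex);
	list_for_each_entry(vma, &obj->vma_list, vma_link) {
		if (vma->pin_count) {
			pinned = true;
			break;
		}
	}
	mutex_unlock(&dev->struct_mutex);

	return pinned;
}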
-
- 03 Dec, 2014 1 commit
-
-
John Harrison authored
The object structure contains the last read, write and fenced seqno values for use in synchronisation operations. These have now been replaced with their request structure counterparts. Note that to ensure that objects do not end up with dangling pointers, the assignments of last_*_req include reference count updates. Thus a request cannot be freed if an object is still hanging on to it for any reason. v2: Corrected 'last_rendering_' to 'last_read_' in a number of comments that did not get updated when 'last_rendering_seqno' became 'last_read|write_seqno' several millennia ago. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 Nov, 2014 1 commit
-
-
Daniel Vetter authored
Let's just throw in the towel on this one and take the cheap way out. Based on a patch from Chris Wilson, but checking for a different bit. Chris' patch checked for even bank layout, this one here for a magic bit. Given the evidence we've gathered (not much) both work I think, but checking for the magic bit might be more accurate. Anyway, works on my gm45 here. For paranoia, restrict to gen4 (and mobile), since we've only ever seen this on gm45 and i965gm. Also add some debugfs output so that we can skip the tiled swapping tests properly in these cases. v2: Clean up the quirk'ed pin count in free_object to avoid upsetting the WARN_ON. Spotted by Chris. Cc: Chris Wilson <chris@chris-wilson.co.uk> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28813 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=45092 Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 11 Nov, 2014 1 commit
-
-
Chris Wilson authored
As obj->map_and_fenceable computation has changed to only be set when the object is bound inside the global GTT (and is suitably aligned to a fence region) we need to accommodate those changes when the tiling is adjusted. The easiest solution is to unbind from the global GTT if we are currently fenceable, but will not be after the tiling change. The bug has been exposed by commit f8fcadba218fe6d23b2e353fea1cf0a4be4c9454 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Fri Oct 31 13:53:52 2014 +0000 drm/i915: Only mark as map-and-fenceable when bound into the GGTT which tried to fix an oversight from commit e6a84468 Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Mon Aug 11 12:00:12 2014 +0200 drm/i915: Force CPU relocations if not GTT mapped which changed the handling of obj->map_and_fenceable. Note that the alignment check is a vestige from our attempts to reduce the alignment requirements of tiled but unfenced buffers on gen2/3. Also, that was when unbinding from the GTT meant UC writes and clflushing, so we went to great pains to avoid such. That leaves the actual bug of setting map_and_fenceable to true if we're not bound to ggtt, which violates the change introduced in the above patch. Unbinding in that case really looks like the simplest and safest option, we have to do it anyway. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85896 Testcase: igt/gem_concurrent_blit/gttX* Tested-by: huax.lu@intel.com Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Tested-by: Valtteri Rantala <valtteri.rantala@intel.com> [Jani: amend commit message per input from Daniel and bisect result from Valtteri] Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 07 Nov, 2014 1 commit
-
-
Chris Wilson authored
Userspace cares about whether or not swizzling depends on the page address for its direct access into bound objects. Extend the get_tiling ioctl to report the physical swizzling value in addition to the logical swizzling value so that userspace can accurately determine when manual detiling is possible. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Akash Goel <akash.goel@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Testcase: igt/gem_tiled_wc Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
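A hedged userspace sketch of how the extended ioctl might be consumed; phys_swizzle_mode is the field this patch adds to struct drm_i915_gem_get_tiling, and the header paths assume a libdrm build environment.

/* Build against libdrm: xf86drm.h and i915_drm.h come from its include dir. */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <i915_drm.h>

/*
 * Returns 1 if manual detiling from the CPU is safe (physical and
 * logical swizzling agree), 0 if not, negative errno on failure.
 * Sketch only - verify the field against your uapi header.
 */
static int can_detile_manually(int fd, uint32_t handle)
{
	struct drm_i915_gem_get_tiling get;

	memset(&get, 0, sizeof(get));
	get.handle = handle;

	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING, &get))
		return -errno;

	return get.phys_swizzle_mode == get.swizzle_mode;
}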
-
- 24 Oct, 2014 1 commit
-
-
Jesse Barnes authored
Some machines (like MBAs) might use a tiled framebuffer but not enable display swizzling at boot time. We want to preserve that configuration if possible to prevent a boot time mode set. On IVB+ it shouldn't affect performance since the memory controller does internal swizzling anyway. For most other configs we'll be able to enable swizzling at boot time, since the initial framebuffer won't be tiled, thus we won't see any corruption when we enable it. v2: preserve swizzling if BIOS had it set (Daniel) v3: preserve swizzling only if we inherited a tiled framebuffer (Daniel) check display swizzle setting in detect_bit_6_swizzle (Daniel) use gen6 as cutoff point (Daniel) v4: fixup swizzle preserve again, had wrong init order (Daniel) Reported-by: Kristian Høgsberg <hoegsberg@gmail.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 Sep, 2014 1 commit
-
-
Damien Lespiau authored
Previously, it was possible for the GPU memory accesses to be swizzled to try to optimize the fetches for tiled buffers. This swizzling was on top of what the memory controller in the uncore already does. With broadwell, we drop that GPU side swizzling, and the corresponding initialization in 3 units (GAM, GT, DE). All those bits are reserved, as the specs put it: "Before Gen8, there was a historical configuration control field to swizzle address bit[6] for in X/Y tiling modes. This was set in three different places: TILECTL[1:0], ARB_MODE[5:4], and DISP_ARB_CTL[14:13]." For Gen8 the swizzle fields are all reserved, and the CPU's memory controller performs all address swizzling modifications. This also means that user space doesn't have to manually swizzle when accessing tiled buffers from the CPU, and so we always return I915_BIT_6_SWIZZLE_NONE from i915_gem_detect_bit_6_swizzle(), which short-circuits the initialization of the registers mentioned above in i915_gem_init_swizzling(). v2: Refine the explanation a bit more (Daniel) v3: Make it BDW+ specific (Steve) Cc: Steve Aarnio <steve.j.aarnio@linux.intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> [danvet: Keep the actual code to set the tiling bits for now, in case some bios escaped to the wild that uses this - we'd need it for fastboot.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 11 Aug, 2014 1 commit
-
-
Chris Wilson authored
This migrates the fence tracking onto the existing seqno infrastructure so that the later conversion to tracking via requests is simplified. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 31 Mar, 2014 1 commit
-
-
Jani Nikula authored
Remove the rest of the references to drm_i915_private_t. No functional changes. Signed-off-by: Jani Nikula <jani.nikula@intel.com> [danvet: Drop hunk in i915_cmd_parser.c] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Dec, 2013 1 commit
-
-
Ben Widawsky authored
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 16 Oct, 2013 1 commit
-
-
Daniel Vetter authored
Assuming that all framebuffer related metadata is invariant simplifies our userspace input data checking. And current userspace always first updates the tiling of an object before creating a framebuffer with it. This allows us to upconvert a check in pin_and_fence to a WARN. In the future it should also be helpful to know which buffer objects are potential scanout targets for e.g. frontbuffer rendering tracking and similar things. Note that SNA shipped for one prerelease with code which will be broken through this patch. But users shouldn't notice since it's purely an optimization and will transparently fall back to allocating a new fb. i-g-t also had offending code (now fixed), but we don't really care about breaking the test-suite. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Grumpily-reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 Oct, 2013 1 commit
-
-
Daniel Vetter authored
No buffer overflows here, but better safe than sorry. v2: - Fixup the sizeof conversion, I've missed the pointer deref (Jani). - Drop the redundant GFP_ZERO, kcalloc already memsets (Jani). - Use kmalloc_array for the execbuf fastpath to avoid the memset (Chris). I've opted to leave all other conversions as-is since they aren't in a fastpath and dealing with cleared memory instead of random garbage is just generally nicer. Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Jani Nikula <jani.nikula@intel.com> [danvet: Drop the contentious kmalloc_array hunk in execbuf.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
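For reference, a small sketch of the three allocator forms involved; the struct name is made up purely for illustration.

#include <linux/slab.h>
#include <linux/types.h>

struct entry { u64 offset; u64 value; };	/* hypothetical */

static struct entry *alloc_entries(unsigned int count)
{
	/*
	 * Old form - the multiplication can overflow silently:
	 *	kzalloc(count * sizeof(struct entry), GFP_KERNEL);
	 *
	 * kcalloc() adds the overflow check and zeroes the memory:
	 */
	return kcalloc(count, sizeof(struct entry), GFP_KERNEL);

	/*
	 * Where the memset is measurable (e.g. the execbuf fastpath),
	 * kmalloc_array() keeps the overflow check but skips zeroing:
	 *	kmalloc_array(count, sizeof(struct entry), GFP_KERNEL);
	 */
}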
-
- 08 Aug, 2013 1 commit
-
-
Ben Widawsky authored
As alluded to in several patches, and it will be reiterated later... A VMA is an abstraction for a GEM BO bound into an address space. Therefore it stands to reason that the existing bind and unbind are the ones which will be the most impacted. This patch implements this, and updates all callers which weren't already updated in the series (because it was too messy). This patch represents the bulk of an earlier, larger patch. I've pulled out a bunch of things by the request of Daniel. The history is preserved for posterity with the email convention of ">". One big change from the original patch aside from a bunch of cropping is I've created an i915_vma_unbind() function. That is because we always have the VMA anyway, and doing an extra lookup is useful. There is a caveat: we retain an i915_gem_object_ggtt_unbind, for the global cases which might not talk in VMAs. > drm/i915: plumb VM into object operations > > This patch was formerly known as: > "drm/i915: Create VMAs (part 3) - plumbing" > > This patch adds a VM argument, bind/unbind, and the object > offset/size/color getters/setters. It preserves the old ggtt helper > functions because things still need, and will continue to need them. > > Some code will still need to be ported over after this. > > v2: Fix purge to pick an object and unbind all vmas > This was doable because of the global bound list change. > > v3: With the commit to actually pin/unpin pages in place, there is no > longer a need to check if unbind succeeded before calling put_pages(). > Make put_pages only BUG() after checking pin count. > > v4: Rebased on top of the new hangcheck work by Mika > plumbed eb_destroy also > Many checkpatch related fixes > > v5: Very large rebase > > v6: > Change BUG_ON to WARN_ON (Daniel) > Rename vm to ggtt in preallocate stolen, since it is always ggtt when > dealing with stolen memory. (Daniel) > list_for_each will short-circuit already (Daniel) > remove superfluous space (Daniel) > Use per object list of vmas (Daniel) > Make obj_bound_any() use obj_bound for each vm (Ben) > s/bind_to_gtt/bind_to_vm/ (Ben) > > Fixed up the inactive shrinker. As Daniel noticed the code could > potentially count the same object multiple times. While it's not > possible in the current case, since 1 object can only ever be bound into > 1 address space thus far - we may as well try to get something more > future proof in place now. With a prep patch before this to switch over > to using the bound list + inactive check, we're now able to carry that > forward for every address space an object is bound into. Signed-off-by: Ben Widawsky <ben@bwidawsk.net> [danvet: Rebase on top of the loss of "drm/i915: Cleanup more of VMA in destroy".] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 Jul, 2013 1 commit
-
-
Ben Widawsky authored
Soon we want to gut a lot of our existing assumptions how many address spaces an object can live in, and in doing so, embed the drm_mm_node in the object (and later the VMA). It's possible in the future we'll want to add more getter/setter methods, but for now this is enough to enable the VMAs. v2: Reworked commit message (Ben) Added comments to the main functions (Ben) sed -i "s/i915_gem_obj_set_color/i915_gem_obj_ggtt_set_color/" drivers/gpu/drm/i915/*.[ch] sed -i "s/i915_gem_obj_bound/i915_gem_obj_ggtt_bound/" drivers/gpu/drm/i915/*.[ch] sed -i "s/i915_gem_obj_size/i915_gem_obj_ggtt_size/" drivers/gpu/drm/i915/*.[ch] sed -i "s/i915_gem_obj_offset/i915_gem_obj_ggtt_offset/" drivers/gpu/drm/i915/*.[ch] (Daniel) v3: Rebased on new reserve_node patch Changed DRM_DEBUG_KMS to actually work (will need fixing later) Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Apr, 2013 2 commits
-
-
Ville Syrjälä authored
BSpec contains several scattered notes which state that the maximum fence stride was increased to 256KB on IVB. Testing on real hardware agrees. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Our checks for an invalid fence stride forgot to guard against zero stride on gen4+. Fix it. v2: Avoid duplicated code (danvet) Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 27 Mar, 2013 1 commit
-
-
Imre Deak authored
The i915 driver uses sg lists for memory without backing 'struct page' pages, similarly to other IO memory regions, setting only the DMA address for these. It does this, so that it can program the HW MMU tables in a uniform way both for sg lists with and without backing pages. Without a valid page pointer we can't call nth_page to get the current page in __sg_page_iter_next, so add a helper that relevant users can call separately. Also add a helper to get the DMA address of the current page (idea from Daniel). Convert all places in i915, to use the new API. Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
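A minimal sketch of the resulting iteration pattern; the helper names match the scatterlist API this change introduces, but verify the signatures against your kernel version (the DMA-address helper moved to a dedicated iterator type in later kernels).

#include <linux/scatterlist.h>

/*
 * Sketch: the page iterator itself no longer dereferences struct page,
 * so callers pick the accessor that matches their sg list.  Lists built
 * from IO memory carry only DMA addresses and must not call the page
 * helper.
 */
static void walk_dma_addresses(struct sg_table *st)
{
	struct sg_page_iter iter;

	for_each_sg_page(st->sgl, &iter, st->nents, 0) {
		dma_addr_t addr = sg_page_iter_dma_address(&iter);
		/* struct page *page = sg_page_iter_page(&iter);
		 * ...only valid when backing pages exist. */

		(void)addr;	/* e.g. program a GTT/PPGTT entry here */
	}
}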
-
- 23 Mar, 2013 1 commit
-
-
Imre Deak authored
So far the assumption was that each dma scatter list entry contains only a single page. This might not hold in the future, when we'll introduce compact scatter lists, so prepare for this everywhere in the i915 code where we walk such a list. We'll fix the place _creating_ these lists separately in the next patch to help the reviewing/bisectability. Reference: http://www.spinics.net/lists/dri-devel/msg33917.html Signed-off-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 17 Jan, 2013 3 commits
-
-
Ben Widawsky authored
The purpose of the gtt structure is to help isolate our gtt specific properties from the rest of the code (in doing so it helps us finish the isolation from the AGP connection). The following members are pulled out (and renamed): gtt_start, gtt_total, gtt_mappable_end, gtt_mappable, gtt_base_addr, gsm. The gtt structure will serve as a nice place to put gen specific gtt routines in upcoming patches. As far as what else I feel belongs in this structure: it is meant to encapsulate the GTT's physical properties. This is why I've not added fields which track various drm_mm properties, or things like gtt_mtrr (which is itself a pretty transient field). Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com> [Ben modified commit messages] Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Imre Deak authored
Signed-off-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Imre Deak authored
The two functions are rather similar, so merge them. Signed-off-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 Dec, 2012 1 commit
-
-
Chris Wilson authored
On a machine with bit17 swizzling, we need to store the bit17 of the physical page address in put-pages. This requires a memory allocation, on average less than a page, which may be difficult to satisfy if the request to put-pages is on behalf of the shrinker. We could allow that allocation to pull from the reserved memory pools, but it seems much safer to preallocate the array for tiled objects on affected machines. v2: Export i915_gem_object_needs_bit17_swizzle() for reuse. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
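What the preallocated array records, as a rough sketch; the shape follows the driver's save-bit-17 helper of this era, but the field names (obj->pages as an sg_table, obj->bit_17 as a bitmap) are assumptions here.

/*
 * Sketch only: on put-pages, remember bit 17 of every page's physical
 * address so the data can be re-swizzled when the pages come back from
 * swap at different physical locations.
 */
static void save_bit_17(struct drm_i915_gem_object *obj)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(obj->pages->sgl, sg, obj->pages->nents, i) {
		if (page_to_phys(sg_page(sg)) & (1 << 17))
			__set_bit(i, obj->bit_17);
		else
			__clear_bit(i, obj->bit_17);
	}
}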
-
- 03 Oct, 2012 1 commit
-
-
Jesse Barnes authored
We don't have bit 6 swizzling on VLV, so this function is easy. Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 02 Oct, 2012 2 commits
-
-
David Howells authored
Convert #include "..." to #include <path/...> in drivers/gpu/. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Dave Airlie <airlied@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>
-
David Howells authored
Remove redundant DRM UAPI header #inclusions from drivers/gpu/. Remove redundant #inclusions of core DRM UAPI headers (drm.h, drm_mode.h and drm_sarea.h). They are now #included via drmP.h and drm_crtc.h via a preceding patch. Without this patch and the patch to make the core headers include the UAPI headers, after the UAPI split the DRM C sources cannot find these UAPI headers, because the DRM code relies on specific -I flags to make #include "..." work on headers in include/drm/ - but that does not work after the UAPI split without adding more -I flags. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Dave Airlie <airlied@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>
-
- 20 Sep, 2012 1 commit
-
-
Chris Wilson authored
Rather than have multiple data structures for describing our page layout in conjunction with the array of pages, we can migrate all users over to a scatterlist. One major advantage this offers, other than unifying the page tracking structures, is that we replace the vmalloc'ed array (which can be up to a megabyte in size) with a chain of individual pages which helps reduce memory pressure. The disadvantage is that we then do not have a simple array to iterate, or to access randomly. The common case for this is in the relocation processing, which will typically fit within a single scatterlist page and so be almost the same cost as the simple array. For iterating over the array, the extra function call could be optimised away, but in reality is an insignificant cost of either binding the pages, or performing the pwrite/pread. v2: Fix drm_clflush_sg() to not invoke wbinvd as well! And fix the trivial compile error from rebasing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
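The access-pattern tradeoff described above, sketched below; it assumes one page per sg entry, which held in the driver at the time of this conversion.

#include <linux/scatterlist.h>

/*
 * Sketch of the cost of losing the flat page array: random access now
 * means walking the chain, while sequential iteration stays cheap.
 */
static struct page *nth_obj_page(struct sg_table *st, unsigned int n)
{
	struct scatterlist *sg;
	unsigned int i;

	for_each_sg(st->sgl, sg, st->nents, i)
		if (i == n)
			return sg_page(sg);

	return NULL;
}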
-
- 03 May, 2012 2 commits
-
-
Chris Wilson authored
If we fail to unbind and so abort the change in tiling, we will have removed the VMA for the object for no reason. The likelihood of unbind failing is slim (other than ERESTARTSYS which will cause userspace to try again), so the change is mostly for the principle. Also improve the slightly stale comment. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
Rename obj->tiling_changed to obj->fence_dirty so that it is clear that it flags when the parameters for an active fence (including the no-fence) register are changed. Also, do not set this flag when the object does not have a fence register allocated currently and the gpu does not depend upon the unfence. This case works exactly like when a tiled object lost its fence and hence does not need additional handling for the tiling change in the code. v2: Use fence_dirty to better express what the flag tracks and add a few more details to the comments to serve as a reminder of how the GPU also uses the unfenced register slot. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Add some bikeshed to the commit message about the stricter use of fence_dirty.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 Feb, 2012 1 commit
-
-
Daniel Vetter authored
We have to do this manually. Somebody had a Great Idea. I've measured speed-ups just a few percent above the noise level (below 5% for the best case), but no slowdowns. Chris Wilson measured quite a bit more (10-20% above the usual snb variance) on a more recent and better tuned version of sna, but also recorded a few slow-downs on benchmarks known for uglier amounts of snb-induced variance. v2: Incorporate Ben Widawsky's preliminary review comments and elaborate a bit about the performance impact in the changelog. v3: Add a comment as to why we don't need to check the 3rd memory channel. v4: Fixup whitespace. Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Reviewed-by: Eric Anholt <eric@anholt.net> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 30 Jan, 2012 1 commit
-
-
Daniel Vetter authored
It looks like the desktop variants of i915 and i945 also have the DCC register to control dram channel interleave and cpu side bit6 swizzling. Unfortunately internal Cspec/ConfigDB documentation for these ancient chips has already been dropped and there seem to be no archives. Also somebody thought the swizzling behaviour is surely a worthy secret to keep and redacted any mention of these fields from the published Intel datasheets. I suspect the hw engineers were really proud of the page coloring they've achieved in their first dual channel dram controller with bit17 - after all Bspec explains in great length the optimal layout of page frame numbers modulo 4 for the color and depth buffers, too. Later on when they've started to work on VT-d they shamefully discovered their stupidity and tried to cover the tracks ... Tested-by: Daniel Vetter <daniel.vetter@ffwll.ch> (i915g) Tested-by: Pavel Ondračka <pavel.ondracka@email.cz> (i945g) Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42625 Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 Oct, 2011 2 commits
-
-
Daniel Vetter authored
Use the helper function already employed by the pwrite/pread functions. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Keith Packard <keithp@keithp.com>
-
Daniel Vetter authored
Fixes tests/gem_tiled_pread on my snb. I know, mesa doesn't use this on gen6+, but I also hate failing testcases. Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Keith Packard <keithp@keithp.com>
-
- 18 Jul, 2011 1 commit
-
-
Chris Wilson authored
Align unfenced buffers on older hardware to the power-of-two object size. The docs suggest that it should be possible to align only to a power-of-two tile height, but using the already computed fence size is easier and always correct. We also have to make sure that we unbind misaligned buffers upon tiling changes. In order to prevent a repetition of this bug, we change the interface to the alignment computation routines to force the caller to provide the requested alignment and size of the GTT binding rather than assume the current values on the object. Reported-and-tested-by: Sitsofe Wheeler <sitsofe@yahoo.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=36326 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@kernel.org Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Keith Packard <keithp@keithp.com>
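The fence-size rounding referred to above, sketched below; the minimum granularities match the gen2/3 fence hardware, but treat this as an illustration rather than the driver's exact helper.

/*
 * Sketch: on gen2/3 a fence covers a power-of-two region starting at
 * the minimum fence size (1MiB on gen3, 512KiB on gen2), so unfenced
 * tiled buffers are aligned to this same rounded-up size.
 */
static u32 tiled_fence_size(int gen, u32 obj_size)
{
	u32 fence_size = (gen == 3) ? 1024 * 1024 : 512 * 1024;

	while (fence_size < obj_size)
		fence_size <<= 1;

	return fence_size;
}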
-
- 14 May, 2011 1 commit
-
-
Jesse Barnes authored
Treat it like Ironlake and Sandy Bridge. Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Reviewed-by: Keith Packard <keithp@keithp.com> Signed-off-by: Keith Packard <keithp@keithp.com>
-
- 07 Mar, 2011 1 commit
-
-
Chris Wilson authored
Early gen3 and gen2 chipsets do not have the relaxed per-surface tiling constraints of the later chipsets, so we need to check that the GTT alignment is correct for the new tiling. If it is not, we need to rebind. Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-