1. 08 Aug, 2012 1 commit
  2. 26 Jul, 2012 5 commits
    • drm/i915: prevent possible pin leak on error path · ab3951eb
      Eugeni Dodonov authored
      We should not hit this under any sane conditions, but still, this does
      not look right.
      
      CC: Chris Wilson <chris@chris-wilson.co.uk>
      CC: Daniel Vetter <daniel.vetter@ffwll.ch>
      CC: stable@vger.kernel.org
      Reported-by: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: rip out sanitize_pm again · acbe9475
      Daniel Vetter authored
      We believe we have squashed all the issues around gen6+ rps interrupt
      generation and why the gpu sometimes got stuck. With that cleared up,
      there's no user left for the sanitize_pm infrastructure, so let's just
      rip it out.
      
      Note that 'intel_reg_write 0xa014 0x13070000' is the w/a if we find
      ourselves stuck again.
      Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
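      The noted workaround pokes the RP interrupt limits register (0xa014).
      As a hedged sketch, assuming the i915_reg.h name
      GEN6_RP_INTERRUPT_LIMITS for that offset, the in-driver equivalent of
      the intel_reg_write invocation above would be roughly:

          /* Sketch only: restore the RP interrupt limits to the value the
           * w/a note writes via intel_reg_write from intel-gpu-tools. */
          I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, 0x13070000);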
    • drm/i915: Only set the down rps limit when at the lowest frequency · 20b46e59
      Daniel Vetter authored
      The power docs say that when the gt leaves rc6, it is at the lowest
      frequency and only about 25 usec later will switch to the frequency
      selected in GEN6_RPNSWREQ. If the downclock limit expires in that
      window and the down limit is set to the lowest possible frequency, the
      hw will not send the down interrupt, which leaves the gpu clock too
      high and wastes power.
      
      Chris Wilson already worked on this with
      
      commit 7b9e0ae6
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Sat Apr 28 08:56:39 2012 +0100
      
          drm/i915: Always update RPS interrupts thresholds along with
          frequency
      
      but got the logic inverted: the current code sets the down limit as
      long as we haven't reached it, instead of only once we have reached
      the lowest frequency.
      
      Note that we can't always set the downclock limit to 0, because
      otherwise the hw will keep on bugging us with downclock request irqs
      once the lowest level is reached.
      
      For similar reasons also always set the upclock limit, otherwise the
      hw might poke us again with interrupts.
      
      v2: Chris Wilson noticed that the limit reg is also computed in
      sanitize_pm. To avoid duplication, extract the code into a common
      function.
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
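      A minimal sketch of the limit computation described above, assuming
      the GEN6_RP_INTERRUPT_LIMITS layout with the up limit in bits 31:24
      and the down limit in bits 23:16; the helper name and exact shape are
      illustrative, not the verbatim driver code:

          #include <linux/types.h>

          /* Compute the RP interrupt limits for the current frequency
           * (delay) value. */
          static u32 rps_limits(u8 cur_delay, u8 min_delay, u8 max_delay)
          {
                  /* Always set the up limit so the hw stops poking us with
                   * upclock request irqs once the highest level is reached. */
                  u32 limits = (u32)max_delay << 24;

                  /* Only set the down limit once we sit at the lowest
                   * frequency. Set any earlier, the hw will not send the
                   * down interrupt if the downclock limit expires in the
                   * ~25 usec window after leaving rc6; left at 0 at the
                   * lowest level, the hw keeps sending downclock irqs. */
                  if (cur_delay <= min_delay)
                          limits |= (u32)min_delay << 16;

                  return limits;
          }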
    • drm/i915: Export ability of changing cache levels to userspace · e6994aee
      Chris Wilson authored
      By selecting the cache level (essentially whether or not the CPU snoops
      any updates to the bo, and on more recent machines whether it resides
      inside the CPU's last-level-cache) a userspace driver is able to then
      manage all of its memory within buffer objects, if it so desires. This
      enables the userspace driver to accelerate uploads and, more
      importantly, downloads from the GPU, and to mix CPU and GPU
      rendering/activity efficiently.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: Added a code comment about where we plan to stuff
      platform-specific caching control bits in the ioctl struct.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
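      A hedged userspace sketch of driving the new ioctl, using the modern
      uapi names (struct drm_i915_gem_caching, DRM_IOCTL_I915_GEM_SET_CACHING
      and the I915_CACHING_* levels); the commit itself spelled the interface
      slightly differently, so treat the exact names as assumptions:

          #include <sys/ioctl.h>
          #include <drm/i915_drm.h>

          /* Ask the kernel to change the cache level of a bo; level is
           * e.g. I915_CACHING_NONE (no snooping) or I915_CACHING_CACHED
           * (snooped/LLC-cached). Returns 0 on success, -1 with errno
           * set on failure. */
          static int bo_set_caching(int drm_fd, __u32 handle, __u32 level)
          {
                  struct drm_i915_gem_caching arg = {
                          .handle = handle,
                          .caching = level,
                  };

                  return ioctl(drm_fd, DRM_IOCTL_I915_GEM_SET_CACHING, &arg);
          }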
    • drm/i915: Segregate memory domains in the GTT using coloring · 42d6ab48
      Chris Wilson authored
      Several functions of the GPU have the restriction that differing memory
      domains cannot be placed next to each other (as the GPU may prefetch
      beyond the end of one domain and hang as it crosses into the other
      domain). We use the facility of the drm_mm to mark ranges with a
      particular color that corresponds to the cache attributes of those pages
      in order to prevent allocating adjacent blocks of differing memory
      types.
      
      v2: Rebase on top of drm_mm coloring v2.
      v3: Fix rebinding existing gtt_space and add a verification routine.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
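      A hedged sketch of the coloring hook this relies on: the drm_mm
      color_adjust callback of that era, with the color encoding the cache
      attributes and a one-page guard enforced between differing colors.
      The body is illustrative rather than the verbatim i915 code:

          /* If a neighbouring node carries a different color (cache
           * attributes), shrink the usable range by a guard page on that
           * side so the GPU's prefetcher never runs directly from one
           * memory domain into the other. */
          static void gtt_color_adjust(struct drm_mm_node *node,
                                       unsigned long color,
                                       unsigned long *start,
                                       unsigned long *end)
          {
                  if (node->color != color)
                          *start += 4096;

                  if (!list_empty(&node->node_list)) {
                          node = list_entry(node->node_list.next,
                                            struct drm_mm_node, node_list);
                          if (node->allocated && node->color != color)
                                  *end -= 4096;
                  }
          }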
  3. 25 Jul, 2012 34 commits