- 04 Jun, 2013 1 commit
-
-
Daniel Vetter authored
As prep work to fix it up:
- Use intel_connector instead of drm_connector to avoid too much upcasting in the bugfix patch.
- Extract the connector bpp clamping from the loop-over-connectors logic.
- Bikeshed function names (to make it clearer that compute_baseline_pipe_bpp runs in the compute stage of the modeset sequence) and add a comment to make it clearer what it does.

No functional change in this patch.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 Jun, 2013 10 commits
-
-
Daniel Vetter authored
We only need to do them if the pipe is actually running and if the framebuffers have changed. Removes two "wait for vblank timed out" messages when doing a suspend/resume cycle on my i855gm.

v2: s/to_intel_crtc(crtc)/intel_crtc/ spotted by Chris.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
People don't like typedefs these days. Eliminate their use from intel_fb.c.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ville Syrjälä authored
Use container_of() instead of a cast to get struct intel_fbdev from struct drm_fb_helper. Also populate the fb_info->par correctly with the drm_fb_helper pointer instead of the intel_fbdev pointer. There's no actual functional change since the drm_fb_helper happens to be the first member inside intel_fbdev.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
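For reference, a minimal C sketch of the container_of() pattern this patch switches to; the struct body is abbreviated and to_intel_fbdev() is an illustrative helper name, not necessarily one added by the patch:

    #include <linux/kernel.h>        /* container_of() */
    #include <drm/drm_fb_helper.h>

    struct intel_fbdev {
        struct drm_fb_helper helper; /* embedded, but no longer assumed to be first */
        /* ... framebuffer, gem object, etc. ... */
    };

    /* Recover the containing intel_fbdev from the embedded drm_fb_helper
     * without relying on a pointer cast. */
    static inline struct intel_fbdev *to_intel_fbdev(struct drm_fb_helper *helper)
    {
        return container_of(helper, struct intel_fbdev, helper);
    }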
-
Mika Kuoppala authored
Rework of per ring hangcheck made this obsolete.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Mika Kuoppala authored
Keep track of ring seqno progress and, if no progress is detected, declare the ring hung. Use the actual head (acthd) to distinguish between a stuck ring and a looping batchbuffer. A stuck ring will be kicked to trigger progress.

This commit adds a hard limit for batchbuffer completion time. If batchbuffer completion time is more than 4.5 seconds, the gpu will be declared hung.

Review comment from Ben which nicely clarifies the semantic change:

"Maybe I'm just stating the functional changes of the patch, but in case they were unintended here is what I see as potential issues:

1. If ring B is waiting on ring A via semaphore, and ring A is making progress, albeit slowly - the hangcheck will fire. The check will determine that A is moving, however ring B will appear hung because the ACTHD doesn't move. I honestly can't say if that's actually a realistic problem to hit; it probably implies the timeout value is too low.

2. There's also another corner case on the kick. If the seqno = 2 (though not stuck), and on the 3rd hangcheck, the ring is stuck, and we try to kick it... we don't actually try to find out if the kick helped."

v2: use acthd to detect stuck ring from loop (Ben Widawsky)

v3: Use acthd to check when ring needs kicking. Declare hang on third time in order to give time for kick_ring to take effect.

v4: Update commit msg

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Paste in Ben's review comment.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
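A simplified sketch of the seqno/ACTHD heuristic described above; the 1.5 second sampling period is inferred from the 4.5 second limit and the three-strikes rule, and all names here are illustrative rather than the driver's actual fields:

    #include <linux/types.h>

    #define HANGCHECK_PERIOD_MS 1500  /* sampled by a timer; 3 * 1500ms = 4.5s hard limit */
    #define HUNG_SCORE          3     /* declare hang on the third check without progress */

    struct ring_progress {
        u32 last_seqno;   /* last completed request seen */
        u32 last_acthd;   /* last actual head pointer seen */
        int score;        /* consecutive checks without seqno progress */
    };

    /* Returns true when the ring should be declared hung. */
    static bool ring_hangcheck(struct ring_progress *rp, u32 seqno, u32 acthd,
                               void (*kick_ring)(void))
    {
        if (seqno != rp->last_seqno) {
            rp->score = 0;                /* requests complete: all good */
        } else if (acthd != rp->last_acthd) {
            rp->score++;                  /* head moves: batchbuffer is looping */
        } else {
            kick_ring();                  /* stuck: kick and keep counting */
            rp->score++;
        }

        rp->last_seqno = seqno;
        rp->last_acthd = acthd;
        return rp->score >= HUNG_SCORE;
    }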
-
Ben Widawsky authored
Since it will be used for the global bound/unbound list with full PPGTT, this helps clarify things for upcoming code rework.

Recommended-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
If we properly keep track of the pages_pin_count, then when we later add multiple address spaces, put_pages doesn't need any special checks to be able to perform its job.

CC: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Rebased on top of the fix for stolen memory pinning.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
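A minimal sketch of the invariant this relies on, with stand-in names (gem_obj_sketch, put_pages_sketch) rather than the driver's real drm_i915_gem_object helpers:

    #include <linux/errno.h>

    struct gem_obj_sketch {
        void *pages;                  /* backing storage, if attached */
        unsigned int pages_pin_count; /* holders that forbid dropping the pages */
    };

    static int put_pages_sketch(struct gem_obj_sketch *obj)
    {
        if (!obj->pages)
            return 0;                 /* nothing to release */
        if (obj->pages_pin_count)
            return -EBUSY;            /* still pinned (GTT binding, stolen, ...) */

        /* ... actually release the backing pages ... */
        obj->pages = NULL;
        return 0;
    }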
-
Ben Widawsky authored
The way the stolen handling works is we take a pin on the backing pages, but we never actually get a reference to the bo. On freeing objects allocated with stolen memory, the final unref will end up freeing the object with pinned pages count left. To enable an assertion to catch bugs in this code path, this patch cleans up that remaining pin.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
This makes it easier to catch leaks.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
It's not terribly interesting to know that a parameter doesn't exist, and it can get in the way of interesting messages, especially with the staggered VECS merging as we've done.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 31 May, 2013 29 commits
-
-
Paulo Zanoni authored
It just prints whether it's supported/enabled/disabled. Feature requested by the power management team.

v2: Checkpatch started complaining about seq_printf with 1 argument.

Requested-by: Kristen Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
IPS is still enabled by default. Feature requested by the power management team. This should also help testing the feature on some early pre-production hardware where there were relationship problems between IPS and PSR.

v2: Rebase on top of the newest IPS implementation.

v3: Check i915_enable_ips at compute_config, not supports_ips, so the kernel parameter will be ignored at haswell_get_pipe_config.

Requested-by: Kristen Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
Intermediate Pixel Storage is a feature that should reduce the number of times the display engine wakes up memory to read pixels, so it should allow deeper PC states. IPS can only be enabled on ULT pipe A with 8:8:8 pipe pixel formats.

With eDP 1920x1080 and correct watermarks but without FBC this moves my PC7 residency from 2.5% to around 38%.

v2:
- It's tied to pipe A, not port A
- Add pipe_config support (Chris)
- Add some assertions (Chris)
- Rebase against latest dinq

v3:
- Don't ever set ips_enabled to false (Daniel)
- Only check for ips_enabled at hsw_disable_ips (Daniel)

v4:
- Add hsw_compute_ips_config (Daniel)
- Use the new dump_pipe_config (Daniel)

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
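The constraints above boil down to a check along these lines; this is an illustrative sketch with stand-in parameters, not the actual hsw_compute_ips_config() signature:

    #include <linux/types.h>

    /* IPS may only be enabled when all of the following hold. */
    static bool pipe_supports_ips_sketch(bool enable_ips_param, bool is_ult,
                                         bool is_pipe_a, int pipe_bpp)
    {
        return enable_ips_param &&   /* i915_enable_ips module parameter */
               is_ult &&             /* IPS exists only on Haswell ULT */
               is_pipe_a &&          /* and only on pipe A */
               pipe_bpp <= 24;       /* 8:8:8 pipe pixel format */
    }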
-
Daniel Vetter authored
Now that we track the cpu transcoder we need accurately in the pipe config, we can finally fix up the transcoder check. With the current code eDP on port D will be broken since we'd erroneously cut the power.

For reference see

commit 2124b72e
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date:   Fri Mar 22 14:07:23 2013 -0300

    drm/i915: don't disable the power well yet

v2:
- Kill the now outdated comment (Paulo)
- Add the missing crtc->base.enabled check and consolidate it (Paulo)
- Smash all checks together, looks neater that way.

v3: Kill the unused encoder variable.

Cc: Takashi Iwai <tiwai@suse.de>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Xiang, Haihao authored
This will let userland try to use the new ring only when the appropriate kernel is present.

Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
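A minimal userspace check, assuming the libdrm headers of that era (drm_i915_getparam_t and I915_PARAM_HAS_VEBOX come from i915_drm.h); on older kernels the ioctl simply fails and the ring is treated as absent:

    #include <string.h>
    #include <xf86drm.h>
    #include <i915_drm.h>

    /* Returns non-zero when the kernel exposes the VECS ring. */
    static int has_vebox(int fd)
    {
        drm_i915_getparam_t gp;
        int value = 0;

        memset(&gp, 0, sizeof(gp));
        gp.param = I915_PARAM_HAS_VEBOX;
        gp.value = &value;

        if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
            return 0;  /* parameter unknown: kernel predates VECS support */
        return value;
    }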
-
Xiang, Haihao authored
A user can run a batchbuffer via the VEBOX ring.

Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
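Submitting to the new ring is then just a matter of the ring-selector flag; a sketch assuming libdrm, with buffer and relocation setup elided:

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <i915_drm.h>

    /* Submit an already-populated batch to the VECS ring. */
    static int exec_on_vebox(int fd, struct drm_i915_gem_exec_object2 *objs,
                             unsigned int count, unsigned int batch_len)
    {
        struct drm_i915_gem_execbuffer2 execbuf;

        memset(&execbuf, 0, sizeof(execbuf));
        execbuf.buffers_ptr = (uintptr_t)objs;
        execbuf.buffer_count = count;
        execbuf.batch_len = batch_len;
        execbuf.flags = I915_EXEC_VEBOX;  /* ring selector added by this patch */

        return drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
    }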
-
Xiang, Haihao authored
v2: Removed rebase relic VECS ring from i915_gem_request_info (Damien)

v3: s/hsw/hws in debugfs which I introduced in v2 (Jon)

Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
[Order changed, and modified by]
CC: "Bloomfield, Jon" <jon.bloomfield@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
Similar to a patch originally written by:

v2: Reversed the meanings of masked and enabled (Haihao)
Made non-destructive writes in case enable/disable rps runs first (Haihao)

v3: Reword error message (Damien)
Modify postinstall to do the right thing based on previous fixup. (Ben)

CC: Xiang, Haihao <haihao.xiang@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
v2: Use the correct lock to protect PM interrupt regs; this was accidentally lost from earlier (Haihao)
Fix return types (Ben)

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
The motivation here is that we're going to add some new interrupt definitions and handling outside of the GT interrupts, which is all we've managed so far (with some RPS exceptions). By consolidating the names in the future we can make things a bit cleaner as we don't need to define register names twice, and we can leverage the pretty decent overlap in HW registers since ILK.

To explain briefly what is in the comments: there are two sets of interrupt masking/enabling registers. At least so far, the definitions of the two sets overlap. The old code set up distinct names for interrupts in each set, ie. one for global, and one for ring. This made things confusing when using the wrong defines in the wrong places.

rebase: Modified VLV bits

v2: Renamed GT_RENDER_MASTER to GT_RENDER_CS_MASTER (Damien)

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
It's overkill on older gens, but it's useful for newer gens.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
PM interrupts have an expanded role on HSW: they help route the VEBOX interrupts. This patch is necessary to make the existing code which touches the mask and enable registers more friendly to other code paths that also will need these registers.

To be more explicit: at preinstall all interrupts are masked and disabled. This implies that preinstall should always happen before any enabling/disabling of RPS or other interrupts. The PMIMR is touched by the workqueue, so enable/disable touch IER and IIR. Similarly, the code currently expects IMR has no use outside of the RPS related interrupts, so they unconditionally set 0, or ~0. We could use IER in the workqueue, and IMR elsewhere, but since the workqueue use-case is more transient the existing usage makes sense.

Disable RPS events:
  IER := IER & ~GEN6_PM_RPS_EVENTS  // Disable RPS related interrupts
  IIR := GEN6_PM_RPS_EVENTS         // Clear any outstanding interrupts

Enable RPS events:
  IER := IER | GEN6_PM_RPS_EVENTS   // Enable the RPS related interrupts
  IIR := GEN6_PM_RPS_EVENTS         // Make sure there were no leftover events (really shouldn't happen)

v2: Shouldn't destroy PMIIR or PMIMR VEBOX interrupt state in enable/disable rps functions (Haihao)

v3: Bug found by Chris where we were clearing the wrong bits at rps disable. Expanded commit message.

v4: v3 was based off the wrong branch

v5: Added the setting of PMIMR because of previous patch update

CC: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
At the moment, these values are wiped out anyway by the rps enable/disable. That will be changed in the next patch though.

v2: Add post install setup to address issue found by Damien in the next patch.
Replaced WARN_ON(dev_priv->rps.pm_iir != 0); with rps.pm_iir = 0;

With the v2 of this patch and the deferred pm enabling (which changed since the original patches) we're now able to get PM interrupts before we've enabled rps. At this point in boot, we don't want to do anything about it, so we simply ignore it. Since writing the original assertion, the code has changed quite a bit, and I believe removing this assertion is perfectly safe.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: I don't agree with the justification to drop the WARN and added a FIXME to that effect.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
Just duplicates ironlake_irq_preinstall for now.

v2: Add new PCH_NOP check (Damien)
Add SDEIMR comment (Damien)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Update now outdated comment.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
HSW has some special requirements for the VEBOX. Splitting out the interrupt handler will make the code a bit nicer and less error prone when we begin to handle those.

The slight functional change in this patch (queueing work while holding the spinlock) is intentional as it makes a subsequent patch a bit nicer. The change should also only affect HSW platforms.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
Now we compute the results for both 1/2 and 5/6 partitioning and then use hsw_find_best_result to choose which one to use.

With this patch, Haswell watermarks support should be in good shape. The only improvement we're missing is the case where the primary plane is disabled: we always assume it's enabled, so we take it into consideration when calculating the watermarks.

v2:
- Check the latency when finding the best result

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
We were previously only setting the WM_PIPE registers; now we are also setting the LP watermark registers. This should allow deeper PC states, resulting in power savings. We're only using 1/2 data buffer partitioning for now.

v2: Merge both hsw_compute_pri_wm_* functions (Ville)

v3:
- Simplify hsw_compute_wm_results (Ville)
- Rebase due to changes on the previous patch

v4: Unconfuse wm_lp/level (Ville)

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
We were previously calling sandybridge_update_wm on HSW, but the SNB function didn't really match the HSW specification, so we were just writing the wrong values.

With this patch, the haswell_update_wm function will set the correct values for the WM_PIPE registers, but it will still keep all the LP watermarks disabled. The patch may look a little bit over-complicated for now, but it's because much of the infrastructure for setting the LP watermarks is already in place, so we won't have too much code churn on the patch that sets the LP watermarks.

v2:
- Fix pixel_rate on panel fitter case (Ville)
- Try to not overflow (Ville)
- Remove useless variable (Ville)
- Fix p->pri_horiz_pixels (Paulo)

v3:
- Fix rounding errors on hsw_wm_method2 (Ville)

v4:
- Fix memcmp bug (Paulo)

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
This was accidentally broken in the south error interrupt handling work:

commit 8664281b
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date:   Fri Apr 12 17:57:57 2013 -0300

    drm/i915: report Gen5+ CPU and PCH FIFO underruns

Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
v2: Add set_seqno which didn't exist before rebase (Haihao)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Xiang, Haihao authored
The flag will be useful to help share code between IVB and HSW, as the programming is similar in many places with this as one of the major differences.

Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
[Commit message + small fix by]
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
Historically we considered the render ring to have special flush semantics and everything else to fall under a more general umbrella. Probably by coincidence more than anything we decided to make the bsd ring have the default *other* flush. As the new vebox ring exposes, the bsd ring is actually the weird one. Doing this allows us to call gen6_ring_flush for the vebox, because calling blt_ring_flush would be weird...

This patch should have no functional change.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
Like the other rings, the VECS supports semaphores. The semaphore stuff is a bit wonky, so this patch on its own should be nice for review. This patch should have no functional impact.

v2: Fix the English parts of clarification (again, register names were right, text was reversed) (Damien)
Restore the still valid invariant. (Damien)
The bsd semaphore register should be MI_SEMAPHORE_SYNC_VVE (Damien)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
The video enhancement command streamer is a new ring on HSW which does what it sounds like it does. This patch provides the most minimal inception of the ring.

In order to support a new ring, we need to bump the number. The patch may look trivial to the untrained eye, but bumping the number of rings is a bit scary. As such the patch is not terribly useful by itself, but a pretty nice place to find issues during a bisection.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Ben Widawsky authored
This replaces the existing MBOX update code with a more generalized calculation for emitting mbox updates. We also create a sentinel for doing the updates so we can more abstractly deal with the rings.

When doing MBOX updates the code must be aware of the /other/ rings. Until now the platforms which supported semaphores had a fixed number of rings, and so it made sense for the code to be very specialized (hardcoded).

The patch does contain a functional change, but should have no behavioral changes.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
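A sketch of the generalized loop this describes: every *other* ring gets a mailbox write, and a sentinel marks pairings that have no mailbox (a ring never signals itself). Names and the emitted command layout are illustrative, not the driver's actual MI_SEMAPHORE encoding:

    #include <linux/types.h>

    #define MBOX_NONE 0  /* sentinel: no mailbox register for this ring pairing */

    /* Emit (register, seqno) pairs signalling every other ring; returns the
     * number of dwords written into cmds[]. */
    static unsigned int emit_mbox_updates_sketch(const u32 mbox_reg[], int num_rings,
                                                 int self, u32 seqno, u32 *cmds)
    {
        unsigned int n = 0;
        int i;

        for (i = 0; i < num_rings; i++) {
            if (i == self || mbox_reg[i] == MBOX_NONE)
                continue;
            cmds[n++] = mbox_reg[i];  /* the other ring's semaphore mailbox */
            cmds[n++] = seqno;        /* the value its waits compare against */
        }
        return n;
    }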
-
Ben Widawsky authored
Semaphores are tied very closely to the rings in the GPU. This trivial patch adds comments to the existing code so that when we add new rings we can include comments there as well. It also helps distinguish the ring-to-semaphore-mailbox interactions by using the ring name in the semaphore data structures.

This patch should have no functional impact.

v2: The English parts (as opposed to register names) of the comments were reversed. (Damien)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Mika Kuoppala authored
The crtc is holding a reference to a cursor bo, and it needs to be released when the crtc is destroyed so that we don't leak the cursor bo.

v2: Enhance set and move cursor so that a disabled cursor is handled correctly (Ville Syrjälä)

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
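A sketch of the fix described above, releasing the cursor bo reference in the crtc destroy path; the driver-internal types (struct intel_crtc, cursor_bo) are assumed, and whether the locked or unlocked unreference helper applies depends on the caller's locking, so take the call below as illustrative:

    /* Drop the crtc's reference on its cursor buffer object before the crtc
     * itself is freed, so the bo is not leaked. */
    static void intel_crtc_destroy_sketch(struct intel_crtc *intel_crtc)
    {
        if (intel_crtc->cursor_bo)
            drm_gem_object_unreference_unlocked(&intel_crtc->cursor_bo->base);

        /* ... then the usual drm_crtc_cleanup() and kfree() ... */
    }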
-
Chris Wilson authored
It appears that a beneficial side-effect of Mika's more accurate hangman work is to speed up hang detection and execution. This exposes a bug in the reset code that then treats repeated simulated hangs as an indication that the machine is wedged. Jiggle the code around so that we only do the simulation processing from the hangcheck and avoid confusing it with a real hang.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65060
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
- Correct cpu->pch display matching is already checked when we detect the PCH type at driver load.
- Plane/pipe state is already checked both when a) enabling, b) disabling and in c) the modeset state checker. No need to go overboard and also check it in between a) and b).

Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-