- 05 Jul, 2012 20 commits
-
-
Daniel Vetter authored
The issue with this check is that it results in userspace receiving an -EIO while the gpu reset hasn't completed, resulting in fallback to sw rendering or worse. Now there's also a stern comment in intel_ring_wait_seqno saying that intel_ring_begin should not return -EAGAIN, ever, because some callers can't handle that. But after an audit of the callsites I don't see any issues. I guess the last problematic spot disappeared with the removal of the pipelined fencing code. So do the right thing and call check_wedge, which should properly decide whether an -EAGAIN or -EIO is appropriate if wedged is set. Note that the early check for a wedged gpu before touching the ring is rather important (and it took me quite some time of acting like the densest doofus to figure that out): If we don't do that and the gpu died for good, not having been resurrected by the reset code, userspace can merrily fill up the entire ring until it notices that something is amiss. Allowing userspace to emit more rendering, despite knowing that it will fail, can't lead to anything good (and by experience can lead to all sorts of havoc, including angering the OOM gods and hard-hanging the hw for good). v2: Fix EAGAIN misspelling, noticed by Chris Wilson. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
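A minimal sketch of the ordering described above, with placeholder struct and helper names rather than the actual i915 functions: the wedge check runs before any ring space is reserved, so a dead gpu never gets fed more commands.

    /* Illustrative sketch only; names and fields are placeholders. */
    static int ring_begin(struct intel_ring *ring, int num_dwords)
    {
            int ret;

            /* Check for a wedged gpu before touching the ring at all. */
            ret = check_wedge(ring->error_state, ring->interruptible);
            if (ret)
                    return ret;     /* -EAGAIN while a reset is pending, -EIO if dead */

            return wait_for_ring_space(ring, num_dwords * sizeof(uint32_t));
    }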
-
Daniel Vetter authored
... instead of looping endlessly with no hope of ever serving that page fault. We only need to break out of this loop when the gpu died, to run the reset work (and hopefully resurrect it). To clarify questions Chris raised on irc: This is about handling I/O errors not from our own code, but e.g. when the disk died while trying to swap in a gem bo. So this patch remedies the issue that the current handling only handles gpu-death-induced cases of -EIO. Admittedly, dying disks are much rarer than hanging gpus ... This seems to have been lost in: commit d9bc7e9f Author: Chris Wilson <chris@chris-wilson.co.uk> Date: Mon Feb 7 13:09:31 2011 +0000 drm/i915: Fix infinite loop regression from 21dd3734 Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
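In outline, the fault-path error mapping this implies looks something like the sketch below; the exact set of cases in the driver may differ, this only shows how a genuine I/O error is separated from a gpu-reset-induced one.

    /* Sketch: map errors from the bind/pagein path to vm_fault results. */
    switch (ret) {
    case -EAGAIN:
            /* Gpu wedged, reset pending: retry the fault so the reset
             * worker gets a chance to run. Fall through to NOPAGE. */
    case 0:
    case -ERESTARTSYS:
    case -EINTR:
            return VM_FAULT_NOPAGE;
    case -ENOMEM:
            return VM_FAULT_OOM;
    default:
            /* A real I/O error (e.g. a dying disk while swapping in the
             * bo) must not loop forever; report it to the process. */
            return VM_FAULT_SIGBUS;
    }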
-
Daniel Vetter authored
With the gpu reset no longer using a trylock we've increased the chances of userspace getting stuck quite a bit. To make that (hopefully) rare case more palatable, time out when waiting for the gpu reset code to complete and signal this little issue to the caller by returning -EIO. This should help userspace to somewhat gracefully fall back and hopefully allow the user to grab some logs and reboot the machine (instead of staring at a frozen X screen in agony). Suggested by Chris Wilson because I've been stubborn about not allowing the gpu reset code to fail, ever (by removing the trylock). Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
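The bounded wait could look roughly like the following sketch; the wait queue, flag helper and the 10 second value are invented for illustration, only the return-code convention matters.

    #define GPU_RESET_TIMEOUT (10 * HZ)     /* illustrative value */

    ret = wait_event_interruptible_timeout(error_state->reset_queue,
                                           !reset_in_progress(error_state),
                                           GPU_RESET_TIMEOUT);
    if (ret == 0)
            return -EIO;    /* reset never finished: let the caller bail out */
    if (ret < 0)
            return ret;     /* interrupted by a signal */
    /* otherwise the reset completed and the operation can be retried */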
-
Daniel Vetter authored
So don't return -EAGAIN, even in the case of a gpu hang. Remap it to -EIO instead. Note that this isn't really an issue with interruptibility, but more that we have quite a few codepaths (mostly around kms stuff) that simply can't handle any errors and hence not even -EAGAIN. Instead of adding proper failure paths so that we could restart these ioctls we've opted for the cheap way out of sleeping non-interruptibly. Which works everywhere but when the gpu dies, which this patch fixes. So essentially interruptible == false means 'wait for the gpu or die trying'. This patch is a bit ugly because intel_ring_begin is all non-interruptible and hence only returns -EIO. But as the comment in there says, auditing all the callsites would be a pain. To avoid duplicating code, reuse i915_gem_check_wedge in __wait_seqno and intel_wait_ring_buffer. Also use the opportunity to clarify the different cases in i915_gem_check_wedge a bit with comments. v2: Don't access dev_priv->mm.interruptible from check_wedge - we might not hold dev->struct_mutex, making this racy. Instead pass interruptible in as a parameter. I've noticed this because I've hit a BUG_ON(!mutex_is_locked) at the top of check_wedge. This has been added in commit b4aca010 Author: Ben Widawsky <ben@bwidawsk.net> Date: Wed Apr 25 20:50:12 2012 -0700 drm/i915: extract some common olr+wedge code although that commit is missing any justification for this. I guess it's just copy&paste, because the same commit adds the same BUG_ON check to check_olr, where it indeed makes sense. But in check_wedge everything we access is protected by other means, so this is superfluous. And because it now gets in the way (we add a new caller in __wait_seqno, which can be called without dev->struct_mutex) let's just remove it. v3: Group all the i915_gem_check_wedge refactoring into this patch, so that this patch here is all about not returning -EAGAIN to callsites that can't handle syscall restarting. v4: Add clarification of what interruptible == false means in our code, requested by Ben Widawsky. v5: Fix EAGAIN misspelling noticed by Chris Wilson. Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
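A sketch of the resulting helper, with invented field names, showing how the interruptible parameter picks between -EAGAIN and -EIO when the gpu is wedged.

    /* Sketch only; the real helper reads dev_priv's error tracking. */
    static int check_wedge_sketch(struct gpu_error *error, bool interruptible)
    {
            if (atomic_read(&error->wedged)) {
                    /* Non-interruptible callers can't restart their syscall,
                     * so even a recoverable hang must be reported as -EIO. */
                    if (!interruptible)
                            return -EIO;
                    /* A reset is still in flight: ask the caller to retry. */
                    if (!error->reset_completed)
                            return -EAGAIN;
                    /* Reset finished but we're still wedged: gpu is dead. */
                    return -EIO;
            }
            return 0;
    }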
-
Daniel Vetter authored
Simply failing to reset the gpu because someone else might still hold the mutex isn't a great idea - I see reliable silent reset failures. And gpu reset simply needs to be reliable and Just Work. "But ... the deadlocks!" We already kick all processes waiting for the gpu before launching the reset work item. New waiters need to check the wedging state anyway and then bail out. If we have places that can deadlock, we simply need to fix them. "But ... testing!" We have the gpu hangman, and if the current gpu load gem_exec_nop isn't good enough to hit a specific case, we can add a new one. "But ... don't we return -EAGAIN for non-interruptible calls to wait_seqno now?" Yep, but this problem already exists in the current code. A follow up patch will remedy this by returning -EIO for non-interruptible sleeps if the gpu died and the low-level wait bails out with -EAGAIN. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
Only bits 30:28, bit 31 is PIPE_DDI_FUNC_ENABLE. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
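In other words, with illustrative macro names (only PIPE_DDI_FUNC_ENABLE is taken from the message above), the field mask must stop at bit 30 because bit 31 carries the enable bit:

    #define PIPE_DDI_FUNC_ENABLE    (1 << 31)       /* enable bit, not part of the field */
    #define PIPE_DDI_FIELD_MASK     (7 << 28)       /* bits 30:28 only */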
-
Eugeni Dodonov authored
This pollutes dmesg output even if we do not have FBC for the device, so move the DRM_DEBUG_KMS statement lower. v2: just kill the message as suggested by Daniel. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
This is necessary for the modesetting to work correctly after a suspend-resume cycle. Without this, the pipes and clocks got the correct configuration, but the underlying DDI buffers configuration was lost. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
This function is used to set the PCH_DREF_CONTROL register, which does not exist on LPT anymore. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Paulo Zanoni authored
Previously we had has_pch_split to tell us whether we had a PCH or not and we also had dev_priv->pch_type to tell us which kind of PCH it was, but it could only be used if we were 100% sure we did have a PCH. Now that PCH_NONE was added to dev_priv->pch_type we don't need has_pch_split anymore: we can just check for pch_type != PCH_NONE. The HAS_PCH_{IBX,CPT,LPT} macros use dev_priv->pch_type, so they can only be called after intel_detect_pch. The HAS_PCH_SPLIT macro looks at dev_priv->info->has_pch_split, which is available earlier. Since the goal is to implement HAS_PCH_SPLIT using dev_priv->pch_type instead of dev_priv->info->has_pch_split, we need to make sure that intel_detect_pch is called before any calls to HAS_PCH_SPLIT are made. So we moved the intel_detect_pch call to an earlier stage. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
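The end state, sketched with simplified macro bodies (treat the exact definitions as illustrative; the real ones go through dev_priv):

    #define INTEL_PCH_TYPE(dev)   (((struct drm_i915_private *)(dev)->dev_private)->pch_type)
    #define HAS_PCH_LPT(dev)      (INTEL_PCH_TYPE(dev) == PCH_LPT)
    #define HAS_PCH_CPT(dev)      (INTEL_PCH_TYPE(dev) == PCH_CPT)
    #define HAS_PCH_IBX(dev)      (INTEL_PCH_TYPE(dev) == PCH_IBX)
    #define HAS_PCH_SPLIT(dev)    (INTEL_PCH_TYPE(dev) != PCH_NONE)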
-
Paulo Zanoni authored
And rely on the fact that it's 0 to assume that machines without a PCH will have PCH_NONE as dev_priv->pch_type. Just today I finally realized that HAS_PCH_IBX is true for machines without a PCH. IMHO this is totally counter-intuitive and I don't think it's a good idea to assume that we're going to check for HAS_PCH_IBX only after we check for HAS_PCH_SPLIT. I believe that in the future we'll have more PCH types and checks like: if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev)) will become more and more common. There's a good chance that we may break non-PCH machines by adding these checks in code that runs on all machines. I also believe that the HAS_PCH_SPLIT check will become less common as we add more and more different PCH types. We'll probably start replacing checks like: if (HAS_PCH_SPLIT(dev)) foo(); else bar(); with: if (HAS_PCH_NEW(dev)) baz(); else if (HAS_PCH_OLD(dev) || HAS_PCH_IBX(dev)) foo(); else bar(); and this may break gen 2/3/4. As far as we have investigated, this patch will affect the behavior of intel_hdmi_dpms and intel_dp_link_down on gen 4. In both functions the code inside the HAS_PCH_IBX check is for IBX-specific workarounds, so we should be safe. If we start bisecting gen 2/3/4 bugs to this commit we should consider replacing the HAS_PCH_IBX checks with something else. V2: Improve commit message, list possible side effects and solution. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
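A sketch of the enum with PCH_NONE as the zero value, so a freshly kzalloc'ed dev_priv defaults to "no PCH" (the comments on the codenames are assumptions, not part of the commit):

    enum intel_pch {
            PCH_NONE = 0,   /* no PCH present */
            PCH_IBX,        /* Ibexpeak */
            PCH_CPT,        /* Cougarpoint */
            PCH_LPT,        /* Lynxpoint */
    };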
-
Jesse Barnes authored
High frequency link configurations have the potential to cause trouble with long and/or cheap cables, so prefer slow and wide configurations instead. This patch has the potential to cause trouble for eDP configurations that lie about available lanes, so if we run into that we can make it conditional on eDP. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=45801 Tested-by: peter@colberg.org Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
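The loop-order idea, sketched with placeholder helpers rather than the intel_dp code: iterating the link clock in the outer loop means the lowest clock that still carries the mode wins, trading lanes for frequency.

    /* Sketch; names and helpers are placeholders. */
    for (clock = 0; clock <= max_clock; clock++) {
            for (lane_count = 1; lane_count <= max_lane_count; lane_count <<= 1) {
                    int link_avail = link_bandwidth(clock) * lane_count;

                    if (mode_bandwidth <= link_avail) {
                            /* slowest clock (widest link) that fits */
                            return set_link_config(clock, lane_count);
                    }
            }
    }
    return -EINVAL;     /* mode exceeds the link's capacity */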
-
Daniel Vetter authored
While creating the new enable/disable_gt_powersave functions in commit 8090c6b9 Author: Daniel Vetter <daniel.vetter@ffwll.ch> Date: Sun Jun 24 16:42:32 2012 +0200 drm/i915: wrap up gt powersave enabling functions I've botched up the handling of ironlake_disable_rc6. Fix this up by calling it at the right place. Note though that ironlake_disable_rc6 does a bit more than just disabling rc6 - it also tears down all the allocated context objects. Hence we need to move intel_teardown_rc6 out and directly call it from intel_modeset_cleanup. Also properly mark ironlake_enable_rc6 as static and kill the unused declaration in i915_drv.h. Note: In review a question popped out why disable_rc6 also tears down the backing object and why we should move that out - it's simply for consistency with gen6+ rps code, which does it that way. Cc: Ben Widawsky <ben@bwidawsk.net> Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
This commit moves force wake support routines into intel_pm modules, and exports the gen6_gt_check_fifodbg routine (used in I915_READ). Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Acked-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
For Haswell, on some of the early hardware revisions, it is possible to run into issues when RC6 state is enabled and when pipes change state. v2: add comment saying that this is for early revisions only. v3: beautify as suggested by Daniel Vetter. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Acked-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
This is based on Ivy Bridge clock gating for now, but is subject to changes in the future. Note: Compared to the ivb clock gating this drops the IDICOS medium uncore sharing tuned in commit 20848223 Author: Ben Widawsky <ben@bwidawsk.net> Date: Fri May 4 18:58:59 2012 -0700 drm/i915: set IDICOS to medium uncore resources Eugeni wants to benchmark the effect of this first. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> [danvet: added note] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
We weren't disabling RC6 bits when bringing down RPS. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
It should be working so let's turn it on by default and catch any possible issues faster. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
Just a cosmetic change to simplify the if statement. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Eugeni Dodonov authored
Most of the RPS and RC6 enabling functionality is similar to what we had on Gen6/Gen7, so we preserve most of the registers. Note that Haswell only has RC6, so account for that as well. As suggested by Daniel Vetter, to reduce the amount of changes in the patch, we still write the RC6p/RC6pp thresholds, but those are ignored on Haswell. Note: Some discussion about the nature of the new tuning constants popped up in review - the answer is that we don't know why they've changed, but the guide from VPG with the magic numbers simply has different values now. v2: Squash fix for ?: vs | operator precedence bug into this patch. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> [danvet: Added note to commit message. Squashed fix.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
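The v2 note refers to a classic C pitfall, shown here with generic placeholder names: '|' binds tighter than '?:', so without parentheses the whole OR expression becomes the condition instead of OR-ing in an optional flag.

    mask = base_flags | rc6_enabled ? RC6_ENABLE_FLAG : 0;      /* parsed as (base_flags | rc6_enabled) ? ... : 0 */
    mask = base_flags | (rc6_enabled ? RC6_ENABLE_FLAG : 0);    /* the intended OR-in of an optional flag */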
-
- 03 Jul, 2012 3 commits
-
-
Eugeni Dodonov authored
There is a different ACK register for force wake on Haswell, so account for that. Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Chris Wilson authored
As a w/a to prevent reads sporadically returning 0, we need to wait for the GT thread to return to TC0 before proceeding to read the registers. v2: adapt for Haswell changes (Eugeni). v3: use wait_for_atomic_us for thread status polling. v3: *really* use wait_for_atomic for polling. References: https://bugs.freedesktop.org/show_bug.cgi?id=50243 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Acked-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
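Roughly, the polling looks like the sketch below; the register accessor, mask name and the 500us bound are invented, only wait_for_atomic_us (the bounded busy-wait helper named above) is taken from the commit.

    /* Don't trust register reads until the GT thread is back in TC0. */
    if (wait_for_atomic_us((read_status_reg() & GT_THREAD_C_STATE_MASK) == 0, 500))
            DRM_ERROR("GT thread status wait timed out\n");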
-
Chris Wilson authored
Tidy up the routines for interacting with the GT (in particular the forcewake dance), which are scattered throughout the code, into a single structure. v2: use wait_for_atomic for polling. v3: *really* use wait_for_atomic for polling. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Reviewed-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 29 Jun, 2012 1 commit
-
-
Ben Widawsky authored
Daniel complained about this on initial review, but he graciously moved the patches forward. As promised, I am delivering the desired cleanup now. Hopefully I didn't screw the trivial patch up ;-) Signed-off-by: Ben Widawsky <ben@bwidawsk.net> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 28 Jun, 2012 2 commits
-
-
Paulo Zanoni authored
Looks like a copy/paste error. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
The prep to remove the flushing list in commit cc889e0f Author: Daniel Vetter <daniel.vetter@ffwll.ch> Date: Wed Jun 13 20:45:19 2012 +0200 drm/i915: disable flushing_list/gpu_write_list causes quite some decent regressions. We can fix this by setting the CS_STALL bit to ensure that the following seqno write happens only after the cache flush has completed. But only do that when the caller actually wants the flush (and not also when we invalidate caches before starting the next batch). I've looked through all our ancient scrolls about gen6+ pipe control workarounds, and this seems to be indeed a legal combination: We're allowed to set the CS_STALL bit when we flush the render cache (which we do). While yelling at this code, also pass back the return value from intel_emit_post_sync_nonzero_flush properly. v2: Instead of emitting more pipe controls, set the CS_STALL bit on the write flush as suggested by Chris Wilson. It seems to work, too. Cc: Eric Anholt <eric@anholt.net> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51436 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=51429 Tested-by: Lu Hua <huax.lu@intel.com> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
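The gist of v2 as a sketch; treat the exact set of cache-flush and invalidate bits as illustrative. The point is that CS_STALL is only added on the flush side, not on the invalidate-before-next-batch side, so the seqno write that follows cannot overtake the render cache flush.

    if (flush_domains) {
            flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
            flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
            /* Make sure the seqno write waits for the flush to complete. */
            flags |= PIPE_CONTROL_CS_STALL;
    }
    if (invalidate_domains) {
            flags |= PIPE_CONTROL_TLB_INVALIDATE;
            /* ... invalidate bits only, no CS_STALL needed here ... */
    }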
-
- 27 Jun, 2012 3 commits
-
-
Jesper Juhl authored
If we ever hit the default case in the switch statement we'll return from the function without freeing the memory we just allocated to 'intel_plane' (but that has not been used). This patch gets rid of the leak by freeing the memory just before we return. Signed-off-by: Jesper Juhl <jj@chaosbits.net> Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
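The shape of the fix, as a sketch; the generation cases and the error code are illustrative, the point is simply freeing the allocation on the default path instead of leaking it.

    intel_plane = kzalloc(sizeof(*intel_plane), GFP_KERNEL);
    if (!intel_plane)
            return -ENOMEM;

    switch (INTEL_INFO(dev)->gen) {
    case 6:
    case 7:
            /* ... set up the sprite plane ... */
            break;
    default:
            kfree(intel_plane);     /* don't leak the allocation */
            return -ENODEV;
    }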
-
Jesse Barnes authored
We shouldn't hit this path anyway, but make it use the IVB sprite format definition to avoid confusion. Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Jesse Barnes authored
Or going from tiled to untiled may break. Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 25 Jun, 2012 6 commits
-
-
Daniel Vetter authored
drm/i915 now takes care itself of setting up the gtt for these chips. Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
This is the quick&dirty way Dave Airlie suggested to workaround the midlayer drm agp brain-damage. Note that i915_probe is only called when the driver has kms enabled, so no need to check for that. We also need to move the intel_agp_enabled check at the right place. Note that the only thing this does is enforce the correct module load order (by using a symbol from intel-agp.ko) to ensure that the fake agp driver is ready before the drm core tries to set up the agp stuff. v2: Add a comment to explain why gen3 needs all this legacy fake agp stuff - we've shipped an XvMC library with a kms-enabled ddx that requires it (but only on gen3). v3: Make it clear that this is only a gen3 issue in the comment. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
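The trick in outline, as a sketch: referencing a symbol exported by intel-agp.ko from the probe path makes modprobe load the fake agp driver first. intel_agp_enabled stands in for whichever exported symbol is actually used, and the probe body is simplified.

    extern int intel_agp_enabled;   /* exported by intel-agp.ko */

    static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
    {
            /* Only relevant where the fake agp driver is still needed (gen3). */
            if (!intel_agp_enabled) {
                    DRM_ERROR("drm/i915 can't work without intel_agp module!\n");
                    return -ENODEV;
            }

            return drm_get_pci_dev(pdev, ent, &driver);
    }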
-
Daniel Vetter authored
This single leftover use is due to a patch that went into 3.5 through -fixes. With the fake agp stuff on demise, at least for gen6+ we can't use this any more. Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Daniel Vetter authored
The enable functions grabbed dev->struct_mutex themselves, whereas the disable functions expected dev->struct_mutex to be held by the caller. Move the locking out to the (currently only) callsite of intel_enable_gt_powersave to make this more consistent. Originally this was prep work for future patches, but I've chased down a totally wrong alley. Still, I think this is a sensible clarification. Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
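After the change the locking is the caller's job for both enable and disable; the enable callsite then looks roughly like this sketch:

    mutex_lock(&dev->struct_mutex);
    intel_enable_gt_powersave(dev);
    mutex_unlock(&dev->struct_mutex);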
-
Daniel Vetter authored
... instead of calling each one for each generation individually. Notice that we've already managed to be inconsistent, the resume path is missing an IS_VLV check. As a nice benefit we can mark all the platform specific enable/disable functions as static and hide them in intel_pm.c. Reviewed-by: Eugeni Dodonov <eugeni.dodonov@intel.com> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
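Conceptually the wrapper is just a per-generation dispatch, along the lines of this sketch; the predicates and callee names follow the existing per-platform functions, but treat the exact set and conditions as illustrative.

    void intel_enable_gt_powersave(struct drm_device *dev)
    {
            if (IS_IRONLAKE_M(dev)) {
                    ironlake_enable_drps(dev);
                    ironlake_enable_rc6(dev);
            } else if (INTEL_INFO(dev)->gen >= 6 && !IS_VALLEYVIEW(dev)) {
                    gen6_enable_rps(dev);
                    gen6_update_ring_freq(dev);
            }
    }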
-
Daniel Vetter authored
I want to merge the "no more fake agp on gen6+" patches into drm-intel-next (well, the last pieces). But a patch in 3.5-rc4 also adds a new use of dev->agp. Hence the backmerge to sort this out, for otherwise drm-intel-next merged into Linus' tree would conflict in the relevant code, things would compile but nicely OOPS at driver load :( Conflicts in this merge are just simple cases of "both branches changed/added lines at the same place". The only tricky part is to keep the order correct wrt the unwind code in case of errors in intel_ringbuffer.c (and the MI_DISPLAY_FLIP #defines in i915_reg.h together, obviously). Conflicts: drivers/gpu/drm/i915/i915_reg.h drivers/gpu/drm/i915/intel_ringbuffer.c Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Jun, 2012 5 commits
-
-
Linus Torvalds authored
-
Anatol Pomozov authored
Coult -> Could Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull KVM fixes from Avi Kivity: "Fixing a scheduling-while-atomic bug in the ppc code, and a bug which allowed pci bridges to be assigned to guests." * git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: PPC: Book3S HV: Drop locks around call to kvmppc_pin_guest_page KVM: Fix PCI header check on device assignment
-
git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
Linus Torvalds authored
Pull InfiniBand/RDMA fixes from Roland Dreier: - Fixes to new ocrdma driver - Typo in test in CMA * tag 'rdma-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband: RDMA/cma: QP type check on received REQs should be AND not OR RDMA/ocrdma: Fix off by one in ocrdma_query_gid() RDMA/ocrdma: Fixed RQ error CQE polling RDMA/ocrdma: Correct queue SGE calculation RDMA/ocrdma: Correct reported max queue sizes RDMA/ocrdma: Fixed GID table for vlan and events
-
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Linus Torvalds authored
Pull ARM SoC fixes from Olof Johansson: "Nothing very controversial in here. Most of the fixes are for OMAP this time around, with some orion/kirkwood and a tegra patch mixed in." * tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: ARM: Orion: Fix Virtual/Physical mixup with watchdog ARM: Kirkwood: clk_register_gate_fn: add fn assignment ARM: Orion5x - Restore parts of io.h, with rework ARM: OMAP4: hwmod data: Force HDMI in no-idle while enabled ARM: OMAP2+: mux: fix sparse warning ARM: OMAP2+: CM: increase the module disable timeout ARM: OMAP4: clock data: add clockdomains for clocks used as main clocks ARM: OMAP4: hwmod data: fix 32k sync timer idle modes ARM: OMAP4+: hwmod: fix issue causing IPs not going back to Smart-Standby ARM: OMAP: Fix Beagleboard DVI reset gpio arm/dts: OMAP2: Fix interrupt controller binding ARM: OMAP2: Fix tusb6010 GPIO interrupt for n8x0 ARM: OMAP2+: Fix MUSB ifdefs for platform init code ARM: tegra: make tegra_cpu_reset_handler_enable() __init ARM: OMAP: PM: Lock clocks list while generating summary ARM: iconnect: Remove include of removed linux/spi/orion_spi.h
-