- 23 Feb, 2010 2 commits
-
-
Paul Mundt authored
This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from the PPC and ia64 implementations. The thread flags happen to be the logical inverse of what the global fault mode is set to, so this works out pretty cleanly. By default the global fault mode is used, with tasks now being able to override their own settings via prctl(). Signed-off-by: Paul Mundt <lethal@linux-sh.org>
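As a rough userspace illustration of the knob being hooked up here (a minimal sketch, assuming only the generic PR_SET_UNALIGN/PR_GET_UNALIGN interface from <sys/prctl.h>, nothing SH-specific):

```c
#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	unsigned int mode = 0;

	/* Ask for SIGBUS on unaligned accesses instead of a silent fixup. */
	if (prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS) < 0)
		perror("PR_SET_UNALIGN");

	/* Read the per-task setting back. */
	if (prctl(PR_GET_UNALIGN, &mode) == 0)
		printf("unaligned mode: %s\n",
		       (mode & PR_UNALIGN_SIGBUS) ? "SIGBUS" : "fixup");

	return 0;
}
```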
-
Paul Mundt authored
Follow the ARM change, which is what our alignment helpers are based on in the first place. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 22 Feb, 2010 5 commits
-
-
Kuninori Morimoto authored
When FSI and the network (an NFS file system) were used at the same time, FSI I/O was unstable. This patch updates the SPU2 clock (which is used for FSI) to solve this issue. Special thanks to Jeremy. Signed-off-by: Jeremy Baker <Jeremy.Baker@renesas.com> Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Magnus Damm authored
Update the sh7724 processor code to always enable vpu_clk. On the Ecovec board, set vpu_clk to 166 MHz. The 166 MHz setting results in a divide-by-6 setup for vpu_clk and improves the VPU performance compared to the power-on-reset/bootloader configuration. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Magnus Damm authored
This patch adds a ->kick() callback to clk_div4_table and ties it into sh_clk_div4_set_rate(). An sh7724-specific kick function is also added that updates the KICK bit whenever div4 clocks in FRQCRA and FRQCRB have been set. This allows us to set the VPU clock. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
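Roughly, the new hook lets CPU-specific code pulse the frequency-change KICK bit after sh_clk_div4_set_rate() has rewritten the divider field; a hedged sketch (the register name and bit position are illustrative, not copied from the SH7724 sources):

```c
/* Illustrative ->kick() implementation: latch the freshly written
 * FRQCRA/FRQCRB dividers by setting the KICK bit. */
static void sh7724_clk_kick(struct clk *clk)
{
	__raw_writel(__raw_readl(FRQCRA) | (1 << 31), FRQCRA);	/* KICK */
}
```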
-
Magnus Damm authored
This patch introduces struct clk_div4_table. The structure will be used to hold div4-specific data, and with this patch it replaces the struct clk_div_mult_table pointer argument used by the sh_clk_div4_register() functions. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
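The shape of the wrapper, as described by the changelog (a sketch; the real structure may carry additional fields such as the ->kick() hook from the commit above):

```c
/* div4-specific data, passed to sh_clk_div4_register() in place of a
 * bare struct clk_div_mult_table pointer. */
struct clk_div4_table {
	struct clk_div_mult_table *div_mult_table;
	/* room for div4-only extensions, e.g. a ->kick() callback */
};

/* Registration now takes the wrapper table: */
sh_clk_div4_register(div4_clks, ARRAY_SIZE(div4_clks), &div4_table);
```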
-
Magnus Damm authored
Make sure the div4 bitfield is shifted according to the enable_bit value in sh_clk_div4_set_rate(). Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 18 Feb, 2010 5 commits
-
-
Matt Fleming authored
Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
-
Paul Mundt authored
This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead. If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing. With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots. At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far-off locations in the setup code, the mapping can safely be resized regardless of whether we are executing from it or not. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
The PMB code is an example of something that spends an absurd amount of time running uncached when only a couple of operations really need to be. This switches over to the shiny new uncached helpers, permitting us to spend far more time running cached. Additionally, MMUCR twiddling is perfectly safe from cached space given that it's paired with a control register barrier, so fix that up, too. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
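The MMUCR case boils down to the usual TLB-invalidate pattern; a minimal sketch of why it is safe from cached space, assuming the MMUCR/MMUCR_TI definitions and ctrl_barrier() used elsewhere in arch/sh:

```c
/* Invalidate the TLB by setting MMUCR.TI; the control-register barrier
 * serializes the write, so no jump to the uncached mapping is needed. */
static inline void tlb_invalidate_all(void)
{
	unsigned long mmucr = __raw_readl(MMUCR);

	__raw_writel(mmucr | MMUCR_TI, MMUCR);
	ctrl_barrier();
}
```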
-
Paul Mundt authored
There are lots of registers that can only be updated from the uncached mapping, so we add some helpers for those cases in order to make it easier to ensure that we only make the jump when it's absolutely necessary. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 17 Feb, 2010 10 commits
-
-
Paul Mundt authored
This implements some locking for the PMB code. A high-level rwlock is added for dealing with read/write accesses on the entry map, while a spinlock is added to the per-entry data structure to deal with the PMB entry changing out from underneath us. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
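Structurally this looks roughly as follows (names are assumptions for illustration, not the exact upstream identifiers):

```c
/* Coarse reader/writer lock guarding the global PMB entry map. */
static DEFINE_RWLOCK(pmb_rwlock);

struct pmb_entry {
	unsigned long	vpn;	/* virtual page number */
	unsigned long	ppn;	/* physical page number */
	unsigned long	flags;
	spinlock_t	lock;	/* guards this entry against concurrent updates */
	/* ... */
};
```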
-
Paul Mundt authored
Write-through PMB mappings still require the cache bit to be set, even if they're to be flagged with a different cache policy and bufferability bit. To reduce some of the confusion surrounding the flag encoding we centralize the cache mask based on the system cache policy while we're at it. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This plugs in entry sizing support for existing mappings and then builds on top of that for linking together entries that are mapping contiguous areas. This will ultimately permit us to coalesce mappings and promote head pages while reclaiming PMB slots for dynamic remapping. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This adds some helper routines for uncached mapping support. This simplifies some of the cases where we need to check the uncached mapping boundaries in addition to giving us a centralized location for building more complex manipulation on top of. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
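A sketch of the kind of boundary check this centralizes (the helper name and the uncached_start/uncached_end variables are assumptions for the example):

```c
extern unsigned long uncached_start, uncached_end;

/* True if a kernel virtual address falls inside the uncached mapping. */
static inline int virt_addr_uncached(unsigned long kaddr)
{
	return kaddr >= uncached_start && kaddr < uncached_end;
}
```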
-
Paul Mundt authored
Some overdue cleanup of the PMB code, killing off unused functionality and duplication sprinkled about the tree. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Both the store queue API and the PMB remapping take unsigned long for their pgprot flags, which cuts off the extended protection bits. In the case of the PMB this isn't really a problem since the cache attribute bits that we care about are all in the lower 32-bits, but we do it just to be safe. The store queue remapping on the other hand depends on the extended prot bits for enabling userspace access to the mappings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
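In interface terms the change is simply passing a full pgprot_t through instead of truncating it; a hedged before/after sketch for the store queue API (prototype details are illustrative):

```c
/* Before: the pgprot value is squeezed into an unsigned long, dropping
 * the extended _PAGE_EXT_* bits needed for userspace access. */
unsigned long sq_remap(unsigned long phys, unsigned int size,
		       const char *name, unsigned long flags);

/* After: the pgprot_t is carried through untouched. */
unsigned long sq_remap(unsigned long phys, unsigned int size,
		       const char *name, pgprot_t prot);
```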
-
Magnus Damm authored
Update the sh7723 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Magnus Damm authored
Update the sh7722 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Presently there's an ordering issue with the chained handler change which places the set_irq_chip() after set_irq_chained_handler(). This causes a warning to be emitted as the IRQ chip needs to be set first. However, there is the caveat that redirect IRQs can't use the parent IRQ's irq chip as they are just dummy redirects, resulting in intc_enable() blowing up when set_irq_chained_handler() attempts to start up the redirect IRQ. In these cases we can just use dummy_irq_chip directly, as we already extract the parent IRQ and chip from the redirect handler. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
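The resulting registration order for a redirect IRQ is roughly the following (a sketch using the set_irq_* helpers of that era):

```c
/* Redirect IRQs are only dummies that forward to the parent, so they use
 * dummy_irq_chip; the chip must be installed before the chained handler
 * so that the handler startup finds a valid chip in place. */
set_irq_chip(irq, &dummy_irq_chip);
set_irq_chained_handler(irq, intc_redirect_irq);
```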
-
Paul Mundt authored
vmemmap and the vmsplit code, amongst others, need to be able to take page faults much earlier than trap_init() time, so move this into the early CPU initialization. VBR setup for secondary CPUs is already handled through start_secondary(), so we only need to do this for the boot CPU. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 16 Feb, 2010 8 commits
-
-
Paul Mundt authored
The __va()/__pa() offsets and the boot memory offsets are consistent for all PMB users, so there is no need to special case these for legacy PMB. Kill the special casing off and depend on CONFIG_PMB across the board. This also fixes up yet another addressing bug for sh64. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This merges the code for iterating over the legacy PMB mappings and the code for synchronizing software state with the hardware mappings. There's really no reason to do the same iteration twice, and this also buys us the legacy entry logging facility for the dynamic PMB case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
The PMB initialization code walks the entries and synchronizes the software PMB state with the hardware mappings, preserving the slot index. Unfortunately pmb_alloc() only tested the bit position in the entry map and failed to set it, resulting in subsequent remaps being able to be dynamically assigned a slot that trampled an existing boot mapping with general badness ensuing. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
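The essence of the fix is claiming the slot rather than merely inspecting it; schematically (illustrative, not the literal diff):

```c
/* Broken: the free slot is found but never marked used, so a later
 * dynamic remap can be handed a slot a boot mapping already owns. */
if (!test_bit(pos, pmb_map))
	return pos;

/* Fixed: test and claim the slot in one atomic step. */
if (!test_and_set_bit(pos, pmb_map))
	return pos;
```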
-
Nobuhiro Iwamatsu authored
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Magnus Damm authored
Update the sh7724 INTC tables with force_enable support to mask out pending unsupported SDHI interrupt sources. Without this patch the kernel locks up due to a pending SDHI interrupt that the tmio_mmc driver cannot handle. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Magnus Damm authored
Extend the shared INTC code with force_disable support to allow keeping mask bits statically disabled. Needed for SDHI support to mask out unsupported interrupt sources. Signed-off-by: Magnus Damm <damm@opensource.se> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
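In the CPU tables this surfaces as new fields on the INTC descriptor; a hedged sketch only (the enum IDs are hypothetical placeholders and the exact field placement may differ):

```c
static struct intc_desc cpu_intc_desc __initdata = {
	/* ... vectors, groups, mask and priority registers ... */
	.force_enable	= ENABLED_SDHI,		/* hypothetical: sources kept statically unmasked */
	.force_disable	= DISABLED_SDHI,	/* hypothetical: sources kept statically masked */
};
```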
-
Phil Edworthy authored
Fixed SH-Mobile panning. Previously the address of the frame to be displayed was updated in the VSync end interrupt, so there was a minimum of one frame between calling the FBIOPAN_DISPLAY ioctl and the pan occurring, which meant that apps were not able to use the FBIO_WAITFORVSYNC ioctl to wait for the pan to complete. This patch moves the write to the LDSA1R mirror register into the pan ioctl. Tested on the MS7724 board against 2.6.33-rc7. Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Phil Edworthy authored
Added FBIO_WAITFORVSYNC ioctl for SH-Mobile devices. Tested on MS7724 and MigoR boards against 2.6.33-rc7. Signed-off-by: Phil Edworthy <phil.edworthy@renesas.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
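From userspace the new ioctl is used like the long-standing matroxfb one; a sketch (the ioctl number shown is the conventional encoding, defined locally in case the installed fb.h predates it):

```c
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <linux/types.h>

#ifndef FBIO_WAITFORVSYNC
#define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
#endif

/* Block until the next vertical sync on CRTC 0 of the given fbdev fd,
 * e.g. right after an FBIOPAN_DISPLAY call. */
static int wait_for_vsync(int fd)
{
	__u32 crtc = 0;

	return ioctl(fd, FBIO_WAITFORVSYNC, &crtc);
}
```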
-
- 15 Feb, 2010 3 commits
-
-
Paul Mundt authored
The change for fixing up sh64 inadvertently inverted the logic for legacy PMB, fix that back up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
-
Paul Mundt authored
This follows the parisc change to ensure that tracehook_signal_handler() is aware of when we are single-stepping in order to ptrace_notify() appropriately. While this was implemented for 32-bit SH, sh64 neglected to make use of TIF_SINGLESTEP when it was folded in with the 32-bit code, resulting in ptrace_notify() never being called. As sh64 uses all of the other abstractions already, this simply plugs in the thread flag in the appropriate enable/disable paths and fixes up the tracehook notification accordingly. With this in place, sh64 is brought in line with what 32-bit is already doing. Reported-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
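The call-site change amounts to something like this (a sketch against the tracehook API of that era; surrounding variable names are assumed):

```c
/* Tell the tracehook layer whether this signal was delivered while
 * single-stepping, so ptrace_notify() fires for the debugger. */
tracehook_signal_handler(sig, info, ka, regs,
			 test_thread_flag(TIF_SINGLESTEP));
```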
-
- 12 Feb, 2010 7 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
* 'fix/hda' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound-2.6:
  ALSA: hda - use WARN_ON_ONCE() for zero-division detection
-
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel:
  drm/i915: hold ref on flip object until it completes
  drm/i915: Fix crash while aborting hibernation
  drm/i915: Correctly return -ENOMEM on allocation failure in cmdbuf ioctls.
  drm/i915: fix pipe source image setting in flip command
  drm/i915: fix flip done interrupt on Ironlake
  drm/i915: untangle page flip completion
  drm/i915: handle FBC and self-refresh better
  drm/i915: Increase fb alignment to 64k
  drm/i915: Update write_domains on active list after flush.
  drm/i915: Rework DPLL calculation parameters for Ironlake
-
Takashi Iwai authored
Replace the zero-division warning message with WARN_ON_ONCE() per the advice by Linus. This shouldn't happen, but if it happens, it's possible that the bug happens often due to buggy IRQs. Signed-off-by: Takashi Iwai <tiwai@suse.de>
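The shape of the guard (illustrative variable names, not the hda-intel code verbatim):

```c
/* Warn exactly once and bail out rather than dividing by a zero value
 * that a misbehaving IRQ could keep feeding us. */
if (WARN_ON_ONCE(bytes_per_frame == 0))
	return 0;
frames = bytes / bytes_per_frame;
```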
-
Kyle McMartin authored
Mike Frysinger pointed out that calling tracehook_signal_handler with stepping=0 missed testing the thread flags, resulting in not calling ptrace_notify. Fix this by testing if we're single stepping or branch stepping and setting the flag accordingly. Tested, seems to work. Reported-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Kyle McMartin <kyle@mcmartin.ca> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
* 'fix/hda' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound-2.6:
  ALSA: hda-intel: Avoid divide by zero crash
-
Linus Torvalds authored
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lrg/voltage-2.6:
  regulator/lp3971: vol_map out of bounds in lp3971_{ldo,dcdc}_set_voltage()
  regulator: Fix display of null constraints for regulators
-