Commit 2ff47823 authored by David Teigland, committed by Steven Whitehouse

Merge branch 'master'

parents cd1344fe 7eb9b2f5
......@@ -86,6 +86,62 @@ Mount options
The default is infinite. Note that the size of read requests is
limited anyway to 32 pages (which is 128kbyte on i386).
Sysfs
~~~~~
FUSE sets up the following hierarchy in sysfs:
/sys/fs/fuse/connections/N/
where N is an increasing number allocated to each new connection.
For each connection the following attributes are defined:
'waiting'
The number of requests which are waiting to be transferred to
userspace or being processed by the filesystem daemon. If there is
no filesystem activity and 'waiting' is non-zero, then the
filesystem is hung or deadlocked.
'abort'
Writing anything into this file will abort the filesystem
connection. This means that all waiting requests will be aborted and
an error returned for all aborted and new requests.
Only a privileged user may read or write these attributes.
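As an illustration (not part of this patch), below is a minimal
userspace sketch that exercises the two attributes described above: it
reads 'waiting' for connection N and then writes to 'abort' to tear the
connection down.  It must be run as a privileged user; the connection
number is taken from the command line and error handling is
deliberately minimal.

#include <stdio.h>
#include <stdlib.h>

static int fuse_conn_abort(int n)
{
	char path[64];
	unsigned long waiting = 0;
	FILE *f;

	/* 'waiting': requests queued for, or being processed by, the daemon */
	snprintf(path, sizeof(path), "/sys/fs/fuse/connections/%d/waiting", n);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%lu", &waiting) != 1)
		waiting = 0;
	fclose(f);
	printf("connection %d: %lu waiting request(s)\n", n, waiting);

	/* writing anything to 'abort' aborts the connection */
	snprintf(path, sizeof(path), "/sys/fs/fuse/connections/%d/abort", n);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1\n", f);
	fclose(f);
	return 0;
}

int main(int argc, char **argv)
{
	if (argc != 2)
		return 1;
	return fuse_conn_abort(atoi(argv[1])) ? 1 : 0;
}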
Aborting a filesystem connection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to get into certain situations where the filesystem is
not responding. Reasons for this may be:
a) Broken userspace filesystem implementation
b) Network connection down
c) Accidental deadlock
d) Malicious deadlock
(For more on c) and d) see later sections)
In any of these cases it may be useful to abort the connection to
the filesystem. There are several ways to do this:
- Kill the filesystem daemon. Works in case of a) and b)
- Kill the filesystem daemon and all users of the filesystem. Works
in all cases except some malicious deadlocks
- Use forced umount (umount -f). Works in all cases but only if
filesystem is still attached (it hasn't been lazy unmounted)
- Abort filesystem through the sysfs interface. Most powerful
method, always works.
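The forced-umount option from the list above can also be issued
programmatically.  A hedged sketch follows (the mount point /mnt/fuse
is an assumption); umount2() with MNT_FORCE is the system call behind
'umount -f':

#include <stdio.h>
#include <sys/mount.h>	/* umount2(), MNT_FORCE */

int main(void)
{
	/* Works only while the filesystem is still attached,
	 * i.e. it has not been lazily unmounted. */
	if (umount2("/mnt/fuse", MNT_FORCE) != 0) {
		perror("umount2");
		return 1;
	}
	return 0;
}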
How do non-privileged mounts work?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......@@ -313,3 +369,10 @@ faulted with get_user_pages(). The 'req->locked' flag indicates
when the copy is taking place, and interruption is delayed until
this flag is unset.
Scenario 3 - Tricky deadlock with asynchronous read
---------------------------------------------------
The same situation as above, except thread-1 will wait on page lock
and hence it will be uninterruptible as well. The solution is to
abort the connection with forced umount (if mount is attached) or
through the abort attribute in sysfs.
......@@ -68,3 +68,4 @@ tuner=66 - LG NTSC (TALN mini series)
tuner=67 - Philips TD1316 Hybrid Tuner
tuner=68 - Philips TUV1236D ATSC/NTSC dual in
tuner=69 - Tena TNF 5335 MF
tuner=70 - Samsung TCPN 2121P30A
......@@ -1696,11 +1696,13 @@ M: mtk-manpages@gmx.net
W: ftp://ftp.kernel.org/pub/linux/docs/manpages
S: Maintained
MARVELL MV64340 ETHERNET DRIVER
MARVELL MV643XX ETHERNET DRIVER
P: Dale Farnsworth
M: dale@farnsworth.org
P: Manish Lachwani
L: linux-mips@linux-mips.org
M: mlachwani@mvista.com
L: netdev@vger.kernel.org
S: Supported
S: Odd Fixes for 2.4; Maintained for 2.6.
MATROX FRAMEBUFFER DRIVER
P: Petr Vandrovec
......
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 15
EXTRAVERSION =
SUBLEVEL = 16
EXTRAVERSION =-rc1
NAME=Sliding Snow Leopard
# *DOCUMENTATION*
......@@ -106,12 +106,13 @@ KBUILD_OUTPUT := $(shell cd $(KBUILD_OUTPUT) && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
$(error output directory "$(saved-output)" does not exist))
.PHONY: $(MAKECMDGOALS)
.PHONY: $(MAKECMDGOALS) cdbuilddir
$(MAKECMDGOALS) _all: cdbuilddir
$(filter-out _all,$(MAKECMDGOALS)) _all:
cdbuilddir:
$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
KBUILD_SRC=$(CURDIR) \
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $@
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $(MAKECMDGOALS)
# Leave processing to above invocation of make
skip-makefile := 1
......@@ -262,6 +263,13 @@ export quiet Q KBUILD_VERBOSE
# cc support functions to be used (only) in arch/$(ARCH)/Makefile
# See documentation in Documentation/kbuild/makefiles.txt
# as-option
# Usage: cflags-y += $(call as-option, -Wa$(comma)-isa=foo,)
as-option = $(shell if $(CC) $(CFLAGS) $(1) -Wa,-Z -c -o /dev/null \
-xassembler /dev/null > /dev/null 2>&1; then echo "$(1)"; \
else echo "$(2)"; fi ;)
# cc-option
# Usage: cflags-y += $(call cc-option, -march=winchip-c6, -march=i586)
......@@ -337,8 +345,9 @@ AFLAGS := -D__ASSEMBLY__
# Read KERNELRELEASE from .kernelrelease (if it exists)
KERNELRELEASE = $(shell cat .kernelrelease 2> /dev/null)
KERNELVERSION = $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE \
export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION \
ARCH CONFIG_SHELL HOSTCC HOSTCFLAGS CROSS_COMPILE AS LD CC \
CPP AR NM STRIP OBJCOPY OBJDUMP MAKE AWK GENKSYMS PERL UTS_MACHINE \
HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
......@@ -433,6 +442,7 @@ export KBUILD_DEFCONFIG
config %config: scripts_basic outputmakefile FORCE
$(Q)mkdir -p include/linux
$(Q)$(MAKE) $(build)=scripts/kconfig $@
$(Q)$(MAKE) .kernelrelease
else
# ===========================================================================
......@@ -542,7 +552,7 @@ export INSTALL_PATH ?= /boot
# makefile but the argument can be passed to make if needed.
#
MODLIB := $(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE)
MODLIB = $(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE)
export MODLIB
......@@ -783,12 +793,10 @@ endif
localver-full = $(localver)$(localver-auto)
# Store (new) KERNELRELEASE string in .kernelrelease
kernelrelease = \
$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)$(localver-full)
kernelrelease = $(KERNELVERSION)$(localver-full)
.kernelrelease: FORCE
$(Q)rm -f .kernelrelease
$(Q)echo $(kernelrelease) > .kernelrelease
$(Q)echo " Building kernel $(kernelrelease)"
$(Q)rm -f $@
$(Q)echo $(kernelrelease) > $@
# Things we need to do before we recursively start building the kernel
......@@ -898,7 +906,7 @@ define filechk_version.h
)
endef
include/linux/version.h: $(srctree)/Makefile FORCE
include/linux/version.h: $(srctree)/Makefile .config FORCE
$(call filechk,version.h)
# ---------------------------------------------------------------------------
......@@ -1301,9 +1309,10 @@ checkstack:
$(PERL) $(src)/scripts/checkstack.pl $(ARCH)
kernelrelease:
@echo $(KERNELRELEASE)
$(if $(wildcard .kernelrelease), $(Q)echo $(KERNELRELEASE), \
$(error kernelrelease not valid - run 'make *config' to update it))
kernelversion:
@echo $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
@echo $(KERNELVERSION)
# FIXME Should go into a make.lib or something
# ===========================================================================
......
Linux kernel release 2.6.xx
Linux kernel release 2.6.xx <http://kernel.org>
These are the release notes for Linux version 2.6. Read them carefully,
as they tell you what this is all about, explain how to install the
......@@ -6,23 +6,31 @@ kernel, and what to do if something goes wrong.
WHAT IS LINUX?
Linux is a Unix clone written from scratch by Linus Torvalds with
assistance from a loosely-knit team of hackers across the Net.
It aims towards POSIX compliance.
Linux is a clone of the operating system Unix, written from scratch by
Linus Torvalds with assistance from a loosely-knit team of hackers across
the Net. It aims towards POSIX and Single UNIX Specification compliance.
It has all the features you would expect in a modern fully-fledged
Unix, including true multitasking, virtual memory, shared libraries,
demand loading, shared copy-on-write executables, proper memory
management and TCP/IP networking.
It has all the features you would expect in a modern fully-fledged Unix,
including true multitasking, virtual memory, shared libraries, demand
loading, shared copy-on-write executables, proper memory management,
and multistack networking including IPv4 and IPv6.
It is distributed under the GNU General Public License - see the
accompanying COPYING file for more details.
ON WHAT HARDWARE DOES IT RUN?
Linux was first developed for 386/486-based PCs. These days it also
runs on ARMs, DEC Alphas, SUN Sparcs, M68000 machines (like Atari and
Amiga), MIPS and PowerPC, and others.
Although originally developed first for 32-bit x86-based PCs (386 or higher),
today Linux also runs on (at least) the Compaq Alpha AXP, Sun SPARC and
UltraSPARC, Motorola 68000, PowerPC, PowerPC64, ARM, Hitachi SuperH,
IBM S/390, MIPS, HP PA-RISC, Intel IA-64, DEC VAX, AMD x86-64, AXIS CRIS,
and Renesas M32R architectures.
Linux is easily portable to most general-purpose 32- or 64-bit architectures
as long as they have a paged memory management unit (PMMU) and a port of the
GNU C compiler (gcc) (part of The GNU Compiler Collection, GCC). Linux has
also been ported to a number of architectures without a PMMU, although
functionality is then obviously somewhat limited.
DOCUMENTATION:
......
......@@ -141,7 +141,7 @@ int show_interrupts(struct seq_file *p, void *v)
if (i < NR_IRQS) {
action = irq_desc[i].action;
if (!action)
continue;
goto out;
seq_printf(p, "%3d: %10u ", i, kstat_irqs(i));
seq_printf(p, " %s", action->name);
for (action = action->next; action; action = action->next) {
......@@ -152,6 +152,7 @@ int show_interrupts(struct seq_file *p, void *v)
show_fiq_list(p, v);
seq_printf(p, "Err: %10lu\n", irq_err_count);
}
out:
return 0;
}
......
......@@ -527,7 +527,7 @@ static int ptrace_getfpregs(struct task_struct *tsk, void *ufp)
static int ptrace_setfpregs(struct task_struct *tsk, void *ufp)
{
set_stopped_child_used_math(tsk);
return copy_from_user(&task_threas_info(tsk)->fpstate, ufp,
return copy_from_user(&task_thread_info(tsk)->fpstate, ufp,
sizeof(struct user_fp)) ? -EFAULT : 0;
}
......
......@@ -37,10 +37,7 @@ CFLAGS += $(call cc-option,-mpreferred-stack-boundary=2)
# CPU-specific tuning. Anything which can be shared with UML should go here.
include $(srctree)/arch/i386/Makefile.cpu
# -mregparm=3 works ok on gcc-3.0 and later
#
cflags-$(CONFIG_REGPARM) += $(shell if [ $(call cc-version) -ge 0300 ] ; then \
echo "-mregparm=3"; fi ;)
cflags-$(CONFIG_REGPARM) += -mregparm=3
# Disable unit-at-a-time mode on pre-gcc-4.0 compilers, it makes gcc use
# a lot more stack due to the lack of sharing of stacklots:
......
......@@ -980,7 +980,7 @@ static int powernowk8_verify(struct cpufreq_policy *pol)
}
/* per CPU init entry point to the driver */
static int __init powernowk8_cpu_init(struct cpufreq_policy *pol)
static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
{
struct powernow_k8_data *data;
cpumask_t oldmask = CPU_MASK_ALL;
......@@ -1141,7 +1141,7 @@ static struct cpufreq_driver cpufreq_amd64_driver = {
};
/* driver entry point for init */
static int __init powernowk8_init(void)
static int __cpuinit powernowk8_init(void)
{
unsigned int i, supported_cpus = 0;
......
......@@ -268,7 +268,7 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
pkmap_page_table = pte;
}
static void __devinit free_new_highpage(struct page *page)
static void __meminit free_new_highpage(struct page *page)
{
set_page_count(page, 1);
__free_page(page);
......
......@@ -628,9 +628,11 @@ static int pfm_write_ibr_dbr(int mode, pfm_context_t *ctx, void *arg, int count,
#include "perfmon_itanium.h"
#include "perfmon_mckinley.h"
#include "perfmon_montecito.h"
#include "perfmon_generic.h"
static pmu_config_t *pmu_confs[]={
&pmu_conf_mont,
&pmu_conf_mck,
&pmu_conf_ita,
&pmu_conf_gen, /* must be last */
......
......@@ -635,3 +635,39 @@ mem_init (void)
ia32_mem_init();
#endif
}
#ifdef CONFIG_MEMORY_HOTPLUG
void online_page(struct page *page)
{
ClearPageReserved(page);
set_page_count(page, 1);
__free_page(page);
totalram_pages++;
num_physpages++;
}
int add_memory(u64 start, u64 size)
{
pg_data_t *pgdat;
struct zone *zone;
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
int ret;
pgdat = NODE_DATA(0);
zone = pgdat->node_zones + ZONE_NORMAL;
ret = __add_pages(zone, start_pfn, nr_pages);
if (ret)
printk("%s: Problem encountered in __add_pages() as ret=%d\n",
__FUNCTION__, ret);
return ret;
}
int remove_memory(u64 start, u64 size)
{
return -EINVAL;
}
#endif
......@@ -454,14 +454,13 @@ static int __devinit is_valid_resource(struct pci_dev *dev, int idx)
return 0;
}
static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
static void __devinit
pcibios_fixup_resources(struct pci_dev *dev, int start, int limit)
{
struct pci_bus_region region;
int i;
int limit = (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) ? \
PCI_BRIDGE_RESOURCES : PCI_NUM_RESOURCES;
for (i = 0; i < limit; i++) {
for (i = start; i < limit; i++) {
if (!dev->resource[i].flags)
continue;
region.start = dev->resource[i].start;
......@@ -472,6 +471,16 @@ static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
}
}
static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, 0, PCI_BRIDGE_RESOURCES);
}
static void __devinit pcibios_fixup_bridge_resources(struct pci_dev *dev)
{
pcibios_fixup_resources(dev, PCI_BRIDGE_RESOURCES, PCI_NUM_RESOURCES);
}
/*
* Called after each bus is probed, but before its children are examined.
*/
......@@ -482,7 +491,7 @@ pcibios_fixup_bus (struct pci_bus *b)
if (b->self) {
pci_read_bridge_bases(b);
pcibios_fixup_device_resources(b->self);
pcibios_fixup_bridge_resources(b->self);
}
list_for_each_entry(dev, &b->devices, bus_list)
pcibios_fixup_device_resources(dev);
......
......@@ -40,8 +40,8 @@ struct sn_flush_device_common {
unsigned long sfdl_force_int_addr;
unsigned long sfdl_flush_value;
volatile unsigned long *sfdl_flush_addr;
uint32_t sfdl_persistent_busnum;
uint32_t sfdl_persistent_segment;
u32 sfdl_persistent_busnum;
u32 sfdl_persistent_segment;
struct pcibus_info *sfdl_pcibus_info;
};
......@@ -56,7 +56,7 @@ struct sn_flush_device_kernel {
*/
struct sn_flush_nasid_entry {
struct sn_flush_device_kernel **widget_p; // Used as an array of wid_num
uint64_t iio_itte[8];
u64 iio_itte[8];
};
struct hubdev_info {
......@@ -70,8 +70,8 @@ struct hubdev_info {
void *hdi_nodepda;
void *hdi_node_vertex;
uint32_t max_segment_number;
uint32_t max_pcibus_number;
u32 max_segment_number;
u32 max_pcibus_number;
};
extern void hubdev_init_node(nodepda_t *, cnodeid_t);
......
......@@ -25,28 +25,28 @@
/* widget configuration registers */
struct widget_cfg{
uint32_t w_id; /* 0x04 */
uint32_t w_pad_0; /* 0x00 */
uint32_t w_status; /* 0x0c */
uint32_t w_pad_1; /* 0x08 */
uint32_t w_err_upper_addr; /* 0x14 */
uint32_t w_pad_2; /* 0x10 */
uint32_t w_err_lower_addr; /* 0x1c */
uint32_t w_pad_3; /* 0x18 */
uint32_t w_control; /* 0x24 */
uint32_t w_pad_4; /* 0x20 */
uint32_t w_req_timeout; /* 0x2c */
uint32_t w_pad_5; /* 0x28 */
uint32_t w_intdest_upper_addr; /* 0x34 */
uint32_t w_pad_6; /* 0x30 */
uint32_t w_intdest_lower_addr; /* 0x3c */
uint32_t w_pad_7; /* 0x38 */
uint32_t w_err_cmd_word; /* 0x44 */
uint32_t w_pad_8; /* 0x40 */
uint32_t w_llp_cfg; /* 0x4c */
uint32_t w_pad_9; /* 0x48 */
uint32_t w_tflush; /* 0x54 */
uint32_t w_pad_10; /* 0x50 */
u32 w_id; /* 0x04 */
u32 w_pad_0; /* 0x00 */
u32 w_status; /* 0x0c */
u32 w_pad_1; /* 0x08 */
u32 w_err_upper_addr; /* 0x14 */
u32 w_pad_2; /* 0x10 */
u32 w_err_lower_addr; /* 0x1c */
u32 w_pad_3; /* 0x18 */
u32 w_control; /* 0x24 */
u32 w_pad_4; /* 0x20 */
u32 w_req_timeout; /* 0x2c */
u32 w_pad_5; /* 0x28 */
u32 w_intdest_upper_addr; /* 0x34 */
u32 w_pad_6; /* 0x30 */
u32 w_intdest_lower_addr; /* 0x3c */
u32 w_pad_7; /* 0x38 */
u32 w_err_cmd_word; /* 0x44 */
u32 w_pad_8; /* 0x40 */
u32 w_llp_cfg; /* 0x4c */
u32 w_pad_9; /* 0x48 */
u32 w_tflush; /* 0x54 */
u32 w_pad_10; /* 0x50 */
};
/*
......@@ -63,7 +63,7 @@ struct xwidget_info{
struct xwidget_hwid xwi_hwid; /* Widget Identification */
char xwi_masterxid; /* Hub's Widget Port Number */
void *xwi_hubinfo; /* Hub's provider private info */
uint64_t *xwi_hub_provider; /* prom provider functions */
u64 *xwi_hub_provider; /* prom provider functions */
void *xwi_vertex;
};
......
......@@ -132,8 +132,8 @@ static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
* Retrieve the pci device information given the bus and device|function number.
*/
static inline u64
sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
u64 sn_irq_info)
sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
u64 sn_irq_info)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
......@@ -141,7 +141,7 @@ sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
SAL_CALL_NOLOCK(ret_stuff,
(u64) SN_SAL_IOIF_GET_PCIDEV_INFO,
(u64) segment, (u64) bus_number, (u64) devfn,
(u64) segment, (u64) bus_number, (u64) devfn,
(u64) pci_dev,
sn_irq_info, 0, 0);
return ret_stuff.v0;
......@@ -268,7 +268,7 @@ static void sn_fixup_ionodes(void)
*/
static void
sn_pci_window_fixup(struct pci_dev *dev, unsigned int count,
int64_t * pci_addrs)
s64 * pci_addrs)
{
struct pci_controller *controller = PCI_CONTROLLER(dev->bus);
unsigned int i;
......@@ -328,7 +328,7 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
struct pci_bus *host_pci_bus;
struct pci_dev *host_pci_dev;
struct pcidev_info *pcidev_info;
int64_t pci_addrs[PCI_ROM_RESOURCE + 1];
s64 pci_addrs[PCI_ROM_RESOURCE + 1];
struct sn_irq_info *sn_irq_info;
unsigned long size;
unsigned int bus_no, devfn;
......
......@@ -28,7 +28,7 @@ extern int sn_ioif_inited;
static struct list_head **sn_irq_lh;
static spinlock_t sn_irq_info_lock = SPIN_LOCK_UNLOCKED; /* non-IRQ lock */
static inline uint64_t sn_intr_alloc(nasid_t local_nasid, int local_widget,
static inline u64 sn_intr_alloc(nasid_t local_nasid, int local_widget,
u64 sn_irq_info,
int req_irq, nasid_t req_nasid,
int req_slice)
......@@ -123,7 +123,7 @@ static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
list_for_each_entry_safe(sn_irq_info, sn_irq_info_safe,
sn_irq_lh[irq], list) {
uint64_t bridge;
u64 bridge;
int local_widget, status;
nasid_t local_nasid;
struct sn_irq_info *new_irq_info;
......@@ -134,7 +134,7 @@ static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
break;
memcpy(new_irq_info, sn_irq_info, sizeof(struct sn_irq_info));
bridge = (uint64_t) new_irq_info->irq_bridge;
bridge = (u64) new_irq_info->irq_bridge;
if (!bridge) {
kfree(new_irq_info);
break; /* irq is not a device interrupt */
......@@ -349,10 +349,10 @@ static void force_interrupt(int irq)
*/
static void sn_check_intr(int irq, struct sn_irq_info *sn_irq_info)
{
uint64_t regval;
u64 regval;
int irr_reg_num;
int irr_bit;
uint64_t irr_reg;
u64 irr_reg;
struct pcidev_info *pcidev_info;
struct pcibus_info *pcibus_info;
......
......@@ -245,7 +245,7 @@ static int cx_device_reload(struct cx_dev *cx_dev)
cx_dev->bt);
}
static inline uint64_t tiocx_intr_alloc(nasid_t nasid, int widget,
static inline u64 tiocx_intr_alloc(nasid_t nasid, int widget,
u64 sn_irq_info,
int req_irq, nasid_t req_nasid,
int req_slice)
......@@ -302,7 +302,7 @@ struct sn_irq_info *tiocx_irq_alloc(nasid_t nasid, int widget, int irq,
void tiocx_irq_free(struct sn_irq_info *sn_irq_info)
{
uint64_t bridge = (uint64_t) sn_irq_info->irq_bridge;
u64 bridge = (u64) sn_irq_info->irq_bridge;
nasid_t nasid = NASID_GET(bridge);
int widget;
......@@ -313,12 +313,12 @@ void tiocx_irq_free(struct sn_irq_info *sn_irq_info)
}
}
uint64_t tiocx_dma_addr(uint64_t addr)
u64 tiocx_dma_addr(u64 addr)
{
return PHYS_TO_TIODMA(addr);
}
uint64_t tiocx_swin_base(int nasid)
u64 tiocx_swin_base(int nasid)
{
return TIO_SWIN_BASE(nasid, TIOCX_CORELET);
}
......@@ -335,8 +335,8 @@ EXPORT_SYMBOL(tiocx_swin_base);
static void tio_conveyor_set(nasid_t nasid, int enable_flag)
{
uint64_t ice_frz;
uint64_t disable_cb = (1ull << 61);
u64 ice_frz;
u64 disable_cb = (1ull << 61);
if (!(nasid & 1))
return;
......@@ -388,7 +388,7 @@ static int is_fpga_tio(int nasid, int *bt)
static int bitstream_loaded(nasid_t nasid)
{
uint64_t cx_credits;
u64 cx_credits;
cx_credits = REMOTE_HUB_L(nasid, TIO_ICE_PMI_TX_DYN_CREDIT_STAT_CB3);
cx_credits &= TIO_ICE_PMI_TX_DYN_CREDIT_STAT_CB3_CREDIT_CNT_MASK;
......@@ -404,14 +404,14 @@ static int tiocx_reload(struct cx_dev *cx_dev)
nasid_t nasid = cx_dev->cx_id.nasid;
if (bitstream_loaded(nasid)) {
uint64_t cx_id;
u64 cx_id;
int rv;
rv = ia64_sn_sysctl_tio_clock_reset(nasid);
if (rv) {
printk(KERN_ALERT "CX port JTAG reset failed.\n");
} else {
cx_id = *(volatile uint64_t *)
cx_id = *(volatile u64 *)
(TIO_SWIN_BASE(nasid, TIOCX_CORELET) +
WIDGET_ID);
part_num = XWIDGET_PART_NUM(cx_id);
......
......@@ -18,10 +18,10 @@ int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
* mark_ate: Mark the ate as either free or inuse.
*/
static void mark_ate(struct ate_resource *ate_resource, int start, int number,
uint64_t value)
u64 value)
{
uint64_t *ate = ate_resource->ate;
u64 *ate = ate_resource->ate;
int index;
int length = 0;
......@@ -38,7 +38,7 @@ static int find_free_ate(struct ate_resource *ate_resource, int start,
int count)
{
uint64_t *ate = ate_resource->ate;
u64 *ate = ate_resource->ate;
int index;
int start_free;
......@@ -119,7 +119,7 @@ static inline int alloc_ate_resource(struct ate_resource *ate_resource,
int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
{
int status = 0;
uint64_t flag;
u64 flag;
flag = pcibr_lock(pcibus_info);
status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count);
......@@ -139,7 +139,7 @@ int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
* Setup an Address Translation Entry as specified. Use either the Bridge
* internal maps or the external map RAM, as appropriate.
*/
static inline uint64_t *pcibr_ate_addr(struct pcibus_info *pcibus_info,
static inline u64 *pcibr_ate_addr(struct pcibus_info *pcibus_info,
int ate_index)
{
if (ate_index < pcibus_info->pbi_int_ate_size) {
......@@ -153,7 +153,7 @@ static inline uint64_t *pcibr_ate_addr(struct pcibus_info *pcibus_info,
*/
void inline
ate_write(struct pcibus_info *pcibus_info, int ate_index, int count,
volatile uint64_t ate)
volatile u64 ate)
{
while (count-- > 0) {
if (ate_index < pcibus_info->pbi_int_ate_size) {
......@@ -171,9 +171,9 @@ ate_write(struct pcibus_info *pcibus_info, int ate_index, int count,
void pcibr_ate_free(struct pcibus_info *pcibus_info, int index)
{
volatile uint64_t ate;
volatile u64 ate;
int count;
uint64_t flags;
u64 flags;
if (pcibr_invalidate_ate) {
/* For debugging purposes, clear the valid bit in the ATE */
......
......@@ -41,21 +41,21 @@ extern int sn_ioif_inited;
static dma_addr_t
pcibr_dmamap_ate32(struct pcidev_info *info,
uint64_t paddr, size_t req_size, uint64_t flags)
u64 paddr, size_t req_size, u64 flags)
{
struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
pdi_pcibus_info;
uint8_t internal_device = (PCI_SLOT(pcidev_info->pdi_host_pcidev_info->
u8 internal_device = (PCI_SLOT(pcidev_info->pdi_host_pcidev_info->
pdi_linux_pcidev->devfn)) - 1;
int ate_count;
int ate_index;
uint64_t ate_flags = flags | PCI32_ATE_V;
uint64_t ate;
uint64_t pci_addr;
uint64_t xio_addr;
uint64_t offset;
u64 ate_flags = flags | PCI32_ATE_V;
u64 ate;
u64 pci_addr;
u64 xio_addr;
u64 offset;
/* PIC in PCI-X mode does not support 32bit PageMap mode */
if (IS_PIC_SOFT(pcibus_info) && IS_PCIX(pcibus_info)) {
......@@ -109,12 +109,12 @@ pcibr_dmamap_ate32(struct pcidev_info *info,
}
static dma_addr_t
pcibr_dmatrans_direct64(struct pcidev_info * info, uint64_t paddr,
uint64_t dma_attributes)
pcibr_dmatrans_direct64(struct pcidev_info * info, u64 paddr,
u64 dma_attributes)
{
struct pcibus_info *pcibus_info = (struct pcibus_info *)
((info->pdi_host_pcidev_info)->pdi_pcibus_info);
uint64_t pci_addr;
u64 pci_addr;
/* Translate to Crosstalk View of Physical Address */
pci_addr = (IS_PIC_SOFT(pcibus_info) ? PHYS_TO_DMA(paddr) :
......@@ -127,7 +127,7 @@ pcibr_dmatrans_direct64(struct pcidev_info * info, uint64_t paddr,
/* Handle Bridge Chipset differences */
if (IS_PIC_SOFT(pcibus_info)) {
pci_addr |=
((uint64_t) pcibus_info->
((u64) pcibus_info->
pbi_hub_xid << PIC_PCI64_ATTR_TARG_SHFT);
} else
pci_addr |= TIOCP_PCI64_CMDTYPE_MEM;
......@@ -142,17 +142,17 @@ pcibr_dmatrans_direct64(struct pcidev_info * info, uint64_t paddr,
static dma_addr_t
pcibr_dmatrans_direct32(struct pcidev_info * info,
uint64_t paddr, size_t req_size, uint64_t flags)
u64 paddr, size_t req_size, u64 flags)
{
struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
pdi_pcibus_info;
uint64_t xio_addr;
u64 xio_addr;
uint64_t xio_base;
uint64_t offset;
uint64_t endoff;
u64 xio_base;
u64 offset;
u64 endoff;
if (IS_PCIX(pcibus_info)) {
return 0;
......@@ -209,14 +209,14 @@ pcibr_dma_unmap(struct pci_dev *hwdev, dma_addr_t dma_handle, int direction)
* unlike the PIC Device(x) Write Request Buffer Flush register.
*/
void sn_dma_flush(uint64_t addr)
void sn_dma_flush(u64 addr)
{
nasid_t nasid;
int is_tio;
int wid_num;
int i, j;
uint64_t flags;
uint64_t itte;
u64 flags;
u64 itte;
struct hubdev_info *hubinfo;
volatile struct sn_flush_device_kernel *p;
volatile struct sn_flush_device_common *common;
......@@ -299,8 +299,8 @@ void sn_dma_flush(uint64_t addr)
* If CE ever needs the sn_dma_flush mechanism, we will have
* to account for that here and in tioce_bus_fixup().
*/
uint32_t tio_id = HUB_L(TIO_IOSPACE_ADDR(nasid, TIO_NODE_ID));
uint32_t revnum = XWIDGET_PART_REV_NUM(tio_id);
u32 tio_id = HUB_L(TIO_IOSPACE_ADDR(nasid, TIO_NODE_ID));
u32 revnum = XWIDGET_PART_REV_NUM(tio_id);
/* TIOCP BRINGUP WAR (PV907516): Don't write buffer flush reg */
if ((1 << XWIDGET_PART_REV_NUM_REV(revnum)) & PV907516) {
......@@ -315,7 +315,7 @@ void sn_dma_flush(uint64_t addr)
*common->sfdl_flush_addr = 0;
/* force an interrupt. */
*(volatile uint32_t *)(common->sfdl_force_int_addr) = 1;
*(volatile u32 *)(common->sfdl_force_int_addr) = 1;
/* wait for the interrupt to come back. */
while (*(common->sfdl_flush_addr) != 0x10f)
......
......@@ -23,7 +23,7 @@ int
sal_pcibr_slot_enable(struct pcibus_info *soft, int device, void *resp)
{
struct ia64_sal_retval ret_stuff;
uint64_t busnum;
u64 busnum;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
......@@ -40,7 +40,7 @@ sal_pcibr_slot_disable(struct pcibus_info *soft, int device, int action,
void *resp)
{
struct ia64_sal_retval ret_stuff;
uint64_t busnum;
u64 busnum;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
......@@ -56,7 +56,7 @@ sal_pcibr_slot_disable(struct pcibus_info *soft, int device, int action,
static int sal_pcibr_error_interrupt(struct pcibus_info *soft)
{
struct ia64_sal_retval ret_stuff;
uint64_t busnum;
u64 busnum;
int segment;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
......@@ -159,9 +159,9 @@ pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
/* Setup the PMU ATE map */
soft->pbi_int_ate_resource.lowest_free_index = 0;
soft->pbi_int_ate_resource.ate =
kmalloc(soft->pbi_int_ate_size * sizeof(uint64_t), GFP_KERNEL);
kmalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
memset(soft->pbi_int_ate_resource.ate, 0,
(soft->pbi_int_ate_size * sizeof(uint64_t)));
(soft->pbi_int_ate_size * sizeof(u64)));
if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP) {
/* TIO PCI Bridge: find nearest node with CPUs */
......@@ -203,7 +203,7 @@ void pcibr_target_interrupt(struct sn_irq_info *sn_irq_info)
struct pcidev_info *pcidev_info;
struct pcibus_info *pcibus_info;
int bit = sn_irq_info->irq_int_bit;
uint64_t xtalk_addr = sn_irq_info->irq_xtalkaddr;
u64 xtalk_addr = sn_irq_info->irq_xtalkaddr;
pcidev_info = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
if (pcidev_info) {
......
......@@ -23,7 +23,7 @@ union br_ptr {
/*
* Control Register Access -- Read/Write 0000_0020
*/
void pcireg_control_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
void pcireg_control_bit_clr(struct pcibus_info *pcibus_info, u64 bits)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -43,7 +43,7 @@ void pcireg_control_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
}
}
void pcireg_control_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
void pcireg_control_bit_set(struct pcibus_info *pcibus_info, u64 bits)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -66,10 +66,10 @@ void pcireg_control_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
/*
* PCI/PCIX Target Flush Register Access -- Read Only 0000_0050
*/
uint64_t pcireg_tflush_get(struct pcibus_info *pcibus_info)
u64 pcireg_tflush_get(struct pcibus_info *pcibus_info)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
uint64_t ret = 0;
u64 ret = 0;
if (pcibus_info) {
switch (pcibus_info->pbi_bridge_type) {
......@@ -96,10 +96,10 @@ uint64_t pcireg_tflush_get(struct pcibus_info *pcibus_info)
/*
* Interrupt Status Register Access -- Read Only 0000_0100
*/
uint64_t pcireg_intr_status_get(struct pcibus_info * pcibus_info)
u64 pcireg_intr_status_get(struct pcibus_info * pcibus_info)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
uint64_t ret = 0;
u64 ret = 0;
if (pcibus_info) {
switch (pcibus_info->pbi_bridge_type) {
......@@ -121,7 +121,7 @@ uint64_t pcireg_intr_status_get(struct pcibus_info * pcibus_info)
/*
* Interrupt Enable Register Access -- Read/Write 0000_0108
*/
void pcireg_intr_enable_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
void pcireg_intr_enable_bit_clr(struct pcibus_info *pcibus_info, u64 bits)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -141,7 +141,7 @@ void pcireg_intr_enable_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
}
}
void pcireg_intr_enable_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
void pcireg_intr_enable_bit_set(struct pcibus_info *pcibus_info, u64 bits)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -165,7 +165,7 @@ void pcireg_intr_enable_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
* Intr Host Address Register (int_addr) -- Read/Write 0000_0130 - 0000_0168
*/
void pcireg_intr_addr_addr_set(struct pcibus_info *pcibus_info, int int_n,
uint64_t addr)
u64 addr)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -217,10 +217,10 @@ void pcireg_force_intr_set(struct pcibus_info *pcibus_info, int int_n)
/*
* Device(x) Write Buffer Flush Reg Access -- Read Only 0000_0240 - 0000_0258
*/
uint64_t pcireg_wrb_flush_get(struct pcibus_info *pcibus_info, int device)
u64 pcireg_wrb_flush_get(struct pcibus_info *pcibus_info, int device)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
uint64_t ret = 0;
u64 ret = 0;
if (pcibus_info) {
switch (pcibus_info->pbi_bridge_type) {
......@@ -242,7 +242,7 @@ uint64_t pcireg_wrb_flush_get(struct pcibus_info *pcibus_info, int device)
}
void pcireg_int_ate_set(struct pcibus_info *pcibus_info, int ate_index,
uint64_t val)
u64 val)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
......@@ -262,10 +262,10 @@ void pcireg_int_ate_set(struct pcibus_info *pcibus_info, int ate_index,
}
}
uint64_t __iomem *pcireg_int_ate_addr(struct pcibus_info *pcibus_info, int ate_index)
u64 __iomem *pcireg_int_ate_addr(struct pcibus_info *pcibus_info, int ate_index)
{
union br_ptr __iomem *ptr = (union br_ptr __iomem *)pcibus_info->pbi_buscommon.bs_base;
uint64_t __iomem *ret = NULL;
u64 __iomem *ret = NULL;
if (pcibus_info) {
switch (pcibus_info->pbi_bridge_type) {
......
......@@ -16,7 +16,7 @@
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/tioca_provider.h>
uint32_t tioca_gart_found;
u32 tioca_gart_found;
EXPORT_SYMBOL(tioca_gart_found); /* used by agp-sgi */
LIST_HEAD(tioca_list);
......@@ -34,8 +34,8 @@ static int tioca_gart_init(struct tioca_kernel *);
static int
tioca_gart_init(struct tioca_kernel *tioca_kern)
{
uint64_t ap_reg;
uint64_t offset;
u64 ap_reg;
u64 offset;
struct page *tmp;
struct tioca_common *tioca_common;
struct tioca __iomem *ca_base;
......@@ -214,7 +214,7 @@ void
tioca_fastwrite_enable(struct tioca_kernel *tioca_kern)
{
int cap_ptr;
uint32_t reg;
u32 reg;
struct tioca __iomem *tioca_base;
struct pci_dev *pdev;
struct tioca_common *common;
......@@ -276,7 +276,7 @@ EXPORT_SYMBOL(tioca_fastwrite_enable); /* used by agp-sgi */
* We will always use 0x1
* 55:55 - Swap bytes Currently unused
*/
static uint64_t
static u64
tioca_dma_d64(unsigned long paddr)
{
dma_addr_t bus_addr;
......@@ -318,15 +318,15 @@ tioca_dma_d64(unsigned long paddr)
* and so a given CA can only directly target nodes in the range
* xxx - xxx+255.
*/
static uint64_t
tioca_dma_d48(struct pci_dev *pdev, uint64_t paddr)
static u64
tioca_dma_d48(struct pci_dev *pdev, u64 paddr)
{
struct tioca_common *tioca_common;
struct tioca __iomem *ca_base;
uint64_t ct_addr;
u64 ct_addr;
dma_addr_t bus_addr;
uint32_t node_upper;
uint64_t agp_dma_extn;
u32 node_upper;
u64 agp_dma_extn;
struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(pdev);
tioca_common = (struct tioca_common *)pcidev_info->pdi_pcibus_info;
......@@ -367,10 +367,10 @@ tioca_dma_d48(struct pci_dev *pdev, uint64_t paddr)
* dma_addr_t is guaranteed to be contiguous in CA bus space.
*/
static dma_addr_t
tioca_dma_mapped(struct pci_dev *pdev, uint64_t paddr, size_t req_size)
tioca_dma_mapped(struct pci_dev *pdev, u64 paddr, size_t req_size)
{
int i, ps, ps_shift, entry, entries, mapsize, last_entry;
uint64_t xio_addr, end_xio_addr;
u64 xio_addr, end_xio_addr;
struct tioca_common *tioca_common;
struct tioca_kernel *tioca_kern;
dma_addr_t bus_addr = 0;
......@@ -514,10 +514,10 @@ tioca_dma_unmap(struct pci_dev *pdev, dma_addr_t bus_addr, int dir)
* The mapping mode used is based on the device's dma_mask. As a last resort
* use the GART mapped mode.
*/
static uint64_t
tioca_dma_map(struct pci_dev *pdev, uint64_t paddr, size_t byte_count)
static u64
tioca_dma_map(struct pci_dev *pdev, u64 paddr, size_t byte_count)
{
uint64_t mapaddr;
u64 mapaddr;
/*
* If card is 64 or 48 bit addressable, use a direct mapping. 32
......@@ -554,8 +554,8 @@ tioca_error_intr_handler(int irq, void *arg, struct pt_regs *pt)
{
struct tioca_common *soft = arg;
struct ia64_sal_retval ret_stuff;
uint64_t segment;
uint64_t busnum;
u64 segment;
u64 busnum;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
......@@ -620,7 +620,7 @@ tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
INIT_LIST_HEAD(&tioca_kern->ca_dmamaps);
tioca_kern->ca_closest_node =
nasid_to_cnodeid(tioca_common->ca_closest_nasid);
tioca_common->ca_kernel_private = (uint64_t) tioca_kern;
tioca_common->ca_kernel_private = (u64) tioca_kern;
bus = pci_find_bus(tioca_common->ca_common.bs_persist_segment,
tioca_common->ca_common.bs_persist_busnum);
......
......@@ -81,10 +81,10 @@
* 61 - 0 since this is not an MSI transaction
* 60:54 - reserved, MBZ
*/
static uint64_t
static u64
tioce_dma_d64(unsigned long ct_addr)
{
uint64_t bus_addr;
u64 bus_addr;
bus_addr = ct_addr | (1UL << 63);
......@@ -141,9 +141,9 @@ pcidev_to_tioce(struct pci_dev *pdev, struct tioce **base,
* length, and if enough resources exist, fill in the ATE's and construct a
* tioce_dmamap struct to track the mapping.
*/
static uint64_t
static u64
tioce_alloc_map(struct tioce_kernel *ce_kern, int type, int port,
uint64_t ct_addr, int len)
u64 ct_addr, int len)
{
int i;
int j;
......@@ -152,11 +152,11 @@ tioce_alloc_map(struct tioce_kernel *ce_kern, int type, int port,
int entries;
int nates;
int pagesize;
uint64_t *ate_shadow;
uint64_t *ate_reg;
uint64_t addr;
u64 *ate_shadow;
u64 *ate_reg;
u64 addr;
struct tioce *ce_mmr;
uint64_t bus_base;
u64 bus_base;
struct tioce_dmamap *map;
ce_mmr = (struct tioce *)ce_kern->ce_common->ce_pcibus.bs_base;
......@@ -224,7 +224,7 @@ tioce_alloc_map(struct tioce_kernel *ce_kern, int type, int port,
addr = ct_addr;
for (j = 0; j < nates; j++) {
uint64_t ate;
u64 ate;
ate = ATE_MAKE(addr, pagesize);
ate_shadow[i + j] = ate;
......@@ -252,15 +252,15 @@ tioce_alloc_map(struct tioce_kernel *ce_kern, int type, int port,
*
* Map @paddr into 32-bit bus space of the CE associated with @pcidev_info.
*/
static uint64_t
tioce_dma_d32(struct pci_dev *pdev, uint64_t ct_addr)
static u64
tioce_dma_d32(struct pci_dev *pdev, u64 ct_addr)
{
int dma_ok;
int port;
struct tioce *ce_mmr;
struct tioce_kernel *ce_kern;
uint64_t ct_upper;
uint64_t ct_lower;
u64 ct_upper;
u64 ct_lower;
dma_addr_t bus_addr;
ct_upper = ct_addr & ~0x3fffffffUL;
......@@ -269,7 +269,7 @@ tioce_dma_d32(struct pci_dev *pdev, uint64_t ct_addr)
pcidev_to_tioce(pdev, &ce_mmr, &ce_kern, &port);
if (ce_kern->ce_port[port].dirmap_refcnt == 0) {
uint64_t tmp;
u64 tmp;
ce_kern->ce_port[port].dirmap_shadow = ct_upper;
writeq(ct_upper, &ce_mmr->ce_ure_dir_map[port]);
......@@ -295,10 +295,10 @@ tioce_dma_d32(struct pci_dev *pdev, uint64_t ct_addr)
* Given a TIOCE bus address, set the appropriate bit to indicate barrier
* attributes.
*/
static uint64_t
tioce_dma_barrier(uint64_t bus_addr, int on)
static u64
tioce_dma_barrier(u64 bus_addr, int on)
{
uint64_t barrier_bit;
u64 barrier_bit;
/* barrier not supported in M40/M40S mode */
if (TIOCE_M40_ADDR(bus_addr) || TIOCE_M40S_ADDR(bus_addr))
......@@ -351,7 +351,7 @@ tioce_dma_unmap(struct pci_dev *pdev, dma_addr_t bus_addr, int dir)
list_for_each_entry(map, &ce_kern->ce_dmamap_list,
ce_dmamap_list) {
uint64_t last;
u64 last;
last = map->pci_start + map->nbytes - 1;
if (bus_addr >= map->pci_start && bus_addr <= last)
......@@ -385,17 +385,17 @@ tioce_dma_unmap(struct pci_dev *pdev, dma_addr_t bus_addr, int dir)
* This is the main wrapper for mapping host physical pages to CE PCI space.
* The mapping mode used is based on the device's dma_mask.
*/
static uint64_t
tioce_do_dma_map(struct pci_dev *pdev, uint64_t paddr, size_t byte_count,
static u64
tioce_do_dma_map(struct pci_dev *pdev, u64 paddr, size_t byte_count,
int barrier)
{
unsigned long flags;
uint64_t ct_addr;
uint64_t mapaddr = 0;
u64 ct_addr;
u64 mapaddr = 0;
struct tioce_kernel *ce_kern;
struct tioce_dmamap *map;
int port;
uint64_t dma_mask;
u64 dma_mask;
dma_mask = (barrier) ? pdev->dev.coherent_dma_mask : pdev->dma_mask;
......@@ -425,7 +425,7 @@ tioce_do_dma_map(struct pci_dev *pdev, uint64_t paddr, size_t byte_count,
* address bits than this device can support.
*/
list_for_each_entry(map, &ce_kern->ce_dmamap_list, ce_dmamap_list) {
uint64_t last;
u64 last;
last = map->ct_start + map->nbytes - 1;
if (ct_addr >= map->ct_start &&
......@@ -501,8 +501,8 @@ tioce_do_dma_map(struct pci_dev *pdev, uint64_t paddr, size_t byte_count,
* Simply call tioce_do_dma_map() to create a map with the barrier bit clear
* in the address.
*/
static uint64_t
tioce_dma(struct pci_dev *pdev, uint64_t paddr, size_t byte_count)
static u64
tioce_dma(struct pci_dev *pdev, u64 paddr, size_t byte_count)
{
return tioce_do_dma_map(pdev, paddr, byte_count, 0);
}
......@@ -515,8 +515,8 @@ tioce_dma(struct pci_dev *pdev, uint64_t paddr, size_t byte_count)
*
* Simply call tioce_do_dma_map() to create a map with the barrier bit set
* in the address.
*/ static uint64_t
tioce_dma_consistent(struct pci_dev *pdev, uint64_t paddr, size_t byte_count)
*/ static u64
tioce_dma_consistent(struct pci_dev *pdev, u64 paddr, size_t byte_count)
{
return tioce_do_dma_map(pdev, paddr, byte_count, 1);
}
......@@ -551,7 +551,7 @@ tioce_error_intr_handler(int irq, void *arg, struct pt_regs *pt)
tioce_kern_init(struct tioce_common *tioce_common)
{
int i;
uint32_t tmp;
u32 tmp;
struct tioce *tioce_mmr;
struct tioce_kernel *tioce_kern;
......@@ -563,7 +563,7 @@ tioce_kern_init(struct tioce_common *tioce_common)
tioce_kern->ce_common = tioce_common;
spin_lock_init(&tioce_kern->ce_lock);
INIT_LIST_HEAD(&tioce_kern->ce_dmamap_list);
tioce_common->ce_kernel_private = (uint64_t) tioce_kern;
tioce_common->ce_kernel_private = (u64) tioce_kern;
/*
* Determine the secondary bus number of the port2 logical PPB.
......@@ -575,7 +575,7 @@ tioce_kern_init(struct tioce_common *tioce_common)
raw_pci_ops->read(tioce_common->ce_pcibus.bs_persist_segment,
tioce_common->ce_pcibus.bs_persist_busnum,
PCI_DEVFN(2, 0), PCI_SECONDARY_BUS, 1, &tmp);
tioce_kern->ce_port1_secondary = (uint8_t) tmp;
tioce_kern->ce_port1_secondary = (u8) tmp;
/*
* Set PMU pagesize to the largest size available, and zero out
......@@ -615,7 +615,7 @@ tioce_force_interrupt(struct sn_irq_info *sn_irq_info)
struct pcidev_info *pcidev_info;
struct tioce_common *ce_common;
struct tioce *ce_mmr;
uint64_t force_int_val;
u64 force_int_val;
if (!sn_irq_info->irq_bridge)
return;
......@@ -687,7 +687,7 @@ tioce_target_interrupt(struct sn_irq_info *sn_irq_info)
struct tioce_common *ce_common;
struct tioce *ce_mmr;
int bit;
uint64_t vector;
u64 vector;
pcidev_info = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
if (!pcidev_info)
......@@ -699,7 +699,7 @@ tioce_target_interrupt(struct sn_irq_info *sn_irq_info)
bit = sn_irq_info->irq_int_bit;
__sn_setq_relaxed(&ce_mmr->ce_adm_int_mask, (1UL << bit));
vector = (uint64_t)sn_irq_info->irq_irq << INTR_VECTOR_SHFT;
vector = (u64)sn_irq_info->irq_irq << INTR_VECTOR_SHFT;
vector |= sn_irq_info->irq_xtalkaddr;
writeq(vector, &ce_mmr->ce_adm_int_dest[bit]);
__sn_clrq_relaxed(&ce_mmr->ce_adm_int_mask, (1UL << bit));
......
......@@ -12,6 +12,9 @@
#include <linux/reboot.h>
#include <asm/reboot.h>
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
/*
* Urgs ... Too many MIPS machines to handle this in a generic way.
* So handle all using function pointers to machine specific
......@@ -33,6 +36,9 @@ void machine_halt(void)
void machine_power_off(void)
{
if (pm_power_off)
pm_power_off();
_machine_power_off();
}
......@@ -17,7 +17,7 @@ config SH_STANDARD_BIOS
config EARLY_SCIF_CONSOLE
bool "Use early SCIF console"
depends on CPU_SH4
depends on CPU_SH4 || CPU_SH2A && !SH_STANDARD_BIOS
config EARLY_PRINTK
bool "Early printk support"
......
......@@ -17,10 +17,30 @@
cflags-y := -mb
cflags-$(CONFIG_CPU_LITTLE_ENDIAN) := -ml
isa-y := any
isa-$(CONFIG_CPU_SH2) := sh2
isa-$(CONFIG_CPU_SH3) := sh3
isa-$(CONFIG_CPU_SH4) := sh4
isa-$(CONFIG_CPU_SH4A) := sh4a
isa-$(CONFIG_CPU_SH2A) := sh2a
isa-$(CONFIG_SH_DSP) := $(isa-y)-dsp
ifndef CONFIG_MMU
isa-y := $(isa-y)-nommu
endif
ifndef CONFIG_SH_FPU
isa-y := $(isa-y)-nofpu
endif
cflags-y += $(call as-option,-Wa$(comma)-isa=$(isa-y),)
cflags-$(CONFIG_CPU_SH2) += -m2
cflags-$(CONFIG_CPU_SH3) += -m3
cflags-$(CONFIG_CPU_SH4) += -m4 \
$(call cc-option,-mno-implicit-fp,-m4-nofpu)
cflags-$(CONFIG_CPU_SH4A) += $(call cc-option,-m4a-nofpu,)
cflags-$(CONFIG_SH_DSP) += -Wa,-dsp
cflags-$(CONFIG_SH_KGDB) += -g
......@@ -67,9 +87,7 @@ machdir-$(CONFIG_SH_7300_SOLUTION_ENGINE) := se/7300
machdir-$(CONFIG_SH_73180_SOLUTION_ENGINE) := se/73180
machdir-$(CONFIG_SH_STB1_HARP) := harp
machdir-$(CONFIG_SH_STB1_OVERDRIVE) := overdrive
machdir-$(CONFIG_SH_HP620) := hp6xx/hp620
machdir-$(CONFIG_SH_HP680) := hp6xx/hp680
machdir-$(CONFIG_SH_HP690) := hp6xx/hp690
machdir-$(CONFIG_SH_HP6XX) := hp6xx
machdir-$(CONFIG_SH_CQREEK) := cqreek
machdir-$(CONFIG_SH_DMIDA) := dmida
machdir-$(CONFIG_SH_EC3104) := ec3104
......@@ -119,31 +137,39 @@ boot := arch/sh/boot
CPPFLAGS_vmlinux.lds := -traditional
ifneq ($(KBUILD_SRC),)
incdir-prefix := $(srctree)/include/asm-sh/
else
incdir-prefix :=
endif
# Update machine arch and proc symlinks if something which affects
# them changed. We use .arch and .mach to indicate when they were
# updated last, otherwise make uses the target directory mtime.
include/asm-sh/.cpu: $(wildcard include/config/cpu/*.h) include/config/MARKER
@echo ' SYMLINK include/asm-sh/cpu -> include/asm-sh/$(cpuincdir-y)'
ifneq ($(KBUILD_SRC),)
$(Q)mkdir -p include/asm-sh
$(Q)ln -fsn $(srctree)/include/asm-sh/$(cpuincdir-y) include/asm-sh/cpu
else
$(Q)ln -fsn $(cpuincdir-y) include/asm-sh/cpu
endif
$(Q)if [ ! -d include/asm-sh ]; then mkdir -p include/asm-sh; fi
$(Q)ln -fsn $(incdir-prefix)$(cpuincdir-y) include/asm-sh/cpu
@touch $@
# Most boards have their own mach directories. For the ones that
# don't, just reference the parent directory so the semantics are
# kept roughly the same.
include/asm-sh/.mach: $(wildcard include/config/sh/*.h) include/config/MARKER
@echo ' SYMLINK include/asm-sh/mach -> include/asm-sh/$(incdir-y)'
ifneq ($(KBUILD_SRC),)
$(Q)mkdir -p include/asm-sh
$(Q)ln -fsn $(srctree)/include/asm-sh/$(incdir-y) include/asm-sh/mach
else
$(Q)ln -fsn $(incdir-y) include/asm-sh/mach
endif
@echo -n ' SYMLINK include/asm-sh/mach -> '
$(Q)if [ ! -d include/asm-sh ]; then mkdir -p include/asm-sh; fi
$(Q)if [ -d $(incdir-prefix)$(incdir-y) ]; then \
echo -e 'include/asm-sh/$(incdir-y)'; \
ln -fsn $(incdir-prefix)$(incdir-y) \
include/asm-sh/mach; \
else \
echo -e 'include/asm-sh'; \
ln -fsn $(incdir-prefix) include/asm-sh/mach; \
fi
@touch $@
archprepare: maketools include/asm-sh/.cpu include/asm-sh/.mach
.PHONY: maketools FORCE
......
#
# Makefile for the HP620 specific parts of the kernel
# Makefile for the HP6xx specific parts of the kernel
#
obj-y := mach.o setup.o
......
/*
* linux/arch/sh/boards/hp6xx/hp620/mach.c
*
* Copyright (C) 2000 Stuart Menefy (stuart.menefy@st.com)
*
* May be copied or modified under the terms of the GNU General Public
* License. See linux/COPYING for more information.
*
* Machine vector for the HP620
*/
#include <linux/init.h>
#include <asm/machvec.h>
#include <asm/rtc.h>
#include <asm/machvec_init.h>
#include <asm/io.h>
#include <asm/hd64461/hd64461.h>
#include <asm/irq.h>
/*
* The Machine Vector
*/
struct sh_machine_vector mv_hp620 __initmv = {
.mv_nr_irqs = HD64461_IRQBASE+HD64461_IRQ_NUM,
.mv_inb = hd64461_inb,
.mv_inw = hd64461_inw,
.mv_inl = hd64461_inl,
.mv_outb = hd64461_outb,
.mv_outw = hd64461_outw,
.mv_outl = hd64461_outl,
.mv_inb_p = hd64461_inb_p,
.mv_inw_p = hd64461_inw,
.mv_inl_p = hd64461_inl,
.mv_outb_p = hd64461_outb_p,
.mv_outw_p = hd64461_outw,
.mv_outl_p = hd64461_outl,
.mv_insb = hd64461_insb,
.mv_insw = hd64461_insw,
.mv_insl = hd64461_insl,
.mv_outsb = hd64461_outsb,
.mv_outsw = hd64461_outsw,
.mv_outsl = hd64461_outsl,
.mv_irq_demux = hd64461_irq_demux,
};
ALIAS_MV(hp620)
/*
* linux/arch/sh/boards/hp6xx/hp620/setup.c
*
* Copyright (C) 2002 Andriy Skulysh, 2005 Kristoffer Ericson
*
* May be copied or modified under the terms of the GNU General Public
* License. See Linux/COPYING for more information.
*
* Setup code for an HP620.
* Due to similarity with hp680/hp690, the same inits are done (for now)
*/
#include <linux/config.h>
#include <linux/init.h>
#include <asm/hd64461/hd64461.h>
#include <asm/io.h>
#include <asm/hp6xx/hp6xx.h>
#include <asm/cpu/dac.h>
const char *get_system_type(void)
{
return "HP620";
}
int __init platform_setup(void)
{
u16 v;
v = inw(HD64461_STBCR);
v |= HD64461_STBCR_SURTST | HD64461_STBCR_SIRST |
HD64461_STBCR_STM1ST | HD64461_STBCR_STM0ST |
HD64461_STBCR_SAFEST | HD64461_STBCR_SPC0ST |
HD64461_STBCR_SMIAST | HD64461_STBCR_SAFECKE_OST |
HD64461_STBCR_SAFECKE_IST;
outw(v, HD64461_STBCR);
v = inw(HD64461_GPADR);
v |= HD64461_GPADR_SPEAKER | HD64461_GPADR_PCMCIA0;
outw(v, HD64461_GPADR);
sh_dac_disable(DAC_SPEAKER_VOLUME);
return 0;
}
#
# Makefile for the HP680 specific parts of the kernel
#
obj-y := mach.o setup.o
#
# Makefile for the HP690 specific parts of the kernel
#
obj-y := mach.o
/*
* linux/arch/sh/boards/hp6xx/hp690/mach.c
*
* Copyright (C) 2000 Stuart Menefy (stuart.menefy@st.com)
*
* May be copied or modified under the terms of the GNU General Public
* License. See linux/COPYING for more information.
*
* Machine vector for the HP690
*/
#include <linux/init.h>
#include <asm/machvec.h>
#include <asm/rtc.h>
#include <asm/machvec_init.h>
#include <asm/io.h>
#include <asm/hd64461/hd64461.h>
#include <asm/irq.h>
struct sh_machine_vector mv_hp690 __initmv = {
.mv_nr_irqs = HD64461_IRQBASE+HD64461_IRQ_NUM,
.mv_inb = hd64461_inb,
.mv_inw = hd64461_inw,
.mv_inl = hd64461_inl,
.mv_outb = hd64461_outb,
.mv_outw = hd64461_outw,
.mv_outl = hd64461_outl,
.mv_inb_p = hd64461_inb_p,
.mv_inw_p = hd64461_inw,
.mv_inl_p = hd64461_inl,
.mv_outb_p = hd64461_outb_p,
.mv_outw_p = hd64461_outw,
.mv_outl_p = hd64461_outl,
.mv_insb = hd64461_insb,
.mv_insw = hd64461_insw,
.mv_insl = hd64461_insl,
.mv_outsb = hd64461_outsb,
.mv_outsw = hd64461_outsw,
.mv_outsl = hd64461_outsl,
.mv_irq_demux = hd64461_irq_demux,
};
ALIAS_MV(hp690)
/*
* linux/arch/sh/boards/hp6xx/hp680/mach.c
* linux/arch/sh/boards/hp6xx/mach.c
*
* Copyright (C) 2000 Stuart Menefy (stuart.menefy@st.com)
*
......@@ -8,19 +8,12 @@
*
* Machine vector for the HP680
*/
#include <linux/init.h>
#include <asm/machvec.h>
#include <asm/rtc.h>
#include <asm/machvec_init.h>
#include <asm/hd64461.h>
#include <asm/io.h>
#include <asm/hd64461/hd64461.h>
#include <asm/hp6xx/io.h>
#include <asm/irq.h>
struct sh_machine_vector mv_hp680 __initmv = {
struct sh_machine_vector mv_hp6xx __initmv = {
.mv_nr_irqs = HD64461_IRQBASE + HD64461_IRQ_NUM,
.mv_inb = hd64461_inb,
......@@ -50,4 +43,4 @@ struct sh_machine_vector mv_hp680 __initmv = {
.mv_irq_demux = hd64461_irq_demux,
};
ALIAS_MV(hp680)
ALIAS_MV(hp6xx)
......@@ -11,18 +11,19 @@
#include <linux/config.h>
#include <linux/init.h>
#include <asm/hd64461/hd64461.h>
#include <asm/io.h>
#include <asm/hd64461.h>
#include <asm/hp6xx/hp6xx.h>
#include <asm/cpu/dac.h>
const char *get_system_type(void)
{
return "HP680";
return "HP6xx";
}
int __init platform_setup(void)
{
u8 v8;
u16 v;
v = inw(HD64461_STBCR);
v |= HD64461_STBCR_SURTST | HD64461_STBCR_SIRST |
......@@ -30,12 +31,25 @@ int __init platform_setup(void)
HD64461_STBCR_SAFEST | HD64461_STBCR_SPC0ST |
HD64461_STBCR_SMIAST | HD64461_STBCR_SAFECKE_OST |
HD64461_STBCR_SAFECKE_IST;
#ifndef CONFIG_HD64461_ENABLER
v |= HD64461_STBCR_SPC1ST;
#endif
outw(v, HD64461_STBCR);
v = inw(HD64461_GPADR);
v |= HD64461_GPADR_SPEAKER | HD64461_GPADR_PCMCIA0;
outw(v, HD64461_GPADR);
outw(HD64461_PCCGCR_VCC0 | HD64461_PCCSCR_VCC1, HD64461_PCC0GCR);
#ifndef CONFIG_HD64461_ENABLER
outw(HD64461_PCCGCR_VCC0 | HD64461_PCCSCR_VCC1, HD64461_PCC1GCR);
#endif
sh_dac_output(0, DAC_SPEAKER_VOLUME);
sh_dac_disable(DAC_SPEAKER_VOLUME);
v8 = ctrl_inb(DACR);
v8 &= ~DACR_DAE;
ctrl_outb(v8,DACR);
return 0;
}
......@@ -2,7 +2,7 @@
# Makefile for the STMicroelectronics Overdrive specific parts of the kernel
#
obj-y := mach.o setup.o io.o irq.o led.o time.o
obj-y := mach.o setup.o io.o irq.o led.o
obj-$(CONFIG_PCI) += fpga.o galileo.o pcidma.o
......@@ -17,8 +17,6 @@
#include <asm/overdrive/overdrive.h>
#include <asm/overdrive/fpga.h>
extern void od_time_init(void);
const char *get_system_type(void)
{
return "SH7750 Overdrive";
......@@ -31,11 +29,9 @@ int __init platform_setup(void)
{
#ifdef CONFIG_PCI
init_overdrive_fpga();
galileo_init();
galileo_init();
#endif
board_time_init = od_time_init;
/* Enable RS232 receive buffers */
writel(0x1e, OVERDRIVE_CTRL);
}
/*
* arch/sh/boards/overdrive/time.c
*
* Copyright (C) 2000 Stuart Menefy (stuart.menefy@st.com)
* Copyright (C) 2002 Paul Mundt (lethal@chaoticdreams.org)
*
* May be copied or modified under the terms of the GNU General Public
* License. See linux/COPYING for more information.
*
* STMicroelectronics Overdrive Support.
*/
void od_time_init(void)
{
struct frqcr_data {
unsigned short frqcr;
struct {
unsigned char multiplier;
unsigned char divisor;
} factor[3];
};
static struct frqcr_data st40_frqcr_table[] = {
{ 0x000, {{1,1}, {1,1}, {1,2}}},
{ 0x002, {{1,1}, {1,1}, {1,4}}},
{ 0x004, {{1,1}, {1,1}, {1,8}}},
{ 0x008, {{1,1}, {1,2}, {1,2}}},
{ 0x00A, {{1,1}, {1,2}, {1,4}}},
{ 0x00C, {{1,1}, {1,2}, {1,8}}},
{ 0x011, {{1,1}, {2,3}, {1,6}}},
{ 0x013, {{1,1}, {2,3}, {1,3}}},
{ 0x01A, {{1,1}, {1,2}, {1,4}}},
{ 0x01C, {{1,1}, {1,2}, {1,8}}},
{ 0x023, {{1,1}, {2,3}, {1,3}}},
{ 0x02C, {{1,1}, {1,2}, {1,8}}},
{ 0x048, {{1,2}, {1,2}, {1,4}}},
{ 0x04A, {{1,2}, {1,2}, {1,6}}},
{ 0x04C, {{1,2}, {1,2}, {1,8}}},
{ 0x05A, {{1,2}, {1,3}, {1,6}}},
{ 0x05C, {{1,2}, {1,3}, {1,6}}},
{ 0x063, {{1,2}, {1,4}, {1,4}}},
{ 0x06C, {{1,2}, {1,4}, {1,8}}},
{ 0x091, {{1,3}, {1,3}, {1,6}}},
{ 0x093, {{1,3}, {1,3}, {1,6}}},
{ 0x0A3, {{1,3}, {1,6}, {1,6}}},
{ 0x0DA, {{1,4}, {1,4}, {1,8}}},
{ 0x0DC, {{1,4}, {1,4}, {1,8}}},
{ 0x0EC, {{1,4}, {1,8}, {1,8}}},
{ 0x123, {{1,4}, {1,4}, {1,8}}},
{ 0x16C, {{1,4}, {1,8}, {1,8}}},
};
struct memclk_data {
unsigned char multiplier;
unsigned char divisor;
};
static struct memclk_data st40_memclk_table[8] = {
{1,1}, // 000
{1,2}, // 001
{1,3}, // 010
{2,3}, // 011
{1,4}, // 100
{1,6}, // 101
{1,8}, // 110
{1,8} // 111
};
unsigned long pvr;
/*
* This should probably be moved into the SH3 probing code, and then
* use the processor structure to determine which CPU we are running
* on.
*/
pvr = ctrl_inl(CCN_PVR);
printk("PVR %08x\n", pvr);
if (((pvr >> CCN_PVR_CHIP_SHIFT) & CCN_PVR_CHIP_MASK) == CCN_PVR_CHIP_ST40STB1) {
/*
* Unfortunately the STB1 FRQCR values are different from the
* 7750 ones.
*/
struct frqcr_data *d;
int a;
unsigned long memclkcr;
struct memclk_data *e;
for (a=0; a<ARRAY_SIZE(st40_frqcr_table); a++) {
d = &st40_frqcr_table[a];
if (d->frqcr == (frqcr & 0x1ff))
break;
}
if (a == ARRAY_SIZE(st40_frqcr_table)) {
d = st40_frqcr_table;
printk("ERROR: Unrecognised FRQCR value, using default multipliers\n");
}
memclkcr = ctrl_inl(CLOCKGEN_MEMCLKCR);
e = &st40_memclk_table[memclkcr & MEMCLKCR_RATIO_MASK];
printk("Clock multipliers: CPU: %d/%d Bus: %d/%d Mem: %d/%d Periph: %d/%d\n",
d->factor[0].multiplier, d->factor[0].divisor,
d->factor[1].multiplier, d->factor[1].divisor,
e->multiplier, e->divisor,
d->factor[2].multiplier, d->factor[2].divisor);
current_cpu_data.master_clock = current_cpu_data.module_clock *
d->factor[2].divisor /
d->factor[2].multiplier;
current_cpu_data.bus_clock = current_cpu_data.master_clock *
d->factor[1].multiplier /
d->factor[1].divisor;
current_cpu_data.memory_clock = current_cpu_data.master_clock *
e->multiplier / e->divisor;
current_cpu_data.cpu_clock = current_cpu_data.master_clock *
d->factor[0].multiplier /
d->factor[0].divisor;
}
......@@ -3,7 +3,7 @@
*
* SuperH-specific DMA management API
*
* Copyright (C) 2003, 2004 Paul Mundt
* Copyright (C) 2003, 2004, 2005 Paul Mundt
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
......@@ -15,6 +15,7 @@
#include <linux/spinlock.h>
#include <linux/proc_fs.h>
#include <linux/list.h>
#include <linux/platform_device.h>
#include <asm/dma.h>
DEFINE_SPINLOCK(dma_spin_lock);
......@@ -55,16 +56,14 @@ static LIST_HEAD(registered_dmac_list);
struct dma_info *get_dma_info(unsigned int chan)
{
struct list_head *pos, *tmp;
struct dma_info *info;
unsigned int total = 0;
/*
* Look for each DMAC's range to determine who the owner of
* the channel is.
*/
list_for_each_safe(pos, tmp, &registered_dmac_list) {
struct dma_info *info = list_entry(pos, struct dma_info, list);
list_for_each_entry(info, &registered_dmac_list, list) {
total += info->nr_channels;
if (chan > total)
continue;
......@@ -75,6 +74,20 @@ struct dma_info *get_dma_info(unsigned int chan)
return NULL;
}
static unsigned int get_nr_channels(void)
{
struct dma_info *info;
unsigned int nr = 0;
if (unlikely(list_empty(&registered_dmac_list)))
return nr;
list_for_each_entry(info, &registered_dmac_list, list)
nr += info->nr_channels;
return nr;
}
struct dma_channel *get_dma_channel(unsigned int chan)
{
struct dma_info *info = get_dma_info(chan);
......@@ -173,7 +186,7 @@ int dma_xfer(unsigned int chan, unsigned long from,
static int dma_read_proc(char *buf, char **start, off_t off,
int len, int *eof, void *data)
{
struct list_head *pos, *tmp;
struct dma_info *info;
char *p = buf;
if (list_empty(&registered_dmac_list))
......@@ -182,8 +195,7 @@ static int dma_read_proc(char *buf, char **start, off_t off,
/*
* Iterate over each registered DMAC
*/
list_for_each_safe(pos, tmp, &registered_dmac_list) {
struct dma_info *info = list_entry(pos, struct dma_info, list);
list_for_each_entry(info, &registered_dmac_list, list) {
int i;
/*
......@@ -205,9 +217,9 @@ static int dma_read_proc(char *buf, char **start, off_t off,
#endif
int __init register_dmac(struct dma_info *info)
int register_dmac(struct dma_info *info)
{
int i;
unsigned int total_channels, i;
INIT_LIST_HEAD(&info->list);
......@@ -217,6 +229,11 @@ int __init register_dmac(struct dma_info *info)
BUG_ON((info->flags & DMAC_CHANNELS_CONFIGURED) && !info->channels);
info->pdev = platform_device_register_simple((char *)info->name, -1,
NULL, 0);
if (IS_ERR(info->pdev))
return PTR_ERR(info->pdev);
/*
* Don't touch pre-configured channels
*/
......@@ -232,10 +249,12 @@ int __init register_dmac(struct dma_info *info)
memset(info->channels, 0, size);
}
total_channels = get_nr_channels();
for (i = 0; i < info->nr_channels; i++) {
struct dma_channel *chan = info->channels + i;
chan->chan = i;
chan->vchan = i + total_channels;
memcpy(chan->dev_id, "Unused", 7);
......@@ -245,9 +264,7 @@ int __init register_dmac(struct dma_info *info)
init_MUTEX(&chan->sem);
init_waitqueue_head(&chan->wait_queue);
#ifdef CONFIG_SYSFS
dma_create_sysfs_files(chan);
#endif
dma_create_sysfs_files(chan, info);
}
list_add(&info->list, &registered_dmac_list);
......@@ -255,12 +272,18 @@ int __init register_dmac(struct dma_info *info)
return 0;
}
void __exit unregister_dmac(struct dma_info *info)
void unregister_dmac(struct dma_info *info)
{
unsigned int i;
for (i = 0; i < info->nr_channels; i++)
dma_remove_sysfs_files(info->channels + i, info);
if (!(info->flags & DMAC_CHANNELS_CONFIGURED))
kfree(info->channels);
list_del(&info->list);
platform_device_unregister(info->pdev);
}
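As a hedged illustration of the registration flow above, here is a minimal sketch of a hypothetical DMAC driver pairing register_dmac() with unregister_dmac(), following the same shape as the g2/pvr2 drivers further down; the name, channel count and empty ops table are placeholders, not part of this patch.

#include <linux/init.h>
#include <linux/module.h>
#include <asm/dma.h>

static struct dma_ops example_dma_ops = {
	/* controller-specific request/free/xfer hooks would go here */
};

static struct dma_info example_dma_info = {
	.name		= "example_dmac",	/* hypothetical */
	.nr_channels	= 2,
	.ops		= &example_dma_ops,
	.flags		= DMAC_CHANNELS_TEI_CAPABLE,
};

static int __init example_dma_init(void)
{
	/* register_dmac() now also creates the backing platform device
	 * and assigns each channel its virtual number (chan->vchan). */
	return register_dmac(&example_dma_info);
}

static void __exit example_dma_exit(void)
{
	/* Removes the per-channel sysfs files, frees any channels that
	 * were not pre-configured, and drops the platform device. */
	unregister_dmac(&example_dma_info);
}

subsys_initcall(example_dma_init);
module_exit(example_dma_exit);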
static int __init dma_api_init(void)
......
......@@ -140,7 +140,7 @@ static struct dma_ops g2_dma_ops = {
};
static struct dma_info g2_dma_info = {
.name = "G2 DMA",
.name = "g2_dmac",
.nr_channels = 4,
.ops = &g2_dma_ops,
.flags = DMAC_CHANNELS_TEI_CAPABLE,
......@@ -160,6 +160,7 @@ static int __init g2_dma_init(void)
static void __exit g2_dma_exit(void)
{
free_irq(HW_EVENT_G2_DMA, 0);
unregister_dmac(&g2_dma_info);
}
subsys_initcall(g2_dma_init);
......
......@@ -25,14 +25,14 @@
* such, this code is meant for only the simplest of tasks (and shouldn't be
* used in any new drivers at all).
*
* It should also be noted that various functions here are labelled as
* being deprecated. This is due to the fact that the ops->xfer() method is
* the preferred way of doing things (as well as just grabbing the spinlock
* directly). As such, any users of this interface will be warned rather
* loudly.
* NOTE: ops->xfer() is the preferred way of doing things. However, there
* are some users of the ISA DMA API that exist in common code that we
* don't necessarily want to go out of our way to break, so we still
* allow for some compatibility at that level. Any new code is strongly
* advised to run far away from the ISA DMA API and use the SH DMA API
* directly.
*/
unsigned long __deprecated claim_dma_lock(void)
unsigned long claim_dma_lock(void)
{
unsigned long flags;
......@@ -42,19 +42,19 @@ unsigned long __deprecated claim_dma_lock(void)
}
EXPORT_SYMBOL(claim_dma_lock);
void __deprecated release_dma_lock(unsigned long flags)
void release_dma_lock(unsigned long flags)
{
spin_unlock_irqrestore(&dma_spin_lock, flags);
}
EXPORT_SYMBOL(release_dma_lock);
void __deprecated disable_dma(unsigned int chan)
void disable_dma(unsigned int chan)
{
/* Nothing */
}
EXPORT_SYMBOL(disable_dma);
void __deprecated enable_dma(unsigned int chan)
void enable_dma(unsigned int chan)
{
struct dma_info *info = get_dma_info(chan);
struct dma_channel *channel = &info->channels[chan];
......
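To show what the compatibility shim above keeps working, a hedged sketch of the legacy ISA-style calling sequence; the channel number is arbitrary and nothing here is introduced by this patch.

#include <asm/dma.h>

static void legacy_isa_style_dma(unsigned int chan)
{
	unsigned long flags;

	/* Old ISA idiom: hold the DMA spinlock around channel set-up. */
	flags = claim_dma_lock();
	enable_dma(chan);	/* forwarded to the SH DMA API channel ops */
	/* ... program the transfer with the usual ISA helpers ... */
	disable_dma(chan);	/* a no-op on SH, kept for compatibility */
	release_dma_lock(flags);
}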
......@@ -80,7 +80,7 @@ static struct dma_ops pvr2_dma_ops = {
};
static struct dma_info pvr2_dma_info = {
.name = "PowerVR 2 DMA",
.name = "pvr2_dmac",
.nr_channels = 1,
.ops = &pvr2_dma_ops,
.flags = DMAC_CHANNELS_TEI_CAPABLE,
......@@ -98,6 +98,7 @@ static void __exit pvr2_dma_exit(void)
{
free_dma(PVR2_CASCADE_CHAN);
free_irq(HW_EVENT_PVR2_DMA, 0);
unregister_dmac(&pvr2_dma_info);
}
subsys_initcall(pvr2_dma_init);
......
......@@ -5,6 +5,7 @@
*
* Copyright (C) 2000 Takashi YOSHII
* Copyright (C) 2003, 2004 Paul Mundt
* Copyright (C) 2005 Andriy Skulysh
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
......@@ -16,51 +17,28 @@
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <asm/dreamcast/dma.h>
#include <asm/signal.h>
#include <asm/irq.h>
#include <asm/dma.h>
#include <asm/io.h>
#include "dma-sh.h"
/*
* The SuperH DMAC supports a number of transmit sizes, we list them here,
* with their respective values as they appear in the CHCR registers.
*
* Defaults to a 64-bit transfer size.
*/
enum {
XMIT_SZ_64BIT,
XMIT_SZ_8BIT,
XMIT_SZ_16BIT,
XMIT_SZ_32BIT,
XMIT_SZ_256BIT,
};
/*
* The DMA count is defined as the number of bytes to transfer.
*/
static unsigned int ts_shift[] = {
[XMIT_SZ_64BIT] = 3,
[XMIT_SZ_8BIT] = 0,
[XMIT_SZ_16BIT] = 1,
[XMIT_SZ_32BIT] = 2,
[XMIT_SZ_256BIT] = 5,
};
static inline unsigned int get_dmte_irq(unsigned int chan)
{
unsigned int irq;
unsigned int irq = 0;
/*
* Normally we could just do DMTE0_IRQ + chan outright, though in the
* case of the 7751R, the DMTE IRQs for channels >= 4 start right above
* the SCIF
*/
if (chan < 4) {
irq = DMTE0_IRQ + chan;
} else {
#ifdef DMTE4_IRQ
irq = DMTE4_IRQ + chan - 4;
#endif
}
return irq;
......@@ -78,9 +56,7 @@ static inline unsigned int calc_xmit_shift(struct dma_channel *chan)
{
u32 chcr = ctrl_inl(CHCR[chan->chan]);
chcr >>= 4;
return ts_shift[chcr & 0x0007];
return ts_shift[(chcr & CHCR_TS_MASK)>>CHCR_TS_SHIFT];
}
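A worked example of the conversion, assuming a channel whose CHCR TS field selects 32-bit transfer units:

/*
 * calc_xmit_shift(chan) == ts_shift[XMIT_SZ_32BIT] == 2, so a 4096-byte
 * request is programmed into DMATCR as 4096 >> 2 == 1024 transfer units,
 * while get_dma_residue() converts the remaining count the other way,
 * DMATCR << 2, back into bytes.
 */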
/*
......@@ -109,8 +85,13 @@ static irqreturn_t dma_tei(int irq, void *dev_id, struct pt_regs *regs)
static int sh_dmac_request_dma(struct dma_channel *chan)
{
char name[32];
snprintf(name, sizeof(name), "DMAC Transfer End (Channel %d)",
chan->chan);
return request_irq(get_dmte_irq(chan->chan), dma_tei,
SA_INTERRUPT, "DMAC Transfer End", chan);
SA_INTERRUPT, name, chan);
}
static void sh_dmac_free_dma(struct dma_channel *chan)
......@@ -118,10 +99,18 @@ static void sh_dmac_free_dma(struct dma_channel *chan)
free_irq(get_dmte_irq(chan->chan), chan);
}
static void sh_dmac_configure_channel(struct dma_channel *chan, unsigned long chcr)
static void
sh_dmac_configure_channel(struct dma_channel *chan, unsigned long chcr)
{
if (!chcr)
chcr = RS_DUAL;
chcr = RS_DUAL | CHCR_IE;
if (chcr & CHCR_IE) {
chcr &= ~CHCR_IE;
chan->flags |= DMA_TEI_CAPABLE;
} else {
chan->flags &= ~DMA_TEI_CAPABLE;
}
ctrl_outl(chcr, CHCR[chan->chan]);
......@@ -130,22 +119,32 @@ static void sh_dmac_configure_channel(struct dma_channel *chan, unsigned long ch
static void sh_dmac_enable_dma(struct dma_channel *chan)
{
int irq = get_dmte_irq(chan->chan);
int irq;
u32 chcr;
chcr = ctrl_inl(CHCR[chan->chan]);
chcr |= CHCR_DE | CHCR_IE;
chcr |= CHCR_DE;
if (chan->flags & DMA_TEI_CAPABLE)
chcr |= CHCR_IE;
ctrl_outl(chcr, CHCR[chan->chan]);
enable_irq(irq);
if (chan->flags & DMA_TEI_CAPABLE) {
irq = get_dmte_irq(chan->chan);
enable_irq(irq);
}
}
static void sh_dmac_disable_dma(struct dma_channel *chan)
{
int irq = get_dmte_irq(chan->chan);
int irq;
u32 chcr;
disable_irq(irq);
if (chan->flags & DMA_TEI_CAPABLE) {
irq = get_dmte_irq(chan->chan);
disable_irq(irq);
}
chcr = ctrl_inl(CHCR[chan->chan]);
chcr &= ~(CHCR_DE | CHCR_TE | CHCR_IE);
......@@ -158,7 +157,7 @@ static int sh_dmac_xfer_dma(struct dma_channel *chan)
* If we haven't pre-configured the channel with special flags, use
* the defaults.
*/
if (!(chan->flags & DMA_CONFIGURED))
if (unlikely(!(chan->flags & DMA_CONFIGURED)))
sh_dmac_configure_channel(chan, 0);
sh_dmac_disable_dma(chan);
......@@ -178,9 +177,11 @@ static int sh_dmac_xfer_dma(struct dma_channel *chan)
* cascading to the PVR2 DMAC. In this case, we still need to write
* SAR and DAR, regardless of value, in order for cascading to work.
*/
if (chan->sar || (mach_is_dreamcast() && chan->chan == 2))
if (chan->sar || (mach_is_dreamcast() &&
chan->chan == PVR2_CASCADE_CHAN))
ctrl_outl(chan->sar, SAR[chan->chan]);
if (chan->dar || (mach_is_dreamcast() && chan->chan == 2))
if (chan->dar || (mach_is_dreamcast() &&
chan->chan == PVR2_CASCADE_CHAN))
ctrl_outl(chan->dar, DAR[chan->chan]);
ctrl_outl(chan->count >> calc_xmit_shift(chan), DMATCR[chan->chan]);
......@@ -198,17 +199,38 @@ static int sh_dmac_get_dma_residue(struct dma_channel *chan)
return ctrl_inl(DMATCR[chan->chan]) << calc_xmit_shift(chan);
}
#if defined(CONFIG_CPU_SH4)
static irqreturn_t dma_err(int irq, void *dev_id, struct pt_regs *regs)
#ifdef CONFIG_CPU_SUBTYPE_SH7780
#define dmaor_read_reg() ctrl_inw(DMAOR)
#define dmaor_write_reg(data) ctrl_outw(data, DMAOR)
#else
#define dmaor_read_reg() ctrl_inl(DMAOR)
#define dmaor_write_reg(data) ctrl_outl(data, DMAOR)
#endif
static inline int dmaor_reset(void)
{
unsigned long dmaor = ctrl_inl(DMAOR);
unsigned long dmaor = dmaor_read_reg();
/* Try to clear the error flags first, in case they are set */
dmaor &= ~(DMAOR_NMIF | DMAOR_AE);
dmaor_write_reg(dmaor);
printk("DMAE: DMAOR=%lx\n", dmaor);
dmaor |= DMAOR_INIT;
dmaor_write_reg(dmaor);
ctrl_outl(ctrl_inl(DMAOR)&~DMAOR_NMIF, DMAOR);
ctrl_outl(ctrl_inl(DMAOR)&~DMAOR_AE, DMAOR);
ctrl_outl(ctrl_inl(DMAOR)|DMAOR_DME, DMAOR);
/* See if we got an error again */
if ((dmaor_read_reg() & (DMAOR_AE | DMAOR_NMIF))) {
printk(KERN_ERR "dma-sh: Can't initialize DMAOR.\n");
return -EINVAL;
}
return 0;
}
#if defined(CONFIG_CPU_SH4)
static irqreturn_t dma_err(int irq, void *dev_id, struct pt_regs *regs)
{
dmaor_reset();
disable_irq(irq);
return IRQ_HANDLED;
......@@ -224,8 +246,8 @@ static struct dma_ops sh_dmac_ops = {
};
static struct dma_info sh_dmac_info = {
.name = "SuperH DMAC",
.nr_channels = 4,
.name = "sh_dmac",
.nr_channels = CONFIG_NR_ONCHIP_DMA_CHANNELS,
.ops = &sh_dmac_ops,
.flags = DMAC_CHANNELS_TEI_CAPABLE,
};
......@@ -248,7 +270,13 @@ static int __init sh_dmac_init(void)
make_ipr_irq(irq, DMA_IPR_ADDR, DMA_IPR_POS, DMA_PRIORITY);
}
ctrl_outl(0x8000 | DMAOR_DME, DMAOR);
/*
* Initialize DMAOR, and clean up any error flags that may have
* been set.
*/
i = dmaor_reset();
if (i < 0)
return i;
return register_dmac(info);
}
......@@ -258,10 +286,12 @@ static void __exit sh_dmac_exit(void)
#ifdef CONFIG_CPU_SH4
free_irq(DMAE_IRQ, 0);
#endif
unregister_dmac(&sh_dmac_info);
}
subsys_initcall(sh_dmac_init);
module_exit(sh_dmac_exit);
MODULE_AUTHOR("Takashi YOSHII, Paul Mundt, Andriy Skulysh");
MODULE_DESCRIPTION("SuperH On-Chip DMAC Support");
MODULE_LICENSE("GPL");
......@@ -11,6 +11,8 @@
#ifndef __DMA_SH_H
#define __DMA_SH_H
#include <asm/cpu/dma.h>
/* Definitions for the SuperH DMAC */
#define REQ_L 0x00000000
#define REQ_E 0x00080000
......@@ -26,27 +28,47 @@
#define SM_DEC 0x00002000
#define RS_IN 0x00000200
#define RS_OUT 0x00000300
#define TM_BURST 0x0000080
#define TS_8 0x00000010
#define TS_16 0x00000020
#define TS_32 0x00000030
#define TS_64 0x00000000
#define TS_BLK 0x00000040
#define CHCR_DE 0x00000001
#define CHCR_TE 0x00000002
#define CHCR_IE 0x00000004
/* Define the default configuration for dual address memory-memory transfer.
* The 0x400 value represents auto-request, external->external.
*/
#define RS_DUAL (DM_INC | SM_INC | 0x400 | TS_32)
#define DMAOR_COD 0x00000008
/* DMAOR definitions */
#define DMAOR_AE 0x00000004
#define DMAOR_NMIF 0x00000002
#define DMAOR_DME 0x00000001
/*
* Define the default configuration for dual address memory-memory transfer.
* The 0x400 value represents auto-request, external->external.
*/
#define RS_DUAL (DM_INC | SM_INC | 0x400 | TS_32)
#define MAX_DMAC_CHANNELS (CONFIG_NR_ONCHIP_DMA_CHANNELS)
/*
* Subtypes that have fewer channels than this simply need to change
* CONFIG_NR_ONCHIP_DMA_CHANNELS. Likewise, subtypes with a larger number
* of channels should expand on this.
*
* For most subtypes we can easily figure these values out with some
* basic calculation, unfortunately on other subtypes these are more
* scattered, so we just leave it unrolled for simplicity.
*/
#define SAR ((unsigned long[]){SH_DMAC_BASE + 0x00, SH_DMAC_BASE + 0x10, \
SH_DMAC_BASE + 0x20, SH_DMAC_BASE + 0x30, \
SH_DMAC_BASE + 0x50, SH_DMAC_BASE + 0x60})
#define DAR ((unsigned long[]){SH_DMAC_BASE + 0x04, SH_DMAC_BASE + 0x14, \
SH_DMAC_BASE + 0x24, SH_DMAC_BASE + 0x34, \
SH_DMAC_BASE + 0x54, SH_DMAC_BASE + 0x64})
#define DMATCR ((unsigned long[]){SH_DMAC_BASE + 0x08, SH_DMAC_BASE + 0x18, \
SH_DMAC_BASE + 0x28, SH_DMAC_BASE + 0x38, \
SH_DMAC_BASE + 0x58, SH_DMAC_BASE + 0x68})
#define CHCR ((unsigned long[]){SH_DMAC_BASE + 0x0c, SH_DMAC_BASE + 0x1c, \
SH_DMAC_BASE + 0x2c, SH_DMAC_BASE + 0x3c, \
SH_DMAC_BASE + 0x5c, SH_DMAC_BASE + 0x6c})
#define DMAOR (SH_DMAC_BASE + 0x40)
#endif /* __DMA_SH_H */
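A hedged sketch of how these bits are meant to be combined by an API user; the channel number is arbitrary, and dma_configure_channel() is assumed to take the (virtual) channel number plus the raw CHCR flags, as the sysfs store hook further down suggests.

#include <asm/dma.h>
#include "dma-sh.h"

/* Dual-address, auto-request, 32-bit memory-to-memory transfer with the
 * transfer-end interrupt enabled -- the same default that
 * sh_dmac_configure_channel() falls back to. */
static int example_configure(unsigned int chan)
{
	return dma_configure_channel(chan, RS_DUAL | CHCR_IE);
}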
......@@ -3,7 +3,7 @@
*
* sysfs interface for SH DMA API
*
* Copyright (C) 2004 Paul Mundt
* Copyright (C) 2004, 2005 Paul Mundt
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
......@@ -12,7 +12,9 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sysdev.h>
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/string.h>
#include <asm/dma.h>
......@@ -77,7 +79,7 @@ static ssize_t dma_store_config(struct sys_device *dev,
unsigned long config;
config = simple_strtoul(buf, NULL, 0);
dma_configure_channel(channel->chan, config);
dma_configure_channel(channel->vchan, config);
return count;
}
......@@ -111,12 +113,13 @@ static SYSDEV_ATTR(field, S_IRUGO, dma_show_##field, NULL);
dma_ro_attr(count, "0x%08x\n");
dma_ro_attr(flags, "0x%08lx\n");
int __init dma_create_sysfs_files(struct dma_channel *chan)
int dma_create_sysfs_files(struct dma_channel *chan, struct dma_info *info)
{
struct sys_device *dev = &chan->dev;
char name[16];
int ret;
dev->id = chan->chan;
dev->id = chan->vchan;
dev->cls = &dma_sysclass;
ret = sysdev_register(dev);
......@@ -129,6 +132,24 @@ int __init dma_create_sysfs_files(struct dma_channel *chan)
sysdev_create_file(dev, &attr_flags);
sysdev_create_file(dev, &attr_config);
return 0;
snprintf(name, sizeof(name), "dma%d", chan->chan);
return sysfs_create_link(&info->pdev->dev.kobj, &dev->kobj, name);
}
void dma_remove_sysfs_files(struct dma_channel *chan, struct dma_info *info)
{
struct sys_device *dev = &chan->dev;
char name[16];
sysdev_remove_file(dev, &attr_dev_id);
sysdev_remove_file(dev, &attr_count);
sysdev_remove_file(dev, &attr_mode);
sysdev_remove_file(dev, &attr_flags);
sysdev_remove_file(dev, &attr_config);
snprintf(name, sizeof(name), "dma%d", chan->chan);
sysfs_remove_link(&info->pdev->dev.kobj, name);
sysdev_unregister(dev);
}
......@@ -17,6 +17,4 @@ obj-$(CONFIG_SH_KGDB) += kgdb_stub.o kgdb_jmp.o
obj-$(CONFIG_SH_CPU_FREQ) += cpufreq.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
USE_STANDARD_AS_RULE := true
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o
......@@ -2,15 +2,12 @@
# Makefile for the Linux/SuperH CPU-specific backends.
#
obj-y := irq_ipr.o irq_imask.o init.o bus.o
obj-y += irq/ init.o bus.o clock.o
obj-$(CONFIG_CPU_SH2) += sh2/
obj-$(CONFIG_CPU_SH3) += sh3/
obj-$(CONFIG_CPU_SH4) += sh4/
obj-$(CONFIG_SH_RTC) += rtc.o
obj-$(CONFIG_SH_RTC) += rtc.o
obj-$(CONFIG_UBC_WAKEUP) += ubc.o
obj-$(CONFIG_SH_ADC) += adc.o
USE_STANDARD_AS_RULE := true
obj-$(CONFIG_SH_ADC) += adc.o
......@@ -109,6 +109,8 @@ int sh_device_register(struct sh_dev *dev)
/* This is needed for USB OHCI to work */
if (dev->dma_mask)
dev->dev.dma_mask = dev->dma_mask;
if (dev->coherent_dma_mask)
dev->dev.coherent_dma_mask = dev->coherent_dma_mask;
snprintf(dev->dev.bus_id, BUS_ID_SIZE, "%s%u",
dev->name, dev->dev_id);
......
/*
* arch/sh/kernel/cpu/clock.c - SuperH clock framework
*
* Copyright (C) 2005 Paul Mundt
*
* This clock framework is derived from the OMAP version by:
*
* Copyright (C) 2004 Nokia Corporation
* Written by Tuukka Tikkanen <tuukka.tikkanen@elektrobit.com>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/kref.h>
#include <linux/seq_file.h>
#include <linux/err.h>
#include <asm/clock.h>
#include <asm/timer.h>
static LIST_HEAD(clock_list);
static DEFINE_SPINLOCK(clock_lock);
static DECLARE_MUTEX(clock_list_sem);
/*
* Each subtype is expected to define the init routines for these clocks,
* as each subtype (or processor family) will have these clocks at the
* very least. These are all provided through the CPG, which even some of
* the more quirky parts (such as ST40, SH4-202, etc.) still have.
*
* The processor-specific code is expected to register any additional
* clock sources that are of interest.
*/
static struct clk master_clk = {
.name = "master_clk",
.flags = CLK_ALWAYS_ENABLED | CLK_RATE_PROPAGATES,
#ifdef CONFIG_SH_PCLK_FREQ_BOOL
.rate = CONFIG_SH_PCLK_FREQ,
#endif
};
static struct clk module_clk = {
.name = "module_clk",
.parent = &master_clk,
.flags = CLK_ALWAYS_ENABLED | CLK_RATE_PROPAGATES,
};
static struct clk bus_clk = {
.name = "bus_clk",
.parent = &master_clk,
.flags = CLK_ALWAYS_ENABLED | CLK_RATE_PROPAGATES,
};
static struct clk cpu_clk = {
.name = "cpu_clk",
.parent = &master_clk,
.flags = CLK_ALWAYS_ENABLED,
};
/*
* The ordering of these clocks matters, do not change it.
*/
static struct clk *onchip_clocks[] = {
&master_clk,
&module_clk,
&bus_clk,
&cpu_clk,
};
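A minimal sketch of the kind of additional clock a subtype might register on top of the four CPG clocks above, assuming the clk_ops recalc hook takes just the clock and returns nothing (as the calls below suggest); the name and divide-by-4 rule are invented for illustration.

#include <linux/init.h>
#include <linux/err.h>
#include <asm/clock.h>

static void example_clk_recalc(struct clk *clk)
{
	/* Fixed divide-by-4 off the parent; re-run whenever
	 * propagate_rate() walks the children of the parent clock. */
	clk->rate = clk_get_rate(clk->parent) / 4;
}

static struct clk_ops example_clk_ops = {
	.recalc		= example_clk_recalc,
};

static struct clk example_clk = {
	.name		= "example_clk",	/* hypothetical */
	.ops		= &example_clk_ops,
};

static int __init example_clk_setup(void)
{
	struct clk *parent = clk_get("module_clk");

	if (IS_ERR(parent))
		return PTR_ERR(parent);

	example_clk.parent = parent;
	return clk_register(&example_clk);
}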
static void propagate_rate(struct clk *clk)
{
struct clk *clkp;
list_for_each_entry(clkp, &clock_list, node) {
if (likely(clkp->parent != clk))
continue;
if (likely(clkp->ops && clkp->ops->recalc))
clkp->ops->recalc(clkp);
}
}
int __clk_enable(struct clk *clk)
{
/*
* See if this is the first time we're enabling the clock, some
* clocks that are always enabled still require "special"
* initialization. This is especially true if the clock mode
* changes and the clock needs to hunt for the proper set of
* divisors to use before it can effectively recalc.
*/
if (unlikely(atomic_read(&clk->kref.refcount) == 1))
if (clk->ops && clk->ops->init)
clk->ops->init(clk);
if (clk->flags & CLK_ALWAYS_ENABLED)
return 0;
if (likely(clk->ops && clk->ops->enable))
clk->ops->enable(clk);
kref_get(&clk->kref);
return 0;
}
int clk_enable(struct clk *clk)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&clock_lock, flags);
ret = __clk_enable(clk);
spin_unlock_irqrestore(&clock_lock, flags);
return ret;
}
static void clk_kref_release(struct kref *kref)
{
/* Nothing to do */
}
void __clk_disable(struct clk *clk)
{
if (clk->flags & CLK_ALWAYS_ENABLED)
return;
kref_put(&clk->kref, clk_kref_release);
}
void clk_disable(struct clk *clk)
{
unsigned long flags;
spin_lock_irqsave(&clock_lock, flags);
__clk_disable(clk);
spin_unlock_irqrestore(&clock_lock, flags);
}
int clk_register(struct clk *clk)
{
down(&clock_list_sem);
list_add(&clk->node, &clock_list);
kref_init(&clk->kref);
up(&clock_list_sem);
return 0;
}
void clk_unregister(struct clk *clk)
{
down(&clock_list_sem);
list_del(&clk->node);
up(&clock_list_sem);
}
inline unsigned long clk_get_rate(struct clk *clk)
{
return clk->rate;
}
int clk_set_rate(struct clk *clk, unsigned long rate)
{
int ret = -EOPNOTSUPP;
if (likely(clk->ops && clk->ops->set_rate)) {
unsigned long flags;
spin_lock_irqsave(&clock_lock, flags);
ret = clk->ops->set_rate(clk, rate);
spin_unlock_irqrestore(&clock_lock, flags);
}
if (unlikely(clk->flags & CLK_RATE_PROPAGATES))
propagate_rate(clk);
return ret;
}
void clk_recalc_rate(struct clk *clk)
{
if (likely(clk->ops && clk->ops->recalc)) {
unsigned long flags;
spin_lock_irqsave(&clock_lock, flags);
clk->ops->recalc(clk);
spin_unlock_irqrestore(&clock_lock, flags);
}
if (unlikely(clk->flags & CLK_RATE_PROPAGATES))
propagate_rate(clk);
}
struct clk *clk_get(const char *id)
{
struct clk *p, *clk = ERR_PTR(-ENOENT);
down(&clock_list_sem);
list_for_each_entry(p, &clock_list, node) {
if (strcmp(id, p->name) == 0 && try_module_get(p->owner)) {
clk = p;
break;
}
}
up(&clock_list_sem);
return clk;
}
void clk_put(struct clk *clk)
{
if (clk && !IS_ERR(clk))
module_put(clk->owner);
}
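For completeness, a hedged sketch of the consumer side of this interface, using only the calls exported at the end of this file; "module_clk" is one of the core clocks defined above.

#include <linux/kernel.h>
#include <linux/err.h>
#include <asm/clock.h>

static int example_clock_consumer(void)
{
	struct clk *clk;
	unsigned long rate;

	clk = clk_get("module_clk");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	clk_enable(clk);
	rate = clk_get_rate(clk);
	printk(KERN_INFO "module_clk runs at %lu Hz\n", rate);

	/* ... use the peripheral driven by this clock ... */

	clk_disable(clk);
	clk_put(clk);

	return 0;
}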
void __init __attribute__ ((weak))
arch_init_clk_ops(struct clk_ops **ops, int type)
{
}
int __init clk_init(void)
{
int i, ret = 0;
if (unlikely(!master_clk.rate))
/*
* NOTE: This will break if the default divisor has been
* changed.
*
* No one should be changing the default on us; however, if a
* different divisor is in use, we expect a sane value for
* CONFIG_SH_PCLK_FREQ to be defined.
*/
master_clk.rate = get_timer_frequency() * 4;
for (i = 0; i < ARRAY_SIZE(onchip_clocks); i++) {
struct clk *clk = onchip_clocks[i];
arch_init_clk_ops(&clk->ops, i);
ret |= clk_register(clk);
clk_enable(clk);
}
/* Kick the child clocks.. */
propagate_rate(&master_clk);
propagate_rate(&bus_clk);
return ret;
}
int show_clocks(struct seq_file *m)
{
struct clk *clk;
list_for_each_entry_reverse(clk, &clock_list, node) {
unsigned long rate = clk_get_rate(clk);
/*
* Don't bother listing dummy clocks with no ancestry
* that only support enable and disable ops.
*/
if (unlikely(!rate && !clk->parent))
continue;
seq_printf(m, "%-12s\t: %ld.%02ldMHz\n", clk->name,
rate / 1000000, (rate % 1000000) / 10000);
}
return 0;
}
EXPORT_SYMBOL_GPL(clk_register);
EXPORT_SYMBOL_GPL(clk_unregister);
EXPORT_SYMBOL_GPL(clk_get);
EXPORT_SYMBOL_GPL(clk_put);
EXPORT_SYMBOL_GPL(clk_enable);
EXPORT_SYMBOL_GPL(clk_disable);
EXPORT_SYMBOL_GPL(__clk_enable);
EXPORT_SYMBOL_GPL(__clk_disable);
EXPORT_SYMBOL_GPL(clk_get_rate);
EXPORT_SYMBOL_GPL(clk_set_rate);
EXPORT_SYMBOL_GPL(clk_recalc_rate);
#
# Makefile for the Linux/SuperH CPU-specific IRQ handlers.
#
obj-y += ipr.o imask.o
obj-$(CONFIG_CPU_HAS_PINT_IRQ) += pint.o
obj-$(CONFIG_CPU_HAS_INTC2_IRQ) += intc2.o
/* $Id: irq_imask.c,v 1.1.2.1 2002/11/17 10:53:43 mrbrown Exp $
*
* linux/arch/sh/kernel/irq_imask.c
/*
* arch/sh/kernel/cpu/irq/imask.c
*
* Copyright (C) 1999, 2000 Niibe Yutaka
*
* Simple interrupt handling using IMASK of SR register.
*
*/
/* NOTE: Will not work on level 15 */
#include <linux/ptrace.h>
#include <linux/errno.h>
#include <linux/kernel_stat.h>
......@@ -19,13 +15,11 @@
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/bitops.h>
#include <asm/system.h>
#include <asm/irq.h>
#include <linux/spinlock.h>
#include <linux/cache.h>
#include <linux/irq.h>
#include <asm/system.h>
#include <asm/irq.h>
/* Bitmap of IRQ masked */
static unsigned long imask_mask = 0x7fff;
......@@ -40,7 +34,7 @@ static void end_imask_irq(unsigned int irq);
#define IMASK_PRIORITY 15
static unsigned int startup_imask_irq(unsigned int irq)
{
{
/* Nothing to do */
return 0; /* never anything pending */
}
......
/*
* arch/sh/kernel/cpu/irq/pint.c - Interrupt handling for PINT-based IRQs.
*
* Copyright (C) 1999 Niibe Yutaka & Takeshi Yaegashi
* Copyright (C) 2000 Kazumoto Kojima
* Copyright (C) 2003 Takashi Kusuda <kusuda-takashi@hitachi-ul.co.jp>
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#include <linux/config.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <asm/system.h>
#include <asm/io.h>
#include <asm/machvec.h>
static unsigned char pint_map[256];
static unsigned long portcr_mask;
static void enable_pint_irq(unsigned int irq);
static void disable_pint_irq(unsigned int irq);
/* shutdown is same as "disable" */
#define shutdown_pint_irq disable_pint_irq
static void mask_and_ack_pint(unsigned int);
static void end_pint_irq(unsigned int irq);
static unsigned int startup_pint_irq(unsigned int irq)
{
enable_pint_irq(irq);
return 0; /* never anything pending */
}
static struct hw_interrupt_type pint_irq_type = {
.typename = "PINT-IRQ",
.startup = startup_pint_irq,
.shutdown = shutdown_pint_irq,
.enable = enable_pint_irq,
.disable = disable_pint_irq,
.ack = mask_and_ack_pint,
.end = end_pint_irq
};
static void disable_pint_irq(unsigned int irq)
{
unsigned long val, flags;
local_irq_save(flags);
val = ctrl_inw(INTC_INTER);
val &= ~(1 << (irq - PINT_IRQ_BASE));
ctrl_outw(val, INTC_INTER); /* disable PINTn */
portcr_mask &= ~(3 << (irq - PINT_IRQ_BASE)*2);
local_irq_restore(flags);
}
static void enable_pint_irq(unsigned int irq)
{
unsigned long val, flags;
local_irq_save(flags);
val = ctrl_inw(INTC_INTER);
val |= 1 << (irq - PINT_IRQ_BASE);
ctrl_outw(val, INTC_INTER); /* enable PINTn */
portcr_mask |= 3 << (irq - PINT_IRQ_BASE)*2;
local_irq_restore(flags);
}
static void mask_and_ack_pint(unsigned int irq)
{
disable_pint_irq(irq);
}
static void end_pint_irq(unsigned int irq)
{
if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
enable_pint_irq(irq);
}
void make_pint_irq(unsigned int irq)
{
disable_irq_nosync(irq);
irq_desc[irq].handler = &pint_irq_type;
disable_pint_irq(irq);
}
void __init init_IRQ_pint(void)
{
int i;
make_ipr_irq(PINT0_IRQ, PINT0_IPR_ADDR, PINT0_IPR_POS, PINT0_PRIORITY);
make_ipr_irq(PINT8_IRQ, PINT8_IPR_ADDR, PINT8_IPR_POS, PINT8_PRIORITY);
enable_irq(PINT0_IRQ);
enable_irq(PINT8_IRQ);
for(i = 0; i < 16; i++)
make_pint_irq(PINT_IRQ_BASE + i);
for(i = 0; i < 256; i++) {
if (i & 1)
pint_map[i] = 0;
else if (i & 2)
pint_map[i] = 1;
else if (i & 4)
pint_map[i] = 2;
else if (i & 8)
pint_map[i] = 3;
else if (i & 0x10)
pint_map[i] = 4;
else if (i & 0x20)
pint_map[i] = 5;
else if (i & 0x40)
pint_map[i] = 6;
else if (i & 0x80)
pint_map[i] = 7;
}
}
int ipr_irq_demux(int irq)
{
unsigned long creg, dreg, d, sav;
if (irq == PINT0_IRQ) {
#if defined(CONFIG_CPU_SUBTYPE_SH7707)
creg = PORT_PACR;
dreg = PORT_PADR;
#else
creg = PORT_PCCR;
dreg = PORT_PCDR;
#endif
sav = ctrl_inw(creg);
ctrl_outw(sav | portcr_mask, creg);
d = (~ctrl_inb(dreg) ^ ctrl_inw(INTC_ICR2)) &
ctrl_inw(INTC_INTER) & 0xff;
ctrl_outw(sav, creg);
if (d == 0)
return irq;
return PINT_IRQ_BASE + pint_map[d];
} else if (irq == PINT8_IRQ) {
#if defined(CONFIG_CPU_SUBTYPE_SH7707)
creg = PORT_PBCR;
dreg = PORT_PBDR;
#else
creg = PORT_PFCR;
dreg = PORT_PFDR;
#endif
sav = ctrl_inw(creg);
ctrl_outw(sav | (portcr_mask >> 16), creg);
d = (~ctrl_inb(dreg) ^ (ctrl_inw(INTC_ICR2) >> 8)) &
(ctrl_inw(INTC_INTER) >> 8) & 0xff;
ctrl_outw(sav, creg);
if (d == 0)
return irq;
return PINT_IRQ_BASE + 8 + pint_map[d];
}
return irq;
}
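A worked example of the demux path above; the port value is made up for illustration.

/*
 * Suppose PINT0 fires and the masked port read yields d = 0x14, i.e.
 * PINT lines 2 and 4 are both pending. The table built in
 * init_IRQ_pint() sets pint_map[0x14] = 2 (the index of the lowest set
 * bit), so ipr_irq_demux() returns PINT_IRQ_BASE + 2 first; line 4 is
 * handled on a subsequent interrupt.
 */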
This diff is collapsed.