Commit 59aee3c2 authored by Jeff Garzik

Merge branch 'master'

parents 0d69ae5f 046d20b7
......@@ -131,3 +131,47 @@ Netlink itself is not reliable protocol, that means that messages can
be lost due to memory pressure or because the receiving process' queue
overflowed, so the caller must be prepared for that. That is why struct cn_msg
[the main connector message header] contains u32 seq and u32 ack fields.
/*****************************************/
Userspace usage.
/*****************************************/
2.6.14 has a new netlink socket implementation which, by default, does not
allow sending data to netlink groups other than 1.
So, to use a netlink socket (for example via the connector) with a different
group number, the userspace application must first subscribe to that group.
This can be achieved with the following pseudocode:
struct sockaddr_nl l_local;
int s;

s = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);

memset(&l_local, 0, sizeof(l_local));
l_local.nl_family = AF_NETLINK;
l_local.nl_groups = 12345;
l_local.nl_pid = 0;

if (bind(s, (struct sockaddr *)&l_local, sizeof(struct sockaddr_nl)) == -1) {
    perror("bind");
    close(s);
    return -1;
}

{
    int on = l_local.nl_groups;

    setsockopt(s, 270, 1, &on, sizeof(on));
}
Where 270 above is SOL_NETLINK, and 1 is the NETLINK_ADD_MEMBERSHIP socket
option. To drop the multicast subscription one calls the same socket option
with the NETLINK_DROP_MEMBERSHIP parameter, which is defined as 0.
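For completeness, here is a hedged sketch of the receive side, reusing the socket s
set up by the pseudocode above: connector payloads arrive as ordinary netlink
datagrams whose body is a struct cn_msg (declared in linux/connector.h), which is
where the seq and ack fields mentioned earlier live. The buffer size and the choice
to unsubscribe after a single message are arbitrary example values.

/* Hedged sketch: receive one connector message on the already-bound,
 * already-subscribed socket s, inspect the cn_msg seq/ack fields, then
 * drop the group membership again (270 == SOL_NETLINK,
 * 0 == NETLINK_DROP_MEMBERSHIP, as described above).
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>

static void recv_one_and_unsubscribe(int s, int group)
{
    char buf[4096];
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    struct cn_msg *msg;
    ssize_t len;

    len = recv(s, buf, sizeof(buf), 0);
    if (len <= 0) {
        perror("recv");
        return;
    }

    if (NLMSG_OK(nlh, (unsigned int)len)) {
        msg = (struct cn_msg *)NLMSG_DATA(nlh);
        printf("idx=%x val=%x seq=%u ack=%u len=%u\n",
               msg->id.idx, msg->id.val, msg->seq, msg->ack,
               (unsigned int)msg->len);
    }

    /* drop the multicast subscription, as described in the text above */
    setsockopt(s, 270, 0, &group, sizeof(group));
}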
The 2.6.14 netlink code only allows selecting a group which is less than or
equal to the maximum group number supplied at netlink_kernel_create() time.
In the connector's case that is CN_NETLINK_USERS + 0xf, so if you want to use
group number 12345, you must increase CN_NETLINK_USERS to that number.
The additional 0xf numbers are allocated for use by non-in-kernel users.
Due to this limitation, group 0xffffffff does not work now, so one can not
use the connector's add/remove group notifications; as far as I know, only
the cn_test.c test module used them.
Some work in the netlink area is still being done, so things may change in
the 2.6.15 timeframe; if that happens, the documentation will be updated for
that kernel.
......@@ -35,6 +35,7 @@ The driver load creates the following directories under the /sys file system.
/sys/class/firmware/dell_rbu/data
/sys/devices/platform/dell_rbu/image_type
/sys/devices/platform/dell_rbu/data
/sys/devices/platform/dell_rbu/packet_size
The driver supports two types of update mechanism: monolithic and packetized.
Which update mechanism to use depends upon the BIOS currently running on the
system.
......@@ -47,8 +48,26 @@ By default the driver uses monolithic memory for the update type. This can be
changed to packet mode at driver load time by specifying the load
parameter image_type=packet. It can also be changed later, as below:
echo packet > /sys/devices/platform/dell_rbu/image_type
Also, echoing either mono, packet or init into image_type will free up the
memory allocated by the driver.
In packet update mode the packet size has to be given before any packets can
be downloaded. This is done as below:
echo XXXX > /sys/devices/platform/dell_rbu/packet_size
In the packet update mechanism, the user needs to create a new file containing
packets of data arranged back to back. It can be done as follows:
The user creates a packet header, gets a chunk of the BIOS image and
places it next to the packet header; the packet header plus the BIOS image
chunk added together should match the specified packet_size. This makes one
packet; the user then needs to create more such packets out of the entire BIOS
image file and arrange all these packets back to back into one single
file, as sketched below.
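As a rough illustration only, a sketch of assembling such a back-to-back packet
file might look like the following. The packet header format is BIOS specific and
is not described in this document, so it is treated here as an opaque, pre-built
blob; PACKET_SIZE, HDR_SIZE and the file names are all hypothetical example values.

/* Hypothetical sketch: build one file of back-to-back packets, each exactly
 * PACKET_SIZE bytes laid out as [opaque packet header][BIOS image chunk][pad].
 * The header contents (hdr/HDR_SIZE) are placeholders; the real header comes
 * from the BIOS vendor's tools.  PACKET_SIZE must match the value echoed into
 * /sys/devices/platform/dell_rbu/packet_size.
 */
#include <stdio.h>
#include <string.h>

#define PACKET_SIZE 4096    /* example value */
#define HDR_SIZE    128     /* hypothetical header length */

int main(void)
{
    unsigned char hdr[HDR_SIZE];    /* opaque, pre-built packet header */
    unsigned char pkt[PACKET_SIZE];
    FILE *in = fopen("bios.hdr", "rb");       /* monolithic BIOS image */
    FILE *out = fopen("packets.bin", "wb");   /* resulting packet file */
    size_t n;

    if (!in || !out)
        return 1;

    memset(hdr, 0, sizeof(hdr));    /* placeholder for the real header data */

    /* header + image chunk + zero padding == PACKET_SIZE for every packet */
    while ((n = fread(pkt + HDR_SIZE, 1, PACKET_SIZE - HDR_SIZE, in)) > 0) {
        memcpy(pkt, hdr, HDR_SIZE);
        memset(pkt + HDR_SIZE + n, 0, PACKET_SIZE - HDR_SIZE - n);
        fwrite(pkt, 1, PACKET_SIZE, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}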
This file is then copied to /sys/class/firmware/dell_rbu/data.
Once this file gets to the driver, the driver extracts packet_size data from
the file and spreads it across physical memory in contiguous packet_size
chunks.
This method makes sure that all the packets get to the driver in a single
operation.
In a monolithic update the user simply gets the BIOS image (.hdr file) and
copies it to the data file as is, without any change to the BIOS image itself.
Do the steps below to download the BIOS image.
1) echo 1 > /sys/class/firmware/dell_rbu/loading
......@@ -58,7 +77,10 @@ Do the steps below to download the BIOS image.
The /sys/class/firmware/dell_rbu/ entries will remain until the following is
done.
echo -1 > /sys/class/firmware/dell_rbu/loading.
Until this step is completed the drivr cannot be unloaded.
Until this step is completed the driver cannot be unloaded.
Also, echoing either mono, packet or init into image_type will free up the
memory allocated by the driver.
If a user by accident executes steps 1 and 3 above without executing step 2,
the /sys/class/firmware/dell_rbu/ entries will disappear.
The entries can be recreated by doing the following:
......@@ -66,15 +88,11 @@ echo init > /sys/devices/platform/dell_rbu/image_type
NOTE: echoing init in image_type does not change its original value.
Also the driver provides /sys/devices/platform/dell_rbu/data readonly file to
read back the image downloaded. This is useful in case of packet update
mechanism where the above steps 1,2,3 will repeated for every packet.
By reading the /sys/devices/platform/dell_rbu/data file all packet data
downloaded can be verified in a single file.
The packets are arranged in this file one after the other in a FIFO order.
read back the image downloaded.
NOTE:
This driver requires a patch for firmware_class.c which has the addition
of request_firmware_nowait_nohotplug function to wortk
This driver requires a patch for firmware_class.c which has the modified
request_firmware_nowait function.
Also, after updating the BIOS image, a user mode application needs to execute
code which communicates the BIOS update request to the BIOS, so that on the
next reboot the BIOS knows about the new image downloaded and updates itself.
......
===================
KEY REQUEST SERVICE
===================
The key request service is part of the key retention service (refer to
Documentation/keys.txt). This document explains more fully how the
requesting algorithm works.
The process starts by either the kernel requesting a service by calling
request_key():
struct key *request_key(const struct key_type *type,
const char *description,
const char *callout_string);
Or by userspace invoking the request_key system call:
key_serial_t request_key(const char *type,
const char *description,
const char *callout_info,
key_serial_t dest_keyring);
The main difference between the two access points is that the in-kernel
interface does not need to link the key to a keyring to prevent it from being
immediately destroyed. The kernel interface returns a pointer directly to the
key, and it's up to the caller to destroy the key.
The userspace interface links the key to a keyring associated with the process
to prevent the key from going away, and returns the serial number of the key to
the caller.
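As a hedged illustration of the userspace entry point (using the raw system call
directly rather than a library wrapper; the key type "user", the description and
the callout string are just example values):

/* Hedged sketch: request a key of type "user" via the request_key() system
 * call, attaching the result to the session keyring so it is not immediately
 * destroyed.  All the string arguments are example values.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>   /* KEY_SPEC_SESSION_KEYRING */

int main(void)
{
    long key;

    key = syscall(__NR_request_key, "user", "debug:example",
                  "example-callout-info", KEY_SPEC_SESSION_KEYRING);
    if (key == -1) {
        perror("request_key");   /* e.g. ENOKEY if negatively instantiated */
        return 1;
    }
    printf("got key serial %ld\n", key);
    return 0;
}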
===========
THE PROCESS
===========
A request proceeds in the following manner:
(1) Process A calls request_key() [the userspace syscall calls the kernel
interface].
(2) request_key() searches the process's subscribed keyrings to see if there's
a suitable key there. If there is, it returns the key. If there isn't, and
callout_info is not set, an error is returned. Otherwise the process
proceeds to the next step.
(3) request_key() sees that A doesn't have the desired key yet, so it creates
two things:
(a) An uninstantiated key U of requested type and description.
(b) An authorisation key V that refers to key U and notes that process A
is the context in which key U should be instantiated and secured, and
from which associated key requests may be satisfied.
(4) request_key() then forks and executes /sbin/request-key with a new session
keyring that contains a link to auth key V.
(5) /sbin/request-key execs an appropriate program to perform the actual
instantiation.
(6) The program may want to access another key from A's context (say a
Kerberos TGT key). It just requests the appropriate key, and the keyring
search notes that the session keyring has auth key V in its bottom level.
This will permit it to then search the keyrings of process A with the
UID, GID, groups and security info of process A as if it was process A,
and come up with key W.
(7) The program then does what it must to get the data with which to
instantiate key U, using key W as a reference (perhaps it contacts a
Kerberos server using the TGT) and then instantiates key U.
(8) Upon instantiating key U, auth key V is automatically revoked so that it
may not be used again.
(9) The program then exits 0 and request_key() deletes key V and returns key
U to the caller.
This also extends further. If key W (step 6 above) didn't exist, key W would be
created uninstantiated, another auth key (X) would be created [as per step 3]
and another copy of /sbin/request-key spawned [as per step 4]; but the context
specified by auth key X will still be process A, as it was in auth key V.
This is because process A's keyrings can't simply be attached to
/sbin/request-key at the appropriate places because (a) execve will discard two
of them, and (b) it requires the same UID/GID/Groups all the way through.
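For illustration, a hedged sketch of the sort of helper /sbin/request-key might
exec to perform step (7): it obtains the payload somehow and instantiates key U.
The assumption that the key's serial number arrives as argv[1], and the payload
string itself, are simplifications for the example; the real argument layout of
request-key helpers is configured elsewhere.

/* Hedged sketch of an instantiation helper.  Possession of auth key V in the
 * session keyring (placed there by the kernel, step 4) is what authorises the
 * KEYCTL_INSTANTIATE operation on key U.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>

int main(int argc, char *argv[])
{
    long key;
    const char payload[] = "example-secret";   /* stands in for the real data */

    if (argc < 2)
        return 1;
    key = atol(argv[1]);    /* assumed: serial number of key U as argv[1] */

    /* final argument 0: do not link the instantiated key anywhere extra */
    if (syscall(__NR_keyctl, KEYCTL_INSTANTIATE, key,
                payload, strlen(payload), 0) == -1) {
        perror("keyctl_instantiate");
        return 1;   /* non-zero exit => key U is negatively instantiated */
    }
    return 0;       /* exit 0 => request_key() in process A returns key U */
}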
======================
NEGATIVE INSTANTIATION
======================
Rather than instantiating a key, it is possible for the possessor of an
authorisation key to negatively instantiate a key that's under construction.
This is a short duration placeholder that causes any attempt at re-requesting
the key whilst it exists to fail with error ENOKEY.
This is provided to prevent excessive repeated spawning of /sbin/request-key
processes for a key that will never be obtainable.
Should the /sbin/request-key process exit anything other than 0 or die on a
signal, the key under construction will be automatically negatively
instantiated for a short amount of time.
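A hedged sketch of doing this explicitly from userspace, assuming the raw keyctl
syscall is used and the serial number of the key under construction is already
known (the 30-second timeout is just an example value):

/* Hedged sketch: negatively instantiate the key under construction so that
 * re-requests fail quickly with ENOKEY instead of spawning more helpers.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>

int negate_key(long key_under_construction)
{
    /* final argument 0: do not link the negative key anywhere extra */
    if (syscall(__NR_keyctl, KEYCTL_NEGATE, key_under_construction,
                30 /* seconds */, 0) == -1) {
        perror("keyctl_negate");
        return -1;
    }
    return 0;
}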
====================
THE SEARCH ALGORITHM
====================
A search of any particular keyring proceeds in the following fashion:
(1) When the key management code searches for a key (keyring_search_aux) it
firstly calls key_permission(SEARCH) on the keyring it's starting with;
if this denies permission, it doesn't search further.
(2) It considers all the non-keyring keys within that keyring and, if any key
matches the criteria specified, calls key_permission(SEARCH) on it to see
if the key is allowed to be found. If it is, that key is returned; if
not, the search continues, and the error code is retained if of higher
priority than the one currently set.
(3) It then considers all the keyring-type keys in the keyring it's currently
searching. It calls key_permission(SEARCH) on each keyring, and if this
grants permission, it recurses, executing steps (2) and (3) on that
keyring.
The process stops immediately a valid key is found with permission granted to
use it. Any error from a previous match attempt is discarded and the key is
returned.
When search_process_keyrings() is invoked, it performs the following searches
until one succeeds:
(1) If extant, the process's thread keyring is searched.
(2) If extant, the process's process keyring is searched.
(3) The process's session keyring is searched.
(4) If the process has a request_key() authorisation key in its session
keyring then:
(a) If extant, the calling process's thread keyring is searched.
(b) If extant, the calling process's process keyring is searched.
(c) The calling process's session keyring is searched.
The moment one succeeds, all pending errors are discarded and the found key is
returned.
Only if all these fail does the whole thing fail with the highest priority
error. Note that several errors may have come from LSM.
The error priority is:
EKEYREVOKED > EKEYEXPIRED > ENOKEY
EACCES/EPERM are only returned on a direct search of a specific keyring where
the basal keyring does not grant Search permission.
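For example, a direct search of one specific keyring is what KEYCTL_SEARCH does,
and that is the case in which EACCES/EPERM can surface; a hedged sketch follows
(the key type, description and the choice of the user keyring as the basal
keyring are example values):

/* Hedged sketch: search one specific keyring directly with KEYCTL_SEARCH.
 * If the basal keyring itself lacks Search permission this can fail with
 * EACCES/EPERM; otherwise a miss is reported as ENOKEY (or EKEYREVOKED /
 * EKEYEXPIRED, following the priority order above).
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>

int main(void)
{
    long found;

    found = syscall(__NR_keyctl, KEYCTL_SEARCH,
                    KEY_SPEC_USER_KEYRING,    /* the basal keyring to search */
                    "user", "debug:example",  /* key type and description */
                    0);                       /* do not link the result */
    if (found == -1) {
        perror("keyctl_search");
        return 1;
    }
    printf("found key %ld\n", found);
    return 0;
}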
......@@ -361,6 +361,8 @@ The main syscalls are:
/sbin/request-key will be invoked in an attempt to obtain a key. The
callout_info string will be passed as an argument to the program.
See also Documentation/keys-request-key.txt.
The keyctl syscall functions are:
......@@ -533,8 +535,8 @@ The keyctl syscall functions are:
(*) Read the payload data from a key:
key_serial_t keyctl(KEYCTL_READ, key_serial_t keyring, char *buffer,
size_t buflen);
long keyctl(KEYCTL_READ, key_serial_t keyring, char *buffer,
size_t buflen);
This function attempts to read the payload data from the specified key
into the buffer. The process must have read permission on the key to
......@@ -555,9 +557,9 @@ The keyctl syscall functions are:
(*) Instantiate a partially constructed key.
key_serial_t keyctl(KEYCTL_INSTANTIATE, key_serial_t key,
const void *payload, size_t plen,
key_serial_t keyring);
long keyctl(KEYCTL_INSTANTIATE, key_serial_t key,
const void *payload, size_t plen,
key_serial_t keyring);
If the kernel calls back to userspace to complete the instantiation of a
key, userspace should use this call to supply data for the key before the
......@@ -576,8 +578,8 @@ The keyctl syscall functions are:
(*) Negatively instantiate a partially constructed key.
key_serial_t keyctl(KEYCTL_NEGATE, key_serial_t key,
unsigned timeout, key_serial_t keyring);
long keyctl(KEYCTL_NEGATE, key_serial_t key,
unsigned timeout, key_serial_t keyring);
If the kernel calls back to userspace to complete the instantiation of a
key, userspace should use this call to mark the key as negative before the
......@@ -688,6 +690,8 @@ payload contents" for more information.
If successful, the key will have been attached to the default keyring for
implicitly obtained request-key keys, as set by KEYCTL_SET_REQKEY_KEYRING.
See also Documentation/keys-request-key.txt.
(*) When it is no longer required, the key should be released using:
......
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 14
EXTRAVERSION =-rc3
EXTRAVERSION =-rc4
NAME=Affluent Albatross
# *DOCUMENTATION*
......
......@@ -53,7 +53,7 @@ tune-$(CONFIG_CPU_ARM926T) :=-mtune=arm9tdmi
tune-$(CONFIG_CPU_SA110) :=-mtune=strongarm110
tune-$(CONFIG_CPU_SA1100) :=-mtune=strongarm1100
tune-$(CONFIG_CPU_XSCALE) :=$(call cc-option,-mtune=xscale,-mtune=strongarm110) -Wa,-mcpu=xscale
tune-$(CONFIG_CPU_V6) :=-mtune=strongarm
tune-$(CONFIG_CPU_V6) :=$(call cc-option,-mtune=arm1136j-s,-mtune=strongarm)
# Need -Uarm for gcc < 3.x
CFLAGS_ABI :=$(call cc-option,-mapcs-32,-mabi=apcs-gnu) $(call cc-option,-mno-thumb-interwork,)
......
......@@ -26,6 +26,8 @@ struct scoop_pcmcia_dev *scoop_devs;
struct scoop_dev {
void *base;
spinlock_t scoop_lock;
unsigned short suspend_clr;
unsigned short suspend_set;
u32 scoop_gpwr;
};
......@@ -90,14 +92,24 @@ EXPORT_SYMBOL(reset_scoop);
EXPORT_SYMBOL(read_scoop_reg);
EXPORT_SYMBOL(write_scoop_reg);
static void check_scoop_reg(struct scoop_dev *sdev)
{
unsigned short mcr;
mcr = SCOOP_REG(sdev->base, SCOOP_MCR);
if ((mcr & 0x100) == 0)
SCOOP_REG(sdev->base, SCOOP_MCR) = 0x0101;
}
#ifdef CONFIG_PM
static int scoop_suspend(struct device *dev, pm_message_t state, uint32_t level)
{
if (level == SUSPEND_POWER_DOWN) {
struct scoop_dev *sdev = dev_get_drvdata(dev);
sdev->scoop_gpwr = SCOOP_REG(sdev->base,SCOOP_GPWR);
SCOOP_REG(sdev->base,SCOOP_GPWR) = 0;
check_scoop_reg(sdev);
sdev->scoop_gpwr = SCOOP_REG(sdev->base, SCOOP_GPWR);
SCOOP_REG(sdev->base, SCOOP_GPWR) = (sdev->scoop_gpwr & ~sdev->suspend_clr) | sdev->suspend_set;
}
return 0;
}
......@@ -107,6 +119,7 @@ static int scoop_resume(struct device *dev, uint32_t level)
if (level == RESUME_POWER_ON) {
struct scoop_dev *sdev = dev_get_drvdata(dev);
check_scoop_reg(sdev);
SCOOP_REG(sdev->base,SCOOP_GPWR) = sdev->scoop_gpwr;
}
return 0;
......@@ -151,6 +164,9 @@ int __init scoop_probe(struct device *dev)
SCOOP_REG(devptr->base, SCOOP_GPCR) = inf->io_dir & 0xffff;
SCOOP_REG(devptr->base, SCOOP_GPWR) = inf->io_out & 0xffff;
devptr->suspend_clr = inf->suspend_clr;
devptr->suspend_set = inf->suspend_set;
return 0;
}
......
......@@ -45,8 +45,8 @@ extern void fp_enter(void);
#define EXPORT_SYMBOL_ALIAS(sym,orig) \
EXPORT_CRC_ALIAS(sym) \
const struct kernel_symbol __ksymtab_##sym \
__attribute__((section("__ksymtab"))) = \
static const struct kernel_symbol __ksymtab_##sym \
__attribute_used__ __attribute__((section("__ksymtab"))) = \
{ (unsigned long)&orig, #sym };
/*
......
......@@ -106,15 +106,10 @@ ENTRY(ret_from_fork)
.endm
.Larm700bug:
ldr r0, [sp, #S_PSR] @ Get calling cpsr
sub lr, lr, #4
str lr, [r8]
msr spsr_cxsf, r0
ldmia sp, {r0 - lr}^ @ Get calling r0 - lr
mov r0, r0
ldr lr, [sp, #S_PC] @ Get PC
add sp, sp, #S_FRAME_SIZE
movs pc, lr
subs pc, lr, #4
#else
.macro arm710_bug_check, instr, temp
.endm
......
......@@ -36,6 +36,7 @@
#include <asm/arch/mmc.h>
#include <asm/arch/udc.h>
#include <asm/arch/corgi.h>
#include <asm/arch/sharpsl.h>
#include <asm/mach/sharpsl_param.h>
#include <asm/hardware/scoop.h>
......
......@@ -125,7 +125,7 @@ static int external_map[] = { 2 };
static int chip0_map[] = { 0 };
static int chip1_map[] = { 1 };
struct mtd_partition anubis_default_nand_part[] = {
static struct mtd_partition anubis_default_nand_part[] = {
[0] = {
.name = "Boot Agent",
.size = SZ_16K,
......
......@@ -230,7 +230,7 @@ static int chip0_map[] = { 1 };
static int chip1_map[] = { 2 };
static int chip2_map[] = { 3 };
struct mtd_partition bast_default_nand_part[] = {
static struct mtd_partition bast_default_nand_part[] = {
[0] = {
.name = "Boot Agent",
.size = SZ_16K,
......@@ -340,7 +340,7 @@ static struct resource bast_dm9k_resource[] = {
* better IO routines can be written and tested
*/
struct dm9000_plat_data bast_dm9k_platdata = {
static struct dm9000_plat_data bast_dm9k_platdata = {
.flags = DM9000_PLATF_16BITONLY
};
......
......@@ -288,7 +288,7 @@ static struct resource vr1000_dm9k1_resource[] = {
* better IO routines can be written and tested
*/
struct dm9000_plat_data vr1000_dm9k_platdata = {
static struct dm9000_plat_data vr1000_dm9k_platdata = {
.flags = DM9000_PLATF_16BITONLY,
};
......
......@@ -125,9 +125,6 @@ static struct platform_device *uart_devices[] __initdata = {
&s3c_uart2
};
/* store our uart devices for the serial driver console */
struct platform_device *s3c2410_uart_devices[3];
static int s3c2410_uart_count = 0;
/* uart registration process */
......
......@@ -151,7 +151,7 @@ void __init s3c2440_init_uarts(struct s3c2410_uartcfg *cfg, int no)
#ifdef CONFIG_PM
struct sleep_save s3c2440_sleep[] = {
static struct sleep_save s3c2440_sleep[] = {
SAVE_ITEM(S3C2440_DSC0),
SAVE_ITEM(S3C2440_DSC1),
SAVE_ITEM(S3C2440_GPJDAT),
......@@ -260,7 +260,7 @@ void __init s3c2440_init_clocks(int xtal)
* as a driver which may support both 2410 and 2440 may try and use it.
*/
int __init s3c2440_core_init(void)
static int __init s3c2440_core_init(void)
{
return sysdev_class_register(&s3c2440_sysclass);
}
......
......@@ -38,6 +38,7 @@
#include <asm/hardware/clock.h>
#include "clock.h"
#include "cpu.h"
static unsigned long timer_startval;
static unsigned long timer_usec_ticks;
......
......@@ -111,11 +111,11 @@ static struct mtd_partition collie_partitions[] = {
static void collie_set_vpp(int vpp)
{
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPCR, read_scoop_reg(SCOOP_GPCR) | COLLIE_SCP_VPEN);
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPCR, read_scoop_reg(&colliescoop_device.dev, SCOOP_GPCR) | COLLIE_SCP_VPEN);
if (vpp)
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR, read_scoop_reg(SCOOP_GPWR) | COLLIE_SCP_VPEN);
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR, read_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR) | COLLIE_SCP_VPEN);
else
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR, read_scoop_reg(SCOOP_GPWR) & ~COLLIE_SCP_VPEN);
write_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR, read_scoop_reg(&colliescoop_device.dev, SCOOP_GPWR) & ~COLLIE_SCP_VPEN);
}
static struct flash_platform_data collie_flash_data = {
......
......@@ -370,21 +370,21 @@ config CPU_BIG_ENDIAN
config CPU_ICACHE_DISABLE
bool "Disable I-Cache"
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6
help
Say Y here to disable the processor instruction cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_DISABLE
bool "Disable D-Cache"
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6
help
Say Y here to disable the processor data cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_WRITETHROUGH
bool "Force write through D-cache"
depends on (CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020) && !CPU_DCACHE_DISABLE
depends on (CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6) && !CPU_DCACHE_DISABLE
default y if CPU_ARM925T
help
Say Y here to use the data cache in writethrough mode. Unless you
......@@ -399,7 +399,7 @@ config CPU_CACHE_ROUND_ROBIN
config CPU_BPREDICT_DISABLE
bool "Disable branch prediction"
depends on CPU_ARM1020
depends on CPU_ARM1020 || CPU_V6
help
Say Y here to disable branch prediction. If unsure, say N.
......
......@@ -111,7 +111,7 @@ proc_alignment_read(char *page, char **start, off_t off, int count, int *eof,
}
static int proc_alignment_write(struct file *file, const char __user *buffer,
unsigned long count, void *data)
unsigned long count, void *data)
{
char mode;
......@@ -119,7 +119,7 @@ static int proc_alignment_write(struct file *file, const char __user *buffer,
if (get_user(mode, buffer))
return -EFAULT;
if (mode >= '0' && mode <= '5')
ai_usermode = mode - '0';
ai_usermode = mode - '0';
}
return count;
}
......@@ -262,7 +262,7 @@ union offset_union {
goto fault; \
} while (0)
#define put32_unaligned_check(val,addr) \
#define put32_unaligned_check(val,addr) \
__put32_unaligned_check("strb", val, addr)
#define put32t_unaligned_check(val,addr) \
......@@ -306,19 +306,19 @@ do_alignment_ldrhstrh(unsigned long addr, unsigned long instr, struct pt_regs *r
return TYPE_LDST;
user:
if (LDST_L_BIT(instr)) {
unsigned long val;
get16t_unaligned_check(val, addr);
if (LDST_L_BIT(instr)) {
unsigned long val;
get16t_unaligned_check(val, addr);
/* signed half-word? */
if (instr & 0x40)
val = (signed long)((signed short) val);
/* signed half-word? */
if (instr & 0x40)
val = (signed long)((signed short) val);
regs->uregs[rd] = val;
} else
put16t_unaligned_check(regs->uregs[rd], addr);
regs->uregs[rd] = val;
} else
put16t_unaligned_check(regs->uregs[rd], addr);
return TYPE_LDST;
return TYPE_LDST;
fault:
return TYPE_FAULT;
......@@ -330,6 +330,9 @@ do_alignment_ldrdstrd(unsigned long addr, unsigned long instr,
{
unsigned int rd = RD_BITS(instr);
if (((rd & 1) == 1) || (rd == 14))
goto bad;
ai_dword += 1;
if (user_mode(regs))
......@@ -339,11 +342,11 @@ do_alignment_ldrdstrd(unsigned long addr, unsigned long instr,
unsigned long val;
get32_unaligned_check(val, addr);
regs->uregs[rd] = val;
get32_unaligned_check(val, addr+4);
regs->uregs[rd+1] = val;
get32_unaligned_check(val, addr + 4);
regs->uregs[rd + 1] = val;
} else {
put32_unaligned_check(regs->uregs[rd], addr);
put32_unaligned_check(regs->uregs[rd+1], addr+4);
put32_unaligned_check(regs->uregs[rd + 1], addr + 4);
}
return TYPE_LDST;
......@@ -353,15 +356,16 @@ do_alignment_ldrdstrd(unsigned long addr, unsigned long instr,
unsigned long val;
get32t_unaligned_check(val, addr);
regs->uregs[rd] = val;
get32t_unaligned_check(val, addr+4);
regs->uregs[rd+1] = val;
get32t_unaligned_check(val, addr + 4);
regs->uregs[rd + 1] = val;
} else {
put32t_unaligned_check(regs->uregs[rd], addr);
put32t_unaligned_check(regs->uregs[rd+1], addr+4);
put32t_unaligned_check(regs->uregs[rd + 1], addr + 4);
}
return TYPE_LDST;
bad:
return TYPE_ERROR;
fault:
return TYPE_FAULT;
}
......@@ -439,7 +443,7 @@ do_alignment_ldmstm(unsigned long addr, unsigned long instr, struct pt_regs *reg
if (LDST_P_EQ_U(instr)) /* U = P */
eaddr += 4;
/*
/*
* For alignment faults on the ARM922T/ARM920T the MMU makes
* the FSR (and hence addr) equal to the updated base address
* of the multiple access rather than the restored value.
......@@ -566,7 +570,7 @@ thumb2arm(u16 tinstr)
/* 6.5.1 Format 3: */
case 0x4800 >> 11: /* 7.1.28 LDR(3) */
/* NOTE: This case is not technically possible. We're
* loading 32-bit memory data via PC relative
* loading 32-bit memory data via PC relative
* addressing mode. So we can and should eliminate
* this case. But I'll leave it here for now.
*/
......@@ -638,7 +642,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (fault) {
type = TYPE_FAULT;
goto bad_or_fault;
goto bad_or_fault;
}
if (user_mode(regs))
......@@ -663,6 +667,8 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
else if ((instr & 0x001000f0) == 0x000000d0 || /* LDRD */
(instr & 0x001000f0) == 0x000000f0) /* STRD */
handler = do_alignment_ldrdstrd;
else if ((instr & 0x01f00ff0) == 0x01000090) /* SWP */
goto swp;
else
goto bad;
break;
......@@ -733,6 +739,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
do_bad_area(current, current->mm, addr, fsr, regs);
return 0;
swp:
printk(KERN_ERR "Alignment trap: not handling swp instruction\n");
bad:
/*
* Oops, we didn't handle the instruction.
......
......@@ -31,11 +31,6 @@
#include <linux/string.h>
#include <asm/system.h>
/* forward declarations */
unsigned int EmulateCPDO(const unsigned int);
unsigned int EmulateCPDT(const unsigned int);
unsigned int EmulateCPRT(const unsigned int);
/* Reset the FPA11 chip. Called to initialize and reset the emulator. */
static void resetFPA11(void)
{
......
......@@ -95,4 +95,24 @@ extern int8 SetRoundingMode(const unsigned int);
extern int8 SetRoundingPrecision(const unsigned int);
extern void nwfpe_init_fpa(union fp_state *fp);
extern unsigned int EmulateAll(unsigned int opcode);
extern unsigned int EmulateCPDT(const unsigned int opcode);
extern unsigned int EmulateCPDO(const unsigned int opcode);
extern unsigned int EmulateCPRT(const unsigned int opcode);
/* fpa11_cpdt.c */
extern unsigned int PerformLDF(const unsigned int opcode);
extern unsigned int PerformSTF(const unsigned int opcode);
extern unsigned int PerformLFM(const unsigned int opcode);
extern unsigned int PerformSFM(const unsigned int opcode);
/* single_cpdo.c */
extern unsigned int SingleCPDO(struct roundingData *roundData,
const unsigned int opcode, FPREG * rFd);
/* double_cpdo.c */
extern unsigned int DoubleCPDO(struct roundingData *roundData,
const unsigned int opcode, FPREG * rFd);
#endif
......@@ -26,12 +26,11 @@
#include "fpa11.inl"
#include "fpmodule.h"
#include "fpmodule.inl"
#include "softfloat.h"
#ifdef CONFIG_FPE_NWFPE_XP
extern flag floatx80_is_nan(floatx80);
#endif
extern flag float64_is_nan(float64);
extern flag float32_is_nan(float32);
unsigned int PerformFLT(const unsigned int opcode);
unsigned int PerformFIX(const unsigned int opcode);
......
......@@ -476,4 +476,10 @@ static inline unsigned int getDestinationSize(const unsigned int opcode)
return (nRc);
}
extern unsigned int checkCondition(const unsigned int opcode,
const unsigned int ccodes);
extern const float64 float64Constant[];
extern const float32 float32Constant[];
#endif
......@@ -265,4 +265,7 @@ static inline flag float64_lt_nocheck(float64 a, float64 b)
return (a != b) && (aSign ^ (a < b));
}
extern flag float32_is_nan( float32 a );
extern flag float64_is_nan( float64 a );
#endif
......@@ -2,11 +2,17 @@
#
# This file is linux/arch/arm/tools/mach-types
#
# Up to date versions of this file can be obtained from:
#
# http://www.arm.linux.org.uk/developer/machines/?action=download
#
# Please do not send patches to this file; it is automatically generated!
# To add an entry into this database, please see Documentation/arm/README,
# or contact rmk@arm.linux.org.uk
# or visit:
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
# Last update: Thu Jun 23 20:19:33 2005
# Last update: Mon Oct 10 09:46:25 2005
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
......@@ -421,7 +427,7 @@ mt02 MACH_MT02 MT02 410
mport3s MACH_MPORT3S MPORT3S 411
ra_alpha MACH_RA_ALPHA RA_ALPHA 412
xcep MACH_XCEP XCEP 413
arcom_mercury MACH_ARCOM_MERCURY ARCOM_MERCURY 414
arcom_vulcan MACH_ARCOM_VULCAN ARCOM_VULCAN 414
stargate MACH_STARGATE STARGATE 415
armadilloj MACH_ARMADILLOJ ARMADILLOJ 416
elroy_jack MACH_ELROY_JACK ELROY_JACK 417
......@@ -454,7 +460,7 @@ esl_sarva MACH_ESL_SARVA ESL_SARVA 443
xm250 MACH_XM250 XM250 444
t6tc1xb MACH_T6TC1XB T6TC1XB 445
ess710 MACH_ESS710 ESS710 446
mx3ads MACH_MX3ADS MX3ADS 447
mx31ads MACH_MX3ADS MX3ADS 447
himalaya MACH_HIMALAYA HIMALAYA 448
bolfenk MACH_BOLFENK BOLFENK 449
at91rm9200kr MACH_AT91RM9200KR AT91RM9200KR 450
......@@ -787,3 +793,79 @@ ez_ixp42x MACH_EZ_IXP42X EZ_IXP42X 778
tapwave_zodiac MACH_TAPWAVE_ZODIAC TAPWAVE_ZODIAC 779
universalmeter MACH_UNIVERSALMETER UNIVERSALMETER 780
hicoarm9 MACH_HICOARM9 HICOARM9 781
pnx4008 MACH_PNX4008 PNX4008 782
kws6000 MACH_KWS6000 KWS6000 783
portux920t MACH_PORTUX920T PORTUX920T 784
ez_x5 MACH_EZ_X5 EZ_X5 785
omap_rudolph MACH_OMAP_RUDOLPH OMAP_RUDOLPH 786
cpuat91 MACH_CPUAT91 CPUAT91 787
rea9200 MACH_REA9200 REA9200 788
acts_pune_sa1110 MACH_ACTS_PUNE_SA1110 ACTS_PUNE_SA1110 789
ixp425 MACH_IXP425 IXP425 790
argonplusodyssey MACH_ODYSSEY ODYSSEY 791
perch MACH_PERCH PERCH 792
eis05r1 MACH_EIS05R1 EIS05R1 793
pepperpad MACH_PEPPERPAD PEPPERPAD 794
sb3010 MACH_SB3010 SB3010 795
rm9200 MACH_RM9200 RM9200 796
dma03 MACH_DMA03 DMA03 797
road_s101 MACH_ROAD_S101 ROAD_S101 798
iq_nextgen_a MACH_IQ_NEXTGEN_A IQ_NEXTGEN_A 799
iq_nextgen_b MACH_IQ_NEXTGEN_B IQ_NEXTGEN_B 800
iq_nextgen_c MACH_IQ_NEXTGEN_C IQ_NEXTGEN_C 801
iq_nextgen_d MACH_IQ_NEXTGEN_D IQ_NEXTGEN_D 802
iq_nextgen_e MACH_IQ_NEXTGEN_E IQ_NEXTGEN_E 803
mallow_at91 MACH_MALLOW_AT91 MALLOW_AT91 804
cybertracker MACH_CYBERTRACKER CYBERTRACKER 805
gesbc931x MACH_GESBC931X GESBC931X 806
centipad MACH_CENTIPAD CENTIPAD 807
armsoc MACH_ARMSOC ARMSOC 808
se4200 MACH_SE4200 SE4200 809
ems197a MACH_EMS197A EMS197A 810
micro9 MACH_MICRO9 MICRO9 811
micro9l MACH_MICRO9L MICRO9L 812
uc5471dsp MACH_UC5471DSP UC5471DSP 813
sj5471eng MACH_SJ5471ENG SJ5471ENG 814
none MACH_CMPXA26X CMPXA26X 815
nc MACH_NC NC 816
omap_palmte MACH_OMAP_PALMTE OMAP_PALMTE 817
ajax52x MACH_AJAX52X AJAX52X 818
siriustar MACH_SIRIUSTAR SIRIUSTAR 819
iodata_hdlg MACH_IODATA_HDLG IODATA_HDLG 820
at91rm9200utl MACH_AT91RM9200UTL AT91RM9200UTL 821
biosafe MACH_BIOSAFE BIOSAFE 822
mp1000 MACH_MP1000 MP1000 823
parsy MACH_PARSY PARSY 824
ccxp270 MACH_CCXP CCXP 825
omap_gsample MACH_OMAP_GSAMPLE OMAP_GSAMPLE 826
realview_eb MACH_REALVIEW_EB REALVIEW_EB 827
samoa MACH_SAMOA SAMOA 828
t3xscale MACH_T3XSCALE T3XSCALE 829
i878 MACH_I878 I878 830
borzoi MACH_BORZOI BORZOI 831
gecko MACH_GECKO GECKO 832
ds101 MACH_DS101 DS101 833
omap_palmtt2 MACH_OMAP_PALMTT2 OMAP_PALMTT2 834
xscale_palmld MACH_XSCALE_PALMLD XSCALE_PALMLD 835
cc9c MACH_CC9C CC9C 836
sbc1670 MACH_SBC1670 SBC1670 837
ixdp28x5 MACH_IXDP28X5 IXDP28X5 838
omap_palmtt MACH_OMAP_PALMTT OMAP_PALMTT 839
ml696k MACH_ML696K ML696K 840
arcom_zeus MACH_ARCOM_ZEUS ARCOM_ZEUS 841
osiris MACH_OSIRIS OSIRIS 842
maestro MACH_MAESTRO MAESTRO 843
tunge2 MACH_TUNGE2 TUNGE2 844
ixbbm MACH_IXBBM IXBBM 845
mx27 MACH_MX27 MX27 846
ax8004 MACH_AX8004 AX8004 847
at91sam9261ek MACH_AT91SAM9261EK AT91SAM9261EK 848
loft MACH_LOFT LOFT 849
magpie MACH_MAGPIE MAGPIE 850
mx21 MACH_MX21 MX21 851
mb87m3400 MACH_MB87M3400 MB87M3400 852
mguard_delta MACH_MGUARD_DELTA MGUARD_DELTA 853
davinci_dvdp MACH_DAVINCI_DVDP DAVINCI_DVDP 854
htcuniversal MACH_HTCUNIVERSAL HTCUNIVERSAL 855
tpad MACH_TPAD TPAD 856
roverp3 MACH_ROVERP3 ROVERP3 857
......@@ -24,7 +24,7 @@ struct dma_coherent_mem {
};
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast gfp)
dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;
struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
......
......@@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#define IPI_SCHEDULE 1
#define IPI_CALL 2
......@@ -28,6 +29,7 @@ spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};
/* CPU masks */
cpumask_t cpu_online_map = CPU_MASK_NONE;
cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
EXPORT_SYMBOL(phys_cpu_present_map);
/* Variables used during SMP boot */
volatile int cpu_now_booting = 0;
......
......@@ -29,7 +29,7 @@ static void __init init_amd(struct cpuinfo_x86 *c)
int r;
#ifdef CONFIG_SMP
unsigned long value;
unsigned long long value;
/* Disable TLB flush filter by setting HWCR.FFDIS on K8
* bit 6 of msr C001_0015
......
......@@ -23,7 +23,7 @@ struct dma_coherent_mem {
};
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast gfp)
dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;
struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
......
......@@ -338,7 +338,11 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
esp = (unsigned long) ka->sa.sa_restorer;
}
return (void __user *)((esp - frame_size) & -8ul);
esp -= frame_size;
/* Align the stack pointer according to the i386 ABI,
* i.e. so that on function entry ((sp + 4) & 15) == 0. */
esp = ((esp + 4) & -16ul) - 4;
return (void __user *) esp;
}
/* These symbols are defined with the addresses in the vsyscall page.
......
......@@ -1016,6 +1016,11 @@ ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
cmc_polling_enabled = 1;
spin_unlock(&cmc_history_lock);
/* If we're being hit with CMC interrupts, we won't
* ever execute the schedule_work() below. Need to
* disable CMC interrupts on this processor now.
*/
ia64_mca_cmc_vector_disable(NULL);
schedule_work(&cmc_disable_work);
/*
......
......@@ -681,6 +681,15 @@ ENTRY(debug_trap)
bl do_debug_trap
bra error_code
ENTRY(ill_trap)
/* void ill_trap(void) */
SWITCH_TO_KERNEL_STACK
SAVE_ALL
ldi r1, #0 ; error_code ; FIXME
mv r0, sp ; pt_regs
bl do_ill_trap
bra error_code
/* Cache flushing handler */
ENTRY(cache_flushing_handler)
......
......@@ -5,8 +5,6 @@
* Hitoshi Yamamoto
*/
/* $Id$ */
/*
* 'traps.c' handles hardware traps and faults after we have saved some
* state in 'entry.S'.
......@@ -35,6 +33,7 @@ asmlinkage void ei_handler(void);
asmlinkage void rie_handler(void);
asmlinkage void debug_trap(void);
asmlinkage void cache_flushing_handler(void);
asmlinkage void ill_trap(void);
#ifdef CONFIG_SMP
extern void smp_reschedule_interrupt(void);
......@@ -77,22 +76,22 @@ void set_eit_vector_entries(void)
eit_vector[5] = BRA_INSN(default_eit_handler, 5);
eit_vector[8] = BRA_INSN(rie_handler, 8);
eit_vector[12] = BRA_INSN(alignment_check, 12);
eit_vector[16] = 0xff000000UL;
eit_vector[16] = BRA_INSN(ill_trap, 16);
eit_vector[17] = BRA_INSN(debug_trap, 17);
eit_vector[18] = BRA_INSN(system_call, 18);
eit_vector[19] = 0xff000000UL;
eit_vector[20] = 0xff000000UL;
eit_vector[21] = 0xff000000UL;
eit_vector[22] = 0xff000000UL;
eit_vector[23] = 0xff000000UL;
eit_vector[24] = 0xff000000UL;
eit_vector[25] = 0xff000000UL;
eit_vector[26] = 0xff000000UL;
eit_vector[27] = 0xff000000UL;
eit_vector[19] = BRA_INSN(ill_trap, 19);
eit_vector[20] = BRA_INSN(ill_trap, 20);
eit_vector[21] = BRA_INSN(ill_trap, 21);
eit_vector[22] = BRA_INSN(ill_trap, 22);
eit_vector[23] = BRA_INSN(ill_trap, 23);
eit_vector[24] = BRA_INSN(ill_trap, 24);
eit_vector[25] = BRA_INSN(ill_trap, 25);
eit_vector[26] = BRA_INSN(ill_trap, 26);
eit_vector[27] = BRA_INSN(ill_trap, 27);
eit_vector[28] = BRA_INSN(cache_flushing_handler, 28);
eit_vector[29] = 0xff000000UL;
eit_vector[30] = 0xff000000UL;
eit_vector[31] = 0xff000000UL;
eit_vector[29] = BRA_INSN(ill_trap, 29);
eit_vector[30] = BRA_INSN(ill_trap, 30);
eit_vector[31] = BRA_INSN(ill_trap, 31);
eit_vector[32] = BRA_INSN(ei_handler, 32);
eit_vector[64] = BRA_INSN(pie_handler, 64);
#ifdef CONFIG_MMU
......@@ -286,7 +285,8 @@ asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
DO_ERROR( 1, SIGTRAP, "debug trap", debug_trap)
DO_ERROR_INFO(0x20, SIGILL, "reserved instruction ", rie_handler, ILL_ILLOPC, regs->bpc)
DO_ERROR_INFO(0x100, SIGILL, "privilege instruction", pie_handler, ILL_PRVOPC, regs->bpc)
DO_ERROR_INFO(0x100, SIGILL, "privileged instruction", pie_handler, ILL_PRVOPC, regs->bpc)
DO_ERROR_INFO(-1, SIGILL, "illegal trap", ill_trap, ILL_ILLTRP, regs->bpc)
extern int handle_unaligned_access(unsigned long, struct pt_regs *);
......@@ -329,4 +329,3 @@ asmlinkage void do_alignment_check(struct pt_regs *regs, long error_code)
set_fs(oldfs);
}
}
......@@ -91,7 +91,7 @@ struct cpu_spec cpu_specs[] = {
.cpu_features = CPU_FTR_COMMON | CPU_FTR_601 |
CPU_FTR_HPTE_TABLE,
.cpu_user_features = COMMON_PPC | PPC_FEATURE_601_INSTR |
PPC_FEATURE_UNIFIED_CACHE,
PPC_FEATURE_UNIFIED_CACHE | PPC_FEATURE_NO_TB,
.icache_bsize = 32,
.dcache_bsize = 32,
.cpu_setup = __setup_cpu_601
......@@ -745,7 +745,8 @@ struct cpu_spec cpu_specs[] = {
.cpu_name = "403GCX",
.cpu_features = CPU_FTR_SPLIT_ID_CACHE |
CPU_FTR_USE_TB,
.cpu_user_features = PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU,
.cpu_user_features = PPC_FEATURE_32 |
PPC_FEATURE_HAS_MMU | PPC_FEATURE_NO_TB,
.icache_bsize = 16,
.dcache_bsize = 16,
},
......
......@@ -401,10 +401,10 @@ EXPORT_SYMBOL(__dma_sync);
static inline void __dma_sync_page_highmem(struct page *page,
unsigned long offset, size_t size, int direction)
{
size_t seg_size = min((size_t)PAGE_SIZE, size) - offset;
size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
size_t cur_size = seg_size;
unsigned long flags, start, seg_offset = offset;
int nr_segs = PAGE_ALIGN(size + (PAGE_SIZE - offset))/PAGE_SIZE;
int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE;
int seg_nr = 0;
local_irq_save(flags);
......
......@@ -195,7 +195,7 @@ via_calibrate_decr(void)
;
dend = get_dec();
tb_ticks_per_jiffy = (dstart - dend) / (6 * (HZ/100));
tb_ticks_per_jiffy = (dstart - dend) / ((6 * HZ)/100);
tb_to_us = mulhwu_scale_factor(dstart - dend, 60000);
printk(KERN_INFO "via_calibrate_decr: ticks per jiffy = %u (%u ticks)\n",
......
......@@ -310,7 +310,7 @@ static void bpa_map_iommu(void)
static void *bpa_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
void *ret;
......
......@@ -53,7 +53,7 @@ int dma_set_mask(struct device *dev, u64 dma_mask)
EXPORT_SYMBOL(dma_set_mask);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
......
......@@ -519,7 +519,7 @@ void iommu_unmap_single(struct iommu_table *tbl, dma_addr_t dma_handle,
* to the dma address (mapping) of the first page.
*/
void *iommu_alloc_coherent(struct iommu_table *tbl, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
void *ret = NULL;
dma_addr_t mapping;
......
......@@ -341,6 +341,19 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
*(unsigned long *)location = my_r2(sechdrs, me);
break;
case R_PPC64_TOC16:
/* Subtact TOC pointer */
value -= my_r2(sechdrs, me);
if (value + 0x8000 > 0xffff) {
printk("%s: bad TOC16 relocation (%lu)\n",
me->name, value);
return -ENOEXEC;
}
*((uint16_t *) location)
= (*((uint16_t *) location) & ~0xffff)
| (value & 0xffff);
break;
case R_PPC64_TOC16_DS:
/* Subtact TOC pointer */
value -= my_r2(sechdrs, me);
......
......@@ -32,7 +32,7 @@
#include "pci.h"
static int __initdata s7a_workaround = -1;
static int __devinitdata s7a_workaround = -1;
#if 0
void pcibios_name_device(struct pci_dev *dev)
......@@ -60,7 +60,7 @@ void pcibios_name_device(struct pci_dev *dev)
DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pcibios_name_device);
#endif
static void __init check_s7a(void)
static void __devinit check_s7a(void)
{
struct device_node *root;
char *model;
......
......@@ -31,7 +31,7 @@
#include "pci.h"
static void *pci_direct_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
void *ret;
......
......@@ -76,7 +76,7 @@ static inline struct iommu_table *devnode_table(struct device *dev)
* to the dma address (mapping) of the first page.
*/
static void *pci_iommu_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
return iommu_alloc_coherent(devnode_table(hwdev), size, dma_handle,
flag);
......
......@@ -218,7 +218,7 @@ static void vio_unmap_sg(struct device *dev, struct scatterlist *sglist,
}
static void *vio_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
dma_addr_t *dma_handle, gfp_t flag)
{
return iommu_alloc_coherent(to_vio_dev(dev)->iommu_table, size,
dma_handle, flag);
......
......@@ -22,6 +22,7 @@
#include <linux/time.h>
#include <linux/timex.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <asm/atomic.h>
#include <asm/processor.h>
......@@ -39,6 +40,8 @@ struct sh_cpuinfo cpu_data[NR_CPUS];
extern void per_cpu_trap_init(void);
cpumask_t cpu_possible_map;
EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_online_map;
static atomic_t cpus_booted = ATOMIC_INIT(0);
......
......@@ -25,62 +25,6 @@ source "init/Kconfig"
menu "General machine setup"
config VT
bool
select INPUT
default y
---help---
If you say Y here, you will get support for terminal devices with
display and keyboard devices. These are called "virtual" because you
can run several virtual terminals (also called virtual consoles) on
one physical terminal. This is rather useful, for example one
virtual terminal can collect system messages and warnings, another
one can be used for a text-mode user session, and a third could run
an X session, all in parallel. Switching between virtual terminals
is done with certain key combinations, usually Alt-<function key>.
The setterm command ("man setterm") can be used to change the
properties (such as colors or beeping) of a virtual terminal. The
man page console_codes(4) ("man console_codes") contains the special
character sequences that can be used to change those properties
directly. The fonts used on virtual terminals can be changed with
the setfont ("man setfont") command and the key bindings are defined
with the loadkeys ("man loadkeys") command.
You need at least one virtual terminal device in order to make use
of your keyboard and monitor. Therefore, only people configuring an
embedded system would want to say N here in order to save some
memory; the only way to log into such a system is then via a serial
or network connection.
If unsure, say Y, or else you won't be able to do much with your new
shiny Linux system :-)
config VT_CONSOLE
bool
default y
---help---
The system console is the device which receives all kernel messages
and warnings and which allows logins in single user mode. If you
answer Y here, a virtual terminal (the device used to interact with
a physical terminal) can be used as system console. This is the most
common mode of operations, so you should say Y here unless you want
the kernel messages be output only to a serial port (in which case
you should say Y to "Console on serial port", below).
If you do say Y here, by default the currently visible virtual
terminal (/dev/tty0) will be used as system console. You can change
that with a kernel command line option such as "console=tty3" which
would use the third virtual terminal as system console. (Try "man
bootparam" or see the documentation of your boot loader (lilo or
loadlin) about how to pass options to the kernel at boot time.)
If unsure, say Y.
config HW_CONSOLE
bool
default y
config SMP
bool "Symmetric multi-processing support (does not work on sun4/sun4c)"
depends on BROKEN
......
......@@ -53,19 +53,18 @@
* be guaranteed to be 0 ... mmu_context.h does guarantee this
* by only using 10 bits in the hwcontext value.
*/
#define CREATE_VPTE_OFFSET1(r1, r2)
#define CREATE_VPTE_OFFSET1(r1, r2) nop
#define CREATE_VPTE_OFFSET2(r1, r2) \
srax r1, 10, r2
#define CREATE_VPTE_NOP nop
#else
#define CREATE_VPTE_OFFSET1(r1, r2) \
srax r1, PAGE_SHIFT, r2
#define CREATE_VPTE_OFFSET2(r1, r2) \
sllx r2, 3, r2
#define CREATE_VPTE_NOP
#endif
/* DTLB ** ICACHE line 1: Quick user TLB misses */
mov TLB_SFSR, %g1
ldxa [%g1 + %g1] ASI_DMMU, %g4 ! Get TAG_ACCESS
andcc %g4, TAG_CONTEXT_BITS, %g0 ! From Nucleus?
from_tl1_trap:
......@@ -74,18 +73,16 @@ from_tl1_trap:
be,pn %xcc, kvmap ! Yep, special processing
CREATE_VPTE_OFFSET2(%g4, %g6) ! Create VPTE offset
cmp %g5, 4 ! Last trap level?
be,pn %xcc, longpath ! Yep, cannot risk VPTE miss
nop ! delay slot
/* DTLB ** ICACHE line 2: User finish + quick kernel TLB misses */
be,pn %xcc, longpath ! Yep, cannot risk VPTE miss
nop ! delay slot
ldxa [%g3 + %g6] ASI_S, %g5 ! Load VPTE
1: brgez,pn %g5, longpath ! Invalid, branch out
nop ! Delay-slot
9: stxa %g5, [%g0] ASI_DTLB_DATA_IN ! Reload TLB
retry ! Trap return
nop
nop
nop
/* DTLB ** ICACHE line 3: winfixups+real_faults */
longpath:
......@@ -106,8 +103,7 @@ longpath:
nop
nop
nop
CREATE_VPTE_NOP
nop
#undef CREATE_VPTE_OFFSET1
#undef CREATE_VPTE_OFFSET2
#undef CREATE_VPTE_NOP
......@@ -14,14 +14,14 @@
*/
/* PROT ** ICACHE line 1: User DTLB protection trap */
stxa %g0, [%g1] ASI_DMMU ! Clear SFSR FaultValid bit
membar #Sync ! Synchronize ASI stores
rdpr %pstate, %g5 ! Move into alternate globals
mov TLB_SFSR, %g1
stxa %g0, [%g1] ASI_DMMU ! Clear FaultValid bit
membar #Sync ! Synchronize stores
rdpr %pstate, %g5 ! Move into alt-globals
wrpr %g5, PSTATE_AG|PSTATE_MG, %pstate
rdpr %tl, %g1 ! Need to do a winfixup?
rdpr %tl, %g1 ! Need a winfixup?
cmp %g1, 1 ! Trap level >1?
mov TLB_TAG_ACCESS, %g4 ! Prepare reload of vaddr
nop
mov TLB_TAG_ACCESS, %g4 ! For reload of vaddr
/* PROT ** ICACHE line 2: More real fault processing */
bgu,pn %xcc, winfix_trampoline ! Yes, perform winfixup
......
......@@ -33,7 +33,7 @@
/* This is trivial with the new code... */
.globl do_fpdis
do_fpdis:
sethi %hi(TSTATE_PEF), %g4 ! IEU0
sethi %hi(TSTATE_PEF), %g4
rdpr %tstate, %g5
andcc %g5, %g4, %g0
be,pt %xcc, 1f
......@@ -50,18 +50,18 @@ do_fpdis:
add %g0, %g0, %g0
ba,a,pt %xcc, rtrap_clr_l6
1: ldub [%g6 + TI_FPSAVED], %g5 ! Load Group
wr %g0, FPRS_FEF, %fprs ! LSU Group+4bubbles
andcc %g5, FPRS_FEF, %g0 ! IEU1 Group
be,a,pt %icc, 1f ! CTI
clr %g7 ! IEU0
ldx [%g6 + TI_GSR], %g7 ! Load Group
1: andcc %g5, FPRS_DL, %g0 ! IEU1
bne,pn %icc, 2f ! CTI
fzero %f0 ! FPA
andcc %g5, FPRS_DU, %g0 ! IEU1 Group
bne,pn %icc, 1f ! CTI
fzero %f2 ! FPA
1: ldub [%g6 + TI_FPSAVED], %g5
wr %g0, FPRS_FEF, %fprs
andcc %g5, FPRS_FEF, %g0
be,a,pt %icc, 1f
clr %g7
ldx [%g6 + TI_GSR], %g7
1: andcc %g5, FPRS_DL, %g0
bne,pn %icc, 2f
fzero %f0
andcc %g5, FPRS_DU, %g0
bne,pn %icc, 1f
fzero %f2
faddd %f0, %f2, %f4
fmuld %f0, %f2, %f6
faddd %f0, %f2, %f8
......@@ -97,15 +97,17 @@ do_fpdis:
faddd %f0, %f2, %f4
fmuld %f0, %f2, %f6
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_1:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS + 0xc0, %g2
faddd %f0, %f2, %f8
fmuld %f0, %f2, %f10
ldda [%g1] ASI_BLK_S, %f32 ! grrr, where is ASI_BLK_NUCLEUS 8-(
membar #Sync
ldda [%g1] ASI_BLK_S, %f32
ldda [%g2] ASI_BLK_S, %f48
membar #Sync
faddd %f0, %f2, %f12
fmuld %f0, %f2, %f14
faddd %f0, %f2, %f16
......@@ -116,7 +118,6 @@ cplus_fptrap_insn_1:
fmuld %f0, %f2, %f26
faddd %f0, %f2, %f28
fmuld %f0, %f2, %f30
membar #Sync
b,pt %xcc, fpdis_exit
nop
2: andcc %g5, FPRS_DU, %g0
......@@ -126,15 +127,17 @@ cplus_fptrap_insn_1:
fzero %f34
ldxa [%g3] ASI_DMMU, %g5
add %g6, TI_FPREGS, %g1
cplus_fptrap_insn_2:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS + 0x40, %g2
faddd %f32, %f34, %f36
fmuld %f32, %f34, %f38
ldda [%g1] ASI_BLK_S, %f0 ! grrr, where is ASI_BLK_NUCLEUS 8-(
membar #Sync
ldda [%g1] ASI_BLK_S, %f0
ldda [%g2] ASI_BLK_S, %f16
membar #Sync
faddd %f32, %f34, %f40
fmuld %f32, %f34, %f42
faddd %f32, %f34, %f44
......@@ -147,18 +150,18 @@ cplus_fptrap_insn_2:
fmuld %f32, %f34, %f58
faddd %f32, %f34, %f60
fmuld %f32, %f34, %f62
membar #Sync
ba,pt %xcc, fpdis_exit
nop
3: mov SECONDARY_CONTEXT, %g3
add %g6, TI_FPREGS, %g1
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_3:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
mov 0x40, %g2
ldda [%g1] ASI_BLK_S, %f0 ! grrr, where is ASI_BLK_NUCLEUS 8-(
membar #Sync
ldda [%g1] ASI_BLK_S, %f0
ldda [%g1 + %g2] ASI_BLK_S, %f16
add %g1, 0x80, %g1
ldda [%g1] ASI_BLK_S, %f32
......@@ -319,8 +322,8 @@ do_fptrap_after_fsr:
stx %g3, [%g6 + TI_GSR]
mov SECONDARY_CONTEXT, %g3
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_4:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS, %g2
......@@ -341,33 +344,6 @@ cplus_fptrap_insn_4:
ba,pt %xcc, etrap
wr %g0, 0, %fprs
cplus_fptrap_1:
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
.globl cheetah_plus_patch_fpdis
cheetah_plus_patch_fpdis:
/* We configure the dTLB512_0 for 4MB pages and the
* dTLB512_1 for 8K pages when in context zero.
*/
sethi %hi(cplus_fptrap_1), %o0
lduw [%o0 + %lo(cplus_fptrap_1)], %o1
set cplus_fptrap_insn_1, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_2, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_3, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_4, %o2
stw %o1, [%o2]
flush %o2
retl
nop
/* The registers for cross calls will be:
*
* DATA 0: [low 32-bits] Address of function to call, jmp to this
......
......@@ -68,12 +68,8 @@ etrap_irq:
wrpr %g3, 0, %otherwin
wrpr %g2, 0, %wstate
cplus_etrap_insn_1:
sethi %hi(0), %g3
sllx %g3, 32, %g3
cplus_etrap_insn_2:
sethi %hi(0), %g2
or %g3, %g2, %g3
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g3
stxa %g3, [%l4] ASI_DMMU
flush %l6
wr %g0, ASI_AIUS, %asi
......@@ -215,12 +211,8 @@ scetrap: rdpr %pil, %g2
mov PRIMARY_CONTEXT, %l4
wrpr %g3, 0, %otherwin
wrpr %g2, 0, %wstate
cplus_etrap_insn_3:
sethi %hi(0), %g3
sllx %g3, 32, %g3
cplus_etrap_insn_4:
sethi %hi(0), %g2
or %g3, %g2, %g3
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g3
stxa %g3, [%l4] ASI_DMMU
flush %l6
......@@ -264,38 +256,3 @@ cplus_etrap_insn_4:
#undef TASK_REGOFF
#undef ETRAP_PSTATE1
cplus_einsn_1:
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %g3
cplus_einsn_2:
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
.globl cheetah_plus_patch_etrap
cheetah_plus_patch_etrap:
/* We configure the dTLB512_0 for 4MB pages and the
* dTLB512_1 for 8K pages when in context zero.
*/
sethi %hi(cplus_einsn_1), %o0
sethi %hi(cplus_etrap_insn_1), %o2
lduw [%o0 + %lo(cplus_einsn_1)], %o1
or %o2, %lo(cplus_etrap_insn_1), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_etrap_insn_3), %o2
or %o2, %lo(cplus_etrap_insn_3), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_einsn_2), %o0
sethi %hi(cplus_etrap_insn_2), %o2
lduw [%o0 + %lo(cplus_einsn_2)], %o1
or %o2, %lo(cplus_etrap_insn_2), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_etrap_insn_4), %o2
or %o2, %lo(cplus_etrap_insn_4), %o2
stw %o1, [%o2]
flush %o2
retl
nop
......@@ -27,6 +27,7 @@
#include <asm/atomic.h>
#include <asm/system.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <asm/sbus.h>
#include <asm/iommu.h>
#include <asm/upa.h>
......
......@@ -15,14 +15,12 @@
*/
#define CREATE_VPTE_OFFSET1(r1, r2) \
srax r1, 10, r2
#define CREATE_VPTE_OFFSET2(r1, r2)
#define CREATE_VPTE_NOP nop
#define CREATE_VPTE_OFFSET2(r1, r2) nop
#else /* PAGE_SHIFT */
#define CREATE_VPTE_OFFSET1(r1, r2) \
srax r1, PAGE_SHIFT, r2
#define CREATE_VPTE_OFFSET2(r1, r2) \
sllx r2, 3, r2
#define CREATE_VPTE_NOP
#endif /* PAGE_SHIFT */
......@@ -36,6 +34,7 @@
*/
/* ITLB ** ICACHE line 1: Quick user TLB misses */
mov TLB_SFSR, %g1
ldxa [%g1 + %g1] ASI_IMMU, %g4 ! Get TAG_ACCESS
CREATE_VPTE_OFFSET1(%g4, %g6) ! Create VPTE offset
CREATE_VPTE_OFFSET2(%g4, %g6) ! Create VPTE offset
......@@ -43,41 +42,38 @@
1: brgez,pn %g5, 3f ! Not valid, branch out
sethi %hi(_PAGE_EXEC), %g4 ! Delay-slot
andcc %g5, %g4, %g0 ! Executable?
/* ITLB ** ICACHE line 2: Real faults */
be,pn %xcc, 3f ! Nope, branch.
nop ! Delay-slot
2: stxa %g5, [%g0] ASI_ITLB_DATA_IN ! Load PTE into TLB
retry ! Trap return
3: rdpr %pstate, %g4 ! Move into alternate globals
/* ITLB ** ICACHE line 2: Real faults */
3: rdpr %pstate, %g4 ! Move into alt-globals
wrpr %g4, PSTATE_AG|PSTATE_MG, %pstate
rdpr %tpc, %g5 ! And load faulting VA
mov FAULT_CODE_ITLB, %g4 ! It was read from ITLB
sparc64_realfault_common: ! Called by TL0 dtlb_miss too
/* ITLB ** ICACHE line 3: Finish faults */
sparc64_realfault_common: ! Called by dtlb_miss
stb %g4, [%g6 + TI_FAULT_CODE]
stx %g5, [%g6 + TI_FAULT_ADDR]
ba,pt %xcc, etrap ! Save state
1: rd %pc, %g7 ! ...
nop
/* ITLB ** ICACHE line 3: Finish faults + window fixups */
call do_sparc64_fault ! Call fault handler
add %sp, PTREGS_OFF, %o0! Compute pt_regs arg
ba,pt %xcc, rtrap_clr_l6 ! Restore cpu state
nop
/* ITLB ** ICACHE line 4: Window fixups */
winfix_trampoline:
rdpr %tpc, %g3 ! Prepare winfixup TNPC
or %g3, 0x7c, %g3 ! Compute offset to branch
or %g3, 0x7c, %g3 ! Compute branch offset
wrpr %g3, %tnpc ! Write it into TNPC
done ! Do it to it
/* ITLB ** ICACHE line 4: Unused... */
nop
nop
nop
nop
CREATE_VPTE_NOP
#undef CREATE_VPTE_OFFSET1
#undef CREATE_VPTE_OFFSET2
#undef CREATE_VPTE_NOP
......@@ -187,17 +187,13 @@ int prom_callback(long *args)
}
if ((va >= KERNBASE) && (va < (KERNBASE + (4 * 1024 * 1024)))) {
unsigned long kernel_pctx = 0;
if (tlb_type == cheetah_plus)
kernel_pctx |= (CTX_CHEETAH_PLUS_NUC |
CTX_CHEETAH_PLUS_CTX0);
extern unsigned long sparc64_kern_pri_context;
/* Spitfire Errata #32 workaround */
__asm__ __volatile__("stxa %0, [%1] %2\n\t"
"flush %%g6"
: /* No outputs */
: "r" (kernel_pctx),
: "r" (sparc64_kern_pri_context),
"r" (PRIMARY_CONTEXT),
"i" (ASI_DMMU));
......
......@@ -152,7 +152,7 @@ archclean:
$(SYMLINK_HEADERS):
@echo ' SYMLINK $@'
ifneq ($(KBUILD_SRC),)
ln -fsn $(srctree)/include/asm-um/$(basename $(notdir $@))-$(SUBARCH)$(suffix $@) $@
$(Q)ln -fsn $(srctree)/include/asm-um/$(basename $(notdir $@))-$(SUBARCH)$(suffix $@) $@
else
$(Q)cd $(TOPDIR)/$(dir $@) ; \
ln -sf $(basename $(notdir $@))-$(SUBARCH)$(suffix $@) $(notdir $@)
......