Commit 63841783 authored by James Simmons

Merge

parents 715c66c4 bb0cc78e
@@ -95,7 +95,7 @@ your version of gcc 2.95.x, may necessitate using -fno-strict-aliasing).
 Make
 ----
-You will need Gnu make 3.77 or later to build the kernel.
+You will need Gnu make 3.78 or later to build the kernel.
 Binutils
 --------
@@ -287,9 +287,9 @@ gcc 2.95.3
 ----------
 o  <ftp://ftp.gnu.org/gnu/gcc/gcc-2.95.3.tar.gz>
-Make 3.77
+Make 3.78
 ---------
-o  <ftp://ftp.gnu.org/gnu/make/make-3.77.tar.gz>
+o  <ftp://ftp.gnu.org/gnu/make/make-3.78.1.tar.gz>
 Binutils
 --------
...
@@ -60,7 +60,7 @@ of important structures for the scsi subsystem.
 </para>
 </chapter>
-<chapter id="driver_struct">
+<chapter id="driver-struct">
 <title>Driver structure</title>
 <para>
 Traditionally a lower level driver for the scsi subsystem has been
...
@@ -243,7 +243,7 @@ Simple example, of poor code:
 	if (!dev)
 		return -ENODEV;
 #ifdef CONFIG_NET_FUNKINESS
 	init_funky_net(dev);
 #endif
 Cleaned-up example:
...
@@ -966,7 +966,7 @@ dirty_background_ratio
 Contains, as a percentage of total system memory, the number of pages at which
 the pdflush background writeback daemon will start writing out dirty data.
-dirty_async_ratio
+dirty_ratio
 -----------------
 Contains, as a percentage of total system memory, the number of pages at which
...
@@ -8,6 +8,10 @@ if you want to format from within Linux.
 VFAT MOUNT OPTIONS
 ----------------------------------------------------------------------
+umask=###     -- The permission mask (see umask(1)) for the regular file.
+                 The default is the umask of current process.
+dmask=###     -- The permission mask for the directory.
+                 The default is the umask of current process.
 codepage=###  -- Sets the codepage for converting to shortname characters
                  on FAT and VFAT filesystems.  By default, codepage 437
                  is used.  This is the default for the U.S. and some
@@ -31,10 +35,6 @@ uni_xlate=<bool> -- Translate unhandled Unicode characters to special
                  illegal on the vfat filesystem.  The escape sequence
                  that gets used is ':' and the four digits of hexadecimal
                  unicode.
-posix=<bool>  -- Allow names of same letters, different case such as
-                 'LongFileName' and 'longfilename' to coexist.  This has some
-                 problems currently because 8.3 conflicts are not handled
-                 correctly for POSIX filesystem compliance.
 nonumtail=<bool> -- When creating 8.3 aliases, normally the alias will
                  end in '~1' or tilde followed by some number.  If this
                  option is set, then if the filename is
@@ -66,10 +66,6 @@ TODO
   a get next directory entry approach.  The only thing left that uses
   raw scanning is the directory renaming code.
-* Fix the POSIX filesystem support to work in 8.3 space.  This involves
-  renaming aliases if a conflict occurs between a new filename and
-  an old alias.  This is quite a mess.
 POSSIBLE PROBLEMS
 ----------------------------------------------------------------------
...
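The umask/dmask options added above follow the usual umask(1) semantics: bits set in the mask are cleared from the granted permissions. A minimal sketch of that arithmetic (the 0o777 base and the helper name are illustrative assumptions, not the vfat implementation):

```python
def fat_mode(umask: int, base: int = 0o777) -> int:
    """Apply a umask(1)-style mask: bits set in the mask are cleared."""
    return base & ~umask

# With the conventional default umask of 022, entries show up as 0755.
print(oct(fat_mode(0o022)))  # -> 0o755
```

With separate `umask=` and `dmask=` values, files and directories can be masked independently, which is the point of the new `dmask` option.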
@@ -18,14 +18,14 @@ files can be found in mm/swap.c.
 Currently, these files are in /proc/sys/vm:
 - overcommit_memory
 - page-cluster
-- dirty_async_ratio
+- dirty_ratio
 - dirty_background_ratio
 - dirty_expire_centisecs
 - dirty_writeback_centisecs
 ==============================================================
-dirty_async_ratio, dirty_background_ratio, dirty_expire_centisecs,
+dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
 dirty_writeback_centisecs:
 See Documentation/filesystems/proc.txt
...
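The renamed dirty_ratio sysctl is expressed as a percentage of total system memory; the threshold in pages can be sketched as below (an illustration of the arithmetic only, not the kernel's exact rounding; the 40% value is just a sample ratio):

```python
def dirty_threshold_pages(total_pages: int, ratio_percent: int) -> int:
    # Number of dirty pages at which writeback kicks in.
    return total_pages * ratio_percent // 100

# 1 GiB of 4 KiB pages at a sample ratio of 40%.
print(dirty_threshold_pages(262144, 40))
```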
@@ -102,7 +102,7 @@ quirk_cypress(struct pci_dev *dev)
 #define DMAPSZ (max_low_pfn << PAGE_SHIFT) /* memory size, not window size */
 	if ((__direct_map_base + DMAPSZ - 1) >= 0xfff00000UL) {
 		__direct_map_size = 0xfff00000UL - __direct_map_base;
-		printk("%s: adjusting direct map size to 0x%x\n",
+		printk("%s: adjusting direct map size to 0x%lx\n",
 		       __FUNCTION__, __direct_map_size);
 	} else {
 		struct pci_controller *hose = dev->sysdata;
...
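The format-string fix above switches %x to %lx because __direct_map_size is an unsigned long. The clamping arithmetic guarded by the printk can be sketched as follows (the helper name and sample sizes are illustrative):

```python
LIMIT = 0xfff00000

def clamp_direct_map(base: int, size: int) -> int:
    # If the window would reach LIMIT or beyond, shrink it so it
    # ends at LIMIT (mirrors the C: size = 0xfff00000UL - base).
    if base + size - 1 >= LIMIT:
        return LIMIT - base
    return size

print(hex(clamp_direct_map(0x80000000, 0x80000000)))  # -> 0x7ff00000
```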
@@ -31,7 +31,7 @@ static char * __init dmi_string(struct dmi_header *dm, u8 s)
 	if(!s)
 		return "";
 	s--;
-	while(s>0)
+	while(s>0 && *bp)
 	{
 		bp+=strlen(bp);
 		bp++;
@@ -50,7 +50,7 @@ static int __init dmi_table(u32 base, int len, int num, void (*decode)(struct dm
 	u8 *buf;
 	struct dmi_header *dm;
 	u8 *data;
-	int i=1;
+	int i=0;
 	buf = bt_ioremap(base, len);
 	if(buf==NULL)
@@ -59,28 +59,23 @@ static int __init dmi_table(u32 base, int len, int num, void (*decode)(struct dm
 	data = buf;
 	/*
-	 * Stop when we see al the items the table claimed to have
+	 * Stop when we see all the items the table claimed to have
 	 * OR we run off the end of the table (also happens)
 	 */
-	while(i<num && (data - buf) < len)
+	while(i<num && data-buf+sizeof(struct dmi_header)<=len)
 	{
 		dm=(struct dmi_header *)data;
 		/*
-		 * Avoid misparsing crud if the length of the last
-		 * record is crap
+		 * We want to know the total length (formated area and strings)
+		 * before decoding to make sure we won't run off the table in
+		 * dmi_decode or dmi_string
 		 */
-		if((data-buf+dm->length) >= len)
-			break;
-		decode(dm);
 		data+=dm->length;
-		/*
-		 * Don't go off the end of the data if there is
-		 * stuff looking like string fill past the end
-		 */
-		while((data-buf) < len && (*data || data[1]))
+		while(data-buf<len-1 && (data[0] || data[1]))
 			data++;
+		if(data-buf<len-1)
+			decode(dm);
 		data+=2;
 		i++;
 	}
@@ -89,11 +84,20 @@ static int __init dmi_table(u32 base, int len, int num, void (*decode)(struct dm
 }
+inline static int __init dmi_checksum(u8 *buf)
+{
+	u8 sum=0;
+	int a;
+
+	for(a=0; a<15; a++)
+		sum+=buf[a];
+	return (sum==0);
+}
 static int __init dmi_iterate(void (*decode)(struct dmi_header *))
 {
-	unsigned char buf[20];
-	long fp=0xE0000L;
+	u8 buf[15];
+	u32 fp=0xF0000;
+	fp -= 16;
 #ifdef CONFIG_SIMNOW
 	/*
@@ -105,24 +109,30 @@ static int __init dmi_iterate(void (*decode)(struct dmi_header *))
 	while( fp < 0xFFFFF)
 	{
-		fp+=16;
-		isa_memcpy_fromio(buf, fp, 20);
-		if(memcmp(buf, "_DMI_", 5)==0)
+		isa_memcpy_fromio(buf, fp, 15);
+		if(memcmp(buf, "_DMI_", 5)==0 && dmi_checksum(buf))
 		{
 			u16 num=buf[13]<<8|buf[12];
 			u16 len=buf[7]<<8|buf[6];
 			u32 base=buf[11]<<24|buf[10]<<16|buf[9]<<8|buf[8];
-			dmi_printk((KERN_INFO "DMI %d.%d present.\n",
-				buf[14]>>4, buf[14]&0x0F));
+			/*
+			 * DMI version 0.0 means that the real version is taken from
+			 * the SMBIOS version, which we don't know at this point.
+			 */
+			if(buf[14]!=0)
+				dmi_printk((KERN_INFO "DMI %d.%d present.\n",
+					buf[14]>>4, buf[14]&0x0F));
+			else
+				dmi_printk((KERN_INFO "DMI present.\n"));
 			dmi_printk((KERN_INFO "%d structures occupying %d bytes.\n",
-				buf[13]<<8|buf[12],
-				buf[7]<<8|buf[6]));
+				num, len));
 			dmi_printk((KERN_INFO "DMI table at 0x%08X.\n",
-				buf[11]<<24|buf[10]<<16|buf[9]<<8|buf[8]));
+				base));
 			if(dmi_table(base,len, num, decode)==0)
 				return 0;
 		}
+		fp+=16;
 	}
 	return -1;
 }
@@ -823,59 +833,43 @@ static __init void dmi_check_blacklist(void)
 static void __init dmi_decode(struct dmi_header *dm)
 {
 	u8 *data = (u8 *)dm;
-	char *p;
 	switch(dm->type)
 	{
 		case 0:
-			p=dmi_string(dm,data[4]);
-			if(*p)
-			{
-				dmi_printk(("BIOS Vendor: %s\n", p));
-				dmi_save_ident(dm, DMI_BIOS_VENDOR, 4);
-				dmi_printk(("BIOS Version: %s\n",
-					dmi_string(dm, data[5])));
-				dmi_save_ident(dm, DMI_BIOS_VERSION, 5);
-				dmi_printk(("BIOS Release: %s\n",
-					dmi_string(dm, data[8])));
-				dmi_save_ident(dm, DMI_BIOS_DATE, 8);
-			}
+			dmi_printk(("BIOS Vendor: %s\n",
+				dmi_string(dm, data[4])));
+			dmi_save_ident(dm, DMI_BIOS_VENDOR, 4);
+			dmi_printk(("BIOS Version: %s\n",
+				dmi_string(dm, data[5])));
+			dmi_save_ident(dm, DMI_BIOS_VERSION, 5);
+			dmi_printk(("BIOS Release: %s\n",
+				dmi_string(dm, data[8])));
+			dmi_save_ident(dm, DMI_BIOS_DATE, 8);
 			break;
 		case 1:
-			p=dmi_string(dm,data[4]);
-			if(*p)
-			{
-				dmi_printk(("System Vendor: %s.\n",p));
-				dmi_save_ident(dm, DMI_SYS_VENDOR, 4);
-				dmi_printk(("Product Name: %s.\n",
-					dmi_string(dm, data[5])));
-				dmi_save_ident(dm, DMI_PRODUCT_NAME, 5);
-				dmi_printk(("Version %s.\n",
-					dmi_string(dm, data[6])));
-				dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6);
-				dmi_printk(("Serial Number %s.\n",
-					dmi_string(dm, data[7])));
-			}
+			dmi_printk(("System Vendor: %s\n",
+				dmi_string(dm, data[4])));
+			dmi_save_ident(dm, DMI_SYS_VENDOR, 4);
+			dmi_printk(("Product Name: %s\n",
+				dmi_string(dm, data[5])));
+			dmi_save_ident(dm, DMI_PRODUCT_NAME, 5);
+			dmi_printk(("Version: %s\n",
+				dmi_string(dm, data[6])));
+			dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6);
+			dmi_printk(("Serial Number: %s\n",
+				dmi_string(dm, data[7])));
 			break;
 		case 2:
-			p=dmi_string(dm,data[4]);
-			if(*p)
-			{
-				dmi_printk(("Board Vendor: %s.\n",p));
-				dmi_save_ident(dm, DMI_BOARD_VENDOR, 4);
-				dmi_printk(("Board Name: %s.\n",
-					dmi_string(dm, data[5])));
-				dmi_save_ident(dm, DMI_BOARD_NAME, 5);
-				dmi_printk(("Board Version: %s.\n",
-					dmi_string(dm, data[6])));
-				dmi_save_ident(dm, DMI_BOARD_VERSION, 6);
-			}
+			dmi_printk(("Board Vendor: %s\n",
+				dmi_string(dm, data[4])));
+			dmi_save_ident(dm, DMI_BOARD_VENDOR, 4);
+			dmi_printk(("Board Name: %s\n",
+				dmi_string(dm, data[5])));
+			dmi_save_ident(dm, DMI_BOARD_NAME, 5);
+			dmi_printk(("Board Version: %s\n",
+				dmi_string(dm, data[6])));
+			dmi_save_ident(dm, DMI_BOARD_VERSION, 6);
 			break;
-		case 3:
-			p=dmi_string(dm,data[8]);
-			if(*p && *p!=' ')
-				dmi_printk(("Asset Tag: %s.\n", p));
-			break;
 	}
 }
...
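The new dmi_checksum validates the "_DMI_" entry point by summing its first 15 bytes modulo 256, and the iterator then decodes the table length, base address, and structure count from fixed little-endian offsets. A userspace sketch of the same check and decoding (the buffer contents are fabricated for the example):

```python
def dmi_checksum(buf: bytes) -> bool:
    # Valid when the first 15 bytes sum to 0 modulo 256.
    return sum(buf[:15]) % 256 == 0

def dmi_fields(buf: bytes):
    # Little-endian fields at the same offsets as the C code.
    length = buf[7] << 8 | buf[6]
    base = buf[11] << 24 | buf[10] << 16 | buf[9] << 8 | buf[8]
    num = buf[13] << 8 | buf[12]
    return length, base, num

# Fabricated entry point: 64-byte table at 0xF0000 with 10 structures.
ep = bytearray(b"_DMI_")
ep += bytes([0, 0x40, 0x00, 0x00, 0x00, 0x0F, 0x00, 0x0A, 0x00, 0x21])
ep[5] = (-sum(ep[:15])) % 256  # patch one byte so the checksum holds
print(dmi_fields(bytes(ep)))  # -> (64, 983040, 10)
```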
@@ -21,83 +21,75 @@ ROOT_DEV := CURRENT
 SVGA_MODE := -DSVGA_MODE=NORMAL_VGA
-# ---------------------------------------------------------------------------
-BOOT_INCL = $(TOPDIR)/include/linux/config.h \
-	    $(TOPDIR)/include/linux/autoconf.h \
-	    $(TOPDIR)/include/asm/boot.h
-zImage: bootsect setup compressed/vmlinux tools/build
-	$(OBJCOPY) $(OBJCOPYFLAGS) compressed/vmlinux compressed/vmlinux.out
-	tools/build bootsect setup compressed/vmlinux.out $(ROOT_DEV) > zImage
-bzImage: bbootsect bsetup compressed/bvmlinux tools/build
-	$(OBJCOPY) $(OBJCOPYFLAGS) compressed/bvmlinux compressed/bvmlinux.out
-	tools/build -b bbootsect bsetup compressed/bvmlinux.out $(ROOT_DEV) > bzImage
-bzImage-padded: bzImage
-	dd if=/dev/zero bs=1k count=70 >> bzImage
-compressed/vmlinux: $(TOPDIR)/vmlinux
-	@$(MAKE) -C compressed vmlinux
-compressed/bvmlinux: $(TOPDIR)/vmlinux
-	@$(MAKE) -C compressed bvmlinux
-zdisk: $(BOOTIMAGE)
-	dd bs=8192 if=$(BOOTIMAGE) of=/dev/fd0
-zlilo: $(BOOTIMAGE)
-	if [ -f $(INSTALL_PATH)/vmlinuz ]; then mv $(INSTALL_PATH)/vmlinuz $(INSTALL_PATH)/vmlinuz.old; fi
-	if [ -f $(INSTALL_PATH)/System.map ]; then mv $(INSTALL_PATH)/System.map $(INSTALL_PATH)/System.old; fi
-	cat $(BOOTIMAGE) > $(INSTALL_PATH)/vmlinuz
-	cp $(TOPDIR)/System.map $(INSTALL_PATH)/
-	if [ -x /sbin/lilo ]; then /sbin/lilo; else /etc/lilo/install; fi
-install: $(BOOTIMAGE)
-	sh -x ./install.sh $(KERNELRELEASE) $(BOOTIMAGE) $(TOPDIR)/System.map "$(INSTALL_PATH)"
-tools/build: tools/build.c
-	$(HOSTCC) $(HOSTCFLAGS) -o $@ $<
-bootsect: bootsect.o
-	$(IA32_LD) -Ttext 0x0 -s --oformat binary -o $@ $<
-bootsect.o: bootsect.s
-	$(IA32_AS) -o $@ $<
-bootsect.s: bootsect.S Makefile $(BOOT_INCL)
-	$(IA32_CPP) $(CPPFLAGS) -traditional -D__ASSEMBLY__ $(SVGA_MODE) $(RAMDISK) $< -o $@
-bbootsect: bbootsect.o
-	$(IA32_LD) -Ttext 0x0 -s --oformat binary $< -o $@
-bbootsect.o: bbootsect.s
-	$(IA32_AS) -o $@ $<
-bbootsect.s: bootsect.S Makefile $(BOOT_INCL)
-	$(IA32_CPP) $(CPPFLAGS) -D__BIG_KERNEL__ -D__ASSEMBLY__ -traditional $(SVGA_MODE) $(RAMDISK) $< -o $@
-setup: setup.o
-	$(IA32_LD) -Ttext 0x0 -s --oformat binary -e begtext -o $@ $<
-setup.o: setup.s
-	$(IA32_AS) -o $@ $<
-setup.s: setup.S video.S Makefile $(BOOT_INCL) $(TOPDIR)/include/linux/version.h $(TOPDIR)/include/linux/compile.h
-	$(IA32_CPP) $(CPPFLAGS) -traditional -D__ASSEMBLY__ $(SVGA_MODE) $(RAMDISK) $< -o $@
-bsetup: bsetup.o
-	$(IA32_LD) -Ttext 0x0 -s --oformat binary -e begtext -o $@ $<
-bsetup.o: bsetup.s
-	$(IA32_AS) -o $@ $<
-bsetup.s: setup.S video.S Makefile $(BOOT_INCL) $(TOPDIR)/include/linux/version.h $(TOPDIR)/include/linux/compile.h
-	$(IA32_CPP) $(CPPFLAGS) -D__BIG_KERNEL__ -D__ASSEMBLY__ -traditional $(SVGA_MODE) $(RAMDISK) $< -o $@
-clean:
-	rm -f tools/build
-	rm -f setup bootsect zImage compressed/vmlinux.out
-	rm -f bsetup bbootsect bzImage compressed/bvmlinux.out
-	@$(MAKE) -C compressed clean
+# If you want the RAM disk device, define this to be the size in blocks.
+#RAMDISK := -DRAMDISK=512
+EXTRA_TARGETS := vmlinux.bin bootsect bootsect.o \
+		 setup setup.o zImage bzImage
+CFLAGS += -m32
+host-progs := tools/build
+# Default
+boot: bzImage
+include $(TOPDIR)/Rules.make
+# ---------------------------------------------------------------------------
+$(obj)/zImage: IMAGE_OFFSET := 0x1000
+$(obj)/zImage: EXTRA_AFLAGS := -traditional $(SVGA_MODE) $(RAMDISK)
+$(obj)/bzImage: IMAGE_OFFSET := 0x100000
+$(obj)/bzImage: EXTRA_AFLAGS := -traditional $(SVGA_MODE) $(RAMDISK) -D__BIG_KERNEL__
+$(obj)/bzImage: BUILDFLAGS := -b
+quiet_cmd_image = BUILD $(echo_target)
+cmd_image = $(obj)/tools/build $(BUILDFLAGS) $(obj)/bootsect $(obj)/setup \
+	    $(obj)/vmlinux.bin $(ROOT_DEV) > $@
+$(obj)/zImage $(obj)/bzImage: $(obj)/bootsect $(obj)/setup \
+			      $(obj)/vmlinux.bin $(obj)/tools/build FORCE
+	$(call if_changed,image)
+$(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
+	$(call if_changed,objcopy)
+LDFLAGS_bootsect := -Ttext 0x0 -s --oformat binary
+LDFLAGS_setup := -Ttext 0x0 -s --oformat binary -e begtext
+$(obj)/setup $(obj)/bootsect: %: %.o FORCE
+	$(call if_changed,ld)
+$(obj)/compressed/vmlinux: FORCE
+	+@$(call descend,$(obj)/compressed,IMAGE_OFFSET=$(IMAGE_OFFSET) \
+		$(obj)/compressed/vmlinux)
+zdisk: $(BOOTIMAGE)
+	dd bs=8192 if=$(BOOTIMAGE) of=/dev/fd0
+zlilo: $(BOOTIMAGE)
+	if [ -f $(INSTALL_PATH)/vmlinuz ]; then mv $(INSTALL_PATH)/vmlinuz $(INSTALL_PATH)/vmlinuz.old; fi
+	if [ -f $(INSTALL_PATH)/System.map ]; then mv $(INSTALL_PATH)/System.map $(INSTALL_PATH)/System.old; fi
+	cat $(BOOTIMAGE) > $(INSTALL_PATH)/vmlinuz
+	cp System.map $(INSTALL_PATH)/
+	if [ -x /sbin/lilo ]; then /sbin/lilo; else /etc/lilo/install; fi
+install: $(BOOTIMAGE)
+	sh $(src)/install.sh $(KERNELRELEASE) $(BOOTIMAGE) System.map "$(INSTALL_PATH)"
+clean:
+	@echo 'Cleaning up (boot)'
+	@rm -f $(host-progs) $(EXTRA_TARGETS)
+	+@$(call descend,$(obj)/compressed) clean
+archhelp:
+	@echo '* bzImage	- Compressed kernel image (arch/$(ARCH)/boot/bzImage)'
+	@echo '  install	- Install kernel using'
+	@echo '		   (your) ~/bin/installkernel or'
+	@echo '		   (distribution) /sbin/installkernel or'
+	@echo '		   install to $$(INSTALL_PATH) and run lilo'
 #
-# linux/arch/i386/boot/compressed/Makefile
+# linux/arch/x86_64/boot/compressed/Makefile
 #
 # create a compressed vmlinux image from the original vmlinux
 #
+# Note all the files here are compiled/linked as 32bit executables.
+#
-HEAD = head.o
-SYSTEM = $(TOPDIR)/vmlinux
-OBJECTS = $(HEAD) misc.o
-IA32_CFLAGS := -O2 -DSTDC_HEADERS
-#
-# ZIMAGE_OFFSET is the load offset of the compression loader
-# BZIMAGE_OFFSET is the load offset of the high loaded compression loader
-#
-BZIMAGE_OFFSET = 0x100000
-BZLINKFLAGS = -Ttext $(BZIMAGE_OFFSET) $(ZLDFLAGS)
-all: vmlinux
-bvmlinux: piggy.o $(OBJECTS)
-	$(IA32_LD) $(BZLINKFLAGS) -o bvmlinux $(OBJECTS) piggy.o
-head.o: head.S
-	$(IA32_AS) -c head.S
-misc.o: misc.c
-	$(IA32_CC) $(IA32_CFLAGS) -c misc.c
-piggy.o: $(SYSTEM)
-	tmppiggy=_tmp_$$$$piggy; \
-	rm -f $$tmppiggy $$tmppiggy.gz $$tmppiggy.lnk; \
-	$(OBJCOPY) $(OBJCOPYFLAGS) $< $$tmppiggy; \
-	gzip -f -9 < $$tmppiggy > $$tmppiggy.gz; \
-	echo "SECTIONS { .data : { input_len = .; LONG(input_data_end - input_data) input_data = .; *(.data) input_data_end = .; }}" > $$tmppiggy.lnk; \
-	$(IA32_LD) -r -o piggy.o -b binary $$tmppiggy.gz -b elf32-i386 -T $$tmppiggy.lnk; \
-	rm -f $$tmppiggy $$tmppiggy.gz $$tmppiggy.lnk
+EXTRA_TARGETS := vmlinux vmlinux.bin vmlinux.bin.gz head.o misc.o piggy.o
+EXTRA_AFLAGS := -traditional -m32
+# cannot use EXTRA_CFLAGS because base CFLAGS contains -mkernel which conflicts with
+# -m32
+CFLAGS := -m32 -D__KERNEL__ -I$(TOPDIR)/include -O2
+LDFLAGS := -m elf_i386
+include $(TOPDIR)/Rules.make
+LDFLAGS_vmlinux := -Ttext $(IMAGE_OFFSET) -e startup_32 -m elf_i386
+$(obj)/vmlinux: $(obj)/head.o $(obj)/misc.o $(obj)/piggy.o FORCE
+	$(call if_changed,ld)
+$(obj)/vmlinux.bin: vmlinux FORCE
+	strip vmlinux
+	$(call if_changed,objcopy)
+$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+	$(call if_changed,gzip)
+LDFLAGS_piggy.o := -r --format binary --oformat elf32-i386 -T
+$(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.gz FORCE
+	$(call if_changed,ld)
 clean:
-	rm -f vmlinux bvmlinux _tmp_*
+	@echo 'Cleaning up (boot/compressed)'
+	@rm -f $(EXTRA_TARGETS)
@@ -89,12 +89,8 @@ static long bytes_out = 0;
 static uch *output_data;
 static unsigned long output_ptr = 0;
 static void *malloc(int size);
 static void free(void *where);
-static void error(char *m);
-static void gzip_mark(void **);
-static void gzip_release(void **);
 static void puts(const char *);
...
 #define NULL 0
-typedef unsigned int size_t;
+//typedef unsigned int size_t;
 struct screen_info {
...
+SECTIONS
+{
+  .data : {
+	input_len = .;
+	LONG(input_data_end - input_data) input_data = .;
+	*(.data)
+	input_data_end = .;
+	}
+}
@@ -95,6 +95,7 @@
 #define PARAM_VESAPM_SEG	0x2e
 #define PARAM_VESAPM_OFF	0x30
 #define PARAM_LFB_PAGES	0x32
+#define PARAM_VESA_ATTRIB	0x34
 /* Define DO_STORE according to CONFIG_VIDEO_RETAIN */
@@ -220,6 +221,8 @@ mopar_gr:
 	movl	%eax, %fs:(PARAM_LFB_COLORS)
 	movl	35(%di), %eax
 	movl	%eax, %fs:(PARAM_LFB_COLORS+4)
+	movw	0(%di), %ax
+	movw	%ax, %fs:(PARAM_VESA_ATTRIB)
 # get video mem size
 	leaw	modelist+1024, %di
...
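The new PARAM_VESA_ATTRIB parameter copies the 16-bit mode attributes word from offset 0 of the VESA mode info block (that is what `movw 0(%di), %ax` reads). Per the VBE specification, bit 7 of that word indicates linear framebuffer support. A sketch of extracting it (the sample attribute value is fabricated):

```python
import struct

def vesa_mode_attributes(mode_info: bytes) -> int:
    # The attributes word is the first little-endian u16 of the block.
    (attr,) = struct.unpack_from("<H", mode_info, 0)
    return attr

def supports_linear_fb(attr: int) -> bool:
    return bool(attr & 0x80)  # bit 7: linear framebuffer available

blk = struct.pack("<H", 0x009B) + bytes(254)
attr = vesa_mode_attributes(blk)
print(hex(attr), supports_linear_fb(attr))  # -> 0x9b True
```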
@@ -2,17 +2,10 @@
 # Makefile for the ia32 kernel emulation subsystem.
 #
-USE_STANDARD_AS_RULE := true
 export-objs := ia32_ioctl.o sys_ia32.o
-all: ia32.o
-O_TARGET := ia32.o
 obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_ioctl.o \
 	ia32_signal.o \
 	ia32_binfmt.o fpu32.o socket32.o ptrace32.o ipc32.o
-clean::
 include $(TOPDIR)/Rules.make
@@ -58,6 +58,8 @@ struct timeval32
 	int tv_sec, tv_usec;
 };
+#define jiffies_to_timeval(a,b) do { (b)->tv_usec = 0; (b)->tv_sec = (a)/HZ; }while(0)
 struct elf_prstatus
 {
 	struct elf_siginfo pr_info;	/* Info associated with signal */
@@ -162,6 +164,7 @@ do { \
 #define ELF_PLAT_INIT(r)	elf32_init(r)
 #define setup_arg_pages(bprm)	ia32_setup_arg_pages(bprm)
+int ia32_setup_arg_pages(struct linux_binprm *bprm);
 #undef start_thread
 #define start_thread(regs,new_rip,new_rsp) do { \
...
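The jiffies_to_timeval macro added above truncates to whole seconds: tv_usec is simply zeroed and tv_sec is the integer quotient by HZ. The same behavior in Python (the HZ value is an assumed tick rate for illustration):

```python
HZ = 100  # assumed tick rate for illustration

def jiffies_to_timeval(jiffies: int) -> tuple[int, int]:
    # Mirrors the macro: tv_usec = 0; tv_sec = jiffies/HZ (integer division).
    return (jiffies // HZ, 0)

print(jiffies_to_timeval(250))  # -> (2, 0)
```

Note the sub-second remainder (50 jiffies here) is discarded rather than converted to microseconds.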
...
@@ -81,11 +81,11 @@ sys32_sigsuspend(int history0, int history1, old_sigset_t mask, struct pt_regs r
 	sigset_t saveset;
 	mask &= _BLOCKABLE;
-	spin_lock_irq(&current->sigmask_lock);
+	spin_lock_irq(&current->sig->siglock);
 	saveset = current->blocked;
 	siginitset(&current->blocked, mask);
 	recalc_sigpending();
-	spin_unlock_irq(&current->sigmask_lock);
+	spin_unlock_irq(&current->sig->siglock);
 	regs.rax = -EINTR;
 	while (1) {
@@ -170,10 +170,7 @@ ia32_restore_sigcontext(struct pt_regs *regs, struct sigcontext_ia32 *sc, unsign
 	asm volatile("movl %%" #seg ",%0" : "=r" (cur)); \
 	if (pre != cur) loadsegment(seg,pre); }
-	/* Reload fs and gs if they have changed in the signal handler.
-	   This does not handle long fs/gs base changes in the handler, but does not clobber
-	   them at least in the normal case. */
+	/* Reload fs and gs if they have changed in the signal handler. */
 	{
 		unsigned short gs;
@@ -181,11 +178,18 @@ ia32_restore_sigcontext(struct pt_regs *regs, struct sigcontext_ia32 *sc, unsign
 		load_gs_index(gs);
 	}
 	RELOAD_SEG(fs);
+	RELOAD_SEG(ds);
+	RELOAD_SEG(es);
 	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
 	COPY(dx); COPY(cx); COPY(ip);
 	/* Don't touch extended registers */
+	err |= __get_user(regs->cs, &sc->cs);
+	regs->cs |= 2;
+	err |= __get_user(regs->ss, &sc->ss);
+	regs->ss |= 2;
 	{
 		unsigned int tmpflags;
 		err |= __get_user(tmpflags, &sc->eflags);
@@ -231,10 +235,10 @@ asmlinkage int sys32_sigreturn(struct pt_regs regs)
 		goto badframe;
 	sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
+	spin_lock_irq(&current->sig->siglock);
 	current->blocked = set;
 	recalc_sigpending();
-	spin_unlock_irq(&current->sigmask_lock);
+	spin_unlock_irq(&current->sig->siglock);
 	if (ia32_restore_sigcontext(&regs, &frame->sc, &eax))
 		goto badframe;
@@ -258,10 +262,10 @@ asmlinkage int sys32_rt_sigreturn(struct pt_regs regs)
 		goto badframe;
 	sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sigmask_lock);
+	spin_lock_irq(&current->sig->siglock);
 	current->blocked = set;
 	recalc_sigpending();
-	spin_unlock_irq(&current->sigmask_lock);
+	spin_unlock_irq(&current->sig->siglock);
 	if (ia32_restore_sigcontext(&regs, &frame->uc.uc_mcontext, &eax))
 		goto badframe;
@@ -299,6 +303,10 @@ ia32_setup_sigcontext(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
 	err |= __put_user(tmp, (unsigned int *)&sc->gs);
 	__asm__("movl %%fs,%0" : "=r"(tmp): "0"(tmp));
 	err |= __put_user(tmp, (unsigned int *)&sc->fs);
+	__asm__("movl %%ds,%0" : "=r"(tmp): "0"(tmp));
+	err |= __put_user(tmp, (unsigned int *)&sc->ds);
+	__asm__("movl %%es,%0" : "=r"(tmp): "0"(tmp));
+	err |= __put_user(tmp, (unsigned int *)&sc->es);
 	err |= __put_user((u32)regs->rdi, &sc->edi);
 	err |= __put_user((u32)regs->rsi, &sc->esi);
@@ -308,6 +316,8 @@ ia32_setup_sigcontext(struct sigcontext_ia32 *sc, struct _fpstate_ia32 *fpstate,
 	err |= __put_user((u32)regs->rdx, &sc->edx);
 	err |= __put_user((u32)regs->rcx, &sc->ecx);
 	err |= __put_user((u32)regs->rax, &sc->eax);
+	err |= __put_user((u32)regs->cs, &sc->cs);
+	err |= __put_user((u32)regs->ss, &sc->ss);
 	err |= __put_user(current->thread.trap_no, &sc->trapno);
 	err |= __put_user(current->thread.error_code, &sc->err);
 	err |= __put_user((u32)regs->rip, &sc->eip);
@@ -406,8 +416,13 @@ void ia32_setup_frame(int sig, struct k_sigaction *ka,
 	regs->rsp = (unsigned long) frame;
 	regs->rip = (unsigned long) ka->sa.sa_handler;
+	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
+	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	regs->cs = __USER32_CS;
+	regs->ss = __USER32_DS;
 	set_fs(USER_DS);
-	// XXX: cs
 	regs->eflags &= ~TF_MASK;
 #if DEBUG_SIG
@@ -479,8 +494,13 @@ void ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	regs->rsp = (unsigned long) frame;
 	regs->rip = (unsigned long) ka->sa.sa_handler;
+	asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
+	asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
+	regs->cs = __USER32_CS;
+	regs->ss = __USER32_DS;
 	set_fs(USER_DS);
-	// XXX: cs
 	regs->eflags &= ~TF_MASK;
 #if DEBUG_SIG
...
@@ -171,7 +171,7 @@ ia32_sys_call_table:
 	.quad sys_umount		/* new_umount */
 	.quad ni_syscall		/* old lock syscall holder */
 	.quad sys32_ioctl
-	.quad sys32_fcntl		/* 55 */
+	.quad sys32_fcntl64		/* 55 */
 	.quad ni_syscall		/* old mpx syscall holder */
 	.quad sys_setpgid
 	.quad ni_syscall		/* old ulimit syscall holder */
@@ -319,15 +319,15 @@ ia32_sys_call_table:
 	.quad sys_getgid		/* 200 */
 	.quad sys_geteuid
 	.quad sys_getegid
-	.quad sys32_setreuid
-	.quad sys32_setregid
-	.quad sys32_getgroups		/* 205 */
-	.quad sys32_setgroups
+	.quad sys_setreuid
+	.quad sys_setregid
+	.quad sys_getgroups		/* 205 */
+	.quad sys_setgroups
 	.quad sys_fchown
-	.quad sys32_setresuid
-	.quad sys32_getresuid
-	.quad sys32_setresgid		/* 210 */
-	.quad sys32_getresgid
+	.quad sys_setresuid
+	.quad sys_getresuid
+	.quad sys_setresgid		/* 210 */
+	.quad sys_getresgid
 	.quad sys_chown
 	.quad sys_setuid
 	.quad sys_setgid
@@ -356,9 +356,19 @@ ia32_sys_call_table:
 	.quad sys_fremovexattr
 	.quad sys_tkill			/* 238 */
 	.quad sys_sendfile64
-	.quad sys_futex
+	.quad sys_futex			/* 240 */
 	.quad sys32_sched_setaffinity
 	.quad sys32_sched_getaffinity
+	.quad sys_set_thread_area
+	.quad sys_get_thread_area
+	.quad sys32_io_setup
+	.quad sys_io_destroy
+	.quad sys_io_getevents
+	.quad sys_io_submit
+	.quad sys_io_cancel
+	.quad sys_ni_syscall		/* 250 alloc_huge_pages */
+	.quad sys_ni_syscall		/* free_huge_pages */
+	.quad sys_ni_syscall		/* exit_group */
 ia32_syscall_end:
 	.rept IA32_NR_syscalls-(ia32_syscall_end-ia32_sys_call_table)/8
 		.quad ni_syscall
...
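The .rept directive after the table pads it out to IA32_NR_syscalls entries: each .quad is 8 bytes, so the pad count is the total size minus the entries already emitted. A sketch of that arithmetic (the IA32_NR_syscalls value here is assumed for illustration):

```python
IA32_NR_SYSCALLS = 256  # assumed table size for illustration
QUAD = 8                # bytes per .quad entry

def pad_entries(table_bytes: int) -> int:
    # (ia32_syscall_end - ia32_sys_call_table)/8 in the assembly.
    emitted = table_bytes // QUAD
    return IA32_NR_SYSCALLS - emitted

print(pad_entries(253 * QUAD))  # -> 3
```

Padding with ni_syscall means newly numbered but unimplemented syscalls fail cleanly instead of jumping through a garbage pointer.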
@@ -152,8 +152,8 @@ struct shm_info32 {
 };
 struct ipc_kludge {
-	struct msgbuf *msgp;
-	int msgtyp;
+	u32 msgp;
+	s32 msgtyp;
 };
@@ -268,14 +268,19 @@ semctl32 (int first, int second, int third, void *uptr)
 	return err;
 }
+#define MAXBUF (64*1024)
 static int
 do_sys32_msgsnd (int first, int second, int third, void *uptr)
 {
-	struct msgbuf *p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
+	struct msgbuf *p;
 	struct msgbuf32 *up = (struct msgbuf32 *)uptr;
 	mm_segment_t old_fs;
 	int err;
+	if (second >= MAXBUF-sizeof(struct msgbuf))
+		return -EINVAL;
+	p = kmalloc(second + sizeof(struct msgbuf), GFP_USER);
 	if (!p)
 		return -ENOMEM;
 	err = get_user(p->mtype, &up->mtype);
@@ -312,13 +317,15 @@ do_sys32_msgrcv (int first, int second, int msgtyp, int third, int version, void
 		uptr = (void *)A(ipck.msgp);
 		msgtyp = ipck.msgtyp;
 	}
+	if (second >= MAXBUF-sizeof(struct msgbuf))
+		return -EINVAL;
 	err = -ENOMEM;
-	p = kmalloc(second + sizeof(struct msgbuf) + 4, GFP_USER);
+	p = kmalloc(second + sizeof(struct msgbuf), GFP_USER);
 	if (!p)
 		goto out;
 	old_fs = get_fs();
 	set_fs(KERNEL_DS);
-	err = sys_msgrcv(first, p, second + 4, msgtyp, third);
+	err = sys_msgrcv(first, p, second, msgtyp, third);
 	set_fs(old_fs);
 	if (err < 0)
 		goto free_then_out;
...
...@@ -24,6 +24,7 @@ ...@@ -24,6 +24,7 @@
#include <asm/i387.h> #include <asm/i387.h>
#include <asm/fpu32.h> #include <asm/fpu32.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/ptrace.h>
#define R32(l,q) \ #define R32(l,q) \
case offsetof(struct user32, regs.l): stack[offsetof(struct pt_regs, q)/8] = val; break case offsetof(struct user32, regs.l): stack[offsetof(struct pt_regs, q)/8] = val; break
...@@ -35,20 +36,32 @@ static int putreg32(struct task_struct *child, unsigned regno, u32 val) ...@@ -35,20 +36,32 @@ static int putreg32(struct task_struct *child, unsigned regno, u32 val)
switch (regno) { switch (regno) {
case offsetof(struct user32, regs.fs): case offsetof(struct user32, regs.fs):
if (val && (val & 3) != 3) return -EIO;
child->thread.fs = val; child->thread.fs = val;
break; break;
case offsetof(struct user32, regs.gs): case offsetof(struct user32, regs.gs):
if (val && (val & 3) != 3) return -EIO;
child->thread.gs = val; child->thread.gs = val;
break; break;
case offsetof(struct user32, regs.ds): case offsetof(struct user32, regs.ds):
if (val && (val & 3) != 3) return -EIO;
child->thread.ds = val; child->thread.ds = val;
break; break;
case offsetof(struct user32, regs.es): case offsetof(struct user32, regs.es):
if (val && (val & 3) != 3) return -EIO;
child->thread.es = val; child->thread.es = val;
break; break;
R32(cs, cs); case offsetof(struct user32, regs.ss):
R32(ss, ss); if ((val & 3) != 3) return -EIO;
stack[offsetof(struct pt_regs, ss)/8] = val;
break;
case offsetof(struct user32, regs.cs):
if ((val & 3) != 3) return -EIO;
stack[offsetof(struct pt_regs, cs)/8] = val;
break;
R32(ebx, rbx); R32(ebx, rbx);
R32(ecx, rcx); R32(ecx, rcx);
R32(edx, rdx); R32(edx, rdx);
......
...@@ -57,6 +57,7 @@ ...@@ -57,6 +57,7 @@
#include <linux/ipc.h> #include <linux/ipc.h>
#include <linux/rwsem.h> #include <linux/rwsem.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/aio_abi.h>
#include <asm/mman.h> #include <asm/mman.h>
#include <asm/types.h> #include <asm/types.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
...@@ -1288,37 +1289,71 @@ static inline int put_flock(struct flock *kfl, struct flock32 *ufl) ...@@ -1288,37 +1289,71 @@ static inline int put_flock(struct flock *kfl, struct flock32 *ufl)
extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd, unsigned long arg); extern asmlinkage long sys_fcntl(unsigned int fd, unsigned int cmd, unsigned long arg);
asmlinkage long sys32_fcntl(unsigned int fd, unsigned int cmd, unsigned long arg) static inline int get_flock64(struct ia32_flock64 *fl32, struct flock *fl64)
{ {
switch (cmd) { if (access_ok(fl32, sizeof(struct ia32_flock64), VERIFY_WRITE)) {
case F_GETLK: int ret = __get_user(fl64->l_type, &fl32->l_type);
case F_SETLK: ret |= __get_user(fl64->l_whence, &fl32->l_whence);
case F_SETLKW: ret |= __get_user(fl64->l_start, &fl32->l_start);
{ ret |= __get_user(fl64->l_len, &fl32->l_len);
struct flock f; ret |= __get_user(fl64->l_pid, &fl32->l_pid);
mm_segment_t old_fs; return ret;
long ret;
if (get_flock(&f, (struct flock32 *)arg))
return -EFAULT;
old_fs = get_fs(); set_fs (KERNEL_DS);
ret = sys_fcntl(fd, cmd, (unsigned long)&f);
set_fs (old_fs);
if (ret) return ret;
if (put_flock(&f, (struct flock32 *)arg))
return -EFAULT;
return 0;
} }
default: return -EFAULT;
return sys_fcntl(fd, cmd, (unsigned long)arg); }
static inline int put_flock64(struct ia32_flock64 *fl32, struct flock *fl64)
{
if (access_ok(fl32, sizeof(struct ia32_flock64), VERIFY_WRITE)) {
int ret = __put_user(fl64->l_type, &fl32->l_type);
ret |= __put_user(fl64->l_whence, &fl32->l_whence);
ret |= __put_user(fl64->l_start, &fl32->l_start);
ret |= __put_user(fl64->l_len, &fl32->l_len);
ret |= __put_user(fl64->l_pid, &fl32->l_pid);
return ret;
} }
return -EFAULT;
} }
asmlinkage long sys32_fcntl64(unsigned int fd, unsigned int cmd, unsigned long arg) asmlinkage long sys32_fcntl64(unsigned int fd, unsigned int cmd, unsigned long arg)
{ {
if (cmd >= F_GETLK64 && cmd <= F_SETLKW64) struct flock fl64;
return sys_fcntl(fd, cmd + F_GETLK - F_GETLK64, arg); mm_segment_t oldfs = get_fs();
return sys32_fcntl(fd, cmd, arg); int ret = 0, origcmd;
unsigned long origarg;
origcmd = cmd;
origarg = arg;
switch (cmd) {
case F_GETLK:
case F_SETLK:
case F_SETLKW:
ret = get_flock(&fl64, (struct flock32 *)arg);
arg = (unsigned long) &fl64;
set_fs(KERNEL_DS);
break;
case F_GETLK64:
cmd = F_GETLK;
goto cnv64;
case F_SETLK64:
cmd = F_SETLK;
goto cnv64;
case F_SETLKW64:
cmd = F_SETLKW;
cnv64:
ret = get_flock64((struct ia32_flock64 *)arg, &fl64);
arg = (unsigned long)&fl64;
set_fs(KERNEL_DS);
break;
}
if (!ret)
ret = sys_fcntl(fd, cmd, arg);
set_fs(oldfs);
if (origcmd == F_GETLK && !ret)
ret = put_flock(&fl64, (struct flock32 *)origarg);
else if (cmd == F_GETLK && !ret)
ret = put_flock64((struct ia32_flock64 *)origarg, &fl64);
return ret;
} }
int sys32_ni_syscall(int call) int sys32_ni_syscall(int call)
...@@ -1362,42 +1397,6 @@ sys32_utime(char * filename, struct utimbuf32 *times) ...@@ -1362,42 +1397,6 @@ sys32_utime(char * filename, struct utimbuf32 *times)
return ret; return ret;
} }
/*
* Ooo, nasty. We need here to frob 32-bit unsigned longs to
* 64-bit unsigned longs.
*/
static inline int
get_fd_set32(unsigned long n, unsigned long *fdset, u32 *ufdset)
{
if (ufdset) {
unsigned long odd;
if (verify_area(VERIFY_READ, ufdset, n*sizeof(u32)))
return -EFAULT;
odd = n & 1UL;
n &= ~1UL;
while (n) {
unsigned long h, l;
__get_user(l, ufdset);
__get_user(h, ufdset+1);
ufdset += 2;
*fdset++ = h << 32 | l;
n -= 2;
}
if (odd)
__get_user(*fdset, ufdset);
} else {
/* Tricky, must clear full unsigned long in the
* kernel fdset at the end, this makes sure that
* actually happens.
*/
memset(fdset, 0, ((n + 1) & ~1)*sizeof(u32));
}
return 0;
}
extern asmlinkage long sys_sysfs(int option, unsigned long arg1, extern asmlinkage long sys_sysfs(int option, unsigned long arg1,
unsigned long arg2); unsigned long arg2);
...@@ -1704,136 +1703,6 @@ sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo) ...@@ -1704,136 +1703,6 @@ sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
return ret; return ret;
} }
extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
{
uid_t sruid, seuid;
sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
return sys_setreuid(sruid, seuid);
}
extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
asmlinkage long
sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
__kernel_uid_t32 suid)
{
uid_t sruid, seuid, ssuid;
sruid = (ruid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)ruid);
seuid = (euid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)euid);
ssuid = (suid == (__kernel_uid_t32)-1) ? ((uid_t)-1) : ((uid_t)suid);
return sys_setresuid(sruid, seuid, ssuid);
}
extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
asmlinkage long
sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
__kernel_uid_t32 *suid)
{
uid_t a, b, c;
int ret;
mm_segment_t old_fs = get_fs();
set_fs (KERNEL_DS);
ret = sys_getresuid(&a, &b, &c);
set_fs (old_fs);
if (put_user (a, ruid) || put_user (b, euid) || put_user (c, suid))
return -EFAULT;
return ret;
}
extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
asmlinkage long
sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
{
gid_t srgid, segid;
srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
return sys_setregid(srgid, segid);
}
extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
asmlinkage long
sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
__kernel_gid_t32 sgid)
{
gid_t srgid, segid, ssgid;
srgid = (rgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)rgid);
segid = (egid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)egid);
ssgid = (sgid == (__kernel_gid_t32)-1) ? ((gid_t)-1) : ((gid_t)sgid);
return sys_setresgid(srgid, segid, ssgid);
}
extern asmlinkage long sys_getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid);
asmlinkage long
sys32_getresgid(__kernel_gid_t32 *rgid, __kernel_gid_t32 *egid,
__kernel_gid_t32 *sgid)
{
gid_t a, b, c;
int ret;
mm_segment_t old_fs = get_fs();
set_fs (KERNEL_DS);
ret = sys_getresgid(&a, &b, &c);
set_fs (old_fs);
if (!ret) {
ret = put_user (a, rgid);
ret |= put_user (b, egid);
ret |= put_user (c, sgid);
}
return ret;
}
extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
asmlinkage long
sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
{
gid_t gl[NGROUPS];
int ret, i;
mm_segment_t old_fs = get_fs ();
set_fs (KERNEL_DS);
ret = sys_getgroups(gidsetsize, gl);
set_fs (old_fs);
if (gidsetsize && ret > 0 && ret <= NGROUPS)
for (i = 0; i < ret; i++, grouplist++)
if (put_user (gl[i], grouplist))
return -EFAULT;
return ret;
}
extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
asmlinkage long
sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
{
gid_t gl[NGROUPS];
int ret, i;
mm_segment_t old_fs = get_fs ();
if ((unsigned) gidsetsize > NGROUPS)
return -EINVAL;
for (i = 0; i < gidsetsize; i++, grouplist++)
if (get_user (gl[i], grouplist))
return -EFAULT;
set_fs (KERNEL_DS);
ret = sys_setgroups(gidsetsize, gl);
set_fs (old_fs);
return ret;
}
extern void check_pending(int signum); extern void check_pending(int signum);
asmlinkage long sys_utimes(char *, struct timeval *); asmlinkage long sys_utimes(char *, struct timeval *);
...@@ -1943,7 +1812,7 @@ sys32_newuname(struct new_utsname * name) ...@@ -1943,7 +1812,7 @@ sys32_newuname(struct new_utsname * name)
int ret = sys_newuname(name); int ret = sys_newuname(name);
if (current->personality == PER_LINUX32 && !ret) { if (current->personality == PER_LINUX32 && !ret) {
ret = copy_to_user(name->machine, "i386\0\0", 8); ret = copy_to_user(name->machine, "i686\0\0", 6);
} }
return ret; return ret;
} }
...@@ -2164,7 +2033,7 @@ int sys32_uname(struct old_utsname * name) ...@@ -2164,7 +2033,7 @@ int sys32_uname(struct old_utsname * name)
err=copy_to_user(name, &system_utsname, sizeof (*name)); err=copy_to_user(name, &system_utsname, sizeof (*name));
up_read(&uts_sem); up_read(&uts_sem);
if (current->personality == PER_LINUX32) if (current->personality == PER_LINUX32)
err |= copy_to_user(&name->machine, "i386", 5); err |= copy_to_user(&name->machine, "i686", 5);
return err?-EFAULT:0; return err?-EFAULT:0;
} }
...@@ -2270,16 +2139,17 @@ int sys32_execve(char *name, u32 argv, u32 envp, struct pt_regs regs) ...@@ -2270,16 +2139,17 @@ int sys32_execve(char *name, u32 argv, u32 envp, struct pt_regs regs)
asmlinkage int sys32_fork(struct pt_regs regs) asmlinkage int sys32_fork(struct pt_regs regs)
{ {
struct task_struct *p; struct task_struct *p;
p = do_fork(SIGCHLD, regs.rsp, &regs, 0); p = do_fork(SIGCHLD, regs.rsp, &regs, 0, NULL);
return IS_ERR(p) ? PTR_ERR(p) : p->pid; return IS_ERR(p) ? PTR_ERR(p) : p->pid;
} }
asmlinkage int sys32_clone(unsigned int clone_flags, unsigned int newsp, struct pt_regs regs) asmlinkage int sys32_clone(unsigned int clone_flags, unsigned int newsp, struct pt_regs regs)
{ {
struct task_struct *p; struct task_struct *p;
int *user_tid = (int *)regs.rdx;
if (!newsp) if (!newsp)
newsp = regs.rsp; newsp = regs.rsp;
p = do_fork(clone_flags & ~CLONE_IDLETASK, newsp, &regs, 0); p = do_fork(clone_flags & ~CLONE_IDLETASK, newsp, &regs, 0, user_tid);
return IS_ERR(p) ? PTR_ERR(p) : p->pid; return IS_ERR(p) ? PTR_ERR(p) : p->pid;
} }
...@@ -2296,7 +2166,7 @@ asmlinkage int sys32_clone(unsigned int clone_flags, unsigned int newsp, struct ...@@ -2296,7 +2166,7 @@ asmlinkage int sys32_clone(unsigned int clone_flags, unsigned int newsp, struct
asmlinkage int sys32_vfork(struct pt_regs regs) asmlinkage int sys32_vfork(struct pt_regs regs)
{ {
struct task_struct *p; struct task_struct *p;
p = do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.rsp, &regs, 0); p = do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.rsp, &regs, 0, NULL);
return IS_ERR(p) ? PTR_ERR(p) : p->pid; return IS_ERR(p) ? PTR_ERR(p) : p->pid;
} }
...@@ -2695,10 +2565,26 @@ int sys32_sched_getaffinity(pid_t pid, unsigned int len, ...@@ -2695,10 +2565,26 @@ int sys32_sched_getaffinity(pid_t pid, unsigned int len,
return err; return err;
} }
extern long sys_io_setup(unsigned nr_reqs, aio_context_t *ctx);
long sys32_io_setup(unsigned nr_reqs, u32 *ctx32p)
{
long ret;
aio_context_t ctx64;
mm_segment_t oldfs = get_fs();
set_fs(KERNEL_DS);
ret = sys_io_setup(nr_reqs, &ctx64);
set_fs(oldfs);
/* truncating is ok because it's a user address */
if (!ret)
ret = put_user((u32)ctx64, ctx32p);
return ret;
}
struct exec_domain ia32_exec_domain = { struct exec_domain ia32_exec_domain = {
name: "linux/x86", .name = "linux/x86",
pers_low: PER_LINUX32, .pers_low = PER_LINUX32,
pers_high: PER_LINUX32, .pers_high = PER_LINUX32,
}; };
static int __init ia32_init (void) static int __init ia32_init (void)
......
...@@ -2,15 +2,14 @@ ...@@ -2,15 +2,14 @@
# Makefile for the linux kernel. # Makefile for the linux kernel.
# #
O_TARGET := kernel.o
EXTRA_TARGETS := head.o head64.o init_task.o EXTRA_TARGETS := head.o head64.o init_task.o
export-objs := mtrr.o x8664_ksyms.o export-objs := mtrr.o x8664_ksyms.o pci-gart.o
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \ obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
ptrace.o i8259.o ioport.o ldt.o setup.o time.o sys_x86_64.o \ ptrace.o i8259.o ioport.o ldt.o setup.o time.o sys_x86_64.o \
pci-dma.o x8664_ksyms.o i387.o syscall.o vsyscall.o \ pci-dma.o x8664_ksyms.o i387.o syscall.o vsyscall.o \
setup64.o bluesmoke.o bootflag.o setup64.o bluesmoke.o bootflag.o e820.o reboot.o
obj-$(CONFIG_MTRR) += mtrr.o obj-$(CONFIG_MTRR) += mtrr.o
obj-$(CONFIG_X86_MSR) += msr.o obj-$(CONFIG_X86_MSR) += msr.o
...@@ -18,9 +17,11 @@ obj-$(CONFIG_X86_CPUID) += cpuid.o ...@@ -18,9 +17,11 @@ obj-$(CONFIG_X86_CPUID) += cpuid.o
obj-$(CONFIG_SMP) += smp.o smpboot.o trampoline.o obj-$(CONFIG_SMP) += smp.o smpboot.o trampoline.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic.o nmi.o obj-$(CONFIG_X86_LOCAL_APIC) += apic.o nmi.o
obj-$(CONFIG_X86_IO_APIC) += io_apic.o mpparse.o obj-$(CONFIG_X86_IO_APIC) += io_apic.o mpparse.o
#obj-$(CONFIG_ACPI) += acpi.o obj-$(CONFIG_ACPI) += acpi.o
#obj-$(CONFIG_ACPI_SLEEP) += acpi_wakeup.o #obj-$(CONFIG_ACPI_SLEEP) += acpi_wakeup.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_GART_IOMMU) += pci-gart.o aperture.o
obj-$(CONFIG_DUMMY_IOMMU) += pci-nommu.o
EXTRA_AFLAGS := -traditional EXTRA_AFLAGS := -traditional
......
/*
* Firmware replacement code.
*
* Work around broken BIOSes that don't set an aperture.
 * The IOMMU code needs an aperture even when no AGP is present in the system.
* Map the aperture over some low memory. This is cheaper than doing bounce
* buffering. The memory is lost. This is done at early boot because only
* the bootmem allocator can allocate 32+MB.
*
* Copyright 2002 Andi Kleen, SuSE Labs.
* $Id: aperture.c,v 1.2 2002/09/19 19:25:32 ak Exp $
*/
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/bootmem.h>
#include <linux/mmzone.h>
#include <linux/pci_ids.h>
#include <asm/e820.h>
#include <asm/io.h>
#include <asm/proto.h>
#include <asm/pci-direct.h>
int fallback_aper_order __initdata = 1; /* 64MB */
int fallback_aper_force __initdata = 0;
extern int no_iommu, force_mmu;
/* This code runs before the PCI subsystem is initialized, so just
access the northbridge directly. */
#define NB_ID_3 (PCI_VENDOR_ID_AMD | (0x1103<<16))
static u32 __init allocate_aperture(void)
{
#ifdef CONFIG_DISCONTIGMEM
pg_data_t *nd0 = NODE_DATA(0);
#else
pg_data_t *nd0 = &contig_page_data;
#endif
u32 aper_size;
void *p;
if (fallback_aper_order > 7)
fallback_aper_order = 7;
aper_size = (32 * 1024 * 1024) << fallback_aper_order;
/*
 * Aperture has to be naturally aligned, it seems. This means a
 * 2GB aperture won't have much chance of succeeding in the lower 4GB of
* memory. Unfortunately we cannot move it up because that would make
* the IOMMU useless.
*/
p = __alloc_bootmem_node(nd0, aper_size, aper_size, 0);
if (!p || __pa(p)+aper_size > 0xffffffff) {
printk("Cannot allocate aperture memory hole (%p,%uK)\n",
p, aper_size>>10);
if (p)
free_bootmem((unsigned long)p, aper_size);
return 0;
}
printk("Mapping aperture over %d KB of RAM @ %lx\n",
aper_size >> 10, __pa(p));
return (u32)__pa(p);
}
void __init iommu_hole_init(void)
{
int fix, num;
u32 aper_size, aper_alloc, aper_order;
u64 aper_base;
if (no_iommu)
return;
if (end_pfn < (0xffffffff>>PAGE_SHIFT) && !force_mmu)
return;
printk("Checking aperture...\n");
fix = 0;
for (num = 24; num < 32; num++) {
if (read_pci_config(0, num, 3, 0x00) != NB_ID_3)
continue;
aper_order = (read_pci_config(0, num, 3, 0x90) >> 1) & 7;
aper_size = (32 * 1024 * 1024) << aper_order;
aper_base = read_pci_config(0, num, 3, 0x94) & 0x7fff;
aper_base <<= 25;
printk("CPU %d: aperture @ %Lx size %u KB\n", num-24,
aper_base, aper_size>>10);
if (!aper_base || aper_base + aper_size >= 0xffffffff) {
fix = 1;
break;
}
if (e820_mapped(aper_base, aper_base + aper_size, E820_RAM)) {
printk("Aperture pointing to e820 RAM. Ignoring.\n");
fix = 1;
break;
}
}
if (!fix && !fallback_aper_force)
return;
printk("Your BIOS is broken and doesn't leave an aperture memory hole\n");
aper_alloc = allocate_aperture();
if (!aper_alloc)
return;
for (num = 24; num < 32; num++) {
if (read_pci_config(0, num, 3, 0x00) != NB_ID_3)
continue;
/* Don't enable translation yet. That is done later.
Assume this BIOS didn't initialise the GART so
just overwrite all previous bits */
write_pci_config(0, num, 3, 0x90, fallback_aper_order<<1);
write_pci_config(0, num, 3, 0x94, aper_alloc>>25);
}
}
...@@ -30,6 +30,8 @@ ...@@ -30,6 +30,8 @@
#include <asm/mpspec.h> #include <asm/mpspec.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
int disable_apic_timer __initdata;
/* Using APIC to generate smp_local_timer_interrupt? */ /* Using APIC to generate smp_local_timer_interrupt? */
int using_apic_timer = 0; int using_apic_timer = 0;
...@@ -598,7 +600,7 @@ static int __init detect_init_APIC (void) ...@@ -598,7 +600,7 @@ static int __init detect_init_APIC (void)
switch (boot_cpu_data.x86_vendor) { switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD: case X86_VENDOR_AMD:
if (boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model > 1) if (boot_cpu_data.x86 > 6)
break; break;
goto no_apic; goto no_apic;
case X86_VENDOR_INTEL: case X86_VENDOR_INTEL:
...@@ -640,6 +642,8 @@ static int __init detect_init_APIC (void) ...@@ -640,6 +642,8 @@ static int __init detect_init_APIC (void)
if (nmi_watchdog != NMI_NONE) if (nmi_watchdog != NMI_NONE)
nmi_watchdog = NMI_LOCAL_APIC; nmi_watchdog = NMI_LOCAL_APIC;
apic_pm_init1();
printk("Found and enabled local APIC!\n"); printk("Found and enabled local APIC!\n");
return 0; return 0;
...@@ -694,59 +698,6 @@ void __init init_apic_mappings(void) ...@@ -694,59 +698,6 @@ void __init init_apic_mappings(void)
#endif #endif
} }
/*
* This part sets up the APIC 32 bit clock in LVTT1, with HZ interrupts
* per second. We assume that the caller has already set up the local
* APIC.
*
* The APIC timer is not exactly sync with the external timer chip, it
* closely follows bus clocks.
*/
/*
* The timer chip is already set up at HZ interrupts per second here,
* but we do not accept timer interrupts yet. We only allow the BP
* to calibrate.
*/
static unsigned int __init get_8254_timer_count(void)
{
extern spinlock_t i8253_lock;
unsigned long flags;
unsigned int count;
spin_lock_irqsave(&i8253_lock, flags);
outb_p(0x00, 0x43);
count = inb_p(0x40);
count |= inb_p(0x40) << 8;
spin_unlock_irqrestore(&i8253_lock, flags);
return count;
}
void __init wait_8254_wraparound(void)
{
unsigned int curr_count, prev_count=~0;
int delta;
curr_count = get_8254_timer_count();
do {
prev_count = curr_count;
curr_count = get_8254_timer_count();
delta = curr_count-prev_count;
/*
* This limit for delta seems arbitrary, but it isn't, it's
* slightly above the level of error a buggy Mercury/Neptune
* chipset timer can cause.
*/
} while (delta < 300);
}
/* /*
* This function sets up the local APIC timer, with a timeout of * This function sets up the local APIC timer, with a timeout of
* 'clocks' APIC bus clock. During calibration we actually call * 'clocks' APIC bus clock. During calibration we actually call
...@@ -779,52 +730,36 @@ void __setup_APIC_LVTT(unsigned int clocks) ...@@ -779,52 +730,36 @@ void __setup_APIC_LVTT(unsigned int clocks)
apic_write_around(APIC_TMICT, clocks/APIC_DIVISOR); apic_write_around(APIC_TMICT, clocks/APIC_DIVISOR);
} }
void setup_APIC_timer(void * data) static void setup_APIC_timer(unsigned int clocks)
{ {
unsigned int clocks = (unsigned long) data, slice, t0, t1;
unsigned long flags; unsigned long flags;
int delta;
local_save_flags(flags); local_irq_save(flags);
local_irq_enable();
/*
* ok, Intel has some smart code in their APIC that knows
* if a CPU was in 'hlt' lowpower mode, and this increases
* its APIC arbitration priority. To avoid the external timer
* IRQ APIC event being in synchron with the APIC clock we
* introduce an interrupt skew to spread out timer events.
*
* The number of slices within a 'big' timeslice is smp_num_cpus+1
*/
slice = clocks / (smp_num_cpus+1);
printk("cpu: %d, clocks: %d, slice: %d\n",
smp_processor_id(), clocks, slice);
/*
* Wait for IRQ0's slice:
*/
wait_8254_wraparound();
#if 0
/* For some reasons this doesn't work on Simics, so fake it for now */
if (strstr(boot_cpu_data.x86_model_id, "Screwdriver")) {
__setup_APIC_LVTT(clocks); __setup_APIC_LVTT(clocks);
return;
}
#endif
t0 = apic_read(APIC_TMICT)*APIC_DIVISOR; /* wait for irq slice */
/* Wait till TMCCT gets reloaded from TMICT... */ {
do { int c1, c2;
t1 = apic_read(APIC_TMCCT)*APIC_DIVISOR; outb_p(0x00, 0x43);
delta = (int)(t0 - t1 - slice*(smp_processor_id()+1)); c2 = inb_p(0x40);
} while (delta >= 0); c2 |= inb_p(0x40) << 8;
/* Now wait for our slice for real. */
do { do {
t1 = apic_read(APIC_TMCCT)*APIC_DIVISOR; c1 = c2;
delta = (int)(t0 - t1 - slice*(smp_processor_id()+1)); outb_p(0x00, 0x43);
} while (delta < 0); c2 = inb_p(0x40);
c2 |= inb_p(0x40) << 8;
} while (c2 - c1 < 300);
}
__setup_APIC_LVTT(clocks); __setup_APIC_LVTT(clocks);
printk("CPU%d<T0:%u,T1:%u,D:%d,S:%u,C:%u>\n",
smp_processor_id(), t0, t1, delta, slice, clocks);
local_irq_restore(flags); local_irq_restore(flags);
} }
...@@ -841,16 +776,12 @@ void setup_APIC_timer(void * data) ...@@ -841,16 +776,12 @@ void setup_APIC_timer(void * data)
* APIC irq that way. * APIC irq that way.
*/ */
#define TICK_COUNT 100000000
int __init calibrate_APIC_clock(void) int __init calibrate_APIC_clock(void)
{ {
unsigned long t1 = 0, t2 = 0; int apic, apic_start, tsc, tsc_start;
int tt1, tt2;
int result; int result;
int i;
const int LOOPS = HZ/10;
printk("calibrating APIC timer ...\n");
/* /*
* Put whatever arbitrary (but long enough) timeout * Put whatever arbitrary (but long enough) timeout
* value into the APIC clock, we just want to get the * value into the APIC clock, we just want to get the
...@@ -858,61 +789,31 @@ int __init calibrate_APIC_clock(void) ...@@ -858,61 +789,31 @@ int __init calibrate_APIC_clock(void)
*/ */
__setup_APIC_LVTT(1000000000); __setup_APIC_LVTT(1000000000);
/* apic_start = apic_read(APIC_TMCCT);
* The timer chip counts down to zero. Let's wait rdtscl(tsc_start);
* for a wraparound to start exact measurement:
* (the current tick might have been already half done)
*/
wait_8254_wraparound();
/*
* We wrapped around just now. Let's start:
*/
if (cpu_has_tsc)
rdtscll(t1);
tt1 = apic_read(APIC_TMCCT);
/*
* Let's wait LOOPS wraprounds:
*/
for (i = 0; i < LOOPS; i++)
wait_8254_wraparound();
tt2 = apic_read(APIC_TMCCT);
if (cpu_has_tsc)
rdtscll(t2);
/*
* The APIC bus clock counter is 32 bits only, it
* might have overflown, but note that we use signed
* longs, thus no extra care needed.
*
* underflown to be exact, as the timer counts down ;)
*/
result = (tt1-tt2)*APIC_DIVISOR/LOOPS;
printk("t1 = %ld t2 = %ld tt1 = %d tt2 = %d\n", t1, t2, tt1, tt2);
do {
apic = apic_read(APIC_TMCCT);
rdtscl(tsc);
} while ((tsc - tsc_start) < TICK_COUNT && (apic - apic_start) < TICK_COUNT);
if (cpu_has_tsc) result = (apic_start - apic) * 1000L * cpu_khz / (tsc - tsc_start);
printk("..... CPU clock speed is %d.%04d MHz.\n",
((int)(t2-t1)/LOOPS)/(1000000/HZ),
((int)(t2-t1)/LOOPS)%(1000000/HZ));
printk("..... host bus clock speed is %d.%04d MHz.\n", printk("Detected %d.%03d MHz APIC timer.\n",
result/(1000000/HZ), result / 1000 / 1000, result / 1000 % 1000);
result%(1000000/HZ));
return result; return result * APIC_DIVISOR / HZ;
} }
static unsigned int calibration_result; static unsigned int calibration_result;
void __init setup_APIC_clocks (void) void __init setup_boot_APIC_clock (void)
{ {
if (disable_apic_timer) {
printk("Disabling APIC timer\n");
return;
}
printk("Using local APIC timer interrupts.\n"); printk("Using local APIC timer interrupts.\n");
using_apic_timer = 1; using_apic_timer = 1;
...@@ -922,12 +823,16 @@ void __init setup_APIC_clocks (void) ...@@ -922,12 +823,16 @@ void __init setup_APIC_clocks (void)
/* /*
* Now set up the timer for real. * Now set up the timer for real.
*/ */
setup_APIC_timer((void *)(u64)calibration_result); setup_APIC_timer(calibration_result);
local_irq_enable(); local_irq_enable();
}
/* and update all other cpus */ void __init setup_secondary_APIC_clock(void)
smp_call_function(setup_APIC_timer, (void *)(u64)calibration_result, 1, 1); {
local_irq_disable(); /* FIXME: Do we need this? --RR */
setup_APIC_timer(calibration_result);
local_irq_enable();
} }
void __init disable_APIC_timer(void) void __init disable_APIC_timer(void)
...@@ -1044,8 +949,6 @@ inline void smp_local_timer_interrupt(struct pt_regs *regs) ...@@ -1044,8 +949,6 @@ inline void smp_local_timer_interrupt(struct pt_regs *regs)
* [ if a single-CPU system runs an SMP kernel then we call the local * [ if a single-CPU system runs an SMP kernel then we call the local
* interrupt as well. Thus we cannot inline the local irq ... ] * interrupt as well. Thus we cannot inline the local irq ... ]
*/ */
unsigned int apic_timer_irqs [NR_CPUS];
void smp_apic_timer_interrupt(struct pt_regs *regs) void smp_apic_timer_interrupt(struct pt_regs *regs)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
...@@ -1053,7 +956,7 @@ void smp_apic_timer_interrupt(struct pt_regs *regs) ...@@ -1053,7 +956,7 @@ void smp_apic_timer_interrupt(struct pt_regs *regs)
/* /*
* the NMI deadlock-detector uses this. * the NMI deadlock-detector uses this.
*/ */
apic_timer_irqs[cpu]++; add_pda(apic_timer_irqs, 1);
/* /*
* NOTE! We'd better ACK the irq immediately, * NOTE! We'd better ACK the irq immediately,
...@@ -1065,12 +968,9 @@ void smp_apic_timer_interrupt(struct pt_regs *regs) ...@@ -1065,12 +968,9 @@ void smp_apic_timer_interrupt(struct pt_regs *regs)
* Besides, if we don't timer interrupts ignore the global * Besides, if we don't timer interrupts ignore the global
* interrupt lock, which is the WrongThing (tm) to do. * interrupt lock, which is the WrongThing (tm) to do.
*/ */
irq_enter(cpu, 0); irq_enter();
smp_local_timer_interrupt(regs); smp_local_timer_interrupt(regs);
irq_exit(cpu, 0); irq_exit();
if (softirq_pending(cpu))
do_softirq();
} }
/* /*
...@@ -1082,6 +982,7 @@ asmlinkage void smp_spurious_interrupt(void) ...@@ -1082,6 +982,7 @@ asmlinkage void smp_spurious_interrupt(void)
static unsigned long last_warning; static unsigned long last_warning;
static unsigned long skipped; static unsigned long skipped;
irq_enter();
/* /*
* Check if this really is a spurious interrupt and ACK it * Check if this really is a spurious interrupt and ACK it
* if it is a vectored one. Just in case... * if it is a vectored one. Just in case...
...@@ -1099,6 +1000,7 @@ asmlinkage void smp_spurious_interrupt(void) ...@@ -1099,6 +1000,7 @@ asmlinkage void smp_spurious_interrupt(void)
} else { } else {
skipped++; skipped++;
} }
irq_exit();
} }
/* /*
...@@ -1109,6 +1011,7 @@ asmlinkage void smp_error_interrupt(void) ...@@ -1109,6 +1011,7 @@ asmlinkage void smp_error_interrupt(void)
{ {
unsigned int v, v1; unsigned int v, v1;
irq_enter();
/* First tickle the hardware, only then report what went on. -- REW */ /* First tickle the hardware, only then report what went on. -- REW */
v = apic_read(APIC_ESR); v = apic_read(APIC_ESR);
apic_write(APIC_ESR, 0); apic_write(APIC_ESR, 0);
...@@ -1126,16 +1029,23 @@ asmlinkage void smp_error_interrupt(void) ...@@ -1126,16 +1029,23 @@ asmlinkage void smp_error_interrupt(void)
6: Received illegal vector 6: Received illegal vector
7: Illegal register address 7: Illegal register address
*/ */
printk (KERN_ERR "APIC error on CPU%d: %02x(%02x)\n", printk (KERN_INFO "APIC error on CPU%d: %02x(%02x)\n",
smp_processor_id(), v , v1); smp_processor_id(), v , v1);
irq_exit();
} }
int disable_apic __initdata;
/* /*
* This initializes the IO-APIC and APIC hardware if this is * This initializes the IO-APIC and APIC hardware if this is
* a UP kernel. * a UP kernel.
*/ */
int __init APIC_init_uniprocessor (void) int __init APIC_init_uniprocessor (void)
{ {
if (disable_apic) {
printk(KERN_INFO "Apic disabled\n");
return -1;
}
if (!smp_found_config && !cpu_has_apic) if (!smp_found_config && !cpu_has_apic)
return -1; return -1;
...@@ -1166,7 +1076,21 @@ int __init APIC_init_uniprocessor (void) ...@@ -1166,7 +1076,21 @@ int __init APIC_init_uniprocessor (void)
if (!skip_ioapic_setup && nr_ioapics) if (!skip_ioapic_setup && nr_ioapics)
setup_IO_APIC(); setup_IO_APIC();
#endif #endif
setup_APIC_clocks(); setup_boot_APIC_clock();
return 0; return 0;
} }
static __init int setup_disableapic(char *str)
{
	disable_apic = 1;
	return 1;
}
static __init int setup_noapictimer(char *str)
{
	disable_apic_timer = 1;
	return 1;
}
__setup("disableapic", setup_disableapic);
__setup("noapictimer", setup_noapictimer);
...@@ -24,15 +24,16 @@ int main(void) ...@@ -24,15 +24,16 @@ int main(void)
ENTRY(state); ENTRY(state);
ENTRY(flags); ENTRY(flags);
ENTRY(thread); ENTRY(thread);
ENTRY(pid);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
#define ENTRY(entry) DEFINE(threadinfo__ ## entry, offsetof(struct thread_info, entry)) #define ENTRY(entry) DEFINE(threadinfo_ ## entry, offsetof(struct thread_info, entry))
ENTRY(flags); ENTRY(flags);
ENTRY(addr_limit); ENTRY(addr_limit);
ENTRY(preempt_count); ENTRY(preempt_count);
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
#define ENTRY(entry) DEFINE(pda__ ## entry, offsetof(struct x8664_pda, entry)) #define ENTRY(entry) DEFINE(pda_ ## entry, offsetof(struct x8664_pda, entry))
ENTRY(kernelstack); ENTRY(kernelstack);
ENTRY(oldrsp); ENTRY(oldrsp);
ENTRY(pcurrent); ENTRY(pcurrent);
......
...@@ -39,7 +39,7 @@ static void hammer_machine_check(struct pt_regs * regs, long error_code) ...@@ -39,7 +39,7 @@ static void hammer_machine_check(struct pt_regs * regs, long error_code)
recover=0; recover=0;
printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n", smp_processor_id(), mcgsth, mcgstl); printk(KERN_EMERG "CPU %d: Machine Check Exception: %08x%08x\n", smp_processor_id(), mcgsth, mcgstl);
preempt_disable();
for (i=0;i<banks;i++) { for (i=0;i<banks;i++) {
rdmsr(MSR_IA32_MC0_STATUS+i*4,low, high); rdmsr(MSR_IA32_MC0_STATUS+i*4,low, high);
if(high&(1<<31)) { if(high&(1<<31)) {
...@@ -64,6 +64,7 @@ static void hammer_machine_check(struct pt_regs * regs, long error_code) ...@@ -64,6 +64,7 @@ static void hammer_machine_check(struct pt_regs * regs, long error_code)
wmb(); wmb();
} }
} }
preempt_enable();
if(recover&2) if(recover&2)
panic("CPU context corrupt"); panic("CPU context corrupt");
...@@ -110,6 +111,7 @@ static void mce_checkregs (void *info) ...@@ -110,6 +111,7 @@ static void mce_checkregs (void *info)
BUG_ON (*cpu != smp_processor_id()); BUG_ON (*cpu != smp_processor_id());
preempt_disable();
for (i=0; i<banks; i++) { for (i=0; i<banks; i++) {
rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high); rdmsr(MSR_IA32_MC0_STATUS+i*4, low, high);
...@@ -124,6 +126,7 @@ static void mce_checkregs (void *info) ...@@ -124,6 +126,7 @@ static void mce_checkregs (void *info)
wmb(); wmb();
} }
} }
preempt_enable();
} }
......
...@@ -146,10 +146,10 @@ static int cpuid_open(struct inode *inode, struct file *file) ...@@ -146,10 +146,10 @@ static int cpuid_open(struct inode *inode, struct file *file)
* File operations we support * File operations we support
*/ */
static struct file_operations cpuid_fops = { static struct file_operations cpuid_fops = {
owner: THIS_MODULE, .owner = THIS_MODULE,
llseek: cpuid_seek, .llseek = cpuid_seek,
read: cpuid_read, .read = cpuid_read,
open: cpuid_open, .open = cpuid_open,
}; };
int __init cpuid_init(void) int __init cpuid_init(void)
......
...@@ -47,10 +47,10 @@ static void early_vga_write(struct console *con, const char *str, unsigned n) ...@@ -47,10 +47,10 @@ static void early_vga_write(struct console *con, const char *str, unsigned n)
} }
static struct console early_vga_console = { static struct console early_vga_console = {
name: "earlyvga", .name = "earlyvga",
write: early_vga_write, .write = early_vga_write,
flags: CON_PRINTBUFFER, .flags = CON_PRINTBUFFER,
index: -1, .index = -1,
}; };
/* Serial functions loosely based on a similar package from Klaus P. Gerlicher */ /* Serial functions loosely based on a similar package from Klaus P. Gerlicher */
...@@ -138,10 +138,10 @@ static __init void early_serial_init(char *opt) ...@@ -138,10 +138,10 @@ static __init void early_serial_init(char *opt)
} }
static struct console early_serial_console = { static struct console early_serial_console = {
name: "earlyser", .name = "earlyser",
write: early_serial_write, .write = early_serial_write,
flags: CON_PRINTBUFFER, .flags = CON_PRINTBUFFER,
index: -1, .index = -1,
}; };
/* Direct interface for emergencies */ /* Direct interface for emergencies */
...@@ -181,6 +181,9 @@ int __init setup_early_printk(char *opt) ...@@ -181,6 +181,9 @@ int __init setup_early_printk(char *opt)
if (!strncmp(buf, "serial", 6)) { if (!strncmp(buf, "serial", 6)) {
early_serial_init(buf + 6); early_serial_init(buf + 6);
early_console = &early_serial_console; early_console = &early_serial_console;
} else if (!strncmp(buf, "ttyS", 4)) {
early_serial_init(buf);
early_console = &early_serial_console;
} else if (!strncmp(buf, "vga", 3)) { } else if (!strncmp(buf, "vga", 3)) {
early_console = &early_vga_console; early_console = &early_vga_console;
} else { } else {
......
...@@ -84,7 +84,7 @@ ...@@ -84,7 +84,7 @@
xorq %rax, %rax xorq %rax, %rax
pushq %rax /* ss */ pushq %rax /* ss */
pushq %rax /* rsp */ pushq %rax /* rsp */
pushq %rax /* eflags */ pushq $(1<<9) /* eflags - interrupts on */
pushq $__KERNEL_CS /* cs */ pushq $__KERNEL_CS /* cs */
pushq \child_rip /* rip */ pushq \child_rip /* rip */
pushq %rax /* orig rax */ pushq %rax /* orig rax */
...@@ -236,21 +236,17 @@ badsys: ...@@ -236,21 +236,17 @@ badsys:
* Has correct top of stack, but partial stack frame. * Has correct top of stack, but partial stack frame.
*/ */
ENTRY(int_ret_from_sys_call) ENTRY(int_ret_from_sys_call)
testl $3,CS-ARGOFFSET(%rsp) # kernel syscall? cli
je int_restore_args testl $3,CS-ARGOFFSET(%rsp)
je retint_restore_args
movl $_TIF_ALLWORK_MASK,%edi movl $_TIF_ALLWORK_MASK,%edi
/* edi: mask to check */ /* edi: mask to check */
int_with_check: int_with_check:
GET_THREAD_INFO(%rcx) GET_THREAD_INFO(%rcx)
cli
movl threadinfo_flags(%rcx),%edx movl threadinfo_flags(%rcx),%edx
andl %edi,%edx andl %edi,%edx
jnz int_careful jnz int_careful
int_restore_swapgs: jmp retint_swapgs
swapgs
int_restore_args:
RESTORE_ARGS 0,8,0
iretq
/* Either reschedule or signal or syscall exit tracking needed. */ /* Either reschedule or signal or syscall exit tracking needed. */
/* First do a reschedule test. */ /* First do a reschedule test. */
...@@ -364,15 +360,11 @@ ENTRY(stub_rt_sigreturn) ...@@ -364,15 +360,11 @@ ENTRY(stub_rt_sigreturn)
.macro interrupt func .macro interrupt func
cld cld
SAVE_ARGS SAVE_ARGS
#ifdef CONFIG_PREEMPT
GET_THREAD_INFO(%rdx)
incl threadinfo_preempt_count(%rdx)
#endif
leaq -ARGOFFSET(%rsp),%rdi # arg1 for handler leaq -ARGOFFSET(%rsp),%rdi # arg1 for handler
testl $3,CS(%rdi) testl $3,CS(%rdi)
je 1f je 1f
swapgs swapgs
1: addl $1,PDAREF(pda_irqcount) # XXX: should be merged with irq.c irqcount 1: addl $1,PDAREF(pda_irqcount) # RED-PEN should check preempt count
movq PDAREF(pda_irqstackptr),%rax movq PDAREF(pda_irqstackptr),%rax
cmoveq %rax,%rsp cmoveq %rax,%rsp
pushq %rdi # save old stack pushq %rdi # save old stack
...@@ -389,9 +381,6 @@ ret_from_intr: ...@@ -389,9 +381,6 @@ ret_from_intr:
leaq ARGOFFSET(%rdi),%rsp leaq ARGOFFSET(%rdi),%rsp
exit_intr: exit_intr:
GET_THREAD_INFO(%rcx) GET_THREAD_INFO(%rcx)
#ifdef CONFIG_PREEMPT
decl threadinfo_preempt_count(%rcx)
#endif
testl $3,CS-ARGOFFSET(%rsp) testl $3,CS-ARGOFFSET(%rsp)
je retint_kernel je retint_kernel
...@@ -407,11 +396,24 @@ retint_check: ...@@ -407,11 +396,24 @@ retint_check:
andl %edi,%edx andl %edi,%edx
jnz retint_careful jnz retint_careful
retint_swapgs: retint_swapgs:
cli
swapgs swapgs
retint_restore_args: retint_restore_args:
cli
RESTORE_ARGS 0,8,0 RESTORE_ARGS 0,8,0
iret_label:
iretq iretq
.section __ex_table,"a"
.quad iret_label,bad_iret
.previous
.section .fixup,"ax"
/* force a signal here? this matches i386 behaviour */
bad_iret:
movq $-9999,%rdi /* better code? */
jmp do_exit
.previous
/* edi: workmask, edx: work */ /* edi: workmask, edx: work */
retint_careful: retint_careful:
bt $TIF_NEED_RESCHED,%edx bt $TIF_NEED_RESCHED,%edx
...@@ -448,9 +450,8 @@ retint_kernel: ...@@ -448,9 +450,8 @@ retint_kernel:
jnz retint_restore_args jnz retint_restore_args
bt $TIF_NEED_RESCHED,threadinfo_flags(%rcx) bt $TIF_NEED_RESCHED,threadinfo_flags(%rcx)
jnc retint_restore_args jnc retint_restore_args
movl PDAREF(pda___local_bh_count),%eax bt $9,EFLAGS-ARGOFFSET(%rsp) /* interrupts off? */
addl PDAREF(pda___local_irq_count),%eax jc retint_restore_args
jnz retint_restore_args
movl $PREEMPT_ACTIVE,threadinfo_preempt_count(%rcx) movl $PREEMPT_ACTIVE,threadinfo_preempt_count(%rcx)
sti sti
call schedule call schedule
...@@ -513,11 +514,6 @@ ENTRY(spurious_interrupt) ...@@ -513,11 +514,6 @@ ENTRY(spurious_interrupt)
*/ */
ALIGN ALIGN
error_entry: error_entry:
testl $3,24(%rsp)
je error_kernelspace
swapgs
error_kernelspace:
sti
/* rdi slot contains rax, oldrax contains error code */ /* rdi slot contains rax, oldrax contains error code */
pushq %rsi pushq %rsi
movq 8(%rsp),%rsi /* load rax */ movq 8(%rsp),%rsi /* load rax */
...@@ -530,17 +526,25 @@ error_kernelspace: ...@@ -530,17 +526,25 @@ error_kernelspace:
pushq %r11 pushq %r11
cld cld
SAVE_REST SAVE_REST
testl $3,CS(%rsp)
je error_kernelspace
error_swapgs:
xorl %ebx,%ebx
swapgs
error_sti:
sti
movq %rdi,RDI(%rsp) movq %rdi,RDI(%rsp)
movq %rsp,%rdi movq %rsp,%rdi
movq ORIG_RAX(%rsp),%rsi /* get error code */ movq ORIG_RAX(%rsp),%rsi /* get error code */
movq $-1,ORIG_RAX(%rsp) movq $-1,ORIG_RAX(%rsp)
call *%rax call *%rax
error_exit: error_exit:
movl %ebx,%eax
RESTORE_REST RESTORE_REST
cli cli
GET_THREAD_INFO(%rcx) GET_THREAD_INFO(%rcx)
testl $3,CS-ARGOFFSET(%rsp) testl %eax,%eax
je retint_kernel jne retint_kernel
movl threadinfo_flags(%rcx),%edx movl threadinfo_flags(%rcx),%edx
movl $_TIF_WORK_MASK,%edi movl $_TIF_WORK_MASK,%edi
andl %edi,%edx andl %edi,%edx
...@@ -549,6 +553,39 @@ error_exit: ...@@ -549,6 +553,39 @@ error_exit:
RESTORE_ARGS 0,8,0 RESTORE_ARGS 0,8,0
iretq iretq
error_kernelspace:
/* There are two places in the kernel that can potentially fault with
usergs. Handle them here. */
cmpq $iret_label,RIP(%rsp)
je error_swapgs
cmpq $gs_change,RIP(%rsp)
je error_swapgs
movl $1,%ebx
jmp error_sti
/* Reload gs selector with exception handling */
/* edi: new selector */
ENTRY(load_gs_index)
pushf
cli
swapgs
gs_change:
movl %edi,%gs
2: swapgs
popf
ret
.section __ex_table,"a"
.align 8
.quad gs_change,bad_gs
.previous
.section .fixup,"ax"
bad_gs:
xorl %eax,%eax
movl %eax,%gs
jmp 2b
.previous
/* /*
* Create a kernel thread. * Create a kernel thread.
* *
...@@ -564,7 +601,7 @@ ENTRY(kernel_thread) ...@@ -564,7 +601,7 @@ ENTRY(kernel_thread)
# rdi: flags, rsi: usp, rdx: will be &pt_regs # rdi: flags, rsi: usp, rdx: will be &pt_regs
movq %rdx,%rdi movq %rdx,%rdi
orq kernel_thread_flags(%rip), %rdi orq kernel_thread_flags(%rip),%rdi
movq $-1, %rsi movq $-1, %rsi
movq %rsp, %rdx movq %rsp, %rdx
...@@ -573,8 +610,9 @@ ENTRY(kernel_thread) ...@@ -573,8 +610,9 @@ ENTRY(kernel_thread)
xorl %edi,%edi xorl %edi,%edi
cmpq $-1000,%rax cmpq $-1000,%rax
cmovb %rdi,%rax jnb 1f
movq %rax,RAX(%rsp) movl tsk_pid(%rax),%eax
1: movq %rax,RAX(%rsp)
/* /*
* It isn't worth checking for a reschedule here, * It isn't worth checking for a reschedule here,
...@@ -648,18 +686,19 @@ ENTRY(simd_coprocessor_error) ...@@ -648,18 +686,19 @@ ENTRY(simd_coprocessor_error)
zeroentry do_simd_coprocessor_error zeroentry do_simd_coprocessor_error
ENTRY(device_not_available) ENTRY(device_not_available)
testl $3,8(%rsp) pushq $-1 #error code
SAVE_ALL
movl $1,%ebx
testl $3,CS(%rsp)
je 1f je 1f
xorl %ebx,%ebx
swapgs swapgs
1: pushq $-1 #error code 1: movq %cr0,%rax
SAVE_ALL
movq %cr0,%rax
leaq math_state_restore(%rip),%rcx leaq math_state_restore(%rip),%rcx
leaq math_emulate(%rip),%rbx leaq math_emulate(%rip),%rdx
testl $0x4,%eax testl $0x4,%eax
cmoveq %rcx,%rbx cmoveq %rcx,%rdx
preempt_stop call *%rdx
call *%rbx
jmp error_exit jmp error_exit
ENTRY(debug) ENTRY(debug)
......
...@@ -159,7 +159,7 @@ reach_long64: ...@@ -159,7 +159,7 @@ reach_long64:
* addresses where we're currently running on. We have to do that here * addresses where we're currently running on. We have to do that here
* because in 32bit we couldn't load a 64bit linear address. * because in 32bit we couldn't load a 64bit linear address.
*/ */
lgdt pGDT64 lgdt cpu_gdt_descr
/* /*
* Setup up a dummy PDA. this is just for some early bootup code * Setup up a dummy PDA. this is just for some early bootup code
...@@ -276,7 +276,7 @@ temp_boot_pmds: ...@@ -276,7 +276,7 @@ temp_boot_pmds:
.org 0x5000 .org 0x5000
ENTRY(level2_kernel_pgt) ENTRY(level2_kernel_pgt)
/* 40MB kernel mapping. The kernel code cannot be bigger than that. /* 40MB kernel mapping. The kernel code cannot be bigger than that.
When you change this change KERNEL_TEXT_SIZE in pgtable.h too. */ When you change this change KERNEL_TEXT_SIZE in page.h too. */
/* (2^48-(2*1024*1024*1024)-((2^39)*511)-((2^30)*510)) = 0 */ /* (2^48-(2*1024*1024*1024)-((2^39)*511)-((2^30)*510)) = 0 */
.quad 0x0000000000000183 .quad 0x0000000000000183
.quad 0x0000000000200183 .quad 0x0000000000200183
...@@ -320,16 +320,18 @@ ENTRY(level3_physmem_pgt) ...@@ -320,16 +320,18 @@ ENTRY(level3_physmem_pgt)
.org 0xb000 .org 0xb000
.data .data
.globl gdt
.word 0
.align 16 .align 16
.word 0 .globl cpu_gdt_descr
pGDT64: cpu_gdt_descr:
.word gdt_end-gdt_table .word gdt_end-cpu_gdt_table
gdt: gdt:
.quad gdt_table .quad cpu_gdt_table
#ifdef CONFIG_SMP
.rept NR_CPUS-1
.word 0
.quad 0
.endr
#endif
.align 64 /* cacheline aligned */ .align 64 /* cacheline aligned */
ENTRY(gdt_table32) ENTRY(gdt_table32)
...@@ -344,8 +346,12 @@ gdt32_end: ...@@ -344,8 +346,12 @@ gdt32_end:
*/ */
.align 64 /* cacheline aligned, keep this synchronized with asm/desc.h */ .align 64 /* cacheline aligned, keep this synchronized with asm/desc.h */
ENTRY(gdt_table)
.quad 0x0000000000000000 /* This one is magic */ /* The TLS descriptors are currently at a different place compared to i386.
Hopefully nobody expects them at a fixed place (Wine?) */
ENTRY(cpu_gdt_table)
.quad 0x0000000000000000 /* NULL descriptor */
.quad 0x0000000000000000 /* unused */ .quad 0x0000000000000000 /* unused */
.quad 0x00af9a000000ffff /* __KERNEL_CS */ .quad 0x00af9a000000ffff /* __KERNEL_CS */
.quad 0x00cf92000000ffff /* __KERNEL_DS */ .quad 0x00cf92000000ffff /* __KERNEL_DS */
...@@ -358,15 +364,20 @@ ENTRY(gdt_table) ...@@ -358,15 +364,20 @@ ENTRY(gdt_table)
.word 0x00CF # granularity = 4096, 386 .word 0x00CF # granularity = 4096, 386
# (+5th nibble of limit) # (+5th nibble of limit)
/* __KERNEL32_CS */ /* __KERNEL32_CS */
.quad 0,0 /* TSS */
.globl tss_start .quad 0 /* LDT */
tss_start: .quad 0,0,0 /* three TLS descriptors */
.rept NR_CPUS .quad 0,0 /* pad to cache line boundary */
.quad 0,0,0,0,0,0,0,0 /* TSS/LDT/per cpu entries. filled in later */
.endr
gdt_end: gdt_end:
.globl gdt_end .globl gdt_end
/* GDTs of other CPUs */
#ifdef CONFIG_SMP
.rept NR_CPUS-1
.quad 0,0,0,0,0,0,0,0,0,0,0
.endr
#endif
.align 64 .align 64
ENTRY(idt_table) ENTRY(idt_table)
.rept 256 .rept 256
......
...@@ -71,6 +71,7 @@ static void __init setup_boot_cpu_data(void) ...@@ -71,6 +71,7 @@ static void __init setup_boot_cpu_data(void)
} }
extern void start_kernel(void), pda_init(int), setup_early_printk(char *); extern void start_kernel(void), pda_init(int), setup_early_printk(char *);
extern int disable_apic;
void __init x86_64_start_kernel(char * real_mode_data) void __init x86_64_start_kernel(char * real_mode_data)
{ {
...@@ -82,6 +83,10 @@ void __init x86_64_start_kernel(char * real_mode_data) ...@@ -82,6 +83,10 @@ void __init x86_64_start_kernel(char * real_mode_data)
s = strstr(saved_command_line, "earlyprintk="); s = strstr(saved_command_line, "earlyprintk=");
if (s != NULL) if (s != NULL)
setup_early_printk(s+12); setup_early_printk(s+12);
#ifdef CONFIG_X86_IO_APIC
if (strstr(saved_command_line, "disableapic"))
disable_apic = 1;
#endif
setup_boot_cpu_data(); setup_boot_cpu_data();
start_kernel(); start_kernel();
} }
#include <linux/linkage.h> #include <linux/linkage.h>
#include <linux/config.h> #include <linux/config.h>
#include <linux/ptrace.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/signal.h> #include <linux/signal.h>
#include <linux/sched.h> #include <linux/sched.h>
...@@ -120,7 +119,8 @@ static void end_8259A_irq (unsigned int irq) ...@@ -120,7 +119,8 @@ static void end_8259A_irq (unsigned int irq)
BUG(); BUG();
} }
if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS))) if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)) &&
irq_desc[irq].action)
enable_8259A_irq(irq); enable_8259A_irq(irq);
} }
...@@ -320,18 +320,6 @@ void mask_and_ack_8259A(unsigned int irq) ...@@ -320,18 +320,6 @@ void mask_and_ack_8259A(unsigned int irq)
} }
} }
static struct device device_i8259A = {
name: "i8259A",
bus_id: "0020",
};
static int __init init_8259A_devicefs(void)
{
return register_sys_device(&device_i8259A);
}
__initcall(init_8259A_devicefs);
void __init init_8259A(int auto_eoi) void __init init_8259A(int auto_eoi)
{ {
unsigned long flags; unsigned long flags;
......
...@@ -10,7 +10,7 @@ ...@@ -10,7 +10,7 @@
static struct fs_struct init_fs = INIT_FS; static struct fs_struct init_fs = INIT_FS;
static struct files_struct init_files = INIT_FILES; static struct files_struct init_files = INIT_FILES;
static struct signal_struct init_signals = INIT_SIGNALS; static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
struct mm_struct init_mm = INIT_MM(init_mm); struct mm_struct init_mm = INIT_MM(init_mm);
/* /*
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
* thanks to Eric Gilmore * thanks to Eric Gilmore
* and Rolf G. Tews * and Rolf G. Tews
* for testing these extensively * for testing these extensively
* Paul Diefenbaugh : Added full ACPI support
*/ */
#include <linux/mm.h> #include <linux/mm.h>
...@@ -28,6 +29,7 @@ ...@@ -28,6 +29,7 @@
#include <linux/config.h> #include <linux/config.h>
#include <linux/smp_lock.h> #include <linux/smp_lock.h>
#include <linux/mc146818rtc.h> #include <linux/mc146818rtc.h>
#include <linux/acpi.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/smp.h> #include <asm/smp.h>
...@@ -746,7 +748,8 @@ void __init print_IO_APIC(void) ...@@ -746,7 +748,8 @@ void __init print_IO_APIC(void)
(reg_01.entries != 0x1f) && /* dual Xeon boards */ (reg_01.entries != 0x1f) && /* dual Xeon boards */
(reg_01.entries != 0x22) && /* bigger Xeon boards */ (reg_01.entries != 0x22) && /* bigger Xeon boards */
(reg_01.entries != 0x2E) && (reg_01.entries != 0x2E) &&
(reg_01.entries != 0x3F) (reg_01.entries != 0x3F) &&
(reg_01.entries != 0x03)
) )
UNEXPECTED_IO_APIC(); UNEXPECTED_IO_APIC();
...@@ -1014,6 +1017,8 @@ static void __init setup_ioapic_ids_from_mpc (void) ...@@ -1014,6 +1017,8 @@ static void __init setup_ioapic_ids_from_mpc (void)
unsigned char old_id; unsigned char old_id;
unsigned long flags; unsigned long flags;
if (acpi_ioapic) return; /* ACPI does that already */
/* /*
* Set the IOAPIC ID to the value stored in the MPC table. * Set the IOAPIC ID to the value stored in the MPC table.
*/ */
...@@ -1104,7 +1109,7 @@ static int __init timer_irq_works(void) ...@@ -1104,7 +1109,7 @@ static int __init timer_irq_works(void)
{ {
unsigned int t1 = jiffies; unsigned int t1 = jiffies;
sti(); local_irq_enable();
/* Let ten ticks pass... */ /* Let ten ticks pass... */
mdelay((10 * 1000) / HZ); mdelay((10 * 1000) / HZ);
...@@ -1117,7 +1122,6 @@ static int __init timer_irq_works(void) ...@@ -1117,7 +1122,6 @@ static int __init timer_irq_works(void)
*/ */
if (jiffies - t1 > 4) if (jiffies - t1 > 4)
return 1; return 1;
return 0; return 0;
} }
...@@ -1376,7 +1380,7 @@ static struct hw_interrupt_type lapic_irq_type = { ...@@ -1376,7 +1380,7 @@ static struct hw_interrupt_type lapic_irq_type = {
end_lapic_irq end_lapic_irq
}; };
static void enable_NMI_through_LVT0 (void * dummy) void enable_NMI_through_LVT0 (void * dummy)
{ {
unsigned int v, ver; unsigned int v, ver;
...@@ -1401,7 +1405,6 @@ static void setup_nmi (void) ...@@ -1401,7 +1405,6 @@ static void setup_nmi (void)
*/ */
printk(KERN_INFO "activating NMI Watchdog ..."); printk(KERN_INFO "activating NMI Watchdog ...");
smp_call_function(enable_NMI_through_LVT0, NULL, 1, 1);
enable_NMI_through_LVT0(NULL); enable_NMI_through_LVT0(NULL);
printk(" done.\n"); printk(" done.\n");
...@@ -1477,7 +1480,6 @@ static inline void unlock_ExtINT_logic(void) ...@@ -1477,7 +1480,6 @@ static inline void unlock_ExtINT_logic(void)
*/ */
static inline void check_timer(void) static inline void check_timer(void)
{ {
extern int timer_ack;
int pin1, pin2; int pin1, pin2;
int vector; int vector;
...@@ -1497,7 +1499,6 @@ static inline void check_timer(void) ...@@ -1497,7 +1499,6 @@ static inline void check_timer(void)
*/ */
apic_write_around(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_EXTINT); apic_write_around(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_EXTINT);
init_8259A(1); init_8259A(1);
timer_ack = 1;
enable_8259A_irq(0); enable_8259A_irq(0);
pin1 = find_isa_irq_pin(0, mp_INT); pin1 = find_isa_irq_pin(0, mp_INT);
...@@ -1605,8 +1606,7 @@ void __init setup_IO_APIC(void) ...@@ -1605,8 +1606,7 @@ void __init setup_IO_APIC(void)
printk("ENABLING IO-APIC IRQs\n"); printk("ENABLING IO-APIC IRQs\n");
/* /*
* Set up the IO-APIC IRQ routing table by parsing the MP-BIOS * Set up the IO-APIC IRQ routing table.
* mptable:
*/ */
setup_ioapic_ids_from_mpc(); setup_ioapic_ids_from_mpc();
sync_Arb_IDs(); sync_Arb_IDs();
...@@ -1615,3 +1615,175 @@ void __init setup_IO_APIC(void) ...@@ -1615,3 +1615,175 @@ void __init setup_IO_APIC(void)
check_timer(); check_timer();
print_IO_APIC(); print_IO_APIC();
} }
/* Ensure the ACPI SCI interrupt level is active low, edge-triggered */
void __init mp_config_ioapic_for_sci(int irq)
{
#if 0 /* fixme */
int ioapic;
int ioapic_pin;
ioapic = mp_find_ioapic(irq);
ioapic_pin = irq - mp_ioapic_routing[ioapic].irq_start;
io_apic_set_pci_routing(ioapic, ioapic_pin, irq);
#endif
}
/* --------------------------------------------------------------------------
ACPI-based IOAPIC Configuration
-------------------------------------------------------------------------- */
#ifdef CONFIG_ACPI_BOOT
#define IO_APIC_MAX_ID 15
int __init io_apic_get_unique_id (int ioapic, int apic_id)
{
struct IO_APIC_reg_00 reg_00;
static unsigned long apic_id_map = 0;
unsigned long flags;
int i = 0;
/*
* The P4 platform supports up to 256 APIC IDs on two separate APIC
* buses (one for LAPICs, one for IOAPICs), where predecessors only
* support up to 16 on one shared APIC bus.
*
* TBD: Expand LAPIC/IOAPIC support on P4-class systems to take full
* advantage of new APIC bus architecture.
*/
if (!apic_id_map)
apic_id_map = phys_cpu_present_map;
spin_lock_irqsave(&ioapic_lock, flags);
*(int *)&reg_00 = io_apic_read(ioapic, 0);
spin_unlock_irqrestore(&ioapic_lock, flags);
if (apic_id >= IO_APIC_MAX_ID) {
printk(KERN_WARNING "IOAPIC[%d]: Invalid apic_id %d, trying "
"%d\n", ioapic, apic_id, reg_00.ID);
apic_id = reg_00.ID;
}
/*
* Every APIC in a system must have a unique ID or we get lots of nice
* 'stuck on smp_invalidate_needed IPI wait' messages.
*/
if (apic_id_map & (1 << apic_id)) {
for (i = 0; i < IO_APIC_MAX_ID; i++) {
if (!(apic_id_map & (1 << i)))
break;
}
if (i == IO_APIC_MAX_ID)
panic("Max apic_id exceeded!\n");
printk(KERN_WARNING "IOAPIC[%d]: apic_id %d already used, "
"trying %d\n", ioapic, apic_id, i);
apic_id = i;
}
apic_id_map |= (1 << apic_id);
if (reg_00.ID != apic_id) {
reg_00.ID = apic_id;
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic, 0, *(int *)&reg_00);
*(int *)&reg_00 = io_apic_read(ioapic, 0);
spin_unlock_irqrestore(&ioapic_lock, flags);
/* Sanity check */
if (reg_00.ID != apic_id)
panic("IOAPIC[%d]: Unable to change apic_id!\n", ioapic);
}
printk(KERN_INFO "IOAPIC[%d]: Assigned apic_id %d\n", ioapic, apic_id);
return apic_id;
}
int __init io_apic_get_version (int ioapic)
{
struct IO_APIC_reg_01 reg_01;
unsigned long flags;
spin_lock_irqsave(&ioapic_lock, flags);
*(int *)&reg_01 = io_apic_read(ioapic, 1);
spin_unlock_irqrestore(&ioapic_lock, flags);
return reg_01.version;
}
int __init io_apic_get_redir_entries (int ioapic)
{
struct IO_APIC_reg_01 reg_01;
unsigned long flags;
spin_lock_irqsave(&ioapic_lock, flags);
*(int *)&reg_01 = io_apic_read(ioapic, 1);
spin_unlock_irqrestore(&ioapic_lock, flags);
return reg_01.entries;
}
int io_apic_set_pci_routing (int ioapic, int pin, int irq)
{
struct IO_APIC_route_entry entry;
unsigned long flags;
if (!IO_APIC_IRQ(irq)) {
printk(KERN_ERR "IOAPIC[%d]: Invalid reference to IRQ 0\n",
ioapic);
return -EINVAL;
}
/*
* Generate a PCI IRQ routing entry and program the IOAPIC accordingly.
* Note that we mask (disable) IRQs now -- these get enabled when the
* corresponding device driver registers for this IRQ.
*/
memset(&entry,0,sizeof(entry));
entry.delivery_mode = dest_LowestPrio;
entry.dest_mode = INT_DELIVERY_MODE;
entry.dest.logical.logical_dest = TARGET_CPUS;
entry.mask = 1; /* Disabled (masked) */
entry.trigger = 1; /* Level sensitive */
entry.polarity = 1; /* Low active */
add_pin_to_irq(irq, ioapic, pin);
entry.vector = assign_irq_vector(irq);
printk(KERN_DEBUG "IOAPIC[%d]: Set PCI routing entry (%d-%d -> 0x%x -> "
"IRQ %d)\n", ioapic,
mp_ioapics[ioapic].mpc_apicid, pin, entry.vector, irq);
irq_desc[irq].handler = &ioapic_level_irq_type;
set_intr_gate(entry.vector, interrupt[irq]);
if (!ioapic && (irq < 16))
disable_8259A_irq(irq);
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(ioapic, 0x10+2*pin, *(((int *)&entry)+0));
spin_unlock_irqrestore(&ioapic_lock, flags);
return entry.vector;
}
#endif /*CONFIG_ACPI_BOOT*/
...@@ -56,17 +56,21 @@ static void set_bitmap(unsigned long *bitmap, short base, short extent, int new_ ...@@ -56,17 +56,21 @@ static void set_bitmap(unsigned long *bitmap, short base, short extent, int new_
asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int turn_on) asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int turn_on)
{ {
struct thread_struct * t = &current->thread; struct thread_struct * t = &current->thread;
struct tss_struct * tss = init_tss + smp_processor_id(); struct tss_struct * tss;
int ret = 0;
if ((from + num <= from) || (from + num > IO_BITMAP_SIZE*32)) if ((from + num <= from) || (from + num > IO_BITMAP_SIZE*32))
return -EINVAL; return -EINVAL;
if (turn_on && !capable(CAP_SYS_RAWIO)) if (turn_on && !capable(CAP_SYS_RAWIO))
return -EPERM; return -EPERM;
tss = init_tss + get_cpu();
if (!t->io_bitmap_ptr) { if (!t->io_bitmap_ptr) {
t->io_bitmap_ptr = kmalloc((IO_BITMAP_SIZE+1)*4, GFP_KERNEL); t->io_bitmap_ptr = kmalloc((IO_BITMAP_SIZE+1)*4, GFP_KERNEL);
if (!t->io_bitmap_ptr) if (!t->io_bitmap_ptr) {
return -ENOMEM; ret = -ENOMEM;
goto out;
}
memset(t->io_bitmap_ptr,0xff,(IO_BITMAP_SIZE+1)*4); memset(t->io_bitmap_ptr,0xff,(IO_BITMAP_SIZE+1)*4);
tss->io_map_base = IO_BITMAP_OFFSET; tss->io_map_base = IO_BITMAP_OFFSET;
} }
...@@ -77,7 +81,9 @@ asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int turn_on) ...@@ -77,7 +81,9 @@ asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int turn_on)
set_bitmap((unsigned long *) t->io_bitmap_ptr, from, num, !turn_on); set_bitmap((unsigned long *) t->io_bitmap_ptr, from, num, !turn_on);
set_bitmap((unsigned long *) tss->io_bitmap, from, num, !turn_on); set_bitmap((unsigned long *) tss->io_bitmap, from, num, !turn_on);
return 0; out:
put_cpu();
return ret;
} }
/* /*
......
...@@ -22,32 +22,13 @@ ...@@ -22,32 +22,13 @@
#include <asm/ldt.h> #include <asm/ldt.h>
#include <asm/desc.h> #include <asm/desc.h>
void load_gs_index(unsigned gs) extern void load_gs_index(unsigned gs);
{
int access;
struct task_struct *me = current;
if (me->mm)
read_lock(&me->mm->context.ldtlock);
asm volatile("pushf\n\t"
"cli\n\t"
"swapgs\n\t"
"lar %1,%0\n\t"
"jnz 1f\n\t"
"movl %1,%%eax\n\t"
"movl %%eax,%%gs\n\t"
"jmp 2f\n\t"
"1: movl %2,%%gs\n\t"
"2: swapgs\n\t"
"popf" : "=g" (access) : "g" (gs), "r" (0) : "rax");
if (me->mm)
read_unlock(&me->mm->context.ldtlock);
}
#ifdef CONFIG_SMP /* avoids "defined but not used" warning */ #ifdef CONFIG_SMP /* avoids "defined but not used" warning */
static void flush_ldt(void *mm) static void flush_ldt(void *null)
{ {
if (current->mm) if (current->active_mm)
load_LDT(&current->mm->context); load_LDT(&current->active_mm->context);
} }
#endif #endif
...@@ -75,15 +56,18 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload) ...@@ -75,15 +56,18 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
memset(newldt+oldsize*LDT_ENTRY_SIZE, 0, (mincount-oldsize)*LDT_ENTRY_SIZE); memset(newldt+oldsize*LDT_ENTRY_SIZE, 0, (mincount-oldsize)*LDT_ENTRY_SIZE);
wmb(); wmb();
pc->ldt = newldt; pc->ldt = newldt;
wmb();
pc->size = mincount; pc->size = mincount;
wmb();
if (reload) { if (reload) {
load_LDT(pc); load_LDT(pc);
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
preempt_disable();
if (current->mm->cpu_vm_mask != (1<<smp_processor_id())) if (current->mm->cpu_vm_mask != (1<<smp_processor_id()))
smp_call_function(flush_ldt, 0, 1, 1); smp_call_function(flush_ldt, 0, 1, 1);
preempt_enable();
#endif #endif
} }
wmb();
if (oldsize) { if (oldsize) {
if (oldsize*LDT_ENTRY_SIZE > PAGE_SIZE) if (oldsize*LDT_ENTRY_SIZE > PAGE_SIZE)
vfree(oldldt); vfree(oldldt);
...@@ -96,11 +80,8 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload) ...@@ -96,11 +80,8 @@ static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
static inline int copy_ldt(mm_context_t *new, mm_context_t *old) static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
{ {
int err = alloc_ldt(new, old->size, 0); int err = alloc_ldt(new, old->size, 0);
if (err < 0) { if (err < 0)
printk(KERN_WARNING "ldt allocation failed\n");
new->size = 0;
return err; return err;
}
memcpy(new->ldt, old->ldt, old->size*LDT_ENTRY_SIZE); memcpy(new->ldt, old->ldt, old->size*LDT_ENTRY_SIZE);
return 0; return 0;
} }
...@@ -187,7 +168,7 @@ static int write_ldt(void * ptr, unsigned long bytecount, int oldmode) ...@@ -187,7 +168,7 @@ static int write_ldt(void * ptr, unsigned long bytecount, int oldmode)
struct mm_struct * mm = me->mm; struct mm_struct * mm = me->mm;
__u32 entry_1, entry_2, *lp; __u32 entry_1, entry_2, *lp;
int error; int error;
struct modify_ldt_ldt_s ldt_info; struct user_desc ldt_info;
error = -EINVAL; error = -EINVAL;
...@@ -223,34 +204,17 @@ static int write_ldt(void * ptr, unsigned long bytecount, int oldmode) ...@@ -223,34 +204,17 @@ static int write_ldt(void * ptr, unsigned long bytecount, int oldmode)
/* Allow LDTs to be cleared by the user. */ /* Allow LDTs to be cleared by the user. */
if (ldt_info.base_addr == 0 && ldt_info.limit == 0) { if (ldt_info.base_addr == 0 && ldt_info.limit == 0) {
if (oldmode || if (oldmode || LDT_empty(&ldt_info)) {
(ldt_info.contents == 0 &&
ldt_info.read_exec_only == 1 &&
ldt_info.seg_32bit == 0 &&
ldt_info.limit_in_pages == 0 &&
ldt_info.seg_not_present == 1 &&
ldt_info.useable == 0 &&
ldt_info.lm == 0)) {
entry_1 = 0; entry_1 = 0;
entry_2 = 0; entry_2 = 0;
goto install; goto install;
} }
} }
entry_1 = ((ldt_info.base_addr & 0x0000ffff) << 16) | entry_1 = LDT_entry_a(&ldt_info);
(ldt_info.limit & 0x0ffff); entry_2 = LDT_entry_b(&ldt_info);
entry_2 = (ldt_info.base_addr & 0xff000000) | if (oldmode)
((ldt_info.base_addr & 0x00ff0000) >> 16) | entry_2 &= ~(1 << 20);
(ldt_info.limit & 0xf0000) |
((ldt_info.read_exec_only ^ 1) << 9) |
(ldt_info.contents << 10) |
((ldt_info.seg_not_present ^ 1) << 15) |
(ldt_info.seg_32bit << 22) |
(ldt_info.limit_in_pages << 23) |
(ldt_info.lm << 21) |
0x7000;
if (!oldmode)
entry_2 |= (ldt_info.useable << 20);
/* Install the new entry ... */ /* Install the new entry ... */
install: install:
......
...@@ -247,11 +247,11 @@ static int msr_open(struct inode *inode, struct file *file) ...@@ -247,11 +247,11 @@ static int msr_open(struct inode *inode, struct file *file)
* File operations we support * File operations we support
*/ */
static struct file_operations msr_fops = { static struct file_operations msr_fops = {
owner: THIS_MODULE, .owner = THIS_MODULE,
llseek: msr_seek, .llseek = msr_seek,
read: msr_read, .read = msr_read,
write: msr_write, .write = msr_write,
open: msr_open, .open = msr_open,
}; };
int __init msr_init(void) int __init msr_init(void)
......
@@ -24,7 +24,7 @@
 #include <asm/mtrr.h>
 #include <asm/mpspec.h>
-unsigned int nmi_watchdog = NMI_NONE;
+unsigned int nmi_watchdog = NMI_LOCAL_APIC;
 static unsigned int nmi_hz = HZ;
 unsigned int nmi_perfctr_msr;	/* the MSR to reset in NMI handler */
 extern void show_registers(struct pt_regs *regs);
@@ -43,22 +43,38 @@ extern void show_registers(struct pt_regs *regs);
 #define P6_EVENT_CPU_CLOCKS_NOT_HALTED	0x79
 #define P6_NMI_EVENT	P6_EVENT_CPU_CLOCKS_NOT_HALTED
+/* Why is there no CPUID flag for this? */
+static __init int cpu_has_lapic(void)
+{
+	switch (boot_cpu_data.x86_vendor) {
+	case X86_VENDOR_INTEL:
+	case X86_VENDOR_AMD:
+		return boot_cpu_data.x86 >= 6;
+	/* .... add more cpus here or find a different way to figure this out. */
+	default:
+		return 0;
+	}
+}
 int __init check_nmi_watchdog (void)
 {
 	int counts[NR_CPUS];
-	int j, cpu;
+	int cpu;
+	if (nmi_watchdog == NMI_LOCAL_APIC && !cpu_has_lapic()) {
+		nmi_watchdog = NMI_NONE;
+		return -1;
+	}
 	printk(KERN_INFO "testing NMI watchdog ... ");
-	for (j = 0; j < NR_CPUS; ++j) {
-		cpu = cpu_logical_map(j);
+	for_each_cpu(cpu) {
 		counts[cpu] = cpu_pda[cpu].__nmi_count;
 	}
-	sti();
+	local_irq_enable();
 	mdelay((10*1000)/nmi_hz); // wait 10 ticks
-	for (j = 0; j < smp_num_cpus; j++) {
-		cpu = cpu_logical_map(j);
+	for_each_cpu(cpu) {
 		if (cpu_pda[cpu].__nmi_count - counts[cpu] <= 5) {
 			printk("CPU#%d: NMI appears to be stuck (%d)!\n",
 				cpu,
@@ -84,26 +100,6 @@ static int __init setup_nmi_watchdog(char *str)
 	if (nmi >= NMI_INVALID)
 		return 0;
-	if (nmi == NMI_NONE)
-		nmi_watchdog = nmi;
-	/*
-	 * If any other x86 CPU has a local APIC, then
-	 * please test the NMI stuff there and send me the
-	 * missing bits. Right now Intel P6 and AMD K7 only.
-	 */
-	if ((nmi == NMI_LOCAL_APIC) &&
-		(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-		(boot_cpu_data.x86 == 6))
-		nmi_watchdog = nmi;
-	if ((nmi == NMI_LOCAL_APIC) &&
-		(boot_cpu_data.x86_vendor == X86_VENDOR_AMD) &&
-		(boot_cpu_data.x86 == 6))
-		nmi_watchdog = nmi;
-	/*
-	 * We can enable the IO-APIC watchdog
-	 * unconditionally.
-	 */
-	if (nmi == NMI_IO_APIC)
 	nmi_watchdog = nmi;
 	return 1;
 }
@@ -167,6 +163,8 @@ static void __pminit setup_k7_watchdog(void)
 	int i;
 	unsigned int evntsel;
+	/* XXX should check these in EFER */
 	nmi_perfctr_msr = MSR_K7_PERFCTR0;
 	for(i = 0; i < 4; ++i) {
@@ -180,7 +178,7 @@ static void __pminit setup_k7_watchdog(void)
 		| K7_NMI_EVENT;
 	wrmsr(MSR_K7_EVNTSEL0, evntsel, 0);
-	Dprintk("setting K7_PERFCTR0 to %08lx\n", -(cpu_khz/nmi_hz*1000));
+	printk(KERN_INFO "watchdog: setting K7_PERFCTR0 to %08lx\n", -(cpu_khz/nmi_hz*1000));
 	wrmsr(MSR_K7_PERFCTR0, -(cpu_khz/nmi_hz*1000), -1);
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 	evntsel |= K7_EVNTSEL_ENABLE;
@@ -191,7 +189,11 @@ void __pminit setup_apic_nmi_watchdog (void)
 {
 	switch (boot_cpu_data.x86_vendor) {
 	case X86_VENDOR_AMD:
-		if (boot_cpu_data.x86 != 6)
+		if (boot_cpu_data.x86 < 6)
+			return;
+		/* Simics masquerades as AMD, but does not support
+		   performance counters */
+		if (strstr(boot_cpu_data.x86_model_id, "Screwdriver"))
 			return;
 		setup_k7_watchdog();
 		break;
@@ -230,7 +232,7 @@ void touch_nmi_watchdog (void)
 	 * Just reset the alert counters, (other CPUs might be
 	 * spinning on locks we hold):
 	 */
-	for (i = 0; i < smp_num_cpus; i++)
+	for (i = 0; i < NR_CPUS; i++)
 		alert_counter[i] = 0;
 }
@@ -243,8 +245,7 @@ void nmi_watchdog_tick (struct pt_regs * regs)
 	 * smp_processor_id().
 	 */
 	int sum, cpu = smp_processor_id();
-	sum = apic_timer_irqs[cpu];
+	sum = read_pda(apic_timer_irqs);
 	if (last_irq_sums[cpu] == sum) {
 		/*
...
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/string.h>
/*
* Dummy IO MMU functions
*/
extern unsigned long end_pfn;
void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
dma_addr_t *dma_handle)
{
void *ret;
int gfp = GFP_ATOMIC;
if (hwdev == NULL ||
end_pfn > (hwdev->dma_mask>>PAGE_SHIFT) || /* XXX */
(u32)hwdev->dma_mask < 0xffffffff)
gfp |= GFP_DMA;
ret = (void *)__get_free_pages(gfp, get_order(size));
if (ret != NULL) {
memset(ret, 0, size);
*dma_handle = virt_to_bus(ret);
}
return ret;
}
void pci_free_consistent(struct pci_dev *hwdev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
free_pages((unsigned long)vaddr, get_order(size));
}
static void __init check_ram(void)
{
if (end_pfn >= 0xffffffff>>PAGE_SHIFT) {
printk(KERN_ERR "WARNING more than 4GB of memory but no IOMMU.\n"
KERN_ERR "WARNING 32bit PCI may malfunction.\n");
/* Could play with highmem_start_page here to trick some subsystems
into bounce buffers. Unfortunately that would require setting
CONFIG_HIGHMEM too.
*/
}
}
@@ -33,7 +33,7 @@
 ENTRY(trampoline_data)
 r_base = .
-
+	wbinvd
 	mov %cs, %ax	# Code and data in the same place
 	mov %ax, %ds
...
@@ -34,6 +34,7 @@
 	thunk rwsem_down_read_failed_thunk,rwsem_down_read_failed
 	thunk rwsem_down_write_failed_thunk,rwsem_down_write_failed
 	thunk rwsem_wake_thunk,rwsem_wake
+	thunk rwsem_downgrade_thunk,rwsem_downgrade_wake
 #endif
 	thunk do_softirq_thunk,do_softirq
...