Commit 6293d56c authored by Linus Torvalds

v2.4.14.6 -> v2.4.14.7

  - Jeff Garzik: network driver updates
  - Christoph Hellwig: UFS filesystem byteorder cleanups
  - me: modified Andrea VM page allocator tuning
parent 5fc4bcde
......@@ -35,11 +35,18 @@ Default video modes
At the moment, there are two kernel command line arguments supported:
mode:640x480
mode:800x600
or
mode:1024x768
Full support for startup video modes (modedb) will be integrated soon.
Version 1.9.9.1
---------------
* Fix memory detection for 512kB case
* 800x600 mode
* Fixed timings
* Hint for AXP: Use -accel false -vyres -1 when changing resolution
Version 1.9.4.4
......
D-Link DL2000-based Gigabit Ethernet Adapter Installation
for Linux
July 5, 2001
Nov 12, 2001
Contents
========
......@@ -14,20 +14,22 @@ Contents
- Troubleshooting
Compatiblity List
Compatibility List
=================
Adapter Support:
D-Link DGE-550T Gigabit Ethernet Adapter.
D-Link DGE-550SX Gigabit Ethernet Adapter.
D-Link DL2000-based Gigabit Ethernet Adapter.
The driver support Linux kernal 2.4.x later. We had tested it
The driver supports Linux kernel 2.4.7 and later. We have tested it
in the environments below.
. Red Hat v6.2 (update to kernel 2.4.4)
. Red Hat v7.0 (update to kernel 2.4.4)
. Red Hat v7.1 (kernel 2.4.2-2)
. Red Hat v6.2 (update kernel to 2.4.7)
. Red Hat v7.0 (update kernel to 2.4.7)
. Red Hat v7.1 (kernel 2.4.7)
. Red Hat v7.2 (kernel 2.4.7-10)
Quick Install
......@@ -35,16 +37,16 @@ Quick Install
Install the Linux driver with the following commands:
1. make all
2. insmod dl2x.o
2. insmod dl2k.o
3. ifconfig eth0 up 10.xxx.xxx.xxx netmask 255.0.0.0
^^^^^^^^^^^^^^^\ ^^^^^^^^\
IP NETMASK
Now eth0 bring up, you can test it by "ping" or get more information by
"ifconfig". If test ok, then continue next step.
Now eth0 should be active; you can test it with "ping" or get more information
with "ifconfig". If the test is ok, continue to the next step.
4. cp dl2x.o /lib/modules/`uname -r`/kernel/drivers/net
4. cp dl2k.o /lib/modules/`uname -r`/kernel/drivers/net
5. Add the following lines to /etc/modules.conf:
alias eth0 dl2x
alias eth0 dl2k
6. Run "netconfig" or "netconf" to create configuration script ifcfg-eth0
located at /etc/sysconfig/network-scripts or create it manually.
[see - Configuration Script Sample]
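For step 6, a minimal ifcfg-eth0 written by hand might look like the sketch below. This is illustrative only: the address and netmask repeat the earlier ifconfig example's 10.x.x.x/255.0.0.0 pattern, and the exact keys honored depend on the distribution's network scripts.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative sketch only
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.1        # example address, as in the ifconfig step above
NETMASK=255.0.0.0
```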
......@@ -61,10 +63,10 @@ source instead of relying on a precompiled version. This approach provides
better reliability since a precompiled driver might depend on libraries or
kernel features that are not present in a given Linux installation.
The 3 files necessary to build Linux device driver are dl2x.c, dl2x.h and
The three files necessary to build the Linux device driver are dl2k.c, dl2k.h and
Makefile. To compile, the Linux installation must include the gcc compiler,
the kernel source, and the kernel headers. The Linux driver supports Linux
Kernels 2.4.x. Copy the files to a directory and enter the following command
Kernels 2.4.7 and later. Copy the files to a directory and enter the following command
to compile and link the driver:
CD-ROM drive
......@@ -73,21 +75,21 @@ CD-ROM drive
[root@XXX /] mkdir cdrom
[root@XXX /] mount -r -t iso9660 -o conv=auto /dev/cdrom /cdrom
[root@XXX /] cd root
[root@XXX /root] mkdir dl2x
[root@XXX /root] cd dl2x
[root@XXX dl2x] cp /cdrom/linux/dl2x.tgz /root/dl2x
[root@XXX dl2x] tar xfvz dl2x.tgz
[root@XXX dl2x] make all
[root@XXX /root] mkdir dl2k
[root@XXX /root] cd dl2k
[root@XXX dl2k] cp /cdrom/linux/dl2k.tgz /root/dl2k
[root@XXX dl2k] tar xfvz dl2k.tgz
[root@XXX dl2k] make all
Floppy disc drive
-----------------
[root@XXX /] cd root
[root@XXX /root] mkdir dl2x
[root@XXX /root] cd dl2x
[root@XXX dl2x] mcopy a:/linux/dl2x.tgz /root/dl2x
[root@XXX dl2x] tar xfvz dl2x.tgz
[root@XXX dl2x] make all
[root@XXX /root] mkdir dl2k
[root@XXX /root] cd dl2k
[root@XXX dl2k] mcopy a:/linux/dl2k.tgz /root/dl2k
[root@XXX dl2k] tar xfvz dl2k.tgz
[root@XXX dl2k] make all
Installing the Driver
=====================
......@@ -98,17 +100,16 @@ Installing the Driver
to a protocol stack in order to establish network connectivity. To load a
module enter the command:
insmod dl2x.o
insmod dl2k.o
or
insmod dl2x.o <optional parameter> ; add parameter
insmod dl2k.o <optional parameters> ; load with parameters
===============================================================
example: insmod dl2x.o media=100mbps_hd
or insmod dl2x.o media=3
or insmod dl2x.o media=3 2 ; for 2 cards
example: insmod dl2k.o media=100mbps_hd
or insmod dl2k.o media=3
or insmod dl2k.o media=3,2 ; for 2 cards
===============================================================
Please refer to the list of the command line parameters supported by
......@@ -133,7 +134,7 @@ Installing the Driver
ifdown eth0
ifconfig eth0 down
rmmod dl2x.o
rmmod dl2k.o
The following are the commands to list the currently loaded modules and
to see the current network configuration.
......@@ -151,13 +152,13 @@ Installing the Driver
Red Hat v6.x/v7.x
-----------------
1. Copy dl2x.o to the network modules directory, typically
1. Copy dl2k.o to the network modules directory, typically
/lib/modules/2.x.x-xx/net or /lib/modules/2.x.x/kernel/drivers/net.
2. Locate the boot module configuration file, most commonly modules.conf
or conf.modules in the /etc directory. Add the following lines:
alias ethx dl2x
options dl2x <optional parameters>
alias ethx dl2k
options dl2k <optional parameters>
where ethx will be eth0 if the NIC is the only ethernet adapter, eth1 if
one other ethernet adapter is installed, etc. Refer to the table in the
......@@ -187,12 +188,19 @@ media=xxxxxxxxx - Specifies the media type the NIC operates at.
10mbps_fd 10Mbps full duplex.
100mbps_hd 100Mbps half duplex.
100mbps_fd 100Mbps full duplex.
1000mbps_fd 1000Mbps full duplex.
1000mbps_hd 1000Mbps half duplex.
0 Autosensing active media.
1 10Mbps half duplex.
2 10Mbps full duplex.
3 100Mbps half duplex.
4 100Mbps full duplex.
5 1000Mbps full duplex.
6 1000Mbps half duplex.
By default, the NIC operates at autosense.
Note that only the 1000mbps_fd and 1000mbps_hd
types are available for fiber adapters.
vlan=x - Specifies the VLAN ID. If vlan=0, the
Virtual Local Area Network (VLAN) function is
......@@ -208,9 +216,8 @@ int_count - Rx frame count each interrupt.
int_timeout - Rx DMA wait time for an interrupt. Proper
values of int_count and int_timeout can bring
a noticeable performance improvement on fast machines.
For P4 1.5GHz systems, a setting of
int_count=5 and int_timeout=750 is
recommendable.
Ex. int_count=5 and int_timeout=750
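The parameters above can also be made persistent through the module configuration file instead of the insmod command line. A minimal sketch, assuming the NIC is eth0; the media value follows the table above, and the int_count/int_timeout values are the example ones from this document:

```shell
# /etc/modules.conf fragment -- illustrative sketch only
alias eth0 dl2k
# media=4 selects 100Mbps full duplex per the table above
options dl2k media=4 int_count=5 int_timeout=750
```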
Configuration Script Sample
===========================
Here is a sample of a simple configuration script:
......
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 15
EXTRAVERSION =-pre6
EXTRAVERSION =-pre7
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
......
......@@ -378,14 +378,15 @@ fi
mainmenu_option next_comment
comment 'Kernel hacking'
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
tristate 'Kernel FP software completion' CONFIG_MATHEMU
bool 'Legacy kernel start address' CONFIG_ALPHA_LEGACY_START_ADDRESS
bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
tristate ' Kernel FP software completion' CONFIG_MATHEMU
bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
else
define_tristate CONFIG_MATHEMU y
define_tristate CONFIG_MATHEMU y
fi
bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
bool 'Legacy kernel start address' CONFIG_ALPHA_LEGACY_START_ADDRESS
endmenu
#
# Automatically generated make config: don't edit
#
CONFIG_ALPHA=y
# CONFIG_UID16 is not set
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
#
# Code maturity level options
......@@ -49,10 +52,13 @@ CONFIG_ALPHA_GENERIC=y
# CONFIG_ALPHA_TITAN is not set
# CONFIG_ALPHA_WILDFIRE is not set
CONFIG_ISA=y
CONFIG_EISA=y
# CONFIG_SBUS is not set
# CONFIG_MCA is not set
CONFIG_PCI=y
CONFIG_ALPHA_BROKEN_IRQ_MASK=y
# CONFIG_SMP is not set
# CONFIG_DISCONTIGMEM is not set
# CONFIG_ALPHA_LARGE_VMALLOC is not set
CONFIG_PCI_NAMES=y
# CONFIG_HOTPLUG is not set
......@@ -107,8 +113,8 @@ CONFIG_BLK_DEV_LOOP=m
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID5 is not set
# CONFIG_MD_MULTIPATH is not set
# CONFIG_BLK_DEV_LVM is not set
# CONFIG_LVM_PROC_FS is not set
#
# Networking options
......@@ -135,12 +141,16 @@ CONFIG_INET_ECN=y
#
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
CONFIG_IP_NF_IPTABLES=m
# CONFIG_IP_NF_MATCH_LIMIT is not set
# CONFIG_IP_NF_MATCH_MAC is not set
# CONFIG_IP_NF_MATCH_MARK is not set
# CONFIG_IP_NF_MATCH_MULTIPORT is not set
# CONFIG_IP_NF_MATCH_TOS is not set
# CONFIG_IP_NF_MATCH_LENGTH is not set
# CONFIG_IP_NF_MATCH_TTL is not set
# CONFIG_IP_NF_MATCH_TCPMSS is not set
# CONFIG_IP_NF_MATCH_STATE is not set
# CONFIG_IP_NF_MATCH_UNCLEAN is not set
# CONFIG_IP_NF_MATCH_OWNER is not set
......@@ -151,13 +161,18 @@ CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_IP_NF_TARGET_REDIRECT is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
# CONFIG_IP_NF_MANGLE is not set
# CONFIG_IP_NF_TARGET_LOG is not set
# CONFIG_IP_NF_TARGET_TCPMSS is not set
CONFIG_IP_NF_COMPAT_IPCHAINS=y
CONFIG_IP_NF_NAT_NEEDED=y
# CONFIG_IPV6 is not set
# CONFIG_KHTTPD is not set
# CONFIG_ATM is not set
CONFIG_VLAN_8021Q=m
#
#
......@@ -222,6 +237,7 @@ CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_IDEPCI_SHARE_IRQ is not set
CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_BLK_DEV_ADMA=y
# CONFIG_BLK_DEV_OFFBOARD is not set
CONFIG_IDEDMA_PCI_AUTO=y
CONFIG_BLK_DEV_IDEDMA=y
......@@ -231,8 +247,8 @@ CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_AEC62XX_TUNING is not set
CONFIG_BLK_DEV_ALI15X3=y
# CONFIG_WDC_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD7409 is not set
# CONFIG_AMD7409_OVERRIDE is not set
# CONFIG_BLK_DEV_AMD74XX is not set
# CONFIG_AMD74XX_OVERRIDE is not set
CONFIG_BLK_DEV_CMD64X=y
CONFIG_BLK_DEV_CY82C693=y
# CONFIG_BLK_DEV_CS5530 is not set
......@@ -243,7 +259,10 @@ CONFIG_BLK_DEV_CY82C693=y
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_PDC202XX is not set
# CONFIG_PDC202XX_BURST is not set
# CONFIG_PDC202XX_FORCE is not set
# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIS5513 is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_IDE_CHIPSETS is not set
......@@ -251,6 +270,9 @@ CONFIG_IDEDMA_AUTO=y
# CONFIG_IDEDMA_IVB is not set
# CONFIG_DMA_NONPCI is not set
CONFIG_BLK_DEV_IDE_MODES=y
# CONFIG_BLK_DEV_ATARAID is not set
# CONFIG_BLK_DEV_ATARAID_PDC is not set
# CONFIG_BLK_DEV_ATARAID_HPT is not set
#
# SCSI support
......@@ -263,6 +285,7 @@ CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SD_EXTRA_DEVS=40
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_SR_EXTRA_DEVS=2
......@@ -288,6 +311,8 @@ CONFIG_SR_EXTRA_DEVS=2
CONFIG_SCSI_AIC7XXX=y
CONFIG_AIC7XXX_CMDS_PER_DEVICE=253
CONFIG_AIC7XXX_RESET_DELAY_MS=5000
# CONFIG_AIC7XXX_BUILD_FIRMWARE is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_SCSI_AM53C974 is not set
......@@ -349,6 +374,12 @@ CONFIG_DUMMY=m
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
# CONFIG_SUNLANCE is not set
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNBMAC is not set
# CONFIG_SUNQE is not set
# CONFIG_SUNLANCE is not set
# CONFIG_SUNGEM is not set
CONFIG_NET_VENDOR_3COM=y
# CONFIG_EL1 is not set
# CONFIG_EL2 is not set
......@@ -356,12 +387,15 @@ CONFIG_NET_VENDOR_3COM=y
# CONFIG_EL16 is not set
# CONFIG_EL3 is not set
# CONFIG_3C515 is not set
# CONFIG_ELMC is not set
# CONFIG_ELMC_II is not set
CONFIG_VORTEX=y
# CONFIG_LANCE is not set
# CONFIG_NET_VENDOR_SMC is not set
# CONFIG_NET_VENDOR_RACAL is not set
# CONFIG_AT1700 is not set
# CONFIG_DEPCA is not set
# CONFIG_HP100 is not set
# CONFIG_NET_ISA is not set
CONFIG_NET_PCI=y
# CONFIG_PCNET32 is not set
......@@ -393,11 +427,15 @@ CONFIG_TULIP=y
# Ethernet (1000 Mbit)
#
# CONFIG_ACENIC is not set
CONFIG_DL2K=m
# CONFIG_MYRI_SBUS is not set
CONFIG_NS83820=m
# CONFIG_HAMACHI is not set
CONFIG_YELLOWFIN=y
# CONFIG_SK98LIN is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_PLIP is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
......@@ -463,7 +501,11 @@ CONFIG_PSMOUSE=y
#
# Joysticks
#
# CONFIG_JOYSTICK is not set
# CONFIG_INPUT_GAMEPORT is not set
#
# Input core support is needed for gameports
#
#
# Input core support is needed for joysticks
......@@ -499,6 +541,8 @@ CONFIG_RTC=y
# CONFIG_QUOTA is not set
CONFIG_AUTOFS_FS=m
# CONFIG_AUTOFS4_FS is not set
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
# CONFIG_ADFS_FS is not set
# CONFIG_ADFS_FS_RW is not set
# CONFIG_AFFS_FS is not set
......@@ -510,11 +554,15 @@ CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
# CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
# CONFIG_JFFS2_FS is not set
# CONFIG_CRAMFS is not set
CONFIG_TMPFS=y
# CONFIG_RAMFS is not set
CONFIG_ISO9660_FS=y
# CONFIG_JOLIET is not set
# CONFIG_ZISOFS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_NTFS_RW is not set
# CONFIG_HPFS_FS is not set
......@@ -528,7 +576,6 @@ CONFIG_DEVPTS_FS=y
# CONFIG_ROMFS_FS is not set
CONFIG_EXT2_FS=y
# CONFIG_SYSV_FS is not set
# CONFIG_SYSV_FS_WRITE is not set
# CONFIG_UDF_FS is not set
# CONFIG_UDF_RW is not set
# CONFIG_UFS_FS is not set
......@@ -554,10 +601,10 @@ CONFIG_LOCKD_V4=y
# CONFIG_NCPFS_NFS_NS is not set
# CONFIG_NCPFS_OS2_NS is not set
# CONFIG_NCPFS_SMALLDOS is not set
# CONFIG_NCPFS_MOUNT_SUBDIR is not set
# CONFIG_NCPFS_NDS_DOMAINS is not set
# CONFIG_NCPFS_NLS is not set
# CONFIG_NCPFS_EXTRAS is not set
# CONFIG_ZISOFS_FS is not set
# CONFIG_ZLIB_FS_INFLATE is not set
#
# Partition Types
......@@ -565,6 +612,7 @@ CONFIG_LOCKD_V4=y
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_OSF_PARTITION=y
CONFIG_MSDOS_PARTITION=y
# CONFIG_SMB_NLS is not set
CONFIG_NLS=y
#
......@@ -586,11 +634,13 @@ CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ISO8859_1 is not set
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
......@@ -598,11 +648,12 @@ CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_UTF8 is not set
#
......@@ -625,14 +676,122 @@ CONFIG_VGA_CONSOLE=y
#
# CONFIG_USB is not set
#
# USB Controllers
#
# CONFIG_USB_UHCI is not set
# CONFIG_USB_UHCI_ALT is not set
# CONFIG_USB_OHCI is not set
#
# USB Device Class drivers
#
# CONFIG_USB_AUDIO is not set
# CONFIG_USB_BLUETOOTH is not set
# CONFIG_USB_STORAGE is not set
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_DPCM is not set
# CONFIG_USB_STORAGE_HP8200e is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
#
# USB Human Interface Devices (HID)
#
#
# Input core support is needed for USB HID
#
#
# USB Imaging devices
#
# CONFIG_USB_DC2XX is not set
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_SCANNER is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_HPUSBSCSI is not set
#
# USB Multimedia devices
#
#
# Video4Linux support is needed for USB Multimedia device support
#
#
# USB Network adaptors
#
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_CATC is not set
# CONFIG_USB_CDCETHER is not set
# CONFIG_USB_USBNET is not set
#
# USB port drivers
#
# CONFIG_USB_USS720 is not set
#
# USB Serial Converter support
#
# CONFIG_USB_SERIAL is not set
# CONFIG_USB_SERIAL_GENERIC is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_EMPEG is not set
# CONFIG_USB_SERIAL_FTDI_SIO is not set
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA28 is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA28X is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA28XA is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA28XB is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA19 is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA18X is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA19W is not set
# CONFIG_USB_SERIAL_KEYSPAN_USA49W is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_XIRCOM is not set
# CONFIG_USB_SERIAL_OMNINET is not set
#
# USB Miscellaneous drivers
#
# CONFIG_USB_RIO500 is not set
#
# Input core support
#
# CONFIG_INPUT is not set
# CONFIG_INPUT_KEYBDEV is not set
# CONFIG_INPUT_MOUSEDEV is not set
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_EVDEV is not set
#
# Bluetooth support
#
# CONFIG_BLUEZ is not set
#
# Kernel hacking
#
CONFIG_ALPHA_LEGACY_START_ADDRESS=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MATHEMU=y
# CONFIG_DEBUG_SLAB is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_ALPHA_LEGACY_START_ADDRESS=y
......@@ -1233,13 +1233,12 @@ static int __init init_amd(struct cpuinfo_x86 *c)
}
/* K6 with old style WHCR */
if( c->x86_model < 8 ||
(c->x86_model== 8 && c->x86_mask < 8))
{
if (c->x86_model < 8 ||
(c->x86_model== 8 && c->x86_mask < 8)) {
/* We can only write allocate on the low 508Mb */
if(mbytes>508)
mbytes=508;
rdmsr(MSR_K6_WHCR, l, h);
if ((l&0x0000FFFF)==0) {
unsigned long flags;
......@@ -1250,14 +1249,14 @@ static int __init init_amd(struct cpuinfo_x86 *c)
local_irq_restore(flags);
printk(KERN_INFO "Enabling old style K6 write allocation for %d Mb\n",
mbytes);
}
break;
}
if (c->x86_model == 8 || c->x86_model == 9 || c->x86_model == 13)
{
if ((c->x86_model == 8 && c->x86_mask >7) ||
c->x86_model == 9 || c->x86_model == 13) {
/* The more serious chips .. */
if(mbytes>4092)
mbytes=4092;
......@@ -1274,10 +1273,8 @@ static int __init init_amd(struct cpuinfo_x86 *c)
}
/* Set MTRR capability flag if appropriate */
if ( (c->x86_model == 13) ||
(c->x86_model == 9) ||
((c->x86_model == 8) &&
(c->x86_mask >= 8)) )
if (c->x86_model == 13 || c->x86_model == 9 ||
(c->x86_model == 8 && c->x86_mask >= 8))
set_bit(X86_FEATURE_K6_MTRR, &c->x86_capability);
break;
}
......
......@@ -180,6 +180,7 @@ static int lo_send(struct loop_device *lo, struct buffer_head *bh, int bsize,
unsigned size, offset;
int len;
down(&mapping->host->i_sem);
index = pos >> PAGE_CACHE_SHIFT;
offset = pos & (PAGE_CACHE_SIZE - 1);
len = bh->b_size;
......@@ -220,12 +221,14 @@ static int lo_send(struct loop_device *lo, struct buffer_head *bh, int bsize,
UnlockPage(page);
page_cache_release(page);
}
up(&mapping->host->i_sem);
return 0;
unlock:
UnlockPage(page);
page_cache_release(page);
fail:
up(&mapping->host->i_sem);
return -1;
}
......
......@@ -359,6 +359,7 @@ struct file_operations lvm_chr_fops = {
/* block device operations structure needed for 2.3.38? and above */
struct block_device_operations lvm_blk_dops =
{
owner: THIS_MODULE,
open: lvm_blk_open,
release: lvm_blk_close,
ioctl: lvm_blk_ioctl,
......
......@@ -625,6 +625,10 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
len = skb->len;
mapping = pci_map_single(cp->pdev, skb->data, len, PCI_DMA_TODEVICE);
eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
txd->opts2 = 0;
txd->addr_lo = cpu_to_le32(mapping);
wmb();
#ifdef CP_TX_CHECKSUM
txd->opts1 = cpu_to_le32(eor | len | DescOwn | FirstFrag |
LastFrag | IPCS | UDPCS | TCPCS);
......@@ -632,13 +636,11 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
txd->opts1 = cpu_to_le32(eor | len | DescOwn | FirstFrag |
LastFrag);
#endif
txd->opts2 = 0;
txd->addr_lo = cpu_to_le32(mapping);
wmb();
cp->tx_skb[entry].skb = skb;
cp->tx_skb[entry].mapping = mapping;
cp->tx_skb[entry].frag = 0;
wmb();
entry = NEXT_TX(entry);
} else {
struct cp_desc *txd;
......@@ -676,24 +678,29 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
ctrl |= LastFrag;
txd = &cp->tx_ring[entry];
txd->opts1 = cpu_to_le32(ctrl);
txd->opts2 = 0;
txd->addr_lo = cpu_to_le32(mapping);
wmb();
txd->opts1 = cpu_to_le32(ctrl);
wmb();
cp->tx_skb[entry].skb = skb;
cp->tx_skb[entry].mapping = mapping;
cp->tx_skb[entry].frag = frag + 2;
wmb();
entry = NEXT_TX(entry);
}
txd = &cp->tx_ring[first_entry];
txd->opts2 = 0;
txd->addr_lo = cpu_to_le32(first_mapping);
wmb();
#ifdef CP_TX_CHECKSUM
txd->opts1 = cpu_to_le32(first_len | FirstFrag | DescOwn | IPCS | UDPCS | TCPCS);
#else
txd->opts1 = cpu_to_le32(first_len | FirstFrag | DescOwn);
#endif
txd->opts2 = 0;
txd->addr_lo = cpu_to_le32(first_mapping);
wmb();
}
cp->tx_head = entry;
......
......@@ -188,6 +188,7 @@ if [ "$CONFIG_NET_ETHERNET" = "y" ]; then
tristate ' TI ThunderLAN support' CONFIG_TLAN
fi
dep_tristate ' VIA Rhine support' CONFIG_VIA_RHINE $CONFIG_PCI
dep_mbool ' Use MMIO instead of PIO (EXPERIMENTAL)' CONFIG_VIA_RHINE_MMIO $CONFIG_VIA_RHINE $CONFIG_EXPERIMENTAL
dep_tristate ' Winbond W89c840 Ethernet support' CONFIG_WINBOND_840 $CONFIG_PCI
if [ "$CONFIG_OBSOLETE" = "y" ]; then
dep_bool ' Zenith Z-Note support (EXPERIMENTAL)' CONFIG_ZNET $CONFIG_ISA
......
......@@ -208,8 +208,32 @@ static inline void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
(((u64)(mask) & 0xffffffff00000000) == 0 ? 0 : -EIO)
#define pci_dma_supported(dev, mask) \
(((u64)(mask) & 0xffffffff00000000) == 0 ? 1 : 0)
#elif (LINUX_VERSION_CODE < 0x02040d)
/*
* 2.4.13 introduced pci_map_page()/pci_unmap_page() - for 2.4.12 and prior,
* fall back on pci_map_single()/pci_unmap_single().
*
* We are guaranteed that the page is mapped at this point since
* pci_map_page() is only used upon valid struct skb's.
*/
static inline dma_addr_t
pci_map_page(struct pci_dev *cookie, struct page *page, unsigned long off,
size_t size, int dir)
{
void *page_virt;
page_virt = page_address(page);
if (!page_virt)
BUG();
return pci_map_single(cookie, (page_virt + off), size, dir);
}
#define pci_unmap_page(cookie, dma_addr, size, dir) \
pci_unmap_single(cookie, dma_addr, size, dir)
#endif
#if (LINUX_VERSION_CODE < 0x02032b)
/*
* SoftNet
......@@ -525,7 +549,7 @@ static int tx_ratio[ACE_MAX_MOD_PARMS];
static int dis_pci_mem_inval[ACE_MAX_MOD_PARMS] = {1, 1, 1, 1, 1, 1, 1, 1};
static char version[] __initdata =
"acenic.c: v0.83 09/30/2001 Jes Sorensen, linux-acenic@SunSITE.dk\n"
"acenic.c: v0.85 11/08/2001 Jes Sorensen, linux-acenic@SunSITE.dk\n"
" http://home.cern.ch/~jes/gige/acenic.html\n";
static struct net_device *root_dev;
......@@ -538,7 +562,6 @@ int __devinit acenic_probe (ACE_PROBE_ARG)
#ifdef NEW_NETINIT
struct net_device *dev;
#endif
struct ace_private *ap;
struct pci_dev *pdev = NULL;
int boards_found = 0;
......@@ -738,6 +761,7 @@ int __devinit acenic_probe (ACE_PROBE_ARG)
kfree(dev);
continue;
}
if (ap->pci_using_dac)
dev->features |= NETIF_F_HIGHDMA;
......@@ -767,12 +791,14 @@ MODULE_PARM(tx_coal_tick, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(max_tx_desc, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(rx_coal_tick, "1-" __MODULE_STRING(8) "i");
MODULE_PARM(max_rx_desc, "1-" __MODULE_STRING(8) "i");
MODULE_PARM_DESC(link, "Acenic/3C985/NetGear link state");
MODULE_PARM_DESC(trace, "Acenic/3C985/NetGear firmware trace level");
MODULE_PARM(tx_ratio, "1-" __MODULE_STRING(8) "i");
MODULE_PARM_DESC(link, "AceNIC/3C985/NetGear link state");
MODULE_PARM_DESC(trace, "AceNIC/3C985/NetGear firmware trace level");
MODULE_PARM_DESC(tx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait from first tx descriptor arrives");
MODULE_PARM_DESC(max_tx_desc, "AceNIC/3C985/GA620 max number of transmit descriptors to wait");
MODULE_PARM_DESC(rx_coal_tick, "AceNIC/3C985/GA620 max clock ticks to wait from first rx descriptor arrives");
MODULE_PARM_DESC(max_rx_desc, "AceNIC/3C985/GA620 max number of receive descriptors to wait");
MODULE_PARM_DESC(tx_ratio, "AceNIC/3C985/GA620 ratio of NIC memory used for TX/RX descriptors (range 0-63)");
#endif
......@@ -911,8 +937,7 @@ static void ace_free_descriptors(struct net_device *dev)
RX_JUMBO_RING_ENTRIES +
RX_MINI_RING_ENTRIES +
RX_RETURN_RING_ENTRIES));
pci_free_consistent(ap->pdev, size,
ap->rx_std_ring,
pci_free_consistent(ap->pdev, size, ap->rx_std_ring,
ap->rx_ring_base_dma);
ap->rx_std_ring = NULL;
ap->rx_jumbo_ring = NULL;
......@@ -921,8 +946,7 @@ static void ace_free_descriptors(struct net_device *dev)
}
if (ap->evt_ring != NULL) {
size = (sizeof(struct event) * EVT_RING_ENTRIES);
pci_free_consistent(ap->pdev, size,
ap->evt_ring,
pci_free_consistent(ap->pdev, size, ap->evt_ring,
ap->evt_ring_dma);
ap->evt_ring = NULL;
}
......@@ -933,7 +957,8 @@ static void ace_free_descriptors(struct net_device *dev)
}
if (ap->rx_ret_prd != NULL) {
pci_free_consistent(ap->pdev, sizeof(u32),
(void *)ap->rx_ret_prd, ap->rx_ret_prd_dma);
(void *)ap->rx_ret_prd,
ap->rx_ret_prd_dma);
ap->rx_ret_prd = NULL;
}
if (ap->tx_csm != NULL) {
......@@ -1051,8 +1076,8 @@ static int __init ace_init(struct net_device *dev)
struct ace_private *ap;
struct ace_regs *regs;
struct ace_info *info = NULL;
u64 tmp_ptr;
unsigned long myjif;
u64 tmp_ptr;
u32 tig_ver, mac1, mac2, tmp, pci_state;
int board_idx, ecode = 0;
short i;
......@@ -1306,9 +1331,9 @@ static int __init ace_init(struct net_device *dev)
/*
* Configure DMA attributes.
*/
if (!pci_set_dma_mask(ap->pdev, (u64) 0xffffffffffffffff)) {
if (!pci_set_dma_mask(ap->pdev, 0xffffffffffffffffULL)) {
ap->pci_using_dac = 1;
} else if (!pci_set_dma_mask(ap->pdev, (u64) 0xffffffff)) {
} else if (!pci_set_dma_mask(ap->pdev, 0xffffffffULL)) {
ap->pci_using_dac = 0;
} else {
ecode = -ENODEV;
......@@ -1362,7 +1387,7 @@ static int __init ace_init(struct net_device *dev)
ace_load_firmware(dev);
ap->fw_running = 0;
tmp_ptr = (u64) ap->info_dma;
tmp_ptr = ap->info_dma;
writel(tmp_ptr >> 32, &regs->InfoPtrHi);
writel(tmp_ptr & 0xffffffff, &regs->InfoPtrLo);
......@@ -1428,7 +1453,8 @@ static int __init ace_init(struct net_device *dev)
(RX_STD_RING_ENTRIES +
RX_JUMBO_RING_ENTRIES))));
info->rx_mini_ctrl.max_len = ACE_MINI_SIZE;
info->rx_mini_ctrl.flags = RCB_FLG_TCP_UDP_SUM|RCB_FLG_NO_PSEUDO_HDR;
info->rx_mini_ctrl.flags =
RCB_FLG_TCP_UDP_SUM|RCB_FLG_NO_PSEUDO_HDR;
for (i = 0; i < RX_MINI_RING_ENTRIES; i++)
ap->rx_mini_ring[i].flags =
......@@ -1712,11 +1738,13 @@ static void ace_watchdog(struct net_device *data)
dev->name, (unsigned int)readl(&regs->HostCtrl));
/* This can happen due to ieee flow control. */
} else {
printk(KERN_DEBUG "%s: BUG... transmitter died. Kicking it.\n", dev->name);
printk(KERN_DEBUG "%s: BUG... transmitter died. Kicking it.\n",
dev->name);
netif_wake_queue(dev);
}
}
static void ace_tasklet(unsigned long dev)
{
struct ace_private *ap = ((struct net_device *)dev)->priv;
......@@ -1747,7 +1775,7 @@ static void ace_tasklet(unsigned long dev)
if (ap->jumbo && (cur_size < RX_LOW_JUMBO_THRES) &&
!test_and_set_bit(0, &ap->jumbo_refill_busy)) {
#if DEBUG
printk("refilling jumbo buffers (current %i)\n", >cur_size);
printk("refilling jumbo buffers (current %i)\n", cur_size);
#endif
ace_load_jumbo_rx_ring(ap, RX_JUMBO_SIZE - cur_size);
}
......@@ -1799,10 +1827,8 @@ static void ace_load_std_rx_ring(struct ace_private *ap, int nr_bufs)
* Make sure IP header starts on a fresh cache line.
*/
skb_reserve(skb, 2 + 16);
mapping = pci_map_page(ap->pdev,
virt_to_page(skb->data),
((unsigned long) skb->data &
~PAGE_MASK),
mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
((unsigned long)skb->data & ~PAGE_MASK),
ACE_STD_BUFSIZE - (2 + 16),
PCI_DMA_FROMDEVICE);
ap->skb->rx_std_skbuff[idx].skb = skb;
......@@ -1866,10 +1892,8 @@ static void ace_load_mini_rx_ring(struct ace_private *ap, int nr_bufs)
* Make sure the IP header ends up on a fresh cache line
*/
skb_reserve(skb, 2 + 16);
mapping = pci_map_page(ap->pdev,
virt_to_page(skb->data),
((unsigned long) skb->data &
~PAGE_MASK),
mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
((unsigned long)skb->data & ~PAGE_MASK),
ACE_MINI_BUFSIZE - (2 + 16),
PCI_DMA_FROMDEVICE);
ap->skb->rx_mini_skbuff[idx].skb = skb;
......@@ -1928,10 +1952,8 @@ static void ace_load_jumbo_rx_ring(struct ace_private *ap, int nr_bufs)
* Make sure the IP header ends up on a fresh cache line
*/
skb_reserve(skb, 2 + 16);
mapping = pci_map_page(ap->pdev,
virt_to_page(skb->data),
((unsigned long) skb->data &
~PAGE_MASK),
mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
((unsigned long)skb->data & ~PAGE_MASK),
ACE_JUMBO_BUFSIZE - (2 + 16),
PCI_DMA_FROMDEVICE);
ap->skb->rx_jumbo_skbuff[idx].skb = skb;
......@@ -2499,7 +2521,7 @@ static int ace_close(struct net_device *dev)
mapping = info->mapping;
if (mapping) {
memset(ap->tx_ring+i, 0, sizeof(struct tx_desc));
memset(ap->tx_ring + i, 0, sizeof(struct tx_desc));
pci_unmap_page(ap->pdev, mapping, info->maplen,
PCI_DMA_TODEVICE);
info->mapping = 0;
......@@ -2523,24 +2545,23 @@ static int ace_close(struct net_device *dev)
return 0;
}
static inline dma_addr_t
ace_map_tx_skb(struct ace_private *ap, struct sk_buff *skb,
struct sk_buff *tail, u32 idx)
{
unsigned long addr;
dma_addr_t mapping;
struct tx_ring_info *info;
addr = pci_map_page(ap->pdev,
virt_to_page(skb->data),
((unsigned long) skb->data &
~PAGE_MASK),
skb->len, PCI_DMA_TODEVICE);
mapping = pci_map_page(ap->pdev, virt_to_page(skb->data),
((unsigned long) skb->data & ~PAGE_MASK),
skb->len, PCI_DMA_TODEVICE);
info = ap->skb->tx_skbuff + idx;
info->skb = tail;
info->mapping = addr;
info->mapping = mapping;
info->maplen = skb->len;
return addr;
return mapping;
}
......@@ -2581,9 +2602,9 @@ static int ace_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (!skb_shinfo(skb)->nr_frags)
#endif
{
unsigned long addr;
dma_addr_t mapping;
addr = ace_map_tx_skb(ap, skb, skb, idx);
mapping = ace_map_tx_skb(ap, skb, skb, idx);
flagsize = (skb->len << 16) | (BD_FLG_END);
if (skb->ip_summed == CHECKSUM_HW)
flagsize |= BD_FLG_TCP_UDP_SUM;
......@@ -2594,42 +2615,40 @@ static int ace_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (tx_ring_full(ap->tx_ret_csm, idx))
flagsize |= BD_FLG_COAL_NOW;
ace_load_tx_bd(desc, addr, flagsize);
ace_load_tx_bd(desc, mapping, flagsize);
}
#if MAX_SKB_FRAGS
else {
unsigned long addr;
dma_addr_t mapping;
int i, len = 0;
addr = ace_map_tx_skb(ap, skb, NULL, idx);
mapping = ace_map_tx_skb(ap, skb, NULL, idx);
flagsize = ((skb->len - skb->data_len) << 16);
if (skb->ip_summed == CHECKSUM_HW)
flagsize |= BD_FLG_TCP_UDP_SUM;
ace_load_tx_bd(ap->tx_ring + idx, addr, flagsize);
ace_load_tx_bd(ap->tx_ring + idx, mapping, flagsize);
idx = (idx + 1) % TX_RING_ENTRIES;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
struct tx_ring_info *info;
dma_addr_t phys;
len += frag->size;
info = ap->skb->tx_skbuff + idx;
desc = ap->tx_ring + idx;
phys = pci_map_page(ap->pdev, frag->page,
frag->page_offset,
frag->size,
PCI_DMA_TODEVICE);
mapping = pci_map_page(ap->pdev, frag->page,
frag->page_offset, frag->size,
PCI_DMA_TODEVICE);
flagsize = (frag->size << 16);
if (skb->ip_summed == CHECKSUM_HW)
flagsize |= BD_FLG_TCP_UDP_SUM;
idx = (idx + 1) % TX_RING_ENTRIES;
if (i == skb_shinfo(skb)->nr_frags-1) {
if (i == skb_shinfo(skb)->nr_frags - 1) {
flagsize |= BD_FLG_END;
if (tx_ring_full(ap->tx_ret_csm, idx))
flagsize |= BD_FLG_COAL_NOW;
......@@ -2642,9 +2661,9 @@ static int ace_start_xmit(struct sk_buff *skb, struct net_device *dev)
} else {
info->skb = NULL;
}
info->mapping = phys;
info->mapping = mapping;
info->maplen = frag->size;
ace_load_tx_bd(desc, phys, flagsize);
ace_load_tx_bd(desc, mapping, flagsize);
}
}
#endif
......
......@@ -582,11 +582,13 @@ struct ace_info {
aceaddr stats2_ptr;
};
struct ring_info {
struct sk_buff *skb;
dma_addr_t mapping;
};
/*
* Funny... As soon as we add maplen on alpha, it starts to work
* much slower. Hmm... is it because struct does not fit to one cacheline?
......@@ -598,6 +600,7 @@ struct tx_ring_info {
int maplen;
};
/*
 * struct ace_skb holds the rings of skbs. This is an awful lot of
 * pointers, but I don't see any smarter way to do this in an
......
......@@ -12,34 +12,41 @@
/*
Rev Date Description
==========================================================================
0.01 2001/05/03 Create DL2000-based linux driver
0.02 2001/05/21 Add VLAN and hardware checksum support.
1.00 2001/06/26 Add jumbo frame support.
1.01 2001/08/21 Add two parameters, int_count and int_timeout.
0.01 2001/05/03 Created DL2000-based linux driver
0.02 2001/05/21 Added VLAN and hardware checksum support.
1.00 2001/06/26 Added jumbo frame support.
1.01 2001/08/21 Added two parameters, int_count and int_timeout.
1.02 2001/10/08 Supported fiber media.
Added flow control parameters.
1.03 2001/10/12 Changed the default media to 1000mbps_fd for the
fiber devices.
	1.04	2001/11/08	Fixed a bug where Tx stopped under very heavy load.
*/
#include "dl2k.h"
static char version[] __devinitdata =
KERN_INFO "D-Link DL2000-based linux driver v1.01 2001/08/30\n";
KERN_INFO "D-Link DL2000-based linux driver v1.04 2001/11/08\n";
#define MAX_UNITS 8
static int mtu[MAX_UNITS];
static int vlan[MAX_UNITS];
static int jumbo[MAX_UNITS];
static char *media[MAX_UNITS];
static int tx_flow[MAX_UNITS];
static int rx_flow[MAX_UNITS];
static int copy_thresh;
static int int_count; /* Rx frame count each interrupt */
static int int_timeout; /* Rx DMA wait time in 64ns increments */
MODULE_AUTHOR ("Edward Peng");
MODULE_DESCRIPTION ("D-Link DL2000-based Gigabit Ethernet Adapter");
MODULE_LICENSE("GPL");
MODULE_PARM (mtu, "1-" __MODULE_STRING (MAX_UNITS) "i");
MODULE_PARM (media, "1-" __MODULE_STRING (MAX_UNITS) "s");
MODULE_PARM (vlan, "1-" __MODULE_STRING (MAX_UNITS) "i");
MODULE_PARM (jumbo, "1-" __MODULE_STRING (MAX_UNITS) "i");
MODULE_PARM (tx_flow, "1-" __MODULE_STRING (MAX_UNITS) "i");
MODULE_PARM (rx_flow, "1-" __MODULE_STRING (MAX_UNITS) "i");
MODULE_PARM (copy_thresh, "i");
MODULE_PARM (int_count, "i");
MODULE_PARM (int_timeout, "i");
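For illustration only, these load-time parameters could be wired up through a modutils configuration fragment; the parameter names come from the MODULE_PARM declarations above, while the alias and values are hypothetical:

```
# /etc/modules.conf fragment (hypothetical) for two DL2000 adapters:
# the first port autosenses, the second is forced to 100 Mbps full
# duplex, with Rx interrupt coalescing tuned on both.
alias eth0 dl2k
options dl2k media=auto,100mbps_fd int_count=16 int_timeout=16
```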
......@@ -72,6 +79,8 @@ static unsigned get_crc (unsigned char *p, int len);
static int mii_wait_link (struct net_device *dev, int wait);
static int mii_set_media (struct net_device *dev);
static int mii_get_media (struct net_device *dev);
static int mii_set_media_pcs (struct net_device *dev);
static int mii_get_media_pcs (struct net_device *dev);
static int mii_read (struct net_device *dev, int phy_addr, int reg_num);
static int mii_write (struct net_device *dev, int phy_addr, int reg_num,
u16 data);
......@@ -104,7 +113,6 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_disable;
pci_set_master (pdev);
dev = alloc_etherdev (sizeof (*np));
if (!dev) {
err = -ENOMEM;
......@@ -134,7 +142,11 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
if (card_idx < MAX_UNITS) {
if (media[card_idx] != NULL) {
np->an_enable = 0;
if (strcmp (media[card_idx], "100mbps_fd") == 0 ||
if (strcmp (media[card_idx], "auto") == 0 ||
strcmp (media[card_idx], "autosense") == 0 ||
strcmp (media[card_idx], "0") == 0 ) {
np->an_enable = 2;
} else if (strcmp (media[card_idx], "100mbps_fd") == 0 ||
strcmp (media[card_idx], "4") == 0) {
np->speed = 100;
np->full_duplex = 1;
......@@ -150,16 +162,14 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
strcmp (media[card_idx], "1") == 0) {
np->speed = 10;
np->full_duplex = 0;
}
/* Auto-Negotiation is mandatory for 1000BASE-T,
IEEE 802.3ab Annex 28D page 14 */
else if (strcmp (media[card_idx], "1000mbps_fd") == 0 ||
strcmp (media[card_idx], "5") == 0 ||
strcmp (media[card_idx], "1000mbps_hd") == 0 ||
} else if (strcmp (media[card_idx], "1000mbps_fd") == 0 ||
strcmp (media[card_idx], "5") == 0) {
np->speed=1000;
np->full_duplex=1;
} else if (strcmp (media[card_idx], "1000mbps_hd") == 0 ||
strcmp (media[card_idx], "6") == 0) {
np->speed = 1000;
np->full_duplex = 1;
np->an_enable = 1;
np->full_duplex = 0;
} else {
np->an_enable = 1;
}
......@@ -179,6 +189,9 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
np->int_timeout = int_timeout;
np->coalesce = 1;
}
np->tx_flow = (tx_flow[card_idx]) ? 1 : 0;
np->rx_flow = (rx_flow[card_idx]) ? 1 : 0;
}
dev->open = &rio_open;
dev->hard_start_xmit = &start_xmit;
......@@ -213,9 +226,27 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
err = find_miiphy (dev);
if (err)
goto err_out_unmap_rx;
/* Fiber device? */
np->phy_media = (readw(ioaddr + ASICCtrl) & PhyMedia) ? 1 : 0;
/* Set media and reset PHY */
mii_set_media (dev);
if (np->phy_media) {
		/* default 1000mbps_fd for fiber devices */
if (np->an_enable == 1) {
np->an_enable = 0;
np->speed = 1000;
np->full_duplex = 1;
} else if (np->an_enable == 2) {
np->an_enable = 1;
}
mii_set_media_pcs (dev);
} else {
/* Auto-Negotiation is mandatory for 1000BASE-T,
IEEE 802.3ab Annex 28D page 14 */
if (np->speed == 1000)
np->an_enable = 1;
mii_set_media (dev);
}
/* Reset all logic functions */
writew (GlobalReset | DMAReset | FIFOReset | NetworkReset | HostReset,
......@@ -227,7 +258,7 @@ rio_probe1 (struct pci_dev *pdev, const struct pci_device_id *ent)
card_idx++;
printk (KERN_INFO "%s: %s, %2x:%2x:%2x:%2x:%2x:%2x, IRQ %d\n",
printk (KERN_INFO "%s: %s, %02x:%02x:%02x:%02x:%02x:%02x, IRQ %d\n",
dev->name, np->name,
dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5], irq);
......@@ -297,7 +328,7 @@ parse_eeprom (struct net_device *dev)
/* Check CRC */
crc = ~get_crc (sromdata, 256 - 4);
if (psrom->crc != ~get_crc (sromdata, 256 - 4)) {
if (psrom->crc != crc) {
printk (KERN_ERR "%s: EEPROM data CRC error.\n", dev->name);
return -1;
}
......@@ -590,7 +621,7 @@ rio_interrupt (int irq, void *dev_instance, struct pt_regs *rgs)
if (int_status & RxDMAComplete)
receive_packet (dev);
/* TxComplete interrupt */
if (int_status & TxComplete) {
if (int_status & TxComplete || np->tx_full) {
int tx_status = readl (ioaddr + TxStatus);
if (tx_status & 0x01)
tx_error (dev, tx_status);
......@@ -715,8 +746,6 @@ tx_error (struct net_device *dev, int tx_status)
writel (readw (dev->base_addr + MACCtrl) | TxEnable, ioaddr + MACCtrl);
}
/* Every interrupt goes through here to see if any packets need processing;
   this keeps the Rx rings full in the critical case where they run out */
static int
receive_packet (struct net_device *dev)
{
......@@ -832,6 +861,7 @@ rio_error (struct net_device *dev, int int_status)
{
long ioaddr = dev->base_addr;
struct netdev_private *np = dev->priv;
u16 macctrl;
/* Stop the down counter and recovery the interrupt */
if (int_status & IntRequested) {
......@@ -845,15 +875,17 @@ rio_error (struct net_device *dev, int int_status)
if (int_status & LinkEvent) {
if (mii_wait_link (dev, 10) == 0) {
printk (KERN_INFO "%s: Link up\n", dev->name);
if (np->an_enable) {
/* Auto-Negotiation mode */
if (np->phy_media)
mii_get_media_pcs (dev);
else
mii_get_media (dev);
if (np->full_duplex) {
writew (readw (dev->base_addr + MACCtrl)
| DuplexSelect,
ioaddr + MACCtrl);
}
}
macctrl = 0;
macctrl |= (np->full_duplex) ? DuplexSelect : 0;
macctrl |= (np->tx_flow) ?
TxFlowControlEnable : 0;
macctrl |= (np->rx_flow) ?
RxFlowControlEnable : 0;
writew(macctrl, ioaddr + MACCtrl);
} else {
printk (KERN_INFO "%s: Link off\n", dev->name);
}
......@@ -1302,35 +1334,42 @@ mii_get_media (struct net_device *dev)
/* Auto-Negotiation not completed */
return -1;
}
negotiate.image = mii_read (dev, phy_addr, MII_ANAR) &
mii_read (dev, phy_addr, MII_ANLPAR);
negotiate.image = mii_read (dev, phy_addr, MII_ANAR) &
mii_read (dev, phy_addr, MII_ANLPAR);
mscr.image = mii_read (dev, phy_addr, MII_MSCR);
mssr.image = mii_read (dev, phy_addr, MII_MSSR);
if (mscr.bits.media_1000BT_FD & mssr.bits.lp_1000BT_FD) {
np->speed = 1000;
np->full_duplex = 1;
printk (KERN_INFO "Auto 1000BaseT, Full duplex.\n");
printk (KERN_INFO "Auto 1000 Mbps, Full duplex\n");
} else if (mscr.bits.media_1000BT_HD & mssr.bits.lp_1000BT_HD) {
np->speed = 1000;
np->full_duplex = 0;
printk (KERN_INFO "Auto 1000BaseT, Half duplex.\n");
printk (KERN_INFO "Auto 1000 Mbps, Half duplex\n");
} else if (negotiate.bits.media_100BX_FD) {
np->speed = 100;
np->full_duplex = 1;
printk (KERN_INFO "Auto 100BaseT, Full duplex.\n");
printk (KERN_INFO "Auto 100 Mbps, Full duplex\n");
} else if (negotiate.bits.media_100BX_HD) {
np->speed = 100;
np->full_duplex = 0;
printk (KERN_INFO "Auto 100BaseT, Half duplex.\n");
printk (KERN_INFO "Auto 100 Mbps, Half duplex\n");
} else if (negotiate.bits.media_10BT_FD) {
np->speed = 10;
np->full_duplex = 1;
printk (KERN_INFO "Auto 10BaseT, Full duplex.\n");
printk (KERN_INFO "Auto 10 Mbps, Full duplex\n");
} else if (negotiate.bits.media_10BT_HD) {
np->speed = 10;
np->full_duplex = 0;
printk (KERN_INFO "Auto 10BaseT, Half duplex.\n");
printk (KERN_INFO "Auto 10 Mbps, Half duplex\n");
}
if (negotiate.bits.pause) {
np->tx_flow = 1;
np->rx_flow = 1;
} else if (negotiate.bits.asymmetric) {
np->rx_flow = 1;
}
/* else tx_flow, rx_flow = user select */
} else {
bmcr.image = mii_read (dev, phy_addr, MII_BMCR);
if (bmcr.bits.speed100 == 1 && bmcr.bits.speed1000 == 0) {
......@@ -1341,11 +1380,20 @@ mii_get_media (struct net_device *dev)
printk (KERN_INFO "Operating at 1000 Mbps, ");
}
if (bmcr.bits.duplex_mode) {
printk ("Full duplex.\n");
printk ("Full duplex\n");
} else {
printk ("Half duplex.\n");
printk ("Half duplex\n");
}
}
if (np->tx_flow)
printk(KERN_INFO "Enable Tx Flow Control\n");
else
printk(KERN_INFO "Disable Tx Flow Control\n");
if (np->rx_flow)
printk(KERN_INFO "Enable Rx Flow Control\n");
else
printk(KERN_INFO "Disable Rx Flow Control\n");
return 0;
}
......@@ -1363,7 +1411,7 @@ mii_set_media (struct net_device *dev)
/* Does user set speed? */
if (np->an_enable) {
/* Reset to enable Auto-Negotiation */
/* Advertise capabilities */
bmsr.image = mii_read (dev, phy_addr, MII_BMSR);
anar.image = mii_read (dev, phy_addr, MII_ANAR);
anar.bits.media_100BX_FD = bmsr.bits.media_100BX_FD;
......@@ -1371,24 +1419,23 @@ mii_set_media (struct net_device *dev)
anar.bits.media_100BT4 = bmsr.bits.media_100BT4;
anar.bits.media_10BT_FD = bmsr.bits.media_10BT_FD;
anar.bits.media_10BT_HD = bmsr.bits.media_10BT_HD;
anar.bits.pause = 1;
anar.bits.asymmetric = 1;
mii_write (dev, phy_addr, MII_ANAR, anar.image);
/* Enable Auto crossover */
pscr.image = mii_read (dev, phy_addr, MII_PHY_SCR);
pscr.bits.mdi_crossover_mode = 3; /* 11'b */
mii_write (dev, phy_addr, MII_PHY_SCR, pscr.image);
/* Soft reset PHY */
mii_write (dev, phy_addr, MII_BMCR, MII_BMCR_RESET);
bmcr.image = 0;
bmcr.bits.an_enable = 1;
bmcr.bits.restart_an = 1;
bmcr.bits.reset = 1;
mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
		/* Wait for link up; link-up needs a certain time */
if (mii_wait_link (dev, 3200) != 0) {
printk (KERN_INFO "Link time out\n");
}
mdelay (1);
mii_get_media (dev);
mdelay(1);
} else {
/* Force speed setting */
/* 1) Disable Auto crossover */
......@@ -1423,10 +1470,10 @@ mii_set_media (struct net_device *dev)
}
if (np->full_duplex) {
bmcr.bits.duplex_mode = 1;
printk ("Full duplex. \n");
printk ("Full duplex\n");
} else {
bmcr.bits.duplex_mode = 0;
printk ("Half duplex.\n");
printk ("Half duplex\n");
}
#if 0
/* Set 1000BaseT Master/Slave setting */
......@@ -1435,16 +1482,125 @@ mii_set_media (struct net_device *dev)
mscr.bits.cfg_value = 0;
#endif
mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
mdelay(10);
}
return 0;
}
	/* Wait for link up; link-up needs a certain time */
if (mii_wait_link (dev, 3200) != 0) {
printk (KERN_INFO "Link time out\n");
static int
mii_get_media_pcs (struct net_device *dev)
{
ANAR_PCS_t negotiate;
BMSR_t bmsr;
BMCR_t bmcr;
int phy_addr;
struct netdev_private *np;
np = dev->priv;
phy_addr = np->phy_addr;
bmsr.image = mii_read (dev, phy_addr, PCS_BMSR);
if (np->an_enable) {
if (!bmsr.bits.an_complete) {
/* Auto-Negotiation not completed */
return -1;
}
negotiate.image = mii_read (dev, phy_addr, PCS_ANAR) &
mii_read (dev, phy_addr, PCS_ANLPAR);
np->speed = 1000;
if (negotiate.bits.full_duplex) {
printk (KERN_INFO "Auto 1000 Mbps, Full duplex\n");
np->full_duplex = 1;
} else {
			printk (KERN_INFO "Auto 1000 Mbps, Half duplex\n");
np->full_duplex = 0;
}
if (negotiate.bits.pause) {
np->tx_flow = 1;
np->rx_flow = 1;
} else if (negotiate.bits.asymmetric) {
np->rx_flow = 1;
}
mii_get_media (dev);
/* else tx_flow, rx_flow = user select */
} else {
bmcr.image = mii_read (dev, phy_addr, PCS_BMCR);
printk (KERN_INFO "Operating at 1000 Mbps, ");
if (bmcr.bits.duplex_mode) {
printk ("Full duplex\n");
} else {
printk ("Half duplex\n");
}
}
if (np->tx_flow)
printk(KERN_INFO "Enable Tx Flow Control\n");
else
printk(KERN_INFO "Disable Tx Flow Control\n");
if (np->rx_flow)
printk(KERN_INFO "Enable Rx Flow Control\n");
else
printk(KERN_INFO "Disable Rx Flow Control\n");
return 0;
}
static int
mii_set_media_pcs (struct net_device *dev)
{
BMCR_t bmcr;
ESR_t esr;
ANAR_PCS_t anar;
int phy_addr;
struct netdev_private *np;
np = dev->priv;
phy_addr = np->phy_addr;
/* Auto-Negotiation? */
if (np->an_enable) {
/* Advertise capabilities */
esr.image = mii_read (dev, phy_addr, PCS_ESR);
anar.image = mii_read (dev, phy_addr, MII_ANAR);
anar.bits.half_duplex =
esr.bits.media_1000BT_HD | esr.bits.media_1000BX_HD;
anar.bits.full_duplex =
esr.bits.media_1000BT_FD | esr.bits.media_1000BX_FD;
anar.bits.pause = 1;
anar.bits.asymmetric = 1;
mii_write (dev, phy_addr, MII_ANAR, anar.image);
/* Soft reset PHY */
mii_write (dev, phy_addr, MII_BMCR, MII_BMCR_RESET);
bmcr.image = 0;
bmcr.bits.an_enable = 1;
bmcr.bits.restart_an = 1;
bmcr.bits.reset = 1;
mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
mdelay(1);
} else {
/* Force speed setting */
/* PHY Reset */
bmcr.image = 0;
bmcr.bits.reset = 1;
mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
mdelay(10);
bmcr.image = 0;
bmcr.bits.an_enable = 0;
if (np->full_duplex) {
bmcr.bits.duplex_mode = 1;
printk (KERN_INFO "Manual full duplex\n");
} else {
bmcr.bits.duplex_mode = 0;
printk (KERN_INFO "Manual half duplex\n");
}
mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
mdelay(10);
/* Advertise nothing */
mii_write (dev, phy_addr, MII_ANAR, 0);
}
return 0;
}
static int
rio_close (struct net_device *dev)
{
......@@ -1521,10 +1677,6 @@ static struct pci_driver rio_driver = {
static int __init
rio_init (void)
{
#ifdef MODULE
printk ("%s", version);
#endif
return pci_module_init (&rio_driver);
}
......@@ -1543,4 +1695,6 @@ Compile command:
gcc -D__KERNEL__ -DMODULE -I/usr/src/linux/include -Wall -Wstrict-prototypes -O2 -c dl2k.c
Read Documentation/networking/dl2k.txt for details.
*/
......@@ -209,6 +209,11 @@ enum MACCtrl_bits {
RxDisable = 0x10000000,
RxEnabled = 0x20000000,
};
enum ASICCtrl_LoWord_bits {
PhyMedia = 0x0080,
};
enum ASICCtrl_HiWord_bits {
GlobalReset = 0x0001,
RxReset = 0x0002,
......@@ -277,6 +282,17 @@ enum _mii_reg {
MII_ESR = 15,
MII_PHY_SCR = 16,
};
/* PCS register */
enum _pcs_reg {
PCS_BMCR = 0,
PCS_BMSR = 1,
PCS_ANAR = 4,
PCS_ANLPAR = 5,
PCS_ANER = 6,
PCS_ANNPT = 7,
PCS_ANLPRNP = 8,
PCS_ESR = 15,
};
/* Basic Mode Control Register */
typedef union t_MII_BMCR {
......@@ -533,6 +549,58 @@ typedef enum t_MII_ADMIN_STATUS {
adm_isolate
} MII_ADMIN_t, *PMII_ADMIN_t;
/* Physical Coding Sublayer Management (PCS) */
/* PCS control and status registers bitmap as the same as MII */
/* PCS Extended Status register bitmap as the same as MII */
/* PCS ANAR */
typedef union t_PCS_ANAR {
u16 image;
struct {
u16 _bit_4_0:5; // bit 4:0
u16 full_duplex:1; // bit 5
u16 half_duplex:1; // bit 6
u16 asymmetric:1; // bit 7
u16 pause:1; // bit 8
u16 _bit_11_9:3; // bit 11:9
u16 remote_fault:2; // bit 13:12
u16 _bit_14:1; // bit 14
u16 next_page:1; // bit 15
} bits;
} ANAR_PCS_t, *PANAR_PCS_t;
enum _pcs_anar {
PCS_ANAR_NEXT_PAGE = 0x8000,
PCS_ANAR_REMOTE_FAULT = 0x3000,
PCS_ANAR_ASYMMETRIC = 0x0100,
PCS_ANAR_PAUSE = 0x0080,
PCS_ANAR_HALF_DUPLEX = 0x0040,
PCS_ANAR_FULL_DUPLEX = 0x0020,
};
/* PCS ANLPAR */
typedef union t_PCS_ANLPAR {
u16 image;
struct {
u16 _bit_4_0:5; // bit 4:0
u16 full_duplex:1; // bit 5
u16 half_duplex:1; // bit 6
u16 asymmetric:1; // bit 7
u16 pause:1; // bit 8
u16 _bit_11_9:3; // bit 11:9
u16 remote_fault:2; // bit 13:12
u16 _bit_14:1; // bit 14
u16 next_page:1; // bit 15
} bits;
} ANLPAR_PCS_t, *PANLPAR_PCS_t;
enum _pcs_anlpar {
PCS_ANLPAR_NEXT_PAGE = PCS_ANAR_NEXT_PAGE,
PCS_ANLPAR_REMOTE_FAULT = PCS_ANAR_REMOTE_FAULT,
PCS_ANLPAR_ASYMMETRIC = PCS_ANAR_ASYMMETRIC,
PCS_ANLPAR_PAUSE = PCS_ANAR_PAUSE,
PCS_ANLPAR_HALF_DUPLEX = PCS_ANAR_HALF_DUPLEX,
PCS_ANLPAR_FULL_DUPLEX = PCS_ANAR_FULL_DUPLEX,
};
typedef struct t_SROM {
u16 config_param; /* 0x00 */
u16 asic_ctrl; /* 0x02 */
......@@ -582,16 +650,19 @@ struct netdev_private {
spinlock_t lock;
struct net_device_stats stats;
unsigned int rx_buf_sz; /* Based on MTU+slack. */
unsigned int tx_full:1; /* The Tx queue is full. */
unsigned int full_duplex:1; /* Full-duplex operation requested. */
unsigned int speed; /* Operating speed */
unsigned int vlan; /* VLAN Id */
unsigned int an_enable; /* Auto-Negotiated Enable */
unsigned int chip_id; /* PCI table chip id */
unsigned int jumbo;
unsigned int int_count;
unsigned int int_timeout;
unsigned int coalesce:1;
unsigned int int_count; /* Maximum frames each RxDMAComplete intr */
unsigned int int_timeout; /* Wait time between RxDMAComplete intr */
unsigned int tx_full:1; /* The Tx queue is full. */
unsigned int full_duplex:1; /* Full-duplex operation requested. */
unsigned int an_enable:2; /* Auto-Negotiated Enable */
unsigned int jumbo:1; /* Jumbo frame enable */
unsigned int coalesce:1; /* Rx coalescing enable */
unsigned int tx_flow:1; /* Tx flow control enable */
unsigned int rx_flow:1; /* Rx flow control enable */
unsigned int phy_media:1; /* 1: fiber, 0: copper */
struct netdev_desc *last_tx; /* Last Tx descriptor used. */
unsigned long cur_rx, old_rx; /* Producer/consumer ring indices */
unsigned long cur_tx, old_tx;
......
......@@ -23,6 +23,8 @@
This is a compatibility hardware problem.
Versions:
0.13 irq sharing, rewrote probe function, fixed a nasty bug in
hardware_send_packet and a major cleanup (aris, 11/08/2001)
0.12d fixing a problem with single card detected as eight eth devices
fixing a problem with sudden drop in card performance
(chris (asdn@go2.pl), 10/29/2001)
......@@ -100,7 +102,7 @@
*/
static const char version[] =
"eepro.c: v0.12c 01/08/2000 aris@conectiva.com.br\n";
"eepro.c: v0.13 11/08/2001 aris@cathedrallabs.org\n";
#include <linux/module.h>
......@@ -192,12 +194,24 @@ struct eepro_local {
unsigned tx_end; /* end of the transmit chain (plus 1) */
int eepro; /* 1 for the EtherExpress Pro/10,
2 for the EtherExpress Pro/10+,
3 for the EtherExpress 10 (blue cards),
0 for other 82595-based lan cards. */
int version; /* a flag to indicate if this is a TX or FX
version of the 82595 chip. */
int stepping;
spinlock_t lock; /* Serializing lock */
unsigned rcv_ram; /* pre-calculated space for rx */
unsigned xmt_ram; /* pre-calculated space for tx */
unsigned char xmt_bar;
unsigned char xmt_lower_limit_reg;
unsigned char xmt_upper_limit_reg;
short xmt_lower_limit;
short xmt_upper_limit;
short rcv_lower_limit;
short rcv_upper_limit;
unsigned char eeprom_reg;
};
/* The station (ethernet) address prefix, used for IDing the board. */
......@@ -302,7 +316,7 @@ static void set_multicast_list(struct net_device *dev);
static void eepro_tx_timeout (struct net_device *dev);
static int read_eeprom(int ioaddr, int location, struct net_device *dev);
static void hardware_send_packet(struct net_device *dev, void *buf, short length);
static int hardware_send_packet(struct net_device *dev, void *buf, short length);
static int eepro_grab_irq(struct net_device *dev);
/*
......@@ -335,38 +349,25 @@ it is reset to the default of 24K, and, hence, 8K for the transmit
buffer (transmit-buffer = 32K - receive-buffer).
*/
/* now this section can be used by both boards: the oldies and the ee10:
 * the ee10 places the tx buffer before the rx buffer, and the oldies the
 * inverse. (aris)
*/
#define RAM_SIZE 0x8000
#define RCV_HEADER 8
#define RCV_DEFAULT_RAM 0x6000
#define RCV_RAM rcv_ram
static unsigned rcv_ram = RCV_DEFAULT_RAM;
#define XMT_HEADER 8
#define XMT_RAM (RAM_SIZE - RCV_RAM)
#define XMT_START ((rcv_start + RCV_RAM) % RAM_SIZE)
#define RCV_LOWER_LIMIT (rcv_start >> 8)
#define RCV_UPPER_LIMIT (((rcv_start + RCV_RAM) - 2) >> 8)
#define XMT_LOWER_LIMIT (XMT_START >> 8)
#define XMT_UPPER_LIMIT (((XMT_START + XMT_RAM) - 2) >> 8)
#define XMT_DEFAULT_RAM (RAM_SIZE - RCV_DEFAULT_RAM)
#define RCV_START_PRO 0x00
#define RCV_START_10 XMT_RAM
/* by default the old driver */
static unsigned rcv_start = RCV_START_PRO;
#define XMT_START_PRO RCV_DEFAULT_RAM
#define XMT_START_10 0x0000
#define RCV_START_PRO 0x0000
#define RCV_START_10 XMT_DEFAULT_RAM
#define RCV_DONE 0x0008
#define RX_OK 0x2000
#define RX_ERROR 0x0d81
#define TX_DONE_BIT 0x0080
#define TX_OK 0x2000
#define CHAIN_BIT 0x8000
#define XMT_STATUS 0x02
#define XMT_CHAIN 0x04
......@@ -409,7 +410,6 @@ static unsigned rcv_start = RCV_START_PRO;
#define XMT_BAR_PRO 0x0a
#define XMT_BAR_10 0x0b
static unsigned xmt_bar = XMT_BAR_PRO;
#define HOST_ADDRESS_REG 0x0c
#define IO_PORT 0x0e
......@@ -427,8 +427,6 @@ static unsigned xmt_bar = XMT_BAR_PRO;
#define XMT_UPPER_LIMIT_REG_PRO 0x0b
#define XMT_LOWER_LIMIT_REG_10 0x0b
#define XMT_UPPER_LIMIT_REG_10 0x0a
static unsigned xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_PRO;
static unsigned xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_PRO;
/* Bank 2 registers */
#define XMT_Chain_Int 0x20 /* Interrupt at the end of the transmit chain */
......@@ -453,7 +451,6 @@ static unsigned xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_PRO;
#define EEPROM_REG_PRO 0x0a
#define EEPROM_REG_10 0x0b
static unsigned eeprom_reg = EEPROM_REG_PRO;
#define EESK 0x01
#define EECS 0x02
......@@ -505,11 +502,6 @@ static unsigned eeprom_reg = EEPROM_REG_PRO;
/* set diagnose flag */
#define eepro_diag(ioaddr) outb(DIAGNOSE_CMD, ioaddr)
#ifdef ANSWER_TX_AND_RX /* experimental way of handling interrupts */
/* ack for rx/tx int */
#define eepro_ack_rxtx(ioaddr) outb (RX_INT | TX_INT, ioaddr + STATUS_REG)
#endif
/* ack for rx int */
#define eepro_ack_rx(ioaddr) outb (RX_INT, ioaddr + STATUS_REG)
......@@ -517,16 +509,15 @@ static unsigned eeprom_reg = EEPROM_REG_PRO;
#define eepro_ack_tx(ioaddr) outb (TX_INT, ioaddr + STATUS_REG)
/* a complete sel reset */
#define eepro_complete_selreset(ioaddr) { eepro_dis_int(ioaddr);\
#define eepro_complete_selreset(ioaddr) { \
lp->stats.tx_errors++;\
eepro_sel_reset(ioaddr);\
lp->tx_end = \
(XMT_LOWER_LIMIT << 8);\
lp->xmt_lower_limit;\
lp->tx_start = lp->tx_end;\
lp->tx_last = 0;\
dev->trans_start = jiffies;\
netif_wake_queue(dev);\
eepro_en_int(ioaddr);\
eepro_en_rx(ioaddr);\
}
......@@ -539,7 +530,7 @@ static unsigned eeprom_reg = EEPROM_REG_PRO;
int __init eepro_probe(struct net_device *dev)
{
int i;
int base_addr = dev ? dev->base_addr : 0;
int base_addr = dev->base_addr;
SET_MODULE_OWNER(dev);
......@@ -597,7 +588,7 @@ static void __init printEEPROMInfo(short ioaddr, struct net_device *dev)
for (i=0, j=ee_Checksum; i<ee_SIZE; i++)
j+=read_eeprom(ioaddr,i,dev);
printk("Checksum: %#x\n",j&0xffff);
printk(KERN_DEBUG "Checksum: %#x\n",j&0xffff);
Word=read_eeprom(ioaddr, 0, dev);
printk(KERN_DEBUG "Word0:\n");
......@@ -623,10 +614,10 @@ static void __init printEEPROMInfo(short ioaddr, struct net_device *dev)
printk(KERN_DEBUG " BNC: %d\n",GetBit(Word,ee_BNC_TPE));
printk(KERN_DEBUG " NumConnectors: %d\n",GetBit(Word,ee_NumConn));
printk(KERN_DEBUG " Has ");
if (GetBit(Word,ee_PortTPE)) printk("TPE ");
if (GetBit(Word,ee_PortBNC)) printk("BNC ");
if (GetBit(Word,ee_PortAUI)) printk("AUI ");
printk("port(s) \n");
if (GetBit(Word,ee_PortTPE)) printk(KERN_DEBUG "TPE ");
if (GetBit(Word,ee_PortBNC)) printk(KERN_DEBUG "BNC ");
if (GetBit(Word,ee_PortAUI)) printk(KERN_DEBUG "AUI ");
printk(KERN_DEBUG "port(s) \n");
Word=read_eeprom(ioaddr, 6, dev);
printk(KERN_DEBUG "Word6:\n");
......@@ -637,12 +628,85 @@ static void __init printEEPROMInfo(short ioaddr, struct net_device *dev)
printk(KERN_DEBUG "Word7:\n");
printk(KERN_DEBUG " INT to IRQ:\n");
printk(KERN_DEBUG);
for (i=0, j=0; i<15; i++)
if (GetBit(Word,i)) printk(" INT%d -> IRQ %d;",j++,i);
if (GetBit(Word,i)) printk(KERN_DEBUG " INT%d -> IRQ %d;",j++,i);
printk(KERN_DEBUG "\n");
}
/* recalculate the rx/tx buffer limits based on rcv_ram */
static void eepro_recalc (struct net_device *dev)
{
struct eepro_local * lp;
lp = dev->priv;
lp->xmt_ram = RAM_SIZE - lp->rcv_ram;
if (lp->eepro == LAN595FX_10ISA) {
lp->xmt_lower_limit = XMT_START_10;
lp->xmt_upper_limit = (lp->xmt_ram - 2);
lp->rcv_lower_limit = lp->xmt_ram;
lp->rcv_upper_limit = (RAM_SIZE - 2);
}
else {
lp->rcv_lower_limit = RCV_START_PRO;
lp->rcv_upper_limit = (lp->rcv_ram - 2);
lp->xmt_lower_limit = lp->rcv_ram;
lp->xmt_upper_limit = (RAM_SIZE - 2);
}
}
/* prints boot-time info */
static void eepro_print_info (struct net_device *dev)
{
struct eepro_local * lp = dev->priv;
int i;
const char * ifmap[] = {"AUI", "10Base2", "10BaseT"};
i = inb(dev->base_addr + ID_REG);
printk(KERN_DEBUG " id: %#x ",i);
printk(KERN_DEBUG " io: %#x ", (unsigned)dev->base_addr);
switch (lp->eepro) {
case LAN595FX_10ISA:
printk(KERN_INFO "%s: Intel EtherExpress 10 ISA\n at %#x,",
dev->name, (unsigned)dev->base_addr);
break;
case LAN595FX:
printk(KERN_INFO "%s: Intel EtherExpress Pro/10+ ISA\n at %#x,",
dev->name, (unsigned)dev->base_addr);
break;
case LAN595TX:
printk(KERN_INFO "%s: Intel EtherExpress Pro/10 ISA at %#x,",
dev->name, (unsigned)dev->base_addr);
break;
case LAN595:
printk(KERN_INFO "%s: Intel 82595-based lan card at %#x,",
dev->name, (unsigned)dev->base_addr);
}
for (i=0; i < 6; i++)
printk(KERN_INFO "%c%02x", i ? ':' : ' ', dev->dev_addr[i]);
if (net_debug > 3)
printk(KERN_DEBUG ", %dK RCV buffer",
(int)(lp->rcv_ram)/1024);
if (dev->irq > 2)
printk(KERN_INFO ", IRQ %d, %s.\n", dev->irq, ifmap[dev->if_port]);
else
printk(KERN_INFO ", %s.\n", ifmap[dev->if_port]);
if (net_debug > 3) {
i = read_eeprom(dev->base_addr, 5, dev);
if (i & 0x2000) /* bit 13 of EEPROM word 5 */
printk(KERN_DEBUG "%s: Concurrent Processing is "
"enabled but not used!\n", dev->name);
}
/* Check the station address for the manufacturer's code */
if (net_debug>3)
printEEPROMInfo(dev->base_addr, dev);
}
/* This is the real probe routine. Linux has a history of friendly device
......@@ -652,10 +716,8 @@ static void __init printEEPROMInfo(short ioaddr, struct net_device *dev)
static int __init eepro_probe1(struct net_device *dev, short ioaddr)
{
unsigned short station_addr[6], id, counter;
int i,j, irqMask;
int eepro = 0;
int i, j, irqMask, retval = 0;
struct eepro_local *lp;
const char *ifmap[] = {"AUI", "10Base2", "10BaseT"};
enum iftype { AUI=0, BNC=1, TPE=2 };
/* Now, we are going to check for the signature of the
......@@ -663,31 +725,42 @@ static int __init eepro_probe1(struct net_device *dev, short ioaddr)
id=inb(ioaddr + ID_REG);
if (((id) & ID_REG_MASK) == ID_REG_SIG) {
if (((id) & ID_REG_MASK) != ID_REG_SIG) {
retval = -ENODEV;
goto exit;
}
/* We seem to have the 82595 signature, let's
play with its counter (last 2 bits of
register 2 of bank 0) to be sure. */
counter = (id & R_ROBIN_BITS);
if (((id=inb(ioaddr+ID_REG)) & R_ROBIN_BITS) ==
(counter + 0x40)) {
/* Yes, the 82595 has been found */
printk(KERN_DEBUG " id: %#x ",id);
printk(" io: %#x ",ioaddr);
if (((id=inb(ioaddr+ID_REG)) & R_ROBIN_BITS)!=(counter + 0x40)) {
retval = -ENODEV;
goto exit;
}
/* Initialize the device structure */
dev->priv = kmalloc(sizeof(struct eepro_local), GFP_KERNEL);
if (dev->priv == NULL)
return -ENOMEM;
if (!dev->priv) {
retval = -ENOMEM;
goto exit;
}
memset(dev->priv, 0, sizeof(struct eepro_local));
lp = (struct eepro_local *)dev->priv;
/* default values */
lp->eepro = 0;
lp->xmt_bar = XMT_BAR_PRO;
lp->xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_PRO;
lp->xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_PRO;
lp->eeprom_reg = EEPROM_REG_PRO;
/* Now, get the ethernet hardware address from
the EEPROM */
station_addr[0] = read_eeprom(ioaddr, 2, dev);
/* FIXME - find another way to know that we've found
......@@ -695,77 +768,43 @@ static int __init eepro_probe1(struct net_device *dev, short ioaddr)
*/
if (station_addr[0] == 0x0000 ||
station_addr[0] == 0xffff) {
eepro = 3;
lp->eepro = LAN595FX_10ISA;
eeprom_reg = EEPROM_REG_10;
rcv_start = RCV_START_10;
xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_10;
xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_10;
lp->eeprom_reg = EEPROM_REG_10;
lp->xmt_lower_limit_reg = XMT_LOWER_LIMIT_REG_10;
lp->xmt_upper_limit_reg = XMT_UPPER_LIMIT_REG_10;
lp->xmt_bar = XMT_BAR_10;
station_addr[0] = read_eeprom(ioaddr, 2, dev);
}
station_addr[1] = read_eeprom(ioaddr, 3, dev);
station_addr[2] = read_eeprom(ioaddr, 4, dev);
if (eepro) {
printk("%s: Intel EtherExpress 10 ISA\n at %#x,",
dev->name, ioaddr);
} else if (read_eeprom(ioaddr,7,dev)== ee_FX_INT2IRQ) {
/* int to IRQ Mask */
eepro = 2;
printk("%s: Intel EtherExpress Pro/10+ ISA\n at %#x,",
dev->name, ioaddr);
} else
if (station_addr[2] == 0x00aa) {
eepro = 1;
printk("%s: Intel EtherExpress Pro/10 ISA at %#x,",
dev->name, ioaddr);
}
else {
eepro = 0;
printk("%s: Intel 82595-based lan card at %#x,",
dev->name, ioaddr);
if (!lp->eepro) {
if (read_eeprom(ioaddr,7,dev)== ee_FX_INT2IRQ)
lp->eepro = 2;
else if (station_addr[2] == SA_ADDR1)
lp->eepro = 1;
}
/* Fill in the 'dev' fields. */
dev->base_addr = ioaddr;
for (i=0; i < 6; i++) {
for (i=0; i < 6; i++)
dev->dev_addr[i] = ((unsigned char *) station_addr)[5-i];
printk("%c%02x", i ? ':' : ' ', dev->dev_addr[i]);
}
dev->mem_start = (RCV_LOWER_LIMIT << 8);
if ((dev->mem_end & 0x3f) < 3 || /* RX buffer must be more than 3K */
(dev->mem_end & 0x3f) > 29) /* and less than 29K */
dev->mem_end = (RCV_UPPER_LIMIT << 8);
else {
dev->mem_end = (dev->mem_end * 1024) +
(RCV_LOWER_LIMIT << 8);
rcv_ram = dev->mem_end - (RCV_LOWER_LIMIT << 8);
}
/* From now on, dev->mem_end - dev->mem_start contains
* the actual size of rx buffer
*/
/* RX buffer must be more than 3K and less than 29K */
if (dev->mem_end < 3072 || dev->mem_end > 29696)
lp->rcv_ram = RCV_DEFAULT_RAM;
if (net_debug > 3)
printk(", %dK RCV buffer", (int)(dev->mem_end -
dev->mem_start)/1024);
/* calculate {xmt,rcv}_{lower,upper}_limit */
eepro_recalc(dev);
/* ............... */
if (GetBit( read_eeprom(ioaddr, 5, dev),ee_BNC_TPE))
dev->if_port = BNC;
else dev->if_port = TPE;
/* ............... */
else
dev->if_port = TPE;
if ((dev->irq < 2) && (eepro!=0)) {
if ((dev->irq < 2) && (lp->eepro!=0)) {
i = read_eeprom(ioaddr, 1, dev);
irqMask = read_eeprom(ioaddr, 7, dev);
i &= 0x07; /* Mask off INT number */
......@@ -780,34 +819,14 @@ static int __init eepro_probe1(struct net_device *dev, short ioaddr)
}
}
if (dev->irq < 2) {
printk(" Duh! illegal interrupt vector stored in EEPROM.\n");
printk(KERN_ERR " Duh! illegal interrupt vector stored in EEPROM.\n");
kfree(dev->priv);
return -ENODEV;
retval = -ENODEV;
goto freeall;
} else
if (dev->irq==2)
dev->irq = 9;
}
if (dev->irq > 2) {
printk(", IRQ %d, %s.\n", dev->irq,
ifmap[dev->if_port]);
}
else printk(", %s.\n", ifmap[dev->if_port]);
if ((dev->mem_start & 0xf) > 0) /* I don't know if this is */
net_debug = dev->mem_start & 7; /* still useful or not */
if (net_debug > 3) {
i = read_eeprom(ioaddr, 5, dev);
if (i & 0x2000) /* bit 13 of EEPROM word 5 */
printk(KERN_DEBUG "%s: Concurrent Processing is enabled but not used!\n",
dev->name);
if (dev->irq==2) dev->irq = 9;
}
if (net_debug)
printk(version);
/* Grab the region so we can find another board if autoIRQ fails. */
request_region(ioaddr, EEPRO_IO_EXTENT, dev->name);
......@@ -823,23 +842,20 @@ static int __init eepro_probe1(struct net_device *dev, short ioaddr)
/* Fill in the fields of the device structure with
ethernet generic values */
ether_setup(dev);
/* Check the station address for the manufacturer's code */
if (net_debug>3)
printEEPROMInfo(ioaddr, dev);
/* print boot time info */
eepro_print_info(dev);
/* RESET the 82595 */
/* reset 82595 */
eepro_reset(ioaddr);
return 0;
}
else return -ENODEV;
}
else if (net_debug > 3)
printk ("EtherExpress Pro probe failed!\n");
return -ENODEV;
exit:
return retval;
freeall:
kfree(dev->priv);
goto exit;
}
/* Open/initialize the board. This is called (in the current kernel)
......@@ -879,7 +895,7 @@ static int eepro_grab_irq(struct net_device *dev)
eepro_sw2bank0(ioaddr); /* Switch back to Bank 0 */
if (request_irq (*irqp, NULL, 0, "bogus", dev) != EBUSY) {
if (request_irq (*irqp, NULL, SA_SHIRQ, "bogus", dev) != EBUSY) {
/* Twinkle the interrupt, and check if it's seen */
autoirq_setup(0);
......@@ -942,12 +958,12 @@ static int eepro_open(struct net_device *dev)
/* Get the interrupt vector for the 82595 */
if (dev->irq < 2 && eepro_grab_irq(dev) == 0) {
printk("%s: unable to get IRQ %d.\n", dev->name, dev->irq);
printk(KERN_ERR "%s: unable to get IRQ %d.\n", dev->name, dev->irq);
return -EAGAIN;
}
if (request_irq(dev->irq , &eepro_interrupt, 0, dev->name, dev)) {
printk("%s: unable to get IRQ %d.\n", dev->name, dev->irq);
printk(KERN_ERR "%s: unable to get IRQ %d.\n", dev->name, dev->irq);
return -EAGAIN;
}
......@@ -964,7 +980,7 @@ static int eepro_open(struct net_device *dev)
/* Initialize the 82595. */
eepro_sw2bank2(ioaddr); /* be CAREFUL, BANK 2 now */
temp_reg = inb(ioaddr + eeprom_reg);
temp_reg = inb(ioaddr + lp->eeprom_reg);
lp->stepping = temp_reg >> 5; /* Get the stepping number of the 595 */
......@@ -972,7 +988,7 @@ static int eepro_open(struct net_device *dev)
printk(KERN_DEBUG "The stepping of the 82595 is %d\n", lp->stepping);
if (temp_reg & 0x10) /* Check the TurnOff Enable bit */
outb(temp_reg & 0xef, ioaddr + eeprom_reg);
outb(temp_reg & 0xef, ioaddr + lp->eeprom_reg);
for (i=0; i < 6; i++)
outb(dev->dev_addr[i] , ioaddr + I_ADD_REG0 + i);
......@@ -991,13 +1007,13 @@ static int eepro_open(struct net_device *dev)
/* Set the interrupt vector */
temp_reg = inb(ioaddr + INT_NO_REG);
if (lp->eepro == 2 || lp->eepro == LAN595FX_10ISA)
if (lp->eepro == LAN595FX || lp->eepro == LAN595FX_10ISA)
outb((temp_reg & 0xf8) | irqrmap2[dev->irq], ioaddr + INT_NO_REG);
else outb((temp_reg & 0xf8) | irqrmap[dev->irq], ioaddr + INT_NO_REG);
temp_reg = inb(ioaddr + INT_NO_REG);
if (lp->eepro == 2 || lp->eepro == LAN595FX_10ISA)
if (lp->eepro == LAN595FX || lp->eepro == LAN595FX_10ISA)
outb((temp_reg & 0xf0) | irqrmap2[dev->irq] | 0x08,ioaddr+INT_NO_REG);
else outb((temp_reg & 0xf8) | irqrmap[dev->irq], ioaddr + INT_NO_REG);
......@@ -1006,10 +1022,10 @@ static int eepro_open(struct net_device *dev)
/* Initialize the RCV and XMT upper and lower limits */
outb(RCV_LOWER_LIMIT, ioaddr + RCV_LOWER_LIMIT_REG);
outb(RCV_UPPER_LIMIT, ioaddr + RCV_UPPER_LIMIT_REG);
outb(XMT_LOWER_LIMIT, ioaddr + xmt_lower_limit_reg);
outb(XMT_UPPER_LIMIT, ioaddr + xmt_upper_limit_reg);
outb(lp->rcv_lower_limit >> 8, ioaddr + RCV_LOWER_LIMIT_REG);
outb(lp->rcv_upper_limit >> 8, ioaddr + RCV_UPPER_LIMIT_REG);
outb(lp->xmt_lower_limit >> 8, ioaddr + lp->xmt_lower_limit_reg);
outb(lp->xmt_upper_limit >> 8, ioaddr + lp->xmt_upper_limit_reg);
/* Enable the interrupt line. */
eepro_en_intline(ioaddr);
......@@ -1024,12 +1040,14 @@ static int eepro_open(struct net_device *dev)
eepro_clear_int(ioaddr);
/* Initialize RCV */
outw(RCV_LOWER_LIMIT << 8, ioaddr + RCV_BAR);
lp->rx_start = (RCV_LOWER_LIMIT << 8) ;
outw((RCV_UPPER_LIMIT << 8) | 0xfe, ioaddr + RCV_STOP);
outw(lp->rcv_lower_limit, ioaddr + RCV_BAR);
lp->rx_start = lp->rcv_lower_limit;
outw(lp->rcv_upper_limit | 0xfe, ioaddr + RCV_STOP);
/* Initialize XMT */
outw(XMT_LOWER_LIMIT << 8, ioaddr + xmt_bar);
outw(lp->xmt_lower_limit, ioaddr + lp->xmt_bar);
lp->tx_start = lp->tx_end = lp->xmt_lower_limit;
lp->tx_last = 0;
/* Check for the i82595TX and i82595FX */
old8 = inb(ioaddr + 8);
......@@ -1044,8 +1062,6 @@ static int eepro_open(struct net_device *dev)
lp->version = LAN595TX;
outb(old8, ioaddr + 8);
old9 = inb(ioaddr + 9);
/*outb(~old9, ioaddr + 9);
if (((temp_reg = inb(ioaddr + 9)) == ( (~old9)&0xff) )) {*/
if (irqMask==ee_FX_INT2IRQ) {
enum iftype { AUI=0, BNC=1, TPE=2 };
......@@ -1074,11 +1090,6 @@ static int eepro_open(struct net_device *dev)
}
eepro_sel_reset(ioaddr);
SLOW_DOWN;
SLOW_DOWN;
lp->tx_start = lp->tx_end = XMT_LOWER_LIMIT << 8;
lp->tx_last = 0;
netif_start_queue(dev);
......@@ -1088,6 +1099,8 @@ static int eepro_open(struct net_device *dev)
/* enabling rx */
eepro_en_rx(ioaddr);
MOD_INC_USE_COUNT;
return 0;
}
......@@ -1111,23 +1124,28 @@ static int eepro_send_packet(struct sk_buff *skb, struct net_device *dev)
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
unsigned long flags;
int ioaddr = dev->base_addr;
if (net_debug > 5)
printk(KERN_DEBUG "%s: entering eepro_send_packet routine.\n", dev->name);
netif_stop_queue (dev);
eepro_dis_int(ioaddr);
spin_lock_irqsave(&lp->lock, flags);
{
short length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
unsigned char *buf = skb->data;
if (hardware_send_packet(dev, buf, length))
/* we won't wake queue here because we're out of space */
lp->stats.tx_dropped++;
else {
lp->stats.tx_bytes+=skb->len;
hardware_send_packet(dev, buf, length);
dev->trans_start = jiffies;
netif_wake_queue(dev);
}
}
......@@ -1139,6 +1157,7 @@ static int eepro_send_packet(struct sk_buff *skb, struct net_device *dev)
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting eepro_send_packet routine.\n", dev->name);
eepro_en_int(ioaddr);
spin_unlock_irqrestore(&lp->lock, flags);
return 0;
......@@ -1153,7 +1172,7 @@ eepro_interrupt(int irq, void *dev_id, struct pt_regs * regs)
{
struct net_device *dev = (struct net_device *)dev_id;
/* (struct net_device *)(irq2dev_map[irq]);*/
struct eepro_local *lp = (struct eepro_local *)dev->priv;
struct eepro_local *lp;
int ioaddr, status, boguscount = 20;
if (dev == NULL) {
......@@ -1161,6 +1180,8 @@ eepro_interrupt(int irq, void *dev_id, struct pt_regs * regs)
return;
}
lp = (struct eepro_local *)dev->priv;
spin_lock(&lp->lock);
if (net_debug > 5)
......@@ -1168,41 +1189,32 @@ eepro_interrupt(int irq, void *dev_id, struct pt_regs * regs)
ioaddr = dev->base_addr;
while (((status = inb(ioaddr + STATUS_REG)) & 0x06) && (boguscount--))
while (((status = inb(ioaddr + STATUS_REG)) & (RX_INT|TX_INT)) && (boguscount--))
{
#ifdef ANSWER_TX_AND_RX
switch (status & (RX_INT | TX_INT)) {
case (RX_INT | TX_INT):
eepro_ack_rxtx(ioaddr);
break;
case RX_INT:
eepro_ack_rx(ioaddr);
break;
case TX_INT:
eepro_ack_tx(ioaddr);
break;
}
#endif
if (status & RX_INT) {
if (net_debug > 4)
printk(KERN_DEBUG "%s: packet received interrupt.\n", dev->name);
#ifndef ANSWER_TX_AND_RX
eepro_ack_rx(ioaddr);
#endif
eepro_dis_int(ioaddr);
/* Get the received packets */
eepro_ack_rx(ioaddr);
eepro_rx(dev);
eepro_en_int(ioaddr);
}
if (status & TX_INT) {
if (net_debug > 4)
printk(KERN_DEBUG "%s: packet transmit interrupt.\n", dev->name);
#ifndef ANSWER_TX_AND_RX
eepro_ack_tx(ioaddr);
#endif
eepro_dis_int(ioaddr);
/* Process the status of transmitted packets */
eepro_ack_tx(ioaddr);
eepro_transmit_interrupt(dev);
eepro_en_int(ioaddr);
}
}
......@@ -1231,7 +1243,7 @@ static int eepro_close(struct net_device *dev)
/* Flush the Tx and disable Rx. */
outb(STOP_RCV_CMD, ioaddr);
lp->tx_start = lp->tx_end = (XMT_LOWER_LIMIT << 8);
lp->tx_start = lp->tx_end = lp->xmt_lower_limit;
lp->tx_last = 0;
/* Mask all the interrupts. */
......@@ -1252,6 +1264,8 @@ static int eepro_close(struct net_device *dev)
/* Update the statistics here. What statistics? */
MOD_DEC_USE_COUNT;
return 0;
}
......@@ -1291,7 +1305,7 @@ set_multicast_list(struct net_device *dev)
mode = inb(ioaddr + REG3);
outb(mode, ioaddr + REG3); /* writing reg. 3 to complete the update */
eepro_sw2bank0(ioaddr); /* Return to BANK 0 now */
printk("%s: promiscuous mode enabled.\n", dev->name);
printk(KERN_INFO "%s: promiscuous mode enabled.\n", dev->name);
}
else if (dev->mc_count==0 )
......@@ -1339,7 +1353,7 @@ set_multicast_list(struct net_device *dev)
outw(eaddrs[0], ioaddr + IO_PORT);
outw(eaddrs[1], ioaddr + IO_PORT);
outw(eaddrs[2], ioaddr + IO_PORT);
outw(lp->tx_end, ioaddr + xmt_bar);
outw(lp->tx_end, ioaddr + lp->xmt_bar);
outb(MC_SETUP, ioaddr);
/* Update the transmit queue */
......@@ -1370,11 +1384,11 @@ set_multicast_list(struct net_device *dev)
outb(0x08, ioaddr + STATUS_REG);
if (i & 0x20) { /* command ABORTed */
printk("%s: multicast setup failed.\n",
printk(KERN_NOTICE "%s: multicast setup failed.\n",
dev->name);
break;
} else if ((i & 0x0f) == 0x03) { /* MC-Done */
printk("%s: set Rx mode to %d address%s.\n",
printk(KERN_DEBUG "%s: set Rx mode to %d address%s.\n",
dev->name, dev->mc_count,
dev->mc_count > 1 ? "es":"");
break;
......@@ -1404,19 +1418,15 @@ read_eeprom(int ioaddr, int location, struct net_device *dev)
{
int i;
unsigned short retval = 0;
short ee_addr = ioaddr + eeprom_reg;
struct eepro_local *lp = (struct eepro_local *)dev->priv;
struct eepro_local *lp = dev->priv;
short ee_addr = ioaddr + lp->eeprom_reg;
int read_cmd = location | EE_READ_CMD;
short ctrl_val = EECS ;
/* XXXX - this is not the final version. We must test this on
 * boards other than the eepro10. I don't think it will make
 * other boards fail. (aris)
 */
if (lp->eepro == LAN595FX_10ISA) {
/* XXXX - black magic */
eepro_sw2bank1(ioaddr);
outb(0x00, ioaddr + STATUS_REG);
}
/* XXXX - black magic */
eepro_sw2bank2(ioaddr);
outb(ctrl_val, ee_addr);
......@@ -1449,55 +1459,42 @@ read_eeprom(int ioaddr, int location, struct net_device *dev)
return retval;
}
static void
static int
hardware_send_packet(struct net_device *dev, void *buf, short length)
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
short ioaddr = dev->base_addr;
unsigned status, tx_available, last, end, boguscount = 100;
unsigned status, tx_available, last, end;
if (net_debug > 5)
printk(KERN_DEBUG "%s: entering hardware_send_packet routine.\n", dev->name);
while (boguscount-- > 0) {
/* Disable RX and TX interrupts. Necessary to avoid
corruption of the HOST_ADDRESS_REG by interrupt
service routines. */
eepro_dis_int(ioaddr);
/* determine how much of the transmit buffer space is available */
if (lp->tx_end > lp->tx_start)
tx_available = XMT_RAM - (lp->tx_end - lp->tx_start);
tx_available = lp->xmt_ram - (lp->tx_end - lp->tx_start);
else if (lp->tx_end < lp->tx_start)
tx_available = lp->tx_start - lp->tx_end;
else tx_available = XMT_RAM;
if (((((length + 3) >> 1) << 1) + 2*XMT_HEADER)
>= tx_available) /* No space available ??? */
{
eepro_transmit_interrupt(dev); /* Clean up the transmitting queue */
else tx_available = lp->xmt_ram;
/* Enable RX and TX interrupts */
eepro_en_int(ioaddr);
continue;
if (((((length + 3) >> 1) << 1) + 2*XMT_HEADER) >= tx_available) {
/* No space available ??? */
return 1;
}
last = lp->tx_end;
end = last + (((length + 3) >> 1) << 1) + XMT_HEADER;
if (end >= (XMT_UPPER_LIMIT << 8)) { /* the transmit buffer is wrapped around */
if (((XMT_UPPER_LIMIT << 8) - last) <= XMT_HEADER) {
if (end >= lp->xmt_upper_limit + 2) { /* the transmit buffer is wrapped around */
if ((lp->xmt_upper_limit + 2 - last) <= XMT_HEADER) {
/* Arrrr!!!, must keep the xmt header together,
several days were lost to chase this one down. */
last = (XMT_LOWER_LIMIT << 8);
last = lp->xmt_lower_limit;
end = last + (((length + 3) >> 1) << 1) + XMT_HEADER;
}
else end = (XMT_LOWER_LIMIT << 8) + (end -
(XMT_UPPER_LIMIT <<8));
else end = lp->xmt_lower_limit + (end -
lp->xmt_upper_limit + 2);
}
outw(last, ioaddr + HOST_ADDRESS_REG);
outw(XMT_CMD, ioaddr + IO_PORT);
outw(0, ioaddr + IO_PORT);
......@@ -1517,7 +1514,7 @@ hardware_send_packet(struct net_device *dev, void *buf, short length)
status = inw(ioaddr + IO_PORT);
if (lp->tx_start == lp->tx_end) {
outw(last, ioaddr + xmt_bar);
outw(last, ioaddr + lp->xmt_bar);
outb(XMT_CMD, ioaddr);
lp->tx_start = last; /* I don't like to change tx_start here */
}
......@@ -1541,27 +1538,10 @@ hardware_send_packet(struct net_device *dev, void *buf, short length)
lp->tx_last = last;
lp->tx_end = end;
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
/* now we are serializing tx: the queue won't be woken again
 * until the tx interrupt
 */
if (lp->eepro == LAN595FX_10ISA)
netif_stop_queue(dev);
/* Enable RX and TX interrupts */
eepro_en_int(ioaddr);
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting hardware_send_packet routine.\n", dev->name);
return;
}
if (lp->eepro == LAN595FX_10ISA)
netif_stop_queue(dev);
if (net_debug > 5)
printk(KERN_DEBUG "%s: exiting hardware_send_packet routine.\n", dev->name);
return 0;
}
static void
......@@ -1570,7 +1550,7 @@ eepro_rx(struct net_device *dev)
struct eepro_local *lp = (struct eepro_local *)dev->priv;
short ioaddr = dev->base_addr;
short boguscount = 20;
unsigned rcv_car = lp->rx_start;
short rcv_car = lp->rx_start;
unsigned rcv_event, rcv_status, rcv_next_frame, rcv_size;
if (net_debug > 5)
......@@ -1632,24 +1612,25 @@ eepro_rx(struct net_device *dev)
else if (rcv_status & 0x0800)
lp->stats.rx_crc_errors++;
printk("%s: event = %#x, status = %#x, next = %#x, size = %#x\n",
printk(KERN_DEBUG "%s: event = %#x, status = %#x, next = %#x, size = %#x\n",
dev->name, rcv_event, rcv_status, rcv_next_frame, rcv_size);
}
if (rcv_status & 0x1000)
lp->stats.rx_length_errors++;
rcv_car = lp->rx_start + RCV_HEADER + rcv_size;
lp->rx_start = rcv_next_frame;
if (--boguscount == 0)
break;
rcv_car = lp->rx_start + RCV_HEADER + rcv_size;
lp->rx_start = rcv_next_frame;
outw(rcv_next_frame, ioaddr + HOST_ADDRESS_REG);
rcv_event = inw(ioaddr + IO_PORT);
}
if (rcv_car == 0)
rcv_car = (RCV_UPPER_LIMIT << 8) | 0xff;
rcv_car = lp->rcv_upper_limit | 0xff;
outw(rcv_car - 1, ioaddr + RCV_STOP);
......@@ -1662,53 +1643,23 @@ eepro_transmit_interrupt(struct net_device *dev)
{
struct eepro_local *lp = (struct eepro_local *)dev->priv;
short ioaddr = dev->base_addr;
short boguscount = 20;
unsigned xmt_status;
/*
if (dev->tbusy == 0) {
printk("%s: transmit_interrupt called with tbusy = 0 ??\n",
dev->name);
printk(KERN_DEBUG "%s: transmit_interrupt called with tbusy = 0 ??\n",
dev->name);
}
*/
while (lp->tx_start != lp->tx_end && boguscount) {
short boguscount = 25;
short xmt_status;
while ((lp->tx_start != lp->tx_end) && boguscount--) {
outw(lp->tx_start, ioaddr + HOST_ADDRESS_REG);
xmt_status = inw(ioaddr+IO_PORT);
if ((xmt_status & TX_DONE_BIT) == 0) {
if (lp->eepro == LAN595FX_10ISA) {
udelay(40);
boguscount--;
continue;
}
else
if (!(xmt_status & TX_DONE_BIT))
break;
}
xmt_status = inw(ioaddr+IO_PORT);
lp->tx_start = inw(ioaddr+IO_PORT);
if (lp->eepro == LAN595FX_10ISA) {
lp->tx_start = (XMT_LOWER_LIMIT << 8);
lp->tx_end = lp->tx_start;
/* yeah, black magic :( */
eepro_sw2bank0(ioaddr);
eepro_en_int(ioaddr);
/* disabling rx */
eepro_dis_rx(ioaddr);
/* enabling rx */
eepro_en_rx(ioaddr);
}
netif_wake_queue (dev);
if (xmt_status & 0x2000)
if (xmt_status & TX_OK)
lp->stats.tx_packets++;
else {
lp->stats.tx_errors++;
......@@ -1725,18 +1676,6 @@ eepro_transmit_interrupt(struct net_device *dev)
printk(KERN_DEBUG "%s: XMT status = %#x\n",
dev->name, xmt_status);
}
if (lp->eepro == LAN595FX_10ISA) {
/* Try to restart the adaptor. */
/* We are supposed to wait for 2 us after a SEL_RESET */
eepro_sel_reset(ioaddr);
/* first enable interrupts */
eepro_sw2bank0(ioaddr);
outb(ALL_MASK & ~(RX_INT | TX_INT), ioaddr + STATUS_REG);
/* enabling rx */
eepro_en_rx(ioaddr);
}
}
if (xmt_status & 0x000f) {
lp->stats.collisions += (xmt_status & 0x000f);
......@@ -1745,15 +1684,7 @@ eepro_transmit_interrupt(struct net_device *dev)
if ((xmt_status & 0x0040) == 0x0) {
lp->stats.tx_heartbeat_errors++;
}
boguscount--;
}
/* If we got here, the adapter probably won't interrupt again for
 * tx; in other words, we are headed for a tx timeout, which would
 * take a long time to happen, so do a complete selreset now.
 */
if (!boguscount && lp->eepro == LAN595FX_10ISA)
eepro_complete_selreset(ioaddr);
}
#ifdef MODULE
......@@ -1789,31 +1720,32 @@ init_module(void)
{
int i;
if (io[0] == 0 && autodetect == 0) {
printk("eepro_init_module: Probe is very dangerous in ISA boards!\n");
printk("eepro_init_module: Please add \"autodetect=1\" to force probe\n");
printk(KERN_WARNING "eepro_init_module: Probe is very dangerous in ISA boards!\n");
printk(KERN_WARNING "eepro_init_module: Please add \"autodetect=1\" to force probe\n");
return 1;
}
else if (autodetect) {
/* if autodetect is set then we must force detection */
io[0] = 0;
printk("eepro_init_module: Auto-detecting boards (May God protect us...)\n");
printk(KERN_INFO "eepro_init_module: Auto-detecting boards (May God protect us...)\n");
}
for (i = 0; i < MAX_EEPRO; i++) {
struct net_device *d = &dev_eepro[n_eepro];
d->mem_end = mem[n_eepro];
d->base_addr = io[n_eepro];
d->irq = irq[n_eepro];
d->mem_end = mem[i];
d->base_addr = io[i];
d->irq = irq[i];
d->init = eepro_probe;
if (io[n_eepro]>0) {
if (register_netdev(d) == 0)
n_eepro++;
else
break;
}
}
if (n_eepro)
printk(KERN_INFO "%s", version);
return n_eepro ? 0 : -ENODEV;
}
......
......@@ -431,7 +431,7 @@ static void set_rx_mode(struct net_device *dev);
static struct net_device_stats *get_stats(struct net_device *dev);
static int mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static int netdev_close(struct net_device *dev);
static void reset_rx_descriptors(struct net_device *dev);
void stop_nic_tx(long ioaddr, long crvalue)
{
......@@ -887,7 +887,8 @@ static int netdev_open(struct net_device *dev)
1 1 0 128
1 1 1 256
Wait the specified 50 PCI cycles after a reset by initializing
Tx and Rx queues and the address filter list. */
Tx and Rx queues and the address filter list.
FIXME (Ueimor): optimistic for alpha + posted writes ? */
#if defined(__powerpc__) || defined(__sparc__)
// 89/9/1 modify,
// np->bcrvalue=0x04 | 0x0x38; /* big-endian, 256 burst length */
......@@ -1164,12 +1165,12 @@ static void tx_timeout(struct net_device *dev)
{
struct netdev_private *np = dev->priv;
long ioaddr = dev->base_addr;
int i;
printk(KERN_WARNING "%s: Transmit timed out, status %8.8x,"
" resetting...\n", dev->name, readl(ioaddr + ISR));
{
int i;
printk(KERN_DEBUG " Rx ring %p: ", np->rx_ring);
for (i = 0; i < RX_RING_SIZE; i++)
......@@ -1180,12 +1181,41 @@ static void tx_timeout(struct net_device *dev)
printk("\n");
}
/* Perhaps we should reinitialize the hardware here. Just trigger a
Tx demand for now. */
dev->if_port = np->default_port;
/* Reinit. Gross */
/* Reset the chip's Tx and Rx processes. */
stop_nic_tx(ioaddr, 0);
reset_rx_descriptors(dev);
/* Disable interrupts by clearing the interrupt mask. */
writel(0x0000, ioaddr + IMR);
/* Reset the chip to erase previous misconfiguration. */
writel(0x00000001, ioaddr + BCR);
/* Ueimor: wait for 50 PCI cycles (and flush posted writes btw).
We surely wait too long (address+data phase). Who cares ? */
for (i = 0; i < 50; i++) {
readl(ioaddr + BCR);
rmb();
}
writel((np->cur_tx - np->tx_ring)*sizeof(struct fealnx_desc) +
np->tx_ring_dma, ioaddr + TXLBA);
writel((np->cur_rx - np->rx_ring)*sizeof(struct fealnx_desc) +
np->rx_ring_dma, ioaddr + RXLBA);
writel(np->bcrvalue, ioaddr + BCR);
writel(0, dev->base_addr + RXPDR);
set_rx_mode(dev);
/* Clear and Enable interrupts by setting the interrupt mask. */
writel(FBE | TUNF | CNTOVF | RBU | TI | RI, ioaddr + ISR);
writel(np->imrvalue, ioaddr + IMR);
writel(0, dev->base_addr + TXPDR);
dev->if_port = 0;
/* Stop and restart the chip's Tx processes . */
dev->trans_start = jiffies;
np->stats.tx_errors++;
......
......@@ -99,6 +99,9 @@
version 1.0.12:
* ETHTOOL_* further support (Tim Hockin)
version 1.0.13:
* ETHTOOL_[GS]EEPROM support (Tim Hockin)
TODO:
* big endian support with CFG:BEM instead of cpu_to_le32
* support for an external PHY
......@@ -106,7 +109,7 @@
*/
#define DRV_NAME "natsemi"
#define DRV_VERSION "1.07+LK1.0.12"
#define DRV_VERSION "1.07+LK1.0.13"
#define DRV_RELDATE "Oct 19, 2001"
/* Updated to recommendations in pci-skeleton v2.03. */
......@@ -167,8 +170,13 @@ static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
#define NATSEMI_HW_TIMEOUT 400
#define NATSEMI_TIMER_FREQ 3*HZ
#define NATSEMI_PG0_NREGS 64
#define NATSEMI_RFDR_NREGS 8
#define NATSEMI_PG1_NREGS 4
#define NATSEMI_NREGS (NATSEMI_PG0_NREGS + NATSEMI_PG1_NREGS)
#define NATSEMI_NREGS (NATSEMI_PG0_NREGS + NATSEMI_RFDR_NREGS + \
NATSEMI_PG1_NREGS)
#define NATSEMI_REGS_VER 1 /* v1 added RFDR registers */
#define NATSEMI_REGS_SIZE (NATSEMI_NREGS * sizeof(u32))
#define NATSEMI_EEPROM_SIZE 24 /* 12 16-bit values */
#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer. */
......@@ -654,7 +662,8 @@ static int netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd);
static int netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd);
static void enable_wol_mode(struct net_device *dev, int enable_intr);
static int netdev_close(struct net_device *dev);
static int netdev_get_regs(struct net_device *dev, u32 *buf);
static int netdev_get_regs(struct net_device *dev, u8 *buf);
static int netdev_get_eeprom(struct net_device *dev, u8 *buf);
static int __devinit natsemi_probe1 (struct pci_dev *pdev,
......@@ -1820,7 +1829,8 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
info.fw_version[0] = '\0';
strncpy(info.bus_info, np->pci_dev->slot_name,
ETHTOOL_BUSINFO_LEN);
info.regdump_len = NATSEMI_NREGS;
info.eedump_len = NATSEMI_EEPROM_SIZE;
info.regdump_len = NATSEMI_REGS_SIZE;
if (copy_to_user(useraddr, &info, sizeof(info)))
return -EFAULT;
return 0;
......@@ -1872,16 +1882,16 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
/* get registers */
case ETHTOOL_GREGS: {
struct ethtool_regs regs;
u32 regbuf[NATSEMI_NREGS];
u8 regbuf[NATSEMI_REGS_SIZE];
int r;
if (copy_from_user(&regs, useraddr, sizeof(regs)))
return -EFAULT;
if (regs.len > NATSEMI_NREGS) {
regs.len = NATSEMI_NREGS;
if (regs.len > NATSEMI_REGS_SIZE) {
regs.len = NATSEMI_REGS_SIZE;
}
regs.version = 0;
regs.version = NATSEMI_REGS_VER;
if (copy_to_user(useraddr, &regs, sizeof(regs)))
return -EFAULT;
......@@ -1893,7 +1903,7 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
if (r)
return r;
if (copy_to_user(useraddr, regbuf, regs.len*sizeof(u32)))
if (copy_to_user(useraddr, regbuf, regs.len))
return -EFAULT;
return 0;
}
......@@ -1934,6 +1944,34 @@ static int netdev_ethtool_ioctl(struct net_device *dev, void *useraddr)
return -EFAULT;
return 0;
}
/* get EEPROM */
case ETHTOOL_GEEPROM: {
struct ethtool_eeprom eeprom;
u8 eebuf[NATSEMI_EEPROM_SIZE];
int r;
if (copy_from_user(&eeprom, useraddr, sizeof(eeprom)))
return -EFAULT;
if ((eeprom.offset+eeprom.len) > NATSEMI_EEPROM_SIZE) {
eeprom.len = NATSEMI_EEPROM_SIZE-eeprom.offset;
}
eeprom.magic = PCI_VENDOR_ID_NS | (PCI_DEVICE_ID_NS_83815<<16);
if (copy_to_user(useraddr, &eeprom, sizeof(eeprom)))
return -EFAULT;
useraddr += offsetof(struct ethtool_eeprom, data);
spin_lock_irq(&np->lock);
r = netdev_get_eeprom(dev, eebuf);
spin_unlock_irq(&np->lock);
if (r)
return r;
if (copy_to_user(useraddr, eebuf+eeprom.offset, eeprom.len))
return -EFAULT;
return 0;
}
}
......@@ -2172,33 +2210,69 @@ static int netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
return 0;
}
static int netdev_get_regs(struct net_device *dev, u32 *buf)
static int netdev_get_regs(struct net_device *dev, u8 *buf)
{
int i;
int j;
u32 rfcr;
u32 *rbuf = (u32 *)buf;
/* read all of page 0 of registers */
for (i = 0; i < NATSEMI_PG0_NREGS; i++) {
buf[i] = readl(dev->base_addr + i*4);
rbuf[i] = readl(dev->base_addr + i*4);
}
/* read only the 'magic' registers from page 1 */
writew(1, dev->base_addr + PGSEL);
buf[i++] = readw(dev->base_addr + PMDCSR);
buf[i++] = readw(dev->base_addr + TSTDAT);
buf[i++] = readw(dev->base_addr + DSPCFG);
buf[i++] = readw(dev->base_addr + SDCFG);
rbuf[i++] = readw(dev->base_addr + PMDCSR);
rbuf[i++] = readw(dev->base_addr + TSTDAT);
rbuf[i++] = readw(dev->base_addr + DSPCFG);
rbuf[i++] = readw(dev->base_addr + SDCFG);
writew(0, dev->base_addr + PGSEL);
/* read RFCR indexed registers */
rfcr = readl(dev->base_addr + RxFilterAddr);
for (j = 0; j < NATSEMI_RFDR_NREGS; j++) {
writel(j*2, dev->base_addr + RxFilterAddr);
rbuf[i++] = readw(dev->base_addr + RxFilterData);
}
writel(rfcr, dev->base_addr + RxFilterAddr);
/* the interrupt status is clear-on-read - see if we missed any */
if (buf[4] & buf[5]) {
if (rbuf[4] & rbuf[5]) {
printk(KERN_WARNING
"%s: shoot, we dropped an interrupt (0x%x)\n",
dev->name, buf[4] & buf[5]);
dev->name, rbuf[4] & rbuf[5]);
}
return 0;
}
#define SWAP_BITS(x) ( (((x) & 0x0001) << 15) | (((x) & 0x0002) << 13) \
| (((x) & 0x0004) << 11) | (((x) & 0x0008) << 9) \
| (((x) & 0x0010) << 7) | (((x) & 0x0020) << 5) \
| (((x) & 0x0040) << 3) | (((x) & 0x0080) << 1) \
| (((x) & 0x0100) >> 1) | (((x) & 0x0200) >> 3) \
| (((x) & 0x0400) >> 5) | (((x) & 0x0800) >> 7) \
| (((x) & 0x1000) >> 9) | (((x) & 0x2000) >> 11) \
| (((x) & 0x4000) >> 13) | (((x) & 0x8000) >> 15) )
static int netdev_get_eeprom(struct net_device *dev, u8 *buf)
{
int i;
u16 *ebuf = (u16 *)buf;
/* eeprom_read reads 16 bits, and indexes by 16 bits */
for (i = 0; i < NATSEMI_EEPROM_SIZE/2; i++) {
ebuf[i] = eeprom_read(dev->base_addr, i);
/* The EEPROM itself stores data bit-swapped, but eeprom_read
* reads it back "sanely". So we swap it back here in order to
* present it to userland as it is stored. */
ebuf[i] = SWAP_BITS(ebuf[i]);
}
return 0;
}
static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct mii_ioctl_data *data = (struct mii_ioctl_data *)&rq->ifr_data;
......
2001-11-13 David S. Miller <davem@redhat.com>
* tulip_core.c (tulip_mwi_config): Kill unused label early_out.
2001-11-06 Richard Mortimer <richm@oldelvet.netscapeonline.co.uk>
* tulip_core.c: Correct set of values to mask out of csr0,
......
......@@ -1328,7 +1328,6 @@ static void __devinit tulip_mwi_config (struct pci_dev *pdev,
tp->csr0 = csr0;
goto out;
early_out:
if (csr0 & MWI) {
pci_command &= ~PCI_COMMAND_INVALIDATE;
pci_write_config_word(pdev, PCI_COMMAND, pci_command);
......
......@@ -2428,6 +2428,8 @@ static int __devinit ymf_probe_one(struct pci_dev *pcidev, const struct pci_devi
goto out_free;
}
pci_set_master(pcidev);
printk(KERN_INFO "ymfpci: %s at 0x%lx IRQ %d\n",
(char *)ent->driver_data, pci_resource_start(pcidev, 0), pcidev->irq);
......
......@@ -31,7 +31,7 @@
*
*/
#define CLGEN_VERSION "1.9.9"
#define CLGEN_VERSION "1.9.9.1"
#include <linux/config.h>
#include <linux/module.h>
......@@ -86,7 +86,6 @@
/* disable runtime assertions? */
/* #define CLGEN_NDEBUG */
/* debug output */
#ifdef CLGEN_DEBUG
#define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __FUNCTION__ , ## args)
......@@ -115,6 +114,7 @@
#define FALSE 0
#define MB_ (1024*1024)
#define KB_ (1024)
#define MAX_NUM_BOARDS 7
......@@ -439,11 +439,23 @@ static const struct {
{0, 8, 0},
{0, 8, 0},
{0, 0, 0},
0, 0, -1, -1, FB_ACCEL_NONE, 40000, 32, 32, 33, 10, 96, 2,
0, 0, -1, -1, FB_ACCEL_NONE, 40000, 48, 16, 32, 8, 96, 4,
FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT, FB_VMODE_NONINTERLACED
}
},
{"800x600", /* 800x600, 48 kHz, 76 Hz, 50 MHz PixClock */
{
800, 600, 800, 600, 0, 0, 8, 0,
{0, 8, 0},
{0, 8, 0},
{0, 8, 0},
{0, 0, 0},
0, 0, -1, -1, FB_ACCEL_NONE, 20000, 128, 16, 24, 2, 96, 6,
0, FB_VMODE_NONINTERLACED
}
},
/*
Modeline from XF86Config:
Mode "1024x768" 80 1024 1136 1340 1432 768 770 774 805
......@@ -455,8 +467,8 @@ static const struct {
{0, 8, 0},
{0, 8, 0},
{0, 0, 0},
0, 0, -1, -1, FB_ACCEL_NONE, 12500, 92, 112, 31, 2, 204, 4,
FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT, FB_VMODE_NONINTERLACED
0, 0, -1, -1, FB_ACCEL_NONE, 12500, 144, 32, 30, 2, 192, 6,
0, FB_VMODE_NONINTERLACED
}
}
};
......@@ -2404,22 +2416,27 @@ static void __init get_prep_addrs (unsigned long *display, unsigned long *regist
* seem to have. */
static unsigned int __init clgen_get_memsize (caddr_t regbase)
{
unsigned long mem = 1 * MB_;
unsigned long mem;
unsigned char SRF;
DPRINTK ("ENTER\n");
SRF = vga_rseq (regbase, CL_SEQRF);
if ((SRF & 0x18) == 0x18) {
switch ((SRF & 0x18)) {
case 0x08: mem = 512 * 1024; break;
case 0x10: mem = 1024 * 1024; break;
/* 64-bit DRAM data bus width; assume 2MB. Also indicates 2MB memory
* on the 5430. */
mem *= 2;
case 0x18: mem = 2048 * 1024; break;
default: printk ("CLgenfb: Unknown memory size!\n");
mem = 1024 * 1024;
}
if (SRF & 0x80) {
/* If DRAM bank switching is enabled, there must be twice as much
* memory installed. (4MB on the 5434) */
mem *= 2;
}
/* TODO: Handling of GD5446/5480 (see XF86 sources ...) */
return mem;
DPRINTK ("EXIT\n");
......@@ -2562,9 +2579,9 @@ static int __init clgen_pci_setup (struct clgenfb_info *info,
info->fbmem_phys = board_addr;
info->size = board_size;
printk (" RAM (%lu MB) at 0x%lx, ", info->size / MB_, board_addr);
printk (" RAM (%lu kB) at 0x%lx, ", info->size / KB_, board_addr);
printk (KERN_INFO "Cirrus Logic chipset on PCI bus\n");
printk ("Cirrus Logic chipset on PCI bus\n");
DPRINTK ("EXIT, returning 0\n");
return 0;
......
......@@ -678,11 +678,11 @@ void prune_icache(int goal)
entry = entry->prev;
inode = INODE(tmp);
if (inode->i_state & (I_FREEING|I_CLEAR|I_LOCK))
BUG();
continue;
if (!CAN_UNUSE(inode))
continue;
if (atomic_read(&inode->i_count))
BUG();
continue;
list_del(tmp);
list_del(&inode->i_hash);
INIT_LIST_HEAD(&inode->i_hash);
......
......@@ -44,11 +44,9 @@ void ufs_free_fragments (struct inode * inode, unsigned fragment, unsigned count
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned cgno, bit, end_bit, bbase, blkmap, i, blkno, cylno;
unsigned swab;
sb = inode->i_sb;
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
usb1 = ubh_get_usb_first(USPI_UBH);
UFSD(("ENTER, fragment %u, count %u\n", fragment, count))
......@@ -69,7 +67,7 @@ void ufs_free_fragments (struct inode * inode, unsigned fragment, unsigned count
if (!ucpi)
goto failed;
ucg = ubh_get_ucg (UCPI_UBH);
if (!ufs_cg_chkmagic (ucg)) {
if (!ufs_cg_chkmagic(sb, ucg)) {
ufs_panic (sb, "ufs_free_fragments", "internal error, bad magic number on cg %u", cgno);
goto failed;
}
......@@ -86,9 +84,11 @@ void ufs_free_fragments (struct inode * inode, unsigned fragment, unsigned count
}
DQUOT_FREE_BLOCK (inode, count);
ADD_SWAB32(ucg->cg_cs.cs_nffree, count);
ADD_SWAB32(usb1->fs_cstotal.cs_nffree, count);
ADD_SWAB32(sb->fs_cs(cgno).cs_nffree, count);
fs32_add(sb, &ucg->cg_cs.cs_nffree, count);
fs32_add(sb, &usb1->fs_cstotal.cs_nffree, count);
fs32_add(sb, &sb->fs_cs(cgno).cs_nffree, count);
blkmap = ubh_blkmap (UCPI_UBH, ucpi->c_freeoff, bbase);
ufs_fragacct(sb, blkmap, ucg->cg_frsum, 1);
......@@ -97,17 +97,17 @@ void ufs_free_fragments (struct inode * inode, unsigned fragment, unsigned count
*/
blkno = ufs_fragstoblks (bbase);
if (ubh_isblockset(UCPI_UBH, ucpi->c_freeoff, blkno)) {
SUB_SWAB32(ucg->cg_cs.cs_nffree, uspi->s_fpb);
SUB_SWAB32(usb1->fs_cstotal.cs_nffree, uspi->s_fpb);
SUB_SWAB32(sb->fs_cs(cgno).cs_nffree, uspi->s_fpb);
fs32_sub(sb, &ucg->cg_cs.cs_nffree, uspi->s_fpb);
fs32_sub(sb, &usb1->fs_cstotal.cs_nffree, uspi->s_fpb);
fs32_sub(sb, &sb->fs_cs(cgno).cs_nffree, uspi->s_fpb);
if ((sb->u.ufs_sb.s_flags & UFS_CG_MASK) == UFS_CG_44BSD)
ufs_clusteracct (sb, ucpi, blkno, 1);
INC_SWAB32(ucg->cg_cs.cs_nbfree);
INC_SWAB32(usb1->fs_cstotal.cs_nbfree);
INC_SWAB32(sb->fs_cs(cgno).cs_nbfree);
fs32_add(sb, &ucg->cg_cs.cs_nbfree, 1);
fs32_add(sb, &usb1->fs_cstotal.cs_nbfree, 1);
fs32_add(sb, &sb->fs_cs(cgno).cs_nbfree, 1);
cylno = ufs_cbtocylno (bbase);
INC_SWAB16(ubh_cg_blks (ucpi, cylno, ufs_cbtorpos(bbase)));
INC_SWAB32(ubh_cg_blktot (ucpi, cylno));
fs16_add(sb, &ubh_cg_blks(ucpi, cylno, ufs_cbtorpos(bbase)), 1);
fs32_add(sb, &ubh_cg_blktot(ucpi, cylno), 1);
}
ubh_mark_buffer_dirty (USPI_UBH);
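The hunks above mechanically replace the old open-coded `ADD_SWAB32`/`SUB_SWAB32`/`INC_SWAB32` read-modify-write macros with `fs32_add`/`fs32_sub` helpers that take the superblock and hide the byte-sex decision. As a rough userspace sketch of that pattern (the names mirror the kernel helpers, but the `bytesex` flag and swap logic here are simplified stand-ins, not the real `fs/ufs/swab.h` definitions):

```c
#include <assert.h>
#include <stdint.h>

#define BYTESEX_LE 0
#define BYTESEX_BE 1

/* Unconditional 32-bit byte swap. */
static uint32_t swab32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
	       ((x << 8) & 0x00ff0000u) | (x << 24);
}

/* Convert an on-disk value to CPU order and back.  This sketch only
 * models the "swap or don't swap" choice keyed off the filesystem's
 * byte sex; the real helpers key off the superblock. */
static uint32_t fs32_to_cpu(int bytesex, uint32_t n)
{
	return bytesex == BYTESEX_BE ? swab32(n) : n;
}

static uint32_t cpu_to_fs32(int bytesex, uint32_t n)
{
	return bytesex == BYTESEX_BE ? swab32(n) : n;
}

/* fs32_add/fs32_sub wrap the convert-modify-convert cycle that the
 * old ADD_SWAB32/SUB_SWAB32 macros open-coded at every call site. */
static void fs32_add(int bytesex, uint32_t *n, int d)
{
	*n = cpu_to_fs32(bytesex, fs32_to_cpu(bytesex, *n) + d);
}

static void fs32_sub(int bytesex, uint32_t *n, int d)
{
	*n = cpu_to_fs32(bytesex, fs32_to_cpu(bytesex, *n) - d);
}
```

The payoff, visible throughout the diff, is that call sites no longer need a local `swab` variable: the conversion state travels with `sb`.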
......@@ -138,11 +138,9 @@ void ufs_free_blocks (struct inode * inode, unsigned fragment, unsigned count) {
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned overflow, cgno, bit, end_bit, blkno, i, cylno;
unsigned swab;
sb = inode->i_sb;
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
usb1 = ubh_get_usb_first(USPI_UBH);
UFSD(("ENTER, fragment %u, count %u\n", fragment, count))
......@@ -174,7 +172,7 @@ void ufs_free_blocks (struct inode * inode, unsigned fragment, unsigned count) {
if (!ucpi)
goto failed;
ucg = ubh_get_ucg (UCPI_UBH);
if (!ufs_cg_chkmagic (ucg)) {
if (!ufs_cg_chkmagic(sb, ucg)) {
ufs_panic (sb, "ufs_free_blocks", "internal error, bad magic number on cg %u", cgno);
goto failed;
}
......@@ -188,12 +186,13 @@ void ufs_free_blocks (struct inode * inode, unsigned fragment, unsigned count) {
if ((sb->u.ufs_sb.s_flags & UFS_CG_MASK) == UFS_CG_44BSD)
ufs_clusteracct (sb, ucpi, blkno, 1);
DQUOT_FREE_BLOCK(inode, uspi->s_fpb);
INC_SWAB32(ucg->cg_cs.cs_nbfree);
INC_SWAB32(usb1->fs_cstotal.cs_nbfree);
INC_SWAB32(sb->fs_cs(cgno).cs_nbfree);
fs32_add(sb, &ucg->cg_cs.cs_nbfree, 1);
fs32_add(sb, &usb1->fs_cstotal.cs_nbfree, 1);
fs32_add(sb, &sb->fs_cs(cgno).cs_nbfree, 1);
cylno = ufs_cbtocylno(i);
INC_SWAB16(ubh_cg_blks(ucpi, cylno, ufs_cbtorpos(i)));
INC_SWAB32(ubh_cg_blktot(ucpi, cylno));
fs16_add(sb, &ubh_cg_blks(ucpi, cylno, ufs_cbtorpos(i)), 1);
fs32_add(sb, &ubh_cg_blktot(ucpi, cylno), 1);
}
ubh_mark_buffer_dirty (USPI_UBH);
......@@ -243,19 +242,17 @@ unsigned ufs_new_fragments (struct inode * inode, u32 * p, unsigned fragment,
struct ufs_super_block_first * usb1;
struct buffer_head * bh;
unsigned cgno, oldcount, newcount, tmp, request, i, result;
unsigned swab;
UFSD(("ENTER, ino %lu, fragment %u, goal %u, count %u\n", inode->i_ino, fragment, goal, count))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
*err = -ENOSPC;
lock_super (sb);
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (count + ufs_fragnum(fragment) > uspi->s_fpb) {
ufs_warning (sb, "ufs_new_fragments", "internal warning"
" fragment %u, count %u", fragment, count);
......@@ -310,7 +307,7 @@ unsigned ufs_new_fragments (struct inode * inode, u32 * p, unsigned fragment,
if (oldcount == 0) {
result = ufs_alloc_fragments (inode, cgno, goal, count, err);
if (result) {
*p = SWAB32(result);
*p = cpu_to_fs32(sb, result);
*err = 0;
inode->i_blocks += count << uspi->s_nspfshift;
inode->u.ufs_i.i_lastfrag = max_t(u32, inode->u.ufs_i.i_lastfrag, fragment + count);
......@@ -338,23 +335,23 @@ unsigned ufs_new_fragments (struct inode * inode, u32 * p, unsigned fragment,
/*
* allocate new block and move data
*/
switch (SWAB32(usb1->fs_optim)) {
switch (fs32_to_cpu(sb, usb1->fs_optim)) {
case UFS_OPTSPACE:
request = newcount;
if (uspi->s_minfree < 5 || SWAB32(usb1->fs_cstotal.cs_nffree)
if (uspi->s_minfree < 5 || fs32_to_cpu(sb, usb1->fs_cstotal.cs_nffree)
> uspi->s_dsize * uspi->s_minfree / (2 * 100) )
break;
usb1->fs_optim = SWAB32(UFS_OPTTIME);
usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);
break;
default:
usb1->fs_optim = SWAB32(UFS_OPTTIME);
usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);
case UFS_OPTTIME:
request = uspi->s_fpb;
if (SWAB32(usb1->fs_cstotal.cs_nffree) < uspi->s_dsize *
if (fs32_to_cpu(sb, usb1->fs_cstotal.cs_nffree) < uspi->s_dsize *
(uspi->s_minfree - 2) / 100)
break;
usb1->fs_optim = SWAB32(UFS_OPTSPACE);
usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTSPACE);
break;
}
result = ufs_alloc_fragments (inode, cgno, goal, request, err);
......@@ -378,7 +375,7 @@ unsigned ufs_new_fragments (struct inode * inode, u32 * p, unsigned fragment,
return 0;
}
}
*p = SWAB32(result);
*p = cpu_to_fs32(sb, result);
*err = 0;
inode->i_blocks += count << uspi->s_nspfshift;
inode->u.ufs_i.i_lastfrag = max_t(u32, inode->u.ufs_i.i_lastfrag, fragment + count);
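The `fs_optim` switch converted above implements FFS's space-vs-time allocation policy: plenty of fragmented free space favours packing fragments (`UFS_OPTSPACE`), scarce fragmented space favours whole blocks (`UFS_OPTTIME`). A minimal sketch of that heuristic as a pure function, with thresholds taken from the pre-cleanup `SWAB32` lines (the function name and shape are illustrative, not kernel API):

```c
#include <assert.h>

enum { UFS_OPTTIME, UFS_OPTSPACE };

/* Decide the next optimisation mode: stay in OPTSPACE while
 * fragmented free space (nffree) exceeds roughly half of the minfree
 * reserve; stay in OPTTIME while it is below minfree minus 2%. */
static int next_optim(int optim, unsigned nffree, unsigned dsize,
		      unsigned minfree)
{
	if (optim == UFS_OPTSPACE) {
		if (minfree < 5 || nffree > dsize * minfree / (2 * 100))
			return UFS_OPTSPACE;	/* keep optimising space */
		return UFS_OPTTIME;
	}
	/* UFS_OPTTIME (also the default case in the switch) */
	if (nffree < dsize * (minfree - 2) / 100)
		return UFS_OPTTIME;		/* keep optimising time */
	return UFS_OPTSPACE;
}
```

The hysteresis gap between the two thresholds keeps the filesystem from flapping between modes on every allocation.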
......@@ -405,12 +402,10 @@ unsigned ufs_add_fragments (struct inode * inode, unsigned fragment,
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned cgno, fragno, fragoff, count, fragsize, i;
unsigned swab;
UFSD(("ENTER, fragment %u, oldcount %u, newcount %u\n", fragment, oldcount, newcount))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first (USPI_UBH);
count = newcount - oldcount;
......@@ -424,7 +419,7 @@ unsigned ufs_add_fragments (struct inode * inode, unsigned fragment,
if (!ucpi)
return 0;
ucg = ubh_get_ucg (UCPI_UBH);
if (!ufs_cg_chkmagic(ucg)) {
if (!ufs_cg_chkmagic(sb, ucg)) {
ufs_panic (sb, "ufs_add_fragments",
"internal error, bad magic number on cg %u", cgno);
return 0;
......@@ -438,26 +433,27 @@ unsigned ufs_add_fragments (struct inode * inode, unsigned fragment,
/*
* Block can be extended
*/
ucg->cg_time = SWAB32(CURRENT_TIME);
ucg->cg_time = cpu_to_fs32(sb, CURRENT_TIME);
for (i = newcount; i < (uspi->s_fpb - fragoff); i++)
if (ubh_isclr (UCPI_UBH, ucpi->c_freeoff, fragno + i))
break;
fragsize = i - oldcount;
if (!SWAB32(ucg->cg_frsum[fragsize]))
if (!fs32_to_cpu(sb, ucg->cg_frsum[fragsize]))
ufs_panic (sb, "ufs_add_fragments",
"internal error or corrupted bitmap on cg %u", cgno);
DEC_SWAB32(ucg->cg_frsum[fragsize]);
fs32_sub(sb, &ucg->cg_frsum[fragsize], 1);
if (fragsize != count)
INC_SWAB32(ucg->cg_frsum[fragsize - count]);
fs32_add(sb, &ucg->cg_frsum[fragsize - count], 1);
for (i = oldcount; i < newcount; i++)
ubh_clrbit (UCPI_UBH, ucpi->c_freeoff, fragno + i);
if(DQUOT_ALLOC_BLOCK(inode, count)) {
*err = -EDQUOT;
return 0;
}
SUB_SWAB32(ucg->cg_cs.cs_nffree, count);
SUB_SWAB32(sb->fs_cs(cgno).cs_nffree, count);
SUB_SWAB32(usb1->fs_cstotal.cs_nffree, count);
fs32_sub(sb, &ucg->cg_cs.cs_nffree, count);
fs32_sub(sb, &sb->fs_cs(cgno).cs_nffree, count);
fs32_sub(sb, &usb1->fs_cstotal.cs_nffree, count);
ubh_mark_buffer_dirty (USPI_UBH);
ubh_mark_buffer_dirty (UCPI_UBH);
......@@ -474,10 +470,10 @@ unsigned ufs_add_fragments (struct inode * inode, unsigned fragment,
#define UFS_TEST_FREE_SPACE_CG \
ucg = (struct ufs_cylinder_group *) sb->u.ufs_sb.s_ucg[cgno]->b_data; \
if (SWAB32(ucg->cg_cs.cs_nbfree)) \
if (fs32_to_cpu(sb, ucg->cg_cs.cs_nbfree)) \
goto cg_found; \
for (k = count; k < uspi->s_fpb; k++) \
if (SWAB32(ucg->cg_frsum[k])) \
if (fs32_to_cpu(sb, ucg->cg_frsum[k])) \
goto cg_found;
unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
......@@ -489,12 +485,10 @@ unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned oldcg, i, j, k, result, allocsize;
unsigned swab;
UFSD(("ENTER, ino %lu, cgno %u, goal %u, count %u\n", inode->i_ino, cgno, goal, count))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
oldcg = cgno;
......@@ -534,10 +528,10 @@ unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
if (!ucpi)
return 0;
ucg = ubh_get_ucg (UCPI_UBH);
if (!ufs_cg_chkmagic(ucg))
if (!ufs_cg_chkmagic(sb, ucg))
ufs_panic (sb, "ufs_alloc_fragments",
"internal error, bad magic number on cg %u", cgno);
ucg->cg_time = SWAB32(CURRENT_TIME);
ucg->cg_time = cpu_to_fs32(sb, CURRENT_TIME);
if (count == uspi->s_fpb) {
result = ufs_alloccg_block (inode, ucpi, goal, err);
......@@ -547,7 +541,7 @@ unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
}
for (allocsize = count; allocsize < uspi->s_fpb; allocsize++)
if (SWAB32(ucg->cg_frsum[allocsize]) != 0)
if (fs32_to_cpu(sb, ucg->cg_frsum[allocsize]) != 0)
break;
if (allocsize == uspi->s_fpb) {
......@@ -559,10 +553,11 @@ unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
ubh_setbit (UCPI_UBH, ucpi->c_freeoff, goal + i);
i = uspi->s_fpb - count;
DQUOT_FREE_BLOCK(inode, i);
ADD_SWAB32(ucg->cg_cs.cs_nffree, i);
ADD_SWAB32(usb1->fs_cstotal.cs_nffree, i);
ADD_SWAB32(sb->fs_cs(cgno).cs_nffree, i);
INC_SWAB32(ucg->cg_frsum[i]);
fs32_add(sb, &ucg->cg_cs.cs_nffree, i);
fs32_add(sb, &usb1->fs_cstotal.cs_nffree, i);
fs32_add(sb, &sb->fs_cs(cgno).cs_nffree, i);
fs32_add(sb, &ucg->cg_frsum[i], 1);
goto succed;
}
......@@ -575,12 +570,14 @@ unsigned ufs_alloc_fragments (struct inode * inode, unsigned cgno,
}
for (i = 0; i < count; i++)
ubh_clrbit (UCPI_UBH, ucpi->c_freeoff, result + i);
SUB_SWAB32(ucg->cg_cs.cs_nffree, count);
SUB_SWAB32(usb1->fs_cstotal.cs_nffree, count);
SUB_SWAB32(sb->fs_cs(cgno).cs_nffree, count);
DEC_SWAB32(ucg->cg_frsum[allocsize]);
fs32_sub(sb, &ucg->cg_cs.cs_nffree, count);
fs32_sub(sb, &usb1->fs_cstotal.cs_nffree, count);
fs32_sub(sb, &sb->fs_cs(cgno).cs_nffree, count);
fs32_sub(sb, &ucg->cg_frsum[allocsize], 1);
if (count != allocsize)
INC_SWAB32(ucg->cg_frsum[allocsize - count]);
fs32_add(sb, &ucg->cg_frsum[allocsize - count], 1);
succed:
ubh_mark_buffer_dirty (USPI_UBH);
......@@ -604,12 +601,10 @@ unsigned ufs_alloccg_block (struct inode * inode,
struct ufs_super_block_first * usb1;
struct ufs_cylinder_group * ucg;
unsigned result, cylno, blkno;
unsigned swab;
UFSD(("ENTER, goal %u\n", goal))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
ucg = ubh_get_ucg(UCPI_UBH);
......@@ -643,12 +638,13 @@ unsigned ufs_alloccg_block (struct inode * inode,
*err = -EDQUOT;
return (unsigned)-1;
}
DEC_SWAB32(ucg->cg_cs.cs_nbfree);
DEC_SWAB32(usb1->fs_cstotal.cs_nbfree);
DEC_SWAB32(sb->fs_cs(ucpi->c_cgx).cs_nbfree);
fs32_sub(sb, &ucg->cg_cs.cs_nbfree, 1);
fs32_sub(sb, &usb1->fs_cstotal.cs_nbfree, 1);
fs32_sub(sb, &sb->fs_cs(ucpi->c_cgx).cs_nbfree, 1);
cylno = ufs_cbtocylno(result);
DEC_SWAB16(ubh_cg_blks(ucpi, cylno, ufs_cbtorpos(result)));
DEC_SWAB32(ubh_cg_blktot(ucpi, cylno));
fs16_sub(sb, &ubh_cg_blks(ucpi, cylno, ufs_cbtorpos(result)), 1);
fs32_sub(sb, &ubh_cg_blktot(ucpi, cylno), 1);
UFSD(("EXIT, result %u\n", result))
......@@ -663,11 +659,9 @@ unsigned ufs_bitmap_search (struct super_block * sb,
struct ufs_cylinder_group * ucg;
unsigned start, length, location, result;
unsigned possition, fragsize, blockmap, mask;
unsigned swab;
UFSD(("ENTER, cg %u, goal %u, count %u\n", ucpi->c_cgx, goal, count))
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first (USPI_UBH);
ucg = ubh_get_ucg(UCPI_UBH);
......@@ -733,12 +727,8 @@ void ufs_clusteracct(struct super_block * sb,
{
struct ufs_sb_private_info * uspi;
int i, start, end, forw, back;
unsigned swab;
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
if (uspi->s_contigsumsize <= 0)
return;
......@@ -778,11 +768,11 @@ void ufs_clusteracct(struct super_block * sb,
i = back + forw + 1;
if (i > uspi->s_contigsumsize)
i = uspi->s_contigsumsize;
ADD_SWAB32(*((u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (i << 2))), cnt);
fs32_add(sb, (u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (i << 2)), cnt);
if (back > 0)
SUB_SWAB32(*((u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (back << 2))), cnt);
fs32_sub(sb, (u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (back << 2)), cnt);
if (forw > 0)
SUB_SWAB32(*((u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (forw << 2))), cnt);
fs32_sub(sb, (u32*)ubh_get_addr(UCPI_UBH, ucpi->c_clustersumoff + (forw << 2)), cnt);
}
......
......@@ -41,10 +41,8 @@ static void ufs_read_cylinder (struct super_block * sb,
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned i, j;
unsigned swab;
UFSD(("ENTER, cgno %u, bitmap_nr %u\n", cgno, bitmap_nr))
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
ucpi = sb->u.ufs_sb.s_ucpi[bitmap_nr];
ucg = (struct ufs_cylinder_group *)sb->u.ufs_sb.s_ucg[cgno]->b_data;
......@@ -60,21 +58,21 @@ static void ufs_read_cylinder (struct super_block * sb,
goto failed;
sb->u.ufs_sb.s_cgno[bitmap_nr] = cgno;
ucpi->c_cgx = SWAB32(ucg->cg_cgx);
ucpi->c_ncyl = SWAB16(ucg->cg_ncyl);
ucpi->c_niblk = SWAB16(ucg->cg_niblk);
ucpi->c_ndblk = SWAB32(ucg->cg_ndblk);
ucpi->c_rotor = SWAB32(ucg->cg_rotor);
ucpi->c_frotor = SWAB32(ucg->cg_frotor);
ucpi->c_irotor = SWAB32(ucg->cg_irotor);
ucpi->c_btotoff = SWAB32(ucg->cg_btotoff);
ucpi->c_boff = SWAB32(ucg->cg_boff);
ucpi->c_iusedoff = SWAB32(ucg->cg_iusedoff);
ucpi->c_freeoff = SWAB32(ucg->cg_freeoff);
ucpi->c_nextfreeoff = SWAB32(ucg->cg_nextfreeoff);
ucpi->c_clustersumoff = SWAB32(ucg->cg_u.cg_44.cg_clustersumoff);
ucpi->c_clusteroff = SWAB32(ucg->cg_u.cg_44.cg_clusteroff);
ucpi->c_nclusterblks = SWAB32(ucg->cg_u.cg_44.cg_nclusterblks);
ucpi->c_cgx = fs32_to_cpu(sb, ucg->cg_cgx);
ucpi->c_ncyl = fs16_to_cpu(sb, ucg->cg_ncyl);
ucpi->c_niblk = fs16_to_cpu(sb, ucg->cg_niblk);
ucpi->c_ndblk = fs32_to_cpu(sb, ucg->cg_ndblk);
ucpi->c_rotor = fs32_to_cpu(sb, ucg->cg_rotor);
ucpi->c_frotor = fs32_to_cpu(sb, ucg->cg_frotor);
ucpi->c_irotor = fs32_to_cpu(sb, ucg->cg_irotor);
ucpi->c_btotoff = fs32_to_cpu(sb, ucg->cg_btotoff);
ucpi->c_boff = fs32_to_cpu(sb, ucg->cg_boff);
ucpi->c_iusedoff = fs32_to_cpu(sb, ucg->cg_iusedoff);
ucpi->c_freeoff = fs32_to_cpu(sb, ucg->cg_freeoff);
ucpi->c_nextfreeoff = fs32_to_cpu(sb, ucg->cg_nextfreeoff);
ucpi->c_clustersumoff = fs32_to_cpu(sb, ucg->cg_u.cg_44.cg_clustersumoff);
ucpi->c_clusteroff = fs32_to_cpu(sb, ucg->cg_u.cg_44.cg_clusteroff);
ucpi->c_nclusterblks = fs32_to_cpu(sb, ucg->cg_u.cg_44.cg_nclusterblks);
UFSD(("EXIT\n"))
return;
......@@ -95,11 +93,9 @@ void ufs_put_cylinder (struct super_block * sb, unsigned bitmap_nr)
struct ufs_cg_private_info * ucpi;
struct ufs_cylinder_group * ucg;
unsigned i;
unsigned swab;
UFSD(("ENTER, bitmap_nr %u\n", bitmap_nr))
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
if (sb->u.ufs_sb.s_cgno[bitmap_nr] == UFS_CGNO_EMPTY) {
UFSD(("EXIT\n"))
......@@ -116,9 +112,9 @@ void ufs_put_cylinder (struct super_block * sb, unsigned bitmap_nr)
* rotor is not so important data, so we put it to disk
* at the end of working with cylinder
*/
ucg->cg_rotor = SWAB32(ucpi->c_rotor);
ucg->cg_frotor = SWAB32(ucpi->c_frotor);
ucg->cg_irotor = SWAB32(ucpi->c_irotor);
ucg->cg_rotor = cpu_to_fs32(sb, ucpi->c_rotor);
ucg->cg_frotor = cpu_to_fs32(sb, ucpi->c_frotor);
ucg->cg_irotor = cpu_to_fs32(sb, ucpi->c_irotor);
ubh_mark_buffer_dirty (UCPI_UBH);
for (i = 1; i < UCPI_UBH->count; i++) {
brelse (UCPI_UBH->bh[i]);
......
......@@ -29,15 +29,17 @@
#define UFSD(x)
#endif
/*
* NOTE! unlike strncmp, ufs_match returns 1 for success, 0 for failure.
*
* len <= UFS_MAXNAMLEN and de != NULL are guaranteed by caller.
*/
static inline int ufs_match (int len, const char * const name,
struct ufs_dir_entry * de, unsigned flags, unsigned swab)
static inline int ufs_match(struct super_block *sb, int len,
const char * const name, struct ufs_dir_entry * de)
{
if (len != ufs_get_de_namlen(de))
if (len != ufs_get_de_namlen(sb, de))
return 0;
if (!de->d_ino)
return 0;
......@@ -58,10 +60,9 @@ ufs_readdir (struct file * filp, void * dirent, filldir_t filldir)
struct ufs_dir_entry * de;
struct super_block * sb;
int de_reclen;
unsigned flags, swab;
unsigned flags;
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
flags = sb->u.ufs_sb.s_flags;
UFSD(("ENTER, ino %lu f_pos %lu\n", inode->i_ino, (unsigned long) filp->f_pos))
......@@ -96,7 +97,7 @@ ufs_readdir (struct file * filp, void * dirent, filldir_t filldir)
* least that it is non-zero. A
* failure will be detected in the
* dirent test below. */
de_reclen = SWAB16(de->d_reclen);
de_reclen = fs16_to_cpu(sb, de->d_reclen);
if (de_reclen < 1)
break;
i += de_reclen;
......@@ -111,8 +112,7 @@ ufs_readdir (struct file * filp, void * dirent, filldir_t filldir)
&& offset < sb->s_blocksize) {
de = (struct ufs_dir_entry *) (bh->b_data + offset);
/* XXX - put in a real ufs_check_dir_entry() */
if ((de->d_reclen == 0) || (ufs_get_de_namlen(de) == 0)) {
/* SWAB16() was unneeded -- compare to 0 */
if ((de->d_reclen == 0) || (ufs_get_de_namlen(sb, de) == 0)) {
filp->f_pos = (filp->f_pos &
(sb->s_blocksize - 1)) +
sb->s_blocksize;
......@@ -129,9 +129,8 @@ ufs_readdir (struct file * filp, void * dirent, filldir_t filldir)
brelse (bh);
return stored;
}
offset += SWAB16(de->d_reclen);
offset += fs16_to_cpu(sb, de->d_reclen);
if (de->d_ino) {
/* SWAB16() was unneeded -- compare to 0 */
/* We might block in the next section
* if the data destination is
* currently swapped out. So, use a
......@@ -141,19 +140,22 @@ ufs_readdir (struct file * filp, void * dirent, filldir_t filldir)
unsigned long version = filp->f_version;
unsigned char d_type = DT_UNKNOWN;
UFSD(("filldir(%s,%u)\n", de->d_name, SWAB32(de->d_ino)))
UFSD(("namlen %u\n", ufs_get_de_namlen(de)))
UFSD(("filldir(%s,%u)\n", de->d_name,
fs32_to_cpu(sb, de->d_ino)))
UFSD(("namlen %u\n", ufs_get_de_namlen(sb, de)))
if ((flags & UFS_DE_MASK) == UFS_DE_44BSD)
d_type = de->d_u.d_44.d_type;
error = filldir(dirent, de->d_name, ufs_get_de_namlen(de),
filp->f_pos, SWAB32(de->d_ino), d_type);
error = filldir(dirent, de->d_name,
ufs_get_de_namlen(sb, de), filp->f_pos,
fs32_to_cpu(sb, de->d_ino), d_type);
if (error)
break;
if (version != filp->f_version)
goto revalidate;
stored ++;
}
filp->f_pos += SWAB16(de->d_reclen);
filp->f_pos += fs16_to_cpu(sb, de->d_reclen);
}
offset = 0;
brelse (bh);
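The readdir loop above walks a directory block as a chain of variable-length records, advancing by `d_reclen` and bailing out when a reclen is zero or undersized so a corrupted block cannot spin forever. A hedged userspace sketch of that walk (the two-field header is a simplified stand-in for `struct ufs_dir_entry`, which also carries `d_ino` and the name bytes):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal record header: d_reclen is the distance to the next entry. */
struct dirent_hdr {
	uint16_t d_reclen;
	uint16_t d_namlen;
};

/* Count records in one block the way ufs_readdir steps through it. */
static int count_entries(const char *block, unsigned blocksize)
{
	unsigned offset = 0;
	int n = 0;

	while (offset < blocksize) {
		const struct dirent_hdr *de =
			(const struct dirent_hdr *)(block + offset);
		if (de->d_reclen < sizeof(*de))
			break;	/* corrupt reclen: would not advance */
		n++;
		offset += de->d_reclen;
	}
	return n;
}
```

Note that the kernel loop additionally skips entries whose `d_ino` is zero when reporting to `filldir`, but still advances past them by `d_reclen`; this sketch only models the traversal.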
......@@ -186,7 +188,6 @@ struct ufs_dir_entry * ufs_find_entry (struct dentry *dentry,
struct buffer_head * bh_read[NAMEI_RA_SIZE];
unsigned long offset;
int block, toread, i, err;
unsigned flags, swab;
struct inode *dir = dentry->d_parent->d_inode;
const char *name = dentry->d_name.name;
int namelen = dentry->d_name.len;
......@@ -196,8 +197,6 @@ struct ufs_dir_entry * ufs_find_entry (struct dentry *dentry,
*res_bh = NULL;
sb = dir->i_sb;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
if (namelen > UFS_MAXNAMLEN)
return NULL;
......@@ -248,7 +247,7 @@ struct ufs_dir_entry * ufs_find_entry (struct dentry *dentry,
int de_len;
if ((char *) de + namelen <= dlimit &&
ufs_match (namelen, name, de, flags, swab)) {
ufs_match(sb, namelen, name, de)) {
/* found a match -
just to be sure, do a full check */
if (!ufs_check_dir_entry("ufs_find_entry",
......@@ -262,7 +261,7 @@ struct ufs_dir_entry * ufs_find_entry (struct dentry *dentry,
return de;
}
/* prevent looping on a bad block */
de_len = SWAB16(de->d_reclen);
de_len = fs16_to_cpu(sb, de->d_reclen);
if (de_len <= 0)
goto failed;
offset += de_len;
......@@ -290,33 +289,28 @@ int ufs_check_dir_entry (const char * function, struct inode * dir,
struct ufs_dir_entry * de, struct buffer_head * bh,
unsigned long offset)
{
struct super_block * sb;
const char * error_msg;
unsigned flags, swab;
sb = dir->i_sb;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
error_msg = NULL;
if (SWAB16(de->d_reclen) < UFS_DIR_REC_LEN(1))
struct super_block *sb = dir->i_sb;
const char *error_msg = NULL;
int rlen = fs16_to_cpu(sb, de->d_reclen);
if (rlen < UFS_DIR_REC_LEN(1))
error_msg = "reclen is smaller than minimal";
else if (SWAB16(de->d_reclen) % 4 != 0)
else if (rlen % 4 != 0)
error_msg = "reclen % 4 != 0";
else if (SWAB16(de->d_reclen) < UFS_DIR_REC_LEN(ufs_get_de_namlen(de)))
else if (rlen < UFS_DIR_REC_LEN(ufs_get_de_namlen(sb, de)))
error_msg = "reclen is too small for namlen";
else if (dir && ((char *) de - bh->b_data) + SWAB16(de->d_reclen) >
dir->i_sb->s_blocksize)
else if (((char *) de - bh->b_data) + rlen > dir->i_sb->s_blocksize)
error_msg = "directory entry across blocks";
else if (dir && SWAB32(de->d_ino) > (sb->u.ufs_sb.s_uspi->s_ipg * sb->u.ufs_sb.s_uspi->s_ncg))
else if (fs32_to_cpu(sb, de->d_ino) > (sb->u.ufs_sb.s_uspi->s_ipg *
sb->u.ufs_sb.s_uspi->s_ncg))
error_msg = "inode out of bounds";
if (error_msg != NULL)
ufs_error (sb, function, "bad entry in directory #%lu, size %Lu: %s - "
"offset=%lu, inode=%lu, reclen=%d, namlen=%d",
dir->i_ino, dir->i_size, error_msg, offset,
(unsigned long) SWAB32(de->d_ino),
SWAB16(de->d_reclen), ufs_get_de_namlen(de));
(unsigned long)fs32_to_cpu(sb, de->d_ino),
rlen, ufs_get_de_namlen(sb, de));
return (error_msg == NULL ? 1 : 0);
}
......@@ -328,25 +322,22 @@ struct ufs_dir_entry *ufs_dotdot(struct inode *dir, struct buffer_head **p)
struct ufs_dir_entry *res = NULL;
if (bh) {
unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
res = (struct ufs_dir_entry *) bh->b_data;
res = (struct ufs_dir_entry *)((char *)res +
SWAB16(res->d_reclen));
fs16_to_cpu(dir->i_sb, res->d_reclen));
}
*p = bh;
return res;
}
ino_t ufs_inode_by_name(struct inode * dir, struct dentry *dentry)
{
unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
ino_t res = 0;
struct ufs_dir_entry * de;
struct buffer_head *bh;
de = ufs_find_entry (dentry, &bh);
if (de) {
res = SWAB32(de->d_ino);
res = fs32_to_cpu(dir->i_sb, de->d_ino);
brelse(bh);
}
return res;
......@@ -355,9 +346,8 @@ ino_t ufs_inode_by_name(struct inode * dir, struct dentry *dentry)
void ufs_set_link(struct inode *dir, struct ufs_dir_entry *de,
struct buffer_head *bh, struct inode *inode)
{
unsigned swab = dir->i_sb->u.ufs_sb.s_swab;
dir->i_version = ++event;
de->d_ino = SWAB32(inode->i_ino);
de->d_ino = cpu_to_fs32(dir->i_sb, inode->i_ino);
mark_buffer_dirty(bh);
if (IS_SYNC(dir)) {
ll_rw_block (WRITE, 1, &bh);
......@@ -381,7 +371,6 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
unsigned short rec_len;
struct buffer_head * bh;
struct ufs_dir_entry * de, * de1;
unsigned flags, swab;
struct inode *dir = dentry->d_parent->d_inode;
const char *name = dentry->d_name.name;
int namelen = dentry->d_name.len;
......@@ -390,8 +379,6 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
UFSD(("ENTER, name %s, namelen %u\n", name, namelen))
sb = dir->i_sb;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
if (!namelen)
......@@ -420,9 +407,9 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
return -ENOENT;
}
de = (struct ufs_dir_entry *) (bh->b_data + fragoff);
de->d_ino = SWAB32(0);
de->d_reclen = SWAB16(UFS_SECTOR_SIZE);
ufs_set_de_namlen(de,0);
de->d_ino = 0;
de->d_reclen = cpu_to_fs16(sb, UFS_SECTOR_SIZE);
ufs_set_de_namlen(sb, de, 0);
dir->i_size = offset + UFS_SECTOR_SIZE;
mark_inode_dirty(dir);
} else {
......@@ -433,32 +420,35 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
brelse (bh);
return -ENOENT;
}
if (ufs_match (namelen, name, de, flags, swab)) {
if (ufs_match(sb, namelen, name, de)) {
brelse (bh);
return -EEXIST;
}
if (SWAB32(de->d_ino) == 0 && SWAB16(de->d_reclen) >= rec_len)
if (de->d_ino == 0 && fs16_to_cpu(sb, de->d_reclen) >= rec_len)
break;
if (SWAB16(de->d_reclen) >= UFS_DIR_REC_LEN(ufs_get_de_namlen(de)) + rec_len)
if (fs16_to_cpu(sb, de->d_reclen) >=
UFS_DIR_REC_LEN(ufs_get_de_namlen(sb, de)) + rec_len)
break;
offset += SWAB16(de->d_reclen);
de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
offset += fs16_to_cpu(sb, de->d_reclen);
de = (struct ufs_dir_entry *) ((char *) de + fs16_to_cpu(sb, de->d_reclen));
}
if (SWAB32(de->d_ino)) {
if (de->d_ino) {
de1 = (struct ufs_dir_entry *) ((char *) de +
UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
de1->d_reclen = SWAB16(SWAB16(de->d_reclen) -
UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
de->d_reclen = SWAB16(UFS_DIR_REC_LEN(ufs_get_de_namlen(de)));
UFS_DIR_REC_LEN(ufs_get_de_namlen(sb, de)));
de1->d_reclen =
cpu_to_fs16(sb, fs16_to_cpu(sb, de->d_reclen) -
UFS_DIR_REC_LEN(ufs_get_de_namlen(sb, de)));
de->d_reclen =
cpu_to_fs16(sb, UFS_DIR_REC_LEN(ufs_get_de_namlen(sb, de)));
de = de1;
}
de->d_ino = SWAB32(0);
ufs_set_de_namlen(de, namelen);
de->d_ino = 0;
ufs_set_de_namlen(sb, de, namelen);
memcpy (de->d_name, name, namelen + 1);
de->d_ino = SWAB32(inode->i_ino);
ufs_set_de_type (de, inode->i_mode);
de->d_ino = cpu_to_fs32(sb, inode->i_ino);
ufs_set_de_type(sb, de, inode->i_mode);
mark_buffer_dirty(bh);
if (IS_SYNC(dir)) {
ll_rw_block (WRITE, 1, &bh);
......@@ -484,19 +474,18 @@ int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
struct super_block * sb;
struct ufs_dir_entry * de, * pde;
unsigned i;
unsigned flags, swab;
UFSD(("ENTER\n"))
sb = inode->i_sb;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
i = 0;
pde = NULL;
de = (struct ufs_dir_entry *) bh->b_data;
UFSD(("ino %u, reclen %u, namlen %u, name %s\n", SWAB32(de->d_ino),
SWAB16(de->d_reclen), ufs_get_de_namlen(de), de->d_name))
UFSD(("ino %u, reclen %u, namlen %u, name %s\n",
fs32_to_cpu(sb, de->d_ino),
fs16_to_cpu(sb, de->d_reclen),
ufs_get_de_namlen(sb, de), de->d_name))
while (i < bh->b_size) {
if (!ufs_check_dir_entry ("ufs_delete_entry", inode, de, bh, i)) {
......@@ -505,10 +494,9 @@ int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
}
if (de == dir) {
if (pde)
pde->d_reclen =
SWAB16(SWAB16(pde->d_reclen) +
SWAB16(dir->d_reclen));
dir->d_ino = SWAB32(0);
fs16_add(sb, &pde->d_reclen,
fs16_to_cpu(sb, dir->d_reclen));
dir->d_ino = 0;
inode->i_version = ++event;
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
mark_inode_dirty(inode);
......@@ -521,12 +509,12 @@ int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
UFSD(("EXIT\n"))
return 0;
}
i += SWAB16(de->d_reclen);
i += fs16_to_cpu(sb, de->d_reclen);
if (i == UFS_SECTOR_SIZE) pde = NULL;
else pde = de;
de = (struct ufs_dir_entry *)
((char *) de + SWAB16(de->d_reclen));
if (i == UFS_SECTOR_SIZE && SWAB16(de->d_reclen) == 0)
((char *) de + fs16_to_cpu(sb, de->d_reclen));
if (i == UFS_SECTOR_SIZE && de->d_reclen == 0)
break;
}
UFSD(("EXIT\n"))
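The deletion path above never shifts directory data: it zeroes the victim's inode number and, when a predecessor exists in the same sector, folds the victim's `d_reclen` into it with `fs16_add`, so the dead record's space is absorbed. A minimal sketch of that coalescing step (struct and function names are illustrative; byte-order conversion is omitted):

```c
#include <assert.h>
#include <stdint.h>

struct dent {
	uint16_t d_reclen;
	uint32_t d_ino;
};

/* Remove 'de' by merging its record space into the preceding entry
 * and marking the slot free, mirroring the pde->d_reclen update and
 * dir->d_ino = 0 in ufs_delete_entry. */
static void delete_after(struct dent *pde, struct dent *de)
{
	pde->d_reclen += de->d_reclen;	/* predecessor now spans both */
	de->d_ino = 0;			/* slot is reusable */
}
```

Later allocations in `ufs_add_link` reclaim such slots by looking for an entry with `d_ino == 0` and a large enough reclen.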
......@@ -537,8 +525,6 @@ int ufs_delete_entry (struct inode * inode, struct ufs_dir_entry * dir,
int ufs_make_empty(struct inode * inode, struct inode *dir)
{
struct super_block * sb = dir->i_sb;
unsigned flags = sb->u.ufs_sb.s_flags;
unsigned swab = sb->u.ufs_sb.s_swab;
struct buffer_head * dir_block;
struct ufs_dir_entry * de;
int err;
......@@ -549,16 +535,17 @@ int ufs_make_empty(struct inode * inode, struct inode *dir)
inode->i_blocks = sb->s_blocksize / UFS_SECTOR_SIZE;
de = (struct ufs_dir_entry *) dir_block->b_data;
de->d_ino = SWAB32(inode->i_ino);
ufs_set_de_type (de, inode->i_mode);
ufs_set_de_namlen(de,1);
de->d_reclen = SWAB16(UFS_DIR_REC_LEN(1));
de->d_ino = cpu_to_fs32(sb, inode->i_ino);
ufs_set_de_type(sb, de, inode->i_mode);
ufs_set_de_namlen(sb, de, 1);
de->d_reclen = cpu_to_fs16(sb, UFS_DIR_REC_LEN(1));
strcpy (de->d_name, ".");
de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
de->d_ino = SWAB32(dir->i_ino);
ufs_set_de_type (de, dir->i_mode);
de->d_reclen = SWAB16(UFS_SECTOR_SIZE - UFS_DIR_REC_LEN(1));
ufs_set_de_namlen(de,2);
de = (struct ufs_dir_entry *)
((char *)de + fs16_to_cpu(sb, de->d_reclen));
de->d_ino = cpu_to_fs32(sb, dir->i_ino);
ufs_set_de_type(sb, de, dir->i_mode);
de->d_reclen = cpu_to_fs16(sb, UFS_SECTOR_SIZE - UFS_DIR_REC_LEN(1));
ufs_set_de_namlen(sb, de, 2);
strcpy (de->d_name, "..");
mark_buffer_dirty(dir_block);
brelse (dir_block);
......@@ -576,10 +563,8 @@ int ufs_empty_dir (struct inode * inode)
struct buffer_head * bh;
struct ufs_dir_entry * de, * de1;
int err;
unsigned swab;
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
if (inode->i_size < UFS_DIR_REC_LEN(1) + UFS_DIR_REC_LEN(2) ||
!(bh = ufs_bread (inode, 0, 0, &err))) {
......@@ -589,16 +574,18 @@ int ufs_empty_dir (struct inode * inode)
return 1;
}
de = (struct ufs_dir_entry *) bh->b_data;
de1 = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
if (SWAB32(de->d_ino) != inode->i_ino || !SWAB32(de1->d_ino) ||
strcmp (".", de->d_name) || strcmp ("..", de1->d_name)) {
de1 = (struct ufs_dir_entry *)
((char *)de + fs16_to_cpu(sb, de->d_reclen));
if (fs32_to_cpu(sb, de->d_ino) != inode->i_ino || de1->d_ino == 0 ||
strcmp (".", de->d_name) || strcmp ("..", de1->d_name)) {
ufs_warning (inode->i_sb, "empty_dir",
"bad directory (dir #%lu) - no `.' or `..'",
inode->i_ino);
return 1;
}
offset = SWAB16(de->d_reclen) + SWAB16(de1->d_reclen);
de = (struct ufs_dir_entry *) ((char *) de1 + SWAB16(de1->d_reclen));
offset = fs16_to_cpu(sb, de->d_reclen) + fs16_to_cpu(sb, de1->d_reclen);
de = (struct ufs_dir_entry *)
((char *)de1 + fs16_to_cpu(sb, de1->d_reclen));
while (offset < inode->i_size ) {
if (!bh || (void *) de >= (void *) (bh->b_data + sb->s_blocksize)) {
brelse (bh);
......@@ -616,12 +603,13 @@ int ufs_empty_dir (struct inode * inode)
brelse (bh);
return 1;
}
if (SWAB32(de->d_ino)) {
if (de->d_ino) {
brelse (bh);
return 0;
}
offset += SWAB16(de->d_reclen);
de = (struct ufs_dir_entry *) ((char *) de + SWAB16(de->d_reclen));
offset += fs16_to_cpu(sb, de->d_reclen);
de = (struct ufs_dir_entry *)
((char *)de + fs16_to_cpu(sb, de->d_reclen));
}
brelse (bh);
return 1;
......
......@@ -66,12 +66,10 @@ void ufs_free_inode (struct inode * inode)
struct ufs_cylinder_group * ucg;
int is_directory;
unsigned ino, cg, bit;
unsigned swab;
UFSD(("ENTER, ino %lu\n", inode->i_ino))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
......@@ -93,10 +91,10 @@ void ufs_free_inode (struct inode * inode)
return;
}
ucg = ubh_get_ucg(UCPI_UBH);
if (!ufs_cg_chkmagic(ucg))
if (!ufs_cg_chkmagic(sb, ucg))
ufs_panic (sb, "ufs_free_fragments", "internal error, bad cg magic number");
ucg->cg_time = SWAB32(CURRENT_TIME);
ucg->cg_time = cpu_to_fs32(sb, CURRENT_TIME);
is_directory = S_ISDIR(inode->i_mode);
......@@ -111,16 +109,17 @@ void ufs_free_inode (struct inode * inode)
ubh_clrbit (UCPI_UBH, ucpi->c_iusedoff, bit);
if (ino < ucpi->c_irotor)
ucpi->c_irotor = ino;
INC_SWAB32(ucg->cg_cs.cs_nifree);
INC_SWAB32(usb1->fs_cstotal.cs_nifree);
INC_SWAB32(sb->fs_cs(cg).cs_nifree);
fs32_add(sb, &ucg->cg_cs.cs_nifree, 1);
fs32_add(sb, &usb1->fs_cstotal.cs_nifree, 1);
fs32_add(sb, &sb->fs_cs(cg).cs_nifree, 1);
if (is_directory) {
DEC_SWAB32(ucg->cg_cs.cs_ndir);
DEC_SWAB32(usb1->fs_cstotal.cs_ndir);
DEC_SWAB32(sb->fs_cs(cg).cs_ndir);
fs32_sub(sb, &ucg->cg_cs.cs_ndir, 1);
fs32_sub(sb, &usb1->fs_cstotal.cs_ndir, 1);
fs32_sub(sb, &sb->fs_cs(cg).cs_ndir, 1);
}
}
ubh_mark_buffer_dirty (USPI_UBH);
ubh_mark_buffer_dirty (UCPI_UBH);
if (sb->s_flags & MS_SYNCHRONOUS) {
......@@ -152,7 +151,6 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
struct ufs_cylinder_group * ucg;
struct inode * inode;
unsigned cg, bit, i, j, start;
unsigned swab;
UFSD(("ENTER\n"))
......@@ -163,7 +161,6 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
inode = new_inode(sb);
if (!inode)
return ERR_PTR(-ENOMEM);
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
......@@ -173,7 +170,7 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
* Try to place the inode in its parent directory
*/
i = ufs_inotocg(dir->i_ino);
if (SWAB32(sb->fs_cs(i).cs_nifree)) {
if (sb->fs_cs(i).cs_nifree) {
cg = i;
goto cg_found;
}
......@@ -185,7 +182,7 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
i += j;
if (i >= uspi->s_ncg)
i -= uspi->s_ncg;
if (SWAB32(sb->fs_cs(i).cs_nifree)) {
if (sb->fs_cs(i).cs_nifree) {
cg = i;
goto cg_found;
}
......@@ -199,7 +196,7 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
i++;
if (i >= uspi->s_ncg)
i = 0;
if (SWAB32(sb->fs_cs(i).cs_nifree)) {
if (sb->fs_cs(i).cs_nifree) {
cg = i;
goto cg_found;
}
......@@ -212,7 +209,7 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
if (!ucpi)
goto failed;
ucg = ubh_get_ucg(UCPI_UBH);
if (!ufs_cg_chkmagic(ucg))
if (!ufs_cg_chkmagic(sb, ucg))
ufs_panic (sb, "ufs_new_inode", "internal error, bad cg magic number");
start = ucpi->c_irotor;
......@@ -233,14 +230,14 @@ struct inode * ufs_new_inode (const struct inode * dir, int mode)
goto failed;
}
DEC_SWAB32(ucg->cg_cs.cs_nifree);
DEC_SWAB32(usb1->fs_cstotal.cs_nifree);
DEC_SWAB32(sb->fs_cs(cg).cs_nifree);
fs32_sub(sb, &ucg->cg_cs.cs_nifree, 1);
fs32_sub(sb, &usb1->fs_cstotal.cs_nifree, 1);
fs32_sub(sb, &sb->fs_cs(cg).cs_nifree, 1);
if (S_ISDIR(mode)) {
INC_SWAB32(ucg->cg_cs.cs_ndir);
INC_SWAB32(usb1->fs_cstotal.cs_ndir);
INC_SWAB32(sb->fs_cs(cg).cs_ndir);
fs32_add(sb, &ucg->cg_cs.cs_ndir, 1);
fs32_add(sb, &usb1->fs_cstotal.cs_ndir, 1);
fs32_add(sb, &sb->fs_cs(cg).cs_ndir, 1);
}
ubh_mark_buffer_dirty (USPI_UBH);
......
......@@ -50,17 +50,6 @@
#define UFSD(x)
#endif
static inline unsigned int ufs_block_bmap1(struct buffer_head * bh, unsigned nr,
struct ufs_sb_private_info * uspi, unsigned swab)
{
unsigned int tmp;
if (!bh)
return 0;
tmp = SWAB32(((u32 *) bh->b_data)[nr]);
brelse (bh);
return tmp;
}
static int ufs_block_to_path(struct inode *inode, long i_block, int offsets[4])
{
struct ufs_sb_private_info *uspi = inode->i_sb->u.ufs_sb.s_uspi;
......@@ -97,7 +86,6 @@ int ufs_frag_map(struct inode *inode, int frag)
{
struct super_block *sb = inode->i_sb;
struct ufs_sb_private_info *uspi = sb->u.ufs_sb.s_uspi;
unsigned int swab = sb->u.ufs_sb.s_swab;
int mask = uspi->s_apbmask>>uspi->s_fpbshift;
int shift = uspi->s_apbshift-uspi->s_fpbshift;
int offsets[4], *p;
......@@ -118,7 +106,7 @@ int ufs_frag_map(struct inode *inode, int frag)
struct buffer_head *bh;
int n = *p++;
bh = bread(sb->s_dev, uspi->s_sbbase+SWAB32(block)+(n>>shift),
bh = bread(sb->s_dev, uspi->s_sbbase + fs32_to_cpu(sb, block)+(n>>shift),
sb->s_blocksize);
if (!bh)
goto out;
......@@ -127,7 +115,7 @@ int ufs_frag_map(struct inode *inode, int frag)
if (!block)
goto out;
}
ret = uspi->s_sbbase + SWAB32(block) + (frag & uspi->s_fpbmask);
ret = uspi->s_sbbase + fs32_to_cpu(sb, block) + (frag & uspi->s_fpbmask);
out:
unlock_kernel();
return ret;
......@@ -143,13 +131,11 @@ static struct buffer_head * ufs_inode_getfrag (struct inode *inode,
unsigned block, blockoff, lastfrag, lastblock, lastblockoff;
unsigned tmp, goal;
u32 * p, * p2;
unsigned int swab;
UFSD(("ENTER, ino %lu, fragment %u, new_fragment %u, required %u\n",
inode->i_ino, fragment, new_fragment, required))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
block = ufs_fragstoblks (fragment);
blockoff = ufs_fragnum (fragment);
......@@ -157,13 +143,13 @@ static struct buffer_head * ufs_inode_getfrag (struct inode *inode,
goal = 0;
repeat:
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
lastfrag = inode->u.ufs_i.i_lastfrag;
if (tmp && fragment < lastfrag) {
if (metadata) {
result = getblk (sb->s_dev, uspi->s_sbbase + tmp + blockoff,
sb->s_blocksize);
if (tmp == SWAB32(*p)) {
if (tmp == fs32_to_cpu(sb, *p)) {
UFSD(("EXIT, result %u\n", tmp + blockoff))
return result;
}
......@@ -187,7 +173,7 @@ static struct buffer_head * ufs_inode_getfrag (struct inode *inode,
if (lastblockoff) {
p2 = inode->u.ufs_i.i_u1.i_data + lastblock;
tmp = ufs_new_fragments (inode, p2, lastfrag,
SWAB32(*p2), uspi->s_fpb - lastblockoff, err);
fs32_to_cpu(sb, *p2), uspi->s_fpb - lastblockoff, err);
if (!tmp) {
if (lastfrag != inode->u.ufs_i.i_lastfrag)
goto repeat;
......@@ -197,7 +183,7 @@ static struct buffer_head * ufs_inode_getfrag (struct inode *inode,
lastfrag = inode->u.ufs_i.i_lastfrag;
}
goal = SWAB32(inode->u.ufs_i.i_u1.i_data[lastblock]) + uspi->s_fpb;
goal = fs32_to_cpu(sb, inode->u.ufs_i.i_u1.i_data[lastblock]) + uspi->s_fpb;
tmp = ufs_new_fragments (inode, p, fragment - blockoff,
goal, required + blockoff, err);
}
......@@ -206,19 +192,19 @@ static struct buffer_head * ufs_inode_getfrag (struct inode *inode,
*/
else if (lastblock == block) {
tmp = ufs_new_fragments (inode, p, fragment - (blockoff - lastblockoff),
SWAB32(*p), required + (blockoff - lastblockoff), err);
fs32_to_cpu(sb, *p), required + (blockoff - lastblockoff), err);
}
/*
* We will allocate new block before last allocated block
*/
else /* (lastblock > block) */ {
if (lastblock && (tmp = SWAB32(inode->u.ufs_i.i_u1.i_data[lastblock-1])))
if (lastblock && (tmp = fs32_to_cpu(sb, inode->u.ufs_i.i_u1.i_data[lastblock-1])))
goal = tmp + uspi->s_fpb;
tmp = ufs_new_fragments (inode, p, fragment - blockoff,
goal, uspi->s_fpb, err);
}
if (!tmp) {
if ((!blockoff && SWAB32(*p)) ||
if ((!blockoff && *p) ||
(blockoff && lastfrag != inode->u.ufs_i.i_lastfrag))
goto repeat;
*err = -ENOSPC;
......@@ -255,10 +241,8 @@ static struct buffer_head * ufs_block_getfrag (struct inode *inode,
struct buffer_head * result;
unsigned tmp, goal, block, blockoff;
u32 * p;
unsigned int swab;
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
block = ufs_fragstoblks (fragment);
blockoff = ufs_fragnum (fragment);
......@@ -277,12 +261,12 @@ static struct buffer_head * ufs_block_getfrag (struct inode *inode,
p = (u32 *) bh->b_data + block;
repeat:
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (tmp) {
if (metadata) {
result = getblk (bh->b_dev, uspi->s_sbbase + tmp + blockoff,
sb->s_blocksize);
if (tmp == SWAB32(*p))
if (tmp == fs32_to_cpu(sb, *p))
goto out;
brelse (result);
goto repeat;
......@@ -292,13 +276,13 @@ static struct buffer_head * ufs_block_getfrag (struct inode *inode,
}
}
if (block && (tmp = SWAB32(((u32*)bh->b_data)[block-1]) + uspi->s_fpb))
if (block && (tmp = fs32_to_cpu(sb, ((u32*)bh->b_data)[block-1]) + uspi->s_fpb))
goal = tmp + uspi->s_fpb;
else
goal = bh->b_blocknr + uspi->s_fpb;
tmp = ufs_new_fragments (inode, p, ufs_blknum(new_fragment), goal, uspi->s_fpb, err);
if (!tmp) {
if (SWAB32(*p))
if (fs32_to_cpu(sb, *p))
goto repeat;
goto out;
}
......@@ -332,13 +316,11 @@ static int ufs_getfrag_block (struct inode *inode, long fragment, struct buffer_
struct super_block * sb;
struct ufs_sb_private_info * uspi;
struct buffer_head * bh;
unsigned int swab;
int ret, err, new;
unsigned long ptr, phys;
sb = inode->i_sb;
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
if (!create) {
phys = ufs_frag_map(inode, fragment);
......@@ -504,14 +486,13 @@ void ufs_read_inode (struct inode * inode)
struct ufs_inode * ufs_inode;
struct buffer_head * bh;
unsigned i;
unsigned flags, swab;
unsigned flags;
UFSD(("ENTER, ino %lu\n", inode->i_ino))
sb = inode->i_sb;
uspi = sb->u.ufs_sb.s_uspi;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
if (inode->i_ino < UFS_ROOTINO ||
inode->i_ino > (uspi->s_ncg * uspi->s_ipg)) {
......@@ -529,37 +510,29 @@ void ufs_read_inode (struct inode * inode)
/*
* Copy data to the in-core inode.
*/
inode->i_mode = SWAB16(ufs_inode->ui_mode);
inode->i_nlink = SWAB16(ufs_inode->ui_nlink);
inode->i_mode = fs16_to_cpu(sb, ufs_inode->ui_mode);
inode->i_nlink = fs16_to_cpu(sb, ufs_inode->ui_nlink);
if (inode->i_nlink == 0)
ufs_error (sb, "ufs_read_inode", "inode %lu has zero nlink\n", inode->i_ino);
/*
* Linux now has 32-bit uid and gid, so we can support EFT.
*/
inode->i_uid = ufs_get_inode_uid(ufs_inode);
inode->i_gid = ufs_get_inode_gid(ufs_inode);
/*
* Linux i_size can be 32 on some architectures. We will mark
* big files as read only and let user access first 32 bits.
*/
inode->u.ufs_i.i_size = SWAB64(ufs_inode->ui_size);
inode->i_size = (off_t) inode->u.ufs_i.i_size;
if (sizeof(off_t) == 4 && (inode->u.ufs_i.i_size >> 32))
inode->i_size = (__u32)-1;
inode->i_atime = SWAB32(ufs_inode->ui_atime.tv_sec);
inode->i_ctime = SWAB32(ufs_inode->ui_ctime.tv_sec);
inode->i_mtime = SWAB32(ufs_inode->ui_mtime.tv_sec);
inode->i_blocks = SWAB32(ufs_inode->ui_blocks);
inode->i_uid = ufs_get_inode_uid(sb, ufs_inode);
inode->i_gid = ufs_get_inode_gid(sb, ufs_inode);
inode->i_size = fs64_to_cpu(sb, ufs_inode->ui_size);
inode->i_atime = fs32_to_cpu(sb, ufs_inode->ui_atime.tv_sec);
inode->i_ctime = fs32_to_cpu(sb, ufs_inode->ui_ctime.tv_sec);
inode->i_mtime = fs32_to_cpu(sb, ufs_inode->ui_mtime.tv_sec);
inode->i_blocks = fs32_to_cpu(sb, ufs_inode->ui_blocks);
inode->i_blksize = PAGE_SIZE; /* This is the optimal IO size (for stat) */
inode->i_version = ++event;
inode->u.ufs_i.i_flags = SWAB32(ufs_inode->ui_flags);
inode->u.ufs_i.i_gen = SWAB32(ufs_inode->ui_gen);
inode->u.ufs_i.i_shadow = SWAB32(ufs_inode->ui_u3.ui_sun.ui_shadow);
inode->u.ufs_i.i_oeftflag = SWAB32(ufs_inode->ui_u3.ui_sun.ui_oeftflag);
inode->u.ufs_i.i_flags = fs32_to_cpu(sb, ufs_inode->ui_flags);
inode->u.ufs_i.i_gen = fs32_to_cpu(sb, ufs_inode->ui_gen);
inode->u.ufs_i.i_shadow = fs32_to_cpu(sb, ufs_inode->ui_u3.ui_sun.ui_shadow);
inode->u.ufs_i.i_oeftflag = fs32_to_cpu(sb, ufs_inode->ui_u3.ui_sun.ui_oeftflag);
inode->u.ufs_i.i_lastfrag = (inode->i_size + uspi->s_fsize - 1) >> uspi->s_fshift;
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode))
......@@ -590,7 +563,7 @@ void ufs_read_inode (struct inode * inode)
}
} else
init_special_inode(inode, inode->i_mode,
SWAB32(ufs_inode->ui_u2.ui_addr.ui_db[0]));
fs32_to_cpu(sb, ufs_inode->ui_u2.ui_addr.ui_db[0]));
brelse (bh);
......@@ -604,14 +577,13 @@ static int ufs_update_inode(struct inode * inode, int do_sync)
struct buffer_head * bh;
struct ufs_inode * ufs_inode;
unsigned i;
unsigned flags, swab;
unsigned flags;
UFSD(("ENTER, ino %lu\n", inode->i_ino))
sb = inode->i_sb;
uspi = sb->u.ufs_sb.s_uspi;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
if (inode->i_ino < UFS_ROOTINO ||
inode->i_ino > (uspi->s_ncg * uspi->s_ipg)) {
......@@ -626,30 +598,30 @@ static int ufs_update_inode(struct inode * inode, int do_sync)
}
ufs_inode = (struct ufs_inode *) (bh->b_data + ufs_inotofsbo(inode->i_ino) * sizeof(struct ufs_inode));
ufs_inode->ui_mode = SWAB16(inode->i_mode);
ufs_inode->ui_nlink = SWAB16(inode->i_nlink);
ufs_inode->ui_mode = cpu_to_fs16(sb, inode->i_mode);
ufs_inode->ui_nlink = cpu_to_fs16(sb, inode->i_nlink);
ufs_set_inode_uid (ufs_inode, inode->i_uid);
ufs_set_inode_gid (ufs_inode, inode->i_gid);
ufs_set_inode_uid(sb, ufs_inode, inode->i_uid);
ufs_set_inode_gid(sb, ufs_inode, inode->i_gid);
ufs_inode->ui_size = SWAB64((u64)inode->i_size);
ufs_inode->ui_atime.tv_sec = SWAB32(inode->i_atime);
ufs_inode->ui_atime.tv_usec = SWAB32(0);
ufs_inode->ui_ctime.tv_sec = SWAB32(inode->i_ctime);
ufs_inode->ui_ctime.tv_usec = SWAB32(0);
ufs_inode->ui_mtime.tv_sec = SWAB32(inode->i_mtime);
ufs_inode->ui_mtime.tv_usec = SWAB32(0);
ufs_inode->ui_blocks = SWAB32(inode->i_blocks);
ufs_inode->ui_flags = SWAB32(inode->u.ufs_i.i_flags);
ufs_inode->ui_gen = SWAB32(inode->u.ufs_i.i_gen);
ufs_inode->ui_size = cpu_to_fs64(sb, inode->i_size);
ufs_inode->ui_atime.tv_sec = cpu_to_fs32(sb, inode->i_atime);
ufs_inode->ui_atime.tv_usec = 0;
ufs_inode->ui_ctime.tv_sec = cpu_to_fs32(sb, inode->i_ctime);
ufs_inode->ui_ctime.tv_usec = 0;
ufs_inode->ui_mtime.tv_sec = cpu_to_fs32(sb, inode->i_mtime);
ufs_inode->ui_mtime.tv_usec = 0;
ufs_inode->ui_blocks = cpu_to_fs32(sb, inode->i_blocks);
ufs_inode->ui_flags = cpu_to_fs32(sb, inode->u.ufs_i.i_flags);
ufs_inode->ui_gen = cpu_to_fs32(sb, inode->u.ufs_i.i_gen);
if ((flags & UFS_UID_MASK) == UFS_UID_EFT) {
ufs_inode->ui_u3.ui_sun.ui_shadow = SWAB32(inode->u.ufs_i.i_shadow);
ufs_inode->ui_u3.ui_sun.ui_oeftflag = SWAB32(inode->u.ufs_i.i_oeftflag);
ufs_inode->ui_u3.ui_sun.ui_shadow = cpu_to_fs32(sb, inode->u.ufs_i.i_shadow);
ufs_inode->ui_u3.ui_sun.ui_oeftflag = cpu_to_fs32(sb, inode->u.ufs_i.i_oeftflag);
}
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode))
ufs_inode->ui_u2.ui_addr.ui_db[0] = SWAB32(kdev_t_to_nr(inode->i_rdev));
ufs_inode->ui_u2.ui_addr.ui_db[0] = cpu_to_fs32(sb, kdev_t_to_nr(inode->i_rdev));
else if (inode->i_blocks) {
for (i = 0; i < (UFS_NDADDR + UFS_NINDIR); i++)
ufs_inode->ui_u2.ui_addr.ui_db[i] = inode->u.ufs_i.i_u1.i_data[i];
......
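In the inode.c hunks above, `ufs_read_inode` now decodes every on-disk field through `fs16_to_cpu`/`fs32_to_cpu`/`fs64_to_cpu` with the superblock passed explicitly. The idea can be sketched host-endian-independently by reading the field bytes directly; the struct layout and names here are illustrative, not the real `struct ufs_inode`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum bytesex { BYTESEX_LE, BYTESEX_BE };

/* n holds the raw disk bytes (copied verbatim from the buffer), so
 * reading them through a byte array works on any host. */
static uint16_t fs16_to_cpu(enum bytesex bs, uint16_t n)
{
	uint8_t b[2];
	memcpy(b, &n, 2);
	return bs == BYTESEX_LE ? (uint16_t)(b[0] | (b[1] << 8))
	                        : (uint16_t)(b[1] | (b[0] << 8));
}

static uint32_t fs32_to_cpu(enum bytesex bs, uint32_t n)
{
	uint8_t b[4];
	memcpy(b, &n, 4);
	if (bs == BYTESEX_LE)
		return b[0] | (b[1] << 8) |
		       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       (b[2] << 8) | b[3];
}

/* Illustrative on-disk layout, not the kernel's struct ufs_inode. */
struct disk_inode {
	uint16_t ui_mode;
	uint16_t ui_nlink;
	uint32_t ui_atime;
};

struct cpu_inode { unsigned mode, nlink; uint32_t atime; };

/* Decode in the style of the rewritten ufs_read_inode(). */
static void read_inode(enum bytesex bs, const struct disk_inode *di,
                       struct cpu_inode *out)
{
	out->mode  = fs16_to_cpu(bs, di->ui_mode);
	out->nlink = fs16_to_cpu(bs, di->ui_nlink);
	out->atime = fs32_to_cpu(bs, di->ui_atime);
}
```

The write path (`ufs_update_inode`) is the mirror image with `cpu_to_fs*`, and since byte swapping is an involution the same byte reshuffle serves both directions.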
......@@ -97,44 +97,45 @@
/*
* Print contents of ufs_super_block, useful for debugging
*/
void ufs_print_super_stuff(struct ufs_super_block_first * usb1,
void ufs_print_super_stuff(struct super_block *sb,
struct ufs_super_block_first * usb1,
struct ufs_super_block_second * usb2,
struct ufs_super_block_third * usb3, unsigned swab)
struct ufs_super_block_third * usb3)
{
printk("ufs_print_super_stuff\n");
printk("size of usb: %u\n", sizeof(struct ufs_super_block));
printk(" magic: 0x%x\n", SWAB32(usb3->fs_magic));
printk(" sblkno: %u\n", SWAB32(usb1->fs_sblkno));
printk(" cblkno: %u\n", SWAB32(usb1->fs_cblkno));
printk(" iblkno: %u\n", SWAB32(usb1->fs_iblkno));
printk(" dblkno: %u\n", SWAB32(usb1->fs_dblkno));
printk(" cgoffset: %u\n", SWAB32(usb1->fs_cgoffset));
printk(" ~cgmask: 0x%x\n", ~SWAB32(usb1->fs_cgmask));
printk(" size: %u\n", SWAB32(usb1->fs_size));
printk(" dsize: %u\n", SWAB32(usb1->fs_dsize));
printk(" ncg: %u\n", SWAB32(usb1->fs_ncg));
printk(" bsize: %u\n", SWAB32(usb1->fs_bsize));
printk(" fsize: %u\n", SWAB32(usb1->fs_fsize));
printk(" frag: %u\n", SWAB32(usb1->fs_frag));
printk(" fragshift: %u\n", SWAB32(usb1->fs_fragshift));
printk(" ~fmask: %u\n", ~SWAB32(usb1->fs_fmask));
printk(" fshift: %u\n", SWAB32(usb1->fs_fshift));
printk(" sbsize: %u\n", SWAB32(usb1->fs_sbsize));
printk(" spc: %u\n", SWAB32(usb1->fs_spc));
printk(" cpg: %u\n", SWAB32(usb1->fs_cpg));
printk(" ipg: %u\n", SWAB32(usb1->fs_ipg));
printk(" fpg: %u\n", SWAB32(usb1->fs_fpg));
printk(" csaddr: %u\n", SWAB32(usb1->fs_csaddr));
printk(" cssize: %u\n", SWAB32(usb1->fs_cssize));
printk(" cgsize: %u\n", SWAB32(usb1->fs_cgsize));
printk(" fstodb: %u\n", SWAB32(usb1->fs_fsbtodb));
printk(" contigsumsize: %d\n", SWAB32(usb3->fs_u2.fs_44.fs_contigsumsize));
printk(" postblformat: %u\n", SWAB32(usb3->fs_postblformat));
printk(" nrpos: %u\n", SWAB32(usb3->fs_nrpos));
printk(" ndir %u\n", SWAB32(usb1->fs_cstotal.cs_ndir));
printk(" nifree %u\n", SWAB32(usb1->fs_cstotal.cs_nifree));
printk(" nbfree %u\n", SWAB32(usb1->fs_cstotal.cs_nbfree));
printk(" nffree %u\n", SWAB32(usb1->fs_cstotal.cs_nffree));
printk(" magic: 0x%x\n", fs32_to_cpu(sb, usb3->fs_magic));
printk(" sblkno: %u\n", fs32_to_cpu(sb, usb1->fs_sblkno));
printk(" cblkno: %u\n", fs32_to_cpu(sb, usb1->fs_cblkno));
printk(" iblkno: %u\n", fs32_to_cpu(sb, usb1->fs_iblkno));
printk(" dblkno: %u\n", fs32_to_cpu(sb, usb1->fs_dblkno));
printk(" cgoffset: %u\n", fs32_to_cpu(sb, usb1->fs_cgoffset));
printk(" ~cgmask: 0x%x\n", ~fs32_to_cpu(sb, usb1->fs_cgmask));
printk(" size: %u\n", fs32_to_cpu(sb, usb1->fs_size));
printk(" dsize: %u\n", fs32_to_cpu(sb, usb1->fs_dsize));
printk(" ncg: %u\n", fs32_to_cpu(sb, usb1->fs_ncg));
printk(" bsize: %u\n", fs32_to_cpu(sb, usb1->fs_bsize));
printk(" fsize: %u\n", fs32_to_cpu(sb, usb1->fs_fsize));
printk(" frag: %u\n", fs32_to_cpu(sb, usb1->fs_frag));
printk(" fragshift: %u\n", fs32_to_cpu(sb, usb1->fs_fragshift));
printk(" ~fmask: %u\n", ~fs32_to_cpu(sb, usb1->fs_fmask));
printk(" fshift: %u\n", fs32_to_cpu(sb, usb1->fs_fshift));
printk(" sbsize: %u\n", fs32_to_cpu(sb, usb1->fs_sbsize));
printk(" spc: %u\n", fs32_to_cpu(sb, usb1->fs_spc));
printk(" cpg: %u\n", fs32_to_cpu(sb, usb1->fs_cpg));
printk(" ipg: %u\n", fs32_to_cpu(sb, usb1->fs_ipg));
printk(" fpg: %u\n", fs32_to_cpu(sb, usb1->fs_fpg));
printk(" csaddr: %u\n", fs32_to_cpu(sb, usb1->fs_csaddr));
printk(" cssize: %u\n", fs32_to_cpu(sb, usb1->fs_cssize));
printk(" cgsize: %u\n", fs32_to_cpu(sb, usb1->fs_cgsize));
printk(" fstodb: %u\n", fs32_to_cpu(sb, usb1->fs_fsbtodb));
printk(" contigsumsize: %d\n", fs32_to_cpu(sb, usb3->fs_u2.fs_44.fs_contigsumsize));
printk(" postblformat: %u\n", fs32_to_cpu(sb, usb3->fs_postblformat));
printk(" nrpos: %u\n", fs32_to_cpu(sb, usb3->fs_nrpos));
printk(" ndir %u\n", fs32_to_cpu(sb, usb1->fs_cstotal.cs_ndir));
printk(" nifree %u\n", fs32_to_cpu(sb, usb1->fs_cstotal.cs_nifree));
printk(" nbfree %u\n", fs32_to_cpu(sb, usb1->fs_cstotal.cs_nbfree));
printk(" nffree %u\n", fs32_to_cpu(sb, usb1->fs_cstotal.cs_nffree));
printk("\n");
}
......@@ -142,36 +143,36 @@ void ufs_print_super_stuff(struct ufs_super_block_first * usb1,
/*
* Print contents of ufs_cylinder_group, useful for debugging
*/
void ufs_print_cylinder_stuff(struct ufs_cylinder_group *cg, unsigned swab)
void ufs_print_cylinder_stuff(struct super_block *sb, struct ufs_cylinder_group *cg)
{
printk("\nufs_print_cylinder_stuff\n");
printk("size of ucg: %u\n", sizeof(struct ufs_cylinder_group));
printk(" magic: %x\n", SWAB32(cg->cg_magic));
printk(" time: %u\n", SWAB32(cg->cg_time));
printk(" cgx: %u\n", SWAB32(cg->cg_cgx));
printk(" ncyl: %u\n", SWAB16(cg->cg_ncyl));
printk(" niblk: %u\n", SWAB16(cg->cg_niblk));
printk(" ndblk: %u\n", SWAB32(cg->cg_ndblk));
printk(" cs_ndir: %u\n", SWAB32(cg->cg_cs.cs_ndir));
printk(" cs_nbfree: %u\n", SWAB32(cg->cg_cs.cs_nbfree));
printk(" cs_nifree: %u\n", SWAB32(cg->cg_cs.cs_nifree));
printk(" cs_nffree: %u\n", SWAB32(cg->cg_cs.cs_nffree));
printk(" rotor: %u\n", SWAB32(cg->cg_rotor));
printk(" frotor: %u\n", SWAB32(cg->cg_frotor));
printk(" irotor: %u\n", SWAB32(cg->cg_irotor));
printk(" magic: %x\n", fs32_to_cpu(sb, cg->cg_magic));
printk(" time: %u\n", fs32_to_cpu(sb, cg->cg_time));
printk(" cgx: %u\n", fs32_to_cpu(sb, cg->cg_cgx));
printk(" ncyl: %u\n", fs16_to_cpu(sb, cg->cg_ncyl));
printk(" niblk: %u\n", fs16_to_cpu(sb, cg->cg_niblk));
printk(" ndblk: %u\n", fs32_to_cpu(sb, cg->cg_ndblk));
printk(" cs_ndir: %u\n", fs32_to_cpu(sb, cg->cg_cs.cs_ndir));
printk(" cs_nbfree: %u\n", fs32_to_cpu(sb, cg->cg_cs.cs_nbfree));
printk(" cs_nifree: %u\n", fs32_to_cpu(sb, cg->cg_cs.cs_nifree));
printk(" cs_nffree: %u\n", fs32_to_cpu(sb, cg->cg_cs.cs_nffree));
printk(" rotor: %u\n", fs32_to_cpu(sb, cg->cg_rotor));
printk(" frotor: %u\n", fs32_to_cpu(sb, cg->cg_frotor));
printk(" irotor: %u\n", fs32_to_cpu(sb, cg->cg_irotor));
printk(" frsum: %u, %u, %u, %u, %u, %u, %u, %u\n",
SWAB32(cg->cg_frsum[0]), SWAB32(cg->cg_frsum[1]),
SWAB32(cg->cg_frsum[2]), SWAB32(cg->cg_frsum[3]),
SWAB32(cg->cg_frsum[4]), SWAB32(cg->cg_frsum[5]),
SWAB32(cg->cg_frsum[6]), SWAB32(cg->cg_frsum[7]));
printk(" btotoff: %u\n", SWAB32(cg->cg_btotoff));
printk(" boff: %u\n", SWAB32(cg->cg_boff));
printk(" iuseoff: %u\n", SWAB32(cg->cg_iusedoff));
printk(" freeoff: %u\n", SWAB32(cg->cg_freeoff));
printk(" nextfreeoff: %u\n", SWAB32(cg->cg_nextfreeoff));
printk(" clustersumoff %u\n", SWAB32(cg->cg_u.cg_44.cg_clustersumoff));
printk(" clusteroff %u\n", SWAB32(cg->cg_u.cg_44.cg_clusteroff));
printk(" nclusterblks %u\n", SWAB32(cg->cg_u.cg_44.cg_nclusterblks));
fs32_to_cpu(sb, cg->cg_frsum[0]), fs32_to_cpu(sb, cg->cg_frsum[1]),
fs32_to_cpu(sb, cg->cg_frsum[2]), fs32_to_cpu(sb, cg->cg_frsum[3]),
fs32_to_cpu(sb, cg->cg_frsum[4]), fs32_to_cpu(sb, cg->cg_frsum[5]),
fs32_to_cpu(sb, cg->cg_frsum[6]), fs32_to_cpu(sb, cg->cg_frsum[7]));
printk(" btotoff: %u\n", fs32_to_cpu(sb, cg->cg_btotoff));
printk(" boff: %u\n", fs32_to_cpu(sb, cg->cg_boff));
printk(" iuseoff: %u\n", fs32_to_cpu(sb, cg->cg_iusedoff));
printk(" freeoff: %u\n", fs32_to_cpu(sb, cg->cg_freeoff));
printk(" nextfreeoff: %u\n", fs32_to_cpu(sb, cg->cg_nextfreeoff));
printk(" clustersumoff %u\n", fs32_to_cpu(sb, cg->cg_u.cg_44.cg_clustersumoff));
printk(" clusteroff %u\n", fs32_to_cpu(sb, cg->cg_u.cg_44.cg_clusteroff));
printk(" nclusterblks %u\n", fs32_to_cpu(sb, cg->cg_u.cg_44.cg_nclusterblks));
printk("\n");
}
#endif /* UFS_SUPER_DEBUG_MORE */
......@@ -320,12 +321,10 @@ int ufs_read_cylinder_structures (struct super_block * sb) {
struct ufs_buffer_head * ubh;
unsigned char * base, * space;
unsigned size, blks, i;
unsigned swab;
UFSD(("ENTER\n"))
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
/*
* Read cs structures from (usually) first data block
......@@ -366,10 +365,10 @@ int ufs_read_cylinder_structures (struct super_block * sb) {
UFSD(("read cg %u\n", i))
if (!(sb->u.ufs_sb.s_ucg[i] = bread (sb->s_dev, ufs_cgcmin(i), sb->s_blocksize)))
goto failed;
if (!ufs_cg_chkmagic ((struct ufs_cylinder_group *) sb->u.ufs_sb.s_ucg[i]->b_data))
if (!ufs_cg_chkmagic (sb, (struct ufs_cylinder_group *) sb->u.ufs_sb.s_ucg[i]->b_data))
goto failed;
#ifdef UFS_SUPER_DEBUG_MORE
ufs_print_cylinder_stuff((struct ufs_cylinder_group *) sb->u.ufs_sb.s_ucg[i]->b_data, swab);
ufs_print_cylinder_stuff(sb, (struct ufs_cylinder_group *) sb->u.ufs_sb.s_ucg[i]->b_data);
#endif
}
for (i = 0; i < UFS_MAX_GROUP_LOADED; i++) {
......@@ -444,12 +443,11 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
struct ufs_super_block_third * usb3;
struct ufs_buffer_head * ubh;
unsigned block_size, super_block_size;
unsigned flags, swab;
unsigned flags;
uspi = NULL;
ubh = NULL;
flags = 0;
swab = 0;
UFSD(("ENTER\n"))
......@@ -614,37 +612,22 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
/*
* Check ufs magic number
*/
#if defined(__LITTLE_ENDIAN) || defined(__BIG_ENDIAN) /* sane bytesex */
switch (usb3->fs_magic) {
switch (__constant_le32_to_cpu(usb3->fs_magic)) {
case UFS_MAGIC:
case UFS_MAGIC_LFN:
case UFS_MAGIC_LFN:
case UFS_MAGIC_FEA:
case UFS_MAGIC_4GB:
swab = UFS_NATIVE_ENDIAN;
goto magic_found;
case UFS_CIGAM:
case UFS_CIGAM_LFN:
case UFS_CIGAM_FEA:
case UFS_CIGAM_4GB:
swab = UFS_SWABBED_ENDIAN;
sb->u.ufs_sb.s_bytesex = BYTESEX_LE;
goto magic_found;
}
#else /* bytesex perversion */
switch (le32_to_cpup(&usb3->fs_magic)) {
switch (__constant_be32_to_cpu(usb3->fs_magic)) {
case UFS_MAGIC:
case UFS_MAGIC_LFN:
case UFS_MAGIC_FEA:
case UFS_MAGIC_4GB:
swab = UFS_LITTLE_ENDIAN;
goto magic_found;
case UFS_CIGAM:
case UFS_CIGAM_LFN:
case UFS_CIGAM_FEA:
case UFS_CIGAM_4GB:
swab = UFS_BIG_ENDIAN;
sb->u.ufs_sb.s_bytesex = BYTESEX_BE;
goto magic_found;
}
#endif
if ((((sb->u.ufs_sb.s_mount_opt & UFS_MOUNT_UFSTYPE) == UFS_MOUNT_UFSTYPE_NEXTSTEP)
|| ((sb->u.ufs_sb.s_mount_opt & UFS_MOUNT_UFSTYPE) == UFS_MOUNT_UFSTYPE_NEXTSTEP_CD)
......@@ -662,11 +645,11 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
/*
* Check block and fragment sizes
*/
uspi->s_bsize = SWAB32(usb1->fs_bsize);
uspi->s_fsize = SWAB32(usb1->fs_fsize);
uspi->s_sbsize = SWAB32(usb1->fs_sbsize);
uspi->s_fmask = SWAB32(usb1->fs_fmask);
uspi->s_fshift = SWAB32(usb1->fs_fshift);
uspi->s_bsize = fs32_to_cpu(sb, usb1->fs_bsize);
uspi->s_fsize = fs32_to_cpu(sb, usb1->fs_fsize);
uspi->s_sbsize = fs32_to_cpu(sb, usb1->fs_sbsize);
uspi->s_fmask = fs32_to_cpu(sb, usb1->fs_fmask);
uspi->s_fshift = fs32_to_cpu(sb, usb1->fs_fshift);
if (uspi->s_bsize != 4096 && uspi->s_bsize != 8192
&& uspi->s_bsize != 32768) {
......@@ -688,7 +671,7 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
}
#ifdef UFS_SUPER_DEBUG_MORE
ufs_print_super_stuff (usb1, usb2, usb3, swab);
ufs_print_super_stuff(sb, usb1, usb2, usb3);
#endif
/*
......@@ -699,7 +682,7 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
((flags & UFS_ST_MASK) == UFS_ST_OLD) ||
(((flags & UFS_ST_MASK) == UFS_ST_SUN ||
(flags & UFS_ST_MASK) == UFS_ST_SUNx86) &&
(ufs_get_fs_state(usb1, usb3) == (UFS_FSOK - SWAB32(usb1->fs_time))))) {
(ufs_get_fs_state(sb, usb1, usb3) == (UFS_FSOK - fs32_to_cpu(sb, usb1->fs_time))))) {
switch(usb1->fs_clean) {
case UFS_FSCLEAN:
UFSD(("fs is clean\n"))
......@@ -732,56 +715,56 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
/*
* Read ufs_super_block into internal data structures
*/
sb->s_blocksize = SWAB32(usb1->fs_fsize);
sb->s_blocksize_bits = SWAB32(usb1->fs_fshift);
sb->s_blocksize = fs32_to_cpu(sb, usb1->fs_fsize);
sb->s_blocksize_bits = fs32_to_cpu(sb, usb1->fs_fshift);
sb->s_op = &ufs_super_ops;
sb->dq_op = NULL; /***/
sb->s_magic = SWAB32(usb3->fs_magic);
uspi->s_sblkno = SWAB32(usb1->fs_sblkno);
uspi->s_cblkno = SWAB32(usb1->fs_cblkno);
uspi->s_iblkno = SWAB32(usb1->fs_iblkno);
uspi->s_dblkno = SWAB32(usb1->fs_dblkno);
uspi->s_cgoffset = SWAB32(usb1->fs_cgoffset);
uspi->s_cgmask = SWAB32(usb1->fs_cgmask);
uspi->s_size = SWAB32(usb1->fs_size);
uspi->s_dsize = SWAB32(usb1->fs_dsize);
uspi->s_ncg = SWAB32(usb1->fs_ncg);
sb->s_magic = fs32_to_cpu(sb, usb3->fs_magic);
uspi->s_sblkno = fs32_to_cpu(sb, usb1->fs_sblkno);
uspi->s_cblkno = fs32_to_cpu(sb, usb1->fs_cblkno);
uspi->s_iblkno = fs32_to_cpu(sb, usb1->fs_iblkno);
uspi->s_dblkno = fs32_to_cpu(sb, usb1->fs_dblkno);
uspi->s_cgoffset = fs32_to_cpu(sb, usb1->fs_cgoffset);
uspi->s_cgmask = fs32_to_cpu(sb, usb1->fs_cgmask);
uspi->s_size = fs32_to_cpu(sb, usb1->fs_size);
uspi->s_dsize = fs32_to_cpu(sb, usb1->fs_dsize);
uspi->s_ncg = fs32_to_cpu(sb, usb1->fs_ncg);
/* s_bsize already set */
/* s_fsize already set */
uspi->s_fpb = SWAB32(usb1->fs_frag);
uspi->s_minfree = SWAB32(usb1->fs_minfree);
uspi->s_bmask = SWAB32(usb1->fs_bmask);
uspi->s_fmask = SWAB32(usb1->fs_fmask);
uspi->s_bshift = SWAB32(usb1->fs_bshift);
uspi->s_fshift = SWAB32(usb1->fs_fshift);
uspi->s_fpbshift = SWAB32(usb1->fs_fragshift);
uspi->s_fsbtodb = SWAB32(usb1->fs_fsbtodb);
uspi->s_fpb = fs32_to_cpu(sb, usb1->fs_frag);
uspi->s_minfree = fs32_to_cpu(sb, usb1->fs_minfree);
uspi->s_bmask = fs32_to_cpu(sb, usb1->fs_bmask);
uspi->s_fmask = fs32_to_cpu(sb, usb1->fs_fmask);
uspi->s_bshift = fs32_to_cpu(sb, usb1->fs_bshift);
uspi->s_fshift = fs32_to_cpu(sb, usb1->fs_fshift);
uspi->s_fpbshift = fs32_to_cpu(sb, usb1->fs_fragshift);
uspi->s_fsbtodb = fs32_to_cpu(sb, usb1->fs_fsbtodb);
/* s_sbsize already set */
uspi->s_csmask = SWAB32(usb1->fs_csmask);
uspi->s_csshift = SWAB32(usb1->fs_csshift);
uspi->s_nindir = SWAB32(usb1->fs_nindir);
uspi->s_inopb = SWAB32(usb1->fs_inopb);
uspi->s_nspf = SWAB32(usb1->fs_nspf);
uspi->s_npsect = ufs_get_fs_npsect(usb1, usb3);
uspi->s_interleave = SWAB32(usb1->fs_interleave);
uspi->s_trackskew = SWAB32(usb1->fs_trackskew);
uspi->s_csaddr = SWAB32(usb1->fs_csaddr);
uspi->s_cssize = SWAB32(usb1->fs_cssize);
uspi->s_cgsize = SWAB32(usb1->fs_cgsize);
uspi->s_ntrak = SWAB32(usb1->fs_ntrak);
uspi->s_nsect = SWAB32(usb1->fs_nsect);
uspi->s_spc = SWAB32(usb1->fs_spc);
uspi->s_ipg = SWAB32(usb1->fs_ipg);
uspi->s_fpg = SWAB32(usb1->fs_fpg);
uspi->s_cpc = SWAB32(usb2->fs_cpc);
uspi->s_contigsumsize = SWAB32(usb3->fs_u2.fs_44.fs_contigsumsize);
uspi->s_qbmask = ufs_get_fs_qbmask(usb3);
uspi->s_qfmask = ufs_get_fs_qfmask(usb3);
uspi->s_postblformat = SWAB32(usb3->fs_postblformat);
uspi->s_nrpos = SWAB32(usb3->fs_nrpos);
uspi->s_postbloff = SWAB32(usb3->fs_postbloff);
uspi->s_rotbloff = SWAB32(usb3->fs_rotbloff);
uspi->s_csmask = fs32_to_cpu(sb, usb1->fs_csmask);
uspi->s_csshift = fs32_to_cpu(sb, usb1->fs_csshift);
uspi->s_nindir = fs32_to_cpu(sb, usb1->fs_nindir);
uspi->s_inopb = fs32_to_cpu(sb, usb1->fs_inopb);
uspi->s_nspf = fs32_to_cpu(sb, usb1->fs_nspf);
uspi->s_npsect = ufs_get_fs_npsect(sb, usb1, usb3);
uspi->s_interleave = fs32_to_cpu(sb, usb1->fs_interleave);
uspi->s_trackskew = fs32_to_cpu(sb, usb1->fs_trackskew);
uspi->s_csaddr = fs32_to_cpu(sb, usb1->fs_csaddr);
uspi->s_cssize = fs32_to_cpu(sb, usb1->fs_cssize);
uspi->s_cgsize = fs32_to_cpu(sb, usb1->fs_cgsize);
uspi->s_ntrak = fs32_to_cpu(sb, usb1->fs_ntrak);
uspi->s_nsect = fs32_to_cpu(sb, usb1->fs_nsect);
uspi->s_spc = fs32_to_cpu(sb, usb1->fs_spc);
uspi->s_ipg = fs32_to_cpu(sb, usb1->fs_ipg);
uspi->s_fpg = fs32_to_cpu(sb, usb1->fs_fpg);
uspi->s_cpc = fs32_to_cpu(sb, usb2->fs_cpc);
uspi->s_contigsumsize = fs32_to_cpu(sb, usb3->fs_u2.fs_44.fs_contigsumsize);
uspi->s_qbmask = ufs_get_fs_qbmask(sb, usb3);
uspi->s_qfmask = ufs_get_fs_qfmask(sb, usb3);
uspi->s_postblformat = fs32_to_cpu(sb, usb3->fs_postblformat);
uspi->s_nrpos = fs32_to_cpu(sb, usb3->fs_nrpos);
uspi->s_postbloff = fs32_to_cpu(sb, usb3->fs_postbloff);
uspi->s_rotbloff = fs32_to_cpu(sb, usb3->fs_rotbloff);
/*
* Compute another frequently used values
......@@ -803,11 +786,9 @@ struct super_block * ufs_read_super (struct super_block * sb, void * data,
if ((sb->u.ufs_sb.s_mount_opt & UFS_MOUNT_UFSTYPE) ==
UFS_MOUNT_UFSTYPE_44BSD)
uspi->s_maxsymlinklen =
SWAB32(usb3->fs_u2.fs_44.fs_maxsymlinklen);
fs32_to_cpu(sb, usb3->fs_u2.fs_44.fs_maxsymlinklen);
sb->u.ufs_sb.s_flags = flags;
sb->u.ufs_sb.s_swab = swab;
sb->s_root = d_alloc_root(iget(sb, UFS_ROOTINO));
......@@ -832,20 +813,20 @@ void ufs_write_super (struct super_block * sb) {
struct ufs_sb_private_info * uspi;
struct ufs_super_block_first * usb1;
struct ufs_super_block_third * usb3;
unsigned flags, swab;
unsigned flags;
UFSD(("ENTER\n"))
swab = sb->u.ufs_sb.s_swab;
flags = sb->u.ufs_sb.s_flags;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first(USPI_UBH);
usb3 = ubh_get_usb_third(USPI_UBH);
if (!(sb->s_flags & MS_RDONLY)) {
usb1->fs_time = SWAB32(CURRENT_TIME);
usb1->fs_time = cpu_to_fs32(sb, CURRENT_TIME);
if ((flags & UFS_ST_MASK) == UFS_ST_SUN
|| (flags & UFS_ST_MASK) == UFS_ST_SUNx86)
ufs_set_fs_state(usb1, usb3, UFS_FSOK - SWAB32(usb1->fs_time));
ufs_set_fs_state(sb, usb1, usb3,
UFS_FSOK - fs32_to_cpu(sb, usb1->fs_time));
ubh_mark_buffer_dirty (USPI_UBH);
}
sb->s_dirt = 0;
......@@ -855,12 +836,10 @@ void ufs_write_super (struct super_block * sb) {
void ufs_put_super (struct super_block * sb)
{
struct ufs_sb_private_info * uspi;
unsigned swab;
UFSD(("ENTER\n"))
uspi = sb->u.ufs_sb.s_uspi;
swab = sb->u.ufs_sb.s_swab;
if (!(sb->s_flags & MS_RDONLY))
ufs_put_cylinder_structures (sb);
......@@ -877,11 +856,10 @@ int ufs_remount (struct super_block * sb, int * mount_flags, char * data)
struct ufs_super_block_first * usb1;
struct ufs_super_block_third * usb3;
unsigned new_mount_opt, ufstype;
unsigned flags, swab;
unsigned flags;
uspi = sb->u.ufs_sb.s_uspi;
flags = sb->u.ufs_sb.s_flags;
swab = sb->u.ufs_sb.s_swab;
usb1 = ubh_get_usb_first(USPI_UBH);
usb3 = ubh_get_usb_third(USPI_UBH);
......@@ -912,10 +890,11 @@ int ufs_remount (struct super_block * sb, int * mount_flags, char * data)
*/
if (*mount_flags & MS_RDONLY) {
ufs_put_cylinder_structures(sb);
usb1->fs_time = SWAB32(CURRENT_TIME);
usb1->fs_time = cpu_to_fs32(sb, CURRENT_TIME);
if ((flags & UFS_ST_MASK) == UFS_ST_SUN
|| (flags & UFS_ST_MASK) == UFS_ST_SUNx86)
ufs_set_fs_state(usb1, usb3, UFS_FSOK - SWAB32(usb1->fs_time));
ufs_set_fs_state(sb, usb1, usb3,
UFS_FSOK - fs32_to_cpu(sb, usb1->fs_time));
ubh_mark_buffer_dirty (USPI_UBH);
sb->s_dirt = 0;
sb->s_flags |= MS_RDONLY;
......@@ -950,21 +929,19 @@ int ufs_statfs (struct super_block * sb, struct statfs * buf)
{
struct ufs_sb_private_info * uspi;
struct ufs_super_block_first * usb1;
unsigned swab;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
usb1 = ubh_get_usb_first (USPI_UBH);
buf->f_type = UFS_MAGIC;
buf->f_bsize = sb->s_blocksize;
buf->f_blocks = uspi->s_dsize;
buf->f_bfree = ufs_blkstofrags(SWAB32(usb1->fs_cstotal.cs_nbfree)) +
SWAB32(usb1->fs_cstotal.cs_nffree);
buf->f_bfree = ufs_blkstofrags(fs32_to_cpu(sb, usb1->fs_cstotal.cs_nbfree)) +
fs32_to_cpu(sb, usb1->fs_cstotal.cs_nffree);
buf->f_bavail = (buf->f_bfree > ((buf->f_blocks / 100) * uspi->s_minfree))
? (buf->f_bfree - ((buf->f_blocks / 100) * uspi->s_minfree)) : 0;
buf->f_files = uspi->s_ncg * uspi->s_ipg;
buf->f_ffree = SWAB32(usb1->fs_cstotal.cs_nifree);
buf->f_ffree = fs32_to_cpu(sb, usb1->fs_cstotal.cs_nifree);
buf->f_namelen = UFS_MAXNAMLEN;
return 0;
}
......
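The super.c hunks above collapse the `#if defined(__LITTLE_ENDIAN)...#else...#endif` pair of magic-number switches into two unconditional probes: try `fs_magic` as little-endian, then as big-endian, and record the winner in `s_bytesex`. A hedged standalone sketch of that detection (UFS_MAGIC is the real on-disk constant 0x00011954; the helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define UFS_MAGIC 0x00011954u

enum bytesex { BYTESEX_LE, BYTESEX_BE, BYTESEX_BAD };

static uint32_t load_le32(const uint8_t *p)
{
	return p[0] | (p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

static uint32_t load_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       (p[2] << 8) | p[3];
}

/* Probe the magic in both byte orders, mirroring the rewritten pair of
 * switch statements in ufs_read_super(); only UFS_MAGIC is checked
 * here, where the kernel also accepts the LFN/FEA/4GB variants. */
static enum bytesex detect_bytesex(const uint8_t magic[4])
{
	if (load_le32(magic) == UFS_MAGIC)
		return BYTESEX_LE;
	if (load_be32(magic) == UFS_MAGIC)
		return BYTESEX_BE;
	return BYTESEX_BAD;
}
```

Because the detection reads raw bytes, the same code path runs on every architecture, which is what lets the commit delete the "sane bytesex" / "bytesex perversion" preprocessor split.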
......@@ -3,6 +3,7 @@
*
* Copyright (C) 1997, 1998 Francois-Rene Rideau <fare@tunes.org>
* Copyright (C) 1998 Jakub Jelinek <jj@ultra.linux.cz>
* Copyright (C) 2001 Christoph Hellwig <hch@caldera.de>
*/
#ifndef _UFS_SWAB_H
......@@ -14,124 +15,119 @@
* in case there are ufs implementations that have strange bytesexes,
* you'll need to modify code here as well as in ufs_super.c and ufs_fs.h
* to support them.
*
* WE ALSO ASSUME A REMOTELY SANE ARCHITECTURE BYTESEX.
* We are not ready to confront insane bytesexual perversions where
* conversion to/from little/big-endian is not an involution.
* That is, we require that XeYZ_to_cpu(x) == cpu_to_XeYZ(x)
*
* NOTE that swab macros depend on a variable (or macro) swab being in
* scope and properly initialized (usually from sb->u.ufs_sb.s_swab).
* Its meaning depends on whether the architecture is sane-endian or not.
* For sane architectures, it's a flag taking values UFS_NATIVE_ENDIAN (0)
* or UFS_SWABBED_ENDIAN (1), indicating whether to swab or not.
* For pervert architectures, it's either UFS_LITTLE_ENDIAN or
* UFS_BIG_ENDIAN whose meaning you'll have to guess.
*
* It is important to keep these conventions in synch with ufs_fs.h
* and super.c. Failure to do so (initializing swab to 0 both for
* NATIVE_ENDIAN and LITTLE_ENDIAN) led to nasty crashes on big endian
* machines reading little endian UFSes. Search for "swab =" in super.c.
*
* I also suspect the whole UFS code to trust the on-disk structures
* much too much, which might lead to losing badly when mounting
* inconsistent partitions as UFS filesystems. fsck required (but of
* course, no fsck.ufs has yet to be ported from BSD to Linux as of 199808).
*/
#include <linux/ufs_fs.h>
#include <asm/byteorder.h>
/*
* These are only valid inside ufs routines,
* after swab has been initialized to sb->u.ufs_sb.s_swab
*/
#define SWAB16(x) ufs_swab16(swab,x)
#define SWAB32(x) ufs_swab32(swab,x)
#define SWAB64(x) ufs_swab64(swab,x)
enum {
BYTESEX_LE,
BYTESEX_BE
};
/*
* We often use swabbing when we want to increment/decrement some value,
* so these macros might come in handy and increase readability. (Daniel)
*/
#define INC_SWAB16(x) ((x)=ufs_swab16_add(swab,x,1))
#define INC_SWAB32(x) ((x)=ufs_swab32_add(swab,x,1))
#define INC_SWAB64(x) ((x)=ufs_swab64_add(swab,x,1))
#define DEC_SWAB16(x) ((x)=ufs_swab16_add(swab,x,-1))
#define DEC_SWAB32(x) ((x)=ufs_swab32_add(swab,x,-1))
#define DEC_SWAB64(x) ((x)=ufs_swab64_add(swab,x,-1))
#define ADD_SWAB16(x,y) ((x)=ufs_swab16_add(swab,x,y))
#define ADD_SWAB32(x,y) ((x)=ufs_swab32_add(swab,x,y))
#define ADD_SWAB64(x,y) ((x)=ufs_swab64_add(swab,x,y))
#define SUB_SWAB16(x,y) ((x)=ufs_swab16_add(swab,x,-(y)))
#define SUB_SWAB32(x,y) ((x)=ufs_swab32_add(swab,x,-(y)))
#define SUB_SWAB64(x,y) ((x)=ufs_swab64_add(swab,x,-(y)))
#if defined(__LITTLE_ENDIAN) || defined(__BIG_ENDIAN) /* sane bytesex */
extern __inline__ __const__ __u16 ufs_swab16(unsigned swab, __u16 x) {
if (swab)
return swab16(x);
static __inline u64
fs64_to_cpu(struct super_block *sbp, u64 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return le64_to_cpu(n);
else
return x;
return be64_to_cpu(n);
}
extern __inline__ __const__ __u32 ufs_swab32(unsigned swab, __u32 x) {
if (swab)
return swab32(x);
static __inline u64
cpu_to_fs64(struct super_block *sbp, u64 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return cpu_to_le64(n);
else
return x;
return cpu_to_be64(n);
}
extern __inline__ __const__ __u64 ufs_swab64(unsigned swab, __u64 x) {
if (swab)
return swab64(x);
static __inline u64
fs64_add(struct super_block *sbp, u64 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le64(le64_to_cpu(*n)+d);
else
return x;
return *n = cpu_to_be64(be64_to_cpu(*n)+d);
}
extern __inline__ __const__ __u16 ufs_swab16_add(unsigned swab, __u16 x, __u16 y) {
if (swab)
return swab16(swab16(x)+y);
static __inline u64
fs64_sub(struct super_block *sbp, u64 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le64(le64_to_cpu(*n)-d);
else
return x + y;
return *n = cpu_to_be64(be64_to_cpu(*n)-d);
}
extern __inline__ __const__ __u32 ufs_swab32_add(unsigned swab, __u32 x, __u32 y) {
if (swab)
return swab32(swab32(x)+y);
static __inline u32
fs32_to_cpu(struct super_block *sbp, u32 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return le32_to_cpu(n);
else
return x + y;
return be32_to_cpu(n);
}
extern __inline__ __const__ __u64 ufs_swab64_add(unsigned swab, __u64 x, __u64 y) {
if (swab)
return swab64(swab64(x)+y);
static __inline u32
cpu_to_fs32(struct super_block *sbp, u32 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return cpu_to_le32(n);
else
return x + y;
return cpu_to_be32(n);
}
#else /* bytesexual perversion -- BEWARE! Read note at top of file! */
extern __inline__ __const__ __u16 ufs_swab16(unsigned swab, __u16 x) {
if (swab == UFS_LITTLE_ENDIAN)
return le16_to_cpu(x);
static __inline u32
fs32_add(struct super_block *sbp, u32 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le32(le32_to_cpu(*n)+d);
else
return be16_to_cpu(x);
return *n = cpu_to_be32(be32_to_cpu(*n)+d);
}
extern __inline__ __const__ __u32 ufs_swab32(unsigned swab, __u32 x) {
if (swab == UFS_LITTLE_ENDIAN)
return le32_to_cpu(x);
static __inline u32
fs32_sub(struct super_block *sbp, u32 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le32(le32_to_cpu(*n)-d);
else
return be32_to_cpu(x);
return *n = cpu_to_be32(be32_to_cpu(*n)-d);
}
extern __inline__ __const__ __u64 ufs_swab64(unsigned swab, __u64 x) {
if (swab == UFS_LITTLE_ENDIAN)
return le64_to_cpu(x);
static __inline u16
fs16_to_cpu(struct super_block *sbp, u16 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return le16_to_cpu(n);
else
return be64_to_cpu(x);
return be16_to_cpu(n);
}
extern __inline__ __const__ __u16 ufs_swab16_add(unsigned swab, __u16 x, __u16 y) {
return ufs_swab16(swab, ufs_swab16(swab, x) + y);
static __inline u16
cpu_to_fs16(struct super_block *sbp, u16 n)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return cpu_to_le16(n);
else
return cpu_to_be16(n);
}
extern __inline__ __const__ __u32 ufs_swab32_add(unsigned swab, __u32 x, __u32 y) {
return ufs_swab32(swab, ufs_swab32(swab, x) + y);
static __inline u16
fs16_add(struct super_block *sbp, u16 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le16(le16_to_cpu(*n)+d);
else
return *n = cpu_to_be16(be16_to_cpu(*n)+d);
}
extern __inline__ __const__ __u64 ufs_swab64_add(unsigned swab, __u64 x, __u64 y) {
return ufs_swab64(swab, ufs_swab64(swab, x) + y);
static __inline u16
fs16_sub(struct super_block *sbp, u16 *n, int d)
{
if (sbp->u.ufs_sb.s_bytesex == BYTESEX_LE)
return *n = cpu_to_le16(le16_to_cpu(*n)-d);
else
return *n = cpu_to_be16(be16_to_cpu(*n)-d);
}
#endif /* byte sexuality */
#endif /* _UFS_SWAB_H */
......@@ -75,12 +75,10 @@ static int ufs_trunc_direct (struct inode * inode)
unsigned frag_to_free, free_count;
unsigned i, j, tmp;
int retry;
unsigned swab;
UFSD(("ENTER\n"))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
frag_to_free = 0;
......@@ -110,14 +108,14 @@ static int ufs_trunc_direct (struct inode * inode)
* Free first free fragments
*/
p = inode->u.ufs_i.i_u1.i_data + ufs_fragstoblks (frag1);
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (!tmp )
ufs_panic (sb, "ufs_trunc_direct", "internal error");
frag1 = ufs_fragnum (frag1);
frag2 = ufs_fragnum (frag2);
for (j = frag1; j < frag2; j++) {
bh = get_hash_table (sb->s_dev, tmp + j, uspi->s_fsize);
if ((bh && DATA_BUFFER_USED(bh)) || tmp != SWAB32(*p)) {
if ((bh && DATA_BUFFER_USED(bh)) || tmp != fs32_to_cpu(sb, *p)) {
retry = 1;
brelse (bh);
goto next1;
......@@ -135,19 +133,19 @@ static int ufs_trunc_direct (struct inode * inode)
*/
for (i = block1 ; i < block2; i++) {
p = inode->u.ufs_i.i_u1.i_data + i;
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (!tmp)
continue;
for (j = 0; j < uspi->s_fpb; j++) {
bh = get_hash_table (sb->s_dev, tmp + j, uspi->s_fsize);
if ((bh && DATA_BUFFER_USED(bh)) || tmp != SWAB32(*p)) {
if ((bh && DATA_BUFFER_USED(bh)) || tmp != fs32_to_cpu(sb, *p)) {
retry = 1;
brelse (bh);
goto next2;
}
bforget (bh);
}
*p = SWAB32(0);
*p = 0;
inode->i_blocks -= uspi->s_nspb;
mark_inode_dirty(inode);
if (free_count == 0) {
......@@ -173,20 +171,20 @@ next2:;
* Free last free fragments
*/
p = inode->u.ufs_i.i_u1.i_data + ufs_fragstoblks (frag3);
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (!tmp )
ufs_panic(sb, "ufs_truncate_direct", "internal error");
frag4 = ufs_fragnum (frag4);
for (j = 0; j < frag4; j++) {
bh = get_hash_table (sb->s_dev, tmp + j, uspi->s_fsize);
if ((bh && DATA_BUFFER_USED(bh)) || tmp != SWAB32(*p)) {
if ((bh && DATA_BUFFER_USED(bh)) || tmp != fs32_to_cpu(sb, *p)) {
retry = 1;
brelse (bh);
goto next1;
}
bforget (bh);
}
*p = SWAB32(0);
*p = 0;
inode->i_blocks -= frag4 << uspi->s_nspfshift;
mark_inode_dirty(inode);
ufs_free_fragments (inode, tmp, frag4);
......@@ -207,47 +205,45 @@ static int ufs_trunc_indirect (struct inode * inode, unsigned offset, u32 * p)
unsigned indirect_block, i, j, tmp;
unsigned frag_to_free, free_count;
int retry;
unsigned swab;
UFSD(("ENTER\n"))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
frag_to_free = 0;
free_count = 0;
retry = 0;
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (!tmp)
return 0;
ind_ubh = ubh_bread (sb->s_dev, tmp, uspi->s_bsize);
if (tmp != SWAB32(*p)) {
if (tmp != fs32_to_cpu(sb, *p)) {
ubh_brelse (ind_ubh);
return 1;
}
if (!ind_ubh) {
*p = SWAB32(0);
*p = 0;
return 0;
}
indirect_block = (DIRECT_BLOCK > offset) ? (DIRECT_BLOCK - offset) : 0;
for (i = indirect_block; i < uspi->s_apb; i++) {
ind = ubh_get_addr32 (ind_ubh, i);
tmp = SWAB32(*ind);
tmp = fs32_to_cpu(sb, *ind);
if (!tmp)
continue;
for (j = 0; j < uspi->s_fpb; j++) {
bh = get_hash_table (sb->s_dev, tmp + j, uspi->s_fsize);
if ((bh && DATA_BUFFER_USED(bh)) || tmp != SWAB32(*ind)) {
if ((bh && DATA_BUFFER_USED(bh)) || tmp != fs32_to_cpu(sb, *ind)) {
retry = 1;
brelse (bh);
goto next;
}
bforget (bh);
}
*ind = SWAB32(0);
*ind = 0;
ubh_mark_buffer_dirty(ind_ubh);
if (free_count == 0) {
frag_to_free = tmp;
......@@ -268,15 +264,15 @@ next:;
ufs_free_blocks (inode, frag_to_free, free_count);
}
for (i = 0; i < uspi->s_apb; i++)
if (SWAB32(*ubh_get_addr32(ind_ubh,i)))
if (*ubh_get_addr32(ind_ubh,i))
break;
if (i >= uspi->s_apb) {
if (ubh_max_bcount(ind_ubh) != 1) {
retry = 1;
}
else {
tmp = SWAB32(*p);
*p = SWAB32(0);
tmp = fs32_to_cpu(sb, *p);
*p = 0;
inode->i_blocks -= uspi->s_nspb;
mark_inode_dirty(inode);
ufs_free_blocks (inode, tmp, uspi->s_fpb);
......@@ -303,34 +299,32 @@ static int ufs_trunc_dindirect (struct inode * inode, unsigned offset, u32 * p)
unsigned i, tmp, dindirect_block;
u32 * dind;
int retry = 0;
unsigned swab;
UFSD(("ENTER\n"))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
dindirect_block = (DIRECT_BLOCK > offset)
? ((DIRECT_BLOCK - offset) >> uspi->s_apbshift) : 0;
retry = 0;
tmp = SWAB32(*p);
tmp = fs32_to_cpu(sb, *p);
if (!tmp)
return 0;
dind_bh = ubh_bread (inode->i_dev, tmp, uspi->s_bsize);
if (tmp != SWAB32(*p)) {
if (tmp != fs32_to_cpu(sb, *p)) {
ubh_brelse (dind_bh);
return 1;
}
if (!dind_bh) {
*p = SWAB32(0);
*p = 0;
return 0;
}
for (i = dindirect_block ; i < uspi->s_apb ; i++) {
dind = ubh_get_addr32 (dind_bh, i);
tmp = SWAB32(*dind);
tmp = fs32_to_cpu(sb, *dind);
if (!tmp)
continue;
retry |= ufs_trunc_indirect (inode, offset + (i << uspi->s_apbshift), dind);
......@@ -338,14 +332,14 @@ static int ufs_trunc_dindirect (struct inode * inode, unsigned offset, u32 * p)
}
for (i = 0; i < uspi->s_apb; i++)
if (SWAB32(*ubh_get_addr32 (dind_bh, i)))
if (*ubh_get_addr32 (dind_bh, i))
break;
if (i >= uspi->s_apb) {
if (ubh_max_bcount(dind_bh) != 1)
retry = 1;
else {
tmp = SWAB32(*p);
*p = SWAB32(0);
tmp = fs32_to_cpu(sb, *p);
*p = 0;
inode->i_blocks -= uspi->s_nspb;
mark_inode_dirty(inode);
ufs_free_blocks (inode, tmp, uspi->s_fpb);
......@@ -372,27 +366,25 @@ static int ufs_trunc_tindirect (struct inode * inode)
unsigned tindirect_block, tmp, i;
u32 * tind, * p;
int retry;
unsigned swab;
UFSD(("ENTER\n"))
sb = inode->i_sb;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
retry = 0;
tindirect_block = (DIRECT_BLOCK > (UFS_NDADDR + uspi->s_apb + uspi->s_2apb))
? ((DIRECT_BLOCK - UFS_NDADDR - uspi->s_apb - uspi->s_2apb) >> uspi->s_2apbshift) : 0;
p = inode->u.ufs_i.i_u1.i_data + UFS_TIND_BLOCK;
if (!(tmp = SWAB32(*p)))
if (!(tmp = fs32_to_cpu(sb, *p)))
return 0;
tind_bh = ubh_bread (sb->s_dev, tmp, uspi->s_bsize);
if (tmp != SWAB32(*p)) {
if (tmp != fs32_to_cpu(sb, *p)) {
ubh_brelse (tind_bh);
return 1;
}
if (!tind_bh) {
*p = SWAB32(0);
*p = 0;
return 0;
}
......@@ -403,14 +395,14 @@ static int ufs_trunc_tindirect (struct inode * inode)
ubh_mark_buffer_dirty(tind_bh);
}
for (i = 0; i < uspi->s_apb; i++)
if (SWAB32(*ubh_get_addr32 (tind_bh, i)))
if (*ubh_get_addr32 (tind_bh, i))
break;
if (i >= uspi->s_apb) {
if (ubh_max_bcount(tind_bh) != 1)
retry = 1;
else {
tmp = SWAB32(*p);
*p = SWAB32(0);
tmp = fs32_to_cpu(sb, *p);
*p = 0;
inode->i_blocks -= uspi->s_nspb;
mark_inode_dirty(inode);
ufs_free_blocks (inode, tmp, uspi->s_fpb);
......
......@@ -26,180 +26,203 @@
/*
* macros used for accessing structures
*/
#define ufs_get_fs_state(usb1,usb3) _ufs_get_fs_state_(usb1,usb3,flags,swab)
static inline __s32 _ufs_get_fs_state_(struct ufs_super_block_first * usb1,
struct ufs_super_block_third * usb3, unsigned flags, unsigned swab)
static inline s32
ufs_get_fs_state(struct super_block *sb, struct ufs_super_block_first *usb1,
struct ufs_super_block_third *usb3)
{
switch (flags & UFS_ST_MASK) {
case UFS_ST_SUN:
return SWAB32((usb3)->fs_u2.fs_sun.fs_state);
case UFS_ST_SUNx86:
return SWAB32((usb1)->fs_u1.fs_sunx86.fs_state);
case UFS_ST_44BSD:
default:
return SWAB32((usb3)->fs_u2.fs_44.fs_state);
switch (sb->u.ufs_sb.s_flags & UFS_ST_MASK) {
case UFS_ST_SUN:
return fs32_to_cpu(sb, usb3->fs_u2.fs_sun.fs_state);
case UFS_ST_SUNx86:
return fs32_to_cpu(sb, usb1->fs_u1.fs_sunx86.fs_state);
case UFS_ST_44BSD:
default:
return fs32_to_cpu(sb, usb3->fs_u2.fs_44.fs_state);
}
}
#define ufs_set_fs_state(usb1,usb3,value) _ufs_set_fs_state_(usb1,usb3,value,flags,swab)
static inline void _ufs_set_fs_state_(struct ufs_super_block_first * usb1,
struct ufs_super_block_third * usb3, __s32 value, unsigned flags, unsigned swab)
static inline void
ufs_set_fs_state(struct super_block *sb, struct ufs_super_block_first *usb1,
struct ufs_super_block_third *usb3, s32 value)
{
switch (flags & UFS_ST_MASK) {
case UFS_ST_SUN:
(usb3)->fs_u2.fs_sun.fs_state = SWAB32(value);
break;
case UFS_ST_SUNx86:
(usb1)->fs_u1.fs_sunx86.fs_state = SWAB32(value);
break;
case UFS_ST_44BSD:
(usb3)->fs_u2.fs_44.fs_state = SWAB32(value);
break;
switch (sb->u.ufs_sb.s_flags & UFS_ST_MASK) {
case UFS_ST_SUN:
usb3->fs_u2.fs_sun.fs_state = cpu_to_fs32(sb, value);
break;
case UFS_ST_SUNx86:
usb1->fs_u1.fs_sunx86.fs_state = cpu_to_fs32(sb, value);
break;
case UFS_ST_44BSD:
usb3->fs_u2.fs_44.fs_state = cpu_to_fs32(sb, value);
break;
}
}
#define ufs_get_fs_npsect(usb1,usb3) _ufs_get_fs_npsect_(usb1,usb3,flags,swab)
static inline __u32 _ufs_get_fs_npsect_(struct ufs_super_block_first * usb1,
struct ufs_super_block_third * usb3, unsigned flags, unsigned swab)
static inline u32
ufs_get_fs_npsect(struct super_block *sb, struct ufs_super_block_first *usb1,
struct ufs_super_block_third *usb3)
{
if ((flags & UFS_ST_MASK) == UFS_ST_SUNx86)
return SWAB32((usb3)->fs_u2.fs_sunx86.fs_npsect);
if ((sb->u.ufs_sb.s_flags & UFS_ST_MASK) == UFS_ST_SUNx86)
return fs32_to_cpu(sb, usb3->fs_u2.fs_sunx86.fs_npsect);
else
return SWAB32((usb1)->fs_u1.fs_sun.fs_npsect);
return fs32_to_cpu(sb, usb1->fs_u1.fs_sun.fs_npsect);
}
#define ufs_get_fs_qbmask(usb3) _ufs_get_fs_qbmask_(usb3,flags,swab)
static inline __u64 _ufs_get_fs_qbmask_(struct ufs_super_block_third * usb3,
unsigned flags, unsigned swab)
static inline u64
ufs_get_fs_qbmask(struct super_block *sb, struct ufs_super_block_third *usb3)
{
__u64 tmp;
switch (flags & UFS_ST_MASK) {
case UFS_ST_SUN:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sun.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sun.fs_qbmask[1];
break;
case UFS_ST_SUNx86:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sunx86.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sunx86.fs_qbmask[1];
break;
case UFS_ST_44BSD:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_44.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_44.fs_qbmask[1];
break;
u64 tmp;
switch (sb->u.ufs_sb.s_flags & UFS_ST_MASK) {
case UFS_ST_SUN:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sun.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sun.fs_qbmask[1];
break;
case UFS_ST_SUNx86:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sunx86.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sunx86.fs_qbmask[1];
break;
case UFS_ST_44BSD:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_44.fs_qbmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_44.fs_qbmask[1];
break;
}
return SWAB64(tmp);
return fs64_to_cpu(sb, tmp);
}
#define ufs_get_fs_qfmask(usb3) _ufs_get_fs_qfmask_(usb3,flags,swab)
static inline __u64 _ufs_get_fs_qfmask_(struct ufs_super_block_third * usb3,
unsigned flags, unsigned swab)
static inline u64
ufs_get_fs_qfmask(struct super_block *sb, struct ufs_super_block_third *usb3)
{
__u64 tmp;
switch (flags & UFS_ST_MASK) {
case UFS_ST_SUN:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sun.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sun.fs_qfmask[1];
break;
case UFS_ST_SUNx86:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sunx86.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sunx86.fs_qfmask[1];
break;
case UFS_ST_44BSD:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_44.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_44.fs_qfmask[1];
break;
u64 tmp;
switch (sb->u.ufs_sb.s_flags & UFS_ST_MASK) {
case UFS_ST_SUN:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sun.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sun.fs_qfmask[1];
break;
case UFS_ST_SUNx86:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_sunx86.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_sunx86.fs_qfmask[1];
break;
case UFS_ST_44BSD:
((u32 *)&tmp)[0] = usb3->fs_u2.fs_44.fs_qfmask[0];
((u32 *)&tmp)[1] = usb3->fs_u2.fs_44.fs_qfmask[1];
break;
}
return SWAB64(tmp);
return fs64_to_cpu(sb, tmp);
}
#define ufs_get_de_namlen(de) \
(((flags & UFS_DE_MASK) == UFS_DE_OLD) \
? SWAB16(de->d_u.d_namlen) \
: de->d_u.d_44.d_namlen)
static inline u16
ufs_get_de_namlen(struct super_block *sb, struct ufs_dir_entry *de)
{
if ((sb->u.ufs_sb.s_flags & UFS_DE_MASK) == UFS_DE_OLD)
return fs16_to_cpu(sb, de->d_u.d_namlen);
else
return de->d_u.d_44.d_namlen; /* XXX this seems wrong */
}
#define ufs_set_de_namlen(de,value) \
(((flags & UFS_DE_MASK) == UFS_DE_OLD) \
? (de->d_u.d_namlen = SWAB16(value)) \
: (de->d_u.d_44.d_namlen = value))
static inline void
ufs_set_de_namlen(struct super_block *sb, struct ufs_dir_entry *de, u16 value)
{
if ((sb->u.ufs_sb.s_flags & UFS_DE_MASK) == UFS_DE_OLD)
de->d_u.d_namlen = cpu_to_fs16(sb, value);
else
de->d_u.d_44.d_namlen = value; /* XXX this seems wrong */
}
#define ufs_set_de_type(de,mode) _ufs_set_de_type_(de,mode,flags,swab)
static inline void _ufs_set_de_type_(struct ufs_dir_entry * de, int mode,
unsigned flags, unsigned swab)
static inline void
ufs_set_de_type(struct super_block *sb, struct ufs_dir_entry *de, int mode)
{
if ((flags & UFS_DE_MASK) == UFS_DE_44BSD) {
switch (mode & S_IFMT) {
case S_IFSOCK: de->d_u.d_44.d_type = DT_SOCK; break;
case S_IFLNK: de->d_u.d_44.d_type = DT_LNK; break;
case S_IFREG: de->d_u.d_44.d_type = DT_REG; break;
case S_IFBLK: de->d_u.d_44.d_type = DT_BLK; break;
case S_IFDIR: de->d_u.d_44.d_type = DT_DIR; break;
case S_IFCHR: de->d_u.d_44.d_type = DT_CHR; break;
case S_IFIFO: de->d_u.d_44.d_type = DT_FIFO; break;
default: de->d_u.d_44.d_type = DT_UNKNOWN;
}
if ((sb->u.ufs_sb.s_flags & UFS_DE_MASK) != UFS_DE_44BSD)
return;
/*
* TODO turn this into a table lookup
*/
switch (mode & S_IFMT) {
case S_IFSOCK:
de->d_u.d_44.d_type = DT_SOCK;
break;
case S_IFLNK:
de->d_u.d_44.d_type = DT_LNK;
break;
case S_IFREG:
de->d_u.d_44.d_type = DT_REG;
break;
case S_IFBLK:
de->d_u.d_44.d_type = DT_BLK;
break;
case S_IFDIR:
de->d_u.d_44.d_type = DT_DIR;
break;
case S_IFCHR:
de->d_u.d_44.d_type = DT_CHR;
break;
case S_IFIFO:
de->d_u.d_44.d_type = DT_FIFO;
break;
default:
de->d_u.d_44.d_type = DT_UNKNOWN;
}
}
#define ufs_get_inode_uid(inode) _ufs_get_inode_uid_(inode,flags,swab)
static inline __u32 _ufs_get_inode_uid_(struct ufs_inode * inode,
unsigned flags, unsigned swab)
static inline u32
ufs_get_inode_uid(struct super_block *sb, struct ufs_inode *inode)
{
switch (flags & UFS_UID_MASK) {
case UFS_UID_EFT:
return SWAB32(inode->ui_u3.ui_sun.ui_uid);
case UFS_UID_44BSD:
return SWAB32(inode->ui_u3.ui_44.ui_uid);
default:
return SWAB16(inode->ui_u1.oldids.ui_suid);
switch (sb->u.ufs_sb.s_flags & UFS_UID_MASK) {
case UFS_UID_EFT:
return fs32_to_cpu(sb, inode->ui_u3.ui_sun.ui_uid);
case UFS_UID_44BSD:
return fs32_to_cpu(sb, inode->ui_u3.ui_44.ui_uid);
default:
return fs16_to_cpu(sb, inode->ui_u1.oldids.ui_suid);
}
}
#define ufs_set_inode_uid(inode,value) _ufs_set_inode_uid_(inode,value,flags,swab)
static inline void _ufs_set_inode_uid_(struct ufs_inode * inode, __u32 value,
unsigned flags, unsigned swab)
static inline void
ufs_set_inode_uid(struct super_block *sb, struct ufs_inode *inode, u32 value)
{
inode->ui_u1.oldids.ui_suid = SWAB16(value);
switch (flags & UFS_UID_MASK) {
case UFS_UID_EFT:
inode->ui_u3.ui_sun.ui_uid = SWAB32(value);
break;
case UFS_UID_44BSD:
inode->ui_u3.ui_44.ui_uid = SWAB32(value);
break;
switch (sb->u.ufs_sb.s_flags & UFS_UID_MASK) {
case UFS_UID_EFT:
inode->ui_u3.ui_sun.ui_uid = cpu_to_fs32(sb, value);
break;
case UFS_UID_44BSD:
inode->ui_u3.ui_44.ui_uid = cpu_to_fs32(sb, value);
break;
}
inode->ui_u1.oldids.ui_suid = cpu_to_fs16(sb, value);
}
#define ufs_get_inode_gid(inode) _ufs_get_inode_gid_(inode,flags,swab)
static inline __u32 _ufs_get_inode_gid_(struct ufs_inode * inode,
unsigned flags, unsigned swab)
static inline u32
ufs_get_inode_gid(struct super_block *sb, struct ufs_inode *inode)
{
switch (flags & UFS_UID_MASK) {
case UFS_UID_EFT:
return SWAB32(inode->ui_u3.ui_sun.ui_gid);
case UFS_UID_44BSD:
return SWAB32(inode->ui_u3.ui_44.ui_gid);
default:
return SWAB16(inode->ui_u1.oldids.ui_sgid);
switch (sb->u.ufs_sb.s_flags & UFS_UID_MASK) {
case UFS_UID_EFT:
return fs32_to_cpu(sb, inode->ui_u3.ui_sun.ui_gid);
case UFS_UID_44BSD:
return fs32_to_cpu(sb, inode->ui_u3.ui_44.ui_gid);
default:
return fs16_to_cpu(sb, inode->ui_u1.oldids.ui_sgid);
}
}
#define ufs_set_inode_gid(inode,value) _ufs_set_inode_gid_(inode,value,flags,swab)
static inline void _ufs_set_inode_gid_(struct ufs_inode * inode, __u32 value,
unsigned flags, unsigned swab)
static inline void
ufs_set_inode_gid(struct super_block *sb, struct ufs_inode *inode, u32 value)
{
inode->ui_u1.oldids.ui_sgid = SWAB16(value);
switch (flags & UFS_UID_MASK) {
case UFS_UID_EFT:
inode->ui_u3.ui_sun.ui_gid = SWAB32(value);
break;
case UFS_UID_44BSD:
inode->ui_u3.ui_44.ui_gid = SWAB32(value);
break;
switch (sb->u.ufs_sb.s_flags & UFS_UID_MASK) {
case UFS_UID_EFT:
inode->ui_u3.ui_sun.ui_gid = cpu_to_fs32(sb, value);
break;
case UFS_UID_44BSD:
inode->ui_u3.ui_44.ui_gid = cpu_to_fs32(sb, value);
break;
}
inode->ui_u1.oldids.ui_sgid = cpu_to_fs16(sb, value);
}
/*
* These functions manipulate ufs buffers
*/
......@@ -284,8 +307,8 @@ extern void _ubh_memcpyubh_(struct ufs_sb_private_info *, struct ufs_buffer_head
* percentage to hold in reserve.
*/
#define ufs_freespace(usb, percentreserved) \
(ufs_blkstofrags(SWAB32((usb)->fs_cstotal.cs_nbfree)) + \
SWAB32((usb)->fs_cstotal.cs_nffree) - (uspi->s_dsize * (percentreserved) / 100))
(ufs_blkstofrags(fs32_to_cpu(sb, (usb)->fs_cstotal.cs_nbfree)) + \
fs32_to_cpu(sb, (usb)->fs_cstotal.cs_nffree) - (uspi->s_dsize * (percentreserved) / 100))
/*
* Macros to access cylinder group array structures
......@@ -456,9 +479,7 @@ static inline void ufs_fragacct (struct super_block * sb, unsigned blockmap,
{
struct ufs_sb_private_info * uspi;
unsigned fragsize, pos;
unsigned swab;
swab = sb->u.ufs_sb.s_swab;
uspi = sb->u.ufs_sb.s_uspi;
fragsize = 0;
......@@ -467,12 +488,12 @@ static inline void ufs_fragacct (struct super_block * sb, unsigned blockmap,
fragsize++;
}
else if (fragsize > 0) {
ADD_SWAB32(fraglist[fragsize], cnt);
fs32_add(sb, &fraglist[fragsize], cnt);
fragsize = 0;
}
}
if (fragsize > 0 && fragsize < uspi->s_fpb)
ADD_SWAB32(fraglist[fragsize], cnt);
fs32_add(sb, &fraglist[fragsize], cnt);
}
#define ubh_scanc(ubh,begin,size,table,mask) _ubh_scanc_(uspi,ubh,begin,size,table,mask)
......
......@@ -127,7 +127,4 @@ extern size_t strlen(const char *);
/* Don't build bcopy at all ... */
#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMSCAN
#define memscan memchr
#endif /* __ASM_SH_STRING_H */
......@@ -36,8 +36,9 @@ struct ethtool_drvinfo {
char bus_info[ETHTOOL_BUSINFO_LEN]; /* Bus info for this IF. */
/* For PCI devices, use pci_dev->slot_name. */
char reserved1[32];
char reserved2[28];
u32 regdump_len; /* Amount of data from ETHTOOL_GREGS (u32s) */
char reserved2[24];
u32 eedump_len; /* Size of data from ETHTOOL_GEEPROM (bytes) */
u32 regdump_len; /* Size of data from ETHTOOL_GREGS (bytes) */
};
#define SOPASS_MAX 6
......@@ -59,10 +60,18 @@ struct ethtool_value {
struct ethtool_regs {
u32 cmd;
u32 version; /* driver-specific, indicates different chips/revs */
u32 len; /* in u32 increments */
u32 data[0];
u32 len; /* bytes */
u8 data[0];
};
/* for passing EEPROM chunks */
struct ethtool_eeprom {
u32 cmd;
u32 magic;
u32 offset; /* in bytes */
u32 len; /* in bytes */
u8 data[0];
};
/* CMDs currently supported */
#define ETHTOOL_GSET 0x00000001 /* Get settings. */
#define ETHTOOL_SSET 0x00000002 /* Set settings, privileged. */
......@@ -74,6 +83,8 @@ struct ethtool_regs {
#define ETHTOOL_SMSGLVL 0x00000008 /* Set driver msg level, priv. */
#define ETHTOOL_NWAY_RST 0x00000009 /* Restart autonegotiation, priv. */
#define ETHTOOL_GLINK 0x0000000a /* Get link status */
#define ETHTOOL_GEEPROM 0x0000000b /* Get EEPROM data */
#define ETHTOOL_SEEPROM 0x0000000c /* Set EEPROM data */
/* compatibility with older code */
#define SPARC_ETH_GSET ETHTOOL_GSET
......
......@@ -1386,7 +1386,6 @@ extern int block_sync_page(struct page *);
int generic_block_bmap(struct address_space *, long, get_block_t *);
int generic_commit_write(struct file *, struct page *, unsigned, unsigned);
int block_truncate_page(struct address_space *, loff_t, get_block_t *);
extern void create_empty_buffers(struct page *, kdev_t, unsigned long);
extern int waitfor_one_page(struct page*);
extern int generic_file_mmap(struct file *, struct vm_area_struct *);
......
......@@ -96,17 +96,6 @@
#define UFS_FSBAD ((char)0xff)
/* From here to next blank line, s_flags for ufs_sb_info */
/* endianness */
#define UFS_BYTESEX 0x00000001 /* mask; leave room to 0xF */
#if defined(__LITTLE_ENDIAN) || defined(__BIG_ENDIAN)
/* these are for sane architectures */
#define UFS_NATIVE_ENDIAN 0x00000000
#define UFS_SWABBED_ENDIAN 0x00000001
#else
/* these are for pervert architectures */
#define UFS_LITTLE_ENDIAN 0x00000000
#define UFS_BIG_ENDIAN 0x00000001
#endif
/* directory entry encoding */
#define UFS_DE_MASK 0x00000010 /* mask for the following */
#define UFS_DE_OLD 0x00000000
......@@ -417,7 +406,8 @@ struct ufs_super_block {
* super block lock fs->fs_lock.
*/
#define CG_MAGIC 0x090255
#define ufs_cg_chkmagic(ucg) (SWAB32((ucg)->cg_magic) == CG_MAGIC)
#define ufs_cg_chkmagic(sb, ucg) \
(fs32_to_cpu((sb), (ucg)->cg_magic) == CG_MAGIC)
/*
* size of this structure is 172 B
......
......@@ -18,7 +18,6 @@ struct ufs_inode_info {
__u32 i_data[15];
__u8 i_symlink[4*15];
} i_u1;
__u64 i_size;
__u32 i_flags;
__u32 i_gen;
__u32 i_shadow;
......
......@@ -118,7 +118,7 @@ struct ufs_sb_private_info {
struct ufs_sb_info {
struct ufs_sb_private_info * s_uspi;
struct ufs_csum * s_csp[UFS_MAXCSBUFS];
unsigned s_swab;
unsigned s_bytesex;
unsigned s_flags;
struct buffer_head ** s_ucg;
struct ufs_cg_private_info * s_ucpi[UFS_MAX_GROUP_LOADED];
......
......@@ -27,7 +27,7 @@ struct list_head active_list;
pg_data_t *pgdat_list;
static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" };
static int zone_balance_ratio[MAX_NR_ZONES] __initdata = { 32, 128, 128, };
static int zone_balance_ratio[MAX_NR_ZONES] __initdata = { 128, 128, 128, };
static int zone_balance_min[MAX_NR_ZONES] __initdata = { 20 , 20, 20, };
static int zone_balance_max[MAX_NR_ZONES] __initdata = { 255 , 255, 255, };
......@@ -299,29 +299,26 @@ static struct page * balance_classzone(zone_t * classzone, unsigned int gfp_mask
return page;
}
static inline unsigned long zone_free_pages(zone_t * zone, unsigned int order)
{
long free = zone->free_pages - (1UL << order);
return free >= 0 ? free : 0;
}
/*
* This is the 'heart' of the zoned buddy allocator:
*/
struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_t *zonelist)
{
unsigned long min;
zone_t **zone, * classzone;
struct page * page;
int freed;
zone = zonelist->zones;
classzone = *zone;
min = 1UL << order;
for (;;) {
zone_t *z = *(zone++);
if (!z)
break;
if (zone_free_pages(z, order) > z->pages_low) {
min += z->pages_low;
if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
......@@ -334,16 +331,18 @@ struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_
wake_up_interruptible(&kswapd_wait);
zone = zonelist->zones;
min = 1UL << order;
for (;;) {
unsigned long min;
unsigned long local_min;
zone_t *z = *(zone++);
if (!z)
break;
min = z->pages_min;
local_min = z->pages_min;
if (!(gfp_mask & __GFP_WAIT))
min >>= 2;
if (zone_free_pages(z, order) > min) {
local_min >>= 2;
min += local_min;
if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
......@@ -376,12 +375,14 @@ struct page * __alloc_pages(unsigned int gfp_mask, unsigned int order, zonelist_
return page;
zone = zonelist->zones;
min = 1UL << order;
for (;;) {
zone_t *z = *(zone++);
if (!z)
break;
if (zone_free_pages(z, order) > z->pages_min) {
min += z->pages_min;
if (z->free_pages > min) {
page = rmqueue(z, order);
if (page)
return page;
......
......@@ -41,7 +41,6 @@ static int rw_swap_page_base(int rw, swp_entry_t entry, struct page *page)
kdev_t dev = 0;
int block_size;
struct inode *swapf = 0;
int wait = 0;
if (rw == READ) {
ClearPageUptodate(page);
......@@ -78,14 +77,6 @@ static int rw_swap_page_base(int rw, swp_entry_t entry, struct page *page)
* decrementing the page count, and unlocking the page in the
* swap lock map - in the IO completion handler.
*/
if (!wait)
return 1;
wait_on_page(page);
/* This shouldn't happen, but check to be sure. */
if (page_count(page) == 0)
printk(KERN_ERR "rw_swap_page: page unused while waiting!\n");
return 1;
}
......