Commit fe097651 authored by Linus Torvalds

v2.5.0.10 -> v2.5.0.11

- Jeff Garzik: drop support for the old 21040/21041 chips in the tulip driver
(see the separate driver for old tulip chips)
- Pat Mochel: driverfs/device model documentation
- Ballabio Dario: update eata driver to new IO locking
- Ingo Molnar: raid resync with new bio structures (much more efficient)
and mempool_resize()
- Jens Axboe: bio queue locking (the driver-visible change is sketched below)
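For orientation, a minimal sketch of what the bio queue locking change looks
like from a block driver's point of view, based on the conversions in this
diff: blk_init_queue() now takes the spinlock the queue should use, and
q->queue_lock becomes a pointer to that lock. The mydrv names are hypothetical.

        /* Sketch only; mydrv_* names are hypothetical. */
        static spinlock_t mydrv_lock;

        static void do_mydrv_request(request_queue_t *q)
        {
                /* entered with q->queue_lock held; it now points at mydrv_lock */
                spin_unlock_irq(q->queue_lock); /* a pointer now, not &q->queue_lock */
                /* ... talk to the hardware ... */
                spin_lock_irq(q->queue_lock);
        }

        static int __init mydrv_init(void)
        {
                request_queue_t *q = BLK_DEFAULT_QUEUE(MAJOR_NR);

                /* 2.5.0.10: blk_init_queue(q, do_mydrv_request);
                 * 2.5.0.11: the driver supplies the queue lock itself.
                 */
                blk_init_queue(q, do_mydrv_request, &mydrv_lock);
                return 0;
        }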
parent 80044607
driverfs - The Device Driver Filesystem
Patrick Mochel <mochel@osdl.org>
3 December 2001
What it is:
~~~~~~~~~~~
driverfs is a unified means for device drivers to export interfaces to
userspace.
Some drivers have a need for exporting interfaces for things like
setting device-specific parameters, or tuning the device performance.
For example, wireless networking cards export a file in procfs to set
their SSID.
Other times, the bus on which a device resides may export other
information about the device. For example, PCI and USB both export
device information via procfs or usbdevfs.
In these cases, the files or directories are in nearly random places
in /proc. One benefit of driverfs is that it can consolidate all of
these interfaces to one standard location.
Why it's better than procfs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This of course can't happen without changing every single driver that
exports a procfs interface, and having some coordination between all
of them as to what the proper place for their files is. Or can it?
driverfs was developed in conjunction with the new driver model for
the 2.5 kernel. In that model, the system has one unified tree of all
the devices that are present in the system. It follows naturally that
this tree can be exported to userspace in the same order.
So, every bus and every device gets a directory in the filesystem.
This directory is created when the device is registered in the tree;
before the driver is actually initialised. The dentry for this
directory is stored in the struct device, so the driver has access
to it.
Now, every driver has one standard place to export its files.
Granted, the location of the file is not as intuitive as it may have
been under procfs. But, I argue that with the exception of
/proc/bus/pci, none of the files had intuitive locations. I also argue
that the development of userspace tools can help cope with these
changes and inconsistencies in locations.
Why we're not just using procfs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When developing the new driver model, it was initially implemented
with a procfs tree. In explaining the concept to Linus, he said "Don't
use proc."
I was a little shocked (especially considering I had already
implemented it using procfs). "What do you mean 'don't use proc'?"
His argument was that too many things use proc that shouldn't, and
even more misuse it. On top of that, procfs was written before the
VFS layer, so it doesn't use the dcache. It reimplements many of the
same features that the dcache provides, and is, in general, crufty.
So, he told me to write my own. Soon after, he pointed me at ramfs,
the simplest filesystem known to man.
Consequently, we have a virtual filesystem based heavily on ramfs, and
borrowing some conceptual functionality from procfs.
It may suck, but it does what it was designed to. At least so far.
How it works:
~~~~~~~~~~~~~
Directories are encapsulated like this:
struct driver_dir_entry {
        char * name;
        struct dentry * dentry;
        mode_t mode;
        struct list_head files;
};
name:
        Name of the directory.
dentry:
        Dentry for the directory.
mode:
        Permissions of the directory.
files:
        Linked list of driver_file_entry's that are in the directory.
To create a directory, one first calls
struct driver_dir_entry *
driverfs_create_dir_entry(const char * name, mode_t mode);
which allocates and initialises a struct driver_dir_entry. Then to actually
create the directory:
int driverfs_create_dir(struct driver_dir_entry *, struct driver_dir_entry *);
To remove a directory:
void driverfs_remove_dir(struct driver_dir_entry * entry);
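For illustration, a hedged sketch of the create/remove sequence. The argument
order of driverfs_create_dir() is not spelled out above, so this assumes
(new entry, parent); the mydev name is hypothetical and error handling is
minimal.

        /* Sketch under the assumptions stated above. */
        static struct driver_dir_entry * mydev_dir;

        static int mydev_add_dir(struct driver_dir_entry * parent)
        {
                /* mode: assuming the caller supplies the S_IFDIR bit */
                mydev_dir = driverfs_create_dir_entry("mydev", S_IFDIR | S_IRWXU);
                if (!mydev_dir)
                        return -ENOMEM;
                /* assumed argument order: (new entry, parent directory) */
                return driverfs_create_dir(mydev_dir, parent);
        }

        static void mydev_del_dir(void)
        {
                driverfs_remove_dir(mydev_dir);
        }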
Files are encapsulated like this:
struct driver_file_entry {
        struct driver_dir_entry * parent;
        struct list_head node;
        char * name;
        mode_t mode;
        struct dentry * dentry;
        void * data;
        struct driverfs_operations * ops;
};

struct driverfs_operations {
        ssize_t (*read) (char *, size_t, loff_t, void *);
        ssize_t (*write)(const char *, size_t, loff_t, void *);
};
parent:
        The directory in which the file lives.
node:
        Node in its parent directory's list of files.
name:
        The name of the file.
mode:
        Permissions of the file.
dentry:
        The dentry for the file.
data:
        Caller-specific data that is passed to the callbacks when they
        are called.
ops:
        Operations for the file. Currently, this only contains read() and
        write() callbacks for the file.
To create a file, one first calls
struct driver_file_entry *
driverfs_create_entry (const char * name, mode_t mode,
struct driverfs_operations * ops, void * data);
That allocates and initialises a struct driver_file_entry. Then, to actually
create a file, one calls
int driverfs_create_file(struct driver_file_entry * entry,
struct driver_dir_entry * parent);
To remove a file, one calls
void driverfs_remove_file(struct driver_dir_entry *, const char * name);
The callback functionality is similar to the way procfs works. When a
user performs a read(2) or write(2) on the file, it first calls a
driverfs function. This function then checks for a non-NULL pointer in
the file->private_data field, which it assumes to be a pointer to a
struct driver_file_entry.
It then checks for the appropriate callback and calls it.
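To make this concrete, here is a hedged sketch of exporting a read-only file.
The callback argument order (buffer, count, offset, caller data) follows the
driverfs_operations prototype above; whether the buffer is a kernel or user
pointer is not specified here, so this sketch assumes a kernel buffer holding
at most a page (see Limitations below). All mydev names are hypothetical.

        static ssize_t mydev_read(char * buf, size_t count, loff_t off, void * data)
        {
                char * s = (char *) data;       /* the data passed at creation */
                size_t len = strlen(s);

                if (off >= len)
                        return 0;
                if (count > len - off)
                        count = len - off;
                memcpy(buf, s + off, count);
                return count;
        }

        static struct driverfs_operations mydev_ops = {
                read:   mydev_read,
        };

        static int mydev_add_file(struct driver_dir_entry * dir)
        {
                struct driver_file_entry * entry;

                entry = driverfs_create_entry("status", S_IFREG | S_IRUGO,
                                              &mydev_ops, "mydev: ok\n");
                if (!entry)
                        return -ENOMEM;
                return driverfs_create_file(entry, dir);
        }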
What driverfs is not:
~~~~~~~~~~~~~~~~~~~~~
It is not a replacement for either devfs or procfs.
It does not handle device nodes, as devfs is intended to do. I think
that functionality is possible, and I do think that device nodes and
control files should be integrated. Whether driverfs or devfs, or
something else, is the place to do it, I don't know.
It is not intended to be a replacement for all of the procfs
functionality. I think that many of the driver files should be moved
out of /proc (and maybe a few other things as well ;).
Limitations:
~~~~~~~~~~~~
The driverfs functions assume that at most a page is being either read
or written each time.
Possible bugs:
~~~~~~~~~~~~~~
It may not deal with offsets and/or seeks very well, especially if
they cross a page boundary.
There may be locking issues when dynamically adding/removing files and
directories rapidly (like if you have a hot plug device).
There are some people that believe that filesystems which add
files/directories dynamically based on the presence of devices are
inherently flawed. Though not as technically versed in this area as
some of those people, I like to believe that they can be made to work,
with the right guidance.
VERSION = 2
PATCHLEVEL = 5
SUBLEVEL = 1
EXTRAVERSION =-pre10
EXTRAVERSION =-pre11
KERNELRELEASE=$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
@@ -9,11 +9,3 @@ void * __io_virt_debug(unsigned long x, const char *file, int line)
return (void *)x;
}
unsigned long __io_phys_debug(unsigned long x, const char *file, int line)
{
if (x < PAGE_OFFSET) {
printk("io mapaddr 0x%05lx not valid at %s:%d!\n", x, file, line);
return x;
}
return __pa(x);
}
@@ -1237,7 +1237,7 @@ static void do_cciss_request(request_queue_t *q)
blkdev_dequeue_request(creq);
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
c->cmd_type = CMD_RWREQ;
c->rq = creq;
@@ -1298,7 +1298,7 @@ static void do_cciss_request(request_queue_t *q)
c->Request.CDB[8]= creq->nr_sectors & 0xff;
c->Request.CDB[9] = c->Request.CDB[11] = c->Request.CDB[12] = 0;
spin_lock_irq(&q->queue_lock);
spin_lock_irq(q->queue_lock);
addQ(&(h->reqQ),c);
h->Qdepth++;
@@ -1866,7 +1866,7 @@ static int __init cciss_init_one(struct pci_dev *pdev,
q = BLK_DEFAULT_QUEUE(MAJOR_NR + i);
q->queuedata = hba[i];
blk_init_queue(q, do_cciss_request);
blk_init_queue(q, do_cciss_request, &hba[i]->lock);
blk_queue_bounce_limit(q, hba[i]->pdev->dma_mask);
blk_queue_max_segments(q, MAXSGENTRIES);
blk_queue_max_sectors(q, 512);
@@ -66,6 +66,7 @@ struct ctlr_info
unsigned int Qdepth;
unsigned int maxQsinceinit;
unsigned int maxSG;
spinlock_t lock;
//* pointers to command and error info pool */
CommandList_struct *cmd_pool;
@@ -242,7 +243,7 @@ struct board_type {
struct access_method *access;
};
#define CCISS_LOCK(i) (&((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock))
#define CCISS_LOCK(i) ((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)
#endif /* CCISS_H */
@@ -467,7 +467,7 @@ int __init cpqarray_init(void)
q = BLK_DEFAULT_QUEUE(MAJOR_NR + i);
q->queuedata = hba[i];
blk_init_queue(q, do_ida_request);
blk_init_queue(q, do_ida_request, &hba[i]->lock);
blk_queue_bounce_limit(q, hba[i]->pci_dev->dma_mask);
blk_queue_max_segments(q, SG_MAX);
blksize_size[MAJOR_NR+i] = ida_blocksizes + (i*256);
@@ -882,7 +882,7 @@ static void do_ida_request(request_queue_t *q)
blkdev_dequeue_request(creq);
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
c->ctlr = h->ctlr;
c->hdr.unit = MINOR(creq->rq_dev) >> NWD_SHIFT;
@@ -915,7 +915,7 @@ DBGPX( printk("Submitting %d sectors in %d segments\n", creq->nr_sectors, seg);
c->req.hdr.cmd = (rq_data_dir(creq) == READ) ? IDA_READ : IDA_WRITE;
c->type = CMD_RWREQ;
spin_lock_irq(&q->queue_lock);
spin_lock_irq(q->queue_lock);
/* Put the request on the tail of the request queue */
addQ(&h->reqQ, c);
@@ -106,6 +106,7 @@ struct ctlr_info {
cmdlist_t *cmd_pool;
dma_addr_t cmd_pool_dhandle;
__u32 *cmd_pool_bits;
spinlock_t lock;
unsigned int Qdepth;
unsigned int maxQsinceinit;
@@ -117,7 +118,7 @@ struct ctlr_info {
unsigned int misc_tflags;
};
#define IDA_LOCK(i) (&((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock))
#define IDA_LOCK(i) ((BLK_DEFAULT_QUEUE(MAJOR_NR + i))->queue_lock)
#endif
@@ -204,6 +204,8 @@ static int use_virtual_dma;
* record each buffers capabilities
*/
static spinlock_t floppy_lock;
static unsigned short virtual_dma_port=0x3f0;
void floppy_interrupt(int irq, void *dev_id, struct pt_regs * regs);
static int set_dor(int fdc, char mask, char data);
@@ -2296,7 +2298,7 @@ static void request_done(int uptodate)
DRS->maxtrack = 1;
/* unlock chained buffers */
spin_lock_irqsave(&QUEUE->queue_lock, flags);
spin_lock_irqsave(QUEUE->queue_lock, flags);
while (current_count_sectors && !QUEUE_EMPTY &&
current_count_sectors >= CURRENT->current_nr_sectors){
current_count_sectors -= CURRENT->current_nr_sectors;
@@ -2304,7 +2306,7 @@ static void request_done(int uptodate)
CURRENT->sector += CURRENT->current_nr_sectors;
end_request(1);
}
spin_unlock_irqrestore(&QUEUE->queue_lock, flags);
spin_unlock_irqrestore(QUEUE->queue_lock, flags);
if (current_count_sectors && !QUEUE_EMPTY){
/* "unlock" last subsector */
@@ -2329,9 +2331,9 @@ static void request_done(int uptodate)
DRWE->last_error_sector = CURRENT->sector;
DRWE->last_error_generation = DRS->generation;
}
spin_lock_irqsave(&QUEUE->queue_lock, flags);
spin_lock_irqsave(QUEUE->queue_lock, flags);
end_request(0);
spin_unlock_irqrestore(&QUEUE->queue_lock, flags);
spin_unlock_irqrestore(QUEUE->queue_lock, flags);
}
}
@@ -2433,17 +2435,20 @@ static void rw_interrupt(void)
static int buffer_chain_size(void)
{
struct bio *bio;
int size;
struct bio_vec *bv;
int size, i;
char *base;
base = CURRENT->buffer;
base = bio_data(CURRENT->bio);
size = 0;
rq_for_each_bio(bio, CURRENT) {
if (bio_data(bio) != base + size)
break;
bio_for_each_segment(bv, bio, i) {
if (page_address(bv->bv_page) + bv->bv_offset != base + size)
break;
size += bio->bi_size;
size += bv->bv_len;
}
}
return size >> 9;
@@ -2469,9 +2474,10 @@ static int transfer_size(int ssize, int max_sector, int max_size)
static void copy_buffer(int ssize, int max_sector, int max_sector_2)
{
int remaining; /* number of transferred 512-byte sectors */
struct bio_vec *bv;
struct bio *bio;
char *buffer, *dma_buffer;
int size;
int size, i;
max_sector = transfer_size(ssize,
minimum(max_sector, max_sector_2),
@@ -2501,12 +2507,17 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
dma_buffer = floppy_track_buffer + ((fsector_t - buffer_min) << 9);
bio = CURRENT->bio;
size = CURRENT->current_nr_sectors << 9;
buffer = CURRENT->buffer;
while (remaining > 0){
SUPBOUND(size, remaining);
rq_for_each_bio(bio, CURRENT) {
bio_for_each_segment(bv, bio, i) {
if (!remaining)
break;
size = bv->bv_len;
SUPBOUND(size, remaining);
buffer = page_address(bv->bv_page) + bv->bv_offset;
#ifdef FLOPPY_SANITY_CHECK
if (dma_buffer + size >
floppy_track_buffer + (max_buffer_sectors << 10) ||
@@ -2526,24 +2537,14 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
if (((unsigned long)buffer) % 512)
DPRINT("%p buffer not aligned\n", buffer);
#endif
if (CT(COMMAND) == FD_READ)
memcpy(buffer, dma_buffer, size);
else
memcpy(dma_buffer, buffer, size);
remaining -= size;
if (!remaining)
break;
if (CT(COMMAND) == FD_READ)
memcpy(buffer, dma_buffer, size);
else
memcpy(dma_buffer, buffer, size);
dma_buffer += size;
bio = bio->bi_next;
#ifdef FLOPPY_SANITY_CHECK
if (!bio){
DPRINT("bh=null in copy buffer after copy\n");
break;
remaining -= size;
dma_buffer += size;
}
#endif
size = bio->bi_size;
buffer = bio_data(bio);
}
#ifdef FLOPPY_SANITY_CHECK
if (remaining){
@@ -4169,7 +4170,7 @@ int __init floppy_init(void)
blk_size[MAJOR_NR] = floppy_sizes;
blksize_size[MAJOR_NR] = floppy_blocksizes;
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST);
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST, &floppy_lock);
reschedule_timeout(MAXTIMEOUT, "floppy init", MAXTIMEOUT);
config_types();
@@ -4477,6 +4478,7 @@ MODULE_LICENSE("GPL");
#else
__setup ("floppy=", floppy_setup);
module_init(floppy_init)
/* eject the boot floppy (if we need the drive for a different root floppy) */
/* This should only be called at boot time when we're sure that there's no
@@ -254,6 +254,12 @@ void blk_queue_segment_boundary(request_queue_t *q, unsigned long mask)
q->seg_boundary_mask = mask;
}
void blk_queue_assign_lock(request_queue_t *q, spinlock_t *lock)
{
spin_lock_init(lock);
q->queue_lock = lock;
}
static char *rq_flags[] = { "REQ_RW", "REQ_RW_AHEAD", "REQ_BARRIER",
"REQ_CMD", "REQ_NOMERGE", "REQ_STARTED",
"REQ_DONTPREP", "REQ_DRIVE_CMD", "REQ_DRIVE_TASK",
@@ -536,9 +542,9 @@ void generic_unplug_device(void *data)
request_queue_t *q = (request_queue_t *) data;
unsigned long flags;
spin_lock_irqsave(&q->queue_lock, flags);
spin_lock_irqsave(q->queue_lock, flags);
__generic_unplug_device(q);
spin_unlock_irqrestore(&q->queue_lock, flags);
spin_unlock_irqrestore(q->queue_lock, flags);
}
static int __blk_cleanup_queue(struct request_list *list)
@@ -624,7 +630,6 @@ static int blk_init_free_list(request_queue_t *q)
init_waitqueue_head(&q->rq[READ].wait);
init_waitqueue_head(&q->rq[WRITE].wait);
spin_lock_init(&q->queue_lock);
return 0;
nomem:
blk_cleanup_queue(q);
@@ -661,7 +666,7 @@ static int __make_request(request_queue_t *, struct bio *);
* blk_init_queue() must be paired with a blk_cleanup_queue() call
* when the block device is deactivated (such as at module unload).
**/
int blk_init_queue(request_queue_t *q, request_fn_proc *rfn)
int blk_init_queue(request_queue_t *q, request_fn_proc *rfn, spinlock_t *lock)
{
int ret;
@@ -682,6 +687,7 @@ int blk_init_queue(request_queue_t *q, request_fn_proc *rfn)
q->plug_tq.routine = &generic_unplug_device;
q->plug_tq.data = q;
q->queue_flags = (1 << QUEUE_FLAG_CLUSTER);
q->queue_lock = lock;
/*
* by default assume old behaviour and bounce for any highmem page
@@ -728,7 +734,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw)
struct request_list *rl = &q->rq[rw];
struct request *rq;
spin_lock_prefetch(&q->queue_lock);
spin_lock_prefetch(q->queue_lock);
generic_unplug_device(q);
add_wait_queue(&rl->wait, &wait);
@@ -736,9 +742,9 @@ static struct request *get_request_wait(request_queue_t *q, int rw)
set_current_state(TASK_UNINTERRUPTIBLE);
if (rl->count < batch_requests)
schedule();
spin_lock_irq(&q->queue_lock);
spin_lock_irq(q->queue_lock);
rq = get_request(q, rw);
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
} while (rq == NULL);
remove_wait_queue(&rl->wait, &wait);
current->state = TASK_RUNNING;
@@ -949,9 +955,9 @@ void blk_attempt_remerge(request_queue_t *q, struct request *rq)
{
unsigned long flags;
spin_lock_irqsave(&q->queue_lock, flags);
spin_lock_irqsave(q->queue_lock, flags);
__blk_attempt_remerge(q, rq);
spin_unlock_irqrestore(&q->queue_lock, flags);
spin_unlock_irqrestore(q->queue_lock, flags);
}
static int __make_request(request_queue_t *q, struct bio *bio)
@@ -974,7 +980,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
*/
blk_queue_bounce(q, &bio);
spin_lock_prefetch(&q->queue_lock);
spin_lock_prefetch(q->queue_lock);
latency = elevator_request_latency(elevator, rw);
barrier = test_bit(BIO_RW_BARRIER, &bio->bi_rw);
@@ -983,7 +989,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
req = NULL;
head = &q->queue_head;
spin_lock_irq(&q->queue_lock);
spin_lock_irq(q->queue_lock);
insert_here = head->prev;
if (blk_queue_empty(q) || barrier) {
@@ -1066,7 +1072,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
freereq = NULL;
} else if ((req = get_request(q, rw)) == NULL) {
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
/*
* READA bit set
@@ -1111,7 +1117,7 @@ static int __make_request(request_queue_t *q, struct bio *bio)
out:
if (freereq)
blkdev_release_request(freereq);
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
return 0;
end_io:
@@ -1608,3 +1614,4 @@ EXPORT_SYMBOL(blk_nohighio);
EXPORT_SYMBOL(blk_dump_rq_flags);
EXPORT_SYMBOL(submit_bio);
EXPORT_SYMBOL(blk_contig_segment);
EXPORT_SYMBOL(blk_queue_assign_lock);
@@ -62,6 +62,8 @@ static u64 nbd_bytesizes[MAX_NBD];
static struct nbd_device nbd_dev[MAX_NBD];
static devfs_handle_t devfs_handle;
static spinlock_t nbd_lock;
#define DEBUG( s )
/* #define DEBUG( s ) printk( s )
*/
@@ -347,22 +349,22 @@ static void do_nbd_request(request_queue_t * q)
#endif
req->errors = 0;
blkdev_dequeue_request(req);
spin_unlock_irq(&q->queue_lock);
spin_unlock_irq(q->queue_lock);
down (&lo->queue_lock);
list_add(&req->queuelist, &lo->queue_head);
nbd_send_req(lo->sock, req); /* Why does this block? */
up (&lo->queue_lock);
spin_lock_irq(&q->queue_lock);
spin_lock_irq(q->queue_lock);
continue;
error_out:
req->errors++;
blkdev_dequeue_request(req);
spin_unlock(&q->queue_lock);
spin_unlock(q->queue_lock);
nbd_end_request(req);
spin_lock(&q->queue_lock);
spin_lock(q->queue_lock);
}
return;
}
@@ -515,7 +517,7 @@ static int __init nbd_init(void)
#endif
blksize_size[MAJOR_NR] = nbd_blksizes;
blk_size[MAJOR_NR] = nbd_sizes;
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), do_nbd_request);
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), do_nbd_request, &nbd_lock);
for (i = 0; i < MAX_NBD; i++) {
nbd_dev[i].refcnt = 0;
nbd_dev[i].file = NULL;
@@ -146,6 +146,8 @@ static int pcd_drive_count;
#include <asm/uaccess.h>
static spinlock_t pcd_lock;
#ifndef MODULE
#include "setup.h"
@@ -355,7 +357,7 @@ int pcd_init (void) /* preliminary initialisation */
}
}
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST);
blk_init_queue(BLK_DEFAULT_QUEUE(MAJOR_NR), DEVICE_REQUEST, &pcd_lock);
read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */
for (i=0;i<PCD_UNITS;i++) pcd_blocksizes[i] = 1024;
@@ -821,11 +823,11 @@ static void pcd_start( void )
if (pcd_command(unit,rd_cmd,2048,"read block")) {
pcd_bufblk = -1;
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pcd_lock,saved_flags);
pcd_busy = 0;
end_request(0);
do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pcd_lock,saved_flags);
return;
}
@@ -845,11 +847,11 @@ static void do_pcd_read( void )
pcd_retries = 0;
pcd_transfer();
if (!pcd_count) {
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pcd_lock,saved_flags);
end_request(1);
pcd_busy = 0;
do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pcd_lock,saved_flags);
return;
}
@@ -868,19 +870,19 @@ static void do_pcd_read_drq( void )
pi_do_claimed(PI,pcd_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pcd_lock,saved_flags);
pcd_busy = 0;
pcd_bufblk = -1;
end_request(0);
do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pcd_lock,saved_flags);
return;
}
do_pcd_read();
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pcd_lock,saved_flags);
do_pcd_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pcd_lock,saved_flags);
}
/* the audio_ioctl stuff is adapted from sr_ioctl.c */
@@ -164,6 +164,8 @@ static int pf_drive_count;
#include <asm/uaccess.h>
static spinlock_t pf_spin_lock;
#ifndef MODULE
#include "setup.h"
@@ -358,7 +360,7 @@ int pf_init (void) /* preliminary initialisation */
return -1;
}
q = BLK_DEFAULT_QUEUE(MAJOR_NR);
blk_init_queue(q, DEVICE_REQUEST);
blk_init_queue(q, DEVICE_REQUEST, &pf_spin_lock);
blk_queue_max_segments(q, cluster);
read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read ahead */
@@ -876,9 +878,9 @@ static void pf_next_buf( int unit )
{ long saved_flags;
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1);
if (!pf_run) { spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
if (!pf_run) { spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
@@ -894,7 +896,7 @@ static void pf_next_buf( int unit )
pf_count = CURRENT->current_nr_sectors;
pf_buf = CURRENT->buffer;
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
}
static void do_pf_read( void )
@@ -918,11 +920,11 @@ static void do_pf_read_start( void )
pi_do_claimed(PI,do_pf_read_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
pf_mask = STAT_DRQ;
@@ -944,11 +946,11 @@ static void do_pf_read_drq( void )
pi_do_claimed(PI,do_pf_read_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
pi_read_block(PI,pf_buf,512);
@@ -959,11 +961,11 @@ static void do_pf_read_drq( void )
if (!pf_count) pf_next_buf(unit);
}
pi_disconnect(PI);
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
}
static void do_pf_write( void )
@@ -985,11 +987,11 @@ static void do_pf_write_start( void )
pi_do_claimed(PI,do_pf_write_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
@@ -1002,11 +1004,11 @@ static void do_pf_write_start( void )
pi_do_claimed(PI,do_pf_write_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
pi_write_block(PI,pf_buf,512);
@@ -1032,19 +1034,19 @@ static void do_pf_write_done( void )
pi_do_claimed(PI,do_pf_write_start);
return;
}
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(0);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
return;
}
pi_disconnect(PI);
spin_lock_irqsave(&QUEUE->queue_lock,saved_flags);
spin_lock_irqsave(&pf_spin_lock,saved_flags);
end_request(1);
pf_busy = 0;
do_pf_request(NULL);
spin_unlock_irqrestore(&QUEUE->queue_lock,saved_flags);
spin_unlock_irqrestore(&pf_spin_lock,saved_flags);
}
/* end of pf.c */
@@ -189,6 +189,8 @@ int __init ps2esdi_init(void)
return 0;
} /* ps2esdi_init */
module_init(ps2esdi_init);
#ifdef MODULE
static int cyl[MAX_HD] = {-1,-1};
@@ -597,7 +597,7 @@ static void ide_init_queue(ide_drive_t *drive)
int max_sectors;
q->queuedata = HWGROUP(drive);
blk_init_queue(q, do_ide_request);
blk_init_queue(q, do_ide_request, &ide_lock);
blk_queue_segment_boundary(q, 0xffff);
/* IDE can do up to 128K per request, pdc4030 needs smaller limit */
@@ -177,8 +177,6 @@ static int initializing; /* set while initializing built-in drivers */
/*
* protects global structures etc, we want to split this into per-hwgroup
* instead.
*
* anti-deadlock ordering: ide_lock -> DRIVE_LOCK
*/
spinlock_t ide_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
@@ -583,11 +581,9 @@ inline int __ide_end_request(ide_hwgroup_t *hwgroup, int uptodate, int nr_secs)
if (!end_that_request_first(rq, uptodate, nr_secs)) {
add_blkdev_randomness(MAJOR(rq->rq_dev));
spin_lock(DRIVE_LOCK(drive));
blkdev_dequeue_request(rq);
hwgroup->rq = NULL;
end_that_request_last(rq);
spin_unlock(DRIVE_LOCK(drive));
ret = 0;
}
@@ -900,11 +896,9 @@ void ide_end_drive_cmd (ide_drive_t *drive, byte stat, byte err)
}
}
spin_lock(DRIVE_LOCK(drive));
blkdev_dequeue_request(rq);
HWGROUP(drive)->rq = NULL;
end_that_request_last(rq);
spin_unlock(DRIVE_LOCK(drive));
spin_unlock_irqrestore(&ide_lock, flags);
}
@@ -1368,7 +1362,7 @@ static inline ide_drive_t *choose_drive (ide_hwgroup_t *hwgroup)
/*
* Issue a new request to a drive from hwgroup
* Caller must have already done spin_lock_irqsave(DRIVE_LOCK(drive), ...)
* Caller must have already done spin_lock_irqsave(&ide_lock, ...)
*
* A hwgroup is a serialized group of IDE interfaces. Usually there is
* exactly one hwif (interface) per hwgroup, but buggy controllers (eg. CMD640)
@@ -1456,9 +1450,7 @@ static void ide_do_request(ide_hwgroup_t *hwgroup, int masked_irq)
/*
* just continuing an interrupted request maybe
*/
spin_lock(DRIVE_LOCK(drive));
rq = hwgroup->rq = elv_next_request(&drive->queue);
spin_unlock(DRIVE_LOCK(drive));
/*
* Some systems have trouble with IDE IRQs arriving while
@@ -1496,19 +1488,7 @@ request_queue_t *ide_get_queue (kdev_t dev)
*/
void do_ide_request(request_queue_t *q)
{
unsigned long flags;
/*
* release queue lock, grab IDE global lock and restore when
* we leave...
*/
spin_unlock(&q->queue_lock);
spin_lock_irqsave(&ide_lock, flags);
ide_do_request(q->queuedata, 0);
spin_unlock_irqrestore(&ide_lock, flags);
spin_lock(&q->queue_lock);
}
/*
@@ -1875,7 +1855,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio
if (action == ide_wait)
rq->waiting = &wait;
spin_lock_irqsave(&ide_lock, flags);
spin_lock(DRIVE_LOCK(drive));
if (blk_queue_empty(&drive->queue) || action == ide_preempt) {
if (action == ide_preempt)
hwgroup->rq = NULL;
@@ -1886,7 +1865,6 @@ int ide_do_drive_cmd (ide_drive_t *drive, struct request *rq, ide_action_t actio
queue_head = queue_head->next;
}
q->elevator.elevator_add_req_fn(q, rq, queue_head);
spin_unlock(DRIVE_LOCK(drive));
ide_do_request(hwgroup, 0);
spin_unlock_irqrestore(&ide_lock, flags);
if (action == ide_wait) {
@@ -189,7 +189,7 @@ static mdk_personality_t linear_personality=
status: linear_status,
};
static int md__init linear_init (void)
static int __init linear_init (void)
{
return register_md_personality (LINEAR, &linear_personality);
}
@@ -334,7 +334,7 @@ static mdk_personality_t raid0_personality=
status: raid0_status,
};
static int md__init raid0_init (void)
static int __init raid0_init (void)
{
return register_md_personality (RAID0, &raid0_personality);
}
2001-12-11 Jeff Garzik <jgarzik@mandrakesoft.com>
* eeprom.c, timer.c, media.c, tulip_core.c:
Remove 21040 and 21041 chip support.
2001-11-13 David S. Miller <davem@redhat.com>
* tulip_core.c (tulip_mwi_config): Kill unused label early_out.
@@ -136,23 +136,6 @@ void __devinit tulip_parse_eeprom(struct net_device *dev)
subsequent_board:
if (ee_data[27] == 0) { /* No valid media table. */
} else if (tp->chip_id == DC21041) {
unsigned char *p = (void *)ee_data + ee_data[27 + controller_index*3];
int media = get_u16(p);
int count = p[2];
p += 3;
printk(KERN_INFO "%s: 21041 Media table, default media %4.4x (%s).\n",
dev->name, media,
media & 0x0800 ? "Autosense" : medianame[media & MEDIA_MASK]);
for (i = 0; i < count; i++) {
unsigned char media_block = *p++;
int media_code = media_block & MEDIA_MASK;
if (media_block & 0x40)
p += 6;
printk(KERN_INFO "%s: 21041 media #%d, %s.\n",
dev->name, media_code, medianame[media_code]);
}
} else {
unsigned char *p = (void *)ee_data + ee_data[27];
unsigned char csr12dir = 0;
@@ -21,12 +21,6 @@
#include "tulip.h"
/* This is a mysterious value that can be written to CSR11 in the 21040 (only)
to support a pre-NWay full-duplex signaling mechanism using short frames.
No one knows what it should be, but if left at its default value some
10base2(!) packets trigger a full-duplex-request interrupt. */
#define FULL_DUPLEX_MAGIC 0x6969
/* The maximum data clock rate is 2.5 Mhz. The minimum timing is usually
met by back-to-back PCI I/O cycles, but we insert a delay to avoid
"overclocking" issues or future 66Mhz PCI. */
@@ -326,17 +320,6 @@ void tulip_select_media(struct net_device *dev, int startup)
printk(KERN_DEBUG "%s: Using media type %s, CSR12 is %2.2x.\n",
dev->name, medianame[dev->if_port],
inl(ioaddr + CSR12) & 0xff);
} else if (tp->chip_id == DC21041) {
int port = dev->if_port <= 4 ? dev->if_port : 0;
if (tulip_debug > 1)
printk(KERN_DEBUG "%s: 21041 using media %s, CSR12 is %4.4x.\n",
dev->name, medianame[port == 3 ? 12: port],
inl(ioaddr + CSR12));
outl(0x00000000, ioaddr + CSR13); /* Reset the serial interface */
outl(t21041_csr14[port], ioaddr + CSR14);
outl(t21041_csr15[port], ioaddr + CSR15);
outl(t21041_csr13[port], ioaddr + CSR13);
new_csr6 = 0x80020000;
} else if (tp->chip_id == LC82C168) {
if (startup && ! tp->medialock)
dev->if_port = tp->mii_cnt ? 11 : 0;
@@ -363,26 +346,6 @@ void tulip_select_media(struct net_device *dev, int startup)
new_csr6 = 0x00420000;
outl(0x1F078, ioaddr + 0xB8);
}
} else if (tp->chip_id == DC21040) { /* 21040 */
/* Turn on the xcvr interface. */
int csr12 = inl(ioaddr + CSR12);
if (tulip_debug > 1)
printk(KERN_DEBUG "%s: 21040 media type is %s, CSR12 is %2.2x.\n",
dev->name, medianame[dev->if_port], csr12);
if (tulip_media_cap[dev->if_port] & MediaAlwaysFD)
tp->full_duplex = 1;
new_csr6 = 0x20000;
/* Set the full duplux match frame. */
outl(FULL_DUPLEX_MAGIC, ioaddr + CSR11);
outl(0x00000000, ioaddr + CSR13); /* Reset the serial interface */
if (t21040_csr13[dev->if_port] & 8) {
outl(0x0705, ioaddr + CSR14);
outl(0x0006, ioaddr + CSR15);
} else {
outl(0xffff, ioaddr + CSR14);
outl(0x0000, ioaddr + CSR15);
}
outl(0x8f01 | t21040_csr13[dev->if_port], ioaddr + CSR13);
} else { /* Unknown chip type with no media table. */
if (tp->default_port == 0)
dev->if_port = tp->mii_cnt ? 11 : 3;
@@ -33,60 +33,6 @@ void tulip_timer(unsigned long data)
inl(ioaddr + CSR14), inl(ioaddr + CSR15));
}
switch (tp->chip_id) {
case DC21040:
if (!tp->medialock && csr12 & 0x0002) { /* Network error */
printk(KERN_INFO "%s: No link beat found.\n",
dev->name);
dev->if_port = (dev->if_port == 2 ? 0 : 2);
tulip_select_media(dev, 0);
dev->trans_start = jiffies;
}
break;
case DC21041:
if (tulip_debug > 2)
printk(KERN_DEBUG "%s: 21041 media tick CSR12 %8.8x.\n",
dev->name, csr12);
if (tp->medialock) break;
switch (dev->if_port) {
case 0: case 3: case 4:
if (csr12 & 0x0004) { /*LnkFail */
/* 10baseT is dead. Check for activity on alternate port. */
tp->mediasense = 1;
if (csr12 & 0x0200)
dev->if_port = 2;
else
dev->if_port = 1;
printk(KERN_INFO "%s: No 21041 10baseT link beat, Media switched to %s.\n",
dev->name, medianame[dev->if_port]);
outl(0, ioaddr + CSR13); /* Reset */
outl(t21041_csr14[dev->if_port], ioaddr + CSR14);
outl(t21041_csr15[dev->if_port], ioaddr + CSR15);
outl(t21041_csr13[dev->if_port], ioaddr + CSR13);
next_tick = 10*HZ; /* 2.4 sec. */
} else
next_tick = 30*HZ;
break;
case 1: /* 10base2 */
case 2: /* AUI */
if (csr12 & 0x0100) {
next_tick = (30*HZ); /* 30 sec. */
tp->mediasense = 0;
} else if ((csr12 & 0x0004) == 0) {
printk(KERN_INFO "%s: 21041 media switched to 10baseT.\n",
dev->name);
dev->if_port = 0;
tulip_select_media(dev, 0);
next_tick = (24*HZ)/10; /* 2.4 sec. */
} else if (tp->mediasense || (csr12 & 0x0002)) {
dev->if_port = 3 - dev->if_port; /* Swap ports. */
tulip_select_media(dev, 0);
next_tick = 20*HZ;
} else {
next_tick = 20*HZ;
}
break;
}
break;
case DC21140:
case DC21142:
case MX98713:
/*
* eata.c - Low-level driver for EATA/DMA SCSI host adapters.
*
* 11 Dec 2001 Rev. 7.00 for linux 2.5.1
* + Use host->host_lock instead of io_request_lock.
*
* 1 May 2001 Rev. 6.05 for linux 2.4.4
* + Clean up all pci related routines.
* + Fix data transfer direction for opcode SEND_CUE_SHEET (0x5d)
@@ -438,13 +441,6 @@ MODULE_AUTHOR("Dario Ballabio");
#include <linux/ctype.h>
#include <linux/spinlock.h>
#define SPIN_FLAGS unsigned long spin_flags;
#define SPIN_LOCK spin_lock_irq(&io_request_lock);
#define SPIN_LOCK_SAVE spin_lock_irqsave(&io_request_lock, spin_flags);
#define SPIN_UNLOCK spin_unlock_irq(&io_request_lock);
#define SPIN_UNLOCK_RESTORE \
spin_unlock_irqrestore(&io_request_lock, spin_flags);
/* Subversion values */
#define ISA 0
#define ESA 1
@@ -1589,10 +1585,12 @@ static inline int do_reset(Scsi_Cmnd *SCarg) {
#endif
HD(j)->in_reset = TRUE;
SPIN_UNLOCK
spin_unlock_irq(&sh[j]->host_lock);
time = jiffies;
while ((jiffies - time) < (10 * HZ) && limit++ < 200000) udelay(100L);
SPIN_LOCK
spin_lock_irq(&sh[j]->host_lock);
printk("%s: reset, interrupts disabled, loops %d.\n", BN(j), limit);
for (i = 0; i < sh[j]->can_queue; i++) {
@@ -2036,14 +2034,14 @@ static inline void ihdlr(int irq, unsigned int j) {
static void do_interrupt_handler(int irq, void *shap, struct pt_regs *regs) {
unsigned int j;
SPIN_FLAGS
unsigned long spin_flags;
/* Check if the interrupt must be processed by this handler */
if ((j = (unsigned int)((char *)shap - sha)) >= num_boards) return;
SPIN_LOCK_SAVE
spin_lock_irqsave(&sh[j]->host_lock, spin_flags);
ihdlr(irq, j);
SPIN_UNLOCK_RESTORE
spin_unlock_irqrestore(&sh[j]->host_lock, spin_flags);
}
int eata2x_release(struct Scsi_Host *shpnt) {
@@ -2077,4 +2075,4 @@ static Scsi_Host_Template driver_template = EATA;
#ifndef MODULE
__setup("eata=", option_setup);
#endif /* end MODULE */
MODULE_LICENSE("Dual BSD/GPL");
MODULE_LICENSE("GPL");
@@ -13,7 +13,7 @@ int eata2x_abort(Scsi_Cmnd *);
int eata2x_reset(Scsi_Cmnd *);
int eata2x_biosparam(Disk *, kdev_t, int *);
#define EATA_VERSION "6.05.00"
#define EATA_VERSION "7.00.00"
#define EATA { \
name: "EATA/DMA 2.0x rev. " EATA_VERSION " ", \