Commit 383f9749 authored by James Bottomley

Merge by hand (conflicts between pending drivers and kfree cleanups)

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
parents f093182d 3da8b713
...@@ -133,3 +133,32 @@ hardware and it is important to prevent the kernel from attempting to directly
access these devices too, as if the array controller were merely a SCSI
controller in the same way that we are allowing it to access SCSI tape drives.
SCSI error handling for tape drives and medium changers
-------------------------------------------------------
The Linux SCSI mid layer provides an error handling protocol which
kicks into gear whenever a SCSI command fails to complete within a
certain amount of time (which can vary depending on the command).
The cciss driver participates in this protocol to some extent. The
normal protocol is a four step process: first the device is told
to abort the command; if that doesn't work, the device is reset;
if that doesn't work, the SCSI bus is reset; and if that doesn't
work, the host bus adapter is reset. The cciss driver is a block
driver as well as a SCSI driver, and only the tape drives and
medium changers are presented to the SCSI mid layer; unlike more
straightforward SCSI drivers, disk i/o continues through the block
side during the SCSI error recovery process. For these reasons the
cciss driver implements only the first two of these actions:
aborting the command and resetting the device. Additionally, most
tape drives will not oblige in aborting commands, and sometimes it
appears they will not even obey a reset command, though in most
circumstances they will. If the command cannot be aborted and the
device cannot be reset, the device will be set offline.
In the event the error handling code is triggered and a tape drive is
successfully reset or the tardy command is successfully aborted, the
tape drive may still not allow i/o to continue until some command
is issued which positions the tape to a known position. Typically you
must rewind the tape (by issuing "mt -f /dev/st0 rewind" for example)
before i/o can proceed again to a tape drive which was reset.
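The two recovery steps cciss implements, and the offline fallback when both fail, can be sketched as a tiny state machine. This is an illustration only: the `device_model` struct and `recover()` helper below are hypothetical stand-ins, not cciss driver code.

```c
#include <assert.h>

/* Models the recovery ladder described above for a driver that, like
 * cciss, implements only the first two steps: abort, then reset.
 * If neither works, the device is set offline. */

enum eh_result {
	EH_RECOVERED_BY_ABORT,
	EH_RECOVERED_BY_RESET,
	EH_DEVICE_OFFLINED
};

struct device_model {
	int accepts_abort;  /* many tape drives will not oblige */
	int accepts_reset;  /* usually honored, but not always */
	int online;
};

static enum eh_result recover(struct device_model *dev)
{
	if (dev->accepts_abort)
		return EH_RECOVERED_BY_ABORT;
	if (dev->accepts_reset)
		return EH_RECOVERED_BY_RESET;
	dev->online = 0;  /* both steps failed: set the device offline */
	return EH_DEVICE_OFFLINED;
}
```

A drive that ignores both the abort and the reset ends up offlined, which is exactly the terminal case the text describes.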
...@@ -52,8 +52,6 @@ ppa.txt
	- info on driver for IOmega zip drive
qlogicfas.txt
	- info on driver for QLogic FASxxx based adapters
qlogicisp.txt
	- info on driver for QLogic ISP 1020 based adapters
scsi-generic.txt
	- info on the sg driver for generic (non-disk/CD/tape) SCSI devices.
scsi.txt
...
...@@ -11,8 +11,7 @@ Qlogic boards:
	* IQ-PCI-10
	* IQ-PCI-D
is provided by the qla1280 driver.
Nor does it support the PCI-Basic, which is supported by the
'am53c974' driver.
...
Notes for the QLogic ISP1020 PCI SCSI Driver:
This driver works well in practice, but does not support disconnect/
reconnect, which makes using it with tape drives impractical.
It should work for most host adaptors with the ISP1020 chip. The
QLogic Corporation produces several PCI SCSI adapters which should
work:
* IQ-PCI
* IQ-PCI-10
* IQ-PCI-D
This driver may work with boards containing the ISP1020A or ISP1040A
chips, but that has not been tested.
This driver will NOT work with:
* ISA or VL Bus Qlogic cards (they use the 'qlogicfas' driver)
* PCI-basic (it uses the 'am53c974' driver)
Much thanks to QLogic's tech support for providing the latest ISP1020
firmware, and for taking the time to review my code.
Erik Moe
ehm@cris.com
Revised:
Michael A. Griffith
grif@cs.ucr.edu
...@@ -83,11 +83,11 @@ with the command.
The timeout handler is scsi_times_out(). When a timeout occurs, this
function
 1. invokes optional hostt->eh_timed_out() callback. Return value can
    be one of
    - EH_HANDLED
	This indicates that eh_timed_out() dealt with the timeout. The
	scmd is passed to __scsi_done() and thus linked into per-cpu
	scsi_done_q. Normal command completion described in [1-2-1]
	follows.
...@@ -105,7 +105,7 @@ function
	command will time out again.
    - EH_NOT_HANDLED
	This is the same as when eh_timed_out() callback doesn't exist.
	Step #2 is taken.
 2. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
...@@ -142,7 +142,7 @@ are linked on shost->eh_cmd_q.
Note that this does not mean lower layers are quiescent. If a LLDD
completed a scmd with error status, the LLDD and lower layers are
assumed to forget about the scmd at that point. However, if a scmd
has timed out, unless hostt->eh_timed_out() made lower layers forget
about the scmd, which currently no LLDD does, the command is still
active as far as lower layers are concerned and completion could
occur at any time. Of course, all such completions are ignored as the
...
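The eh_timed_out() dispatch described in this hunk can be modeled roughly as follows. EH_HANDLED, EH_RESET_TIMER, and EH_NOT_HANDLED are the real return codes; the `handle_timeout()` helper and `cmd_fate` enum are illustrative assumptions, not mid-layer code.

```c
#include <assert.h>
#include <stddef.h>

/* What happens to a timed-out command depends on the optional
 * eh_timed_out() callback's verdict. A missing callback behaves
 * like EH_NOT_HANDLED: the command enters error recovery. */

enum eh_ret { EH_HANDLED, EH_RESET_TIMER, EH_NOT_HANDLED };
enum cmd_fate { FATE_DONE, FATE_TIMER_RESTARTED, FATE_ERROR_RECOVERY };

static enum cmd_fate handle_timeout(enum eh_ret (*eh_timed_out)(void))
{
	if (eh_timed_out) {
		switch (eh_timed_out()) {
		case EH_HANDLED:     /* LLDD dealt with it; complete normally */
			return FATE_DONE;
		case EH_RESET_TIMER: /* give the command more time */
			return FATE_TIMER_RESTARTED;
		case EH_NOT_HANDLED: /* fall through to step #2 */
			break;
		}
	}
	/* corresponds to scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) */
	return FATE_ERROR_RECOVERY;
}

static enum eh_ret says_handled(void) { return EH_HANDLED; }
static enum eh_ret says_reset(void)   { return EH_RESET_TIMER; }
```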
...@@ -148,6 +148,7 @@ static struct board_type products[] = {
static ctlr_info_t *hba[MAX_CTLR];
static void do_cciss_request(request_queue_t *q);
static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs);
static int cciss_open(struct inode *inode, struct file *filep);
static int cciss_release(struct inode *inode, struct file *filep);
static int cciss_ioctl(struct inode *inode, struct file *filep,
...@@ -1583,6 +1584,24 @@ static int fill_cmd(CommandList_struct *c, __u8 cmd, int ctlr, void *buff,
}
} else if (cmd_type == TYPE_MSG) {
switch (cmd) {
case 0: /* ABORT message */
c->Request.CDBLen = 12;
c->Request.Type.Attribute = ATTR_SIMPLE;
c->Request.Type.Direction = XFER_WRITE;
c->Request.Timeout = 0;
c->Request.CDB[0] = cmd; /* abort */
c->Request.CDB[1] = 0; /* abort a command */
/* buff contains the tag of the command to abort */
memcpy(&c->Request.CDB[4], buff, 8);
break;
case 1: /* RESET message */
c->Request.CDBLen = 12;
c->Request.Type.Attribute = ATTR_SIMPLE;
c->Request.Type.Direction = XFER_WRITE;
c->Request.Timeout = 0;
memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
c->Request.CDB[0] = cmd; /* reset */
c->Request.CDB[1] = 0x04; /* reset a LUN */
break;
case 3: /* No-Op message */
c->Request.CDBLen = 1;
c->Request.Type.Attribute = ATTR_SIMPLE;
...@@ -1869,6 +1888,52 @@ static unsigned long pollcomplete(int ctlr)
/* Invalid address to tell caller we ran out of time */
return 1;
}
static int add_sendcmd_reject(__u8 cmd, int ctlr, unsigned long complete)
{
/* We get in here if sendcmd() is polling for completions
and gets some command back that it wasn't expecting --
something other than that which it just sent down.
Ordinarily, that shouldn't happen, but it can happen when
the scsi tape stuff gets into error handling mode, and
starts using sendcmd() to try to abort commands and
reset tape drives. In that case, sendcmd may pick up
completions of commands that were sent to logical drives
through the block i/o system, or cciss ioctls completing, etc.
In that case, we need to save those completions for later
processing by the interrupt handler.
*/
#ifdef CONFIG_CISS_SCSI_TAPE
struct sendcmd_reject_list *srl = &hba[ctlr]->scsi_rejects;
/* If it's not the scsi tape stuff doing error handling, (abort */
/* or reset) then we don't expect anything weird. */
if (cmd != CCISS_RESET_MSG && cmd != CCISS_ABORT_MSG) {
#endif
printk( KERN_WARNING "cciss cciss%d: SendCmd "
"Invalid command list address returned! (%lx)\n",
ctlr, complete);
/* not much we can do. */
#ifdef CONFIG_CISS_SCSI_TAPE
return 1;
}
/* We've sent down an abort or reset, but something else
has completed */
if (srl->ncompletions >= (NR_CMDS + 2)) {
/* Uh oh. No room to save it for later... */
printk(KERN_WARNING "cciss%d: Sendcmd: Invalid command addr, "
"reject list overflow, command lost!\n", ctlr);
return 1;
}
/* Save it for later */
srl->complete[srl->ncompletions] = complete;
srl->ncompletions++;
#endif
return 0;
}
/*
 * Send a command to the controller, and wait for it to complete.
 * Only used at init time.
...@@ -1891,7 +1956,7 @@ static int sendcmd(
unsigned long complete;
ctlr_info_t *info_p= hba[ctlr];
u64bit buff_dma_handle;
int status, done = 0;
if ((c = cmd_alloc(info_p, 1)) == NULL) {
printk(KERN_WARNING "cciss: unable to get memory");
...@@ -1913,7 +1978,9 @@ static int sendcmd(
info_p->access.set_intr_mask(info_p, CCISS_INTR_OFF);
/* Make sure there is room in the command FIFO */
/* Actually it should be completely empty at this time */
/* unless we are in here doing error handling for the scsi */
/* tape side of the driver. */
for (i = 200000; i > 0; i--)
{
/* if fifo isn't full go */
...@@ -1930,13 +1997,25 @@ static int sendcmd(
 * Send the cmd
 */
info_p->access.submit_command(info_p, c);
done = 0;
do {
complete = pollcomplete(ctlr);
#ifdef CCISS_DEBUG
printk(KERN_DEBUG "cciss: command completed\n");
#endif /* CCISS_DEBUG */
if (complete == 1) {
printk( KERN_WARNING
"cciss cciss%d: SendCmd timed out, "
"No command list address returned!\n",
ctlr);
status = IO_ERROR;
done = 1;
break;
}
/* This will need to change for direct lookup completions */
if ( (complete & CISS_ERROR_BIT)
&& (complete & ~CISS_ERROR_BIT) == c->busaddr)
{
...@@ -1976,6 +2055,10 @@ static int sendcmd(
status = IO_ERROR;
goto cleanup1;
}
} else if (c->err_info->CommandStatus == CMD_UNABORTABLE) {
printk(KERN_WARNING "cciss%d: command could not be aborted.\n", ctlr);
status = IO_ERROR;
goto cleanup1;
}
printk(KERN_WARNING "ciss ciss%d: sendcmd"
" Error %x \n", ctlr,
...@@ -1990,20 +2073,15 @@ static int sendcmd(
goto cleanup1;
}
}
/* This will need changing for direct lookup completions */
if (complete != c->busaddr) {
if (add_sendcmd_reject(cmd, ctlr, complete) != 0) {
BUG(); /* we are pretty much hosed if we get here. */
}
continue;
} else
done = 1;
} while (!done);
cleanup1:
/* unlock the data buffer from DMA */
...@@ -2011,6 +2089,11 @@ static int sendcmd(
buff_dma_handle.val32.upper = c->SG[0].Addr.upper;
pci_unmap_single(info_p->pdev, (dma_addr_t) buff_dma_handle.val,
c->SG[0].Len, PCI_DMA_BIDIRECTIONAL);
#ifdef CONFIG_CISS_SCSI_TAPE
/* if we saved some commands for later, process them now. */
if (info_p->scsi_rejects.ncompletions > 0)
do_cciss_intr(0, info_p, NULL);
#endif
cmd_free(info_p, c, 1);
return (status);
}
...@@ -2335,6 +2418,48 @@ static void do_cciss_request(request_queue_t *q)
start_io(h);
}
static inline unsigned long get_next_completion(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
/* Any rejects from sendcmd() lying around? Process them first */
if (h->scsi_rejects.ncompletions == 0)
return h->access.command_completed(h);
else {
struct sendcmd_reject_list *srl;
int n;
srl = &h->scsi_rejects;
n = --srl->ncompletions;
/* printk("cciss%d: processing saved reject\n", h->ctlr); */
printk("p");
return srl->complete[n];
}
#else
return h->access.command_completed(h);
#endif
}
static inline int interrupt_pending(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
return ( h->access.intr_pending(h)
|| (h->scsi_rejects.ncompletions > 0));
#else
return h->access.intr_pending(h);
#endif
}
static inline long interrupt_not_for_us(ctlr_info_t *h)
{
#ifdef CONFIG_CISS_SCSI_TAPE
return (((h->access.intr_pending(h) == 0) ||
(h->interrupts_enabled == 0))
&& (h->scsi_rejects.ncompletions == 0));
#else
return (((h->access.intr_pending(h) == 0) ||
(h->interrupts_enabled == 0)));
#endif
}
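The reject-first completion order implemented by get_next_completion() above can be sketched in isolation. The `reject_list` struct and `fifo_pop` callback below are stand-ins for the driver's `sendcmd_reject_list` and `h->access.command_completed(h)`, not the real types.

```c
#include <assert.h>

/* Saved sendcmd() rejects are drained first, newest first (the list
 * behaves as a stack); only when it is empty is the hardware
 * completion FIFO consulted. */

#define MAX_REJECTS 8

struct reject_list {
	int ncompletions;
	unsigned long complete[MAX_REJECTS];
};

static unsigned long next_completion(struct reject_list *srl,
				     unsigned long (*fifo_pop)(void))
{
	if (srl->ncompletions == 0)
		return fifo_pop();
	return srl->complete[--srl->ncompletions];
}

/* Stub hardware FIFO that always has one completion tag available. */
static unsigned long fake_fifo(void) { return 0x1000; }
```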
static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs)
{
ctlr_info_t *h = dev_id;
...@@ -2344,19 +2469,15 @@ static irqreturn_t do_cciss_intr(int irq, void *dev_id, struct pt_regs *regs)
int j;
int start_queue = h->next_to_run;
if (interrupt_not_for_us(h))
return IRQ_NONE;
/*
 * If there are completed commands in the completion queue,
 * we had better do something about it.
 */
spin_lock_irqsave(CCISS_LOCK(h->ctlr), flags);
while (interrupt_pending(h)) {
while((a = get_next_completion(h)) != FIFO_EMPTY) {
a1 = a;
if ((a & 0x04)) {
a2 = (a >> 3);
...@@ -2963,7 +3084,15 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
printk( KERN_ERR "cciss: out of memory");
goto clean4;
}
#ifdef CONFIG_CISS_SCSI_TAPE
hba[i]->scsi_rejects.complete =
kmalloc(sizeof(hba[i]->scsi_rejects.complete[0]) *
(NR_CMDS + 5), GFP_KERNEL);
if (hba[i]->scsi_rejects.complete == NULL) {
printk( KERN_ERR "cciss: out of memory");
goto clean4;
}
#endif
spin_lock_init(&hba[i]->lock);
/* Initialize the pdev driver private data.
...@@ -3031,6 +3160,10 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
return(1);
clean4:
#ifdef CONFIG_CISS_SCSI_TAPE
if(hba[i]->scsi_rejects.complete)
kfree(hba[i]->scsi_rejects.complete);
#endif
kfree(hba[i]->cmd_pool_bits);
if(hba[i]->cmd_pool)
pci_free_consistent(hba[i]->pdev,
...@@ -3103,6 +3236,9 @@ static void __devexit cciss_remove_one (struct pci_dev *pdev)
pci_free_consistent(hba[i]->pdev, NR_CMDS * sizeof( ErrorInfo_struct),
hba[i]->errinfo_pool, hba[i]->errinfo_pool_dhandle);
kfree(hba[i]->cmd_pool_bits);
#ifdef CONFIG_CISS_SCSI_TAPE
kfree(hba[i]->scsi_rejects.complete);
#endif
release_io_mem(hba[i]);
free_hba(i);
}
...
...@@ -44,6 +44,14 @@ typedef struct _drive_info_struct
 */
} drive_info_struct;
#ifdef CONFIG_CISS_SCSI_TAPE
struct sendcmd_reject_list {
int ncompletions;
unsigned long *complete; /* array of NR_CMDS tags */
};
#endif
struct ctlr_info
{
int ctlr;
...@@ -100,6 +108,9 @@ struct ctlr_info
struct gendisk *gendisk[NWD];
#ifdef CONFIG_CISS_SCSI_TAPE
void *scsi_ctlr; /* ptr to structure containing scsi related stuff */
/* list of block side commands the scsi error handling sucked up */
/* and saved for later processing */
struct sendcmd_reject_list scsi_rejects;
#endif
unsigned char alive;
};
...
...@@ -42,6 +42,9 @@
#include "cciss_scsi.h"
#define CCISS_ABORT_MSG 0x00
#define CCISS_RESET_MSG 0x01
/* some prototypes... */
static int sendcmd(
__u8 cmd,
...@@ -67,6 +70,8 @@ static int cciss_scsi_proc_info(
static int cciss_scsi_queue_command (struct scsi_cmnd *cmd,
void (* done)(struct scsi_cmnd *));
static int cciss_eh_device_reset_handler(struct scsi_cmnd *);
static int cciss_eh_abort_handler(struct scsi_cmnd *);
static struct cciss_scsi_hba_t ccissscsi[MAX_CTLR] = {
{ .name = "cciss0", .ndevices = 0 },
...@@ -90,6 +95,9 @@ static struct scsi_host_template cciss_driver_template = {
.sg_tablesize = MAXSGENTRIES,
.cmd_per_lun = 1,
.use_clustering = DISABLE_CLUSTERING,
/* Can't have eh_bus_reset_handler or eh_host_reset_handler for cciss */
.eh_device_reset_handler= cciss_eh_device_reset_handler,
.eh_abort_handler = cciss_eh_abort_handler,
};
#pragma pack(1)
...@@ -247,7 +255,7 @@ scsi_cmd_stack_free(int ctlr)
#define DEVICETYPE(n) (n<0 || n>MAX_SCSI_DEVICE_CODE) ? \
"Unknown" : scsi_device_types[n]
#if 1
static int xmargin=8;
static int amargin=60;
...@@ -1448,6 +1456,78 @@ cciss_proc_tape_report(int ctlr, unsigned char *buffer, off_t *pos, off_t *len)
*pos += size; *len += size;
}
/* Need at least one of these error handlers to keep ../scsi/hosts.c from
 * complaining. Doing a host- or bus-reset can't do anything good here.
 * Despite what it might say in scsi_error.c, there may well be commands
 * on the controller, as the cciss driver registers twice, once as a block
 * device for the logical drives, and once as a scsi device, for any tape
 * drives. So we know there are no commands out on the tape drives, but
 * we don't know that there are no commands on the controller, and there
 * likely are, as the cciss block device is most commonly used as a boot
 * device (embedded controller on HP/Compaq systems).
 */
static int cciss_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
{
int rc;
CommandList_struct *cmd_in_trouble;
ctlr_info_t **c;
int ctlr;
/* find the controller to which the command to be aborted was sent */
c = (ctlr_info_t **) &scsicmd->device->host->hostdata[0];
if (c == NULL) /* paranoia */
return FAILED;
ctlr = (*c)->ctlr;
printk(KERN_WARNING "cciss%d: resetting tape drive or medium changer.\n", ctlr);
/* find the command that's giving us trouble */
cmd_in_trouble = (CommandList_struct *) scsicmd->host_scribble;
if (cmd_in_trouble == NULL) { /* paranoia */
return FAILED;
}
/* send a reset to the SCSI LUN which the command was sent to */
rc = sendcmd(CCISS_RESET_MSG, ctlr, NULL, 0, 2, 0, 0,
(unsigned char *) &cmd_in_trouble->Header.LUN.LunAddrBytes[0],
TYPE_MSG);
/* sendcmd turned off interrupts on the board, turn 'em back on. */
(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
if (rc == 0)
return SUCCESS;
printk(KERN_WARNING "cciss%d: resetting device failed.\n", ctlr);
return FAILED;
}
static int cciss_eh_abort_handler(struct scsi_cmnd *scsicmd)
{
int rc;
CommandList_struct *cmd_to_abort;
ctlr_info_t **c;
int ctlr;
/* find the controller to which the command to be aborted was sent */
c = (ctlr_info_t **) &scsicmd->device->host->hostdata[0];
if (c == NULL) /* paranoia */
return FAILED;
ctlr = (*c)->ctlr;
printk(KERN_WARNING "cciss%d: aborting tardy SCSI cmd\n", ctlr);
/* find the command to be aborted */
cmd_to_abort = (CommandList_struct *) scsicmd->host_scribble;
if (cmd_to_abort == NULL) /* paranoia */
return FAILED;
rc = sendcmd(CCISS_ABORT_MSG, ctlr, &cmd_to_abort->Header.Tag,
0, 2, 0, 0,
(unsigned char *) &cmd_to_abort->Header.LUN.LunAddrBytes[0],
TYPE_MSG);
/* sendcmd turned off interrupts on the board, turn 'em back on. */
(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
if (rc == 0)
return SUCCESS;
return FAILED;
}
#else /* no CONFIG_CISS_SCSI_TAPE */
/* If no tape support, then these become defined out of existence */
...
...@@ -1295,27 +1295,6 @@ config SCSI_QLOGIC_FAS
	To compile this driver as a module, choose M here: the
	module will be called qlogicfas.
config SCSI_QLOGIC_ISP
tristate "Qlogic ISP SCSI support (old driver)"
depends on PCI && SCSI && BROKEN
---help---
This driver works for all QLogic PCI SCSI host adapters (IQ-PCI,
IQ-PCI-10, IQ_PCI-D) except for the PCI-basic card. (This latter
card is supported by the "AM53/79C974 PCI SCSI" driver.)
If you say Y here, make sure to choose "BIOS" at the question "PCI
access mode".
Please read the file <file:Documentation/scsi/qlogicisp.txt>. You
should also read the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To compile this driver as a module, choose M here: the
module will be called qlogicisp.
These days the hardware is also supported by the more modern qla1280
driver. In doubt use that one instead of qlogicisp.
config SCSI_QLOGIC_FC
	tristate "Qlogic ISP FC SCSI support"
	depends on PCI && SCSI
...@@ -1342,14 +1321,6 @@ config SCSI_QLOGIC_1280
	To compile this driver as a module, choose M here: the
	module will be called qla1280.
config SCSI_QLOGIC_1280_1040
bool "Qlogic QLA 1020/1040 SCSI support"
depends on SCSI_QLOGIC_1280 && SCSI_QLOGIC_ISP!=y
help
Say Y here if you have a QLogic ISP1020/1040 SCSI host adapter and
do not want to use the old driver. This option enables support in
the qla1280 driver for those host adapters.
config SCSI_QLOGICPTI
	tristate "PTI Qlogic, ISP Driver"
	depends on SBUS && SCSI
...
...@@ -78,7 +78,6 @@ obj-$(CONFIG_SCSI_NCR_Q720) += NCR_Q720_mod.o
obj-$(CONFIG_SCSI_SYM53C416) += sym53c416.o
obj-$(CONFIG_SCSI_QLOGIC_FAS) += qlogicfas408.o qlogicfas.o
obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
obj-$(CONFIG_SCSI_QLOGIC_ISP) += qlogicisp.o
obj-$(CONFIG_SCSI_QLOGIC_FC) += qlogicfc.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
obj-$(CONFIG_SCSI_QLA2XXX) += qla2xxx/
...
...@@ -436,29 +436,20 @@ ahd_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *))
{
struct ahd_softc *ahd;
struct ahd_linux_device *dev = scsi_transport_device_data(cmd->device);
int rtn = SCSI_MLQUEUE_HOST_BUSY;
unsigned long flags;
ahd = *(struct ahd_softc **)cmd->device->host->hostdata;
ahd_lock(ahd, &flags);
if (ahd->platform_data->qfrozen == 0) {
cmd->scsi_done = scsi_done;
cmd->result = CAM_REQ_INPROG << 16;
rtn = ahd_linux_run_command(ahd, dev, cmd);
}
ahd_unlock(ahd, &flags);
return rtn;
}
static inline struct scsi_target **
...@@ -1081,7 +1072,6 @@ ahd_linux_register_host(struct ahd_softc *ahd, struct scsi_host_template *templa
*((struct ahd_softc **)host->hostdata) = ahd;
ahd_lock(ahd, &s);
scsi_assign_lock(host, &ahd->platform_data->spin_lock);
ahd->platform_data->host = host;
host->can_queue = AHD_MAX_QUEUE;
host->cmd_per_lun = 2;
...@@ -2062,6 +2052,7 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
int wait;
int disconnected;
ahd_mode_state saved_modes;
unsigned long flags;
pending_scb = NULL;
paused = FALSE;
...@@ -2077,7 +2068,7 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
printf(" 0x%x", cmd->cmnd[cdb_byte]);
printf("\n");
ahd_lock(ahd, &flags);
/*
 * First determine if we currently own this command.
int ret; int ret;
ahd->platform_data->flags |= AHD_SCB_UP_EH_SEM; ahd->platform_data->flags |= AHD_SCB_UP_EH_SEM;
spin_unlock_irq(&ahd->platform_data->spin_lock); ahd_unlock(ahd, &flags);
init_timer(&timer); init_timer(&timer);
timer.data = (u_long)ahd; timer.data = (u_long)ahd;
timer.expires = jiffies + (5 * HZ); timer.expires = jiffies + (5 * HZ);
...@@ -2305,9 +2297,8 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag) ...@@ -2305,9 +2297,8 @@ ahd_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
printf("Timer Expired\n"); printf("Timer Expired\n");
retval = FAILED; retval = FAILED;
} }
spin_lock_irq(&ahd->platform_data->spin_lock);
} }
spin_unlock_irq(&ahd->platform_data->spin_lock); ahd_unlock(ahd, &flags);
return (retval); return (retval);
} }
......
...@@ -476,26 +476,20 @@ ahc_linux_queue(struct scsi_cmnd * cmd, void (*scsi_done) (struct scsi_cmnd *))
{
struct ahc_softc *ahc;
struct ahc_linux_device *dev = scsi_transport_device_data(cmd->device);
int rtn = SCSI_MLQUEUE_HOST_BUSY;
unsigned long flags;
ahc = *(struct ahc_softc **)cmd->device->host->hostdata;
ahc_lock(ahc, &flags);
if (ahc->platform_data->qfrozen == 0) {
cmd->scsi_done = scsi_done;
cmd->result = CAM_REQ_INPROG << 16;
rtn = ahc_linux_run_command(ahc, dev, cmd);
}
ahc_unlock(ahc, &flags);
return rtn;
}
static inline struct scsi_target **
...@@ -1079,7 +1073,6 @@ ahc_linux_register_host(struct ahc_softc *ahc, struct scsi_host_template *templa
*((struct ahc_softc **)host->hostdata) = ahc;
ahc_lock(ahc, &s);
scsi_assign_lock(host, &ahc->platform_data->spin_lock);
ahc->platform_data->host = host;
host->can_queue = AHC_MAX_QUEUE;
host->cmd_per_lun = 2;
@@ -2111,6 +2104,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
 	int    paused;
 	int    wait;
 	int    disconnected;
+	unsigned long flags;
 
 	pending_scb = NULL;
 	paused = FALSE;
@@ -2125,7 +2119,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
 			printf(" 0x%x", cmd->cmnd[cdb_byte]);
 		printf("\n");
-	spin_lock_irq(&ahc->platform_data->spin_lock);
+	ahc_lock(ahc, &flags);
 
 	/*
 	 * First determine if we currently own this command.
@@ -2357,7 +2351,8 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
 		int ret;
 
 		ahc->platform_data->flags |= AHC_UP_EH_SEMAPHORE;
-		spin_unlock_irq(&ahc->platform_data->spin_lock);
+		ahc_unlock(ahc, &flags);
 		init_timer(&timer);
 		timer.data = (u_long)ahc;
 		timer.expires = jiffies + (5 * HZ);
@@ -2371,10 +2366,8 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
 			printf("Timer Expired\n");
 			retval = FAILED;
 		}
-		spin_lock_irq(&ahc->platform_data->spin_lock);
-	}
-	spin_unlock_irq(&ahc->platform_data->spin_lock);
+	} else
+		ahc_unlock(ahc, &flags);
 	return (retval);
 }
...
@@ -395,6 +395,7 @@ static int idescsi_end_request (ide_drive_t *drive, int uptodate, int nrsecs)
 	int log = test_bit(IDESCSI_LOG_CMD, &scsi->log);
 	struct Scsi_Host *host;
 	u8 *scsi_buf;
+	int errors = rq->errors;
 	unsigned long flags;
 
 	if (!(rq->flags & (REQ_SPECIAL|REQ_SENSE))) {
@@ -421,11 +422,11 @@ static int idescsi_end_request (ide_drive_t *drive, int uptodate, int nrsecs)
 		printk (KERN_WARNING "ide-scsi: %s: timed out for %lu\n",
 				drive->name, pc->scsi_cmd->serial_number);
 		pc->scsi_cmd->result = DID_TIME_OUT << 16;
-	} else if (rq->errors >= ERROR_MAX) {
+	} else if (errors >= ERROR_MAX) {
 		pc->scsi_cmd->result = DID_ERROR << 16;
 		if (log)
 			printk ("ide-scsi: %s: I/O error for %lu\n", drive->name, pc->scsi_cmd->serial_number);
-	} else if (rq->errors) {
+	} else if (errors) {
 		if (log)
 			printk ("ide-scsi: %s: check condition for %lu\n", drive->name, pc->scsi_cmd->serial_number);
 		if (!idescsi_check_condition(drive, rq))
...
@@ -139,6 +139,7 @@
 /* - Remove 3 unused "inline" functions */
 /* 7.12.xx - Use STATIC functions whereever possible */
 /* - Clean up deprecated MODULE_PARM calls */
+/* 7.12.05 - Remove Version Matching per IBM request */
 /*****************************************************************************/
 
 /*
@@ -210,7 +211,7 @@ module_param(ips, charp, 0);
 * DRIVER_VER
 */
 #define IPS_VERSION_HIGH        "7.12"
-#define IPS_VERSION_LOW         ".02 "
+#define IPS_VERSION_LOW         ".05 "
 
 #if !defined(__i386__) && !defined(__ia64__) && !defined(__x86_64__)
 #warning "This driver has only been tested on the x86/ia64/x86_64 platforms"
@@ -347,8 +348,6 @@ static int ips_proc_info(struct Scsi_Host *, char *, char **, off_t, int, int);
 static int ips_host_info(ips_ha_t *, char *, off_t, int);
 static void copy_mem_info(IPS_INFOSTR *, char *, int);
 static int copy_info(IPS_INFOSTR *, char *, ...);
-static int ips_get_version_info(ips_ha_t * ha, dma_addr_t, int intr);
-static void ips_version_check(ips_ha_t * ha, int intr);
 static int ips_abort_init(ips_ha_t * ha, int index);
 static int ips_init_phase2(int index);
@@ -406,8 +405,6 @@ static Scsi_Host_Template ips_driver_template = {
 #endif
 };
 
-static IPS_DEFINE_COMPAT_TABLE( Compatable );	/* Version Compatability Table */
-
 /* This table describes all ServeRAID Adapters */
 static struct pci_device_id ips_pci_table[] = {
@@ -5930,7 +5927,7 @@ ips_write_driver_status(ips_ha_t * ha, int intr)
 	strncpy((char *) ha->nvram->bios_high, ha->bios_version, 4);
 	strncpy((char *) ha->nvram->bios_low, ha->bios_version + 4, 4);
 
-	ips_version_check(ha, intr);	/* Check BIOS/FW/Driver Versions */
+	ha->nvram->versioning = 0;	/* Indicate the Driver Does Not Support Versioning */
 
 	/* now update the page */
 	if (!ips_readwrite_page5(ha, TRUE, intr)) {
@@ -6847,135 +6844,6 @@ ips_verify_bios_memio(ips_ha_t * ha, char *buffer, uint32_t buffersize,
 	return (0);
 }
-/*---------------------------------------------------------------------------*/
-/* Routine Name: ips_version_check */
-/* */
-/* Dependencies: */
-/* Assumes that ips_read_adapter_status() is called first filling in */
-/* the data for SubSystem Parameters. */
-/* Called from ips_write_driver_status() so it also assumes NVRAM Page 5 */
-/* Data is available. */
-/* */
-/*---------------------------------------------------------------------------*/
-static void
-ips_version_check(ips_ha_t * ha, int intr)
-{
-	IPS_VERSION_DATA *VersionInfo;
-	uint8_t FirmwareVersion[IPS_COMPAT_ID_LENGTH + 1];
-	uint8_t BiosVersion[IPS_COMPAT_ID_LENGTH + 1];
-	int MatchError;
-	int rc;
-	char BiosString[10];
-	char FirmwareString[10];
-
-	METHOD_TRACE("ips_version_check", 1);
-
-	VersionInfo = ( IPS_VERSION_DATA * ) ha->ioctl_data;
-
-	memset(FirmwareVersion, 0, IPS_COMPAT_ID_LENGTH + 1);
-	memset(BiosVersion, 0, IPS_COMPAT_ID_LENGTH + 1);
-
-	/* Get the Compatible BIOS Version from NVRAM Page 5 */
-	memcpy(BiosVersion, ha->nvram->BiosCompatibilityID,
-	       IPS_COMPAT_ID_LENGTH);
-
-	rc = IPS_FAILURE;
-	if (ha->subsys->param[4] & IPS_GET_VERSION_SUPPORT) {	/* If Versioning is Supported */
-		/* Get the Version Info with a Get Version Command */
-		memset( VersionInfo, 0, sizeof (IPS_VERSION_DATA));
-		rc = ips_get_version_info(ha, ha->ioctl_busaddr, intr);
-		if (rc == IPS_SUCCESS)
-			memcpy(FirmwareVersion, VersionInfo->compatibilityId,
-			       IPS_COMPAT_ID_LENGTH);
-	}
-
-	if (rc != IPS_SUCCESS) {	/* If Data Not Obtainable from a GetVersion Command */
-		/* Get the Firmware Version from Enquiry Data */
-		memcpy(FirmwareVersion, ha->enq->CodeBlkVersion,
-		       IPS_COMPAT_ID_LENGTH);
-	}
-
-	/* printk(KERN_WARNING "Adapter's BIOS Version = %s\n", BiosVersion); */
-	/* printk(KERN_WARNING "BIOS Compatible Version = %s\n", IPS_COMPAT_BIOS); */
-	/* printk(KERN_WARNING "Adapter's Firmware Version = %s\n", FirmwareVersion); */
-	/* printk(KERN_WARNING "Firmware Compatible Version = %s \n", Compatable[ ha->nvram->adapter_type ]); */
-
-	MatchError = 0;
-
-	if (strncmp
-	    (FirmwareVersion, Compatable[ha->nvram->adapter_type],
-	     IPS_COMPAT_ID_LENGTH) != 0)
-		MatchError = 1;
-
-	if (strncmp(BiosVersion, IPS_COMPAT_BIOS, IPS_COMPAT_ID_LENGTH) != 0)
-		MatchError = 1;
-
-	ha->nvram->versioning = 1;	/* Indicate the Driver Supports Versioning */
-
-	if (MatchError) {
-		ha->nvram->version_mismatch = 1;
-		if (ips_cd_boot == 0) {
-			strncpy(&BiosString[0], ha->nvram->bios_high, 4);
-			strncpy(&BiosString[4], ha->nvram->bios_low, 4);
-			BiosString[8] = 0;
-
-			strncpy(&FirmwareString[0], ha->enq->CodeBlkVersion, 8);
-			FirmwareString[8] = 0;
-
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "Warning ! ! ! ServeRAID Version Mismatch\n");
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "Bios = %s, Firmware = %s, Device Driver = %s%s\n",
-				   BiosString, FirmwareString, IPS_VERSION_HIGH,
-				   IPS_VERSION_LOW);
-			IPS_PRINTK(KERN_WARNING, ha->pcidev,
-				   "These levels should match to avoid possible compatibility problems.\n");
-		}
-	} else {
-		ha->nvram->version_mismatch = 0;
-	}
-
-	return;
-}
-
-/*---------------------------------------------------------------------------*/
-/* Routine Name: ips_get_version_info */
-/* */
-/* Routine Description: */
-/* Issue an internal GETVERSION Command */
-/* */
-/* Return Value: */
-/* 0 if Successful, else non-zero */
-/*---------------------------------------------------------------------------*/
-static int
-ips_get_version_info(ips_ha_t * ha, dma_addr_t Buffer, int intr)
-{
-	ips_scb_t *scb;
-	int rc;
-
-	METHOD_TRACE("ips_get_version_info", 1);
-
-	scb = &ha->scbs[ha->max_cmds - 1];
-
-	ips_init_scb(ha, scb);
-
-	scb->timeout = ips_cmd_timeout;
-	scb->cdb[0] = IPS_CMD_GET_VERSION_INFO;
-	scb->cmd.version_info.op_code = IPS_CMD_GET_VERSION_INFO;
-	scb->cmd.version_info.command_id = IPS_COMMAND_ID(ha, scb);
-	scb->cmd.version_info.reserved = 0;
-	scb->cmd.version_info.count = sizeof (IPS_VERSION_DATA);
-	scb->cmd.version_info.reserved2 = 0;
-
-	scb->data_len = sizeof (IPS_VERSION_DATA);
-	scb->data_busaddr = Buffer;
-	scb->cmd.version_info.buffer_addr = Buffer;
-	scb->flags = 0;
-
-	/* issue command */
-	rc = ips_send_wait(ha, scb, ips_cmd_timeout, intr);
-	return (rc);
-}
-
 /****************************************************************************/
 /* */
 /* Routine Name: ips_abort_init */
...
@@ -362,6 +362,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 	adapter_t	*adapter;
 	scb_t	*scb;
 	int	busy=0;
+	unsigned long flags;
 
 	adapter = (adapter_t *)scmd->device->host->hostdata;
@@ -377,6 +378,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 	 * return 0 in that case.
 	 */
+	spin_lock_irqsave(&adapter->lock, flags);
 	scb = mega_build_cmd(adapter, scmd, &busy);
 	if(scb) {
@@ -393,6 +395,7 @@ megaraid_queue(Scsi_Cmnd *scmd, void (*done)(Scsi_Cmnd *))
 		}
 		return 0;
 	}
+	spin_unlock_irqrestore(&adapter->lock, flags);
 	return busy;
 }
@@ -1981,7 +1984,7 @@ megaraid_reset(struct scsi_cmnd *cmd)
 	mc.cmd = MEGA_CLUSTER_CMD;
 	mc.opcode = MEGA_RESET_RESERVATIONS;
 
-	if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) != 0 ) {
+	if( mega_internal_command(adapter, &mc, NULL) != 0 ) {
 		printk(KERN_WARNING
 			"megaraid: reservation reset failed.\n");
 	}
@@ -3011,7 +3014,7 @@ proc_rdrv(adapter_t *adapter, char *page, int start, int end )
 	mc.cmd = FC_NEW_CONFIG;
 	mc.opcode = OP_DCMD_READ_CONFIG;
 
-	if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) ) {
+	if( mega_internal_command(adapter, &mc, NULL) ) {
 		len = sprintf(page, "40LD read config failed.\n");
@@ -3029,11 +3032,11 @@ proc_rdrv(adapter_t *adapter, char *page, int start, int end )
 	else {
 		mc.cmd = NEW_READ_CONFIG_8LD;
 
-		if( mega_internal_command(adapter, LOCK_INT, &mc, NULL) ) {
+		if( mega_internal_command(adapter, &mc, NULL) ) {
 			mc.cmd = READ_CONFIG_8LD;
 
-			if( mega_internal_command(adapter, LOCK_INT, &mc,
+			if( mega_internal_command(adapter, &mc,
 						NULL) ){
 				len = sprintf(page,
@@ -3632,7 +3635,7 @@ megadev_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
 			/*
			 * Issue the command
			 */
-			mega_internal_command(adapter, LOCK_INT, &mc, pthru);
+			mega_internal_command(adapter, &mc, pthru);
 
			rval = mega_n_to_m((void __user *)arg, &mc);
@@ -3715,7 +3718,7 @@ megadev_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
 			/*
			 * Issue the command
			 */
-			mega_internal_command(adapter, LOCK_INT, &mc, NULL);
+			mega_internal_command(adapter, &mc, NULL);
 
			rval = mega_n_to_m((void __user *)arg, &mc);
@@ -4234,7 +4237,7 @@ mega_do_del_logdrv(adapter_t *adapter, int logdrv)
 	mc.opcode = OP_DEL_LOGDRV;
 	mc.subopcode = logdrv;
 
-	rval = mega_internal_command(adapter, LOCK_INT, &mc, NULL);
+	rval = mega_internal_command(adapter, &mc, NULL);
 
 	/* log this event */
 	if(rval) {
@@ -4367,7 +4370,7 @@ mega_adapinq(adapter_t *adapter, dma_addr_t dma_handle)
 	mc.xferaddr = (u32)dma_handle;
 
-	if ( mega_internal_command(adapter, LOCK_INT, &mc, NULL) != 0 ) {
+	if ( mega_internal_command(adapter, &mc, NULL) != 0 ) {
 		return -1;
 	}
@@ -4435,7 +4438,7 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 	mc.cmd = MEGA_MBOXCMD_PASSTHRU;
 	mc.xferaddr = (u32)pthru_dma_handle;
 
-	rval = mega_internal_command(adapter, LOCK_INT, &mc, pthru);
+	rval = mega_internal_command(adapter, &mc, pthru);
 
 	pci_free_consistent(pdev, sizeof(mega_passthru), pthru,
			pthru_dma_handle);
@@ -4449,7 +4452,6 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 /**
  * mega_internal_command()
  * @adapter - pointer to our soft state
- * @ls - the scope of the exclusion lock.
  * @mc - the mailbox command
  * @pthru - Passthru structure for DCDB commands
 *
@@ -4463,8 +4465,7 @@ mega_internal_dev_inquiry(adapter_t *adapter, u8 ch, u8 tgt,
 * Note: parameter 'pthru' is null for non-passthru commands.
 */
 static int
-mega_internal_command(adapter_t *adapter, lockscope_t ls, megacmd_t *mc,
-		mega_passthru *pthru )
+mega_internal_command(adapter_t *adapter, megacmd_t *mc, mega_passthru *pthru)
 {
	Scsi_Cmnd	*scmd;
	struct scsi_device *sdev;
@@ -4508,15 +4509,8 @@ mega_internal_command(adapter_t *adapter, lockscope_t ls, megacmd_t *mc,
 	scb->idx = CMDID_INT_CMDS;
 
-	/*
-	 * Get the lock only if the caller has not acquired it already
-	 */
-	if( ls == LOCK_INT ) spin_lock_irqsave(&adapter->lock, flags);
-
+	spin_lock_irqsave(&adapter->lock, flags);
 	megaraid_queue(scmd, mega_internal_done);
-
-	if( ls == LOCK_INT ) spin_unlock_irqrestore(&adapter->lock, flags);
+	spin_unlock_irqrestore(&adapter->lock, flags);
 
 	wait_for_completion(&adapter->int_waitq);
 
 	rval = scmd->result;
...
@@ -925,13 +925,6 @@ struct mega_hbas {
 #define MEGA_BULK_DATA		0x0001
 #define MEGA_SGLIST		0x0002
 
-/*
- * lockscope definitions, callers can specify the lock scope with this data
- * type. LOCK_INT would mean the caller has not acquired the lock before
- * making the call and LOCK_EXT would mean otherwise.
- */
-typedef enum { LOCK_INT, LOCK_EXT } lockscope_t;
-
 /*
 * Parameters for the io-mapped controllers
 */
@@ -1062,8 +1055,7 @@ static int mega_support_random_del(adapter_t *);
 static int mega_del_logdrv(adapter_t *, int);
 static int mega_do_del_logdrv(adapter_t *, int);
 static void mega_get_max_sgl(adapter_t *);
-static int mega_internal_command(adapter_t *, lockscope_t, megacmd_t *,
-	mega_passthru *);
+static int mega_internal_command(adapter_t *, megacmd_t *, mega_passthru *);
 static void mega_internal_done(Scsi_Cmnd *);
 static int mega_support_cluster(adapter_t *);
 #endif
...
@@ -97,7 +97,6 @@ typedef struct {
 * @param dpc_h		: tasklet handle
 * @param pdev		: pci configuration pointer for kernel
 * @param host		: pointer to host structure of mid-layer
- * @param host_lock	: pointer to appropriate lock
 * @param lock		: synchronization lock for mid-layer and driver
 * @param quiescent	: driver is quiescent for now.
 * @param outstanding_cmds	: number of commands pending in the driver
@@ -152,7 +151,6 @@ typedef struct {
 	struct tasklet_struct	dpc_h;
 	struct pci_dev		*pdev;
 	struct Scsi_Host	*host;
-	spinlock_t		*host_lock;
 	spinlock_t		lock;
 	uint8_t			quiescent;
 	int			outstanding_cmds;
...
@@ -533,8 +533,6 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	// Initialize the synchronization lock for kernel and LLD
 	spin_lock_init(&adapter->lock);
-	adapter->host_lock = &adapter->lock;
 
 	// Initialize the command queues: the list of free SCBs and the list
 	// of pending SCBs.
@@ -715,9 +713,6 @@ megaraid_io_attach(adapter_t *adapter)
 	SCSIHOST2ADAP(host)	= (caddr_t)adapter;
 	adapter->host		= host;
 
-	// export the parameters required by the mid-layer
-	scsi_assign_lock(host, adapter->host_lock);
-
 	host->irq		= adapter->irq;
 	host->unique_id		= adapter->unique_id;
 	host->can_queue		= adapter->max_cmds;
@@ -1560,10 +1555,6 @@ megaraid_queue_command(struct scsi_cmnd *scp, void (* done)(struct scsi_cmnd *))
 	scp->scsi_done	= done;
 	scp->result	= 0;
 
-	assert_spin_locked(adapter->host_lock);
-	spin_unlock(adapter->host_lock);
-
 	/*
	 * Allocate and build a SCB request
	 * if_busy flag will be set if megaraid_mbox_build_cmd() command could
@@ -1573,23 +1564,16 @@ megaraid_queue_command(struct scsi_cmnd *scp, void (* done)(struct scsi_cmnd *))
	 * return 0 in that case, and we would do the callback right away.
	 */
 	if_busy	= 0;
 	scb = megaraid_mbox_build_cmd(adapter, scp, &if_busy);
-	if (scb) {
-		megaraid_mbox_runpendq(adapter, scb);
-	}
-
-	spin_lock(adapter->host_lock);
-
 	if (!scb) {	// command already completed
 		done(scp);
 		return 0;
 	}
 
+	megaraid_mbox_runpendq(adapter, scb);
 	return if_busy;
 }
 
 /**
 * megaraid_mbox_build_cmd - transform the mid-layer scsi command to megaraid
 * firmware lingua
@@ -2546,9 +2530,7 @@ megaraid_mbox_dpc(unsigned long devp)
 		megaraid_dealloc_scb(adapter, scb);
 
 		// send the scsi packet back to kernel
-		spin_lock(adapter->host_lock);
 		scp->scsi_done(scp);
-		spin_unlock(adapter->host_lock);
 	}
 
 	return;
@@ -2563,7 +2545,7 @@ megaraid_mbox_dpc(unsigned long devp)
 * aborted. All the commands issued to the F/W must complete.
 **/
 static int
-__megaraid_abort_handler(struct scsi_cmnd *scp)
+megaraid_abort_handler(struct scsi_cmnd *scp)
 {
 	adapter_t		*adapter;
 	mraid_device_t		*raid_dev;
@@ -2577,8 +2559,6 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	adapter		= SCP2ADAPTER(scp);
 	raid_dev	= ADAP2RAIDDEV(adapter);
 
-	assert_spin_locked(adapter->host_lock);
-
 	con_log(CL_ANN, (KERN_WARNING
		"megaraid: aborting-%ld cmd=%x <c=%d t=%d l=%d>\n",
		scp->serial_number, scp->cmnd[0], SCP2CHANNEL(scp),
@@ -2658,6 +2638,7 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	// traverse through the list of all SCB, since driver does not
 	// maintain these SCBs on any list
 	found = 0;
+	spin_lock_irq(&adapter->lock);
 	for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
 		scb = adapter->kscb_list + i;
@@ -2680,6 +2661,7 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 			}
 		}
 	}
+	spin_unlock_irq(&adapter->lock);
 
 	if (!found) {
 		con_log(CL_ANN, (KERN_WARNING
@@ -2696,22 +2678,6 @@ __megaraid_abort_handler(struct scsi_cmnd *scp)
 	return FAILED;
 }
-static int
-megaraid_abort_handler(struct scsi_cmnd *scp)
-{
-	adapter_t	*adapter;
-	int		rc;
-
-	adapter = SCP2ADAPTER(scp);
-
-	spin_lock_irq(adapter->host_lock);
-	rc = __megaraid_abort_handler(scp);
-	spin_unlock_irq(adapter->host_lock);
-
-	return rc;
-}
-
 /**
 * megaraid_reset_handler - device reset hadler for mailbox based driver
 * @scp		: reference command
@@ -2723,7 +2689,7 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
 * host
 **/
 static int
-__megaraid_reset_handler(struct scsi_cmnd *scp)
+megaraid_reset_handler(struct scsi_cmnd *scp)
 {
 	adapter_t	*adapter;
 	scb_t		*scb;
@@ -2739,10 +2705,6 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 	adapter		= SCP2ADAPTER(scp);
 	raid_dev	= ADAP2RAIDDEV(adapter);
 
-	assert_spin_locked(adapter->host_lock);
-
-	con_log(CL_ANN, (KERN_WARNING "megaraid: reseting the host...\n"));
-
 	// return failure if adapter is not responding
 	if (raid_dev->hw_error) {
 		con_log(CL_ANN, (KERN_NOTICE
@@ -2779,8 +2741,6 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
			adapter->outstanding_cmds, MBOX_RESET_WAIT));
 	}
 
-	spin_unlock(adapter->host_lock);
-
 	recovery_window = MBOX_RESET_WAIT + MBOX_RESET_EXT_WAIT;
 
 	recovering = adapter->outstanding_cmds;
@@ -2806,7 +2766,7 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 		msleep(1000);
 	}
 
-	spin_lock(adapter->host_lock);
+	spin_lock(&adapter->lock);
 
 	// If still outstanding commands, bail out
 	if (adapter->outstanding_cmds) {
@@ -2815,7 +2775,8 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 
 		raid_dev->hw_error = 1;
-		return FAILED;
+		rval = FAILED;
+		goto out;
 	}
 	else {
 		con_log(CL_ANN, (KERN_NOTICE
@@ -2824,7 +2785,10 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
 
 	// If the controller supports clustering, reset reservations
-	if (!adapter->ha) return SUCCESS;
+	if (!adapter->ha) {
+		rval = SUCCESS;
+		goto out;
+	}
 
 	// clear reservations if any
 	raw_mbox[0] = CLUSTER_CMD;
@@ -2841,22 +2805,11 @@ __megaraid_reset_handler(struct scsi_cmnd *scp)
			"megaraid: reservation reset failed\n"));
 	}
 
+ out:
+	spin_unlock_irq(&adapter->lock);
 	return rval;
 }
 
-static int
-megaraid_reset_handler(struct scsi_cmnd *cmd)
-{
-	int	rc;
-
-	spin_lock_irq(cmd->device->host->host_lock);
-	rc = __megaraid_reset_handler(cmd);
-	spin_unlock_irq(cmd->device->host->host_lock);
-
-	return rc;
-}
-
 /*
 * START: internal commands library
 *
@@ -3776,9 +3729,9 @@ wait_till_fw_empty(adapter_t *adapter)
 	/*
	 * Set the quiescent flag to stop issuing cmds to FW.
	 */
-	spin_lock_irqsave(adapter->host_lock, flags);
+	spin_lock_irqsave(&adapter->lock, flags);
 	adapter->quiescent++;
-	spin_unlock_irqrestore(adapter->host_lock, flags);
+	spin_unlock_irqrestore(&adapter->lock, flags);
 
 	/*
	 * Wait till there are no more cmds outstanding at FW. Try for at most
...
@@ -767,17 +767,12 @@ static int megasas_generic_reset(struct scsi_cmnd *scmd)
 		return FAILED;
 	}
 
-	spin_unlock(scmd->device->host->host_lock);
 	ret_val = megasas_wait_for_outstanding(instance);
-
 	if (ret_val == SUCCESS)
 		printk(KERN_NOTICE "megasas: reset successful \n");
 	else
 		printk(KERN_ERR "megasas: failed to do reset\n");
 
-	spin_lock(scmd->device->host->host_lock);
 	return ret_val;
 }
...
@@ -639,10 +639,8 @@ struct qla_boards {
 static struct pci_device_id qla1280_pci_tbl[] = {
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP12160,
		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
-#ifdef CONFIG_SCSI_QLOGIC_1280_1040
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1020,
		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
-#endif
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1080,
		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
 	{PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP1240,
...
/* /*
* RAID Attributes * raid_class.c - implementation of a simple raid visualisation class
*
* Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
*
* This file is licensed under GPLv2
*
* This class is designed to allow raid attributes to be visualised and
* manipulated in a form independent of the underlying raid. Ultimately this
* should work for both hardware and software raids.
*/ */
#include <linux/init.h> #include <linux/init.h>
#include <linux/module.h> #include <linux/module.h>
@@ -24,7 +32,7 @@ struct raid_internal {
 struct raid_component {
 	struct list_head node;
-	struct device *dev;
+	struct class_device cdev;
 	int num;
 };
@@ -74,11 +82,10 @@ static int raid_setup(struct transport_container *tc, struct device *dev,
 	BUG_ON(class_get_devdata(cdev));
 
-	rd = kmalloc(sizeof(*rd), GFP_KERNEL);
+	rd = kzalloc(sizeof(*rd), GFP_KERNEL);
 	if (!rd)
 		return -ENOMEM;
-	memset(rd, 0, sizeof(*rd));
 	INIT_LIST_HEAD(&rd->component_list);
 	class_set_devdata(cdev, rd);
@@ -90,15 +97,15 @@ static int raid_remove(struct transport_container *tc, struct device *dev,
 {
 	struct raid_data *rd = class_get_devdata(cdev);
 	struct raid_component *rc, *next;
+	dev_printk(KERN_ERR, dev, "RAID REMOVE\n");
 	class_set_devdata(cdev, NULL);
 	list_for_each_entry_safe(rc, next, &rd->component_list, node) {
-		char buf[40];
-		snprintf(buf, sizeof(buf), "component-%d", rc->num);
 		list_del(&rc->node);
-		sysfs_remove_link(&cdev->kobj, buf);
-		kfree(rc);
+		dev_printk(KERN_ERR, rc->cdev.dev, "RAID COMPONENT REMOVE\n");
+		class_device_unregister(&rc->cdev);
 	}
-	kfree(class_get_devdata(cdev));
+	dev_printk(KERN_ERR, dev, "RAID REMOVE DONE\n");
+	kfree(rd);
 	return 0;
 }
@@ -112,10 +119,11 @@ static struct {
 	enum raid_state value;
 	char *name;
 } raid_states[] = {
-	{ RAID_ACTIVE, "active" },
-	{ RAID_DEGRADED, "degraded" },
-	{ RAID_RESYNCING, "resyncing" },
-	{ RAID_OFFLINE, "offline" },
+	{ RAID_STATE_UNKNOWN, "unknown" },
+	{ RAID_STATE_ACTIVE, "active" },
+	{ RAID_STATE_DEGRADED, "degraded" },
+	{ RAID_STATE_RESYNCING, "resyncing" },
+	{ RAID_STATE_OFFLINE, "offline" },
 };
 
 static const char *raid_state_name(enum raid_state state)
@@ -132,6 +140,33 @@ static const char *raid_state_name(enum raid_state state)
 	return name;
 }
+static struct {
+	enum raid_level value;
+	char *name;
+} raid_levels[] = {
+	{ RAID_LEVEL_UNKNOWN, "unknown" },
+	{ RAID_LEVEL_LINEAR, "linear" },
+	{ RAID_LEVEL_0, "raid0" },
+	{ RAID_LEVEL_1, "raid1" },
+	{ RAID_LEVEL_3, "raid3" },
+	{ RAID_LEVEL_4, "raid4" },
+	{ RAID_LEVEL_5, "raid5" },
+	{ RAID_LEVEL_6, "raid6" },
+};
+
+static const char *raid_level_name(enum raid_level level)
+{
+	int i;
+	char *name = NULL;
+
+	for (i = 0; i < sizeof(raid_levels)/sizeof(raid_levels[0]); i++) {
+		if (raid_levels[i].value == level) {
+			name = raid_levels[i].name;
+			break;
+		}
+	}
+	return name;
+}
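The `raid_levels[]`/`raid_level_name()` addition above follows the kernel's common value-to-name table idiom, with `sizeof(a)/sizeof(a[0])` written out longhand. As a sketch, essentially the same lookup compiles unchanged in userspace (with `stddef.h` standing in for the kernel headers, and the strings constified):

```c
#include <stddef.h>

enum raid_level {
	RAID_LEVEL_UNKNOWN = 0,
	RAID_LEVEL_LINEAR,
	RAID_LEVEL_0,
	RAID_LEVEL_1,
	RAID_LEVEL_3,
	RAID_LEVEL_4,
	RAID_LEVEL_5,
	RAID_LEVEL_6,
};

static struct {
	enum raid_level value;
	const char *name;
} raid_levels[] = {
	{ RAID_LEVEL_UNKNOWN, "unknown" },
	{ RAID_LEVEL_LINEAR, "linear" },
	{ RAID_LEVEL_0, "raid0" },
	{ RAID_LEVEL_1, "raid1" },
	{ RAID_LEVEL_3, "raid3" },
	{ RAID_LEVEL_4, "raid4" },
	{ RAID_LEVEL_5, "raid5" },
	{ RAID_LEVEL_6, "raid6" },
};

const char *raid_level_name(enum raid_level level)
{
	size_t i;

	/* Scan the value/name table; values not listed fall through to
	 * NULL, mirroring the kernel function's behaviour. */
	for (i = 0; i < sizeof(raid_levels)/sizeof(raid_levels[0]); i++)
		if (raid_levels[i].value == level)
			return raid_levels[i].name;
	return NULL;
}
```

The same pattern drives `raid_states[]`/`raid_state_name()` earlier in the file; keeping an explicit `value` field per entry means the table stays correct even if the enum is later reordered or given gaps.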
 #define raid_attr_show_internal(attr, fmt, var, code)			\
 static ssize_t raid_show_##attr(struct class_device *cdev, char *buf)	\
@@ -161,11 +196,22 @@ static CLASS_DEVICE_ATTR(attr, S_IRUGO, raid_show_##attr, NULL)
 #define raid_attr_ro(attr)	raid_attr_ro_internal(attr, )
 #define raid_attr_ro_fn(attr)	raid_attr_ro_internal(attr, ATTR_CODE(attr))
-#define raid_attr_ro_state(attr)	raid_attr_ro_states(attr, attr, ATTR_CODE(attr))
+#define raid_attr_ro_state(attr)	raid_attr_ro_states(attr, attr, )
+#define raid_attr_ro_state_fn(attr)	raid_attr_ro_states(attr, attr, ATTR_CODE(attr))
 
-raid_attr_ro(level);
+raid_attr_ro_state(level);
 raid_attr_ro_fn(resync);
-raid_attr_ro_state(state);
+raid_attr_ro_state_fn(state);
+static void raid_component_release(struct class_device *cdev)
+{
+	struct raid_component *rc = container_of(cdev, struct raid_component,
+						 cdev);
+	dev_printk(KERN_ERR, rc->cdev.dev, "COMPONENT RELEASE\n");
+	put_device(rc->cdev.dev);
+	kfree(rc);
+}
@@ -175,34 +221,36 @@ void raid_component_add(struct raid_template *r,struct device *raid_dev,
 void raid_component_add(struct raid_template *r,struct device *raid_dev,
 			struct device *component_dev)
 							  raid_dev);
 	struct raid_component *rc;
 	struct raid_data *rd = class_get_devdata(cdev);
-	char buf[40];
 
-	rc = kmalloc(sizeof(*rc), GFP_KERNEL);
+	rc = kzalloc(sizeof(*rc), GFP_KERNEL);
 	if (!rc)
 		return;
 
 	INIT_LIST_HEAD(&rc->node);
-	rc->dev = component_dev;
+	class_device_initialize(&rc->cdev);
+	rc->cdev.release = raid_component_release;
+	rc->cdev.dev = get_device(component_dev);
 	rc->num = rd->component_count++;
 
-	snprintf(buf, sizeof(buf), "component-%d", rc->num);
+	snprintf(rc->cdev.class_id, sizeof(rc->cdev.class_id),
+		 "component-%d", rc->num);
 	list_add_tail(&rc->node, &rd->component_list);
-	sysfs_create_link(&cdev->kobj, &component_dev->kobj, buf);
+	rc->cdev.parent = cdev;
+	rc->cdev.class = &raid_class.class;
+	class_device_add(&rc->cdev);
 }
 EXPORT_SYMBOL(raid_component_add);
 struct raid_template *
 raid_class_attach(struct raid_function_template *ft)
 {
-	struct raid_internal *i = kmalloc(sizeof(struct raid_internal),
+	struct raid_internal *i = kzalloc(sizeof(struct raid_internal),
 					  GFP_KERNEL);
 	int count = 0;
 
 	if (unlikely(!i))
 		return NULL;
 
-	memset(i, 0, sizeof(*i));
 	i->f = ft;
 	i->r.raid_attrs.ac.class = &raid_class.class;
......
@@ -416,44 +416,16 @@ static int scsi_eh_completed_normally(struct scsi_cmnd *scmd)
 	return FAILED;
 }
 
-/**
- * scsi_eh_times_out - timeout function for error handling.
- * @scmd:	Cmd that is timing out.
- *
- * Notes:
- *    During error handling, the kernel thread will be sleeping waiting
- *    for some action to complete on the device.  our only job is to
- *    record that it timed out, and to wake up the thread.
- **/
-static void scsi_eh_times_out(struct scsi_cmnd *scmd)
-{
-	scmd->eh_eflags |= SCSI_EH_REC_TIMEOUT;
-	SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd:%p\n", __FUNCTION__,
-					  scmd));
-	up(scmd->device->host->eh_action);
-}
 /**
  * scsi_eh_done - Completion function for error handling.
  * @scmd:	Cmd that is done.
  **/
 static void scsi_eh_done(struct scsi_cmnd *scmd)
 {
-	/*
-	 * if the timeout handler is already running, then just set the
-	 * flag which says we finished late, and return.  we have no
-	 * way of stopping the timeout handler from running, so we must
-	 * always defer to it.
-	 */
-	if (del_timer(&scmd->eh_timeout)) {
-		scmd->request->rq_status = RQ_SCSI_DONE;
-		SCSI_LOG_ERROR_RECOVERY(3, printk("%s scmd: %p result: %x\n",
-					   __FUNCTION__, scmd, scmd->result));
-		up(scmd->device->host->eh_action);
-	}
+	SCSI_LOG_ERROR_RECOVERY(3,
+		printk("%s scmd: %p result: %x\n",
+			__FUNCTION__, scmd, scmd->result));
+	complete(scmd->device->host->eh_action);
 }
 /**
@@ -461,10 +433,6 @@ static void scsi_eh_done(struct scsi_cmnd *scmd)
  * @scmd:	SCSI Cmd to send.
  * @timeout:	Timeout for cmd.
  *
- * Notes:
- *    The initialization of the structures is quite a bit different in
- *    this case, and furthermore, there is a different completion handler
- *    vs scsi_dispatch_cmd.
  * Return value:
  *    SUCCESS or FAILED or NEEDS_RETRY
  **/
@@ -472,24 +440,16 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 {
 	struct scsi_device *sdev = scmd->device;
 	struct Scsi_Host *shost = sdev->host;
-	DECLARE_MUTEX_LOCKED(sem);
+	DECLARE_COMPLETION(done);
+	unsigned long timeleft;
 	unsigned long flags;
-	int rtn = SUCCESS;
+	int rtn;
 
-	/*
-	 * we will use a queued command if possible, otherwise we will
-	 * emulate the queuing and calling of completion function ourselves.
-	 */
 	if (sdev->scsi_level <= SCSI_2)
 		scmd->cmnd[1] = (scmd->cmnd[1] & 0x1f) |
 			(sdev->lun << 5 & 0xe0);
 
-	scsi_add_timer(scmd, timeout, scsi_eh_times_out);
-
-	/*
-	 * set up the semaphore so we wait for the command to complete.
-	 */
-	shost->eh_action = &sem;
+	shost->eh_action = &done;
 	scmd->request->rq_status = RQ_SCSI_BUSY;
 
 	spin_lock_irqsave(shost->host_lock, flags);
@@ -497,47 +457,29 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 	shost->hostt->queuecommand(scmd, scsi_eh_done);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
-	down(&sem);
-	scsi_log_completion(scmd, SUCCESS);
+	timeleft = wait_for_completion_timeout(&done, timeout);
 
-	scmd->request->rq_status = RQ_SCSI_DONE;
 	shost->eh_action = NULL;
+	scsi_log_completion(scmd, SUCCESS);
 
-	/*
-	 * see if timeout.  if so, tell the host to forget about it.
-	 * in other words, we don't want a callback any more.
-	 */
-	if (scmd->eh_eflags & SCSI_EH_REC_TIMEOUT) {
-		scmd->eh_eflags &= ~SCSI_EH_REC_TIMEOUT;
-
-		/*
-		 * as far as the low level driver is
-		 * concerned, this command is still active, so
-		 * we must give the low level driver a chance
-		 * to abort it. (db)
-		 *
-		 * FIXME(eric) - we are not tracking whether we could
-		 * abort a timed out command or not.  not sure how
-		 * we should treat them differently anyways.
-		 */
-		if (shost->hostt->eh_abort_handler)
-			shost->hostt->eh_abort_handler(scmd);
-
-		scmd->request->rq_status = RQ_SCSI_DONE;
-		rtn = FAILED;
-	}
-
-	SCSI_LOG_ERROR_RECOVERY(3, printk("%s: scmd: %p, rtn:%x\n",
-					  __FUNCTION__, scmd, rtn));
+	SCSI_LOG_ERROR_RECOVERY(3,
+		printk("%s: scmd: %p, timeleft: %ld\n",
+			__FUNCTION__, scmd, timeleft));
 
 	/*
-	 * now examine the actual status codes to see whether the command
-	 * actually did complete normally.
+	 * If there is time left scsi_eh_done got called, and we will
+	 * examine the actual status codes to see whether the command
+	 * actually did complete normally, else tell the host to forget
+	 * about this command.
 	 */
-	if (rtn == SUCCESS) {
+	if (timeleft) {
 		rtn = scsi_eh_completed_normally(scmd);
 		SCSI_LOG_ERROR_RECOVERY(3,
 			printk("%s: scsi_eh_completed_normally %x\n",
 			       __FUNCTION__, rtn));
 
 		switch (rtn) {
 		case SUCCESS:
 		case NEEDS_RETRY:
@@ -547,6 +489,15 @@ static int scsi_send_eh_cmnd(struct scsi_cmnd *scmd, int timeout)
 			rtn = FAILED;
 			break;
 		}
+	} else {
+		/*
+		 * FIXME(eric) - we are not tracking whether we could
+		 * abort a timed out command or not.  not sure how
+		 * we should treat them differently anyways.
+		 */
+		if (shost->hostt->eh_abort_handler)
+			shost->hostt->eh_abort_handler(scmd);
+		rtn = FAILED;
 	}
 
 	return rtn;
@@ -1571,50 +1522,41 @@ static void scsi_unjam_host(struct Scsi_Host *shost)
 }
 
 /**
- * scsi_error_handler - Handle errors/timeouts of SCSI cmds.
+ * scsi_error_handler - SCSI error handler thread
  * @data:	Host for which we are running.
  *
  * Notes:
- *    This is always run in the context of a kernel thread.  The idea is
- *    that we start this thing up when the kernel starts up (one per host
- *    that we detect), and it immediately goes to sleep and waits for some
- *    event (i.e. failure).  When this takes place, we have the job of
- *    trying to unjam the bus and restarting things.
+ *    This is the main error handling loop.  This is run as a kernel thread
+ *    for every SCSI host and handles all error handling activity.
 **/
 int scsi_error_handler(void *data)
 {
-	struct Scsi_Host *shost = (struct Scsi_Host *) data;
-	int rtn;
+	struct Scsi_Host *shost = data;
 
 	current->flags |= PF_NOFREEZE;
 
 	/*
-	 * Note - we always use TASK_INTERRUPTIBLE even if the module
-	 * was loaded as part of the kernel.  The reason is that
-	 * UNINTERRUPTIBLE would cause this thread to be counted in
-	 * the load average as a running process, and an interruptible
-	 * wait doesn't.
+	 * We use TASK_INTERRUPTIBLE so that the thread is not
+	 * counted against the load average as a running process.
+	 * We never actually get interrupted because kthread_run
+	 * disables signal delivery for the created thread.
	 */
 	set_current_state(TASK_INTERRUPTIBLE);
 	while (!kthread_should_stop()) {
 		if (shost->host_failed == 0 ||
 		    shost->host_failed != shost->host_busy) {
-			SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler"
-							  " scsi_eh_%d"
-							  " sleeping\n",
-							  shost->host_no));
+			SCSI_LOG_ERROR_RECOVERY(1,
				printk("Error handler scsi_eh_%d sleeping\n",
					shost->host_no));
 			schedule();
 			set_current_state(TASK_INTERRUPTIBLE);
 			continue;
 		}
 
 		__set_current_state(TASK_RUNNING);
-		SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler"
-						  " scsi_eh_%d waking"
-						  " up\n",shost->host_no));
-
-		shost->eh_active = 1;
+		SCSI_LOG_ERROR_RECOVERY(1,
			printk("Error handler scsi_eh_%d waking up\n",
				shost->host_no));
 
 		/*
 		 * We have a host that is failing for some reason.  Figure out
@@ -1622,12 +1564,10 @@ int scsi_error_handler(void *data)
 		 * If we fail, we end up taking the thing offline.
 		 */
 		if (shost->hostt->eh_strategy_handler)
-			rtn = shost->hostt->eh_strategy_handler(shost);
+			shost->hostt->eh_strategy_handler(shost);
 		else
 			scsi_unjam_host(shost);
 
-		shost->eh_active = 0;
-
 		/*
 		 * Note - if the above fails completely, the action is to take
 		 * individual devices offline and flush the queue of any
@@ -1638,15 +1578,10 @@ int scsi_error_handler(void *data)
 		scsi_restart_operations(shost);
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 
 	__set_current_state(TASK_RUNNING);
 
-	SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler scsi_eh_%d"
-					  " exiting\n",shost->host_no));
-
-	/*
-	 * Make sure that nobody tries to wake us up again.
-	 */
+	SCSI_LOG_ERROR_RECOVERY(1,
		printk("Error handler scsi_eh_%d exiting\n", shost->host_no));
 	shost->ehandler = NULL;
 	return 0;
 }
......
@@ -254,55 +254,6 @@ void scsi_do_req(struct scsi_request *sreq, const void *cmnd,
 }
 EXPORT_SYMBOL(scsi_do_req);
 
-/* This is the end routine we get to if a command was never attached
- * to the request.  Simply complete the request without changing
- * rq_status; this will cause a DRIVER_ERROR. */
-static void scsi_wait_req_end_io(struct request *req)
-{
-	BUG_ON(!req->waiting);
-
-	complete(req->waiting);
-}
-
-void scsi_wait_req(struct scsi_request *sreq, const void *cmnd, void *buffer,
-		   unsigned bufflen, int timeout, int retries)
-{
-	DECLARE_COMPLETION(wait);
-	int write = (sreq->sr_data_direction == DMA_TO_DEVICE);
-	struct request *req;
-
-	req = blk_get_request(sreq->sr_device->request_queue, write,
-			      __GFP_WAIT);
-	if (bufflen && blk_rq_map_kern(sreq->sr_device->request_queue, req,
-				       buffer, bufflen, __GFP_WAIT)) {
-		sreq->sr_result = DRIVER_ERROR << 24;
-		blk_put_request(req);
-		return;
-	}
-
-	req->flags |= REQ_NOMERGE;
-	req->waiting = &wait;
-	req->end_io = scsi_wait_req_end_io;
-	req->cmd_len = COMMAND_SIZE(((u8 *)cmnd)[0]);
-	req->sense = sreq->sr_sense_buffer;
-	req->sense_len = 0;
-	memcpy(req->cmd, cmnd, req->cmd_len);
-	req->timeout = timeout;
-	req->flags |= REQ_BLOCK_PC;
-	req->rq_disk = NULL;
-	blk_insert_request(sreq->sr_device->request_queue, req,
-			   sreq->sr_data_direction == DMA_TO_DEVICE, NULL);
-	wait_for_completion(&wait);
-	sreq->sr_request->waiting = NULL;
-	sreq->sr_result = req->errors;
-	if (req->errors)
-		sreq->sr_result |= (DRIVER_ERROR << 24);
-
-	blk_put_request(req);
-}
-EXPORT_SYMBOL(scsi_wait_req);
-
 /**
  * scsi_execute - insert request and wait for the result
  * @sdev:	scsi device
......
@@ -22,7 +22,6 @@ struct Scsi_Host;
  * Scsi Error Handler Flags
  */
 #define SCSI_EH_CANCEL_CMD	0x0001	/* Cancel this cmd */
-#define SCSI_EH_REC_TIMEOUT	0x0002	/* EH retry timed out */
 
 #define SCSI_SENSE_VALID(scmd) \
 	(((scmd)->sense_buffer[0] & 0x70) == 0x70)
......
@@ -691,16 +691,19 @@ int scsi_sysfs_add_sdev(struct scsi_device *sdev)
 void __scsi_remove_device(struct scsi_device *sdev)
 {
+	struct device *dev = &sdev->sdev_gendev;
+
 	if (scsi_device_set_state(sdev, SDEV_CANCEL) != 0)
 		return;
 
 	class_device_unregister(&sdev->sdev_classdev);
-	device_del(&sdev->sdev_gendev);
+	transport_remove_device(dev);
+	device_del(dev);
 	scsi_device_set_state(sdev, SDEV_DEL);
 	if (sdev->host->hostt->slave_destroy)
 		sdev->host->hostt->slave_destroy(sdev);
-	transport_unregister_device(&sdev->sdev_gendev);
-	put_device(&sdev->sdev_gendev);
+	transport_destroy_device(dev);
+	put_device(dev);
 }
 
 /**
......
@@ -441,6 +441,7 @@
 #define PCI_DEVICE_ID_IBM_SNIPE		0x0180
 #define PCI_DEVICE_ID_IBM_CITRINE	0x028C
 #define PCI_DEVICE_ID_IBM_GEMSTONE	0xB166
+#define PCI_DEVICE_ID_IBM_OBSIDIAN	0x02BD
 #define PCI_DEVICE_ID_IBM_ICOM_DEV_ID_1	0x0031
 #define PCI_DEVICE_ID_IBM_ICOM_DEV_ID_2	0x0219
 #define PCI_DEVICE_ID_IBM_ICOM_V2_TWO_PORTS_RVX	0x021A
@@ -2144,6 +2145,7 @@
 #define PCI_DEVICE_ID_ADAPTEC2_7899B	0x00c1
 #define PCI_DEVICE_ID_ADAPTEC2_7899D	0x00c3
 #define PCI_DEVICE_ID_ADAPTEC2_7899P	0x00cf
+#define PCI_DEVICE_ID_ADAPTEC2_OBSIDIAN	0x0500
 #define PCI_DEVICE_ID_ADAPTEC2_SCAMP	0x0503
......
 /*
+ * raid_class.h - a generic raid visualisation class
+ *
+ * Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
+ *
+ * This file is licensed under GPLv2
 */
 #include <linux/transport_class.h>
@@ -14,20 +19,35 @@ struct raid_function_template {
 };
 
 enum raid_state {
-	RAID_ACTIVE = 1,
-	RAID_DEGRADED,
-	RAID_RESYNCING,
-	RAID_OFFLINE,
+	RAID_STATE_UNKNOWN = 0,
+	RAID_STATE_ACTIVE,
+	RAID_STATE_DEGRADED,
+	RAID_STATE_RESYNCING,
+	RAID_STATE_OFFLINE,
+};
+
+enum raid_level {
+	RAID_LEVEL_UNKNOWN = 0,
+	RAID_LEVEL_LINEAR,
+	RAID_LEVEL_0,
+	RAID_LEVEL_1,
+	RAID_LEVEL_3,
+	RAID_LEVEL_4,
+	RAID_LEVEL_5,
+	RAID_LEVEL_6,
 };
 
 struct raid_data {
 	struct list_head component_list;
 	int component_count;
-	int level;
+	enum raid_level level;
 	enum raid_state state;
 	int resync;
 };
 
+/* resync complete goes from 0 to this */
+#define RAID_MAX_RESYNC		(10000)
+
 #define DEFINE_RAID_ATTRIBUTE(type, attr)				\
 static inline void							\
 raid_set_##attr(struct raid_template *r, struct device *dev, type value) { \
@@ -48,7 +68,7 @@ raid_get_##attr(struct raid_template *r, struct device *dev) { \
 	return rd->attr;						\
 }
 
-DEFINE_RAID_ATTRIBUTE(int, level)
+DEFINE_RAID_ATTRIBUTE(enum raid_level, level)
 DEFINE_RAID_ATTRIBUTE(int, resync)
 DEFINE_RAID_ATTRIBUTE(enum raid_state, state)
......
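The `DEFINE_RAID_ATTRIBUTE(type, attr)` macro in raid_class.h stamps out a typed getter/setter pair per field, which is why switching `level` from `int` to `enum raid_level` in this patch only touches one macro invocation. A stripped-down sketch of the same idea (the kernel version additionally looks `rd` up via the class device; here a hypothetical direct-pointer variant keeps it self-contained):

```c
#include <stddef.h>

enum raid_level {
	RAID_LEVEL_UNKNOWN = 0,
	RAID_LEVEL_LINEAR,
	RAID_LEVEL_0,
	RAID_LEVEL_1,
};

struct raid_data {
	enum raid_level level;
	int resync;
};

/* Generate raid_set_<attr>() / raid_get_<attr>() for one field.
 * Changing a field's type later means editing only the macro call. */
#define DEFINE_RAID_ATTRIBUTE(type, attr)				\
static inline void raid_set_##attr(struct raid_data *rd, type value)	\
{									\
	rd->attr = value;						\
}									\
static inline type raid_get_##attr(struct raid_data *rd)		\
{									\
	return rd->attr;						\
}

DEFINE_RAID_ATTRIBUTE(enum raid_level, level)
DEFINE_RAID_ATTRIBUTE(int, resync)
```

Because the accessors carry the field's real type, a caller passing the wrong enum gets a compiler diagnostic instead of silently storing a bare `int`, which is the point of the `int` → `enum raid_level` change above.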
@@ -7,6 +7,7 @@
 #include <linux/workqueue.h>
 
 struct block_device;
+struct completion;
 struct module;
 struct scsi_cmnd;
 struct scsi_device;
@@ -467,10 +468,8 @@ struct Scsi_Host {
 	struct list_head	eh_cmd_q;
 	struct task_struct    * ehandler;	/* Error recovery thread. */
-	struct semaphore      * eh_action;	/* Wait for specific actions on the
-						   host. */
-	unsigned int            eh_active:1;	/* Indicates the eh thread is awake and active if
-						   this is true. */
+	struct completion     * eh_action;	/* Wait for specific actions on the
+						   host. */
 	wait_queue_head_t       host_wait;
 	struct scsi_host_template *hostt;
 	struct scsi_transport_template *transportt;
......
@@ -47,9 +47,6 @@ struct scsi_request {
 
 extern struct scsi_request *scsi_allocate_request(struct scsi_device *, gfp_t);
 extern void scsi_release_request(struct scsi_request *);
-extern void scsi_wait_req(struct scsi_request *, const void *cmnd,
-			  void *buffer, unsigned bufflen,
-			  int timeout, int retries);
 extern void scsi_do_req(struct scsi_request *, const void *cmnd,
 			void *buffer, unsigned bufflen,
 			void (*done) (struct scsi_cmnd *),
......