Commit 5fadd053 authored by Linus Torvalds

Merge branch 'upstream' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev

parents e5dfa928 26ba2a7a
@@ -415,6 +415,362 @@ and other resources, etc.
</sect1>
</chapter>
<chapter id="libataEH">
<title>Error handling</title>
<para>
This chapter describes how errors are handled under libata.
Readers are advised to read SCSI EH
(Documentation/scsi/scsi_eh.txt) and ATA exceptions doc first.
</para>
<sect1><title>Origins of commands</title>
<para>
In libata, a command is represented with struct ata_queued_cmd
or qc. qc's are preallocated during port initialization and
used repeatedly for command execution. Currently only one
qc is allocated per port, but the yet-to-be-merged NCQ branch
allocates one for each tag and maps each qc to NCQ tag 1-to-1.
</para>
<para>
libata commands can originate from two sources - libata itself
and SCSI midlayer. libata internal commands are used for
initialization and error handling. All normal blk requests
and commands for SCSI emulation are passed as SCSI commands
through queuecommand callback of SCSI host template.
</para>
</sect1>
<sect1><title>How commands are issued</title>
<variablelist>
<varlistentry><term>Internal commands</term>
<listitem>
<para>
First, qc is allocated and initialized using
ata_qc_new_init(). Although ata_qc_new_init() doesn't
implement any wait or retry mechanism when qc is not
available, internal commands are currently issued only during
initialization and error recovery, so no other command is
active and allocation is guaranteed to succeed.
</para>
<para>
Once allocated, the qc's taskfile is initialized for the command to
be executed. qc currently has two mechanisms to notify
completion. One is via qc->complete_fn() callback and the
other is completion qc->waiting. qc->complete_fn() callback
is the asynchronous path used by normal SCSI translated
commands and qc->waiting is the synchronous (issuer sleeps in
process context) path used by internal commands.
</para>
<para>
Once initialization is complete, host_set lock is acquired
and the qc is issued.
</para>
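<para>
The synchronous internal-command path can be sketched as follows.
This is an illustration of the flow described above, not the
verbatim libata source; error handling is omitted and the
completion callback name may differ between versions.
</para>
<programlisting>
/* sketch: issue an internal command and sleep until it completes */
static void issue_internal_sketch(struct ata_port *ap, struct ata_taskfile *tf)
{
	struct ata_queued_cmd *qc;
	unsigned long flags;
	DECLARE_COMPLETION(wait);

	/* no wait/retry: only init and EH issue internal commands,
	 * so no other command is active and this cannot fail */
	qc = ata_qc_new_init(ap, ap->device);

	qc->tf = *tf;			/* taskfile to execute */
	qc->waiting = &amp;wait;		/* synchronous notification */
	qc->complete_fn = ata_qc_complete_noop;	/* no-op callback */

	spin_lock_irqsave(&amp;ap->host_set->lock, flags);
	ata_qc_issue(qc);
	spin_unlock_irqrestore(&amp;ap->host_set->lock, flags);

	wait_for_completion(&amp;wait);	/* issuer sleeps in process context */
}
</programlisting>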
</listitem>
</varlistentry>
<varlistentry><term>SCSI commands</term>
<listitem>
<para>
All libata drivers use ata_scsi_queuecmd() as
hostt->queuecommand callback. scmds can either be simulated
or translated. No qc is involved in processing a simulated
scmd. The result is computed right away and the scmd is
completed.
</para>
<para>
For a translated scmd, ata_qc_new_init() is invoked to
allocate a qc and the scmd is translated into the qc. SCSI
midlayer's completion notification function pointer is stored
into qc->scsidone.
</para>
<para>
qc->complete_fn() callback is used for completion
notification. ATA commands use ata_scsi_qc_complete() while
ATAPI commands use atapi_qc_complete(). Both functions end up
calling qc->scsidone to notify upper layer when the qc is
finished. After translation is completed, the qc is issued
with ata_qc_issue().
</para>
<para>
Note that the SCSI midlayer invokes hostt->queuecommand while
holding the host_set lock, so all of the above occurs while
holding the host_set lock.
</para>
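<para>
A condensed sketch of the translated path (not the verbatim
source; the translation step is folded into a single hypothetical
xlat_func() and error handling is omitted):
</para>
<programlisting>
/* sketch of the translated-scmd path in ata_scsi_queuecmd() */
static void translate_scmd_sketch(struct ata_port *ap, struct ata_device *dev,
				  struct scsi_cmnd *cmd,
				  void (*done)(struct scsi_cmnd *))
{
	struct ata_queued_cmd *qc;

	qc = ata_qc_new_init(ap, dev);	/* allocate a qc for this scmd */
	qc->scsicmd = cmd;
	qc->scsidone = done;		/* midlayer completion hook */

	xlat_func(qc);			/* build qc->tf from the SCSI CDB;
					 * also selects ata_scsi_qc_complete()
					 * or atapi_qc_complete() as
					 * qc->complete_fn */

	ata_qc_issue(qc);		/* host_set lock is already held */
}
</programlisting>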
</listitem>
</varlistentry>
</variablelist>
</sect1>
<sect1><title>How commands are processed</title>
<para>
Depending on which protocol and which controller are used,
commands are processed differently. For the purpose of
discussion, a controller which uses taskfile interface and all
standard callbacks is assumed.
</para>
<para>
Currently 6 ATA command protocols are used. They can be
sorted into the following four categories according to how
they are processed; a dispatch sketch in code follows the list.
</para>
<variablelist>
<varlistentry><term>ATA NO DATA or DMA</term>
<listitem>
<para>
ATA_PROT_NODATA and ATA_PROT_DMA fall into this category.
These types of commands don't require any software
intervention once issued. Device will raise interrupt on
completion.
</para>
</listitem>
</varlistentry>
<varlistentry><term>ATA PIO</term>
<listitem>
<para>
ATA_PROT_PIO is in this category. libata currently
implements PIO with polling. The ATA_NIEN bit is set to turn
off the interrupt, and pio_task on ata_wq performs polling and
IO.
</para>
</listitem>
</varlistentry>
<varlistentry><term>ATAPI NODATA or DMA</term>
<listitem>
<para>
ATA_PROT_ATAPI_NODATA and ATA_PROT_ATAPI_DMA are in this
category. packet_task is used to poll BSY bit after
issuing PACKET command. Once BSY is turned off by the
device, packet_task transfers CDB and hands off processing
to interrupt handler.
</para>
</listitem>
</varlistentry>
<varlistentry><term>ATAPI PIO</term>
<listitem>
<para>
ATA_PROT_ATAPI is in this category. ATA_NIEN bit is set
and, as in ATAPI NODATA or DMA, packet_task submits the CDB.
However, after submitting the CDB, further processing (data
transfer) is handed off to pio_task.
</para>
</listitem>
</varlistentry>
</variablelist>
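<para>
The four categories above roughly correspond to a dispatch on
qc->tf.protocol at issue time. The following is an illustration
of that mapping only, not the actual dispatch code, which lives
in ata_qc_issue() and the standard callbacks:
</para>
<programlisting>
/* illustrative sketch: who drives a command after it is issued */
switch (qc->tf.protocol) {
case ATA_PROT_NODATA:
case ATA_PROT_DMA:
	/* no software intervention; the device raises an interrupt
	 * on completion and ata_host_intr() finishes the qc */
	break;
case ATA_PROT_PIO:
	/* polling: ATA_NIEN masks the interrupt, pio_task on
	 * ata_wq performs the polling and IO */
	break;
case ATA_PROT_ATAPI_NODATA:
case ATA_PROT_ATAPI_DMA:
	/* packet_task polls BSY, transfers the CDB, then hands
	 * processing off to the interrupt handler */
	break;
case ATA_PROT_ATAPI:
	/* packet_task transfers the CDB, then pio_task performs
	 * the polled data transfer */
	break;
}
</programlisting>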
</sect1>
<sect1><title>How commands are completed</title>
<para>
Once issued, all qc's are either completed with
ata_qc_complete() or time out. For commands which are handled
by interrupts, ata_host_intr() invokes ata_qc_complete(), and,
for PIO tasks, pio_task invokes ata_qc_complete(). In error
cases, packet_task may also complete commands.
</para>
<para>
ata_qc_complete() does the following (sketched in code after the list).
</para>
<orderedlist>
<listitem>
<para>
DMA memory is unmapped.
</para>
</listitem>
<listitem>
<para>
ATA_QCFLAG_ACTIVE is cleared from qc->flags.
</para>
</listitem>
<listitem>
<para>
qc->complete_fn() callback is invoked. If the return value of
the callback is not zero, completion is short-circuited and
ata_qc_complete() returns.
</para>
</listitem>
<listitem>
<para>
__ata_qc_complete() is called, which does
<orderedlist>
<listitem>
<para>
qc->flags is cleared to zero.
</para>
</listitem>
<listitem>
<para>
ap->active_tag and qc->tag are poisoned.
</para>
</listitem>
<listitem>
<para>
qc->waiting is cleared &amp; completed (in that order).
</para>
</listitem>
<listitem>
<para>
qc is deallocated by clearing appropriate bit in ap->qactive.
</para>
</listitem>
</orderedlist>
</para>
</listitem>
</orderedlist>
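<para>
Put together, the completion path from the steps above looks
roughly like this (simplified; the DMA unmap helper name is an
assumption and details differ between versions):
</para>
<programlisting>
/* rough sketch of ata_qc_complete(); not the verbatim source */
void ata_qc_complete_sketch(struct ata_queued_cmd *qc, u8 drv_stat)
{
	int rc;

	ata_sg_clean(qc);			/* 1. unmap DMA memory */
	qc->flags &amp;= ~ATA_QCFLAG_ACTIVE;	/* 2. clear ACTIVE */

	rc = qc->complete_fn(qc, drv_stat);	/* 3. notify */
	if (rc != 0)		/* short circuit, used by */
		return;		/* atapi_qc_complete()    */

	__ata_qc_complete(qc);	/* 4. zero qc->flags, poison tags,
				 * complete qc->waiting and free the
				 * qc by clearing its ap->qactive bit */
}
</programlisting>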
<para>
So, it basically notifies the upper layer and deallocates the qc. One
exception is the short-circuit path in step 3, which is used by
atapi_qc_complete().
</para>
<para>
For all non-ATAPI commands, whether they fail or not, almost
the same code path is taken and very little error handling
takes place. A qc is completed with success status if it
succeeded, with failed status otherwise.
</para>
<para>
However, failed ATAPI commands require more handling as
REQUEST SENSE is needed to acquire sense data. If an ATAPI
command fails, ata_qc_complete() is invoked with error status,
which in turn invokes atapi_qc_complete() via
qc->complete_fn() callback.
</para>
<para>
This makes atapi_qc_complete() set scmd->result to
SAM_STAT_CHECK_CONDITION, complete the scmd and return 1. As
the sense data is empty but scmd->result is CHECK CONDITION,
SCSI midlayer will invoke EH for the scmd, and returning 1
makes ata_qc_complete() return without deallocating the qc.
This leads us to ata_scsi_error() with a partially completed qc.
</para>
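<para>
A sketch of that error path in atapi_qc_complete() (condensed
from the behavior described above; the status test is
representative, not the exact source):
</para>
<programlisting>
/* sketch of atapi_qc_complete()'s error path */
static int atapi_qc_complete_sketch(struct ata_queued_cmd *qc, u8 drv_stat)
{
	struct scsi_cmnd *cmd = qc->scsicmd;

	if (drv_stat &amp; ATA_ERR) {
		/* sense data is still empty, but CHECK CONDITION
		 * makes the SCSI midlayer invoke EH for this scmd */
		cmd->result = SAM_STAT_CHECK_CONDITION;
		qc->scsidone(cmd);

		/* non-zero return short-circuits ata_qc_complete():
		 * the qc is NOT deallocated and survives into
		 * ata_scsi_error() as a place holder */
		return 1;
	}

	/* ... normal completion path omitted ... */
	return 0;
}
</programlisting>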
</sect1>
<sect1><title>ata_scsi_error()</title>
<para>
ata_scsi_error() is the current hostt->eh_strategy_handler()
for libata. As discussed above, this will be entered in two
cases - timeout and ATAPI error completion. This function
calls low level libata driver's eng_timeout() callback, the
standard callback for which is ata_eng_timeout(). It checks
if a qc is active and calls ata_qc_timeout() on the qc if so.
Actual error handling occurs in ata_qc_timeout().
</para>
<para>
If EH is invoked for timeout, ata_qc_timeout() stops BMDMA and
completes the qc. Note that as we're currently in EH, we
cannot call scsi_done. As described in SCSI EH doc, a
recovered scmd should be either retried with
scsi_queue_insert() or finished with scsi_finish_command().
Here, we override qc->scsidone with scsi_finish_command() and
call ata_qc_complete().
</para>
<para>
If EH is invoked due to a failed ATAPI qc, the qc here is
completed but not deallocated. The purpose of this
half-completion is to use the qc as a placeholder to make EH
code reach this place. This is a bit hackish, but it works.
</para>
<para>
Once control reaches here, the qc is deallocated by invoking
__ata_qc_complete() explicitly. Then, internal qc for REQUEST
SENSE is issued. Once sense data is acquired, scmd is
finished by directly invoking scsi_finish_command() on the
scmd. Note that as we have already completed and deallocated
the qc which was associated with the scmd, we neither need
to nor can call ata_qc_complete() again.
</para>
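<para>
Condensed into code, the two EH cases described above look like
the following sketch. The branch condition and the REQUEST SENSE
helper are illustrative assumptions; the real code distinguishes
the cases differently.
</para>
<programlisting>
/* sketch of the two EH cases reached via ata_qc_timeout() */
static void eh_sketch(struct ata_queued_cmd *qc)
{
	struct scsi_cmnd *cmd = qc->scsicmd;

	if (qc->tf.protocol == ATA_PROT_ATAPI) {
		/* failed ATAPI qc: half-completed earlier and kept
		 * as a place holder - deallocate it explicitly */
		__ata_qc_complete(qc);

		/* internal REQUEST SENSE, then finish the scmd
		 * directly; its qc is gone, so ata_qc_complete()
		 * must not (and cannot) be called again */
		atapi_request_sense(qc->ap, qc->dev, cmd);
		scsi_finish_command(cmd);
	} else {
		/* timeout: stop BMDMA and complete the qc.  scsi_done
		 * cannot be called from EH, so completion is routed
		 * through scsi_finish_command() instead */
		qc->scsidone = scsi_finish_command;
		ata_qc_complete(qc, ata_chk_status(qc->ap));
	}
}
</programlisting>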
</sect1>
<sect1><title>Problems with the current EH</title>
<itemizedlist>
<listitem>
<para>
Error representation is too crude. Currently any and all
error conditions are represented with the ATA STATUS and ERROR
registers. Errors which aren't ATA device errors are treated
as ATA device errors by setting the ATA_ERR bit. A better
error descriptor which can properly represent ATA and other
errors/exceptions is needed.
</para>
</listitem>
<listitem>
<para>
When handling timeouts, no action is taken to make the device
forget about the timed-out command and become ready for new commands.
</para>
</listitem>
<listitem>
<para>
EH handling via ata_scsi_error() is not properly protected
from usual command processing. On EH entrance, the device is
not in a quiescent state. Timed-out commands may succeed or
fail at any time. pio_task and atapi_task may still be running.
</para>
</listitem>
<listitem>
<para>
Error recovery is too weak. Devices / controllers causing HSM
mismatch errors and other errors quite often require a reset to
return to a known state. Also, advanced error handling is
necessary to support features like NCQ and hotplug.
</para>
</listitem>
<listitem>
<para>
ATA errors are directly handled in the interrupt handler and
PIO errors in pio_task. This is problematic for advanced
error handling for the following reasons.
</para>
<para>
First, advanced error handling often requires context and
internal qc execution.
</para>
<para>
Second, even a simple failure (say, CRC error) needs
information gathering and could trigger complex error handling
(say, resetting &amp; reconfiguring). Having multiple code
paths to gather information, enter EH and trigger actions
makes life painful.
</para>
<para>
Third, scattered EH code makes implementing low level drivers
difficult. Low level drivers override libata callbacks. If
EH is scattered over several places, each affected callback
must perform its part of error handling. This can be error
prone and painful.
</para>
</listitem>
</itemizedlist>
</sect1>
</chapter>
<chapter id="libataExt"> <chapter id="libataExt">
<title>libata Library</title> <title>libata Library</title>
!Edrivers/scsi/libata-core.c !Edrivers/scsi/libata-core.c
...@@ -431,6 +787,722 @@ and other resources, etc. ...@@ -431,6 +787,722 @@ and other resources, etc.
!Idrivers/scsi/libata-scsi.c !Idrivers/scsi/libata-scsi.c
</chapter> </chapter>
<chapter id="ataExceptions">
<title>ATA errors &amp; exceptions</title>
<para>
This chapter tries to identify what error/exception conditions exist
for ATA/ATAPI devices and to describe how they should be handled in
an implementation-neutral way.
</para>
<para>
The term 'error' is used to describe conditions where either an
explicit error condition is reported by the device or a command has
timed out.
</para>
<para>
The term 'exception' is either used to describe exceptional
conditions which are not errors (say, power or hotplug events), or
to describe both errors and non-error exceptional conditions. Where
explicit distinction between error and exception is necessary, the
term 'non-error exception' is used.
</para>
<sect1 id="excat">
<title>Exception categories</title>
<para>
Exceptions are described primarily with respect to the legacy
taskfile + bus master IDE interface. If a controller provides
another, better mechanism for error reporting, mapping it onto
the categories described below shouldn't be difficult.
</para>
<para>
In the following sections, two recovery actions - reset and
reconfiguring transport - are mentioned. These are described
further in <xref linkend="exrec"/>.
</para>
<sect2 id="excatHSMviolation">
<title>HSM violation</title>
<para>
This error is indicated when the STATUS value doesn't match the
HSM requirement while issuing or executing any ATA/ATAPI command.
</para>
<itemizedlist>
<title>Examples</title>
<listitem>
<para>
ATA_STATUS doesn't contain !BSY &amp;&amp; DRDY &amp;&amp; !DRQ while trying
to issue a command.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; !DRQ during PIO data transfer.
</para>
</listitem>
<listitem>
<para>
DRQ on command completion.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR after CDB transfer starts but before the
last byte of CDB is transferred. The ATA/ATAPI standard states
that &quot;The device shall not terminate the PACKET command
with an error before the last byte of the command packet has
been written&quot; in the error outputs description of PACKET
command and the state diagram doesn't include such
transitions.
</para>
</listitem>
</itemizedlist>
<para>
In these cases, HSM is violated and not much information
regarding the error can be acquired from STATUS or ERROR
register. IOW, this error can be anything - driver bug,
faulty device, controller and/or cable.
</para>
<para>
As HSM is violated, a reset is necessary to restore a known state.
Reconfiguring the transport for lower speed might be helpful too,
as transmission errors sometimes cause this kind of error.
</para>
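<para>
For example, the pre-issue check from the first example above
can be expressed as a simple status test (a sketch; the real
checks are spread across the HSM code):
</para>
<programlisting>
/* sketch: !BSY &amp;&amp; DRDY &amp;&amp; !DRQ must hold before issuing */
static int hsm_ok_to_issue(struct ata_port *ap)
{
	u8 status = ata_chk_status(ap);

	if ((status &amp; (ATA_BUSY | ATA_DRQ)) == 0 &amp;&amp; (status &amp; ATA_DRDY))
		return 1;	/* ready to accept a command */

	/* anything else is an HSM violation: STATUS/ERROR tell us
	 * little, so reset (and consider lowering transfer speed) */
	return 0;
}
</programlisting>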
</sect2>
<sect2 id="excatDevErr">
<title>ATA/ATAPI device error (non-NCQ / non-CHECK CONDITION)</title>
<para>
These are errors detected and reported by ATA/ATAPI devices,
indicating device problems. For this type of error, the STATUS
and ERROR register values are valid and describe the error
condition. Note that some ATA bus errors are detected by
ATA/ATAPI devices and reported using the same mechanism as
device errors. Those cases are described later in this
section.
</para>
<para>
For ATA commands, this type of error is indicated by !BSY
&amp;&amp; ERR during command execution and on completion.
</para>
<para>For ATAPI commands,</para>
<itemizedlist>
<listitem>
<para>
!BSY &amp;&amp; ERR &amp;&amp; ABRT right after issuing PACKET
indicates that PACKET command is not supported and falls in
this category.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR(==CHK) &amp;&amp; !ABRT after the last
byte of CDB is transferred indicates CHECK CONDITION and
doesn't fall in this category.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR(==CHK) &amp;&amp; ABRT after the last byte
of CDB is transferred *probably* indicates CHECK CONDITION and
doesn't fall in this category.
</para>
</listitem>
</itemizedlist>
<para>
Of errors detected as above, the following are not ATA/ATAPI
device errors but ATA bus errors and should be handled
according to <xref linkend="excatATAbusErr"/>.
</para>
<variablelist>
<varlistentry>
<term>CRC error during data transfer</term>
<listitem>
<para>
This is indicated by ICRC bit in the ERROR register and
means that corruption occurred during data transfer. Up to
ATA/ATAPI-7, the standard specifies that this bit is only
applicable to UDMA transfers, but ATA/ATAPI-8 draft revision
1f says that the bit may be applicable to multiword DMA and
PIO.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ABRT error during data transfer or on completion</term>
<listitem>
<para>
Up to ATA/ATAPI-7, the standard specifies that ABRT could be
set on ICRC errors and in cases where a device is not able
to complete a command. Combined with the fact that MWDMA
and PIO transfer errors aren't allowed to use the ICRC bit up
to ATA/ATAPI-7, this seems to imply that the ABRT bit alone
could indicate transfer errors.
</para>
<para>
However, ATA/ATAPI-8 draft revision 1f removes the part
saying that ICRC errors can turn on ABRT. So, this is kind of
a gray area. Some heuristics are needed here.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
ATA/ATAPI device errors can be further categorized as follows.
</para>
<variablelist>
<varlistentry>
<term>Media errors</term>
<listitem>
<para>
This is indicated by the UNC bit in the ERROR register. ATA
devices report UNC errors only after a certain number of
retries cannot recover the data, so there's not much
else to do other than notifying the upper layer.
</para>
<para>
READ and WRITE commands report CHS or LBA of the first
failed sector but ATA/ATAPI standard specifies that the
amount of transferred data on error completion is
indeterminate, so we cannot assume that sectors preceding
the failed sector have been transferred and thus cannot
complete those sectors successfully as SCSI does.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Media changed / media change requested error</term>
<listitem>
<para>
&lt;&lt;TODO: fill here&gt;&gt;
</para>
</listitem>
</varlistentry>
<varlistentry><term>Address error</term>
<listitem>
<para>
This is indicated by IDNF bit in the ERROR register.
Report to upper layer.
</para>
</listitem>
</varlistentry>
<varlistentry><term>Other errors</term>
<listitem>
<para>
This can be an invalid command or parameter, indicated by the
ABRT bit in the ERROR register, or some other error condition.
Note that the ABRT bit can indicate a lot of things including
ICRC and Address errors. Heuristics needed.
</para>
</listitem>
</varlistentry>
</variablelist>
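<para>
A first-pass decode of a device-error completion might look like
the sketch below. It is illustrative only: the bit names follow
include/linux/ata.h conventions, and, as noted above, the ABRT
case cannot be decided without extra heuristics.
</para>
<programlisting>
enum dev_err_cat { ERR_ATA_BUS, ERR_MEDIA, ERR_ADDRESS, ERR_OTHER };

/* sketch: classify the ERROR register of a failed ATA command */
static enum dev_err_cat classify_dev_error(u8 err)
{
	if (err &amp; ATA_ICRC)
		return ERR_ATA_BUS;	/* handle as ATA bus error */
	if (err &amp; ATA_UNC)
		return ERR_MEDIA;	/* notify upper layer */
	if (err &amp; ATA_IDNF)
		return ERR_ADDRESS;	/* report to upper layer */
	return ERR_OTHER;		/* ABRT etc: heuristics needed */
}
</programlisting>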
<para>
Depending on commands, not all STATUS/ERROR bits are
applicable. These non-applicable bits are marked with
&quot;na&quot; in the output descriptions, but up to ATA/ATAPI-7
no definition of &quot;na&quot; can be found. However,
ATA/ATAPI-8 draft revision 1f describes &quot;N/A&quot; as
follows.
</para>
<blockquote>
<variablelist>
<varlistentry><term>3.2.3.3a N/A</term>
<listitem>
<para>
A keyword that indicates a field has no defined value in
this standard and should not be checked by the host or
device. N/A fields should be cleared to zero.
</para>
</listitem>
</varlistentry>
</variablelist>
</blockquote>
<para>
So, it seems reasonable to assume that &quot;na&quot; bits are
cleared to zero by devices and thus need no explicit masking.
</para>
</sect2>
<sect2 id="excatATAPIcc">
<title>ATAPI device CHECK CONDITION</title>
<para>
ATAPI device CHECK CONDITION error is indicated by a set CHK bit
(ERR bit) in the STATUS register after the last byte of CDB is
transferred for a PACKET command. For this kind of error,
sense data should be acquired to gather information regarding
the error. The REQUEST SENSE packet command should be used to
acquire sense data.
</para>
<para>
Once sense data is acquired, this type of error can be
handled similarly to other SCSI errors. Note that sense data
may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR
&amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such
cases, the error should be considered as an ATA bus error and
handled according to <xref linkend="excatATAbusErr"/>.
</para>
</sect2>
<sect2 id="excatNCQerr">
<title>ATA device error (NCQ)</title>
<para>
NCQ command error is indicated by cleared BSY and set ERR bit
during NCQ command phase (one or more NCQ commands
outstanding). Although STATUS and ERROR registers will
contain valid values describing the error, READ LOG EXT is
required to clear the error condition, determine which command
has failed and acquire more information.
</para>
<para>
READ LOG EXT Log Page 10h reports which tag has failed and
taskfile register values describing the error. With this
information the failed command can be handled as a normal ATA
command error as in <xref linkend="excatDevErr"/> and all
other in-flight commands must be retried. Note that this
retry should not be counted - it's likely that commands
retried this way would have completed normally if it were not
for the failed command.
</para>
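<para>
In code, the recovery sequence might look like the following
sketch. The log page layout offsets follow the SATA NCQ error
log (tag in bits 4:0 and the NQ flag in bit 7 of the first
byte), and read_log_page() is a hypothetical helper issuing
READ LOG EXT as an internal command.
</para>
<programlisting>
/* hypothetical sketch of NCQ error recovery via log page 10h */
static int ncq_eh_sketch(struct ata_port *ap, u8 *log /* 512 bytes */)
{
	unsigned int tag;

	if (read_log_page(ap, 0x10, log))
		return -EIO;		/* treat as HSM violation */

	if (log[0] &amp; 0x80)		/* NQ: not a queued error */
		return -EIO;		/* thoroughly screwed - see text */

	tag = log[0] &amp; 0x1f;		/* tag of the failed command */

	/* handle the failed tag as a normal ATA device error using
	 * the STATUS/ERROR/taskfile values in the log page, then
	 * retry all other outstanding commands without counting
	 * the retry */
	return tag;
}
</programlisting>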
<para>
Note that ATA bus errors can be reported as ATA device NCQ
errors. This should be handled as described in <xref
linkend="excatATAbusErr"/>.
</para>
<para>
If READ LOG EXT Log Page 10h fails or reports NQ, we're
thoroughly screwed. This condition should be treated
according to <xref linkend="excatHSMviolation"/>.
</para>
</sect2>
<sect2 id="excatATAbusErr">
<title>ATA bus error</title>
<para>
ATA bus error means that data corruption occurred during
transmission over the ATA bus (SATA or PATA). This type of
error can be indicated by
</para>
<itemizedlist>
<listitem>
<para>
ICRC or ABRT error as described in <xref linkend="excatDevErr"/>.
</para>
</listitem>
<listitem>
<para>
Controller-specific error completion with error information
indicating transmission error.
</para>
</listitem>
<listitem>
<para>
On some controllers, command timeout. In this case, there may
be a mechanism to determine that the timeout is due to
transmission error.
</para>
</listitem>
<listitem>
<para>
Unknown/random errors, timeouts and all sorts of weirdities.
</para>
</listitem>
</itemizedlist>
<para>
As described above, transmission errors can cause a wide variety
of symptoms ranging from device ICRC error to random device
lockup, and, in many cases, there is no way to tell if an
error condition is due to a transmission error or not;
therefore, it's necessary to employ some kind of heuristic
when dealing with errors and timeouts. For example,
encountering repetitive ABRT errors for a known supported
command is likely to indicate an ATA bus error.
</para>
<para>
Once it's determined that ATA bus errors have possibly
occurred, lowering the ATA bus transmission speed is one of
the actions which may alleviate the problem. See <xref
linkend="exrecReconf"/> for more information.
</para>
</sect2>
<sect2 id="excatPCIbusErr">
<title>PCI bus error</title>
<para>
Data corruption or other failures during transmission over PCI
(or another system bus). For standard BMDMA, this is indicated
by the Error bit in the BMDMA Status register. This type of
error must be logged as it indicates something is very wrong
with the system. Resetting the host controller is recommended.
</para>
</sect2>
<sect2 id="excatLateCompletion">
<title>Late completion</title>
<para>
This occurs when a command times out and the timeout handler then
finds that the timed-out command has already completed,
successfully or with error. This is usually caused by lost
interrupts. This type of error must be logged. Resetting the
host controller is recommended.
</para>
</sect2>
<sect2 id="excatUnknown">
<title>Unknown error (timeout)</title>
<para>
This is when a timeout occurs while the command is still being
processed or the host and device are in an unknown state. When
this occurs, the HSM could be in any valid or invalid state. To
bring the device to a known state and make it forget about the
timed-out command, resetting is necessary. The timed-out
command may be retried.
</para>
<para>
Timeouts can also be caused by transmission errors. Refer to
<xref linkend="excatATAbusErr"/> for more details.
</para>
</sect2>
<sect2 id="excatHoplugPM">
<title>Hotplug and power management exceptions</title>
<para>
&lt;&lt;TODO: fill here&gt;&gt;
</para>
</sect2>
</sect1>
<sect1 id="exrec">
<title>EH recovery actions</title>
<para>
This section discusses several important recovery actions.
</para>
<sect2 id="exrecClr">
<title>Clearing error condition</title>
<para>
Many controllers require their error registers to be cleared by
the error handler. Different controllers may have different
requirements.
</para>
<para>
For SATA, it's strongly recommended to clear at least SError
register during error handling.
</para>
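<para>
With the standard SCR access ops this amounts to a read and a
write-back, since SError bits are write-1-to-clear. A minimal
sketch, assuming the controller exposes SCR registers through
ap->ops->scr_read/scr_write as the drivers above do:
</para>
<programlisting>
/* sketch: read and clear SError during error handling */
static u32 clear_serror(struct ata_port *ap)
{
	u32 serror = ap->ops->scr_read(ap, SCR_ERROR);

	ap->ops->scr_write(ap, SCR_ERROR, serror);
	return serror;		/* callers may want to log it */
}
</programlisting>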
</sect2>
<sect2 id="exrecRst">
<title>Reset</title>
<para>
During EH, resetting is necessary in the following cases.
</para>
<itemizedlist>
<listitem>
<para>
HSM is in unknown or invalid state
</para>
</listitem>
<listitem>
<para>
HBA is in unknown or invalid state
</para>
</listitem>
<listitem>
<para>
EH needs to make HBA/device forget about in-flight commands
</para>
</listitem>
<listitem>
<para>
HBA/device behaves weirdly
</para>
</listitem>
</itemizedlist>
<para>
Resetting during EH might be a good idea regardless of error
condition to improve EH robustness. Whether to reset both or
either one of HBA and device depends on situation but the
following scheme is recommended.
</para>
<itemizedlist>
<listitem>
<para>
When it's known that the HBA is in a ready state but the
ATA/ATAPI device is in an unknown state, reset only the device.
</para>
</listitem>
<listitem>
<para>
If HBA is in unknown state, reset both HBA and device.
</para>
</listitem>
</itemizedlist>
<para>
HBA resetting is implementation specific. For a controller
complying to taskfile/BMDMA PCI IDE, stopping active DMA
transaction may be sufficient iff BMDMA state is the only HBA
context. But even mostly taskfile/BMDMA PCI IDE complying
controllers may have implementation-specific requirements and
mechanisms to reset themselves. This must be addressed by
specific drivers.
</para>
<para>
OTOH, ATA/ATAPI standard describes in detail ways to reset
ATA/ATAPI devices.
</para>
<variablelist>
<varlistentry><term>PATA hardware reset</term>
<listitem>
<para>
This is a hardware-initiated device reset signalled by
asserting the PATA RESET- signal. There is no standard way to
initiate a hardware reset from software, although some
hardware provides registers that allow the driver to directly
tweak the RESET- signal.
</para>
</listitem>
</varlistentry>
<varlistentry><term>Software reset</term>
<listitem>
<para>
This is achieved by turning CONTROL SRST bit on for at
least 5us. Both PATA and SATA support it but, in case of
SATA, this may require controller-specific support as the
second Register FIS to clear SRST should be transmitted
while BSY bit is still set. Note that on PATA, this resets
both master and slave devices on a channel.
</para>
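<para>
A bare-bones SRST sequence over the legacy taskfile interface
might look like this sketch (the delay constants are
representative; the real bus reset code waits more carefully):
</para>
<programlisting>
/* sketch: software reset via the Device Control register */
static void srst_sketch(struct ata_port *ap)
{
	struct ata_ioports *ioaddr = &amp;ap->ioaddr;

	/* assert SRST (with interrupts masked) for well over 5us */
	outb(ap->ctl | ATA_SRST, ioaddr->ctl_addr);
	udelay(20);

	/* clear SRST; on SATA this second Register FIS may need
	 * controller-specific help since BSY is still set */
	outb(ap->ctl, ioaddr->ctl_addr);

	/* wait for !BSY before touching the device again */
	ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT);
}
</programlisting>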
</listitem>
</varlistentry>
<varlistentry><term>EXECUTE DEVICE DIAGNOSTIC command</term>
<listitem>
<para>
Although the ATA/ATAPI standard doesn't describe it exactly,
EDD implies some level of resetting, possibly at a level similar
to software reset. Host-side EDD protocol can be handled
with normal command processing and most SATA controllers
should be able to handle EDD's just like other commands.
As in software reset, EDD affects both devices on a PATA
bus.
</para>
<para>
Although EDD does reset devices, this doesn't suit error
handling as EDD cannot be issued while BSY is set and it's
unclear how it will act when device is in unknown/weird
state.
</para>
</listitem>
</varlistentry>
<varlistentry><term>ATAPI DEVICE RESET command</term>
<listitem>
<para>
This is very similar to software reset except that reset
can be restricted to the selected device without affecting
the other device sharing the cable.
</para>
</listitem>
</varlistentry>
<varlistentry><term>SATA phy reset</term>
<listitem>
<para>
This is the preferred way of resetting a SATA device. In
effect, it's identical to PATA hardware reset. Note that
this can be done with the standard SCR Control register.
As such, it's usually easier to implement than software
reset.
</para>
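<para>
A sketch of the SCR Control sequence (the DET values are from
the SATA specification; the hold time is representative):
</para>
<programlisting>
/* sketch: SATA phy reset through SCR Control (SControl) */
static void phy_reset_sketch(struct ata_port *ap)
{
	ap->ops->scr_write(ap, SCR_CONTROL, 0x301);	/* DET=1: COMRESET */
	mdelay(1);					/* hold */
	ap->ops->scr_write(ap, SCR_CONTROL, 0x300);	/* DET=0: release */

	/* then poll SCR_STATUS until the DET field reports an
	 * established link (0x3) before talking to the device */
}
</programlisting>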
</listitem>
</varlistentry>
</variablelist>
<para>
One more thing to consider when resetting devices is that
resetting clears certain configuration parameters and they
need to be set to their previous or newly adjusted values
after reset.
</para>
<para>
The parameters affected are:
</para>
<itemizedlist>
<listitem>
<para>
CHS set up with INITIALIZE DEVICE PARAMETERS (seldom used)
</para>
</listitem>
<listitem>
<para>
Parameters set with SET FEATURES including transfer mode setting
</para>
</listitem>
<listitem>
<para>
Block count set with SET MULTIPLE MODE
</para>
</listitem>
<listitem>
<para>
Other parameters (SET MAX, MEDIA LOCK...)
</para>
</listitem>
</itemizedlist>
<para>
ATA/ATAPI standard specifies that some parameters must be
maintained across hardware or software reset, but doesn't
strictly specify all of them. Always reconfiguring needed
parameters after reset is required for robustness. Note that
this also applies when resuming from deep sleep (power-off).
</para>
<para>
Also, the ATA/ATAPI standard requires that IDENTIFY DEVICE /
IDENTIFY PACKET DEVICE be issued after any configuration
parameter is updated or after a hardware reset, and that the
result be used for further operation. The OS driver is required
to implement a revalidation mechanism to support this.
</para>
</sect2>
<sect2 id="exrecReconf">
<title>Reconfigure transport</title>
<para>
For both PATA and SATA, a lot of corners are cut for cheap
connectors, cables or controllers, and it's quite common to see
a high transmission error rate. This can be mitigated by
lowering the transmission speed.
</para>
<para>
The following is a possible scheme Jeff Garzik suggested.
</para>
<blockquote>
<para>
If more than $N (3?) transmission errors happen in 15 minutes,
</para>
<itemizedlist>
<listitem>
<para>
if SATA, decrease SATA PHY speed. if speed cannot be decreased,
</para>
</listitem>
<listitem>
<para>
decrease UDMA xfer speed. if at UDMA0, switch to PIO4,
</para>
</listitem>
<listitem>
<para>
decrease PIO xfer speed. if at PIO3, complain, but continue
</para>
</listitem>
</itemizedlist>
</blockquote>
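<para>
As a sketch (all names are hypothetical; libata has no such
mechanism at the time of this writing):
</para>
<programlisting>
#define XFER_ERR_WINDOW	(15 * 60 * HZ)	/* 15 minutes, in jiffies */
#define XFER_ERR_LIMIT	3		/* the "$N (3?)" above */

/* hypothetical sketch of the stepdown scheme quoted above */
static void xfer_err_stepdown(struct ata_port *ap, struct ata_device *dev)
{
	/* restart the window when it expires; both fields are
	 * hypothetical additions to struct ata_port */
	if (time_after(jiffies, ap->xfer_err_stamp + XFER_ERR_WINDOW)) {
		ap->xfer_err_stamp = jiffies;
		ap->xfer_err_count = 0;
	}

	if (++ap->xfer_err_count &lt;= XFER_ERR_LIMIT)
		return;

	/* step down, mildest measure first (helpers hypothetical) */
	if (sata_down_phy_speed(ap))
		return;			/* lowered SATA PHY speed */
	if (dma_down_or_pio4(dev))
		return;			/* lowered UDMA, or went PIO4 */
	if (!pio_down(dev))
		/* already at PIO3: complain, but continue */
		printk(KERN_WARNING "ata%u: errors at PIO3, continuing\n",
		       ap->id);
}
</programlisting>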
</sect2>
</sect1>
</chapter>
<chapter id="PiixInt"> <chapter id="PiixInt">
<title>ata_piix Internals</title> <title>ata_piix Internals</title>
!Idrivers/scsi/ata_piix.c !Idrivers/scsi/ata_piix.c
......
@@ -489,11 +489,11 @@ config SCSI_SATA_NV
	  If unsure, say N.

-config SCSI_SATA_PROMISE
-	tristate "Promise SATA TX2/TX4 support"
+config SCSI_PDC_ADMA
+	tristate "Pacific Digital ADMA support"
	depends on SCSI_SATA && PCI
	help
-	  This option enables support for Promise Serial ATA TX2/TX4.
+	  This option enables support for Pacific Digital ADMA controllers
	  If unsure, say N.

@@ -505,6 +505,14 @@ config SCSI_SATA_QSTOR
	  If unsure, say N.

+config SCSI_SATA_PROMISE
+	tristate "Promise SATA TX2/TX4 support"
+	depends on SCSI_SATA && PCI
+	help
+	  This option enables support for Promise Serial ATA TX2/TX4.
+	  If unsure, say N.
+
config SCSI_SATA_SX4
	tristate "Promise SATA SX4 support"
	depends on SCSI_SATA && PCI && EXPERIMENTAL

@@ -521,6 +529,14 @@ config SCSI_SATA_SIL
	  If unsure, say N.

+config SCSI_SATA_SIL24
+	tristate "Silicon Image 3124/3132 SATA support"
+	depends on SCSI_SATA && PCI && EXPERIMENTAL
+	help
+	  This option enables support for Silicon Image 3124/3132 Serial ATA.
+	  If unsure, say N.
+
config SCSI_SATA_SIS
	tristate "SiS 964/180 SATA support"
	depends on SCSI_SATA && PCI && EXPERIMENTAL
...
@@ -130,6 +130,7 @@ obj-$(CONFIG_SCSI_ATA_PIIX)	+= libata.o ata_piix.o
obj-$(CONFIG_SCSI_SATA_PROMISE)	+= libata.o sata_promise.o
obj-$(CONFIG_SCSI_SATA_QSTOR)	+= libata.o sata_qstor.o
obj-$(CONFIG_SCSI_SATA_SIL)	+= libata.o sata_sil.o
+obj-$(CONFIG_SCSI_SATA_SIL24)	+= libata.o sata_sil24.o
obj-$(CONFIG_SCSI_SATA_VIA)	+= libata.o sata_via.o
obj-$(CONFIG_SCSI_SATA_VITESSE)	+= libata.o sata_vsc.o
obj-$(CONFIG_SCSI_SATA_SIS)	+= libata.o sata_sis.o
@@ -137,6 +138,7 @@ obj-$(CONFIG_SCSI_SATA_SX4)	+= libata.o sata_sx4.o
obj-$(CONFIG_SCSI_SATA_NV)	+= libata.o sata_nv.o
obj-$(CONFIG_SCSI_SATA_ULI)	+= libata.o sata_uli.o
obj-$(CONFIG_SCSI_SATA_MV)	+= libata.o sata_mv.o
+obj-$(CONFIG_SCSI_PDC_ADMA)	+= libata.o pdc_adma.o
obj-$(CONFIG_ARM)		+= arm/
...
@@ -216,7 +216,7 @@ static Scsi_Host_Template ahci_sht = {
	.ordered_flush		= 1,
};

-static struct ata_port_operations ahci_ops = {
+static const struct ata_port_operations ahci_ops = {
	.port_disable		= ata_port_disable,
	.check_status		= ahci_check_status,

@@ -407,7 +407,7 @@ static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg_in)
		return 0xffffffffU;
	}

-	return readl((void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+	return readl((void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
}

@@ -425,7 +425,7 @@ static void ahci_scr_write (struct ata_port *ap, unsigned int sc_reg_in,
		return;
	}

-	writel(val, (void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+	writel(val, (void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
}

static void ahci_phy_reset(struct ata_port *ap)
@@ -453,14 +453,14 @@ static void ahci_phy_reset(struct ata_port *ap)

static u8 ahci_check_status(struct ata_port *ap)
{
-	void *mmio = (void *) ap->ioaddr.cmd_addr;
+	void __iomem *mmio = (void __iomem *) ap->ioaddr.cmd_addr;

	return readl(mmio + PORT_TFDATA) & 0xFF;
}

static u8 ahci_check_err(struct ata_port *ap)
{
-	void *mmio = (void *) ap->ioaddr.cmd_addr;
+	void __iomem *mmio = (void __iomem *) ap->ioaddr.cmd_addr;

	return (readl(mmio + PORT_TFDATA) >> 8) & 0xFF;
}

@@ -672,17 +672,36 @@ static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *

	for (i = 0; i < host_set->n_ports; i++) {
		struct ata_port *ap;
-		u32 tmp;

-		VPRINTK("port %u\n", i);
+		if (!(irq_stat & (1 << i)))
+			continue;

		ap = host_set->ports[i];
-		tmp = irq_stat & (1 << i);
-		if (tmp && ap) {
+		if (ap) {
			struct ata_queued_cmd *qc;
			qc = ata_qc_from_tag(ap, ap->active_tag);
-			if (ahci_host_intr(ap, qc))
-				irq_ack |= (1 << i);
+			if (!ahci_host_intr(ap, qc))
+				if (ata_ratelimit()) {
+					struct pci_dev *pdev =
+						to_pci_dev(ap->host_set->dev);
+					printk(KERN_WARNING
+					       "ahci(%s): unhandled interrupt on port %u\n",
+					       pci_name(pdev), i);
+				}
+
+			VPRINTK("port %u\n", i);
+		} else {
+			VPRINTK("port %u (no irq)\n", i);
+			if (ata_ratelimit()) {
+				struct pci_dev *pdev =
+					to_pci_dev(ap->host_set->dev);
+				printk(KERN_WARNING
+				       "ahci(%s): interrupt on disabled port %u\n",
+				       pci_name(pdev), i);
+			}
		}
+
+		irq_ack |= (1 << i);
	}

	if (irq_ack) {
...
@@ -147,7 +147,7 @@ static Scsi_Host_Template piix_sht = {
	.ordered_flush		= 1,
};

-static struct ata_port_operations piix_pata_ops = {
+static const struct ata_port_operations piix_pata_ops = {
	.port_disable		= ata_port_disable,
	.set_piomode		= piix_set_piomode,
	.set_dmamode		= piix_set_dmamode,

@@ -177,7 +177,7 @@ static struct ata_port_operations piix_pata_ops = {
	.host_stop		= ata_host_stop,
};

-static struct ata_port_operations piix_sata_ops = {
+static const struct ata_port_operations piix_sata_ops = {
	.port_disable		= ata_port_disable,
	.tf_load		= ata_tf_load,
...
@@ -48,6 +48,7 @@
#include <linux/completion.h>
#include <linux/suspend.h>
#include <linux/workqueue.h>
+#include <linux/jiffies.h>
#include <scsi/scsi.h>
#include "scsi.h"
#include "scsi_priv.h"
@@ -62,14 +63,15 @@
static unsigned int ata_busy_sleep (struct ata_port *ap,
				    unsigned long tmout_pat,
				    unsigned long tmout);
+static void ata_dev_reread_id(struct ata_port *ap, struct ata_device *dev);
+static void ata_dev_init_params(struct ata_port *ap, struct ata_device *dev);
static void ata_set_mode(struct ata_port *ap);
static void ata_dev_set_xfermode(struct ata_port *ap, struct ata_device *dev);
-static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift);
+static unsigned int ata_get_mode_mask(const struct ata_port *ap, int shift);
static int fgb(u32 bitmap);
-static int ata_choose_xfer_mode(struct ata_port *ap,
+static int ata_choose_xfer_mode(const struct ata_port *ap,
				u8 *xfer_mode_out,
				unsigned int *xfer_shift_out);
-static int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat);
static void __ata_qc_complete(struct ata_queued_cmd *qc);

static unsigned int ata_unique_id = 1;
@@ -85,7 +87,7 @@ MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);

/**
- *	ata_tf_load - send taskfile registers to host controller
+ *	ata_tf_load_pio - send taskfile registers to host controller
 *	@ap: Port to which output is sent
 *	@tf: ATA taskfile register set
 *
@@ -95,7 +97,7 @@ MODULE_VERSION(DRV_VERSION);
 *	Inherited from caller.
 */

-static void ata_tf_load_pio(struct ata_port *ap, struct ata_taskfile *tf)
+static void ata_tf_load_pio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	struct ata_ioports *ioaddr = &ap->ioaddr;
	unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR;
@@ -153,7 +155,7 @@ static void ata_tf_load_pio(struct ata_port *ap, struct ata_taskfile *tf)
 *	Inherited from caller.
 */

-static void ata_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+static void ata_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	struct ata_ioports *ioaddr = &ap->ioaddr;
	unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR;
@@ -222,7 +224,7 @@ static void ata_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
 *	LOCKING:
 *	Inherited from caller.
 */
-void ata_tf_load(struct ata_port *ap, struct ata_taskfile *tf)
+void ata_tf_load(struct ata_port *ap, const struct ata_taskfile *tf)
{
	if (ap->flags & ATA_FLAG_MMIO)
		ata_tf_load_mmio(ap, tf);
@@ -242,7 +244,7 @@ void ata_tf_load(struct ata_port *ap, struct ata_taskfile *tf)
 *	spin_lock_irqsave(host_set lock)
 */

-static void ata_exec_command_pio(struct ata_port *ap, struct ata_taskfile *tf)
+static void ata_exec_command_pio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	DPRINTK("ata%u: cmd 0x%X\n", ap->id, tf->command);
@@ -263,7 +265,7 @@ static void ata_exec_command_pio(struct ata_port *ap, struct ata_taskfile *tf)
 *	spin_lock_irqsave(host_set lock)
 */

-static void ata_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+static void ata_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	DPRINTK("ata%u: cmd 0x%X\n", ap->id, tf->command);
@@ -283,7 +285,7 @@ static void ata_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf)
 *	LOCKING:
 *	spin_lock_irqsave(host_set lock)
 */
-void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf)
+void ata_exec_command(struct ata_port *ap, const struct ata_taskfile *tf)
{
	if (ap->flags & ATA_FLAG_MMIO)
		ata_exec_command_mmio(ap, tf);
@@ -303,7 +305,7 @@ void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf)
 *	Obtains host_set lock.
 */

-static inline void ata_exec(struct ata_port *ap, struct ata_taskfile *tf)
+static inline void ata_exec(struct ata_port *ap, const struct ata_taskfile *tf)
{
	unsigned long flags;
@@ -326,7 +328,7 @@ static inline void ata_exec(struct ata_port *ap, struct ata_taskfile *tf)
 *	Obtains host_set lock.
 */

-static void ata_tf_to_host(struct ata_port *ap, struct ata_taskfile *tf)
+static void ata_tf_to_host(struct ata_port *ap, const struct ata_taskfile *tf)
{
	ap->ops->tf_load(ap, tf);
@@ -346,7 +348,7 @@ static void ata_tf_to_host(struct ata_port *ap, struct ata_taskfile *tf)
 *	spin_lock_irqsave(host_set lock)
 */

-void ata_tf_to_host_nolock(struct ata_port *ap, struct ata_taskfile *tf)
+void ata_tf_to_host_nolock(struct ata_port *ap, const struct ata_taskfile *tf)
{
	ap->ops->tf_load(ap, tf);
	ap->ops->exec_command(ap, tf);
@@ -556,7 +558,7 @@ u8 ata_chk_err(struct ata_port *ap)
 *	Inherited from caller.
 */

-void ata_tf_to_fis(struct ata_taskfile *tf, u8 *fis, u8 pmp)
+void ata_tf_to_fis(const struct ata_taskfile *tf, u8 *fis, u8 pmp)
{
	fis[0] = 0x27;	/* Register - Host to Device FIS */
	fis[1] = (pmp & 0xf) | (1 << 7);	/* Port multiplier number,
@@ -597,7 +599,7 @@ void ata_tf_to_fis(struct ata_taskfile *tf, u8 *fis, u8 pmp)
 *	Inherited from caller.
 */

-void ata_tf_from_fis(u8 *fis, struct ata_taskfile *tf)
+void ata_tf_from_fis(const u8 *fis, struct ata_taskfile *tf)
{
	tf->command	= fis[2];	/* status */
	tf->feature	= fis[3];	/* error */
@@ -615,79 +617,53 @@ void ata_tf_from_fis(u8 *fis, struct ata_taskfile *tf)
	tf->hob_nsect	= fis[13];
}
-/**
- *	ata_prot_to_cmd - determine which read/write opcodes to use
- *	@protocol: ATA_PROT_xxx taskfile protocol
- *	@lba48: true is lba48 is present
- *
- *	Given necessary input, determine which read/write commands
- *	to use to transfer data.
- *
- *	LOCKING:
- *	None.
- */
-static int ata_prot_to_cmd(int protocol, int lba48)
-{
-	int rcmd = 0, wcmd = 0;
-
-	switch (protocol) {
-	case ATA_PROT_PIO:
-		if (lba48) {
-			rcmd = ATA_CMD_PIO_READ_EXT;
-			wcmd = ATA_CMD_PIO_WRITE_EXT;
-		} else {
-			rcmd = ATA_CMD_PIO_READ;
-			wcmd = ATA_CMD_PIO_WRITE;
-		}
-		break;
-
-	case ATA_PROT_DMA:
-		if (lba48) {
-			rcmd = ATA_CMD_READ_EXT;
-			wcmd = ATA_CMD_WRITE_EXT;
-		} else {
-			rcmd = ATA_CMD_READ;
-			wcmd = ATA_CMD_WRITE;
-		}
-		break;
-
-	default:
-		return -1;
-	}
-
-	return rcmd | (wcmd << 8);
-}
+static const u8 ata_rw_cmds[] = {
+	/* pio multi */
+	ATA_CMD_READ_MULTI,
+	ATA_CMD_WRITE_MULTI,
+	ATA_CMD_READ_MULTI_EXT,
+	ATA_CMD_WRITE_MULTI_EXT,
+	/* pio */
+	ATA_CMD_PIO_READ,
+	ATA_CMD_PIO_WRITE,
+	ATA_CMD_PIO_READ_EXT,
+	ATA_CMD_PIO_WRITE_EXT,
+	/* dma */
+	ATA_CMD_READ,
+	ATA_CMD_WRITE,
+	ATA_CMD_READ_EXT,
+	ATA_CMD_WRITE_EXT
+};

/**
- *	ata_dev_set_protocol - set taskfile protocol and r/w commands
- *	@dev: device to examine and configure
+ *	ata_rwcmd_protocol - set taskfile r/w commands and protocol
+ *	@qc: command to examine and configure
 *
- *	Examine the device configuration, after we have
- *	read the identify-device page and configured the
- *	data transfer mode.  Set internal state related to
- *	the ATA taskfile protocol (pio, pio mult, dma, etc.)
- *	and calculate the proper read/write commands to use.
+ *	Examine the device configuration and tf->flags to calculate
+ *	the proper read/write commands and protocol to use.
 *
 *	LOCKING:
 *	caller.
 */
-static void ata_dev_set_protocol(struct ata_device *dev)
+void ata_rwcmd_protocol(struct ata_queued_cmd *qc)
{
-	int pio = (dev->flags & ATA_DFLAG_PIO);
-	int lba48 = (dev->flags & ATA_DFLAG_LBA48);
-	int proto, cmd;
+	struct ata_taskfile *tf = &qc->tf;
+	struct ata_device *dev = qc->dev;

-	if (pio)
-		proto = dev->xfer_protocol = ATA_PROT_PIO;
-	else
-		proto = dev->xfer_protocol = ATA_PROT_DMA;
+	int index, lba48, write;

-	cmd = ata_prot_to_cmd(proto, lba48);
-	if (cmd < 0)
-		BUG();
+	lba48 = (tf->flags & ATA_TFLAG_LBA48) ? 2 : 0;
+	write = (tf->flags & ATA_TFLAG_WRITE) ? 1 : 0;

-	dev->read_cmd = cmd & 0xff;
-	dev->write_cmd = (cmd >> 8) & 0xff;
+	if (dev->flags & ATA_DFLAG_PIO) {
+		tf->protocol = ATA_PROT_PIO;
+		index = dev->multi_count ? 0 : 4;
+	} else {
+		tf->protocol = ATA_PROT_DMA;
+		index = 8;
+	}
+
+	tf->command = ata_rw_cmds[index + lba48 + write];
}
static const char * xfer_mode_str[] = {
@@ -869,7 +845,7 @@ static unsigned int ata_devchk(struct ata_port *ap,
 *	the event of failure.
 */

-unsigned int ata_dev_classify(struct ata_taskfile *tf)
+unsigned int ata_dev_classify(const struct ata_taskfile *tf)
{
	/* Apple's open source Darwin code hints that some devices only
	 * put a proper signature into the LBA mid/high registers,
@@ -961,7 +937,7 @@ static u8 ata_dev_try_classify(struct ata_port *ap, unsigned int device)
 *	caller.
 */

-void ata_dev_id_string(u16 *id, unsigned char *s,
+void ata_dev_id_string(const u16 *id, unsigned char *s,
		       unsigned int ofs, unsigned int len)
{
	unsigned int c;
@@ -1078,7 +1054,7 @@ void ata_dev_select(struct ata_port *ap, unsigned int device,
 *	caller.
 */

-static inline void ata_dump_id(struct ata_device *dev)
+static inline void ata_dump_id(const struct ata_device *dev)
{
	DPRINTK("49==0x%04x  "
		"53==0x%04x  "
@@ -1106,6 +1082,31 @@ static inline void ata_dump_id(struct ata_device *dev)
		dev->id[93]);
}
/*
* Compute the PIO modes available for this device. This is not as
* trivial as it seems if we must consider early devices correctly.
*
* FIXME: pre IDE drive timing (do we care ?).
*/
static unsigned int ata_pio_modes(const struct ata_device *adev)
{
u16 modes;
/* Usual case. Word 53 indicates word 88 is valid */
if (adev->id[ATA_ID_FIELD_VALID] & (1 << 2)) {
modes = adev->id[ATA_ID_PIO_MODES] & 0x03;
modes <<= 3;
modes |= 0x7;
return modes;
}
/* If word 88 isn't valid then Word 51 holds the PIO timing number
for the maximum. Turn it into a mask and return it */
modes = (2 << (adev->id[ATA_ID_OLD_PIO_MODES] & 0xFF)) - 1 ;
return modes;
}
/**
 *	ata_dev_identify - obtain IDENTIFY x DEVICE page
 *	@ap: port on which device we wish to probe resides
@@ -1131,7 +1132,7 @@ static inline void ata_dump_id(const struct ata_device *dev)
static void ata_dev_identify(struct ata_port *ap, unsigned int device)
{
	struct ata_device *dev = &ap->device[device];
-	unsigned int i;
+	unsigned int major_version;
	u16 tmp;
	unsigned long xfer_modes;
	u8 status;
@@ -1229,9 +1230,9 @@ static void ata_dev_identify(struct ata_port *ap, unsigned int device)
	 * common ATA, ATAPI feature tests
	 */

-	/* we require LBA and DMA support (bits 8 & 9 of word 49) */
-	if (!ata_id_has_dma(dev->id) || !ata_id_has_lba(dev->id)) {
-		printk(KERN_DEBUG "ata%u: no dma/lba\n", ap->id);
+	/* we require DMA support (bits 8 of word 49) */
+	if (!ata_id_has_dma(dev->id)) {
+		printk(KERN_DEBUG "ata%u: no dma\n", ap->id);
		goto err_out_nosup;
	}
@@ -1239,10 +1240,8 @@ static void ata_dev_identify(struct ata_port *ap, unsigned int device)
	xfer_modes = dev->id[ATA_ID_UDMA_MODES];
	if (!xfer_modes)
		xfer_modes = (dev->id[ATA_ID_MWDMA_MODES]) << ATA_SHIFT_MWDMA;
-	if (!xfer_modes) {
-		xfer_modes = (dev->id[ATA_ID_PIO_MODES]) << (ATA_SHIFT_PIO + 3);
-		xfer_modes |= (0x7 << ATA_SHIFT_PIO);
-	}
+	if (!xfer_modes)
+		xfer_modes = ata_pio_modes(dev);

	ata_dump_id(dev);
@@ -1251,32 +1250,75 @@ static void ata_dev_identify(struct ata_port *ap, unsigned int device)
	if (!ata_id_is_ata(dev->id))	/* sanity check */
		goto err_out_nosup;

+		/* get major version */
		tmp = dev->id[ATA_ID_MAJOR_VER];
-		for (i = 14; i >= 1; i--)
-			if (tmp & (1 << i))
+		for (major_version = 14; major_version >= 1; major_version--)
+			if (tmp & (1 << major_version))
				break;

-		/* we require at least ATA-3 */
-		if (i < 3) {
-			printk(KERN_DEBUG "ata%u: no ATA-3\n", ap->id);
-			goto err_out_nosup;
-		}
+		/*
+		 * The exact sequence expected by certain pre-ATA4 drives is:
+		 * SRST RESET
+		 * IDENTIFY
+		 * INITIALIZE DEVICE PARAMETERS
+		 * anything else..
+		 * Some drives were very specific about that exact sequence.
+		 */
+		if (major_version < 4 || (!ata_id_has_lba(dev->id))) {
+			ata_dev_init_params(ap, dev);
+
+			/* current CHS translation info (id[53-58]) might be
+			 * changed. reread the identify device info.
+			 */
+			ata_dev_reread_id(ap, dev);
+		}

-		if (ata_id_has_lba48(dev->id)) {
-			dev->flags |= ATA_DFLAG_LBA48;
-			dev->n_sectors = ata_id_u64(dev->id, 100);
-		} else {
-			dev->n_sectors = ata_id_u32(dev->id, 60);
-		}
+		if (ata_id_has_lba(dev->id)) {
+			dev->flags |= ATA_DFLAG_LBA;
+
+			if (ata_id_has_lba48(dev->id)) {
+				dev->flags |= ATA_DFLAG_LBA48;
+				dev->n_sectors = ata_id_u64(dev->id, 100);
+			} else {
+				dev->n_sectors = ata_id_u32(dev->id, 60);
+			}
/* print device info to dmesg */
printk(KERN_INFO "ata%u: dev %u ATA-%d, max %s, %Lu sectors:%s\n",
ap->id, device,
major_version,
ata_mode_string(xfer_modes),
(unsigned long long)dev->n_sectors,
dev->flags & ATA_DFLAG_LBA48 ? " LBA48" : " LBA");
} else {
/* CHS */
/* Default translation */
dev->cylinders = dev->id[1];
dev->heads = dev->id[3];
dev->sectors = dev->id[6];
dev->n_sectors = dev->cylinders * dev->heads * dev->sectors;
if (ata_id_current_chs_valid(dev->id)) {
/* Current CHS translation is valid. */
dev->cylinders = dev->id[54];
dev->heads = dev->id[55];
dev->sectors = dev->id[56];
dev->n_sectors = ata_id_u32(dev->id, 57);
}
/* print device info to dmesg */
printk(KERN_INFO "ata%u: dev %u ATA-%d, max %s, %Lu sectors: CHS %d/%d/%d\n",
ap->id, device,
major_version,
ata_mode_string(xfer_modes),
(unsigned long long)dev->n_sectors,
(int)dev->cylinders, (int)dev->heads, (int)dev->sectors);
		}

		ap->host->max_cmd_len = 16;

-		/* print device info to dmesg */
-		printk(KERN_INFO "ata%u: dev %u ATA, max %s, %Lu sectors:%s\n",
-		       ap->id, device,
-		       ata_mode_string(xfer_modes),
-		       (unsigned long long)dev->n_sectors,
-		       dev->flags & ATA_DFLAG_LBA48 ? " lba48" : "");
	}
	/* ATAPI-specific feature tests */
@@ -1310,7 +1352,7 @@ static void ata_dev_identify(struct ata_port *ap, unsigned int device)
}

-static inline u8 ata_dev_knobble(struct ata_port *ap)
+static inline u8 ata_dev_knobble(const struct ata_port *ap)
{
	return ((ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(ap->device->id)));
}
@@ -1496,7 +1538,153 @@ void ata_port_disable(struct ata_port *ap)
	ap->flags |= ATA_FLAG_PORT_DISABLED;
}

-static struct {
+/*
+ * This mode timing computation functionality is ported over from
+ * drivers/ide/ide-timing.h and was originally written by Vojtech Pavlik
+ */
/*
* PIO 0-5, MWDMA 0-2 and UDMA 0-6 timings (in nanoseconds).
* These were taken from ATA/ATAPI-6 standard, rev 0a, except
* for PIO 5, which is a nonstandard extension and UDMA6, which
* is currently supported only by Maxtor drives.
*/
static const struct ata_timing ata_timing[] = {
{ XFER_UDMA_6, 0, 0, 0, 0, 0, 0, 0, 15 },
{ XFER_UDMA_5, 0, 0, 0, 0, 0, 0, 0, 20 },
{ XFER_UDMA_4, 0, 0, 0, 0, 0, 0, 0, 30 },
{ XFER_UDMA_3, 0, 0, 0, 0, 0, 0, 0, 45 },
{ XFER_UDMA_2, 0, 0, 0, 0, 0, 0, 0, 60 },
{ XFER_UDMA_1, 0, 0, 0, 0, 0, 0, 0, 80 },
{ XFER_UDMA_0, 0, 0, 0, 0, 0, 0, 0, 120 },
/* { XFER_UDMA_SLOW, 0, 0, 0, 0, 0, 0, 0, 150 }, */
{ XFER_MW_DMA_2, 25, 0, 0, 0, 70, 25, 120, 0 },
{ XFER_MW_DMA_1, 45, 0, 0, 0, 80, 50, 150, 0 },
{ XFER_MW_DMA_0, 60, 0, 0, 0, 215, 215, 480, 0 },
{ XFER_SW_DMA_2, 60, 0, 0, 0, 120, 120, 240, 0 },
{ XFER_SW_DMA_1, 90, 0, 0, 0, 240, 240, 480, 0 },
{ XFER_SW_DMA_0, 120, 0, 0, 0, 480, 480, 960, 0 },
/* { XFER_PIO_5, 20, 50, 30, 100, 50, 30, 100, 0 }, */
{ XFER_PIO_4, 25, 70, 25, 120, 70, 25, 120, 0 },
{ XFER_PIO_3, 30, 80, 70, 180, 80, 70, 180, 0 },
{ XFER_PIO_2, 30, 290, 40, 330, 100, 90, 240, 0 },
{ XFER_PIO_1, 50, 290, 93, 383, 125, 100, 383, 0 },
{ XFER_PIO_0, 70, 290, 240, 600, 165, 150, 600, 0 },
/* { XFER_PIO_SLOW, 120, 290, 240, 960, 290, 240, 960, 0 }, */
{ 0xFF }
};
#define ENOUGH(v,unit) (((v)-1)/(unit)+1)
#define EZ(v,unit) ((v)?ENOUGH(v,unit):0)
static void ata_timing_quantize(const struct ata_timing *t, struct ata_timing *q, int T, int UT)
{
q->setup = EZ(t->setup * 1000, T);
q->act8b = EZ(t->act8b * 1000, T);
q->rec8b = EZ(t->rec8b * 1000, T);
q->cyc8b = EZ(t->cyc8b * 1000, T);
q->active = EZ(t->active * 1000, T);
q->recover = EZ(t->recover * 1000, T);
q->cycle = EZ(t->cycle * 1000, T);
q->udma = EZ(t->udma * 1000, UT);
}
void ata_timing_merge(const struct ata_timing *a, const struct ata_timing *b,
struct ata_timing *m, unsigned int what)
{
if (what & ATA_TIMING_SETUP ) m->setup = max(a->setup, b->setup);
if (what & ATA_TIMING_ACT8B ) m->act8b = max(a->act8b, b->act8b);
if (what & ATA_TIMING_REC8B ) m->rec8b = max(a->rec8b, b->rec8b);
if (what & ATA_TIMING_CYC8B ) m->cyc8b = max(a->cyc8b, b->cyc8b);
if (what & ATA_TIMING_ACTIVE ) m->active = max(a->active, b->active);
if (what & ATA_TIMING_RECOVER) m->recover = max(a->recover, b->recover);
if (what & ATA_TIMING_CYCLE ) m->cycle = max(a->cycle, b->cycle);
if (what & ATA_TIMING_UDMA ) m->udma = max(a->udma, b->udma);
}
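ata_timing_merge() resolves two timing constraints by taking the slower (numerically larger) value of every field selected in 'what', so the result satisfies both sides, e.g. master and slave sharing one cable. A minimal two-field sketch of the same rule, with made-up values:

#include <stdio.h>

struct timing { int cycle, recover; };

/* Field-wise max, as ata_timing_merge() does for the selected fields */
static void merge(const struct timing *a, const struct timing *b,
		  struct timing *m)
{
	m->cycle   = a->cycle   > b->cycle   ? a->cycle   : b->cycle;
	m->recover = a->recover > b->recover ? a->recover : b->recover;
}

int main(void)
{
	struct timing master = { 120, 25 }, slave = { 100, 35 }, m;

	merge(&master, &slave, &m);
	printf("cycle %d recover %d\n", m.cycle, m.recover); /* 120 35 */
	return 0;
}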
static const struct ata_timing* ata_timing_find_mode(unsigned short speed)
{
const struct ata_timing *t;
for (t = ata_timing; t->mode != speed; t++)
if (t->mode == 0xFF)
return NULL;
return t;
}
int ata_timing_compute(struct ata_device *adev, unsigned short speed,
struct ata_timing *t, int T, int UT)
{
const struct ata_timing *s;
struct ata_timing p;
/*
* Find the mode.
*/
if (!(s = ata_timing_find_mode(speed)))
return -EINVAL;
/*
* If the drive is an EIDE drive, it can tell us it needs extended
* PIO/MW_DMA cycle timing.
*/
if (adev->id[ATA_ID_FIELD_VALID] & 2) { /* EIDE drive */
memset(&p, 0, sizeof(p));
if(speed >= XFER_PIO_0 && speed <= XFER_SW_DMA_0) {
if (speed <= XFER_PIO_2) p.cycle = p.cyc8b = adev->id[ATA_ID_EIDE_PIO];
else p.cycle = p.cyc8b = adev->id[ATA_ID_EIDE_PIO_IORDY];
} else if(speed >= XFER_MW_DMA_0 && speed <= XFER_MW_DMA_2) {
p.cycle = adev->id[ATA_ID_EIDE_DMA_MIN];
}
ata_timing_merge(&p, t, t, ATA_TIMING_CYCLE | ATA_TIMING_CYC8B);
}
/*
* Convert the timing to bus clock counts.
*/
ata_timing_quantize(s, t, T, UT);
/*
* Even in DMA/UDMA modes we still use PIO access for IDENTIFY, S.M.A.R.T
* and some other commands. We have to ensure that the DMA cycle timing is
 * slower than or equal to the fastest PIO timing.
*/
if (speed > XFER_PIO_4) {
ata_timing_compute(adev, adev->pio_mode, &p, T, UT);
ata_timing_merge(&p, t, t, ATA_TIMING_ALL);
}
/*
 * Lengthen active & recovery time so that cycle time is correct.
*/
if (t->act8b + t->rec8b < t->cyc8b) {
t->act8b += (t->cyc8b - (t->act8b + t->rec8b)) / 2;
t->rec8b = t->cyc8b - t->act8b;
}
if (t->active + t->recover < t->cycle) {
t->active += (t->cycle - (t->active + t->recover)) / 2;
t->recover = t->cycle - t->active;
}
return 0;
}
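The closing fix-up in ata_timing_compute() distributes any slack so that active plus recovery adds up to the full cycle: half of the shortfall (rounded down) goes to active and recovery takes the rest. A standalone sketch of that step with made-up clock counts:

#include <stdio.h>

struct t { int active, recover, cycle; };

/* Same lengthening rule as the end of ata_timing_compute() */
static void lengthen(struct t *t)
{
	if (t->active + t->recover < t->cycle) {
		t->active += (t->cycle - (t->active + t->recover)) / 2;
		t->recover = t->cycle - t->active;
	}
}

int main(void)
{
	struct t t = { .active = 3, .recover = 1, .cycle = 8 };

	lengthen(&t);
	/* slack is 4: active 3 -> 5, recover = 8 - 5 = 3 */
	printf("active %d recover %d cycle %d\n", t.active, t.recover, t.cycle);
	return 0;
}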
static const struct {
unsigned int shift; unsigned int shift;
u8 base; u8 base;
} xfer_mode_classes[] = { } xfer_mode_classes[] = {
...@@ -1603,7 +1791,7 @@ static void ata_host_set_dma(struct ata_port *ap, u8 xfer_mode, ...@@ -1603,7 +1791,7 @@ static void ata_host_set_dma(struct ata_port *ap, u8 xfer_mode,
*/ */
static void ata_set_mode(struct ata_port *ap) static void ata_set_mode(struct ata_port *ap)
{ {
unsigned int i, xfer_shift; unsigned int xfer_shift;
u8 xfer_mode; u8 xfer_mode;
int rc; int rc;
...@@ -1632,11 +1820,6 @@ static void ata_set_mode(struct ata_port *ap) ...@@ -1632,11 +1820,6 @@ static void ata_set_mode(struct ata_port *ap)
if (ap->ops->post_set_mode) if (ap->ops->post_set_mode)
ap->ops->post_set_mode(ap); ap->ops->post_set_mode(ap);
for (i = 0; i < 2; i++) {
struct ata_device *dev = &ap->device[i];
ata_dev_set_protocol(dev);
}
return; return;
err_out: err_out:
...@@ -1910,7 +2093,8 @@ void ata_bus_reset(struct ata_port *ap) ...@@ -1910,7 +2093,8 @@ void ata_bus_reset(struct ata_port *ap)
DPRINTK("EXIT\n"); DPRINTK("EXIT\n");
} }
static void ata_pr_blacklisted(struct ata_port *ap, struct ata_device *dev) static void ata_pr_blacklisted(const struct ata_port *ap,
const struct ata_device *dev)
{ {
printk(KERN_WARNING "ata%u: dev %u is on DMA blacklist, disabling DMA\n", printk(KERN_WARNING "ata%u: dev %u is on DMA blacklist, disabling DMA\n",
ap->id, dev->devno); ap->id, dev->devno);
...@@ -1948,7 +2132,7 @@ static const char * ata_dma_blacklist [] = { ...@@ -1948,7 +2132,7 @@ static const char * ata_dma_blacklist [] = {
"_NEC DV5800A", "_NEC DV5800A",
}; };
static int ata_dma_blacklisted(struct ata_port *ap, struct ata_device *dev) static int ata_dma_blacklisted(const struct ata_device *dev)
{ {
unsigned char model_num[40]; unsigned char model_num[40];
char *s; char *s;
...@@ -1973,9 +2157,9 @@ static int ata_dma_blacklisted(struct ata_port *ap, struct ata_device *dev) ...@@ -1973,9 +2157,9 @@ static int ata_dma_blacklisted(struct ata_port *ap, struct ata_device *dev)
return 0; return 0;
} }
static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift) static unsigned int ata_get_mode_mask(const struct ata_port *ap, int shift)
{ {
struct ata_device *master, *slave; const struct ata_device *master, *slave;
unsigned int mask; unsigned int mask;
master = &ap->device[0]; master = &ap->device[0];
...@@ -1987,14 +2171,14 @@ static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift) ...@@ -1987,14 +2171,14 @@ static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift)
mask = ap->udma_mask; mask = ap->udma_mask;
if (ata_dev_present(master)) { if (ata_dev_present(master)) {
mask &= (master->id[ATA_ID_UDMA_MODES] & 0xff); mask &= (master->id[ATA_ID_UDMA_MODES] & 0xff);
if (ata_dma_blacklisted(ap, master)) { if (ata_dma_blacklisted(master)) {
mask = 0; mask = 0;
ata_pr_blacklisted(ap, master); ata_pr_blacklisted(ap, master);
} }
} }
if (ata_dev_present(slave)) { if (ata_dev_present(slave)) {
mask &= (slave->id[ATA_ID_UDMA_MODES] & 0xff); mask &= (slave->id[ATA_ID_UDMA_MODES] & 0xff);
if (ata_dma_blacklisted(ap, slave)) { if (ata_dma_blacklisted(slave)) {
mask = 0; mask = 0;
ata_pr_blacklisted(ap, slave); ata_pr_blacklisted(ap, slave);
} }
...@@ -2004,14 +2188,14 @@ static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift) ...@@ -2004,14 +2188,14 @@ static unsigned int ata_get_mode_mask(struct ata_port *ap, int shift)
mask = ap->mwdma_mask; mask = ap->mwdma_mask;
if (ata_dev_present(master)) { if (ata_dev_present(master)) {
mask &= (master->id[ATA_ID_MWDMA_MODES] & 0x07); mask &= (master->id[ATA_ID_MWDMA_MODES] & 0x07);
if (ata_dma_blacklisted(ap, master)) { if (ata_dma_blacklisted(master)) {
mask = 0; mask = 0;
ata_pr_blacklisted(ap, master); ata_pr_blacklisted(ap, master);
} }
} }
if (ata_dev_present(slave)) { if (ata_dev_present(slave)) {
mask &= (slave->id[ATA_ID_MWDMA_MODES] & 0x07); mask &= (slave->id[ATA_ID_MWDMA_MODES] & 0x07);
if (ata_dma_blacklisted(ap, slave)) { if (ata_dma_blacklisted(slave)) {
mask = 0; mask = 0;
ata_pr_blacklisted(ap, slave); ata_pr_blacklisted(ap, slave);
} }
...@@ -2075,7 +2259,7 @@ static int fgb(u32 bitmap) ...@@ -2075,7 +2259,7 @@ static int fgb(u32 bitmap)
* Zero on success, negative on error. * Zero on success, negative on error.
*/ */
static int ata_choose_xfer_mode(struct ata_port *ap, static int ata_choose_xfer_mode(const struct ata_port *ap,
u8 *xfer_mode_out, u8 *xfer_mode_out,
unsigned int *xfer_shift_out) unsigned int *xfer_shift_out)
{ {
...@@ -2143,6 +2327,110 @@ static void ata_dev_set_xfermode(struct ata_port *ap, struct ata_device *dev) ...@@ -2143,6 +2327,110 @@ static void ata_dev_set_xfermode(struct ata_port *ap, struct ata_device *dev)
DPRINTK("EXIT\n"); DPRINTK("EXIT\n");
} }
/**
* ata_dev_reread_id - Reread the device's IDENTIFY DEVICE data
* @ap: port where the device is
* @dev: device whose IDENTIFY data to reread
*
* LOCKING:
*	Kernel thread context (may sleep)
*/
static void ata_dev_reread_id(struct ata_port *ap, struct ata_device *dev)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
unsigned long flags;
int rc;
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
ata_sg_init_one(qc, dev->id, sizeof(dev->id));
qc->dma_dir = DMA_FROM_DEVICE;
if (dev->class == ATA_DEV_ATA) {
qc->tf.command = ATA_CMD_ID_ATA;
DPRINTK("do ATA identify\n");
} else {
qc->tf.command = ATA_CMD_ID_ATAPI;
DPRINTK("do ATAPI identify\n");
}
qc->tf.flags |= ATA_TFLAG_DEVICE;
qc->tf.protocol = ATA_PROT_PIO;
qc->nsect = 1;
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
goto err_out;
wait_for_completion(&wait);
swap_buf_le16(dev->id, ATA_ID_WORDS);
ata_dump_id(dev);
DPRINTK("EXIT\n");
return;
err_out:
ata_port_disable(ap);
}
/**
* ata_dev_init_params - Issue INIT DEV PARAMS command
* @ap: Port associated with device @dev
* @dev: Device to which command will be sent
*
* LOCKING:
*	Kernel thread context (may sleep)
*/
static void ata_dev_init_params(struct ata_port *ap, struct ata_device *dev)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
int rc;
unsigned long flags;
u16 sectors = dev->id[6];
u16 heads = dev->id[3];
/* Number of sectors per track 1-255. Number of heads 1-16 */
if (sectors < 1 || sectors > 255 || heads < 1 || heads > 16)
return;
/* set up init dev params taskfile */
DPRINTK("init dev params \n");
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
qc->tf.command = ATA_CMD_INIT_DEV_PARAMS;
qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
qc->tf.protocol = ATA_PROT_NODATA;
qc->tf.nsect = sectors;
qc->tf.device |= (heads - 1) & 0x0f; /* max head = num. of heads - 1 */
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
ata_port_disable(ap);
else
wait_for_completion(&wait);
DPRINTK("EXIT\n");
}
/** /**
* ata_sg_clean - Unmap DMA memory associated with command * ata_sg_clean - Unmap DMA memory associated with command
* @qc: Command containing DMA memory to be released * @qc: Command containing DMA memory to be released
...@@ -2413,32 +2701,32 @@ void ata_poll_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -2413,32 +2701,32 @@ void ata_poll_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
/** /**
* ata_pio_poll - * ata_pio_poll -
* @ap: * @ap: the target ata_port
* *
* LOCKING: * LOCKING:
* None. (executing in kernel thread context) * None. (executing in kernel thread context)
* *
* RETURNS: * RETURNS:
* * timeout value to use
*/ */
static unsigned long ata_pio_poll(struct ata_port *ap) static unsigned long ata_pio_poll(struct ata_port *ap)
{ {
u8 status; u8 status;
unsigned int poll_state = PIO_ST_UNKNOWN; unsigned int poll_state = HSM_ST_UNKNOWN;
unsigned int reg_state = PIO_ST_UNKNOWN; unsigned int reg_state = HSM_ST_UNKNOWN;
const unsigned int tmout_state = PIO_ST_TMOUT; const unsigned int tmout_state = HSM_ST_TMOUT;
switch (ap->pio_task_state) { switch (ap->hsm_task_state) {
case PIO_ST: case HSM_ST:
case PIO_ST_POLL: case HSM_ST_POLL:
poll_state = PIO_ST_POLL; poll_state = HSM_ST_POLL;
reg_state = PIO_ST; reg_state = HSM_ST;
break; break;
case PIO_ST_LAST: case HSM_ST_LAST:
case PIO_ST_LAST_POLL: case HSM_ST_LAST_POLL:
poll_state = PIO_ST_LAST_POLL; poll_state = HSM_ST_LAST_POLL;
reg_state = PIO_ST_LAST; reg_state = HSM_ST_LAST;
break; break;
default: default:
BUG(); BUG();
...@@ -2448,20 +2736,20 @@ static unsigned long ata_pio_poll(struct ata_port *ap) ...@@ -2448,20 +2736,20 @@ static unsigned long ata_pio_poll(struct ata_port *ap)
status = ata_chk_status(ap); status = ata_chk_status(ap);
if (status & ATA_BUSY) { if (status & ATA_BUSY) {
if (time_after(jiffies, ap->pio_task_timeout)) { if (time_after(jiffies, ap->pio_task_timeout)) {
ap->pio_task_state = tmout_state; ap->hsm_task_state = tmout_state;
return 0; return 0;
} }
ap->pio_task_state = poll_state; ap->hsm_task_state = poll_state;
return ATA_SHORT_PAUSE; return ATA_SHORT_PAUSE;
} }
ap->pio_task_state = reg_state; ap->hsm_task_state = reg_state;
return 0; return 0;
} }
/** /**
* ata_pio_complete - * ata_pio_complete - check if drive is busy or idle
* @ap: * @ap: the target ata_port
* *
* LOCKING: * LOCKING:
* None. (executing in kernel thread context) * None. (executing in kernel thread context)
...@@ -2480,14 +2768,14 @@ static int ata_pio_complete (struct ata_port *ap) ...@@ -2480,14 +2768,14 @@ static int ata_pio_complete (struct ata_port *ap)
* we enter, BSY will be cleared in a chk-status or two. If not, * we enter, BSY will be cleared in a chk-status or two. If not,
* the drive is probably seeking or something. Snooze for a couple * the drive is probably seeking or something. Snooze for a couple
* msecs, then chk-status again. If still busy, fall back to * msecs, then chk-status again. If still busy, fall back to
* PIO_ST_POLL state. * HSM_ST_POLL state.
*/ */
drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10); drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10);
if (drv_stat & (ATA_BUSY | ATA_DRQ)) { if (drv_stat & (ATA_BUSY | ATA_DRQ)) {
msleep(2); msleep(2);
drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10); drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 10);
if (drv_stat & (ATA_BUSY | ATA_DRQ)) { if (drv_stat & (ATA_BUSY | ATA_DRQ)) {
ap->pio_task_state = PIO_ST_LAST_POLL; ap->hsm_task_state = HSM_ST_LAST_POLL;
ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO;
return 0; return 0;
} }
...@@ -2495,14 +2783,14 @@ static int ata_pio_complete (struct ata_port *ap) ...@@ -2495,14 +2783,14 @@ static int ata_pio_complete (struct ata_port *ap)
drv_stat = ata_wait_idle(ap); drv_stat = ata_wait_idle(ap);
if (!ata_ok(drv_stat)) { if (!ata_ok(drv_stat)) {
ap->pio_task_state = PIO_ST_ERR; ap->hsm_task_state = HSM_ST_ERR;
return 0; return 0;
} }
qc = ata_qc_from_tag(ap, ap->active_tag); qc = ata_qc_from_tag(ap, ap->active_tag);
assert(qc != NULL); assert(qc != NULL);
ap->pio_task_state = PIO_ST_IDLE; ap->hsm_task_state = HSM_ST_IDLE;
ata_poll_qc_complete(qc, drv_stat); ata_poll_qc_complete(qc, drv_stat);
...@@ -2513,7 +2801,7 @@ static int ata_pio_complete (struct ata_port *ap) ...@@ -2513,7 +2801,7 @@ static int ata_pio_complete (struct ata_port *ap)
/** /**
* swap_buf_le16 - * swap_buf_le16 - swap halves of 16-bit words in place
* @buf: Buffer to swap * @buf: Buffer to swap
* @buf_words: Number of 16-bit words in buffer. * @buf_words: Number of 16-bit words in buffer.
* *
...@@ -2522,6 +2810,7 @@ static int ata_pio_complete (struct ata_port *ap) ...@@ -2522,6 +2810,7 @@ static int ata_pio_complete (struct ata_port *ap)
* vice-versa. * vice-versa.
* *
* LOCKING: * LOCKING:
* Inherited from caller.
*/ */
void swap_buf_le16(u16 *buf, unsigned int buf_words) void swap_buf_le16(u16 *buf, unsigned int buf_words)
{ {
...@@ -2544,7 +2833,6 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words) ...@@ -2544,7 +2833,6 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
* *
* LOCKING: * LOCKING:
* Inherited from caller. * Inherited from caller.
*
*/ */
static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf,
...@@ -2590,7 +2878,6 @@ static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, ...@@ -2590,7 +2878,6 @@ static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf,
* *
* LOCKING: * LOCKING:
* Inherited from caller. * Inherited from caller.
*
*/ */
static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf,
...@@ -2630,7 +2917,6 @@ static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, ...@@ -2630,7 +2917,6 @@ static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf,
* *
* LOCKING: * LOCKING:
* Inherited from caller. * Inherited from caller.
*
*/ */
static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, static void ata_data_xfer(struct ata_port *ap, unsigned char *buf,
...@@ -2662,7 +2948,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc) ...@@ -2662,7 +2948,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
unsigned char *buf; unsigned char *buf;
if (qc->cursect == (qc->nsect - 1)) if (qc->cursect == (qc->nsect - 1))
ap->pio_task_state = PIO_ST_LAST; ap->hsm_task_state = HSM_ST_LAST;
page = sg[qc->cursg].page; page = sg[qc->cursg].page;
offset = sg[qc->cursg].offset + qc->cursg_ofs * ATA_SECT_SIZE; offset = sg[qc->cursg].offset + qc->cursg_ofs * ATA_SECT_SIZE;
...@@ -2712,7 +2998,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) ...@@ -2712,7 +2998,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
unsigned int offset, count; unsigned int offset, count;
if (qc->curbytes + bytes >= qc->nbytes) if (qc->curbytes + bytes >= qc->nbytes)
ap->pio_task_state = PIO_ST_LAST; ap->hsm_task_state = HSM_ST_LAST;
next_sg: next_sg:
if (unlikely(qc->cursg >= qc->n_elem)) { if (unlikely(qc->cursg >= qc->n_elem)) {
...@@ -2734,7 +3020,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) ...@@ -2734,7 +3020,7 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
for (i = 0; i < words; i++) for (i = 0; i < words; i++)
ata_data_xfer(ap, (unsigned char*)pad_buf, 2, do_write); ata_data_xfer(ap, (unsigned char*)pad_buf, 2, do_write);
ap->pio_task_state = PIO_ST_LAST; ap->hsm_task_state = HSM_ST_LAST;
return; return;
} }
...@@ -2783,7 +3069,6 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes) ...@@ -2783,7 +3069,6 @@ static void __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
* *
* LOCKING: * LOCKING:
* Inherited from caller. * Inherited from caller.
*
*/ */
static void atapi_pio_bytes(struct ata_queued_cmd *qc) static void atapi_pio_bytes(struct ata_queued_cmd *qc)
...@@ -2815,12 +3100,12 @@ static void atapi_pio_bytes(struct ata_queued_cmd *qc) ...@@ -2815,12 +3100,12 @@ static void atapi_pio_bytes(struct ata_queued_cmd *qc)
err_out: err_out:
printk(KERN_INFO "ata%u: dev %u: ATAPI check failed\n", printk(KERN_INFO "ata%u: dev %u: ATAPI check failed\n",
ap->id, dev->devno); ap->id, dev->devno);
ap->pio_task_state = PIO_ST_ERR; ap->hsm_task_state = HSM_ST_ERR;
} }
/** /**
* ata_pio_sector - * ata_pio_block - start PIO on a block
* @ap: * @ap: the target ata_port
* *
* LOCKING: * LOCKING:
* None. (executing in kernel thread context) * None. (executing in kernel thread context)
...@@ -2832,19 +3117,19 @@ static void ata_pio_block(struct ata_port *ap) ...@@ -2832,19 +3117,19 @@ static void ata_pio_block(struct ata_port *ap)
u8 status; u8 status;
/* /*
* This is purely hueristic. This is a fast path. * This is purely heuristic. This is a fast path.
* Sometimes when we enter, BSY will be cleared in * Sometimes when we enter, BSY will be cleared in
* a chk-status or two. If not, the drive is probably seeking * a chk-status or two. If not, the drive is probably seeking
* or something. Snooze for a couple msecs, then * or something. Snooze for a couple msecs, then
* chk-status again. If still busy, fall back to * chk-status again. If still busy, fall back to
* PIO_ST_POLL state. * HSM_ST_POLL state.
*/ */
status = ata_busy_wait(ap, ATA_BUSY, 5); status = ata_busy_wait(ap, ATA_BUSY, 5);
if (status & ATA_BUSY) { if (status & ATA_BUSY) {
msleep(2); msleep(2);
status = ata_busy_wait(ap, ATA_BUSY, 10); status = ata_busy_wait(ap, ATA_BUSY, 10);
if (status & ATA_BUSY) { if (status & ATA_BUSY) {
ap->pio_task_state = PIO_ST_POLL; ap->hsm_task_state = HSM_ST_POLL;
ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO;
return; return;
} }
...@@ -2856,7 +3141,7 @@ static void ata_pio_block(struct ata_port *ap) ...@@ -2856,7 +3141,7 @@ static void ata_pio_block(struct ata_port *ap)
if (is_atapi_taskfile(&qc->tf)) { if (is_atapi_taskfile(&qc->tf)) {
/* no more data to transfer or unsupported ATAPI command */ /* no more data to transfer or unsupported ATAPI command */
if ((status & ATA_DRQ) == 0) { if ((status & ATA_DRQ) == 0) {
ap->pio_task_state = PIO_ST_LAST; ap->hsm_task_state = HSM_ST_LAST;
return; return;
} }
...@@ -2864,7 +3149,7 @@ static void ata_pio_block(struct ata_port *ap) ...@@ -2864,7 +3149,7 @@ static void ata_pio_block(struct ata_port *ap)
} else { } else {
/* handle BSY=0, DRQ=0 as error */ /* handle BSY=0, DRQ=0 as error */
if ((status & ATA_DRQ) == 0) { if ((status & ATA_DRQ) == 0) {
ap->pio_task_state = PIO_ST_ERR; ap->hsm_task_state = HSM_ST_ERR;
return; return;
} }
...@@ -2884,7 +3169,7 @@ static void ata_pio_error(struct ata_port *ap) ...@@ -2884,7 +3169,7 @@ static void ata_pio_error(struct ata_port *ap)
printk(KERN_WARNING "ata%u: PIO error, drv_stat 0x%x\n", printk(KERN_WARNING "ata%u: PIO error, drv_stat 0x%x\n",
ap->id, drv_stat); ap->id, drv_stat);
ap->pio_task_state = PIO_ST_IDLE; ap->hsm_task_state = HSM_ST_IDLE;
ata_poll_qc_complete(qc, drv_stat | ATA_ERR); ata_poll_qc_complete(qc, drv_stat | ATA_ERR);
} }
...@@ -2899,25 +3184,25 @@ static void ata_pio_task(void *_data) ...@@ -2899,25 +3184,25 @@ static void ata_pio_task(void *_data)
timeout = 0; timeout = 0;
qc_completed = 0; qc_completed = 0;
switch (ap->pio_task_state) { switch (ap->hsm_task_state) {
case PIO_ST_IDLE: case HSM_ST_IDLE:
return; return;
case PIO_ST: case HSM_ST:
ata_pio_block(ap); ata_pio_block(ap);
break; break;
case PIO_ST_LAST: case HSM_ST_LAST:
qc_completed = ata_pio_complete(ap); qc_completed = ata_pio_complete(ap);
break; break;
case PIO_ST_POLL: case HSM_ST_POLL:
case PIO_ST_LAST_POLL: case HSM_ST_LAST_POLL:
timeout = ata_pio_poll(ap); timeout = ata_pio_poll(ap);
break; break;
case PIO_ST_TMOUT: case HSM_ST_TMOUT:
case PIO_ST_ERR: case HSM_ST_ERR:
ata_pio_error(ap); ata_pio_error(ap);
return; return;
} }
...@@ -2928,52 +3213,6 @@ static void ata_pio_task(void *_data) ...@@ -2928,52 +3213,6 @@ static void ata_pio_task(void *_data)
goto fsm_start; goto fsm_start;
} }
static void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
struct scsi_cmnd *cmd)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
unsigned long flags;
int rc;
DPRINTK("ATAPI request sense\n");
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
/* FIXME: is this needed? */
memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
qc->dma_dir = DMA_FROM_DEVICE;
memset(&qc->cdb, 0, ap->cdb_len);
qc->cdb[0] = REQUEST_SENSE;
qc->cdb[4] = SCSI_SENSE_BUFFERSIZE;
qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
qc->tf.command = ATA_CMD_PACKET;
qc->tf.protocol = ATA_PROT_ATAPI;
qc->tf.lbam = (8 * 1024) & 0xff;
qc->tf.lbah = (8 * 1024) >> 8;
qc->nbytes = SCSI_SENSE_BUFFERSIZE;
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
ata_port_disable(ap);
else
wait_for_completion(&wait);
DPRINTK("EXIT\n");
}
/** /**
* ata_qc_timeout - Handle timeout of queued command * ata_qc_timeout - Handle timeout of queued command
* @qc: Command that timed out * @qc: Command that timed out
...@@ -3091,14 +3330,14 @@ void ata_eng_timeout(struct ata_port *ap) ...@@ -3091,14 +3330,14 @@ void ata_eng_timeout(struct ata_port *ap)
DPRINTK("ENTER\n"); DPRINTK("ENTER\n");
qc = ata_qc_from_tag(ap, ap->active_tag); qc = ata_qc_from_tag(ap, ap->active_tag);
if (!qc) { if (qc)
ata_qc_timeout(qc);
else {
printk(KERN_ERR "ata%u: BUG: timeout without command\n", printk(KERN_ERR "ata%u: BUG: timeout without command\n",
ap->id); ap->id);
goto out; goto out;
} }
ata_qc_timeout(qc);
out: out:
DPRINTK("EXIT\n"); DPRINTK("EXIT\n");
} }
...@@ -3155,15 +3394,12 @@ struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, ...@@ -3155,15 +3394,12 @@ struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap,
qc->nbytes = qc->curbytes = 0; qc->nbytes = qc->curbytes = 0;
ata_tf_init(ap, &qc->tf, dev->devno); ata_tf_init(ap, &qc->tf, dev->devno);
if (dev->flags & ATA_DFLAG_LBA48)
qc->tf.flags |= ATA_TFLAG_LBA48;
} }
return qc; return qc;
} }
static int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat) int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat)
{ {
return 0; return 0;
} }
...@@ -3201,7 +3437,6 @@ static void __ata_qc_complete(struct ata_queued_cmd *qc) ...@@ -3201,7 +3437,6 @@ static void __ata_qc_complete(struct ata_queued_cmd *qc)
* *
* LOCKING: * LOCKING:
* spin_lock_irqsave(host_set lock) * spin_lock_irqsave(host_set lock)
*
*/ */
void ata_qc_free(struct ata_queued_cmd *qc) void ata_qc_free(struct ata_queued_cmd *qc)
{ {
...@@ -3221,7 +3456,6 @@ void ata_qc_free(struct ata_queued_cmd *qc) ...@@ -3221,7 +3456,6 @@ void ata_qc_free(struct ata_queued_cmd *qc)
* *
* LOCKING: * LOCKING:
* spin_lock_irqsave(host_set lock) * spin_lock_irqsave(host_set lock)
*
*/ */
void ata_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat) void ata_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
...@@ -3360,7 +3594,7 @@ int ata_qc_issue_prot(struct ata_queued_cmd *qc) ...@@ -3360,7 +3594,7 @@ int ata_qc_issue_prot(struct ata_queued_cmd *qc)
case ATA_PROT_PIO: /* load tf registers, initiate polling pio */ case ATA_PROT_PIO: /* load tf registers, initiate polling pio */
ata_qc_set_polling(qc); ata_qc_set_polling(qc);
ata_tf_to_host_nolock(ap, &qc->tf); ata_tf_to_host_nolock(ap, &qc->tf);
ap->pio_task_state = PIO_ST; ap->hsm_task_state = HSM_ST;
queue_work(ata_wq, &ap->pio_task); queue_work(ata_wq, &ap->pio_task);
break; break;
...@@ -3586,7 +3820,7 @@ u8 ata_bmdma_status(struct ata_port *ap) ...@@ -3586,7 +3820,7 @@ u8 ata_bmdma_status(struct ata_port *ap)
void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr; void __iomem *mmio = (void __iomem *) ap->ioaddr.bmdma_addr;
host_stat = readb(mmio + ATA_DMA_STATUS); host_stat = readb(mmio + ATA_DMA_STATUS);
} else } else
host_stat = inb(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS); host_stat = inb(ap->ioaddr.bmdma_addr + ATA_DMA_STATUS);
return host_stat; return host_stat;
} }
...@@ -3715,7 +3949,6 @@ inline unsigned int ata_host_intr (struct ata_port *ap, ...@@ -3715,7 +3949,6 @@ inline unsigned int ata_host_intr (struct ata_port *ap,
* *
* RETURNS: * RETURNS:
* IRQ_NONE or IRQ_HANDLED. * IRQ_NONE or IRQ_HANDLED.
*
*/ */
irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs) irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
...@@ -3806,7 +4039,7 @@ static void atapi_packet_task(void *_data) ...@@ -3806,7 +4039,7 @@ static void atapi_packet_task(void *_data)
ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1); ata_data_xfer(ap, qc->cdb, ap->cdb_len, 1);
/* PIO commands are handled by polling */ /* PIO commands are handled by polling */
ap->pio_task_state = PIO_ST; ap->hsm_task_state = HSM_ST;
queue_work(ata_wq, &ap->pio_task); queue_work(ata_wq, &ap->pio_task);
} }
...@@ -3827,6 +4060,7 @@ static void atapi_packet_task(void *_data) ...@@ -3827,6 +4060,7 @@ static void atapi_packet_task(void *_data)
* May be used as the port_start() entry in ata_port_operations. * May be used as the port_start() entry in ata_port_operations.
* *
* LOCKING: * LOCKING:
* Inherited from caller.
*/ */
int ata_port_start (struct ata_port *ap) int ata_port_start (struct ata_port *ap)
...@@ -3852,6 +4086,7 @@ int ata_port_start (struct ata_port *ap) ...@@ -3852,6 +4086,7 @@ int ata_port_start (struct ata_port *ap)
* May be used as the port_stop() entry in ata_port_operations. * May be used as the port_stop() entry in ata_port_operations.
* *
* LOCKING: * LOCKING:
* Inherited from caller.
*/ */
void ata_port_stop (struct ata_port *ap) void ata_port_stop (struct ata_port *ap)
...@@ -3874,6 +4109,7 @@ void ata_host_stop (struct ata_host_set *host_set) ...@@ -3874,6 +4109,7 @@ void ata_host_stop (struct ata_host_set *host_set)
* @do_unregister: 1 if we fully unregister, 0 to just stop the port * @do_unregister: 1 if we fully unregister, 0 to just stop the port
* *
* LOCKING: * LOCKING:
* Inherited from caller.
*/ */
static void ata_host_remove(struct ata_port *ap, unsigned int do_unregister) static void ata_host_remove(struct ata_port *ap, unsigned int do_unregister)
...@@ -3901,12 +4137,11 @@ static void ata_host_remove(struct ata_port *ap, unsigned int do_unregister) ...@@ -3901,12 +4137,11 @@ static void ata_host_remove(struct ata_port *ap, unsigned int do_unregister)
* *
* LOCKING: * LOCKING:
* Inherited from caller. * Inherited from caller.
*
*/ */
static void ata_host_init(struct ata_port *ap, struct Scsi_Host *host, static void ata_host_init(struct ata_port *ap, struct Scsi_Host *host,
struct ata_host_set *host_set, struct ata_host_set *host_set,
struct ata_probe_ent *ent, unsigned int port_no) const struct ata_probe_ent *ent, unsigned int port_no)
{ {
unsigned int i; unsigned int i;
...@@ -3962,10 +4197,9 @@ static void ata_host_init(struct ata_port *ap, struct Scsi_Host *host, ...@@ -3962,10 +4197,9 @@ static void ata_host_init(struct ata_port *ap, struct Scsi_Host *host,
* *
* RETURNS: * RETURNS:
* New ata_port on success, NULL on error. * New ata_port on success, NULL on error.
*
*/ */
static struct ata_port * ata_host_add(struct ata_probe_ent *ent, static struct ata_port * ata_host_add(const struct ata_probe_ent *ent,
struct ata_host_set *host_set, struct ata_host_set *host_set,
unsigned int port_no) unsigned int port_no)
{ {
...@@ -4010,10 +4244,9 @@ static struct ata_port * ata_host_add(struct ata_probe_ent *ent, ...@@ -4010,10 +4244,9 @@ static struct ata_port * ata_host_add(struct ata_probe_ent *ent,
* *
* RETURNS: * RETURNS:
* Number of ports registered. Zero on error (no ports registered). * Number of ports registered. Zero on error (no ports registered).
*
*/ */
int ata_device_add(struct ata_probe_ent *ent) int ata_device_add(const struct ata_probe_ent *ent)
{ {
unsigned int count = 0, i; unsigned int count = 0, i;
struct device *dev = ent->dev; struct device *dev = ent->dev;
...@@ -4113,7 +4346,7 @@ int ata_device_add(struct ata_probe_ent *ent) ...@@ -4113,7 +4346,7 @@ int ata_device_add(struct ata_probe_ent *ent)
for (i = 0; i < count; i++) { for (i = 0; i < count; i++) {
struct ata_port *ap = host_set->ports[i]; struct ata_port *ap = host_set->ports[i];
scsi_scan_host(ap->host); ata_scsi_scan_host(ap);
} }
dev_set_drvdata(dev, host_set); dev_set_drvdata(dev, host_set);
...@@ -4142,7 +4375,6 @@ int ata_device_add(struct ata_probe_ent *ent) ...@@ -4142,7 +4375,6 @@ int ata_device_add(struct ata_probe_ent *ent)
* Inherited from calling layer (may sleep). * Inherited from calling layer (may sleep).
*/ */
void ata_host_set_remove(struct ata_host_set *host_set) void ata_host_set_remove(struct ata_host_set *host_set)
{ {
struct ata_port *ap; struct ata_port *ap;
...@@ -4232,7 +4464,7 @@ void ata_std_ports(struct ata_ioports *ioaddr) ...@@ -4232,7 +4464,7 @@ void ata_std_ports(struct ata_ioports *ioaddr)
} }
static struct ata_probe_ent * static struct ata_probe_ent *
ata_probe_ent_alloc(struct device *dev, struct ata_port_info *port) ata_probe_ent_alloc(struct device *dev, const struct ata_port_info *port)
{ {
struct ata_probe_ent *probe_ent; struct ata_probe_ent *probe_ent;
...@@ -4273,85 +4505,86 @@ void ata_pci_host_stop (struct ata_host_set *host_set) ...@@ -4273,85 +4505,86 @@ void ata_pci_host_stop (struct ata_host_set *host_set)
* ata_pci_init_native_mode - Initialize native-mode driver * ata_pci_init_native_mode - Initialize native-mode driver
* @pdev: pci device to be initialized * @pdev: pci device to be initialized
* @port: array[2] of pointers to port info structures. * @port: array[2] of pointers to port info structures.
* @ports: bitmap of ports present
* *
* Utility function which allocates and initializes an * Utility function which allocates and initializes an
* ata_probe_ent structure for a standard dual-port * ata_probe_ent structure for a standard dual-port
* PIO-based IDE controller. The returned ata_probe_ent * PIO-based IDE controller. The returned ata_probe_ent
* structure can be passed to ata_device_add(). The returned * structure can be passed to ata_device_add(). The returned
* ata_probe_ent structure should then be freed with kfree(). * ata_probe_ent structure should then be freed with kfree().
*
* The caller need only pass the address of the primary port; the
* secondary will be deduced automatically. If the device has
* non-standard secondary port mappings this function can be called
* twice, once for each interface.
*/ */
struct ata_probe_ent * struct ata_probe_ent *
ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port) ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port, int ports)
{ {
struct ata_probe_ent *probe_ent = struct ata_probe_ent *probe_ent =
ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]); ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]);
int p = 0;
if (!probe_ent) if (!probe_ent)
return NULL; return NULL;
probe_ent->n_ports = 2;
probe_ent->irq = pdev->irq; probe_ent->irq = pdev->irq;
probe_ent->irq_flags = SA_SHIRQ; probe_ent->irq_flags = SA_SHIRQ;
probe_ent->port[0].cmd_addr = pci_resource_start(pdev, 0); if (ports & ATA_PORT_PRIMARY) {
probe_ent->port[0].altstatus_addr = probe_ent->port[p].cmd_addr = pci_resource_start(pdev, 0);
probe_ent->port[0].ctl_addr = probe_ent->port[p].altstatus_addr =
pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS; probe_ent->port[p].ctl_addr =
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4); pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS;
probe_ent->port[p].bmdma_addr = pci_resource_start(pdev, 4);
probe_ent->port[1].cmd_addr = pci_resource_start(pdev, 2); ata_std_ports(&probe_ent->port[p]);
probe_ent->port[1].altstatus_addr = p++;
probe_ent->port[1].ctl_addr = }
pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS;
probe_ent->port[1].bmdma_addr = pci_resource_start(pdev, 4) + 8;
ata_std_ports(&probe_ent->port[0]); if (ports & ATA_PORT_SECONDARY) {
ata_std_ports(&probe_ent->port[1]); probe_ent->port[p].cmd_addr = pci_resource_start(pdev, 2);
probe_ent->port[p].altstatus_addr =
probe_ent->port[p].ctl_addr =
pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS;
probe_ent->port[p].bmdma_addr = pci_resource_start(pdev, 4) + 8;
ata_std_ports(&probe_ent->port[p]);
p++;
}
probe_ent->n_ports = p;
return probe_ent; return probe_ent;
} }
static struct ata_probe_ent * static struct ata_probe_ent *ata_pci_init_legacy_port(struct pci_dev *pdev, struct ata_port_info **port, int port_num)
ata_pci_init_legacy_mode(struct pci_dev *pdev, struct ata_port_info **port,
struct ata_probe_ent **ppe2)
{ {
struct ata_probe_ent *probe_ent, *probe_ent2; struct ata_probe_ent *probe_ent;
probe_ent = ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]); probe_ent = ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]);
if (!probe_ent) if (!probe_ent)
return NULL; return NULL;
probe_ent2 = ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[1]);
if (!probe_ent2) {
kfree(probe_ent);
return NULL;
}
probe_ent->n_ports = 1;
probe_ent->irq = 14;
probe_ent->hard_port_no = 0;
probe_ent->legacy_mode = 1; probe_ent->legacy_mode = 1;
probe_ent->n_ports = 1;
probe_ent2->n_ports = 1; probe_ent->hard_port_no = port_num;
probe_ent2->irq = 15;
switch(port_num)
probe_ent2->hard_port_no = 1; {
probe_ent2->legacy_mode = 1; case 0:
probe_ent->irq = 14;
probe_ent->port[0].cmd_addr = 0x1f0; probe_ent->port[0].cmd_addr = 0x1f0;
probe_ent->port[0].altstatus_addr = probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr = 0x3f6; probe_ent->port[0].ctl_addr = 0x3f6;
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4); break;
case 1:
probe_ent2->port[0].cmd_addr = 0x170; probe_ent->irq = 15;
probe_ent2->port[0].altstatus_addr = probe_ent->port[0].cmd_addr = 0x170;
probe_ent2->port[0].ctl_addr = 0x376; probe_ent->port[0].altstatus_addr =
probe_ent2->port[0].bmdma_addr = pci_resource_start(pdev, 4)+8; probe_ent->port[0].ctl_addr = 0x376;
break;
}
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4) + 8 * port_num;
ata_std_ports(&probe_ent->port[0]); ata_std_ports(&probe_ent->port[0]);
ata_std_ports(&probe_ent2->port[0]);
*ppe2 = probe_ent2;
return probe_ent; return probe_ent;
} }
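The helper hard-codes the ISA-compatible decode: the primary channel lives at 0x1f0/0x3f6 on IRQ 14, the secondary at 0x170/0x376 on IRQ 15, and each channel's BMDMA registers sit 8 bytes apart in BAR 4. A standalone sketch that prints this layout (the BAR 4 base address is a made-up example):

#include <stdio.h>

int main(void)
{
	/* Legacy ISA decode assumed by ata_pci_init_legacy_port() */
	static const struct { int irq; unsigned cmd, ctl; } legacy[2] = {
		{ 14, 0x1f0, 0x3f6 },	/* port 0: primary   */
		{ 15, 0x170, 0x376 },	/* port 1: secondary */
	};
	unsigned long bmdma_base = 0xc000;	/* hypothetical BAR 4 */
	int p;

	for (p = 0; p < 2; p++)
		printf("port %d: irq %d cmd %#x ctl %#x bmdma %#lx\n",
		       p, legacy[p].irq, legacy[p].cmd, legacy[p].ctl,
		       bmdma_base + 8 * p);
	return 0;
}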
...@@ -4374,13 +4607,12 @@ ata_pci_init_legacy_mode(struct pci_dev *pdev, struct ata_port_info **port, ...@@ -4374,13 +4607,12 @@ ata_pci_init_legacy_mode(struct pci_dev *pdev, struct ata_port_info **port,
* *
* RETURNS: * RETURNS:
* Zero on success, negative errno-based value on error. * Zero on success, negative errno-based value on error.
*
*/ */
int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
unsigned int n_ports) unsigned int n_ports)
{ {
struct ata_probe_ent *probe_ent, *probe_ent2 = NULL; struct ata_probe_ent *probe_ent = NULL, *probe_ent2 = NULL;
struct ata_port_info *port[2]; struct ata_port_info *port[2];
u8 tmp8, mask; u8 tmp8, mask;
unsigned int legacy_mode = 0; unsigned int legacy_mode = 0;
...@@ -4397,7 +4629,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, ...@@ -4397,7 +4629,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
if ((port[0]->host_flags & ATA_FLAG_NO_LEGACY) == 0 if ((port[0]->host_flags & ATA_FLAG_NO_LEGACY) == 0
&& (pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) { && (pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
/* TODO: support transitioning to native mode? */ /* TODO: What if one channel is in native mode ... */
pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8); pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
mask = (1 << 2) | (1 << 0); mask = (1 << 2) | (1 << 0);
if ((tmp8 & mask) != mask) if ((tmp8 & mask) != mask)
...@@ -4405,11 +4637,20 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, ...@@ -4405,11 +4637,20 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
} }
/* FIXME... */ /* FIXME... */
if ((!legacy_mode) && (n_ports > 1)) { if ((!legacy_mode) && (n_ports > 2)) {
printk(KERN_ERR "ata: BUG: native mode, n_ports > 1\n"); printk(KERN_ERR "ata: BUG: native mode, n_ports > 2\n");
return -EINVAL; n_ports = 2;
/* For now */
} }
/* FIXME: Really, for ATA it isn't safe, because the device may be
   multi-purpose and we want to leave it alone if it was already
   enabled. Secondly, for shared use, as Arjan says, we want
   refcounting. Checking dev->is_enabled is insufficient, as it is
   not set at boot for the primary video, which is BIOS-enabled.
 */
rc = pci_enable_device(pdev); rc = pci_enable_device(pdev);
if (rc) if (rc)
return rc; return rc;
...@@ -4420,6 +4661,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, ...@@ -4420,6 +4661,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
goto err_out; goto err_out;
} }
/* FIXME: Should use platform specific mappers for legacy port ranges */
if (legacy_mode) { if (legacy_mode) {
if (!request_region(0x1f0, 8, "libata")) { if (!request_region(0x1f0, 8, "libata")) {
struct resource *conflict, res; struct resource *conflict, res;
...@@ -4464,10 +4706,17 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, ...@@ -4464,10 +4706,17 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
goto err_out_regions; goto err_out_regions;
if (legacy_mode) { if (legacy_mode) {
probe_ent = ata_pci_init_legacy_mode(pdev, port, &probe_ent2); if (legacy_mode & (1 << 0))
} else probe_ent = ata_pci_init_legacy_port(pdev, port, 0);
probe_ent = ata_pci_init_native_mode(pdev, port); if (legacy_mode & (1 << 1))
if (!probe_ent) { probe_ent2 = ata_pci_init_legacy_port(pdev, port, 1);
} else {
if (n_ports == 2)
probe_ent = ata_pci_init_native_mode(pdev, port, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
else
probe_ent = ata_pci_init_native_mode(pdev, port, ATA_PORT_PRIMARY);
}
if (!probe_ent && !probe_ent2) {
rc = -ENOMEM; rc = -ENOMEM;
goto err_out_regions; goto err_out_regions;
} }
...@@ -4505,7 +4754,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info, ...@@ -4505,7 +4754,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
* @pdev: PCI device that was removed * @pdev: PCI device that was removed
* *
* PCI layer indicates to libata via this hook that * PCI layer indicates to libata via this hook that
* hot-unplug or module unload event has occured. * hot-unplug or module unload event has occurred.
* Handle this by unregistering all objects associated * Handle this by unregistering all objects associated
* with this PCI device. Free those objects. Then finally * with this PCI device. Free those objects. Then finally
* release PCI resources and disable device. * release PCI resources and disable device.
...@@ -4526,7 +4775,7 @@ void ata_pci_remove_one (struct pci_dev *pdev) ...@@ -4526,7 +4775,7 @@ void ata_pci_remove_one (struct pci_dev *pdev)
} }
/* move to PCI subsystem */ /* move to PCI subsystem */
int pci_test_config_bits(struct pci_dev *pdev, struct pci_bits *bits) int pci_test_config_bits(struct pci_dev *pdev, const struct pci_bits *bits)
{ {
unsigned long tmp = 0; unsigned long tmp = 0;
...@@ -4579,6 +4828,27 @@ static void __exit ata_exit(void) ...@@ -4579,6 +4828,27 @@ static void __exit ata_exit(void)
module_init(ata_init); module_init(ata_init);
module_exit(ata_exit); module_exit(ata_exit);
static unsigned long ratelimit_time;
static spinlock_t ata_ratelimit_lock = SPIN_LOCK_UNLOCKED;
int ata_ratelimit(void)
{
int rc;
unsigned long flags;
spin_lock_irqsave(&ata_ratelimit_lock, flags);
if (time_after(jiffies, ratelimit_time)) {
rc = 1;
ratelimit_time = jiffies + (HZ/5);
} else
rc = 0;
spin_unlock_irqrestore(&ata_ratelimit_lock, flags);
return rc;
}
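ata_ratelimit() opens one reporting window every HZ/5 jiffies (200 ms), so a caller can guard noisy diagnostics with something like 'if (ata_ratelimit()) printk(...)'. A userspace analogue of the same pattern, sketched for a single thread so the spinlock is omitted:

#include <stdio.h>
#include <time.h>

/* Allow at most one message per 200 ms window, like jiffies + HZ/5 */
static int ratelimit(void)
{
	static struct timespec next;	/* zero-initialized: first call passes */
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	if (now.tv_sec > next.tv_sec ||
	    (now.tv_sec == next.tv_sec && now.tv_nsec >= next.tv_nsec)) {
		next = now;
		next.tv_nsec += 200 * 1000000L;
		if (next.tv_nsec >= 1000000000L) {
			next.tv_sec++;
			next.tv_nsec -= 1000000000L;
		}
		return 1;
	}
	return 0;
}

int main(void)
{
	int i;

	for (i = 0; i < 1000000; i++)
		if (ratelimit())
			printf("message %d\n", i); /* only a few get through */
	return 0;
}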
/* /*
* libata is essentially a library of internal helper functions for * libata is essentially a library of internal helper functions for
* low-level ATA host controller drivers. As such, the API/ABI is * low-level ATA host controller drivers. As such, the API/ABI is
...@@ -4620,6 +4890,7 @@ EXPORT_SYMBOL_GPL(sata_phy_reset); ...@@ -4620,6 +4890,7 @@ EXPORT_SYMBOL_GPL(sata_phy_reset);
EXPORT_SYMBOL_GPL(__sata_phy_reset); EXPORT_SYMBOL_GPL(__sata_phy_reset);
EXPORT_SYMBOL_GPL(ata_bus_reset); EXPORT_SYMBOL_GPL(ata_bus_reset);
EXPORT_SYMBOL_GPL(ata_port_disable); EXPORT_SYMBOL_GPL(ata_port_disable);
EXPORT_SYMBOL_GPL(ata_ratelimit);
EXPORT_SYMBOL_GPL(ata_scsi_ioctl); EXPORT_SYMBOL_GPL(ata_scsi_ioctl);
EXPORT_SYMBOL_GPL(ata_scsi_queuecmd); EXPORT_SYMBOL_GPL(ata_scsi_queuecmd);
EXPORT_SYMBOL_GPL(ata_scsi_error); EXPORT_SYMBOL_GPL(ata_scsi_error);
...@@ -4631,6 +4902,9 @@ EXPORT_SYMBOL_GPL(ata_dev_id_string); ...@@ -4631,6 +4902,9 @@ EXPORT_SYMBOL_GPL(ata_dev_id_string);
EXPORT_SYMBOL_GPL(ata_dev_config); EXPORT_SYMBOL_GPL(ata_dev_config);
EXPORT_SYMBOL_GPL(ata_scsi_simulate); EXPORT_SYMBOL_GPL(ata_scsi_simulate);
EXPORT_SYMBOL_GPL(ata_timing_compute);
EXPORT_SYMBOL_GPL(ata_timing_merge);
#ifdef CONFIG_PCI #ifdef CONFIG_PCI
EXPORT_SYMBOL_GPL(pci_test_config_bits); EXPORT_SYMBOL_GPL(pci_test_config_bits);
EXPORT_SYMBOL_GPL(ata_pci_host_stop); EXPORT_SYMBOL_GPL(ata_pci_host_stop);
......
...@@ -44,11 +44,19 @@ ...@@ -44,11 +44,19 @@
#include "libata.h" #include "libata.h"
typedef unsigned int (*ata_xlat_func_t)(struct ata_queued_cmd *qc, u8 *scsicmd); typedef unsigned int (*ata_xlat_func_t)(struct ata_queued_cmd *qc, const u8 *scsicmd);
static struct ata_device * static struct ata_device *
ata_scsi_find_dev(struct ata_port *ap, struct scsi_device *scsidev); ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev);
static void ata_scsi_invalid_field(struct scsi_cmnd *cmd,
void (*done)(struct scsi_cmnd *))
{
ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
done(cmd);
}
/** /**
* ata_std_bios_param - generic bios head/sector/cylinder calculator used by sd. * ata_std_bios_param - generic bios head/sector/cylinder calculator used by sd.
* @sdev: SCSI device for which BIOS geometry is to be determined * @sdev: SCSI device for which BIOS geometry is to be determined
...@@ -182,7 +190,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -182,7 +190,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
{ {
struct scsi_cmnd *cmd = qc->scsicmd; struct scsi_cmnd *cmd = qc->scsicmd;
u8 err = 0; u8 err = 0;
unsigned char *sb = cmd->sense_buffer;
/* Based on the 3ware driver translation table */ /* Based on the 3ware driver translation table */
static unsigned char sense_table[][4] = { static unsigned char sense_table[][4] = {
/* BBD|ECC|ID|MAR */ /* BBD|ECC|ID|MAR */
...@@ -225,8 +232,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -225,8 +232,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
}; };
int i = 0; int i = 0;
cmd->result = SAM_STAT_CHECK_CONDITION;
/* /*
* Is this an error we can process/parse * Is this an error we can process/parse
*/ */
...@@ -281,11 +286,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -281,11 +286,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
/* Look for best matches first */ /* Look for best matches first */
if((sense_table[i][0] & err) == sense_table[i][0]) if((sense_table[i][0] & err) == sense_table[i][0])
{ {
sb[0] = 0x70; ata_scsi_set_sense(cmd, sense_table[i][1] /* sk */,
sb[2] = sense_table[i][1]; sense_table[i][2] /* asc */,
sb[7] = 0x0a; sense_table[i][3] /* ascq */ );
sb[12] = sense_table[i][2];
sb[13] = sense_table[i][3];
return; return;
} }
i++; i++;
...@@ -300,11 +303,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -300,11 +303,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
{ {
if(stat_table[i][0] & drv_stat) if(stat_table[i][0] & drv_stat)
{ {
sb[0] = 0x70; ata_scsi_set_sense(cmd, stat_table[i][1] /* sk */,
sb[2] = stat_table[i][1]; stat_table[i][2] /* asc */,
sb[7] = 0x0a; stat_table[i][3] /* ascq */ );
sb[12] = stat_table[i][2];
sb[13] = stat_table[i][3];
return; return;
} }
i++; i++;
...@@ -313,15 +314,12 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat) ...@@ -313,15 +314,12 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
printk(KERN_ERR "ata%u: called with no error (%02X)!\n", qc->ap->id, drv_stat); printk(KERN_ERR "ata%u: called with no error (%02X)!\n", qc->ap->id, drv_stat);
/* additional-sense-code[-qualifier] */ /* additional-sense-code[-qualifier] */
sb[0] = 0x70;
sb[2] = MEDIUM_ERROR;
sb[7] = 0x0A;
if (cmd->sc_data_direction == DMA_FROM_DEVICE) { if (cmd->sc_data_direction == DMA_FROM_DEVICE) {
sb[12] = 0x11; /* "unrecovered read error" */ ata_scsi_set_sense(cmd, MEDIUM_ERROR, 0x11, 0x4);
sb[13] = 0x04; /* "unrecovered read error" */
} else { } else {
sb[12] = 0x0C; /* "write error - */ ata_scsi_set_sense(cmd, MEDIUM_ERROR, 0xc, 0x2);
sb[13] = 0x02; /* auto-reallocation failed" */ /* "write error - auto-reallocation failed" */
} }
} }
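The open-coded stores removed in this hunk build fixed-format sense data: response code 0x70 in byte 0, the sense key in byte 2, an additional sense length of 0x0a in byte 7, and asc/ascq in bytes 12 and 13; ata_scsi_set_sense() is assumed to emit the same layout from its (sk, asc, ascq) arguments. A standalone sketch of the encoding:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Fixed-format sense data, as the open-coded sb[] stores built it */
static void set_sense(uint8_t *sb, uint8_t sk, uint8_t asc, uint8_t ascq)
{
	memset(sb, 0, 18);
	sb[0]  = 0x70;	/* current error, fixed format */
	sb[2]  = sk;	/* sense key */
	sb[7]  = 0x0a;	/* additional sense length */
	sb[12] = asc;	/* additional sense code */
	sb[13] = ascq;	/* additional sense code qualifier */
}

int main(void)
{
	uint8_t sb[18];
	int i;

	set_sense(sb, 0x05 /* ILLEGAL REQUEST */, 0x24, 0x00);
	for (i = 0; i < 18; i++)
		printf("%02x ", sb[i]);
	printf("\n");
	return 0;
}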
...@@ -420,7 +418,7 @@ int ata_scsi_error(struct Scsi_Host *host) ...@@ -420,7 +418,7 @@ int ata_scsi_error(struct Scsi_Host *host)
*/ */
static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc, static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
u8 *scsicmd) const u8 *scsicmd)
{ {
struct ata_taskfile *tf = &qc->tf; struct ata_taskfile *tf = &qc->tf;
...@@ -430,15 +428,26 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc, ...@@ -430,15 +428,26 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
; /* ignore IMMED bit, violates sat-r05 */ ; /* ignore IMMED bit, violates sat-r05 */
} }
if (scsicmd[4] & 0x2) if (scsicmd[4] & 0x2)
return 1; /* LOEJ bit set not supported */ goto invalid_fld; /* LOEJ bit set not supported */
if (((scsicmd[4] >> 4) & 0xf) != 0) if (((scsicmd[4] >> 4) & 0xf) != 0)
return 1; /* power conditions not supported */ goto invalid_fld; /* power conditions not supported */
if (scsicmd[4] & 0x1) { if (scsicmd[4] & 0x1) {
tf->nsect = 1; /* 1 sector, lba=0 */ tf->nsect = 1; /* 1 sector, lba=0 */
tf->lbah = 0x0;
tf->lbam = 0x0; if (qc->dev->flags & ATA_DFLAG_LBA) {
tf->lbal = 0x0; qc->tf.flags |= ATA_TFLAG_LBA;
tf->device |= ATA_LBA;
tf->lbah = 0x0;
tf->lbam = 0x0;
tf->lbal = 0x0;
tf->device |= ATA_LBA;
} else {
/* CHS */
tf->lbal = 0x1; /* sect */
tf->lbam = 0x0; /* cyl low */
tf->lbah = 0x0; /* cyl high */
}
tf->command = ATA_CMD_VERIFY; /* READ VERIFY */ tf->command = ATA_CMD_VERIFY; /* READ VERIFY */
} else { } else {
tf->nsect = 0; /* time period value (0 implies now) */ tf->nsect = 0; /* time period value (0 implies now) */
...@@ -453,6 +462,11 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc, ...@@ -453,6 +462,11 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
*/ */
return 0; return 0;
invalid_fld:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
} }
...@@ -471,14 +485,14 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc, ...@@ -471,14 +485,14 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
* Zero on success, non-zero on error. * Zero on success, non-zero on error.
*/ */
static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, const u8 *scsicmd)
{ {
struct ata_taskfile *tf = &qc->tf; struct ata_taskfile *tf = &qc->tf;
tf->flags |= ATA_TFLAG_DEVICE; tf->flags |= ATA_TFLAG_DEVICE;
tf->protocol = ATA_PROT_NODATA; tf->protocol = ATA_PROT_NODATA;
if ((tf->flags & ATA_TFLAG_LBA48) && if ((qc->dev->flags & ATA_DFLAG_LBA48) &&
(ata_id_has_flush_ext(qc->dev->id))) (ata_id_has_flush_ext(qc->dev->id)))
tf->command = ATA_CMD_FLUSH_EXT; tf->command = ATA_CMD_FLUSH_EXT;
else else
...@@ -487,6 +501,99 @@ static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) ...@@ -487,6 +501,99 @@ static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
return 0; return 0;
} }
/**
* scsi_6_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 6-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_6_lba_len(const u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("six-byte command\n");
lba |= ((u64)scsicmd[2]) << 8;
lba |= ((u64)scsicmd[3]);
len |= ((u32)scsicmd[4]);
*plba = lba;
*plen = len;
}
/**
* scsi_10_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 10-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_10_lba_len(const u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("ten-byte command\n");
lba |= ((u64)scsicmd[2]) << 24;
lba |= ((u64)scsicmd[3]) << 16;
lba |= ((u64)scsicmd[4]) << 8;
lba |= ((u64)scsicmd[5]);
len |= ((u32)scsicmd[7]) << 8;
len |= ((u32)scsicmd[8]);
*plba = lba;
*plen = len;
}
/**
* scsi_16_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 16-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_16_lba_len(const u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("sixteen-byte command\n");
lba |= ((u64)scsicmd[2]) << 56;
lba |= ((u64)scsicmd[3]) << 48;
lba |= ((u64)scsicmd[4]) << 40;
lba |= ((u64)scsicmd[5]) << 32;
lba |= ((u64)scsicmd[6]) << 24;
lba |= ((u64)scsicmd[7]) << 16;
lba |= ((u64)scsicmd[8]) << 8;
lba |= ((u64)scsicmd[9]);
len |= ((u32)scsicmd[10]) << 24;
len |= ((u32)scsicmd[11]) << 16;
len |= ((u32)scsicmd[12]) << 8;
len |= ((u32)scsicmd[13]);
*plba = lba;
*plen = len;
}
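All three helpers are plain big-endian field extraction; only the offsets and widths differ with the CDB size. A standalone sketch decoding a READ(10) CDB the same way scsi_10_lba_len() does (the CDB bytes are a made-up example):

#include <stdio.h>
#include <stdint.h>

/* Big-endian LBA (bytes 2-5) and length (bytes 7-8), per scsi_10_lba_len() */
static void decode_10(const uint8_t *cdb, uint64_t *lba, uint32_t *len)
{
	*lba = ((uint64_t)cdb[2] << 24) | ((uint64_t)cdb[3] << 16) |
	       ((uint64_t)cdb[4] << 8)  |  (uint64_t)cdb[5];
	*len = ((uint32_t)cdb[7] << 8) | cdb[8];
}

int main(void)
{
	/* READ(10), LBA 0x12345678, 16 blocks */
	uint8_t cdb[10] = { 0x28, 0, 0x12, 0x34, 0x56, 0x78, 0, 0x00, 0x10, 0 };
	uint64_t lba;
	uint32_t len;

	decode_10(cdb, &lba, &len);
	printf("lba=%#llx len=%u\n", (unsigned long long)lba, len);
	return 0;
}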
/** /**
* ata_scsi_verify_xlat - Translate SCSI VERIFY command into an ATA one * ata_scsi_verify_xlat - Translate SCSI VERIFY command into an ATA one
* @qc: Storage for translated ATA taskfile * @qc: Storage for translated ATA taskfile
...@@ -501,82 +608,110 @@ static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) ...@@ -501,82 +608,110 @@ static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
* Zero on success, non-zero on error. * Zero on success, non-zero on error.
*/ */
static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, const u8 *scsicmd)
{ {
struct ata_taskfile *tf = &qc->tf; struct ata_taskfile *tf = &qc->tf;
unsigned int lba48 = tf->flags & ATA_TFLAG_LBA48; struct ata_device *dev = qc->dev;
u64 dev_sectors = qc->dev->n_sectors; u64 dev_sectors = qc->dev->n_sectors;
u64 sect = 0; u64 block;
u32 n_sect = 0; u32 n_block;
tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
tf->protocol = ATA_PROT_NODATA; tf->protocol = ATA_PROT_NODATA;
tf->device |= ATA_LBA;
if (scsicmd[0] == VERIFY) { if (scsicmd[0] == VERIFY)
sect |= ((u64)scsicmd[2]) << 24; scsi_10_lba_len(scsicmd, &block, &n_block);
sect |= ((u64)scsicmd[3]) << 16; else if (scsicmd[0] == VERIFY_16)
sect |= ((u64)scsicmd[4]) << 8; scsi_16_lba_len(scsicmd, &block, &n_block);
sect |= ((u64)scsicmd[5]); else
goto invalid_fld;
n_sect |= ((u32)scsicmd[7]) << 8; if (!n_block)
n_sect |= ((u32)scsicmd[8]); goto nothing_to_do;
} if (block >= dev_sectors)
goto out_of_range;
if ((block + n_block) > dev_sectors)
goto out_of_range;
else if (scsicmd[0] == VERIFY_16) { if (dev->flags & ATA_DFLAG_LBA) {
sect |= ((u64)scsicmd[2]) << 56; tf->flags |= ATA_TFLAG_LBA;
sect |= ((u64)scsicmd[3]) << 48;
sect |= ((u64)scsicmd[4]) << 40;
sect |= ((u64)scsicmd[5]) << 32;
sect |= ((u64)scsicmd[6]) << 24;
sect |= ((u64)scsicmd[7]) << 16;
sect |= ((u64)scsicmd[8]) << 8;
sect |= ((u64)scsicmd[9]);
n_sect |= ((u32)scsicmd[10]) << 24;
n_sect |= ((u32)scsicmd[11]) << 16;
n_sect |= ((u32)scsicmd[12]) << 8;
n_sect |= ((u32)scsicmd[13]);
}
else if (dev->flags & ATA_DFLAG_LBA48) {
return 1; if (n_block > (64 * 1024))
goto invalid_fld;
if (!n_sect) /* use LBA48 */
return 1; tf->flags |= ATA_TFLAG_LBA48;
if (sect >= dev_sectors) tf->command = ATA_CMD_VERIFY_EXT;
return 1;
if ((sect + n_sect) > dev_sectors)
return 1;
if (lba48) {
if (n_sect > (64 * 1024))
return 1;
} else {
if (n_sect > 256)
return 1;
}
if (lba48) { tf->hob_nsect = (n_block >> 8) & 0xff;
tf->command = ATA_CMD_VERIFY_EXT;
tf->hob_nsect = (n_sect >> 8) & 0xff; tf->hob_lbah = (block >> 40) & 0xff;
tf->hob_lbam = (block >> 32) & 0xff;
tf->hob_lbal = (block >> 24) & 0xff;
} else {
if (n_block > 256)
goto invalid_fld;
tf->hob_lbah = (sect >> 40) & 0xff; /* use LBA28 */
tf->hob_lbam = (sect >> 32) & 0xff; tf->command = ATA_CMD_VERIFY;
tf->hob_lbal = (sect >> 24) & 0xff;
tf->device |= (block >> 24) & 0xf;
}
tf->nsect = n_block & 0xff;
tf->lbah = (block >> 16) & 0xff;
tf->lbam = (block >> 8) & 0xff;
tf->lbal = block & 0xff;
tf->device |= ATA_LBA;
} else { } else {
/* CHS */
u32 sect, head, cyl, track;
if (n_block > 256)
goto invalid_fld;
/* Convert LBA to CHS */
track = (u32)block / dev->sectors;
cyl = track / dev->heads;
head = track % dev->heads;
sect = (u32)block % dev->sectors + 1;
DPRINTK("block %u track %u cyl %u head %u sect %u\n",
(u32)block, track, cyl, head, sect);
/* Check whether the converted CHS can fit.
Cylinder: 0-65535
Head: 0-15
Sector: 1-255 */
if ((cyl >> 16) || (head >> 4) || (sect >> 8) || (!sect))
goto out_of_range;
tf->command = ATA_CMD_VERIFY; tf->command = ATA_CMD_VERIFY;
tf->nsect = n_block & 0xff; /* Sector count 0 means 256 sectors */
tf->device |= (sect >> 24) & 0xf; tf->lbal = sect;
tf->lbam = cyl;
tf->lbah = cyl >> 8;
tf->device |= head;
} }
tf->nsect = n_sect & 0xff; return 0;
invalid_fld:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
tf->lbah = (sect >> 16) & 0xff; out_of_range:
tf->lbam = (sect >> 8) & 0xff; ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x21, 0x0);
tf->lbal = sect & 0xff; /* "Logical Block Address out of range" */
return 1;
return 0; nothing_to_do:
qc->scsicmd->result = SAM_STAT_GOOD;
return 1;
} }
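When the device lacks LBA support the translation falls back to CHS: the 28-bit block number is split into track, cylinder, head and a 1-based sector, then range-checked exactly as above. A standalone sketch of the conversion with an assumed 16-head, 63-sectors-per-track geometry:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t block = 123456;		/* hypothetical LBA */
	uint32_t heads = 16, sectors = 63;	/* assumed drive geometry */
	uint32_t track = block / sectors;
	uint32_t cyl   = track / heads;
	uint32_t head  = track % heads;
	uint32_t sect  = block % sectors + 1;	/* sectors are 1-based */

	/* Same limits as the driver: cyl 0-65535, head 0-15, sect 1-255 */
	if ((cyl >> 16) || (head >> 4) || (sect >> 8) || !sect)
		printf("out of CHS range\n");
	else
		printf("cyl %u head %u sect %u\n", cyl, head, sect);
		/* 123456 -> cyl 122 head 7 sect 40 */
	return 0;
}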
/** /**
...@@ -599,106 +734,137 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) ...@@ -599,106 +734,137 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
* Zero on success, non-zero on error. * Zero on success, non-zero on error.
*/ */
static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd) static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, const u8 *scsicmd)
{ {
struct ata_taskfile *tf = &qc->tf; struct ata_taskfile *tf = &qc->tf;
unsigned int lba48 = tf->flags & ATA_TFLAG_LBA48; struct ata_device *dev = qc->dev;
u64 block;
u32 n_block;
tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
tf->protocol = qc->dev->xfer_protocol;
tf->device |= ATA_LBA;
if (scsicmd[0] == READ_10 || scsicmd[0] == READ_6 || if (scsicmd[0] == WRITE_10 || scsicmd[0] == WRITE_6 ||
scsicmd[0] == READ_16) { scsicmd[0] == WRITE_16)
tf->command = qc->dev->read_cmd;
} else {
tf->command = qc->dev->write_cmd;
tf->flags |= ATA_TFLAG_WRITE; tf->flags |= ATA_TFLAG_WRITE;
}
if (scsicmd[0] == READ_10 || scsicmd[0] == WRITE_10) { /* Calculate the SCSI LBA and transfer length. */
if (lba48) { switch (scsicmd[0]) {
tf->hob_nsect = scsicmd[7]; case READ_10:
tf->hob_lbal = scsicmd[2]; case WRITE_10:
scsi_10_lba_len(scsicmd, &block, &n_block);
break;
case READ_6:
case WRITE_6:
scsi_6_lba_len(scsicmd, &block, &n_block);
qc->nsect = ((unsigned int)scsicmd[7] << 8) | /* for 6-byte r/w commands, transfer length 0
scsicmd[8]; * means 256 blocks of data, not 0 blocks.
} else { */
/* if we don't support LBA48 addressing, the request if (!n_block)
* -may- be too large. */ n_block = 256;
if ((scsicmd[2] & 0xf0) || scsicmd[7]) break;
return 1; case READ_16:
case WRITE_16:
scsi_16_lba_len(scsicmd, &block, &n_block);
break;
default:
DPRINTK("no-byte command\n");
goto invalid_fld;
}
/* stores LBA27:24 in lower 4 bits of device reg */ /* Check and compose ATA command */
tf->device |= scsicmd[2]; if (!n_block)
/* For 10-byte and 16-byte SCSI R/W commands, transfer
* length 0 means transfer 0 block of data.
* However, for ATA R/W commands, sector count 0 means
* 256 or 65536 sectors, not 0 sectors as in SCSI.
*/
goto nothing_to_do;
qc->nsect = scsicmd[8]; if (dev->flags & ATA_DFLAG_LBA) {
} tf->flags |= ATA_TFLAG_LBA;
tf->nsect = scsicmd[8]; if (dev->flags & ATA_DFLAG_LBA48) {
tf->lbal = scsicmd[5]; /* The request -may- be too large for LBA48. */
tf->lbam = scsicmd[4]; if ((block >> 48) || (n_block > 65536))
tf->lbah = scsicmd[3]; goto out_of_range;
VPRINTK("ten-byte command\n"); /* use LBA48 */
if (qc->nsect == 0) /* we don't support length==0 cmds */ tf->flags |= ATA_TFLAG_LBA48;
return 1;
return 0;
}
if (scsicmd[0] == READ_6 || scsicmd[0] == WRITE_6) { tf->hob_nsect = (n_block >> 8) & 0xff;
qc->nsect = tf->nsect = scsicmd[4];
if (!qc->nsect) {
qc->nsect = 256;
if (lba48)
tf->hob_nsect = 1;
}
tf->lbal = scsicmd[3]; tf->hob_lbah = (block >> 40) & 0xff;
tf->lbam = scsicmd[2]; tf->hob_lbam = (block >> 32) & 0xff;
tf->lbah = scsicmd[1] & 0x1f; /* mask out reserved bits */ tf->hob_lbal = (block >> 24) & 0xff;
} else {
/* use LBA28 */
VPRINTK("six-byte command\n"); /* The request -may- be too large for LBA28. */
return 0; if ((block >> 28) || (n_block > 256))
} goto out_of_range;
if (scsicmd[0] == READ_16 || scsicmd[0] == WRITE_16) { tf->device |= (block >> 24) & 0xf;
/* rule out impossible LBAs and sector counts */ }
if (scsicmd[2] || scsicmd[3] || scsicmd[10] || scsicmd[11])
return 1;
if (lba48) { ata_rwcmd_protocol(qc);
tf->hob_nsect = scsicmd[12];
tf->hob_lbal = scsicmd[6];
tf->hob_lbam = scsicmd[5];
tf->hob_lbah = scsicmd[4];
qc->nsect = ((unsigned int)scsicmd[12] << 8) | qc->nsect = n_block;
scsicmd[13]; tf->nsect = n_block & 0xff;
} else {
/* once again, filter out impossible non-zero values */
if (scsicmd[4] || scsicmd[5] || scsicmd[12] ||
(scsicmd[6] & 0xf0))
return 1;
/* stores LBA27:24 in lower 4 bits of device reg */ tf->lbah = (block >> 16) & 0xff;
tf->device |= scsicmd[6]; tf->lbam = (block >> 8) & 0xff;
tf->lbal = block & 0xff;
qc->nsect = scsicmd[13]; tf->device |= ATA_LBA;
} } else {
/* CHS */
u32 sect, head, cyl, track;
/* The request -may- be too large for CHS addressing. */
if ((block >> 28) || (n_block > 256))
goto out_of_range;
ata_rwcmd_protocol(qc);
/* Convert LBA to CHS */
track = (u32)block / dev->sectors;
cyl = track / dev->heads;
head = track % dev->heads;
sect = (u32)block % dev->sectors + 1;
DPRINTK("block %u track %u cyl %u head %u sect %u\n",
(u32)block, track, cyl, head, sect);
/* Check whether the converted CHS can fit.
Cylinder: 0-65535
Head: 0-15
Sector: 1-255*/
if ((cyl >> 16) || (head >> 4) || (sect >> 8) || (!sect))
goto out_of_range;
qc->nsect = n_block;
tf->nsect = n_block & 0xff; /* Sector count 0 means 256 sectors */
tf->lbal = sect;
tf->lbam = cyl;
tf->lbah = cyl >> 8;
tf->device |= head;
}
tf->nsect = scsicmd[13]; return 0;
tf->lbal = scsicmd[9];
tf->lbam = scsicmd[8];
tf->lbah = scsicmd[7];
VPRINTK("sixteen-byte command\n"); invalid_fld:
if (qc->nsect == 0) /* we don't support length==0 cmds */ ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
return 1; /* "Invalid field in cbd" */
return 0; return 1;
}
out_of_range:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x21, 0x0);
/* "Logical Block Address out of range" */
return 1;
DPRINTK("no-byte command\n"); nothing_to_do:
qc->scsicmd->result = SAM_STAT_GOOD;
return 1; return 1;
} }
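/* Editor's aside -- not part of the patch.  The scsi_*_lba_len()
 * helpers called above pull a big-endian LBA and transfer length out
 * of the CDB.  A user-space sketch of the 10-byte variant (bytes 2-5
 * carry the LBA, bytes 7-8 the length); the helper name here is
 * hypothetical:
 */
#include <assert.h>
#include <stdint.h>

static void sketch_10_lba_len(const uint8_t *cdb, uint64_t *plba, uint32_t *plen)
{
	*plba = ((uint64_t)cdb[2] << 24) | ((uint32_t)cdb[3] << 16) |
		((uint32_t)cdb[4] << 8) | cdb[5];
	*plen = ((uint32_t)cdb[7] << 8) | cdb[8];
}

int main(void)
{
	/* READ(10) for 8 blocks at LBA 0x12345678 */
	uint8_t cdb[10] = { 0x28, 0, 0x12, 0x34, 0x56, 0x78, 0, 0, 8, 0 };
	uint64_t lba;
	uint32_t len;

	sketch_10_lba_len(cdb, &lba, &len);
	assert(lba == 0x12345678 && len == 8);
	return 0;
}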
...@@ -731,6 +897,12 @@ static int ata_scsi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
 * This function sets up an ata_queued_cmd structure for the
 * SCSI command, and sends that ata_queued_cmd to the hardware.
 *
 * The xlat_func argument (actor) returns 0 if ready to execute
 * ATA command, else 1 to finish translation.  If 1 is returned
 * then cmd->result (and possibly cmd->sense_buffer) are assumed
 * to be set reflecting an error condition or clean (early)
 * termination.
 *
 * LOCKING:
 * spin_lock_irqsave(host_set lock)
 */
...@@ -747,7 +919,7 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
	qc = ata_scsi_qc_new(ap, dev, cmd, done);
	if (!qc)
		goto err_mem;

	/* data is present; dma-map it */
	if (cmd->sc_data_direction == DMA_FROM_DEVICE ||
...@@ -755,7 +927,7 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
		if (unlikely(cmd->request_bufflen < 1)) {
			printk(KERN_WARNING "ata%u(%u): WARNING: zero len r/w req\n",
			       ap->id, dev->devno);
			goto err_did;
		}

		if (cmd->use_sg)
...@@ -770,19 +942,28 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
	qc->complete_fn = ata_scsi_qc_complete;

	if (xlat_func(qc, scsicmd))
		goto early_finish;

	/* select device, send command to hardware */
	if (ata_qc_issue(qc))
		goto err_did;

	VPRINTK("EXIT\n");
	return;

early_finish:
	ata_qc_free(qc);
	done(cmd);
	DPRINTK("EXIT - early finish (good or error)\n");
	return;

err_did:
	ata_qc_free(qc);
err_mem:
	cmd->result = (DID_ERROR << 16);
	done(cmd);
	DPRINTK("EXIT - internal\n");
	return;
}
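/* Editor's aside -- not part of the patch.  The xlat_func contract
 * above, modeled with stand-in types: a return of 0 means "taskfile
 * composed, issue it", 1 means "already completed, result is set".
 * Everything here is hypothetical scaffolding, for shape only.
 */
#include <stdio.h>

struct toy_cmd { int result; };
struct toy_qc  { struct toy_cmd *cmd; };

typedef unsigned int (*toy_xlat_fn)(struct toy_qc *qc, const unsigned char *cdb);

static unsigned int toy_xlat_reject(struct toy_qc *qc, const unsigned char *cdb)
{
	(void)cdb;
	qc->cmd->result = 1;	/* pretend result/sense data were set */
	return 1;		/* early finish */
}

static void toy_translate(struct toy_qc *qc, const unsigned char *cdb,
			  toy_xlat_fn actor)
{
	if (actor(qc, cdb)) {
		printf("early finish, result=%d\n", qc->cmd->result);
		return;		/* driver frees the qc and calls done() */
	}
	printf("issue to hardware\n");	/* ata_qc_issue() in the driver */
}

int main(void)
{
	struct toy_cmd cmd = { 0 };
	struct toy_qc qc = { &cmd };

	toy_translate(&qc, (const unsigned char *)"\x28", toy_xlat_reject);
	return 0;
}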
/**
...@@ -849,7 +1030,8 @@ static inline void ata_scsi_rbuf_put(struct scsi_cmnd *cmd, u8 *buf)
 * Mapping the response buffer, calling the command's handler,
 * and handling the handler's return value.  This return value
 * indicates whether the handler wishes the SCSI command to be
 * completed successfully (0), or not (in which case cmd->result
 * and sense buffer are assumed to be set).
 *
 * LOCKING:
 * spin_lock_irqsave(host_set lock)
...@@ -868,12 +1050,9 @@ void ata_scsi_rbuf_fill(struct ata_scsi_args *args,
	rc = actor(args, rbuf, buflen);
	ata_scsi_rbuf_put(cmd, rbuf);

	if (rc == 0)
		cmd->result = SAM_STAT_GOOD;
	args->done(cmd);
}
/**
...@@ -1179,8 +1358,16 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
 * in the same manner)
 */
	page_control = scsicmd[2] >> 6;
	switch (page_control) {
	case 0: /* current */
		break;
	case 3: /* saved */
		goto saving_not_supp;
	case 1: /* changeable */
	case 2: /* defaults */
	default:
		goto invalid_fld;
	}

	if (six_byte)
		output_len = 4;
...@@ -1211,7 +1398,7 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
		break;

	default:		/* invalid page code */
		goto invalid_fld;
	}

	if (six_byte) {
...@@ -1224,6 +1411,16 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
	}

	return 0;

invalid_fld:
	ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x24, 0x0);
	/* "Invalid field in cbd" */
	return 1;

saving_not_supp:
	ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x39, 0x0);
	/* "Saving parameters not supported" */
	return 1;
}
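/* Editor's aside -- not part of the patch.  The PC (page control)
 * field dispatched on above is the top two bits of CDB byte 2; the
 * page code is the low six bits.  User-space check:
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint8_t cdb2 = 0xc0 | 0x08;		/* PC = 3 (saved), page 0x08 */

	assert((cdb2 >> 6) == 3);		/* takes the saving_not_supp leg */
	assert((cdb2 & 0x3f) == 0x08);		/* page code in low six bits */
	return 0;
}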
/**
...@@ -1246,10 +1443,20 @@ unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf,
	VPRINTK("ENTER\n");

	if (ata_id_has_lba(args->id)) {
		if (ata_id_has_lba48(args->id))
			n_sectors = ata_id_u64(args->id, 100);
		else
			n_sectors = ata_id_u32(args->id, 60);
	} else {
		/* CHS default translation */
		n_sectors = args->id[1] * args->id[3] * args->id[6];

		if (ata_id_current_chs_valid(args->id))
			/* CHS current translation */
			n_sectors = ata_id_u32(args->id, 57);
	}

	n_sectors--;		/* ATA TotalUserSectors - 1 */

	if (args->cmd->cmnd[0] == READ_CAPACITY) {
...@@ -1312,6 +1519,34 @@ unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
	return 0;
}
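/* Editor's aside -- not part of the patch.  The READ CAPACITY (10)
 * payload built from n_sectors above is two big-endian 32-bit fields:
 * last LBA, then block length.  A sketch with a hypothetical packer;
 * the exact driver byte layout lives in the elided hunk, and the clamp
 * mirrors what the 10-byte response needs for very large devices.
 */
#include <assert.h>
#include <stdint.h>

static void pack_read_cap10(uint8_t *rbuf, uint64_t last_lba, uint32_t blksz)
{
	if (last_lba > 0xffffffffULL)
		last_lba = 0xffffffffULL;	/* caller must use the 16-byte form */
	rbuf[0] = last_lba >> 24; rbuf[1] = last_lba >> 16;
	rbuf[2] = last_lba >> 8;  rbuf[3] = last_lba;
	rbuf[4] = blksz >> 24;    rbuf[5] = blksz >> 16;
	rbuf[6] = blksz >> 8;     rbuf[7] = blksz;
}

int main(void)
{
	uint8_t rbuf[8];

	pack_read_cap10(rbuf, 0x12345678, 512);
	assert(rbuf[0] == 0x12 && rbuf[3] == 0x78 && rbuf[6] == 0x02);
	return 0;
}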
/**
* ata_scsi_set_sense - Set SCSI sense data and status
* @cmd: SCSI request to be handled
* @sk: SCSI-defined sense key
* @asc: SCSI-defined additional sense code
* @ascq: SCSI-defined additional sense code qualifier
*
 * Helper function that builds a valid fixed-format, current-error
 * sense buffer from the given sense key (sk), additional sense
 * code (asc) and additional sense code qualifier (ascq), and sets
 * a SCSI command status of %SAM_STAT_CHECK_CONDITION with
 * DRIVER_SENSE set in the upper bits of scsi_cmnd::result.
*
* LOCKING:
* Not required
*/
void ata_scsi_set_sense(struct scsi_cmnd *cmd, u8 sk, u8 asc, u8 ascq)
{
cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
cmd->sense_buffer[0] = 0x70; /* fixed format, current */
cmd->sense_buffer[2] = sk;
cmd->sense_buffer[7] = 18 - 8; /* additional sense length */
cmd->sense_buffer[12] = asc;
cmd->sense_buffer[13] = ascq;
}
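/* Editor's aside -- not part of the patch.  The fixed-format sense
 * bytes written above, reproduced in user space.  Offsets per SPC:
 * 0 response code, 2 sense key, 7 additional length, 12/13 ASC/ASCQ.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint8_t sb[18];

	memset(sb, 0, sizeof(sb));
	sb[0]  = 0x70;		/* fixed format, current error */
	sb[2]  = 0x05;		/* ILLEGAL REQUEST */
	sb[7]  = 18 - 8;	/* additional sense length */
	sb[12] = 0x24;		/* ASC: invalid field in CDB */
	sb[13] = 0x00;		/* ASCQ */

	assert((sb[2] & 0x0f) == 0x05 && sb[12] == 0x24);
	return 0;
}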
/**
 * ata_scsi_badcmd - End a SCSI request with an error
 * @cmd: SCSI request to be handled
...@@ -1330,30 +1565,84 @@ unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
void ata_scsi_badcmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *), u8 asc, u8 ascq)
{
	DPRINTK("ENTER\n");
	ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, asc, ascq);

	done(cmd);
}
void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
struct scsi_cmnd *cmd)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
unsigned long flags;
int rc;
DPRINTK("ATAPI request sense\n");
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
/* FIXME: is this needed? */
memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
qc->dma_dir = DMA_FROM_DEVICE;
memset(&qc->cdb, 0, ap->cdb_len);
qc->cdb[0] = REQUEST_SENSE;
qc->cdb[4] = SCSI_SENSE_BUFFERSIZE;
qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
qc->tf.command = ATA_CMD_PACKET;
qc->tf.protocol = ATA_PROT_ATAPI;
qc->tf.lbam = (8 * 1024) & 0xff;
qc->tf.lbah = (8 * 1024) >> 8;
qc->nbytes = SCSI_SENSE_BUFFERSIZE;
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
ata_port_disable(ap);
else
wait_for_completion(&wait);
DPRINTK("EXIT\n");
}
static int atapi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
{
	struct scsi_cmnd *cmd = qc->scsicmd;

	VPRINTK("ENTER, drv_stat == 0x%x\n", drv_stat);

	if (unlikely(drv_stat & (ATA_BUSY | ATA_DRQ)))
		ata_to_sense_error(qc, drv_stat);
	else if (unlikely(drv_stat & ATA_ERR)) {
		DPRINTK("request check condition\n");

		/* FIXME: command completion with check condition
		 * but no sense causes the error handler to run,
		 * which then issues REQUEST SENSE, fills in the sense
		 * buffer, and completes the command (for the second
		 * time).  We need to issue REQUEST SENSE some other
		 * way, to avoid completing the command twice.
		 */
		cmd->result = SAM_STAT_CHECK_CONDITION;

		qc->scsidone(cmd);

		return 1;
	}
	else {
		u8 *scsicmd = cmd->cmnd;

		if (scsicmd[0] == INQUIRY) {
...@@ -1361,15 +1650,30 @@ static int atapi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
			unsigned int buflen;

			buflen = ata_scsi_rbuf_get(cmd, &buf);

			/* ATAPI devices typically report zero for their SCSI version,
			 * and sometimes deviate from the spec WRT response data
			 * format.  If SCSI version is reported as zero like normal,
			 * then we make the following fixups:  1) Fake MMC-5 version,
			 * to indicate to the Linux scsi midlayer this is a modern
			 * device.  2) Ensure response data format / ATAPI information
			 * are always correct.
			 */
			/* FIXME: do we ever override EVPD pages and the like, with
			 * this code?
			 */
			if (buf[2] == 0) {
				buf[2] = 0x5;
				buf[3] = 0x32;
			}

			ata_scsi_rbuf_put(cmd, buf);
		}

		cmd->result = SAM_STAT_GOOD;
	}

	qc->scsidone(cmd);
	return 0;
}
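/* Editor's aside -- not part of the patch.  The INQUIRY fixup above in
 * byte terms: byte 2 carries the claimed ANSI/SCSI version, and byte
 * 3's low nibble the response data format.  User-space sketch of the
 * same conditional rewrite:
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint8_t inq[36];

	memset(inq, 0, sizeof(inq));	/* device reported version 0 */
	if (inq[2] == 0) {
		inq[2] = 0x5;		/* advertise a modern command set */
		inq[3] = 0x32;		/* sane response data format */
	}
	assert((inq[3] & 0x0f) == 2);
	return 0;
}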
/**
...@@ -1384,7 +1688,7 @@ static int atapi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
 * Zero on success, non-zero on failure.
 */
static unsigned int atapi_xlat(struct ata_queued_cmd *qc, const u8 *scsicmd)
{
	struct scsi_cmnd *cmd = qc->scsicmd;
	struct ata_device *dev = qc->dev;
...@@ -1453,7 +1757,7 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
 */
static struct ata_device *
ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev)
{
	struct ata_device *dev;
...@@ -1610,7 +1914,7 @@ void ata_scsi_simulate(u16 *id,
		      void (*done)(struct scsi_cmnd *))
{
	struct ata_scsi_args args;
	const u8 *scsicmd = cmd->cmnd;

	args.id = id;
	args.cmd = cmd;
...@@ -1630,7 +1934,7 @@ void ata_scsi_simulate(u16 *id,
		case INQUIRY:
			if (scsicmd[1] & 2)	           /* is CmdDt set?  */
				ata_scsi_invalid_field(cmd, done);
			else if ((scsicmd[1] & 1) == 0)    /* is EVPD clear? */
				ata_scsi_rbuf_fill(&args, ata_scsiop_inq_std);
			else if (scsicmd[2] == 0x00)
...@@ -1640,7 +1944,7 @@ void ata_scsi_simulate(u16 *id,
			else if (scsicmd[2] == 0x83)
				ata_scsi_rbuf_fill(&args, ata_scsiop_inq_83);
			else
				ata_scsi_invalid_field(cmd, done);
			break;

		case MODE_SENSE:
...@@ -1650,7 +1954,7 @@ void ata_scsi_simulate(u16 *id,
		case MODE_SELECT:	/* unconditionally return */
		case MODE_SELECT_10:	/* bad-field-in-cdb */
			ata_scsi_invalid_field(cmd, done);
			break;

		case READ_CAPACITY:
...@@ -1661,7 +1965,7 @@ void ata_scsi_simulate(u16 *id,
			if ((scsicmd[1] & 0x1f) == SAI_READ_CAPACITY_16)
				ata_scsi_rbuf_fill(&args, ata_scsiop_read_cap);
			else
				ata_scsi_invalid_field(cmd, done);
			break;

		case REPORT_LUNS:
...@@ -1673,8 +1977,26 @@ void ata_scsi_simulate(u16 *id,
		/* all other commands */
		default:
			ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x20, 0x0);
			/* "Invalid command operation code" */
			done(cmd);
			break;
	}
}
void ata_scsi_scan_host(struct ata_port *ap)
{
struct ata_device *dev;
unsigned int i;
if (ap->flags & ATA_FLAG_PORT_DISABLED)
return;
for (i = 0; i < ATA_MAX_DEVICES; i++) {
dev = &ap->device[i];
if (ata_dev_present(dev))
scsi_scan_target(&ap->host->shost_gendev, 0, i, 0, 0);
}
}
...@@ -39,18 +39,23 @@ struct ata_scsi_args {
/* libata-core.c */
extern int atapi_enabled;
extern int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat);
extern struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap,
					      struct ata_device *dev);
extern void ata_rwcmd_protocol(struct ata_queued_cmd *qc);
extern void ata_qc_free(struct ata_queued_cmd *qc);
extern int ata_qc_issue(struct ata_queued_cmd *qc);
extern int ata_check_atapi_dma(struct ata_queued_cmd *qc);
extern void ata_dev_select(struct ata_port *ap, unsigned int device,
			   unsigned int wait, unsigned int can_sleep);
extern void ata_tf_to_host_nolock(struct ata_port *ap, const struct ata_taskfile *tf);
extern void swap_buf_le16(u16 *buf, unsigned int buf_words);

/* libata-scsi.c */
extern void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
				struct scsi_cmnd *cmd);
extern void ata_scsi_scan_host(struct ata_port *ap);
extern void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat);
extern int ata_scsi_error(struct Scsi_Host *host);
extern unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf,
...@@ -76,18 +81,10 @@ extern unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
extern void ata_scsi_badcmd(struct scsi_cmnd *cmd,
			    void (*done)(struct scsi_cmnd *),
			    u8 asc, u8 ascq);
extern void ata_scsi_set_sense(struct scsi_cmnd *cmd,
			       u8 sk, u8 asc, u8 ascq);
extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args,
			unsigned int (*actor) (struct ata_scsi_args *args,
					       u8 *rbuf, unsigned int buflen));

#endif /* __LIBATA_H__ */
/*
* pdc_adma.c - Pacific Digital Corporation ADMA
*
* Maintained by: Mark Lord <mlord@pobox.com>
*
* Copyright 2005 Mark Lord
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
*
*
* libata documentation is available via 'make {ps|pdf}docs',
* as Documentation/DocBook/libata.*
*
*
* Supports ATA disks in single-packet ADMA mode.
* Uses PIO for everything else.
*
* TODO: Use ADMA transfers for ATAPI devices, when possible.
* This requires careful attention to a number of quirks of the chip.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/blkdev.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/sched.h>
#include "scsi.h"
#include <scsi/scsi_host.h>
#include <asm/io.h>
#include <linux/libata.h>
#define DRV_NAME "pdc_adma"
#define DRV_VERSION "0.01"
/* macro to calculate base address for ATA regs */
#define ADMA_ATA_REGS(base,port_no) ((base) + ((port_no) * 0x40))
/* macro to calculate base address for ADMA regs */
#define ADMA_REGS(base,port_no) ((base) + 0x80 + ((port_no) * 0x20))
enum {
ADMA_PORTS = 2,
ADMA_CPB_BYTES = 40,
ADMA_PRD_BYTES = LIBATA_MAX_PRD * 16,
ADMA_PKT_BYTES = ADMA_CPB_BYTES + ADMA_PRD_BYTES,
ADMA_DMA_BOUNDARY = 0xffffffff,
/* global register offsets */
ADMA_MODE_LOCK = 0x00c7,
/* per-channel register offsets */
ADMA_CONTROL = 0x0000, /* ADMA control */
ADMA_STATUS = 0x0002, /* ADMA status */
ADMA_CPB_COUNT = 0x0004, /* CPB count */
ADMA_CPB_CURRENT = 0x000c, /* current CPB address */
ADMA_CPB_NEXT = 0x000c, /* next CPB address */
ADMA_CPB_LOOKUP = 0x0010, /* CPB lookup table */
ADMA_FIFO_IN = 0x0014, /* input FIFO threshold */
ADMA_FIFO_OUT = 0x0016, /* output FIFO threshold */
/* ADMA_CONTROL register bits */
aNIEN = (1 << 8), /* irq mask: 1==masked */
aGO = (1 << 7), /* packet trigger ("Go!") */
aRSTADM = (1 << 5), /* ADMA logic reset */
aRSTA = (1 << 2), /* ATA hard reset */
aPIOMD4 = 0x0003, /* PIO mode 4 */
/* ADMA_STATUS register bits */
aPSD = (1 << 6),
aUIRQ = (1 << 4),
aPERR = (1 << 0),
/* CPB bits */
cDONE = (1 << 0),
cVLD = (1 << 0),
cDAT = (1 << 2),
cIEN = (1 << 3),
/* PRD bits */
pORD = (1 << 4),
pDIRO = (1 << 5),
pEND = (1 << 7),
/* ATA register flags */
rIGN = (1 << 5),
rEND = (1 << 7),
/* ATA register addresses */
ADMA_REGS_CONTROL = 0x0e,
ADMA_REGS_SECTOR_COUNT = 0x12,
ADMA_REGS_LBA_LOW = 0x13,
ADMA_REGS_LBA_MID = 0x14,
ADMA_REGS_LBA_HIGH = 0x15,
ADMA_REGS_DEVICE = 0x16,
ADMA_REGS_COMMAND = 0x17,
/* PCI device IDs */
board_1841_idx = 0, /* ADMA 2-port controller */
};
typedef enum { adma_state_idle, adma_state_pkt, adma_state_mmio } adma_state_t;
struct adma_port_priv {
u8 *pkt;
dma_addr_t pkt_dma;
adma_state_t state;
};
static int adma_ata_init_one (struct pci_dev *pdev,
const struct pci_device_id *ent);
static irqreturn_t adma_intr (int irq, void *dev_instance,
struct pt_regs *regs);
static int adma_port_start(struct ata_port *ap);
static void adma_host_stop(struct ata_host_set *host_set);
static void adma_port_stop(struct ata_port *ap);
static void adma_phy_reset(struct ata_port *ap);
static void adma_qc_prep(struct ata_queued_cmd *qc);
static int adma_qc_issue(struct ata_queued_cmd *qc);
static int adma_check_atapi_dma(struct ata_queued_cmd *qc);
static void adma_bmdma_stop(struct ata_queued_cmd *qc);
static u8 adma_bmdma_status(struct ata_port *ap);
static void adma_irq_clear(struct ata_port *ap);
static void adma_eng_timeout(struct ata_port *ap);
static Scsi_Host_Template adma_ata_sht = {
.module = THIS_MODULE,
.name = DRV_NAME,
.ioctl = ata_scsi_ioctl,
.queuecommand = ata_scsi_queuecmd,
.eh_strategy_handler = ata_scsi_error,
.can_queue = ATA_DEF_QUEUE,
.this_id = ATA_SHT_THIS_ID,
.sg_tablesize = LIBATA_MAX_PRD,
.max_sectors = ATA_MAX_SECTORS,
.cmd_per_lun = ATA_SHT_CMD_PER_LUN,
.emulated = ATA_SHT_EMULATED,
.use_clustering = ENABLE_CLUSTERING,
.proc_name = DRV_NAME,
.dma_boundary = ADMA_DMA_BOUNDARY,
.slave_configure = ata_scsi_slave_config,
.bios_param = ata_std_bios_param,
};
static const struct ata_port_operations adma_ata_ops = {
.port_disable = ata_port_disable,
.tf_load = ata_tf_load,
.tf_read = ata_tf_read,
.check_status = ata_check_status,
.check_atapi_dma = adma_check_atapi_dma,
.exec_command = ata_exec_command,
.dev_select = ata_std_dev_select,
.phy_reset = adma_phy_reset,
.qc_prep = adma_qc_prep,
.qc_issue = adma_qc_issue,
.eng_timeout = adma_eng_timeout,
.irq_handler = adma_intr,
.irq_clear = adma_irq_clear,
.port_start = adma_port_start,
.port_stop = adma_port_stop,
.host_stop = adma_host_stop,
.bmdma_stop = adma_bmdma_stop,
.bmdma_status = adma_bmdma_status,
};
static struct ata_port_info adma_port_info[] = {
/* board_1841_idx */
{
.sht = &adma_ata_sht,
.host_flags = ATA_FLAG_SLAVE_POSS | ATA_FLAG_SRST |
ATA_FLAG_NO_LEGACY | ATA_FLAG_MMIO,
.pio_mask = 0x10, /* pio4 */
.udma_mask = 0x1f, /* udma0-4 */
.port_ops = &adma_ata_ops,
},
};
static struct pci_device_id adma_ata_pci_tbl[] = {
{ PCI_VENDOR_ID_PDC, 0x1841, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
board_1841_idx },
{ } /* terminate list */
};
static struct pci_driver adma_ata_pci_driver = {
.name = DRV_NAME,
.id_table = adma_ata_pci_tbl,
.probe = adma_ata_init_one,
.remove = ata_pci_remove_one,
};
static int adma_check_atapi_dma(struct ata_queued_cmd *qc)
{
return 1; /* ATAPI DMA not yet supported */
}
static void adma_bmdma_stop(struct ata_queued_cmd *qc)
{
/* nothing */
}
static u8 adma_bmdma_status(struct ata_port *ap)
{
return 0;
}
static void adma_irq_clear(struct ata_port *ap)
{
/* nothing */
}
static void adma_reset_engine(void __iomem *chan)
{
/* reset ADMA to idle state */
writew(aPIOMD4 | aNIEN | aRSTADM, chan + ADMA_CONTROL);
udelay(2);
writew(aPIOMD4, chan + ADMA_CONTROL);
udelay(2);
}
static void adma_reinit_engine(struct ata_port *ap)
{
struct adma_port_priv *pp = ap->private_data;
void __iomem *mmio_base = ap->host_set->mmio_base;
void __iomem *chan = ADMA_REGS(mmio_base, ap->port_no);
/* mask/clear ATA interrupts */
writeb(ATA_NIEN, (void __iomem *)ap->ioaddr.ctl_addr);
ata_check_status(ap);
/* reset the ADMA engine */
adma_reset_engine(chan);
/* set in-FIFO threshold to 0x100 */
writew(0x100, chan + ADMA_FIFO_IN);
/* set CPB pointer */
writel((u32)pp->pkt_dma, chan + ADMA_CPB_NEXT);
/* set out-FIFO threshold to 0x100 */
writew(0x100, chan + ADMA_FIFO_OUT);
/* set CPB count */
writew(1, chan + ADMA_CPB_COUNT);
/* read/discard ADMA status */
readb(chan + ADMA_STATUS);
}
static inline void adma_enter_reg_mode(struct ata_port *ap)
{
void __iomem *chan = ADMA_REGS(ap->host_set->mmio_base, ap->port_no);
writew(aPIOMD4, chan + ADMA_CONTROL);
readb(chan + ADMA_STATUS); /* flush */
}
static void adma_phy_reset(struct ata_port *ap)
{
struct adma_port_priv *pp = ap->private_data;
pp->state = adma_state_idle;
adma_reinit_engine(ap);
ata_port_probe(ap);
ata_bus_reset(ap);
}
static void adma_eng_timeout(struct ata_port *ap)
{
struct adma_port_priv *pp = ap->private_data;
if (pp->state != adma_state_idle) /* healthy paranoia */
pp->state = adma_state_mmio;
adma_reinit_engine(ap);
ata_eng_timeout(ap);
}
static int adma_fill_sg(struct ata_queued_cmd *qc)
{
struct scatterlist *sg = qc->sg;
struct ata_port *ap = qc->ap;
struct adma_port_priv *pp = ap->private_data;
u8 *buf = pp->pkt;
int nelem, i = (2 + buf[3]) * 8;
u8 pFLAGS = pORD | ((qc->tf.flags & ATA_TFLAG_WRITE) ? pDIRO : 0);
for (nelem = 0; nelem < qc->n_elem; nelem++,sg++) {
u32 addr;
u32 len;
addr = (u32)sg_dma_address(sg);
*(__le32 *)(buf + i) = cpu_to_le32(addr);
i += 4;
len = sg_dma_len(sg) >> 3;
*(__le32 *)(buf + i) = cpu_to_le32(len);
i += 4;
if ((nelem + 1) == qc->n_elem)
pFLAGS |= pEND;
buf[i++] = pFLAGS;
buf[i++] = qc->dev->dma_mode & 0xf;
buf[i++] = 0; /* pPKLW */
buf[i++] = 0; /* reserved */
*(__le32 *)(buf + i)
= (pFLAGS & pEND) ? 0 : cpu_to_le32(pp->pkt_dma + i + 4);
i += 4;
VPRINTK("PRD[%u] = (0x%lX, 0x%X)\n", nelem,
(unsigned long)addr, len);
}
return i;
}
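/* Editor's aside -- not part of the patch.  The packet builder above
 * stores little-endian 32-bit values at byte offsets inside the CPB
 * via cpu_to_le32().  A portable user-space equivalent of that store:
 */
#include <assert.h>
#include <stdint.h>

static void put_le32(uint8_t *buf, unsigned int off, uint32_t val)
{
	buf[off + 0] = val & 0xff;		/* least significant byte first */
	buf[off + 1] = (val >> 8) & 0xff;
	buf[off + 2] = (val >> 16) & 0xff;
	buf[off + 3] = (val >> 24) & 0xff;
}

int main(void)
{
	uint8_t buf[8] = { 0 };

	put_le32(buf, 4, 0xdeadbeef);
	assert(buf[4] == 0xef && buf[7] == 0xde);
	return 0;
}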
static void adma_qc_prep(struct ata_queued_cmd *qc)
{
struct adma_port_priv *pp = qc->ap->private_data;
u8 *buf = pp->pkt;
u32 pkt_dma = (u32)pp->pkt_dma;
int i = 0;
VPRINTK("ENTER\n");
adma_enter_reg_mode(qc->ap);
if (qc->tf.protocol != ATA_PROT_DMA) {
ata_qc_prep(qc);
return;
}
buf[i++] = 0; /* Response flags */
buf[i++] = 0; /* reserved */
buf[i++] = cVLD | cDAT | cIEN;
i++; /* cLEN, gets filled in below */
*(__le32 *)(buf+i) = cpu_to_le32(pkt_dma); /* cNCPB */
i += 4; /* cNCPB */
i += 4; /* cPRD, gets filled in below */
buf[i++] = 0; /* reserved */
buf[i++] = 0; /* reserved */
buf[i++] = 0; /* reserved */
buf[i++] = 0; /* reserved */
/* ATA registers; must be a multiple of 4 */
buf[i++] = qc->tf.device;
buf[i++] = ADMA_REGS_DEVICE;
if ((qc->tf.flags & ATA_TFLAG_LBA48)) {
buf[i++] = qc->tf.hob_nsect;
buf[i++] = ADMA_REGS_SECTOR_COUNT;
buf[i++] = qc->tf.hob_lbal;
buf[i++] = ADMA_REGS_LBA_LOW;
buf[i++] = qc->tf.hob_lbam;
buf[i++] = ADMA_REGS_LBA_MID;
buf[i++] = qc->tf.hob_lbah;
buf[i++] = ADMA_REGS_LBA_HIGH;
}
buf[i++] = qc->tf.nsect;
buf[i++] = ADMA_REGS_SECTOR_COUNT;
buf[i++] = qc->tf.lbal;
buf[i++] = ADMA_REGS_LBA_LOW;
buf[i++] = qc->tf.lbam;
buf[i++] = ADMA_REGS_LBA_MID;
buf[i++] = qc->tf.lbah;
buf[i++] = ADMA_REGS_LBA_HIGH;
buf[i++] = 0;
buf[i++] = ADMA_REGS_CONTROL;
buf[i++] = rIGN;
buf[i++] = 0;
buf[i++] = qc->tf.command;
buf[i++] = ADMA_REGS_COMMAND | rEND;
buf[3] = (i >> 3) - 2; /* cLEN */
*(__le32 *)(buf+8) = cpu_to_le32(pkt_dma + i); /* cPRD */
i = adma_fill_sg(qc);
wmb(); /* flush PRDs and pkt to memory */
#if 0
/* dump out CPB + PRDs for debug */
{
int j, len = 0;
static char obuf[2048];
for (j = 0; j < i; ++j) {
len += sprintf(obuf+len, "%02x ", buf[j]);
if ((j & 7) == 7) {
printk("%s\n", obuf);
len = 0;
}
}
if (len)
printk("%s\n", obuf);
}
#endif
}
static inline void adma_packet_start(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
void __iomem *chan = ADMA_REGS(ap->host_set->mmio_base, ap->port_no);
VPRINTK("ENTER, ap %p\n", ap);
/* fire up the ADMA engine */
writew(aPIOMD4 | aGO, chan + ADMA_CONTROL);
}
static int adma_qc_issue(struct ata_queued_cmd *qc)
{
struct adma_port_priv *pp = qc->ap->private_data;
switch (qc->tf.protocol) {
case ATA_PROT_DMA:
pp->state = adma_state_pkt;
adma_packet_start(qc);
return 0;
case ATA_PROT_ATAPI_DMA:
BUG();
break;
default:
break;
}
pp->state = adma_state_mmio;
return ata_qc_issue_prot(qc);
}
static inline unsigned int adma_intr_pkt(struct ata_host_set *host_set)
{
unsigned int handled = 0, port_no;
u8 __iomem *mmio_base = host_set->mmio_base;
for (port_no = 0; port_no < host_set->n_ports; ++port_no) {
struct ata_port *ap = host_set->ports[port_no];
struct adma_port_priv *pp;
struct ata_queued_cmd *qc;
void __iomem *chan = ADMA_REGS(mmio_base, port_no);
u8 drv_stat, status = readb(chan + ADMA_STATUS);
if (status == 0)
continue;
handled = 1;
adma_enter_reg_mode(ap);
if ((ap->flags & ATA_FLAG_PORT_DISABLED))
continue;
pp = ap->private_data;
if (!pp || pp->state != adma_state_pkt)
continue;
qc = ata_qc_from_tag(ap, ap->active_tag);
drv_stat = 0;
if ((status & (aPERR | aPSD | aUIRQ)))
drv_stat = ATA_ERR;
else if (pp->pkt[0] != cDONE)
drv_stat = ATA_ERR;
ata_qc_complete(qc, drv_stat);
}
return handled;
}
static inline unsigned int adma_intr_mmio(struct ata_host_set *host_set)
{
unsigned int handled = 0, port_no;
for (port_no = 0; port_no < host_set->n_ports; ++port_no) {
struct ata_port *ap;
ap = host_set->ports[port_no];
if (ap && (!(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)))) {
struct ata_queued_cmd *qc;
struct adma_port_priv *pp = ap->private_data;
if (!pp || pp->state != adma_state_mmio)
continue;
qc = ata_qc_from_tag(ap, ap->active_tag);
if (qc && (!(qc->tf.ctl & ATA_NIEN))) {
/* check main status, clearing INTRQ */
u8 status = ata_chk_status(ap);
if ((status & ATA_BUSY))
continue;
DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n",
ap->id, qc->tf.protocol, status);
/* complete taskfile transaction */
pp->state = adma_state_idle;
ata_qc_complete(qc, status);
handled = 1;
}
}
}
return handled;
}
static irqreturn_t adma_intr(int irq, void *dev_instance, struct pt_regs *regs)
{
struct ata_host_set *host_set = dev_instance;
unsigned int handled = 0;
VPRINTK("ENTER\n");
spin_lock(&host_set->lock);
handled = adma_intr_pkt(host_set) | adma_intr_mmio(host_set);
spin_unlock(&host_set->lock);
VPRINTK("EXIT\n");
return IRQ_RETVAL(handled);
}
static void adma_ata_setup_port(struct ata_ioports *port, unsigned long base)
{
port->cmd_addr =
port->data_addr = base + 0x000;
port->error_addr =
port->feature_addr = base + 0x004;
port->nsect_addr = base + 0x008;
port->lbal_addr = base + 0x00c;
port->lbam_addr = base + 0x010;
port->lbah_addr = base + 0x014;
port->device_addr = base + 0x018;
port->status_addr =
port->command_addr = base + 0x01c;
port->altstatus_addr =
port->ctl_addr = base + 0x038;
}
static int adma_port_start(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct adma_port_priv *pp;
int rc;
rc = ata_port_start(ap);
if (rc)
return rc;
adma_enter_reg_mode(ap);
rc = -ENOMEM;
pp = kcalloc(1, sizeof(*pp), GFP_KERNEL);
if (!pp)
goto err_out;
pp->pkt = dma_alloc_coherent(dev, ADMA_PKT_BYTES, &pp->pkt_dma,
GFP_KERNEL);
if (!pp->pkt)
goto err_out_kfree;
/* paranoia? */
if ((pp->pkt_dma & 7) != 0) {
printk("bad alignment for pp->pkt_dma: %08x\n",
(u32)pp->pkt_dma);
goto err_out_kfree2;
}
memset(pp->pkt, 0, ADMA_PKT_BYTES);
ap->private_data = pp;
adma_reinit_engine(ap);
return 0;
err_out_kfree2:
/* free the coherent DMA buffer first; kfree'ing pp twice here would be a double free */
dma_free_coherent(dev, ADMA_PKT_BYTES, pp->pkt, pp->pkt_dma);
err_out_kfree:
kfree(pp);
err_out:
ata_port_stop(ap);
return rc;
}
static void adma_port_stop(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct adma_port_priv *pp = ap->private_data;
adma_reset_engine(ADMA_REGS(ap->host_set->mmio_base, ap->port_no));
if (pp != NULL) {
ap->private_data = NULL;
if (pp->pkt != NULL)
dma_free_coherent(dev, ADMA_PKT_BYTES,
pp->pkt, pp->pkt_dma);
kfree(pp);
}
ata_port_stop(ap);
}
static void adma_host_stop(struct ata_host_set *host_set)
{
unsigned int port_no;
for (port_no = 0; port_no < ADMA_PORTS; ++port_no)
adma_reset_engine(ADMA_REGS(host_set->mmio_base, port_no));
ata_pci_host_stop(host_set);
}
static void adma_host_init(unsigned int chip_id,
struct ata_probe_ent *probe_ent)
{
unsigned int port_no;
void __iomem *mmio_base = probe_ent->mmio_base;
/* enable/lock aGO operation */
writeb(7, mmio_base + ADMA_MODE_LOCK);
/* reset the ADMA logic */
for (port_no = 0; port_no < ADMA_PORTS; ++port_no)
adma_reset_engine(ADMA_REGS(mmio_base, port_no));
}
static int adma_set_dma_masks(struct pci_dev *pdev, void __iomem *mmio_base)
{
int rc;
rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
if (rc) {
printk(KERN_ERR DRV_NAME
"(%s): 32-bit DMA enable failed\n",
pci_name(pdev));
return rc;
}
rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
if (rc) {
printk(KERN_ERR DRV_NAME
"(%s): 32-bit consistent DMA enable failed\n",
pci_name(pdev));
return rc;
}
return 0;
}
static int adma_ata_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
static int printed_version;
struct ata_probe_ent *probe_ent = NULL;
void __iomem *mmio_base;
unsigned int board_idx = (unsigned int) ent->driver_data;
int rc, port_no;
if (!printed_version++)
printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
rc = pci_enable_device(pdev);
if (rc)
return rc;
rc = pci_request_regions(pdev, DRV_NAME);
if (rc)
goto err_out;
if ((pci_resource_flags(pdev, 4) & IORESOURCE_MEM) == 0) {
rc = -ENODEV;
goto err_out_regions;
}
mmio_base = pci_iomap(pdev, 4, 0);
if (mmio_base == NULL) {
rc = -ENOMEM;
goto err_out_regions;
}
rc = adma_set_dma_masks(pdev, mmio_base);
if (rc)
goto err_out_iounmap;
probe_ent = kcalloc(1, sizeof(*probe_ent), GFP_KERNEL);
if (probe_ent == NULL) {
rc = -ENOMEM;
goto err_out_iounmap;
}
probe_ent->dev = pci_dev_to_dev(pdev);
INIT_LIST_HEAD(&probe_ent->node);
probe_ent->sht = adma_port_info[board_idx].sht;
probe_ent->host_flags = adma_port_info[board_idx].host_flags;
probe_ent->pio_mask = adma_port_info[board_idx].pio_mask;
probe_ent->mwdma_mask = adma_port_info[board_idx].mwdma_mask;
probe_ent->udma_mask = adma_port_info[board_idx].udma_mask;
probe_ent->port_ops = adma_port_info[board_idx].port_ops;
probe_ent->irq = pdev->irq;
probe_ent->irq_flags = SA_SHIRQ;
probe_ent->mmio_base = mmio_base;
probe_ent->n_ports = ADMA_PORTS;
for (port_no = 0; port_no < probe_ent->n_ports; ++port_no) {
adma_ata_setup_port(&probe_ent->port[port_no],
ADMA_ATA_REGS((unsigned long)mmio_base, port_no));
}
pci_set_master(pdev);
/* initialize adapter */
adma_host_init(board_idx, probe_ent);
rc = ata_device_add(probe_ent);
kfree(probe_ent);
if (rc != ADMA_PORTS)
goto err_out_iounmap;
return 0;
err_out_iounmap:
pci_iounmap(pdev, mmio_base);
err_out_regions:
pci_release_regions(pdev);
err_out:
pci_disable_device(pdev);
return rc;
}
static int __init adma_ata_init(void)
{
return pci_module_init(&adma_ata_pci_driver);
}
static void __exit adma_ata_exit(void)
{
pci_unregister_driver(&adma_ata_pci_driver);
}
MODULE_AUTHOR("Mark Lord");
MODULE_DESCRIPTION("Pacific Digital Corporation ADMA low-level driver");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, adma_ata_pci_tbl);
MODULE_VERSION(DRV_VERSION);
module_init(adma_ata_init);
module_exit(adma_ata_exit);
...@@ -35,7 +35,7 @@
#include <asm/io.h>

#define DRV_NAME	"sata_mv"
#define DRV_VERSION	"0.25"
enum {
	/* BAR's are enumerated in terms of pci_resource_start() terms */
...@@ -55,31 +55,61 @@ enum {
	MV_SATAHC_ARBTR_REG_SZ	= MV_MINOR_REG_AREA_SZ,		/* arbiter */
	MV_PORT_REG_SZ		= MV_MINOR_REG_AREA_SZ,

	MV_USE_Q_DEPTH		= ATA_DEF_QUEUE,

	MV_MAX_Q_DEPTH		= 32,
	MV_MAX_Q_DEPTH_MASK	= MV_MAX_Q_DEPTH - 1,

	/* CRQB needs alignment on a 1KB boundary. Size == 1KB
	 * CRPB needs alignment on a 256B boundary. Size == 256B
	 * SG count of 176 leads to MV_PORT_PRIV_DMA_SZ == 4KB
	 * ePRD (SG) entries need alignment on a 16B boundary. Size == 16B
	 */
	MV_CRQB_Q_SZ		= (32 * MV_MAX_Q_DEPTH),
	MV_CRPB_Q_SZ		= (8 * MV_MAX_Q_DEPTH),
	MV_MAX_SG_CT		= 176,
	MV_SG_TBL_SZ		= (16 * MV_MAX_SG_CT),
	MV_PORT_PRIV_DMA_SZ	= (MV_CRQB_Q_SZ + MV_CRPB_Q_SZ + MV_SG_TBL_SZ),

	/* Our DMA boundary is determined by an ePRD being unable to handle
	 * anything larger than 64KB
	 */
	MV_DMA_BOUNDARY		= 0xffffU,

	MV_PORTS_PER_HC		= 4,
	/* == (port / MV_PORTS_PER_HC) to determine HC from 0-7 port */
	MV_PORT_HC_SHIFT	= 2,
	/* == (port % MV_PORTS_PER_HC) to determine hard port from 0-7 port */
	MV_PORT_MASK		= 3,

	/* Host Flags */
	MV_FLAG_DUAL_HC		= (1 << 30),  /* two SATA Host Controllers */
	MV_FLAG_IRQ_COALESCE	= (1 << 29),  /* IRQ coalescing capability */
	MV_FLAG_GLBL_SFT_RST	= (1 << 28),  /* Global Soft Reset support */
	MV_COMMON_FLAGS		= (ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
				   ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO),
	MV_6XXX_FLAGS		= (MV_FLAG_IRQ_COALESCE |
				   MV_FLAG_GLBL_SFT_RST),

	chip_504x		= 0,
	chip_508x		= 1,
	chip_604x		= 2,
	chip_608x		= 3,

	CRQB_FLAG_READ		= (1 << 0),
	CRQB_TAG_SHIFT		= 1,
	CRQB_CMD_ADDR_SHIFT	= 8,
	CRQB_CMD_CS		= (0x2 << 11),
	CRQB_CMD_LAST		= (1 << 15),

	CRPB_FLAG_STATUS_SHIFT	= 8,

	EPRD_FLAG_END_OF_TBL	= (1 << 31),

	/* PCI interface registers */

	PCI_COMMAND_OFS		= 0xc00,

	PCI_MAIN_CMD_STS_OFS	= 0xd30,
	STOP_PCI_MASTER		= (1 << 2),
	PCI_MASTER_EMPTY	= (1 << 3),
...@@ -111,20 +141,13 @@ enum {
	HC_CFG_OFS		= 0,

	HC_IRQ_CAUSE_OFS	= 0x14,
	CRPB_DMA_DONE		= (1 << 0),	/* shift by port # */
	HC_IRQ_COAL		= (1 << 4),	/* IRQ coalescing */
	DEV_IRQ			= (1 << 8),	/* shift by port # */

	/* Shadow block registers */
	SHD_BLK_OFS		= 0x100,
	SHD_CTL_AST_OFS		= 0x20,		/* ofs from SHD_BLK_OFS */

	/* SATA registers */
	SATA_STATUS_OFS		= 0x300,  /* ctrl, err regs follow status */
...@@ -132,6 +155,11 @@ enum {
	/* Port registers */
	EDMA_CFG_OFS		= 0,
	EDMA_CFG_Q_DEPTH	= 0,			/* queueing disabled */
	EDMA_CFG_NCQ		= (1 << 5),
	EDMA_CFG_NCQ_GO_ON_ERR	= (1 << 14),		/* continue on error */
	EDMA_CFG_RD_BRST_EXT	= (1 << 11),		/* read burst 512B */
	EDMA_CFG_WR_BUFF_LEN	= (1 << 13),		/* write buffer 512B */

	EDMA_ERR_IRQ_CAUSE_OFS	= 0x8,
	EDMA_ERR_IRQ_MASK_OFS	= 0xc,
...@@ -161,33 +189,85 @@ enum {
				   EDMA_ERR_LNK_DATA_TX |
				   EDMA_ERR_TRANS_PROTO),

	EDMA_REQ_Q_BASE_HI_OFS	= 0x10,
	EDMA_REQ_Q_IN_PTR_OFS	= 0x14,		/* also contains BASE_LO */
	EDMA_REQ_Q_BASE_LO_MASK	= 0xfffffc00U,
	EDMA_REQ_Q_OUT_PTR_OFS	= 0x18,
	EDMA_REQ_Q_PTR_SHIFT	= 5,

	EDMA_RSP_Q_BASE_HI_OFS	= 0x1c,
	EDMA_RSP_Q_IN_PTR_OFS	= 0x20,
	EDMA_RSP_Q_OUT_PTR_OFS	= 0x24,		/* also contains BASE_LO */
	EDMA_RSP_Q_BASE_LO_MASK	= 0xffffff00U,
	EDMA_RSP_Q_PTR_SHIFT	= 3,

	EDMA_CMD_OFS		= 0x28,
	EDMA_EN			= (1 << 0),
	EDMA_DS			= (1 << 1),
	ATA_RST			= (1 << 2),

	/* Host private flags (hp_flags) */
	MV_HP_FLAG_MSI		= (1 << 0),

	/* Port private flags (pp_flags) */
	MV_PP_FLAG_EDMA_EN	= (1 << 0),
	MV_PP_FLAG_EDMA_DS_ACT	= (1 << 1),
};

/* Command ReQuest Block: 32B */
struct mv_crqb {
	u32			sg_addr;
	u32			sg_addr_hi;
	u16			ctrl_flags;
	u16			ata_cmd[11];
};

/* Command ResPonse Block: 8B */
struct mv_crpb {
	u16			id;
	u16			flags;
	u32			tmstmp;
};

/* EDMA Physical Region Descriptor (ePRD); A.K.A. SG */
struct mv_sg {
	u32			addr;
	u32			flags_size;
	u32			addr_hi;
	u32			reserved;
};

struct mv_port_priv {
	struct mv_crqb		*crqb;
	dma_addr_t		crqb_dma;
	struct mv_crpb		*crpb;
	dma_addr_t		crpb_dma;
	struct mv_sg		*sg_tbl;
	dma_addr_t		sg_tbl_dma;

	unsigned		req_producer;	/* cp of req_in_ptr */
	unsigned		rsp_consumer;	/* cp of rsp_out_ptr */
	u32			pp_flags;
};

struct mv_host_priv {
	u32			hp_flags;
};
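/* Editor's aside -- not part of the patch.  The hardware sizes claimed
 * in the comments above, checked with fixed-width stand-ins: CRQB 32B,
 * CRPB 8B, ePRD 16B -- and one port's queues plus SG table exactly
 * filling the 4KB MV_PORT_PRIV_DMA_SZ chunk carved up in
 * mv_port_start().
 */
#include <assert.h>
#include <stdint.h>

struct toy_crqb { uint32_t sg_addr, sg_addr_hi; uint16_t ctrl_flags, ata_cmd[11]; };
struct toy_crpb { uint16_t id, flags; uint32_t tmstmp; };
struct toy_eprd { uint32_t addr, flags_size, addr_hi, reserved; };

int main(void)
{
	assert(sizeof(struct toy_crqb) == 32);
	assert(sizeof(struct toy_crpb) == 8);
	assert(sizeof(struct toy_eprd) == 16);
	/* 32*32 + 32*8 + 176*16 == 4096, i.e. one page per port */
	assert(32 * sizeof(struct toy_crqb) + 32 * sizeof(struct toy_crpb) +
	       176 * sizeof(struct toy_eprd) == 4096);
	return 0;
}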
static void mv_irq_clear(struct ata_port *ap);
static u32 mv_scr_read(struct ata_port *ap, unsigned int sc_reg_in);
static void mv_scr_write(struct ata_port *ap, unsigned int sc_reg_in, u32 val);
static u8 mv_check_err(struct ata_port *ap);
static void mv_phy_reset(struct ata_port *ap);
static void mv_host_stop(struct ata_host_set *host_set);
static int mv_port_start(struct ata_port *ap);
static void mv_port_stop(struct ata_port *ap);
static void mv_qc_prep(struct ata_queued_cmd *qc);
static int mv_qc_issue(struct ata_queued_cmd *qc);
static irqreturn_t mv_interrupt(int irq, void *dev_instance,
				struct pt_regs *regs);
static void mv_eng_timeout(struct ata_port *ap);
static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static Scsi_Host_Template mv_sht = {
...@@ -196,13 +276,13 @@ static Scsi_Host_Template mv_sht = {
	.ioctl			= ata_scsi_ioctl,
	.queuecommand		= ata_scsi_queuecmd,
	.eh_strategy_handler	= ata_scsi_error,
	.can_queue		= MV_USE_Q_DEPTH,
	.this_id		= ATA_SHT_THIS_ID,
	.sg_tablesize		= MV_MAX_SG_CT,
	.max_sectors		= ATA_MAX_SECTORS,
	.cmd_per_lun		= ATA_SHT_CMD_PER_LUN,
	.emulated		= ATA_SHT_EMULATED,
	.use_clustering		= ATA_SHT_USE_CLUSTERING,
	.proc_name		= DRV_NAME,
	.dma_boundary		= MV_DMA_BOUNDARY,
	.slave_configure	= ata_scsi_slave_config,
...@@ -210,21 +290,22 @@ static Scsi_Host_Template mv_sht = {
	.ordered_flush		= 1,
};
static const struct ata_port_operations mv_ops = {
	.port_disable		= ata_port_disable,

	.tf_load		= ata_tf_load,
	.tf_read		= ata_tf_read,
	.check_status		= ata_check_status,
	.check_err		= mv_check_err,
	.exec_command		= ata_exec_command,
	.dev_select		= ata_std_dev_select,

	.phy_reset		= mv_phy_reset,

	.qc_prep		= mv_qc_prep,
	.qc_issue		= mv_qc_issue,

	.eng_timeout		= mv_eng_timeout,

	.irq_handler		= mv_interrupt,
	.irq_clear		= mv_irq_clear,

...@@ -232,46 +313,39 @@ static struct ata_port_operations mv_ops = {
	.scr_read		= mv_scr_read,
	.scr_write		= mv_scr_write,

	.port_start		= mv_port_start,
	.port_stop		= mv_port_stop,
	.host_stop		= mv_host_stop,
};
static struct ata_port_info mv_port_info[] = {
	{  /* chip_504x */
		.sht		= &mv_sht,
		.host_flags	= MV_COMMON_FLAGS,
		.pio_mask	= 0x1f,	/* pio0-4 */
		.udma_mask	= 0,	/* 0x7f (udma0-6 disabled for now) */
		.port_ops	= &mv_ops,
	},
	{  /* chip_508x */
		.sht		= &mv_sht,
		.host_flags	= (MV_COMMON_FLAGS | MV_FLAG_DUAL_HC),
		.pio_mask	= 0x1f,	/* pio0-4 */
		.udma_mask	= 0,	/* 0x7f (udma0-6 disabled for now) */
		.port_ops	= &mv_ops,
	},
	{  /* chip_604x */
		.sht		= &mv_sht,
		.host_flags	= (MV_COMMON_FLAGS | MV_6XXX_FLAGS),
		.pio_mask	= 0x1f,	/* pio0-4 */
		.udma_mask	= 0x7f,	/* udma0-6 */
		.port_ops	= &mv_ops,
	},
	{  /* chip_608x */
		.sht		= &mv_sht,
		.host_flags	= (MV_COMMON_FLAGS | MV_6XXX_FLAGS |
				   MV_FLAG_DUAL_HC),
		.pio_mask	= 0x1f,	/* pio0-4 */
		.udma_mask	= 0x7f,	/* udma0-6 */
		.port_ops	= &mv_ops,
	},
};
...@@ -306,12 +380,6 @@ static inline void writelfl(unsigned long data, void __iomem *addr)
	(void) readl(addr);	/* flush to avoid PCI posted write */
}

static inline void __iomem *mv_hc_base(void __iomem *base, unsigned int hc)
{
	return (base + MV_SATAHC0_REG_BASE + (hc * MV_SATAHC_REG_SZ));
}
...@@ -329,24 +397,150 @@ static inline void __iomem *mv_ap_base(struct ata_port *ap)
	return mv_port_base(ap->host_set->mmio_base, ap->port_no);
}

static inline int mv_get_hc_count(unsigned long hp_flags)
{
	return ((hp_flags & MV_FLAG_DUAL_HC) ? 2 : 1);
}

static void mv_irq_clear(struct ata_port *ap)
{
}
/**
* mv_start_dma - Enable eDMA engine
* @base: port base address
* @pp: port private data
*
* Verify the local cache of the eDMA state is accurate with an
* assert.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_start_dma(void __iomem *base, struct mv_port_priv *pp)
{
if (!(MV_PP_FLAG_EDMA_EN & pp->pp_flags)) {
writelfl(EDMA_EN, base + EDMA_CMD_OFS);
pp->pp_flags |= MV_PP_FLAG_EDMA_EN;
}
assert(EDMA_EN & readl(base + EDMA_CMD_OFS));
}
/**
* mv_stop_dma - Disable eDMA engine
* @ap: ATA channel to manipulate
*
* Verify the local cache of the eDMA state is accurate with an
* assert.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_stop_dma(struct ata_port *ap)
{
void __iomem *port_mmio = mv_ap_base(ap);
struct mv_port_priv *pp = ap->private_data;
u32 reg;
int i;
if (MV_PP_FLAG_EDMA_EN & pp->pp_flags) {
/* Disable EDMA if active. The disable bit auto clears.
*/
writelfl(EDMA_DS, port_mmio + EDMA_CMD_OFS);
pp->pp_flags &= ~MV_PP_FLAG_EDMA_EN;
} else {
assert(!(EDMA_EN & readl(port_mmio + EDMA_CMD_OFS)));
}
/* now properly wait for the eDMA to stop */
for (i = 1000; i > 0; i--) {
reg = readl(port_mmio + EDMA_CMD_OFS);
if (!(EDMA_EN & reg)) {
break;
}
udelay(100);
}
if (EDMA_EN & reg) {
printk(KERN_ERR "ata%u: Unable to stop eDMA\n", ap->id);
/* FIXME: Consider doing a reset here to recover */
}
}
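/* Editor's aside -- not part of the patch.  The bounded-poll shape of
 * mv_stop_dma() above, extracted as a user-space sketch: spin until a
 * bit clears or the budget runs out (the driver uses readl()/udelay()).
 */
#include <assert.h>
#include <stdint.h>

static int poll_bit_clear(volatile uint32_t *reg, uint32_t mask, int tries)
{
	while (tries-- > 0) {
		if (!(*reg & mask))
			return 0;	/* bit cleared in time */
		/* the driver would udelay(100) here */
	}
	return -1;			/* hardware did not respond */
}

int main(void)
{
	uint32_t fake_reg = 0;		/* EDMA_EN already clear */

	assert(poll_bit_clear(&fake_reg, 1 << 0, 1000) == 0);
	return 0;
}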
#ifdef ATA_DEBUG
static void mv_dump_mem(void __iomem *start, unsigned bytes)
{
int b, w;
for (b = 0; b < bytes; ) {
DPRINTK("%p: ", start + b);
for (w = 0; b < bytes && w < 4; w++) {
printk("%08x ",readl(start + b));
b += sizeof(u32);
}
printk("\n");
}
} }
#endif
static void mv_dump_pci_cfg(struct pci_dev *pdev, unsigned bytes)
{
#ifdef ATA_DEBUG
int b, w;
u32 dw;
for (b = 0; b < bytes; ) {
DPRINTK("%02x: ", b);
for (w = 0; b < bytes && w < 4; w++) {
(void) pci_read_config_dword(pdev,b,&dw);
printk("%08x ",dw);
b += sizeof(u32);
}
printk("\n");
}
#endif
}
static void mv_dump_all_regs(void __iomem *mmio_base, int port,
struct pci_dev *pdev)
{
#ifdef ATA_DEBUG
void __iomem *hc_base = mv_hc_base(mmio_base,
port >> MV_PORT_HC_SHIFT);
void __iomem *port_base;
int start_port, num_ports, p, start_hc, num_hcs, hc;
if (0 > port) {
start_hc = start_port = 0;
num_ports = 8; /* should be benign for 4 port devs */
num_hcs = 2;
} else {
start_hc = port >> MV_PORT_HC_SHIFT;
start_port = port;
num_ports = num_hcs = 1;
}
DPRINTK("All registers for port(s) %u-%u:\n", start_port,
num_ports > 1 ? num_ports - 1 : start_port);
if (NULL != pdev) {
DPRINTK("PCI config space regs:\n");
mv_dump_pci_cfg(pdev, 0x68);
}
DPRINTK("PCI regs:\n");
mv_dump_mem(mmio_base+0xc00, 0x3c);
mv_dump_mem(mmio_base+0xd00, 0x34);
mv_dump_mem(mmio_base+0xf00, 0x4);
mv_dump_mem(mmio_base+0x1d00, 0x6c);
for (hc = start_hc; hc < start_hc + num_hcs; hc++) {
hc_base = mv_hc_base(mmio_base, port >> MV_PORT_HC_SHIFT);
DPRINTK("HC regs (HC %i):\n", hc);
mv_dump_mem(hc_base, 0x1c);
}
for (p = start_port; p < start_port + num_ports; p++) {
port_base = mv_port_base(mmio_base, p);
DPRINTK("EDMA regs (port %i):\n",p);
mv_dump_mem(port_base, 0x54);
DPRINTK("SATA regs (port %i):\n",p);
mv_dump_mem(port_base+0x300, 0x60);
}
#endif
} }
static unsigned int mv_scr_offset(unsigned int sc_reg_in) static unsigned int mv_scr_offset(unsigned int sc_reg_in)
...@@ -389,30 +583,37 @@ static void mv_scr_write(struct ata_port *ap, unsigned int sc_reg_in, u32 val)
	}
}
/**
 * mv_global_soft_reset - Perform the 6xxx global soft reset
 * @mmio_base: base address of the HBA
 *
 * This routine only applies to 6xxx parts.
 *
 * LOCKING:
 * Inherited from caller.
 */
static int mv_global_soft_reset(void __iomem *mmio_base)
{
	void __iomem *reg = mmio_base + PCI_MAIN_CMD_STS_OFS;
	int i, rc = 0;
	u32 t;

	/* Following procedure defined in PCI "main command and status
	 * register" table.
	 */
	t = readl(reg);
	writel(t | STOP_PCI_MASTER, reg);

	for (i = 0; i < 1000; i++) {
		udelay(1);
		t = readl(reg);
		if (PCI_MASTER_EMPTY & t) {
			break;
		}
	}
	if (!(PCI_MASTER_EMPTY & t)) {
		printk(KERN_ERR DRV_NAME ": PCI master won't flush\n");
		rc = 1;
		goto done;
	}
...@@ -425,39 +626,399 @@ static int mv_master_reset(void __iomem *mmio_base)
	} while (!(GLOB_SFT_RST & t) && (i-- > 0));

	if (!(GLOB_SFT_RST & t)) {
		printk(KERN_ERR DRV_NAME ": can't set global reset\n");
		rc = 1;
		goto done;
	}

	/* clear reset and *reenable the PCI master* (not mentioned in spec) */
	i = 5;
	do {
		writel(t & ~(GLOB_SFT_RST | STOP_PCI_MASTER), reg);
		t = readl(reg);
		udelay(1);
	} while ((GLOB_SFT_RST & t) && (i-- > 0));

	if (GLOB_SFT_RST & t) {
		printk(KERN_ERR DRV_NAME ": can't clear global reset\n");
		rc = 1;
	}
done:
	return rc;
}
/**
 * mv_host_stop - Host specific cleanup/stop routine.
 * @host_set: host data structure
 *
 * Disable ints, cleanup host memory, call general purpose
 * host_stop.
 *
 * LOCKING:
 * Inherited from caller.
 */
static void mv_host_stop(struct ata_host_set *host_set)
{
	struct mv_host_priv *hpriv = host_set->private_data;
	struct pci_dev *pdev = to_pci_dev(host_set->dev);

	if (hpriv->hp_flags & MV_HP_FLAG_MSI) {
		pci_disable_msi(pdev);
	} else {
		pci_intx(pdev, 0);
	}
	kfree(hpriv);
	ata_host_stop(host_set);
}
/**
* mv_port_start - Port specific init/start routine.
* @ap: ATA channel to manipulate
*
* Allocate and point to DMA memory, init port private memory,
* zero indices.
*
* LOCKING:
* Inherited from caller.
*/
static int mv_port_start(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct mv_port_priv *pp;
void __iomem *port_mmio = mv_ap_base(ap);
void *mem;
dma_addr_t mem_dma;
pp = kmalloc(sizeof(*pp), GFP_KERNEL);
if (!pp) {
return -ENOMEM;
}
memset(pp, 0, sizeof(*pp));
mem = dma_alloc_coherent(dev, MV_PORT_PRIV_DMA_SZ, &mem_dma,
GFP_KERNEL);
if (!mem) {
kfree(pp);
return -ENOMEM;
}
memset(mem, 0, MV_PORT_PRIV_DMA_SZ);
/* First item in chunk of DMA memory:
* 32-slot command request table (CRQB), 32 bytes each in size
*/
pp->crqb = mem;
pp->crqb_dma = mem_dma;
mem += MV_CRQB_Q_SZ;
mem_dma += MV_CRQB_Q_SZ;
/* Second item:
* 32-slot command response table (CRPB), 8 bytes each in size
*/
pp->crpb = mem;
pp->crpb_dma = mem_dma;
mem += MV_CRPB_Q_SZ;
mem_dma += MV_CRPB_Q_SZ;
/* Third item:
* Table of scatter-gather descriptors (ePRD), 16 bytes each
*/
pp->sg_tbl = mem;
pp->sg_tbl_dma = mem_dma;
writelfl(EDMA_CFG_Q_DEPTH | EDMA_CFG_RD_BRST_EXT |
EDMA_CFG_WR_BUFF_LEN, port_mmio + EDMA_CFG_OFS);
writel((pp->crqb_dma >> 16) >> 16, port_mmio + EDMA_REQ_Q_BASE_HI_OFS);
writelfl(pp->crqb_dma & EDMA_REQ_Q_BASE_LO_MASK,
port_mmio + EDMA_REQ_Q_IN_PTR_OFS);
writelfl(0, port_mmio + EDMA_REQ_Q_OUT_PTR_OFS);
writelfl(0, port_mmio + EDMA_RSP_Q_IN_PTR_OFS);
writel((pp->crpb_dma >> 16) >> 16, port_mmio + EDMA_RSP_Q_BASE_HI_OFS);
writelfl(pp->crpb_dma & EDMA_RSP_Q_BASE_LO_MASK,
port_mmio + EDMA_RSP_Q_OUT_PTR_OFS);
pp->req_producer = pp->rsp_consumer = 0;
/* Don't turn on EDMA here...do it before DMA commands only. Else
* we'll be unable to send non-data, PIO, etc due to restricted access
* to shadow regs.
*/
ap->private_data = pp;
return 0;
}
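A stand-alone sketch of the carve-up pattern above: one coherent allocation is split into consecutive regions by advancing the CPU pointer and the bus address in lockstep. The 1024-byte region size is assumed for illustration, not taken from the driver.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

int main(void)
{
	char buf[2048];			/* stand-in for dma_alloc_coherent() memory */
	void *mem = buf;
	uintptr_t mem_dma = 0x80000000u;	/* pretend bus address */
	void *crqb = mem;
	uintptr_t crqb_dma = mem_dma;

	mem = (char *)mem + 1024;	/* advance both views together */
	mem_dma += 1024;

	/* the CPU-visible offset always equals the bus-visible offset */
	assert((char *)mem - (char *)crqb == (ptrdiff_t)(mem_dma - crqb_dma));
	return 0;
}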
/**
* mv_port_stop - Port specific cleanup/stop routine.
* @ap: ATA channel to manipulate
*
* Stop DMA, cleanup port memory.
*
* LOCKING:
* This routine uses the host_set lock to protect the DMA stop.
*/
static void mv_port_stop(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct mv_port_priv *pp = ap->private_data;
unsigned long flags;
spin_lock_irqsave(&ap->host_set->lock, flags);
mv_stop_dma(ap);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
ap->private_data = NULL;
dma_free_coherent(dev, MV_PORT_PRIV_DMA_SZ, pp->crpb, pp->crpb_dma);
kfree(pp);
}
/**
* mv_fill_sg - Fill out the Marvell ePRD (scatter gather) entries
* @qc: queued command whose SG list to source from
*
* Populate the SG list and mark the last entry.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_fill_sg(struct ata_queued_cmd *qc)
{
struct mv_port_priv *pp = qc->ap->private_data;
unsigned int i;
for (i = 0; i < qc->n_elem; i++) {
u32 sg_len;
dma_addr_t addr;
addr = sg_dma_address(&qc->sg[i]);
sg_len = sg_dma_len(&qc->sg[i]);
pp->sg_tbl[i].addr = cpu_to_le32(addr & 0xffffffff);
pp->sg_tbl[i].addr_hi = cpu_to_le32((addr >> 16) >> 16);
assert(0 == (sg_len & ~MV_DMA_BOUNDARY));
pp->sg_tbl[i].flags_size = cpu_to_le32(sg_len);
}
if (0 < qc->n_elem) {
pp->sg_tbl[qc->n_elem - 1].flags_size |=
cpu_to_le32(EPRD_FLAG_END_OF_TBL);
}
}
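The two 16-bit shifts above are deliberate: shifting a 32-bit dma_addr_t right by 32 would be undefined behaviour in C, while "(addr >> 16) >> 16" is always defined and yields 0 on 32-bit configurations. A stand-alone illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t addr32 = 0xdeadbeef;	/* pretend 32-bit DMA address */

	/* "addr32 >> 32" would be undefined; this form is safe */
	printf("hi=0x%x lo=0x%x\n",
	       (unsigned)((addr32 >> 16) >> 16),
	       (unsigned)(addr32 & 0xffffffff));
	return 0;
}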
static inline unsigned mv_inc_q_index(unsigned *index)
{
*index = (*index + 1) & MV_MAX_Q_DEPTH_MASK;
return *index;
}
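A quick stand-alone check of the masked increment above, assuming the driver's queue depth of 32 (so MV_MAX_Q_DEPTH_MASK is 0x1f): the index wraps from 31 back to 0.

#include <assert.h>

#define MAX_Q_DEPTH	 32
#define MAX_Q_DEPTH_MASK (MAX_Q_DEPTH - 1)

static unsigned inc_q_index(unsigned *index)
{
	*index = (*index + 1) & MAX_Q_DEPTH_MASK;
	return *index;
}

int main(void)
{
	unsigned idx = 31;

	assert(inc_q_index(&idx) == 0);	/* wraps at queue depth */
	idx = 5;
	assert(inc_q_index(&idx) == 6);	/* normal increment */
	return 0;
}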
static inline void mv_crqb_pack_cmd(u16 *cmdw, u8 data, u8 addr, unsigned last)
{
*cmdw = data | (addr << CRQB_CMD_ADDR_SHIFT) | CRQB_CMD_CS |
(last ? CRQB_CMD_LAST : 0);
}
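A stand-alone decode of the command word packed above. The field placements below (data in the low byte, register address above it, control and last-word flags on top) are assumed for illustration; only the packing pattern mirrors the driver's constants.

#include <assert.h>
#include <stdint.h>

#define CMD_ADDR_SHIFT	8		/* assumed register-address position */
#define CMD_CS		(0x2 << 11)	/* assumed control field */
#define CMD_LAST	(1 << 15)	/* assumed last-word marker */

static uint16_t pack_cmd(uint8_t data, uint8_t addr, unsigned last)
{
	return data | (addr << CMD_ADDR_SHIFT) | CMD_CS |
	       (last ? CMD_LAST : 0);
}

int main(void)
{
	uint16_t w = pack_cmd(0x42, 0x7, 1);

	assert((w & 0xff) == 0x42);			/* data byte */
	assert(((w >> CMD_ADDR_SHIFT) & 0x7) == 0x7);	/* register address */
	assert(w & CMD_LAST);				/* marked as last */
	return 0;
}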
/**
 * mv_qc_prep - Host specific command preparation.
* @qc: queued command to prepare
*
* This routine simply redirects to the general purpose routine
* if command is not DMA. Else, it handles prep of the CRQB
* (command request block), does some sanity checking, and calls
* the SG load routine.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_qc_prep(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
struct mv_port_priv *pp = ap->private_data;
u16 *cw;
struct ata_taskfile *tf;
u16 flags = 0;
if (ATA_PROT_DMA != qc->tf.protocol) {
return;
}
/* the req producer index should be the same as we remember it */
assert(((readl(mv_ap_base(qc->ap) + EDMA_REQ_Q_IN_PTR_OFS) >>
EDMA_REQ_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK) ==
pp->req_producer);
/* Fill in command request block
	 */
	if (!(qc->tf.flags & ATA_TFLAG_WRITE)) {
		flags |= CRQB_FLAG_READ;
}
assert(MV_MAX_Q_DEPTH > qc->tag);
flags |= qc->tag << CRQB_TAG_SHIFT;
pp->crqb[pp->req_producer].sg_addr =
cpu_to_le32(pp->sg_tbl_dma & 0xffffffff);
pp->crqb[pp->req_producer].sg_addr_hi =
cpu_to_le32((pp->sg_tbl_dma >> 16) >> 16);
pp->crqb[pp->req_producer].ctrl_flags = cpu_to_le16(flags);
cw = &pp->crqb[pp->req_producer].ata_cmd[0];
tf = &qc->tf;
	/* Sadly, the CRQB cannot accommodate all registers--there are
* only 11 bytes...so we must pick and choose required
* registers based on the command. So, we drop feature and
* hob_feature for [RW] DMA commands, but they are needed for
* NCQ. NCQ will drop hob_nsect.
*/
switch (tf->command) {
case ATA_CMD_READ:
case ATA_CMD_READ_EXT:
case ATA_CMD_WRITE:
case ATA_CMD_WRITE_EXT:
mv_crqb_pack_cmd(cw++, tf->hob_nsect, ATA_REG_NSECT, 0);
break;
#ifdef LIBATA_NCQ /* FIXME: remove this line when NCQ added */
case ATA_CMD_FPDMA_READ:
case ATA_CMD_FPDMA_WRITE:
mv_crqb_pack_cmd(cw++, tf->hob_feature, ATA_REG_FEATURE, 0);
mv_crqb_pack_cmd(cw++, tf->feature, ATA_REG_FEATURE, 0);
break;
#endif /* FIXME: remove this line when NCQ added */
default:
/* The only other commands EDMA supports in non-queued and
* non-NCQ mode are: [RW] STREAM DMA and W DMA FUA EXT, none
* of which are defined/used by Linux. If we get here, this
* driver needs work.
*
* FIXME: modify libata to give qc_prep a return value and
* return error here.
*/
BUG_ON(tf->command);
break;
}
mv_crqb_pack_cmd(cw++, tf->nsect, ATA_REG_NSECT, 0);
mv_crqb_pack_cmd(cw++, tf->hob_lbal, ATA_REG_LBAL, 0);
mv_crqb_pack_cmd(cw++, tf->lbal, ATA_REG_LBAL, 0);
mv_crqb_pack_cmd(cw++, tf->hob_lbam, ATA_REG_LBAM, 0);
mv_crqb_pack_cmd(cw++, tf->lbam, ATA_REG_LBAM, 0);
mv_crqb_pack_cmd(cw++, tf->hob_lbah, ATA_REG_LBAH, 0);
mv_crqb_pack_cmd(cw++, tf->lbah, ATA_REG_LBAH, 0);
mv_crqb_pack_cmd(cw++, tf->device, ATA_REG_DEVICE, 0);
mv_crqb_pack_cmd(cw++, tf->command, ATA_REG_CMD, 1); /* last */
if (!(qc->flags & ATA_QCFLAG_DMAMAP)) {
return;
}
mv_fill_sg(qc);
}
/**
* mv_qc_issue - Initiate a command to the host
* @qc: queued command to start
*
* This routine simply redirects to the general purpose routine
* if command is not DMA. Else, it sanity checks our local
* caches of the request producer/consumer indices then enables
* DMA and bumps the request producer index.
*
* LOCKING:
* Inherited from caller.
*/
static int mv_qc_issue(struct ata_queued_cmd *qc)
{
void __iomem *port_mmio = mv_ap_base(qc->ap);
struct mv_port_priv *pp = qc->ap->private_data;
u32 in_ptr;
if (ATA_PROT_DMA != qc->tf.protocol) {
/* We're about to send a non-EDMA capable command to the
* port. Turn off EDMA so there won't be problems accessing
* shadow block, etc registers.
*/
mv_stop_dma(qc->ap);
return ata_qc_issue_prot(qc);
}
in_ptr = readl(port_mmio + EDMA_REQ_Q_IN_PTR_OFS);
/* the req producer index should be the same as we remember it */
assert(((in_ptr >> EDMA_REQ_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK) ==
pp->req_producer);
/* until we do queuing, the queue should be empty at this point */
assert(((in_ptr >> EDMA_REQ_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK) ==
((readl(port_mmio + EDMA_REQ_Q_OUT_PTR_OFS) >>
EDMA_REQ_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK));
mv_inc_q_index(&pp->req_producer); /* now incr producer index */
mv_start_dma(port_mmio, pp);
/* and write the request in pointer to kick the EDMA to life */
in_ptr &= EDMA_REQ_Q_BASE_LO_MASK;
in_ptr |= pp->req_producer << EDMA_REQ_Q_PTR_SHIFT;
writelfl(in_ptr, port_mmio + EDMA_REQ_Q_IN_PTR_OFS);
return 0;
}
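A stand-alone sketch of the in-pointer update done above: the register keeps the queue base address in its high bits while the producer index lives in a small low field. The layout below (base in bits 31:10, index in bits 9:5) is assumed, consistent with 32 request blocks of 32 bytes each.

#include <assert.h>
#include <stdint.h>

#define Q_BASE_LO_MASK	0xfffffc00u	/* assumed base field */
#define Q_PTR_SHIFT	5		/* assumed index field position */
#define Q_DEPTH_MASK	0x1f

static uint32_t bump_in_ptr(uint32_t in_ptr)
{
	uint32_t producer = ((in_ptr >> Q_PTR_SHIFT) & Q_DEPTH_MASK) + 1;

	producer &= Q_DEPTH_MASK;	/* wrap at queue depth */
	in_ptr &= Q_BASE_LO_MASK;	/* preserve the base bits */
	return in_ptr | (producer << Q_PTR_SHIFT);
}

int main(void)
{
	/* base 0x1000 with index 31 wraps to index 0, base untouched */
	assert(bump_in_ptr(0x1000 | (31 << Q_PTR_SHIFT)) == 0x1000);
	return 0;
}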
/**
* mv_get_crpb_status - get status from most recently completed cmd
* @ap: ATA channel to manipulate
*
* This routine is for use when the port is in DMA mode, when it
* will be using the CRPB (command response block) method of
* returning command completion information. We assert indices
* are good, grab status, and bump the response consumer index to
* prove that we're up to date.
*
* LOCKING:
* Inherited from caller.
*/
static u8 mv_get_crpb_status(struct ata_port *ap)
{
void __iomem *port_mmio = mv_ap_base(ap);
struct mv_port_priv *pp = ap->private_data;
u32 out_ptr;
out_ptr = readl(port_mmio + EDMA_RSP_Q_OUT_PTR_OFS);
/* the response consumer index should be the same as we remember it */
assert(((out_ptr >> EDMA_RSP_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK) ==
pp->rsp_consumer);
/* increment our consumer index... */
pp->rsp_consumer = mv_inc_q_index(&pp->rsp_consumer);
/* and, until we do NCQ, there should only be 1 CRPB waiting */
assert(((readl(port_mmio + EDMA_RSP_Q_IN_PTR_OFS) >>
EDMA_RSP_Q_PTR_SHIFT) & MV_MAX_Q_DEPTH_MASK) ==
pp->rsp_consumer);
/* write out our inc'd consumer index so EDMA knows we're caught up */
out_ptr &= EDMA_RSP_Q_BASE_LO_MASK;
out_ptr |= pp->rsp_consumer << EDMA_RSP_Q_PTR_SHIFT;
writelfl(out_ptr, port_mmio + EDMA_RSP_Q_OUT_PTR_OFS);
/* Return ATA status register for completed CRPB */
return (pp->crpb[pp->rsp_consumer].flags >> CRPB_FLAG_STATUS_SHIFT);
}
/**
* mv_err_intr - Handle error interrupts on the port
* @ap: ATA channel to manipulate
*
* In most cases, just clear the interrupt and move on. However,
* some cases require an eDMA reset, which is done right before
* the COMRESET in mv_phy_reset(). The SERR case requires a
* clear of pending errors in the SATA SERROR register. Finally,
* if the port disabled DMA, update our cached copy to match.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_err_intr(struct ata_port *ap)
{
void __iomem *port_mmio = mv_ap_base(ap);
u32 edma_err_cause, serr = 0;
	edma_err_cause = readl(port_mmio + EDMA_ERR_IRQ_CAUSE_OFS);
...@@ -465,8 +1026,12 @@ static void mv_err_intr(struct ata_port *ap)
		serr = scr_read(ap, SCR_ERROR);
		scr_write_flush(ap, SCR_ERROR, serr);
	}
	if (EDMA_ERR_SELF_DIS & edma_err_cause) {
		struct mv_port_priv *pp = ap->private_data;
		pp->pp_flags &= ~MV_PP_FLAG_EDMA_EN;
	}
	DPRINTK(KERN_ERR "ata%u: port error; EDMA err cause: 0x%08x "
		"SERR: 0x%08x\n", ap->id, edma_err_cause, serr);

	/* Clear EDMA now that SERR cleanup done */
	writelfl(0, port_mmio + EDMA_ERR_IRQ_CAUSE_OFS);
...@@ -477,7 +1042,21 @@ static void mv_err_intr(struct ata_port *ap)
	}
}

/**
 * mv_host_intr - Handle all interrupts on the given host controller
 * @host_set: host specific structure
 * @relevant: port error bits relevant to this host controller
 * @hc: which host controller we're to look at
 *
 * Read then write clear the HC interrupt status then walk each
 * port connected to the HC and see if it needs servicing.  Port
 * success ints are reported in the HC interrupt status reg, the
 * port error ints are reported in the higher level main
 * interrupt status register and thus are passed in via the
 * 'relevant' argument.
 *
 * LOCKING:
 * Inherited from caller.
 */
static void mv_host_intr(struct ata_host_set *host_set, u32 relevant,
			 unsigned int hc)
...@@ -487,8 +1066,8 @@ static void mv_host_intr(struct ata_host_set *host_set, u32 relevant,
	struct ata_port *ap;
	struct ata_queued_cmd *qc;
	u32 hc_irq_cause;
	int shift, port, port0, hard_port, handled;
	u8 ata_status = 0;

	if (hc == 0) {
		port0 = 0;
...@@ -499,7 +1078,7 @@ static void mv_host_intr(struct ata_host_set *host_set, u32 relevant,
	/* we'll need the HC success int register in most cases */
	hc_irq_cause = readl(hc_mmio + HC_IRQ_CAUSE_OFS);
	if (hc_irq_cause) {
		writelfl(~hc_irq_cause, hc_mmio + HC_IRQ_CAUSE_OFS);
	}

	VPRINTK("ENTER, hc%u relevant=0x%08x HC IRQ cause=0x%08x\n",
...@@ -508,35 +1087,38 @@ static void mv_host_intr(struct ata_host_set *host_set, u32 relevant,
	for (port = port0; port < port0 + MV_PORTS_PER_HC; port++) {
		ap = host_set->ports[port];
		hard_port = port & MV_PORT_MASK;	/* range 0-3 */
		handled = 0;	/* ensure ata_status is set if handled++ */

		if ((CRPB_DMA_DONE << hard_port) & hc_irq_cause) {
			/* new CRPB on the queue; just one at a time until NCQ
			 */
			ata_status = mv_get_crpb_status(ap);
			handled++;
		} else if ((DEV_IRQ << hard_port) & hc_irq_cause) {
			/* received ATA IRQ; read the status reg to clear INTRQ
			 */
			ata_status = readb((void __iomem *)
					   ap->ioaddr.status_addr);
			handled++;
		}

		shift = port << 1;		/* (port * 2) */
		if (port >= MV_PORTS_PER_HC) {
			shift++;	/* skip bit 8 in the HC Main IRQ reg */
		}
		if ((PORT0_ERR << shift) & relevant) {
			mv_err_intr(ap);
			/* OR in ATA_ERR to ensure libata knows we took one */
			ata_status = readb((void __iomem *)
					   ap->ioaddr.status_addr) | ATA_ERR;
			handled++;
		}

		if (handled && ap) {
			qc = ata_qc_from_tag(ap, ap->active_tag);
			if (NULL != qc) {
				VPRINTK("port %u IRQ found for qc, "
					"ata_status 0x%x\n", port, ata_status);
				/* mark qc status appropriately */
				ata_qc_complete(qc, ata_status);
			}
...@@ -545,17 +1127,30 @@ static void mv_host_intr(struct ata_host_set *host_set, u32 relevant,
	VPRINTK("EXIT\n");
}
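A stand-alone illustration of the shift computation used in the loop above: each port owns two bits in the main cause register, and bit 8 is skipped, so ports on the second host controller land one position higher.

#include <stdio.h>

#define PORTS_PER_HC 4	/* matches MV_PORTS_PER_HC in the driver */

int main(void)
{
	int port;

	for (port = 0; port < 8; port++) {
		int shift = port << 1;	/* two bits per port */
		if (port >= PORTS_PER_HC)
			shift++;	/* skip bit 8 in the HC Main IRQ reg */
		printf("port %d -> error bit %d\n", port, shift);
	}
	return 0;
}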
/**
 * mv_interrupt - Top-level interrupt handler for the host
* @irq: unused
* @dev_instance: private data; in this case the host structure
* @regs: unused
*
* Read the read only register to determine if any host
* controllers have pending interrupts. If so, call lower level
* routine to handle. Also check for PCI errors which are only
* reported here.
*
* LOCKING:
* This routine holds the host_set lock while processing pending
* interrupts.
*/
static irqreturn_t mv_interrupt(int irq, void *dev_instance,
				struct pt_regs *regs)
{
	struct ata_host_set *host_set = dev_instance;
	unsigned int hc, handled = 0, n_hcs;
	void __iomem *mmio = host_set->mmio_base;
	u32 irq_stat;

	irq_stat = readl(mmio + HC_MAIN_IRQ_CAUSE_OFS);

	/* check the cases where we either have nothing pending or have read
	 * a bogus register value which can indicate HW removal or PCI fault
...@@ -564,64 +1159,105 @@ static irqreturn_t mv_interrupt(int irq, void *dev_instance,
		return IRQ_NONE;
	}

	n_hcs = mv_get_hc_count(host_set->ports[0]->flags);
	spin_lock(&host_set->lock);

	for (hc = 0; hc < n_hcs; hc++) {
		u32 relevant = irq_stat & (HC0_IRQ_PEND << (hc * HC_SHIFT));
		if (relevant) {
			mv_host_intr(host_set, relevant, hc);
			handled++;
		}
	}
	if (PCI_ERR & irq_stat) {
		printk(KERN_ERR DRV_NAME ": PCI ERROR; PCI IRQ cause=0x%08x\n",
		       readl(mmio + PCI_IRQ_CAUSE_OFS));

		DPRINTK("All regs @ PCI error\n");
		mv_dump_all_regs(mmio, -1, to_pci_dev(host_set->dev));

		writelfl(0, mmio + PCI_IRQ_CAUSE_OFS);
		handled++;
	}
	spin_unlock(&host_set->lock);

	return IRQ_RETVAL(handled);
}
/**
* mv_check_err - Return the error shadow register to caller.
* @ap: ATA channel to manipulate
*
* Marvell requires DMA to be stopped before accessing shadow
* registers. So we do that, then return the needed register.
*
* LOCKING:
* Inherited from caller. FIXME: protect mv_stop_dma with lock?
*/
static u8 mv_check_err(struct ata_port *ap)
{
mv_stop_dma(ap); /* can't read shadow regs if DMA on */
return readb((void __iomem *) ap->ioaddr.error_addr);
}
/**
* mv_phy_reset - Perform eDMA reset followed by COMRESET
* @ap: ATA channel to manipulate
*
* Part of this is taken from __sata_phy_reset and modified to
* not sleep since this routine gets called from interrupt level.
*
* LOCKING:
 * Inherited from caller. This is coded to be safe to call at
* interrupt level, i.e. it does not sleep.
*/
static void mv_phy_reset(struct ata_port *ap)
{
	void __iomem *port_mmio = mv_ap_base(ap);
	struct ata_taskfile tf;
	struct ata_device *dev = &ap->device[0];
	unsigned long timeout;

	VPRINTK("ENTER, port %u, mmio 0x%p\n", ap->port_no, port_mmio);

	mv_stop_dma(ap);

	writelfl(ATA_RST, port_mmio + EDMA_CMD_OFS);
	udelay(25);		/* allow reset propagation */

	/* Spec never mentions clearing the bit.  Marvell's driver does
	 * clear the bit, however.
	 */
	writelfl(0, port_mmio + EDMA_CMD_OFS);

	VPRINTK("S-regs after ATA_RST: SStat 0x%08x SErr 0x%08x "
		"SCtrl 0x%08x\n", mv_scr_read(ap, SCR_STATUS),
		mv_scr_read(ap, SCR_ERROR), mv_scr_read(ap, SCR_CONTROL));

	/* proceed to init communications via the scr_control reg */
	scr_write_flush(ap, SCR_CONTROL, 0x301);
	mdelay(1);
	scr_write_flush(ap, SCR_CONTROL, 0x300);
	timeout = jiffies + (HZ * 1);
	do {
		mdelay(10);
		if ((scr_read(ap, SCR_STATUS) & 0xf) != 1)
			break;
	} while (time_before(jiffies, timeout));

	VPRINTK("S-regs after PHY wake: SStat 0x%08x SErr 0x%08x "
		"SCtrl 0x%08x\n", mv_scr_read(ap, SCR_STATUS),
		mv_scr_read(ap, SCR_ERROR), mv_scr_read(ap, SCR_CONTROL));

	if (sata_dev_present(ap)) {
		ata_port_probe(ap);
	} else {
		printk(KERN_INFO "ata%u: no device found (phy stat %08x)\n",
		       ap->id, scr_read(ap, SCR_STATUS));
		ata_port_disable(ap);
		return;
	}
	ap->cbl = ATA_CBL_SATA;

	tf.lbah = readb((void __iomem *) ap->ioaddr.lbah_addr);
	tf.lbam = readb((void __iomem *) ap->ioaddr.lbam_addr);
...@@ -636,37 +1272,118 @@ static void mv_phy_reset(struct ata_port *ap)
	VPRINTK("EXIT\n");
}
/**
* mv_eng_timeout - Routine called by libata when SCSI times out I/O
* @ap: ATA channel to manipulate
*
* Intent is to clear all pending error conditions, reset the
* chip/bus, fail the command, and move on.
*
* LOCKING:
* This routine holds the host_set lock while failing the command.
*/
static void mv_eng_timeout(struct ata_port *ap)
{
struct ata_queued_cmd *qc;
unsigned long flags;
printk(KERN_ERR "ata%u: Entering mv_eng_timeout\n",ap->id);
DPRINTK("All regs @ start of eng_timeout\n");
mv_dump_all_regs(ap->host_set->mmio_base, ap->port_no,
to_pci_dev(ap->host_set->dev));
qc = ata_qc_from_tag(ap, ap->active_tag);
printk(KERN_ERR "mmio_base %p ap %p qc %p scsi_cmnd %p &cmnd %p\n",
ap->host_set->mmio_base, ap, qc, qc->scsicmd,
&qc->scsicmd->cmnd);
mv_err_intr(ap);
mv_phy_reset(ap);
if (!qc) {
printk(KERN_ERR "ata%u: BUG: timeout without command\n",
ap->id);
} else {
/* hack alert! We cannot use the supplied completion
* function from inside the ->eh_strategy_handler() thread.
* libata is the only user of ->eh_strategy_handler() in
* any kernel, so the default scsi_done() assumes it is
* not being called from the SCSI EH.
*/
spin_lock_irqsave(&ap->host_set->lock, flags);
qc->scsidone = scsi_finish_command;
ata_qc_complete(qc, ATA_ERR);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
}
}
/**
* mv_port_init - Perform some early initialization on a single port.
* @port: libata data structure storing shadow register addresses
* @port_mmio: base address of the port
*
* Initialize shadow register mmio addresses, clear outstanding
* interrupts on the port, and unmask interrupts for the future
* start of the port.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_port_init(struct ata_ioports *port, void __iomem *port_mmio)
{
	unsigned long shd_base = (unsigned long) port_mmio + SHD_BLK_OFS;
	unsigned serr_ofs;

	/* PIO related setup
	 */
	port->data_addr = shd_base + (sizeof(u32) * ATA_REG_DATA);
	port->error_addr =
		port->feature_addr = shd_base + (sizeof(u32) * ATA_REG_ERR);
	port->nsect_addr = shd_base + (sizeof(u32) * ATA_REG_NSECT);
	port->lbal_addr = shd_base + (sizeof(u32) * ATA_REG_LBAL);
	port->lbam_addr = shd_base + (sizeof(u32) * ATA_REG_LBAM);
	port->lbah_addr = shd_base + (sizeof(u32) * ATA_REG_LBAH);
	port->device_addr = shd_base + (sizeof(u32) * ATA_REG_DEVICE);
	port->status_addr =
		port->command_addr = shd_base + (sizeof(u32) * ATA_REG_STATUS);
	/* special case: control/altstatus doesn't have ATA_REG_ address */
	port->altstatus_addr = port->ctl_addr = shd_base + SHD_CTL_AST_OFS;

	/* unused: */
	port->cmd_addr = port->bmdma_addr = port->scr_addr = 0;

	/* Clear any currently outstanding port interrupt conditions */
	serr_ofs = mv_scr_offset(SCR_ERROR);
	writelfl(readl(port_mmio + serr_ofs), port_mmio + serr_ofs);
	writelfl(0, port_mmio + EDMA_ERR_IRQ_CAUSE_OFS);

	/* unmask all EDMA error interrupts */
	writelfl(~0, port_mmio + EDMA_ERR_IRQ_MASK_OFS);

	VPRINTK("EDMA cfg=0x%08x EDMA IRQ err cause/mask=0x%08x/0x%08x\n",
		readl(port_mmio + EDMA_CFG_OFS),
		readl(port_mmio + EDMA_ERR_IRQ_CAUSE_OFS),
		readl(port_mmio + EDMA_ERR_IRQ_MASK_OFS));
}
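A stand-alone sketch of the shadow-register address math above: the shadow block exposes the taskfile registers as consecutive 32-bit words, so register N sits at shd_base + sizeof(u32) * N. The register indices below are assumed for illustration.

#include <stdio.h>

int main(void)
{
	unsigned long shd_base = 0x2100;	/* pretend port base + SHD_BLK_OFS */
	enum { REG_DATA = 0, REG_ERR = 1, REG_NSECT = 2 };	/* assumed indices */

	printf("data  @ 0x%lx\n", shd_base + sizeof(unsigned) * REG_DATA);
	printf("error @ 0x%lx\n", shd_base + sizeof(unsigned) * REG_ERR);
	printf("nsect @ 0x%lx\n", shd_base + sizeof(unsigned) * REG_NSECT);
	return 0;
}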
/**
* mv_host_init - Perform some early initialization of the host.
* @probe_ent: early data struct representing the host
*
* If possible, do an early global reset of the host. Then do
* our port init and clear/unmask all/relevant host interrupts.
*
* LOCKING:
* Inherited from caller.
*/
static int mv_host_init(struct ata_probe_ent *probe_ent)
{
	int rc = 0, n_hc, port, hc;
	void __iomem *mmio = probe_ent->mmio_base;
	void __iomem *port_mmio;

	if ((MV_FLAG_GLBL_SFT_RST & probe_ent->host_flags) &&
	    mv_global_soft_reset(probe_ent->mmio_base)) {
		rc = 1;
		goto done;
	}
...@@ -676,17 +1393,27 @@ static int mv_host_init(struct ata_probe_ent *probe_ent)
	for (port = 0; port < probe_ent->n_ports; port++) {
		port_mmio = mv_port_base(mmio, port);
		mv_port_init(&probe_ent->port[port], port_mmio);
	}

	for (hc = 0; hc < n_hc; hc++) {
		void __iomem *hc_mmio = mv_hc_base(mmio, hc);

		VPRINTK("HC%i: HC config=0x%08x HC IRQ cause "
			"(before clear)=0x%08x\n", hc,
			readl(hc_mmio + HC_CFG_OFS),
			readl(hc_mmio + HC_IRQ_CAUSE_OFS));

		/* Clear any currently outstanding hc interrupt conditions */
		writelfl(0, hc_mmio + HC_IRQ_CAUSE_OFS);
	}

	/* Clear any currently outstanding host interrupt conditions */
	writelfl(0, mmio + PCI_IRQ_CAUSE_OFS);

	/* and unmask interrupt generation for host regs */
	writelfl(PCI_UNMASK_ALL_IRQS, mmio + PCI_IRQ_MASK_OFS);
	writelfl(~HC_MAIN_MASKED_IRQS, mmio + HC_MAIN_IRQ_MASK_OFS);

	VPRINTK("HC MAIN IRQ cause/mask=0x%08x/0x%08x "
		"PCI int cause/mask=0x%08x/0x%08x\n",
...@@ -694,11 +1421,53 @@ static int mv_host_init(struct ata_probe_ent *probe_ent)
		readl(mmio + HC_MAIN_IRQ_MASK_OFS),
		readl(mmio + PCI_IRQ_CAUSE_OFS),
		readl(mmio + PCI_IRQ_MASK_OFS));
done:
	return rc;
}
/**
* mv_print_info - Dump key info to kernel log for perusal.
* @probe_ent: early data struct representing the host
*
* FIXME: complete this.
*
* LOCKING:
* Inherited from caller.
*/
static void mv_print_info(struct ata_probe_ent *probe_ent)
{
struct pci_dev *pdev = to_pci_dev(probe_ent->dev);
struct mv_host_priv *hpriv = probe_ent->private_data;
u8 rev_id, scc;
const char *scc_s;
/* Use this to determine the HW stepping of the chip so we know
* what errata to workaround
*/
pci_read_config_byte(pdev, PCI_REVISION_ID, &rev_id);
pci_read_config_byte(pdev, PCI_CLASS_DEVICE, &scc);
if (scc == 0)
scc_s = "SCSI";
else if (scc == 0x01)
scc_s = "RAID";
else
scc_s = "unknown";
printk(KERN_INFO DRV_NAME
"(%s) %u slots %u ports %s mode IRQ via %s\n",
pci_name(pdev), (unsigned)MV_MAX_Q_DEPTH, probe_ent->n_ports,
scc_s, (MV_HP_FLAG_MSI & hpriv->hp_flags) ? "MSI" : "INTx");
}
/**
* mv_init_one - handle a positive probe of a Marvell host
* @pdev: PCI device found
* @ent: PCI device ID entry for the matched host
*
* LOCKING:
* Inherited from caller.
*/
static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	static int printed_version = 0;
...@@ -706,16 +1475,12 @@ static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
	struct mv_host_priv *hpriv;
	unsigned int board_idx = (unsigned int)ent->driver_data;
	void __iomem *mmio_base;
	int pci_dev_busy = 0, rc;

	if (!printed_version++) {
		printk(KERN_INFO DRV_NAME " version " DRV_VERSION "\n");
	}

	rc = pci_enable_device(pdev);
	if (rc) {
		return rc;
...@@ -727,8 +1492,6 @@ static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
		goto err_out;
	}

	probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
	if (probe_ent == NULL) {
		rc = -ENOMEM;
...@@ -739,8 +1502,7 @@ static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
	probe_ent->dev = pci_dev_to_dev(pdev);
	INIT_LIST_HEAD(&probe_ent->node);

	mmio_base = pci_iomap(pdev, MV_PRIMARY_BAR, 0);
	if (mmio_base == NULL) {
		rc = -ENOMEM;
		goto err_out_free_ent;
...@@ -769,37 +1531,40 @@ static int mv_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
	if (rc) {
		goto err_out_hpriv;
	}

	/* Enable interrupts */
	if (pci_enable_msi(pdev) == 0) {
		hpriv->hp_flags |= MV_HP_FLAG_MSI;
	} else {
		pci_intx(pdev, 1);
	}

	mv_dump_pci_cfg(pdev, 0x68);
	mv_print_info(probe_ent);

	if (ata_device_add(probe_ent) == 0) {
		rc = -ENODEV;		/* No devices discovered */
		goto err_out_dev_add;
	}
	kfree(probe_ent);
	return 0;

err_out_dev_add:
	if (MV_HP_FLAG_MSI & hpriv->hp_flags) {
		pci_disable_msi(pdev);
	} else {
		pci_intx(pdev, 0);
	}
err_out_hpriv:
	kfree(hpriv);
err_out_iounmap:
	pci_iounmap(pdev, mmio_base);
err_out_free_ent:
	kfree(probe_ent);
err_out_regions:
	pci_release_regions(pdev);
err_out:
	if (!pci_dev_busy) {
		pci_disable_device(pdev);
	}
...
...@@ -238,7 +238,7 @@ static Scsi_Host_Template nv_sht = {
	.ordered_flush		= 1,
};

static const struct ata_port_operations nv_ops = {
	.port_disable		= ata_port_disable,
	.tf_load		= ata_tf_load,
	.tf_read		= ata_tf_read,
...@@ -331,7 +331,7 @@ static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg)
		return 0xffffffffU;

	if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO)
		return readl((void __iomem *)ap->ioaddr.scr_addr + (sc_reg * 4));
	else
		return inl(ap->ioaddr.scr_addr + (sc_reg * 4));
}
...@@ -345,7 +345,7 @@ static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
		return;

	if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO)
		writel(val, (void __iomem *)ap->ioaddr.scr_addr + (sc_reg * 4));
	else
		outl(val, ap->ioaddr.scr_addr + (sc_reg * 4));
}
...@@ -405,7 +405,7 @@ static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
	rc = -ENOMEM;

	ppi = &nv_port_info;
	probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
	if (!probe_ent)
		goto err_out_regions;
...
...@@ -87,8 +87,8 @@ static void pdc_port_stop(struct ata_port *ap);
static void pdc_pata_phy_reset(struct ata_port *ap);
static void pdc_sata_phy_reset(struct ata_port *ap);
static void pdc_qc_prep(struct ata_queued_cmd *qc);
static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
static void pdc_irq_clear(struct ata_port *ap);
static int pdc_qc_issue_prot(struct ata_queued_cmd *qc);
...@@ -113,7 +113,7 @@ static Scsi_Host_Template pdc_ata_sht = {
	.ordered_flush		= 1,
};

static const struct ata_port_operations pdc_sata_ops = {
	.port_disable		= ata_port_disable,
	.tf_load		= pdc_tf_load_mmio,
	.tf_read		= ata_tf_read,
...@@ -136,7 +136,7 @@ static struct ata_port_operations pdc_sata_ops = {
	.host_stop		= ata_pci_host_stop,
};

static const struct ata_port_operations pdc_pata_ops = {
	.port_disable		= ata_port_disable,
	.tf_load		= pdc_tf_load_mmio,
	.tf_read		= ata_tf_read,
...@@ -324,7 +324,7 @@ static u32 pdc_sata_scr_read (struct ata_port *ap, unsigned int sc_reg)
{
	if (sc_reg > SCR_CONTROL)
		return 0xffffffffU;
	return readl((void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
}
...@@ -333,7 +333,7 @@ static void pdc_sata_scr_write (struct ata_port *ap, unsigned int sc_reg,
{
	if (sc_reg > SCR_CONTROL)
		return;
	writel(val, (void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
}

static void pdc_qc_prep(struct ata_queued_cmd *qc)
...@@ -438,11 +438,11 @@ static inline unsigned int pdc_host_intr( struct ata_port *ap,
		break;

	default:
		ap->stats.idle_irq++;
		break;
	}

	return handled;
}

static void pdc_irq_clear(struct ata_port *ap)
...@@ -523,8 +523,8 @@ static inline void pdc_packet_start(struct ata_queued_cmd *qc)
	pp->pkt[2] = seq;
	wmb();			/* flush PRD, pkt writes */
	writel(pp->pkt_dma, (void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
	readl((void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT); /* flush */
}

static int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
...@@ -546,7 +546,7 @@ static int pdc_qc_issue_prot(struct ata_queued_cmd *qc)
	return ata_qc_issue_prot(qc);
}

static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	WARN_ON (tf->protocol == ATA_PROT_DMA ||
		 tf->protocol == ATA_PROT_NODATA);
...@@ -554,7 +554,7 @@ static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
}

static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
{
	WARN_ON (tf->protocol == ATA_PROT_DMA ||
		 tf->protocol == ATA_PROT_NODATA);
...
...@@ -51,8 +51,6 @@ enum {
	QS_PRD_BYTES		= QS_MAX_PRD * 16,
	QS_PKT_BYTES		= QS_CPB_BYTES + QS_PRD_BYTES,

	/* global register offsets */
	QS_HCF_CNFG3		= 0x0003, /* host configuration offset */
	QS_HID_HPHY		= 0x0004, /* host physical interface info */
...@@ -101,6 +99,10 @@ enum {
	board_2068_idx		= 0,	/* QStor 4-port SATA/RAID */
};

enum {
	QS_DMA_BOUNDARY		= ~0UL
};

typedef enum { qs_state_idle, qs_state_pkt, qs_state_mmio } qs_state_t;

struct qs_port_priv {
...@@ -145,7 +147,7 @@ static Scsi_Host_Template qs_ata_sht = {
	.bios_param		= ata_std_bios_param,
};

static const struct ata_port_operations qs_ata_ops = {
	.port_disable		= ata_port_disable,
	.tf_load		= ata_tf_load,
	.tf_read		= ata_tf_read,
...
...@@ -150,7 +150,7 @@ static Scsi_Host_Template sil_sht = {
	.ordered_flush		= 1,
};

static const struct ata_port_operations sil_ops = {
	.port_disable		= ata_port_disable,
	.dev_config		= sil_dev_config,
	.tf_load		= ata_tf_load,
...@@ -289,7 +289,7 @@ static inline unsigned long sil_scr_addr(struct ata_port *ap, unsigned int sc_re
static u32 sil_scr_read (struct ata_port *ap, unsigned int sc_reg)
{
	void __iomem *mmio = (void __iomem *) sil_scr_addr(ap, sc_reg);
	if (mmio)
		return readl(mmio);
	return 0xffffffffU;
...@@ -297,7 +297,7 @@ static u32 sil_scr_read (struct ata_port *ap, unsigned int sc_reg)
static void sil_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
{
	void __iomem *mmio = (void __iomem *) sil_scr_addr(ap, sc_reg);
	if (mmio)
		writel(val, mmio);
}
...
/*
* sata_sil24.c - Driver for Silicon Image 3124/3132 SATA-2 controllers
*
* Copyright 2005 Tejun Heo
*
* Based on preview driver from Silicon Image.
*
* NOTE: No NCQ/ATAPI support yet. The preview driver didn't support
* NCQ nor ATAPI, and, unfortunately, I couldn't find out how to make
* those work. Enabling those shouldn't be difficult. Basic
* structure is all there (in libata-dev tree). If you have any
* information about this hardware, please contact me or linux-ide.
* Info is needed on...
*
* - How to issue tagged commands and turn on sactive on issue accordingly.
* - Where to put an ATAPI command and how to tell the device to send it.
* - How to enable/use 64bit.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2, or (at your option) any
* later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/blkdev.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <scsi/scsi_host.h>
#include "scsi.h"
#include <linux/libata.h>
#include <asm/io.h>
#define DRV_NAME "sata_sil24"
#define DRV_VERSION "0.22" /* Silicon Image's preview driver was 0.10 */
/*
* Port request block (PRB) 32 bytes
*/
struct sil24_prb {
u16 ctrl;
u16 prot;
u32 rx_cnt;
u8 fis[6 * 4];
};
/*
* Scatter gather entry (SGE) 16 bytes
*/
struct sil24_sge {
u64 addr;
u32 cnt;
u32 flags;
};
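A quick host-side size check of the two structures above, assuming u16/u32/u64 correspond to the <stdint.h> fixed-width types: the PRB comes to 2 + 2 + 4 + 24 = 32 bytes and the SGE to 8 + 4 + 4 = 16 bytes, matching the comments.

#include <assert.h>
#include <stdint.h>

struct prb { uint16_t ctrl; uint16_t prot; uint32_t rx_cnt; uint8_t fis[6 * 4]; };
struct sge { uint64_t addr; uint32_t cnt; uint32_t flags; };

int main(void)
{
	assert(sizeof(struct prb) == 32);	/* "Port request block (PRB) 32 bytes" */
	assert(sizeof(struct sge) == 16);	/* "Scatter gather entry (SGE) 16 bytes" */
	return 0;
}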
/*
* Port multiplier
*/
struct sil24_port_multiplier {
u32 diag;
u32 sactive;
};
enum {
/*
* Global controller registers (128 bytes @ BAR0)
*/
/* 32 bit regs */
HOST_SLOT_STAT = 0x00, /* 32 bit slot stat * 4 */
HOST_CTRL = 0x40,
HOST_IRQ_STAT = 0x44,
HOST_PHY_CFG = 0x48,
HOST_BIST_CTRL = 0x50,
HOST_BIST_PTRN = 0x54,
HOST_BIST_STAT = 0x58,
HOST_MEM_BIST_STAT = 0x5c,
HOST_FLASH_CMD = 0x70,
/* 8 bit regs */
HOST_FLASH_DATA = 0x74,
HOST_TRANSITION_DETECT = 0x75,
HOST_GPIO_CTRL = 0x76,
HOST_I2C_ADDR = 0x78, /* 32 bit */
HOST_I2C_DATA = 0x7c,
HOST_I2C_XFER_CNT = 0x7e,
HOST_I2C_CTRL = 0x7f,
/* HOST_SLOT_STAT bits */
HOST_SSTAT_ATTN = (1 << 31),
/*
* Port registers
* (8192 bytes @ +0x0000, +0x2000, +0x4000 and +0x6000 @ BAR2)
*/
PORT_REGS_SIZE = 0x2000,
PORT_PRB = 0x0000, /* (32 bytes PRB + 16 bytes SGEs * 6) * 31 (3968 bytes) */
PORT_PM = 0x0f80, /* 8 bytes PM * 16 (128 bytes) */
/* 32 bit regs */
PORT_CTRL_STAT = 0x1000, /* write: ctrl-set, read: stat */
PORT_CTRL_CLR = 0x1004, /* write: ctrl-clear */
PORT_IRQ_STAT = 0x1008, /* high: status, low: interrupt */
PORT_IRQ_ENABLE_SET = 0x1010, /* write: enable-set */
PORT_IRQ_ENABLE_CLR = 0x1014, /* write: enable-clear */
PORT_ACTIVATE_UPPER_ADDR= 0x101c,
PORT_EXEC_FIFO = 0x1020, /* command execution fifo */
PORT_CMD_ERR = 0x1024, /* command error number */
PORT_FIS_CFG = 0x1028,
PORT_FIFO_THRES = 0x102c,
/* 16 bit regs */
PORT_DECODE_ERR_CNT = 0x1040,
PORT_DECODE_ERR_THRESH = 0x1042,
PORT_CRC_ERR_CNT = 0x1044,
PORT_CRC_ERR_THRESH = 0x1046,
PORT_HSHK_ERR_CNT = 0x1048,
PORT_HSHK_ERR_THRESH = 0x104a,
/* 32 bit regs */
PORT_PHY_CFG = 0x1050,
PORT_SLOT_STAT = 0x1800,
PORT_CMD_ACTIVATE = 0x1c00, /* 64 bit cmd activate * 31 (248 bytes) */
PORT_EXEC_DIAG = 0x1e00, /* 32bit exec diag * 16 (64 bytes, 0-10 used on 3124) */
PORT_PSD_DIAG = 0x1e40, /* 32bit psd diag * 16 (64 bytes, 0-8 used on 3124) */
PORT_SCONTROL = 0x1f00,
PORT_SSTATUS = 0x1f04,
PORT_SERROR = 0x1f08,
PORT_SACTIVE = 0x1f0c,
/* PORT_CTRL_STAT bits */
PORT_CS_PORT_RST = (1 << 0), /* port reset */
PORT_CS_DEV_RST = (1 << 1), /* device reset */
PORT_CS_INIT = (1 << 2), /* port initialize */
PORT_CS_IRQ_WOC = (1 << 3), /* interrupt write one to clear */
PORT_CS_RESUME = (1 << 6), /* port resume */
PORT_CS_32BIT_ACTV = (1 << 10), /* 32-bit activation */
PORT_CS_PM_EN = (1 << 13), /* port multiplier enable */
PORT_CS_RDY = (1 << 31), /* port ready to accept commands */
/* PORT_IRQ_STAT/ENABLE_SET/CLR */
/* bits[11:0] are masked */
PORT_IRQ_COMPLETE = (1 << 0), /* command(s) completed */
PORT_IRQ_ERROR = (1 << 1), /* command execution error */
PORT_IRQ_PORTRDY_CHG = (1 << 2), /* port ready change */
PORT_IRQ_PWR_CHG = (1 << 3), /* power management change */
PORT_IRQ_PHYRDY_CHG = (1 << 4), /* PHY ready change */
PORT_IRQ_COMWAKE = (1 << 5), /* COMWAKE received */
PORT_IRQ_UNK_FIS = (1 << 6), /* Unknown FIS received */
PORT_IRQ_SDB_FIS = (1 << 11), /* SDB FIS received */
/* bits[27:16] are unmasked (raw) */
PORT_IRQ_RAW_SHIFT = 16,
PORT_IRQ_MASKED_MASK = 0x7ff,
PORT_IRQ_RAW_MASK = (0x7ff << PORT_IRQ_RAW_SHIFT),
/* ENABLE_SET/CLR specific, intr steering - 2 bit field */
PORT_IRQ_STEER_SHIFT = 30,
PORT_IRQ_STEER_MASK = (3 << PORT_IRQ_STEER_SHIFT),
/* PORT_CMD_ERR constants */
PORT_CERR_DEV = 1, /* Error bit in D2H Register FIS */
PORT_CERR_SDB = 2, /* Error bit in SDB FIS */
PORT_CERR_DATA = 3, /* Error in data FIS not detected by dev */
PORT_CERR_SEND = 4, /* Initial cmd FIS transmission failure */
PORT_CERR_INCONSISTENT = 5, /* Protocol mismatch */
PORT_CERR_DIRECTION = 6, /* Data direction mismatch */
PORT_CERR_UNDERRUN = 7, /* Ran out of SGEs while writing */
PORT_CERR_OVERRUN = 8, /* Ran out of SGEs while reading */
PORT_CERR_PKT_PROT = 11, /* DIR invalid in 1st PIO setup of ATAPI */
PORT_CERR_SGT_BOUNDARY = 16, /* PLD ecode 00 - SGT not on qword boundary */
PORT_CERR_SGT_TGTABRT = 17, /* PLD ecode 01 - target abort */
PORT_CERR_SGT_MSTABRT = 18, /* PLD ecode 10 - master abort */
PORT_CERR_SGT_PCIPERR = 19, /* PLD ecode 11 - PCI parity err while fetching SGT */
PORT_CERR_CMD_BOUNDARY = 24, /* ctrl[15:13] 001 - PRB not on qword boundary */
PORT_CERR_CMD_TGTABRT = 25, /* ctrl[15:13] 010 - target abort */
PORT_CERR_CMD_MSTABRT = 26, /* ctrl[15:13] 100 - master abort */
PORT_CERR_CMD_PCIPERR = 27, /* ctrl[15:13] 110 - PCI parity err while fetching PRB */
PORT_CERR_XFR_UNDEF = 32, /* PSD ecode 00 - undefined */
PORT_CERR_XFR_TGTABRT = 33, /* PSD ecode 01 - target abort */
PORT_CERR_XFR_MSGABRT = 34, /* PSD ecode 10 - master abort */
	PORT_CERR_XFR_PCIPERR	= 35, /* PSD ecode 11 - PCI parity err during transfer */
PORT_CERR_SENDSERVICE = 36, /* FIS received while sending service */
/*
* Other constants
*/
SGE_TRM = (1 << 31), /* Last SGE in chain */
PRB_SOFT_RST = (1 << 7), /* Soft reset request (ign BSY?) */
/* board id */
BID_SIL3124 = 0,
BID_SIL3132 = 1,
BID_SIL3131 = 2,
IRQ_STAT_4PORTS = 0xf,
};
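A stand-alone check of the PORT_IRQ field split defined above: the low bits carry the maskable events, and a raw copy of the same 0x7ff-wide field sits PORT_IRQ_RAW_SHIFT bits higher.

#include <assert.h>
#include <stdint.h>

#define IRQ_RAW_SHIFT	16
#define IRQ_MASKED_MASK	0x7ff
#define IRQ_RAW_MASK	(0x7ff << IRQ_RAW_SHIFT)

int main(void)
{
	uint32_t stat = 0x00530021u;	/* made-up register value */

	assert((stat & IRQ_MASKED_MASK) == 0x021);		   /* masked half */
	assert(((stat & IRQ_RAW_MASK) >> IRQ_RAW_SHIFT) == 0x053); /* raw half */
	return 0;
}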
struct sil24_cmd_block {
struct sil24_prb prb;
struct sil24_sge sge[LIBATA_MAX_PRD];
};
/*
* ap->private_data
*
* The preview driver always returned 0 for status. We emulate it
* here from the previous interrupt.
*/
struct sil24_port_priv {
struct sil24_cmd_block *cmd_block; /* 32 cmd blocks */
dma_addr_t cmd_block_dma; /* DMA base addr for them */
struct ata_taskfile tf; /* Cached taskfile registers */
};
/* ap->host_set->private_data */
struct sil24_host_priv {
void *host_base; /* global controller control (128 bytes @BAR0) */
void *port_base; /* port registers (4 * 8192 bytes @BAR2) */
};
static u8 sil24_check_status(struct ata_port *ap);
static u8 sil24_check_err(struct ata_port *ap);
static u32 sil24_scr_read(struct ata_port *ap, unsigned sc_reg);
static void sil24_scr_write(struct ata_port *ap, unsigned sc_reg, u32 val);
static void sil24_tf_read(struct ata_port *ap, struct ata_taskfile *tf);
static void sil24_phy_reset(struct ata_port *ap);
static void sil24_qc_prep(struct ata_queued_cmd *qc);
static int sil24_qc_issue(struct ata_queued_cmd *qc);
static void sil24_irq_clear(struct ata_port *ap);
static void sil24_eng_timeout(struct ata_port *ap);
static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs);
static int sil24_port_start(struct ata_port *ap);
static void sil24_port_stop(struct ata_port *ap);
static void sil24_host_stop(struct ata_host_set *host_set);
static int sil24_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static struct pci_device_id sil24_pci_tbl[] = {
{ 0x1095, 0x3124, PCI_ANY_ID, PCI_ANY_ID, 0, 0, BID_SIL3124 },
{ 0x1095, 0x3132, PCI_ANY_ID, PCI_ANY_ID, 0, 0, BID_SIL3132 },
{ 0x1095, 0x3131, PCI_ANY_ID, PCI_ANY_ID, 0, 0, BID_SIL3131 },
{ 0x1095, 0x3531, PCI_ANY_ID, PCI_ANY_ID, 0, 0, BID_SIL3131 },
{ } /* terminate list */
};
static struct pci_driver sil24_pci_driver = {
.name = DRV_NAME,
.id_table = sil24_pci_tbl,
.probe = sil24_init_one,
.remove = ata_pci_remove_one, /* safe? */
};
static Scsi_Host_Template sil24_sht = {
.module = THIS_MODULE,
.name = DRV_NAME,
.ioctl = ata_scsi_ioctl,
.queuecommand = ata_scsi_queuecmd,
.eh_strategy_handler = ata_scsi_error,
.can_queue = ATA_DEF_QUEUE,
.this_id = ATA_SHT_THIS_ID,
.sg_tablesize = LIBATA_MAX_PRD,
.max_sectors = ATA_MAX_SECTORS,
.cmd_per_lun = ATA_SHT_CMD_PER_LUN,
.emulated = ATA_SHT_EMULATED,
.use_clustering = ATA_SHT_USE_CLUSTERING,
.proc_name = DRV_NAME,
.dma_boundary = ATA_DMA_BOUNDARY,
.slave_configure = ata_scsi_slave_config,
.bios_param = ata_std_bios_param,
.ordered_flush = 1, /* NCQ not supported yet */
};
static const struct ata_port_operations sil24_ops = {
.port_disable = ata_port_disable,
.check_status = sil24_check_status,
.check_altstatus = sil24_check_status,
.check_err = sil24_check_err,
.dev_select = ata_noop_dev_select,
.tf_read = sil24_tf_read,
.phy_reset = sil24_phy_reset,
.qc_prep = sil24_qc_prep,
.qc_issue = sil24_qc_issue,
.eng_timeout = sil24_eng_timeout,
.irq_handler = sil24_interrupt,
.irq_clear = sil24_irq_clear,
.scr_read = sil24_scr_read,
.scr_write = sil24_scr_write,
.port_start = sil24_port_start,
.port_stop = sil24_port_stop,
.host_stop = sil24_host_stop,
};
/*
* Use bits 30-31 of host_flags to encode available port numbers.
 * Current maximum is 4.
*/
#define SIL24_NPORTS2FLAG(nports) ((((unsigned)(nports) - 1) & 0x3) << 30)
#define SIL24_FLAG2NPORTS(flag) ((((flag) >> 30) & 0x3) + 1)
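A stand-alone round-trip check of the two macros above: encoding 1-4 ports into the top two bits and decoding them back is lossless.

#include <assert.h>

#define NPORTS2FLAG(nports) ((((unsigned)(nports) - 1) & 0x3) << 30)
#define FLAG2NPORTS(flag)   ((((flag) >> 30) & 0x3) + 1)

int main(void)
{
	unsigned n;

	for (n = 1; n <= 4; n++)	/* round-trips for 1..4 ports */
		assert(FLAG2NPORTS(NPORTS2FLAG(n)) == n);
	return 0;
}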
static struct ata_port_info sil24_port_info[] = {
/* sil_3124 */
{
.sht = &sil24_sht,
.host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO |
ATA_FLAG_PIO_DMA | SIL24_NPORTS2FLAG(4),
.pio_mask = 0x1f, /* pio0-4 */
.mwdma_mask = 0x07, /* mwdma0-2 */
.udma_mask = 0x3f, /* udma0-5 */
.port_ops = &sil24_ops,
},
/* sil_3132 */
{
.sht = &sil24_sht,
.host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO |
ATA_FLAG_PIO_DMA | SIL24_NPORTS2FLAG(2),
.pio_mask = 0x1f, /* pio0-4 */
.mwdma_mask = 0x07, /* mwdma0-2 */
.udma_mask = 0x3f, /* udma0-5 */
.port_ops = &sil24_ops,
},
/* sil_3131/sil_3531 */
{
.sht = &sil24_sht,
.host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO |
ATA_FLAG_PIO_DMA | SIL24_NPORTS2FLAG(1),
.pio_mask = 0x1f, /* pio0-4 */
.mwdma_mask = 0x07, /* mwdma0-2 */
.udma_mask = 0x3f, /* udma0-5 */
.port_ops = &sil24_ops,
},
};
static inline void sil24_update_tf(struct ata_port *ap)
{
struct sil24_port_priv *pp = ap->private_data;
void *port = (void *)ap->ioaddr.cmd_addr;
struct sil24_prb *prb = port;
ata_tf_from_fis(prb->fis, &pp->tf);
}
static u8 sil24_check_status(struct ata_port *ap)
{
struct sil24_port_priv *pp = ap->private_data;
return pp->tf.command;
}
static u8 sil24_check_err(struct ata_port *ap)
{
struct sil24_port_priv *pp = ap->private_data;
return pp->tf.feature;
}
static int sil24_scr_map[] = {
[SCR_CONTROL] = 0,
[SCR_STATUS] = 1,
[SCR_ERROR] = 2,
[SCR_ACTIVE] = 3,
};
static u32 sil24_scr_read(struct ata_port *ap, unsigned sc_reg)
{
	void *scr_addr = (void *)ap->ioaddr.scr_addr;
	if (sc_reg < ARRAY_SIZE(sil24_scr_map)) {
		void *addr = scr_addr + sil24_scr_map[sc_reg] * 4;
		return readl(addr);
	}
	return 0xffffffffU;
}
static void sil24_scr_write(struct ata_port *ap, unsigned sc_reg, u32 val)
{
void *scr_addr = (void *)ap->ioaddr.scr_addr;
if (sc_reg < ARRAY_SIZE(sil24_scr_map)) {
void *addr;
addr = scr_addr + sil24_scr_map[sc_reg] * 4;
		writel(val, addr);
}
}
static void sil24_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
{
struct sil24_port_priv *pp = ap->private_data;
*tf = pp->tf;
}
static void sil24_phy_reset(struct ata_port *ap)
{
__sata_phy_reset(ap);
/*
* No ATAPI yet. Just unconditionally indicate ATA device.
* If ATAPI device is attached, it will fail ATA_CMD_ID_ATA
* and libata core will ignore the device.
*/
if (!(ap->flags & ATA_FLAG_PORT_DISABLED))
ap->device[0].class = ATA_DEV_ATA;
}
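/*
 * Build the scatter/gather entry list in the command block; the last
 * SGE is flagged with SGE_TRM so the controller knows where the list
 * terminates.
 */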
static inline void sil24_fill_sg(struct ata_queued_cmd *qc,
struct sil24_cmd_block *cb)
{
struct scatterlist *sg = qc->sg;
struct sil24_sge *sge = cb->sge;
unsigned i;
for (i = 0; i < qc->n_elem; i++, sg++, sge++) {
sge->addr = cpu_to_le64(sg_dma_address(sg));
sge->cnt = cpu_to_le32(sg_dma_len(sg));
		sge->flags = i < qc->n_elem - 1 ? 0 : cpu_to_le32(SGE_TRM);
}
}
static void sil24_qc_prep(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
struct sil24_port_priv *pp = ap->private_data;
struct sil24_cmd_block *cb = pp->cmd_block + qc->tag;
struct sil24_prb *prb = &cb->prb;
switch (qc->tf.protocol) {
case ATA_PROT_PIO:
case ATA_PROT_DMA:
case ATA_PROT_NODATA:
break;
default:
/* ATAPI isn't supported yet */
BUG();
}
ata_tf_to_fis(&qc->tf, prb->fis, 0);
if (qc->flags & ATA_QCFLAG_DMAMAP)
sil24_fill_sg(qc, cb);
}
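/*
 * Issuing a command on sil24 is a single MMIO write: the bus address
 * of the prepared command block is written to PORT_CMD_ACTIVATE.
 */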
static int sil24_qc_issue(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
void *port = (void *)ap->ioaddr.cmd_addr;
struct sil24_port_priv *pp = ap->private_data;
dma_addr_t paddr = pp->cmd_block_dma + qc->tag * sizeof(*pp->cmd_block);
writel((u32)paddr, port + PORT_CMD_ACTIVATE);
return 0;
}
static void sil24_irq_clear(struct ata_port *ap)
{
/* unused */
}
static int __sil24_reset_controller(void *port)
{
int cnt;
u32 tmp;
/* Reset controller state. Is this correct? */
writel(PORT_CS_DEV_RST, port + PORT_CTRL_STAT);
readl(port + PORT_CTRL_STAT); /* sync */
/* Max ~100ms */
for (cnt = 0; cnt < 1000; cnt++) {
udelay(100);
tmp = readl(port + PORT_CTRL_STAT);
if (!(tmp & PORT_CS_DEV_RST))
break;
}
if (tmp & PORT_CS_DEV_RST)
return -1;
return 0;
}
static void sil24_reset_controller(struct ata_port *ap)
{
printk(KERN_NOTICE DRV_NAME
" ata%u: resetting controller...\n", ap->id);
if (__sil24_reset_controller((void *)ap->ioaddr.cmd_addr))
printk(KERN_ERR DRV_NAME
" ata%u: failed to reset controller\n", ap->id);
}
static void sil24_eng_timeout(struct ata_port *ap)
{
struct ata_queued_cmd *qc;
qc = ata_qc_from_tag(ap, ap->active_tag);
if (!qc) {
printk(KERN_ERR "ata%u: BUG: tiemout without command\n",
ap->id);
return;
}
/*
* hack alert! We cannot use the supplied completion
* function from inside the ->eh_strategy_handler() thread.
* libata is the only user of ->eh_strategy_handler() in
* any kernel, so the default scsi_done() assumes it is
* not being called from the SCSI EH.
*/
printk(KERN_ERR "ata%u: command timeout\n", ap->id);
qc->scsidone = scsi_finish_command;
ata_qc_complete(qc, ATA_ERR);
sil24_reset_controller(ap);
}
static void sil24_error_intr(struct ata_port *ap, u32 slot_stat)
{
struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag);
struct sil24_port_priv *pp = ap->private_data;
void *port = (void *)ap->ioaddr.cmd_addr;
u32 irq_stat, cmd_err, sstatus, serror;
irq_stat = readl(port + PORT_IRQ_STAT);
writel(irq_stat, port + PORT_IRQ_STAT); /* clear irq */
if (!(irq_stat & PORT_IRQ_ERROR)) {
/* ignore non-completion, non-error irqs for now */
printk(KERN_WARNING DRV_NAME
"ata%u: non-error exception irq (irq_stat %x)\n",
ap->id, irq_stat);
return;
}
cmd_err = readl(port + PORT_CMD_ERR);
sstatus = readl(port + PORT_SSTATUS);
serror = readl(port + PORT_SERROR);
if (serror)
writel(serror, port + PORT_SERROR);
printk(KERN_ERR DRV_NAME " ata%u: error interrupt on port%d\n"
" stat=0x%x irq=0x%x cmd_err=%d sstatus=0x%x serror=0x%x\n",
ap->id, ap->port_no, slot_stat, irq_stat, cmd_err, sstatus, serror);
if (cmd_err == PORT_CERR_DEV || cmd_err == PORT_CERR_SDB) {
/*
* Device is reporting error, tf registers are valid.
*/
sil24_update_tf(ap);
} else {
/*
* Other errors. libata currently doesn't have any
* mechanism to report these errors. Just turn on
* ATA_ERR.
*/
pp->tf.command = ATA_ERR;
}
if (qc)
ata_qc_complete(qc, pp->tf.command);
sil24_reset_controller(ap);
}
static inline void sil24_host_intr(struct ata_port *ap)
{
struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag);
void *port = (void *)ap->ioaddr.cmd_addr;
u32 slot_stat;
slot_stat = readl(port + PORT_SLOT_STAT);
if (!(slot_stat & HOST_SSTAT_ATTN)) {
struct sil24_port_priv *pp = ap->private_data;
		/*
		 * !HOST_SSTAT_ATTN guarantees successful completion,
		 * so reading back tf registers is unnecessary for
		 * most commands.  TODO: read tf registers for
		 * commands which require these values on successful
		 * completion (EXECUTE DEVICE DIAGNOSTIC, CHECK POWER,
		 * DEVICE RESET and READ PORT MULTIPLIER (any more?)).
		 */
sil24_update_tf(ap);
if (qc)
ata_qc_complete(qc, pp->tf.command);
} else
sil24_error_intr(ap, slot_stat);
}
static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
{
struct ata_host_set *host_set = dev_instance;
struct sil24_host_priv *hpriv = host_set->private_data;
unsigned handled = 0;
u32 status;
int i;
status = readl(hpriv->host_base + HOST_IRQ_STAT);
if (status == 0xffffffff) {
printk(KERN_ERR DRV_NAME ": IRQ status == 0xffffffff, "
"PCI fault or device removal?\n");
goto out;
}
if (!(status & IRQ_STAT_4PORTS))
goto out;
spin_lock(&host_set->lock);
for (i = 0; i < host_set->n_ports; i++)
if (status & (1 << i)) {
struct ata_port *ap = host_set->ports[i];
if (ap && !(ap->flags & ATA_FLAG_PORT_DISABLED)) {
				sil24_host_intr(ap);
handled++;
} else
printk(KERN_ERR DRV_NAME
": interrupt from disabled port %d\n", i);
}
spin_unlock(&host_set->lock);
out:
return IRQ_RETVAL(handled);
}
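/*
 * Allocate per-port private data and a DMA-coherent command block.
 * pp->tf.command is primed with ATA_DRDY so cached status reads look
 * sane before the first command completes.
 */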
static int sil24_port_start(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct sil24_port_priv *pp;
struct sil24_cmd_block *cb;
size_t cb_size = sizeof(*cb);
dma_addr_t cb_dma;
pp = kmalloc(sizeof(*pp), GFP_KERNEL);
if (!pp)
return -ENOMEM;
memset(pp, 0, sizeof(*pp));
pp->tf.command = ATA_DRDY;
cb = dma_alloc_coherent(dev, cb_size, &cb_dma, GFP_KERNEL);
if (!cb) {
kfree(pp);
return -ENOMEM;
}
memset(cb, 0, cb_size);
pp->cmd_block = cb;
pp->cmd_block_dma = cb_dma;
ap->private_data = pp;
return 0;
}
static void sil24_port_stop(struct ata_port *ap)
{
struct device *dev = ap->host_set->dev;
struct sil24_port_priv *pp = ap->private_data;
size_t cb_size = sizeof(*pp->cmd_block);
dma_free_coherent(dev, cb_size, pp->cmd_block, pp->cmd_block_dma);
kfree(pp);
}
static void sil24_host_stop(struct ata_host_set *host_set)
{
struct sil24_host_priv *hpriv = host_set->private_data;
iounmap(hpriv->host_base);
iounmap(hpriv->port_base);
kfree(hpriv);
}
static int sil24_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
static int printed_version = 0;
unsigned int board_id = (unsigned int)ent->driver_data;
struct ata_port_info *pinfo = &sil24_port_info[board_id];
struct ata_probe_ent *probe_ent = NULL;
struct sil24_host_priv *hpriv = NULL;
void *host_base = NULL, *port_base = NULL;
int i, rc;
if (!printed_version++)
printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
rc = pci_enable_device(pdev);
if (rc)
return rc;
rc = pci_request_regions(pdev, DRV_NAME);
if (rc)
goto out_disable;
rc = -ENOMEM;
/* ioremap mmio registers */
host_base = ioremap(pci_resource_start(pdev, 0),
pci_resource_len(pdev, 0));
if (!host_base)
goto out_free;
port_base = ioremap(pci_resource_start(pdev, 2),
pci_resource_len(pdev, 2));
if (!port_base)
goto out_free;
/* allocate & init probe_ent and hpriv */
probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
if (!probe_ent)
goto out_free;
hpriv = kmalloc(sizeof(*hpriv), GFP_KERNEL);
if (!hpriv)
goto out_free;
memset(probe_ent, 0, sizeof(*probe_ent));
probe_ent->dev = pci_dev_to_dev(pdev);
INIT_LIST_HEAD(&probe_ent->node);
probe_ent->sht = pinfo->sht;
probe_ent->host_flags = pinfo->host_flags;
probe_ent->pio_mask = pinfo->pio_mask;
probe_ent->udma_mask = pinfo->udma_mask;
probe_ent->port_ops = pinfo->port_ops;
probe_ent->n_ports = SIL24_FLAG2NPORTS(pinfo->host_flags);
probe_ent->irq = pdev->irq;
probe_ent->irq_flags = SA_SHIRQ;
probe_ent->mmio_base = port_base;
probe_ent->private_data = hpriv;
memset(hpriv, 0, sizeof(*hpriv));
hpriv->host_base = host_base;
hpriv->port_base = port_base;
/*
* Configure the device
*/
/*
* FIXME: This device is certainly 64-bit capable. We just
* don't know how to use it. After fixing 32bit activation in
* this function, enable 64bit masks here.
*/
rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
if (rc) {
printk(KERN_ERR DRV_NAME "(%s): 32-bit DMA enable failed\n",
pci_name(pdev));
goto out_free;
}
rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
if (rc) {
printk(KERN_ERR DRV_NAME "(%s): 32-bit consistent DMA enable failed\n",
pci_name(pdev));
goto out_free;
}
/* GPIO off */
writel(0, host_base + HOST_FLASH_CMD);
/* Mask interrupts during initialization */
writel(0, host_base + HOST_CTRL);
for (i = 0; i < probe_ent->n_ports; i++) {
void *port = port_base + i * PORT_REGS_SIZE;
unsigned long portu = (unsigned long)port;
u32 tmp;
int cnt;
probe_ent->port[i].cmd_addr = portu + PORT_PRB;
probe_ent->port[i].scr_addr = portu + PORT_SCONTROL;
ata_std_ports(&probe_ent->port[i]);
/* Initial PHY setting */
writel(0x20c, port + PORT_PHY_CFG);
/* Clear port RST */
tmp = readl(port + PORT_CTRL_STAT);
if (tmp & PORT_CS_PORT_RST) {
writel(PORT_CS_PORT_RST, port + PORT_CTRL_CLR);
readl(port + PORT_CTRL_STAT); /* sync */
for (cnt = 0; cnt < 10; cnt++) {
msleep(10);
tmp = readl(port + PORT_CTRL_STAT);
if (!(tmp & PORT_CS_PORT_RST))
break;
}
if (tmp & PORT_CS_PORT_RST)
printk(KERN_ERR DRV_NAME
"(%s): failed to clear port RST\n",
pci_name(pdev));
}
/* Zero error counters. */
writel(0x8000, port + PORT_DECODE_ERR_THRESH);
writel(0x8000, port + PORT_CRC_ERR_THRESH);
writel(0x8000, port + PORT_HSHK_ERR_THRESH);
writel(0x0000, port + PORT_DECODE_ERR_CNT);
writel(0x0000, port + PORT_CRC_ERR_CNT);
writel(0x0000, port + PORT_HSHK_ERR_CNT);
/* FIXME: 32bit activation? */
writel(0, port + PORT_ACTIVATE_UPPER_ADDR);
writel(PORT_CS_32BIT_ACTV, port + PORT_CTRL_STAT);
/* Configure interrupts */
writel(0xffff, port + PORT_IRQ_ENABLE_CLR);
writel(PORT_IRQ_COMPLETE | PORT_IRQ_ERROR | PORT_IRQ_SDB_FIS,
port + PORT_IRQ_ENABLE_SET);
/* Clear interrupts */
writel(0x0fff0fff, port + PORT_IRQ_STAT);
writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_CLR);
/* Clear port multiplier enable and resume bits */
writel(PORT_CS_PM_EN | PORT_CS_RESUME, port + PORT_CTRL_CLR);
/* Reset itself */
if (__sil24_reset_controller(port))
printk(KERN_ERR DRV_NAME
"(%s): failed to reset controller\n",
pci_name(pdev));
}
/* Turn on interrupts */
writel(IRQ_STAT_4PORTS, host_base + HOST_CTRL);
pci_set_master(pdev);
/* FIXME: check ata_device_add return value */
ata_device_add(probe_ent);
kfree(probe_ent);
return 0;
out_free:
if (host_base)
iounmap(host_base);
if (port_base)
iounmap(port_base);
kfree(probe_ent);
kfree(hpriv);
pci_release_regions(pdev);
out_disable:
pci_disable_device(pdev);
return rc;
}
static int __init sil24_init(void)
{
return pci_module_init(&sil24_pci_driver);
}
static void __exit sil24_exit(void)
{
pci_unregister_driver(&sil24_pci_driver);
}
MODULE_AUTHOR("Tejun Heo");
MODULE_DESCRIPTION("Silicon Image 3124/3132 SATA low-level driver");
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, sil24_pci_tbl);
module_init(sil24_init);
module_exit(sil24_exit);
...@@ -102,7 +102,7 @@ static Scsi_Host_Template sis_sht = {
 	.ordered_flush		= 1,
 };
 
-static struct ata_port_operations sis_ops = {
+static const struct ata_port_operations sis_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= ata_tf_load,
 	.tf_read		= ata_tf_read,
...@@ -263,7 +263,7 @@ static int sis_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out_regions;
 
 	ppi = &sis_port_info;
-	probe_ent = ata_pci_init_native_mode(pdev, &ppi);
+	probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
 	if (!probe_ent) {
 		rc = -ENOMEM;
 		goto err_out_regions;
...
...@@ -102,7 +102,7 @@ static void k2_sata_scr_write (struct ata_port *ap, unsigned int sc_reg,
 }
 
-static void k2_sata_tf_load(struct ata_port *ap, struct ata_taskfile *tf)
+static void k2_sata_tf_load(struct ata_port *ap, const struct ata_taskfile *tf)
 {
 	struct ata_ioports *ioaddr = &ap->ioaddr;
 	unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR;
...@@ -297,7 +297,7 @@ static Scsi_Host_Template k2_sata_sht = {
 };
 
-static struct ata_port_operations k2_sata_ops = {
+static const struct ata_port_operations k2_sata_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= k2_sata_tf_load,
 	.tf_read		= k2_sata_tf_read,
...
...@@ -137,7 +137,7 @@ struct pdc_port_priv {
 };
 
 struct pdc_host_priv {
-	void			*dimm_mmio;
+	void			__iomem *dimm_mmio;
 
 	unsigned int		doing_hdma;
 	unsigned int		hdma_prod;
...@@ -157,8 +157,8 @@ static void pdc_20621_phy_reset (struct ata_port *ap);
 static int pdc_port_start(struct ata_port *ap);
 static void pdc_port_stop(struct ata_port *ap);
 static void pdc20621_qc_prep(struct ata_queued_cmd *qc);
-static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf);
-static void pdc_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf);
+static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
+static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static void pdc20621_host_stop(struct ata_host_set *host_set);
 static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe);
 static int pdc20621_detect_dimm(struct ata_probe_ent *pe);
...@@ -196,7 +196,7 @@ static Scsi_Host_Template pdc_sata_sht = {
 	.ordered_flush		= 1,
 };
 
-static struct ata_port_operations pdc_20621_ops = {
+static const struct ata_port_operations pdc_20621_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= pdc_tf_load_mmio,
 	.tf_read		= ata_tf_read,
...@@ -247,7 +247,7 @@ static void pdc20621_host_stop(struct ata_host_set *host_set)
 {
 	struct pci_dev *pdev = to_pci_dev(host_set->dev);
 	struct pdc_host_priv *hpriv = host_set->private_data;
-	void *dimm_mmio = hpriv->dimm_mmio;
+	void __iomem *dimm_mmio = hpriv->dimm_mmio;
 
 	pci_iounmap(pdev, dimm_mmio);
 	kfree(hpriv);
...@@ -669,8 +669,8 @@ static void pdc20621_packet_start(struct ata_queued_cmd *qc)
 	readl(mmio + PDC_20621_SEQCTL + (seq * 4));	/* flush */
 
 	writel(port_ofs + PDC_DIMM_ATA_PKT,
-	       (void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
-	readl((void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+	       (void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+	readl((void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
 	VPRINTK("submitted ofs 0x%x (%u), seq %u\n",
 		port_ofs + PDC_DIMM_ATA_PKT,
 		port_ofs + PDC_DIMM_ATA_PKT,
...@@ -747,8 +747,8 @@ static inline unsigned int pdc20621_host_intr( struct ata_port *ap,
 			writel(0x00000001, mmio + PDC_20621_SEQCTL + (seq * 4));
 			readl(mmio + PDC_20621_SEQCTL + (seq * 4));
 			writel(port_ofs + PDC_DIMM_ATA_PKT,
-			       (void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
-			readl((void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+			       (void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+			readl((void __iomem *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
 		}
 
 		/* step two - execute ATA command */
...@@ -899,7 +899,7 @@ static void pdc_eng_timeout(struct ata_port *ap)
 	DPRINTK("EXIT\n");
 }
 
-static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
 {
 	WARN_ON (tf->protocol == ATA_PROT_DMA ||
 		 tf->protocol == ATA_PROT_NODATA);
...@@ -907,7 +907,7 @@ static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
 }
 
-static void pdc_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf)
 {
 	WARN_ON (tf->protocol == ATA_PROT_DMA ||
 		 tf->protocol == ATA_PROT_NODATA);
...@@ -1014,7 +1014,7 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource,
 		idx++;
 		dist = ((long)(s32)(window_size - (offset + size))) >= 0 ? size :
 			(long) (window_size - offset);
-		memcpy_toio((char *) (dimm_mmio + offset / 4), (char *) psource, dist);
+		memcpy_toio(dimm_mmio + offset / 4, psource, dist);
 
 		writel(0x01, mmio + PDC_GENERAL_CTLR);
 		readl(mmio + PDC_GENERAL_CTLR);
...@@ -1023,8 +1023,7 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource,
 	for (; (long) size >= (long) window_size ;) {
 		writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
 		readl(mmio + PDC_DIMM_WINDOW_CTLR);
-		memcpy_toio((char *) (dimm_mmio), (char *) psource,
-			    window_size / 4);
+		memcpy_toio(dimm_mmio, psource, window_size / 4);
 		writel(0x01, mmio + PDC_GENERAL_CTLR);
 		readl(mmio + PDC_GENERAL_CTLR);
 		psource += window_size;
...@@ -1035,7 +1034,7 @@ static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource,
 	if (size) {
 		writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
 		readl(mmio + PDC_DIMM_WINDOW_CTLR);
-		memcpy_toio((char *) (dimm_mmio), (char *) psource, size / 4);
+		memcpy_toio(dimm_mmio, psource, size / 4);
 		writel(0x01, mmio + PDC_GENERAL_CTLR);
 		readl(mmio + PDC_GENERAL_CTLR);
 	}
...
...@@ -90,7 +90,7 @@ static Scsi_Host_Template uli_sht = {
 	.ordered_flush		= 1,
 };
 
-static struct ata_port_operations uli_ops = {
+static const struct ata_port_operations uli_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= ata_tf_load,
...@@ -202,7 +202,7 @@ static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
 		goto err_out_regions;
 
 	ppi = &uli_port_info;
-	probe_ent = ata_pci_init_native_mode(pdev, &ppi);
+	probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
 	if (!probe_ent) {
 		rc = -ENOMEM;
 		goto err_out_regions;
...
...@@ -109,7 +109,7 @@ static Scsi_Host_Template svia_sht = {
 	.ordered_flush		= 1,
 };
 
-static struct ata_port_operations svia_sata_ops = {
+static const struct ata_port_operations svia_sata_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= ata_tf_load,
...@@ -212,7 +212,7 @@ static struct ata_probe_ent *vt6420_init_probe_ent(struct pci_dev *pdev)
 	struct ata_probe_ent *probe_ent;
 	struct ata_port_info *ppi = &svia_port_info;
 
-	probe_ent = ata_pci_init_native_mode(pdev, &ppi);
+	probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
 	if (!probe_ent)
 		return NULL;
...
...@@ -86,7 +86,7 @@ static u32 vsc_sata_scr_read (struct ata_port *ap, unsigned int sc_reg)
 {
 	if (sc_reg > SCR_CONTROL)
 		return 0xffffffffU;
-	return readl((void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+	return readl((void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
 }
...@@ -95,16 +95,16 @@ static void vsc_sata_scr_write (struct ata_port *ap, unsigned int sc_reg,
 {
 	if (sc_reg > SCR_CONTROL)
 		return;
-	writel(val, (void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+	writel(val, (void __iomem *) ap->ioaddr.scr_addr + (sc_reg * 4));
 }
 
 static void vsc_intr_mask_update(struct ata_port *ap, u8 ctl)
 {
-	unsigned long mask_addr;
+	void __iomem *mask_addr;
 	u8 mask;
 
-	mask_addr = (unsigned long) ap->host_set->mmio_base +
+	mask_addr = ap->host_set->mmio_base +
 		VSC_SATA_INT_MASK_OFFSET + ap->port_no;
 	mask = readb(mask_addr);
 	if (ctl & ATA_NIEN)
...@@ -115,7 +115,7 @@ static void vsc_intr_mask_update(struct ata_port *ap, u8 ctl)
 }
 
-static void vsc_sata_tf_load(struct ata_port *ap, struct ata_taskfile *tf)
+static void vsc_sata_tf_load(struct ata_port *ap, const struct ata_taskfile *tf)
 {
 	struct ata_ioports *ioaddr = &ap->ioaddr;
 	unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR;
...@@ -231,7 +231,7 @@ static Scsi_Host_Template vsc_sata_sht = {
 };
 
-static struct ata_port_operations vsc_sata_ops = {
+static const struct ata_port_operations vsc_sata_ops = {
 	.port_disable		= ata_port_disable,
 	.tf_load		= vsc_sata_tf_load,
 	.tf_read		= vsc_sata_tf_read,
...@@ -283,7 +283,7 @@ static int __devinit vsc_sata_init_one (struct pci_dev *pdev, const struct pci_d
 	struct ata_probe_ent *probe_ent = NULL;
 	unsigned long base;
 	int pci_dev_busy = 0;
-	void *mmio_base;
+	void __iomem *mmio_base;
 	int rc;
 
 	if (!printed_version++)
...
...@@ -42,13 +42,18 @@ enum {
 	ATA_SECT_SIZE		= 512,
 
 	ATA_ID_WORDS		= 256,
-	ATA_ID_PROD_OFS		= 27,
-	ATA_ID_FW_REV_OFS	= 23,
 	ATA_ID_SERNO_OFS	= 10,
-	ATA_ID_MAJOR_VER	= 80,
-	ATA_ID_PIO_MODES	= 64,
+	ATA_ID_FW_REV_OFS	= 23,
+	ATA_ID_PROD_OFS		= 27,
+	ATA_ID_OLD_PIO_MODES	= 51,
+	ATA_ID_FIELD_VALID	= 53,
 	ATA_ID_MWDMA_MODES	= 63,
+	ATA_ID_PIO_MODES	= 64,
+	ATA_ID_EIDE_DMA_MIN	= 65,
+	ATA_ID_EIDE_PIO		= 67,
+	ATA_ID_EIDE_PIO_IORDY	= 68,
 	ATA_ID_UDMA_MODES	= 88,
+	ATA_ID_MAJOR_VER	= 80,
 	ATA_ID_PIO4		= (1 << 1),
 
 	ATA_PCI_CTL_OFS		= 2,
...@@ -128,10 +133,15 @@ enum {
 	ATA_CMD_PIO_READ_EXT	= 0x24,
 	ATA_CMD_PIO_WRITE	= 0x30,
 	ATA_CMD_PIO_WRITE_EXT	= 0x34,
+	ATA_CMD_READ_MULTI	= 0xC4,
+	ATA_CMD_READ_MULTI_EXT	= 0x29,
+	ATA_CMD_WRITE_MULTI	= 0xC5,
+	ATA_CMD_WRITE_MULTI_EXT	= 0x39,
 	ATA_CMD_SET_FEATURES	= 0xEF,
 	ATA_CMD_PACKET		= 0xA0,
 	ATA_CMD_VERIFY		= 0x40,
 	ATA_CMD_VERIFY_EXT	= 0x42,
+	ATA_CMD_INIT_DEV_PARAMS	= 0x91,
 
 	/* SETFEATURES stuff */
 	SETFEATURES_XFER	= 0x03,
...@@ -146,14 +156,14 @@ enum {
 	XFER_MW_DMA_2		= 0x22,
 	XFER_MW_DMA_1		= 0x21,
 	XFER_MW_DMA_0		= 0x20,
-	XFER_SW_DMA_2		= 0x12,
-	XFER_SW_DMA_1		= 0x11,
-	XFER_SW_DMA_0		= 0x10,
 	XFER_PIO_4		= 0x0C,
 	XFER_PIO_3		= 0x0B,
 	XFER_PIO_2		= 0x0A,
 	XFER_PIO_1		= 0x09,
 	XFER_PIO_0		= 0x08,
+	XFER_SW_DMA_2		= 0x12,
+	XFER_SW_DMA_1		= 0x11,
+	XFER_SW_DMA_0		= 0x10,
 	XFER_PIO_SLOW		= 0x00,
 
 	/* ATAPI stuff */
...@@ -181,6 +191,7 @@ enum {
 	ATA_TFLAG_ISADDR	= (1 << 1), /* enable r/w to nsect/lba regs */
 	ATA_TFLAG_DEVICE	= (1 << 2), /* enable r/w to device reg */
 	ATA_TFLAG_WRITE		= (1 << 3), /* data dir: host->dev==1 (write) */
+	ATA_TFLAG_LBA		= (1 << 4), /* enable LBA */
 };
 
 enum ata_tf_protocols {
...@@ -250,7 +261,19 @@ struct ata_taskfile {
 	  ((u64) (id)[(n) + 1] << 16) |	\
 	  ((u64) (id)[(n) + 0]) )
 
-static inline int atapi_cdb_len(u16 *dev_id)
+static inline int ata_id_current_chs_valid(const u16 *id)
+{
+	/* For ATA-1 devices, if the INITIALIZE DEVICE PARAMETERS command
+	   has not been issued to the device then the values of
+	   id[54] to id[56] are vendor specific. */
+	return (id[53] & 0x01) &&	/* Current translation valid */
+		id[54] &&		/* cylinders in current translation */
+		id[55] &&		/* heads in current translation */
+		id[55] <= 16 &&
+		id[56];			/* sectors in current translation */
+}
+
+static inline int atapi_cdb_len(const u16 *dev_id)
 {
 	u16 tmp = dev_id[0] & 0x3;
 	switch (tmp) {
...@@ -260,7 +283,7 @@ static inline int atapi_cdb_len(u16 *dev_id)
 	}
 }
 
-static inline int is_atapi_taskfile(struct ata_taskfile *tf)
+static inline int is_atapi_taskfile(const struct ata_taskfile *tf)
 {
 	return (tf->protocol == ATA_PROT_ATAPI) ||
 	       (tf->protocol == ATA_PROT_ATAPI_NODATA) ||
...
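/*
 * Illustrative sketch (not part of the commit): one plausible use of
 * ata_id_current_chs_valid() when sizing a drive from its IDENTIFY
 * data.  `id' is the 256-word IDENTIFY buffer; words 54-56 hold the
 * current cylinders/heads/sectors, per the comment in the helper
 * above.  The function name here is hypothetical.
 */
static inline u64 ata_id_chs_sectors_sketch(const u16 *id)
{
	if (!ata_id_current_chs_valid(id))
		return 0;
	return (u64)id[54] * id[55] * id[56];	/* cyls * heads * sectors */
}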
...@@ -91,12 +91,13 @@ enum {
 	ATA_SHT_EMULATED	= 1,
 	ATA_SHT_CMD_PER_LUN	= 1,
 	ATA_SHT_THIS_ID		= -1,
-	ATA_SHT_USE_CLUSTERING	= 0,
+	ATA_SHT_USE_CLUSTERING	= 1,
 
 	/* struct ata_device stuff */
 	ATA_DFLAG_LBA48		= (1 << 0), /* device supports LBA48 */
 	ATA_DFLAG_PIO		= (1 << 1), /* device currently in PIO mode */
 	ATA_DFLAG_LOCK_SECTORS	= (1 << 2), /* don't adjust max_sectors */
+	ATA_DFLAG_LBA		= (1 << 3), /* device supports LBA */
 
 	ATA_DEV_UNKNOWN		= 0,	/* unknown device */
 	ATA_DEV_ATA		= 1,	/* ATA device */
...@@ -154,17 +155,21 @@ enum {
 	ATA_SHIFT_UDMA		= 0,
 	ATA_SHIFT_MWDMA		= 8,
 	ATA_SHIFT_PIO		= 11,
+
+	/* Masks for port functions */
+	ATA_PORT_PRIMARY	= (1 << 0),
+	ATA_PORT_SECONDARY	= (1 << 1),
 };
 
-enum pio_task_states {
-	PIO_ST_UNKNOWN,
-	PIO_ST_IDLE,
-	PIO_ST_POLL,
-	PIO_ST_TMOUT,
-	PIO_ST,
-	PIO_ST_LAST,
-	PIO_ST_LAST_POLL,
-	PIO_ST_ERR,
+enum hsm_task_states {
+	HSM_ST_UNKNOWN,
+	HSM_ST_IDLE,
+	HSM_ST_POLL,
+	HSM_ST_TMOUT,
+	HSM_ST,
+	HSM_ST_LAST,
+	HSM_ST_LAST_POLL,
+	HSM_ST_ERR,
 };
 
 /* forward declarations */
...@@ -197,7 +202,7 @@ struct ata_ioports {
 struct ata_probe_ent {
 	struct list_head	node;
 	struct device		*dev;
-	struct ata_port_operations *port_ops;
+	const struct ata_port_operations *port_ops;
 	Scsi_Host_Template	*sht;
 	struct ata_ioports	port[ATA_MAX_PORTS];
 	unsigned int		n_ports;
...@@ -220,7 +225,7 @@ struct ata_host_set {
 	void __iomem		*mmio_base;
 	unsigned int		n_ports;
 	void			*private_data;
-	struct ata_port_operations *ops;
+	const struct ata_port_operations *ops;
 	struct ata_port *	ports[0];
 };
 
...@@ -278,15 +283,18 @@ struct ata_device {
 	u8			xfer_mode;
 	unsigned int		xfer_shift;	/* ATA_SHIFT_xxx */
 
-	/* cache info about current transfer mode */
-	u8			xfer_protocol;	/* taskfile xfer protocol */
-	u8			read_cmd;	/* opcode to use on read */
-	u8			write_cmd;	/* opcode to use on write */
+	unsigned int		multi_count;	/* sectors count for
+						   READ/WRITE MULTIPLE */
+
+	/* for CHS addressing */
+	u16			cylinders;	/* Number of cylinders */
+	u16			heads;		/* Number of heads */
+	u16			sectors;	/* Number of sectors per track */
 };
 
 struct ata_port {
 	struct Scsi_Host	*host;	/* our co-allocated scsi host */
-	struct ata_port_operations *ops;
+	const struct ata_port_operations *ops;
 	unsigned long		flags;	/* ATA_FLAG_xxx */
 	unsigned int		id;	/* unique id req'd by scsi midlyr */
 	unsigned int		port_no; /* unique port #; from zero */
...@@ -319,7 +327,7 @@ struct ata_port {
 	struct work_struct	packet_task;
 
 	struct work_struct	pio_task;
-	unsigned int		pio_task_state;
+	unsigned int		hsm_task_state;
 	unsigned long		pio_task_timeout;
 
 	void			*private_data;
...@@ -333,10 +341,10 @@ struct ata_port_operations {
 	void (*set_piomode) (struct ata_port *, struct ata_device *);
 	void (*set_dmamode) (struct ata_port *, struct ata_device *);
 
-	void (*tf_load) (struct ata_port *ap, struct ata_taskfile *tf);
+	void (*tf_load) (struct ata_port *ap, const struct ata_taskfile *tf);
 	void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf);
-	void (*exec_command)(struct ata_port *ap, struct ata_taskfile *tf);
+	void (*exec_command)(struct ata_port *ap, const struct ata_taskfile *tf);
 	u8   (*check_status)(struct ata_port *ap);
 	u8   (*check_altstatus)(struct ata_port *ap);
 	u8   (*check_err)(struct ata_port *ap);
...@@ -377,9 +385,22 @@ struct ata_port_info {
 	unsigned long		pio_mask;
 	unsigned long		mwdma_mask;
 	unsigned long		udma_mask;
-	struct ata_port_operations *port_ops;
+	const struct ata_port_operations *port_ops;
+};
+
+struct ata_timing {
+	unsigned short mode;		/* ATA mode */
+	unsigned short setup;		/* t1 */
+	unsigned short act8b;		/* t2 for 8-bit I/O */
+	unsigned short rec8b;		/* t2i for 8-bit I/O */
+	unsigned short cyc8b;		/* t0 for 8-bit I/O */
+	unsigned short active;		/* t2 or tD */
+	unsigned short recover;		/* t2i or tK */
+	unsigned short cycle;		/* t0 */
+	unsigned short udma;		/* t2CYCTYP/2 */
 };
 
+#define FIT(v,vmin,vmax)	max_t(short,min_t(short,v,vmax),vmin)
+
 extern void ata_port_probe(struct ata_port *);
 extern void __sata_phy_reset(struct ata_port *ap);
...@@ -392,7 +413,7 @@ extern int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_i
 			     unsigned int n_ports);
 extern void ata_pci_remove_one (struct pci_dev *pdev);
 #endif /* CONFIG_PCI */
-extern int ata_device_add(struct ata_probe_ent *ent);
+extern int ata_device_add(const struct ata_probe_ent *ent);
 extern void ata_host_set_remove(struct ata_host_set *host_set);
 extern int ata_scsi_detect(Scsi_Host_Template *sht);
 extern int ata_scsi_ioctl(struct scsi_device *dev, int cmd, void __user *arg);
...@@ -400,19 +421,21 @@ extern int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmn
 extern int ata_scsi_error(struct Scsi_Host *host);
 extern int ata_scsi_release(struct Scsi_Host *host);
 extern unsigned int ata_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc);
+extern int ata_ratelimit(void);
+
 /*
  * Default driver ops implementations
  */
-extern void ata_tf_load(struct ata_port *ap, struct ata_taskfile *tf);
+extern void ata_tf_load(struct ata_port *ap, const struct ata_taskfile *tf);
 extern void ata_tf_read(struct ata_port *ap, struct ata_taskfile *tf);
-extern void ata_tf_to_fis(struct ata_taskfile *tf, u8 *fis, u8 pmp);
-extern void ata_tf_from_fis(u8 *fis, struct ata_taskfile *tf);
+extern void ata_tf_to_fis(const struct ata_taskfile *tf, u8 *fis, u8 pmp);
+extern void ata_tf_from_fis(const u8 *fis, struct ata_taskfile *tf);
 extern void ata_noop_dev_select (struct ata_port *ap, unsigned int device);
 extern void ata_std_dev_select (struct ata_port *ap, unsigned int device);
 extern u8 ata_check_status(struct ata_port *ap);
 extern u8 ata_altstatus(struct ata_port *ap);
 extern u8 ata_chk_err(struct ata_port *ap);
-extern void ata_exec_command(struct ata_port *ap, struct ata_taskfile *tf);
+extern void ata_exec_command(struct ata_port *ap, const struct ata_taskfile *tf);
 extern int ata_port_start (struct ata_port *ap);
 extern void ata_port_stop (struct ata_port *ap);
 extern void ata_host_stop (struct ata_host_set *host_set);
...@@ -423,8 +446,8 @@ extern void ata_sg_init_one(struct ata_queued_cmd *qc, void *buf,
 			    unsigned int buflen);
 extern void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg,
 			unsigned int n_elem);
-extern unsigned int ata_dev_classify(struct ata_taskfile *tf);
-extern void ata_dev_id_string(u16 *id, unsigned char *s,
+extern unsigned int ata_dev_classify(const struct ata_taskfile *tf);
+extern void ata_dev_id_string(const u16 *id, unsigned char *s,
 			      unsigned int ofs, unsigned int len);
 extern void ata_dev_config(struct ata_port *ap, unsigned int i);
 extern void ata_bmdma_setup (struct ata_queued_cmd *qc);
...@@ -441,6 +464,32 @@ extern int ata_std_bios_param(struct scsi_device *sdev,
 			      sector_t capacity, int geom[]);
 extern int ata_scsi_slave_config(struct scsi_device *sdev);
 
+/*
+ * Timing helpers
+ */
+extern int ata_timing_compute(struct ata_device *, unsigned short,
+			      struct ata_timing *, int, int);
+extern void ata_timing_merge(const struct ata_timing *,
+			     const struct ata_timing *, struct ata_timing *,
+			     unsigned int);
+
+enum {
+	ATA_TIMING_SETUP	= (1 << 0),
+	ATA_TIMING_ACT8B	= (1 << 1),
+	ATA_TIMING_REC8B	= (1 << 2),
+	ATA_TIMING_CYC8B	= (1 << 3),
+	ATA_TIMING_8BIT		= ATA_TIMING_ACT8B | ATA_TIMING_REC8B |
+				  ATA_TIMING_CYC8B,
+	ATA_TIMING_ACTIVE	= (1 << 4),
+	ATA_TIMING_RECOVER	= (1 << 5),
+	ATA_TIMING_CYCLE	= (1 << 6),
+	ATA_TIMING_UDMA		= (1 << 7),
+	ATA_TIMING_ALL		= ATA_TIMING_SETUP | ATA_TIMING_ACT8B |
+				  ATA_TIMING_REC8B | ATA_TIMING_CYC8B |
+				  ATA_TIMING_ACTIVE | ATA_TIMING_RECOVER |
+				  ATA_TIMING_CYCLE | ATA_TIMING_UDMA,
+};
+
 #ifdef CONFIG_PCI
 struct pci_bits {
...@@ -452,8 +501,8 @@ struct pci_bits {
 extern void ata_pci_host_stop (struct ata_host_set *host_set);
 extern struct ata_probe_ent *
-ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port);
-extern int pci_test_config_bits(struct pci_dev *pdev, struct pci_bits *bits);
+ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port, int portmask);
+extern int pci_test_config_bits(struct pci_dev *pdev, const struct pci_bits *bits);
 #endif /* CONFIG_PCI */
...@@ -463,7 +512,7 @@ static inline unsigned int ata_tag_valid(unsigned int tag)
 	return (tag < ATA_MAX_QUEUE) ? 1 : 0;
 }
 
-static inline unsigned int ata_dev_present(struct ata_device *dev)
+static inline unsigned int ata_dev_present(const struct ata_device *dev)
 {
 	return ((dev->class == ATA_DEV_ATA) ||
 		(dev->class == ATA_DEV_ATAPI));
...@@ -662,7 +711,7 @@ static inline unsigned int sata_dev_present(struct ata_port *ap)
 	return ((scr_read(ap, SCR_STATUS) & 0xf) == 0x3) ? 1 : 0;
}
 
-static inline int ata_try_flush_cache(struct ata_device *dev)
+static inline int ata_try_flush_cache(const struct ata_device *dev)
 {
 	return ata_id_wcache_enabled(dev->id) ||
 	       ata_id_has_flush(dev->id) ||
...
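/*
 * Illustrative sketch (not part of the commit): combining the timing
 * helpers declared above.  The function name, `dev0'/`dev1', and the
 * bus clock periods T/UT are hypothetical; ATA_TIMING_ALL merges
 * every field to the slower of the two devices' computed timings.
 */
static void sketch_merge_channel_timings(struct ata_device *dev0,
					 struct ata_device *dev1,
					 int T, int UT,
					 struct ata_timing *out)
{
	struct ata_timing t0, t1;

	if (!ata_timing_compute(dev0, dev0->xfer_mode, &t0, T, UT) &&
	    !ata_timing_compute(dev1, dev1->xfer_mode, &t1, T, UT))
		ata_timing_merge(&t0, &t1, out, ATA_TIMING_ALL);
}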