- 03 Jul, 2011 40 commits
-
Dan Williams authored
Remove the distinction between these two implementations and unify on isci_port (local instances named iport). The duplicate '->owning_port' and '->isci_port' in both isci_phy and isci_remote_device will be fixed in a later patch... this is just the straightforward rename/unification. Reported-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Commit 0815632 "isci: unify remote_device stop_handlers" introduced the possibility that not all requests get terminated if we reach the request_count. Now that we properly reference count devices we don't need this self-defense and can do the straightforward scan of all active requests. Reported-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Acked-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
They are one and the same object, so remove the distinction. The near-duplicate fields (owning_port and isci_port) will be cleaned up after the scic_sds_port isci_port unification. Reported-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
They are one and the same object, so remove the distinction. The near-duplicate fields (owning_controller and isci_host) will be cleaned up after the scic_sds_controller isci_host unification. Reported-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
* Rename scic_sds_stp_request to isci_stp_request
* Remove the unused fields and union indirection
Reported-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
The dma_pool interface is optimized for object_size << page_size, which is not the case for isci_request objects, and the dma_pool routines show up at the top of the profile. The old io_request_table, which tracked whether tci slots were in flight, is replaced with a per-request IREQ_ACTIVE flag. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Combine three bools into one unsigned long 'flags'. This does not increase the request size due to packing (to do: optimize the structure layout). Signed-off-by: Dan Williams <dan.j.williams@intel.com>
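As a rough illustration of the change (IREQ_ACTIVE is the flag named in the allocation rework above; the other bit names below are hypothetical placeholders, not the driver's actual definitions), the three bools become bit positions in one word manipulated with the atomic bitops helpers:

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* Illustrative flag bits packed into one unsigned long */
    enum {
            IREQ_ACTIVE,            /* from the allocation rework above */
            IREQ_TERMINATED,        /* hypothetical name */
            IREQ_ABORTED,           /* hypothetical name */
    };

    struct isci_request_sketch {
            unsigned long flags;    /* replaces three separate bools */
    };

    static void mark_terminated(struct isci_request_sketch *ireq)
    {
            set_bit(IREQ_TERMINATED, &ireq->flags);   /* atomic per-bit update */
    }

    static bool is_terminated(struct isci_request_sketch *ireq)
    {
            return test_bit(IREQ_TERMINATED, &ireq->flags);
    }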
-
Dan Williams authored
The tci_pool tracks our outstanding command slots, which are also the 'index' portion of our tags. Grabbing the tag early in ->lldd_execute_task lets us drop the isci_host_can_queue() and ->was_tag_assigned_by_user infrastructure. ->was_tag_assigned_by_user required the task context to be duplicated in a request-local buffer. With the tci established early we can build the task_context directly into its final location and skip a memcpy. With the task context buffer at a known address at request construction we have the opportunity/obligation to also fix sgl handling. This rework feels like it belongs in another patch but the sgl handling and task_context are too intertwined.
1/ Fix the 'ab' pair embedded in the task context to point to the 'cd' pair in the task context (previously we were prematurely linking to the staging buffer).
2/ Fix the broken iteration of pio sgls that assumes all sgls are relative to the request and does a dangerous-looking reverse lookup of physical address to virtual address.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
When the remote device transitions to a not-ready state because of an NCQ error condition, all outstanding requests to that device are terminated and completed to libsas on the normal path. The device then waits for a READ LOG EXT command to issue on the task management path. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Updates to the frame_rcvd buffer need to be atomic with respect to when they are evaluated by libsas. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Maciej Patelczyk authored
scu_index is a parameter of isci_parse_eom_parameters and is an index into the controller table. The existing check, scu_index > SCI_MAX_CONTROLLERS, is insufficient and should be scu_index >= SCI_MAX_CONTROLLERS, since scu_index is used as an index into a table whose size is SCI_MAX_CONTROLLERS. Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
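In other words, a sketch of the corrected bounds check (the error value here is illustrative, not the function's actual convention):

    #include <linux/errno.h>

    static int check_scu_index(int scu_index)
    {
            /* valid indices are 0 .. SCI_MAX_CONTROLLERS - 1, so the
             * boundary value itself must be rejected */
            if (scu_index < 0 || scu_index >= SCI_MAX_CONTROLLERS)
                    return -EINVAL;
            return 0;
    }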
-
Dan Williams authored
1/ Fix the timeout for wait_for_completion_timeout
2/ In the tmf timeout case we need to wait for our termination callback
3/ Once the request is successfully started it will be freed according to the normal lifetime for requests.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Instead of duplicating the smp request buffer, reuse the one provided by libsas. This future-proofs the driver to support arbitrarily large smp requests and shrinks the request structure size by ~700 bytes. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
One bug and a cleanup:
1/ Fix cases where we were unmapping invalid addresses (smp requests were being unmapped):
[ 604.662770] ------------[ cut here ]------------
[ 604.668026] WARNING: at lib/dma-debug.c:800 check_unmap+0x418/0x740()
[ 604.675315] Hardware name: SandyBridge Platform
[ 604.680465] isci 0000:03:00.0: DMA-API: device driver tries to free an invalid DMA memory address
2/ The unmap routine is too large to be an inline function, and isci_request_io_request_get_next_sge is unused.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Due to a typo we currently copy way too much when copying over the response data, but since a request is likely backed by a full page allocation we don't corrupt live data. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Now that we have upleveled device reassignment protection to the isci_remote_device reference count we no longer need this level of self-defense. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Now that "stopping/stopped" are one in the same and signalled by a NULL device pointer the rest of the device status infrastructure can be removed (->status and ->state_lock). The "not ready for i/o state" is replaced with a state flag, and is evaluated under scic_lock so that we don't see transients from taking the device reference to submitting the i/o. This also fixes a potential leakage of can_queue slots in the rare case that SAS_TASK_ABORTED is set at submission. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
We have unsafe references to remote devices that are notified to disappear at lldd_dev_gone. In order to clean this up we need a single canonical source for device lookups and stable references once a lookup succeeds. Towards that end guarantee that domain_device.lldd_dev is NULL as soon as we start the process of stopping a device. Any code path that wants to safely lookup a remote device must do so through task->dev->lldd_dev (isci_lookup_device()). For in-flight references outside of scic_lock we need reference counting to ensure that the device is not recycled before we are done with it. Simplify device back references to just scic_sds_request.target_device which is now the only permissible internal reference that is maintained relative to the reference count. There were two occasions where we wanted new i/o's to be treated as SAS_TASK_UNDELIVERED but where the domain_dev->lldd_dev link is still intact. Introduce a 'gone' flag to prevent i/o while waiting for libsas to take action on the port down event. One 'core' leftover is that we currently call scic_remote_device_destruct() from isci_remote_device_deconstruct() which is called when the 'core' says the device is stopped. It would be more natural for the final put to trigger isci_remote_device_deconstruct() but this implementation is deferred as it requires other changes. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
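A hedged sketch of the lookup/reference pattern this describes; the kref member, the helper name, and the locking details below are assumptions for illustration rather than the driver's exact code:

    #include <linux/kref.h>
    #include <linux/spinlock.h>
    #include <scsi/libsas.h>

    /* Resolve dev->lldd_dev under scic_lock and take a reference before the
     * lock is dropped, so the device cannot be recycled underneath us. */
    static struct isci_remote_device *
    isci_lookup_device_sketch(struct isci_host *ihost, struct domain_device *dev)
    {
            struct isci_remote_device *idev = NULL;
            unsigned long flags;

            spin_lock_irqsave(&ihost->scic_lock, flags);
            if (dev->lldd_dev) {              /* NULL once device stop has begun */
                    idev = dev->lldd_dev;
                    kref_get(&idev->kref);    /* hypothetical kref member */
            }
            spin_unlock_irqrestore(&ihost->scic_lock, flags);

            return idev;                      /* caller drops the reference when done */
    }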
-
Dan Williams authored
In isci_task_request_complete() we save the response/sense data from the command. Make sure isci_tmf has enough space to hold the full response. [ it does not look like we actually use this data, and response_data_len/sense_data_len should be specifying the byte count, in any event do the simple fix first so we don't corrupt memory ] Reported-by: Adam Gruchala <adam.gruchala@intel.com> Tested-by: Edmund Nadolski <edmund.nadolski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Rather than return an error code and update a pointer that was passed by reference, just return the request object directly (or NULL if allocation failed). Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Every single i/o or event completion incurs a test and branch to see if the cycle bit changed. For power-of-2 queue sizes the cycle bit can be read directly from the rollover of the queue pointer. Likely premature optimization, but the hidden if() and hidden assignments / side-effects in the macros were already asking to be cleaned up. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
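A sketch of the power-of-two trick (queue size and names below are illustrative): keep the get pointer one bit wider than the index, so the bit just above the index acts as the cycle bit and flips automatically on rollover instead of being maintained with a test and branch:

    #define SKETCH_QUEUE_SIZE   512u                  /* must be a power of two */
    #define SKETCH_QUEUE_MASK   (SKETCH_QUEUE_SIZE - 1)
    #define SKETCH_CYCLE_BIT    SKETCH_QUEUE_SIZE     /* bit just above the index */

    static inline unsigned int queue_index(unsigned int get)
    {
            return get & SKETCH_QUEUE_MASK;           /* slot within the queue */
    }

    static inline unsigned int queue_cycle(unsigned int get)
    {
            return get & SKETCH_CYCLE_BIT;            /* cycle bit, read directly */
    }

    static inline unsigned int queue_advance(unsigned int get)
    {
            /* wraps the index and toggles the cycle bit, no if() required */
            return (get + 1) & (2 * SKETCH_QUEUE_SIZE - 1);
    }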
-
Dan Williams authored
A tag is a 16 bit number where the upper four bits are a sequence number and the remainder is the task context index (tci). Sanitize the macro names and shave 256 bytes out of scic_sds_controller by reducing the size of io_request_sequence.
scic_sds_io_tag_construct --> ISCI_TAG
scic_sds_io_tag_get_sequence --> ISCI_TAG_SEQ
scic_sds_io_tag_get_index() --> ISCI_TAG_TCI
scic_sds_io_sequence_increment() [delete / open code]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
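The layout described above, as a sketch (the exact mask constants in the driver may be expressed via SCI_MAX_* limits rather than literals):

    #include <linux/types.h>

    /* 16-bit tag: upper 4 bits are the sequence number, lower 12 bits the
     * task context index (tci). */
    #define ISCI_TAG(seq, tci)   ((u16)(((seq) << 12) | (tci)))
    #define ISCI_TAG_SEQ(tag)    (((tag) >> 12) & 0xf)
    #define ISCI_TAG_TCI(tag)    ((tag) & 0xfff)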
-
Dan Williams authored
The circ_buf macros are ~6% faster, as measured by perf, because they take advantage of power-of-two math assumptions i.e. no test and branch for rollover. Their semantics are clearer than the hidden side effects in pool.h (like sci_pool_get() which hides an assignment). Signed-off-by: Dan Williams <dan.j.williams@intel.com>
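For reference, the <linux/circ_buf.h> helpers rely on the ring size being a power of two, so wrap-around is a mask rather than a test and branch; a minimal usage sketch:

    #include <linux/circ_buf.h>

    /* 'size' must be a power of two for these macros to be correct. */
    static unsigned int ring_used(unsigned int head, unsigned int tail,
                                  unsigned int size)
    {
            return CIRC_CNT(head, tail, size);    /* entries ready to consume */
    }

    static unsigned int ring_space(unsigned int head, unsigned int tail,
                                   unsigned int size)
    {
            return CIRC_SPACE(head, tail, size);  /* free slots (one always kept empty) */
    }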
-
Jeff Skirvin authored
Some targets exceed the hang detect timer. Use the OS timeout to catch hung tasks. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
In the case where the hard reset process fails, each link in the port is put through a link reset sequence. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
The remote node context should only signal a device reset condition in a suspended state. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Walk through the list of pending requests, being careful to consider that multiple requests can be terminated while the lock is dropped (i.e. invalidating the 'next' reference established by list_for_each_entry_safe). Also noticed that all callers of isci_terminate_pending_requests() specify 'terminating', so just drop the parameter. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
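A hedged sketch of the resulting walk; the list/field names and the terminate helper are illustrative, but the shape is the classic restart-from-head pattern used when the lock protecting a list must be dropped mid-scan:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* Re-acquire the lock and restart from the head on every iteration,
     * since any cached 'next' pointer may be stale once the lock is dropped. */
    static void terminate_pending_sketch(struct isci_host *ihost,
                                         struct isci_remote_device *idev)
    {
            unsigned long flags;

            for (;;) {
                    struct isci_request *ireq = NULL;

                    spin_lock_irqsave(&ihost->scic_lock, flags);
                    if (!list_empty(&idev->reqs_in_process))   /* assumed list name */
                            ireq = list_first_entry(&idev->reqs_in_process,
                                                    struct isci_request, dev_node);
                    spin_unlock_irqrestore(&ihost->scic_lock, flags);

                    if (!ireq)
                            break;
                    terminate_request_sketch(ihost, idev, ireq);  /* hypothetical, may sleep */
            }
    }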
-
Jeff Skirvin authored
In the situation where the termination of an I/O times out, make sure that the linkage from the request to the task is severed completely. Also make sure that the selection of tasks to terminate occurs under scic_lock. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
Requests that fail at start because of a reset pending condition must be set to complete in order to allow for later cleanup. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
There are situations with slow expanders in which a first attempt to execute an SMP request will fail with a timeout. Immediate subsequent retries will generally succeed. This change makes sure SMP I/O failures are immediately failed to libsas so that retries happen with no discovery process timeout delay. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Jeff Skirvin authored
When resetting a sata device in the domain we have seen occasions where libsas prematurely marks a device gone in the time it takes for the device to re-establish the link. This plays badly with software raid arrays. Other libsas drivers have non-uniform delays in their reset handlers to try to cover this condition, but not sufficient to close the hole. Given that a sata device can take many seconds to recover we filter bcns and poll for the device reattach state before notifying libsas that the port needs the domain to be rediscovered. Once this has been proven out at the lldd level we can think about uplevelling this feature to a common implementation in libsas. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> [ use kzalloc instead of kmem_cache ] Signed-off-by: Dave Jiang <dave.jiang@intel.com> [ use eventq and time macros ] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
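A heavily hedged sketch of the bounded poll; the timeout, poll interval, and the reattach check are assumptions for illustration, not the driver's actual values:

    #include <linux/delay.h>
    #include <linux/jiffies.h>

    /* Wait (bounded) for the sata device to reattach after a reset before
     * telling libsas that the domain needs rediscovery. */
    static bool wait_for_reattach_sketch(struct isci_port *iport,
                                         unsigned long timeout_ms)
    {
            unsigned long deadline = jiffies + msecs_to_jiffies(timeout_ms);

            while (time_before(jiffies, deadline)) {
                    if (device_reattached(iport))   /* hypothetical check */
                            return true;            /* safe to notify libsas now */
                    msleep(100);                    /* assumed poll interval */
            }
            return false;                           /* device really is gone */
    }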
-
Jeff Skirvin authored
Delay after bringing up the RNC to allow for resumption latency. Signed-off-by: Jeff Skirvin <jeffrey.d.skirvin@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
The old 'core' had aspirations of running in severely memory-constrained environments like bios option-rom, but it's not needed for Linux and gets in the way of other cleanups (like unifying/reducing the number of structure members in scic_sds_controller/isci_host). This also fixes a theoretical bug in that the driver would blindly override the silicon advertised limits for the number of ports, task contexts, and remote node contexts. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Adam Gruchala authored
C0 silicon updates the pci revision id and requires new AFE parameters for phy signal integrity. Support for previous silicon revisions is deprecated (it's also broken for the theoretical case of multiple controllers at different silicon revisions, all the more reason to get it removed as soon as possible) Signed-off-by: Adam Gruchala <adam.gruchala@intel.com> [fixed up deprecated silicon support] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Edmund Nadolski authored
Additional state machine cleanups:
o Remove static functions sci_state_machine_exit_state() and sci_state_machine_enter_state()
o Combine sci_base_state_machine_construct() and sci_base_state_machine_start() into a single function, sci_init_sm()
o Remove sci_base_state_machine_stop(), which is unused
o Kill state_machine.[ch]
Signed-off-by: Edmund Nadolski <edmund.nadolski@intel.com> [fixed too large to inline functions] Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Edmund Nadolski authored
This cleans up several areas of the state machine mechanism:
o Rename sci_base_state_machine_change_state to sci_change_state
o Remove sci_base_state_machine_get_state function
o Rename 'state_machine' struct member to 'sm' in client structs
o Shorten the name of request states
o Shorten state machine state names as follows:
SCI_BASE_CONTROLLER_STATE_xxx to SCIC_xxx
SCI_BASE_PHY_STATE_xxx to SCI_PHY_xxx
SCIC_SDS_PHY_STARTING_SUBSTATE_xxx to SCI_PHY_SUB_xxx
SCI_BASE_PORT_STATE_xxx to SCI_PORT_xxx
SCIC_SDS_PORT_READY_SUBSTATE_xxx to SCI_PORT_SUB_xxx
SCI_BASE_REMOTE_DEVICE_STATE_xxx to SCI_DEV_xxx
SCIC_SDS_STP_REMOTE_DEVICE_READY_SUBSTATE_xxx to SCI_STP_DEV_xxx
SCIC_SDS_SMP_REMOTE_DEVICE_READY_SUBSTATE_xxx to SCI_SMP_DEV_xxx
SCIC_SDS_REMOTE_NODE_CONTEXT_xxx_STATE to SCI_RNC_xxx
Signed-off-by: Edmund Nadolski <edmund.nadolski@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dave Jiang authored
Newer versions of gcc are better at identifying "set but not used" variables. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dave Jiang authored
We can call the EFI get_variable service routine directly to retrieve the EFI variable that holds the OEM parameters table. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
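A sketch of the direct call; the variable name, vendor GUID, and error handling below are placeholders rather than the driver's actual values:

    #include <linux/efi.h>
    #include <linux/errno.h>

    static int read_oem_table_sketch(void *buf, unsigned long *size)
    {
            /* placeholder variable name and vendor GUID, not the real ones */
            static efi_char16_t var_name[] = { 'O', 'E', 'M', 0 };
            efi_guid_t vendor = EFI_GUID(0x00000000, 0x0000, 0x0000, 0x00, 0x00,
                                         0x00, 0x00, 0x00, 0x00, 0x00, 0x00);
            u32 attr;
            efi_status_t status;

            /* call the EFI runtime service directly, no intermediate copy */
            status = efi.get_variable(var_name, &vendor, &attr, size, buf);

            return status == EFI_SUCCESS ? 0 : -ENODEV;
    }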
-
Dave Jiang authored
It doesn't look like there is any reason to do a kmalloc. We can do the byte swap in place and avoid the allocation. This allows us to remove a kmalloc and a memcpy. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
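A minimal sketch of the in-place swap, assuming the buffer is treated as 32-bit words; swab32s() swaps a word where it sits, so no scratch allocation or memcpy is needed:

    #include <linux/swab.h>
    #include <linux/types.h>

    static void swap_words_in_place(u32 *buf, size_t words)
    {
            size_t i;

            for (i = 0; i < words; i++)
                    swab32s(&buf[i]);       /* byte-swap each word in place */
    }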
-
Edmund Nadolski authored
Delete code which is no longer used. Signed-off-by: Edmund Nadolski <edmund.nadolski@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-