- 12 Jun, 2015 22 commits
-
-
Ira Weiny authored
For devices which support OPA MADs:
1) Use previously defined SMP support functions.
2) Pass the correct base version to ib_create_send_mad when processing OPA MADs.
3) Process the out_mad_pkey_index returned by agents for a response. This is necessary because OPA SMP packets must carry a valid pkey.
4) Carry the correct segment size (OPA vs IBTA) of RMPP messages within ib_mad_recv_wc.
5) Handle variable length OPA MADs by:
   * Adjusting the 'fake' WC for locally routed SMPs to represent the proper incoming byte_len
   * Using out_mad_size from the local HCA agents
     1) when sending agent responses on the wire
     2) when passing responses through the local_completions function
NOTE: wc.byte_len includes the GRH length and therefore is different from the in_mad_size specified to the local HCA agents. out_mad_size should _not_ include the GRH length as it is added
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
Add OPA SMP processing functionality. Define the new OPA SMP format, create support functions for this format using the previously defined helper functions as appropriate. These functions are defined in this patch and used in the final OPA MAD support patch. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
This patch is the first of three which add processing of OPA MADs:
1) Add Intel Omni-Path Architecture defines
2) Increase the max management version to accommodate OPA
3) Update ib_create_send_mad: if the device supports OPA MADs and the MAD being sent is the OPA base version, alter the MAD size and sg lengths as appropriate
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
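A rough sketch of the size selection described in item 3, assuming an OPA capability check and an OPA_MGMT_BASE_VERSION define (the names used here illustrate the approach and are not quoted from the patch):

    /* Sketch: inside ib_create_send_mad(), choose buffer/sg sizes by base version. */
    if (device_supports_opa_mads && base_version == OPA_MGMT_BASE_VERSION)
        mad_size = sizeof(struct opa_mad);   /* variable-size OPA MAD, up to 2K */
    else
        mad_size = sizeof(struct ib_mad);    /* classic IBTA MAD, 256 bytes */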
-
Ira Weiny authored
Add OPA MAD support flags to the core capability immutable flags. In addition add the rdma_cap_opa_mad helper function for core functions to use to detect OPA MAD support.
OPA MADs share a common header with IBTA MADs but with some differences for increased performance. Sharing a common header with IBTA MADs allows us to share most of the MAD processing code when dealing with OPA MADs, in addition to supporting some IBTA MADs on OPA devices.
OPA MADs differ in the following ways:
1) MADs are variable size up to 2K; IBTA defined MADs remain fixed at 256 bytes
2) OPA SMPs must carry valid PKeys
3) OPA SMP packets are a different format
The MAD stack will use this new functionality to determine if OPA MAD processing should occur on individual device ports.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
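A minimal sketch of what the capability helper can look like, assuming OPA MAD support is recorded as a bit (here called RDMA_CORE_CAP_OPA_MAD, an assumed name) in the per-port immutable core_cap_flags:

    /* Sketch only: report whether a port advertises OPA MAD support. */
    static inline bool rdma_cap_opa_mad(struct ib_device *device, u8 port_num)
    {
        return device->port_immutable[port_num].core_cap_flags &
               RDMA_CORE_CAP_OPA_MAD;
    }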
-
Ira Weiny authored
In order to support alternate sized MADs (and variable sized MADs on OPA devices), add in/out MAD size parameters to the process_mad core call. In addition, add an out_mad_pkey_index to communicate the pkey index the driver wishes the MAD stack to use when sending OPA MAD responses. The out MAD size and the out MAD PKey index are required by the MAD stack to generate responses on OPA devices. Furthermore, the in and out MAD parameters are made generic by specifying them as ib_mad_hdr rather than ib_mad. Drivers are modified as needed and are protected by BUG_ON checks if the MAD sizes passed to them are incorrect. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
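Put together, the extended process_mad callback would look roughly like the following sketch (parameter order is a best-effort reconstruction from the description above, not a verbatim quote of the header):

    /* Sketch of the extended device callback. */
    int (*process_mad)(struct ib_device *device, int process_mad_flags,
                       u8 port_num, const struct ib_wc *in_wc,
                       const struct ib_grh *in_grh,
                       const struct ib_mad_hdr *in_mad, size_t in_mad_size,
                       struct ib_mad_hdr *out_mad, size_t *out_mad_size,
                       u16 *out_mad_pkey_index);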
-
Ira Weiny authored
This patch implements allocating alternate receive MAD buffers within the MAD stack. Support for OPA to send/recv variable sized MADs is implemented later.
1) Convert MAD allocations from kmem_cache to kzalloc. kzalloc is more flexible to support devices with different sized MADs, and research and testing showed that the current use of kmem_cache does not provide performance benefits over kzalloc.
2) Change struct ib_mad_private to use a flex array for the MAD data.
3) Allocate ib_mad_private based on the size specified by devices in rdma_max_mad_size.
4) Carry the allocated size in ib_mad_private to be used when processing ib_mad_private objects.
5) Alter DMA mappings based on the mad_size of ib_mad_private.
6) Replace the use of sizeof and static defines as appropriate.
7) Add appropriate casts for the MAD data when calling processing functions.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
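A rough sketch of the resulting receive-buffer layout and allocation; member names and the exact layout are illustrative of points 2)-4) above rather than copied from the patch:

    struct ib_mad_private {
        struct ib_mad_private_header header;
        size_t mad_size;     /* allocated MAD size, carried for later processing */
        struct ib_grh grh;
        u8 mad[0];           /* flex array, sized from the per-port max MAD size */
    } __packed;

    /* kzalloc replaces the old kmem_cache; the size comes from the device. */
    static struct ib_mad_private *alloc_mad_private(size_t mad_size, gfp_t flags)
    {
        return kzalloc(sizeof(struct ib_mad_private) + mad_size, flags);
    }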
-
Ira Weiny authored
Add max MAD size to the device immutable data set and have all drivers that support MADs report the current IB MAD size (IB_MGMT_MAD_SIZE) to the core. Verify MAD size data in both the MAD core and when reading the immutable data. OPA drivers will report alternate MAD sizes in subsequent patches. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
In preparation to support the new OPA MAD base version, add a base version parameter to ib_create_send_mad and set it to IB_MGMT_BASE_VERSION for current users. Definition of the new base version and its processing will occur in later patches. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
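For current users the conversion is mechanical; a hedged before/after sketch of a typical call site:

    /* Before: */
    msg = ib_create_send_mad(agent, remote_qpn, pkey_index, rmpp_active,
                             hdr_len, data_len, GFP_KERNEL);

    /* After: existing callers simply pass the IB base version. */
    msg = ib_create_send_mad(agent, remote_qpn, pkey_index, rmpp_active,
                             hdr_len, data_len, GFP_KERNEL,
                             IB_MGMT_BASE_VERSION);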
-
Ira Weiny authored
IB and OPA SMPs share the same processing algorithm but have different header formats and permissive LID detection. Add a generic helper function for the DR forwarding checks which can be used by both IB and OPA SMP code. Use this function in the current IB function smi_check_forward_dr_smp. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
IB and OPA SMPs share the same processing algorithm but have different header formats and permissive LID detection. Add a generic helper function for processing DR SMP Recv messages which can be used by both IB and OPA SMP code. Use this function in the current IB function smi_handle_dr_smp_recv. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
IB and OPA SMPs share the same processing algorithm but have different header formats and permissive LID detection. Add a generic helper function for processing DR SMP Send messages which can be used by both IB and OPA SMP code. Use this function in the current IB function smi_handle_dr_smp_send. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
Make a helper function to process Directed Route SMPs, to be called by the IB MAD Recv Handler, ib_mad_recv_done_handler. This cleans up the MAD receive handler code a bit and allows us to better share the SMP processing code between IB and OPA SMPs. IB and OPA SMPs share the same processing algorithm but have different header formats and permissive LID detection. Therefore this and subsequent patches split the common processing code from the IB specific code in anticipation of sharing those algorithms with the OPA code. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
ib_find_send_mad only needs access to the MAD header not the full IB MAD. Change the local variable to ib_mad_hdr and change the corresponding cast. This allows for clean usage of this function with both IB and OPA MADs because OPA MADs carry the same header as IB MADs. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
find_mad_agent only needs read only access to the MAD header. Update the ib_mad pointer to be const ib_mad_hdr. Adjust call tree. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
This includes:
* Support allocation of CQ with the TIMESTAMP_COMPLETION creation flag.
* Add timestamp_mask and hca_core_clock to query_device, reporting the number of supported timestamp bits (mask) and the hca_core_clock frequency.
* Return the HCA core clock's offset in the query_device vendor data; this is needed in order to read the HCA's core clock.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
In order to read the HCA's cycle counter efficiently in user space, we need to map the HCA's register. This is done through an mmap call. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
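From user space this boils down to an mmap() of the uverbs command fd at a driver-defined offset that selects the clock page; a sketch, where the page index is a placeholder rather than the value used by the driver:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/mman.h>

    /* Sketch: map the HCA free-running clock page read-only.
     * clock_page_index is a placeholder for the driver-defined offset. */
    static volatile void *map_hca_clock_page(int cmd_fd, long clock_page_index)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, page_size, PROT_READ, MAP_SHARED,
                       cmd_fd, clock_page_index * page_size);

        return p == MAP_FAILED ? NULL : p;
    }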
-
Matan Barak authored
Vendors should be able to pass vendor-specific data to/from user space via the query_device uverb. In order to do this, we need to pass the vendor-specific udata. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
In order to expose timestamping we need to expose two new attributes in query_device to be used for CQ completion time-stamping:
timestamp_mask - how many bits are valid in the timestamp; timestamp values can be up to 64 bits.
hca_core_clock - the timestamp is given in HW cycles, so this reports the frequency of the HCA in kHz, necessary in order to convert cycles to seconds.
This is added both to ib_query_device and its respective uverbs counterpart.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
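With those two attributes a consumer can turn raw completion timestamps into elapsed time; a small sketch of the arithmetic (assuming hca_core_clock really is reported in kHz, as described above):

    #include <stdint.h>

    /* Sketch: convert a delta between two HW cycle stamps to nanoseconds.
     * timestamp_mask bounds the valid bits and handles counter wrap-around. */
    static uint64_t cycles_delta_to_ns(uint64_t start, uint64_t end,
                                       uint64_t timestamp_mask,
                                       uint64_t hca_core_clock_khz)
    {
        uint64_t delta = (end - start) & timestamp_mask;

        /* delta / kHz gives milliseconds, so scale by 10^6 to get ns. */
        return delta * 1000000ULL / hca_core_clock_khz;
    }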
-
Matan Barak authored
ib_uverbs_ex_create_cq follows the extension verbs mechanism. New features (for example, the CQ creation flags field which is added in a downstream patch) can be used via user-space libraries without breaking the ABI. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
Add a CQ creation flag which dictates that the created CQ will report a completion time-stamp value in the WC. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
Currently, ib_create_cq uses cqe and comp_vector instead of the extensible ib_cq_init_attr struct. Earlier patches already changed the vendors to work with ib_cq_init_attr. This patch changes the consumers too. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Matan Barak authored
Add a new ib_cq_init_attr structure which contains the previous cqe (minimum number of CQ entries) and comp_vector (completion vector) in addition to a new flags field. All vendors' create_cq callbacks are changed in order to work with the new API. This commit does not change any functionality. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-By: Devesh Sharma <devesh.sharma@avagotech.com> to patch #2 Signed-off-by: Doug Ledford <dledford@redhat.com>
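The new attribute structure and a call-site sketch (field names follow the description above; the create_cq callback signature is paraphrased, and a flags value of 0 preserves today's behaviour):

    struct ib_cq_init_attr {
        unsigned int cqe;         /* minimum number of CQ entries */
        int          comp_vector; /* completion vector */
        u32          flags;       /* new: creation flags, e.g. completion timestamping */
    };

    /* Vendor callback sketch: the old (cqe, comp_vector) pair becomes one struct. */
    struct ib_cq_init_attr attr = { .cqe = 256, .comp_vector = 0, .flags = 0 };
    cq = device->create_cq(device, &attr, ucontext, udata);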
-
- 11 Jun, 2015 8 commits
-
-
Hariprasad S authored
Handle this configuration: Queues Per Page * SGE BAR2 Queue Register Area Size > Page Size. Use cxgb4_bar2_sge_qregs() to obtain the proper location within the BAR2 region for a given qid. Rework the DB and GTS write functions to make use of this BAR2 info. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Hariprasad S authored
Enhance cxgb4_t4_bar2_sge_qregs() and cxgb4_bar2_sge_qregs() to support T4 user mode mappings. Update all the current users as well. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Doug Ledford authored
-
Colin Ian King authored
A reorganisation of the PD allocation and deallocation in commit 9ba1377d ("RDMA/ocrdma: Move PD resource management to driver.") introduced a double free on pd, as detected by static analysis with smatch:
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c:682 ocrdma_alloc_pd() error: double free of 'pd'
The original call to ocrdma_mbx_dealloc_pd() (which does not kfree pd) was replaced with a call to _ocrdma_dealloc_pd() (which does kfree pd). The kfree following this call causes the double free, so just remove it to fix the problem.
Fixes: 9ba1377d ("RDMA/ocrdma: Move PD resource management to driver.")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-By: Devesh Sharma <devesh.sharma@avagotech.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
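The resulting rule is simply that pd is freed exactly once on the error path; a minimal sketch with a placeholder for the failing step:

    pd = kzalloc(sizeof(*pd), GFP_KERNEL);
    if (!pd)
        return ERR_PTR(-ENOMEM);

    status = setup_pd_resources(dev, pd);   /* placeholder for the step that can fail */
    if (status) {
        _ocrdma_dealloc_pd(dev, pd);        /* already kfree()s pd ...         */
        return ERR_PTR(status);             /* ... so no extra kfree(pd) here. */
    }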
-
Dan Carpenter authored
This code causes a static checker warning:
drivers/infiniband/hw/usnic/usnic_uiom.c:476 usnic_uiom_alloc_pd() warn: passing zero to 'PTR_ERR'
This code isn't buggy, but iommu_domain_alloc() doesn't return an error pointer, so we can simplify the error handling and silence the static checker warning. The static checker warning is there to catch places which do: if (!ptr) return ERR_PTR(ptr);
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Dave Goodell <dgoodell@cisco.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
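A sketch of the simplified error handling: because iommu_domain_alloc() returns NULL on failure rather than an ERR_PTR, the caller can pick a fixed errno directly (the surrounding field and bus type here are illustrative):

    pd->domain = iommu_domain_alloc(&pci_bus_type);
    if (!pd->domain) {
        kfree(pd);
        return ERR_PTR(-ENOMEM);  /* fixed errno; no PTR_ERR(NULL) involved */
    }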
-
Fabian Frederick authored
Use kernel.h macro definition. Thanks to Julia Lawall for Coccinelle scripting support. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Moni Shoua authored
Registering an event handler is done for a device. This device may have one RoCE port (no SA cap) and one InfiniBand port (has SA cap). Therefore, a warning from the event handler about a specific port that doesn't have the SA cap is correct but needlessly pollutes the kernel log. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Moni Shoua authored
The Subnet Administrator (SA) is not a component of the RoCE spec. Therefore, it should not be a capability of a RoCE port. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 02 Jun, 2015 10 commits
-
-
Doug Ledford authored
-
Ira Weiny authored
In order to support constant callers of agent_send_response, we add const specifiers to its pointer arguments. Adjust the call tree accordingly. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
The process_mad device function declares some parameters as "in". Make those parameters const and adjust the call tree under process_mad in the various drivers accordingly. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Ira Weiny authored
The ib_device passed to the new RDMA helpers is constant. Declare the ib_device as const in the following functions:
rdma_protocol_ib
rdma_protocol_roce
rdma_protocol_iwarp
rdma_ib_or_roce
rdma_cap_ib_mad
rdma_cap_ib_smi
rdma_cap_ib_cm
rdma_cap_iw_cm
rdma_cap_ib_sa
rdma_cap_ib_mcast
rdma_cap_af_ib
rdma_cap_eth_ah
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Hal Rosenstock <hal@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Roland Dreier authored
The unwinding cleanup code at err_create_flow starts at the current index i. That means we shouldn't increment i until we're really sure we won't have to destroy the current flow; otherwise we might increment the index, fail inside an is_bonded block, and end up accessing off the end of the reg_id[] array. This was detected by Coverity (CID 1271229). Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
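The invariant behind the fix, as a generic sketch with hypothetical helpers: only advance the index once the slot is fully set up, so the unwind loop only ever walks slots that were actually created.

    i = 0;
    while (i < nreq) {
        err = create_one_flow(&reg_id[i]);   /* hypothetical helper */
        if (err)
            goto err_create_flow;            /* slot i was not (fully) created */
        i++;                                 /* advance only after complete success */
    }
    return 0;

err_create_flow:
    while (i--)                              /* never indexes past the end of reg_id[] */
        destroy_one_flow(&reg_id[i]);
    return err;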
-
Roland Dreier authored
If ocrdma_get_pd_num() fails, then we need to free the pd struct we allocated. This was detected by Coverity (CID 1271245). Signed-off-by: Roland Dreier <roland@purestorage.com> Acked-By: Devesh Sharma <devesh.sharma@avagotech.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Wengang Wang authored
The BUG_ON at lines 452/453 is triggered in rds_send_xmit():

    441    while (ret) {
    442            tmp = min_t(int, ret, sg->length -
    443                                  conn->c_xmit_data_off);
    444            conn->c_xmit_data_off += tmp;
    445            ret -= tmp;
    446            if (conn->c_xmit_data_off == sg->length) {
    447                    conn->c_xmit_data_off = 0;
    448                    sg++;
    449                    conn->c_xmit_sg++;
    450                    if (ret != 0 && conn->c_xmit_sg == rm->data.op_nents)
    451                            printk(KERN_ERR "conn %p rm %p sg %p ret %d\n", conn, rm, sg, ret);
    452                    BUG_ON(ret != 0 &&
    453                           conn->c_xmit_sg == rm->data.op_nents);
    454            }
    455    }

It is complaining that the total sent length is bigger than what we want to send: rds_ib_xmit() returns a wrong value for the second and later calls for the same rds_message. The sg and off passed by rds_send_xmit() to rds_ib_xmit() are based on scatterlist.offset/length, but rds_ib_xmit() operates on scatterlist.dma_address/dma_length; when dma_length is larger than length there is a problem. For the 2nd and later calls of rds_ib_xmit() for the same rds_message, at least one of the following two is wrong:
1) the scatterlist to start with -- the chosen one can be far beyond the correct one;
2) the offset to start with within that scatterlist.
Fix: add op_dmasg and op_dmaoff to the rm_data_op structure, indicating the scatterlist and the offset within it for rds_ib_xmit() to start from. op_dmasg and op_dmaoff are initialized to zero when the message is first DMA-mapped and are updated when filling send slots. The same applies to rds_iw_xmit() too.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
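A sketch of the shape of the fix (field types and the surrounding send-path code are illustrative): the data op remembers where the previous rds_ib_xmit() call stopped in DMA terms, instead of re-deriving the position from scatterlist.offset/length.

    struct rm_data_op {
        /* ... existing fields ... */
        unsigned int        op_dmasg;   /* scatterlist entry to resume from   */
        unsigned int        op_dmaoff;  /* offset within that entry's mapping */
        struct scatterlist  *op_sg;
    };

    /* In rds_ib_xmit() (sketch): resume from the recorded DMA position. */
    scat = &rm->data.op_sg[rm->data.op_dmasg];
    len = min_t(u32, RDS_FRAG_SIZE,
                sg_dma_len(scat) - rm->data.op_dmaoff);
    /* ... after queuing 'len' bytes into the current send slot ... */
    rm->data.op_dmaoff += len;
    if (rm->data.op_dmaoff == sg_dma_len(scat)) {
        rm->data.op_dmasg++;
        rm->data.op_dmaoff = 0;
    }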
-
Bart Van Assche authored
Avoid that sparse complains about ipoib_neigh_hash_init(). This patch does not change any functionality. See also patch "IPoIB: Fix memory leak in the neigh table deletion flow" (commit ID 66172c09). Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Shlomo Pongratz <shlomop@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Faisal Latif authored
RDMA/nes: Enable the use of the tos field in the nes driver Signed-off-by: Faisal Latif <Faisal.Latif@intel.com> Signed-off-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Steve Wise authored
rdma-cma/iw_cm: Export tos field to iwarp providers Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-