path: root/drivers/infiniband/hw/cxgb3/iwch_provider.c
Commit message | Author | Date | Files | Lines (-/+)
* more driver stuff from 3.2.72 | Wolfgang Wiedmeyer | 2015-10-23 | 1 | -8/+1
* RDMA/cxgb3: When a user QP is marked in error, also mark the CQs in error | Steve Wise | 2010-10-22 | 1 | -5/+19

The flushing of work requests for user QPs is implemented entirely in the user mode library. The only kernel interaction is to mark the user QP object indicating it is in error when the QP exits RTS. When the user QP operations are called by the application (eg: post_send, post_recv), the QP in error bit is checked and if set, the library flushes the QP. If, however, the application is not doing IO, but rather just polling the CQ, it will never get flushed work requests. This breaks some classes of applications.

This patch adds logic to mark user CQs in error when a QP that is bound to the CQ is marked in error. The library poll code can then notice the CQ is in error and flush all the in error QPs bound to that CQ.

Design:
- add 1 extra CQE entry to the CQ memory that will be used to indicate in error status.
- return the desired CQ memory size that should be mapped by the library
- bump the ABI since the create_cq uverbs response changes.
- detect older libraries and reduce the mmap size accordingly. (The ABI bump doesn't break old libraries, since they didn't check the ABI field anyway)

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB/core: Allow device-specific per-port sysfs files | Ralph Campbell | 2010-05-21 | 1 | -1/+1

Add a new parameter to ib_register_device() so that low-level device drivers can pass in a pointer to a callback function that will be called for each port that is registered in sysfs. This allows low-level device drivers to create files in /sys/class/infiniband/<hca>/ports/<N>/ without having to poke through the internals of the RDMA sysfs handling. There is no need for an unregister function since the kobject reference will go to zero when ib_unregister_device() is called.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

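As an illustration of the new hook (a sketch, not code from the patch: the my_* names and struct my_dev are hypothetical; ib_register_device() and sysfs_create_group() are the real interfaces involved):

    #include <linux/sysfs.h>
    #include <rdma/ib_verbs.h>

    struct my_dev {                          /* hypothetical driver-private struct */
            struct ib_device ibdev;
    };

    static struct attribute *my_port_attrs[] = {
            /* &dev_attr_foo.attr entries would go here */
            NULL,
    };

    static const struct attribute_group my_port_attr_group = {
            .name  = "my_driver",
            .attrs = my_port_attrs,
    };

    /* Called by the IB core for each port it registers in sysfs. */
    static int my_port_callback(struct ib_device *ibdev, u8 port_num,
                                struct kobject *port_kobj)
    {
            return sysfs_create_group(port_kobj, &my_port_attr_group);
    }

    static int my_register(struct my_dev *dev)
    {
            /* The callback is the new second argument. */
            return ib_register_device(&dev->ibdev, my_port_callback);
    }

No matching unregister step is needed; the group disappears when the port kobject's reference count drops to zero in ib_unregister_device().
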
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 2010-03-30 | 1 | -0/+1

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

* RDMA/cxgb3: Don't allocate the SW queue for user mode CQs | Steve Wise | 2010-02-24 | 1 | -1/+1

Only kernel mode CQs need the SW queue memory allocated. The SW queue for user mode CQs is allocated in userspace by libcxgb3.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* headers: remove sched.h from interrupt.h | Alexey Dobriyan | 2009-10-11 | 1 | -0/+1

Now that m68k's task_thread_info() no longer refers to current, it's possible to remove sched.h from interrupt.h and not break m68k! Many thanks to Heiko Carstens for allowing this.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>

* RDMA/cxgb3: Handle NULL inetdev pointer in iwch_query_port() | Steve Wise | 2009-10-07 | 1 | -4/+7

in_dev_get() can return NULL. If it does, iwch_query_port() will crash. Handle the NULL case by mapping it to port state INIT.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

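The shape of the fix, as an illustrative sketch rather than the actual driver code (the helper name is made up; in_dev_get()/in_dev_put(), struct in_device and the IB_PORT_* states are real kernel definitions):

    #include <linux/inetdevice.h>
    #include <linux/netdevice.h>
    #include <rdma/ib_verbs.h>

    static void example_set_port_state(struct net_device *netdev,
                                       struct ib_port_attr *props)
    {
            struct in_device *inetdev = in_dev_get(netdev);

            if (!inetdev) {
                    /* No IPv4 device attached: report INIT instead of crashing. */
                    props->state = IB_PORT_INIT;
                    return;
            }
            props->state = inetdev->ifa_list ? IB_PORT_ACTIVE : IB_PORT_INIT;
            in_dev_put(inetdev);
    }
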
* RDMA/cxgb3: Don't ignore insert_handle() failures | Steve Wise | 2009-09-09 | 1 | -15/+35

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: iwch_unregister_device leaks memory | Steve Wise | 2009-09-05 | 1 | -0/+2

The iwcm struct mem is never freed.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Report correct port state and MTU | Steve Wise | 2009-05-27 | 1 | -2/+30

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix deadlock in iw_cxgb3 (hang when configuring interface) | Steve Wise | 2008-11-12 | 1 | -6/+0

When the iw_cxgb3 module's cxgb3_client "add" func gets called by the cxgb3 module, the iwarp driver ends up calling the ethtool ops get_drvinfo function in cxgb3 to get the fw version and other info. Currently the iwarp driver grabs the rtnl lock around this down call to serialize. As of 2.6.27 or so, things changed such that the rtnl lock is held around the call to the netdev driver open function. Also the cxgb3_client "add" function doesn't get called if the device is down.

So, if you load cxgb3, then load iw_cxgb3, then ifconfig up the device, the iw_cxgb3 add func gets called with the rtnl_lock held. If you load cxgb3, ifconfig up the device, then load iw_cxgb3, the add func gets called without the rtnl_lock held. The former causes the deadlock, the latter does not.

In addition, there are iw_cxgb3 sysfs handlers that also can call down into cxgb3 to gather the fw and hw versions. These can be called concurrently on different processors and at any time. Thus we need to push this serialization down in the cxgb3 driver get_drvinfo func.

The fix is to remove rtnl lock usage, and use a per-device lock in cxgb3.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Acked-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Set active_mtu in ib_port_attr | Jon Mason | 2008-09-30 | 1 | -6/+3

When running ibv_devinfo, the active_mtu returned is garbage. This is due to the field not being populated in the query_port function in the driver. The patch below populates the active_mtu field with an MTU of 2k. It also zeros the struct, so that any new additions to it will return 0.

Signed-off-by: Jon Mason <jon@opengridcomputing.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

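A minimal sketch of what a query_port implementation following this approach looks like (the function name is hypothetical and the max_mtu value is an assumption for illustration; struct ib_port_attr and the IB_MTU_* constants are the real definitions):

    #include <linux/string.h>
    #include <rdma/ib_verbs.h>

    static int example_query_port(struct ib_device *ibdev, u8 port,
                                  struct ib_port_attr *props)
    {
            /* Zero everything so fields added later read back as 0, not garbage. */
            memset(props, 0, sizeof(struct ib_port_attr));

            props->active_mtu = IB_MTU_2048;   /* the fixed 2k MTU from the patch */
            props->max_mtu    = IB_MTU_4096;   /* assumption, for illustration only */

            return 0;
    }
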
* RDMA/cxgb3: Fix deadlock initializing iw_cxgb3 device | Steve Wise | 2008-08-04 | 1 | -25/+3

Running 'ifconfig up' on the cxgb3 interface with iw_cxgb3 loaded causes a deadlock. The rtnl lock is already held in this path. The function fw_supports_fastreg() was introduced in 2.6.27 to conditionally set the IB_DEVICE_MEM_MGT_EXTENSIONS bit iff the firmware was at 7.0 or greater, and this function also acquires the rtnl lock, which causes the deadlock.

Further, if iw_cxgb3 is loaded _after_ the nic interface is brought up, then the deadlock does not occur and therefore fw_supports_fastreg() does need to grab the rtnl lock in that path.

It turns out this code is all useless anyway. The low level driver will NOT allow the open if the firmware isn't 7.0, so iw_cxgb3 can always set the MEM_MGT_EXTENSIONS bit. Simplify...

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fixes for zero STag | Steve Wise | 2008-07-14 | 1 | -2/+2

Handling the zero STag in receive work request requires some extra logic in the driver:

- Only set the QP_PRIV bit for kernel mode QPs.

- Add a zero STag build function for recv wrs. The uP needs a PBL allocated and passed down in the recv WR so it can construct a HW PBL for the zero STag S/G entries. Note: we need to place a few restrictions on zero STag usage because of this:

  1) all SGEs in a recv WR must either be zero STag or not. No mixing.

  2) an individual SGE length cannot exceed 128MB for a zero-stag SGE. This should be OK since it's not really practical to allocate such a large chunk of pinned contiguous DMA mapped memory.

- Add an optimized non-zero-STag recv wr format for kernel users. This is needed to optimize both zero and non-zero STag cracking in the recv path for kernel users.

- Remove the iwch_ prefix from the static build functions.

- Bump required FW version.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>

* RDMA/core: Add local DMA L_Key support | Steve Wise | 2008-07-14 | 1 | -2/+4

- Change the IB_DEVICE_ZERO_STAG flag to the transport-neutral name IB_DEVICE_LOCAL_DMA_LKEY, which is used by iWARP RNICs to indicate 0 STag support and IB HCAs to indicate reserved L_Key support.

- Add a u32 local_dma_lkey member to struct ib_device. Drivers fill this in with the appropriate local DMA L_Key (if they support it).

- Fix up the drivers using this flag.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

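A hedged sketch of how a kernel consumer can use the new member once a driver fills it in (the helper is hypothetical; struct ib_sge and the local_dma_lkey member are the real pieces):

    #include <rdma/ib_verbs.h>

    static void example_fill_sge(struct ib_device *ibdev, struct ib_sge *sge,
                                 u64 dma_addr, u32 len)
    {
            sge->addr   = dma_addr;
            sge->length = len;
            /* On iWARP RNICs this is the zero STag; on IB HCAs the reserved L_Key. */
            sge->lkey   = ibdev->local_dma_lkey;
    }
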
* RDMA/cxgb3: Set rkey field for new memory windows in iwch_alloc_mw() | Steve Wise | 2008-07-14 | 1 | -0/+1

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Propagate HW page size capabilities | Jon Mason | 2008-07-14 | 1 | -0/+1

cxgb3 does not currently report the page size capabilities, and incorrectly reports them internally. This version changes the bit-shifting to a static value (per Steve's request).

Signed-off-by: Jon Mason <jon@opengridcomputing.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Add support for protocol statistics | Steve Wise | 2008-07-14 | 1 | -4/+58

- Add a new rdma ctl command called RDMA_GET_MIB to the cxgb3 low level driver to obtain the protocol mib from the rnic hardware.

- Add new iw_cxgb3 provider method to get the MIB from the low level driver.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix up some ib_device_attr fields | Steve Wise | 2008-07-14 | 1 | -0/+25

- set fw_ver
- set hw_ver
- set max_qp_wr to something reasonable
- set max_cqe to something reasonable

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: MEM_MGT_EXTENSIONS support | Steve Wise | 2008-07-14 | 1 | -3/+101

- set IB_DEVICE_MEM_MGT_EXTENSIONS capability bit if fw supports it.
- set max_fast_reg_page_list_len device attribute.
- add iwch_alloc_fast_reg_mr function.
- add iwch_alloc_fastreg_pbl
- add iwch_free_fastreg_pbl
- adjust the WQ depth for kernel mode work queues to account for fastreg possibly taking 2 WR slots.
- add fastreg_mr work request support.
- add local_inv work request support.
- add send_with_inv and send_with_se_inv work request support.
- removed useless duplicate enums/defines for TPT/MW/MR stuff.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix regression caused by class_device -> device conversion | Steve Wise | 2008-07-08 | 1 | -0/+4

The change to iwch_provider.c in commit f4e91eb4 ("IB: convert struct class_device to struct device") undid the fix done in commit 7f049f2f ("RDMA/cxgb3: Hold rtnl_lock() around ethtool get_drvinfo call"). It removed the calls to rtnl_lock() that serialized the iw_cxgb3 ethtool ops calls into the cxgb3 driver. This locking is needed to avoid messing up the internal state of the cxgb3 driver.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix severe limit on userspace memory registration size | Roland Dreier | 2008-05-06 | 1 | -15/+53

Currently, iw_cxgb3 is severely limited on the amount of userspace memory that can be registered in a single memory region, which causes big problems for applications that expect to be able to register 100s of MB.

The problem is that the driver uses a single kmalloc()ed buffer to hold the physical buffer list (PBL) for the entire memory region during registration, which means that 8 bytes of contiguous memory are required for each page of memory being registered. For example, a 64 MB registration will require 128 KB of contiguous memory with 4 KB pages, and it is unlikely that such an allocation will succeed on a busy system.

This is purely a driver problem: the temporary page list buffer is not needed by the hardware, so we can fix this by writing the PBL to the hardware in page-sized chunks rather than all at once. We do this by splitting the memory registration operation up into several steps:

- Allocate PBL space in adapter memory for the full registration
- Copy PBL to adapter memory in chunks
- Allocate STag and enable memory region

This also allows several other cleanups to the __cxio_tpt_op() interface and related parts of the driver.

This change leaves the reregister memory region and memory window operations broken, but they already didn't work due to other longstanding bugs, so fixing them will be left to a later patch.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

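A conceptual sketch of the chunked PBL write described above (write_pbl_chunk() is a hypothetical stand-in for the driver's firmware interface; only the chunking arithmetic is the point):

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/types.h>

    /* Hypothetical fw interface: write n PBL entries starting at entry offset. */
    int write_pbl_chunk(void *hw, u32 pbl_offset, const __be64 *pages, int n);

    #define PBL_CHUNK_ENTRIES  (PAGE_SIZE / sizeof(__be64))

    static int example_write_pbl(void *hw, u32 pbl_offset,
                                 const __be64 *pages, int npages)
    {
            int i, err;

            for (i = 0; i < npages; i += PBL_CHUNK_ENTRIES) {
                    int n = min_t(int, npages - i, PBL_CHUNK_ENTRIES);

                    err = write_pbl_chunk(hw, pbl_offset + i, &pages[i], n);
                    if (err)
                            return err;
            }
            return 0;
    }

Each step needs only one page worth of contiguous kernel memory, regardless of how large the region being registered is.
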
* RDMA/cxgb3: Set the max_mr_size device attribute correctly | Steve Wise | 2008-04-29 | 1 | -1/+1

cxgb3 only supports 4GB memory regions. The lustre RDMA code uses this attribute and currently has to code around our bad setting.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: expand ib_umem_get() prototype | Arthur Kepner | 2008-04-29 | 1 | -1/+1

Add a new parameter, dmasync, to the ib_umem_get() prototype. Use dmasync = 1 when mapping user-allocated CQs with ib_umem_get().

Signed-off-by: Arthur Kepner <akepner@sgi.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jes Sorensen <jes@sgi.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* IB: convert struct class_device to struct device | Tony Jones | 2008-04-19 | 1 | -38/+37

This converts the main ib_device to use struct device instead of struct class_device as class_device is going away.

Signed-off-by: Tony Jones <tonyj@suse.de>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Cc: Roland Dreier <rolandd@cisco.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

* IB/core: Add support for "send with invalidate" work requests | Roland Dreier | 2008-04-16 | 1 | -2/+1

Add a new IB_WR_SEND_WITH_INV send opcode that can be used to mark a "send with invalidate" work request as defined in the iWARP verbs and the InfiniBand base memory management extensions. Also put "imm_data" and a new "invalidate_rkey" member in a new "ex" union in struct ib_send_wr. The invalidate_rkey member can be used to pass in an R_Key/STag to be invalidated. Add this new union to struct ib_uverbs_send_wr. Add code to copy the invalidate_rkey field in ib_uverbs_post_send().

Fix up low-level drivers to deal with the change to struct ib_send_wr, and just remove the imm_data initialization from net/sunrpc/xprtrdma/, since that code never does any send with immediate operations.

Also, move the existing IB_DEVICE_SEND_W_INV flag to a new bit, since the iWARP drivers currently in the tree set the bit. The amso1100 driver at least will silently fail to honor the IB_SEND_INVALIDATE bit if passed in as part of userspace send requests (since it does not implement kernel bypass work request queueing). Remove the flag from all existing drivers that set it until we know which ones are OK.

The value chosen for the new flag is not consecutive, to avoid clashing with flags defined in the XRC patches, which are not merged yet but which are already in use and are likely to be merged soon.

This resurrects a patch sent long ago by Mikkel Hagen <mhagen@iol.unh.edu>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

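An illustrative call site for the new opcode (a sketch only: the function and the choice of IB_SEND_SIGNALED are not from the patch; the opcode, the "ex" union and ib_post_send() are):

    #include <linux/string.h>
    #include <rdma/ib_verbs.h>

    static int example_send_with_inv(struct ib_qp *qp, struct ib_sge *sge,
                                     u32 remote_rkey)
    {
            struct ib_send_wr wr, *bad_wr;

            memset(&wr, 0, sizeof(wr));
            wr.opcode             = IB_WR_SEND_WITH_INV;
            wr.ex.invalidate_rkey = remote_rkey;   /* R_Key/STag to invalidate */
            wr.sg_list            = sge;
            wr.num_sge            = 1;
            wr.send_flags         = IB_SEND_SIGNALED;

            return ib_post_send(qp, &wr, &bad_wr);
    }
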
* IB: Replace remaining __FUNCTION__ occurrences with __func__ | Harvey Harrison | 2008-04-16 | 1 | -38/+38

__FUNCTION__ is gcc-specific, use __func__ instead.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

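The change itself is mechanical; a before/after sketch (pr_debug() and the qp argument are just stand-ins for the driver's own debug macros):

    #include <linux/kernel.h>

    static void example_log(void *qp)
    {
            /* Before: gcc-specific spelling. */
            pr_debug("%s: qp %p\n", __FUNCTION__, qp);

            /* After: standard C99 identifier, identical output. */
            pr_debug("%s: qp %p\n", __func__, qp);
    }
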
* RDMA/cxgb3: Fix iwch_create_cq() off-by-one error | Jon Mason | 2008-03-09 | 1 | -1/+1

The cxgb3 driver is unnecessarily decreasing the number of CQ entries by one when creating a CQ. This will cause the CQ not to have as many entries as requested by the user if the user requests a power of 2 size.

Signed-off-by: Jon Mason <jon@opengridcomputing.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Return correct max_inline_data when creating a QP | Jon Mason | 2008-02-29 | 1 | -0/+3

Set cap.max_inline_data to the actual max inline data that the adapter supports, so that userspace apps see the right value returned.

Signed-off-by: Jon Mason <jon@opengridcomputing.com>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix the T3A workaround checks | Steve Wise | 2008-01-25 | 1 | -1/+1

Correctly work around T3A issues by checking "hwtype != T3A" instead of "hwtype == T3B". This will be needed for new hardware types.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Hold rtnl_lock() around ethtool get_drvinfo call | Steve Wise | 2008-01-25 | 1 | -0/+5

Currently the call into cxgb3 to get the driver info is not serialized. The iw_cxgb3 module needs to hold the rtnl_lock around the ethtool ops call like dev_ioctl() does.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

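A sketch of the serialization pattern (the helper name and the lldev pointer are hypothetical; rtnl_lock()/rtnl_unlock() and the ethtool get_drvinfo op are the real interfaces):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/rtnetlink.h>

    static void example_get_fw_info(struct net_device *lldev,
                                    struct ethtool_drvinfo *info)
    {
            if (lldev->ethtool_ops && lldev->ethtool_ops->get_drvinfo) {
                    rtnl_lock();
                    lldev->ethtool_ops->get_drvinfo(lldev, info);
                    rtnl_unlock();
            }
    }
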
* RDMA/cxgb3: Set the max_qp_init_rd_atom attribute in query_device | Steve Wise | 2007-11-13 | 1 | -0/+1

The device attribute max_qp_init_rd_atom is not getting set in cxgb3's query_device method. Version 1.0.4 of librdmacm now validates the user's requested initiator and responder resources against the max supported by the device. Since iw_cxgb3 wasn't setting this attribute (and it defaulted to 0), all rdma_connect()s fail if there are initiator resources requested by the app. Fix this by setting the correct value in iwch_query_device().

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Check return of kmalloc() in iwch_register_device() | WANG Cong | 2007-07-09 | 1 | -3/+4

Signed-off-by: WANG Cong <xiyou.wangcong@gmail.com>
[ Also remove cast from void * return of kmalloc() as suggested by Jesper Juhl <jesper.juhl@gmail.com>. ]
Signed-off-by: Roland Dreier <rolandd@cisco.com>

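The pattern being fixed, in miniature (sizes and names are illustrative only):

    #include <linux/errno.h>
    #include <linux/slab.h>

    static int example_alloc_names(char ***namesp, int count)
    {
            /* kmalloc() returns void *, so no cast is needed. */
            char **names = kmalloc(count * sizeof(char *), GFP_KERNEL);

            if (!names)
                    return -ENOMEM;   /* don't dereference a failed allocation */

            *namesp = names;
            return 0;
    }
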
* IB/uverbs: Export ib_umem_get()/ib_umem_release() to modules | Roland Dreier | 2007-05-08 | 1 | -8/+20

Export ib_umem_get()/ib_umem_release() and put low-level drivers in control of when to call ib_umem_get() to pin and DMA map userspace, rather than always calling it in ib_uverbs_reg_mr() before calling the low-level driver's reg_user_mr method. Also move these functions to be in the ib_core module instead of ib_uverbs, so that driver modules using them do not depend on ib_uverbs.

This has a number of advantages:

- It is better design from the standpoint of making generic code a library that can be used or overridden by device-specific code as the details of specific devices dictate.

- Drivers that do not need to pin userspace memory regions do not need to take the performance hit of calling ib_umem_get(). For example, although I have not tried to implement it in this patch, the ipath driver should be able to avoid pinning memory and just use copy_{to,from}_user() to access userspace memory regions.

- Buffers that need special mapping treatment can be identified by the low-level driver. For example, it may be possible to solve some Altix-specific memory ordering issues with mthca CQs in userspace by mapping CQ buffers with extra flags.

- Drivers that need to pin and DMA map userspace memory for things other than memory regions can use ib_umem_get() directly, instead of hacks using extra parameters to their reg_phys_mr method. For example, the mlx4 driver that is pending being merged needs to pin and DMA map QP and CQ buffers, but it does not need to create a memory key for these buffers. So the cleanest solution is for mlx4 to call ib_umem_get() in the create_qp and create_cq methods.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

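A sketch of the shape this gives a driver's reg_user_mr method: the driver now decides when to pin and map the user buffer. The ib_umem_get() signature shown is the one from this era (a dmasync argument was added later; see the 2008-04-29 entry above), and everything besides ib_umem_get()/ib_umem_release() is placeholder:

    #include <linux/err.h>
    #include <rdma/ib_umem.h>
    #include <rdma/ib_verbs.h>

    static struct ib_mr *example_reg_user_mr(struct ib_pd *pd, u64 start,
                                             u64 length, u64 virt, int acc,
                                             struct ib_udata *udata)
    {
            struct ib_umem *umem;

            /* Pin and DMA-map the user region only when this driver needs it. */
            umem = ib_umem_get(pd->uobject->context, start, length, acc);
            if (IS_ERR(umem))
                    return ERR_PTR(PTR_ERR(umem));

            /*
             * A real driver walks umem's page chunks here and programs its
             * translation tables, keeping umem until deregistration.  This
             * placeholder just releases it and reports "not implemented".
             */
            ib_umem_release(umem);
            return ERR_PTR(-ENOSYS);
    }
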
* IB: Return "maybe missed event" hint from ib_req_notify_cq() | Roland Dreier | 2007-05-06 | 1 | -3/+5

The semantics defined by the InfiniBand specification say that completion events are only generated when a completion is added to a completion queue (CQ) after completion notification is requested. In other words, this means that the following race is possible:

        while (CQ is not empty)
                ib_poll_cq(CQ);
        // new completion is added after while loop is exited
        ib_req_notify_cq(CQ);
        // no event is generated for the existing completion

To close this race, the IB spec recommends doing another poll of the CQ after requesting notification. However, it is not always possible to arrange code this way (for example, we have found that NAPI for IPoIB cannot poll after requesting notification). Also, some hardware (eg Mellanox HCAs) actually will generate an event for completions added before the call to ib_req_notify_cq() -- which is allowed by the spec, since there's no way for any upper-layer consumer to know exactly when a completion was really added -- so the extra poll of the CQ is just a waste.

Motivated by this, we add a new flag "IB_CQ_REPORT_MISSED_EVENTS" for ib_req_notify_cq() so that it can return a hint about whether a completion may have been added before the request for notification. The return value of ib_req_notify_cq() is extended so:

        < 0   means an error occurred while requesting notification
        == 0  means notification was requested successfully, and if
              IB_CQ_REPORT_MISSED_EVENTS was passed in, then no events
              were missed and it is safe to wait for another event.
        > 0   is only returned if IB_CQ_REPORT_MISSED_EVENTS was passed
              in.  It means that the consumer must poll the CQ again to
              make sure it is empty to avoid the race described above.

We add a flag to enable this behavior rather than turning it on unconditionally, because checking for missed events may incur significant overhead for some low-level drivers, and consumers that don't care about the results of this test shouldn't be forced to pay for the test.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

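The consumer pattern this enables, sketched (the handler body is elided; ib_poll_cq() and ib_req_notify_cq() are the real verbs):

    #include <rdma/ib_verbs.h>

    static void example_drain_and_rearm(struct ib_cq *cq)
    {
            struct ib_wc wc;

    repoll:
            while (ib_poll_cq(cq, 1, &wc) > 0) {
                    /* ... handle the completion ... */
            }

            /* A return > 0 means a completion may have slipped in: poll again. */
            if (ib_req_notify_cq(cq,
                                 IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS) > 0)
                    goto repoll;
    }
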
* IB: Add CQ comp_vector support | Michael S. Tsirkin | 2007-05-06 | 1 | -1/+2

Add a num_comp_vectors member to struct ib_device and extend ib_create_cq() to pass in a comp_vector parameter -- this parallels the userspace libibverbs API. Update all hardware drivers to set num_comp_vectors to 1 and have all ULPs pass 0 for the comp_vector value. Pass the value of num_comp_vectors to userspace rather than hard-coding a value of 1.

We want multiple CQ event vector support (via MSI-X or similar for adapters that can generate multiple interrupts), but it's not clear how many vectors we want, or how we want to deal with policy issues such as how to decide which vector to use or how to set up interrupt affinity. This patch is useful for experimenting, since no core changes will be necessary when updating a driver to support multiple vectors, and we know that we want to make at least these changes anyway.

Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

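What a ULP call site looks like after this change, as a sketch (the handlers and wrapper are hypothetical; the trailing 0 is the new comp_vector argument that all in-tree ULPs pass at this point):

    #include <rdma/ib_verbs.h>

    static void my_comp_handler(struct ib_cq *cq, void *cq_context)
    {
            /* ... schedule completion processing ... */
    }

    static void my_event_handler(struct ib_event *event, void *cq_context)
    {
            /* ... handle CQ async events ... */
    }

    static struct ib_cq *example_create_cq(struct ib_device *ibdev, int cqe)
    {
            return ib_create_cq(ibdev, my_comp_handler, my_event_handler,
                                NULL /* cq_context */, cqe, 0 /* comp_vector */);
    }
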
* RDMA/cxgb3: Fail qp creation if the requested max_inline is too large | Steve Wise | 2007-04-30 | 1 | -0/+3

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* IB: Set class_dev->dev in core for nice device symlink | Joachim Fenkes | 2007-04-24 | 1 | -1/+0

All RDMA drivers except ehca set class_dev->dev to their dma_device value (ehca leaves this unset). dma_device is the only value that makes any sense, so move this assignment to core/sysfs.c. This reduces the duplicated code in the rest of the drivers and gives ehca a nice /sys/class/infiniband/ehcaX/device symlink.

Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Handle build_phys_page_list() failure in iwch_reregister_phys_mem() | Steve Wise | 2007-03-22 | 1 | -1/+4

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Fix MR permission problems | Steve Wise | 2007-03-06 | 1 | -20/+4

Fix memory region permission problems:

- remove useless and redundant iwch_mem_perms enum.
- create ib_to_tpt_access_rights() for mapping ib access rights to T3 TPT permissions.
- create ib_to_mwbind_access_rights() for mapping ib access rights to T3 MWBIND WR permissions.
- fix up the mem reg code to utilize the new functions.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Move QP to error on destroy if the state is IDLE | Steve Wise | 2007-03-06 | 1 | -4/+2

Change iwch_destroy_qp() to always move the QP to ERROR and let iwch_modify_qp() decide what to do.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Don't use mm after it's freed in iwch_mmap() | Steve Wise | 2007-03-06 | 1 | -4/+6

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: cleanups | Adrian Bunk | 2007-02-23 | 1 | -1/+1

- don't mark static functions in C files as inline - gcc should know best whether inlining makes sense
- never compile the unused cxio_dbg.c
- make the following needlessly global functions static:
  - cxio_hal.c: cxio_hal_clear_qp_ctx()
  - iwch_provider.c: iwch_get_qp()
- remove the following unused global functions:
  - cxio_hal.c: cxio_allocate_stag()
  - cxio_resource.c: cxio_hal_get_rhdl()
  - cxio_resource.c: cxio_hal_put_rhdl()

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Remove Open Grid Computing copyrights in iw_cxgb3 driver | Steve Wise | 2007-02-16 | 1 | -1/+0

Remove the Open Grid Computing copyright. It shouldn't be there.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

* RDMA/cxgb3: Add driver for Chelsio T3 RNIC | Steve Wise | 2007-02-12 | 1 | -0/+1203

Add an RDMA/iWARP driver for the Chelsio T3 1GbE and 10GbE adapters.

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>