path: root/drivers/dma/ioat/dma.h
* ioat2: catch and recover from broken vtd configurations v6
  Dan Williams, 2010-08-04 (1 file, -0/+1)
  On some platforms (MacPro3,1) the BIOS assigns the ioatdma device to the incorrect iommu, causing faults when the driver initializes. Add a quirk to catch this misconfiguration and try falling back to untranslated operation (which works in the MacPro3,1 case). Assuming there are other platforms with misconfigured iommus, teach the ioatdma driver to treat initialization failures as non-fatal (just fail the driver load and emit a warning instead of triggering a BUG_ON).
  This can be classified as a boot regression since 2.6.32 on affected platforms, because the ioatdma module did not autoload prior to that kernel.
  Cc: <stable@kernel.org>
  Acked-by: David Woodhouse <David.Woodhouse@intel.com>
  Reported-by: Chris Li <lkml@chrisli.org>
  Tested-by: Chris Li <lkml@chrisli.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

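For reference, a minimal sketch of the defensive style this implies, using only generic DMA-API calls (function and field names are illustrative, not the driver's actual quirk code): map a test buffer during probe and, if the mapping cannot be established, warn and fail the probe with an error code rather than BUG()ing out.

    #include <linux/dma-mapping.h>
    #include <linux/slab.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    #define EXAMPLE_TEST_SIZE 2048    /* arbitrary test buffer size */

    /* Illustrative probe-time sanity check that degrades gracefully. */
    static int example_dma_selfcheck(struct device *dev)
    {
        void *buf = kmalloc(EXAMPLE_TEST_SIZE, GFP_KERNEL);
        dma_addr_t addr;
        int err = 0;

        if (!buf)
            return -ENOMEM;

        addr = dma_map_single(dev, buf, EXAMPLE_TEST_SIZE, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, addr)) {
            /* non-fatal: warn and let the probe fail cleanly */
            dev_warn(dev, "test mapping failed, disabling channel\n");
            err = -ENODEV;
        } else {
            dma_unmap_single(dev, addr, EXAMPLE_TEST_SIZE, DMA_TO_DEVICE);
        }

        kfree(buf);
        return err;
    }
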
* Merge branch 'ioat' into dmaengine
  Dan Williams, 2010-05-17 (1 file, -0/+1)

* ioat2,3: convert to producer/consumer locking
  Dan Williams, 2010-05-01 (1 file, -0/+1)
  Use separate locks for the descriptor prep (producer) and descriptor cleanup (consumer) paths. Allows the producer path to run concurrently with the cleanup path. Inspired by Documentation/circular-buffer.txt.
  Cc: David Howells <dhowells@redhat.com>
  Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

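A minimal sketch of that producer/consumer split, loosely following Documentation/circular-buffer.txt (the structure, lock, and function names are illustrative, not the actual ioat2 fields):

    #include <linux/circ_buf.h>
    #include <linux/spinlock.h>
    #include <linux/errno.h>

    /* Illustrative ring with separate producer and consumer locks. */
    struct example_ring {
        spinlock_t prep_lock;      /* producer: descriptor prep path */
        spinlock_t cleanup_lock;   /* consumer: completion/cleanup path */
        unsigned int head;         /* next slot to fill (producer-owned) */
        unsigned int tail;         /* next slot to retire (consumer-owned) */
        unsigned int size;         /* number of slots, power of two */
    };

    /* Producer path: only 'head' is written here, under prep_lock. */
    static int example_prep(struct example_ring *ring)
    {
        int ret = -ENOMEM;

        spin_lock_bh(&ring->prep_lock);
        if (CIRC_SPACE(ring->head, ring->tail, ring->size)) {
            /* ... fill the descriptor at ring->head ... */
            ring->head = (ring->head + 1) & (ring->size - 1);
            ret = 0;
        }
        spin_unlock_bh(&ring->prep_lock);
        return ret;
    }

    /* Consumer path: only 'tail' is written here, under cleanup_lock, so
     * it can run concurrently with example_prep().  (A complete
     * implementation also follows the ordering rules in
     * Documentation/circular-buffer.txt when reading the index owned by
     * the other side.) */
    static void example_cleanup(struct example_ring *ring)
    {
        spin_lock_bh(&ring->cleanup_lock);
        while (CIRC_CNT(ring->head, ring->tail, ring->size)) {
            /* ... complete the descriptor at ring->tail ... */
            ring->tail = (ring->tail + 1) & (ring->size - 1);
        }
        spin_unlock_bh(&ring->cleanup_lock);
    }
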
* dmaengine: provide helper for setting txstate
  Dan Williams, 2010-03-26 (1 file, -5/+1)
  Simple conditional struct filler to cut out some duplicated code.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

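The helper is a small conditional struct filler; a sketch of its shape (the in-tree version lives in include/linux/dmaengine.h):

    #include <linux/dmaengine.h>

    /*
     * Fill a struct dma_tx_state only if the caller supplied one, so each
     * driver's status callback no longer needs its own NULL check and
     * field-by-field assignment.
     */
    static inline void example_set_tx_state(struct dma_tx_state *st,
                                            dma_cookie_t last,
                                            dma_cookie_t used,
                                            u32 residue)
    {
        if (st) {
            st->last = last;
            st->used = used;
            st->residue = residue;
        }
    }
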
* DMAENGINE: generic channel status v2
  Linus Walleij, 2010-03-26 (1 file, -11/+11)
  Convert the device_is_tx_complete() operation on the DMA engine to a generic device_tx_status() operation, which can return three states: DMA_TX_RUNNING, DMA_TX_COMPLETE, and DMA_TX_PAUSED.
  [dan.j.williams@intel.com: update for timberdale]
  Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
  Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
  Cc: Pavel Machek <pavel@ucw.cz>
  Cc: Li Yang <leoli@freescale.com>
  Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
  Cc: Magnus Damm <damm@opensource.se>
  Cc: Liam Girdwood <lrg@slimlogic.co.uk>
  Cc: Joe Perches <joe@perches.com>
  Cc: Roland Dreier <rdreier@cisco.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

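For context, a driver's ->device_tx_status() hook typically ends up with roughly this shape. The sketch below uses the generic dmaengine helpers, and the cookie variables stand in for whatever state a real driver tracks:

    #include <linux/dmaengine.h>

    /* Sketch of a ->device_tx_status() implementation.  'last_completed'
     * and 'last_used' stand in for driver-tracked cookie state. */
    static enum dma_status example_tx_status(struct dma_chan *chan,
                                             dma_cookie_t cookie,
                                             struct dma_tx_state *txstate)
    {
        dma_cookie_t last_completed = 0;  /* normally read from channel state */
        dma_cookie_t last_used = 0;       /* normally read from channel state */

        dma_set_tx_state(txstate, last_completed, last_used, 0);
        return dma_async_is_complete(cookie, last_completed, last_used);
    }
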
* Driver core: Constify struct sysfs_ops in struct kobj_type
  Emese Revfy, 2010-03-07 (1 file, -1/+1)
  Constify struct sysfs_ops. This is part of the ops structure constification effort started by Arjan van de Ven et al.
  Benefits of this constification:
  * prevents modification of data that is shared (referenced) by many other structure instances at runtime
  * detects/prevents accidental (but not intentional) modification attempts on archs that enforce read-only kernel data at runtime
  * potentially better optimized code as the compiler can assume that the const data cannot be changed
  * the compiler/linker move const data into .rodata and therefore exclude them from false sharing
  Signed-off-by: Emese Revfy <re.emese@gmail.com>
  Acked-by: David Teigland <teigland@redhat.com>
  Acked-by: Matt Domsch <Matt_Domsch@dell.com>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Acked-by: Hans J. Koch <hjk@linutronix.de>
  Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
  Acked-by: Jens Axboe <jens.axboe@oracle.com>
  Acked-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

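The change itself is mechanical. A sketch with made-up names of what a constified ops table looks like once struct kobj_type takes a const pointer:

    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t example_attr_show(struct kobject *kobj,
                                     struct attribute *attr, char *page)
    {
        return 0;  /* format the requested attribute into 'page' here */
    }

    /* The ops table can now be const: it lives in .rodata and cannot be
     * modified at runtime. */
    static const struct sysfs_ops example_sysfs_ops = {
        .show = example_attr_show,
    };

    static struct kobj_type example_ktype = {
        .sysfs_ops = &example_sysfs_ops,  /* kobj_type takes a const pointer */
    };

    static int example_add_kobj(struct kobject *kobj, struct kobject *parent)
    {
        return kobject_init_and_add(kobj, &example_ktype, parent, "example");
    }
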
* ioat: cleanup ->timer_fn() and ->cleanup_fn() prototypes
  Dan Williams, 2010-03-03 (1 file, -6/+5)
  If the calling conventions of ->timer_fn() and ->cleanup_fn() are unified across hardware versions, we can drop parameters to ioat_init_channel() and unify the ioat_is_dma_complete() implementations. Both ->timer_fn() and ->cleanup_fn() are modified to expect a struct dma_chan pointer.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat2,3: put channel hardware in known state at init
  Dan Williams, 2009-12-19 (1 file, -0/+18)
  Put the ioat2 and ioat3 state machines in the halted state with all errors cleared. The ioat1 init path is left undisturbed for stability; there are no reported ioat1 initialization issues.
  Cc: <stable@kernel.org>
  Reported-by: Roland Dreier <rdreier@cisco.com>
  Tested-by: Roland Dreier <rdreier@cisco.com>
  Acked-by: Simon Horman <horms@verge.net.au>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat2,3: report all uncorrectable errors
  Dan Williams, 2009-11-19 (1 file, -3/+1)
  Modify is_ioat_bug() to catch all errors that are uncorrectable, or not currently handled.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: driver version 4.0
  Dan Williams, 2009-09-10 (1 file, -1/+1)
  A new ring implementation and the addition of raid functionality constitute a bump in the driver major version number.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* Merge branch 'dmaengine' into async-tx-next
  Dan Williams, 2009-09-08 (1 file, -1/+2)
  Conflicts:
    crypto/async_tx/async_xor.c
    drivers/dma/ioat/dma_v2.h
    drivers/dma/ioat/pci.c
    drivers/md/raid5.c

* ioat: implement a private tx_list
  Dan Williams, 2009-09-08 (1 file, -1/+2)
  Drop ioatdma's use of tx_list from struct dma_async_tx_descriptor in preparation for removal of this field.
  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat3: xor self test
  Dan Williams, 2009-09-08 (1 file, -1/+3)
  This adds a hardware specific self test to be called from ioat_probe. In the ioat3 case we will have tests for all the different raid operations, while ioat1 and ioat2 will continue to just test memcpy.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: add 'ioat' sysfs attributes
  Dan Williams, 2009-09-08 (1 file, -0/+12)
  Export driver attributes for diagnostic purposes:
    'ring_size': total number of descriptors available to the engine
    'ring_active': number of descriptors in-flight
    'capabilities': supported operation types for this channel
    'version': Intel(R) QuickData specification revision
  This also allows some chattiness to be removed from the driver startup, as this information is now available via sysfs.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

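A sketch of what one such read-only attribute can look like on the driver side. The names are illustrative, and the plain device-attribute API is used here for brevity; the driver itself wires its attributes up through its own kobj_type (the one touched by the sysfs_ops constification entry above):

    #include <linux/kernel.h>
    #include <linux/device.h>
    #include <linux/sysfs.h>
    #include <linux/stat.h>

    static unsigned int example_ring_size = 256;  /* stand-in for channel state */

    static ssize_t ring_size_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
    {
        return sprintf(buf, "%u\n", example_ring_size);
    }
    static DEVICE_ATTR(ring_size, S_IRUGO, ring_size_show, NULL);

    static int example_add_attrs(struct device *dev)
    {
        return device_create_file(dev, &dev_attr_ring_size);
    }
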
* ioat3: split ioat3 support to its own file, add memset
  Dan Williams, 2009-09-08 (1 file, -0/+16)
  Up until this point the drivers for Intel(R) QuickData Technology engines, specification versions 2 and 3, were mostly identical save for a few quirks. Version 3.2 hardware adds many new capabilities (like raid offload support) requiring some infrastructure that is not relevant for v2. For better code organization of the new functionality, move v3 and v3.2 support to its own file, dma_v3.c, and export some routines from the base files (dma.c and dma_v2.c) that can be reused directly.
  The first new capability included in this code reorganization is support for v3.2 memset operations.
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat3: hardware version 3.2 register / descriptor definitions
  Dan Williams, 2009-09-08 (1 file, -1/+1)
  ioat3.2 adds raid5 and raid6 offload capabilities.
  Signed-off-by: Tom Picard <tom.s.picard@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat2,3: dynamically resize descriptor ring
  Dan Williams, 2009-09-08 (1 file, -0/+1)
  Increment the allocation order of the descriptor ring every time we run out of descriptors, up to the maximum allocation order specified by the module parameter 'ioat_max_alloc_order'. After each idle period, decrement the allocation order to a minimum order of 'ioat_ring_alloc_order' (i.e. the default ring size, tunable as a module parameter).
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

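A sketch of the sizing policy only (names such as 'max_order' stand in for the module parameters mentioned above; a real resize must also migrate the live descriptors and fix up head/tail under the ring lock):

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct example_ring {
        void **slots;          /* 1 << order descriptor slots */
        unsigned int order;
        unsigned int head, tail;
    };

    /* Grow the ring by one allocation order when it is full, capped at
     * max_order. */
    static int example_ring_grow(struct example_ring *ring,
                                 unsigned int max_order)
    {
        unsigned int new_order = ring->order + 1;
        void **new_slots;

        if (new_order > max_order)
            return -ENOMEM;

        new_slots = kcalloc(1 << new_order, sizeof(*new_slots), GFP_KERNEL);
        if (!new_slots)
            return -ENOMEM;

        /* ... copy the live entries from ring->slots into new_slots ... */
        kfree(ring->slots);
        ring->slots = new_slots;
        ring->order = new_order;
        return 0;
    }
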
* ioat: switch watchdog and reset handler from workqueue to timer
  Dan Williams, 2009-09-08 (1 file, -19/+93)
  In order to support dynamic resizing of the descriptor ring or polling for a descriptor in the presence of a hung channel, the reset handler needs to make progress while in a non-preemptible context. The current workqueue implementation precludes polling channel reset completion under spin_lock().
  This conversion also allows us to return to opportunistic cleanup in the ioat2 case, as the timer implementation guarantees at least one cleanup after every descriptor is submitted. This means the worst case completion latency becomes the timer frequency (for exceptional circumstances), but with the benefit of avoiding busy waiting when the lock is contended.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

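The conversion amounts to arming a kernel timer from the submit/cleanup paths instead of queueing work. A sketch with illustrative names, using the timer API of that era (the callback takes an unsigned long):

    #include <linux/timer.h>
    #include <linux/jiffies.h>
    #include <linux/spinlock.h>

    #define EXAMPLE_WATCHDOG_PERIOD msecs_to_jiffies(100)  /* illustrative */

    struct example_chan {
        struct timer_list timer;
        spinlock_t cleanup_lock;
    };

    /* Runs in softirq (timer) context, so it can make progress even while
     * another CPU polls for completion in a non-preemptible section. */
    static void example_timer_fn(unsigned long data)
    {
        struct example_chan *chan = (struct example_chan *)data;

        spin_lock_bh(&chan->cleanup_lock);
        /* ... reap completed descriptors, check for a hung channel ... */
        spin_unlock_bh(&chan->cleanup_lock);

        mod_timer(&chan->timer, jiffies + EXAMPLE_WATCHDOG_PERIOD);
    }

    static void example_chan_init(struct example_chan *chan)
    {
        spin_lock_init(&chan->cleanup_lock);
        setup_timer(&chan->timer, example_timer_fn, (unsigned long)chan);
        mod_timer(&chan->timer, jiffies + EXAMPLE_WATCHDOG_PERIOD);
    }
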
* ioat1: trim ioat_dma_desc_sw
  Dan Williams, 2009-09-08 (1 file, -2/+0)
  Save 4 bytes per software descriptor by transmitting tx_cnt in an unused portion of the hardware descriptor.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: __devinit annotate the initialization paths
  Dan Williams, 2009-09-08 (1 file, -5/+6)
  Mark all single-use initialization routines with __devinit.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: cleanup completion status reads
  Dan Williams, 2009-09-08 (1 file, -8/+2)
  The cleanup path makes an effort to only perform an atomic read of the 64-bit completion address. However, in the 32-bit case it does not matter if we read the upper-32 and lower-32 non-atomically, because the upper-32 will always be zero.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: add some dev_dbg() calls
  Dan Williams, 2009-09-08 (1 file, -0/+28)
  Provide some output for debugging the driver.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

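Typical of the added instrumentation (format and field names illustrative; dev_dbg() output only appears with DEBUG or dynamic debug enabled):

    #include <linux/device.h>

    /* Example: trace ring state at the top of the cleanup path. */
    static void example_trace(struct device *dev, unsigned int head,
                              unsigned int tail)
    {
        dev_dbg(dev, "%s: head: %#x tail: %#x\n", __func__, head, tail);
    }
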
* ioat1: kill unused unmap parameters
  Dan Williams, 2009-09-08 (1 file, -2/+0)
  The unified ioat1/ioat2 ioat_dma_unmap() implementation derives the source and dest addresses from the unmap descriptor. There is no longer a need to track this information in struct ioat_desc_sw.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat2,3: convert to a true ring buffer
  Dan Williams, 2009-09-08 (1 file, -5/+45)
  Replace the current linked list munged into a ring with a native ring buffer implementation. The benefit of this approach is reduced overhead, as many parameters can be derived from ring position with simple pointer comparisons and descriptor allocation/freeing becomes just a manipulation of head/tail pointers. It requires a contiguous allocation for the software descriptor information.
  Since this arrangement is significantly different from the ioat1 chain, move ioat2,3 support into its own file and header. Common routines are exported from drivers/dma/ioat/dma.[ch].
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

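The appeal of a native ring is that the bookkeeping collapses into index arithmetic on a power-of-two array. A sketch with illustrative names and free-running counters:

    /* Power-of-two descriptor ring: everything derives from head and tail. */
    struct example_ring {
        unsigned int head;    /* descriptors issued (free-running counter) */
        unsigned int tail;    /* descriptors retired (free-running counter) */
        unsigned int order;   /* ring size = 1 << order */
    };

    static inline unsigned int example_ring_size(struct example_ring *r)
    {
        return 1 << r->order;
    }

    static inline unsigned int example_ring_active(struct example_ring *r)
    {
        /* unsigned subtraction copes with counter wrap-around */
        return r->head - r->tail;
    }

    static inline unsigned int example_ring_space(struct example_ring *r)
    {
        return example_ring_size(r) - example_ring_active(r);
    }

    static inline unsigned int example_ring_idx(struct example_ring *r,
                                                unsigned int n)
    {
        /* map a free-running counter onto an array slot */
        return n & (example_ring_size(r) - 1);
    }
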
* ioat: prepare the code for ioat[12]_dma_chan split
  Dan Williams, 2009-09-08 (1 file, -17/+32)
  Prepare the code for the conversion of the ioat2 linked-list-ring into a native ring buffer. After this conversion ioat2 channels will share less of the ioat1 infrastructure, but there will still be places where sharing is possible.
  struct ioat_chan_common is created to house the channel attributes that will remain common between ioat1 and ioat2 channels.
  For every routine that accesses both common and hardware specific fields, the old unified 'ioat_chan' pointer is split into an 'ioat' and a 'chan' pointer, where 'chan' references the common fields and 'ioat' the hardware/version-specific ones.
  [ Impact: pure structure member movement/variable renames, no logic changes ]
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

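The shape of the split, sketched with illustrative names (the struct ioat_chan_common named above plays the role of the embedded common part):

    #include <linux/kernel.h>      /* container_of() */
    #include <linux/dmaengine.h>

    /* Fields shared by every hardware version. */
    struct example_chan_common {
        struct dma_chan common;
        void __iomem *reg_base;
    };

    /* A version-specific channel embeds the common part ... */
    struct example2_chan {
        struct example_chan_common base;
        unsigned int head, tail;   /* state only the newer hardware has */
    };

    /* ... and code holding a 'chan' pointer recovers the 'ioat' pointer
     * with container_of() when it needs the version-specific fields. */
    static inline struct example2_chan *
    to_example2_chan(struct example_chan_common *chan)
    {
        return container_of(chan, struct example2_chan, base);
    }
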
* ioat: fix type mismatch for ->dmacount
  Dan Williams, 2009-09-08 (1 file, -2/+2)
  ->dmacount tracks the sequence number of active descriptors. It is written to the DMACOUNT register to update the channel's view of pending descriptors in the chain. The register is 16 bits, so ->dmacount should be unsigned and 16-bit as well. Also modify ->desccount to maintain alignment.
  This was never a problem in practice because we never compared dmacount values, but this is a bug waiting to happen.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: split ioat_dma_probe into core/version-specific routines
  Dan Williams, 2009-09-08 (1 file, -14/+9)
  Towards the removal of ioatdma_device.version, split the initialization path into distinct versions. This conversion:
  1/ moves version specific probe code to version specific routines
  2/ removes the need for ioat_device
  3/ turns off the ioat1 msi quirk if the device is reinitialized for intx
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: kill function prototype ifdef guards
  Dan Williams, 2009-09-08 (1 file, -9/+0)
  The only .c files that utilize these protected prototypes depend on CONFIG_INTEL_IOATDMA=y, so there is no value gained in providing empty prototypes.
  [ Impact: pure cleanup ]
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: cleanup some long deref chains and 80 column collisions
  Dan Williams, 2009-09-08 (1 file, -3/+4)
  * reduce device->common. to dma-> in ioat_dma_{probe,remove,selftest}
  * ioat_lookup_chan_by_index to ioat_chan_by_index
  * multi-line function definitions
  * ioat_desc_sw.async_tx to ioat_desc_sw.txd
  * desc->txd. to tx-> in cleanup routine
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: convert ioat_probe to pcim/devm
  Dan Williams, 2009-09-08 (1 file, -11/+0)
  The driver currently duplicates much of what these routines offer, so just use the common code. For example, ->irq_mode tracks what interrupt mode was initialized, which duplicates the ->msix_enabled and ->msi_enabled handling in pcim_release.
  This also adds a check to the return value of dma_async_device_register, which can fail.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

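Resources acquired through the managed pcim_*/devm_* helpers are released automatically on probe failure or driver detach, which is what lets the hand-rolled cleanup go away. A sketch of a probe written that way (device name, BAR number, and structure are illustrative):

    #include <linux/pci.h>
    #include <linux/device.h>
    #include <linux/gfp.h>
    #include <linux/errno.h>

    struct example_device {
        void __iomem *reg_base;
    };

    static int example_pci_probe(struct pci_dev *pdev,
                                 const struct pci_device_id *id)
    {
        struct example_device *ed;
        int err;

        err = pcim_enable_device(pdev);
        if (err)
            return err;

        /* Map BAR 0; no explicit unwind path is needed on failure. */
        err = pcim_iomap_regions(pdev, 1 << 0, "example");
        if (err)
            return err;

        /* devm_ allocations are likewise freed automatically. */
        ed = devm_kzalloc(&pdev->dev, sizeof(*ed), GFP_KERNEL);
        if (!ed)
            return -ENOMEM;

        ed->reg_base = pcim_iomap_table(pdev)[0];
        pci_set_drvdata(pdev, ed);
        return 0;
    }
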
* ioat: move definitions to dma.h
  Dan Williams, 2009-09-08 (1 file, -0/+16)
  Some of these defines may be useful outside of dma.c, and the header is private so there are no namespace pollution concerns.
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* ioat: move to drivers/dma/ioat/
  Dan Williams, 2009-07-28 (1 file, -0/+165)
  When first created the ioat driver was the only inhabitant of drivers/dma/. Now, it is the only multi-file (more than a .c and a .h) driver in the directory. Moving it to an ioat/ subdirectory allows the naming convention to be cleaned up, and allows for future splitting of the source files by hardware version (v1, v2, and v3).
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>