path: root/drivers/vhost
Commit log (newest first). Each entry lists the commit subject, author, date, and diffstat (files changed, -deleted/+added lines).
* vhost: actually track log eventfd file (Marc-André Lureau, 2015-08-12; 1 file, -0/+1)
  commit 7932c0bd7740f4cd2aa168d3ce0199e7af7d72d5 upstream. While reviewing vhost log code, I found out that log_file is never set. Note: I haven't tested the change (QEMU doesn't use LOG_FD yet).
  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
* vhost: validate vhost_get_vq_desc return value (Michael S. Tsirkin, 2014-04-30; 1 file, -1/+5)
  [ Upstream commit a39ee449f96a2cd44ce056d8a0a112211a9b1a1f ] vhost fails to validate a negative error code from vhost_get_vq_desc, causing a crash: we end up using -EFAULT, which is 0xfffffff2, as the vector size, which exceeds the allocated size. The code in question was introduced in commit 8dd014adfea6f173c1ef6378f7e5e7924866c923 ("vhost-net: mergeable buffers support"). CVE-2014-0055
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
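  The failure mode is the classic signed-to-unsigned trap: a negative errno stored into, or compared as, an unsigned count turns into a huge positive value. A standalone illustration of the bug class follows; it is not the driver code, and fake_get_vq_desc() is a made-up stand-in for the descriptor-fetch helper.

    #include <errno.h>
    #include <stdio.h>

    /* Stand-in for a helper that returns a descriptor count or -errno. */
    static int fake_get_vq_desc(void)
    {
        return -EFAULT;                 /* error path */
    }

    int main(void)
    {
        int ret = fake_get_vq_desc();
        unsigned int headcount = ret;   /* bug: -EFAULT silently becomes 0xfffffff2 */

        printf("as unsigned: %#x\n", headcount);

        /* The fix pattern: check for an error before using the value as a size. */
        if (ret < 0)
            printf("error %d, discard and continue\n", ret);
        return 0;
    }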
* vhost: fix total length when packets are too short (Michael S. Tsirkin, 2014-04-30; 1 file, -0/+14)
  [ Upstream commit d8316f3991d207fe32881a9ac20241be8fa2bad0 ] When mergeable buffers are disabled and the incoming packet is too large for the rx buffer, get_rx_bufs returns success. This was intentional in order to make recvmsg truncate the packet; handle_rx would then detect err != sock_len and drop it. Unfortunately we pass the original sock_len to recvmsg, which means we use parts of iov that were not fully validated. Fix this up by detecting this overrun and dropping the packet immediately. CVE-2014-0077
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
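  handle_rx() relies on MSG_TRUNC semantics to learn the real packet length and notice truncation; the fix makes sure an overrun is detected and the packet dropped before any unvalidated iov space is consumed. Below is a small standalone demo of the truncation signal itself (ordinary sockets, not the vhost code):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* With MSG_TRUNC, recv() on a datagram socket reports the real datagram
     * length even when the buffer is smaller, so "returned length > buffer
     * space" is a reliable truncation signal that callers must check.      */
    int main(void)
    {
        int fds[2];
        char big[256], small_buf[16];
        ssize_t got;

        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds))
            return 1;
        memset(big, 'x', sizeof(big));
        send(fds[0], big, sizeof(big), 0);

        got = recv(fds[1], small_buf, sizeof(small_buf), MSG_TRUNC);
        if (got > (ssize_t)sizeof(small_buf))
            printf("truncated: %zd bytes sent, only %zu stored: drop it\n",
                   got, sizeof(small_buf));
        return 0;
    }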
* vhost/net: fix heads usage of ubuf_info (Michael S. Tsirkin, 2013-03-27; 1 file, -1/+2)
  commit 46aa92d1ba162b4b3d6b7102440e459d4e4ee255 upstream. The ubuf_info allocator uses the guest-controlled head as an index, so a malicious guest could put the same head entry in the ring twice, and we would get two callbacks on the same value. To fix, use upend_idx, which is guaranteed to be unique.
  Reported-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
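  The point of the fix is that a guest-chosen value must never select the slot that tracks host-side completion state: a counter only the host advances cannot collide. A toy model of the difference, with made-up structures:

    #include <stdio.h>

    #define RING 8

    struct slot { int in_use; };

    int main(void)
    {
        struct slot slots[RING] = { { 0 } };
        int guest_heads[3] = { 5, 5, 2 };     /* malicious guest repeats head 5 */
        unsigned int upend_idx = 0;           /* host-owned, strictly advancing */

        for (int i = 0; i < 3; i++) {
            /* Indexing by guest_heads[i] would reuse slot 5 and later
             * complete it twice; index by the host counter instead.       */
            slots[upend_idx % RING].in_use = 1;
            printf("head %d tracked in host slot %u\n",
                   guest_heads[i], upend_idx % RING);
            upend_idx++;
        }
        return 0;
    }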
* vhost: fix length for cross region descriptor (Michael S. Tsirkin, 2013-03-06; 1 file, -1/+1)
  commit bd97120fc3d1a11f3124c7c9ba1d91f51829eb85 upstream. If a single descriptor crosses a region, the second chunk length should be decremented by the size translated so far; instead it includes the full descriptor length.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
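  The address translation walks one descriptor across guest memory regions; each chunk must cover only what remains of the descriptor, i.e. the descriptor length minus the bytes already translated, capped by the space left in the current region. A standalone sketch of that loop (simplified types and layout, not the driver's translate_desc()):

    #include <stdio.h>

    struct region { unsigned long start, size; };

    int main(void)
    {
        struct region regions[2] = { { 0x1000, 0x100 }, { 0x1100, 0x1000 } };
        unsigned long desc_addr = 0x10c0, desc_len = 0x200, translated = 0;

        for (int i = 0; i < 2 && translated < desc_len; i++) {
            struct region *r = &regions[i];
            unsigned long addr = desc_addr + translated;

            if (addr < r->start || addr >= r->start + r->size)
                continue;
            unsigned long room  = r->start + r->size - addr;
            unsigned long want  = desc_len - translated;  /* the fix: not desc_len */
            unsigned long chunk = want < room ? want : room;
            printf("chunk from region %d: %#lx bytes\n", i, chunk);
            translated += chunk;
        }
        return 0;
    }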
* vhost: fix mergeable bufs on BE hosts (Michael S. Tsirkin, 2012-10-30; 1 file, -1/+2)
  commit 910a578f7e9400a78a3b13aba0b4d2df16a2cb05 upstream. We copy the head count to a 16-bit field; this works by chance on LE, but on BE the guest gets 0. Fix it up.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Tested-by: Alexander Graf <agraf@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
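  The bug class is worth spelling out: copying a wider integer into a narrower field byte-by-byte picks up the low-order bytes on a little-endian host but the (zero) high-order bytes on a big-endian one. A standalone illustration of the pitfall in general, not the driver code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t headcount = 3;        /* small count: high bytes are zero */
        uint16_t wire = 0;

        /* Copying only sizeof(wire) bytes of a uint32_t grabs the low 16 bits
         * on LE but the high 16 bits (zero) on BE.                           */
        memcpy(&wire, &headcount, sizeof(wire));
        printf("byte-wise copy sees %u\n", wire);

        wire = (uint16_t)headcount;    /* explicit conversion works either way */
        printf("explicit conversion sees %u\n", wire);
        return 0;
    }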
* vhost: don't forget to schedule() (Nadav Har'El, 2012-07-25; 1 file, -0/+2)
  commit d550dda192c1bd039afb774b99485e88b70d7cb8 upstream. This is a tiny, but important, patch to vhost. Vhost's worker thread only called schedule() when it had no work to do and it wanted to go to sleep. But if there's always work to do, e.g. the guest is running a network-intensive program like netperf with small message sizes, schedule() was *never* called. This had several negative implications (on non-preemptive kernels):
  1. Passing time was not properly accounted to the "vhost" process (ps and top would wrongly show it using zero CPU time).
  2. Sometimes error messages about RCU timeouts would be printed, if the core running the vhost thread didn't schedule() for a very long time.
  3. Worst of all, a vhost thread would "hog" the core. If several vhost threads need to share the same core, typically one would get most of the CPU time (and its associated guest most of the performance), while the others hardly get any work done.
  The trivial solution is to add "if (need_resched()) schedule();" after doing every piece of work. This will not do the heavy schedule() all the time, just when the timer interrupt decided a reschedule is warranted (so need_resched() returns true). Thanks to Abel Gordon for this patch.
  Signed-off-by: Nadav Har'El <nyh@il.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
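  In outline, the fixed worker loop has the shape sketched below. This is a paraphrase, not the literal vhost_worker() source (grab_next_work() is a made-up placeholder, and later kernels use the equivalent cond_resched()):

    /* Paraphrased worker-loop shape: the point is the resched check after
     * each work item, so a permanently busy queue cannot hog the core on a
     * non-preemptive kernel.                                               */
    static int worker_thread(void *data)
    {
        struct vhost_dev *dev = data;
        struct vhost_work *work;

        for (;;) {
            work = grab_next_work(dev);     /* made-up helper */
            if (!work) {
                schedule();                 /* nothing to do: sleep */
                continue;
            }
            work->fn(work);                 /* run one piece of work */
            if (need_resched())
                schedule();                 /* yield if the scheduler asked */
        }
        return 0;
    }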
* atomic: use <linux/atomic.h> (Arun Sharma, 2011-07-26; 1 file, -1/+1)
  This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.
  Signed-off-by: Arun Sharma <asharma@fb.com> Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: David Miller <davem@davemloft.net> Cc: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vhost: handle wrap around in # of bufs math (Shirley Ma, 2011-07-21; 1 file, -3/+9)
  The math for calculating the number of outstanding buffers gives incorrect results when vq->upend_idx wraps around zero. Fix that.
  Signed-off-by: Shirley Ma <xma@us.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
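  Counting in-flight entries between two free-running indexes of a fixed-size ring has to allow for the producer index wrapping past zero; the usual form adds the ring size back in before taking the modulus. A standalone illustration (ring size and names are made up):

    #include <stdio.h>

    #define RING 8

    /* Outstanding entries between done_idx (consumer) and upend_idx
     * (producer) in a ring of RING slots.                            */
    static unsigned int outstanding(unsigned int upend_idx, unsigned int done_idx)
    {
        return (upend_idx + RING - done_idx) % RING;
    }

    int main(void)
    {
        printf("%u\n", outstanding(5, 2));  /* 3: no wrap                      */
        printf("%u\n", outstanding(1, 6));  /* 3: producer wrapped past zero;  */
                                            /* naive "upend - done" underflows */
        return 0;
    }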
* vhost-net: update used ring on backend change (Michael S. Tsirkin, 2011-07-21; 1 file, -1/+5)
  On backend change, we flushed out outstanding skbs but forgot to update the used ring, so that done entries were left in the ubuf_info ring. As a result we lose heads or complete incorrect ones, crashing the guest or leaking memory. Fix by updating the used ring.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: optimize interrupt enable/disable (Michael S. Tsirkin, 2011-07-19; 1 file, -2/+2)
  As we now only update the used ring after enabling the backend, we can write flags with __put_user: as that's done on the data path, it matters.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: fix zcopy reference counting (Michael S. Tsirkin, 2011-07-19; 1 file, -1/+0)
  Fix a get/put refcount imbalance with zero copy, which caused qemu to hang forever on guest driver unload.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: set log when updating used flags or avail event (Jason Wang, 2011-07-19; 1 file, -30/+54)
  We need to log writes when updating used flags and avail event fields. Otherwise the guest may see a stale value after migration and miss notifying the host.
  Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: init used ring after backend was set (Jason Wang, 2011-07-19; 4 files, -8/+16)
  Move the used ring initialization to after the backend was set. This makes it possible to disable the backend and tweak the used ring, then restart. This will also make it possible to log the used ring write correctly.
  Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: vhost TX zero-copy support (Michael S. Tsirkin, 2011-07-18; 3 files, -16/+220)
  From: Shirley Ma <mashirle@us.ibm.com>
  This adds experimental zero-copy support in vhost-net, disabled by default. To enable, set the experimental_zcopytx module option to 1.
  This patch maintains the outstanding userspace buffers in the sequence they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA on those buffers has finished. This is monitored through the last-reference kfree_skb callback. Two buffer indices are used for this purpose.
  The vhost-net device passes the userspace buffer info to the lower device skb through message control. DMA-done status checks and guest notification are handled by handle_tx: in the worst case all buffers in the vq are in pending/done status, so we need to notify the guest to release DMA-done buffers first before we get any new buffers from the vq.
  One known problem is that if the guest stops submitting buffers, buffers might never get used until some further action, e.g. device reset. This does not seem to affect Linux guests.
  Signed-off-by: Shirley <xma@us.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
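  The "two buffer indices" are a producer/completion pair: upend_idx marks where the next submitted zero-copy buffer is recorded, done_idx trails it as DMA completions arrive, and everything in between is still owned by the device. A standalone toy model of that bookkeeping (the index names follow the description above; everything else is made up):

    #include <stdio.h>

    #define RING 8

    static unsigned int upend_idx;  /* next slot to hand to the device           */
    static unsigned int done_idx;   /* oldest slot still awaiting DMA completion */

    static void submit(int buf)
    {
        printf("submit buf %d in slot %u\n", buf, upend_idx);
        upend_idx = (upend_idx + 1) % RING;
    }

    static void dma_done(void)      /* in vhost this is driven by the skb callback */
    {
        printf("slot %u done, can be returned to the guest\n", done_idx);
        done_idx = (done_idx + 1) % RING;
    }

    int main(void)
    {
        submit(10); submit(11); submit(12);
        dma_done();                 /* completions arrive in submission order */
        printf("still pending: %u\n", (upend_idx + RING - done_idx) % RING);
        return 0;
    }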
* vhost: support event index (Michael S. Tsirkin, 2011-05-30; 4 files, -50/+127)
  Support the new event index feature. When acked, utilize it to reduce the number of interrupts sent to the guest.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
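  With event index, each side publishes the ring index at which it next wants to be notified, and the notifier signals only when its progress crosses that index (with 16-bit wraparound). The comparison is the one exposed as vring_need_event() in the virtio ring headers; the demo below just exercises it:

    #include <stdint.h>
    #include <stdio.h>

    /* Signal only if event_idx lies in the window (old_idx, new_idx] of
     * entries processed since the last notification, modulo 2^16.        */
    static int need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
    {
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
    }

    int main(void)
    {
        /* Peer asked to be woken at index 10; we advanced from 8 to 12. */
        printf("%d\n", need_event(10, 12, 8));   /* 1: crossed it, signal     */
        /* We advanced from 12 to 14; peer is waiting for index 20.        */
        printf("%d\n", need_event(20, 14, 12));  /* 0: suppress the interrupt */
        return 0;
    }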
* Correct occurrences of (Rob Landley, 2011-05-06; 1 file, -1/+1)
  - Documentation/kvm/ to Documentation/virtual/kvm
  - Documentation/uml/ to Documentation/virtual/uml
  - Documentation/lguest/ to Documentation/virtual/lguest
  throughout the kernel source tree.
  Signed-off-by: Rob Landley <rob@landley.net> Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
* vhost-net: remove unlocked use of receive_queue (Michael S. Tsirkin, 2011-03-13; 1 file, -1/+1)
  Use of skb_queue_empty(&sock->sk->sk_receive_queue) without taking the sk_receive_queue.lock is unsafe or useless. Take it out.
  Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: lock receive queue, not the socket (Jason Wang, 2011-03-13; 1 file, -3/+4)
  vhost takes the sock lock to try to prevent the skb from being pulled from the receive queue after skb_peek. However, this is not the right lock to use for that; sk_receive_queue.lock is. Fix that up.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
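  The resulting pattern, paraphrased (not the exact vhost-net code): hold sk_receive_queue.lock across the peek and the length read, and nothing else, since that spinlock is what actually protects the queue's list linkage.

    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Paraphrased sketch: the skb must not be unlinked between skb_peek()
     * and reading its length, and the lock protecting the list is the
     * queue's own spinlock, not the socket lock.                         */
    static int peek_head_len(struct sock *sk)
    {
        struct sk_buff *head;
        unsigned long flags;
        int len = 0;

        spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
        head = skb_peek(&sk->sk_receive_queue);
        if (head)
            len = head->len;
        spin_unlock_irqrestore(&sk->sk_receive_queue.lock, flags);
        return len;
    }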
* vhost-net: Unify the code of mergeable and big buffer handling (Jason Wang, 2011-03-13; 1 file, -121/+7)
  Code duplication was found between the handling of mergeable and big buffers, so this patch tries to unify them. This could be easily done by adding a quota to get_rx_bufs() which is used to limit the number of buffers it returns (for mergeable buffers the quota is simply UIO_MAXIOV, for big buffers the quota is just 1), and then the previous handle_rx_mergeable() could be reused also for big buffers; see the sketch after this entry.
  Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
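  In other words, the unification is parameterization: one receive path runs in both modes, and a quota decides how many descriptors a single packet may span. A schematic sketch of the idea (the signatures here are illustrative, not the real ones):

    /* Illustrative only: the real get_rx_bufs() takes more arguments and
     * returns the heads it filled in.                                     */
    static int get_rx_bufs(struct vhost_virtqueue *vq, int datalen, unsigned int quota);

    static void handle_rx(struct vhost_net *net, bool mergeable)
    {
        /* Mergeable mode may chain up to UIO_MAXIOV descriptors per packet;
         * big-buffer mode uses exactly one descriptor chain.               */
        unsigned int quota = mergeable ? UIO_MAXIOV : 1;

        /* ... get_rx_bufs(vq, sock_len, quota) then feeds one common loop ... */
    }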
* vhost-net: check the support of mergeable buffer outside the receive loop (Jason Wang, 2011-03-13; 1 file, -2/+3)
  No need to check the support of mergeable buffers inside the receive loop, as the whole handle_rx_xx() is in the read critical region. So this patch moves it ahead of the receiving loop.
  Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: copy_from_user -> __copy_from_user (Michael S. Tsirkin, 2011-03-08; 1 file, -1/+1)
  copy_from_user is pretty high on the perf top profile; replacing it with __copy_from_user helps. It's also safe because we do access_ok checks during setup.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
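  The pattern being relied on, paraphrased (field and function names here are made up, and access_ok() lost its VERIFY_* argument in later kernels): validate the user range once at setup time, then use the non-checking copy on the hot path.

    /* Paraphrased sketch, not the vhost source. */
    static int set_ring(struct vhost_virtqueue *vq, void __user *desc, size_t size)
    {
        if (!access_ok(VERIFY_READ, desc, size))    /* checked once, at ioctl time */
            return -EFAULT;
        vq->desc_user = desc;                       /* made-up field name */
        return 0;
    }

    static int fetch_desc(struct vhost_virtqueue *vq, void *dst, size_t size)
    {
        /* Range already validated above, so skip the access_ok() re-check. */
        return __copy_from_user(dst, vq->desc_user, size) ? -EFAULT : 0;
    }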
* vhost: Cleanup vhost.c and net.c (Krishna Kumar, 2011-03-08; 2 files, -23/+49)
  Minor cleanup of vhost.c and net.c to match coding style.
  Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: rcu annotation fixup (Michael S. Tsirkin, 2011-02-01; 2 files, -7/+8)
  When built with RCU checks enabled, vhost triggers bogus warnings, as vhost features are sometimes read without dev->mutex, and the private pointer is read with our kind of RCU, where the work serves as a read-side critical section. Fixing it properly is not trivial. Disable the warnings by stubbing out the checks for now.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: fix signed/unsigned comparison (Michael S. Tsirkin, 2011-01-10; 1 file, -7/+11)
  To detect that a sequence number is done, we do math on unsigned integers, so the result is unsigned too, which is not what was intended for the <= comparison. The result is the user being stuck forever in the flush call. Convert to int to fix this. Further, get rid of ({}) to make the code clearer.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
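  The trap: subtracting two unsigned sequence numbers never goes negative, so a "has this sequence number been reached?" test written with <= can never fire once the values straddle the wrong way. A standalone illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int done_seq = 3, seq = 5;   /* item 5 is not finished yet */

        /* Unsigned math: 3 - 5 wraps to a huge positive value, so a check
         * like "done_seq - seq >= 0" is always true and "<= 0" essentially
         * never true; a waiter keyed on it can block forever.              */
        printf("unsigned: done_seq - seq = %u\n", done_seq - seq);

        /* A signed difference restores the intended ordering test. */
        if ((int)(done_seq - seq) >= 0)
            printf("signed: seq %u already done\n", seq);
        else
            printf("signed: seq %u still pending\n", seq);
        return 0;
    }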
* Merge branch 'vhost-net-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost (David S. Miller, 2010-12-14; 5 files, -30/+352)
| * vhost test module (Michael S. Tsirkin, 2010-12-09; 2 files, -0/+327)
    This adds a test module for vhost infrastructure. Intentionally not tied to kbuild, to prevent people from installing and loading it accidentally.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: better variable name in logging (Michael S. Tsirkin, 2010-12-09; 1 file, -4/+4)
    We really store a page offset in write_address, so rename it write_page to avoid confusion.
    Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: correctly set bits of dirty pages (Michael S. Tsirkin, 2010-12-09; 1 file, -1/+2)
    Fix two bugs in dirty page logging: when counting pages we should increase the address by 1 instead of VHOST_PAGE_SIZE, and make log_write() correctly process requests that cross pages when write_address does not start at a page boundary.
    Reported-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
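    Dirty logging reduces a byte range [addr, addr + len) to per-page dirty bits, so the loop must step through page numbers and still cover the tail when addr is not page aligned. A standalone sketch of that reduction (page size and the flat bitmap are stand-ins for the real log):

      #include <stdio.h>

      #define PAGE_SIZE 4096UL
      #define NPAGES    16

      static unsigned char dirty[NPAGES];   /* one flag per page, stand-in bitmap */

      static void log_write(unsigned long addr, unsigned long len)
      {
          unsigned long first = addr / PAGE_SIZE;
          unsigned long last  = (addr + len - 1) / PAGE_SIZE;

          /* Walk page numbers, not byte addresses: an unaligned write that
           * crosses a boundary must dirty every page it touches.            */
          for (unsigned long pg = first; pg <= last && pg < NPAGES; pg++)
              dirty[pg] = 1;
      }

      int main(void)
      {
          log_write(PAGE_SIZE - 8, 16);     /* straddles pages 0 and 1 */
          for (int i = 0; i < 4; i++)
              printf("page %d dirty: %d\n", i, dirty[i]);
          return 0;
      }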
| * vhost: fix typos in comment (Jason Wang, 2010-12-09; 2 files, -2/+2)
    Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: remove unused include (Michael S. Tsirkin, 2010-12-09; 1 file, -2/+0)
    vhost.c does not need to know about sockets; don't include sock.h.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: get/put_user -> __get/__put_user (Michael S. Tsirkin, 2010-11-04; 1 file, -8/+8)
    We do access_ok checks on all ring values in the ioctl, so we don't need to redo them on each access.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: copy_to_user -> __copy_to_user (Michael S. Tsirkin, 2010-11-04; 1 file, -1/+1)
    We do access_ok checks at setup time, so we don't need to redo them on each access.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost-net: batch use/unuse mm (Michael S. Tsirkin, 2010-11-04; 2 files, -8/+6)
    Move use/unuse mm to vhost.c, which makes it possible to batch these operations.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: put mm after thread stop (Michael S. Tsirkin, 2010-11-04; 1 file, -4/+3)
    Makes it possible to batch use/unuse mm.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * drivers/vhost/vhost.c: delete double assignment (Julia Lawall, 2010-10-26; 1 file, -1/+0)
    Delete successive assignments to the same location. A simplified version of the semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):
    // <smpl>
    @@
    expression i;
    @@
    *i = ...;
    i = ...;
    // </smpl>
    Signed-off-by: Julia Lawall <julia@diku.dk> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* | vhost/net: fix rcu check usage (Michael S. Tsirkin, 2010-11-25; 1 file, -2/+3)
    An incorrect rcu check was used, as rcu isn't done under the mutex here. Force the check to 1 for now, to stop it from complaining.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6 (Linus Torvalds, 2010-10-23; 3 files, -13/+58)
  * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1699 commits)
    bnx2/bnx2x: Unsupported Ethtool operations should return -EINVAL.
    vlan: Calling vlan_hwaccel_do_receive() is always valid.
    tproxy: use the interface primary IP address as a default value for --on-ip
    tproxy: added IPv6 support to the socket match
    cxgb3: function namespace cleanup
    tproxy: added IPv6 support to the TPROXY target
    tproxy: added IPv6 socket lookup function to nf_tproxy_core
    be2net: Changes to use only priority codes allowed by f/w
    tproxy: allow non-local binds of IPv6 sockets if IP_TRANSPARENT is enabled
    tproxy: added tproxy sockopt interface in the IPV6 layer
    tproxy: added udp6_lib_lookup function
    tproxy: added const specifiers to udp lookup functions
    tproxy: split off ipv6 defragmentation to a separate module
    l2tp: small cleanup
    nf_nat: restrict ICMP translation for embedded header
    can: mcp251x: fix generation of error frames
    can: mcp251x: fix endless loop in interrupt handler if CANINTF_MERRF is set
    can-raw: add msg_flags to distinguish local traffic
    9p: client code cleanup
    rds: make local functions/variables static
    ...
  Fix up conflicts in net/core/dev.c, drivers/net/pcmcia/smc91c92_cs.c and drivers/net/wireless/ath/ath9k/debug.c as per David
| * Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2010-10-21; 1 file, -1/+1)
    Conflicts: net/core/dev.c
| | * vhost: fix return code for log_access_ok() (Dan Carpenter, 2010-10-12; 1 file, -1/+1)
      access_ok() returns 1 if it's OK; otherwise it should return 0.
      Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * | vhost: max s/g to match qemu (Jason Wang, 2010-10-05; 3 files, -12/+57)
    Qemu supports up to UIO_MAXIOV s/g entries, so we have to match that because guest drivers may rely on it. Allocate the indirect and log arrays dynamically to avoid using too much contiguous memory, and make the length of the hdr array match the header length, since each iovec entry has at least one byte. Tested with copying large files w/ and w/o migration in both Linux and Windows guests.
    Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
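    Since UIO_MAXIOV is 1024, per-queue arrays of that many iovec/log entries are too large to embed in the vq structure or keep on the stack, so they are allocated once per queue at init time. A hedged sketch of that allocation (the error handling and exact field names are illustrative):

      /* Illustrative sketch, not the actual vhost allocation code. */
      static long alloc_vq_iovecs(struct vhost_virtqueue *vq)
      {
          vq->indirect = kmalloc(sizeof(*vq->indirect) * UIO_MAXIOV, GFP_KERNEL);
          vq->log = kmalloc(sizeof(*vq->log) * UIO_MAXIOV, GFP_KERNEL);
          if (!vq->indirect || !vq->log) {
              kfree(vq->indirect);          /* kfree(NULL) is a no-op */
              kfree(vq->log);
              return -ENOMEM;
          }
          return 0;
      }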
* | Merge branch 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl (Linus Torvalds, 2010-10-22; 1 file, -0/+1)
    * 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl:
      vfs: make no_llseek the default
      vfs: don't use BKL in default_llseek
      llseek: automatically add .llseek fop
      libfs: use generic_file_llseek for simple_attr
      mac80211: disallow seeks in minstrel debug code
      lirc: make chardev nonseekable
      viotape: use noop_llseek
      raw: use explicit llseek file operations
      ibmasmfs: use generic_file_llseek
      spufs: use llseek in all file operations
      arm/omap: use generic_file_llseek in iommu_debug
      lkdtm: use generic_file_llseek in debugfs
      net/wireless: use generic_file_llseek in debugfs
      drm: use noop_llseek
| * | llseek: automatically add .llseek fop (Arnd Bergmann, 2010-10-15; 1 file, -0/+1)
    All file_operations should get a .llseek operation so we can make nonseekable_open the default for future file operations without a .llseek pointer. The three cases that we can automatically detect are no_llseek, seq_lseek and default_llseek.
    For cases where we can automatically prove that the file offset is always ignored, we use noop_llseek, which maintains the current behavior of not returning an error from a seek.
    New drivers should normally not use noop_llseek but instead use no_llseek and call nonseekable_open at open time. Existing drivers can be converted to do the same when the maintainer knows for certain that no user code relies on calling seek on the device file.
    The generated code is often incorrectly indented and right now contains comments that clarify for each added line why a specific variant was chosen. In the version that gets submitted upstream, the comments will be gone and I will manually fix the indentation, because there does not seem to be a way to do that using coccinelle. Some amount of new code is currently sitting in linux-next that should get the same modifications, which I will do at the end of the merge window.
    Many thanks to Julia Lawall for helping me learn to write a semantic patch that does all this.
    ===== begin semantic patch =====
    // This adds an llseek= method to all file operations,
    // as a preparation for making no_llseek the default.
    //
    // The rules are
    // - use no_llseek explicitly if we do nonseekable_open
    // - use seq_lseek for sequential files
    // - use default_llseek if we know we access f_pos
    // - use noop_llseek if we know we don't access f_pos,
    //   but we still want to allow users to call lseek
    @ open1 exists @ identifier nested_open; @@ nested_open(...) { <+... nonseekable_open(...) ...+> }
    @ open exists @ identifier open_f; identifier i, f; identifier open1.nested_open; @@ int open_f(struct inode *i, struct file *f) { <+... ( nonseekable_open(...) | nested_open(...) ) ...+> }
    @ read disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> }
    @ read_no_fpos disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { ... when != off }
    @ write @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> }
    @ write_no_fpos @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { ... when != off }
    @ fops0 @ identifier fops; @@ struct file_operations fops = { ... };
    @ has_llseek depends on fops0 @ identifier fops0.fops; identifier llseek_f; @@ struct file_operations fops = { ... .llseek = llseek_f, ... };
    @ has_read depends on fops0 @ identifier fops0.fops; identifier read_f; @@ struct file_operations fops = { ... .read = read_f, ... };
    @ has_write depends on fops0 @ identifier fops0.fops; identifier write_f; @@ struct file_operations fops = { ... .write = write_f, ... };
    @ has_open depends on fops0 @ identifier fops0.fops; identifier open_f; @@ struct file_operations fops = { ... .open = open_f, ... };
    // use no_llseek if we call nonseekable_open
    @ nonseekable1 depends on !has_llseek && has_open @ identifier fops0.fops; identifier nso ~= "nonseekable_open"; @@ struct file_operations fops = { ... .open = nso, ... +.llseek = no_llseek, /* nonseekable */ };
    @ nonseekable2 depends on !has_llseek @ identifier fops0.fops; identifier open.open_f; @@ struct file_operations fops = { ... .open = open_f, ... +.llseek = no_llseek, /* open uses nonseekable */ };
    // use seq_lseek for sequential files
    @ seq depends on !has_llseek @ identifier fops0.fops; identifier sr ~= "seq_read"; @@ struct file_operations fops = { ... .read = sr, ... +.llseek = seq_lseek, /* we have seq_read */ };
    // use default_llseek if there is a readdir
    @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier readdir_e; @@ // any other fop is used that changes pos struct file_operations fops = { ... .readdir = readdir_e, ... +.llseek = default_llseek, /* readdir is present */ };
    // use default_llseek if at least one of read/write touches f_pos
    @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read.read_f; @@ // read fops use offset struct file_operations fops = { ... .read = read_f, ... +.llseek = default_llseek, /* read accesses f_pos */ };
    @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, ... +.llseek = default_llseek, /* write accesses f_pos */ };
    // Use noop_llseek if neither read nor write accesses f_pos
    @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; identifier write_no_fpos.write_f; @@ struct file_operations fops = { ... .write = write_f, .read = read_f, ... +.llseek = noop_llseek, /* read and write both use no f_pos */ };
    @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write_no_fpos.write_f; @@ struct file_operations fops = { ... .write = write_f, ... +.llseek = noop_llseek, /* write uses no f_pos */ };
    @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; @@ struct file_operations fops = { ... .read = read_f, ... +.llseek = noop_llseek, /* read uses no f_pos */ };
    @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; @@ struct file_operations fops = { ... +.llseek = noop_llseek, /* no read or write fn */ };
    ===== End semantic patch =====
    Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Julia Lawall <julia@diku.dk> Cc: Christoph Hellwig <hch@infradead.org>
* | | Merge commit 'v2.6.36-rc7' into core/rcu (Ingo Molnar, 2010-10-07; 2 files, -26/+63)
      Merge reason: Update from -rc3 to -rc7.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | vhost: fix log ctx signalling (Michael S. Tsirkin, 2010-09-22; 1 file, -3/+4)
      The log eventfd signalling got put in dead code. We didn't notice because qemu currently does polling instead of eventfd select.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * | vhost-net: fix range checking in mrg bufs case (Michael S. Tsirkin, 2010-09-14; 1 file, -1/+1)
      In the mergeable buffer case, we use headcount, log_num and seg as indexes into same-size arrays, and we know that headcount <= seg and log_num equals either 0 or seg. Therefore, the right thing to do is to range-check seg, not headcount as we do now: these will be different if the guest chains s/g descriptors (this does not happen now, but we can not trust the guest). Long term, we should add BUG_ON checks to verify the two other indexes are what we think they should be.
      Reported-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: error handling fix (Michael S. Tsirkin, 2010-09-06; 1 file, -0/+1)
    vhost should set worker to NULL on cgroups attach failure, so that we won't try to destroy the worker again on close.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
| * vhost: fix attach to cgroups regression (Michael S. Tsirkin, 2010-09-06; 1 file, -22/+57)
    Since 2.6.36-rc1, non-root users of vhost-net fail to attach if they are in any cgroups. The reason is that when qemu uses vhost, vhost wants to attach its thread to all cgroups that qemu has. But we got the API backwards, so a non-privileged process (qemu) tried to control the privileged one (vhost), which fails. Fix this by switching to the new cgroup_attach_task_all, and running it from the vhost thread.
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* | Merge branch 'rcu/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu into core/rcu (Ingo Molnar, 2010-10-07; 1 file, -1/+4)
| * vhost: stop worker only if created (Eric Dumazet, 2010-09-01; 1 file, -1/+4)
    It is currently illegal to call kthread_stop(NULL).
    Reported-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
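    The guard is the usual one for an optionally-created kthread; a minimal sketch (the worker field name follows the vhost naming, the rest is schematic):

      /* Only stop the worker if it was actually created:
       * kthread_stop(NULL) is not allowed.                */
      if (dev->worker) {
          kthread_stop(dev->worker);
          dev->worker = NULL;
      }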