path: root/virt/kvm/kvm_main.c
* initial merge with 3.2.72 (Wolfgang Wiedmeyer, 2015-10-23, 1 file, -30/+230)
|\
| * kvm: don't take vcpu mutex for obviously invalid vcpu ioctls (David Matlack, 2014-12-14, 1 file, -0/+4)

    commit 2ea75be3219571d0ec009ce20d9971e54af96e09 upstream.

    vcpu ioctls can hang the calling thread if issued while a vcpu is running. However, invalid ioctls can happen when userspace tries to probe the kind of file descriptors (e.g. isatty() calls ioctl(TCGETS)); in that case, we know the ioctl is going to be rejected as invalid anyway and we can fail before trying to take the vcpu mutex.

    This patch does not change functionality, it just makes invalid ioctls fail faster.

    Signed-off-by: David Matlack <dmatlack@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    [bwh: Backported to 3.2: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
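    The check itself is small; a minimal kernel-style sketch of the fast-fail (kvm_vcpu_ioctl_dispatch() is a hypothetical stand-in for the original handler body, and the exact 3.2 context may differ):

        static long kvm_vcpu_ioctl(struct file *filp,
                                   unsigned int ioctl, unsigned long arg)
        {
                struct kvm_vcpu *vcpu = filp->private_data;

                if (vcpu->kvm->mm != current->mm)
                        return -EIO;

                /*
                 * Reject ioctls outside KVM's type range before taking
                 * vcpu->mutex: probes such as isatty()'s ioctl(TCGETS)
                 * would otherwise block behind a running vcpu.
                 */
                if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
                        return -EINVAL;

                return kvm_vcpu_ioctl_dispatch(filp, ioctl, arg);
        }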
| * KVM: Fix iommu map/unmap to handle memory slot moves (Alex Williamson, 2014-01-03, 1 file, -8/+11)

    commit e40f193f5bb022e927a57a4f5d5194e4f12ddb74 upstream.

    The iommu integration into memory slots expects memory slots to be added or removed and doesn't handle the move case. We can unmap slots from the iommu after we mark them invalid and map them before installing the final memslot array. Also re-order the kmemdup vs map so we don't leave iommu mappings if we get ENOMEM.

    Reviewed-by: Gleb Natapov <gleb@redhat.com>
    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    [bwh: Backported to 3.2: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
| * KVM: perform an invalid memslot step for gpa base change (Marcelo Tosatti, 2014-01-03, 1 file, -3/+3)

    commit 12d6e7538e2d418c08f082b1b44ffa5fb7270ed8 upstream.

    PPC must flush all translations before the new memory slot is visible.

    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
    [bwh: Backported to 3.2: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
| * KVM: Improve create VCPU parameter (CVE-2013-4587) (Andy Honig, 2014-01-03, 1 file, -0/+3)

    commit 338c7dbadd2671189cec7faf64c84d01071b3f96 upstream.

    In multiple functions the vcpu_id is used as an offset into a bitfield. A malicious user could specify a vcpu_id greater than 255 in order to set or clear bits in kernel memory. This could be used to elevate privileges in the kernel.

    This patch verifies that the vcpu_id provided is less than 255. The api documentation already specifies that the vcpu_id must be less than max_vcpus, but this is currently not checked.

    Reported-by: Andrew Honig <ahonig@google.com>
    Signed-off-by: Andrew Honig <ahonig@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
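    A sketch of the shape of the added validation at the top of kvm_vm_ioctl_create_vcpu(), assuming the backport keeps the upstream form:

        /* reject ids that would index past the per-vm vcpu bitfields */
        if (id >= KVM_MAX_VCPUS)
                return -EINVAL;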
| * KVM: Allow cross page reads and writes from cached translations. (Andrew Honig, 2013-04-25, 1 file, -10/+37)

    commit 8f964525a121f2ff2df948dac908dcc65be21b5b upstream.

    This patch adds support for kvm_gfn_to_hva_cache_init functions for reads and writes that will cross a page. If the range falls within the same memslot, then this will be a fast operation. If the range is split between two memslots, then the slower kvm_read_guest and kvm_write_guest are used.

    Tested: Test against kvm_clock unit tests.

    Signed-off-by: Andrew Honig <ahonig@google.com>
    Signed-off-by: Gleb Natapov <gleb@redhat.com>
    [bwh: Backported to 3.2:
     - Drop change in lapic.c
     - Keep using __gfn_to_memslot() in kvm_gfn_to_hva_cache_init()]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
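    A simplified sketch of the cache setup this describes. It is deliberately more conservative than the patch: here any range crossing a page boundary is pushed to the slow path, whereas the real change only falls back when the range crosses a memslot. Field and helper names approximate the 3.2 backport:

        int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                      gpa_t gpa, unsigned long len)
        {
                struct kvm_memslots *slots = kvm_memslots(kvm);
                int offset = offset_in_page(gpa);
                gfn_t start_gfn = gpa >> PAGE_SHIFT;
                gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;

                ghc->gpa = gpa;
                ghc->len = len;
                ghc->generation = slots->generation;
                ghc->hva = gfn_to_hva(kvm, start_gfn);

                if (!kvm_is_error_hva(ghc->hva) && start_gfn == end_gfn) {
                        /* fast path: the whole range lives in one page */
                        ghc->memslot = __gfn_to_memslot(slots, start_gfn);
                        ghc->hva += offset;
                } else {
                        /* force readers/writers onto the slow uncached path */
                        ghc->memslot = NULL;
                }
                return 0;
        }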
| * KVM: Ensure all vcpus are consistent with in-kernel irqchip settings (Avi Kivity, 2012-05-31, 1 file, -0/+4)

    (cherry picked from commit 3e515705a1f46beb1c942bb8043c16f8ac7b1e9e)

    If some vcpus are created before KVM_CREATE_IRQCHIP, then irqchip_in_kernel() and vcpu->arch.apic will be inconsistent, leading to potential NULL pointer dereferences.

    Fix by:
    - ensuring that no vcpus are installed when KVM_CREATE_IRQCHIP is called
    - ensuring that a vcpu has an apic if it is installed after KVM_CREATE_IRQCHIP

    This is somewhat long winded because vcpu->arch.apic is created without kvm->lock held.

    Based on earlier patch by Michael Ellerman.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
| * KVM: mmu_notifier: Flush TLBs before releasing mmu_lock (Takuya Yoshikawa, 2012-05-31, 1 file, -9/+10)

    (cherry picked from commit 565f3be2174611f364405bbea2d86e153c2e7e78)

    Other threads may process the same page in that small window and skip the TLB flush and then return before these functions do the flush.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
| * KVM: unmap pages from the iommu when slots are removed (Alex Williamson, 2012-05-11, 1 file, -2/+3)

    commit 32f6daad4651a748a58a3ab6da0611862175722f upstream.

    We've been adding new mappings, but not destroying old mappings. This can lead to a page leak as pages are pinned using get_user_pages, but only unpinned with put_page if they still exist in the memslots list on vm shutdown. A memslot that is destroyed while an iommu domain is enabled for the guest will therefore result in an elevated page reference count that is never cleared.

    Additionally, without this fix, the iommu is only programmed with the first translation for a gpa. This can result in peer-to-peer errors if a mapping is destroyed and replaced by a new mapping at the same gpa as the iommu will still be pointing to the original, pinned memory address.

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
| * KVM: Intelligent device lookup on I/O bus (Sasha Levin, 2011-09-25, 1 file, -12/+100)

    Currently the method of dealing with an IO operation on a bus (PIO/MMIO) is to call the read or write callback for each device registered on the bus until we find a device which handles it.

    Since the number of devices on a bus can be significant due to ioeventfds and coalesced MMIO zones, this leads to a lot of overhead on each IO operation.

    Instead of registering devices, we now register ranges which point to a device. Lookup is done using an efficient bsearch instead of a linear search.

    A performance test was conducted by comparing exit count per second with 200 ioeventfds created on one byte while the guest tries to access a different byte continuously (triggering usermode exits). Before the patch the guest achieved 259k exits per second; after the patch the guest does 274k exits per second.

    Cc: Avi Kivity <avi@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
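    The range lookup is easy to demo in isolation; an illustrative, runnable user-space sketch (the names are hypothetical, not the kernel's kvm_io_bus structures):

        #include <stdio.h>
        #include <stdlib.h>

        struct io_range {
                unsigned long base;
                unsigned long len;
                int dev_id;            /* stand-in for the device pointer */
        };

        /* bsearch comparator: does the address fall inside this range? */
        static int range_cmp(const void *key, const void *elt)
        {
                unsigned long addr = *(const unsigned long *)key;
                const struct io_range *r = elt;

                if (addr < r->base)
                        return -1;
                if (addr >= r->base + r->len)
                        return 1;
                return 0;
        }

        int main(void)
        {
                struct io_range bus[] = {      /* must stay sorted by base */
                        { 0x100, 0x10, 1 },
                        { 0x200, 0x08, 2 },
                        { 0x300, 0x40, 3 },
                };
                unsigned long addr = 0x204;
                struct io_range *hit = bsearch(&addr, bus,
                                sizeof(bus) / sizeof(bus[0]), sizeof(bus[0]),
                                range_cmp);

                printf("addr 0x%lx -> dev %d\n", addr, hit ? hit->dev_id : -1);
                return 0;
        }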
| * KVM: MMU: mmio page fault support (Xiao Guangrong, 2011-07-24, 1 file, -0/+7)

    The idea is from Avi:

    | We could cache the result of a miss in an spte by using a reserved bit, and
    | checking the page fault error code (or seeing if we get an ept violation or
    | ept misconfiguration), so if we get repeated mmio on a page, we don't need to
    | search the slot list/tree.
    | (https://lkml.org/lkml/2011/2/22/221)

    When the page fault is caused by mmio, we cache the info in the shadow page table, and also set the reserved bits in the shadow page table, so if the mmio is caused again, we can quickly identify it and emulate it directly.

    Searching an mmio gfn in memslots is heavy since we need to walk all memslots; this feature reduces that cost and also avoids walking the guest page table for soft mmu.

    [jan: fix operator precedence issue]

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: MMU: filter out the mmio pfn from the fault pfn (Xiao Guangrong, 2011-07-24, 1 file, -2/+14)

    If the page fault is caused by mmio, the gfn cannot be found in memslots, and 'bad_pfn' is returned on the gfn_to_hva path, so we can use 'bad_pfn' to identify the mmio page fault.

    And, to clarify the meaning of an mmio pfn, we return the fault page instead of the bad page when the gfn is not allowed to be prefetched.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: introduce kvm_read_guest_cached (Gleb Natapov, 2011-07-12, 1 file, -0/+20)

    Introduce the kvm_read_guest_cached() function in addition to the write one we already have.

    [ by glauber: export function signature in kvm header ]

    Signed-off-by: Gleb Natapov <gleb@redhat.com>
    Signed-off-by: Glauber Costa <glommer@redhat.com>
    Acked-by: Rik van Riel <riel@redhat.com>
    Tested-by: Eric Munson <emunson@mgebm.net>
    Signed-off-by: Avi Kivity <avi@redhat.com>
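    A sketch of what the read side plausibly looks like in this era, mirroring the existing kvm_write_guest_cached() (not guaranteed verbatim; this predates the cross-page change above, so the cache init takes no length):

        int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                  void *data, unsigned long len)
        {
                struct kvm_memslots *slots = kvm_memslots(kvm);

                /* memslots changed since the cache was filled: refresh it */
                if (slots->generation != ghc->generation)
                        kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);

                if (kvm_is_error_hva(ghc->hva))
                        return -EFAULT;

                /* address was validated when the memslot was registered */
                if (__copy_from_user(data, (void __user *)ghc->hva, len))
                        return -EFAULT;

                return 0;
        }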
| * KVM: Add compat ioctl for KVM_SET_SIGNAL_MASK (Alexander Graf, 2011-07-12, 1 file, -1/+51)

    KVM has an ioctl to define which signal mask should be used while running inside VCPU_RUN. At least for big endian systems, this mask is different on 32-bit and 64-bit systems (though the size is identical).

    Add a compat wrapper that converts the mask to whatever the kernel accepts, allowing 32-bit kvm user space to set signal masks.

    This patch fixes qemu with --enable-io-thread on ppc64 hosts when running 32-bit user land.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
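    A hedged sketch of the wrapper's core case (the NULL-mask case is elided; sigset_from_compat() and kvm_vcpu_ioctl_set_sigmask() are assumed to match the era's APIs):

        static long kvm_vcpu_compat_ioctl(struct file *filp,
                                          unsigned int ioctl, unsigned long arg)
        {
                struct kvm_vcpu *vcpu = filp->private_data;
                void __user *argp = compat_ptr(arg);
                long r;

                switch (ioctl) {
                case KVM_SET_SIGNAL_MASK: {
                        struct kvm_signal_mask __user *sigmask_arg = argp;
                        struct kvm_signal_mask kvm_sigmask;
                        compat_sigset_t csigset;
                        sigset_t sigset;

                        r = -EFAULT;
                        if (copy_from_user(&kvm_sigmask, argp, sizeof(kvm_sigmask)))
                                break;
                        r = -EINVAL;
                        if (kvm_sigmask.len != sizeof(csigset))
                                break;
                        r = -EFAULT;
                        if (copy_from_user(&csigset, sigmask_arg->sigset,
                                           sizeof(csigset)))
                                break;
                        /* widen the 32-bit mask to the kernel's native sigset */
                        sigset_from_compat(&sigset, &csigset);
                        r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset);
                        break;
                }
                default:
                        /* everything else is layout-compatible */
                        r = kvm_vcpu_ioctl(filp, ioctl, arg);
                }
                return r;
        }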
| * KVM: Clean up error handling during VCPU creation (Jan Kiszka, 2011-07-12, 1 file, -5/+6)

    So far kvm_arch_vcpu_setup is responsible for freeing the vcpu struct if it fails. Move this confusing responsibility back into the hands of kvm_vm_ioctl_create_vcpu. Only kvm_arch_vcpu_setup of x86 is affected; all other archs cannot fail.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: use __copy_to_user/__clear_user to write guest page (Xiao Guangrong, 2011-07-12, 1 file, -2/+2)

    Simply use __copy_to_user/__clear_user to write the guest page since we have already verified the user address when the memslot is set.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* | merged 3.0.101 tag (Wolfgang Wiedmeyer, 2015-10-22, 1 file, -13/+41)
| |
* | KVM: unmap pages from the iommu when slots are removed (Alex Williamson, 2012-05-07, 1 file, -2/+3)
|/
    commit 32f6daad4651a748a58a3ab6da0611862175722f upstream.

    We've been adding new mappings, but not destroying old mappings. This can lead to a page leak as pages are pinned using get_user_pages, but only unpinned with put_page if they still exist in the memslots list on vm shutdown. A memslot that is destroyed while an iommu domain is enabled for the guest will therefore result in an elevated page reference count that is never cleared.

    Additionally, without this fix, the iommu is only programmed with the first translation for a gpa. This can result in peer-to-peer errors if a mapping is destroyed and replaced by a new mapping at the same gpa as the iommu will still be pointing to the original, pinned memory address.

    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* KVM: Initialize kvm before registering the mmu notifier (Mike Waychison, 2011-06-06, 1 file, -5/+6)

    It doesn't make sense to ever see a half-initialized kvm structure on mmu notifier callbacks. Previously, 85722cda changed the ordering to ensure that the mmu_lock was initialized before mmu notifier registration, but there is still a race where the mmu notifier could come in and try accessing other portions of struct kvm before they are initialized.

    Solve this by moving the mmu notifier registration to occur after the structure is completely initialized.

    Google-Bug-Id: 452199

    Signed-off-by: Mike Waychison <mikew@google.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: add missing void __user * cast to access_ok() call (Heiko Carstens, 2011-05-26, 1 file, -1/+3)

    fa3d315a "KVM: Validate userspace_addr of memslot when registered" introduced this new warning on s390:

    kvm_main.c: In function '__kvm_set_memory_region':
    kvm_main.c:654:7: warning: passing argument 1 of '__access_ok' makes pointer from integer without a cast
    arch/s390/include/asm/uaccess.h:53:19: note: expected 'const void *' but argument is of type '__u64'

    Add the missing cast to get rid of it again...

    Cc: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Fix kvm mmu_notifier initialization order (OGAWA Hirofumi, 2011-05-22, 1 file, -1/+1)

    Like the following, the mmu_notifier can be called immediately after registering. So kvm has to initialize kvm->mmu_lock before it.

    BUG: spinlock bad magic on CPU#0, kswapd0/342
     lock: ffff8800af8c4000, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
    Pid: 342, comm: kswapd0 Not tainted 2.6.39-rc5+ #1
    Call Trace:
     [<ffffffff8118ce61>] spin_bug+0x9c/0xa3
     [<ffffffff8118ce91>] do_raw_spin_lock+0x29/0x13c
     [<ffffffff81024923>] ? flush_tlb_others_ipi+0xaf/0xfd
     [<ffffffff812e22f3>] _raw_spin_lock+0x9/0xb
     [<ffffffffa0582325>] kvm_mmu_notifier_clear_flush_young+0x2c/0x66 [kvm]
     [<ffffffff810d3ff3>] __mmu_notifier_clear_flush_young+0x2b/0x57
     [<ffffffff810c8761>] page_referenced_one+0x88/0xea
     [<ffffffff810c89bf>] page_referenced+0x1fc/0x256
     [<ffffffff810b2771>] shrink_page_list+0x187/0x53a
     [<ffffffff810b2ed7>] shrink_inactive_list+0x1e0/0x33d
     [<ffffffff810acf95>] ? determine_dirtyable_memory+0x15/0x27
     [<ffffffff812e90ee>] ? call_function_single_interrupt+0xe/0x20
     [<ffffffff810b3356>] shrink_zone+0x322/0x3de
     [<ffffffff810a9587>] ? zone_watermark_ok_safe+0xe2/0xf1
     [<ffffffff810b3928>] kswapd+0x516/0x818
     [<ffffffff810b3412>] ? shrink_zone+0x3de/0x3de
     [<ffffffff81053d17>] kthread+0x7d/0x85
     [<ffffffff812e9394>] kernel_thread_helper+0x4/0x10
     [<ffffffff81053c9a>] ? __init_kthread_worker+0x37/0x37
     [<ffffffff812e9390>] ? gs_change+0xb/0xb

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Signed-off-by: Avi Kivity <avi@redhat.com>
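    The ordering rule generalizes beyond KVM; a small runnable user-space analogy (names hypothetical) of why state a callback may touch must be initialized before the callback can be registered:

        #include <pthread.h>
        #include <stdio.h>

        struct vm {
                pthread_mutex_t mmu_lock;
                int ready;
        };

        static struct vm *registered;   /* stands in for the notifier list */

        /* may fire any time after 'registered' is set, like the mmu notifier */
        static void notifier_fires(void)
        {
                if (registered) {
                        pthread_mutex_lock(&registered->mmu_lock);
                        printf("callback ran, ready=%d\n", registered->ready);
                        pthread_mutex_unlock(&registered->mmu_lock);
                }
        }

        int main(void)
        {
                static struct vm kvm;

                pthread_mutex_init(&kvm.mmu_lock, NULL); /* init everything first */
                kvm.ready = 1;
                registered = &kvm;                       /* only then "register" */
                notifier_fires();
                return 0;
        }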
* KVM: Validate userspace_addr of memslot when registered (Takuya Yoshikawa, 2011-05-22, 1 file, -2/+5)

    This way, we can avoid checking the user space address many times when we read the guest memory.

    Although we can do the same for write if we check which slots are writable, we do not care about writes for now: reading the guest memory happens more often than writing.

    [avi: change VERIFY_READ to VERIFY_WRITE]

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: cleanup memslot_id function (Xiao Guangrong, 2011-05-11, 1 file, -17/+0)

    We can get the memslot id from memslot->id directly.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Enable async page fault processing (Gleb Natapov, 2011-04-06, 1 file, -2/+21)

    If asynchronous hva_to_pfn() is requested, call GUP with FOLL_NOWAIT to avoid sleeping on IO. The check for hwpoison is done at the same time; otherwise check_user_page_hwpoison() will call GUP again and will put the vcpu to sleep.

    Signed-off-by: Gleb Natapov <gleb@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
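    A sketch of the helper this plausibly adds, assuming the era's __get_user_pages() signature and FOLL_* flags (treat both as assumptions):

        static int get_user_page_nowait(struct task_struct *tsk, struct mm_struct *mm,
                                        unsigned long start, int write,
                                        struct page **page)
        {
                /* FOLL_NOWAIT: return instead of sleeping on IO;
                 * FOLL_HWPOISON: report poisoned pages in the same pass */
                int flags = FOLL_TOUCH | FOLL_NOWAIT | FOLL_HWPOISON | FOLL_GET;

                if (write)
                        flags |= FOLL_WRITE;

                return __get_user_pages(tsk, mm, start, 1, flags, page, NULL, NULL);
        }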
* Merge branch 'syscore' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6 (Linus Torvalds, 2011-03-25, 1 file, -26/+8)
|\
    * 'syscore' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6:
      Introduce ARCH_NO_SYSDEV_OPS config option (v2)
      cpufreq: Use syscore_ops for boot CPU suspend/resume (v2)
      KVM: Use syscore_ops instead of sysdev class and sysdev
      PCI / Intel IOMMU: Use syscore_ops instead of sysdev class and sysdev
      timekeeping: Use syscore_ops instead of sysdev class and sysdev
      x86: Use syscore_ops instead of sysdev classes and sysdevs
| * KVM: Use syscore_ops instead of sysdev class and sysdev (Rafael J. Wysocki, 2011-03-23, 1 file, -26/+8)

    KVM uses a sysdev class and a sysdev for executing kvm_suspend() after interrupts have been turned off on the boot CPU (during system suspend) and for executing kvm_resume() before turning on interrupts on the boot CPU (during system resume). However, since both of these functions ignore their arguments, the entire mechanism may be replaced with a struct syscore_ops object which is simpler.

    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Acked-by: Avi Kivity <avi@redhat.com>
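    The replacement boils down to one static object; a sketch (kvm_init_pm() is a hypothetical wrapper; upstream registers inline from kvm_init()):

        static struct syscore_ops kvm_syscore_ops = {
                .suspend = kvm_suspend,   /* int kvm_suspend(void) */
                .resume  = kvm_resume,    /* void kvm_resume(void) */
        };

        static void kvm_init_pm(void)
        {
                /* one call replaces the old sysdev class + sysdev pair */
                register_syscore_ops(&kvm_syscore_ops);
        }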
* | kvm: use little-endian bitops (Akinobu Mita, 2011-03-23, 1 file, -1/+1)

    As preparation for removing ext2 non-atomic bit operations from asm/bitops.h, this converts ext2 non-atomic bit operations to little-endian bit operations.

    Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
    Cc: Avi Kivity <avi@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | kvm: stop including asm-generic/bitops/le.h directly (Akinobu Mita, 2011-03-23, 1 file, -2/+1)
|/
    asm-generic/bitops/le.h is only intended to be included directly from asm-generic/bitops/ext2-non-atomic.h or asm-generic/bitops/minix-le.h, which implement generic ext2 or minix bit operations.

    This stops including asm-generic/bitops/le.h directly and uses ext2 non-atomic bit operations instead.

    It seems odd to use ext2_set_bit() on kvm, but it will be replaced with __set_bit_le() after introducing little endian bit operations for all architectures. This indirect step is necessary to maintain bisectability for some architectures which have their own little-endian bit operations.

    Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
    Cc: Avi Kivity <avi@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
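    Taken together with the entry above, the dirty-bitmap call site evolves roughly like this (a sketch, not the verbatim diff; the wrapper and earlier call names are assumed from the two commit messages):

        static void mark_rel_gfn_dirty(struct kvm_memory_slot *memslot, gfn_t rel_gfn)
        {
                /* before: le.h included directly
                 *   generic___set_le_bit(rel_gfn, memslot->dirty_bitmap);
                 * this patch: ext2's non-atomic LE op as a bridge
                 *   ext2_set_bit(rel_gfn, memslot->dirty_bitmap);
                 * after the follow-up conversion above: */
                __set_bit_le(rel_gfn, memslot->dirty_bitmap);
        }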
* KVM: Convert kvm_lock to raw_spinlock (Jan Kiszka, 2011-03-17, 1 file, -18/+18)

    Code under this lock requires non-preemptibility. Ensure this also holds on -rt by converting it to a raw spinlock.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: use yield_to instead of sleep in kvm_vcpu_on_spin (Rik van Riel, 2011-03-17, 1 file, -10/+47)

    Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic slowdowns of certain workloads, use yield_to to get another VCPU in the same KVM guest to run sooner.

    This seems to give a 10-15% speedup in certain workloads.

    Signed-off-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
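    A condensed sketch of the directed-yield loop (heavily simplified: the real patch makes two passes and starts from the last boosted vcpu; it relies on the vcpu->pid tracking from the entry below):

        void kvm_vcpu_on_spin(struct kvm_vcpu *me)
        {
                struct kvm *kvm = me->kvm;
                struct kvm_vcpu *vcpu;
                int i;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        struct task_struct *task = NULL;
                        struct pid *pid;

                        if (vcpu == me)
                                continue;
                        if (waitqueue_active(&vcpu->wq))
                                continue;       /* sleeping, not preempted */

                        rcu_read_lock();
                        pid = rcu_dereference(vcpu->pid);
                        if (pid)
                                task = get_pid_task(pid, PIDTYPE_PID);
                        rcu_read_unlock();
                        if (!task)
                                continue;

                        if (yield_to(task, 1)) {        /* boosted someone: done */
                                put_task_struct(task);
                                break;
                        }
                        put_task_struct(task);
                }
        }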
* KVM: keep track of which task is running a KVM vcpu (Rik van Riel, 2011-03-17, 1 file, -0/+10)

    Keep track of which task is running a KVM vcpu. This helps us figure out later what task to wake up if we want to boost a vcpu that got preempted.

    Unfortunately there are no guarantees that the same task always keeps the same vcpu, so we can only track the task across a single "run" of the vcpu.

    Signed-off-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
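    A sketch of the bookkeeping at the top of the KVM_RUN path, assuming the vcpu->pid field named in the description (the RCU dance lets kvm_vcpu_on_spin read the pid locklessly):

        /* inside the KVM_RUN handler: the thread driving this vcpu changed? */
        if (vcpu->pid != current->pids[PIDTYPE_PID].pid) {
                struct pid *oldpid = vcpu->pid;
                struct pid *newpid = get_task_pid(current, PIDTYPE_PID);

                rcu_assign_pointer(vcpu->pid, newpid);
                synchronize_rcu();      /* readers done with oldpid */
                put_pid(oldpid);
        }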
* KVM: Replace is_hwpoison_address with __get_user_pages (Huang Ying, 2011-03-17, 1 file, -1/+10)

    is_hwpoison_address only checks whether the page table entry is hwpoisoned, regardless of the memory page mapped, while __get_user_pages will check both.

    QEMU will clear the poisoned page table entry (via unmap/map) to make it possible to allocate a new memory page for the virtual address across guest rebooting. But it is also possible that the underlying memory page is kept poisoned even after the corresponding page table entry is cleared, that is, a new memory page cannot be allocated. __get_user_pages can catch these situations.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: make make_all_cpus_request() lockless (Xiao Guangrong, 2011-03-17, 1 file, -6/+3)

    Now that we have 'vcpu->mode' to judge whether we need to send an IPI to other CPUs, the check is exact, so testing the request bit is needless, and we can drop the spinlock along with it.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Add "exiting guest mode" stateXiao Guangrong2011-03-171-1/+6
| | | | | | | | | | | | | | | | Currently we keep track of only two states: guest mode and host mode. This patch adds an "exiting guest mode" state that tells us that an IPI will happen soon, so unless we need to wait for the IPI, we can avoid it completely. Also 1: No need atomically to read/write ->mode in vcpu's thread 2: reorganize struct kvm_vcpu to make ->mode and ->requests in the same cache line explicitly Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
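    The three states, roughly as the patch adds them to the vcpu (the comments are editorial):

        enum {
                OUTSIDE_GUEST_MODE,     /* host mode: no IPI needed for a kick */
                IN_GUEST_MODE,          /* guest mode: a kick must send an IPI */
                EXITING_GUEST_MODE      /* an IPI is already on its way */
        };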
* KVM: fix build warning within __kvm_set_memory_region() on s390 (Heiko Carstens, 2011-03-17, 1 file, -0/+2)

    Get rid of this warning:

      CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:596:12: warning: 'kvm_create_dirty_bitmap' defined but not used

    The only caller of the function is within a !CONFIG_S390 section, so add the same ifdef around kvm_create_dirty_bitmap() as well.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: Don't flush shadow when enabling dirty tracking (Avi Kivity, 2011-03-17, 1 file, -6/+1)

    Instead, drop large mappings, which were the reason we dropped shadow.

    Signed-off-by: Avi Kivity <avi@redhat.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* thp: add compound_trans_head() helper (Andrea Arcangeli, 2011-01-13, 1 file, -24/+14)

    Clean up some code with the common compound_trans_head() helper.

    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Johannes Weiner <jweiner@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Cc: Avi Kivity <avi@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* thp: mmu_notifier_test_young (Andrea Arcangeli, 2011-01-13, 1 file, -0/+17)

    For GRU and EPT, we need gup-fast to set the referenced bit too (this is why it's correct to return 0 when shadow_access_mask is zero, it requires gup-fast to set the referenced bit). qemu-kvm access already sets the young bit in the pte if it isn't zero-copy; if it's zero copy or a shadow paging EPT minor fault we rely on gup-fast to signal the page is in use...

    We also need to check the young bits on the secondary pagetables for NPT and not nested shadow mmu as the data may never get accessed again by the primary pte.

    Without this closer accuracy, we'd have to remove the heuristic that avoids collapsing hugepages in hugepage virtual regions that have not even a single subpage in use.

    ->test_young is fully backwards compatible with GRU and other usages that don't have young bits in pagetables set by the hardware and that should nuke the secondary mmu mappings when ->clear_flush_young runs just like EPT does.

    Removing the heuristic that checks the young bit in khugepaged/collapse_huge_page completely isn't so bad either probably but I thought it was worth it and this makes it reliable.

    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* thp: kvm mmu transparent hugepage support (Andrea Arcangeli, 2011-01-13, 1 file, -2/+30)

    This should work for both hugetlbfs and transparent hugepages.

    [akpm@linux-foundation.org: bring forward PageTransCompound() addition for bisectability]

    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Avi Kivity <avi@redhat.com>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* KVM: Don't spin on virt instruction faults during reboot (Avi Kivity, 2011-01-12, 1 file, -9/+4)

    Since vmx blocks INIT signals, we disable virtualization extensions during reboot. This leads to virtualization instructions faulting; we trap these faults and spin while the reboot continues.

    Unfortunately spinning on a non-preemptible kernel may block a task that reboot depends on; this causes the reboot to hang.

    Fix by skipping over the instruction and hoping for the best.

    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: delay flush all tlbs on sync_page path (Xiao Guangrong, 2011-01-12, 1 file, -1/+6)

    Quote from Avi:

    | I don't think we need to flush immediately; set a "tlb dirty" bit somewhere
    | that is cleared when we flush the tlb.  kvm_mmu_notifier_invalidate_page()
    | can consult the bit and force a flush if set.

    Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: rename hardware_[dis|en]able() to *_nolock() and add locking wrappers (Takuya Yoshikawa, 2011-01-12, 1 file, -12/+22)

    The naming convention of the hardware_[dis|en]able family is a little bit confusing because only hardware_[dis|en]able_all are using the _nolock suffix.

    Renaming the current hardware_[dis|en]able() to *_nolock() and using hardware_[dis|en]able() as wrapper functions which take kvm_lock for them reduces the extra confusion.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
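    The wrapper pattern described, sketched with the _nolock bodies elided; kvm_lock is an ordinary spinlock at this point in the series (it becomes a raw_spinlock in the later "Convert kvm_lock to raw_spinlock" entry above):

        static void hardware_enable(void *junk)
        {
                spin_lock(&kvm_lock);
                hardware_enable_nolock(junk);
                spin_unlock(&kvm_lock);
        }

        static void hardware_disable(void *junk)
        {
                spin_lock(&kvm_lock);
                hardware_disable_nolock(junk);
                spin_unlock(&kvm_lock);
        }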
* KVM: take kvm_lock for hardware_disable() during cpu hotplug (Takuya Yoshikawa, 2011-01-12, 1 file, -0/+2)

    In kvm_cpu_hotplug(), only the CPU_STARTING case is protected by kvm_lock. This patch adds the missing protection for the CPU_DYING case.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Clean up vm creation and release (Jan Kiszka, 2011-01-12, 1 file, -6/+13)

    IA64 support forces us to abstract the allocation of the kvm structure. But instead of mixing this up with arch-specific initialization and doing the same on destruction, split both steps. This allows moving generic destruction calls into generic code.

    It also fixes error clean-up on failures of kvm_create_vm for IA64.

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Refactor srcu struct release on early errors (Jan Kiszka, 2011-01-12, 1 file, -8/+6)

    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: replace vmalloc and memset with vzalloc (Takuya Yoshikawa, 2011-01-12, 1 file, -7/+2)

    Let's use the newly introduced vzalloc().

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Jesper Juhl <jj@chaosbits.net>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: get rid of warning within kvm_dev_ioctl_create_vm (Heiko Carstens, 2011-01-12, 1 file, -4/+4)

    Fixes this:

      CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_dev_ioctl_create_vm':
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1828:10: warning: unused variable 'r'

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: add cast within kvm_clear_guest_page to fix warning (Heiko Carstens, 2011-01-12, 1 file, -1/+2)

    Fixes this:

      CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_clear_guest_page':
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1224:2: warning: passing argument 3 of 'kvm_write_guest_page' makes pointer from integer without a cast
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1185:5: note: expected 'const void *' but argument is of type 'long unsigned int'

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: use kmalloc() for small dirty bitmaps (Takuya Yoshikawa, 2011-01-12, 1 file, -3/+10)

    Currently we are using vmalloc() for all dirty bitmaps even if they are small enough, say, less than a few kilobytes.

    We use kmalloc() if the dirty bitmap size is less than or equal to PAGE_SIZE so that we can avoid vmalloc area usage for VGA.

    This will also make the logging start/stop faster.

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
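    A sketch matching the description; the helper name follows the kvm_create_dirty_bitmap mentioned in the s390 build-warning entry above, but the exact code is an assumption:

        static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
        {
                unsigned long dirty_bytes = kvm_dirty_bitmap_bytes(memslot);

                if (dirty_bytes > PAGE_SIZE)            /* big: vmalloc area */
                        memslot->dirty_bitmap = vzalloc(dirty_bytes);
                else                                    /* small (VGA): kmalloc */
                        memslot->dirty_bitmap = kzalloc(dirty_bytes, GFP_KERNEL);

                return memslot->dirty_bitmap ? 0 : -ENOMEM;
        }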
* KVM: pre-allocate one more dirty bitmap to avoid vmalloc() (Takuya Yoshikawa, 2011-01-12, 1 file, -2/+9)

    Currently x86's kvm_vm_ioctl_get_dirty_log() needs to allocate a bitmap by vmalloc() which will be used in the next logging, and this has been causing a bad effect on VGA and live migration: vmalloc() consumes extra systime, triggers tlb flush, etc.

    This patch resolves this issue by pre-allocating one more bitmap and switching between two bitmaps during dirty logging.

    Performance improvement: I measured performance for the case of VGA update by trace-cmd. The result was 1.5 times faster than the original one. In the case of live migration, the improvement ratio depends on the workload and the guest memory size. In general, the larger the memory size is the more benefits we get.

    Note: This does not change other architectures' logic but the allocation size becomes twice. This will increase the actual memory consumption only when the new size changes the number of pages allocated by vmalloc().

    Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
    Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
    Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
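    A fragment sketching the switch inside the get-dirty-log path (the dirty_bitmap_head field name is taken from the patch description; the actual x86 code differs in details):

        /* inside kvm_vm_ioctl_get_dirty_log(), roughly: */
        unsigned long n = kvm_dirty_bitmap_bytes(memslot);
        unsigned long *dirty_bitmap = memslot->dirty_bitmap_head;

        if (memslot->dirty_bitmap == dirty_bitmap)   /* pick the spare half */
                dirty_bitmap += n / sizeof(long);
        memset(dirty_bitmap, 0, n);

        /* vcpus keep logging, now into the zeroed spare; no vmalloc() here */
        memslot->dirty_bitmap = dirty_bitmap;
        /* ... the frozen half is then copied out to userspace ... */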