path: root/arch/x86/lib
author	Nick Piggin <npiggin@suse.de>	2007-10-13 03:06:00 +0200
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-10-12 18:41:21 -0700
commit	df1bdc0667eb3132fe60b3562347ca9133694ee0 (patch)
tree	5d044be900ccc9f13662e1aeae7df6ed108ee43a /arch/x86/lib
parent	2b9e0aae1d50e880c58d46788e5e3ebd89d75d62 (diff)
x86: fence oostores on 64-bit
movnt* instructions are not strongly ordered with respect to other stores,
so if we are to assume stores are strongly ordered in the rest of the
64-bit code, we must fence these off (see similar examples in 32-bit code).

[ The AMD memory ordering document seems to say that nontemporal stores can
  also pass earlier regular stores, so maybe we need sfences _before_
  movnt* everywhere too? ]

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/x86/lib')
-rw-r--r--	arch/x86/lib/copy_user_nocache_64.S	| 1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/x86/lib/copy_user_nocache_64.S b/arch/x86/lib/copy_user_nocache_64.S
index 4620efb..5196762 100644
--- a/arch/x86/lib/copy_user_nocache_64.S
+++ b/arch/x86/lib/copy_user_nocache_64.S
@@ -117,6 +117,7 @@ ENTRY(__copy_user_nocache)
popq %rbx
CFI_ADJUST_CFA_OFFSET -8
CFI_RESTORE rbx
+ sfence
ret
CFI_RESTORE_STATE