author     Paul Mackerras <paulus@samba.org>    2012-09-04 18:33:08 +0000
committer  Ben Hutchings <ben@decadent.org.uk>  2012-09-19 15:04:42 +0100
commit     241ee90a69ede9cf9255df1a18036210beeb8adf (patch)
tree       5d3e9221a71e6b940d151841a52d8018419f31f7 /arch/powerpc/kernel/dbell.c
parent     4d676c891354e871351e741eedf6a909ebffd265 (diff)
powerpc: Make sure IPI handlers see data written by IPI senders
commit 9fb1b36ca1234e64a5d1cc573175303395e3354d upstream.

We have been observing hangs, both of KVM guest vcpu tasks and more
generally, where a process that is woken doesn't properly wake up and
continue to run, but instead sticks in TASK_WAKING state. This happens
because the update of rq->wake_list in ttwu_queue_remote() is not
ordered with the update of ipi_message in smp_muxed_ipi_message_pass(),
and the reading of rq->wake_list in scheduler_ipi() is not ordered with
the reading of ipi_message in smp_ipi_demux(). Thus it is possible for
the IPI receiver not to see the updated rq->wake_list and therefore
conclude that there is nothing for it to do.

In order to make sure that anything done before smp_send_reschedule()
is ordered before anything done in the resulting call to
scheduler_ipi(), this adds barriers in smp_muxed_ipi_message_pass()
and smp_ipi_demux(). The barrier in smp_muxed_ipi_message_pass() is a
full barrier to ensure that there is a full ordering between the
smp_send_reschedule() caller and scheduler_ipi(). In smp_ipi_demux(),
we use xchg() rather than xchg_local() because xchg() includes release
and acquire barriers. Using xchg() rather than xchg_local() also makes
sense given that ipi_message is not just accessed locally.

This moves the barrier between setting the message and calling the
cause_ipi() function into the individual cause_ipi implementations.
Most of them -- those that used outb, out_8 or similar -- already had
a full barrier because out_8 etc. include a sync before the MMIO
store. This adds an explicit barrier in the two remaining cases.

These changes made no measurable difference to the speed of IPIs as
measured using a simple ping-pong latency test across two CPUs on
different cores of a POWER7 machine.

The analysis of the reason why processes were not waking up properly
is due to Milton Miller.

Reported-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
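To make the ordering requirement concrete, here is a minimal userspace
analogy using C11 atomics and pthreads; it is not the kernel code, and the
names sender, receiver, wake_pending and ipi_message_word are illustrative
stand-ins for ttwu_queue_remote()/scheduler_ipi(), rq->wake_list and the
muxed IPI message word. The sender publishes its work, issues a full
barrier (as smp_muxed_ipi_message_pass() now does before setting the
message), and only then sets the message; the receiver consumes the message
with a full-barrier exchange (as smp_ipi_demux() does with xchg()) before
looking for the work, so a receiver that sees the message is guaranteed to
also see the work.

/* Minimal userspace analogy of the IPI ordering, using C11 atomics.
 * Illustrative only -- not kernel code. Build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int wake_pending;     /* stand-in for rq->wake_list          */
static atomic_int ipi_message_word; /* stand-in for the muxed ipi_message  */

static void *sender(void *arg)      /* like ttwu_queue_remote() + IPI send */
{
	(void)arg;
	atomic_store_explicit(&wake_pending, 1, memory_order_relaxed);
	/* Full barrier, like the barrier added in smp_muxed_ipi_message_pass():
	 * any receiver that sees the message below must also see wake_pending. */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&ipi_message_word, 1, memory_order_relaxed);
	return NULL;
}

static void *receiver(void *arg)    /* like smp_ipi_demux() -> scheduler_ipi() */
{
	(void)arg;
	/* Consume the message with a full-barrier exchange, like xchg();
	 * its acquire side pairs with the sender's fence above. */
	while (atomic_exchange_explicit(&ipi_message_word, 0,
					memory_order_seq_cst) == 0)
		;	/* spin until the "IPI" arrives */
	if (atomic_load_explicit(&wake_pending, memory_order_relaxed))
		puts("work observed, wakeup proceeds");
	else
		puts("lost wakeup (unreachable with the barriers above)");
	return NULL;
}

int main(void)
{
	pthread_t s, r;
	pthread_create(&r, NULL, receiver, NULL);
	pthread_create(&s, NULL, sender, NULL);
	pthread_join(s, NULL);
	pthread_join(r, NULL);
	return 0;
}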
Diffstat (limited to 'arch/powerpc/kernel/dbell.c')
-rw-r--r--  arch/powerpc/kernel/dbell.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
index 2cc451a..6856062 100644
--- a/arch/powerpc/kernel/dbell.c
+++ b/arch/powerpc/kernel/dbell.c
@@ -28,6 +28,8 @@ void doorbell_setup_this_cpu(void)
 void doorbell_cause_ipi(int cpu, unsigned long data)
 {
+	/* Order previous accesses vs. msgsnd, which is treated as a store */
+	mb();
 	ppc_msgsnd(PPC_DBELL, 0, data);
 }
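Note that msgsnd, unlike the MMIO stores used by the out_8()-based
cause_ipi implementations, carries no implicit sync, which is why this
path needs the explicit mb(): with the barrier between setting the message
and calling cause_ipi() moved into the individual implementations,
doorbell_cause_ipi() must itself ensure the message update is ordered
before the doorbell interrupt is raised on the target CPU.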