author     jam <jam@chromium.org>                2016-01-04 10:16:16 -0800
committer  Commit bot <commit-bot@chromium.org>  2016-01-04 18:17:02 +0000
commit     b0c8bd36ad840b609050caee081fa73e0e212559 (patch)
tree       76c55b425b368363d82e0adbb638611fe4910358 /mojo
parent     5f2d8e77d599dbf356254efb1ec44e79748837c4 (diff)
Fix the assumption in the Mojo bindings class Connector that message pipes notice the other end's closing synchronously.
This was causing mojo_public_bindings_unittests' MultiplexRouterTest.BasicRequestResponse and MultiplexRouterTest.RequestWithNoReceiver to fail with the new EDK. The problem is that in Connector::HandleError, message_pipe_ is reset to a dummy message pipe whose other end is closed immediately. In the old EDK, the live end notices that the peer is closed synchronously; in the new EDK this happens asynchronously because of thread hops to the IO thread. The fix is simply to ensure we don't watch the message pipe handle twice (since the async wait is not cancelled immediately).

BUG=561803

Review URL: https://codereview.chromium.org/1557753002

Cr-Commit-Position: refs/heads/master@{#367327}
Diffstat (limited to 'mojo')
-rw-r--r--  mojo/public/cpp/bindings/lib/connector.cc  8
1 file changed, 7 insertions, 1 deletion
diff --git a/mojo/public/cpp/bindings/lib/connector.cc b/mojo/public/cpp/bindings/lib/connector.cc
index c5e9b7f..e6f2d83 100644
--- a/mojo/public/cpp/bindings/lib/connector.cc
+++ b/mojo/public/cpp/bindings/lib/connector.cc
@@ -264,7 +264,13 @@ void Connector::ReadAllAvailableMessages() {
       return;
 
     if (rv == MOJO_RESULT_SHOULD_WAIT) {
-      WaitToReadMore();
+      // ReadSingleMessage could end up calling HandleError, which resets
+      // message_pipe_ to a dummy one whose peer is closed. The old EDK will
+      // see that the peer is closed immediately, while the new one is
+      // asynchronous because of thread hops. In that case, there'll still
+      // be an async waiter.
+      if (!async_wait_id_)
+        WaitToReadMore();
       break;
     }
   }
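
For reference, below is a minimal, self-contained C++ sketch of the pattern this patch relies on. It is not the actual Chromium code: Connector's real members, the MojoResult values, and the EDK wait APIs differ, and the Result enum, the wait-id bookkeeping, and the stubbed ReadSingleMessage here are illustrative assumptions. What it demonstrates is the guard itself: HandleError swaps in a dummy pipe and registers a fresh async waiter, so ReadAllAvailableMessages must not register a second watch on the same handle.

// Hypothetical stand-ins for MojoResult and the Mojo wait APIs.
#include <cstdint>
#include <iostream>

enum class Result { kOk, kShouldWait };

class Connector {
 public:
  void ReadAllAvailableMessages() {
    while (!error_) {
      Result rv = ReadSingleMessage();
      if (rv == Result::kShouldWait) {
        // ReadSingleMessage may have called HandleError(), which swapped in
        // a dummy pipe and registered a waiter of its own. With an
        // asynchronous EDK that waiter is still outstanding here, so only
        // register a watch if none exists: this is the patched guard.
        if (!async_wait_id_)
          WaitToReadMore();
        break;
      }
    }
  }

 private:
  Result ReadSingleMessage() {
    // Stub: the first read hits an error path and triggers HandleError();
    // the dummy pipe then reports that there is nothing to read yet.
    if (!error_)
      HandleError();
    return Result::kShouldWait;
  }

  void WaitToReadMore() {
    // Register exactly one asynchronous watch and remember its id.
    async_wait_id_ = next_wait_id_++;
    std::cout << "registered waiter " << async_wait_id_ << "\n";
  }

  void HandleError() {
    // Reset the pipe to a dummy whose peer is closed and start waiting for
    // the peer-closed signal, which the new EDK delivers asynchronously.
    error_ = true;
    WaitToReadMore();
  }

  bool error_ = false;
  uint64_t async_wait_id_ = 0;  // 0 means no outstanding waiter.
  uint64_t next_wait_id_ = 1;
};

int main() {
  Connector connector;
  connector.ReadAllAvailableMessages();
  // Prints "registered waiter 1" once; without the if (!async_wait_id_)
  // guard it would also print "registered waiter 2", i.e. a double watch.
}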