author     willchan@chromium.org   2010-07-07 19:08:10 +0000
committer  willchan@chromium.org   2010-07-07 19:08:10 +0000
commit     8ae03f418758747db6fa9b50a6cbf2fef7810b7c
tree       da312e3bab118577d561ef9f5b55c7ea9470674a /net/socket/socks_client_socket_pool_unittest.cc
parent     0aeacda086dd438fc6ff3fa54b9ea7f39c42f6cf
Revert 51081 for causing crashes. (The description of the reverted change follows.)
We basically don't do late socket binding when a connect has already
been started for a request, even if another socket frees up earlier.
The reassignment logic was quite complicated, so I reworked it. Fixing
the bug itself was easy: change the way FindTopStalledGroup worked. But
because that function is called from a loop, changing this case made
the loop run forever in some cases, which led me to look into
unwinding the loop.
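
To illustrate the hazard, here is an invented, self-contained C++ sketch
(not the actual Chromium pool code; Group, can_use_freed_slot, and
HandOutFreedSlot are hypothetical stand-ins, and only FindTopStalledGroup
echoes a real name):

  #include <vector>

  struct Group {
    bool stalled = true;
    bool can_use_freed_slot = false;
  };

  // Stand-in for FindTopStalledGroup(): the highest-priority stalled group.
  Group* FindTopStalledGroup(std::vector<Group>& groups) {
    for (Group& group : groups) {
      if (group.stalled)
        return &group;
    }
    return nullptr;
  }

  void HandOutFreedSlot(std::vector<Group>& groups) {
    // "Loop until no group is stalled": if the top stalled group can never
    // take the freed slot, FindTopStalledGroup() returns it forever and the
    // loop spins. The break is the escape hatch such a loop needs.
    while (Group* group = FindTopStalledGroup(groups)) {
      if (!group->can_use_freed_slot)
        break;
      group->stalled = false;
    }
  }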
The problem really came down to ReleaseSocket/DoReleaseSocket. Because
we allow a pending queue of released sockets, we had to do this
looping (which has been a source of bugs before). To fix it, I
eliminated the pending_releases queue. I also reworked the routes
through OnAvailableSocketSlot to unify them and always run asynchronously.
The result is that we no longer have the loop: when one socket is
released, we hand out exactly one socket. Note also that this logic
slightly changes the priority of how we recycle sockets.
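
A minimal sketch of the reworked release path, assuming the shape described
above; PoolSketch and its members are invented, and only
ReleaseSocket/OnAvailableSocketSlot echo the real names:

  #include <deque>
  #include <functional>

  class PoolSketch {
   public:
    void ReleaseSocket(int socket) {
      idle_sockets_.push_back(socket);
      // Always asynchronous: post exactly one hand-out task per release
      // instead of looping inline over a pending_releases queue.
      tasks_.push_back([this] { OnAvailableSocketSlot(); });
    }

    void AddPendingRequest() { ++pending_requests_; }

    // Stand-in for the message loop draining posted tasks.
    void RunAllPending() {
      while (!tasks_.empty()) {
        std::function<void()> task = tasks_.front();
        tasks_.pop_front();
        task();
      }
    }

   private:
    void OnAvailableSocketSlot() {
      // One released socket unblocks at most one request.
      if (pending_requests_ > 0 && !idle_sockets_.empty()) {
        --pending_requests_;
        idle_sockets_.pop_front();
      }
    }

    std::deque<int> idle_sockets_;
    std::deque<std::function<void()>> tasks_;
    int pending_requests_ = 0;
  };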
Previously, we always consulted the TopStalledGroup. But the
TopStalledGroup is really only interesting when we're at our max global
socket limit, which is rare. In the new logic, when a socket is
released, first priority goes to any pending request in the same group,
regardless of that group's priority; after all, why close a socket we
already have open? Previously, if the released socket's group was not
the highest-priority group, the socket would be marked idle, then
closed (to make room for a socket for the TopStalledGroup), and finally
a new socket created. I believe the new algorithm, while not perfectly
matching the priorities, is more efficient (less churn on sockets) and
handles the common case more gracefully.
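
As a rough sketch of the new recycle order (all names here are invented
illustrations, not the real ClientSocketPool code):

  struct GroupState {
    int pending_requests = 0;
  };

  enum class RecycleAction {
    kReuseInSameGroup,
    kCloseForTopStalledGroup,
    kMarkIdle,
  };

  RecycleAction OnSocketFreed(const GroupState& group,
                              bool at_max_global_sockets) {
    if (group.pending_requests > 0)
      return RecycleAction::kReuseInSameGroup;  // why close an open socket?
    if (at_max_global_sockets)
      return RecycleAction::kCloseForTopStalledGroup;  // make room
    return RecycleAction::kMarkIdle;  // keep it around for later reuse
  }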
Finally, OnAvailableSocketSlot does two things. First, it tries to
"give" the now-available slot to a particular group; which group
depends on how OnAvailableSocketSlot was called. Second, if we're
currently stalled on max sockets, it checks (after giving the slot
out) whether we can free something up to satisfy a stalled group. If
that second step fails for whatever reason, we don't loop. In theory,
this could mean we drop under the socket max without handing out some
sockets right away. To make sure multiple stalled groups can get
unblocked, we record the number of stalled groups; once in this mode,
OnAvailableSocketSlot keeps checking for stalled groups until the
count finally drops to zero.
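
A minimal sketch of this accounting, under the assumption that the recorded
count is re-examined on each slot event; GiveSlotToRequestingGroup,
CountStalledGroups, and TryUnstallOneGroup are hypothetical helpers:

  class SlotHandlerSketch {
   public:
    void OnAvailableSocketSlot(bool stalled_on_max_sockets) {
      GiveSlotToRequestingGroup();  // step 1: depends on how we were called
      if (stalled_on_max_sockets && stalled_group_count_ == 0)
        stalled_group_count_ = CountStalledGroups();  // enter draining mode
      if (stalled_group_count_ > 0 && TryUnstallOneGroup())
        --stalled_group_count_;  // checked again on every later slot event
      // On failure we simply return: no loop; the next slot event retries.
    }

   private:
    // Stand-ins for the real bookkeeping.
    void GiveSlotToRequestingGroup() {}
    int CountStalledGroups() { return 0; }
    bool TryUnstallOneGroup() { return false; }

    int stalled_group_count_ = 0;
  };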
BUG=47375
TEST=DelayedSocketBindingWaitingForConnect,CancelStalledSocketAtSocketLimit
Review URL: http://codereview.chromium.org/2861023
TBR=mbelshe@chromium.org
BUG=48094
Review URL: http://codereview.chromium.org/2809052
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@51755 0039d316-1c4b-4281-b951-d872f2087c98
Diffstat (limited to 'net/socket/socks_client_socket_pool_unittest.cc')
-rw-r--r--  net/socket/socks_client_socket_pool_unittest.cc | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/net/socket/socks_client_socket_pool_unittest.cc b/net/socket/socks_client_socket_pool_unittest.cc
index 7bfce55..fb83d96 100644
--- a/net/socket/socks_client_socket_pool_unittest.cc
+++ b/net/socket/socks_client_socket_pool_unittest.cc
@@ -319,16 +319,15 @@ TEST_F(SOCKSClientSocketPoolTest, CancelDuringTCPConnect) {
   pool_->CancelRequest("a", requests_[0]->handle());
   pool_->CancelRequest("a", requests_[1]->handle());
 
-  // Requests in the connect phase don't actually get cancelled.
-  EXPECT_EQ(0, tcp_socket_pool_->cancel_count());
+  EXPECT_EQ(1, tcp_socket_pool_->cancel_count());
 
   // Now wait for the TCP sockets to connect.
   MessageLoop::current()->RunAllPending();
 
   EXPECT_EQ(kRequestNotFound, GetOrderOfRequest(1));
   EXPECT_EQ(kRequestNotFound, GetOrderOfRequest(2));
-  EXPECT_EQ(0, tcp_socket_pool_->cancel_count());
-  EXPECT_EQ(2, pool_->IdleSocketCount());
+  EXPECT_EQ(1, tcp_socket_pool_->cancel_count());
+  EXPECT_EQ(1, pool_->IdleSocketCount());
 
   requests_[0]->handle()->Reset();
   requests_[1]->handle()->Reset();
@@ -355,8 +354,7 @@ TEST_F(SOCKSClientSocketPoolTest, CancelDuringSOCKSConnect) {
   pool_->CancelRequest("a", requests_[0]->handle());
   pool_->CancelRequest("a", requests_[1]->handle());
   EXPECT_EQ(0, tcp_socket_pool_->cancel_count());
-  // Requests in the connect phase don't actually get cancelled.
-  EXPECT_EQ(0, tcp_socket_pool_->release_count());
+  EXPECT_EQ(1, tcp_socket_pool_->release_count());
 
   // Now wait for the async data to reach the SOCKS connect jobs.
   MessageLoop::current()->RunAllPending();
@@ -364,8 +362,8 @@
   EXPECT_EQ(kRequestNotFound, GetOrderOfRequest(1));
   EXPECT_EQ(kRequestNotFound, GetOrderOfRequest(2));
   EXPECT_EQ(0, tcp_socket_pool_->cancel_count());
-  EXPECT_EQ(0, tcp_socket_pool_->release_count());
-  EXPECT_EQ(2, pool_->IdleSocketCount());
+  EXPECT_EQ(1, tcp_socket_pool_->release_count());
+  EXPECT_EQ(1, pool_->IdleSocketCount());
 
   requests_[0]->handle()->Reset();
   requests_[1]->handle()->Reset();