Commit log
BUG=78303
TEST=net_unittests
Review URL: http://codereview.chromium.org/6814017
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@80972 0039d316-1c4b-4281-b951-d872f2087c98
Rename TCP-specific socket pool classes to "transport" socket pool classes, in
preparation for non-TCP transports. This helps because the alternative is
either to use non-TCP protocols (like SCTP) in classes called
"TCPClientSocketPool", or to clone the code as "SCTPClientSocketPool", both
of which are less than ideal.
For now, we're just testing transports, so the TransportSocketPool classes
will determine a single type of transport and just use it. In the future
we'll likely need to figure out how to deal with runtime selection of
transports, and probably support use of multiple transports, either within
the same pools or within subpools. But that is for the future.
Note that some histograms still have "tcp" in their names. I didn't change
these to "transport" yet, because doing so would affect existing histograms.
Renames include:
classes:
  TCPClientSocketPool -> TransportClientSocketPool
  MockTCPClientSocketPool -> MockTransportClientSocketPool
  TCPSocketParams -> TransportSocketParams
methods (not an exhaustive list):
  CreateTCPClientSocket() -> CreateTransportClientSocket()
  DoTCPConnect() -> DoTransportConnect()
BUG=none
TEST=none
Review URL: http://codereview.chromium.org/6804028
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@80781 0039d316-1c4b-4281-b951-d872f2087c98
Let synchronous requests ignore the socket and group limits when
loading, to prevent the WebCore thread from locking up.
BUG=45986,58703
TEST=No new tests
Review URL: http://codereview.chromium.org/6756004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@79817 0039d316-1c4b-4281-b951-d872f2087c98
Move the following files from base to base/memory:
raw_scoped_refptr_mismatch_checker.h
ref_counted.cc
ref_counted.h
ref_counted_memory.cc
ref_counted_memory.h
ref_counted_unittest.cc
scoped_callback_factory.h
scoped_comptr_win.h
scoped_handle.h
scoped_native_library.cc
scoped_native_library.h
scoped_native_library_unittest.cc
scoped_nsobject.h
scoped_open_process.h
scoped_ptr.h
scoped_ptr_unittest.cc
scoped_temp_dir.cc
scoped_temp_dir.h
scoped_temp_dir_unittest.cc
scoped_vector.h
singleton.h
singleton_objc.h
singleton_unittest.cc
linked_ptr.h
linked_ptr_unittest.cc
weak_ptr.cc
weak_ptr.h
weak_ptr_unittest.cc
BUG=None
TEST=Compile
Review URL: http://codereview.chromium.org/6714032
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@79524 0039d316-1c4b-4281-b951-d872f2087c98
I had temporarily reverted it so I could break it up into 2 commits, so that the first could be merged to branch 696. This is part 2.
BUG=none
TEST=none
Review URL: http://codereview.chromium.org/6684019
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@77908 0039d316-1c4b-4281-b951-d872f2087c98
Reland r77075, r77077.
They were reverted due to flaky tests, especially on valgrind. Basically, we kept hitting the backup socket timer (500ms), which would create another socket that the tests don't expect, so they crash. I disabled the backup socket timer completely for the SPDY tests, because it makes the parallel alternate protocol jobs too hard to handle.
I also deleted the HTTP fallback test from SpdyNetworkTransactionTest, because it had a similar problem; the behavior was already covered by HttpNetworkTransactionTest.
BUG=69688,75000
TEST=Try connecting to belshe.com with various proxy configurations. Should still work.
Review URL: http://codereview.chromium.org/6635047
TBR=willchan@chromium.org
Review URL: http://codereview.chromium.org/6681012
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@77864 0039d316-1c4b-4281-b951-d872f2087c98
CID 13152, 14252
BUG=NONE
TEST=NONE
Review URL: http://codereview.chromium.org/6665016
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@77826 0039d316-1c4b-4281-b951-d872f2087c98
They were reverted due to flaky tests, especially on valgrind. Basically, we kept hitting the backup socket timer (500ms), which would create another socket that the tests don't expect, so they crash. I disabled the backup socket timer completely for the SPDY tests, because it makes the parallel alternate protocol jobs too hard to handle.
I also deleted the HTTP fallback test from SpdyNetworkTransactionTest, because it had a similar problem; the behavior was already covered by HttpNetworkTransactionTest.
BUG=69688,75000
TEST=Try connecting to belshe.com with various proxy configurations. Should still work.
Review URL: http://codereview.chromium.org/6635047
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@77399 0039d316-1c4b-4281-b951-d872f2087c98
Implement navigator.onLine and online/offline events. This only works on Windows at the moment, as IsCurrentlyOffline() is supported only by NetworkChangeNotifierWin.
Most of the changes are due to the need to support two different kinds of NetworkChangeNotifier observers. Both observers currently happen to trigger on the same event, but that could change, e.g., if we store the previous online state and only notify on a change. Hence the need for two different observer interfaces, with associated Add/Remove methods (see the sketch below).
BUG=7469
TEST=Load https://bug336359.bugzilla.mozilla.org/attachment.cgi?id=220609, unplug network cable, reload, see that page changes to note offline status
Review URL: http://codereview.chromium.org/6526059
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@76985 0039d316-1c4b-4281-b951-d872f2087c98
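
A sketch of the two-observer split described above; the interface and method names are assumptions for illustration, not necessarily the actual Chromium API:

    class NetworkChangeNotifier {
     public:
      // Notified when the set of local IP addresses changes.
      class IPAddressObserver {
       public:
        virtual void OnIPAddressChanged() = 0;
       protected:
        virtual ~IPAddressObserver() {}
      };
      // Notified when the online state may have changed; consumers re-query
      // IsCurrentlyOffline() and fire online/offline events as needed.
      class OnlineStateObserver {
       public:
        virtual void OnOnlineStateChanged(bool online) = 0;
       protected:
        virtual ~OnlineStateObserver() {}
      };
      static void AddIPAddressObserver(IPAddressObserver* observer);
      static void RemoveIPAddressObserver(IPAddressObserver* observer);
      static void AddOnlineStateObserver(OnlineStateObserver* observer);
      static void RemoveOnlineStateObserver(OnlineStateObserver* observer);
      static bool IsCurrentlyOffline();
    };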
BUG=68682
TEST=compiles
Review URL: http://codereview.chromium.org/6314010
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@71880 0039d316-1c4b-4281-b951-d872f2087c98
BUG=68682
TEST=compiles
Review URL: http://codereview.chromium.org/6186005
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@71017 0039d316-1c4b-4281-b951-d872f2087c98
When we hit the max socket limit, we close an idle socket in order to make space for the new preconnecting socket. It's possible for the selected socket to belong to the same connection group as the one we're preconnecting a socket for. This is obviously broken. The bug currently results in us potentially deleting the ClientSocketPoolHelper::Group associated with that connection group, so the |Group* group| local variable in RequestSocketInternal() is now invalid. Any access to that variable later on in the function results in badness.
This was safe before because we would never try to close an idle socket in the connection group we're requesting a socket for: the first condition in RequestSocketInternal() checks whether we can reuse an idle socket. In the preconnect case, since we want to warm up the number of sockets in the connection group, we bypass idle sockets.
The solution is to create a new function, CloseOneIdleSocketExceptInGroup(const Group*) (sketched below), which avoids this problem. I provide a return value so that the caller can tell whether or not an idle socket was closed. If one was not closed (it's possible that the connection group we're requesting a socket for is the only one with idle sockets), the caller can pass up the failure, so RequestSockets() knows we've hit the max socket limit and there's no point in trying to preconnect more sockets.
BUG=64940
TEST=New unittest.
Review URL: http://codereview.chromium.org/5549001
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@67953 0039d316-1c4b-4281-b951-d872f2087c98
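
A minimal sketch of the fix; aside from the method name itself, the container and helper names are assumptions:

    // Closes one idle socket from any group other than |exclude_group|.
    // Returns true if a socket was closed, so RequestSockets() can stop
    // preconnecting once nothing can be freed under the max socket limit.
    bool ClientSocketPoolBaseHelper::CloseOneIdleSocketExceptInGroup(
        const Group* exclude_group) {
      for (GroupMap::iterator it = group_map_.begin();
           it != group_map_.end(); ++it) {
        Group* group = it->second;
        if (group == exclude_group || group->idle_sockets.empty())
          continue;
        delete group->idle_sockets.front().socket;  // Close the socket.
        group->idle_sockets.pop_front();
        DecrementIdleCount();
        return true;
      }
      // Only the excluded group (if any) had idle sockets.
      return false;
    }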
BUG=none
TEST=builds
Review URL: http://codereview.chromium.org/3768001
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@62370 0039d316-1c4b-4281-b951-d872f2087c98
Adds a RequestSockets() API to the ClientSocketPool interface (sketched below).
- No RequestPriority param; all requests default to LOWEST.
- Adds a |num_sockets| param to control how many sockets to try to ensure are connected.
Adds an implementation of said function in ClientSocketPoolBaseHelper.
Adds a new ClientSocketPoolBaseHelper::Flag type to modify socket request behavior. In this case, we bypass idle sockets.
Adds a preconnect concept to ConnectJob. This lets normal requests hijack preconnect jobs.
Modifies all ClientSocketPool subclasses to support the new RequestSockets() API.
Adds new tests.
No client actually uses this API yet. We need to plumb it up to the preconnect system.
BUG=54450
TEST=new tests in ClientSocketPoolBaseTest
Review URL: http://codereview.chromium.org/3689004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@62365 0039d316-1c4b-4281-b951-d872f2087c98
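
A sketch of what the new interface method might look like; the exact parameter list is an assumption based on the description above:

    class ClientSocketPool {
     public:
      // Tries to ensure that |num_sockets| connected sockets exist for
      // |group_name|, connecting new ones at LOWEST priority as needed.
      // Idle sockets are bypassed, since the goal is to warm up the group.
      virtual void RequestSockets(const std::string& group_name,
                                  const void* params,
                                  int num_sockets,
                                  const BoundNetLog& net_log) = 0;
      // ... existing methods such as RequestSocket(), ReleaseSocket(), etc.
    };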
* Pick used idle sockets LIFO.
* In the absence of used idle sockets, pick unused idle sockets FIFO (see the sketch below).
BUG=57491
TEST=ClientSocketPoolBaseTest.PreferUsedSocketToUnusedSocket
Review URL: http://codereview.chromium.org/3539013
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@61710 0039d316-1c4b-4281-b951-d872f2087c98
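
A sketch of the selection policy; the IdleSocket type and list layout are assumptions:

    // Prefer the most recently used idle socket (LIFO keeps warm sockets in
    // rotation); fall back to the oldest never-used socket (FIFO).
    ClientSocket* PickIdleSocket(std::list<IdleSocket>* idle_sockets) {
      for (std::list<IdleSocket>::reverse_iterator it = idle_sockets->rbegin();
           it != idle_sockets->rend(); ++it) {
        if (it->socket->WasEverUsed()) {
          ClientSocket* socket = it->socket;
          idle_sockets->erase(--(it.base()));  // reverse_iterator -> iterator.
          return socket;
        }
      }
      if (idle_sockets->empty())
        return NULL;
      ClientSocket* socket = idle_sockets->front().socket;
      idle_sockets->pop_front();
      return socket;
    }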
tab.
Added two new filter types: "type" and "id". "type"
specifies the source type, and "id" specifies the source ID.
Both only allow exact matches, and multiple values of both
can be specified ("id:5 id:6" will match source IDs 5 and
6).
Added links to the #dns/#sockets tabs to link to filters to
display all outstanding DNS requests/live sockets. The
idle and connecting socket counts also now link to source
id filters, using ids which are now passed to Javascript
along with the rest of the socket pool information.
As socket pools do not seem to maintain a list of active
sockets, the same is not yet done for active
sockets.
Also fixes a bug that resulted in handed out/connecting
socket counts being reversed in the primary socket pool
table.
BUG=56437
TEST=manual
Review URL: http://codereview.chromium.org/3436033
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@61069 0039d316-1c4b-4281-b951-d872f2087c98
Establishes that HttpNetworkSession owns all the socket pools.
Move all the socket pools out into a ClientSocketPoolManager. This is because
of the dependency tree amongst socket pools, which dictates the order in which
they must be constructed and destructed. To better establish that ordering, I
moved them out into their own class. HttpNetworkSession owns the
ClientSocketPoolManager, which owns the pools. We pass the pools as raw
pointers everywhere.
Note that ClientSocketPoolManager owns more pools than are publicly accessible via its interface. That's because some of them are wrapped by publicly exposed pools (see the sketch below).
Also, ClientSocketPoolHistograms used to be reference counted, because it can be shared by multiple ClientSocketPools. But it's effectively a global as well, so I make its lifetime persist for the length of ClientSocketPoolManager too.
I also removed internal refcounting in ClientSocketPoolBase. I had refcounted
it before I knew about ScopedRunnableMethodFactory, back when I first started.
I cleaned up the unit tests a lot. Back when I was a young padawan, I didn't
really know what I was doing, so I copy/pasted a metric asston of code. Turns
out most of it was stupid, so I fixed it. I also stopped using
implementation inheritance with ClientSocketPoolTest, because it's discouraged
by the style guide and, more importantly, because it caused the
ClientSocketHandles within the TestSocketRequest vector to be destroyed _after_
the pools themselves were destroyed, which is bad since the handles will call
pool_->Release(), which blows up.
BUG=56215
TEST=Existing unit tests
Review URL: http://codereview.chromium.org/3389020
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@60983 0039d316-1c4b-4281-b951-d872f2087c98
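
A sketch of the ownership chain described above; member names and the specific pool types are assumptions (at this revision the transport pool was still TCPClientSocketPool):

    class ClientSocketPoolManager {
     public:
      ClientSocketPoolManager();   // Constructs pools in dependency order.
      ~ClientSocketPoolManager();  // Members destroy in reverse declaration order.
      TCPClientSocketPool* tcp_pool() { return tcp_pool_.get(); }
      SSLClientSocketPool* ssl_pool() { return ssl_pool_.get(); }
     private:
      // Declaration order matters: a pool that wraps another must be declared
      // (and thus destroyed) before... rather, after the pool it wraps, so the
      // wrapper is torn down first. Some owned pools are not exposed at all;
      // they are reachable only through the wrapping pools.
      scoped_ptr<ClientSocketPoolHistograms> tcp_histograms_;
      scoped_ptr<TCPClientSocketPool> tcp_pool_;
      scoped_ptr<SSLClientSocketPool> ssl_pool_;  // Wraps tcp_pool_.
    };

    class HttpNetworkSession {
     private:
      ClientSocketPoolManager socket_pool_manager_;  // Owns all the pools.
    };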
BUG=56532
TEST=ClientSocketPoolBaseTest.CancelBackupSocketAfterFinishingAllRequests
Review URL: http://codereview.chromium.org/3441034
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@60693 0039d316-1c4b-4281-b951-d872f2087c98
Note that this undoes the fix for http://crbug.com/49387 which is now unnecessary without the cycle.
Some other miscellaneous cleanup is thrown in here.
BUG=55175
TEST=none
Review URL: http://codereview.chromium.org/3418018
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@59873 0039d316-1c4b-4281-b951-d872f2087c98
should be displayed once and only once. Also, if more than one proxy is in use, socket pools with the same proxy will appear adjacent to each other.
BUG=39756
TEST=manual
Review URL: http://codereview.chromium.org/3328009
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@58826 0039d316-1c4b-4281-b951-d872f2087c98
In particular, we used to consider that a socket had been used whenever it got returned to the ClientSocketPool. But with preconnect, that is no longer true. Luckily, we now have UseHistory in the transport sockets. So I create a WasEverUsed() method in ClientSocket (sketched below), plumb it into all sockets, and use that in ClientSocketPoolBaseHelper instead of tracking whether the socket had been returned to the client.
This ultimately has two implications: we will record the correct values in the Net.HttpSocketType histograms, and we will use the correct timeout for preconnect sockets in ClientSocketPoolBaseHelper::CleanupIdleSockets().
BUG=none
TEST=none
Review URL: http://codereview.chromium.org/3353004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@58363 0039d316-1c4b-4281-b951-d872f2087c98
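
A sketch of the plumbing; the UseHistory accessor name is an assumption:

    class ClientSocket {
     public:
      // True if data was ever sent or received on this socket, regardless
      // of whether the socket was ever handed back to a client.
      virtual bool WasEverUsed() const = 0;
    };

    bool TCPClientSocketLibevent::WasEverUsed() const {
      return use_history_.was_used_to_convey_data();
    }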
BUG=50665
TEST=ClientSocketPoolBaseTest.AbortAllRequestsOnFlush
Review URL: http://codereview.chromium.org/3255005
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@58042 0039d316-1c4b-4281-b951-d872f2087c98
current socket pool state. Table padding slightly increased for legibility.
TEST=manual
BUG=39756
Review URL: http://codereview.chromium.org/3267002
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57869 0039d316-1c4b-4281-b951-d872f2087c98
Includes a fix. I forgot that the iterator is invalidated after std::map::erase().
Review URL: http://codereview.chromium.org/3135047
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57515 0039d316-1c4b-4281-b951-d872f2087c98
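
The erase-during-iteration idiom behind the fix mentioned above (C++03 style, matching the era; ShouldRemove and group_map_ are placeholders):

    GroupMap::iterator it = group_map_.begin();
    while (it != group_map_.end()) {
      if (ShouldRemove(it->second)) {
        // Post-increment yields the current position for erase() while
        // advancing |it| first - erase() invalidates only the erased iterator.
        group_map_.erase(it++);
      } else {
        ++it;
      }
    }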
the GroupMap.
BUG=49254
Review URL: http://codereview.chromium.org/3104026
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57346 0039d316-1c4b-4281-b951-d872f2087c98
We would like to test the impact of automatic retries when a TCP connection waits longer than a certain threshold for an ACK. We are observing a fair number of sockets where the connection had been established, but the sockets were not used thereafter.
r=mbelshe
Review URL: http://codereview.chromium.org/3191019
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57276 0039d316-1c4b-4281-b951-d872f2087c98
This reduces unnecessary NetLog spam: we used to log to the NetLog every time we created the backup ConnectJob, even though we usually wouldn't actually call Connect() on it. Now, we only create the backup ConnectJob when we intend to call Connect() on it (see the sketch below).
Includes some cleanup necessary for this:
* struct Group => class Group
* method_factory moves from CSP to Group
Review URL: http://codereview.chromium.org/3171017
TBR=willchan@chromium.org
Review URL: http://codereview.chromium.org/3198009
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57107 0039d316-1c4b-4281-b951-d872f2087c98
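
A sketch of the deferred creation, using the ScopedRunnableMethodFactory idiom of the era; the method names on Group are assumptions:

    void Group::StartBackupJobTimer(const std::string& group_name,
                                    ClientSocketPoolBaseHelper* pool) {
      const int kBackupDelayMs = 500;  // The backup socket timer interval.
      // Don't construct (or log) the backup ConnectJob yet; just arm a timer.
      // Most requests connect before the delay elapses, so the job is
      // usually never created at all.
      MessageLoop::current()->PostDelayedTask(
          FROM_HERE,
          method_factory_.NewRunnableMethod(
              &Group::OnBackupJobTimerFired, group_name, pool),
          kBackupDelayMs);
    }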
This reduces unnecessary NetLog spam: we used to log to the NetLog every time we created the backup ConnectJob, even though we usually wouldn't actually call Connect() on it. Now, we only create the backup ConnectJob when we intend to call Connect() on it.
Includes some cleanup necessary for this:
* struct Group => class Group
* method_factory moves from CSP to Group
Review URL: http://codereview.chromium.org/3171017
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@57100 0039d316-1c4b-4281-b951-d872f2087c98
I pulled out the code to update the socket connectivity stats.
I added defensive code that should preclude the crash that
was reported on the stability bot.
I added a second call to update the stats from
~ClientSocketHandle to ensure that we update the
related ClientSocket when we are torn down.
As noted in the original checkin:
Enable speculative preconnection by default
Added histogram to track preconnection utilization (vs waste).
BUG=42694
r=mbelshe
Review URL: http://codereview.chromium.org/3050040
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@55197 0039d316-1c4b-4281-b951-d872f2087c98
Leaks reported with this CL.
http://build.chromium.org/buildbot/memory/builders/Linux%20Heapcheck/builds/6130/steps/heapcheck%20test:%20net/logs/stdio
E.g.
Leak of 24 bytes in 1 objects allocated from:
@ 84aece net::SSLClientSocketNSS::BufferRecv
@ 84b161 net::SSLClientSocketNSS::DoTransportIO
@ 84ca1f net::SSLClientSocketNSS::DoHandshakeLoop
@ 84ca6b net::SSLClientSocketNSS::OnHandshakeIOComplete
@ 84cadc net::SSLClientSocketNSS::OnRecvComplete
@ 84cbb0 net::SSLClientSocketNSS::BufferRecvComplete
@ 84ea4b void DispatchToMethod
@ 84ea7b CallbackImpl::RunWithParams
@ 4b3a10 CallbackRunner::Run
@ 853e7e net::TCPClientSocketLibevent::DoReadCallback
@ 85426f net::TCPClientSocketLibevent::DidCompleteRead
@ 856a5c net::TCPClientSocketLibevent::ReadWatcher::OnFileCanReadWithoutBlocking
@ 93d8fd base::MessagePumpLibevent::FileDescriptorWatcher::OnFileCanReadWithoutBlocking
@ 93d966 base::MessagePumpLibevent::OnLibeventNotification
@ 9da639 event_process_active
@ 9da923 event_base_loop
@ 93dfd0 base::MessagePumpLibevent::Run
@ 8f2873 MessageLoop::RunInternal
@ 8f2893 MessageLoop::RunHandler
@ 8f2938 MessageLoop::Run
@ 44b7f9 TestCompletionCallback::WaitForResult
@ 6a1ee6 SSLClientSocketTest_ConnectMismatched_Test::TestBody
@ 961831 testing::Test::Run
@ 965026 testing::internal::TestInfoImpl::Run
@ 96515c testing::TestCase::Run
@ 965bbe testing::internal::UnitTestImpl::RunAllTests
@ 965d35 testing::UnitTest::Run
@ 4a4bf7 TestSuite::Run
@ 4a3b6d main
@ 2adff5bb11c4 __libc_start_main
Suppression:
{
<insert_a_suppression_name_here>
Heapcheck:Leak
fun:net::SSLClientSocketNSS::BufferRecv
fun:net::SSLClientSocketNSS::DoTransportIO
fun:net::SSLClientSocketNSS::DoHandshakeLoop
fun:net::SSLClientSocketNSS::OnHandshakeIOComplete
fun:net::SSLClientSocketNSS::OnRecvComplete
fun:net::SSLClientSocketNSS::BufferRecvComplete
fun:void DispatchToMethod
fun:CallbackImpl::RunWithParams
fun:CallbackRunner::Run
fun:net::TCPClientSocketLibevent::DoReadCallback
fun:net::TCPClientSocketLibevent::DidCompleteRead
fun:net::TCPClientSocketLibevent::ReadWatcher::OnFileCanReadWithoutBlocking
fun:base::MessagePumpLibevent::FileDescriptorWatcher::OnFileCanReadWithoutBlocking
fun:base::MessagePumpLibevent::OnLibeventNotification
fun:event_process_active
fun:event_base_loop
fun:base::MessagePumpLibevent::Run
fun:MessageLoop::RunInternal
fun:MessageLoop::RunHandler
fun:MessageLoop::Run
fun:TestCompletionCallback::WaitForResult
fun:SSLClientSocketTest_ConnectMismatched_Test::TestBody
fun:testing::Test::Run
fun:testing::internal::TestInfoImpl::Run
fun:testing::TestCase::Run
fun:testing::internal::UnitTestImpl::RunAllTests
fun:testing::UnitTest::Run
fun:TestSuite::Run
fun:main
fun:__libc_start_main
}
I added defensive code in ClientSocketHandle::ReleaseSocket(),
which should preclude the crash that was reported on the
stability bot.
I added a second call to ReleaseSocket() from
~ClientSocketHandle to ensure that we update the
related ClientSocket when we are torn down.
r=mbelshe
Review URL: http://codereview.chromium.org/3071022
TBR=jar@chromium.org
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@55090 0039d316-1c4b-4281-b951-d872f2087c98
I added defensive code in ClientSocketHandle::ReleaseSocket(),
which should preclude the crash that was reported on the
stability bot.
I added a second call to ReleaseSocket() from
~ClientSocketHandle to ensure that we update the
related ClientSocket when we are torn down.
r=mbelshe
Review URL: http://codereview.chromium.org/3071022
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@55071 0039d316-1c4b-4281-b951-d872f2087c98
Added histogram to track utilization (vs waste).
[The stability bot was reporting problems, so I'm reverting]
r=mbelshe
Review URL: http://codereview.chromium.org/3026038
TBR=jar@chromium.org
Review URL: http://codereview.chromium.org/3090011
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@54930 0039d316-1c4b-4281-b951-d872f2087c98
Added histogram to track utilization (vs waste).
r=mbelshe
Review URL: http://codereview.chromium.org/3026038
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@54771 0039d316-1c4b-4281-b951-d872f2087c98
BUG=50273
TEST=everything still builds, build is 10% faster on windows, same speed on mac/linux
TBR: erg
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@53716 0039d316-1c4b-4281-b951-d872f2087c98
This reverses the order in which sockets are removed from the idle list and deleted (see the sketch below).
It also detects and handles recursive calls to CleanupIdleSockets.
BUG=49387
TEST=none
Review URL: http://codereview.chromium.org/2856049
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@53374 0039d316-1c4b-4281-b951-d872f2087c98
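
A sketch combining the two parts of the fix; the guard flag and the IdleSocket helpers are assumptions:

    void ClientSocketPoolBaseHelper::CleanupIdleSockets(bool force) {
      if (in_cleanup_)
        return;  // Re-entered via a socket destructor; the outer call finishes.
      in_cleanup_ = true;
      std::list<IdleSocket>::iterator it = idle_sockets_.begin();
      while (it != idle_sockets_.end()) {
        if (force || it->ShouldCleanup(base::TimeTicks::Now())) {
          ClientSocket* socket = it->socket;
          it = idle_sockets_.erase(it);  // Remove from the list first...
          delete socket;  // ...then delete, which may re-enter this method.
        } else {
          ++it;
        }
      }
      in_cleanup_ = false;
    }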
Specifically, we defer asynchronous user callbacks to tasks.
BUG=48861
Review URL: http://codereview.chromium.org/2994003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@52509 0039d316-1c4b-4281-b951-d872f2087c98
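
A sketch of deferring a user callback to a task (2010-era Chromium task APIs; the helper names are assumptions):

    // Hypothetical helper that runs a deferred completion callback.
    static void RunCallback(CompletionCallback* callback, int result) {
      callback->Run(result);
    }

    void ClientSocketHandle::OnIOComplete(int result) {
      // Never invoke |user_callback_| synchronously from pool internals;
      // post a task so the callback can safely re-enter the pool.
      CompletionCallback* callback = user_callback_;
      user_callback_ = NULL;
      MessageLoop::current()->PostTask(
          FROM_HERE, NewRunnableFunction(&RunCallback, callback, result));
    }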
To support SSLClientSocketPool, ClientSocketPoolBase and ClientSocketHandle require a notion of additional error state reported from the pool. Over time, the error handling may become more integrated, alleviating the need for some of the additional error state.
To support getting HTTP proxy credentials from the user, the SSLClientSocketPool will release unauthenticated HttpProxyClientSockets into the pool as idle. However, it checks their authentication status when receiving one, completing the authentication once the user has provided the credentials.
BUG=30357
TEST=existing unit tests, ClientSocketPoolBaseTest.AdditionalErrorState*, SSLClientSocketPoolTest.*
Review URL: http://codereview.chromium.org/2870030
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@52275 0039d316-1c4b-4281-b951-d872f2087c98
This is so that the SSLSocketParam can hold any one of the existing SocketParams.
BUG=30357
TEST=existing unit tests
Review URL: http://codereview.chromium.org/2848029
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@52107 0039d316-1c4b-4281-b951-d872f2087c98
This is relandable now because we fixed a problem with the backup sockets,
which was the real reason for initially reverting.
We basically don't do late socket binding when a connect has already
been started for a request, even if another socket frees up earlier.
The reassignment logic was quite complicated, so I reworked it. Fixing
this bug was easy by changing the way FindTopStalledGroup worked, but
because that function is called in that loop, changing this case
caused the loop to go infinitely in some cases. This led me to look
into unwinding the loop.
The problem really came down to ReleaseSocket/DoReleaseSocket. Because
we allow for a pending queue of released sockets, we had to do this
looping (which has been a source of bugs before). To fix, I
eliminated the pending_releases queue. I also reworked the routes
through OnAvailableSocketSlot to unify them and always run asynchronously.
The result is that now we no longer have the loop. So when one
socket is released, we hand out exactly one socket. Note also that
this logic slightly changes the priority of how we recycle sockets.
Previously, we always consulted the TopStalledGroup. The TopStalledGroup
is really only interesting in the case where we're at our max global
socket limit, which is rarely the case. In the new logic, when a
socket is released, first priority goes to any pending socket in the
same group, regardless of that group's priority. The reason is simple: why
close a socket we already have open? Previously, if the released
socket's group was not the highest priority group, the socket would
be marked idle, then closed (to make space for a socket to the
TopStalledGroup), and finally a new socket created. I believe the
new algorithm, while not perfectly matching the priorities, is more
efficient (less churn on sockets), and also is more graceful to the
common case.
Finally OnAvailableSocketSlot does two things. First, it tries to
"give" the now available slot to a particular group, which is dependent
on how OnAvailableSocketSlot was called. If we're currently
stalled on max sockets, it will also check (after giving the socket
out) to see if we can somehow free something up to satisfy a
stalled group. If that second step fails for whatever reason,
we don't loop. In theory, this could mean that we go under the
socket max and didn't dish out some sockets right away. To make
sure that multiple stalled groups can get unblocked, we'll record
the number of stalled groups, and once in this mode,
OnAvailableSocketSlot will keep checking for stalled groups until the
count finally drops to zero.
BUG=47375
TEST=DelayedSocketBindingWaitingForConnect,CancelStalledSocketAtSocketLimit
Review URL: http://codereview.chromium.org/2938006
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@52050 0039d316-1c4b-4281-b951-d872f2087c98
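
A sketch of the reworked release path described in this message; every helper name not mentioned above is an assumption:

    void ClientSocketPoolBaseHelper::ReleaseSocket(const std::string& group_name,
                                                   ClientSocket* socket) {
      // The pending_releases queue is gone: the socket is returned to its
      // group immediately, and exactly one slot is handed out per release.
      AddIdleSocket(socket, group_name);
      // Always runs asynchronously, via a posted task.
      PostOnAvailableSocketSlot(group_name);
    }

    void ClientSocketPoolBaseHelper::OnAvailableSocketSlot(
        const std::string& group_name) {
      // First priority: a pending request in the released socket's own
      // group - why close a socket we already have open?
      if (AssignIdleSocketToGroup(group_name))
        return;
      // At the max socket limit, keep unblocking stalled groups until the
      // recorded count of stalled groups drops to zero.
      while (stalled_group_count_ > 0 && TryToCloseSocketForTopStalledGroup())
        --stalled_group_count_;
    }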
We basically don't do late socket binding when a connect has already
been started for a request, even if another socket frees up earlier.
The reassignment logic was quite complicated, so I reworked it. Fixing
this bug was easy by changing the way FindTopStalledGroup worked, but
because that function is called in that loop, changing this case
caused the loop to go infinitely in some cases. This led me to look
into unwinding the loop.
The problem really came down to ReleaseSocket/DoReleaseSocket. Because
we allow for a pending queue of released sockets, we had to do this
looping (which has been a source of bugs before). To fix, I
eliminated the pending_releases queue. I also reworked the routes
through OnAvailableSocketSlot to unify them and always run asynchronously.
The result is that now we no longer have the loop. So when one
socket is released, we hand out exactly one socket. Note also that
this logic slightly changes the priority of how we recycle sockets.
Previously, we always consulted the TopStalledGroup. The TopStalledGroup
is really only interesting in the case where we're at our max global
socket limit, which is rarely the case. In the new logic, when a
socket is released, first priority goes to any pending socket in the
same group, regardless of that group's priority. The reason is simple: why
close a socket we already have open? Previously, if the released
socket's group was not the highest priority group, the socket would
be marked idle, then closed (to make space for a socket to the
TopStalledGroup), and finally a new socket created. I believe the
new algorithm, while not perfectly matching the priorities, is more
efficient (less churn on sockets), and also is more graceful to the
common case.
Finally OnAvailableSocketSlot does two things. First, it tries to
"give" the now available slot to a particular group, which is dependent
on how OnAvailableSocketSlot was called. If we're currently
stalled on max sockets, it will also check (after giving the socket
out) to see if we can somehow free something up to satisfy a
stalled group. If that second step fails for whatever reason,
we don't loop. In theory, this could mean that we go under the
socket max and didn't dish out some sockets right away. To make
sure that multiple stalled groups can get unblocked, we'll record
the number of stalled groups, and once in this mode,
OnAvailableSocketSlot will keep checking for stalled groups until the
count finally drops to zero.
BUG=47375
TEST=DelayedSocketBindingWaitingForConnect,CancelStalledSocketAtSocketLimit
Review URL: http://codereview.chromium.org/2861023
TBR=mbelshe@chromium.org
BUG=48094
Review URL: http://codereview.chromium.org/2809052
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@51755 0039d316-1c4b-4281-b951-d872f2087c98
We basically don't do late socket binding when a connect has already
been started for a request, even if another socket frees up earlier.
The reassignment logic was quite complicated, so I reworked it. Fixing
this bug was easy by changing the way FindTopStalledGroup worked, but
because that function is called in that loop, changing this case
caused the loop to go infinitely in some cases. This led me to look
into unwinding the loop.
The problem really came down to ReleaseSocket/DoReleaseSocket. Because
we allow for a pending queue of released sockets, we had to do this
looping (which has been a source of bugs before). To fix, I
eliminated the pending_releases queue. I also reworked the routes
through OnAvailableSocketSlot to unify them and always run asynchronously.
The result is that now we no longer have the loop. So when one
socket is released, we hand out exactly one socket. Note also that
this logic slightly changes the priority of how we recycle sockets.
Previously, we always consulted the TopStalledGroup. The TopStalledGroup
is really only interesting in the case where we're at our max global
socket limit, which is rarely the case. In the new logic, when a
socket is released, first priority goes to any pending socket in the
same group, regardless of that group's priority. The reason is simple: why
close a socket we already have open? Previously, if the released
socket's group was not the highest priority group, the socket would
be marked idle, then closed (to make space for a socket to the
TopStalledGroup), and finally a new socket created. I believe the
new algorithm, while not perfectly matching the priorities, is more
efficient (less churn on sockets), and also is more graceful to the
common case.
Finally OnAvailableSocketSlot does two things. First, it tries to
"give" the now available slot to a particular group, which is dependent
on how OnAvailableSocketSlot was called. If we're currently
stalled on max sockets, it will also check (after giving the socket
out) to see if we can somehow free something up to satisfy a
stalled group. If that second step fails for whatever reason,
we don't loop. In theory, this could mean that we go under the
socket max and didn't dish out some sockets right away. To make
sure that multiple stalled groups can get unblocked, we'll record
the number of stalled groups, and once in this mode,
OnAvailableSocketSlot will keep checking for stalled groups until the
count finally drops to zero.
BUG=47375
TEST=DelayedSocketBindingWaitingForConnect,CancelStalledSocketAtSocketLimit
Review URL: http://codereview.chromium.org/2861023
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@51081 0039d316-1c4b-4281-b951-d872f2087c98
* Use a process-wide object (singleton pattern)
* Create/destroy this object on the main thread, make it outlive all consumers
* Make observer-related functions threadsafe
As a result, the notifier can now be used by any thread (eliminating things like NetworkChangeObserverProxy and NetworkChangeNotifierProxy, and expanding its usefulness); its creation and inner workings are much simplified (eliminating implementation-specific classes); and it is simpler to access (eliminating things like NetworkChangeNotifierThread and a LOT of passing pointers around).
BUG=none
TEST=Unittests; network changes still trigger notifications
Review URL: http://codereview.chromium.org/2802015
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@50895 0039d316-1c4b-4281-b951-d872f2087c98
timeout defaults to 10 seconds. Having this value set too low won't allow us
to take advantage of idle sockets. Setting it too high could possibly
result in more ERR_CONNECT_RESETs, requiring one RTT to receive the RST packet
and possibly another RTT to re-establish the connection.
r=jar
Review URL: http://codereview.chromium.org/2827016
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@50364 0039d316-1c4b-4281-b951-d872f2087c98
I reverted it the first time because it was suspected of causing a hang on the IO thread. A different CL caused that.
When I relanded it previously, the fix for the hang on the IO thread broke this (changed an assertion). I've now deleted that assertion.
BUG=32817
Review URL: http://codereview.chromium.org/2716004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@49444 0039d316-1c4b-4281-b951-d872f2087c98
revert again soon."
Review URL: http://codereview.chromium.org/2761001
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@49146 0039d316-1c4b-4281-b951-d872f2087c98
soon.
Review URL: http://codereview.chromium.org/2760001
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@49145 0039d316-1c4b-4281-b951-d872f2087c98
network change).
Implements this functionality by adding an |id_| field to ClientSocketPoolBaseHelper that is incremented on each Flush().
BUG=45872
Review URL: http://codereview.chromium.org/2647003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@49076 0039d316-1c4b-4281-b951-d872f2087c98
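
A sketch of the generation-counter mechanism; the use of |id_| on the release path is an assumption consistent with the description:

    void ClientSocketPoolBaseHelper::Flush() {
      ++id_;  // Sockets handed out before the flush now carry a stale id.
      CloseIdleSockets();
    }

    void ClientSocketPoolBaseHelper::ReleaseSocket(const std::string& group_name,
                                                   ClientSocket* socket,
                                                   int id) {
      if (id != id_) {
        // The socket predates the last Flush() (e.g. a network change):
        // close it instead of returning it to the idle list.
        delete socket;
        return;
      }
      AddIdleSocket(socket, group_name);  // Assumed helper.
    }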
In particular, make it work better when using backup jobs / late binding (the display was very confused before because of how these asynchronous events would nest).
Also changed the paradigm for how PassiveLogCollector preserves these async associations -- this fixes how it replays the events to net-internals. (Before, we would collapse the event streams into the SOURCE_URL_REQUEST, which lost some hierarchy; now I keep the separate streams.)
Some of the particular changes to the event streams:
* ConnectJobs now create their own source stream internally.
* Sockets are now bounded by +SOCKET_ALIVE / -SOCKET_ALIVE events (removed the one-off SOCKET_DONE event).
* The socket log streams contain +SOCKET_IN_USE / -SOCKET_IN_USE event blocks to show which URLRequest was controlling the socket at various points in time (this makes it much easier to understand which reads/writes belonged to a particular network transaction when a socket gets re-used).
* ConnectJobs are bounded by +SOCKET_POOL_CONNECT_JOB / -SOCKET_POOL_CONNECT_JOB events.
* ConnectJobs log the net error they failed with.
* Removed the SOCKET_BACKUP_TIMER_EXTENDED event.
Review URL: http://codereview.chromium.org/2363003
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@48797 0039d316-1c4b-4281-b951-d872f2087c98
Also change their names so that they appear all together on the histograms
page.
BUG=43375
TEST=none
Review URL: http://codereview.chromium.org/2029004
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@47843 0039d316-1c4b-4281-b951-d872f2087c98
BUG=40455,40457
Review URL: http://codereview.chromium.org/2104013
git-svn-id: svn://svn.chromium.org/chrome/trunk/src@47753 0039d316-1c4b-4281-b951-d872f2087c98