I am experimenting with the limit and limit_per_host parameters to aiohttp.connector.TCPConnector. In the script below, I pass connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5) to aiohttp.ClientSession, then open 2 requests to docs.aiohttp.org and 3 to github.com.
The result of session.request is an instance of aiohttp.ClientResponse, and in this example I intentionally do not close it, either via .close() or via __aexit__. I would assume this leaves the connection held open and decreases the available connections for that (host, ssl, port) triple by 1.
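To make that concrete, here is a minimal sketch (using only aiohttp's public API, with a URL picked purely for illustration) of the difference between the usual pattern, where the response is read and released, and what my script does, which is to hold the raw ClientResponse without reading or closing it:

import asyncio

import aiohttp


async def demo() -> None:
    async with aiohttp.ClientSession() as session:
        # Usual pattern: __aexit__ (or .release()/.close()) returns the
        # connection to the pool once the body has been read.
        async with session.get("https://docs.aiohttp.org/en/stable/") as resp:
            await resp.read()

        # What my script does instead: hold the ClientResponse without
        # reading the body or closing it. My assumption is that the
        # underlying connection then stays acquired by the connector.
        dangling = await session.get("https://docs.aiohttp.org/en/stable/")
        print(dangling.status)


if __name__ == "__main__":
    asyncio.run(demo())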
The table below shows ._available_connections() after each request. Why does the number hang at 4 even after the second request to docs.aiohttp.org completes? Both of those connections are presumably still open; I haven't accessed ._content on either response or closed them. Shouldn't the available count decrease by 1 again?
After request num.   To                 _available_connections
1                    docs.aiohttp.org   4
2                    docs.aiohttp.org   4   <--- Why?
3                    github.com         4
4                    github.com         3
5                    github.com         2
Furthermore, why does ._acquired_per_host only ever contain one key? I suspect I am misunderstanding the methods of TCPConnector; what explains the behavior above?
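For reference, my mental model of the bookkeeping — and the reason I expected the numbers above — is roughly the following. This is illustrative pseudocode of my own, not aiohttp's implementation; the names merely mirror the private attributes printed by the script:

from collections import defaultdict

LIMIT = 25           # TCPConnector(limit=25, ...)
LIMIT_PER_HOST = 5   # TCPConnector(..., limit_per_host=5)

acquired = set()                      # every connection currently checked out
acquired_per_host = defaultdict(set)  # (host, port, ssl) key -> its checked-out connections


def available_connections(key) -> int:
    # Global budget: 25 minus everything currently acquired ...
    available = LIMIT - len(acquired)
    # ... further capped by the per-host budget once this key has
    # acquired connections of its own.
    if key in acquired_per_host:
        available = min(available, LIMIT_PER_HOST - len(acquired_per_host[key]))
    return available

Under this model I would expect every un-closed response to subtract 1 from the per-host count, which is why the second docs.aiohttp.org row surprises me.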
Full script:
import aiohttp


async def main():
    connector = aiohttp.connector.TCPConnector(limit=25, limit_per_host=5)
    print("Connector arguments:")
    print("_limit:", connector._limit)
    print("_limit_per_host:", connector._limit_per_host)
    print("-" * 70, end="\n\n")

    async with aiohttp.client.ClientSession(
        connector=connector,
        headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36"},
        raise_for_status=True
    ) as session:
        # Make 2 connections to docs.aiohttp.org and
        # 3 connections to github.com
        #
        # Note that these instances intentionally do not use
        # .close(), either explicitly or via __aexit__
        # in an async with block
        r1 = await session.request(
            "GET",
            "https://docs.aiohttp.org/en/stable/client_reference.html#connectors"
        )
        print_connector_attrs("r1", session)

        r2 = await session.request(
            "GET",
            "https://docs.aiohttp.org/en/stable/index.html"
        )
        print_connector_attrs("r2", session)

        r3 = await session.request(
            "GET",
            "https://github.com/aio-libs/aiohttp/blob/master/aiohttp/client.py"
        )
        print_connector_attrs("r3", session)

        r4 = await session.request(
            "GET",
            "https://github.com/python/cpython/blob/3.7/Lib/typing.py"
        )
        print_connector_attrs("r4", session)

        r5 = await session.request(
            "GET",
            "https://github.com/aio-libs/aiohttp"
        )
        print_connector_attrs("r5", session)


def print_connector_attrs(name: str, session: aiohttp.client.ClientSession):
    print("Connection attributes for", name, end="\n\n")
    conn = session._connector
    print("_conns:", conn._conns, end="\n\n")
    print("_acquired:", conn._acquired, end="\n\n")
    print("_acquired_per_host:", conn._acquired_per_host, end="\n\n")
    print("_available_connections:")
    for k in conn._acquired_per_host:
        print("\t", k, conn._available_connections(k))
    print("-" * 70, end="\n\n")


if __name__ == "__main__":
    import asyncio
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
The output is pasted at https://pastebin.com/rvfzMTe3. I've put it there rather than inline because the lines are long and don't wrap well.