Friday, 24 January 2020

How do I get a secondary list item from ThreadPoolExecutor while sending requests?

The Python documentation for ThreadPoolExecutor gives this example of sending requests:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))

And if the URL list was adjusted as such:

URLS = [['http://www.foxnews.com/', 'American'],
        ['http://www.cnn.com/', 'American'],
        ['http://europe.wsj.com/', 'European'],
        ['http://www.bbc.co.uk/', 'European'],
        ['http://some-made-up-domain.com/', 'Unknown']]

You can still pull the URL out by indexing each inner list (note that the submit call needs url[0] as well, since load_url expects a plain URL string):

future_to_url = {executor.submit(load_url, url[0], 60): url[0] for url in URLS}

What I'm struggling with is how to extract the region from this list (index 1) and include it in the as_completed result, so the print becomes something like:

print('%r %r page is %d bytes' % (region, url, len(data)))
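
One way to do this (a minimal sketch, not part of the original question; the future_to_info name is introduced here for illustration) is to store the whole (url, region) pair as the dictionary value and unpack it inside the loop. Reusing load_url and the two-element URLS list from above:

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Map each future to its (url, region) pair instead of just the URL
    future_to_info = {executor.submit(load_url, url, 60): (url, region)
                      for url, region in URLS}
    for future in concurrent.futures.as_completed(future_to_info):
        url, region = future_to_info[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r %r generated an exception: %s' % (region, url, exc))
        else:
            print('%r %r page is %d bytes' % (region, url, len(data)))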


from How do I get a secondary list item from ThreadPoolExecutor while sending requests?
