If you've been working with Python for a while, especially on data scraping, you've probably run into situations where you're blocked while trying to retrieve the data you want. In such cases, knowing how to use a proxy is a handy skill to have.
In this article, we'll explore what proxies are, why they're useful, and how to use them with the requests library in Python.
Let’s start from the beginning by defining what a proxy is.
You can think of a proxy server as a “middleman” between your computer and the internet. When you send a request to a website, the request goes through the proxy server first. The proxy then forwards your request to the website, receives the response, and sends it back to you. This process masks your IP address, making it appear as if the request is coming from the proxy server instead of your own device.
As you can imagine, this has many consequences and uses. For example, it can be used to bypass pesky IP restrictions or to maintain anonymity.
So, why might proxies be helpful when scraping data? We already hinted at one reason: bypassing restrictions. In the particular case of web scraping, they're useful for avoiding IP bans and rate limits, masking your real IP address, and reaching content that's only served to certain regions.
The requests library is a popular choice for making HTTP requests in Python, and incorporating proxies into your requests is straightforward.
Let’s see how!
First things first: you need valid proxies before you can actually use them. To get some, you have two options: pick addresses from one of the free proxy lists available online (these tend to be slow and short-lived), or subscribe to a paid proxy provider.
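However you source your proxies, it's worth verifying that each one actually works before relying on it. Here's a minimal sketch of such a check; it assumes the proxy speaks plain HTTP and uses httpbin.org as a test endpoint:

```python
import requests

def check_proxy(proxy_url, timeout=5):
    """Return True if the proxy can fetch httpbin.org/ip within `timeout` seconds."""
    proxies = {'http': proxy_url, 'https': proxy_url}
    try:
        response = requests.get('https://httpbin.org/ip',
                                proxies=proxies, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        # Covers connection errors, timeouts, proxy errors, etc.
        return False
```

You can run your whole list through this function and keep only the proxies that return `True`.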
Now that you have your list of proxies, you can start using them. For example, you can create a dictionary like so:
```python
proxies = {
    'http': 'http://proxy_ip:proxy_port',
    'https': 'https://proxy_ip:proxy_port',
}
```
Now you can make a request using the proxies:
```python
import requests

proxies = {
    'http': 'http://your_proxy_ip:proxy_port',
    'https': 'https://your_proxy_ip:proxy_port',
}

response = requests.get('https://httpbin.org/ip', proxies=proxies)
```
To see the outcome of your request, you can print the response:
```python
print(response.status_code)  # Should return 200 if successful
print(response.text)         # Prints the content of the response
```
Note that, if everything went smoothly, the response should display the IP address of the proxy server, not yours.
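In practice, proxies fail often, so it's wise to set a timeout and handle errors rather than let a dead proxy hang your script. Here's a hedged sketch; the proxy address is a placeholder you'd replace with a real one:

```python
import requests

def fetch_ip(proxies, timeout=10):
    """Fetch httpbin's view of our IP through `proxies`; return it, or None on failure."""
    try:
        response = requests.get('https://httpbin.org/ip',
                                proxies=proxies, timeout=timeout)
        response.raise_for_status()
        return response.json()['origin']
    except requests.RequestException:
        # Proxy refused the connection, timed out, or returned an error status
        return None

# Placeholder values — substitute a working proxy
proxies = {'http': 'http://proxy_ip:8080', 'https': 'http://proxy_ip:8080'}
print(fetch_ip(proxies))  # The proxy's public IP if it works, None otherwise
```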
If your proxy requires authentication, you can handle it in a couple of ways.
Method 1: Including Credentials in the Proxy URL
To handle authentication, you can include the username and password directly in the proxy URL:
```python
proxies = {
    'http': 'http://username:password@proxy_ip:proxy_port',
    'https': 'https://username:password@proxy_ip:proxy_port',
}
```
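One caveat: if the username or password contains characters like `@` or `:`, the URL becomes ambiguous and must be percent-encoded. A small helper (the credential and host values below are made up for illustration) could look like this:

```python
from urllib.parse import quote

def build_proxy_url(username, password, host, port):
    """Build a proxy URL, percent-encoding credentials with special characters."""
    return f'http://{quote(username, safe="")}:{quote(password, safe="")}@{host}:{port}'

# '@' becomes %40 and ':' becomes %3A, so the URL parses correctly
url = build_proxy_url('alice', 'p@ss:word', 'proxy_ip', 8080)
print(url)  # http://alice:p%40ss%3Aword@proxy_ip:8080
```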
Method 2: Using HTTPProxyAuth
Alternatively, you can use the HTTPProxyAuth class to handle authentication like so:
```python
import requests
from requests.auth import HTTPProxyAuth

proxies = {
    'http': 'http://proxy_ip:proxy_port',
    'https': 'https://proxy_ip:proxy_port',
}
auth = HTTPProxyAuth('username', 'password')

response = requests.get('https://httpbin.org/ip', proxies=proxies, auth=auth)
```
Using a single proxy might not be sufficient if you're making numerous requests. In this case, you can use a rotating proxy: this changes the proxy IP address at regular intervals or per request.
If you'd like to try this, you have two options: rotate proxies manually using a list, or use a proxy rotation service.
Let’s see both approaches!
If you have a list of proxies, you can rotate them manually like so:
```python
import random
import requests

proxies_list = [
    'http://proxy1_ip:port',
    'http://proxy2_ip:port',
    'http://proxy3_ip:port',
    # Add more proxies as needed
]

def get_random_proxy():
    proxy = random.choice(proxies_list)
    return {
        'http': proxy,
        'https': proxy,
    }

for i in range(10):
    proxy = get_random_proxy()
    response = requests.get('https://httpbin.org/ip', proxies=proxy)
    print(response.text)
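One weakness of the loop above is that a single dead proxy crashes the run. A more robust variation retries with a different proxy on failure. This is a sketch under the same assumptions (placeholder proxy addresses, httpbin.org as the target):

```python
import random
import requests

def fetch_with_rotation(url, proxies_list, max_attempts=3, timeout=5):
    """Try up to `max_attempts` distinct random proxies; return the first
    successful response, or None if every attempt fails."""
    candidates = random.sample(proxies_list, min(max_attempts, len(proxies_list)))
    for proxy in candidates:
        try:
            return requests.get(url,
                                proxies={'http': proxy, 'https': proxy},
                                timeout=timeout)
        except requests.RequestException:
            continue  # This proxy failed — move on to the next one
    return None
```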
Services like ScraperAPI handle proxy rotation for you. Typically, you just point your requests at the single proxy URL they provide:
```python
proxies = {
    'http': 'http://your_service_proxy_url',
    'https': 'https://your_service_proxy_url',
}

response = requests.get('https://httpbin.org/ip', proxies=proxies)
```
Using a proxy in Python is a valuable technique for web scraping, testing, and accessing geo-restricted content. As we've seen, integrating proxies into your HTTP requests is straightforward with the requests library.
A few parting tips when scraping data from the web: respect each site's terms of service and robots.txt, throttle your requests so you don't overload the server, and always handle failures gracefully, since proxies come and go.
Happy coding!