Last modified: Jan 28, 2026, by Alexander Williams
Python Download File from URL Tutorial
Downloading files from the web is a common task, and Python makes it simple: you can fetch images, documents, and data with only a few lines of code. This guide shows you how.
We will use two main libraries: requests, which is popular and easy to use, and the built-in urllib module, which works without any extra installs.
Prerequisites for File Download
You need Python installed. Check your version from the command line:
python --version
You may also need to install the requests library, since it is not part of the standard library. Install it with pip:
pip install requests
Now you are ready to write code. Let's start with the requests method.
Download Files Using the Requests Library
The requests library is the easiest way to download files. It handles HTTP requests smoothly, so you can fetch content in a few simple steps.
First, import the library. Then call the requests.get() function, which fetches the data from the URL.
import requests

# URL of the file to download
file_url = 'https://example.com/sample.pdf'

# Send a GET request to the URL
response = requests.get(file_url)

# Check if the request was successful
if response.status_code == 200:
    # Open a local file in binary write mode
    with open('downloaded_file.pdf', 'wb') as file:
        # Write the content of the response to the file
        file.write(response.content)
    print("File downloaded successfully!")
else:
    print(f"Failed to download file. Status code: {response.status_code}")
The code sends a GET request and saves the response body to a local file. The 'wb' mode opens the file for writing binary data, which is crucial for non-text files such as PDFs and images.
Always check the status code: 200 means success, while other codes indicate errors.
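Beyond the status code, the response headers can tell you what you actually received before you write it to disk. As a minimal sketch (the helper name is ours, not part of requests), this works with response.headers or any plain dict:

```python
# Summarize a response's headers before saving the body to disk.
# Works with requests' response.headers or any plain mapping.
def describe_headers(headers):
    content_type = headers.get("Content-Type", "unknown")
    size = int(headers.get("Content-Length", 0))
    return content_type, size
```

You could, for example, refuse to save the file when describe_headers(response.headers) reports an unexpected content type.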
Handling Large File Downloads
Downloading large files requires care: loading the entire response into memory can exhaust your RAM. Use streaming to download in chunks instead.
The requests.get() method has a stream parameter. Set it to True, then iterate over the response in chunks.
import requests

file_url = 'https://example.com/large_video.mp4'
response = requests.get(file_url, stream=True)

if response.status_code == 200:
    with open('large_video.mp4', 'wb') as file:
        # Iterate over the response data in chunks of 1024 bytes
        for chunk in response.iter_content(chunk_size=1024):
            # Write each chunk to the file
            file.write(chunk)
    print("Large file downloaded successfully!")
else:
    print("Download failed.")
This method is memory efficient: it downloads the file piece by piece and writes each chunk to disk, so memory usage stays low no matter how large the file is.
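If you also want progress feedback while streaming, the chunked loop can be factored into a helper. This is a sketch, not part of requests — the function name is ours, and it accepts any object with a read() method:

```python
# Copy a file-like source to disk in fixed-size chunks,
# printing percentage progress when the total size is known.
def save_in_chunks(source, dest_path, total_size=None, chunk_size=1024):
    written = 0
    with open(dest_path, "wb") as out:
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            written += len(chunk)
            if total_size:
                percent = written * 100 // total_size
                print(f"\rDownloaded {percent}%", end="")
    if total_size:
        print()
    return written
```

With requests you could pass response.raw (the underlying stream exposed when stream=True) as the source and int(response.headers.get('Content-Length', 0)) as total_size.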
Download Files Using urllib
Python's standard library includes urllib, so you do not need to install anything extra. It is a good fallback option.
Use the urllib.request.urlretrieve() function. It is a one-line solution for simple downloads.
import urllib.request
file_url = 'https://example.com/image.jpg'
local_filename = 'local_image.jpg'
# Download the file and save it locally
urllib.request.urlretrieve(file_url, local_filename)
print(f"File saved as {local_filename}")
This function is straightforward: it takes the URL and a local path and handles the download automatically.
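urlretrieve() also accepts an optional reporthook callback, which it invokes after each block is transferred. A sketch of a simple progress printer (the function name is ours):

```python
# Progress callback for urllib.request.urlretrieve: it is called with
# the number of blocks transferred so far, the block size in bytes,
# and the total file size in bytes (-1 if unknown).
def report_progress(block_num, block_size, total_size):
    if total_size > 0:
        downloaded = block_num * block_size
        percent = min(100, downloaded * 100 // total_size)
        print(f"\r{percent}% downloaded", end="")

# Usage (the URL and filename are placeholders):
# urllib.request.urlretrieve(file_url, local_filename, reporthook=report_progress)
```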
For more control, use urllib.request.urlopen(). You can read the data and write it yourself.
import urllib.request

file_url = 'https://example.com/data.txt'

# Open the URL
with urllib.request.urlopen(file_url) as response:
    # Read the data
    data = response.read()

# Save the data to a file
with open('data.txt', 'wb') as file:
    file.write(data)

print("Download complete.")
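Note that response.read() loads the whole body into memory. To stream it straight to disk instead, you can combine urlopen() with shutil.copyfileobj(). A minimal sketch (the function name is ours):

```python
import shutil
import urllib.request

# Stream a URL's response body to disk in buffered chunks,
# without loading the whole file into memory.
def stream_to_file(url, dest_path):
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        shutil.copyfileobj(response, out)
```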
Best Practices and Error Handling
Always handle exceptions: networks can fail, and files might not exist. Use try-except blocks.
Check for common errors such as connection failures and timeouts so your code stays robust.
import requests
from requests.exceptions import RequestException

file_url = 'https://example.com/missing_file.zip'

try:
    response = requests.get(file_url, timeout=10)
    response.raise_for_status()  # Raises an HTTPError for bad responses
    with open('file.zip', 'wb') as f:
        f.write(response.content)
    print("Success!")
except RequestException as e:
    print(f"An error occurred: {e}")
This code catches request-related exceptions. The raise_for_status() method checks for HTTP errors. It makes error handling cleaner.
Also, respect server resources: do not bombard a server with requests, and add delays when downloading many files.
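One way to space out many downloads is a small wrapper that sleeps between requests. This is a sketch, not a library API — fetch can be any callable that downloads a single URL, such as a wrapper around requests.get:

```python
import time

# Download each URL in turn, pausing between requests so the
# server is not hit with a rapid burst.
def download_many(urls, fetch, delay_seconds=1.0):
    results = []
    for i, url in enumerate(urls):
        results.append(fetch(url))
        if i < len(urls) - 1:
            time.sleep(delay_seconds)
    return results
```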
Conclusion
Downloading files in Python is a vital skill. The requests library offers a simple, powerful way to do it, and the built-in urllib module is a reliable alternative.
Remember to handle errors, use streaming for large files, and always check HTTP status codes.
You can now integrate file downloads into your scripts to fetch datasets, images, or documents automatically. Python makes web interactions easy.