Last modified: Jan 01, 2026 by Alexander Williams
Manage AWS S3 Buckets with Python Boto3
Amazon S3 is a core cloud storage service. It stores data as objects in buckets. Python is a great tool for managing it.
This guide shows you how to use Python with the Boto3 library. You will learn to automate S3 tasks. This includes bucket creation and file management.
We will cover essential operations step by step. You will see practical code examples. This makes learning easy and effective.
Prerequisites
You need a few things before starting. First, have an AWS account. You also need IAM credentials with S3 permissions.
Install Python on your machine. Then, install the Boto3 SDK. Use pip for installation.
pip install boto3
For a deeper dive into Boto3, check our Getting Started with AWS SDK for Python Boto3 guide.
Configure your AWS credentials. Use the AWS CLI or environment variables. This lets Boto3 authenticate your requests.
Connecting to S3 with Boto3
Start by importing the Boto3 library. Then, create a client or resource object. This object is your interface to S3.
import boto3
# Create a low-level client
s3_client = boto3.client('s3')
# Create a high-level resource (more Pythonic)
s3_resource = boto3.resource('s3')
The client offers low-level, fine-grained control. The resource provides a higher-level, more Pythonic interface. Choose based on your task's complexity.
Both objects need proper AWS credentials. They will use the default profile. Ensure your IAM user has S3 access rights.
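If you need a different profile or region than the defaults, one option is to create an explicit session. This is just a sketch; 'my-profile' is a placeholder for a profile defined in your AWS credentials file.
import boto3

# Use a named profile and an explicit region instead of the defaults
# ('my-profile' is a placeholder; use a profile from ~/.aws/credentials)
session = boto3.Session(profile_name='my-profile', region_name='us-west-2')
s3_client = session.client('s3')
s3_resource = session.resource('s3')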
Creating an S3 Bucket
Buckets are containers for your objects. Use the create_bucket method. You must provide a unique bucket name.
from botocore.exceptions import ClientError

def create_bucket(bucket_name, region='us-east-1'):
    """Creates a new S3 bucket in the given region."""
    try:
        if region == 'us-east-1':
            # us-east-1 is the default region and must not be passed
            # as a LocationConstraint
            s3_client.create_bucket(Bucket=bucket_name)
        else:
            s3_client.create_bucket(
                Bucket=bucket_name,
                CreateBucketConfiguration={'LocationConstraint': region}
            )
        print(f"Bucket '{bucket_name}' created successfully in {region}.")
    except ClientError as e:
        print(f"Error creating bucket: {e}")

# Example usage
create_bucket('my-unique-python-bucket-2023', 'us-west-2')
Bucket 'my-unique-python-bucket-2023' created successfully in us-west-2.
Bucket names must be globally unique. The region parameter is important. It affects latency and data governance. Note that us-east-1 is the default region and must not be passed as a LocationConstraint.
Always handle exceptions. This catches errors like duplicate names. It makes your scripts more robust.
Listing Buckets and Objects
You can list all your S3 buckets. Use the list_buckets method. It returns a dictionary with bucket details.
def list_all_buckets():
    """Lists all S3 buckets in your account."""
    response = s3_client.list_buckets()
    print("Your S3 Buckets:")
    for bucket in response['Buckets']:
        print(f" - {bucket['Name']} (Created: {bucket['CreationDate']})")

list_all_buckets()
To list objects inside a bucket, use list_objects_v2. This shows files and prefixes (folders).
def list_bucket_contents(bucket_name):
    """Lists all objects within a specified bucket."""
    try:
        response = s3_client.list_objects_v2(Bucket=bucket_name)
        if 'Contents' in response:
            for obj in response['Contents']:
                print(f" - {obj['Key']} (Size: {obj['Size']} bytes)")
        else:
            print("Bucket is empty.")
    except Exception as e:
        print(f"Error listing contents: {e}")

list_bucket_contents('my-unique-python-bucket-2023')
Uploading Files to S3
Uploading is a common task. Use the upload_file method. It handles large files efficiently.
def upload_file_to_s3(file_path, bucket_name, object_name=None):
    """Uploads a file to an S3 bucket."""
    if object_name is None:
        object_name = file_path
    try:
        s3_client.upload_file(file_path, bucket_name, object_name)
        print(f"File '{file_path}' uploaded to '{bucket_name}/{object_name}'.")
    except Exception as e:
        print(f"Error uploading file: {e}")

# Upload a local file
upload_file_to_s3('report.pdf', 'my-unique-python-bucket-2023', 'documents/report.pdf')
You can also upload data directly from memory. Use put_object for strings or bytes. This is good for logs or generated content.
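As a small sketch of this, put_object can write an in-memory string; the bucket and key names below are only examples.
log_data = "2026-01-01 12:00:00 INFO application started\n"

# put_object accepts bytes or a file-like object as Body
s3_client.put_object(
    Bucket='my-unique-python-bucket-2023',
    Key='logs/app.log',
    Body=log_data.encode('utf-8')
)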
Downloading Files from S3
Download objects to your local machine. The download_file method is simple. Specify bucket, key, and local filename.
def download_file_from_s3(bucket_name, object_key, download_path):
    """Downloads a file from S3 to a local path."""
    try:
        s3_client.download_file(bucket_name, object_key, download_path)
        print(f"File '{object_key}' downloaded to '{download_path}'.")
    except Exception as e:
        print(f"Error downloading file: {e}")

download_file_from_s3('my-unique-python-bucket-2023', 'documents/report.pdf', './local_report.pdf')
Managing Bucket Security and Policies
Security is crucial. You can manage access with Python. Set bucket policies and block public access.
Use put_bucket_policy to apply a JSON policy. This controls who can access your bucket. Always follow the principle of least privilege.
import json

def apply_bucket_policy(bucket_name):
    """Applies a policy that denies all non-HTTPS access to the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*"
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}}
            }
        ]
    }
    s3_client.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
    print(f"Policy applied to '{bucket_name}' to deny non-HTTPS traffic.")
Never store sensitive credentials in your code. Use IAM roles or AWS Secrets Manager. This is a critical security practice.
Automating S3 Tasks
Python scripts can automate S3 management. Combine operations for backups, syncing, or cleanup.
For example, automate a daily backup. Upload logs or database dumps. Use scheduled tasks or cron jobs.
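A minimal backup sketch could look like the following, assuming a local file named backup.sql and the client created earlier; schedule it with cron or Task Scheduler.
import datetime

def backup_to_s3(local_path, bucket_name):
    """Uploads a local file under a dated prefix, e.g. backups/2026-01-01/."""
    date_prefix = datetime.date.today().isoformat()
    key = f"backups/{date_prefix}/{local_path}"
    s3_client.upload_file(local_path, bucket_name, key)
    print(f"Backed up '{local_path}' to 's3://{bucket_name}/{key}'")

# Example: run once per day from a scheduled job
backup_to_s3('backup.sql', 'my-unique-python-bucket-2023')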
You can run these scripts on servers. Better yet, run them serverless. Use AWS Lambda for event-driven automation.
Learn how in our Deploy Python Apps on AWS Lambda Guide. It covers packaging and triggers.
Need specific packages? See our guide on how to Install Python Package in AWS Lambda.
Error Handling and Best Practices
Always implement error handling. Use try-except blocks. Catch specific Boto3 exceptions like ClientError.
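For instance, you can inspect a ClientError's error code to react to specific failures; head_bucket is used here purely as an illustration.
from botocore.exceptions import ClientError

try:
    s3_client.head_bucket(Bucket='my-unique-python-bucket-2023')
except ClientError as e:
    error_code = e.response['Error']['Code']
    if error_code == '404':
        print("Bucket does not exist.")
    elif error_code == '403':
        print("Access denied to bucket.")
    else:
        raise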
Use pagination for large lists. S3 limits responses to 1000 objects. Paginators handle this automatically.
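A minimal pagination sketch looks like this:
paginator = s3_client.get_paginator('list_objects_v2')

# Each page holds up to 1000 objects; the paginator fetches them all
for page in paginator.paginate(Bucket='my-unique-python-bucket-2023'):
    for obj in page.get('Contents', []):
        print(obj['Key'])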
Enable versioning on important buckets. It protects against accidental deletion. You can manage versions with Boto3 too.
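Enabling versioning is a single call, for example:
s3_client.put_bucket_versioning(
    Bucket='my-unique-python-bucket-2023',
    VersioningConfiguration={'Status': 'Enabled'}
)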
Consider cost. Monitor storage and request counts. Use lifecycle rules to archive or delete old data.
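As a sketch, the lifecycle rule below archives objects under a logs/ prefix to Glacier after 90 days and deletes them after a year; adjust the prefix and timings to your own data.
s3_client.put_bucket_lifecycle_configuration(
    Bucket='my-unique-python-bucket-2023',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'archive-then-expire-logs',
                'Filter': {'Prefix': 'logs/'},
                'Status': 'Enabled',
                'Transitions': [{'Days': 90, 'StorageClass': 'GLACIER'}],
                'Expiration': {'Days': 365}
            }
        ]
    }
)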
Conclusion
Python and Boto3 make S3 management easy. You can create buckets and upload files. You can also automate complex workflows.
Start with the basics shown here. Then explore advanced features. These include events, encryption, and replication.
Combine S3 with other AWS services. For instance, trigger a Lambda function on file upload. This enables powerful serverless applications.
For building web APIs that interact with S3, consider Deploy FastAPI to AWS Lambda with Mangum.
Remember to follow security best practices. Manage permissions carefully. Use IAM roles for production applications.
Happy coding with AWS S3 and Python.