Last modified: May 05, 2025, by Alexander Williams

Asynchronous Processing in Plone with Celery and Redis

Plone is a powerful CMS, but long-running tasks can slow it down. Asynchronous processing moves that work out of the request cycle. This guide shows how to use Celery and Redis with Plone.

Why Use Asynchronous Processing?

Long-running tasks tie up Plone's request threads, and a busy thread cannot serve other visitors. This hurts performance. Async processing moves these tasks to background workers.

Common use cases include:

  • File processing
  • Email sending
  • Data exports
  • External API calls

For scaling Plone further, see our Scaling Plone with ZEO and RelStorage guide.

Celery and Redis Overview

Celery is a task queue system. It manages background job execution. Workers process tasks independently of Plone.

Redis acts as the message broker. It passes tasks between Plone and Celery workers. Redis is fast and reliable.

Setting Up the Environment

First, install required packages:


pip install celery redis

Configuring Redis

Install Redis server:


sudo apt-get install redis-server

Verify Redis is running:


redis-cli ping
# Should return: PONG
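
You can also confirm the connection from Python with the redis client installed earlier. This is a minimal check, assuming Redis is listening on the default port:


import redis

# Connect to the same database the Celery broker will use
client = redis.Redis(host='localhost', port=6379, db=0)
print(client.ping())  # prints True when Redis is reachable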

Celery Configuration in Plone

Create a celery.py file in your package:


from celery import Celery

app = Celery(
    'async_tasks',
    broker='redis://localhost:6379/0',
    backend='redis://localhost:6379/1'
)

# Serialization and timezone settings shared by all tasks
app.conf.update(
    task_serializer='json',
    accept_content=['json'],
    result_serializer='json',
    timezone='UTC',
    enable_utc=True,
)
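
Note that queued tasks only run while a Celery worker process is running against this app (started with the celery worker command and pointed at this module). The app.conf block can also hold routing rules; the sketch below is illustrative and assumes a task module named my.package.tasks and a queue called "heavy":


# Optional: send slow jobs to their own queue so quick tasks are not stuck
# behind them (task path and queue name are illustrative)
app.conf.task_routes = {
    'my.package.tasks.process_large_file': {'queue': 'heavy'},
}

A worker can then be restricted to that queue with the worker's -Q option.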

Creating Async Tasks

Define a sample task, for example in a tasks.py module next to celery.py:


from .celery import app

@app.task
def process_large_file(file_path):
    """Process large file asynchronously"""
    # Your processing logic here
    return f"Processed {file_path}"

Calling Tasks from Plone

Trigger the task from Plone code:


from my.package.tasks import process_large_file

def handle_upload(context, file):
    # Start async processing
    result = process_large_file.delay(file.path)
    return f"Task ID: {result.id}"

Monitoring and Results

Check task status:


result = process_large_file.AsyncResult(task_id)
if result.ready():
    print(result.get())
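
The result object also exposes the task state and any stored traceback. A minimal sketch, assuming the Celery app from celery.py is importable as shown:


from celery.result import AsyncResult

from my.package.celery import app

# task_id is the id returned when the task was queued
result = AsyncResult(task_id, app=app)
print(result.state)  # PENDING, STARTED, SUCCESS, FAILURE, ...
if result.failed():
    # The traceback string is stored in the Redis result backend
    print(result.traceback)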

For more monitoring options, see Plone Event Systems Guide.

Error Handling

Celery provides retry mechanisms; in the example below, perform_operation stands in for your own code that might fail:


@app.task(bind=True, max_retries=3)
def risky_operation(self, data):
    try:
        # Operation that might fail (replace perform_operation with your own logic)
        return perform_operation(data)
    except Exception as exc:
        # Re-raise via retry so Celery reschedules the task (up to max_retries)
        raise self.retry(exc=exc)
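
For transient failures such as network errors, Celery can also retry automatically. A minimal sketch; the task name, exception type, and retry counts are illustrative:


@app.task(autoretry_for=(ConnectionError,), retry_backoff=True, max_retries=5)
def call_external_api(payload):
    # Any ConnectionError raised here is retried with exponential backoff
    ...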

Best Practices

Follow these tips for better async processing:

  • Keep tasks small and focused
  • Use proper error handling
  • Monitor queue lengths (see the sketch after this list)
  • Scale workers as needed
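
One simple way to watch queue length is to ask Redis directly, since the broker keeps each queue's pending tasks in a list keyed by the queue name. A sketch assuming the default queue and a local Redis:


import redis

client = redis.Redis(host='localhost', port=6379, db=0)
# The default Celery queue is a Redis list named "celery"
print(client.llen('celery'))  # number of tasks waiting for a worker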

For related best practices, check our Plone Testing Best Practices.

Conclusion

Asynchronous processing improves Plone performance. Celery and Redis make it easy to implement. Move long tasks to background workers.

This setup keeps your Plone site responsive. Users get a better experience while complex tasks run separately.

Start with simple tasks. Expand as needed. Your site will handle more traffic efficiently.