Dockerizing Django with Postgres, Redis and Celery
In this episode, we are going to build a dockerized Django application with Redis, Celery, and Postgres to handle asynchronous tasks.

In this article, we are going to build a dockerized Django application to execute background tasks asynchronously by using Redis as a message broker with the Celery queue. You will be able to create your own asynchronous apps easily by using the initial configuration of this project. Before we start, you’ll need a basic understanding of Django, Docker, and Celery.
Adding Dockerfile and dependencies
Let's start by adding the required dependencies to the requirements.txt file:
requirements.txt
Django==3.1
celery==4.4.1
redis==3.4.1
psycopg2==2.9.3
Next, create a Dockerfile inside the project:
Dockerfile
FROM python:3.8-alpine
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
RUN mkdir /app
COPY ./app /app
WORKDIR /app
Let's break down the configurations for better understanding:
FROM python:3.8-alpine
We are using python:3.8-alpine, which is a very lightweight base image and keeps the resulting image size small.
ENV PYTHONUNBUFFERED 1
Setting PYTHONUNBUFFERED to a non-empty value means that Python output is sent straight to the terminal without being buffered, which lets us see the application's output in real time.
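As a quick illustration, here is a small standalone sketch (not part of the project) of the buffering issue this variable prevents when stdout is not attached to a terminal, as in a container:

import sys
import time

# When stdout is not a TTY, Python block-buffers it, so this message may not
# appear in `docker logs` until the buffer flushes or the process exits.
print('starting a long-running job...')

# Manual alternatives to setting PYTHONUNBUFFERED=1: flush explicitly,
# or run the interpreter with `python -u`.
sys.stdout.flush()

time.sleep(30)
print('done')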
COPY ./requirements.txt /requirements.txt
We copy requirements.txt into the Docker image so that we can install our dependencies from it in the next step.
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
In order to interact with the Postgres database, we install postgresql-client using the package manager that comes with Alpine. We add the package with the --update option, which updates the registry before adding it, and the --no-cache option, which avoids storing the registry index in our Docker image.
The reason for including these options is to minimize the number of extra files and packages added to our Docker container. This is the best practice for keeping the smallest footprint possible in our application, and it also avoids extra dependencies in the system that may cause unexpected side effects or even create security vulnerabilities.
Next we install some temporary packages that only need to be on the system while pip installs our requirements; we can remove them once the installation is finished. The --virtual option sets up an alias for these dependencies so we can easily remove them all later with apk del. We then list the temporary build dependencies required to compile the Python packages, such as psycopg2, that link against Postgres.
RUN mkdir /app
COPY ./app /app
WORKDIR /app
Next, create an empty directory named app at the root level of the project.
In the Dockerfile we make a matching directory within our Docker image to store our application source code: it creates an empty /app folder in the image, copies the app directory from our local machine into it, and then switches to it as the default working directory, so any application we run in the container runs from this location.
Setting up Docker-Compose and Environment Variables
Next, we will create a Docker Compose configuration for our project. Docker Compose is a tool that allows us to run our Docker images easily from our project location and to manage the different services that make up our project.
This configuration lives in a file named docker-compose.yml at the root level of our project and contains the configuration for all of the services that make up the project. The first line of the file is the version of the Docker Compose syntax that we're writing the file for, and below it we define the services for our application.
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: >
      sh -c "python3 manage.py wait_for_db &&
             python3 manage.py migrate &&
             python3 manage.py runserver 0.0.0.0:8000"
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    env_file:
      - ./.env.dev
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:alpine
  celery:
    restart: always
    build:
      context: .
    command: celery -A app worker -l info
    volumes:
      - ./app:/app
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis
      - app
volumes:
  pgdata:
.env.dev
# django app
DB_HOST=db
DB_NAME=app
DB_USER=postgres
DB_PASS=supersecretpassword
# postgres
POSTGRES_DB=app
POSTGRES_USER=postgres
POSTGRES_PASSWORD=supersecretpassword
Let's break down the compose file:
app:
  build:
    context: .
  ports:
    - "8000:8000"
  volumes:
    - ./app:/app
  command: >
    sh -c "python3 manage.py wait_for_db &&
           python3 manage.py migrate &&
           python3 manage.py runserver 0.0.0.0:8000"
  env_file:
    - ./.env.dev
  depends_on:
    - db
The Django project will run under a service named app. In the build section we set the context to ., which represents our current directory.
The ports configuration maps port 8000 on our host to port 8000 in the Docker container.
Below the ports, we define a volume that mounts our app directory into the Docker container, so changes we make in the project are picked up in real time.
Once the service starts, we first run our custom wait_for_db command to wait until Postgres is ready to accept connections, then run Django's migrate command to apply the models and initialize the Postgres tables, and finally start the development server.
We keep the environment variables in the .env.dev file, which Docker Compose reads when running the containers.
The app service should start after the db service, which we declare with the depends_on configuration. Note that depends_on only controls the start order; it does not wait for Postgres to be ready, which is exactly why we need the wait_for_db command.
db:
  image: postgres:10-alpine
  env_file:
    - ./.env.dev
  volumes:
    - pgdata:/var/lib/postgresql/data
The db service represents the Postgres database and uses a lightweight image with the alpine tag.
Next, we point env_file at .env.dev to provide the environment variables for the database name, username, and password. Only the password is strictly required by the official postgres image, but we also provide the database name and username.
redis:
  image: redis:alpine

celery:
  restart: always
  build:
    context: .
  command: celery -A app worker -l info
  volumes:
    - ./app:/app
  env_file:
    - ./.env.dev
  depends_on:
    - db
    - redis
    - app
We simply pull the redis image with the alpine tag to use as the message broker.
In the celery service, we also point env_file at .env.dev because Celery must be able to access the database while it runs tasks. If we don't provide the database credentials, the worker will throw a connection exception.
So far, your project structure should look like the one below:
.
├── Dockerfile
├── app
├── docker-compose.yml
└── requirements.txt
Finally, build the images defined in the compose file with the following command:
docker-compose build
Create a Django Project
We'll use Docker Compose to run a command on our image, which contains the Django dependency, to create the project files we need for our app. Basically, we run the command against the app service, and anything we pass in after it is treated as the command to run.
docker-compose run app sh -c "django-admin startproject app ."
The reason we wrap it in sh -c is to make it easier to distinguish the command we're actually running from the Docker Compose part of the invocation. It's also possible to run it without this wrapper, but it's good practice to separate the actual command. Once you run it, a new Django project will be created inside the app directory.
Now that we have Docker all set up, we can configure the Django project to use the Postgres database. Replace the SQLite configuration in settings.py with Postgres as shown below:
settings.py
import os  # add this import at the top of settings.py if it is not already there

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ.get('DB_HOST'),
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASS'),
    }
}
We are getting all variables directly from the environment to make our settings more secure. Let's build again and bring up the containers to see if our app runs properly:
docker-compose up --build
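Optionally, if you sometimes want to run manage.py outside Docker as well, you can give the environment lookups local defaults. This variation is not part of the original project, just a sketch:

settings.py (optional variation)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # Hypothetical localhost fallbacks so manage.py also works outside Docker;
        # inside Docker Compose the values from .env.dev still take effect.
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'NAME': os.environ.get('DB_NAME', 'app'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_PASS', ''),
    }
}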
Waiting for Postgres
In this part, we're going to add a management command to the core app of our Django project. The management command is a helper command that waits for the database to become available before other commands continue.
This command is used in our Docker Compose file when starting our Django app. The reason we need it is that, when using Postgres with Docker Compose, a Django app sometimes fails to start because of a database error. It turns out that once the Postgres service has started, there are a few extra setup tasks that need to run before Postgres is ready to accept connections.
Django will try to connect to the database before the database is ready and therefore fail with an exception.
First, let's create a new app named core and add it to the INSTALLED_APPS configuration in settings.py:
docker-compose run app sh -c "django-admin startapp core"
settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'core',
]
To create the command, we start by creating a new directory in our core app where we will store our management commands. This is the Django convention, and the Django documentation recommends putting all of your commands in a directory called management/commands. So we start by creating a folder called management and make sure it is located inside the core app folder. In each of these folders, create a new __init__.py file so that the directory is recognized as a Python module. The project structure so far will look like below:
\---app
    |   manage.py
    |
    +---app
    |       asgi.py
    |       settings.py
    |       urls.py
    |       wsgi.py
    |       __init__.py
    |
    \---core
        |   admin.py
        |   apps.py
        |   models.py
        |   tests.py
        |   views.py
        |   __init__.py
        |
        +---management
        |   |   __init__.py
        |   |
        |   \---commands
        |           __init__.py
        |           wait_for_db.py
        |
        \---migrations
                __init__.py
We can now give the command the custom name we want. The command is going to be called wait_for_db.py, and we start by importing the time module, which comes with the standard Python library and lets the application sleep for a second between each database check. Add the following to the commands/wait_for_db.py file:
commands/wait_for_db.py
import time

from django.db import connections
from django.db.utils import OperationalError
from django.core.management import BaseCommand


class Command(BaseCommand):
    """Django command to pause execution until the database is available.
    https://stackoverflow.com/questions/52621819/django-unit-test-wait-for-database
    """

    def handle(self, *args, **options):
        self.stdout.write('Waiting for database...')
        db_conn = None
        while not db_conn:
            try:
                # connections['default'] is lazy, so also open a cursor to force
                # a real connection attempt; OperationalError is raised if
                # Postgres is not ready to accept connections yet.
                db_conn = connections['default']
                db_conn.cursor()
            except OperationalError:
                db_conn = None
                self.stdout.write('Database unavailable, waiting 1 second...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))
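If you want to unit test this command (as in the Stack Overflow answer linked in the docstring), a minimal sketch could go into the core app's tests.py. The test class and patch target below are an illustrative assumption, not part of the original project:

core/tests.py
from unittest.mock import patch

from django.core.management import call_command
from django.test import TestCase


class WaitForDbTests(TestCase):

    @patch('django.db.utils.ConnectionHandler.__getitem__')
    def test_wait_for_db_ready(self, gi):
        """When the database is available, the command returns immediately."""
        # Make connections['default'] return a harmless mock object.
        gi.return_value = gi
        call_command('wait_for_db')
        self.assertEqual(gi.call_count, 1)

You can run it with docker-compose run app sh -c "python3 manage.py test".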
Now, each time we start our project, this command needs to run first. You can see we already placed it at the start of the command block of the app service in our compose file, before migrate and runserver.
Configuring Redis and Celery Service
In this part, we will add Redis and celery services to our compose file. There are few tricky points while configuring celery due to the database connection. However, before we add the services let’s add a new file named celery.py
into our Django project directory:
celery.py
import os
from celery import Celery

# Make sure the Django settings module is set before the Celery app is configured.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')

app = Celery('app')
# Read all CELERY_* settings from Django's settings module.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Auto-discover tasks.py modules in every installed Django app.
app.autodiscover_tasks()
Basically, it will discover all tasks across the project and pass them to the queue. Next, we also need to update the __init__.py file inside the same directory, which is our Django project package:
__init__.py
from .celery import app as celery_app
__all__ = ['celery_app']
Celery requires a broker URL for its tasks, and in this case we will use Redis as the message broker. Open your settings file and add the following configuration:
CELERY_BROKER_URL = "redis://redis:6379"
CELERY_RESULT_BACKEND = "redis://redis:6379"
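To make sure everything is wired up, you can define a small task and call it asynchronously. The file and task below are just an illustrative sketch (they are not part of the original project); thanks to autodiscover_tasks(), any tasks.py inside an installed app such as core is picked up automatically:

core/tasks.py
from celery import shared_task


@shared_task
def add(x, y):
    # Trivial example task used only to verify that the worker, the Redis
    # broker, and the result backend are all talking to each other.
    return x + y

With the services up, open a Django shell (for example with docker-compose run app sh -c "python3 manage.py shell"), run from core.tasks import add and then add.delay(2, 3); the Celery worker logs should show the task being received and executed, and because CELERY_RESULT_BACKEND is configured you can also read the result back with .get() on the returned AsyncResult.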
The project is ready to run, so start all services in detached mode with docker-compose up -d.
What did you learn?
In this episode, you learned how to combine Redis and Celery with a dockerized Django web application. You can find the source code in my GitHub account and use it as a starting point for your own application.
Support 🌏
If you feel like you unlocked new skills, please share this article with your friends and subscribe to the YouTube channel so you don't miss any valuable content.