Automating Deployments: Streamlining Your Python Django Project with CI/CD

Introduction

This is part one of a two-part series in which we use CI/CD to streamline the development and deployment of a blog API built with Django and Django REST Framework. So, what is CI/CD? CI/CD stands for Continuous Integration/Continuous Delivery (or Deployment) and covers the practices and principles of automating the software development cycle: building, testing, and deploying software. Within CD, it is worth separating delivery from deployment. Delivery ensures the software reaches a production-ready state from which it can be released to end users, usually with human intervention, while deployment goes a step further and releases the software to production automatically.

There are many stages involved in the CI/CD process. The most common are code compilation, testing, security scans, image building, and deployment. Popular CI tools include Jenkins, GitLab CI, CircleCI, Tekton, and Travis CI, while popular CD tools include Flux CD and Argo CD. For this project, however, we will use GitHub as our Git host, Jenkins as our CI tool, and Flux CD as our CD tool. We will use AWS as our cloud provider and deploy most of the tools on EC2 instances. You can read a separate article here on deploying EC2 instances.

Creating Jenkins Job

We are going to use a sample Django project that you can find here (feel free to use your own Django project). To set up Jenkins, integrate it with GitHub, and create a Django job on Jenkins, read this article.

Jenkinsfile walkthrough

At the root of the project, there is a file named Jenkinsfile. Since we are creating a pipeline project, this file contains all the stages Jenkins executes during the CI process. Every stage that runs as part of the job is enclosed in the pipeline block. Inside this block, we declare the agent (select any if you don't have an agent set up) that will run the job. Inside the stages block, we define all the required stages:

pipeline {
    agent any
    stages {
        // all pipeline stages
    }
}

Note that additional functions can be written after the pipeline and called within the pipeline block.

Clean Workspace and Clone Repository

In the next two stages, we clean the workspace that Jenkins will use to build the project, then clone it from GitHub. Jenkins will use the GitHub repository link provided during the project's configuration. After cloning the project, we extract additional information from the repo, such as the branch name, the current git commit, and the repo name. These will be used in later stages of the pipeline.

stage('clean workspace') {
    steps {
        cleanWs()
    }
}

stage('Clone Repository') {
    steps {
        script {
            final scmVars = checkout(scm)
            env.BRANCH_NAME = scmVars.GIT_BRANCH
            env.GIT_COMMIT = "${scmVars.GIT_COMMIT[0..7]}"
            env.GIT_REPO_NAME = scmVars.GIT_URL.replaceFirst('.+/(.+?)(?:.git)?$', '$1')
        }
    }
}
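
As a sanity check outside Jenkins, the same extraction can be sketched in plain shell. The URL and commit hash below are hypothetical placeholders standing in for what checkout(scm) returns:

```shell
set -e
# Hypothetical values standing in for the checkout(scm) results in Jenkins
GIT_URL="https://github.com/example-user/django-blog-api.git"
FULL_COMMIT="3f9c2d1e8ab7c6d5e4f3a2b1c0d9e8f7a6b5c4d3"

# Strip everything up to the last slash and a trailing .git,
# mirroring the replaceFirst regex in the pipeline
GIT_REPO_NAME=$(basename "$GIT_URL" .git)

# Keep the first 8 characters, like GIT_COMMIT[0..7] in Groovy
GIT_COMMIT=$(printf '%s' "$FULL_COMMIT" | cut -c1-8)

echo "$GIT_REPO_NAME $GIT_COMMIT"
```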

Install Dependencies

Django needs dependencies installed via pip in a virtual environment to run. This stage handles creating the virtual environment, activating it, and installing the dependencies.

stage('install dependencies') {
    steps {
        script {
            sh 'python3 -m venv venv'
            // each sh step runs in its own shell, so instead of activating
            // the venv we call its pip directly by path
            sh 'venv/bin/pip3 install -r requirements.txt'
        }
    }
}
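
One subtlety worth calling out: each sh step starts a fresh shell, so sourcing venv/bin/activate in a step of its own has no effect on later steps. Calling the venv's executables by path sidesteps the problem entirely. A minimal shell sketch:

```shell
set -e
# Create the virtual environment
python3 -m venv venv
# No activation needed: the venv's interpreter and pip are invoked by path,
# which works identically across separate sh steps in Jenkins
venv/bin/python --version
venv/bin/pip --version
# In the pipeline this would be followed by:
#   venv/bin/pip3 install -r requirements.txt
```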

Copy Environment Variables

Next, we copy the environment variables needed to run the project into the workspace. These values usually live in a .env file, which is added to Jenkins credentials as a secret file. In this Jenkins setup, I use the naming convention <repo_name>_UAT: when storing the environment variables as a credential in Jenkins, the ID should start with the name of the project (repository) that uses them, and the _UAT or _PROD suffix indicates the release environment.

stage('copy credentials') {
    steps {
        script {
            // only look up the credential on branches that have one,
            // otherwise withCredentials would fail on an unknown ID
            if (env.BRANCH_NAME == 'develop') {
                def credId = "${env.GIT_REPO_NAME}_UAT"
                withCredentials([file(credentialsId: credId, variable: 'ENV_FILE')]) {
                    sh 'cp $ENV_FILE .env'
                }
            }
        }
    }
}
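
The naming convention itself can be sketched in shell. The repo and branch names below are hypothetical placeholders:

```shell
# Hypothetical sketch of the credentials-ID naming convention:
# <repo_name>_UAT for develop builds, <repo_name>_PROD for production
GIT_REPO_NAME="django-blog-api"
BRANCH_NAME="develop"
case "$BRANCH_NAME" in
  develop) CRED_ID="${GIT_REPO_NAME}_UAT" ;;
  main)    CRED_ID="${GIT_REPO_NAME}_PROD" ;;
esac
echo "$CRED_ID"
```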

Run Tests and Record Coverage

The next stage runs the tests, which will vary from project to project. In this case, we run unit tests and lint checks. Unlike the other stages, the lint and unit tests run in parallel: they do not depend on each other, and parallelizing them shortens the overall pipeline run. Finally, we record the test results and coverage in XML files that Jenkins can read and display in its UI.

stage('run tests') {
    parallel {
        stage('Lint Test') {
            steps {
                script {
                    try {
                        sh '. venv/bin/activate && pre-commit run --all-files'
                    }
                    catch (Exception err) {
                        // echo expects a String, so convert the exception
                        echo err.toString()
                    }
                }
            }
        }
        stage('Unit Test') {
            steps {
                script {
                    try {
                        sh '. venv/bin/activate && pytest -v --junitxml=test-report.xml'
                    }
                    catch (Exception err) {
                        echo err.toString()
                    }
                }
            }
        }
    }
}
stage('coverage') {
    steps {
        script {
            // this stage only starts after both parallel test stages
            // have finished, so no extra synchronization is needed
            try {
                sh '. venv/bin/activate && coverage run -m pytest'
                sh '. venv/bin/activate && coverage xml'
            }
            catch (Exception err) {
                echo err.toString()
            }
        }
    }
}

SonarQube Scan and Quality Gate

SonarQube is used to analyze source code and provide reports on code quality. It performs static scans of our code, detecting bugs and code smells and reporting test coverage. The Quality Gate enforces the standards we set for our code and fails the pipeline if the code does not meet the minimum required threshold. To run a Sonar scan, you need to set up a SonarQube server as shown in an article by Sugam Arora.

stage('sonarqube scan') {
    environment {
        scannerHome = tool 'sonar-scanner-tool'
    }
    steps {
        script {
            withSonarQubeEnv(credentialsId: 'jenkins-sonarqube-token', installationName: 'sonar-scanner-tool') {
                sh "${scannerHome}/bin/sonar-scanner -X"
            }
        }
    }
}
stage('sonarqube quality gate') {
    steps {
        script {
            def qualitygate = waitForQualityGate()
            if (qualitygate.status in ['OK', 'SUCCESS']) {
                echo "analysis for project ${env.GIT_REPO_NAME} on branch ${env.BRANCH_NAME} is: ${qualitygate.status}"
            } else {
                // error() aborts the pipeline without waiting on the gate a second time
                error "analysis for project ${env.GIT_REPO_NAME} on branch ${env.BRANCH_NAME} failed!"
            }
        }
    }
}
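
The scanner reads its project configuration from a sonar-project.properties file at the repo root. A minimal sketch follows; the property keys are standard SonarQube settings, while the values are placeholders you would adjust for your own project:

```shell
# Write a hypothetical minimal sonar-project.properties;
# keys are standard SonarQube properties, values are placeholders
cat > sonar-project.properties <<'EOF'
sonar.projectKey=django-blog-api
sonar.sources=.
sonar.python.coverage.reportPaths=coverage.xml
EOF
cat sonar-project.properties
```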

Build Docker Image and Push to ECR

When the job passes all the stages above, we can build a Docker image. The root of the project contains a Dockerfile that we will use for this stage. Once the image is built, we push it to a registry for storage. In this tutorial, I will use an AWS ECR repository, but you can use any registry, including the public Docker Hub. If you are using ECR, make sure to create the credentials that will allow you to push to ECR, add them to Jenkins credentials, and install the AWS ECR plugins. In my case, I have named the credentials aws-ecr-credentials. Before pushing, each image is given a custom tag so it can be distinguished from previous builds; you can adapt the tagging scheme to suit your needs.

stage ('Build Docker Image') {
    steps {
        script {
            try {
                sh "docker build --network=host --build-arg BRANCH_NAME=${env.BRANCH_NAME} -t ${env.GIT_REPO_NAME} -f Dockerfile ."
            } catch (Exception e) {
                echo e.toString()
            }
        }
    }
}
stage ('Push Image to ECR') {
    environment {
        AWS_ACC = "058264430807.dkr.ecr.eu-west-1.amazonaws.com"
    }
    steps {
        script {
            retry(3){
                // buildVersion() is a custom helper defined after the pipeline block
                env.BUILDVERSION = buildVersion()
                docker.withRegistry("https://${AWS_ACC}/","ecr:eu-west-1:aws-ecr-credentials") {
                    sh "docker tag ${env.GIT_REPO_NAME}:latest ${AWS_ACC}/${env.GIT_REPO_NAME}:dev-${env.GIT_COMMIT}-${env.BUILDVERSION}"
                    sh "docker push ${AWS_ACC}/${env.GIT_REPO_NAME}:dev-${env.GIT_COMMIT}-${env.BUILDVERSION}"
                }
            }
        }
    }
}
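
To make the tag format concrete, here is a shell sketch with hypothetical stand-ins for the pipeline's environment variables:

```shell
# Hypothetical values standing in for the pipeline's environment variables,
# showing the tag format dev-<short_commit>-<build_version>
AWS_ACC="058264430807.dkr.ecr.eu-west-1.amazonaws.com"
GIT_REPO_NAME="django-blog-api"
GIT_COMMIT="3f9c2d1e"
BUILDVERSION="17"
IMAGE="${AWS_ACC}/${GIT_REPO_NAME}:dev-${GIT_COMMIT}-${BUILDVERSION}"
echo "$IMAGE"
# docker tag and docker push would then operate on $IMAGE (not run here)
```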

Post Steps

These actions run after all the stages have completed. They are defined in a post block outside the stages block. In this tutorial, the post step only displays the results of the unit and lint tests in the Jenkins UI. We use the JUnit and Cobertura plugins, which you may need to install on your Jenkins instance.

post {
    always {
        junit 'test-report.xml'
        step([$class: 'CoberturaPublisher',
            coberturaReportFile: 'coverage.xml',
            failNoReports: true, // Fail build if no report is generated
            failUnhealthy: true, // Optionally fail on low coverage (configure thresholds in Jenkins UI)
            autoUpdateHealth: false, // Avoid automatic health updates based on coverage
            autoUpdateStability: false // Avoid automatic stability updates based on coverage
        ])
    }
}

The full code for the pipeline can be found here.

The stages you can run on your project are not limited to these. You can add or remove stages based on your needs. This article aims to provide a starting point to help you get up and running.

Happy hacking❤️!