Building Robust CI/CD Pipelines: Best Practices and Automation

You've set up your basic CI/CD pipeline in Part 1, and it's running smoothly. Now what? As your project grows, you'll need to evolve your pipeline to handle more complex scenarios, maintain performance, and ensure reliability. Let's dive into how to build robust CI/CD pipelines that can scale with your needs.

And good news: to accompany this blog post, we've got you covered with the full code implementation in our public repository: https://github.com/wolkwork/ci-cd-pipeline

The Path to Pipeline Maturity

A mature CI/CD pipeline isn't built in a day. It evolves through several stages:

  1. Basic Integration (Where you started → see Part 1)

    • Simple build and test

    • Manual deployments

    • Basic error checking

  2. Automated Quality (Where you're heading)

    • Code quality enforcement

    • Security scanning

    • Performance testing

    • Automated deployments

  3. Enterprise Grade (The goal)

    • Multi-environment orchestration

    • Sophisticated test strategies

    • Automated rollbacks

    • Comprehensive monitoring

Building Better Pipelines

In Part 1 we started with the basics of a CI/CD pipeline. That simple setup can be extended as your needs grow. Here's what scaling up to a full pipeline may look like:

CI (Continuous Integration) Pipeline:

Starting point:

Code Push → Build → Test

Enhanced pipeline:

Code Push → Install Dependencies → Build → Unit Tests → Integration Tests →
Code Quality (Linting, Formatting) → Security Scan (Dependencies, Vulnerabilities) → Performance Checks → Code Coverage → Artifact Generation

CD (Continuous Delivery) Pipeline:

Starting point:

Approved Changes → Staging Deploy

Enhanced pipeline:

Approved Changes → Build Artifacts → Deploy to Dev → Integration Tests → Deploy to Staging → Smoke Tests → Load Tests → Security Scans → Deploy to Production → Health Checks → Performance Monitoring → Automated Rollback (if needed)

Keep It Simple (But Not Too Simple)

The art of CI/CD lies in finding the right balance. Here's how to add sophistication without unnecessary complexity, using GitHub Actions as in Part 1:

# Find the implemented CI/CD pipeline with placeholder code examples on our public repo:
# https://github.com/wolkwork/ci-cd-pipeline

# Enhanced CI/CD workflow for Python projects
name: CI/CD Pipeline

# Define workflow triggers
on:
  push:
    branches: [main, dev]
  pull_request:
    branches: [main, dev]

# Define environment variables used across jobs
env:
  PYTHON_VERSION: "3.12"
  UV_VERSION: ">=0.4.0"

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 10 # Prevent hanging jobs

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          version: ${{ env.UV_VERSION }}
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: uv sync

      - name: Run tests with coverage reporting
        run: >
          uv run pytest
          --cov=src
          --cov-report=term-missing
          --cov-report=html
          --cov-fail-under=95

  quality:
    runs-on: ubuntu-latest
    needs: test # Run only after tests pass

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          version: ${{ env.UV_VERSION }}
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install quality tools
        run: uv sync

      - name: Run linting
        # --fix auto-corrects minor issues; --exit-zero makes linting advisory.
        # Drop --exit-zero if you want lint errors to fail the build.
        run: |
          uv run ruff check . --fix --exit-zero
          uv run ruff format .

      - name: Run type checking
        run: uv run mypy src/

  security:
    runs-on: ubuntu-latest
    needs: test # Run only after tests pass

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          version: ${{ env.UV_VERSION }}
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install security tools
        run: uv sync

      - name: Check dependencies for vulnerabilities
        run: uv run safety check

      - name: Run security scan
        run: uv run bandit -r src/ -c pyproject.toml

  deploy:
    needs: [quality, security] # Only deploy if all quality and security checks pass
    runs-on: ubuntu-latest
    environment: staging
    if: github.ref == 'refs/heads/main' # Only deploy on main branch

    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          version: ${{ env.UV_VERSION }}
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: uv sync

      - name: Build application
        run: uv run python -m build

      - name: Run pre-deployment checks
        run: |
          echo "Running pre-deployment validation..."
          uv run python scripts/validate_config.py

      - name: Dummy deployment to staging
        run: |
          echo "Starting dummy deployment to staging..."
          uv run python scripts/deploy.py --environment staging

      - name: Run dummy smoke tests
        run: |
          echo "Running post-deployment checks..."
          uv run pytest tests/smoke/

      - name: Notify deployment status
        if: always()
        run: |
          echo "::notice::Deployment to staging ${{ job.status == 'success' && 'succeeded' || 'failed' }}"

          echo "Deployment status: ${{ job.status }}"
          echo "Completed at: $(date)"

This enhanced pipeline builds on our basic version with several key additions. Remember that you can check out a full implementation with template code on our public GitHub repository. Now let's break down the key improvements and best practices implemented in this enhanced pipeline:

1. Environment Management

env:
  PYTHON_VERSION: "3.12"
  UV_VERSION: ">=0.4.0"

  • Centralised version management

  • Easy to update for all jobs

  • Consistent environment across pipeline
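
Centralising versions also makes it easy to fan out later. As a sketch (a hypothetical extension, not part of the repo), the same `test` job could run against several Python versions with a matrix, while `UV_VERSION` stays defined once at workflow level:

```yaml
# Hypothetical matrix extension: test on several Python versions
# while UV_VERSION remains centralised in the workflow-level env block.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          version: ${{ env.UV_VERSION }}
          python-version: ${{ matrix.python-version }}
      - run: uv sync && uv run pytest
```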

2. Parallel Job Execution

quality:
  runs-on: ubuntu-latest
  needs: test  # Run after tests pass

security:
  runs-on: ubuntu-latest
  needs: test  # Run after tests pass

  • Quality and security checks run in parallel (after the test job)

  • Clear job dependencies with needs

  • Efficient pipeline execution

3. Enhanced Testing

- name: Run tests with coverage reporting
  run: >
    uv run pytest
    --cov=src
    --cov-report=term-missing
    --cov-report=html
    --cov-fail-under=95

  • Code coverage enforcement: the build fails if coverage drops below 95% (tune the threshold as desired)

  • Terminal report showing missing lines

  • HTML report for detailed analysis

4. Code Quality Checks

- name: Run linting
  run: |
    uv run ruff check . --fix --exit-zero
    uv run ruff format .
  
- name: Run type checking
  run: uv run mypy src/

  • Linting and formatting with ruff, type checking with mypy

  • Consistent code formatting verification, with automated minor fixes

  • Separated for better feedback
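
If you want the same feedback before code even reaches CI, the same tools can run locally as git hooks. A minimal sketch of a `.pre-commit-config.yaml`, assuming you use the pre-commit framework (the rev values are placeholders; pin them to current releases):

```yaml
# Sketch of a local pre-commit setup mirroring the CI quality job.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.0 # placeholder revision
    hooks:
      - id: ruff        # linting
      - id: ruff-format # formatting
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.13.0 # placeholder revision
    hooks:
      - id: mypy        # type checking
```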

5. Security Scanning

- name: Check dependencies for vulnerabilities
  run: uv run safety check

- name: Run security scan
  run: uv run bandit -r src/ -c pyproject.toml

  • Dependency vulnerability scanning

  • Code security analysis

  • Custom security rules via pyproject.toml
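
As an illustration of those custom rules, bandit can read its configuration from pyproject.toml when invoked with -c pyproject.toml (note that bandit needs its optional TOML support installed, e.g. bandit[toml]; the rules below are example choices, not the repo's actual config):

```toml
# Example bandit configuration in pyproject.toml.
[tool.bandit]
exclude_dirs = ["tests", ".venv"]
skips = ["B101"]  # B101 = assert_used; asserts are acceptable in this codebase
```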

6. Robust Deployment

deploy:
  needs: [quality, security]
  environment: staging

  • Requires all checks to pass

  • Uses GitHub environments

  • Environment-specific secrets

  • Pre and post-deployment checks

  • Deployment notifications

Smart Automation Strategies

Don't automate everything just because you can. Focus on:

  1. High-Impact Areas

    • Test execution

    • Code quality checks

    • Security scanning

    • Deployment steps

  2. Error-Prone Tasks

    • Environment setup

    • Dependency management

    • Configuration updates

  3. Repetitive Operations

    • Build processes

    • Release tagging

    • Documentation generation
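
Release tagging is a good example of a repetitive operation worth automating. A sketch of such a step (reading the version from pyproject.toml and the git push permissions are illustrative assumptions):

```yaml
# Sketch: tag a release automatically, deriving the version from pyproject.toml.
# Assumes a checkout with push rights and Python 3.11+ (for stdlib tomllib).
- name: Tag release
  run: |
    VERSION=$(uv run python -c "import tomllib; print(tomllib.load(open('pyproject.toml','rb'))['project']['version'])")
    git tag -a "v${VERSION}" -m "Release v${VERSION}"
    git push origin "v${VERSION}"
```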

Common Pitfalls and How to Avoid Them

1. Over-Engineering

Problem: Adding complexity before it's needed. Solution: Follow the "Rule of Three":

  • Wait until you need something three times before automating it

  • Start with manual processes to understand the requirements

  • Automate incrementally based on actual needs

2. Ignoring Failed Tests

Problem: Bypassing failures creates technical debt. Solution: Implement a "Zero Tolerance" policy: a failing test blocks the merge, every time. Fix the test or fix the code, but never skip past it.

3. Poor Error Handling

Problem: Unclear failures waste developer time. Solution: Implement robust error handling, surface clear error messages, and cap automatic retries at a maximum number of attempts
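
One lightweight way to cap retries is a small shell helper used inside a pipeline step (a sketch; the deploy command in the usage comment is illustrative):

```shell
# Sketch: retry a command up to a maximum number of attempts,
# with a short pause between tries.
retry() {
  max=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "Command failed after $max attempts: $*" >&2
      return 1
    fi
    echo "Attempt $attempt failed, retrying..." >&2
    attempt=$((attempt + 1))
    sleep 1
  done
}

# Usage in a pipeline step, e.g.:
# retry 3 uv run python scripts/deploy.py --environment staging
```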

4. Insufficient Documentation

Problem: Knowledge silos and maintenance difficulties. Solution: Implement living documentation:

  • Add detailed comments in pipeline configurations (visualisations always help: ci-cd-pipeline.md)

  • Maintain a README with setup instructions

  • Document common failure scenarios and solutions

5. Missing Environment Management

Problem: Secrets and configuration mixed into code. Solution: Use environment management tools, and never hardcode secrets:

# Example environment configuration
- name: Configure Environment
  env:
    DB_URL: ${{ secrets.DB_URL }}
    API_KEY: ${{ secrets.API_KEY }}
  run: |
    echo "Setting up environment..."
    ./create_env_file.sh

6. Slow Pipelines

Problem: Long feedback cycles reduce productivity. Solution: Implement performance optimizations, such as:

  • Use dependency caching

  • Run jobs in parallel

  • Implement test splitting

  • Use faster tools (like uv for Python)
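
For the caching bullet above, here's a sketch using GitHub's cache action, keyed on the lockfile (the path and key naming are illustrative; the setup-uv action also offers built-in caching via its enable-cache input):

```yaml
# Sketch: cache uv's download cache between runs, keyed on the lockfile.
- name: Cache dependencies
  uses: actions/cache@v4
  with:
    path: ~/.cache/uv
    key: uv-${{ runner.os }}-${{ hashFiles('uv.lock') }}
    restore-keys: |
      uv-${{ runner.os }}-
```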

# Example of parallel test execution (requires the pytest-split plugin)
jobs:
  test:
    strategy:
      matrix:
        group: [1, 2, 3, 4]
    steps:
      - name: Run Tests
        run: pytest tests/ --splits 4 --group ${{ matrix.group }}

Best Practices for Success

  1. Monitor Pipeline Health

    • Track build times

    • Monitor test reliability

    • Measure deployment success rates

  2. Regular Maintenance

    • Update dependencies

    • Review and optimize test suites

    • Clean up unused configurations

  3. Continuous Improvement

    • Gather team feedback

    • Analyze failure patterns

    • Implement incremental improvements
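
The pipeline-health ideas above can be sketched as a tiny summary script. The run dictionaries mimic fields returned by the GitHub Actions API (createdAt, updatedAt, conclusion); actually fetching them, e.g. via the gh CLI, is left out here:

```python
# Sketch: summarise pipeline health from a list of workflow runs.
from datetime import datetime


def parse(ts: str) -> datetime:
    """Parse an ISO-8601 timestamp like '2024-05-01T12:00:00Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def pipeline_health(runs: list[dict]) -> dict:
    """Compute average build time (in seconds) and deployment success rate."""
    durations = [
        (parse(r["updatedAt"]) - parse(r["createdAt"])).total_seconds()
        for r in runs
    ]
    successes = sum(1 for r in runs if r["conclusion"] == "success")
    return {
        "avg_build_seconds": sum(durations) / len(durations),
        "success_rate": successes / len(runs),
    }
```

Tracked over time (per week, per branch), these two numbers already reveal most pipeline-health trends.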

Looking Ahead

Remember that CI/CD is a journey, not a destination. As your project evolves, so should your pipeline. Keep these principles in mind:

  • Start with essential automation

  • Add complexity only when needed

  • Focus on reliability and maintainability

  • Keep documentation current

  • Monitor and optimize performance

In our next post, we'll dive into measuring and maintaining CI/CD success, including detailed metrics and monitoring strategies, and tackle the all-important million-dollar question: "which tools should I use?"

Action Items

Ready to improve your pipeline? Start with these steps:

  1. Audit your current pipeline for common pitfalls

  2. Implement proper error handling

  3. Set up environment management

  4. Add performance optimizations

  5. Document your pipeline setup

Remember: The goal is to make development more efficient, not more complicated. Each automation should serve a clear purpose and provide measurable value to your team.

P.S. Good job on making it all the way to the end! And a reminder that everything we discussed above is available for free in the Wolk public repo.

