Test Automation

Test automation establishes executable verification that runs without human intervention, producing consistent results across development machines and CI pipelines. You configure test frameworks, write tests that exercise code at multiple levels of granularity, integrate those tests into build pipelines, and generate reports that make failures visible and actionable.

Prerequisites

Before implementing test automation, ensure you have the following in place.

| Requirement | Specification | Verification |
| --- | --- | --- |
| Source repository | Git repository with branch protection | git remote -v returns configured origin |
| Runtime environment | Node.js 18+ or Python 3.10+ | node --version or python --version |
| Package manager | npm 9+, yarn 1.22+, or pip 23+ | npm --version, yarn --version, or pip --version |
| CI platform access | GitHub Actions, GitLab CI, or Jenkins | Can create/edit pipeline configuration |
| Repository write access | Permission to commit to feature branches | Can push to non-protected branches |
| Development dependencies | Permission to install dev packages | npm install --save-dev succeeds |

You need a working local development environment where you can run the application and install packages. If your organisation uses a monorepo structure, confirm which package manager and workspace configuration applies to your project before proceeding.

For Python projects, create a virtual environment before installing test dependencies:

python -m venv .venv
source .venv/bin/activate    # Linux/macOS
.venv\Scripts\activate       # Windows

For JavaScript projects, confirm your Node.js version matches the project’s requirements in package.json under the engines field.
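For reference, an engines entry looks like this (the version range below is illustrative, not a recommendation):

```json
{
  "engines": {
    "node": ">=18.0.0"
  }
}
```

Version managers such as nvm can read a matching .nvmrc file, which keeps local and CI Node.js versions aligned.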

Procedure

Setting up unit test frameworks

Unit tests verify individual functions and classes in isolation. You write unit tests to confirm that given specific inputs, a function produces expected outputs. The test framework provides assertion methods, test discovery, and execution infrastructure.
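The idea can be sketched with a hypothetical validator and the assertions that pin down its behaviour (the function and its rules are illustrative, not from your codebase):

```python
def is_valid_email(value: str) -> bool:
    """Hypothetical validator: a minimal structural check, not RFC-complete."""
    return value.count("@") == 1 and "." in value.split("@")[-1]

# Given specific inputs, assert the expected outputs.
assert is_valid_email("user@example.org") is True
assert is_valid_email("invalid-email") is False
assert is_valid_email("user@localhost") is False
```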

  1. Install the test framework and assertion libraries for your runtime.

    For Python projects using pytest:

pip install pytest pytest-cov pytest-xdist

For JavaScript projects using Jest:

npm install --save-dev jest @types/jest

For TypeScript projects, add the TypeScript preprocessor:

npm install --save-dev ts-jest typescript
  2. Create the test framework configuration file in your project root.

    For pytest, create pyproject.toml with test configuration:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80

For Jest, create jest.config.js:

/** @type {import('jest').Config} */
module.exports = {
  testEnvironment: 'node',
  roots: ['<rootDir>/src', '<rootDir>/tests'],
  testMatch: ['**/*.test.js', '**/*.test.ts'],
  collectCoverageFrom: [
    'src/**/*.{js,ts}',
    '!src/**/*.d.ts',
    '!src/**/index.{js,ts}',
  ],
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
  transform: {
    '^.+\\.tsx?$': 'ts-jest',
  },
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json'],
};
  3. Create the test directory structure following the convention for your framework.
project-root/
+-- src/
|   +-- services/
|   |   +-- user_service.py
|   +-- utils/
|       +-- validators.py
+-- tests/
    +-- unit/
    |   +-- services/
    |   |   +-- test_user_service.py
    |   +-- utils/
    |       +-- test_validators.py
    +-- conftest.py

The test directory mirrors the source directory structure. Each source module has a corresponding test module prefixed with test_.

  4. Write your first unit test to verify the framework functions correctly.

    For pytest, create tests/unit/test_example.py:

def test_addition():
    """Verify basic arithmetic to confirm pytest works."""
    result = 2 + 2
    assert result == 4

def test_string_concatenation():
    """Verify string operations."""
    result = "hello" + " " + "world"
    assert result == "hello world"
    assert len(result) == 11

For Jest, create tests/unit/example.test.js:

describe('Example tests', () => {
  test('addition works correctly', () => {
    const result = 2 + 2;
    expect(result).toBe(4);
  });

  test('string concatenation works correctly', () => {
    const result = 'hello' + ' ' + 'world';
    expect(result).toBe('hello world');
    expect(result).toHaveLength(11);
  });
});
  5. Execute the test suite to confirm proper configuration.

    For pytest:

pytest tests/unit/ -v

Expected output:

tests/unit/test_example.py::test_addition PASSED
tests/unit/test_example.py::test_string_concatenation PASSED
========================= 2 passed in 0.03s =========================

For Jest:

npm test -- --testPathPattern=unit

Expected output:

PASS tests/unit/example.test.js
  Example tests
    ✓ addition works correctly (2 ms)
    ✓ string concatenation works correctly (1 ms)

Test Suites: 1 passed, 1 total
Tests:       2 passed, 2 total

Configuring fixtures and mocks

Fixtures provide reusable test data and setup logic that multiple tests share. Mocks replace real dependencies with controlled substitutes, isolating the code under test from external systems like databases, APIs, and file systems.
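Before the framework-specific setup below, the core idea of a mock fits in a few lines of the standard library's unittest.mock (CheckoutService and its payment gateway are hypothetical):

```python
from unittest.mock import MagicMock

# A hypothetical service that depends on an external payment gateway.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        response = self.gateway.charge(amount)
        return response["status"] == "ok"

# Replace the real gateway with a mock that returns a controlled response.
gateway = MagicMock()
gateway.charge.return_value = {"status": "ok"}

service = CheckoutService(gateway)
assert service.charge(100) is True
gateway.charge.assert_called_once_with(100)
```

The test never touches a real payment system, yet it verifies both the return value and the exact call made to the dependency.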

  1. Create shared fixtures in a central configuration file.

    For pytest, create tests/conftest.py:

import pytest
from datetime import datetime, timezone

@pytest.fixture
def sample_user():
    """Provide a standard user dictionary for tests."""
    return {
        "id": "usr_12345",
        "email": "test@example.org",
        "name": "Test User",
        "created_at": datetime(2024, 1, 15, 10, 30, 0, tzinfo=timezone.utc),
        "is_active": True,
    }

@pytest.fixture
def sample_users(sample_user):
    """Provide a list of users for bulk operation tests."""
    users = [sample_user.copy() for _ in range(5)]
    for i, user in enumerate(users):
        user["id"] = f"usr_{10000 + i}"
        user["email"] = f"user{i}@example.org"
    return users

@pytest.fixture
def mock_database(mocker):
    """Provide a mock database connection."""
    mock_db = mocker.MagicMock()
    mock_db.query.return_value = []
    mock_db.execute.return_value = True
    return mock_db

Install the pytest-mock plugin for the mocker fixture:

pip install pytest-mock

For Jest, create tests/setup.js:

// Global test setup
beforeAll(() => {
  // Set consistent timezone for date tests
  process.env.TZ = 'UTC';
});

// Shared test data factory
global.createTestUser = (overrides = {}) => ({
  id: 'usr_12345',
  email: 'test@example.org',
  name: 'Test User',
  createdAt: new Date('2024-01-15T10:30:00Z'),
  isActive: true,
  ...overrides,
});

global.createTestUsers = (count = 5) =>
  Array.from({ length: count }, (_, i) =>
    global.createTestUser({
      id: `usr_${10000 + i}`,
      email: `user${i}@example.org`,
    })
  );

Reference the setup file in jest.config.js:

module.exports = {
  // ... other config
  setupFilesAfterEnv: ['<rootDir>/tests/setup.js'],
};
  2. Create mocks for external service dependencies.

    For pytest, mock external API calls:

tests/unit/services/test_user_service.py

import pytest
from unittest.mock import AsyncMock

from src.services.user_service import UserService, UserNotFoundError

# Async test functions require the pytest-asyncio plugin (pip install pytest-asyncio).
pytestmark = pytest.mark.asyncio

@pytest.fixture
def mock_http_client(mocker):
    """Mock the HTTP client used by UserService."""
    client = mocker.MagicMock()
    client.get = AsyncMock(return_value={
        "status": 200,
        "data": {"id": "usr_12345", "name": "Test User"}
    })
    client.post = AsyncMock(return_value={
        "status": 201,
        "data": {"id": "usr_99999"}
    })
    return client

@pytest.fixture
def user_service(mock_http_client, mock_database):
    """Create UserService with mocked dependencies."""
    return UserService(
        http_client=mock_http_client,
        database=mock_database
    )

class TestUserService:
    async def test_get_user_returns_user_data(self, user_service, mock_http_client):
        result = await user_service.get_user("usr_12345")
        assert result["id"] == "usr_12345"
        mock_http_client.get.assert_called_once_with("/users/usr_12345")

    async def test_get_user_raises_on_not_found(self, user_service, mock_http_client):
        mock_http_client.get.return_value = {"status": 404, "data": None}
        with pytest.raises(UserNotFoundError):
            await user_service.get_user("usr_nonexistent")

For Jest, mock modules:

tests/unit/services/userService.test.js

const { UserService } = require('../../../src/services/userService');

// Mock the HTTP client module
jest.mock('../../../src/utils/httpClient', () => ({
  get: jest.fn(),
  post: jest.fn(),
}));

const httpClient = require('../../../src/utils/httpClient');

describe('UserService', () => {
  let userService;

  beforeEach(() => {
    userService = new UserService();
    jest.clearAllMocks();
  });

  describe('getUser', () => {
    test('returns user data for valid ID', async () => {
      httpClient.get.mockResolvedValue({
        status: 200,
        data: { id: 'usr_12345', name: 'Test User' },
      });

      const result = await userService.getUser('usr_12345');

      expect(result.id).toBe('usr_12345');
      expect(httpClient.get).toHaveBeenCalledWith('/users/usr_12345');
    });

    test('throws UserNotFoundError for missing user', async () => {
      httpClient.get.mockResolvedValue({ status: 404, data: null });

      await expect(userService.getUser('usr_nonexistent')).rejects.toThrow(
        'User not found'
      );
    });
  });
});
  3. Create database fixtures with transaction rollback for isolation.

    For pytest with SQLAlchemy:

tests/conftest.py

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from src.database import Base

@pytest.fixture(scope="session")
def test_engine():
    """Create test database engine."""
    engine = create_engine(
        "sqlite:///:memory:",
        echo=False
    )
    Base.metadata.create_all(engine)
    return engine

@pytest.fixture
def db_session(test_engine):
    """Provide transactional database session that rolls back after each test."""
    connection = test_engine.connect()
    transaction = connection.begin()
    Session = sessionmaker(bind=connection)
    session = Session()
    yield session
    session.close()
    transaction.rollback()
    connection.close()
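The rollback pattern itself is independent of SQLAlchemy; a standard-library sqlite3 sketch shows why each test starts from a clean slate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.commit()

# Simulate one test: insert inside a transaction, then roll back.
conn.execute("INSERT INTO users VALUES ('test@example.org')")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
conn.rollback()

# The next test sees an empty table again.
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```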

Setting up integration tests

Integration tests verify that multiple components work together correctly. You test the boundaries between your code and external systems: databases, message queues, third-party APIs, and file systems. Integration tests run slower than unit tests and require more infrastructure, so you run them less frequently.

  1. Create a separate test directory and configuration for integration tests.
tests/
+-- unit/
+-- integration/
|   +-- conftest.py
|   +-- test_database_operations.py
|   +-- test_api_endpoints.py
+-- conftest.py

For pytest, create tests/integration/conftest.py:

import pytest
import os

def pytest_configure(config):
    """Mark all tests in this directory as integration tests."""
    config.addinivalue_line(
        "markers", "integration: mark test as integration test"
    )

def pytest_collection_modifyitems(items):
    """Auto-mark all tests in integration directory."""
    for item in items:
        if "integration" in str(item.fspath):
            item.add_marker(pytest.mark.integration)

@pytest.fixture(scope="session")
def database_url():
    """Provide test database URL from environment."""
    url = os.environ.get(
        "TEST_DATABASE_URL",
        "postgresql://test:test@localhost:5432/test_db"
    )
    return url
  2. Configure test containers for database integration tests.

    Install testcontainers:

pip install testcontainers[postgres]

Create database container fixture:

tests/integration/conftest.py

import pytest
from testcontainers.postgres import PostgresContainer
from sqlalchemy import create_engine

from src.database import Base

@pytest.fixture(scope="session")
def postgres_container():
    """Start PostgreSQL container for integration tests."""
    with PostgresContainer("postgres:15-alpine") as postgres:
        yield postgres

@pytest.fixture(scope="session")
def integration_engine(postgres_container):
    """Create database engine connected to test container."""
    engine = create_engine(postgres_container.get_connection_url())
    Base.metadata.create_all(engine)
    return engine

For JavaScript, use testcontainers-node:

npm install --save-dev testcontainers

tests/integration/setup.js
const { PostgreSqlContainer } = require('testcontainers');

let postgresContainer;

module.exports = async () => {
  postgresContainer = await new PostgreSqlContainer('postgres:15-alpine')
    .withDatabase('test_db')
    .withUsername('test')
    .withPassword('test')
    .start();
  process.env.DATABASE_URL = postgresContainer.getConnectionUri();
};

module.exports.teardown = async () => {
  if (postgresContainer) {
    await postgresContainer.stop();
  }
};
  3. Write integration tests that exercise real database operations.
tests/integration/test_database_operations.py

import pytest

from src.repositories.user_repository import UserRepository
from src.models import User

class TestUserRepository:
    @pytest.fixture
    def repository(self, integration_session):
        return UserRepository(integration_session)

    def test_create_and_retrieve_user(self, repository):
        # Create user
        user = User(
            email="integration@example.org",
            name="Integration Test User"
        )
        created = repository.create(user)

        # Verify creation
        assert created.id is not None
        assert created.email == "integration@example.org"

        # Retrieve and verify
        retrieved = repository.get_by_id(created.id)
        assert retrieved.email == created.email
        assert retrieved.name == created.name

    def test_query_users_by_email_domain(self, repository):
        # Create test users
        for i in range(3):
            repository.create(User(
                email=f"user{i}@example.org",
                name=f"User {i}"
            ))
        repository.create(User(
            email="other@different.org",
            name="Other User"
        ))

        # Query by domain
        results = repository.find_by_email_domain("example.org")
        assert len(results) == 3
        assert all("@example.org" in u.email for u in results)
  4. Configure separate test execution for unit and integration tests.

    Update pyproject.toml:

[tool.pytest.ini_options]
markers = [
    "integration: mark test as integration test (requires external services)",
]

Run unit tests only (fast, no external dependencies):

pytest tests/unit/ -v

Run integration tests only:

pytest tests/integration/ -v -m integration

Run all tests:

pytest tests/ -v

Setting up end-to-end tests

End-to-end tests verify complete user workflows through the full application stack. You automate browser interactions for web applications or API call sequences for backend services. End-to-end tests catch integration issues that unit and integration tests miss, but they run slowly and require the complete application environment.

  1. Install the end-to-end testing framework.

    For web applications, install Playwright:

pip install playwright pytest-playwright
playwright install chromium

Or for JavaScript:

npm install --save-dev @playwright/test
npx playwright install chromium
  2. Create the Playwright configuration file.

    For Python, pytest-playwright reads its settings from command-line options rather than a dedicated config-file section; add defaults to the existing addopts in pyproject.toml (headless is the default; pass --headed on the command line to watch the browser):

[tool.pytest.ini_options]
addopts = "-v --tb=short --browser chromium"

For JavaScript, create playwright.config.js:

const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  testDir: './tests/e2e',
  timeout: 30000,
  retries: 2,
  workers: 4,
  reporter: [
    ['html', { outputFolder: 'playwright-report' }],
    ['junit', { outputFile: 'test-results/e2e-results.xml' }],
  ],
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:3000',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
  projects: [
    {
      name: 'chromium',
      use: { browserName: 'chromium' },
    },
  ],
  webServer: {
    command: 'npm run start',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI,
    timeout: 120000,
  },
});
  3. Create end-to-end tests for critical user journeys.
tests/e2e/user-registration.spec.js

const { test, expect } = require('@playwright/test');

test.describe('User Registration', () => {
  test('new user can register and access dashboard', async ({ page }) => {
    // Navigate to registration page
    await page.goto('/register');

    // Fill registration form
    await page.fill('[data-testid="email-input"]', 'newuser@example.org');
    await page.fill('[data-testid="password-input"]', 'SecureP@ssw0rd!');
    await page.fill('[data-testid="confirm-password-input"]', 'SecureP@ssw0rd!');
    await page.fill('[data-testid="name-input"]', 'New Test User');

    // Submit form
    await page.click('[data-testid="register-button"]');

    // Verify redirect to dashboard
    await expect(page).toHaveURL('/dashboard');

    // Verify welcome message
    await expect(page.locator('[data-testid="welcome-message"]')).toContainText(
      'Welcome, New Test User'
    );
  });

  test('registration fails with invalid email', async ({ page }) => {
    await page.goto('/register');
    await page.fill('[data-testid="email-input"]', 'invalid-email');
    await page.fill('[data-testid="password-input"]', 'SecureP@ssw0rd!');
    await page.click('[data-testid="register-button"]');

    // Verify error message
    await expect(page.locator('[data-testid="email-error"]')).toContainText(
      'Please enter a valid email address'
    );

    // Verify still on registration page
    await expect(page).toHaveURL('/register');
  });

  test('registration fails with weak password', async ({ page }) => {
    await page.goto('/register');
    await page.fill('[data-testid="email-input"]', 'user@example.org');
    await page.fill('[data-testid="password-input"]', 'weak');
    await page.click('[data-testid="register-button"]');

    await expect(page.locator('[data-testid="password-error"]')).toContainText(
      'Password must be at least 12 characters'
    );
  });
});

For Python with Playwright:

tests/e2e/test_user_registration.py

import pytest
from playwright.sync_api import Page, expect

class TestUserRegistration:
    def test_new_user_can_register_and_access_dashboard(self, page: Page):
        page.goto("/register")
        page.fill('[data-testid="email-input"]', "newuser@example.org")
        page.fill('[data-testid="password-input"]', "SecureP@ssw0rd!")
        page.fill('[data-testid="confirm-password-input"]', "SecureP@ssw0rd!")
        page.fill('[data-testid="name-input"]', "New Test User")
        page.click('[data-testid="register-button"]')

        expect(page).to_have_url("/dashboard")
        expect(page.locator('[data-testid="welcome-message"]')).to_contain_text(
            "Welcome, New Test User"
        )
  4. Organise end-to-end tests by user journey, not by page.
tests/e2e/
+-- user-registration.spec.js
+-- user-authentication.spec.js
+-- beneficiary-intake.spec.js
+-- case-management.spec.js
+-- reporting-export.spec.js

Each file tests a complete workflow that a user performs, crossing multiple pages and components.

Configuring test parallelisation

Parallel test execution reduces total test time by running independent tests simultaneously across multiple CPU cores or machines.

  1. Configure pytest for parallel execution using pytest-xdist.
pip install pytest-xdist

Run tests across all available CPU cores:

pytest tests/ -n auto

Run tests across a specific number of workers:

pytest tests/ -n 4

The -n auto flag detects available CPU cores. For a machine with 8 cores, this spawns 8 parallel test processes.
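What auto resolves to can be approximated with the standard library (a sketch; pytest-xdist may also account for CPU affinity limits, so treat this as a rough guide):

```python
import os

# "-n auto" starts roughly one worker per available core.
workers = os.cpu_count() or 1
print(f"pytest -n auto would start about {workers} workers")
```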

  2. Configure Jest for parallel execution.

    Jest runs tests in parallel by default. Configure worker count in jest.config.js:

module.exports = {
  // ... other config
  maxWorkers: '50%', // Use half of available CPUs
  // Or specify exact count:
  // maxWorkers: 4,
};

For CI environments with limited resources:

module.exports = {
  maxWorkers: process.env.CI ? 2 : '50%',
};
  3. Ensure tests are isolated for parallel execution.

    Tests running in parallel must not share mutable state. Each test should:

    • Create its own test data
    • Use unique identifiers to prevent collisions
    • Clean up resources it creates
    • Not depend on execution order

    Generate unique identifiers in fixtures:

import uuid

import pytest

@pytest.fixture
def unique_email():
    return f"test-{uuid.uuid4().hex[:8]}@example.org"
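The cleanup bullet follows the same setup/yield/teardown shape as a pytest yield fixture; a standard-library sketch (using contextlib rather than pytest, so it runs anywhere) shows the pattern:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def scratch_file():
    """Setup and teardown in one place -- the same shape as a pytest yield fixture."""
    fd, path = tempfile.mkstemp(suffix=".tmp")
    os.close(fd)
    try:
        yield path          # the test body runs here
    finally:
        os.remove(path)     # cleanup runs even if the test raises

with scratch_file() as path:
    assert os.path.exists(path)
# after the block, the file has been removed
```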
  4. Configure test distribution strategy for large test suites.

    For pytest-xdist, use load balancing:

pytest tests/ -n 4 --dist loadscope

Distribution strategies:

  • load (default): Distribute tests as workers become available
  • loadscope: Group tests by module, distribute modules to workers
  • loadfile: Similar to loadscope but by file

The loadscope strategy keeps tests from the same module together, which helps when tests share expensive fixtures scoped to the module.
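The grouping that loadscope performs can be sketched as a partition of test IDs by module (the test names below are hypothetical):

```python
from collections import defaultdict

tests = [
    "tests/unit/test_users.py::test_create",
    "tests/unit/test_users.py::test_delete",
    "tests/unit/test_orders.py::test_checkout",
]

# loadscope: all tests from one module are scheduled on the same worker,
# so a module-scoped fixture is built once per module instead of per worker.
groups = defaultdict(list)
for test_id in tests:
    module = test_id.split("::")[0]
    groups[module].append(test_id)

for module, members in groups.items():
    print(module, "->", len(members), "tests on one worker")
```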

Integrating tests into CI pipelines

CI pipeline integration runs tests automatically on every code change, preventing broken code from reaching protected branches.

  1. Create the CI workflow configuration file.

    For GitHub Actions, create .github/workflows/test.yml:

name: Test Suite

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run unit tests
        run: pytest tests/unit/ -v --cov=src --cov-report=xml
      - name: Upload coverage report
        uses: codecov/codecov-action@v4
        with:
          files: coverage.xml
          fail_ci_if_error: true

  integration-tests:
    runs-on: ubuntu-latest
    needs: unit-tests
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run integration tests
        env:
          TEST_DATABASE_URL: postgresql://test:test@localhost:5432/test_db
        run: pytest tests/integration/ -v -m integration

  e2e-tests:
    runs-on: ubuntu-latest
    needs: integration-tests
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium
      - name: Run end-to-end tests
        run: npx playwright test
      - name: Upload test artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 7
  2. For GitLab CI, create .gitlab-ci.yml:

stages:
  - test
  - integration
  - e2e

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - node_modules/

unit-tests:
  stage: test
  image: python:3.11-slim
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - pytest tests/unit/ -v --cov=src --cov-report=xml --junitxml=unit-results.xml
  artifacts:
    reports:
      junit: unit-results.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
  coverage: '/TOTAL.*\s+(\d+%)/'

integration-tests:
  stage: integration
  image: python:3.11-slim
  services:
    - postgres:15-alpine
  variables:
    POSTGRES_USER: test
    POSTGRES_PASSWORD: test
    POSTGRES_DB: test_db
    TEST_DATABASE_URL: postgresql://test:test@postgres:5432/test_db
  script:
    - pip install -r requirements.txt -r requirements-dev.txt
    - pytest tests/integration/ -v --junitxml=integration-results.xml
  artifacts:
    reports:
      junit: integration-results.xml

e2e-tests:
  stage: e2e
  image: mcr.microsoft.com/playwright:v1.40.0-jammy
  script:
    - npm ci
    - npx playwright test
  artifacts:
    when: on_failure
    paths:
      - playwright-report/
    expire_in: 1 week
  3. Configure test result reporting and quality gates.

    Add coverage thresholds that fail the build:

# GitHub Actions step
- name: Check coverage threshold
  run: |
    coverage report --fail-under=80

For Jest in package.json:

{
  "scripts": {
    "test:ci": "jest --coverage --coverageThreshold='{\"global\":{\"branches\":80,\"functions\":80,\"lines\":80}}'"
  }
}

Configuring test reporting

Test reports make results visible to developers and stakeholders. You configure reporters to generate human-readable summaries and machine-readable formats for CI integration.

  1. Configure pytest to generate multiple report formats.
pytest tests/ \
    --junitxml=test-results/results.xml \
    --cov=src \
    --cov-report=xml:coverage/coverage.xml \
    --cov-report=html:coverage/html \
    --cov-report=term-missing

Add to pyproject.toml for default report generation:

[tool.pytest.ini_options]
addopts = """
-v
--tb=short
--junitxml=test-results/results.xml
--cov=src
--cov-report=term-missing
--cov-report=xml:coverage/coverage.xml
"""
  2. Configure Jest reporters.

    Update jest.config.js:

module.exports = {
  // ... other config
  reporters: [
    'default',
    ['jest-junit', {
      outputDirectory: 'test-results',
      outputName: 'results.xml',
      classNameTemplate: '{classname}',
      titleTemplate: '{title}',
    }],
    ['jest-html-reporter', {
      pageTitle: 'Test Report',
      outputPath: 'test-results/report.html',
      includeFailureMsg: true,
    }],
  ],
  coverageReporters: ['text', 'lcov', 'cobertura'],
};

Install reporters:

npm install --save-dev jest-junit jest-html-reporter
  3. Configure Playwright test reporting.

    Update playwright.config.js:

module.exports = defineConfig({
  reporter: [
    ['list'],
    ['html', { outputFolder: 'playwright-report', open: 'never' }],
    ['junit', { outputFile: 'test-results/e2e-results.xml' }],
    ['json', { outputFile: 'test-results/e2e-results.json' }],
  ],
});
  4. Generate a combined test summary for CI dashboards.

    Create a script to aggregate results:

scripts/aggregate_test_results.py

import xml.etree.ElementTree as ET

def parse_junit_xml(filepath):
    tree = ET.parse(filepath)
    root = tree.getroot()
    testsuite = root if root.tag == 'testsuite' else root.find('.//testsuite')
    return {
        'tests': int(testsuite.get('tests', 0)),
        'failures': int(testsuite.get('failures', 0)),
        'errors': int(testsuite.get('errors', 0)),
        'skipped': int(testsuite.get('skipped', 0)),
        'time': float(testsuite.get('time', 0)),
    }

results = {
    'unit': parse_junit_xml('test-results/unit-results.xml'),
    'integration': parse_junit_xml('test-results/integration-results.xml'),
    'e2e': parse_junit_xml('test-results/e2e-results.xml'),
}

total_tests = sum(r['tests'] for r in results.values())
total_failures = sum(r['failures'] + r['errors'] for r in results.values())
total_time = sum(r['time'] for r in results.values())

print(f"Total: {total_tests} tests, {total_failures} failures, {total_time:.1f}s")
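To sanity-check the parsing logic against a minimal JUnit file before wiring it into CI (the counts below are made up):

```python
import tempfile
import xml.etree.ElementTree as ET

# Minimal JUnit file with hypothetical counts, written to a temp path.
sample = '<testsuite name="unit" tests="4" failures="1" errors="0" skipped="1" time="2.50"/>'
with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as fh:
    fh.write(sample)
    path = fh.name

# Same extraction logic as the aggregation script above.
tree = ET.parse(path)
root = tree.getroot()
suite = root if root.tag == "testsuite" else root.find(".//testsuite")
summary = {
    "tests": int(suite.get("tests", 0)),
    "failures": int(suite.get("failures", 0)),
}
assert summary == {"tests": 4, "failures": 1}
```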

Verification

After completing test automation setup, verify that the system functions correctly.

Run the complete test suite and confirm all tests pass:

# Python
pytest tests/ -v --tb=short
# JavaScript
npm test

Expected output shows test counts and pass/fail status:

tests/unit/test_validators.py::test_email_validation PASSED
tests/unit/test_validators.py::test_phone_validation PASSED
tests/unit/services/test_user_service.py::TestUserService::test_get_user_returns_user_data PASSED
tests/integration/test_database_operations.py::TestUserRepository::test_create_and_retrieve_user PASSED
========================= 47 passed, 2 skipped in 12.34s =========================

Verify coverage reports generate correctly:

pytest tests/ --cov=src --cov-report=term-missing

Expected output includes coverage summary:

Name                                  Stmts   Miss  Cover   Missing
-------------------------------------------------------------------
src/__init__.py                           0      0   100%
src/services/user_service.py             45      3    93%   78-80
src/repositories/user_repository.py      32      0   100%
src/utils/validators.py                  28      2    93%   45-46
-------------------------------------------------------------------
TOTAL                                   105      5    95%

Confirm CI pipeline executes successfully by pushing a test commit:

git checkout -b test/verify-ci-pipeline
git add .
git commit -m "test: verify CI pipeline configuration"
git push origin test/verify-ci-pipeline

Check the CI dashboard for green status on all test stages.

Verify parallel execution reduces test time:

# Sequential execution
time pytest tests/unit/ -v
# Parallel execution
time pytest tests/unit/ -n auto -v

For a suite of 200 unit tests on a 4-core machine, parallel execution reduces runtime from approximately 45 seconds to 12 seconds.

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| ModuleNotFoundError when running pytest | Test file imports source module incorrectly | Add src to the Python path: export PYTHONPATH="${PYTHONPATH}:$(pwd)/src", or add pythonpath = ["src"] under [tool.pytest.ini_options] in pyproject.toml |
| Tests pass locally but fail in CI | Environment differences between local and CI | Pin dependency versions in requirements.txt; use the same Python/Node version in CI as locally |
| fixture 'db_session' not found | Fixture defined in wrong conftest.py or not imported | Move the fixture to tests/conftest.py (root test directory) for global availability |
| Tests fail intermittently (flaky tests) | Shared state between tests or timing dependencies | Add @pytest.mark.usefixtures("clean_database") to reset state; avoid time.sleep() in favour of explicit waits |
| Coverage report shows 0% | Coverage not measuring correct source directory | Verify --cov=src points to the actual source directory; check [tool.coverage.run] source in pyproject.toml |
| Jest tests time out | Async operations not awaited or mocks not configured | Add await to async calls; increase the timeout with jest.setTimeout(10000) |
| Playwright tests fail to find elements | Page not fully loaded or selectors incorrect | Add await page.waitForSelector('[data-testid="element"]') before interaction |
| Integration tests fail with connection refused | Service container not ready | Add health checks to the CI service configuration; increase the startup timeout |
| PermissionError when writing test reports | CI runner lacks write permission to the output directory | Create output directories in the CI script: mkdir -p test-results coverage |
| Tests run slowly despite parallelisation | Fixtures with scope="session" force sequential execution | Use scope="function" for fixtures that can run in parallel; isolate session-scoped fixtures to separate test files |
| Mock not applied to correct module | Import path mismatch between mock and usage | Mock where the function is used, not where it is defined: @patch('src.services.user_service.http_client') |
| Database tests leave data behind | Missing cleanup or transaction rollback | Use a pytest fixture that yields and cleans up afterwards; wrap tests in transactions that roll back |
| E2E tests fail with net::ERR_CONNECTION_REFUSED | Application not running or wrong port | Verify the webServer configuration in the Playwright config; check the application starts before tests |
| SyntaxError: Cannot use import statement outside a module | Jest not configured for ES modules | Add a transform configuration for Babel or ts-jest; or use the .mjs extension with Node's native ES modules |
