Code Review Process
Code review is the systematic examination of source code changes by someone other than the author before those changes merge into a shared branch. The practice serves three functions: defect detection, knowledge transfer, and codebase consistency. Every code change requires review regardless of author seniority or change size.
This reference defines the review workflow, approval requirements, and quality standards that apply to all development work. For security-specific review criteria, see Secure Coding Standards. For version control mechanics underlying the review process, see Version Control.
Review workflow
A review proceeds through four stages from submission to merge. Each stage has defined entry criteria that must be satisfied before progression.
```
+------------------+     +------------------+     +------------------+     +------------------+
|                  |     |                  |     |                  |     |                  |
|    SUBMITTED     +---->+    IN REVIEW     +---->+     APPROVED     +---->+      MERGED      |
|                  |     |                  |     |                  |     |                  |
+--------+---------+     +--------+---------+     +--------+---------+     +------------------+
         |                        |                        |
         v                        v                        v
 - Automated checks      - Reviewer assigned      - All approvals
   passing               - Comments addressed       obtained
 - Description complete  - No blocking issues     - CI pipeline green
 - Linked to work item   - Tests adequate         - No merge conflicts
```

The submitted stage begins when an author creates a merge request or pull request. Automated checks run immediately: linting, compilation, unit tests, and security scans. The request cannot proceed until these pass. Authors must provide a description explaining what changed and why, with a link to the relevant work item or issue.
The in review stage begins when a reviewer is assigned and starts examining the change. Reviewers leave comments, request changes, or approve. Authors respond to feedback by updating the code, replying to comments, or discussing disagreements. This stage iterates until all blocking issues are resolved.
The approved stage indicates all required approvals are obtained and no unresolved blocking comments remain. The CI pipeline must show green status. Any merge conflicts with the target branch must be resolved by the author before merge.
The merged stage completes the review. The change integrates into the target branch and the merge request closes. Post-merge, the author monitors for any issues in subsequent pipeline stages or deployments.
Reviewer selection
The appropriate reviewer depends on the change’s scope, risk, and affected components. A single reviewer suffices for most changes. High-risk changes require additional reviewers with specific expertise.
Selection criteria
Select reviewers based on these factors in order of priority:
Component ownership: The person or team responsible for the affected code area has primary review responsibility. Component owners understand the design intent and can identify deviations.
Recent context: Someone who recently worked on related code can identify integration issues and unintended side effects. Git history identifies recent contributors.
Domain expertise: Changes touching specialised areas (cryptography, database queries, accessibility, internationalisation) benefit from domain expert review even if the expert lacks component familiarity.
Availability: A less-ideal reviewer who can respond within the turnaround window is preferable to the perfect reviewer who is unavailable.
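Component ownership is commonly encoded in a CODEOWNERS file, which platforms such as GitHub and GitLab use to assign reviewers automatically based on the paths a change touches. A minimal sketch (the paths and team handles are illustrative, not real entries):

```
# Default owners for anything not matched by a later rule
*                   @org/platform-team

# Component-specific ownership (example paths)
/payments/          @org/payments-team
/auth/              @org/security-team
*.sql               @org/data-team
```

Later rules take precedence over earlier ones, so the catch-all default goes first.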
Reviewer count by risk level
| Risk level | Minimum reviewers | Reviewer requirements |
|---|---|---|
| Standard | 1 | Any qualified team member |
| Elevated | 2 | At least one component owner |
| High | 2 | Component owner plus domain expert |
| Critical | 3 | Component owner, domain expert, and senior engineer |
Risk level determination follows these thresholds:
Standard risk applies to changes affecting a single component with existing test coverage, under 200 lines modified, and no security-sensitive code paths.
Elevated risk applies to changes affecting multiple components, modifying shared libraries or utilities, or touching code without adequate test coverage.
High risk applies to changes affecting authentication, authorisation, payment processing, personal data handling, cryptographic operations, or external integrations.
Critical risk applies to changes affecting incident response systems, backup and recovery mechanisms, data migration scripts, or infrastructure security controls.
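The thresholds above can be sketched as a classification function. This is an illustrative model of the rules, not a real API; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Change:
    components: int               # number of components affected
    lines_modified: int
    has_test_coverage: bool
    touches_shared_library: bool
    security_sensitive: bool      # auth, payments, personal data, crypto, integrations
    critical_system: bool         # incident response, backup/recovery, migrations, infra security

def risk_level(change: Change) -> str:
    """Classify a change per the risk thresholds, checked from highest to lowest."""
    if change.critical_system:
        return "critical"
    if change.security_sensitive:
        return "high"
    if (change.components > 1
            or change.touches_shared_library
            or not change.has_test_coverage):
        return "elevated"
    if change.lines_modified < 200:
        return "standard"
    return "elevated"  # large single-component changes fall outside standard risk
```

Checking from highest risk downward ensures a change matching multiple criteria receives the strictest classification.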
Self-review prohibition
Authors cannot approve their own changes under any circumstances. Organisations with single developers must establish external review arrangements through peer organisations, contractors, or community reviewers for changes above standard risk.
Review checklists
Reviewers verify different aspects depending on the change category. These checklists supplement automated checks rather than replacing them. Items marked with severity indicate blocking issues that prevent approval.
Feature changes
| Check | Severity | Verification |
|---|---|---|
| Requirements traced | Blocking | Change implements documented requirement with link to specification |
| Design consistent | Blocking | Implementation follows established patterns in the codebase |
| Tests cover acceptance criteria | Blocking | Tests verify each acceptance criterion explicitly |
| Edge cases handled | Blocking | Null values, empty collections, boundary conditions addressed |
| Error messages actionable | Non-blocking | Users can understand and respond to error conditions |
| Performance acceptable | Blocking | No obvious N+1 queries, unbounded loops, or memory accumulation |
| Documentation updated | Non-blocking | README, API docs, and comments reflect changes |
| Backwards compatible | Blocking | Existing integrations continue functioning |
| Feature flag wrapped | Non-blocking | New features toggleable for gradual rollout where applicable |
| Logging adequate | Non-blocking | Key operations emit logs at appropriate levels |
Bug fixes
| Check | Severity | Verification |
|---|---|---|
| Root cause identified | Blocking | Fix addresses underlying cause, not symptoms |
| Regression test added | Blocking | Test fails without fix, passes with fix |
| Related code examined | Non-blocking | Similar patterns elsewhere checked for same bug |
| Original issue reproducible | Blocking | Bug confirmed before fix applied |
| Fix minimal | Non-blocking | Change limited to necessary corrections |
| Changelog entry added | Non-blocking | User-facing bugs documented in release notes |
Security changes
| Check | Severity | Verification |
|---|---|---|
| Threat model updated | Blocking | Security changes reflect updated threat analysis |
| No secrets in code | Blocking | Credentials, keys, tokens absent from committed files |
| Input validation complete | Blocking | All external input validated before use |
| Output encoding applied | Blocking | Data rendered in context-appropriate encoding |
| Least privilege maintained | Blocking | Permissions minimised for operations |
| Audit logging present | Blocking | Security-relevant operations recorded |
| Dependencies scanned | Blocking | No known vulnerabilities in added dependencies |
| Cryptography reviewed | Blocking | Cryptographic changes reviewed by qualified personnel |
For detailed security verification criteria, see Secure Coding Standards.
Configuration changes
| Check | Severity | Verification |
|---|---|---|
| Environment parity | Blocking | Change valid across all deployment environments |
| Secrets externalised | Blocking | Sensitive values reference external secret store |
| Defaults sensible | Blocking | Missing configuration fails safe or provides reasonable default |
| Validation present | Non-blocking | Invalid configuration detected at startup |
| Documentation complete | Non-blocking | New parameters documented with type, range, and purpose |
| Rollback tested | Non-blocking | Previous configuration can be restored |
Database changes
| Check | Severity | Verification |
|---|---|---|
| Migration reversible | Blocking | Down migration restores previous state |
| Migration tested | Blocking | Migration verified against production-like data volume |
| Indices appropriate | Blocking | Queries have supporting indices; no unused indices added |
| Constraints preserved | Blocking | Referential integrity maintained |
| Data migration safe | Blocking | Existing data transforms correctly |
| Performance validated | Blocking | Migration completes within maintenance window |
| Backup verified | Blocking | Pre-migration backup confirmed |
Approval requirements
Approval indicates a reviewer endorses the change for merge. Different approval types carry different weight and meaning.
Approval types
Approve indicates the reviewer examined the change, found it acceptable, and endorses merging. This approval counts toward the required approval threshold.
Approve with comments indicates endorsement contingent on addressing non-blocking feedback. The author should address comments but may merge without further review cycle. This approval counts toward the threshold.
Request changes indicates blocking issues that prevent approval. The change cannot merge until the reviewer converts this to an approval. Request changes creates a blocking state that requires the original reviewer to re-review.
Comment indicates feedback without approval or rejection. Comments alone do not advance or block the review. Reviewers use this for questions, suggestions, or observations that do not require action.
Approval thresholds
| Change characteristic | Required approvals | Additional requirements |
|---|---|---|
| Under 50 lines, single file | 1 | None |
| 50-200 lines, single component | 1 | None |
| 200-500 lines, single component | 1 | Component owner approval |
| Over 500 lines | 2 | Consider splitting change |
| Multiple components | 2 | One approval per affected component |
| API contract changes | 2 | Consumer team consultation |
| Database schema changes | 2 | DBA or data team review |
| Security-sensitive changes | 2 | Security-qualified reviewer |
| Infrastructure changes | 2 | Operations team member |
Stale approvals
Approvals become stale when subsequent commits modify the approved code substantively. The following changes invalidate existing approvals:
- Modifications to files the approver reviewed
- Changes exceeding 20 lines after approval
- Force pushes that rewrite history
- Merge conflict resolutions affecting approved code
Approvers must re-review after invalidation. Administrative overrides require documented justification and security team notification.
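The invalidation rules can be expressed as a single predicate. A simplified sketch; the parameter names are illustrative, and a real implementation would derive them from the platform's merge request data.

```python
def approval_is_stale(approved_files: set[str],
                      changed_files: set[str],
                      lines_changed_since_approval: int,
                      history_rewritten: bool,
                      conflict_resolution_touched_approved: bool) -> bool:
    """Return True if an existing approval should be invalidated."""
    return (
        bool(approved_files & changed_files)          # approved files modified
        or lines_changed_since_approval > 20          # substantive post-approval change
        or history_rewritten                          # force push rewrote history
        or conflict_resolution_touched_approved       # conflict fix in approved code
    )
```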
Turnaround expectations
Review turnaround measures from submission (automated checks passing) to first substantive reviewer response. Response means actionable feedback or approval, not acknowledgement.
| Priority | First response | Review completion | Escalation trigger |
|---|---|---|---|
| Critical | 2 hours | 4 hours | 3 hours without response |
| High | 4 hours | 1 business day | 6 hours without response |
| Standard | 1 business day | 2 business days | 2 business days without response |
| Low | 2 business days | 5 business days | 5 business days without response |
Priority assignment follows these criteria:
Critical priority applies to production incidents, security vulnerabilities, and changes blocking release deployments.
High priority applies to bug fixes affecting users, time-sensitive features, and dependency updates addressing known vulnerabilities.
Standard priority applies to feature development, refactoring, documentation, and routine maintenance.
Low priority applies to speculative improvements, experiments, and changes without immediate value.
Authors set initial priority. Reviewers may adjust priority based on their assessment. Disagreements escalate to the technical lead.
Timezone considerations
Distributed teams calculate turnaround in the reviewer's working hours. If an author submits at 17:00 local time and the reviewer's working day does not begin for another 12 hours, the clock starts at the reviewer's 09:00.
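Determining when the clock starts can be sketched as follows. This assumes a 09:00-17:00 working day and ignores weekends and holidays; the timezone names are examples.

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

WORKDAY_START = time(9, 0)
WORKDAY_END = time(17, 0)

def clock_start(submitted_at: datetime, reviewer_tz: str) -> datetime:
    """Return when the turnaround clock begins, in the reviewer's timezone.

    Simplified sketch: assumes every day is a working day.
    """
    local = submitted_at.astimezone(ZoneInfo(reviewer_tz))
    if local.time() < WORKDAY_START:
        # Before the reviewer's day starts: clock begins at 09:00 today.
        return local.replace(hour=9, minute=0, second=0, microsecond=0)
    if local.time() >= WORKDAY_END:
        # After hours: clock begins at 09:00 the next day.
        next_day = local + timedelta(days=1)
        return next_day.replace(hour=9, minute=0, second=0, microsecond=0)
    return local  # submitted during the reviewer's working hours
```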
For critical and high priority reviews, authors should identify reviewers in overlapping or proximate timezones. If no qualified reviewer is available within the turnaround window, escalate to the technical lead for alternative arrangements.
Feedback standards
Review feedback must be constructive, specific, and actionable. Reviewers critique code, not authors. Feedback that fails these standards should be revised before posting.
Constructive feedback
Feedback is constructive when it:
- Explains the concern rather than just identifying it
- Provides alternatives when requesting changes
- Acknowledges good work alongside improvement areas
- Distinguishes blocking issues from suggestions
- Respects the author’s approach when multiple valid approaches exist
Unconstructive feedback example: “This is wrong.”
Constructive equivalent: “This query executes inside the loop, causing N+1 database calls. Consider fetching all records before the loop with a single query, then looking up by ID within the loop.”
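The fix that comment describes looks like this in outline. The data shapes and the `fetch_*` callables are hypothetical placeholders standing in for database access.

```python
# Before: one query per order inside the loop (the N+1 pattern).
def enrich_orders_slow(orders, fetch_customer):
    for order in orders:
        order["customer"] = fetch_customer(order["customer_id"])  # query per iteration
    return orders

# After: a single query up front, then a dictionary lookup inside the loop.
def enrich_orders(orders, fetch_customers_by_ids):
    ids = {order["customer_id"] for order in orders}
    customers = {c["id"]: c for c in fetch_customers_by_ids(ids)}  # one query total
    for order in orders:
        order["customer"] = customers[order["customer_id"]]
    return orders
```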
Specific feedback
Feedback is specific when it:
- References exact lines or code blocks
- Uses concrete examples rather than abstract principles
- Identifies the actual problem rather than symptoms
- Quantifies impact where measurable
Unspecific feedback example: “This seems slow.”
Specific equivalent: “This method performs three sequential HTTP calls totalling 600ms typical latency. Parallel execution with Promise.all() would reduce this to the slowest single call, approximately 250ms.”
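The same parallelisation, sketched in Python with asyncio.gather as the analogue of Promise.all. The three coroutines are placeholders simulating the HTTP calls with sleeps.

```python
import asyncio

async def fetch_profile():   # placeholder for an HTTP call (~200 ms)
    await asyncio.sleep(0.2)
    return {"profile": "..."}

async def fetch_orders():    # placeholder (~250 ms)
    await asyncio.sleep(0.25)
    return {"orders": []}

async def fetch_settings():  # placeholder (~150 ms)
    await asyncio.sleep(0.15)
    return {"settings": {}}

async def load_dashboard():
    # Sequential awaits would total ~600 ms; gather runs all three concurrently,
    # so total latency is bounded by the slowest call (~250 ms).
    profile, orders, settings = await asyncio.gather(
        fetch_profile(), fetch_orders(), fetch_settings()
    )
    return {**profile, **orders, **settings}
```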
Actionable feedback
Feedback is actionable when it:
- Describes what change would address the concern
- Indicates whether the concern blocks approval
- Provides enough information to implement without further clarification
- Distinguishes requirements from preferences
Unactionable feedback example: “Consider the error handling.”
Actionable equivalent: “The catch block swallows exceptions silently. Add logging at ERROR level with the exception details, and re-throw or return an error response so callers know the operation failed. This blocks approval.”
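The corrected pattern from that comment, in outline. The logger name and the gateway interface are illustrative.

```python
import logging

logger = logging.getLogger("payments")  # illustrative logger name

def process_payment(request, gateway):
    try:
        return gateway.charge(request)
    except Exception:
        # Do not swallow the failure: record the details at ERROR level with
        # the traceback, then re-raise so callers know the operation failed.
        logger.exception("payment processing failed for request %s", request.get("id"))
        raise
```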
Feedback prefixes
Use these prefixes to clarify feedback intent:
| Prefix | Meaning | Author response |
|---|---|---|
| blocking: | Issue prevents approval | Must address before merge |
| suggestion: | Improvement opportunity | Address or explain why not |
| question: | Clarification needed | Reply with explanation |
| nit: | Minor style or preference | Address if convenient |
| praise: | Positive observation | No action required |
| future: | Outside current scope | Consider for later work |
Example: “blocking: This SQL query concatenates user input directly. Use parameterised queries to prevent SQL injection. See Secure Coding Standards for the required pattern.”
Escalation procedures
Disagreements during review require structured resolution. Most disagreements resolve through discussion between author and reviewer. Persistent disagreements escalate through defined channels.
Disagreement categories
Technical disagreements concern implementation approach, design patterns, performance tradeoffs, or architectural decisions. These escalate to the technical lead or architect.
Standards disagreements concern interpretation of coding standards, style guides, or documented practices. These escalate to the standards owner or documentation maintainer.
Priority disagreements concern review urgency, turnaround expectations, or resource allocation. These escalate to the team lead or delivery manager.
Conduct disagreements concern feedback tone, professionalism, or interpersonal issues. These escalate to the team lead or HR as appropriate.
Escalation process
```
+------------------+     +------------------+     +------------------+
|                  |     |                  |     |                  |
|   Author and     |     |  Technical lead  |     |  Architecture    |
|   reviewer       +---->+  mediation       +---->+  review board    |
|   discussion     |     |                  |     |                  |
+--------+---------+     +--------+---------+     +------------------+
         |                        |
         | 2 exchanges            | 1 business day
         | without resolution     | without resolution
```

Authors and reviewers should exchange at least two rounds of comments before escalating. Escalation requests include:
- Summary of the disagreement
- Both positions with supporting rationale
- Impact of each approach
- Attempted resolutions
The technical lead mediates by reviewing both positions, potentially consulting additional experts, and issuing a binding decision within one business day. If the technical lead cannot resolve the disagreement, the architecture review board makes a final determination.
Override authority
In exceptional circumstances, changes may merge without full approval. Override authority exists for:
- Production incidents requiring immediate fixes
- Security vulnerabilities requiring urgent patching
- Deployment blockers during release windows
Overrides require:
- Documentation of the justification
- Approval from on-call lead or incident commander
- Commitment to post-merge review within one business day
- Notification to the security team for security-related changes
Override usage is tracked and reviewed monthly. Patterns of override usage indicate process issues requiring attention.
Pair programming alternative
Pair programming substitutes for asynchronous code review when two developers work together in real-time on the same change. The navigator provides continuous review while the driver writes code. This approach suits complex changes, knowledge transfer scenarios, and time-sensitive work.
When to use pair programming
Pair programming is appropriate when:
- Change requires knowledge held by different team members
- Author is unfamiliar with the affected codebase area
- Change is exploratory or design is uncertain
- Turnaround time for async review is unacceptable
- Multiple review cycles are anticipated
Pair programming is not appropriate when:
- Change is straightforward and low-risk
- Team members are in incompatible timezones
- Independent review perspective is specifically needed
- Audit requirements mandate separate author and reviewer
Pair programming review requirements
Changes developed through pair programming still require merge request creation with:
- Both participants listed (author and co-author)
- Note indicating pair programming session
- Session duration recorded
- Standard automated checks passing
One participant is recorded as the author; the other may approve the merge request. For elevated risk changes, an additional reviewer outside the pair is required.
Review tooling
Code review requires tooling that supports the workflow stages, approval tracking, and feedback mechanisms defined in this reference.
Required capabilities
Review tooling must provide:
| Capability | Purpose |
|---|---|
| Diff visualisation | Display changes in context with surrounding code |
| Inline comments | Attach feedback to specific lines |
| Threaded discussions | Track conversation on individual topics |
| Approval workflow | Record approvals and blocks with attribution |
| Status checks | Display automated check results |
| Merge controls | Enforce approval requirements before merge |
| Notification | Alert reviewers to assignments and updates |
| History | Retain review records for audit purposes |
Tooling options
| Tool | Deployment | Strengths | Considerations |
|---|---|---|---|
| GitLab | Self-hosted or SaaS | Integrated CI/CD, comprehensive features | Resource requirements for self-hosted |
| Gitea | Self-hosted | Lightweight, simple, low overhead | Limited advanced features |
| GitHub | SaaS or Enterprise | Ecosystem integration, familiar interface | SaaS version has data residency implications |
| Gerrit | Self-hosted | Fine-grained controls, scales well | Steeper learning curve |
| Bitbucket | SaaS or Data Center | Atlassian integration | Licensing costs at scale |
For self-hosted options prioritising data sovereignty, Gitea provides the simplest deployment while GitLab offers the most comprehensive feature set. See the Version Control reference for repository platform selection criteria.
Review metrics
Metrics inform process improvement without becoming targets that distort behaviour. Track these metrics at team level, not individual level, to avoid gaming.
Health indicators
| Metric | Healthy range | Investigation trigger |
|---|---|---|
| Median review turnaround | Under 1 business day | Over 2 business days |
| Review cycles per change | 1-2 cycles | Over 3 cycles average |
| Approval without comments | 30-50% of reviews | Over 80% (rubber-stamping) |
| Changes merged without review | 0% | Any occurrence outside override process |
| Reviewer concentration | No reviewer over 40% of reviews | Single reviewer over 60% |
| Stale review rate | Under 10% | Over 25% |
Calculating metrics
Review turnaround measures from first automated check pass to first substantive reviewer response. Calculate median, not mean, to reduce outlier influence.
Review cycles counts the number of times a change moves from author to reviewer and back. A cycle begins when the author marks the change ready for review and ends when the reviewer responds. Healthy changes complete in one or two cycles.
Approval without comments indicates reviews where the reviewer approved without leaving any comments. Some approvals without comments are normal for straightforward changes. Excessive rates suggest insufficient scrutiny.
Reviewer concentration measures distribution of review workload. Calculate as percentage of total reviews performed by each reviewer. Concentration above 40% indicates knowledge silos and single points of failure.
Stale review rate measures reviews that exceeded turnaround expectations. Calculate as percentage of reviews missing SLA targets.
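The calculations above can be sketched over a list of review records. The record field names are assumptions; a real implementation would map them from the review platform's API.

```python
from statistics import median
from collections import Counter

def turnaround_median(reviews):
    """Median hours from first automated check pass to first substantive response."""
    return median(r["first_response_h"] - r["checks_passed_h"] for r in reviews)

def reviewer_concentration(reviews):
    """Share of total reviews performed by each reviewer, as a fraction."""
    counts = Counter(r["reviewer"] for r in reviews)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}
```

Using the median rather than the mean keeps one long-delayed review from distorting the turnaround figure, as the section notes.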
Metric review cadence
Review metrics monthly at team retrospectives. Identify trends over three-month periods rather than reacting to individual data points. Investigate triggers promptly when metrics move outside healthy ranges.