
Code Review Process

Code review is the systematic examination of source code changes by someone other than the author before those changes merge into a shared branch. The practice serves three functions: defect detection, knowledge transfer, and codebase consistency. Every code change requires review regardless of author seniority or change size.

This reference defines the review workflow, approval requirements, and quality standards that apply to all development work. For security-specific review criteria, see Secure Coding Standards. For version control mechanics underlying the review process, see Version Control.

Review workflow

A review proceeds through four stages from submission to merge. Each stage has defined entry criteria that must be satisfied before progression.

+------------------+     +------------------+     +------------------+     +------------------+
|                  |     |                  |     |                  |     |                  |
|    SUBMITTED     +---->+    IN REVIEW     +---->+     APPROVED     +---->+      MERGED      |
|                  |     |                  |     |                  |     |                  |
+--------+---------+     +--------+---------+     +--------+---------+     +------------------+
         |                        |                        |
         v                        v                        v
- Automated checks       - Reviewer assigned      - All approvals
  passing                - Comments addressed       obtained
- Description complete   - No blocking issues     - CI pipeline green
- Linked to work item    - Tests adequate         - No merge conflicts

The submitted stage begins when an author creates a merge request or pull request. Automated checks run immediately: linting, compilation, unit tests, and security scans. The request cannot proceed until these pass. Authors must provide a description explaining what changed and why, with a link to the relevant work item or issue.

The in review stage begins when a reviewer is assigned and starts examining the change. Reviewers leave comments, request changes, or approve. Authors respond to feedback by updating the code, replying to comments, or discussing disagreements. This stage iterates until all blocking issues are resolved.

The approved stage indicates all required approvals are obtained and no unresolved blocking comments remain. The CI pipeline must show green status. Any merge conflicts with the target branch must be resolved by the author before merge.

The merged stage completes the review. The change integrates into the target branch and the merge request closes. Post-merge, the author monitors for any issues in subsequent pipeline stages or deployments.
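
Taken together, the four stages form a strictly linear state machine: a request advances one stage at a time, and only when the next stage's entry criteria hold. A minimal sketch in Python, where the single entry_criteria_met flag stands in for the per-stage checklists above:

```python
from enum import Enum

class Stage(Enum):
    SUBMITTED = "submitted"
    IN_REVIEW = "in_review"
    APPROVED = "approved"
    MERGED = "merged"

# Allowed forward transitions; a request never skips a stage.
TRANSITIONS = {
    Stage.SUBMITTED: Stage.IN_REVIEW,
    Stage.IN_REVIEW: Stage.APPROVED,
    Stage.APPROVED: Stage.MERGED,
}

def advance(current: Stage, entry_criteria_met: bool) -> Stage:
    """Move to the next stage only when its entry criteria are satisfied."""
    if current is Stage.MERGED:
        raise ValueError("merged is terminal")
    if not entry_criteria_met:
        return current  # stay put until the criteria pass
    return TRANSITIONS[current]

# A request with passing checks moves from submitted to in review.
print(advance(Stage.SUBMITTED, entry_criteria_met=True))  # Stage.IN_REVIEW
```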

Reviewer selection

The appropriate reviewer depends on the change’s scope, risk, and affected components. A single reviewer suffices for most changes. High-risk changes require additional reviewers with specific expertise.

Selection criteria

Select reviewers based on these factors in order of priority:

  1. Component ownership: The person or team responsible for the affected code area has primary review responsibility. Component owners understand the design intent and can identify deviations.

  2. Recent context: Someone who recently worked on related code can identify integration issues and unintended side effects. Git history identifies recent contributors.

  3. Domain expertise: Changes touching specialised areas (cryptography, database queries, accessibility, internationalisation) benefit from domain expert review even if the expert lacks component familiarity.

  4. Availability: A less-ideal reviewer who can respond within the turnaround window is preferable to the perfect reviewer who is unavailable.

Reviewer count by risk level

Risk level   Minimum reviewers   Reviewer requirements
Standard     1                   Any qualified team member
Elevated     2                   At least one component owner
High         2                   Component owner plus domain expert
Critical     3                   Component owner, domain expert, and senior engineer

Risk level determination follows these thresholds:

Standard risk applies to changes affecting a single component with existing test coverage, under 200 lines modified, and no security-sensitive code paths.

Elevated risk applies to changes affecting multiple components, modifying shared libraries or utilities, or touching code without adequate test coverage.

High risk applies to changes affecting authentication, authorisation, payment processing, personal data handling, cryptographic operations, or external integrations.

Critical risk applies to changes affecting incident response systems, backup and recovery mechanisms, data migration scripts, or infrastructure security controls.
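
These thresholds can be encoded as a small classifier that checks the highest risk level first. The parameter names are illustrative, and in practice some inputs (such as whether a code path is security-sensitive) require human judgement:

```python
def risk_level(components_touched: int, lines_modified: int,
               has_test_coverage: bool, security_sensitive: bool,
               critical_system: bool) -> str:
    """Classify a change per the thresholds above (illustrative field names)."""
    if critical_system:       # incident response, backup/recovery, migrations
        return "critical"
    if security_sensitive:    # auth, payments, personal data, crypto
        return "high"
    if components_touched > 1 or not has_test_coverage:
        return "elevated"
    if lines_modified < 200:
        return "standard"
    # A 200+ line single-component change is not defined as standard;
    # treating it as elevated is a conservative assumption.
    return "elevated"
```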

Self-review prohibition

Authors cannot approve their own changes under any circumstances. Organisations with single developers must establish external review arrangements through peer organisations, contractors, or community reviewers for changes above standard risk.

Review checklists

Reviewers verify different aspects depending on the change category. These checklists supplement automated checks rather than replacing them. Items marked with severity indicate blocking issues that prevent approval.

Feature changes

Check                             Severity       Verification
Requirements traced               Blocking       Change implements documented requirement with link to specification
Design consistent                 Blocking       Implementation follows established patterns in the codebase
Tests cover acceptance criteria   Blocking       Tests verify each acceptance criterion explicitly
Edge cases handled                Blocking       Null values, empty collections, boundary conditions addressed
Error messages actionable         Non-blocking   Users can understand and respond to error conditions
Performance acceptable            Blocking       No obvious N+1 queries, unbounded loops, or memory accumulation
Documentation updated             Non-blocking   README, API docs, and comments reflect changes
Backwards compatible              Blocking       Existing integrations continue functioning
Feature flag wrapped              Non-blocking   New features toggleable for gradual rollout where applicable
Logging adequate                  Non-blocking   Key operations emit logs at appropriate levels

Bug fixes

Check                         Severity       Verification
Root cause identified         Blocking       Fix addresses underlying cause, not symptoms
Regression test added         Blocking       Test fails without fix, passes with fix
Related code examined         Non-blocking   Similar patterns elsewhere checked for same bug
Original issue reproducible   Blocking       Bug confirmed before fix applied
Fix minimal                   Non-blocking   Change limited to necessary corrections
Changelog entry added         Non-blocking   User-facing bugs documented in release notes

Security changes

Check                        Severity   Verification
Threat model updated         Blocking   Security changes reflect updated threat analysis
No secrets in code           Blocking   Credentials, keys, tokens absent from committed files
Input validation complete    Blocking   All external input validated before use
Output encoding applied      Blocking   Data rendered in context-appropriate encoding
Least privilege maintained   Blocking   Permissions minimised for operations
Audit logging present        Blocking   Security-relevant operations recorded
Dependencies scanned         Blocking   No known vulnerabilities in added dependencies
Cryptography reviewed        Blocking   Cryptographic changes reviewed by qualified personnel

For detailed security verification criteria, see Secure Coding Standards.

Configuration changes

Check                    Severity       Verification
Environment parity       Blocking       Change valid across all deployment environments
Secrets externalised     Blocking       Sensitive values reference external secret store
Defaults sensible        Blocking       Missing configuration fails safe or provides reasonable default
Validation present       Non-blocking   Invalid configuration detected at startup
Documentation complete   Non-blocking   New parameters documented with type, range, and purpose
Rollback tested          Non-blocking   Previous configuration can be restored

Database changes

Check                   Severity   Verification
Migration reversible    Blocking   Down migration restores previous state
Migration tested        Blocking   Migration verified against production-like data volume
Indices appropriate     Blocking   Queries have supporting indices; no unused indices added
Constraints preserved   Blocking   Referential integrity maintained
Data migration safe     Blocking   Existing data transforms correctly
Performance validated   Blocking   Migration completes within maintenance window
Backup verified         Blocking   Pre-migration backup confirmed

Approval requirements

Approval indicates a reviewer endorses the change for merge. Different approval types carry different weight and meaning.

Approval types

Approve indicates the reviewer examined the change, found it acceptable, and endorses merging. This approval counts toward the required approval threshold.

Approve with comments indicates endorsement contingent on addressing non-blocking feedback. The author should address comments but may merge without further review cycle. This approval counts toward the threshold.

Request changes indicates blocking issues that prevent approval. The change cannot merge until the reviewer converts this to an approval. Request changes creates a blocking state that requires the original reviewer to re-review.

Comment indicates feedback without approval or rejection. Comments alone do not advance or block the review. Reviewers use this for questions, suggestions, or observations that do not require action.

Approval thresholds

Change characteristic             Required approvals   Additional requirements
Under 50 lines, single file       1                    None
50-200 lines, single component    1                    None
200-500 lines, single component   1                    Component owner approval
Over 500 lines                    2                    Consider splitting change
Multiple components               2                    One approval per affected component
API contract changes              2                    Consumer team consultation
Database schema changes           2                    DBA or data team review
Security-sensitive changes       2                    Security-qualified reviewer
Infrastructure changes            2                    Operations team member
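
The numeric side of the table reduces to one rule: any characteristic beyond a small single-component change raises the requirement to two approvals. A sketch with illustrative flags; the table's additional requirements about who must approve still apply on top of the count:

```python
def required_approvals(lines: int, components: int,
                       touches_api: bool = False,
                       touches_schema: bool = False,
                       security_sensitive: bool = False,
                       touches_infrastructure: bool = False) -> int:
    """Minimum approval count per the threshold table (illustrative signature)."""
    if (lines > 500 or components > 1 or touches_api or touches_schema
            or security_sensitive or touches_infrastructure):
        return 2
    return 1
```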

Stale approvals

Approvals become stale when subsequent commits modify the approved code substantively. The following changes invalidate existing approvals:

  • Modifications to files the approver reviewed
  • Changes exceeding 20 lines after approval
  • Force pushes that rewrite history
  • Merge conflict resolutions affecting approved code
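
The invalidation rules can be sketched as a single predicate; the parameter names are illustrative and assume the review platform exposes the relevant facts:

```python
def approval_is_stale(files_modified_after_approval: set[str],
                      files_reviewed: set[str],
                      lines_changed_after_approval: int,
                      history_rewritten: bool,
                      conflict_resolution_touched_reviewed_code: bool) -> bool:
    """Return True when any invalidation rule applies to an existing approval."""
    return (bool(files_modified_after_approval & files_reviewed)  # reviewed files changed
            or lines_changed_after_approval > 20                  # over the 20-line allowance
            or history_rewritten                                  # force push
            or conflict_resolution_touched_reviewed_code)         # conflict fix in reviewed code
```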

Approvers must re-review after invalidation. Administrative overrides require documented justification and security team notification.

Turnaround expectations

Review turnaround measures from submission (automated checks passing) to first substantive reviewer response. Response means actionable feedback or approval, not acknowledgement.

Priority   First response    Review completion   Escalation trigger
Critical   2 hours           4 hours             3 hours without response
High       4 hours           1 business day      6 hours without response
Standard   1 business day    2 business days     2 business days without response
Low        2 business days   5 business days     5 business days without response

Priority assignment follows these criteria:

Critical priority applies to production incidents, security vulnerabilities, and changes blocking release deployments.

High priority applies to bug fixes affecting users, time-sensitive features, and dependency updates addressing known vulnerabilities.

Standard priority applies to feature development, refactoring, documentation, and routine maintenance.

Low priority applies to speculative improvements, experiments, and changes without immediate value.

Authors set initial priority. Reviewers may adjust priority based on their assessment. Disagreements escalate to the technical lead.

Timezone considerations

Distributed teams calculate turnaround in the reviewer’s working hours. A submission at 17:00 in the author’s timezone to a reviewer whose working day starts in 12 hours begins the clock at the reviewer’s 09:00.
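
The clock calculation can be sketched with Python's zoneinfo, assuming a 09:00-17:00 working day and ignoring weekends and holidays for brevity:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def review_clock_start(submitted: datetime, reviewer_tz: str,
                       workday_start: time = time(9, 0),
                       workday_end: time = time(17, 0)) -> datetime:
    """Start the turnaround clock at the reviewer's next working-hours moment."""
    local = submitted.astimezone(ZoneInfo(reviewer_tz))
    if workday_start <= local.time() < workday_end:
        return local  # submitted during the reviewer's working hours
    start = local.replace(hour=workday_start.hour, minute=workday_start.minute,
                          second=0, microsecond=0)
    if local.time() >= workday_end:
        start += timedelta(days=1)  # after hours: roll to the next morning
    return start
```

A 22:00 UTC submission to a reviewer in Europe/London, for example, starts the clock at 09:00 the following morning.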

For critical and high priority reviews, authors should identify reviewers in overlapping or proximate timezones. If no qualified reviewer is available within the turnaround window, escalate to the technical lead for alternative arrangements.

Feedback standards

Review feedback must be constructive, specific, and actionable. Reviewers critique code, not authors. Feedback that fails these standards should be revised before posting.

Constructive feedback

Feedback is constructive when it:

  • Explains the concern rather than just identifying it
  • Provides alternatives when requesting changes
  • Acknowledges good work alongside improvement areas
  • Distinguishes blocking issues from suggestions
  • Respects the author’s approach when multiple valid approaches exist

Unconstructive feedback example: “This is wrong.”

Constructive equivalent: “This query executes inside the loop, causing N+1 database calls. Consider fetching all records before the loop with a single query, then looking up by ID within the loop.”

Specific feedback

Feedback is specific when it:

  • References exact lines or code blocks
  • Uses concrete examples rather than abstract principles
  • Identifies the actual problem rather than symptoms
  • Quantifies impact where measurable

Unspecific feedback example: “This seems slow.”

Specific equivalent: “This method performs three sequential HTTP calls totalling 600ms typical latency. Parallel execution with Promise.all() would reduce this to the slowest single call, approximately 250ms.”

Actionable feedback

Feedback is actionable when it:

  • Describes what change would address the concern
  • Indicates whether the concern blocks approval
  • Provides enough information to implement without further clarification
  • Distinguishes requirements from preferences

Unactionable feedback example: “Consider the error handling.”

Actionable equivalent: “The catch block swallows exceptions silently. Add logging at ERROR level with the exception details, and re-throw or return an error response so callers know the operation failed. This blocks approval.”

Feedback prefixes

Use these prefixes to clarify feedback intent:

Prefix        Meaning                     Author response
blocking:     Issue prevents approval     Must address before merge
suggestion:   Improvement opportunity     Address or explain why not
question:     Clarification needed        Reply with explanation
nit:          Minor style or preference   Address if convenient
praise:       Positive observation        No action required
future:       Outside current scope       Consider for later work

Example: “blocking: This SQL query concatenates user input directly. Use parameterised queries to prevent SQL injection. See Secure Coding Standards for the required pattern.”
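
For illustration, here is what that blocking comment asks for, sketched with Python's stdlib sqlite3 driver (placeholder syntax varies by driver; the table and input are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # hostile input

# Flawed: concatenation lets the input rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Fixed: the placeholder keeps the input as data, never as SQL.
row = conn.execute("SELECT role FROM users WHERE name = ?",
                   (user_input,)).fetchone()
print(row)  # None -- the injection string matches no user
```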

Escalation procedures

Disagreements during review require structured resolution. Most disagreements resolve through discussion between author and reviewer. Persistent disagreements escalate through defined channels.

Disagreement categories

Technical disagreements concern implementation approach, design patterns, performance tradeoffs, or architectural decisions. These escalate to the technical lead or architect.

Standards disagreements concern interpretation of coding standards, style guides, or documented practices. These escalate to the standards owner or documentation maintainer.

Priority disagreements concern review urgency, turnaround expectations, or resource allocation. These escalate to the team lead or delivery manager.

Conduct disagreements concern feedback tone, professionalism, or interpersonal issues. These escalate to the team lead or HR as appropriate.

Escalation process

+------------------+     +------------------+     +------------------+
|                  |     |                  |     |                  |
|   Author and     |     |  Technical lead  |     |   Architecture   |
|   reviewer       +---->+  mediation       +---->+   review board   |
|   discussion     |     |                  |     |                  |
+------------------+     +------------------+     +------------------+
                 2 exchanges              1 business day
                 without resolution       without resolution

Authors and reviewers should exchange at least two rounds of comments before escalating. Escalation requests include:

  1. Summary of the disagreement
  2. Both positions with supporting rationale
  3. Impact of each approach
  4. Attempted resolutions

The technical lead mediates by reviewing both positions, potentially consulting additional experts, and issuing a binding decision within one business day. If the technical lead cannot resolve the disagreement, the architecture review board makes a final determination.

Override authority

In exceptional circumstances, changes may merge without full approval. Override authority exists for:

  • Production incidents requiring immediate fixes
  • Security vulnerabilities requiring urgent patching
  • Deployment blockers during release windows

Overrides require:

  1. Documentation of the justification
  2. Approval from on-call lead or incident commander
  3. Commitment to post-merge review within one business day
  4. Notification to the security team for security-related changes

Override usage is tracked and reviewed monthly. Patterns of override usage indicate process issues requiring attention.

Pair programming alternative

Pair programming substitutes for asynchronous code review when two developers work together in real-time on the same change. The navigator provides continuous review while the driver writes code. This approach suits complex changes, knowledge transfer scenarios, and time-sensitive work.

When to use pair programming

Pair programming is appropriate when:

  • Change requires knowledge held by different team members
  • Author is unfamiliar with the affected codebase area
  • Change is exploratory or design is uncertain
  • Turnaround time for async review is unacceptable
  • Multiple review cycles are anticipated

Pair programming is not appropriate when:

  • Change is straightforward and low-risk
  • Team members are in incompatible timezones
  • Independent review perspective is specifically needed
  • Audit requirements mandate separate author and reviewer

Pair programming review requirements

Changes developed through pair programming still require merge request creation with:

  • Both participants listed (author and co-author)
  • Note indicating pair programming session
  • Session duration recorded
  • Standard automated checks passing

One participant may approve the merge request. The other participant counts as the author. For elevated risk changes, an additional reviewer outside the pair is required.

Review tooling

Code review requires tooling that supports the workflow stages, approval tracking, and feedback mechanisms defined in this reference.

Required capabilities

Review tooling must provide:

Capability             Purpose
Diff visualisation     Display changes in context with surrounding code
Inline comments        Attach feedback to specific lines
Threaded discussions   Track conversation on individual topics
Approval workflow      Record approvals and blocks with attribution
Status checks          Display automated check results
Merge controls         Enforce approval requirements before merge
Notification           Alert reviewers to assignments and updates
History                Retain review records for audit purposes

Tooling options

Tool        Deployment            Strengths                                   Considerations
GitLab      Self-hosted or SaaS   Integrated CI/CD, comprehensive features    Resource requirements for self-hosted
Gitea       Self-hosted           Lightweight, simple, low overhead           Limited advanced features
GitHub      SaaS or Enterprise    Ecosystem integration, familiar interface   SaaS version has data residency implications
Gerrit      Self-hosted           Fine-grained controls, scales well          Steeper learning curve
Bitbucket   SaaS or Data Center   Atlassian integration                       Licensing costs at scale

For self-hosted options prioritising data sovereignty, Gitea provides the simplest deployment while GitLab offers the most comprehensive feature set. See the Version Control reference for repository platform selection criteria.

Review metrics

Metrics inform process improvement without becoming targets that distort behaviour. Track these metrics at team level, not individual level, to avoid gaming.

Health indicators

Metric                          Healthy range                     Investigation trigger
Median review turnaround        Under 1 business day              Over 2 business days
Review cycles per change        1-2 cycles                        Over 3 cycles average
Approval without comments       30-50% of reviews                 Over 80% (rubber-stamping)
Changes merged without review   0%                                Any occurrence outside override process
Reviewer concentration          No reviewer over 40% of reviews   Single reviewer over 60%
Stale review rate               Under 10%                         Over 25%

Calculating metrics

Review turnaround measures from first automated check pass to first substantive reviewer response. Calculate median, not mean, to reduce outlier influence.

Review cycles counts the number of times a change moves from author to reviewer and back. A cycle begins when the author marks the change ready for review and ends when the reviewer responds. Healthy changes complete in one or two cycles.

Approval without comments indicates reviews where the reviewer approved without leaving any comments. Some approvals without comments are normal for straightforward changes. Excessive rates suggest insufficient scrutiny.

Reviewer concentration measures distribution of review workload. Calculate as percentage of total reviews performed by each reviewer. Concentration above 40% indicates knowledge silos and single points of failure.

Stale review rate measures reviews that exceeded turnaround expectations. Calculate as percentage of reviews missing SLA targets.
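
These calculations amount to a few lines over a review export; the record fields below are illustrative of what a review platform might provide:

```python
from collections import Counter
from statistics import median

def review_metrics(reviews: list[dict]) -> dict:
    """Compute the health indicators from a list of review records.

    Each record is a dict with illustrative keys: 'turnaround_hours',
    'cycles', 'comment_count', 'reviewer', and 'missed_sla' (bool).
    """
    total = len(reviews)
    per_reviewer = Counter(r["reviewer"] for r in reviews)
    return {
        # Median, not mean, to reduce outlier influence.
        "median_turnaround_hours": median(r["turnaround_hours"] for r in reviews),
        "mean_cycles": sum(r["cycles"] for r in reviews) / total,
        "approval_without_comments_pct": 100 * sum(
            1 for r in reviews if r["comment_count"] == 0) / total,
        "top_reviewer_share_pct": 100 * max(per_reviewer.values()) / total,
        "stale_review_pct": 100 * sum(r["missed_sla"] for r in reviews) / total,
    }
```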

Metric review cadence

Review metrics monthly at team retrospectives. Identify trends over three-month periods rather than reacting to individual data points. Investigate triggers promptly when metrics move outside healthy ranges.

See also