# DORA Metrics
CodeStax tracks the four DORA (DevOps Research and Assessment) metrics to help your team measure and improve engineering performance. Access them from Dashboard → Reviews → DORA.
## The Four Metrics
### 1. Review Frequency
How often your team submits code for security review.
| Rating | Threshold | What It Means |
|---|---|---|
| Elite | Multiple per day | Continuous delivery cadence |
| High | Weekly to daily | Regular review habit |
| Medium | Monthly to weekly | Room for improvement |
| Low | Less than monthly | Reviews are infrequent |
**Why it matters:** Teams that review frequently catch issues earlier and ship smaller, safer changes.
**How it’s calculated:** Count of completed PR reviews and scans per time period, normalized per active developer.
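As an illustration, the normalization and the rating thresholds from the table above can be sketched in Python. The review log and its shape are hypothetical, not CodeStax's actual data model:

```python
from datetime import date

# Hypothetical review log: (date, developer) pairs for a 7-day window.
reviews = [
    (date(2024, 5, 1), "alice"),
    (date(2024, 5, 1), "bob"),
    (date(2024, 5, 2), "alice"),
    (date(2024, 5, 3), "alice"),
]

days_in_period = 7
active_devs = len({dev for _, dev in reviews})

# Completed reviews per active developer per day over the period
per_dev_per_day = len(reviews) / active_devs / days_in_period

def frequency_rating(per_dev_per_day: float) -> str:
    """Map a normalized frequency to the thresholds in the table above."""
    if per_dev_per_day > 1:        # multiple per day
        return "Elite"
    if per_dev_per_day >= 1 / 7:   # weekly to daily
        return "High"
    if per_dev_per_day >= 1 / 30:  # monthly to weekly
        return "Medium"
    return "Low"

print(frequency_rating(per_dev_per_day))  # 4 reviews / 2 devs / 7 days -> "High"
```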
### 2. Lead Time for Changes
Time from code commit to completed security review.
| Rating | Threshold | What It Means |
|---|---|---|
| Elite | Less than 1 hour | Fast feedback loop |
| High | 1 hour to 1 day | Same-day reviews |
| Medium | 1 day to 1 week | Delays in review pipeline |
| Low | More than 1 week | Significant bottleneck |
**Why it matters:** Long lead times mean developers wait days for security feedback, leading to context-switching and delayed fixes.
**How it’s calculated:** Median time between PR creation and review completion across all reviewed PRs.
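A sketch of the median lead-time calculation, using hypothetical PR timestamps (the data shape is an assumption for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical (pr_created, review_completed) timestamp pairs
prs = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 45)),   # 45 minutes
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 2, 10, 0)),  # 1 day
    (datetime(2024, 5, 2, 8, 0), datetime(2024, 5, 2, 14, 0)),   # 6 hours
]

lead_times_hours = [
    (done - created).total_seconds() / 3600 for created, done in prs
]
median_lead_time = median(lead_times_hours)

def lead_time_rating(hours: float) -> str:
    """Map a lead time to the thresholds in the table above."""
    if hours < 1:
        return "Elite"
    if hours <= 24:
        return "High"
    if hours <= 24 * 7:
        return "Medium"
    return "Low"

print(median_lead_time, lead_time_rating(median_lead_time))  # 6.0 High
```

Median is used rather than mean so a single long-running PR does not dominate the metric.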
### 3. Change Failure Rate
Percentage of changes that introduce new security issues.
| Rating | Threshold | What It Means |
|---|---|---|
| Elite | 0-5% | Almost all changes are clean |
| High | 5-10% | Occasional issues caught |
| Medium | 10-25% | Frequent security regressions |
| Low | Above 25% | Systemic quality issues |
**Why it matters:** A high failure rate indicates gaps in secure coding practices, missing pre-commit checks, or inadequate developer training.
**How it’s calculated:** (PR reviews with risk score > 50) / (total PR reviews completed) over the selected period.
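The formula above can be sketched directly (the risk scores are made up for illustration):

```python
# Hypothetical risk scores for completed PR reviews in the selected period
risk_scores = [12, 3, 67, 40, 88, 5, 19, 52]

# A review "fails" when its risk score exceeds 50, per the formula above
failures = sum(1 for score in risk_scores if score > 50)
change_failure_rate = failures / len(risk_scores) * 100  # percent

def cfr_rating(pct: float) -> str:
    """Map a failure rate percentage to the thresholds in the table above."""
    if pct <= 5:
        return "Elite"
    if pct <= 10:
        return "High"
    if pct <= 25:
        return "Medium"
    return "Low"

print(change_failure_rate, cfr_rating(change_failure_rate))  # 37.5 Low
```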
### 4. Mean Time to Review (MTTR)
Average time to complete a security review once triggered.
| Rating | Threshold | What It Means |
|---|---|---|
| Elite | Less than 10 minutes | Automated and fast |
| High | 10-30 minutes | Efficient pipeline |
| Medium | 30 minutes to 2 hours | Possible queue congestion |
| Low | More than 2 hours | Scanner or infrastructure issues |
**Why it matters:** Slow reviews block merges and slow down the entire development pipeline.
**How it’s calculated:** Average duration from scan trigger to review completion across all reviews.
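A minimal sketch of the MTTR calculation, assuming a list of scan durations (the durations are hypothetical):

```python
# Hypothetical durations in minutes, from scan trigger to review completion
durations_min = [8, 12, 25, 5, 40]

mttr = sum(durations_min) / len(durations_min)

def mttr_rating(minutes: float) -> str:
    """Map an MTTR to the thresholds in the table above."""
    if minutes < 10:
        return "Elite"
    if minutes <= 30:
        return "High"
    if minutes <= 120:
        return "Medium"
    return "Low"

print(mttr, mttr_rating(mttr))  # 18.0 High
```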
## Dashboard Views
### Trend Charts
Each metric is displayed as a line chart over the selected time range (7 days, 30 days, 90 days). Trend direction arrows indicate whether you are improving or regressing.
### Team Comparison
Compare DORA metrics across repositories to identify which teams are performing well and which need support.
### Summary Cards
Four cards at the top of the page show:
- Current value for each metric
- Rating badge (Elite / High / Medium / Low)
- Trend direction vs. previous period
## Using DORA Metrics Effectively
- Set team goals: Use the rating thresholds as targets — aim for “High” initially, then “Elite”
- Identify bottlenecks: If Lead Time is high but MTTR is low, the bottleneck is before the scan (process, not tooling)
- Track over time: Week-over-week trends matter more than absolute numbers
- Combine with scan data: Pair Change Failure Rate with the types of issues found to target training
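The bottleneck heuristic from the list above can be sketched as a comparison of the two metrics. The cutoffs here are illustrative assumptions, not part of the product:

```python
def diagnose(lead_time_hours: float, mttr_minutes: float) -> str:
    """Illustrative heuristic: if reviews themselves are fast (low MTTR) but
    lead time is still long, the delay sits before the scan is triggered."""
    # Fraction of total lead time spent in the scan itself
    scan_share = (mttr_minutes / 60) / lead_time_hours
    if lead_time_hours > 24 and scan_share < 0.1:
        return "process bottleneck (before the scan)"
    if mttr_minutes > 120:
        return "tooling bottleneck (scanner or infrastructure)"
    return "no obvious bottleneck"

print(diagnose(48, 20))   # long lead time, fast scans -> process bottleneck
print(diagnose(2, 180))   # slow scans dominate -> tooling bottleneck
```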
## CLI Access
Retrieve DORA metrics programmatically:
```shell
codestax dora --repo my-org/my-repo --period 30d
```

See CLI Reference for full usage.
## API Access
```shell
curl -H "X-API-Key: $CODESTAX_API_KEY" \
  "https://codestax.co/api/v1/dora/metrics?repo_id=123&period=30d"
```

Note that the URL must be quoted, since an unquoted `&` would background the command in most shells. The response includes all four metrics with current values, ratings, and trend data.
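The exact response schema is not documented in this section. The sketch below parses a hypothetical payload whose field names are assumptions; only the overall shape (current values, ratings, trend data per metric) comes from the text above:

```python
import json

# Hypothetical response body -- field names are illustrative only
payload = json.loads("""
{
  "repo_id": 123,
  "period": "30d",
  "metrics": {
    "review_frequency":    {"value": 1.4,  "rating": "Elite", "trend": "up"},
    "lead_time_hours":     {"value": 6.2,  "rating": "High",  "trend": "down"},
    "change_failure_rate": {"value": 7.5,  "rating": "High",  "trend": "flat"},
    "mttr_minutes":        {"value": 18.0, "rating": "High",  "trend": "down"}
  }
}
""")

for name, metric in payload["metrics"].items():
    print(f"{name}: {metric['value']} ({metric['rating']}, trend {metric['trend']})")
```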