
PR Reviews — Detailed Guide

CodeStax reviews every pull request for security vulnerabilities, leaked secrets, dangerous patterns, and code quality issues across four detection layers: secrets, dangerous code patterns, vibe coding, and dead code.

Detection Layers

Secrets Detection

Catches credentials before they reach your main branch:

  • Cloud provider keys (AWS, GCP, Azure)
  • Source control tokens (GitHub, GitLab, Bitbucket)
  • Payment and messaging service keys
  • Private cryptographic keys
  • Database connection strings
  • API tokens and bearer credentials
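
To make the idea concrete, here is a minimal regex-based sketch in the spirit of this layer. The two patterns shown are simplified public token shapes, not CodeStax's actual rules; production scanners combine many more patterns with entropy checks to reduce false positives.

```python
import re

# Simplified sketches of two common secret shapes (illustrative only).
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a diff hunk."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"'
print(find_secrets(diff))  # [('aws-access-key-id', 'AKIAABCDEFGHIJKLMNOP')]
```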

Dangerous Code Patterns

Identifies code that introduces exploitable vulnerabilities:

  • Injection attacks (SQL, command, XSS)
  • Insecure deserialization
  • Disabled security controls
  • Weak cryptographic usage
  • Production misconfigurations
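
As an example of the first item, string-built SQL is flagged because attacker input becomes part of the query; a parameterized query is the standard fix. A self-contained sketch using Python's `sqlite3` (table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_id = "1 OR 1=1"  # attacker-controlled input

# Flagged pattern: input concatenated into SQL, so the query matches every row.
unsafe = conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()

# Fix: a parameterized query treats the input as a single opaque value.
safe = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()

print(len(unsafe), len(safe))  # the injected query returns all rows; the safe one returns none
```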

Vibe Coding Detection

Detects AI-generated code that was accepted without proper review:

  • Placeholder implementations and incomplete code
  • Copy-pasted patterns without adaptation
  • Generic error handling that masks issues
  • Code quality anti-patterns common in LLM output

Dead Code Detection

Finds code that serves no purpose:

  • Unreachable code paths
  • Unused definitions
  • Redundant branches and empty bodies
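
A tiny sketch of how unreachable-code detection can work, using Python's `ast` module to flag statements that follow a `return` in the same block (this illustrates the idea only; it is not CodeStax's analyzer):

```python
import ast

def unreachable_after_return(source: str) -> list[int]:
    """Line numbers of statements that directly follow a `return` in the same block."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for stmt, nxt in zip(body, body[1:]):
            if isinstance(stmt, ast.Return):
                flagged.append(nxt.lineno)
    return flagged

code = """
def f(x):
    return x
    print("never runs")
"""
print(unreachable_after_return(code))  # [4]
```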

Scoring

Every PR review produces a Risk Score (0-100) calculated from multiple weighted categories including security vulnerabilities, secrets, data handling, code quality, and architecture. The weighting is optimized to prioritize the most critical security risks.
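
The weighted-sum idea can be sketched as follows. The weights and category names here are illustrative placeholders; CodeStax's actual weighting is not published in this guide.

```python
# Illustrative weights only (sum to 1.0); not CodeStax's real weighting.
WEIGHTS = {
    "security": 0.35,
    "secrets": 0.25,
    "data_handling": 0.15,
    "code_quality": 0.15,
    "architecture": 0.10,
}

def risk_score(category_scores: dict[str, float]) -> int:
    """Combine per-category scores (each 0-100) into a single 0-100 risk score."""
    total = sum(WEIGHTS[c] * category_scores.get(c, 0.0) for c in WEIGHTS)
    return round(min(100.0, max(0.0, total)))

print(risk_score({"security": 80, "secrets": 100, "code_quality": 40}))  # 59
```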

Score Interpretation

| Score | Level | Action |
|--------|----------|--------------------------------------|
| 75-100 | Critical | Block merge — severe security issues |
| 50-74 | High | Review required — significant risk |
| 25-49 | Medium | Consider fixing before merge |
| 0-24 | Low | Clean or minimal risk |

Impact Analysis & Blast Radius

Every PR review generates an impact graph showing which files and functions are affected by the changes.

What It Shows

  • Changed files: Files directly modified in the PR
  • Callers: Functions that call the changed code (up to 4 hops)
  • Downstream: Code that depends on the changed functions
  • Entry points: HTTP routes and API endpoints that could be affected
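
The hop-limited caller traversal can be sketched as a breadth-first search over a call graph. The graph below is a toy example with invented function names:

```python
from collections import deque

# Toy call graph: callee -> set of direct callers (names are hypothetical).
CALLERS = {
    "save_user": {"handle_signup", "import_users"},
    "handle_signup": {"signup_route"},
    "import_users": {"admin_cli"},
    "signup_route": set(),
    "admin_cli": set(),
}

def impacted_callers(changed: str, max_hops: int = 4) -> set[str]:
    """All functions that reach `changed` within `max_hops` call edges."""
    seen, queue = set(), deque([(changed, 0)])
    while queue:
        func, hops = queue.popleft()
        if hops == max_hops:
            continue
        for caller in CALLERS.get(func, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append((caller, hops + 1))
    return seen

print(sorted(impacted_callers("save_user")))
```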

Blast Radius Visualization

The impact graph is rendered as an interactive radial visualization on the review detail page. It shows:

  • Direct impact (files changed in the PR)
  • Indirect impact (callers and downstream dependencies)
  • Entry points affected (API routes that reach the changed code)

Mermaid diagrams showing call flow and blast radius are also posted directly in your PR comment on GitHub/Bitbucket/GitLab.

Hotspots

Files with the most connections (callers + downstream) are highlighted as hotspots — these are the files most likely to cause cascading issues if the PR introduces a bug.
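
Hotspot ranking is just a sort by total connection count. A minimal sketch, with invented file names and counts:

```python
def hotspots(graph: dict[str, dict[str, int]], top: int = 3) -> list[str]:
    """Rank files by total connections (callers + downstream), highest first."""
    return sorted(
        graph,
        key=lambda f: graph[f]["callers"] + graph[f]["downstream"],
        reverse=True,
    )[:top]

impact = {
    "db/models.py": {"callers": 14, "downstream": 9},
    "api/routes.py": {"callers": 3, "downstream": 12},
    "utils/format.py": {"callers": 2, "downstream": 1},
}
print(hotspots(impact, top=2))  # ['db/models.py', 'api/routes.py']
```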

Vibe Coding Detection

CodeStax detects AI-generated code that was accepted without proper human review — commonly called “vibe coding.”

What It Catches

  • Placeholder implementations and incomplete code
  • Copy-pasted patterns without adaptation
  • Generic error handling that masks issues
  • Disabled security controls (CORS wildcards, auth bypasses)
  • Hallucinated imports from non-existent modules
  • Over-verbose identifiers and AI comment markers
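
To illustrate the "generic error handling that masks issues" item, here is a hypothetical before/after (function names and the config scenario are invented for this example):

```python
import json

def load_config_bad(raw: str) -> dict:
    # Anti-pattern: the blanket except hides *why* loading failed and
    # silently substitutes a default, masking real configuration bugs.
    try:
        return json.loads(raw)
    except Exception:
        return {}

def load_config_good(raw: str) -> dict:
    # Fix: catch only the expected error and surface the cause.
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid config JSON: {exc}") from exc

print(load_config_bad("not json"))  # {} -- the failure is invisible
```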

Vibe Coding Score (0-100)

Each PR receives a vibe coding probability score:

  • 0-25: Likely human-written
  • 26-50: Possibly AI-assisted
  • 51-75: Likely AI-generated
  • 76-100: Very high AI probability
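
The four bands above map directly to score ranges; a trivial sketch:

```python
def vibe_band(score: int) -> str:
    """Map a 0-100 vibe coding score to its interpretation band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "Likely human-written"
    if score <= 50:
        return "Possibly AI-assisted"
    if score <= 75:
        return "Likely AI-generated"
    return "Very high AI probability"

print(vibe_band(63))  # Likely AI-generated
```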

The score appears as a badge on the review detail page. Configure alert thresholds in Settings → Policies.

Why It Matters

AI-generated code often compiles and passes basic tests but contains subtle issues — disabled security checks, hardcoded credentials, unreachable error handlers, and copy-paste anti-patterns. Vibe coding detection catches these before they reach production.

Quality Gates

Quality gates automatically enforce your security standards on PRs.

Configuration Levels

Gates can be set at three levels (most specific wins):

  1. Organization — Default for all repos
  2. Repository — Override for a specific repo
  3. Branch — Override for protected branches (e.g., main, release/*)
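
"Most specific wins" resolves in the order branch > repository > organization. A sketch of that lookup, with invented repo names and a single illustrative condition (real gates are configured in Settings → Policies, not in code):

```python
# Hypothetical gate configurations per level.
ORG_GATE = {"max_risk_score": 75}
REPO_GATES = {"payments-api": {"max_risk_score": 50}}
BRANCH_GATES = {("payments-api", "main"): {"max_risk_score": 25}}

def effective_gate(repo: str, branch: str) -> dict:
    """Most specific configuration wins: branch > repository > organization."""
    return (
        BRANCH_GATES.get((repo, branch))
        or REPO_GATES.get(repo)
        or ORG_GATE
    )

print(effective_gate("payments-api", "main"))       # branch override
print(effective_gate("payments-api", "feature/x"))  # repo override
print(effective_gate("docs-site", "main"))          # org default
```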

Gate Conditions

  • Max Risk Score: Block PRs above a threshold (e.g., 50)
  • No Critical Findings: Require zero critical-severity issues
  • No Secrets: Require zero detected secrets
  • Max High Findings: Set a cap on high-severity issues
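
The four conditions combine as an all-must-pass check. A sketch of that evaluation; the field names here are illustrative, not the CodeStax API schema:

```python
def gate_passes(review: dict, gate: dict) -> bool:
    """Evaluate the gate conditions above against a review summary (illustrative fields)."""
    if review["risk_score"] > gate.get("max_risk_score", 100):
        return False
    if gate.get("no_critical") and review["critical_findings"] > 0:
        return False
    if gate.get("no_secrets") and review["secrets"] > 0:
        return False
    if review["high_findings"] > gate.get("max_high_findings", float("inf")):
        return False
    return True

gate = {"max_risk_score": 50, "no_secrets": True, "max_high_findings": 2}
review = {"risk_score": 42, "critical_findings": 0, "secrets": 1, "high_findings": 1}
print(gate_passes(review, gate))  # False -- a secret was detected
```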

Configure gates in Settings → Policies → PR Review Gates.

AI Chat

After a review completes, you can open an AI Chat session to discuss findings:

  • Ask “Why is this dangerous?” for any finding
  • Request alternative implementations
  • Multi-turn context — the AI remembers previous messages in the conversation
  • Code suggestions are formatted as diff blocks you can copy directly

Inline PR Comments

CodeStax posts findings directly on the PR in your SCM provider:

  • Each finding appears as a comment on the relevant line
  • Suggestion blocks contain ready-to-apply fixes:
```suggestion
# Use parameterized queries instead
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
```

  • Summary comment at the top includes the overall risk score and category breakdown

Issue Suppression

In-Code Suppression

Add a comment on any line to skip detection:

  • `// codestax-ignore` — skip all checks for this line
  • `// codestax-disable` — same as above
  • `// nosec` — skip security checks (compatible with other tools)

Supported comment formats by language:

  • `# codestax-ignore: <pattern-id>` (Python, Ruby, Shell)
  • `// codestax-ignore: <pattern-id>` (JavaScript, TypeScript, Java, Go, C)
  • `<!-- codestax-ignore: <pattern-id> -->` (HTML, XML)

Suppressed findings still appear in the review but are marked as "Suppressed" and do not count toward the risk score.

UI Suppression

On the review detail page, hover over any issue and click the eye-off icon to mark it as a false positive. The issue is:

  • Immediately removed from the visible list
  • Excluded from future API responses (unless `include_suppressed=true`)
  • Reason recorded for audit trail

Feedback

Use the thumbs up/down buttons on each issue to provide feedback:

  • Thumbs up: "This finding was helpful" — reinforces the detection
  • Thumbs down: "Not useful" — helps reduce similar findings over time

Feedback is aggregated in the Team Learnings dashboard to show which categories your team commonly accepts or rejects. See [Team Learning](/features/team-learning) for details.

SCM Provider Support

| Feature | GitHub | Bitbucket | GitLab |
|--------------------------|--------|-----------|---------|
| Webhook auto-trigger | Yes | Yes | Yes |
| Inline PR comments | Yes | Yes | Yes |
| Suggestion blocks | Yes | Yes | Planned |
| Status checks | Yes | Yes | Planned |
| Quality gate enforcement | Yes | Yes | Planned |
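
For example, a suppression comment on a line that would otherwise trigger a finding (the `weak-crypto` pattern id below is hypothetical; substitute the id reported in your review):

```python
import hashlib

data = b"example payload"
# Legacy, non-security checksum; the trailing comment suppresses the finding.
digest = hashlib.md5(data).hexdigest()  # codestax-ignore: weak-crypto
print(digest)
```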