
Enterprise Testing Guide

How to evaluate CKB for large codebases, monorepos, and enterprise teams.

CKB provides 90+ code intelligence tools. This guide gives you concrete testing scenarios to determine if CKB delivers value for your specific pain points. Each scenario includes the problem it solves, which tools to use, step-by-step instructions, and what to expect.


Quick Start (5 Minutes)

# 1. Install
npm install -g @tastehub/ckb

# 2. Initialize in your repo
cd /path/to/your/repo
ckb init

# 3. Generate index (auto-detects language)
ckb index

# 4. Connect your AI tool
ckb setup

# 5. Verify everything works
ckb status
ckb doctor

C++ monorepos: C++ is in the Enhanced Tier with full SCIP index support. For custom/proprietary languages, all git-based features (ownership, hotspots, coupling, churn) work automatically.


Scenario 1: Tame Large Pull Requests

Problem: PRs with 600+ files burn tokens and overwhelm reviewers. AI tools hit context limits before they even start.

Tools: summarizePr, summarizeDiff, analyzeImpact, getAffectedTests

Steps:

  1. Pick a recent large PR (ideally 100+ files)
  2. Run summarizePr to get an intelligent summary without loading the full diff
  3. Use analyzeImpact on the changed modules to see downstream effects
  4. Run getAffectedTests to identify which tests actually need to run
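
The "summarize instead of reading" idea can be sanity-checked without CKB at all: git's own diff summaries are the minimal-tier version of what summarizePr and summarizeDiff automate. A throwaway sketch (the repo, directories, and files below are invented for illustration):

```shell
# Sketch only: summarize a change without loading the full diff.
# Repo layout and commits are illustration data, not a real project.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
mkdir -p src/api src/ui
echo "v1" > src/api/handler.txt
echo "v1" > src/ui/view.txt
git add -A && git commit -qm "base"
echo "v2" > src/api/handler.txt          # modify one file
echo "new" > src/api/routes.txt          # add another
git add -A && git commit -qm "feature"
# Which areas moved, without reading a single hunk:
git diff --dirstat=files HEAD~1 HEAD     # per-directory share of the change
git diff --shortstat HEAD~1 HEAD         # one-line totals
```

summarizePr layers semantic grouping on top of this raw signal; the point of the sketch is only that a 600-file diff has a small, structured summary hiding inside it.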

Expected result: A concise, structured PR overview instead of scrolling through hundreds of files. Reviewers see what matters.


Scenario 2: Blast Radius Analysis

Problem: Changing a core header, shared utility, or base class in a monorepo can silently break dozens of modules. You only find out after the merge.

Tools: analyzeImpact, getCallGraph, traceUsage, auditRisk

Steps:

  1. Pick a central, frequently-included file (e.g. a shared header or core utility)
  2. Run analyzeImpact to see the full propagation chain
  3. Use getCallGraph to trace the dependency depth
  4. Run auditRisk for a multi-factor risk assessment (8 weighted factors)
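
As a cross-check on the numbers analyzeImpact reports, the zeroth-order blast radius of a shared header is simply "who includes it", which plain grep can approximate. The layout below is invented:

```shell
# Sketch: first-order blast radius of a shared header via grep.
# Directories, files, and contents are invented for illustration.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p core net ui
echo 'typedef int obj_id;' > core/types.h
printf '#include "core/types.h"\n' > net/socket.c
printf '#include "core/types.h"\n' > ui/panel.c
printf '#include "net/socket.h"\n' > ui/main.c   # does not include the header
# Direct dependents of core/types.h:
grep -rl 'core/types.h' --include='*.c' .
```

grep only finds first-order includers; the scenario above is about the transitive chain (ui/main.c depends on net/socket.c, which depends on the header), which is exactly what a call/dependency graph adds over text search.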

Expected result: Precise visibility into how far a change propagates. Risk scores that confirm your gut feeling with data.


Scenario 3: Hidden Dependencies & Coupling

Problem: In a large, grown codebase, modules develop invisible coupling: files that always change together, though no one remembers why.

Tools: analyzeCoupling, findCycles, getArchitecture

Steps:

  1. Run analyzeCoupling on your core modules to see co-change patterns from git history
  2. Use findCycles to detect circular dependencies at module or directory level
  3. Run getArchitecture for a module dependency map
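
The co-change signal analyzeCoupling draws on lives entirely in git history, so you can see the raw version of it yourself. A self-contained sketch (repo and file names invented):

```shell
# Sketch: co-change pairs straight from git history (illustration repo).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
for i in 1 2 3; do                    # two files that always ship together
  echo "$i" > parser.txt
  echo "$i" > lexer.txt
  git add -A && git commit -qm "sync change $i"
done
echo x > readme.txt && git add -A && git commit -qm "docs"
# Emit every file pair that appears in the same commit, then count
git log --format='@%h' --name-only |
awk '/^@/ { n = 0; next }            # commit boundary: reset file list
     NF == 0 { next }                # skip blank separator lines
     { for (i = 1; i <= n; i++) {
         a = f[i]; b = $0
         print (a < b ? a " " b : b " " a)
       }
       f[++n] = $0 }' |
sort | uniq -c | sort -rn > coupling.txt
cat coupling.txt
```

The highest-count pairs are your hidden coupling candidates; a dedicated tool adds normalization (pair count relative to each file's total churn) so that frequently-changed files don't dominate.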

Expected result: A data-driven view of your actual architecture vs. your intended architecture. Hidden coupling made visible.


Scenario 4: Hotspot & Tech Debt Tracking

Problem: Everyone knows "that one file" that causes problems. But how many are there really? And which ones are getting worse?

Tools: getHotspots, auditRisk, suggestRefactorings

Steps:

  1. Run getHotspots on your repository to see churn data with trend analysis
  2. Use auditRisk on the top hotspots for a deeper multi-factor assessment
  3. Run suggestRefactorings to get actionable recommendations sorted by severity and effort
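
The crudest form of the hotspot signal is just change frequency per file, which one git pipeline produces. A throwaway illustration (file names invented):

```shell
# Sketch: raw churn as a hotspot signal (illustration repo).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo base > god_object.txt
echo base > stable.txt
git add -A && git commit -qm "init"
for i in 1 2 3 4; do
  echo "$i" >> god_object.txt
  git commit -aqm "tweak $i"
done
# Change count per file: the crudest possible hotspot ranking
git log --format='' --name-only | awk 'NF' | sort | uniq -c | sort -rn > hotspots.txt
cat hotspots.txt
```

getHotspots goes beyond this baseline with trend analysis (is the churn accelerating?) and, per the scenario above, auditRisk folds churn into a multi-factor score rather than treating it as the whole story.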

Expected result: A prioritized list of problem areas with trend data. Not gut feelings, but numbers.


Scenario 5: Ownership & Review Intelligence

Problem: CODEOWNERS says Team A owns a module, but Team B has been doing all the commits for months. Reviews go to the wrong people.

Tools: getOwnership, getOwnershipDrift, getModuleResponsibilities, getReviewers

Steps:

  1. Run getOwnership on a few key directories
  2. Use getOwnershipDrift to compare CODEOWNERS vs. actual commit activity
  3. Run getModuleResponsibilities to see what each module is supposed to do
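
The "actual commit activity" side of the drift comparison is pure git history. A self-contained sketch (authors and file names invented):

```shell
# Sketch: actual ownership from commit history (names are invented).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email alice@team-a.example && git config user.name "Alice"
echo v1 > module.txt && git add -A && git commit -qm "initial"
git config user.email bob@team-b.example && git config user.name "Bob"
for i in 1 2 3; do
  echo "$i" >> module.txt
  git commit -aqm "change $i"
done
# Who actually touches this path (compare to CODEOWNERS by hand):
git log --format='%an' -- module.txt | sort | uniq -c | sort -rn > owners.txt
cat owners.txt
```

Here CODEOWNERS might still list Alice, while the history says Bob does the work; getOwnershipDrift automates exactly this comparison across the whole tree.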

Expected result: Accurate, data-driven ownership that reflects reality. The right reviewers for every PR.


Scenario 6: Dead Code & Cleanup

Problem: Years of development leave behind dead code, unused functions, abandoned features. Nobody dares to delete anything.

Tools: findDeadCode, findDeadCodeCandidates, justifySymbol, getObservedUsage

Steps:

  1. Run findDeadCode for static dead code detection across the codebase
  2. Use justifySymbol on suspicious symbols to check if they have any remaining purpose
  3. If OpenTelemetry is available: use getObservedUsage to cross-reference with runtime data
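
At its most naive, a dead-code candidate is a symbol that is defined but never referenced anywhere else, which a grep pass can approximate. The files and function names below are invented; this is a baseline sketch, not how findDeadCode works internally:

```shell
# Sketch: crude dead-code candidates = defined but never referenced.
tmp=$(mktemp -d) && cd "$tmp"
mkdir src
cat > src/util.sh <<'EOF'
used_helper() { echo used; }
orphan_helper() { echo orphan; }
EOF
cat > src/main.sh <<'EOF'
. src/util.sh
used_helper
EOF
# For every function defined under src, count references outside its definition
for fn in $(grep -ho '^[a-z_]*()' src/*.sh | tr -d '()'); do
  refs=$(grep -rw "$fn" src | grep -cv '()')
  if [ "$refs" -eq 0 ]; then echo "candidate: $fn"; fi
done > candidates.txt
cat candidates.txt
```

Text search produces false positives (reflection, dynamic dispatch, external callers), which is why the scenario pairs static detection with justifySymbol and, where available, runtime data from getObservedUsage before anything gets deleted.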

Expected result: Confidence-scored dead code candidates. Safe cleanup based on evidence, not hope.


Scenario 7: Trace Critical Paths

Problem: In safety-critical software, you need to know exactly how data flows through the system. How does a sensor input reach the UI? Where does a command pass through authorization?

Tools: traceUsage, explainPath, getCallGraph, listEntrypoints

Steps:

  1. Use listEntrypoints to find all ingress points to a critical module
  2. Run traceUsage to follow a symbol from definition to all consumers
  3. Use explainPath to understand the call chain between two specific points
  4. Run getCallGraph for a complete dependency visualization

Expected result: Complete traceability from entry to exit. Essential for audit requirements and safety reviews.


Scenario 8: Onboarding & Knowledge Transfer

Problem: New developers take weeks to understand a large codebase. Documentation is outdated. Tribal knowledge is everywhere.

Tools: explore, understand, explainSymbol, explainFile, explainOrigin, getDecisions

Steps:

  1. Use explore on a module to get a structured overview
  2. Run understand on key symbols for deep dives with context
  3. Use explainOrigin to understand why code exists and how it evolved
  4. Query getDecisions to find Architectural Decision Records
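
The "why does this code exist" question in step 3 is, at minimum tier, a git archaeology query: read the file's history oldest-first. A throwaway sketch (repo, file, and commit messages invented):

```shell
# Sketch: a file's origin story, oldest commit first (illustration repo).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "retry_count = 3" > client.conf
git add -A && git commit -qm "Add retries: vendor API drops ~1% of calls"
echo "retry_count = 5" > client.conf
git commit -aqm "Bump retries after regional outage"
# Oldest-first history of the file: why it exists and how it evolved
git log --reverse --format='%h %s' -- client.conf > origin.txt
cat origin.txt
```

explainOrigin layers synthesis on top of this raw history so a newcomer gets the narrative without reading every commit message by hand.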

Expected result: AI-powered onboarding that gives new team members instant access to the full code knowledge base.


Scenario 9: API Change Management

Problem: Internal APIs change and downstream integrations break. Nobody has a clear view of what changed between versions.

Tools: compareAPI, listContracts, analyzeContractImpact, getContractDependencies

Steps:

  1. Run compareAPI between two versions to see all API changes
  2. Use listContracts to see all defined API contracts
  3. Run analyzeContractImpact to check what breaks when a contract changes
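
The baseline version of "what changed between versions" is diffing the public surface between two tags. A self-contained sketch with an invented API:

```shell
# Sketch: diff the public API surface between two releases (invented API).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
cat > api.h <<'EOF'
int connect_host(const char *host);
int send_msg(const char *msg);
EOF
git add -A && git commit -qm "v1.0 API" && git tag v1.0
cat > api.h <<'EOF'
int connect_host(const char *host, int timeout_ms);
int send_msg(const char *msg);
EOF
git commit -aqm "connect_host: add timeout" && git tag v1.1
# Every signature change between the two tags:
git diff v1.0 v1.1 -- api.h > api_changes.diff
cat api_changes.diff
```

A text diff shows the change but not its consequences; that is the gap analyzeContractImpact is meant to close by connecting the changed signature to its consumers.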

Expected result: Clear API change visibility across the monorepo. No more surprise breakages from internal API changes.


Scenario 10: CI/CD Integration

Problem: Running the full test suite on every PR takes too long. Static analysis generates noise, not signal.

Tools: getAffectedTests, getStatus, reindex

Steps:

  1. Set up the CKB daemon for automatic index refresh: ckb daemon start
  2. Integrate getAffectedTests into your CI pipeline to run only relevant tests
  3. Use the webhook API to trigger re-indexing from CI: curl -X POST http://localhost:9120/api/v1/refresh
  4. Add quality gates with configurable risk thresholds
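
For contrast, here is the naming-convention baseline for affected-test selection that dependency-aware selection improves on. Everything below is an invented illustration, not CKB's mechanism:

```shell
# Sketch: naive affected-test selection by naming convention.
# Repo layout is invented; real selection should follow the code graph.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
mkdir -p src tests
echo a > src/auth.txt    && echo t > tests/test_auth.txt
echo p > src/billing.txt && echo t > tests/test_billing.txt
git add -A && git commit -qm "base"
echo a2 > src/auth.txt && git commit -aqm "touch auth only"
# Changed files -> their conventionally named tests
git diff --name-only HEAD~1 HEAD | sed 's|^src/|tests/test_|' > affected.txt
cat affected.txt
```

The convention-based mapping misses tests that exercise a changed file indirectly (and selects tests that don't), which is the failure mode a graph-backed getAffectedTests exists to fix.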

Expected result: Faster CI pipelines that run targeted tests. Quality gates that catch real issues, not noise.


Language Support

Tier                  | Languages                                                        | Capabilities
Enhanced (SCIP Index) | Go, TypeScript, Python, Rust, Java, Kotlin, C++, Dart, Ruby, C#  | Full symbol resolution, cross-references, call graphs, impact analysis
Basic (LSP)           | Any language with a language server                              | Navigation and references
Minimal (Git)         | Every file in the repo                                           | Ownership, hotspots, coupling, churn, dead code candidates

Custom or proprietary languages automatically get Minimal Tier coverage. All git-based analysis tools work regardless of language.


CKB Complements Your Existing Tools

CKB is not a linter or static code analyzer. It's a different category:

Static Analyzers (SonarQube, etc.)      | CKB
Find code smells, bugs, vulnerabilities | Understand code structure, ownership, and relationships
Rule-based analysis                     | Query-based intelligence
Reports and dashboards                  | Answers to specific questions
Runs on code                            | Runs on code + git history + runtime data

Both tools work together. SonarQube tells you what's wrong. CKB tells you what breaks, who owns it, and how to fix it safely.


Next Steps

Need help with evaluation or enterprise licensing? See the Enterprise page or reach out directly.