CKB Prompt Cookbook

Real prompts for real problems. Copy, paste, and adapt these to your codebase.

New to CKB or MCP? This page shows you what to ask your AI assistant when CKB is connected. These prompts work with Claude Code, Claude Desktop, or any MCP-compatible tool.


Why This Matters

Without CKB, your AI assistant is guessing. It searches for text patterns, reads random files, and hopes for the best. With CKB, it has actual code intelligence—it knows what calls what, who owns what, and what's risky to change.

You don't need to be an expert to use these prompts. If you're new to a codebase (or new to coding), CKB helps you ask the right questions and get structured answers instead of walls of text.


Quick Reference

I want to... → Start with this prompt
Find where something is defined → "Search for symbols named X"
Understand how code flows → "Trace how X is reached from entrypoints"
Debug an error → "Trace how X is called and show recent changes nearby"
Know if a change is safe → "What's the impact of changing X?"
Review a PR intelligently → "Summarize the diff and highlight risks"
Learn a new module → "Explain the internal/X module"
Understand a file's purpose → "Explain what this file does and its role"
Find dead code → "Is X still used? Should I keep or remove it?"
Find who to ask → "Who owns internal/api?"
See what's volatile → "Show me hotspots in the codebase"
Generate a cleanup plan → "Audit this repo and produce a prioritized improvement report"
Analyze a PR in CI → "Summarize this PR with risk assessment and suggested reviewers"
Check CODEOWNERS accuracy → "Show ownership drift—where CODEOWNERS doesn't match reality"
Run long analysis in background → "Refresh the architecture model asynchronously"
Search across all repos → "Search for auth modules across the platform federation"
Check API change impact → "What breaks if I change the user.proto contract?"
Find dead code (telemetry) → "Find dead code candidates using runtime telemetry"
Check if code is called → "Is ProcessPayment actually called in production?"
Understand why code exists → "Explain the origin of this function—who wrote it, when, and why"
Find coupled files → "What files typically change together with this one?"
Export for LLM context → "Export the codebase structure in LLM-friendly format"
Find risky code → "Audit this codebase and show me high-risk areas"
Get quick wins → "Find high-impact, low-effort refactoring targets"
Find docs for a symbol → "What docs reference UserService?"
Check for stale docs → "Are there any stale symbol references in our docs?"
Check doc coverage → "What's our documentation coverage? What needs docs?"
Check daemon status → "Is the CKB daemon running? What tasks are scheduled?"
Find complex code → "Find functions with cyclomatic complexity above 15"
Check analysis tier → "What tier am I using? What features are available?"
Check index freshness → "Is my index up to date? How stale is it?"
Query remote server → "Search for auth across local AND remote servers"
Upload index to server → "How do I upload a SCIP index to the server from CI?"

Find the Symbol

When to use

  • You know a function/type name but not where it lives
  • You want to find all variations (Handler, handleRequest, etc.)
  • You're searching for something you saw in logs or errors

Prompts

Basic search:

Search for symbols named "authenticate"

Filtered search:

Find all functions containing "user" in the internal/api module

Type-specific:

Show me all interfaces in this codebase

What to expect

Found 3 symbols matching "authenticate":

1. UserService.Authenticate (method)
   Location: internal/auth/service.go:45
   Signature: func (s *UserService) Authenticate(ctx context.Context, creds Credentials) (*User, error)

2. AuthenticateMiddleware (function)
   Location: internal/api/middleware.go:23
   Signature: func AuthenticateMiddleware(next http.Handler) http.Handler

3. Authenticator (interface)
   Location: internal/auth/types.go:12
   Signature: type Authenticator interface { ... }

If results aren't helpful

Problem → Try this instead
Too many results → Add a scope: "Search for authenticate in internal/auth"
Wrong kind of symbol → Filter by kind: "Find functions named authenticate"
Nothing found → Try a partial name: "Search for auth"
Misspelled? → CKB does fuzzy matching, but check spelling

Follow-up prompts

  • "Explain the UserService.Authenticate symbol"
  • "Who calls UserService.Authenticate?"
  • "What's the impact of changing this?"

Trace Request Flow from Entrypoints

When to use

  • You're debugging and need to understand how a request reaches some code
  • You want to see the full call chain from API/CLI to a function
  • You're new and want to understand how the system processes requests

Prompts

Trace from all entrypoints:

How is the ProcessPayment function reached? Show me the call paths from API endpoints.

Trace specific entrypoint types:

Trace how UserService.Create is called from CLI commands

List entrypoints first:

What are the main entrypoints in this codebase? Show me API endpoints and CLI commands.

What to expect

ProcessPayment is reached via 2 paths:

Path 1: API → ProcessPayment (confidence: 0.95)
  POST /api/v1/checkout
  → CheckoutHandler.Handle (internal/api/checkout.go:34)
  → OrderService.Complete (internal/orders/service.go:89)
  → PaymentService.ProcessPayment (internal/payments/service.go:45)

Path 2: Job → ProcessPayment (confidence: 0.89)
  RetryFailedPayments (cron job)
  → PaymentService.RetryPending (internal/payments/service.go:112)
  → PaymentService.ProcessPayment (internal/payments/service.go:45)

Entrypoint types: api, job

If results aren't helpful

Problem → Try this instead
No paths found → The symbol might be internal—try "Show me what calls X directly"
Too many paths → Filter by type: "Trace from API endpoints only"
Paths seem incomplete → Check if the SCIP index is fresh: "Run ckb doctor"
Want more detail → "Show me the call graph for X with depth 3"

Follow-up prompts

  • "Explain the CheckoutHandler file"
  • "What else does OrderService.Complete call?"
  • "Is POST /api/v1/checkout a risky endpoint?"

Bug Investigation

When to use

  • You're debugging an error and need to find the root cause
  • You see a stack trace and want to understand the flow
  • Something broke and you need to find what changed recently
  • You want to trace back from a symptom to its source

Prompts

Trace the error path:

I'm seeing an error in FooHandler. Trace how it's reached from entrypoints and show me the call chain.

Find recent changes:

What changed recently near ProcessPayment? Show me hotspots and recent commits in that area.

Understand dependencies:

UserService.Authenticate is failing. What does it depend on? What could cause it to fail?

Combine tracing with context:

This error happens in OrderService.Complete. Show me:
1. How it's reached from the API
2. What it calls downstream
3. Recent changes to this code

What to expect

Bug Investigation: FooHandler error

Call Path (from entrypoint):
  POST /api/v1/foo
  → FooHandler.Handle (internal/api/foo.go:23)
  → FooService.Process (internal/foo/service.go:45)
  → Database.Query (internal/storage/db.go:89)  ← error originates here

Dependencies of Database.Query:
  - ConnectionPool (internal/storage/pool.go)
  - QueryBuilder (internal/storage/query.go)
  - Logger (internal/logging/logger.go)

Recent Activity (last 7 days):
  ⚠️ internal/storage/db.go - 3 commits, 2 authors
     - abc123: "Optimize query timeout" (2 days ago)
     - def456: "Add connection retry" (5 days ago)

Hotspot Status: INCREASING (was stable, now volatile)

Suggested Investigation:
  - Check commit abc123 "Optimize query timeout" - most recent change
  - Review ConnectionPool for connection issues

If results aren't helpful

Problem → Try this instead
Can't find the symbol → "Search for symbols containing 'Foo'" first
Path seems wrong → Check if you're looking at the right function (overloads?)
No recent changes → Expand the time window: "Show changes in last 30 days"
Need more context → "Explain the Database.Query symbol and its callers"

Follow-up prompts

  • "Show me the diff for commit abc123"
  • "What tests cover Database.Query?"
  • "Who owns internal/storage? I need to ask them about this."

Understand a File or Path

When to use

  • You landed on a file and don't know what it's for
  • You want quick orientation before reading code
  • You're trying to understand why a path exists
  • You need to classify: is this core, legacy, glue code, or tests?

Prompts

File explanation:

Explain what internal/query/engine.go does and what I should read next.

Path role:

Why does internal/legacy/handler.go exist? Is it core, glue, legacy, or test-only?

Quick orientation:

Give me a quick overview of internal/api/middleware.go - key functions, what it imports, what uses it.

What to expect

File: internal/query/engine.go

Role: core (main query processing logic)

Purpose: Central query engine that coordinates all CKB operations.
This is the main entry point for query processing.

Key Symbols (top 5):
  - Engine (struct) - Main query engine type
  - Engine.Search - Symbol search implementation
  - Engine.GetReferences - Find all usages
  - Engine.AnalyzeImpact - Change impact analysis
  - NewEngine - Constructor

Imports:
  ← internal/backends (SCIP, LSP, Git adapters)
  ← internal/cache (query caching)
  ← internal/compression (response optimization)

Used By:
  → internal/api (HTTP handlers)
  → internal/mcp (MCP tool handlers)
  → internal/cli (CLI commands)

Local Hotspots:
  Engine.Search - modified 3 times in 30 days

Read Next:
  - internal/backends/orchestrator.go (how backends are coordinated)
  - internal/query/types.go (query/response types)

If results aren't helpful

Problem → Try this instead
Too high-level → "Show me the signatures of key functions in this file"
File not found → Check the path, or "Search for files named engine.go"
Want more detail → "Explain the Engine.Search function specifically"
Need context → "What module does this file belong to?"

Follow-up prompts

  • "Show me how Engine.Search is called"
  • "What's the impact of changing Engine?"
  • "Who owns this file?"

Blast Radius / Impact Analysis

When to use

  • Before refactoring a function
  • Before changing a public API
  • When reviewing someone else's changes
  • When deciding if a change needs more testing

Prompts

Basic impact:

What's the impact of changing UserService.Authenticate?

With risk assessment:

Analyze the blast radius if I modify the Response struct. Is this a risky change?

Scope to tests:

What tests would be affected if I change the Database.Query method?

What to expect

Impact Analysis: UserService.Authenticate

Risk Score: HIGH (0.82)
Blast Radius: 4 modules, 15 files, 18 callers → high risk   [v7.6]

Visibility: public (exported, used across modules)

Direct Callers (12):
  - AuthenticateMiddleware (internal/api/middleware.go:45)
  - LoginHandler.Handle (internal/api/auth.go:23)
  - RefreshTokenHandler.Handle (internal/api/auth.go:89)
  ... and 9 more

Transitive Callers (6):   [v7.6]
  - main (cmd/server/main.go) — depth 3, confidence 0.75
  - ServeHTTP (internal/api/router.go) — depth 2, confidence 0.85
  ... and 4 more

Affected Modules (4):
  - internal/api (8 callers)
  - internal/admin (2 callers)
  - internal/jobs (1 caller)
  - tests (15 references)

Breaking Change Risk:
  ⚠️ Signature change would break 12 direct + 6 transitive callers
  ⚠️ Return type change affects error handling in 8 places

Suggested Drilldowns:
  - "Show callers in internal/api"
  - "What tests cover UserService.Authenticate?"

v7.6: Response now includes blast radius summary (module/file/caller counts with risk level) and transitive callers (callers-of-callers up to depth 4).
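
To make that depth-capped expansion concrete, here is a minimal Go sketch of a breadth-first walk over a reverse call graph that labels each caller with its distance from the changed symbol. The callersOf map is a hypothetical input for illustration; this is not CKB's implementation.

package impact

// transitiveCallers walks a reverse call graph (callee -> direct callers)
// breadth-first, up to maxDepth hops from root. Depth-1 entries are the
// direct callers; depth 2 and beyond are the transitive "callers-of-callers".
func transitiveCallers(callersOf map[string][]string, root string, maxDepth int) map[string]int {
    depth := map[string]int{root: 0}
    queue := []string{root}
    for len(queue) > 0 {
        cur := queue[0]
        queue = queue[1:]
        if depth[cur] >= maxDepth {
            continue
        }
        for _, caller := range callersOf[cur] {
            if _, seen := depth[caller]; !seen {
                depth[caller] = depth[cur] + 1
                queue = append(queue, caller)
            }
        }
    }
    delete(depth, root) // report only callers, not the changed symbol itself
    return depth
}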

If results aren't helpful

Problem → Try this instead
Risk seems too high/low → "Explain why UserService.Authenticate has high risk"
Missing callers → Index might be stale—regenerate SCIP index
Want deeper analysis → "Show impact with depth 4" (max transitive depth)
Need owner info → "Who owns the code that calls UserService.Authenticate?"

Follow-up prompts

  • "Show me all 12 direct callers"
  • "Which of these callers are in production code vs tests?"
  • "What's the safest way to change this without breaking callers?"

Summarize PR Diff by Risk

When to use

  • Reviewing a pull request
  • Understanding what changed in a commit range
  • Checking if recent changes are risky before deploying
  • Catching up on what happened while you were away

Prompts

For a PR:

Summarize PR #123 and highlight any risky changes

For a commit range:

What changed between main and feature/new-auth? Summarize by risk level.

For recent changes:

Summarize changes in the last 7 days. What's most likely to cause problems?

For a specific commit:

Explain commit abc123 and its potential impact

What to expect

PR #123 Summary: "Add rate limiting to API"

Files Changed: 8
Lines: +245 / -32

Risk Assessment: MEDIUM

High Risk Changes:
  ⚠️ internal/api/middleware.go
     - Modified request handling path
     - Affects all API endpoints
     - 3 functions changed: RateLimiter, CheckLimit, ResetBucket

  ⚠️ internal/api/config.go
     - New configuration options
     - Defaults may affect existing deployments

Medium Risk Changes:
  ⚡ internal/api/errors.go
     - New error type: RateLimitExceeded
     - Added to error handling chain

Low Risk Changes:
  ✓ internal/api/middleware_test.go (tests)
  ✓ docs/rate-limiting.md (documentation)

Suggested Reviewers: @api-team, @security-team
Suggested Tests: Run API integration tests, load tests

Hotspot Warning:
  internal/api/middleware.go has been modified 8 times in 30 days
  (trend: increasing)
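
CKB's hotspot score itself is internal, but the raw signal it summarizes is commit churn. A minimal Go sketch of that signal, counting how many commits touched each file in the last 30 days by shelling out to git (the file path is just the example above):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Emits one file path per line for every commit in the window; a file
    // appearing N times was touched by N commits.
    out, err := exec.Command("git", "log", "--since=30.days",
        "--name-only", "--pretty=format:").Output()
    if err != nil {
        panic(err)
    }
    churn := map[string]int{}
    for _, line := range strings.Split(string(out), "\n") {
        if f := strings.TrimSpace(line); f != "" {
            churn[f]++
        }
    }
    fmt.Println(churn["internal/api/middleware.go"]) // e.g. 8
}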

If results aren't helpful

Problem → Try this instead
PR not found → Make sure you're in the right repo, or use a commit range instead
Risk seems wrong → "Explain why middleware.go is high risk"
Want more detail on a file → "Explain the changes to internal/api/middleware.go"
Need ownership info → "Who should review changes to internal/api?"

Follow-up prompts

  • "Show me the impact of the RateLimiter changes"
  • "What tests cover the modified code?"
  • "Are there any architectural decisions related to rate limiting?"

Dead Code Cleanup

When to use

  • You suspect something isn't used anymore
  • You're cleaning up after a migration
  • You want to remove deprecated code safely
  • You're doing a codebase cleanup sprint

Prompts

Check if something is used:

Is the LegacyAuthenticator still used? Should I keep it or remove it?

Get a verdict:

Justify whether I should keep or remove the ValidateV1 function.

Find unused code in a module:

Are there any unused exports in internal/utils?

Check deprecated code:

What still calls the deprecated ProcessV1 function? Can I remove it?

What to expect

Symbol: LegacyAuthenticator

Verdict: REMOVE (confidence: 0.92)

Reasoning:
  - No direct callers found in production code
  - Only referenced in 2 test files (test-only usage)
  - Marked @deprecated in documentation
  - Last modified 8 months ago
  - Superseded by: Authenticator (new implementation)

References Found:
  - internal/auth/legacy_test.go:23 (test)
  - internal/auth/legacy_test.go:45 (test)

Safe to Remove:
  ✓ No production callers
  ✓ Tests can be removed with it
  ⚠️ Check if external packages depend on it (not indexed)

Suggested Actions:
  1. Remove LegacyAuthenticator and its tests
  2. Search for string references: "LegacyAuthenticator" in configs
  3. Check CHANGELOG for migration notes

If results aren't helpful

Problem → Try this instead
Says "keep" but you think it's dead → "Show me ALL references to LegacyAuthenticator, including tests"
Uncertain verdict → "Explain why LegacyAuthenticator might still be needed"
External dependencies → CKB only indexes your repo—check external consumers manually
Want to see the code → "Show me the LegacyAuthenticator symbol details"

Follow-up prompts

  • "What replaced LegacyAuthenticator?"
  • "Show me the file where LegacyAuthenticator is defined"
  • "Are there any ADRs about deprecating the legacy auth?"

Onboarding to an Unfamiliar Module

When to use

  • You're new to the codebase
  • You need to work in a module you've never touched
  • You're reviewing code in an area you don't know
  • You want the "lay of the land" before diving in

Prompts

Module overview:

Explain the internal/payments module. What does it do and what are the key types?

Architecture first:

Show me the architecture of this codebase. What are the main modules and how do they connect?

Key concepts:

What are the main domain concepts in this codebase?

File orientation:

Explain internal/payments/service.go. What's this file for?

Find the important stuff:

What are the most important symbols in the internal/auth module?

What to expect

Module: internal/payments

Responsibility: Payment processing and transaction management

Key Types:
  - PaymentService (main service, 12 methods)
  - Transaction (domain model)
  - PaymentProvider (interface for Stripe/PayPal)
  - PaymentResult (return type for operations)

Key Functions:
  - ProcessPayment - Main entry point for payments
  - RefundTransaction - Handle refunds
  - ValidateCard - Card validation logic

Dependencies:
  → internal/users (gets user info)
  → internal/orders (updates order status)
  ← internal/api (called by handlers)
  ← internal/jobs (called by retry jobs)

Owner: @payments-team
Recent Activity: 5 commits in last 30 days (stable)

Entry Points:
  - POST /api/v1/payments (ProcessPayment)
  - POST /api/v1/refunds (RefundTransaction)
  - RetryFailedPayments job (ProcessPayment)

If results aren't helpful

Problem → Try this instead
Too high-level → "Show me the key symbols in internal/payments with their signatures"
Module not found → "What modules exist in this codebase?"
Want relationships → "What does internal/payments depend on? What depends on it?"
Need examples → "Show me how PaymentService.ProcessPayment is called"

Follow-up prompts

  • "Trace how ProcessPayment is reached from the API"
  • "What tests exist for the payments module?"
  • "Who has been working on payments recently?"

Codebase Improvement Audit

When to use

  • You want a quick, evidence-based cleanup plan (dead folders, duplicated code, dependency bloat, TODO debt)
  • You're preparing for a cleanup sprint or tech-debt week
  • You suspect the repo contains legacy copies, build artifacts, or unused packages

Prompts

Safe default (recommended):

Audit this repository and produce a prioritized codebase improvement report.

Constraints:
- Do not modify files.
- Do not propose refactors unless you can cite specific evidence (paths/symbols/usages).

Output format:
1. Top 10 findings sorted by impact × confidence
2. For each: evidence (file paths, counts/sizes, symbols), why it matters, and a safe next action
3. Confidence: High / Medium / Low
4. Include a verification step before any destructive action (delete/move)

Focus on: obsolete/duplicate dirs, build artifacts, dependency bloat, TODO clusters, and hotspots.

Scope it (faster + less noise):

Audit only internal/api and internal/query. Ignore build/ and generated artifacts.
Same output format as above.

Dependency hygiene:

Check go.mod (or package.json / pubspec.yaml) for:
- Dependencies that should be dev-only
- Unused packages
- Heavy optional dependencies

Give a proposed change list (no edits) + expected impact + how to verify.

What to expect

CKB Codebase Improvement Report

🚨 High Priority: Obsolete / Duplicate Code
┌─────────────────────────────────────────────────────────────────┐
│ Issue: obsolete/legacy-api/                                     │
│ Evidence: 45MB, contains older sources duplicated in internal/  │
│ Why it matters: Wastes disk, confuses onboarding, risk of       │
│   accidental edits to dead code                                 │
│ Confidence: HIGH                                                │
│                                                                 │
│ Action: Delete obsolete/legacy-api/ (after verification)        │
│ Verification:                                                   │
│   1. rg "obsolete/legacy-api" -n  (expect: no references)       │
│   2. Confirm newer implementation exists in internal/api/       │
└─────────────────────────────────────────────────────────────────┘

⚠️ Medium Priority: Dependency Hygiene
┌─────────────────────────────────────────────────────────────────┐
│ Issue: testify in dependencies instead of test-only             │
│ Evidence: go.mod line 45                                        │
│ Confidence: HIGH                                                │
│                                                                 │
│ Action: Move to require block with // test comment              │
│ Verification: go mod tidy && go test ./...                      │
└─────────────────────────────────────────────────────────────────┘

📋 Low Priority: TODO Cluster
┌─────────────────────────────────────────────────────────────────┐
│ Evidence: 23 TODOs across 7 files, concentrated in:             │
│   - internal/cache/manager.go (8 TODOs)                         │
│   - internal/query/engine.go (6 TODOs)                          │
│ Confidence: MEDIUM (some may be intentional placeholders)       │
│                                                                 │
│ Action: Triage into 3 categories (fix now, track, won't fix)    │
│ Verification: Link actionable TODOs to issues/tickets           │
└─────────────────────────────────────────────────────────────────┘

Summary: 3 high, 4 medium, 6 low priority findings
Estimated cleanup effort: 2-3 hours for high priority items
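
The "impact × confidence" ordering the prompt asks for is simple to picture. A minimal Go sketch, with Finding as a hypothetical struct standing in for a report row:

package audit

import "sort"

// Finding is a hypothetical report row; Impact and Confidence are
// normalized to 0..1.
type Finding struct {
    Title      string
    Impact     float64
    Confidence float64
}

// rank sorts findings by impact x confidence, highest first, which is the
// ordering the audit prompt requests.
func rank(findings []Finding) {
    sort.Slice(findings, func(i, j int) bool {
        return findings[i].Impact*findings[i].Confidence >
            findings[j].Impact*findings[j].Confidence
    })
}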

If results aren't helpful

Problem → Try this instead
Too generic → "Only report findings that include file paths + concrete evidence"
Too many findings → "Limit to top 10 by impact × confidence"
Confidence feels low → "Show raw evidence: references/usages/sizes and why confidence is low"
Suggests risky deletes → "Add verification steps and require 'no references found' before delete"

Follow-up prompts

  • "For finding #1: show me all references/usages that prove it's safe to remove."
  • "Turn the top 5 findings into a 2-week cleanup plan with milestones."
  • "Which findings touch hotspots (high churn) and should be scheduled carefully?"
  • "Create GitHub issues for the top 3 findings with acceptance criteria."

CI/CD PR Analysis

When to use

  • Running automated PR checks in GitHub Actions
  • Getting risk assessment before merge
  • Finding the right reviewers automatically
  • Detecting when PRs touch volatile code

Prompts

Full PR analysis:

Summarize this PR against main branch. Include:
- Risk assessment (low/medium/high)
- Affected modules
- Hotspots touched
- Suggested reviewers

With specific base branch:

Analyze the PR from feature/auth to develop. What's the risk level and who should review?

Hotspot focus:

Does this PR touch any hotspots? Which files have been volatile lately?

What to expect

PR Analysis: feature/new-auth → main

Risk Assessment: MEDIUM

Files Changed: 12
Lines: +450 / -120

Modules Affected:
  - internal/auth (primary)
  - internal/api (secondary)

Hotspots Touched:
  ⚠️ internal/auth/service.go
     Score: 0.78, Trend: increasing
     Modified 8 times in last 30 days

Suggested Reviewers:
  - @security-team (owns internal/auth)
  - alice@example.com (recent contributor, 65% of auth changes)
  - bob@example.com (CODEOWNERS for internal/api)

Risk Factors:
  - Touches 1 hotspot file
  - Affects authentication flow (security-sensitive)
  - 2 modules affected

Recommended Actions:
  - Request review from @security-team
  - Run integration tests before merge
  - Consider splitting into smaller PRs

If results aren't helpful

Problem → Try this instead
No reviewers suggested → Check if CODEOWNERS exists and is indexed
Risk seems wrong → "Explain why this PR is medium risk"
Missing hotspots → Run refreshArchitecture to update hotspot data
Need more detail → "Show me the impact of changes to internal/auth/service.go"

Follow-up prompts

  • "Show me the hotspot history for internal/auth/service.go"
  • "What tests cover the changed files?"
  • "Are there any ADRs about authentication I should know about?"

Ownership Drift Check

When to use

  • Auditing CODEOWNERS accuracy
  • Finding stale ownership assignments
  • Identifying knowledge silos
  • Preparing for team reorganization

Prompts

Full drift analysis:

Show me ownership drift across the codebase. Where does CODEOWNERS not match who actually works on the code?

Scoped to a module:

Check ownership drift in internal/api. Is the declared owner still the main contributor?

High-drift only:

Show me files where ownership drift is above 50%. These need CODEOWNERS updates.

What to expect

Ownership Drift Analysis

Scope: all files
Threshold: 0.3 (30% drift)

High Drift Files (>0.7):

  internal/legacy/handler.go
  ├── Declared Owner: @old-team
  ├── Actual Contributors:
  │   - alice@example.com (72%)
  │   - bob@example.com (18%)
  │   - charlie@example.com (10%)
  ├── Drift Score: 0.89
  └── Recommendation: Update CODEOWNERS to @alice or create @new-team

  internal/api/middleware.go
  ├── Declared Owner: @api-team
  ├── Actual Contributors:
  │   - dave@example.com (65%)
  │   - @api-team members (25%)
  │   - external (10%)
  ├── Drift Score: 0.72
  └── Recommendation: Add dave@example.com as co-owner

Medium Drift Files (0.3-0.7):
  - internal/cache/manager.go (0.45)
  - internal/query/engine.go (0.38)

Summary:
  - 2 high drift files (need immediate attention)
  - 4 medium drift files (review recommended)
  - CODEOWNERS accuracy: 78%
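
One plausible reading of a drift score like 0.89 is the share of recent commits authored outside the declared owning team. A minimal Go sketch of that idea follows; this is a hypothetical formula, and CKB's real scoring may weight recency or blame differently.

package ownership

// driftScore is 0 when declared owners author all recent commits and
// approaches 1 as their share falls. Hypothetical formula, for illustration.
func driftScore(commitsByAuthor map[string]int, declaredOwner map[string]bool) float64 {
    total, owned := 0, 0
    for author, n := range commitsByAuthor {
        total += n
        if declaredOwner[author] {
            owned += n
        }
    }
    if total == 0 {
        return 0
    }
    return 1 - float64(owned)/float64(total)
}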

If results aren't helpful

Problem → Try this instead
No drift found → Lower the threshold: "Show drift above 0.2"
Too many results → Raise the threshold: "Show only drift above 0.6"
Missing contributors → Run refreshArchitecture to update git-blame data
Want specific file → "Who actually owns internal/api/handler.go?"

Follow-up prompts

  • "Show me the git history for internal/legacy/handler.go"
  • "Who should own files that @old-team used to own?"
  • "Create a CODEOWNERS update plan for high-drift files"

Cross-Repo Federation Search

When to use

  • Searching for modules across multiple repositories
  • Finding who owns similar code across the organization
  • Looking for patterns or implementations across microservices
  • Understanding organization-wide architecture

Prompts

Search modules across repos:

Search for authentication modules across the platform federation

Find ownership patterns:

Who owns API code across all repositories in the platform federation?

Get org-wide hotspots:

Show me the hottest files across all repos in our federation. What's volatile organization-wide?

Search decisions:

Find all architectural decisions about caching across the platform federation

What to expect

Cross-Repo Module Search: "auth"
Federation: platform

Found 4 modules across 3 repositories:

  api-service/internal/auth
  ├── Responsibility: JWT validation and session management
  ├── Owner: @security-team
  └── Last sync: 2 hours ago

  user-service/internal/auth
  ├── Responsibility: User authentication and password handling
  ├── Owner: @security-team
  └── Last sync: 2 hours ago

  gateway/pkg/auth
  ├── Responsibility: OAuth2 client implementation
  ├── Owner: @platform-team
  └── Last sync: 1 day ago

  admin/internal/auth
  ├── Responsibility: Admin-specific auth middleware
  ├── Owner: @admin-team
  └── Last sync: 3 hours ago

Staleness: fresh (all repos synced within 24h)

If results aren't helpful

Problem → Try this instead
Federation not found → "List available federations"
Stale results → "Sync the platform federation"
Too many results → Filter by repo: "Search auth in api-service and user-service only"
Missing repos → Check federation config: "Show repos in platform federation"

Follow-up prompts

  • "Compare the auth modules—what's different between api-service and user-service?"
  • "Who's the common owner across auth code?"
  • "Are there any shared contracts between these auth modules?"

Contract Impact Analysis

When to use

  • Before changing a shared protobuf or OpenAPI contract
  • Understanding who consumes your API
  • Assessing risk of breaking changes
  • Planning API versioning or deprecation

Prompts

Analyze contract impact:

What breaks if I change the user.proto contract in api-service?

List contracts:

Show me all public API contracts in the platform federation

Check dependencies:

What contracts does the gateway service depend on? What depends on gateway's contracts?

Before making changes:

I need to add a required field to OrderRequest in proto/api/v1/order.proto.
1. Who consumes this contract?
2. What's the risk level?
3. Who should I notify?

What to expect

Contract Impact Analysis

Contract: api-service/proto/api/v1/user.proto
Type: proto
Visibility: public (versioned, has services)

Direct Consumers (3):
  ┌─────────────────────────────────────────────────────────┐
  │ gateway                                                  │
  │ ├── Evidence: proto import in gateway/proto/deps.proto  │
  │ ├── Tier: declared (confidence: 1.0)                    │
  │ └── Owner: @platform-team                               │
  ├─────────────────────────────────────────────────────────┤
  │ web-app                                                  │
  │ ├── Evidence: generated types in web-app/src/api/user.ts│
  │ ├── Tier: declared (confidence: 1.0)                    │
  │ └── Owner: @frontend-team                               │
  ├─────────────────────────────────────────────────────────┤
  │ mobile-app                                               │
  │ ├── Evidence: type reference in mobile/lib/api.dart     │
  │ ├── Tier: derived (confidence: 0.85)                    │
  │ └── Owner: @mobile-team                                 │
  └─────────────────────────────────────────────────────────┘

Transitive Consumers (1):
  - analytics (via gateway, depth: 2)

Risk Assessment: HIGH
Risk Factors:
  ⚠️ Public contract with 3 direct consumers
  ⚠️ Has service definitions (UserService)
  ⚠️ Versioned API (v1) - breaking changes need v2

Approval Required From:
  - @platform-team (gateway owner)
  - @frontend-team (web-app owner)
  - @mobile-team (mobile-app owner)

Recommended Actions:
  1. Create v2 of the proto if making breaking changes
  2. Notify all consumer owners before merge
  3. Coordinate deployment order: api-service first, then consumers

If results aren't helpful

Problem → Try this instead
Contract not found → "List all contracts in api-service"
Missing consumers → "Include heuristic matches for user.proto"
Need more depth → "Show transitive consumers with depth 4"
Wrong visibility → Visibility is inferred—check path conventions

Follow-up prompts

  • "Show me the proto import graph for user.proto"
  • "What other contracts does gateway consume?"
  • "Are there any architectural decisions about API versioning?"

Dead Code with Telemetry

When to use

  • You have runtime telemetry enabled and want high-confidence dead code detection
  • Cleaning up code with actual production usage data
  • Verifying static analysis with observed behavior
  • Finding code that exists but is never called at runtime

Prompts

Find dead code candidates:

Find dead code candidates using runtime telemetry. Show me functions with zero calls in the last 90 days.

Check telemetry coverage first:

What's our telemetry coverage? Is it good enough for dead code detection?

Scoped dead code search:

Find dead code in internal/legacy using telemetry. What's never called?

High confidence only:

Show me dead code candidates with confidence above 0.8

What to expect

Dead Code Analysis (Telemetry-Enhanced)

Coverage Status:
  Overall: 76% (medium)
  Attribute Coverage: 85%
  Match Quality: 72%
  ✓ Coverage sufficient for dead code detection

Dead Code Candidates (3 found):

  ┌─────────────────────────────────────────────────────────┐
  │ LegacyExporter.Export                                   │
  │ File: internal/export/v1.go:42                          │
  │ Static References: 3                                    │
  │ Observed Calls (90d): 0                                 │
  │ Match Quality: exact                                    │
  │ Confidence: 0.87                                        │
  │ Reasons:                                                │
  │   ✓ 90+ days observation period                        │
  │   ✓ Exact telemetry match                              │
  │   ⚠️ Has static references (test files)                │
  └─────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────┐
  │ ValidateV1Request                                       │
  │ File: internal/api/validate_legacy.go:23                │
  │ Static References: 1                                    │
  │ Observed Calls (90d): 0                                 │
  │ Match Quality: strong                                   │
  │ Confidence: 0.82                                        │
  │ Reasons:                                                │
  │   ✓ 90+ days observation                               │
  │   ✓ Strong telemetry match                             │
  │   ✓ Low static reference count                         │
  └─────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────┐
  │ FormatOutputV2                                          │
  │ File: internal/format/output.go:89                      │
  │ Static References: 5                                    │
  │ Observed Calls (90d): 0                                 │
  │ Match Quality: exact                                    │
  │ Confidence: 0.78                                        │
  │ Reasons:                                                │
  │   ✓ 90+ days observation                               │
  │   ⚠️ Multiple static references                        │
  │   ⚠️ Sampling may miss low-traffic paths               │
  └─────────────────────────────────────────────────────────┘

Summary:
  Analyzed: 1,247 symbols
  Candidates: 3
  Observation Period: 120 days

Limitations:
  - Scheduled jobs may not emit telemetry
  - Sampling rate: ~10% (low-traffic functions may be missed)
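
The confidence values combine the factors listed under "Reasons". Purely as an illustration of how such a score could be assembled (CKB's actual model is not documented here), a sketch in Go:

package deadcode

// deadCodeConfidence is a hypothetical scoring sketch: match quality is the
// base, a 90+ day observation window adds confidence, and each remaining
// static reference subtracts some. Not CKB's actual model.
func deadCodeConfidence(matchQuality float64, observationDays, staticRefs int) float64 {
    score := matchQuality
    if observationDays >= 90 {
        score += 0.10
    }
    score -= 0.03 * float64(staticRefs)
    if score < 0 {
        return 0
    }
    if score > 1 {
        return 1
    }
    return score
}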

If results aren't helpful

Problem → Try this instead
Coverage too low → "What services are unmapped? How do I improve coverage?"
Too many candidates → Raise the threshold: "Find dead code with confidence above 0.85"
False positives → Check exclusions: tests, migrations, and scheduled jobs should be excluded
No telemetry data → Enable telemetry first—see the Configuration guide

Follow-up prompts

  • "For LegacyExporter.Export, show me the static references—are they just tests?"
  • "Justify LegacyExporter.Export—should I keep or remove it?"
  • "What replaced LegacyExporter? Is there a V2?"

Runtime Usage Check

When to use

  • You want to know if a function is actually called in production
  • Validating that a feature is being used
  • Checking call volume before deprecation
  • Understanding which services call a function

Prompts

Check if code is used:

Is ProcessPayment actually called in production? How often?

Get usage with callers:

Show me runtime usage for UserService.Authenticate. Who's calling it?

Check before deprecation:

I want to deprecate the V1 API. Show me runtime usage for all V1 endpoints.

Trend analysis:

Is usage of LegacyHandler increasing or decreasing?

What to expect

Runtime Usage: ProcessPayment
Symbol: ckb:payments:sym:abc123

Observed Calls (90 days): 145,230
Error Count: 23 (0.02%)
Match Quality: exact (confidence: 0.95)
Trend: stable

Callers:
  ┌────────────────────────────────────────────────┐
  │ Service          │ Calls    │ Last Seen       │
  ├──────────────────┼──────────┼─────────────────┤
  │ api-gateway      │ 98,450   │ 2 minutes ago   │
  │ checkout-service │ 42,100   │ 5 minutes ago   │
  │ retry-worker     │ 4,680    │ 1 hour ago      │
  └────────────────────────────────────────────────┘

Usage Pattern:
  - Peak hours: 10am-2pm, 6pm-9pm
  - Weekend drop: ~40%
  - Consistent daily pattern

Static vs Observed:
  Static callers found: 5
  Observed callers: 3
  Discrepancy: 2 callers in static analysis not seen at runtime
    - test_payment_handler (test file)
    - benchmark_payment (benchmark file)

If results aren't helpful

Problem → Try this instead
No usage data → Check telemetry status: "Is telemetry enabled and receiving data?"
Low confidence → Match might be weak—check the symbol path and function name
Missing callers → Caller tracking may be disabled—check privacy settings
Trend unclear → Extend the period: "Show usage over last 90 days"

Follow-up prompts

  • "Why are there 2 callers in static analysis but not at runtime?"
  • "Compare ProcessPayment usage this month vs last month"
  • "What's the error rate trend for ProcessPayment?"

Understand Why Code Exists

When to use

  • You're looking at unfamiliar code and want to know the backstory
  • You want to understand the original intent, not just what the code does
  • You need to know if code is temporary, experimental, or deprecated
  • You're investigating why something was implemented a certain way

Prompts

Explain the origin:

Explain the origin of UserService.Authenticate—who wrote it, when, and why?

Check for warnings:

What concerns should I have about internal/legacy/handler.go? Any warnings?

With full context:

I need to modify ProcessPayment. Before I touch it, tell me:
1. Who originally wrote it and why
2. How has it evolved over time
3. Any linked issues or PRs
4. What files typically change with it
5. Any warnings I should know about

Focus on evolution:

How has the auth module changed over time? Show me the evolution timeline.

What to expect

Symbol Origin: UserService.Authenticate

Origin:
  Commit: abc123
  Author: alice@example.com
  Date: 2023-06-15
  Message: "Add JWT-based authentication"
  PR: #234 "Implement auth system"
  Issue: #189 "Need user authentication"

Evolution (8 commits over 18 months):
  2023-06-15  alice@example.com  Initial implementation
  2023-08-20  bob@example.com    Add refresh token support
  2023-11-10  alice@example.com  Fix session timeout bug (#312)
  2024-02-15  charlie@example.com Add MFA support
  2024-06-20  dave@example.com   Performance optimization
  ... 3 more commits

Contributors: 4 unique authors
Last Modified: 2024-11-01 (2 months ago)

Co-Changes (files that typically change together):
  - internal/auth/token.go (correlation: 0.85, 7 co-commits)
  - internal/auth/session.go (correlation: 0.72, 5 co-commits)
  - internal/api/middleware.go (correlation: 0.68, 4 co-commits)

Warnings:
  ⚠️ bus_factor: Only 2 primary contributors
  ⚠️ high_coupling: 3 files with correlation >0.7

References:
  Issues: #189, #245, #312
  PRs: #234, #267, #340
  Docs: docs/auth.md

Usage:
  Static refs: 45
  Observed calls: 125k/day (if telemetry enabled)
  Importance: critical

If results aren't helpful

Problem → Try this instead
No origin found → File might be untracked or in a shallow clone
Limited history → Increase historyLimit: "Show last 20 changes"
No warnings → Code might be healthy! Or check the coupling threshold
Missing PRs/issues → Commit messages might not reference them

Follow-up prompts

  • "Why is there a bus factor warning? Who else can I involve?"
  • "Show me the linked issues for this code"
  • "What's the coupling between auth and middleware?"

Find Co-Change Patterns

When to use

  • You're refactoring and want to know what else might need to change
  • You suspect hidden dependencies between files
  • You want to suggest related files in code reviews
  • You're looking for candidates to extract into a shared module

Prompts

Find coupled files:

What files typically change together with internal/query/engine.go?

High correlation only:

Find files with strong coupling (>0.7) to the auth module

Before refactoring:

I'm going to refactor PaymentService. What files usually change with it?

Module-level coupling:

What's the coupling between internal/api and internal/auth modules?

What to expect

Co-Change Analysis: internal/query/engine.go

Analyzed commits: 156 (last 365 days)

Coupled Files:

Strong Coupling (>0.7):
  ┌─────────────────────────────────────────────────────────┐
  │ internal/query/types.go                                  │
  │ ├── Correlation: 0.92                                   │
  │ ├── Co-commits: 23                                      │
  │ ├── Shared authors: 4                                   │
  │ └── Last co-change: 3 days ago                         │
  ├─────────────────────────────────────────────────────────┤
  │ internal/backends/orchestrator.go                        │
  │ ├── Correlation: 0.78                                   │
  │ ├── Co-commits: 15                                      │
  │ ├── Shared authors: 3                                   │
  │ └── Last co-change: 1 week ago                         │
  └─────────────────────────────────────────────────────────┘

Moderate Coupling (0.5-0.7):
  - internal/cache/query_cache.go (0.65, 12 co-commits)
  - internal/compression/budget.go (0.58, 9 co-commits)
  - internal/query/engine_test.go (0.55, 18 co-commits)

Weak Coupling (0.3-0.5):
  - internal/api/handler.go (0.42, 6 co-commits)
  - internal/mcp/server.go (0.38, 5 co-commits)

Summary:
  Files above threshold: 7
  Strong: 2, Moderate: 3, Weak: 2

Insights:
  💡 engine.go and types.go are tightly coupled—consider if they should be merged
  💡 High coupling with backends/orchestrator suggests shared abstractions
  💡 Test file co-changes as expected (healthy pattern)
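
The correlation numbers read naturally as a Jaccard-style ratio: commits touching both files divided by commits touching either. A minimal Go sketch under that assumption (CKB's exact metric may differ):

package cochange

// coChangeCorrelation computes (commits touching both) / (commits touching
// either) for two sets of commit hashes, one per file. 1.0 means the files
// only ever change together. Assumed formula, for illustration.
func coChangeCorrelation(commitsA, commitsB map[string]bool) float64 {
    both := 0
    union := len(commitsB)
    for c := range commitsA {
        if commitsB[c] {
            both++
        } else {
            union++ // commits touching A but not B
        }
    }
    if union == 0 {
        return 0
    }
    return float64(both) / float64(union)
}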

If results aren't helpful

Problem → Try this instead
No coupling found → Extend the window: "Check coupling over last 2 years"
Too much coupling → Raise the threshold: "Show only correlation >0.6"
Coupling seems wrong → Check if commits are atomic—large commits inflate correlation
Need direction → "Is A coupled to B, or B coupled to A?" (usually symmetric)

Follow-up prompts

  • "Should engine.go and types.go be merged?"
  • "Explain why engine and orchestrator are so coupled"
  • "When I change engine.go, should I also review types.go?"

LLM Codebase Export

When to use

  • You want to give an LLM context about your codebase
  • You need a token-efficient summary of code structure
  • You're generating documentation or READMEs
  • You want to share codebase structure with non-technical stakeholders

Prompts

Full export:

Export the codebase structure in LLM-friendly format

Filtered by importance:

Export only high-complexity or high-usage symbols—skip the trivial stuff

Scoped export:

Export just the internal/api module with ownership and contracts

Custom filtering:

Export symbols with complexity >10 or calls >1000/day, max 200 symbols

What to expect

# Codebase: my-project
# Generated: 2025-12-18T10:00:00Z
# Symbols: 147 | Files: 42 | Modules: 8

## internal/api/ (owner: @api-team)

  ! handler.go
      $ Server                  c=5  calls=10k/day ★★★
      # HandleRequest()         c=12 calls=8k/day  ★★★
      # ValidateInput()         c=8  calls=8k/day  ★★

  ! middleware.go
      # AuthMiddleware()        c=15 calls=10k/day ★★★  contract:auth.proto
      # RateLimiter()           c=8  calls=10k/day ★★
      # Logger()                c=3  calls=10k/day ★

## internal/auth/ (owner: @security-team)

  ! service.go
      $ AuthService             c=8  calls=5k/day  ★★★
      # Authenticate()          c=12 calls=5k/day  ★★★
      # ValidateToken()         c=6  calls=8k/day  ★★

  ! token.go
      # GenerateJWT()           c=5  calls=5k/day  ★★
      # ParseJWT()              c=4  calls=8k/day  ★★

## internal/payments/ (owner: @payments-team)

  ! processor.go
      $ PaymentProcessor        c=15 calls=2k/day  ★★★  ⚠️ high complexity
      # ProcessPayment()        c=22 calls=2k/day  ★★★  ⚠️ high complexity
      # RefundPayment()         c=18 calls=100/day ★★

---
Legend:
  !  = file
  $  = class/struct
  #  = function/method
  c  = complexity (cyclomatic)
  ★  = importance (usage × complexity)
  contract: = exposes or consumes a contract
  ⚠️ = warning (high complexity, etc.)

If results aren't helpful

Problem → Try this instead
Too many symbols → Add a minComplexity or minCalls filter
Missing usage data → Check if telemetry is enabled
No contracts shown → Set includeContracts: true explicitly
Output too long → Set maxSymbols: 100

Follow-up prompts

  • "Now explain the relationship between api and auth modules"
  • "What are the highest complexity functions? They need attention."
  • "Generate a README section describing this architecture"

Risk Audit and Quick Wins

When to use

  • You want to identify technical debt hotspots
  • You're planning a cleanup sprint and need priorities
  • You need to justify refactoring work to stakeholders
  • You want quick wins that deliver high impact with low effort

Prompts

Full risk audit:

Audit this codebase and show me high-risk areas

Focus on specific factor:

Show me risky code due to complexity—what's hardest to maintain?

Get quick wins:

Find high-impact, low-effort refactoring targets (quick wins)

Scoped audit:

Audit risk in internal/payments module

With threshold:

Show me code with risk score above 60

What to expect

Risk Audit Report

Scope: all modules
Threshold: score ≥ 40
Analyzed: 234 files, 1,847 symbols

High Risk (score ≥ 70):

  ┌─────────────────────────────────────────────────────────┐
  │ internal/payments/processor.go                           │
  │ Risk Score: 85                                           │
  │                                                          │
  │ Factor Breakdown:                                        │
  │   complexity:        92 (weight: 20%) ████████████████░  │
  │   test_coverage:     75 (weight: 20%) ████████████░░░░   │
  │   bus_factor:        80 (weight: 15%) ████████████████░  │
  │   security:          90 (weight: 15%) ██████████████████ │
  │   staleness:         40 (weight: 10%) ████████░░░░░░░░░  │
  │   error_rate:        65 (weight: 10%) █████████████░░░░  │
  │   coupling:          70 (weight: 5%)  ██████████████░░░  │
  │   churn:             45 (weight: 5%)  █████████░░░░░░░░  │
  │                                                          │
  │ Top Factors: complexity, security, bus_factor            │
  │ Suggestions:                                             │
  │   - Break ProcessPayment into smaller functions          │
  │   - Add more contributors to reduce bus factor           │
  │   - Add test coverage for error paths                   │
  └─────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────┐
  │ internal/legacy/handler.go                               │
  │ Risk Score: 78                                           │
  │ Top Factors: staleness, bus_factor, test_coverage       │
  └─────────────────────────────────────────────────────────┘

Medium Risk (40-70): 12 files
Low Risk (<40): 220 files

Summary:
  High risk: 2 files
  Medium risk: 12 files
  Dominant factors: complexity (35%), test_coverage (28%)

Quick Wins (high impact, low effort):

  1. internal/utils/validator.go
     Risk: 52, Effort: low
     Fix: Add tests (1-2 hours)
     Impact: ★★★ (called 50k/day)

  2. internal/api/errors.go
     Risk: 48, Effort: low
     Fix: Reduce cyclomatic complexity
     Impact: ★★ (core error handling)

  3. internal/cache/ttl.go
     Risk: 45, Effort: low
     Fix: Add second contributor
     Impact: ★★ (used by all caching)

If results aren't helpful

Problem → Try this instead
Too many high-risk → Raise the threshold: "Show only risk above 70"
No quick wins → Lower the effort filter or expand the scope
Missing security signals → Check if security patterns are configured
Wrong factor weights → Use a factor filter: "Focus on complexity"

Follow-up prompts

  • "Explain why processor.go has such high risk"
  • "Create a 2-week plan to address the top 5 risks"
  • "Who owns these high-risk files? Who should fix them?"
  • "Show me the test coverage for processor.go"

Documentation Maintenance

When to use

  • You're about to rename a symbol and want to find which docs reference it
  • You want to check if documentation is stale (references deleted symbols)
  • You're planning a documentation cleanup sprint
  • You want to enforce doc coverage in CI

Prompts

Find docs for a symbol:

What documentation references UserService.Authenticate?

Check for stale references:

Are there any stale symbol references in our docs?

Check specific doc:

Check README.md for broken symbol references

Get doc coverage:

What's our documentation coverage? What important symbols need docs?

Before renaming:

I'm about to rename ProcessPayment to HandlePayment. What docs reference it?

Find module docs:

What documentation is linked to the internal/auth module?

What to expect

Documentation Coverage Report

Total Symbols: 1,847
Documented Symbols: 312
Coverage: 16.9%

Top Undocumented (by centrality):
  1. internal/query/Engine.Execute
     Kind: method, Refs: 45
     Central to query processing, no docs found

  2. internal/api/Server.HandleRequest
     Kind: method, Refs: 38
     Main API entry point, no docs found

  3. internal/auth/Authenticator.Validate
     Kind: method, Refs: 28
     Core auth flow, no docs found

Staleness Report:
  README.md:
    Line 42: `OldService.Method` → symbol_renamed
      Suggestion: NewService.Method

  docs/architecture.md:
    Line 156: `DeletedHandler` → missing_symbol
      No suggestions available

Total Stale: 2 references

If results aren't helpful

Problem → Try this instead
No docs found → Run ckb docs index first
Coverage seems low → Check docs.paths in config
Missing symbol refs → Use directives: <!-- ckb:symbol full.path -->
Single-word symbols not found → Use the <!-- ckb:known_symbols Name --> directive

Follow-up prompts

  • "Show me all symbols referenced in README.md"
  • "Which docs reference the auth module?"
  • "What's the staleness trend over the last month?"
  • "Generate a doc coverage badge for our README"

Daemon Mode

When to use

  • You want CKB running continuously in the background
  • You need scheduled index refreshes (nightly, hourly)
  • You want automatic reindexing when files change
  • You need webhooks for CI/CD integration

Prompts

Check daemon status:

Is the CKB daemon running? What's its status?

Start with scheduled tasks:

Start the CKB daemon with hourly index refresh

Check scheduled tasks:

What scheduled tasks are configured in the daemon?

Check webhooks:

List all configured webhooks. Are any failing?

What to expect

Daemon Status

Status: running
PID: 12345
Uptime: 3 days, 4 hours
Port: 9120

Scheduled Tasks:
  ┌────────────────────────────────────────────────┐
  │ Task              │ Schedule    │ Last Run     │
  ├───────────────────┼─────────────┼──────────────┤
  │ index-refresh     │ 0 * * * *   │ 45 min ago   │
  │ hotspot-snapshot  │ 0 0 * * *   │ 4 hours ago  │
  │ architecture-sync │ 0 2 * * *   │ 6 hours ago  │
  └────────────────────────────────────────────────┘

File Watcher:
  Enabled: true
  Patterns: **/*.go, **/*.ts
  Debounce: 5s

Webhooks:
  - POST https://ci.example.com/ckb (last: success, 2h ago)
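
The 5s debounce means the watcher reindexes once after a burst of saves settles, not once per save. A minimal Go sketch of the pattern (an illustration, not the daemon's code):

package watcher

import "time"

// debouncer coalesces bursts of file events: each event resets the timer,
// so the action runs only after a quiet period (5s here, matching the
// daemon's configured debounce). Single-goroutine sketch, no locking.
type debouncer struct {
    timer *time.Timer
}

func (d *debouncer) onEvent(action func()) {
    if d.timer != nil {
        d.timer.Stop()
    }
    d.timer = time.AfterFunc(5*time.Second, action)
}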

If results aren't helpful

Problem → Try this instead
Daemon not running → Start it: ckb daemon start
Tasks not executing → Check logs: ckb daemon logs --follow
Webhook failing → Check endpoint and auth: ckb daemon logs --filter=webhook
Want different schedule → Edit the config: see the Daemon Mode wiki

Follow-up prompts

  • "Show me the daemon logs for the last hour"
  • "Stop the daemon"
  • "Restart the daemon with debug logging"

Complexity Analysis

When to use

  • You want to find the most complex functions for refactoring
  • You're assessing code quality or maintainability
  • You need complexity metrics for code review
  • You're prioritizing technical debt

Prompts

Get file complexity:

What's the complexity of internal/query/engine.go? Show me the most complex functions.

Find high-complexity code:

Find all functions with cyclomatic complexity above 15

Module complexity overview:

Show me complexity metrics for the internal/payments module

Compare complexity:

Which file is more complex: engine.go or orchestrator.go?

What to expect

Complexity Analysis: internal/query/engine.go

File Metrics:
  Total Lines: 450
  Functions: 12
  Avg Cyclomatic: 8.3
  Avg Cognitive: 12.1

High Complexity Functions:
  ┌─────────────────────────────────────────────────────────┐
  │ Function              │ Cyclomatic │ Cognitive │ Lines  │
  ├───────────────────────┼────────────┼───────────┼────────┤
  │ Engine.Search         │ 22         │ 35        │ 120    │
  │ Engine.AnalyzeImpact  │ 18         │ 28        │ 95     │
  │ Engine.TraceUsage     │ 15         │ 22        │ 78     │
  │ parseQuery            │ 12         │ 18        │ 45     │
  └─────────────────────────────────────────────────────────┘

Recommendations:
  ⚠️ Engine.Search exceeds threshold (22 > 15)
     Consider breaking into smaller functions

  ⚠️ Engine.AnalyzeImpact is borderline (18)
     Review for potential simplification

Language: Go (tree-sitter analysis)
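
For intuition on these numbers: cyclomatic complexity starts at 1 and adds 1 per decision point (each if, for, and case; tools disagree on operators like && and ||, and CKB's tree-sitter counts may differ from other analyzers). A small worked Go example:

package metrics

// bucket scores cyclomatic complexity 5 under the common counting scheme.
func bucket(n int) string { // base: 1
    if n < 0 { // +1 -> 2
        return "negative"
    }
    switch {
    case n == 0: // +1 -> 3
        return "zero"
    case n < 100: // +1 -> 4
        return "small"
    }
    for n >= 1000 { // +1 -> 5
        n /= 10
    }
    return "large"
}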

If results aren't helpful

Problem → Try this instead
No complexity data → Tree-sitter supports 8 languages—check if yours is supported
Metrics seem wrong → Different tools calculate differently—CKB uses tree-sitter
Want different threshold → Specify it: "Find functions with complexity above 20"
Need historical trend → "Has complexity increased in this file over time?"

Follow-up prompts

  • "Explain why Engine.Search is so complex"
  • "What would it take to reduce complexity in this function?"
  • "Show me the call graph for Engine.Search—can it be split?"

Analysis Tiers

When to use

  • You want faster results and don't need full analysis
  • You're running CKB in CI and need to balance speed vs depth
  • You want to understand what each tier provides
  • You're troubleshooting why certain features aren't available

Prompts

Check current tier:

What analysis tier am I using? What features are available?

Run with specific tier:

Search for auth symbols using fast tier (skip git analysis)

Diagnose tier requirements:

What's missing to use full tier? Run tier diagnostics.

Compare tiers:

What's the difference between standard and full tier?

What to expect

Analysis Tier: standard

Current Capabilities:
  ✓ Symbol search (SCIP index)
  ✓ Call graph navigation
  ✓ Reference finding
  ✓ Basic ownership (CODEOWNERS)
  ✗ Git blame analysis (requires: full tier)
  ✗ Hotspot trends (requires: full tier)
  ✗ Telemetry integration (requires: full tier + config)

Tier Comparison:
  ┌─────────────────────────────────────────────────────────┐
  │ Feature              │ fast  │ standard │ full         │
  ├──────────────────────┼───────┼──────────┼──────────────┤
  │ Symbol search        │ ✓     │ ✓        │ ✓            │
  │ References           │ ✓     │ ✓        │ ✓            │
  │ Call graph           │ ✗     │ ✓        │ ✓            │
  │ Basic ownership      │ ✗     │ ✓        │ ✓            │
  │ Git blame            │ ✗     │ ✗        │ ✓            │
  │ Hotspot trends       │ ✗     │ ✗        │ ✓            │
  │ Telemetry            │ ✗     │ ✗        │ ✓ (if configured) │
  └─────────────────────────────────────────────────────────┘

To upgrade to full tier:
  - Ensure git history is available (not shallow clone)
  - Run: ckb index --tier=full

If results aren't helpful

Problem → Try this instead
Tier seems wrong → Check the env (echo $CKB_TIER) or the config file
Features missing → Run diagnostics: ckb doctor --tier full
Want faster CI → Use the fast tier: ckb index --tier=fast
Need specific feature → Check tier requirements in Configuration

Follow-up prompts

  • "Run this search with fast tier"
  • "What would I get with full tier that I don't have now?"
  • "Is full tier worth the extra time for CI?"

Index Management

When to use

  • You want to check if your index is up to date
  • You're deciding whether to do incremental or full reindex
  • You want to understand index state after changes
  • You're troubleshooting stale results

Prompts

Check index freshness:

Is my index up to date? How stale is it?

Check index state:

Show me index status—commits behind, dirty files, last indexed commit

Decide on reindex strategy:

Should I do incremental or full reindex? What are the trade-offs?

Force full reindex:

My index seems stale. How do I force a full reindex?

What to expect

Index Status

State: partial (incremental)
Last Indexed: abc1234 (2 hours ago)
Current HEAD: def5678

Freshness:
  Commits behind: 3
  Dirty files: 2 (uncommitted changes)
  Files changed since last index: 8

Accuracy:
  ✓ Go to definition - accurate
  ✓ Find refs (forward) - accurate
  ⚠️ Find refs (reverse) - may be stale
  ✓ Callees (outgoing) - accurate
  ⚠️ Callers (incoming) - may be stale

Recommendation:
  For most tasks, current index is fine.
  Run 'ckb index --force' if you need accurate caller information.

Last Full Index: 3 days ago (47 files changed since)

If results aren't helpful

  • Index always stale → Enable watch mode: ckb mcp --watch
  • Incremental not working → Check whether it's a Go project—only Go supports incremental indexing
  • Results seem wrong → Force a full reindex: ckb index --force
  • Index taking too long → Use incremental for PRs, full for nightly (see the sketch below)
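
A rough sketch of that split, incremental for PRs and a full reindex nightly. It assumes a plain ckb index run performs an incremental update; only --force and --watch appear on this page, so treat the rest as an assumption:

  # PR builds: incremental indexing is fast, but caller information may be
  # stale (assumption: plain `ckb index` updates incrementally).
  ckb index

  # Nightly or post-merge: force a full reindex so reverse references and
  # incoming callers are accurate again.
  ckb index --force

  # Local development: let the MCP server reindex as files change instead.
  ckb mcp --watch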

Follow-up prompts

  • "Force a full reindex now"
  • "Why is incremental index less accurate for callers?"
  • "Set up automatic reindexing when files change"

Remote Index Server

When to use

  • You want to serve indexes to multiple clients
  • You're setting up a central code intelligence server
  • You need to upload indexes from CI pipelines
  • You want to manage API tokens for index access

Prompts

Check server status:

Is the index server running? What repos are being served?

List served repositories:

What repositories are available on the index server?

Upload an index:

How do I upload a SCIP index to the server from CI?

Manage tokens:

List API tokens. Create a new upload token for CI.

What to expect

Index Server Status

Mode: index-server
Port: 8080
Repos Served: 5

Repositories:
  ┌─────────────────────────────────────────────────────────┐
  │ ID                  │ Files  │ Symbols │ Last Upload    │
  ├─────────────────────┼────────┼─────────┼────────────────┤
  │ company/api         │ 245    │ 3,200   │ 2 hours ago    │
  │ company/core-lib    │ 120    │ 1,500   │ 1 day ago      │
  │ company/gateway     │ 89     │ 980     │ 3 hours ago    │
  │ company/admin       │ 156    │ 2,100   │ 5 hours ago    │
  │ company/mobile      │ 312    │ 4,500   │ 1 hour ago     │
  └─────────────────────────────────────────────────────────┘

API Tokens:
  - ci-upload (scope: write, repos: company/*)
  - dashboard (scope: read, rate limit: 120/min)
  - admin (scope: admin)

Upload Example:
  curl -X POST http://server:8080/index/repos/company/api/upload \
    -H "Authorization: Bearer $CKB_TOKEN" \
    -H "Content-Encoding: gzip" \
    --data-binary @index.scip.gz

If results aren't helpful

  • Server not running → Start it: ckb serve --index-server --index-config config.toml
  • Upload failing → Check the token scope—it needs 'write' permission
  • Repo not appearing → Check whether the upload completed—look at the server logs
  • Need compression → Use gzip: gzip -c index.scip | curl ... --data-binary @- (full sketch below)
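
End to end, a CI upload might look like the sketch below, combining the commands from this page. The server URL and repo ID are the placeholder values from the example above; substitute your own:

  #!/usr/bin/env sh
  # Generate a SCIP index, compress it, and push it to the index server.
  scip-go --repository-root=.          # writes index.scip by default
  gzip -c index.scip > index.scip.gz

  # The token must carry 'write' scope for this repo (see API Tokens above).
  curl -X POST http://server:8080/index/repos/company/api/upload \
    -H "Authorization: Bearer $CKB_TOKEN" \
    -H "Content-Encoding: gzip" \
    --data-binary @index.scip.gz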

Follow-up prompts

  • "Create a read-only token for the dashboard"
  • "How do I set up delta uploads for faster CI?"
  • "Show me the upload API documentation"

Remote Federation

When to use

  • You want to query a remote CKB server alongside local repos
  • You're setting up organization-wide code intelligence
  • You need to search symbols across distributed teams
  • You want hybrid local+remote results

Prompts

Add a remote server:

Add a remote CKB server to my federation: https://ckb.company.com

List remote servers:

What remote servers are in my federation? Are they online?

Search across local and remote:

Search for authentication modules across local repos AND remote servers

Check remote server status:

Is the production CKB server responding? What repos does it have?

Sync remote metadata:

Sync metadata from all remote servers in my federation

What to expect

Hybrid Search: "auth"
Federation: platform

Sources:
  ┌────────────────────────────────────────────────────────┐
  │ Source    │ Status  │ Repos │ Latency │ Results       │
  ├───────────┼─────────┼───────┼─────────┼───────────────┤
  │ local     │ online  │ 3     │ 5ms     │ 2 modules     │
  │ prod      │ online  │ 12    │ 45ms    │ 4 modules     │
  │ staging   │ offline │ 8     │ -       │ (cached: 3)   │
  └────────────────────────────────────────────────────────┘

Results (9 modules):

  Local:
    api/internal/auth (owner: @security-team)
    gateway/pkg/auth (owner: @platform-team)

  Remote (prod):
    user-service/internal/auth
    payment-service/internal/auth
    notification-service/internal/auth
    admin-portal/internal/auth

  Remote (staging, cached):
    experimental/internal/auth
    beta-features/internal/auth
    test-harness/internal/auth

Note: staging server offline, showing cached results (2 hours old)

If results aren't helpful

  • Remote server offline → Check its status: ckb federation status-remote platform prod
  • Auth failing → Check the token: ensure $CKB_TOKEN is set correctly
  • Results stale → Sync metadata: ckb federation sync-remote platform (see the sketch below)
  • Too slow → Check the latency column—remote queries add network overhead
  • Missing repos → The remote server may not have all repos indexed
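
When remote results look stale or wrong, a status-then-sync pass is usually enough. A short sketch using the two federation subcommands shown above (platform and prod are the example federation and server names):

  # Is the remote actually reachable, or are we looking at cached results?
  ckb federation status-remote platform prod

  # Pull fresh metadata from every remote server in the federation.
  ckb federation sync-remote platform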

Follow-up prompts

  • "Disable the staging server temporarily"
  • "Show me what repos are on the prod server"
  • "Compare auth implementations between local and remote"
  • "Set up caching for remote queries"

Workflow Sequences

These are the recommended tool sequences for common tasks. You can ask for these explicitly or let CKB chain them automatically.

New Codebase Ramp-Up

The sequence:

getStatus → listKeyConcepts → getArchitecture → listEntrypoints → searchSymbols

As a prompt:

I'm new to this codebase. Give me:
1. System status (is everything working?)
2. Key domain concepts
3. Architecture overview (depth 2)
4. Main entrypoints (API, CLI, jobs)
5. Then I'll search for specific symbols

What you'll learn: Overall health, main concepts, module structure, where requests enter the system.


Bug Investigation

The sequence:

searchSymbols → traceUsage → getCallGraph → recentlyRelevant

As a prompt:

I'm debugging an error in FooHandler. Help me:
1. Find the FooHandler symbol
2. Trace how it's reached from entrypoints
3. Show me its call graph (what it calls, what calls it)
4. What changed recently in related code?

What you'll learn: The exact path to the error, dependencies, and recent changes that might be the cause.


Before Making Changes

The sequence:

searchSymbols → explainSymbol → findReferences → analyzeImpact → getHotspots

As a prompt:

I want to change the authenticate() function. Before I touch it:
1. Find and explain the symbol
2. Show me all references (including tests)
3. Analyze the impact—what breaks if I change this?
4. Is this a hotspot? How volatile is this area?

What you'll learn: Full context of what you're changing, who depends on it, risk level, and recent volatility.


Code Review

The sequence:

summarizeDiff → getHotspots → getOwnership → traceUsage

As a prompt:

Review PR #123 for me:
1. Summarize the diff by risk level
2. Does it touch any hotspots?
3. Who should review these changes?
4. What execution paths are affected?

What you'll learn: Risk assessment, volatility warnings, suggested reviewers, and downstream impact.


Dead Code Sanity Check

The sequence:

searchSymbols → justifySymbol → explainFile

As a prompt:

Is LegacyFoo dead code?
1. Find the symbol
2. Justify: keep, investigate, or remove?
3. Explain the file's role—is it all legacy?

What you'll learn: Whether code is safe to remove, with evidence.


Understanding Module Ownership

The sequence:

getArchitecture → getOwnership → getModuleResponsibilities

As a prompt:

I need to understand who owns what:
1. Show me the module structure
2. Who owns internal/api?
3. What is internal/api responsible for?

What you'll learn: Module boundaries, ownership, and responsibilities.


Recording Design Decisions

The sequence:

getDecisions → recordDecision → annotateModule

As a prompt:

We decided to use Redis for caching. Help me:
1. Check if there are existing decisions about caching
2. Record this new decision as an ADR
3. Link it to the affected modules

What you'll learn: Whether related decisions already exist, and you'll leave a permanent record of the new one.


Dead Code Detection with ADR Awareness (v6.5)

The sequence:

findDeadCodeCandidates → justifySymbol → [review verdict]

As a prompt:

Find potentially dead code in internal/auth, but respect architectural decisions.
Show me which symbols have "investigate" verdicts due to ADRs vs "remove-candidate".

What you'll learn: The justifySymbol tool now shows when code appears unused but is protected by an ADR. Look for:

  • "verdict": "investigate" with reasoning like "No callers found, but related to ADR-007: Extension point for plugins"
  • "relatedDecisions" array in the response

Example output:

{
  "verdict": "investigate",
  "confidence": 0.75,
  "reasoning": "No callers found, but related to ADR-007: Plugin extension points",
  "relatedDecisions": [
    {"id": "ADR-007", "title": "Plugin extension points", "status": "accepted"}
  ]
}

Impact Analysis with Architectural Context (v6.5)

The sequence:

analyzeImpact → [review relatedDecisions] → getDecisions

As a prompt:

Analyze the impact of changing internal/cache/client.go.
Show me any architectural decisions I should be aware of before making changes.

What you'll learn: The analyzeImpact response now includes relatedDecisions showing ADRs that affect the symbol's module or any impacted modules. Review these before making changes to avoid violating documented design intent.
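
By analogy with the justifySymbol output above, the relevant slice of the response might look like the example below. Only the relatedDecisions field is described on this page; the other field names and all values are illustrative:

{
  "riskLevel": "medium",
  "impactedModules": ["internal/cache", "internal/api"],
  "relatedDecisions": [
    {"id": "ADR-003", "title": "Cache invalidation strategy", "status": "accepted"}
  ]
}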


CI/CD Pipeline Integration

The sequence:

summarizePr → getOwnershipDrift → getHotspots

As a prompt:

Analyze this PR for our CI pipeline:
1. Summarize changes with risk assessment
2. Check if any touched files have ownership drift
3. Show hotspot data for changed files
4. Output suggested reviewers

What you'll learn: Automated risk assessment, reviewer suggestions, and drift warnings for CI.

GitHub Actions integration: See examples/github-actions/pr-analysis.yml for a complete workflow.
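
If you're wiring the pipeline up by hand instead of using the example workflow, the indexing step might look like this sketch. The flags come from the Analysis Tiers section; everything else is an assumption about your CI environment:

  # Hypothetical PR-analysis prep: build a fast-tier index so the tools
  # above (summarizePr, getOwnershipDrift, getHotspots) have data to read.
  ckb index --tier=fast
  ckb doctor   # sanity-check the index before running the analysis prompts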


Background Refresh

The sequence:

refreshArchitecture (async: true) → getJobStatus → [poll] → result

As a prompt:

Refresh the architecture model in the background. I'll check status as it runs.

As explicit commands:

1. "Start async architecture refresh"
2. "Check job status for job-abc123"
3. "List running jobs"
4. "Cancel job job-abc123" (if needed)

What you'll learn: How to run long operations without blocking.

Use cases:

  • Scheduled daily refresh in CI
  • Post-merge architecture update
  • Manual refresh after large refactoring

Cross-Repo Investigation (v6.2)

The sequence:

listFederations → federationSearchModules → federationSearchOwnership → federationGetHotspots

As a prompt:

I need to understand auth across all our services:
1. List available federations
2. Search for auth modules across repos
3. Who owns auth code organization-wide?
4. Are there any auth-related hotspots?

What you'll learn: Organization-wide view of a capability.


Before Changing a Contract (v6.3)

The sequence:

listContracts → analyzeContractImpact → getContractDependencies → getOwnership

As a prompt:

I need to change proto/api/v1/user.proto:
1. List related contracts
2. Who consumes this contract?
3. What's the risk level?
4. Who should approve this change?

What you'll learn: Consumer impact, risk assessment, required approvals.


Telemetry-Enhanced Dead Code (v6.4)

The sequence:

getTelemetryStatus → findDeadCodeCandidates → justifySymbol → getObservedUsage

As a prompt:

Help me clean up dead code with confidence:
1. Check telemetry coverage
2. Find dead code candidates
3. For each candidate, justify keep/remove
4. Show me the observed usage to confirm

What you'll learn: High-confidence dead code with production evidence.


Blended Impact Analysis (v6.4)

The sequence:

searchSymbols → analyzeImpact (with telemetry) → getObservedUsage

As a prompt:

I want to change ProcessPayment:
1. Find the symbol
2. Analyze impact including observed callers from telemetry
3. Show me actual runtime usage

What you'll learn: Both static and runtime callers, with comparison.


Developer Intelligence (v6.5)

The sequence:

explainOrigin → analyzeCoupling → auditRisk → exportForLLM

As a prompt:

Help me understand this legacy code before I refactor it:
1. Explain who wrote it, when, and why
2. Show me what files change together with it
3. Audit the risk factors
4. Export a summary I can share with the team

What you'll learn: Full context including origin, coupling, risk, and shareable documentation.


Before Risky Refactoring (v6.5)

The sequence:

explainOrigin → analyzeCoupling → analyzeImpact → auditRisk

As a prompt:

I'm about to refactor PaymentProcessor:
1. Who wrote it and why? Any warnings?
2. What files typically change with it?
3. What's the blast radius?
4. How risky is this change?

What you'll learn: Whether you should proceed, with full evidence.


Quick Tech Debt Triage (v6.5)

The sequence:

auditRisk (quickWins: true) → explainOrigin → getOwnership

As a prompt:

I have 2 hours for cleanup. What gives the best ROI?
1. Find quick wins (high impact, low effort)
2. For each, explain the origin
3. Tell me who owns it so I can coordinate

What you'll learn: Prioritized cleanup targets with owners.


More Recipes

"Who should I ask about this code?"

Who owns internal/api/handler.go?
Who are the main contributors to the auth module in the last 90 days?
Suggest reviewers for changes to internal/payments

"What's been causing problems?"

Show me hotspots in the codebase—what's been changing a lot?
What files in internal/api have the highest churn?
Are there any increasing hotspots I should be worried about?

"What decisions led to this?"

Are there any architectural decisions about caching?
Why does the payments module use this pattern? Any ADRs?
Show me all accepted architectural decisions

"Help me understand this error"

I'm seeing an error in ProcessPayment. Trace how it's called and show me the relevant code.
The UserService.Authenticate is failing. What does it depend on?

"What's happening across all our repos?" (v6.2)

Search for database connection modules across the platform federation
Show me hotspots across all repos—where is the org-wide tech debt?
Find all ADRs about authentication across all repositories

"Is my API change safe?" (v6.3)

What breaks if I change the order.proto contract?
List all consumers of the UserService proto—direct and transitive
Show me contract statistics for our federation—how many public vs internal APIs?

"Is this code actually used?" (v6.4)

Check telemetry status—do we have enough coverage for dead code detection?
Show me runtime usage for LegacyHandler over the last 90 days
Find all functions with zero runtime calls but static references—potential dead code
Compare static callers vs observed callers for ProcessPayment—who's really calling it?

"Why does this code exist?" (v6.5)

Explain the origin of UserService.Authenticate—who wrote it and why?
What warnings should I know about before modifying ProcessPayment?
How has the auth module evolved over time?

"What files change together?" (v6.5)

What files typically change together with internal/query/engine.go?
Find files with strong coupling (>0.7) to the payments module
I'm refactoring PaymentService—what else usually needs to change?

"Where's the tech debt?" (v6.5)

Audit this codebase and show me high-risk areas
Find quick wins—high impact, low effort refactoring targets
Show me code with risk score above 60, sorted by complexity

"Give me a codebase summary" (v6.5)

Export the codebase structure in LLM-friendly format
Export only the high-complexity functions for review
Generate a token-efficient summary I can paste into Claude

"Is my documentation current?" (v7.3)

Check all our docs for stale symbol references
What important symbols aren't documented?
Which docs mention UserService? I'm about to rename it.
What's our doc coverage? Is it above 80%?
Show me symbols referenced in README.md—are any broken?

"How complex is this code?" (v6.2.2)

What's the complexity of internal/query/engine.go?
Find functions with cyclomatic complexity above 20
Show me the most complex files in this module

"Is my index fresh?" (v7.2)

Is my index up to date? How many commits behind?
Should I do incremental or full reindex?
Force a full reindex—I need accurate caller information

"What tier am I using?" (v7.2)

What analysis tier am I using? What features are available?
Run this with fast tier—I just need quick symbol search
What's missing to use full tier?

"How do I manage the daemon?" (v6.2.1)

Is the daemon running? Show me scheduled tasks.
What webhooks are configured? Are any failing?
Show me daemon logs for the last hour

"How do I use the index server?" (v7.3)

What repos are being served by the index server?
Create a new API token for CI uploads
How do I upload an index with compression?

"How do I query remote servers?" (v7.3)

Add a remote CKB server to my federation
Search for auth modules across local AND remote servers
Is the production CKB server online? What repos does it have?
Sync metadata from all remote servers

Tips for Better Results

Be specific about scope

❌ "Find handlers"
✅ "Find HTTP handlers in internal/api"

Ask for what you'll do next

❌ "Show me the auth code"
✅ "I need to add a new auth method. Show me the Authenticator interface and its implementations."

Chain prompts for complex tasks

1. "What modules exist in this codebase?"
2. "Explain the internal/auth module"
3. "Show me the key symbols in internal/auth"
4. "How is UserService.Authenticate called?"
5. "What's the impact of adding a parameter to Authenticate?"

Use CKB tools explicitly when needed

"Use getArchitecture to show me the module structure"
"Use traceUsage to find how this is reached"
"Use analyzeImpact to assess this change"

Always require verification before deletes

❌ "Delete all unused code"
✅ "Show me unused code with evidence, and include verification steps before any delete"

Any finding that recommends deletion should include:

  1. Evidence (no references found, no callers, etc.)
  2. A verification command you can run
  3. Confidence level

When CKB Can't Help

CKB is for navigation and comprehension, not code changes. It won't:

  • Write or modify code for you
  • Generate tests
  • Fix bugs
  • Suggest refactorings
  • Enforce style rules

But it will tell you where to make changes, what might break, and who to ask—so you can make informed decisions.


Troubleshooting

"No results found"

  1. Check if CKB is initialized: ckb status
  2. Regenerate index if stale: scip-go --repository-root=.
  3. Try broader search terms

"Results seem incomplete"

  1. Run diagnostics: ckb doctor
  2. Check index freshness in status
  3. The index might not cover all files—check .ckb/config.json

"I don't have CKB tools available"

  1. Make sure the CKB MCP server is configured
  2. Check registration: claude mcp list should show "ckb"
  3. See MCP Integration for setup
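
Those three checks combine into one quick health check; every command here appears elsewhere on this page:

  #!/usr/bin/env sh
  # One-shot health check: initialization, diagnostics, MCP wiring.
  ckb status                   # is CKB initialized? is the index fresh?
  ckb doctor                   # deeper diagnostics
  claude mcp list | grep ckb   # is the MCP server registered?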

Next Steps