
Why Your AI Keeps Grepping Instead of Understanding

By CKB Team

You ask your AI assistant "What calls the handlePayment function?" and it responds with a list of files containing the string "handlePayment". That's not what you asked. You asked about callers, not mentions.

This happens constantly. Here's why, and how to fix it.

The Problem: AI Without Context

Large language models are trained on code, so they understand programming concepts. But when you connect an AI to your codebase, it typically gets access to:

  • File contents (via read tools)
  • Text search (via grep/ripgrep)
  • File listing (via ls/find)

That's it. The AI can read code and search for strings. It cannot:

  • Build an AST (abstract syntax tree)
  • Resolve imports and references
  • Track call graphs
  • Understand type hierarchies
  • Know what code is actually executed

So when you ask "what calls handlePayment?", the AI does the only thing it can: grep for the string "handlePayment".

What Grep Returns vs. What You Need

You ask: "What calls handlePayment?"

Grep finds:

payment.go:45:     func handlePayment(ctx context.Context) error {
payment_test.go:23:    t.Run("handlePayment success", func(t *testing.T) {
README.md:89:    The `handlePayment` function processes...
old_code.bak:12:    // handlePayment was moved to payment.go
docs/api.md:34:    ## handlePayment
checkout.go:78:    err := handlePayment(ctx)  // ← actual caller
refund.go:91:    // Similar to handlePayment but for refunds

What you actually wanted:

checkout.go:78 - CheckoutController.Process() calls handlePayment()

Out of 7 matches, only 1 is a real caller. The rest are definitions, tests, docs, comments, and backup files.

The Grep Cascade

When grep returns noise, the AI tries to filter it:

  1. "Find handlePayment" → 47 matches
  2. "Okay, filter out test files" → 31 matches
  3. "Filter out markdown" → 24 matches
  4. "Look for lines with handlePayment(" → 12 matches
  5. "Read each file to understand context" → 12 file reads
  6. "Determine which are actual calls" → Finally, 3 callers

This cascade burns tokens, takes time, and still might miss things (what about `h := handlePayment; h(ctx)`?).
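The first few steps of that cascade are easy to reproduce. Here's a minimal sketch using hypothetical scratch files in `/tmp/grepdemo` (not the real repo); note that even the "call-ish" filter in the last step can't separate the definition from the one real caller:

```shell
# Create three files that all mention handlePayment.
mkdir -p /tmp/grepdemo
printf 'func handlePayment(ctx context.Context) error {\n' > /tmp/grepdemo/payment.go
printf 'err := handlePayment(ctx)\n' > /tmp/grepdemo/checkout.go
printf 'The handlePayment function processes payments.\n' > /tmp/grepdemo/README.md

grep -rn 'handlePayment' /tmp/grepdemo | wc -l                    # step 1: every mention → 3
grep -rn --include='*.go' 'handlePayment' /tmp/grepdemo | wc -l   # step 3: drop markdown → 2
grep -rn --include='*.go' 'handlePayment(' /tmp/grepdemo | wc -l  # step 4: lines that look like
                                                                  # calls → still 2, because the
                                                                  # definition line matches too
```

Text filters can narrow the noise, but they never reach "this line is a call expression" — that requires parsing.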

Why This Matters

1. Wrong Answers

Grep-based analysis misses:

  • Aliased functions (`h := handlePayment`)
  • Interface implementations (`var p PaymentHandler = &concreteHandler{}`)
  • Method calls on embedded types
  • Dynamic dispatch and reflection
  • Calls from generated code
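The first two cases are easy to see in a few lines of Go. This is an illustrative sketch (the names mirror the examples above, not any real codebase): both calls in `main` execute `handlePayment`, yet neither line contains the string `handlePayment(`:

```go
package main

import "fmt"

func handlePayment(amount int) error {
	fmt.Println("charging", amount)
	return nil
}

type PaymentHandler interface {
	Handle(amount int) error
}

type concreteHandler struct{}

func (c *concreteHandler) Handle(amount int) error {
	return handlePayment(amount)
}

func main() {
	// Aliased call: a search for "handlePayment(" never sees this call site.
	h := handlePayment
	h(42)

	// Interface dispatch: this caller only mentions Handle, not handlePayment.
	var p PaymentHandler = &concreteHandler{}
	p.Handle(7)
}
```

A call graph built from the AST records both edges; a string search records neither.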

2. Wasted Tokens

That 6-step cascade above might use 10,000+ tokens. A semantic query uses ~500.

3. Slow Responses

Multiple round-trips (search, read, search again, read more) add latency. You wait while the AI figures out what it should have known instantly.

4. Developer Frustration

When your AI gives wrong answers, you lose trust. You start manually verifying everything, which defeats the purpose of AI assistance.

The Solution: Semantic Code Intelligence

CKB pre-indexes your codebase with semantic understanding. It knows:

  • Definitions - Where symbols are defined
  • References - Where symbols are used (not just mentioned)
  • Call graphs - What calls what
  • Type hierarchies - Interface implementations, inheritance
  • Import resolution - What each file actually imports

When an AI with CKB asks "what calls handlePayment?", it gets:

{
  "symbol": "handlePayment",
  "kind": "function",
  "file": "internal/payment/handler.go",
  "line": 45,
  "callers": [
    {
      "function": "CheckoutController.Process",
      "file": "controllers/checkout.go",
      "line": 78
    },
    {
      "function": "RefundController.Execute",
      "file": "controllers/refund.go",
      "line": 91
    }
  ]
}

One call. Correct answer. No grep cascade.
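Because the answer is structured data rather than text matches, client code can consume it directly. A sketch in Go, assuming only the response shape shown above (the field names here mirror that sample JSON, nothing more):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Caller mirrors one entry in the "callers" array of the sample response.
type Caller struct {
	Function string `json:"function"`
	File     string `json:"file"`
	Line     int    `json:"line"`
}

// CallersResponse mirrors the top-level sample response.
type CallersResponse struct {
	Symbol  string   `json:"symbol"`
	Kind    string   `json:"kind"`
	File    string   `json:"file"`
	Line    int      `json:"line"`
	Callers []Caller `json:"callers"`
}

const sample = `{
  "symbol": "handlePayment",
  "kind": "function",
  "file": "internal/payment/handler.go",
  "line": 45,
  "callers": [
    {"function": "CheckoutController.Process", "file": "controllers/checkout.go", "line": 78},
    {"function": "RefundController.Execute", "file": "controllers/refund.go", "line": 91}
  ]
}`

func parseCallers(raw string) (CallersResponse, error) {
	var resp CallersResponse
	err := json.Unmarshal([]byte(raw), &resp)
	return resp, err
}

func main() {
	resp, err := parseCallers(sample)
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Callers {
		fmt.Printf("%s:%d  %s\n", c.File, c.Line, c.Function)
	}
}
```

No filtering, no ranking heuristics: every entry in `callers` is already a verified call site.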

Real Comparison

Task: "What's the impact of changing the User struct?"

Without CKB (Grep Approach)

AI thinking:
1. Search for "User struct" → definition found
2. Search for "User" → 234 matches
3. Filter to .go files → 189 matches
4. Try to identify struct usages vs string "User" → complex
5. Read 20 files to understand context
6. Build mental model of dependencies
7. Search for test files → 45 matches
8. Correlate tests with usages

Result: "The User struct is used in many places including
user.go, auth.go, profile.go... [incomplete list].
I found about 45 test files that might be affected."

Tokens: ~15,000
Time: ~30 seconds
Accuracy: ~60%

With CKB (Semantic Approach)

AI: [calls prepareChange for User struct]

Result: "Changing the User struct affects:
- 23 functions that access User fields
- 8 API endpoints that serialize User
- 3 database queries that map to User
- 45 tests cover User-related code

Risk score: 67 (Medium-High)
Primary owners: @alice, @bob

Suggested approach: Add new fields as optional first,
migrate consumers, then make required."

Tokens: ~800
Time: ~2 seconds
Accuracy: ~99%

How CKB Works

Indexing Phase

When you run ckb init, CKB:

  1. Parses your code with language-specific parsers (Tree-sitter)
  2. Resolves imports and builds a dependency graph
  3. Identifies symbols: functions, types, variables, constants
  4. Maps references: what symbol is used where
  5. Builds call graphs: what function calls what
  6. Analyzes git history: ownership, hotspots, churn

This creates a semantic index—a queryable database of your code's structure.
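To make the distinction concrete, here is a toy version of such an index in Go. This is a sketch of the idea, not CKB's actual internals: the key point is that each reference carries a kind, so a "callers" query filters structurally instead of textually:

```go
package main

import "fmt"

// RefKind distinguishes how a symbol appears at a location.
type RefKind int

const (
	Definition RefKind = iota
	Call
	Mention // comments, docs, string matches
)

type Ref struct {
	File string
	Line int
	Kind RefKind
}

// Index maps a symbol name to every known occurrence.
type Index map[string][]Ref

// Callers returns only true call sites, never definitions or doc mentions.
func (ix Index) Callers(symbol string) []Ref {
	var out []Ref
	for _, r := range ix[symbol] {
		if r.Kind == Call {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	ix := Index{
		"handlePayment": {
			{"payment.go", 45, Definition},
			{"README.md", 89, Mention},
			{"checkout.go", 78, Call},
		},
	}
	for _, c := range ix.Callers("handlePayment") {
		fmt.Printf("%s:%d\n", c.File, c.Line)
	}
}
```

Grep can only ever produce the full three-entry list; the index answers with the one entry that matters.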

Query Phase

When your AI calls a CKB tool:

  1. Query hits the pre-built index
  2. Results return in milliseconds
  3. Data includes relationships, not just text matches

Keeping Fresh

CKB updates incrementally:

  • File watchers detect changes
  • Only affected symbols are re-indexed
  • Git hooks can trigger updates
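The incremental part is what keeps re-indexing cheap. A minimal sketch of the assumed design (not CKB's real implementation): the index remembers which symbols each file defines, so a change invalidates only that file's symbols:

```go
package main

import "fmt"

// FileIndex maps each file to the symbols defined in it.
type FileIndex map[string][]string

// Stale returns the symbols that must be re-indexed after the
// given files change; everything else stays untouched.
func (fi FileIndex) Stale(changed []string) []string {
	var stale []string
	for _, f := range changed {
		stale = append(stale, fi[f]...)
	}
	return stale
}

func main() {
	fi := FileIndex{
		"payment.go":  {"handlePayment", "validateCard"},
		"checkout.go": {"CheckoutController.Process"},
	}
	// Only payment.go changed, so only its two symbols are re-parsed.
	fmt.Println(fi.Stale([]string{"payment.go"}))
}
```

A save to one file triggers a re-parse measured in symbols, not a full re-index of the repository.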

The AI Feedback Loop

With semantic tools, AI assistants learn to ask better questions:

Without CKB: AI learns grep patterns and filtering heuristics. It gets clever about string matching but remains fundamentally limited.

With CKB: AI learns to use semantic queries. Asks "what calls X" instead of "search for X". Uses impact analysis instead of guessing at dependencies.

The better tools create better AI behavior.

Getting Started

npm install -g @tastehub/ckb
cd /your/project
ckb init      # Build semantic index
ckb setup     # Connect to your AI assistant

Your AI now has semantic understanding instead of just text search.

The Bottom Line

AI coding assistants are limited by the tools they're given. Grep is the wrong tool for understanding code structure. Semantic code intelligence gives AI the context it needs to actually help.

Stop making your AI grep. Give it understanding.

