
# CodeRAG


Ask questions about your codebase in plain English. CodeRAG indexes your code and uses AI to provide accurate, cited answers.

## Quick Start

### Prerequisites

Docker (with Compose) and Ollama installed locally.

```bash
# 1. Start Ollama (if not running)
ollama serve

# 2. Pull the model
ollama pull llama3

# 3. Start CodeRAG
docker-compose up

# 4. Open http://localhost:8000
```
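Once the stack is up, you can confirm the API is reachable via the `/api/v1/health` endpoint listed under API Reference below. A minimal Python check (the response body shape is an assumption; only the HTTP status code is relied on):

```python
# Liveness check against CodeRAG's health endpoint.
# Uses the default port from the Quick Start above.
import requests

resp = requests.get("http://localhost:8000/api/v1/health", timeout=5)
resp.raise_for_status()  # raises if the service is down or unhealthy
print("CodeRAG is up:", resp.text)
```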

## Usage

### Index a Repository

**Via UI:**

  1. Go to the Index tab
  2. Enter a GitHub URL or local path
  3. Click Start Indexing

**Via API:**

```bash
curl -X POST http://localhost:8000/api/v1/index \
  -H "Content-Type: application/json" \
  -d '{"repo_url": "https://github.com/user/repo"}'
```

### Query Your Code

**Via UI:**

  1. Go to the Query tab
  2. Ask a question like "Where is authentication implemented?"
  3. Get AI-generated answers with source citations

**Via API:**

```bash
curl -X POST http://localhost:8000/api/v1/query \
  -H "Content-Type: application/json" \
  -d '{"query": "How does the login work?"}'
```

## How It Works

### Query Flow

  1. Index → Code is parsed, chunked by function/class, and embedded
  2. Search → Your question is matched against code using hybrid search (see the sketch below)
  3. Answer → AI generates a response with exact file:line citations
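Hybrid search typically fuses a keyword ranking (e.g. BM25) with a vector-similarity ranking. Below is an illustrative sketch using reciprocal rank fusion; it is not CodeRAG's actual implementation, and the chunk IDs are made up:

```python
# Illustrative hybrid-search merge via reciprocal rank fusion (RRF).
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of chunk IDs into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for "Where is authentication implemented?"
bm25_hits   = ["auth/login.py:42", "auth/session.py:10", "api/routes.py:88"]
vector_hits = ["auth/session.py:10", "auth/login.py:42", "models/user.py:5"]

print(reciprocal_rank_fusion([bm25_hits, vector_hits]))
```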

### Indexing Pipeline

Source files are parsed, split into chunks at function/class boundaries, embedded, and stored for retrieval.
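A minimal sketch of what function/class-level chunking can look like for Python sources, keeping line spans for the `file:line` citations mentioned above. This is illustrative only; CodeRAG's parser may work differently and cover more languages:

```python
# Split a Python file into one chunk per top-level function or class.
import ast

def chunk_python_source(path: str):
    with open(path, encoding="utf-8") as f:
        source = f.read()
    lines = source.splitlines()
    chunks = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append({
                "file": path,
                "start": node.lineno,    # for file:line citations
                "end": node.end_lineno,
                "text": "\n".join(lines[node.lineno - 1 : node.end_lineno]),
            })
    return chunks
```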

## API Reference

| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/query` | POST | Query the codebase |
| `/api/v1/query/stream` | POST | Query with streaming response |
| `/api/v1/index` | POST | Index a repository |
| `/api/v1/index/status/{job_id}` | GET | Check indexing progress |
| `/api/v1/repos` | GET | List indexed repositories |
| `/api/v1/health` | GET | Health check |
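For `/api/v1/query/stream`, the client has to consume the response incrementally. The sketch below assumes the endpoint emits plain-text chunks; the actual wire format (SSE, NDJSON, etc.) may differ:

```python
# Consume the streaming query endpoint chunk by chunk.
import requests

with requests.post("http://localhost:8000/api/v1/query/stream",
                   json={"query": "How does the login work?"},
                   stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```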

## Configuration

Set via environment variables:

```bash
# LLM
CODERAG_LLM__PROVIDER=ollama
CODERAG_LLM__MODEL_NAME=llama3
CODERAG_LLM__BASE_URL=http://host.docker.internal:11434

# Embeddings
CODERAG_EMBEDDING__MODEL_NAME=all-MiniLM-L6-v2
```
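The `CODERAG_` prefix and double-underscore delimiter follow the nested-settings convention of pydantic-settings. A sketch of how such variables would map onto a config object, assuming (not confirmed) that CodeRAG uses this scheme; the class and field names are illustrative:

```python
# How CODERAG_LLM__MODEL_NAME-style variables map to nested settings
# under pydantic-settings. Class and field names here are assumptions.
from pydantic import BaseModel, ConfigDict
from pydantic_settings import BaseSettings, SettingsConfigDict

class LLMSettings(BaseModel):
    model_config = ConfigDict(protected_namespaces=())  # allow "model_name"
    provider: str = "ollama"
    model_name: str = "llama3"
    base_url: str = "http://host.docker.internal:11434"

class EmbeddingSettings(BaseModel):
    model_config = ConfigDict(protected_namespaces=())
    model_name: str = "all-MiniLM-L6-v2"

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="CODERAG_",
                                      env_nested_delimiter="__")
    llm: LLMSettings = LLMSettings()
    embedding: EmbeddingSettings = EmbeddingSettings()

# e.g. CODERAG_LLM__MODEL_NAME=llama3 -> Settings().llm.model_name == "llama3"
```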

## Troubleshooting

| Issue | Solution |
|---|---|
| "LLM not available" | Ensure Ollama is running: `ollama serve` |
| Slow first query | The model loads on first use; wait ~10 seconds |
| No results | Check that the repo has been indexed in the **Index** tab |

## License

MIT
