diff --git a/.git-branches-guide.md b/.git-branches-guide.md deleted file mode 100644 index f262a258..00000000 --- a/.git-branches-guide.md +++ /dev/null @@ -1,62 +0,0 @@ -# QuantCoder Branch Guide - -## Quick Switch Commands - -```bash -# Default branch - Latest development (2.0) ⭐ -git checkout gamma - -# Switch to original stable (1.0) -git checkout main - -# Switch to improved testing (1.1) -git checkout beta -``` - -## Branch Mapping - -| Local Branch | Version | Remote Branch | -|--------------|---------|---------------| -| `main` | 1.0.0 | `origin/main` | -| `beta` | 1.1.0-beta.1 | `origin/refactor/modernize-2025` | -| `gamma` | 2.0.0-alpha.1 | `origin/claude/refactor-quantcoder-cli-JwrsM` | - -## Common Operations - -### Check current branch -```bash -git branch -``` - -### Pull latest changes -```bash -git pull -``` - -### Push your changes -```bash -git push -# Git knows where to push based on tracking -``` - -### See all branches -```bash -git branch -vv -# Shows local branches with tracking info -``` - -## Package Info - -- **main**: Uses `quantcli` package -- **beta**: Uses `quantcli` package (improved) -- **gamma**: Uses `quantcoder` package (new) - -## Why Different Remote Names? - -The remote uses technical naming for system requirements. -Your local names are clean and user-friendly. -**This is normal and a Git best practice!** - ---- - -**Need help?** See `docs/BRANCH_VERSION_MAP.md` diff --git a/CHANGELOG.md b/CHANGELOG.md new file mode 100644 index 00000000..181b4984 --- /dev/null +++ b/CHANGELOG.md @@ -0,0 +1,216 @@ +# Changelog + +All notable changes to QuantCoder CLI will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +--- + +## [Unreleased] - v2.0 (develop branch) + +### Added +- **Multi-Agent Architecture**: Specialized agents for algorithm generation + - `CoordinatorAgent` - Orchestrates multi-agent workflow + - `UniverseAgent` - Generates stock selection logic (Universe.py) + - `AlphaAgent` - Generates trading signals (Alpha.py) + - `RiskAgent` - Generates risk management (Risk.py) + - `StrategyAgent` - Integrates components (Main.py) +- **Autonomous Pipeline**: Self-improving strategy generation + - `AutonomousPipeline` - Continuous generation loop + - `LearningDatabase` - SQLite storage for patterns + - `ErrorLearner` - Analyzes and learns from errors + - `PerformanceLearner` - Tracks successful patterns + - `PromptRefiner` - Dynamically improves prompts +- **Library Builder**: Batch strategy generation system + - 13+ strategy categories (momentum, mean reversion, factor, etc.) 
+ - Checkpointing for resumable builds + - Coverage tracking and reporting +- **Multi-LLM Support**: Provider abstraction layer + - OpenAI (GPT-4, GPT-4o) + - Anthropic (Claude 3, 3.5) + - Mistral (Mistral Large, Codestral) + - DeepSeek +- **Tool System**: Pluggable tool architecture (Mistral Vibe pattern) + - `SearchArticlesTool`, `DownloadArticleTool` + - `SummarizeArticleTool`, `GenerateCodeTool` + - `ValidateCodeTool`, `ReadFileTool`, `WriteFileTool` +- **Rich Terminal UI**: Modern CLI experience + - Interactive REPL with command history + - Syntax highlighting for generated code + - Progress indicators and panels + - Markdown rendering +- **Parallel Execution**: AsyncIO + ThreadPool for concurrent agent execution +- **MCP Integration**: QuantConnect Model Context Protocol for validation +- **Configuration System**: TOML-based configuration with dataclasses + +### Changed +- Package renamed from `quantcli` to `quantcoder` +- Complete architectural rewrite +- CLI framework enhanced with multiple execution modes +- Removed Tkinter GUI in favor of Rich terminal interface + +### Removed +- Tkinter GUI (replaced by Rich terminal) +- Legacy OpenAI SDK v0.28 support + +--- + +## [1.1.0] - Beta Release + +### Added +- **LLM Client Abstraction** (`llm_client.py`) + - `LLMClient` class with modern OpenAI SDK v1.x+ support + - `LLMResponse` dataclass for standardized responses + - Token usage tracking + - `simple_prompt()` convenience method +- **QuantConnect Static Validator** (`qc_validator.py`) + - `QuantConnectValidator` class for code analysis + - Division by zero detection + - Missing `.IsReady` indicator checks + - `None` value risk detection in comparisons + - `max()/min()` on potentially None values + - Portfolio access pattern validation + - Severity levels (error, warning, info) + - Formatted report generation +- **Unit Tests** (`tests/test_llm_client.py`) + - LLMClient initialization tests + - Chat completion tests + - Error handling tests +- **Documentation** + - `TESTING_GUIDE.md` - Comprehensive testing documentation + - `MAIN_VS_BETA.md` - Branch comparison guide + - `.env.example` - Environment variable template + +### Changed +- `processor.py`: Refactored to use `LLMClient` instead of direct OpenAI calls +- `processor.py`: Enhanced code generation prompts with defensive programming requirements + - Added runtime safety check requirements + - Added `IsReady` check reminders + - Added None guard requirements + - Added zero-division protection patterns +- `cli.py`: Added verbose flag handling improvements +- `setup.py`: Updated dependencies for OpenAI v1.x+ +- `requirements.txt`: Added explicit dependency versions + +### Fixed +- Lazy loading for Tkinter imports (better startup performance) +- Improved error handling in PDF download + +### Dependencies +- Upgraded OpenAI SDK from v0.28 to v1.x+ +- Added pytest for testing + +--- + +## [1.0.0] - Legacy Release + +### Features +- **Article Search**: CrossRef API integration + - Search by query keywords + - Configurable result count + - HTML export of results +- **PDF Download**: Multiple download methods + - Direct URL download + - Unpaywall API fallback for open access + - Manual browser fallback +- **NLP Processing**: spaCy-based text analysis + - PDF text extraction (pdfplumber) + - Text preprocessing (URL removal, normalization) + - Heading detection (title-cased sentences) + - Section splitting + - Keyword analysis for trading signals and risk management +- **Code Generation**: OpenAI GPT-4 integration + - Strategy summarization 
+ - QuantConnect algorithm generation + - AST validation + - Iterative refinement (up to 6 attempts) +- **Tkinter GUI**: Desktop interface + - Search panel with results table + - Summary display with copy/save + - Code display with syntax highlighting (Monokai theme) +- **CLI Commands** + - `search ` - Search articles + - `list` - Show cached results + - `download ` - Download PDF + - `summarize ` - Generate summary + - `generate-code ` - Generate algorithm + - `open-article ` - Open in browser + - `interactive` - Launch GUI + +### Dependencies +- Python 3.8+ +- OpenAI SDK v0.28 (legacy) +- pdfplumber 0.10+ +- spaCy 3.x with en_core_web_sm +- Click 8.x +- python-dotenv +- Pygments +- InquirerPy + +--- + +## Branch History + +``` +main ────●──────────────────────────────●──────────────▶ + │ │ + v1.0 v1.1 + (legacy) (LLM client + + validator) + ▲ + │ +beta ───────────────────────────────────┘ + +develop ──────────────────────────────────────────────▶ + ▲ (v2.0) + │ +gamma ─┘ +``` + +--- + +## Migration Notes + +### v1.0 → v1.1 + +1. Update OpenAI SDK: + ```bash + pip uninstall openai + pip install openai>=1.0.0 + ``` + +2. Ensure `OPENAI_API_KEY` environment variable is set + +3. No CLI command changes required + +### v1.1 → v2.0 (future) + +1. Package renamed: + ```bash + pip uninstall quantcli + pip install quantcoder + ``` + +2. CLI command prefix changes: + ```bash + # Old + quantcli search "query" + + # New + quantcoder search "query" + ``` + +3. New commands available: + ```bash + quantcoder auto start --query "..." + quantcoder library build + ``` + +--- + +## Links + +- [Version Guide](VERSIONS.md) +- [Architecture Documentation](ARCHITECTURE.md) +- [GitHub Repository](https://github.com/SL-Mar/quantcoder-cli) diff --git a/CLEANUP_SUMMARY.md b/CLEANUP_SUMMARY.md deleted file mode 100644 index f39b3545..00000000 --- a/CLEANUP_SUMMARY.md +++ /dev/null @@ -1,180 +0,0 @@ -# ✅ Branch Cleanup Complete - -**Date**: 2025-12-15 -**Status**: All 3 branches are now clean and consistent - ---- - -## 📦 Clean Branch Summary - -| Branch | Package | Version | Packaging | Status | -|--------|---------|---------|-----------|--------| -| **main** | `quantcli` | 0.3 | setup.py | ✅ Clean | -| **beta** | `quantcli` | 1.0.0 | setup.py | ✅ Clean | -| **gamma** | `quantcoder` | 2.0.0-alpha.1 | pyproject.toml | ✅ Clean | - ---- - -## 🧹 What Was Cleaned - -### MAIN Branch -- ✅ Already clean -- ✅ Only `quantcli/` package -- ✅ Version 0.3 confirmed -- ✅ Legacy OpenAI SDK 0.28 - -### BETA Branch -- ✅ Already clean -- ✅ Only `quantcli/` package -- ✅ Version 1.0.0 confirmed -- ✅ Modern OpenAI SDK 1.x - -### GAMMA Branch -- ✅ **Removed** `quantcli/` directory (1,426 lines of legacy code) -- ✅ **Removed** old `setup.py` (conflicting with pyproject.toml) -- ✅ **Fixed** version: 2.0.0 → 2.0.0-alpha.1 (consistent with __init__.py) -- ✅ **Only** `quantcoder/` package remains (~10,000+ lines) -- ✅ Modern packaging with `pyproject.toml` - ---- - -## 📊 Current Structure - -### MAIN (v0.3) - Legacy Stable -``` -quantcoder-cli/ -├── quantcli/ ← Only this package -│ ├── cli.py -│ ├── gui.py -│ ├── processor.py -│ ├── search.py -│ └── utils.py -├── setup.py ← Legacy packaging -└── README.md -``` - -### BETA (v1.0.0) - Modernized -``` -quantcoder-cli/ -├── quantcli/ ← Only this package -│ ├── cli.py -│ ├── gui.py -│ ├── llm_client.py ← NEW -│ ├── processor.py -│ ├── qc_validator.py ← NEW -│ ├── search.py -│ └── utils.py -├── setup.py ← Legacy packaging -└── README.md -``` - -### GAMMA (v2.0.0-alpha.1) - AI Rewrite -``` 
-quantcoder-cli/ -├── quantcoder/ ← Only this package -│ ├── __init__.py (v2.0.0-alpha.1) -│ ├── cli.py -│ ├── chat.py -│ ├── config.py -│ ├── agents/ ← Multi-agent system -│ ├── autonomous/ ← Self-learning 🤖 -│ ├── library/ ← Strategy builder 📚 -│ ├── codegen/ -│ ├── core/ -│ ├── execution/ -│ ├── llm/ -│ ├── mcp/ -│ └── tools/ -├── pyproject.toml ← Modern packaging -├── docs/ -│ ├── AUTONOMOUS_MODE.md -│ ├── LIBRARY_BUILDER.md -│ ├── VERSION_COMPARISON.md -│ └── BRANCH_VERSION_MAP.md -└── README.md -``` - ---- - -## 🎯 Version Consistency Check - -### MAIN -- ✅ `setup.py`: "0.3" -- ✅ No version in __init__.py (legacy style) -- ✅ **Consistent** - -### BETA -- ✅ `setup.py`: "1.0.0" -- ✅ No version in __init__.py -- ✅ **Consistent** - -### GAMMA -- ✅ `pyproject.toml`: "2.0.0-alpha.1" -- ✅ `__init__.py`: "2.0.0-alpha.1" -- ✅ **Consistent** ← Fixed! - ---- - -## 📝 Commands Reference - -### Install MAIN (v0.3) -```bash -git checkout main -pip install -e . -quantcli --help -``` - -### Install BETA (v1.0.0) -```bash -git checkout beta -pip install -e . -quantcli --help -``` - -### Install GAMMA (v2.0.0-alpha.1) -```bash -git checkout gamma -pip install -e . -quantcoder --help # or: qc --help -``` - ---- - -## 🚀 Next Steps - -### To Merge Gamma Cleanup into Remote -The cleanup is on branch: `claude/cleanup-gamma-JwrsM` - -**From Mobile**: -1. Visit: https://github.com/SL-Mar/quantcoder-cli/compare/gamma...claude/cleanup-gamma-JwrsM -2. Create PR -3. Merge into gamma - -**From Computer**: -```bash -git checkout gamma -git merge origin/claude/cleanup-gamma-JwrsM -git push origin gamma -``` - -### Other Pending Merges -1. **Enhanced Help** for main: `claude/re-add-enhanced-help-JwrsM` -2. **Docs Update** for gamma: `claude/gamma-docs-update-JwrsM` -3. **Branch Comparison** doc: `claude/branch-comparison-JwrsM` - ---- - -## ✅ Summary - -All branches are now **clean and consistent**: - -- 🟢 **No duplicate packages** (each branch has only one package) -- 🟢 **No conflicting config files** (gamma uses only pyproject.toml) -- 🟢 **Version numbers consistent** across all files -- 🟢 **Clear separation** between legacy (quantcli) and new (quantcoder) - -**You can now work confidently knowing each branch has a single, clear purpose!** - ---- - -Generated: 2025-12-15 diff --git a/COMPLETE_BRANCH_COMPARISON.md b/COMPLETE_BRANCH_COMPARISON.md deleted file mode 100644 index 3435695b..00000000 --- a/COMPLETE_BRANCH_COMPARISON.md +++ /dev/null @@ -1,351 +0,0 @@ -# Complete Branch & Version Comparison - -**Date**: 2025-12-15 -**Repository**: SL-Mar/quantcoder-cli - -## 🎯 Quick Decision Guide - -| What you need | Use this branch | -|---------------|----------------| -| **Stable, tested, legacy** | `main` (v0.3) | -| **Modernized with OpenAI SDK 1.x** | `beta` (v1.0.0) | -| **AI assistant, autonomous mode** | `gamma` (v2.0.0) | - ---- - -## 📊 Branch Comparison Table - -| Feature | main | beta | gamma | -|---------|------|------|-------| -| **Package Name** | `quantcli` | `quantcli` | `quantcoder` | -| **Version** | 0.3 | 1.0.0 | 2.0.0-alpha.1 | -| **Last Update** | Dec 2024 | Dec 2025 | Dec 2025 | -| **Python Required** | ≥3.8 | ≥3.9 | ≥3.10 | -| **OpenAI SDK** | 0.28 (legacy) | 1.x (modern) | 1.x (modern) | -| **Packaging** | setup.py | setup.py | pyproject.toml | -| **Command** | `quantcli` | `quantcli` | `quantcoder` or `qc` | -| **Total Code** | ~1,426 lines | ~1,874 lines | ~10,000+ lines | - ---- - -## 🔍 Detailed Comparison - -### 📦 MAIN Branch (v0.3) - -**Status**: 🟢 Stable Legacy -**Package**: `quantcli` -**Last 
Commit**: `f4b4674 - Update project title in README.md` - -#### Structure -``` -quantcli/ -├── __init__.py (empty) -├── cli.py (217 lines) - Basic Click CLI -├── gui.py (344 lines) - Tkinter GUI -├── processor.py (641 lines) - PDF/NLP processing -├── search.py (109 lines) - CrossRef search -└── utils.py (115 lines) - Utilities -``` - -#### Features -- ✅ Basic CLI commands (search, download, summarize, generate-code) -- ✅ CrossRef article search -- ✅ PDF processing with pdfplumber -- ✅ NLP with spacy -- ✅ Tkinter GUI (interactive mode) -- ✅ OpenAI GPT integration (SDK 0.28) -- ❌ No enhanced help (was reverted) -- ❌ Old OpenAI SDK -- ❌ No modern features - -#### Dependencies -- OpenAI SDK 0.28 (old) -- Click, requests, pdfplumber, spacy -- InquirerPy, pygments - -#### Use Case -- **Legacy projects** requiring old OpenAI SDK -- **Proven stable** version -- **Simple workflows** - ---- - -### 📦 BETA Branch (v1.0.0) - -**Status**: 🧪 Testing (Modernized) -**Package**: `quantcli` -**Last Commit**: `9a5f173 - Merge pull request #7` - -#### Structure -``` -quantcli/ -├── __init__.py (empty) -├── cli.py (235 lines) - Click CLI -├── gui.py (349 lines) - Tkinter GUI (lazy imports) -├── llm_client.py (138 lines) - ✨ NEW: LLM client abstraction -├── processor.py (691 lines) - Enhanced processing -├── qc_validator.py (202 lines) - ✨ NEW: QuantConnect validator -├── search.py (109 lines) - CrossRef search -└── utils.py (150 lines) - Enhanced utilities -``` - -#### Features -- ✅ All main branch features -- ✅ **OpenAI SDK 1.x** (modern) -- ✅ **LLM client abstraction** (supports multiple providers) -- ✅ **QuantConnect code validator** -- ✅ **Lazy GUI imports** (no tkinter errors) -- ✅ **Improved error handling** -- ✅ **Better logging** -- ❌ Still basic CLI (no AI assistant mode) - -#### New Files -- `llm_client.py`: Abstraction for OpenAI/Anthropic/local models -- `qc_validator.py`: Validates generated QuantConnect code - -#### Use Case -- **Modern OpenAI SDK** compatibility -- **Better than main** but same workflow -- **Not yet tested** by user - ---- - -### 📦 GAMMA Branch (v2.0.0-alpha.1) - -**Status**: 🚀 Alpha (Complete Rewrite) -**Package**: `quantcoder` -**Last Commit**: `1b7cea5 - Add mobile-friendly branch reorganization tools` - -#### Structure -``` -quantcoder/ -├── __init__.py - Version 2.0.0-alpha.1 -├── cli.py - Modern CLI with subcommands -├── chat.py - Interactive chat interface -├── config.py - TOML configuration system -├── agents/ - Multi-agent architecture -│ ├── base.py -│ ├── coordinator.py -│ ├── universe.py -│ ├── alpha.py -│ ├── risk.py -│ └── strategy.py -├── autonomous/ - 🤖 Self-learning system -│ ├── database.py - Learning database (SQLite) -│ ├── learner.py - Error & performance learning -│ ├── pipeline.py - Autonomous orchestration -│ └── prompt_refiner.py - Dynamic prompt enhancement -├── library/ - 📚 Strategy library builder -│ ├── taxonomy.py - 10 categories, 86 strategies -│ ├── coverage.py - Progress tracking -│ └── builder.py - Systematic building -├── codegen/ - Code generation -├── core/ - Core utilities -├── execution/ - Parallel execution (AsyncIO) -├── llm/ - LLM providers (OpenAI, Anthropic, Mistral) -├── mcp/ - Model Context Protocol -└── tools/ - CLI tools -``` - -#### Features - -**🎨 Modern Architecture** -- ✅ **Vibe CLI-inspired** design (Mistral) -- ✅ **Interactive chat** interface -- ✅ **Tool-based architecture** -- ✅ **TOML configuration** -- ✅ **Rich terminal UI** -- ✅ **Persistent context** - -**🤖 AI Assistant** -- ✅ **Multi-agent system** (6 specialized 
agents) -- ✅ **Parallel execution** (AsyncIO, 3-5x faster) -- ✅ **Conversational interface** -- ✅ **Context-aware responses** - -**🧠 Autonomous Mode** (NEW!) -- ✅ **Self-learning** from errors -- ✅ **Performance analysis** -- ✅ **Auto-fix compilation** errors -- ✅ **Prompt refinement** based on learnings -- ✅ **SQLite database** for learnings -- ✅ **Success rate** improves over time (50% → 85%) - -**📚 Library Builder** (NEW!) -- ✅ **10 strategy categories** -- ✅ **86 strategies** (target) -- ✅ **Systematic coverage** -- ✅ **Priority-based** building -- ✅ **Checkpoint/resume** -- ✅ **Progress tracking** - -**🔧 Advanced Features** -- ✅ **MCP integration** (QuantConnect) -- ✅ **Multi-provider LLMs** (OpenAI, Anthropic, Mistral) -- ✅ **Comprehensive testing** -- ✅ **Modern packaging** (pyproject.toml) - -#### Commands -```bash -# Chat mode -quantcoder chat "Create momentum strategy" - -# Autonomous mode -quantcoder auto start "momentum trading" --max-iterations 50 - -# Library builder -quantcoder library build --comprehensive - -# Regular commands (like old CLI) -quantcoder search "pairs trading" -quantcoder generate -``` - -#### Use Case -- **AI-powered** strategy generation -- **Autonomous learning** systems -- **Library building** from scratch -- **Research & experimentation** -- **Cutting edge** features - ---- - -## 🌿 Archive Branches - -These are **not main development branches**: - -### feature/enhanced-help-command -- **Purpose**: Enhanced `--help` documentation + `--version` flag -- **Status**: ✅ Feature complete, ❌ Reverted from main -- **Use**: Can be re-merged if needed - -### revert-3-feature/enhanced-help-command -- **Purpose**: Revert PR for enhanced help -- **Status**: Already merged to main -- **Use**: Historical record only - -### claude/gamma-docs-update-JwrsM -- **Purpose**: Documentation cleanup for gamma -- **Status**: Temporary branch, ready to merge -- **Use**: Merge into gamma when ready - -### claude/re-add-enhanced-help-JwrsM -- **Purpose**: Re-add enhanced help to main -- **Status**: Ready to merge -- **Use**: Merge into main if enhanced help is wanted - ---- - -## 📈 Migration Paths - -### From main → beta -**Reason**: Modernize to OpenAI SDK 1.x - -```bash -# Update code -git checkout beta - -# Update dependencies -pip install -e . - -# Update .env if needed -OPENAI_API_KEY=sk-... - -# Test -quantcli search "test" -``` - -**Breaking Changes**: -- OpenAI SDK 0.28 → 1.x (API changed) -- Python 3.8 → 3.9 minimum - -### From main/beta → gamma -**Reason**: Get AI assistant + autonomous mode - -```bash -# New package name! -git checkout gamma - -# Install -pip install -e . 
- -# Configure -quantcoder config - -# Try chat mode -quantcoder chat "Create a momentum strategy" -``` - -**Breaking Changes**: -- Package name: `quantcli` → `quantcoder` -- Command name: `quantcli` → `quantcoder` or `qc` -- Python 3.9 → 3.10 minimum -- Completely different CLI interface -- New TOML config system - ---- - -## 🎯 Recommendations - -### For Production Use -→ **main** (v0.3) -Most stable, proven, but old SDK - -### For Modern SDK -→ **beta** (v1.0.0) -Same workflow, updated dependencies - -### For AI Features -→ **gamma** (v2.0.0-alpha.1) -Complete rewrite, autonomous mode, library builder - ---- - -## 📊 Version History - -``` -main (0.3) - ↓ -beta (1.0.0) ← Modernize OpenAI SDK, add validators - ↓ -gamma (2.0.0-alpha.1) ← Complete rewrite, AI assistant -``` - ---- - -## 🔧 Current Issues - -### All Branches -- ❌ 75 dependency vulnerabilities (GitHub Dependabot alert) - - 4 critical, 29 high, 33 moderate, 9 low - - Should be addressed across all branches - -### main -- ❌ Enhanced help was reverted (basic help only) -- ❌ Old OpenAI SDK (0.28) - -### beta -- ⚠️ Not tested by user yet -- ⚠️ Version says 1.0.0 but documentation says 1.1.0-beta.1 - -### gamma -- ⚠️ Alpha quality (testing phase) -- ⚠️ Version mismatch: pyproject.toml says 2.0.0, __init__.py says 2.0.0-alpha.1 -- ⚠️ Old setup.py still exists (should remove, use pyproject.toml only) - ---- - -## ✅ Next Steps - -1. **Fix version inconsistencies** in gamma -2. **Remove old setup.py** from gamma (use pyproject.toml) -3. **Address security vulnerabilities** across all branches -4. **Test beta** branch thoroughly -5. **Decide on enhanced help** for main (merge or leave reverted) -6. **Archive feature branches** that are no longer needed - ---- - -**Generated**: 2025-12-15 -**Tool**: Claude Code -**Repository**: https://github.com/SL-Mar/quantcoder-cli diff --git a/GAMMA_UPGRADE_PROPOSAL.md b/GAMMA_UPGRADE_PROPOSAL.md deleted file mode 100644 index 4629b78b..00000000 --- a/GAMMA_UPGRADE_PROPOSAL.md +++ /dev/null @@ -1,361 +0,0 @@ -# Gamma Branch Upgrade Proposal - -**Date:** 2026-01-25 -**Author:** Claude -**Current Branch:** `claude/cli-zed-integration-mRF07` - ---- - -## Executive Summary - -After analyzing all 17 branches in the quantcoder-cli repository, this document proposes a prioritized list of upgrades for the gamma branch (v2.0.0-alpha.1). The gamma branch already scores 88% on production readiness - these upgrades would bring it to production quality. - ---- - -## Branch Analysis Summary - -| Branch | Type | Key Features | Lines Changed | Merge Priority | -|--------|------|--------------|---------------|----------------| -| `claude/wire-mcp-production-mRF07` | Feature | MCP wiring for backtest/validate | +459 | **HIGH** | -| `claude/add-evolve-to-gamma-Kh22K` | Feature | AlphaEvolve evolution engine | +1,747 | **HIGH** | -| `copilot/add-ollama-backend-adapter` | Feature | Local LLM via Ollama | ~200 | **HIGH** | -| `claude/cli-zed-integration-mRF07` | Feature | Editor integration (Zed, VSCode) | +116 | **MEDIUM** | -| `claude/create-app-flowcharts-oAhVJ` | Docs | Architecture documentation | +docs | **MEDIUM** | -| `claude/assess-prod-readiness-Kh22K` | Docs | Production readiness assessment | +docs | **LOW** | -| `beta` | Enhancement | Testing/security improvements | varies | **LOW** | - ---- - -## Proposed Upgrades (Priority Order) - -### 1. 
MCP Production Wiring [HIGH PRIORITY] -**Branch:** `claude/wire-mcp-production-mRF07` -**Status:** Already implemented, ready to merge - -**What it adds:** -- `BacktestTool` class that wraps QuantConnect MCP for real backtesting -- Updated `ValidateCodeTool` with QuantConnect compilation -- CLI commands: `quantcoder validate ` and `quantcoder backtest ` -- Chat interface integration for `backtest` and `validate` commands -- Config methods: `load_quantconnect_credentials()` and `has_quantconnect_credentials()` -- Fixed `autonomous/pipeline.py` to use real MCP instead of mock data - -**Files modified:** -``` -quantcoder/tools/code_tools.py (+195 lines) - Added BacktestTool -quantcoder/config.py (+33 lines) - Credential management -quantcoder/cli.py (+89 lines) - CLI commands -quantcoder/chat.py (+94 lines) - Chat integration -quantcoder/autonomous/pipeline.py (+64 lines) - Real MCP calls -quantcoder/tools/__init__.py (+3 lines) - Export BacktestTool -``` - -**Impact:** CRITICAL - Enables actual strategy validation and backtesting - ---- - -### 2. AlphaEvolve Evolution Engine [HIGH PRIORITY] -**Branch:** `claude/add-evolve-to-gamma-Kh22K` -**Status:** Implemented, needs integration review - -**What it adds:** -- Complete evolution engine for strategy optimization -- Variation generator for creating strategy mutations -- QC evaluator for ranking variants by Sharpe ratio -- Persistence layer for evolution state and checkpoints -- CLI integration for evolution commands - -**New module structure:** -``` -quantcoder/evolver/ -├── __init__.py (32 lines) - Module exports -├── config.py (99 lines) - Evolution configuration -├── engine.py (346 lines) - Main orchestrator -├── evaluator.py (319 lines) - QuantConnect evaluator -├── persistence.py (272 lines) - State persistence -└── variation.py (350 lines) - Variation generator -``` - -**Key features:** -- Generate variations from baseline strategy -- Evaluate variants via QuantConnect backtest -- Maintain elite pool of best performers -- Support resumable evolution runs -- Async architecture compatible with gamma - -**New CLI commands (proposed):** -```bash -quantcoder evolve start --baseline --generations 50 -quantcoder evolve status -quantcoder evolve resume -quantcoder evolve export --format json -``` - -**Impact:** HIGH - Adds powerful strategy optimization via genetic evolution - ---- - -### 3. Ollama Provider (Local LLM) [HIGH PRIORITY] -**Branch:** `copilot/add-ollama-backend-adapter` -**Status:** Implemented for quantcli, needs port to quantcoder - -**What it adds:** -- OllamaAdapter class for local LLM inference -- Support for any Ollama-compatible model (llama2, codellama, mistral, etc.) -- Environment configuration via OLLAMA_BASE_URL and OLLAMA_MODEL -- Chat completion API compatible with existing provider interface - -**Required work:** -1. Port OllamaAdapter to quantcoder/llm/providers.py -2. Add "ollama" as provider option in ModelConfig -3. Update config.py to support Ollama settings -4. Add CLI flag: `--provider ollama` - -**Proposed implementation:** -```python -# In quantcoder/llm/providers.py - -class OllamaProvider(BaseLLMProvider): - """Provider for local LLM via Ollama.""" - - def __init__(self, config): - self.base_url = os.getenv('OLLAMA_BASE_URL', 'http://localhost:11434') - self.model = os.getenv('OLLAMA_MODEL', config.model.model or 'codellama') - - async def generate(self, prompt: str, **kwargs) -> str: - # Implementation from copilot branch adapter - ... 
-``` - -**Impact:** HIGH - Enables fully offline/local strategy generation with no API costs - ---- - -### 4. Editor Integration [MEDIUM PRIORITY] -**Branch:** `claude/cli-zed-integration-mRF07` -**Status:** Implemented for quantcli, needs port to quantcoder - -**What it adds:** -- `open_in_zed()` function for Zed editor -- `open_in_editor()` generic function supporting: - - Zed - - VS Code - - Cursor - - Sublime Text -- CLI flags: `--zed`, `--editor `, `--json-output` -- New command: `quantcoder open-code ` - -**Proposed integration:** -```python -# In quantcoder/tools/file_tools.py - -def open_in_editor(file_path: str, editor: str = "zed") -> bool: - """Open file in specified editor.""" - editors = { - "zed": ["zed", file_path], - "code": ["code", file_path], - "cursor": ["cursor", file_path], - "sublime": ["subl", file_path], - } - ... -``` - -**Impact:** MEDIUM - Improves developer workflow - ---- - -### 5. Architecture Documentation [MEDIUM PRIORITY] -**Branch:** `claude/create-app-flowcharts-oAhVJ` -**Status:** Complete, ready to merge - -**What it adds:** -- ARCHITECTURE.md with comprehensive flowcharts -- System architecture diagrams (ASCII art) -- Component relationship documentation -- CHANGELOG.md -- PRODUCTION_SETUP.md -- VERSIONS.md - -**Files to merge:** -``` -ARCHITECTURE.md -CHANGELOG.md -PRODUCTION_SETUP.md -VERSIONS.md -``` - -**Impact:** MEDIUM - Essential for onboarding and maintenance - ---- - -### 6. Testing Improvements [LOW PRIORITY] -**Source:** `beta` branch + production readiness assessment - -**Recommended additions:** -- Unit tests for agents (coordinator, universe, alpha, risk) -- Integration tests for autonomous pipeline -- Tests for library builder -- Tests for chat interface -- Test coverage reporting - -**Current test gap:** -``` -COVERED: NOT COVERED: -- test_llm.py - agents/* -- test_processor - autonomous/* -- conftest.py - library/* - - chat.py - - cli.py -``` - -**Impact:** LOW immediate, HIGH long-term for maintainability - ---- - -## Implementation Roadmap - -### Phase 1: Production Critical (Immediate) -1. **Merge MCP wiring** from `claude/wire-mcp-production-mRF07` - - All backtest/validate functionality now works - - Autonomous mode uses real data - -### Phase 2: Feature Enhancement (Week 1) -2. **Port Ollama provider** from `copilot/add-ollama-backend-adapter` - - Add to providers.py - - Update config.py - - Test with codellama - -3. **Merge Evolution engine** from `claude/add-evolve-to-gamma-Kh22K` - - Review integration points - - Add CLI commands - - Update documentation - -### Phase 3: Developer Experience (Week 2) -4. **Port editor integration** from `claude/cli-zed-integration-mRF07` - - Add to file_tools.py - - Update CLI - -5. **Merge documentation** from `claude/create-app-flowcharts-oAhVJ` - - Architecture docs - - Changelog - -### Phase 4: Quality (Ongoing) -6. 
**Add test coverage** - - Agent tests - - Integration tests - - CI coverage reporting - ---- - -## New Command Reference (After Upgrades) - -### Current gamma commands: -```bash -quantcoder chat # Interactive chat -quantcoder search # Search articles -quantcoder download # Download article -quantcoder summarize # Summarize article -quantcoder generate # Generate code -quantcoder auto start # Autonomous mode -quantcoder library build # Library builder -``` - -### Proposed new commands: -```bash -# From MCP wiring (Phase 1) -quantcoder validate # Validate code on QuantConnect -quantcoder backtest # Run backtest on QuantConnect - -# From Evolution engine (Phase 2) -quantcoder evolve start # Start evolution -quantcoder evolve status # Check evolution status -quantcoder evolve resume # Resume evolution -quantcoder evolve export # Export elite pool - -# From Editor integration (Phase 3) -quantcoder open-code # Open in editor -quantcoder generate --zed # Generate and open in Zed -quantcoder generate --editor code # Generate and open in VS Code -``` - -### Proposed new config options: -```toml -# ~/.quantcoder/config.toml - -[model] -provider = "ollama" # NEW: "anthropic", "mistral", "deepseek", "openai", "ollama" -ollama_model = "codellama" # NEW: for ollama provider -ollama_url = "http://localhost:11434" # NEW: custom Ollama server - -[ui] -default_editor = "zed" # NEW: "zed", "code", "cursor", "sublime" -auto_open = true # NEW: auto-open generated code - -[evolution] # NEW section -max_generations = 50 -population_size = 10 -elite_size = 3 -auto_save = true -``` - ---- - -## Risk Assessment - -| Upgrade | Risk Level | Mitigation | -|---------|------------|------------| -| MCP wiring | LOW | Already tested, minimal changes | -| Evolution engine | MEDIUM | Large codebase, needs integration review | -| Ollama provider | LOW | Simple adapter pattern | -| Editor integration | LOW | Optional feature, fallback to manual | -| Documentation | NONE | Non-code changes | - ---- - -## Estimated Effort - -| Upgrade | Effort | Type | -|---------|--------|------| -| MCP wiring merge | 30 min | Git merge + test | -| Ollama provider port | 2-3 hours | Code adaptation | -| Evolution engine merge | 1-2 hours | Integration review | -| Editor integration port | 1 hour | Code adaptation | -| Documentation merge | 30 min | Git merge | -| **Total** | **5-7 hours** | | - ---- - -## Conclusion - -The gamma branch is already 88% production-ready. These upgrades would: - -1. **Enable real backtesting** (MCP wiring) - Currently critical gap -2. **Add strategy optimization** (Evolution engine) - Competitive advantage -3. **Support local LLMs** (Ollama) - Cost savings, privacy -4. **Improve DX** (Editor integration) - Workflow improvement -5. **Document architecture** (Docs) - Maintainability - -Recommended immediate action: **Merge MCP wiring first** as it's the most critical production gap. 
- ---- - -## Appendix: Branch Details - -### Active Feature Branches -- `gamma` - Main development (v2.0.0-alpha.1) -- `beta` - Improved legacy (v1.1.0-beta.1) -- `main` - Stable production (v1.0.0) -- `claude/wire-mcp-production-mRF07` - MCP wiring -- `claude/add-evolve-to-gamma-Kh22K` - Evolution engine -- `copilot/add-ollama-backend-adapter` - Ollama support - -### Documentation Branches -- `claude/create-app-flowcharts-oAhVJ` - Architecture diagrams -- `claude/assess-prod-readiness-Kh22K` - Readiness assessment -- `claude/create-architecture-diagram-mjQqa` - Diagrams + evolver - -### Analysis Branches (Read-only reference) -- `claude/compare-agent-architectures-Qc6Ok` -- `claude/compare-gamma-opencode-arch-C4KzZ` -- `claude/audit-gamma-branch-ADxNt` -- `claude/check-credential-leaks-t3ZYa` diff --git a/LICENSE b/LICENSE index bba554bc..2ede2fbe 100644 --- a/LICENSE +++ b/LICENSE @@ -48,7 +48,7 @@ "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner + submitted to the Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent diff --git a/MOBILE_BRANCH_GUIDE.md b/MOBILE_BRANCH_GUIDE.md deleted file mode 100644 index 833eaac0..00000000 --- a/MOBILE_BRANCH_GUIDE.md +++ /dev/null @@ -1,112 +0,0 @@ -# Mobile-Friendly Branch Reorganization Guide - -## For Android/Mobile Users 📱 - -Since you're on Android, here are your options: - ---- - -## Option 1: GitHub Mobile Web (Easiest for Mobile) 🌐 - -1. **Open GitHub in your mobile browser** - - Go to: https://github.com/SL-Mar/quantcoder-cli - - Use **Desktop Site** mode for full features - -2. **Create Beta Branch** - - Tap "main" dropdown → Find "refactor/modernize-2025" - - Tap the ⋮ (three dots) next to branch name - - Select "Rename branch" - - Enter new name: `beta` - - Confirm - -3. **Create Gamma Branch** - - Tap "main" dropdown → Find "claude/refactor-quantcoder-cli-JwrsM" - - Tap the ⋮ (three dots) next to branch name - - Select "Rename branch" - - Enter new name: `gamma` - - Confirm - -4. **Done!** ✓ - ---- - -## Option 2: Use Termux (Android Terminal) 📟 - -If you have Termux installed: - -```bash -# Install git -pkg install git - -# Clone repo -git clone https://github.com/SL-Mar/quantcoder-cli -cd quantcoder-cli - -# Run reorganization script -chmod +x reorganize-branches.sh -./reorganize-branches.sh -``` - ---- - -## Option 3: Wait for Computer Access 💻 - -The reorganization script is ready at: -``` -./reorganize-branches.sh -``` - -When you have computer access: -1. Clone the repository -2. Run the script -3. Done! - ---- - -## Current Status (What You Have Now) - -✅ **All code is complete and pushed** -- Autonomous mode: ✓ -- Library builder: ✓ -- Documentation: ✓ -- Version 2.0.0-alpha.1: ✓ - -✅ **Working locally with clean names** -You can already use: -```bash -git checkout main # v1.0 -git checkout beta # v1.1 -git checkout gamma # v2.0 -``` - -❌ **Remote branches have technical names** -- `origin/main` -- `origin/refactor/modernize-2025` (should be beta) -- `origin/claude/refactor-quantcoder-cli-JwrsM` (should be gamma) - ---- - -## Why Can't Claude Do This? 
- -Claude's Git access is proxied with strict restrictions: -- Can only push to branches matching: `claude/*-sessionID` -- Cannot rename existing remote branches -- You need full GitHub access (which you have!) - ---- - -## Questions? - -**Q: Is my code safe?** -A: Yes! All v2.0 code is pushed to `origin/claude/refactor-quantcoder-cli-JwrsM` - -**Q: Can I use it now?** -A: Yes! The branch names are just labels. All functionality works. - -**Q: What's the priority?** -A: Low priority. Renaming is cosmetic - the code is complete and working. - ---- - -**Created:** 2025-12-15 -**Repository:** https://github.com/SL-Mar/quantcoder-cli diff --git a/PRODUCTION_SETUP.md b/PRODUCTION_SETUP.md new file mode 100644 index 00000000..a4910041 --- /dev/null +++ b/PRODUCTION_SETUP.md @@ -0,0 +1,188 @@ +# QuantCoder CLI - Production Setup Instructions + +## Overview + +**Goal:** Set up production-ready repository with 2 versions and clean branch structure. + +### Target State + +``` +BRANCHES: TAGS: +───────── ───── +main (stable) v1.0 → main (legacy) +beta (to be deleted) v1.1 → beta (enhanced) +develop (v2.0 WIP) + +After cleanup: main + develop only +``` + +### Version Summary + +| Version | Source | Features | +|---------|--------|----------| +| v1.0 | main | Legacy, OpenAI v0.28, Tkinter GUI | +| v1.1 | beta | + LLM client abstraction, + QC static validator | +| v2.0 | develop (from gamma) | Multi-agent, autonomous, library builder | + +--- + +## Step-by-Step Instructions + +### Phase 1: Create Tags + +```bash +# Tag main as v1.0 +git checkout main +git tag -a v1.0 -m "v1.0: Legacy - OpenAI v0.28, Tkinter GUI, basic features" +git push origin v1.0 + +# Tag beta as v1.1 +git checkout beta +git tag -a v1.1 -m "v1.1: LLM client abstraction + QC static validator" +git push origin v1.1 +``` + +### Phase 2: Rename gamma → develop + +```bash +git checkout gamma +git checkout -b develop +git push origin develop +git push origin --delete gamma +``` + +### Phase 3: Merge Documentation + +```bash +git checkout main +git merge origin/claude/create-app-flowcharts-oAhVJ -m "Add version documentation" +git push origin main +``` + +**Files added:** +- `ARCHITECTURE.md` - Gamma branch flowcharts +- `VERSIONS.md` - Version comparison guide +- `CHANGELOG.md` - Detailed changelog + +### Phase 4: Delete Old Branches + +```bash +# Delete merged/obsolete branches +git push origin --delete beta +git push origin --delete claude/assess-gamma-quality-d5N6F +git push origin --delete claude/audit-gamma-branch-ADxNt +git push origin --delete claude/check-credential-leaks-t3ZYa +git push origin --delete claude/compare-gamma-opencode-arch-C4KzZ +git push origin --delete claude/create-app-flowcharts-oAhVJ +git push origin --delete copilot/add-ollama-backend-adapter +``` + +### Phase 5: Verify + +```bash +# Check branches (should be: main, develop) +git branch -a + +# Check tags +git tag -l + +# Expected output: +# Branches: main, develop +# Tags: v1.0, v1.1 +``` + +--- + +## Alternative: GitHub Web Interface + +If using mobile/web browser: + +### Create Tags (via Releases) +1. Go to **Releases** → **Create a new release** +2. **Choose a tag** → type `v1.0` → **Create new tag** +3. **Target:** select `main` +4. **Title:** `v1.0: Legacy Release` +5. **Publish release** +6. Repeat for `v1.1` targeting `beta` + +### Create develop branch +1. Go to **Code** tab +2. Click branch dropdown (shows `main`) +3. Type `develop` +4. Select **Create branch: develop from gamma** + +### Delete branches +1. Go to **Branches** (click "X branches") +2. 
Click 🗑️ trash icon next to each unwanted branch + +### Merge documentation PR +1. Go to **Pull requests** +2. Create PR from `claude/create-app-flowcharts-oAhVJ` → `main` +3. Merge + +--- + +## Final Repository Structure + +``` +quantcoder-cli/ +├── main branch (v1.0 code + docs) +│ ├── quantcli/ # v1.0 package +│ ├── ARCHITECTURE.md # Flowcharts +│ ├── VERSIONS.md # Version guide +│ ├── CHANGELOG.md # Changelog +│ └── README.md +│ +├── develop branch (v2.0 WIP) +│ ├── quantcoder/ # v2.0 package (new name) +│ ├── agents/ +│ ├── autonomous/ +│ ├── library/ +│ └── ... +│ +└── Tags + ├── v1.0 → points to main (legacy) + └── v1.1 → points to beta commit (enhanced) +``` + +--- + +## Release Workflow (Future) + +```bash +# When v2.0 is ready: +git checkout main +git merge develop +git tag -a v2.0 -m "v2.0: Multi-agent architecture" +git push origin main --tags + +# v1.0 and v1.1 remain accessible via tags +git checkout v1.0 # Access old version anytime +``` + +--- + +## Checklist + +- [ ] Tag main as v1.0 +- [ ] Tag beta as v1.1 +- [ ] Create develop branch from gamma +- [ ] Delete gamma branch +- [ ] Merge docs to main +- [ ] Delete claude/* branches (5) +- [ ] Delete copilot/* branch (1) +- [ ] Delete beta branch +- [ ] Verify: 2 branches + 2 tags + +--- + +## Quick Reference + +| Action | Command | +|--------|---------| +| List branches | `git branch -a` | +| List tags | `git tag -l` | +| Checkout version | `git checkout v1.0` | +| Delete remote branch | `git push origin --delete ` | +| Create tag | `git tag -a v1.0 -m "message"` | +| Push tag | `git push origin v1.0` | diff --git a/README.md b/README.md index 60fbd57f..9b6baf67 100644 --- a/README.md +++ b/README.md @@ -1,16 +1,17 @@ -# QuantCoder 2.0.0-alpha.1 +# QuantCoder 2.0.0 -[![Version](https://img.shields.io/badge/version-2.0.0--alpha.1-orange)](https://github.com/SL-Mar/quantcoder-cli) -[![Branch](https://img.shields.io/badge/branch-gamma-purple)](https://github.com/SL-Mar/quantcoder-cli/tree/gamma) -[![Package](https://img.shields.io/badge/package-quantcoder-blue)](.) +[![Version](https://img.shields.io/badge/version-2.0.0-green)](https://github.com/SL-Mar/quantcoder-cli) +[![Python](https://img.shields.io/badge/python-3.10+-blue)](https://python.org) +[![License](https://img.shields.io/badge/license-Apache%202.0-blue)](LICENSE) > **AI-powered CLI for generating QuantConnect trading algorithms from research articles** -**This is QuantCoder 2.0 (GAMMA)** - The primary development branch with autonomous mode & library builder +> **Note** +> This version (v2.0.0) has not been systematically tested yet. +> It represents a complete architectural rewrite from the legacy v1.x codebase. +> Use with caution and report any issues. -**Want the original stable version?** → [QuantCoder 1.0 (main branch)](https://github.com/SL-Mar/quantcoder-cli/tree/main) - -📖 **[Version Comparison Guide](docs/VERSION_COMPARISON.md)** | **[Branch Map](docs/BRANCH_VERSION_MAP.md)** +Features: Multi-agent system, AlphaEvolve-inspired evolution, autonomous learning, MCP integration. 
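+
+A minimal usage sketch of the main execution modes described in the docs (interactive chat, one-shot programmatic prompts, and direct commands); exact flags may still change while v2.0.0 stabilizes:
+
+```bash
+# Interactive chat mode (short alias: qc)
+quantcoder
+
+# One-shot programmatic mode
+quantcoder --prompt "Search for momentum trading strategies"
+
+# Direct commands
+quantcoder search "pairs trading"
+quantcoder auto start --query "momentum trading"
+quantcoder library build
+```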
--- @@ -30,7 +31,7 @@ The initial version successfully coded a blended momentum and mean-reversion str - 📝 **Programmable Mode** via `--prompt` flag - 💾 **Persistent Context** and conversation history -👉 **[See full v2.0 documentation →](README_v2.md)** +📖 **[Architecture](docs/AGENTIC_WORKFLOW.md)** | **[Autonomous Mode](docs/AUTONOMOUS_MODE.md)** | **[Changelog](CHANGELOG.md)** --- diff --git a/README_v2.md b/README_v2.md deleted file mode 100644 index 4866857b..00000000 --- a/README_v2.md +++ /dev/null @@ -1,377 +0,0 @@ -# QuantCoder v2.0 - -> **AI-powered CLI for generating QuantConnect trading algorithms from research articles** - -QuantCoder v2.0 is a complete refactoring inspired by [Mistral's Vibe CLI](https://github.com/mistralai/mistral-vibe), featuring a modern architecture with conversational AI, tool-based workflows, and an enhanced developer experience. - ---- - -## 🌟 What's New in v2.0 - -### Inspired by Mistral Vibe CLI - -This version draws inspiration from Mistral's excellent Vibe CLI architecture: - -- **🤖 Interactive Chat Interface**: Conversational AI that understands natural language -- **🛠️ Tool-Based Architecture**: Modular, extensible tool system -- **⚙️ Configuration System**: Customizable via TOML config files -- **🎨 Modern UI**: Beautiful terminal output with Rich library -- **📝 Programmable Mode**: Use `--prompt` for automation -- **💾 Persistent Context**: Conversation history and smart completions - -### Core Improvements - -- Modern Python packaging with `pyproject.toml` -- Updated OpenAI SDK (v1.0+) -- Rich terminal UI with syntax highlighting -- Prompt-toolkit for advanced CLI features -- Configuration management system -- Tool approval workflows -- Better error handling and logging - ---- - -## 🚀 Installation - -### Prerequisites - -- **Python 3.10 or later** -- OpenAI API key - -### Install from Source - -```bash -# Clone the repository -git clone https://github.com/SL-Mar/quantcoder-cli.git -cd quantcoder-cli - -# Create and activate virtual environment -python -m venv .venv -source .venv/bin/activate # On Windows: .venv\Scripts\activate - -# Install the package -pip install -e . - -# Download SpaCy model -python -m spacy download en_core_web_sm -``` - -### Quick Install (pip) - -```bash -pip install -e . -python -m spacy download en_core_web_sm -``` - ---- - -## 🎯 Quick Start - -### First Run - -On first run, QuantCoder will: -1. Create configuration directory at `~/.quantcoder/` -2. Generate default `config.toml` -3. Prompt for your OpenAI API key (saved to `~/.quantcoder/.env`) - -### Launch Interactive Mode - -```bash -quantcoder -``` - -Or use the short alias: - -```bash -qc -``` - -### Programmatic Mode - -```bash -quantcoder --prompt "Search for momentum trading strategies" -``` - ---- - -## 💡 Usage - -### Interactive Mode - -QuantCoder provides a conversational interface: - -```bash -quantcoder> search momentum trading -quantcoder> download 1 -quantcoder> summarize 1 -quantcoder> generate 1 -``` - -You can also use natural language: - -```bash -quantcoder> Find articles about algorithmic trading -quantcoder> How do I generate code from an article? 
-quantcoder> Explain mean reversion strategies -``` - -### Direct Commands - -```bash -# Search for articles -quantcoder search "algorithmic trading" --num 5 - -# Download article by ID -quantcoder download 1 - -# Summarize trading strategy -quantcoder summarize 1 - -# Generate QuantConnect code -quantcoder generate 1 - -# Show configuration -quantcoder config-show - -# Show version -quantcoder version -``` - -### Workflow Example - -```bash -# 1. Search for articles -quantcoder> search "momentum and mean reversion strategies" - -# 2. Download the most relevant article -quantcoder> download 1 - -# 3. Extract and summarize the trading strategy -quantcoder> summarize 1 - -# 4. Generate QuantConnect algorithm code -quantcoder> generate 1 -``` - ---- - -## ⚙️ Configuration - -### Config File Location - -`~/.quantcoder/config.toml` - -### Example Configuration - -```toml -[model] -provider = "openai" -model = "gpt-4o-2024-11-20" -temperature = 0.5 -max_tokens = 2000 - -[ui] -theme = "monokai" -auto_approve = false -show_token_usage = true - -[tools] -enabled_tools = ["*"] -disabled_tools = [] -downloads_dir = "downloads" -generated_code_dir = "generated_code" -``` - -### Environment Variables - -- `OPENAI_API_KEY`: Your OpenAI API key -- `QUANTCODER_HOME`: Override default config directory (default: `~/.quantcoder`) - -### API Key Setup - -Three ways to set your API key: - -1. **Interactive prompt** (first-time setup) -2. **Environment variable**: `export OPENAI_API_KEY=your_key` -3. **`.env` file**: Create `~/.quantcoder/.env` with `OPENAI_API_KEY=your_key` - ---- - -## 🏗️ Architecture - -### Directory Structure - -``` -quantcoder/ -├── __init__.py # Package initialization -├── cli.py # Main CLI interface -├── config.py # Configuration management -├── chat.py # Interactive & programmatic chat -├── core/ -│ ├── __init__.py -│ ├── llm.py # LLM handler (OpenAI) -│ └── processor.py # Article processing pipeline -├── tools/ -│ ├── __init__.py -│ ├── base.py # Base tool classes -│ ├── article_tools.py # Search, download, summarize -│ ├── code_tools.py # Generate, validate code -│ └── file_tools.py # Read, write files -└── agents/ - └── __init__.py # Future: custom agents -``` - -### Tool System - -Tools are modular, composable components: - -- **ArticleTools**: Search, download, summarize articles -- **CodeTools**: Generate and validate QuantConnect code -- **FileTools**: Read and write files - -Each tool: -- Has a clear interface (`execute(**kwargs) -> ToolResult`) -- Can be enabled/disabled via configuration -- Supports approval workflows -- Provides rich error handling - -### LLM Integration - -- Supports OpenAI API (v1.0+) -- Configurable models, temperature, max tokens -- Context-aware conversations -- Automatic code refinement with validation - ---- - -## 🎨 Features - -### Interactive Chat - -- **Prompt Toolkit**: Advanced line editing, history, auto-suggest -- **Natural Language**: Ask questions in plain English -- **Context Awareness**: Maintains conversation history -- **Smart Completions**: Auto-complete for commands - -### Rich Terminal UI - -- **Syntax Highlighting**: Beautiful code display with Pygments -- **Markdown Rendering**: Formatted summaries and help -- **Progress Indicators**: Status messages for long operations -- **Color-Coded Output**: Errors, success, info messages - -### Tool Approval - -- **Auto-Approve Mode**: For trusted operations -- **Manual Approval**: Review before execution (coming soon) -- **Safety Controls**: Configurable tool permissions - ---- - -## 📚 
Comparison with Legacy Version - -| Feature | Legacy (v0.3) | v2.0 | -|---------|---------------|------| -| Python Version | 3.8+ | 3.10+ | -| OpenAI SDK | 0.28 | 1.0+ | -| CLI Framework | Click | Click + Rich + Prompt Toolkit | -| Architecture | Monolithic | Tool-based, modular | -| Configuration | Hardcoded | TOML config file | -| UI | Basic text | Rich terminal UI | -| Interactive Mode | Tkinter GUI | Conversational CLI | -| Programmable | No | Yes (`--prompt` flag) | -| Extensibility | Limited | Plugin-ready | - ---- - -## 🛠️ Development - -### Install Development Dependencies - -```bash -pip install -e ".[dev]" -``` - -### Code Quality - -```bash -# Format code -black quantcoder/ - -# Lint code -ruff check quantcoder/ - -# Type checking -mypy quantcoder/ - -# Run tests -pytest -``` - -### Project Structure - -The project follows modern Python best practices: - -- **pyproject.toml**: Single source of truth for dependencies -- **Type hints**: Improved code quality and IDE support -- **Logging**: Structured logging with Rich -- **Modularity**: Clear separation of concerns - ---- - -## 🤝 Contributing - -We welcome contributions! To contribute: - -1. Fork the repository -2. Create a feature branch (`git checkout -b feature/amazing-feature`) -3. Commit your changes (`git commit -m 'Add amazing feature'`) -4. Push to the branch (`git push origin feature/amazing-feature`) -5. Open a Pull Request - ---- - -## 📄 License - -This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details. - ---- - -## 🙏 Acknowledgments - -- **Mistral AI** - For the excellent [Vibe CLI](https://github.com/mistralai/mistral-vibe) architecture that inspired this refactoring -- **OpenAI** - For GPT models powering the code generation -- **QuantConnect** - For the algorithmic trading platform -- Original QuantCoder concept from November 2023 - ---- - -## 📞 Support - -- **Issues**: [GitHub Issues](https://github.com/SL-Mar/quantcoder-cli/issues) -- **Discussions**: [GitHub Discussions](https://github.com/SL-Mar/quantcoder-cli/discussions) -- **Email**: smr.laignel@gmail.com - ---- - -## 🗺️ Roadmap - -- [ ] Custom agent system for specialized workflows -- [ ] MCP server integration for external tools -- [ ] Web interface option -- [ ] Backtesting integration with QuantConnect -- [ ] Strategy optimization tools -- [ ] Multi-provider LLM support (Anthropic, Mistral, etc.) -- [ ] Plugin system for custom tools - ---- - -**Note**: This is v2.0 with breaking changes from the legacy version. For the original version, see the `quantcoder-legacy` branch. - ---- - -## Sources - -This refactoring was inspired by: -- [GitHub - mistralai/mistral-vibe](https://github.com/mistralai/mistral-vibe) -- [Introducing: Devstral 2 and Mistral Vibe CLI | Mistral AI](https://mistral.ai/news/devstral-2-vibe-cli) diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md new file mode 100644 index 00000000..f8e01be4 --- /dev/null +++ b/docs/ARCHITECTURE.md @@ -0,0 +1,1219 @@ +# QuantCoder CLI v2.0 - Architecture Documentation (Gamma Branch) + +This document provides comprehensive flowcharts and diagrams describing the architecture of QuantCoder CLI v2.0 (gamma branch). + +--- + +## Table of Contents + +1. [High-Level System Architecture](#1-high-level-system-architecture) +2. [Entry Points & Execution Modes](#2-entry-points--execution-modes) +3. [Tool System Architecture](#3-tool-system-architecture) +4. [Multi-Agent Orchestration](#4-multi-agent-orchestration) +5. 
[Autonomous Pipeline (Self-Improving)](#5-autonomous-pipeline-self-improving) +6. [Library Builder System](#6-library-builder-system) +7. [Chat Interface Flow](#7-chat-interface-flow) +8. [LLM Provider Abstraction](#8-llm-provider-abstraction) +9. [Data Flow & Entity Relationships](#9-data-flow--entity-relationships) +10. [File Structure Reference](#10-file-structure-reference) + +--- + +## 1. High-Level System Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ QUANTCODER CLI v2.0 (GAMMA BRANCH) │ +│ AI-Powered QuantConnect Algorithm Generator │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌──────────┐ + │ USER │ + └────┬─────┘ + │ + ┌───────────────────────────────┼───────────────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Interactive │ │ Programmatic │ │ Direct │ +│ Chat Mode │ │ Mode (--prompt)│ │ Commands │ +│ │ │ │ │ (search, etc.) │ +└────────┬────────┘ └────────┬────────┘ └────────┬────────┘ + │ │ │ + └─────────────────────────────┼─────────────────────────────┘ + │ + ▼ + ┌────────────────────┐ + │ cli.py │ + │ (Click Group) │ + │ Entry Point │ + └─────────┬──────────┘ + │ + ┌───────────────────────┼───────────────────────┐ + │ │ │ + ▼ ▼ ▼ + ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ + │ TOOL SYSTEM │ │ MULTI-AGENT │ │ ADVANCED MODES │ + │ (tools/*.py) │ │ SYSTEM │ │ │ + │ │ │ (agents/*.py) │ │ ┌───────────┐ │ + │ • SearchArticles│ │ │ │ │Autonomous │ │ + │ • Download │ │ • Coordinator │ │ │Pipeline │ │ + │ • Summarize │ │ • Universe │ │ └───────────┘ │ + │ • GenerateCode │ │ • Alpha │ │ ┌───────────┐ │ + │ • Validate │ │ • Risk │ │ │Library │ │ + │ • ReadFile │ │ • Strategy │ │ │Builder │ │ + │ • WriteFile │ │ │ │ └───────────┘ │ + └────────┬────────┘ └────────┬────────┘ └────────┬────────┘ + │ │ │ + └──────────────────────┼──────────────────────┘ + │ + ▼ + ┌────────────────────┐ + │ LLM PROVIDERS │ + │ (llm/*.py) │ + │ │ + │ • OpenAI (GPT-4) │ + │ • Anthropic │ + │ • Mistral │ + │ • DeepSeek │ + └─────────┬──────────┘ + │ + ┌──────────────┼──────────────┐ + │ │ │ + ▼ ▼ ▼ + ┌────────────┐ ┌────────────┐ ┌────────────┐ + │ CrossRef │ │ Unpaywall │ │QuantConnect│ + │ API │ │ API │ │ MCP │ + │ (Search) │ │ (PDF) │ │ (Validate) │ + └────────────┘ └────────────┘ └────────────┘ +``` + +### Component Summary + +| Layer | Components | Source Files | +|-------|------------|--------------| +| Entry | CLI, Chat | `cli.py:40-510`, `chat.py:27-334` | +| Tools | Search, Download, Summarize, Generate, Validate | `tools/*.py` | +| Agents | Coordinator, Universe, Alpha, Risk, Strategy | `agents/*.py` | +| Advanced | Autonomous Pipeline, Library Builder | `autonomous/*.py`, `library/*.py` | +| LLM | Multi-provider abstraction | `llm/providers.py` | +| Core | PDF Processing, Article Processor | `core/processor.py`, `core/llm.py` | + +--- + +## 2. Entry Points & Execution Modes + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ ENTRY POINTS │ +│ quantcoder/cli.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌─────────────────────┐ + │ $ quantcoder │ + │ or $ qc │ + └──────────┬──────────┘ + │ + ▼ + ┌─────────────────────┐ + │ main() │ + │ cli.py:45 │ + │ │ + │ 1. setup_logging() │ + │ 2. Config.load() │ + │ 3. 
load_api_key() │ + └──────────┬──────────┘ + │ + ┌──────────────────────────┼──────────────────────────┐ + │ │ │ + ┌────────▼────────┐ ┌──────────▼──────────┐ ┌──────────▼──────────┐ + │ --prompt flag? │ │ Subcommand given? │ │ No args (default) │ + │ │ │ │ │ │ + │ ProgrammaticChat│ │ Execute subcommand │ │ InteractiveChat │ + │ cli.py:81-86 │ │ │ │ cli.py:88-90 │ + └─────────────────┘ └──────────┬──────────┘ └─────────────────────┘ + │ + ┌──────────────────────────────────┼──────────────────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ +│ STANDARD │ │ AUTONOMOUS │ │ LIBRARY │ +│ COMMANDS │ │ MODE │ │ BUILDER │ +│ │ │ │ │ │ +│ • search │ │ • auto start │ │ • library build │ +│ • download │ │ • auto status│ │ • library status │ +│ • summarize │ │ • auto report│ │ • library resume │ +│ • generate │ │ cli.py: │ │ • library export │ +│ • config │ │ 276-389 │ │ cli.py:392-506 │ +│ cli.py: │ │ │ │ │ +│ 109-270 │ │ │ │ │ +└──────────────┘ └──────────────┘ └──────────────────┘ +``` + +### CLI Commands Reference + +| Command | Function | Source | Description | +|---------|----------|--------|-------------| +| `quantcoder` | `main()` | `cli.py:45` | Launch interactive mode | +| `quantcoder --prompt "..."` | `ProgrammaticChat` | `cli.py:81` | Non-interactive query | +| `quantcoder search ` | `search()` | `cli.py:113` | Search CrossRef API | +| `quantcoder download ` | `download()` | `cli.py:141` | Download article PDF | +| `quantcoder summarize ` | `summarize()` | `cli.py:162` | Generate AI summary | +| `quantcoder generate ` | `generate_code()` | `cli.py:189` | Generate QC algorithm | +| `quantcoder auto start` | `auto_start()` | `cli.py:293` | Autonomous generation | +| `quantcoder library build` | `library_build()` | `cli.py:414` | Build strategy library | + +--- + +## 3. 
Tool System Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ TOOL SYSTEM (Mistral Vibe Pattern) │ +│ tools/base.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌───────────────┐ + │ Tool (ABC) │ + │ base.py:27 │ + │ │ + │ + name │ + │ + description │ + │ + execute() │ + │ + is_enabled()│ + └───────┬───────┘ + │ + │ inherits + ┌───────────────┬───────────────┼───────────────┬───────────────┐ + │ │ │ │ │ + ▼ ▼ ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│SearchArticles │ │DownloadArticle│ │SummarizeArticle│ │ GenerateCode │ │ ValidateCode │ +│ Tool │ │ Tool │ │ Tool │ │ Tool │ │ Tool │ +│ │ │ │ │ │ │ │ │ │ +│article_tools │ │article_tools │ │article_tools │ │ code_tools │ │ code_tools │ +│ .py │ │ .py │ │ .py │ │ .py │ │ .py │ +└───────┬───────┘ └───────┬───────┘ └───────┬───────┘ └───────┬──────┘ └───────┬───────┘ + │ │ │ │ │ + ▼ ▼ ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│ CrossRef │ │ Unpaywall │ │ OpenAI API │ │ OpenAI API │ │ AST Parser │ +│ API │ │ API │ │ (Summarize) │ │ (Generate) │ │ (Validate) │ +└───────────────┘ └───────────────┘ └───────────────┘ └───────────────┘ └───────────────┘ + + + TOOL EXECUTION FLOW + + ┌──────────────────┐ + │ User Command │ + │ "search query" │ + └────────┬─────────┘ + │ + ▼ + ┌──────────────────┐ + │ Tool Selection │ + │ chat.py:129 │ + └────────┬─────────┘ + │ + ▼ + ┌──────────────────┐ + │ tool.execute() │ + │ **kwargs │ + └────────┬─────────┘ + │ + ▼ + ┌──────────────────┐ + │ ToolResult │ + │ base.py:11 │ + │ │ + │ • success: bool │ + │ • data: Any │ + │ • error: str │ + │ • message: str │ + └────────┬─────────┘ + │ + ▼ + ┌──────────────────┐ + │ Display Result │ + │ (Rich Console) │ + └──────────────────┘ +``` + +### Tool Result Flow Example + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ GENERATE CODE TOOL FLOW │ +│ tools/code_tools.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌─────────────────┐ + │ execute( │ + │ article_id=1, │ + │ max_attempts=6│ + │ ) │ + └────────┬────────┘ + │ + ▼ + ┌─────────────────────┐ + │ Load article PDF │ + │ from downloads/ │ + └────────┬────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ ArticleProcessor │ + │ .extract_structure()│ + │ core/processor.py │ + └────────┬────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ LLMHandler │ + │ .generate_summary() │ + │ core/llm.py │ + └────────┬────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ LLMHandler │ + │ .generate_qc_code() │ + └────────┬────────────┘ + │ + ▼ + ┌─────────────────────────────────┐ + │ VALIDATION & REFINEMENT │ + │ LOOP │ + │ │ + │ ┌────────────────────────┐ │ + │ │ CodeValidator │ │ + │ │ .validate_code() │ │ + │ │ (AST parse check) │ │ + │ └───────────┬────────────┘ │ + │ │ │ + │ ◇────┴────◇ │ + │ Valid? Invalid? │ + │ │ │ │ + │ ▼ ▼ │ + │ ┌────────┐ ┌───────────┐ │ + │ │ Return │ │ Refine │ │ + │ │ Code │ │ with LLM │ │ + │ └────────┘ │ (max 6x) │ │ + │ └─────┬─────┘ │ + │ │ │ + │ ▼ │ + │ Loop back to │ + │ validation │ + └─────────────────────────────────┘ + │ + ▼ + ┌─────────────────────┐ + │ ToolResult( │ + │ success=True, │ + │ data={ │ + │ 'summary':...,│ + │ 'code':... │ + │ } │ + │ ) │ + └─────────────────────┘ +``` + +--- + +## 4. 
Multi-Agent Orchestration + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ MULTI-AGENT SYSTEM │ +│ agents/coordinator_agent.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌──────────────────────────┐ + │ User Request │ + │ "Create momentum │ + │ strategy with RSI" │ + └────────────┬─────────────┘ + │ + ▼ + ┌──────────────────────────┐ + │ CoordinatorAgent │ + │ coordinator_agent.py:14│ + │ │ + │ Responsibilities: │ + │ • Analyze request │ + │ • Create execution plan│ + │ • Spawn agents │ + │ • Integrate results │ + │ • Validate via MCP │ + └────────────┬─────────────┘ + │ + ▼ + ┌──────────────────────────┐ + │ Step 1: Create Plan │ + │ _create_execution_plan() │ + │ coordinator_agent.py:83│ + │ │ + │ Uses LLM to determine: │ + │ • Required components │ + │ • Execution order │ + │ • Key parameters │ + │ • Parallel vs Sequential │ + └────────────┬─────────────┘ + │ + ▼ + ┌──────────────────────────┐ + │ Execution Plan (JSON): │ + │ { │ + │ "components": { │ + │ "universe": "...", │ + │ "alpha": "...", │ + │ "risk": "..." │ + │ }, │ + │ "execution_strategy": │ + │ "parallel" │ + │ } │ + └────────────┬─────────────┘ + │ + ▼ + ┌──────────────────────────┐ + │ Step 2: Execute Plan │ + │ _execute_plan() │ + │ coordinator_agent.py:153│ + └────────────┬─────────────┘ + │ + ┌──────────────────────────────┼──────────────────────────────┐ + │ │ │ + │ PARALLEL EXECUTION │ SEQUENTIAL EXECUTION │ + │ (strategy="parallel") │ (strategy="sequential") │ + │ │ │ + ▼ │ ▼ +┌───────────────────────────────────┐ │ ┌───────────────────────────────────┐ +│ ParallelExecutor │ │ │ Sequential Execution │ +│ execution/parallel_executor │ │ │ │ +│ │ │ │ Universe ──▶ Alpha ──▶ Risk │ +│ ┌─────────────┐ ┌─────────────┐ │ │ │ │ │ │ │ +│ │ Universe │ │ Alpha │ │ │ │ ▼ ▼ ▼ │ +│ │ Agent │ │ Agent │ │ │ │ Universe.py Alpha.py Risk.py │ +│ │ │ │ │ │ │ │ │ +│ │ (Parallel) │ │ (Parallel) │ │ │ └───────────────────────────────────┘ +│ └──────┬──────┘ └──────┬──────┘ │ │ +│ │ │ │ │ +│ └───────┬───────┘ │ │ +│ │ │ │ +│ ▼ │ │ +│ ┌───────────────┐ │ │ +│ │ Risk Agent │ │ │ +│ │ (Sequential) │ │ │ +│ └───────┬───────┘ │ │ +│ │ │ │ +└────────────────┼──────────────────┘ │ + │ │ + ▼ │ + ┌───────────────────────────────────┐ + │ Strategy Agent │ + │ strategy_agent.py │ + │ │ + │ Integrates all components into │ + │ Main.py │ + │ │ + │ • Imports Universe, Alpha, Risk │ + │ • Initialize() method │ + │ • OnData() method │ + │ • Wiring of components │ + └───────────────┬───────────────────┘ + │ + ▼ + ┌───────────────────────────────────┐ + │ Generated Files │ + │ │ + │ ┌──────────┐ ┌──────────┐ │ + │ │ Main.py │ │ Alpha.py │ │ + │ └──────────┘ └──────────┘ │ + │ ┌──────────┐ ┌──────────┐ │ + │ │Universe.py│ │ Risk.py │ │ + │ └──────────┘ └──────────┘ │ + └───────────────┬───────────────────┘ + │ + ▼ + ┌───────────────────────────────────┐ + │ Step 3: Validate via MCP │ + │ _validate_and_refine() │ + │ coordinator_agent.py:257 │ + │ │ + │ • Send to QuantConnect MCP │ + │ • Check compilation │ + │ • If errors: use LLM to fix │ + │ • Retry up to 3 times │ + └───────────────────────────────────┘ +``` + +### Agent Class Hierarchy + +``` + ┌──────────────────┐ + │ BaseAgent │ + │ base.py:28 │ + │ │ + │ + llm: Provider │ + │ + config │ + │ + agent_name │ + │ + agent_descr │ + │ + execute() │ + │ + _generate_llm()│ + │ + _extract_code()│ + └────────┬─────────┘ + │ + ┌─────────────────────────────┼─────────────────────────────┐ + │ │ │ + ▼ ▼ ▼ 
+┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ CoordinatorAgent│ │ UniverseAgent │ │ AlphaAgent │ +│ │ │ │ │ │ +│ Orchestrates │ │ Generates │ │ Generates │ +│ multi-agent │ │ Universe.py │ │ Alpha.py │ +│ workflow │ │ │ │ │ +│ │ │ Stock selection │ │ Trading signals │ +│ │ │ & filtering │ │ Entry/exit logic│ +└─────────────────┘ └─────────────────┘ └─────────────────┘ + │ + ├─────────────────────────────┐ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│ RiskAgent │ │ StrategyAgent │ +│ │ │ │ +│ Generates │ │ Generates │ +│ Risk.py │ │ Main.py │ +│ │ │ │ +│ Position sizing │ │ Integrates all │ +│ Stop-loss logic │ │ components │ +└─────────────────┘ └─────────────────┘ +``` + +--- + +## 5. Autonomous Pipeline (Self-Improving) + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ AUTONOMOUS SELF-IMPROVING PIPELINE │ +│ autonomous/pipeline.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + +$ quantcoder auto start --query "momentum trading" --max-iterations 50 + + ┌──────────────────────┐ + │ AutonomousPipeline │ + │ pipeline.py:54 │ + │ │ + │ • LearningDatabase │ + │ • ErrorLearner │ + │ • PerformanceLearner │ + │ • PromptRefiner │ + └──────────┬───────────┘ + │ + ▼ + ┌──────────────────────┐ + │ run() Main Loop │ + │ pipeline.py:82 │ + │ │ + │ while iteration < │ + │ max_iterations: │ + └──────────┬───────────┘ + │ + ▼ +┌──────────────────────────────────────────────────────────────────────────────────────┐ +│ SINGLE ITERATION (_run_iteration) │ +│ pipeline.py:143-258 │ +│ │ +│ ┌────────────────────────────────────────────────────────────────────────────────┐ │ +│ │ │ │ +│ │ STEP 1: FETCH PAPERS │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │ _fetch_papers() │───▶ CrossRef/arXiv API ───▶ List of Papers │ │ +│ │ │ pipeline.py:260│ │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ STEP 2: APPLY LEARNED PATTERNS │ │ +│ │ ┌─────────────────────────────────────────────────────────┐ │ │ +│ │ │ PromptRefiner.get_enhanced_prompts_for_agents() │ │ │ +│ │ │ │ │ │ +│ │ │ • Retrieves successful patterns from database │ │ │ +│ │ │ • Enhances prompts with learned fixes │ │ │ +│ │ │ • Applies error avoidance patterns │ │ │ +│ │ └─────────────────────────────────────────────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ STEP 3: GENERATE STRATEGY │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │_generate_strategy│───▶ Multi-Agent System ───▶ Strategy Code │ │ +│ │ │ pipeline.py:269 │ │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ STEP 4: VALIDATE & LEARN FROM ERRORS │ │ +│ │ ┌─────────────────────────────────────────────────────────┐ │ │ +│ │ │ _validate_and_learn() pipeline.py:282 │ │ │ +│ │ │ │ │ │ +│ │ │ ┌─────────────◇─────────────┐ │ │ +│ │ │ │ Validation Passed? 
│ │ │ │ +│ │ │ └───────┬───────────┬───────┘ │ │ +│ │ │ Yes │ │ No │ │ +│ │ │ │ ▼ │ │ +│ │ │ │ ┌─────────────────────────┐ │ │ +│ │ │ │ │ SELF-HEALING │ │ │ +│ │ │ │ │ _apply_learned_fixes() │ │ │ +│ │ │ │ │ pipeline.py:302 │ │ │ +│ │ │ │ │ │ │ │ +│ │ │ │ │ • ErrorLearner.analyze │ │ │ +│ │ │ │ │ • Apply suggested_fix │ │ │ +│ │ │ │ │ • Re-validate │ │ │ +│ │ │ │ └────────────┬────────────┘ │ │ +│ │ │ │ │ │ │ +│ │ │ └───────────────┤ │ │ +│ │ └──────────────────────────┼──────────────────────────────┘ │ +│ │ │ │ │ +│ │ ▼ ▼ │ +│ │ STEP 5: BACKTEST │ +│ │ ┌─────────────────┐ │ │ +│ │ │ _backtest() │───▶ QuantConnect MCP ───▶ {sharpe, drawdown, return} │ │ +│ │ │ pipeline.py:322 │ │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ STEP 6: LEARN FROM PERFORMANCE │ │ +│ │ ┌─────────────────────────────────────────────────────────┐ │ │ +│ │ │ │ │ │ +│ │ │ ┌─────────────◇─────────────┐ │ │ │ +│ │ │ │ Sharpe >= min_sharpe? │ │ │ │ +│ │ │ └───────┬───────────┬───────┘ │ │ │ +│ │ │ Yes │ │ No │ │ │ +│ │ │ ▼ ▼ │ │ +│ │ │ ┌───────────────┐ ┌───────────────────────┐ │ │ +│ │ │ │ SUCCESS! │ │ PerformanceLearner │ │ │ +│ │ │ │ │ │ .analyze_poor_perf() │ │ │ +│ │ │ │ identify_ │ │ │ │ │ +│ │ │ │ success_ │ │ • Identify issues │ │ │ +│ │ │ │ patterns() │ │ • Store for learning │ │ │ +│ │ │ └───────────────┘ └───────────────────────┘ │ │ +│ │ │ │ │ │ +│ │ └─────────────────────────────────────────────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ STEP 7: STORE STRATEGY │ │ +│ │ ┌─────────────────┐ │ │ +│ │ │ _store_strategy │───▶ LearningDatabase + Filesystem │ │ +│ │ │ pipeline.py:337 │ │ │ +│ │ └─────────────────┘ │ │ +│ │ │ │ +│ └────────────────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌──────────────────────┐ │ +│ │ _should_continue() │ │ +│ │ pipeline.py:368 │ │ +│ │ │ │ +│ │ • Check max_iters │ │ +│ │ • User prompt (10x) │ │ +│ │ • Check paused flag │ │ +│ └──────────┬───────────┘ │ +│ │ │ +└──────────────────────────────────────┼───────────────────────────────────────────────┘ + │ + ▼ + ┌──────────────────────┐ + │ _generate_final_ │ + │ report() │ + │ pipeline.py:399 │ + │ │ + │ • Session stats │ + │ • Common errors │ + │ • Key learnings │ + │ • Library summary │ + └──────────────────────┘ +``` + +### Learning System Components + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ LEARNING SUBSYSTEM │ +│ autonomous/database.py │ +│ autonomous/learner.py │ +│ autonomous/prompt_refiner.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + +┌──────────────────────────────────────────────────────────────────────┐ +│ LearningDatabase (SQLite) │ +│ database.py │ +│ │ +│ Tables: │ +│ ┌───────────────────┐ ┌───────────────────┐ ┌───────────────────┐ │ +│ │ generated_strategies│ │ error_patterns │ │ success_patterns │ │ +│ │ │ │ │ │ │ │ +│ │ • name │ │ • error_type │ │ • pattern │ │ +│ │ • category │ │ • count │ │ • strategy_type │ │ +│ │ • sharpe_ratio │ │ • fixed_count │ │ • avg_sharpe │ │ +│ │ • max_drawdown │ │ • suggested_fix │ │ • usage_count │ │ +│ │ • code_files │ │ • success_rate │ │ │ │ +│ │ • paper_source │ │ │ │ │ │ +│ └───────────────────┘ └───────────────────┘ └───────────────────┘ │ +│ │ +└──────────────────────────────────────────────────────────────────────┘ + │ │ │ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ ErrorLearner │ │ PerformanceLearner│ │ PromptRefiner │ +│ learner.py │ │ learner.py │ │ prompt_refiner │ +│ │ │ │ 
│ .py │ +│ • analyze_error │ │ • analyze_poor_ │ │ │ +│ • get_common_ │ │ performance │ │ • get_enhanced_ │ +│ errors │ │ • identify_ │ │ prompts_for_ │ +│ • record_fix │ │ success_ │ │ agents │ +│ │ │ patterns │ │ │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ +``` + +--- + +## 6. Library Builder System + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ LIBRARY BUILDER SYSTEM │ +│ library/builder.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + +$ quantcoder library build --comprehensive --max-hours 24 + + ┌──────────────────────┐ + │ LibraryBuilder │ + │ builder.py:31 │ + │ │ + │ • CoverageTracker │ + │ • checkpoint_file │ + │ • STRATEGY_TAXONOMY │ + └──────────┬───────────┘ + │ + ▼ + ┌──────────────────────┐ + │ _display_build_plan()│ + │ │ + │ Shows: │ + │ • Categories to build│ + │ • Target strategies │ + │ • Estimated time │ + └──────────┬───────────┘ + │ + ▼ + ┌──────────────────────┐ + │ Check for checkpoint │ + │ Resume if exists? │ + └──────────┬───────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ CATEGORY BUILD LOOP │ +│ builder.py:103-146 │ +│ │ +│ for priority in ["high", "medium", "low"]: │ +│ for category_name, category_config in priority_cats.items(): │ +│ │ +│ ┌───────────────────────────────────────────────────────────────────────────────┐ │ +│ │ │ │ +│ │ STRATEGY_TAXONOMY (library/taxonomy.py): │ │ +│ │ │ │ +│ │ ┌─────────────────┬─────────────────┬─────────────────┬─────────────────┐ │ │ +│ │ │ MOMENTUM │ MEAN REVERSION │ FACTOR │ VOLATILITY │ │ │ +│ │ │ (high priority)│ (high priority) │ (high priority) │ (medium) │ │ │ +│ │ │ │ │ │ │ │ │ +│ │ │ min_strategies: │ min_strategies: │ min_strategies: │ min_strategies: │ │ │ +│ │ │ 20 │ 15 │ 15 │ 10 │ │ │ +│ │ │ │ │ │ │ │ │ +│ │ │ queries: │ queries: │ queries: │ queries: │ │ │ +│ │ │ - momentum │ - mean reversion│ - value factor │ - volatility │ │ │ +│ │ │ - trend follow │ - pairs trading │ - momentum │ - VIX trading │ │ │ +│ │ │ - crossover │ - stat arb │ - quality │ - options │ │ │ +│ │ └─────────────────┴─────────────────┴─────────────────┴─────────────────┘ │ │ +│ │ │ │ +│ │ ┌─────────────────┬─────────────────┬─────────────────┬─────────────────┐ │ │ +│ │ │ ML-BASED │ EVENT-DRIVEN │ SENTIMENT │ OPTIONS │ │ │ +│ │ │ (medium) │ (medium) │ (low priority) │ (low priority) │ │ │ +│ │ │ │ │ │ │ │ │ +│ │ │ min_strategies: │ min_strategies: │ min_strategies: │ min_strategies: │ │ │ +│ │ │ 10 │ 10 │ 5 │ 5 │ │ │ +│ │ └─────────────────┴─────────────────┴─────────────────┴─────────────────┘ │ │ +│ │ │ │ +│ └───────────────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌───────────────────────────────────────────────────────────────────────────────┐ │ +│ │ _build_category() builder.py:154-217 │ │ +│ │ │ │ +│ │ for query in category_config.queries: │ │ +│ │ for i in range(attempts_per_query): │ │ +│ │ │ │ +│ │ ┌──────────────────────────────────────────────┐ │ │ +│ │ │ _generate_one_strategy() builder.py:219 │ │ │ +│ │ │ │ │ │ +│ │ │ 1. Fetch papers │ │ │ +│ │ │ 2. Get enhanced prompts │ │ │ +│ │ │ 3. Generate strategy (Autonomous Pipeline) │ │ │ +│ │ │ 4. Validate │ │ │ +│ │ │ 5. Backtest │ │ │ +│ │ │ 6. Check Sharpe >= min_sharpe │ │ │ +│ │ │ 7. 
Save to library │ │ │ +│ │ └──────────────────────────────────────────────┘ │ │ +│ │ │ │ +│ │ ┌──────────────────────────────────────────────┐ │ │ +│ │ │ coverage.update() │ │ │ +│ │ │ Save checkpoint after each category │ │ │ +│ │ └──────────────────────────────────────────────┘ │ │ +│ │ │ │ +│ └───────────────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + │ + ▼ + ┌──────────────────────┐ + │ _generate_library_ │ + │ report() │ + │ builder.py:316 │ + │ │ + │ Generates: │ + │ • index.json │ + │ • README.md │ + │ • Per-category stats │ + └──────────────────────┘ + │ + ▼ + ┌──────────────────────┐ + │ OUTPUT STRUCTURE │ + │ │ + │ strategies_library/ │ + │ ├── index.json │ + │ ├── README.md │ + │ ├── momentum/ │ + │ │ ├── Strategy1/ │ + │ │ │ ├── Main.py │ + │ │ │ ├── Alpha.py│ + │ │ │ └── meta.json│ + │ │ └── Strategy2/ │ + │ ├── mean_reversion/ │ + │ └── factor_based/ │ + └──────────────────────┘ +``` + +--- + +## 7. Chat Interface Flow + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ CHAT INTERFACE │ +│ chat.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌──────────────────────────────────┐ + │ │ + │ InteractiveChat (REPL) │ + │ chat.py:27 │ + │ │ + │ ┌────────────────────────────┐ │ + │ │ prompt_toolkit Features: │ │ + │ │ • FileHistory │ │ + │ │ • AutoSuggestFromHistory │ │ + │ │ • WordCompleter │ │ + │ └────────────────────────────┘ │ + │ │ + └──────────────┬───────────────────┘ + │ + ▼ + ┌──────────────────────────────────┐ + │ run() Loop │ + │ chat.py:55 │ + │ │ + │ while True: │ + │ user_input = prompt() │ + └──────────────┬───────────────────┘ + │ + ▼ + ┌────────────────◇────────────────┐ + │ Input Type Detection │ + │ chat.py:69-95 │ + └───────────────┬─────────────────┘ + │ + ┌───────────────────────────┼───────────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Special Command │ │ Tool Command │ │ Natural Language│ +│ │ │ │ │ │ +│ exit, quit │ │ search │ │ "Find articles │ +│ help │ │ download │ │ about trading" │ +│ clear │ │ summarize │ │ │ +│ config │ │ generate │ │ │ +└────────┬────────┘ └────────┬────────┘ └────────┬────────┘ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Handle │ │ execute_tool() │ │ process_natural │ +│ directly │ │ chat.py:129 │ │ _language() │ +│ │ │ │ │ chat.py:191 │ +│ - Show help │ │ tool.execute() │ │ │ +│ - Clear screen │ │ Display result │ │ LLMHandler.chat │ +│ - Exit loop │ │ │ │ Maintain context│ +└─────────────────┘ └─────────────────┘ └─────────────────┘ + │ + ▼ + ┌──────────────────────────────────┐ + │ Rich Console Output │ + │ │ + │ ┌────────────────────────────┐ │ + │ │ Panel (Markdown) │ │ + │ │ Syntax (code highlighting)│ │ + │ │ Status (spinners) │ │ + │ │ Table (search results) │ │ + │ └────────────────────────────┘ │ + └──────────────────────────────────┘ + + + PROGRAMMATIC CHAT + + ┌──────────────────────────────────┐ + │ │ + │ ProgrammaticChat │ + │ chat.py:290 │ + │ │ + │ • auto_approve = True │ + │ • Single process() call │ + │ • No interaction needed │ + │ │ + └──────────────┬───────────────────┘ + │ + ▼ + ┌──────────────────────────────────┐ + │ process(prompt) │ + │ chat.py:307 │ + │ │ + │ 1. Build messages context │ + │ 2. Call LLMHandler.chat() │ + │ 3. 
Return response string │ + └──────────────────────────────────┘ +``` + +--- + +## 8. LLM Provider Abstraction + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ LLM PROVIDER ABSTRACTION │ +│ llm/providers.py │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + ┌──────────────────────┐ + │ LLMProvider │ + │ (Abstract Base) │ + │ │ + │ + chat(messages) │ + │ + get_model_name() │ + └──────────┬───────────┘ + │ + ┌────────────────────────────┼────────────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌───────────────┐ ┌───────────────┐ ┌───────────────┐ +│ OpenAIProvider│ │AnthropicProvider│ │ MistralProvider│ +│ │ │ │ │ │ +│ Models: │ │ Models: │ │ Models: │ +│ • gpt-4o │ │ • claude-3 │ │ • mistral- │ +│ • gpt-4 │ │ • claude-3.5 │ │ large │ +│ • gpt-3.5 │ │ │ │ • codestral │ +└───────────────┘ └───────────────┘ └───────────────┘ + │ │ │ + │ │ │ + └────────────────────────────┼────────────────────────────┘ + │ + ▼ + ┌──────────────────────┐ + │ LLMFactory │ + │ │ + │ + create(provider, │ + │ api_key) │ + │ │ + │ + get_recommended_ │ + │ for_task(task) │ + │ │ + │ Task recommendations:│ + │ • "coding" → Mistral │ + │ • "reasoning" → │ + │ Anthropic │ + │ • "risk" → OpenAI │ + │ • "general" → OpenAI │ + └──────────────────────┘ + + + TASK-BASED LLM SELECTION + (coordinator_agent.py:164-173) + + ┌──────────────────────────────────────────────────────┐ + │ │ + │ # Different LLMs for different agent tasks │ + │ │ + │ code_llm = LLMFactory.create( │ + │ LLMFactory.get_recommended_for_task("coding"), │ ──▶ Mistral/Codestral + │ api_key │ + │ ) │ + │ │ + │ risk_llm = LLMFactory.create( │ + │ LLMFactory.get_recommended_for_task("risk"), │ ──▶ OpenAI GPT-4 + │ api_key │ + │ ) │ + │ │ + └──────────────────────────────────────────────────────┘ +``` + +--- + +## 9. 
Data Flow & Entity Relationships + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ DATA FLOW OVERVIEW │ +└─────────────────────────────────────────────────────────────────────────────────────┘ + + + ┌──────────────────┐ + │ CrossRef │ + │ API │ + └────────┬─────────┘ + │ + ▼ + ┌──────────────────┐ ┌──────────────────┐ + │ ARTICLE │ │ PDF FILE │ + │ │──────────▶│ │ + │ • title │ download │ downloads/ │ + │ • authors │ │ article_N.pdf │ + │ • DOI │ └────────┬─────────┘ + │ • URL │ │ + │ • abstract │ │ extract + └──────────────────┘ ▼ + ┌──────────────────────────┐ + │ EXTRACTED DATA │ + │ │ + │ { │ + │ 'trading_signal': [...],│ + │ 'risk_management': [...]│ + │ } │ + └────────────┬─────────────┘ + │ + │ LLM + ▼ + ┌──────────────────────────┐ + │ SUMMARY │ + │ │ + │ Plain text strategy │ + │ description │ + └────────────┬─────────────┘ + │ + │ Multi-Agent + ▼ + ┌─────────────────────────────────────────────────────────────────────────┐ + │ GENERATED STRATEGY │ + │ │ + │ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │ + │ │ Main.py │ │ Alpha.py │ │ Universe.py │ │ + │ │ │ │ │ │ │ │ + │ │ QCAlgorithm │ │ Alpha signals │ │ Stock filter │ │ + │ │ Initialize() │ │ Entry/exit │ │ Selection │ │ + │ │ OnData() │ │ indicators │ │ criteria │ │ + │ └────────────────┘ └────────────────┘ └────────────────┘ │ + │ │ + │ ┌────────────────┐ ┌────────────────┐ │ + │ │ Risk.py │ │ metadata.json │ │ + │ │ │ │ │ │ + │ │ Position sizing│ │ • sharpe_ratio │ │ + │ │ Stop-loss │ │ • max_drawdown │ │ + │ │ Risk limits │ │ • paper_source │ │ + │ └────────────────┘ └────────────────┘ │ + │ │ + └─────────────────────────────────────────────────────────────────────────┘ + │ + │ store + ▼ + ┌─────────────────────────────────────────────────────────────────────────┐ + │ LEARNING DATABASE (SQLite) │ + │ │ + │ ┌────────────────────┐ ┌────────────────────┐ ┌──────────────────┐ │ + │ │generated_strategies│ │ error_patterns │ │ success_patterns │ │ + │ │ │ │ │ │ │ │ + │ │ • name │ │ • error_type │ │ • pattern │ │ + │ │ • category │ │ • count │ │ • strategy_type │ │ + │ │ • sharpe_ratio │ │ • fixed_count │ │ • avg_sharpe │ │ + │ │ • success │ │ • suggested_fix │ │ │ │ + │ └────────────────────┘ └────────────────────┘ └──────────────────┘ │ + │ │ + └─────────────────────────────────────────────────────────────────────────┘ + + + ENTITY RELATIONSHIPS + + ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ + │ CrossRef │──1:N──│ Article │──1:1──│ PDF │──1:1──│ Extracted│ + │ API │ │ │ │ │ │ Data │ + └──────────┘ └──────────┘ └──────────┘ └────┬─────┘ + │ + 1:1 │ + ▼ + ┌──────────┐ + │ Summary │ + └────┬─────┘ + │ + 1:N │ + ▼ + ┌──────────┐ ┌───────────────────┐ + │ LLM │◀────────────────────────────────────│Generated Strategy │ + │ Providers│ │ (Multi-file) │ + └──────────┘ └─────────┬─────────┘ + │ + 1:1 │ + ▼ + ┌────────────────────┐ + │ Learning Database │ + │ (Feedback Loop) │ + └────────────────────┘ +``` + +--- + +## 10. 
File Structure Reference + +``` +quantcoder-cli/ # Root directory +├── quantcoder/ # Main package (6,919 lines) +│ │ +│ ├── cli.py # CLI entry point (510 lines) +│ │ ├── main() # Line 45 - Click group +│ │ ├── search() # Line 113 +│ │ ├── download() # Line 141 +│ │ ├── summarize() # Line 162 +│ │ ├── generate_code() # Line 189 +│ │ ├── auto_start() # Line 293 +│ │ └── library_build() # Line 414 +│ │ +│ ├── chat.py # Chat interfaces (334 lines) +│ │ ├── InteractiveChat # Line 27 +│ │ │ ├── run() # Line 55 +│ │ │ ├── process_input() # Line 96 +│ │ │ ├── execute_tool() # Line 129 +│ │ │ └── process_natural_language() # Line 191 +│ │ └── ProgrammaticChat # Line 290 +│ │ +│ ├── config.py # Configuration management +│ │ ├── Config # Main config class +│ │ ├── ModelConfig # LLM settings +│ │ ├── UIConfig # Terminal UI +│ │ └── ToolsConfig # Tool settings +│ │ +│ ├── agents/ # Multi-agent system +│ │ ├── base.py # BaseAgent (118 lines) +│ │ │ ├── AgentResult # Line 10 +│ │ │ └── BaseAgent # Line 28 +│ │ ├── coordinator_agent.py # Orchestrator (338 lines) +│ │ │ ├── CoordinatorAgent # Line 14 +│ │ │ ├── _create_execution_plan() # Line 83 +│ │ │ ├── _execute_plan() # Line 153 +│ │ │ └── _validate_and_refine() # Line 257 +│ │ ├── universe_agent.py # Universe.py generation +│ │ ├── alpha_agent.py # Alpha.py generation +│ │ ├── risk_agent.py # Risk.py generation +│ │ └── strategy_agent.py # Main.py integration +│ │ +│ ├── autonomous/ # Self-improving pipeline +│ │ ├── pipeline.py # AutonomousPipeline (486 lines) +│ │ │ ├── AutoStats # Line 26 +│ │ │ ├── AutonomousPipeline # Line 54 +│ │ │ ├── run() # Line 82 +│ │ │ ├── _run_iteration() # Line 143 +│ │ │ └── _generate_final_report() # Line 399 +│ │ ├── database.py # LearningDatabase (SQLite) +│ │ ├── learner.py # ErrorLearner, PerformanceLearner +│ │ └── prompt_refiner.py # Dynamic prompt enhancement +│ │ +│ ├── library/ # Library builder +│ │ ├── builder.py # LibraryBuilder (493 lines) +│ │ │ ├── LibraryBuilder # Line 31 +│ │ │ ├── build() # Line 55 +│ │ │ ├── _build_category() # Line 154 +│ │ │ └── _generate_one_strategy() # Line 219 +│ │ ├── taxonomy.py # STRATEGY_TAXONOMY (13+ categories) +│ │ └── coverage.py # CoverageTracker, checkpointing +│ │ +│ ├── tools/ # Tool system +│ │ ├── base.py # Tool, ToolResult (73 lines) +│ │ ├── article_tools.py # SearchArticles, Download, Summarize +│ │ ├── code_tools.py # GenerateCode, ValidateCode +│ │ └── file_tools.py # ReadFile, WriteFile +│ │ +│ ├── llm/ # LLM abstraction +│ │ └── providers.py # LLMProvider, LLMFactory +│ │ # (OpenAI, Anthropic, Mistral, DeepSeek) +│ │ +│ ├── core/ # Core processing +│ │ ├── processor.py # ArticleProcessor, PDF pipeline +│ │ └── llm.py # LLMHandler (OpenAI) +│ │ +│ ├── execution/ # Parallel execution +│ │ └── parallel_executor.py # ParallelExecutor, AgentTask +│ │ +│ ├── mcp/ # QuantConnect integration +│ │ └── quantconnect_mcp.py # MCP client for validation +│ │ +│ └── codegen/ # Multi-file generation +│ └── multi_file.py # Main, Alpha, Universe, Risk +│ +├── tests/ # Test suite +├── docs/ # Documentation +├── pyproject.toml # Dependencies & config +├── requirements.txt # Current dependencies +└── README.md # Project documentation +``` + +--- + +## Summary + +The **gamma branch** of QuantCoder CLI v2.0 represents a sophisticated multi-agent architecture designed for autonomous, self-improving strategy generation: + +### Key Architectural Features + +| Feature | Description | Source | +|---------|-------------|--------| +| **Tool System** | Pluggable tools with consistent 
execute() interface | `tools/base.py` | +| **Multi-Agent** | Coordinator orchestrates Universe, Alpha, Risk, Strategy agents | `agents/*.py` | +| **Parallel Execution** | AsyncIO + ThreadPool for concurrent agent execution | `execution/parallel_executor.py` | +| **Autonomous Pipeline** | Self-improving loop with error learning | `autonomous/pipeline.py` | +| **Library Builder** | Systematic multi-category strategy generation | `library/builder.py` | +| **LLM Abstraction** | Multi-provider support (OpenAI, Anthropic, Mistral) | `llm/providers.py` | +| **Learning System** | SQLite database tracks errors, fixes, success patterns | `autonomous/database.py` | +| **MCP Integration** | QuantConnect validation and backtesting | `mcp/quantconnect_mcp.py` | + +### Execution Modes + +1. **Interactive** - REPL with command completion and history +2. **Programmatic** - Single-shot queries via `--prompt` +3. **Direct Commands** - Traditional CLI (search, download, generate) +4. **Autonomous** - Self-improving continuous generation +5. **Library Builder** - Comprehensive multi-category strategy library + +### Design Patterns Used + +- **Factory Pattern** - LLMFactory for provider creation +- **Strategy Pattern** - BaseAgent, Tool abstractions +- **Coordinator Pattern** - CoordinatorAgent orchestration +- **Repository Pattern** - LearningDatabase for persistence +- **Builder Pattern** - LibraryBuilder for complex construction +- **Pipeline Pattern** - AutonomousPipeline for iterative refinement diff --git a/docs/ARCHITECTURE_ADAPTATIONS.md b/docs/ARCHITECTURE_ADAPTATIONS.md new file mode 100644 index 00000000..aefa7f67 --- /dev/null +++ b/docs/ARCHITECTURE_ADAPTATIONS.md @@ -0,0 +1,622 @@ +# Architecture Adaptations: From QuantCoder to Research Assistant & Trading Operator + +This document outlines how to adapt the QuantCoder Gamma multi-agent architecture for two new use cases: +1. **Research Assistant** - AI-powered research and analysis tool +2. **Trading Operator** - Automated trading operations system + +--- + +## Source Architecture: QuantCoder Gamma + +### Core Patterns to Reuse + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ REUSABLE COMPONENTS │ +├─────────────────────────────────────────────────────────────────┤ +│ 1. Multi-Agent Orchestration (coordinator_agent.py) │ +│ 2. Parallel Execution Framework (parallel_executor.py) │ +│ 3. LLM Provider Abstraction (llm/providers.py) │ +│ 4. Tool System Base Classes (tools/base.py) │ +│ 5. Learning Database (autonomous/database.py) │ +│ 6. CLI Framework (cli.py with Click + Rich) │ +│ 7. Configuration System (config.py) │ +└─────────────────────────────────────────────────────────────────┘ +``` + +--- + +## 1. 
Research Assistant Architecture + +### Vision +An AI-powered research assistant that can: +- Search and analyze academic papers, patents, and web sources +- Synthesize findings across multiple sources +- Generate reports, summaries, and literature reviews +- Track research threads and maintain context over time + +### Agent Structure + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ USER QUERY │ +│ "Find papers on transformer architectures for time series" │ +└────────────────────────┬────────────────────────────────────────┘ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ RESEARCH COORDINATOR │ +│ • Parse research question │ +│ • Identify source types needed │ +│ • Plan search strategy │ +│ • Orchestrate specialized agents │ +└────────────────────────┬────────────────────────────────────────┘ + │ + ┌────────────────┼────────────────┬──────────────┐ + ▼ ▼ ▼ ▼ +┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ +│ Search │ │ Paper │ │ Patent │ │ Web │ +│ Agent │ │ Agent │ │ Agent │ │ Agent │ +│ │ │ │ │ │ │ │ +│ • CrossRef │ │ • ArXiv │ │ • USPTO │ │ • Google │ +│ • Semantic │ │ • PDF parse │ │ • EPO │ │ • News │ +│ Scholar │ │ • Citations │ │ • WIPO │ │ • Blogs │ +└─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ + │ │ │ │ + └────────────────┴────────────────┴──────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ SYNTHESIS AGENT │ +│ • Cross-reference findings │ +│ • Identify themes and gaps │ +│ • Generate structured summary │ +└────────────────────────┬────────────────────────────────────────┘ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ REPORT AGENT │ +│ • Format output (Markdown, PDF, LaTeX) │ +│ • Create citations │ +│ • Generate bibliography │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Agent Implementations + +```python +# research_assistant/agents/base.py +from abc import ABC, abstractmethod +from dataclasses import dataclass +from typing import Any, List, Optional + +@dataclass +class ResearchResult: + """Result from a research agent.""" + success: bool + sources: List[dict] = None + summary: str = None + error: Optional[str] = None + metadata: dict = None + +class BaseResearchAgent(ABC): + """Base class for research agents.""" + + def __init__(self, llm, config=None): + self.llm = llm + self.config = config + + @property + @abstractmethod + def agent_name(self) -> str: + pass + + @property + @abstractmethod + def source_type(self) -> str: + """Type of sources this agent searches (papers, patents, web).""" + pass + + @abstractmethod + async def search(self, query: str, **kwargs) -> ResearchResult: + """Search for sources matching the query.""" + pass + + @abstractmethod + async def analyze(self, source: dict) -> dict: + """Analyze a single source and extract key information.""" + pass +``` + +```python +# research_assistant/agents/search_agent.py +class SearchAgent(BaseResearchAgent): + """Agent for academic paper search across multiple databases.""" + + agent_name = "SearchAgent" + source_type = "academic" + + def __init__(self, llm, config=None): + super().__init__(llm, config) + self.databases = { + "crossref": CrossRefClient(), + "semantic_scholar": SemanticScholarClient(), + "arxiv": ArxivClient(), + } + + async def search(self, query: str, databases: List[str] = None, + max_results: int = 20) -> ResearchResult: + """Search multiple academic databases in parallel.""" + dbs = databases or 
list(self.databases.keys()) + + # Parallel search across databases + tasks = [ + self._search_database(db, query, max_results) + for db in dbs + ] + results = await asyncio.gather(*tasks) + + # Merge and deduplicate + all_sources = self._merge_results(results) + + return ResearchResult( + success=True, + sources=all_sources, + summary=f"Found {len(all_sources)} papers across {len(dbs)} databases" + ) +``` + +```python +# research_assistant/agents/synthesis_agent.py +class SynthesisAgent(BaseResearchAgent): + """Agent for synthesizing findings across multiple sources.""" + + agent_name = "SynthesisAgent" + source_type = "synthesis" + + async def synthesize(self, sources: List[dict], + research_question: str) -> ResearchResult: + """Synthesize findings from multiple sources.""" + + # Group sources by theme + themes = await self._identify_themes(sources) + + # Generate synthesis for each theme + synthesis_prompt = f""" + Research Question: {research_question} + + Sources: {json.dumps(sources, indent=2)} + + Themes Identified: {themes} + + Provide a comprehensive synthesis that: + 1. Summarizes key findings across sources + 2. Identifies areas of consensus and disagreement + 3. Highlights research gaps + 4. Suggests future research directions + """ + + synthesis = await self.llm.chat(synthesis_prompt) + + return ResearchResult( + success=True, + summary=synthesis, + metadata={"themes": themes, "source_count": len(sources)} + ) +``` + +### Tools for Research Assistant + +```python +# research_assistant/tools/ +class SearchPapersTool(Tool): + """Search academic papers across databases.""" + name = "search_papers" + +class DownloadPDFTool(Tool): + """Download and parse PDF papers.""" + name = "download_pdf" + +class ExtractCitationsTool(Tool): + """Extract and format citations from papers.""" + name = "extract_citations" + +class SummarizePaperTool(Tool): + """Generate LLM-powered paper summaries.""" + name = "summarize_paper" + +class SearchPatentsTool(Tool): + """Search patent databases.""" + name = "search_patents" + +class WebSearchTool(Tool): + """Search web sources with filtering.""" + name = "web_search" + +class GenerateReportTool(Tool): + """Generate formatted research reports.""" + name = "generate_report" + +class ManageBibliographyTool(Tool): + """Manage bibliography in various formats.""" + name = "manage_bibliography" +``` + +### CLI Commands + +```python +@main.group() +def research(): + """Research assistant commands.""" + pass + +@research.command() +@click.argument('query') +@click.option('--sources', default='all', help='Sources to search') +@click.option('--max-results', default=20, help='Maximum results per source') +def search(query, sources, max_results): + """Search for research materials.""" + pass + +@research.command() +@click.argument('topic') +@click.option('--depth', default='standard', help='Research depth') +def investigate(topic, depth): + """Deep investigation of a research topic.""" + pass + +@research.command() +@click.argument('paper_ids', nargs=-1) +@click.option('--format', default='markdown', help='Output format') +def synthesize(paper_ids, format): + """Synthesize findings from multiple papers.""" + pass + +@research.command() +@click.option('--format', default='markdown', help='Report format') +def report(format): + """Generate research report from current session.""" + pass +``` + +--- + +## 2. 
Trading Operator Architecture + +### Vision +An automated trading operations system that can: +- Monitor portfolio positions and P&L in real-time +- Execute trading signals from various sources +- Manage risk and position sizing automatically +- Generate reports and alerts +- Interface with multiple brokers + +### Agent Structure + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ TRADING SIGNALS │ +│ • Strategy signals • Manual orders • Alerts │ +└────────────────────────┬────────────────────────────────────────┘ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ OPERATIONS COORDINATOR │ +│ • Validate signals │ +│ • Check risk limits │ +│ • Route to appropriate agents │ +│ • Log all decisions │ +└────────────────────────┬────────────────────────────────────────┘ + │ + ┌────────────────────┼────────────────────┬──────────────────┐ + ▼ ▼ ▼ ▼ +┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ +│ Position │ │ Risk │ │ Execution │ │ Reporting │ +│ Agent │ │ Agent │ │ Agent │ │ Agent │ +│ │ │ │ │ │ │ │ +│ • Track P&L │ │ • Limits │ │ • Order mgmt │ │ • Daily P&L │ +│ • Holdings │ │ • Drawdown │ │ • Fills │ │ • Positions │ +│ • NAV │ │ • Exposure │ │ • Slippage │ │ • Alerts │ +└──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ + │ │ │ │ + └────────────────┴──────────────────┴────────────────┘ + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ BROKER ADAPTERS │ +│ • Interactive Brokers • Alpaca • TD Ameritrade │ +│ • QuantConnect • Binance • Custom API │ +└─────────────────────────────────────────────────────────────────┘ +``` + +### Agent Implementations + +```python +# trading_operator/agents/base.py +from abc import ABC, abstractmethod +from dataclasses import dataclass +from typing import Any, List, Optional +from enum import Enum + +class OrderSide(Enum): + BUY = "buy" + SELL = "sell" + +class OrderType(Enum): + MARKET = "market" + LIMIT = "limit" + STOP = "stop" + STOP_LIMIT = "stop_limit" + +@dataclass +class Order: + symbol: str + side: OrderSide + quantity: float + order_type: OrderType + limit_price: Optional[float] = None + stop_price: Optional[float] = None + +@dataclass +class Position: + symbol: str + quantity: float + avg_price: float + current_price: float + unrealized_pnl: float + +@dataclass +class OperationResult: + success: bool + order_id: Optional[str] = None + message: str = "" + data: Any = None + +class BaseOperatorAgent(ABC): + """Base class for trading operator agents.""" + + def __init__(self, broker, config=None): + self.broker = broker + self.config = config + + @property + @abstractmethod + def agent_name(self) -> str: + pass +``` + +```python +# trading_operator/agents/position_agent.py +class PositionAgent(BaseOperatorAgent): + """Agent for tracking positions and P&L.""" + + agent_name = "PositionAgent" + + async def get_positions(self) -> List[Position]: + """Get current portfolio positions.""" + raw_positions = await self.broker.get_positions() + return [self._to_position(p) for p in raw_positions] + + async def get_portfolio_value(self) -> dict: + """Get total portfolio value and breakdown.""" + positions = await self.get_positions() + + return { + "total_value": sum(p.quantity * p.current_price for p in positions), + "total_pnl": sum(p.unrealized_pnl for p in positions), + "positions": len(positions), + "long_exposure": sum( + p.quantity * p.current_price for p in positions if p.quantity > 0 + ), + "short_exposure": abs(sum( + p.quantity * 
p.current_price for p in positions if p.quantity < 0
+            )),
+        }
+```
+
+```python
+# trading_operator/agents/risk_agent.py
+class RiskAgent(BaseOperatorAgent):
+    """Agent for risk management and position sizing."""
+
+    agent_name = "RiskAgent"
+
+    def __init__(self, broker, config=None):
+        super().__init__(broker, config)
+        self.limits = config.risk_limits if config else self._default_limits()
+
+    async def check_order(self, order: Order) -> tuple[bool, str]:
+        """Check if order passes risk limits."""
+        portfolio = await self.broker.get_portfolio()
+
+        # Check position concentration
+        if not self._check_concentration(order, portfolio):
+            return False, "Order exceeds position concentration limit"
+
+        # Check total exposure
+        if not self._check_exposure(order, portfolio):
+            return False, "Order exceeds total exposure limit"
+
+        # Check drawdown
+        if not self._check_drawdown(portfolio):
+            return False, "Portfolio drawdown exceeds limit"
+
+        return True, "Order passes all risk checks"
+
+    async def calculate_position_size(self, symbol: str,
+                                      signal_strength: float = 1.0) -> float:
+        """Calculate optimal position size based on risk parameters."""
+        portfolio = await self.broker.get_portfolio()
+        volatility = await self._get_volatility(symbol)
+
+        # Risk-based position sizing
+        risk_per_trade = self.limits.get("risk_per_trade", 0.02)
+        portfolio_value = portfolio["total_value"]
+
+        dollar_risk = portfolio_value * risk_per_trade
+        position_size = dollar_risk / (volatility * signal_strength)
+
+        # Apply limits
+        max_position = portfolio_value * self.limits.get("max_position_pct", 0.10)
+        return min(position_size, max_position)
+```
+
+```python
+# trading_operator/agents/execution_agent.py
+class ExecutionAgent(BaseOperatorAgent):
+    """Agent for order execution and management."""
+
+    agent_name = "ExecutionAgent"
+
+    def __init__(self, broker, risk_agent, config=None):
+        super().__init__(broker, config)
+        self.risk_agent = risk_agent  # injected so execute_order() can run pre-trade checks
+
+    async def execute_order(self, order: Order) -> OperationResult:
+        """Execute a trading order."""
+        # Pre-execution checks
+        risk_ok, risk_msg = await self.risk_agent.check_order(order)
+        if not risk_ok:
+            return OperationResult(success=False, message=risk_msg)
+
+        # Execute with broker
+        try:
+            order_id = await self.broker.submit_order(order)
+            fill = await self._wait_for_fill(order_id, timeout=60)
+
+            return OperationResult(
+                success=True,
+                order_id=order_id,
+                message=f"Order filled: {fill}",
+                data=fill
+            )
+        except Exception as e:
+            return OperationResult(success=False, message=str(e))
+```
+
+### Broker Adapters
+
+```python
+# trading_operator/brokers/base.py
+class BaseBroker(ABC):
+    """Abstract base class for broker adapters."""
+
+    @abstractmethod
+    async def connect(self) -> bool:
+        pass
+
+    @abstractmethod
+    async def get_positions(self) -> List[dict]:
+        pass
+
+    @abstractmethod
+    async def get_portfolio(self) -> dict:
+        pass
+
+    @abstractmethod
+    async def submit_order(self, order: Order) -> str:
+        pass
+
+    @abstractmethod
+    async def cancel_order(self, order_id: str) -> bool:
+        pass
+
+# Implementations for different brokers
+class InteractiveBrokersBroker(BaseBroker): pass
+class AlpacaBroker(BaseBroker): pass
+class QuantConnectBroker(BaseBroker): pass
+class BinanceBroker(BaseBroker): pass
+```
+
+### CLI Commands
+
+```python
+@main.group()
+def operator():
+    """Trading operator commands."""
+    pass
+
+@operator.command()
+def status():
+    """Show current portfolio status."""
+    pass
+
+@operator.command()
+@click.argument('symbol')
+@click.argument('side', type=click.Choice(['buy', 'sell']))
+@click.argument('quantity', type=float)
+def order(symbol, side, 
quantity): + """Place a trading order.""" + pass + +@operator.command() +def positions(): + """List current positions.""" + pass + +@operator.command() +def pnl(): + """Show P&L summary.""" + pass + +@operator.group() +def risk(): + """Risk management commands.""" + pass +``` + +--- + +## 3. Shared Components + +### Project Structure Template + +``` +app_name/ +├── app_name/ +│ ├── __init__.py +│ ├── cli.py # CLI entry point +│ ├── config.py # Configuration management +│ ├── chat.py # Interactive chat mode +│ │ +│ ├── agents/ # Multi-agent system +│ │ ├── __init__.py +│ │ ├── base.py # Base agent class +│ │ ├── coordinator.py # Main orchestrator +│ │ └── [specialized_agents].py +│ │ +│ ├── tools/ # Tool implementations +│ │ ├── __init__.py +│ │ ├── base.py # Base tool class +│ │ └── [domain_tools].py +│ │ +│ ├── execution/ # Parallel execution +│ │ ├── __init__.py +│ │ └── parallel_executor.py +│ │ +│ ├── llm/ # LLM providers +│ │ ├── __init__.py +│ │ └── providers.py +│ │ +│ └── autonomous/ # Self-improving mode +│ ├── __init__.py +│ ├── database.py +│ └── learner.py +│ +├── tests/ +├── docs/ +├── pyproject.toml +└── README.md +``` + +--- + +## 4. Summary Comparison + +| Component | QuantCoder Gamma | Research Assistant | Trading Operator | +|-----------|------------------|-------------------|------------------| +| **Coordinator** | Strategy planning | Research planning | Trade orchestration | +| **Parallel Agents** | Universe, Alpha, Risk | Search, Paper, Patent, Web | Position, Risk, Execution | +| **MCP Integration** | QuantConnect API | Paper databases | Broker APIs | +| **Learning DB** | Strategy errors | Research patterns | Trading patterns | +| **Output** | QC algorithms | Research reports | Trade logs, P&L | + +The gamma architecture provides a solid foundation that can be adapted for any domain requiring: +- Multi-agent orchestration +- Parallel task execution +- LLM-powered analysis +- Domain-specific tool integration +- Self-improving capabilities diff --git a/docs/BRANCH_VERSION_MAP.md b/docs/BRANCH_VERSION_MAP.md deleted file mode 100644 index 7433184f..00000000 --- a/docs/BRANCH_VERSION_MAP.md +++ /dev/null @@ -1,441 +0,0 @@ -# QuantCoder-CLI Branch & Version Map - -**Last Updated**: 2026-01-26 (**DEFAULT BRANCH: GAMMA**) -**Repository**: SL-Mar/quantcoder-cli - -## ⚡ Quick Reference - -QuantCoder has **3 active branches** with **gamma as the default**: - -``` -gamma (2.0) → Default branch - Latest development ⭐ -main (1.0) → Original stable -beta (1.1) → Improved legacy (testing) -``` - ---- - -## 📊 Active Branches Overview - -| Branch | Version | Package | Status | Use Case | -|--------|---------|---------|--------|----------| -| **gamma** ⭐ | 2.0.0-alpha.1 | `quantcoder` | 🚀 Default | Autonomous mode, library builder | -| **main** | 1.0.0 | `quantcli` | 🟢 Legacy Stable | Production, simple workflows | -| **beta** | 1.1.0-beta.1 | `quantcli` | 🧪 Testing | Improved legacy, not tested | - -**Archived**: `feature/enhanced-help-command`, `revert-3-feature/enhanced-help-command` - ---- - -## 🔍 Detailed Branch Information - -### 1️⃣ main → QuantCoder 1.0 (Stable) - -**Branch**: `main` -**Package**: `quantcli` -**Version**: 1.0.0 -**Status**: 🟢 Production stable - -#### Quick Info -```bash -git checkout main -pip install -e . 
-``` - -#### Structure -``` -quantcli/ -├── cli.py # Original CLI -├── processor.py # PDF/NLP processing -├── search.py # Article search -└── utils.py -``` - -#### Features -- ✅ Basic CLI for QuantConnect algorithm generation -- ✅ PDF article processing -- ✅ NLP-based strategy extraction -- ✅ OpenAI integration -- ✅ Simple article search - -#### Commands -```bash -quantcli search "momentum trading" -quantcli download 1 -quantcli generate 1 -``` - -#### Pros/Cons -**Pros**: Stable, proven, simple -**Cons**: No advanced features, basic validation - -#### Who Should Use -- Production environments -- Users needing stability -- Simple single-strategy workflows -- New users learning QuantCoder - ---- - -### 2️⃣ beta → QuantCoder 1.1 (Testing) - -**Branch**: `beta` (renamed from `refactor/modernize-2025`) -**Package**: `quantcli` -**Version**: 1.1.0-beta.1 -**Status**: 🧪 Beta testing (⚠️ not yet tested by maintainers) - -#### Quick Info -```bash -git checkout beta -pip install -e . -``` - -#### Structure -``` -quantcli/ -├── cli.py -├── llm_client.py # NEW: LLM abstraction -├── processor.py -├── qc_validator.py # NEW: QuantConnect validator -├── search.py -└── utils.py - -tests/ # NEW: Test suite -└── __init__.py -``` - -#### Features -All 1.0 features PLUS: -- ✅ **NEW**: Comprehensive testing suite -- ✅ **NEW**: Security improvements -- ✅ **NEW**: Environment configuration -- ✅ **NEW**: LLM client abstraction -- ✅ **NEW**: QuantConnect code validator -- ✅ **NEW**: Better error handling - -#### Commands -```bash -# Same as 1.0 -quantcli search "query" -quantcli generate 1 -``` - -#### Pros/Cons -**Pros**: Better quality, testing, security -**Cons**: Not yet tested in production, same architecture as 1.0 - -#### Who Should Use -- Users wanting improved 1.0 -- Testing/QA contributors -- Gradual migration from 1.0 -- Those needing better validation - -#### Migration from 1.0 -**Difficulty**: Easy -```bash -git checkout beta -pip install -e . -# Same commands, better internals -``` - ---- - -### 3️⃣ gamma → QuantCoder 2.0 (Alpha) - -**Branch**: `gamma` (renamed from `claude/refactor-quantcoder-cli-JwrsM`) -**Package**: `quantcoder` (⚠️ **NEW PACKAGE** - different from `quantcli`) -**Version**: 2.0.0-alpha.1 -**Status**: 🚀 Alpha - cutting edge - -#### Quick Info -```bash -git checkout gamma -pip install -e . 
-``` - -#### Structure -``` -quantcoder/ -├── agents/ # Multi-agent system -│ ├── coordinator_agent.py -│ ├── universe_agent.py -│ ├── alpha_agent.py -│ ├── risk_agent.py -│ └── strategy_agent.py -├── autonomous/ # ⭐ Self-improving mode -│ ├── database.py -│ ├── learner.py -│ ├── prompt_refiner.py -│ └── pipeline.py -├── library/ # ⭐ Library builder -│ ├── taxonomy.py -│ ├── coverage.py -│ └── builder.py -├── codegen/ -│ └── multi_file.py -├── execution/ -│ └── parallel_executor.py -├── llm/ -│ └── providers.py # Multi-LLM support -├── mcp/ -│ └── quantconnect_mcp.py # MCP integration -├── tools/ -│ ├── article_tools.py -│ ├── code_tools.py -│ └── file_tools.py -├── chat.py -├── cli.py # Enhanced CLI -└── config.py - -quantcli/ # Legacy code (kept for reference) -docs/ # Comprehensive documentation -``` - -#### Features - -**Complete rewrite** with revolutionary capabilities: - -**Core Architecture**: -- ✅ Tool-based design (Mistral Vibe CLI inspired) -- ✅ Multi-agent system (6 specialized agents) -- ✅ Parallel execution framework (3-5x faster) -- ✅ MCP integration for QuantConnect -- ✅ Multi-LLM support (Anthropic, Mistral, DeepSeek, OpenAI) - -**🤖 Autonomous Mode** (Self-learning): -- ✅ Learns from compilation errors automatically -- ✅ Performance-based prompt refinement -- ✅ Self-healing code fixes -- ✅ Learning database (SQLite) -- ✅ Continuous improvement over iterations - -**📚 Library Builder Mode**: -- ✅ Build complete strategy library from scratch -- ✅ 10 strategy categories (86 total strategies) -- ✅ Systematic coverage tracking -- ✅ Progress checkpoints & resume capability - -**Advanced Features**: -- ✅ Multi-file generation (Universe, Alpha, Risk, Main) -- ✅ Coordinator agent orchestration -- ✅ Real-time learning and adaptation -- ✅ Interactive and programmatic modes -- ✅ Rich CLI with modern UI - -#### Commands - -**Regular Mode**: -```bash -quantcoder chat -quantcoder search "query" -quantcoder generate 1 -``` - -**Autonomous Mode** (⭐ NEW): -```bash -quantcoder auto start --query "momentum trading" --max-iterations 50 -quantcoder auto status -quantcoder auto report -``` - -**Library Builder** (⭐ NEW): -```bash -quantcoder library build --comprehensive --max-hours 24 -quantcoder library status -quantcoder library resume -quantcoder library export --format zip -``` - -#### Pros/Cons -**Pros**: -- Revolutionary autonomous features -- Self-improving AI -- Can build entire libraries -- Multi-LLM flexibility -- 3-5x faster with parallel execution - -**Cons**: -- Alpha status (active development) -- Breaking changes from 1.x -- Different package name -- Higher resource requirements -- More complex - -#### Who Should Use -- Users wanting cutting-edge features -- Building complete strategy libraries -- Autonomous overnight generation runs -- Research and experimentation -- Advanced multi-agent workflows - -#### Migration from 1.x -**Difficulty**: Moderate - -**Breaking Changes**: -- Package: `quantcli` → `quantcoder` -- Commands: Different CLI interface -- Config: New format -- Dependencies: More requirements - -**Steps**: -```bash -git checkout gamma -pip install -e . 
-quantcoder --help # Learn new commands -``` - ---- - -## 🗺️ Version Evolution Timeline - -``` -2023 November - │ - └─> QuantCoder 1.0 (main) - └─ Original CLI, quantcli package - │ -2025 January - │ - ├─> QuantCoder 1.1 (beta) - │ └─ Improved legacy - │ Testing + Security - │ Same quantcli package - │ - └─> QuantCoder 2.0 (gamma) - └─ Complete rewrite - NEW quantcoder package - ├─ Multi-agent system - ├─ Autonomous mode ⭐ - └─ Library builder ⭐ -``` - ---- - -## 📋 Feature Comparison Matrix - -| Feature | 1.0 (main) | 1.1 (beta) | 2.0 (gamma) | -|---------|------------|------------|-------------| -| **Package** | quantcli | quantcli | quantcoder | -| **Status** | Stable | Testing | Alpha | -| **Basic CLI** | ✅ | ✅ | ✅ | -| **PDF Processing** | ✅ | ✅ | ✅ | -| **Article Search** | ✅ | ✅ | ✅ | -| **Code Generation** | ✅ | ✅ | ✅ | -| **Testing Suite** | ❌ | ✅ | ⚠️ | -| **Security** | Basic | Enhanced | Enhanced | -| **Validation** | Basic | Enhanced | Advanced | -| **Tool Architecture** | ❌ | ❌ | ✅ | -| **Multi-Agent** | ❌ | ❌ | ✅ | -| **Parallel Execution** | ❌ | ❌ | ✅ | -| **MCP Integration** | ❌ | ❌ | ✅ | -| **Multi-LLM** | ❌ | ❌ | ✅ | -| **Autonomous Mode** | ❌ | ❌ | ✅ ⭐ | -| **Library Builder** | ❌ | ❌ | ✅ ⭐ | -| **Self-Learning** | ❌ | ❌ | ✅ ⭐ | - ---- - -## 🎯 Branch Selection Guide - -### Choose **main** (1.0) if: -- ✅ You need stability and proven code -- ✅ Simple single-strategy generation -- ✅ Production environment -- ✅ Learning QuantCoder -- ✅ Low resource requirements - -### Choose **beta** (1.1) if: -- ✅ You want improved 1.0 -- ✅ Better validation needed -- ✅ Willing to test new features -- ✅ Same familiar interface -- ⚠️ Accept untested status - -### Choose **gamma** (2.0) if: -- ✅ You want cutting-edge features -- ✅ Building complete libraries -- ✅ Autonomous overnight runs -- ✅ Multi-agent workflows -- ✅ Self-improving AI -- ⚠️ Accept alpha status - ---- - -## 📚 Documentation by Branch - -### main (1.0) -- Original README -- Legacy documentation - -### beta (1.1) -- Testing guide -- Security documentation -- Validation improvements - -### gamma (2.0) -- [VERSION_COMPARISON.md](./VERSION_COMPARISON.md) - Choose version -- [NEW_FEATURES_V4.md](./NEW_FEATURES_V4.md) - 2.0 overview -- [AUTONOMOUS_MODE.md](./AUTONOMOUS_MODE.md) - Self-learning guide -- [LIBRARY_BUILDER.md](./LIBRARY_BUILDER.md) - Library building -- [ARCHITECTURE_V3_MULTI_AGENT.md](./ARCHITECTURE_V3_MULTI_AGENT.md) - Multi-agent - ---- - -## 🗑️ Archived Branches - -The following branches have been archived (tagged for history): - -- `feature/enhanced-help-command` → Added help docs (reverted) -- `revert-3-feature/enhanced-help-command` → Revert branch - -These are no longer active and can be deleted after tagging. - ---- - -## 🔄 Restructuring Summary - -**What Changed**: -- ✅ `claude/refactor-quantcoder-cli-JwrsM` → `gamma` (2.0) -- ✅ `refactor/modernize-2025` → `beta` (1.1) -- ✅ `main` stays as 1.0 -- ✅ Version numbering: v4.0 → 2.0.0-alpha.1 -- ✅ Clear progression: 1.0 → 1.1 → 2.0 - -**Why**: -- Clear version semantics (1.x = legacy, 2.x = rewrite) -- Proper semantic versioning -- Easy branch selection for users -- Clean repository with 3 active branches - ---- - -## ❓ FAQ - -**Q: Why is 2.0 called "gamma" not "v2"?** -A: Greek letters indicate progression: alpha → beta → gamma. Shows 2.0 is beyond beta (1.1). - -**Q: What happened to v3.0 and v4.0?** -A: Renumbered to 2.0.0-alpha.1 since it's the first major rewrite. - -**Q: Can I use both quantcli and quantcoder?** -A: Yes! Different packages, no conflicts. 
- -**Q: Which branch gets updates?** -A: All three are maintained. Critical bugs fixed in all. New features in 2.0. - -**Q: When will 2.0 be stable?** -A: After alpha → beta → release candidate → 2.0.0 stable. - ---- - -## 📞 Support - -- **Issues**: Open issue and specify branch (1.0/1.1/2.0) -- **Questions**: Specify which version you're using -- **Contributions**: See CONTRIBUTING.md - ---- - -**Last Restructured**: 2025-01-15 -**Maintained by**: SL-MAR -**Repository**: SL-Mar/quantcoder-cli diff --git a/docs/VERSIONS.md b/docs/VERSIONS.md new file mode 100644 index 00000000..2b0b7661 --- /dev/null +++ b/docs/VERSIONS.md @@ -0,0 +1,257 @@ +# QuantCoder CLI - Version Guide + +This document describes the available versions of QuantCoder CLI and their features. + +--- + +## Version Overview + +| Version | Branch | Status | Package | Key Features | +|---------|--------|--------|---------|--------------| +| **v1.0** | `main` | Released | `quantcli` | Legacy, basic features | +| **v1.1** | `beta` | Released | `quantcli` | LLM abstraction, static validator | +| **v2.0** | `develop` | In Development | `quantcoder` | Multi-agent, autonomous | + +--- + +## v1.0 - Legacy Release + +**Tag:** `v1.0` +**Branch:** `main` +**Package:** `quantcli` + +### Features + +- Search academic articles via CrossRef API +- Download PDFs via direct links or Unpaywall API +- Extract trading strategies using NLP (spaCy) +- Generate QuantConnect algorithms using OpenAI GPT-4 +- Tkinter GUI for interactive workflow +- Basic AST code validation with refinement loop + +### Dependencies + +- Python 3.8+ +- OpenAI SDK v0.28 (legacy) +- pdfplumber, spaCy (en_core_web_sm) +- Click CLI framework +- Tkinter (built-in) + +### Installation + +```bash +git checkout v1.0 +pip install -e . +``` + +### Usage + +```bash +quantcli search "momentum trading" +quantcli download 1 +quantcli summarize 1 +quantcli generate-code 1 +quantcli interactive # Launch GUI +``` + +### Limitations + +- Single LLM provider (OpenAI only) +- Legacy OpenAI SDK (v0.28) +- No runtime safety validation +- Single-file code generation + +--- + +## v1.1 - Enhanced Release + +**Tag:** `v1.1` +**Branch:** `beta` +**Package:** `quantcli` + +### What's New in v1.1 + +- **LLM Client Abstraction**: Modern OpenAI SDK v1.x+ support +- **QC Static Validator**: Catches runtime errors before execution +- **Improved Prompts**: Defensive programming patterns in generated code +- **Unit Tests**: Test coverage for LLM client +- **Better Documentation**: Testing guide, changelog + +### Features + +All v1.0 features plus: + +- `LLMClient` class with standardized response handling +- `QuantConnectValidator` for static code analysis: + - Division by zero detection + - Missing `.IsReady` checks on indicators + - `None` value risk detection + - `max()/min()` on potentially None values +- Enhanced code generation prompts with runtime safety requirements +- Token usage tracking in LLM responses + +### Dependencies + +- Python 3.8+ +- OpenAI SDK v1.x+ (modern) +- All v1.0 dependencies + +### Installation + +```bash +git checkout v1.1 +pip install -e . 
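+# Optional check (illustrative): v1.1 requires the modern OpenAI SDK, so
+# confirm that openai>=1.0.0 is what actually got installed.
+python -c "import openai; print(openai.__version__)"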
+``` + +### Usage + +Same as v1.0: + +```bash +quantcli search "mean reversion" +quantcli download 1 +quantcli generate-code 1 +``` + +### Breaking Changes from v1.0 + +- Requires OpenAI SDK v1.x+ (not compatible with v0.28) +- Environment variable `OPENAI_API_KEY` required + +--- + +## v2.0 - Next Generation (In Development) + +**Branch:** `develop` +**Package:** `quantcoder` + +### Major Architectural Changes + +Complete rewrite with enterprise-grade features: + +- **Multi-Agent System**: Specialized agents for different tasks + - `CoordinatorAgent`: Orchestrates workflow + - `UniverseAgent`: Stock selection logic + - `AlphaAgent`: Trading signal generation + - `RiskAgent`: Position sizing and risk management + - `StrategyAgent`: Integration into Main.py + +- **Multi-File Code Generation**: Generates separate files + - `Main.py` - Main algorithm + - `Alpha.py` - Alpha model + - `Universe.py` - Universe selection + - `Risk.py` - Risk management + +- **Autonomous Pipeline**: Self-improving strategy generation + - Error learning and pattern extraction + - Performance-based prompt refinement + - Continuous iteration with quality gates + +- **Library Builder**: Batch strategy generation + - 13+ strategy categories + - Checkpointing and resume + - Coverage tracking + +- **Multi-LLM Support**: Provider abstraction + - OpenAI (GPT-4) + - Anthropic (Claude) + - Mistral + - DeepSeek + +- **Modern CLI**: Rich terminal interface + - Interactive REPL with history + - Syntax highlighting + - Progress indicators + +### Installation (Development) + +```bash +git checkout develop +pip install -e ".[dev]" +``` + +### Usage + +```bash +# Interactive mode +quantcoder + +# Programmatic mode +quantcoder --prompt "Create momentum strategy" + +# Direct commands +quantcoder search "pairs trading" +quantcoder generate 1 + +# Autonomous mode +quantcoder auto start --query "momentum" --max-iterations 50 + +# Library builder +quantcoder library build --comprehensive +``` + +### Status + +🚧 **In Development** - Not ready for production use. 
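+
+### Provider Usage (Illustrative)
+
+The provider abstraction can be exercised directly from Python. The sketch below is illustrative only: it assumes the `develop` package layout (`quantcoder.llm.providers`) and mirrors the `LLMFactory.create()` / async `chat()` interface described above; names and defaults may shift while v2.0 is in development.
+
+```python
+import asyncio
+
+from quantcoder.llm.providers import LLMFactory
+
+
+async def main() -> None:
+    # "ollama" runs models locally and needs no API key; the other providers
+    # (openai, anthropic, mistral, deepseek) take api_key=... instead.
+    llm = LLMFactory.create("ollama", model="llama3.2")
+
+    # Every provider exposes the same async chat() interface.
+    reply = await llm.chat(
+        messages=[{"role": "user", "content": "Outline a simple momentum strategy."}],
+        temperature=0.7,
+        max_tokens=500,
+    )
+    print(reply)
+
+
+asyncio.run(main())
+```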
+ +--- + +## Upgrade Path + +``` +v1.0 ──────▶ v1.1 ──────▶ v2.0 + Minor Major + (safe) (breaking) +``` + +### v1.0 → v1.1 + +- Update OpenAI SDK: `pip install openai>=1.0.0` +- No code changes required for CLI usage +- Benefits: Better error handling, runtime validation + +### v1.1 → v2.0 + +- Package renamed: `quantcli` → `quantcoder` +- New architecture (multi-agent) +- New CLI commands +- Requires migration of custom scripts + +--- + +## Choosing a Version + +| Use Case | Recommended Version | +|----------|---------------------| +| Quick start, simple needs | v1.0 | +| Production with validation | v1.1 | +| Multiple strategies at scale | v2.0 (when ready) | +| Research and experimentation | v2.0 develop | + +--- + +## Version Comparison + +| Feature | v1.0 | v1.1 | v2.0 | +|---------|------|------|------| +| CrossRef Search | ✓ | ✓ | ✓ | +| PDF Download | ✓ | ✓ | ✓ | +| NLP Extraction | ✓ | ✓ | ✓ | +| Code Generation | Single file | Single file | Multi-file | +| AST Validation | ✓ | ✓ | ✓ | +| Runtime Validator | ✗ | ✓ | ✓ + MCP | +| LLM Providers | OpenAI only | OpenAI (v1.x) | Multi-provider | +| Tkinter GUI | ✓ | ✓ | ✗ | +| Rich Terminal | ✗ | ✗ | ✓ | +| Multi-Agent | ✗ | ✗ | ✓ | +| Autonomous Mode | ✗ | ✗ | ✓ | +| Library Builder | ✗ | ✗ | ✓ | +| Self-Learning | ✗ | ✗ | ✓ | + +--- + +## Support + +- **v1.0**: Maintenance only (critical fixes) +- **v1.1**: Active support +- **v2.0**: Development preview diff --git a/docs/VERSION_COMPARISON.md b/docs/VERSION_COMPARISON.md deleted file mode 100644 index 14695e06..00000000 --- a/docs/VERSION_COMPARISON.md +++ /dev/null @@ -1,443 +0,0 @@ -# QuantCoder Version Comparison Guide - -**Last Updated:** 2026-01-26 (**DEFAULT BRANCH: GAMMA**) -**Repository:** SL-Mar/quantcoder-cli - -This guide helps you choose the right version of QuantCoder for your needs. - ---- - -## 🎯 Quick Decision Tree - -``` -Start here → QuantCoder 2.0 (gamma branch - DEFAULT) ⭐ - └─ Want simpler legacy versions? ↓ - -Do you want improved legacy with testing? - └─ YES → QuantCoder 1.1 (beta branch) - └─ NO ↓ - -Do you need the original stable production CLI? - └─ YES → QuantCoder 1.0 (main branch) -``` - ---- - -## 📊 Version Overview - -| Version | Branch | Package | Status | Best For | -|---------|--------|---------|--------|----------| -| **2.0** ⭐ | `gamma` | `quantcoder` | 🚀 Default | Latest development, autonomous features | -| **1.0** | `main` | `quantcli` | ✅ Legacy Stable | Original production, simple workflows | -| **1.1** | `beta` | `quantcli` | 🧪 Testing | Improved legacy, not yet tested | - ---- - -## 🔍 Detailed Comparison - -### QuantCoder 1.0 (Legacy Stable) - -**Branch:** `main` -**Package:** `quantcli` -**Status:** ✅ Original production version -**First Released:** November 2023 - -#### Installation -```bash -git checkout main -pip install -e . 
-``` - -#### Features -- ✅ Basic CLI interface -- ✅ PDF article processing -- ✅ NLP-based strategy extraction -- ✅ OpenAI integration -- ✅ Simple code generation -- ✅ Article search - -#### Pros -- ✅ Stable and proven -- ✅ Simple to use -- ✅ Well-tested in production -- ✅ Low resource requirements - -#### Cons -- ❌ No multi-agent system -- ❌ No autonomous learning -- ❌ No library building -- ❌ Limited testing suite -- ❌ Basic validation only - -#### Use Cases -- Quick single-strategy generation -- Simple article → algorithm workflow -- Production environments requiring stability -- Users new to QuantCoder - -#### Commands -```bash -quantcli search "momentum trading" -quantcli download 1 -quantcli generate 1 -``` - ---- - -### QuantCoder 1.1 (Beta) - -**Branch:** `beta` (from refactor/modernize-2025) -**Package:** `quantcli` -**Status:** 🧪 Beta testing -**Note:** ⚠️ Not yet tested by maintainers - -#### Installation -```bash -git checkout beta -pip install -e . -``` - -#### Features -All 1.0 features PLUS: -- ✅ Comprehensive testing suite -- ✅ Security improvements -- ✅ Environment configuration -- ✅ LLM client abstraction -- ✅ QuantConnect validator -- ✅ Better error handling - -#### Pros -- ✅ Improved code quality -- ✅ Testing coverage -- ✅ Security hardening -- ✅ Better structure -- ✅ Same familiar interface as 1.0 - -#### Cons -- ⚠️ Not yet tested in production -- ❌ Still no multi-agent features -- ❌ Still no autonomous mode -- ❌ Same architecture as 1.0 - -#### Use Cases -- Users wanting improved 1.0 -- Testing new validation features -- Gradual migration from 1.0 -- Contributing to testing efforts - -#### Migration from 1.0 -**Difficulty:** Easy (same commands) -```bash -# No code changes needed -# Just switch branches -git checkout beta -pip install -e . -``` - ---- - -### QuantCoder 2.0 (Default Branch) - -**Branch:** `gamma` (DEFAULT) ⭐ -**Package:** `quantcoder` (NEW - different from quantcli!) -**Status:** 🚀 Primary development branch -**Version:** 2.0.0-alpha.1 - -#### Installation -```bash -git checkout gamma -pip install -e . 
-``` - -#### Features - -**Complete Rewrite** with revolutionary capabilities: - -**Core Architecture:** -- ✅ Tool-based design (Mistral Vibe CLI inspired) -- ✅ Multi-agent system (6 specialized agents) -- ✅ Parallel execution framework -- ✅ MCP integration for QuantConnect -- ✅ Multi-LLM support (Anthropic, Mistral, DeepSeek, OpenAI) - -**🤖 Autonomous Mode (NEW):** -- ✅ Self-learning from compilation errors -- ✅ Performance-based prompt refinement -- ✅ Self-healing code fixes -- ✅ Learning database (SQLite) -- ✅ Continuous improvement over iterations - -**📚 Library Builder Mode (NEW):** -- ✅ Build complete strategy library from scratch -- ✅ 10 strategy categories (86 total strategies) -- ✅ Systematic coverage tracking -- ✅ Progress checkpoints -- ✅ Resume capability - -**Advanced Features:** -- ✅ Multi-file code generation (Universe, Alpha, Risk, Main) -- ✅ Coordinator agent orchestration -- ✅ Real-time learning and adaptation -- ✅ Interactive and programmatic modes -- ✅ Rich CLI with modern UI - -#### Pros -- ✅ Most advanced features -- ✅ Self-improving AI -- ✅ Can build entire libraries autonomously -- ✅ Multiple LLM backends -- ✅ Parallel execution (3-5x faster) -- ✅ Production-ready architecture - -#### Cons -- ⚠️ Alpha status (active development) -- ⚠️ Breaking changes from 1.x -- ⚠️ Different package name (`quantcoder` vs `quantcli`) -- ⚠️ Different commands -- ⚠️ Higher resource requirements -- ⚠️ More complex setup - -#### Use Cases -- Building complete strategy libraries -- Autonomous overnight generation runs -- Advanced multi-agent workflows -- Research and experimentation -- Users wanting cutting-edge AI features - -#### Commands -```bash -# Regular mode -quantcoder chat -quantcoder search "query" -quantcoder generate 1 - -# Autonomous mode (NEW) -quantcoder auto start --query "momentum trading" -quantcoder auto status -quantcoder auto report - -# Library builder (NEW) -quantcoder library build --comprehensive -quantcoder library status -quantcoder library export -``` - -#### Migration from 1.x -**Difficulty:** Moderate (different package, different commands) - -**Breaking Changes:** -- Package name: `quantcli` → `quantcoder` -- Command structure: Different CLI interface -- Configuration: New config format -- Dependencies: More requirements - -**Migration Steps:** -1. Backup your 1.x setup -2. Install 2.0 in separate environment -3. Test with demo mode: `--demo` flag -4. Migrate configurations manually -5. 
Update your workflows - ---- - -## 🗺️ Feature Matrix - -| Feature | 1.0 (main) | 1.1 (beta) | 2.0 (gamma) | -|---------|------------|------------|-------------| -| **Basic CLI** | ✅ | ✅ | ✅ | -| **PDF Processing** | ✅ | ✅ | ✅ | -| **Article Search** | ✅ | ✅ | ✅ | -| **Code Generation** | ✅ | ✅ | ✅ | -| **Testing Suite** | ❌ | ✅ | ⚠️ | -| **Security Hardening** | ❌ | ✅ | ⚠️ | -| **Validation** | Basic | Enhanced | Advanced | -| **Tool-based Architecture** | ❌ | ❌ | ✅ | -| **Multi-Agent System** | ❌ | ❌ | ✅ | -| **Parallel Execution** | ❌ | ❌ | ✅ | -| **MCP Integration** | ❌ | ❌ | ✅ | -| **Multi-LLM Support** | ❌ | ❌ | ✅ | -| **Autonomous Mode** | ❌ | ❌ | ✅ ⭐ | -| **Library Builder** | ❌ | ❌ | ✅ ⭐ | -| **Self-Learning** | ❌ | ❌ | ✅ ⭐ | -| **Multi-file Generation** | ❌ | ❌ | ✅ | - ---- - -## 📈 Performance Comparison - -### Generation Time (Single Strategy) - -| Version | Time | Quality | -|---------|------|---------| -| 1.0 | 5-10 min | Variable | -| 1.1 | 5-10 min | Better validation | -| 2.0 | 8-15 min | Multi-agent, higher quality | - -### Autonomous Generation (50 iterations) - -| Version | Supported | Time | Success Rate | -|---------|-----------|------|--------------| -| 1.0 | ❌ | N/A | N/A | -| 1.1 | ❌ | N/A | N/A | -| 2.0 | ✅ | 5-10 hours | 50% → 85% (improves!) | - -### Library Building (Complete) - -| Version | Supported | Time | Output | -|---------|-----------|------|--------| -| 1.0 | ❌ | Manual | 1 strategy at a time | -| 1.1 | ❌ | Manual | 1 strategy at a time | -| 2.0 | ✅ | 20-30 hours | 86 strategies | - ---- - -## 💰 Cost Estimates (API Calls) - -### Single Strategy Generation - -| Version | API Calls | Cost (Sonnet) | Cost (GPT-4o) | -|---------|-----------|---------------|---------------| -| 1.0 | ~5-10 | $0.10-$0.50 | $0.05-$0.20 | -| 1.1 | ~5-10 | $0.10-$0.50 | $0.05-$0.20 | -| 2.0 | ~30-50 (multi-agent) | $0.50-$2.00 | $0.20-$0.80 | - -### Autonomous Mode (50 iterations) - -| Version | API Calls | Cost (Sonnet) | Cost (GPT-4o) | -|---------|-----------|---------------|---------------| -| 1.0 | N/A | N/A | N/A | -| 1.1 | N/A | N/A | N/A | -| 2.0 | ~400 | $5-$20 | $2-$10 | - -### Library Builder (Complete) - -| Version | API Calls | Cost (Sonnet) | Cost (GPT-4o) | -|---------|-----------|---------------|---------------| -| 1.0 | N/A | N/A | N/A | -| 1.1 | N/A | N/A | N/A | -| 2.0 | ~52,000-60,000 | $50-$175 | $20-$70 | - ---- - -## 🎓 Recommendations - -### For Latest Features (Default) -**→ Use 2.0 (gamma - DEFAULT)** -- Autonomous learning -- Library building -- Multi-agent system -- Cutting edge features - -### For Legacy Production Use -**→ Use 1.0 (main)** -- Original stable version -- Low cost -- Simple workflows -- Known limitations - -### For Testing Improvements -**→ Use 1.1 (beta)** -- Better validation -- Testing suite -- Security improvements -- Help test before release! - -### For Beginners -**→ Start with 2.0, explore legacy if needed** -1. Start with 2.0 (default, most features) -2. Try 1.1 or 1.0 if you need simplicity -3. Learn at your own pace - ---- - -## 🚀 Upgrade Paths - -### 1.0 → 1.1 (Easy) -```bash -git checkout beta -pip install -e . -# Same commands, better internals -``` - -### 1.0 → 2.0 (Moderate) -```bash -git checkout gamma -pip install -e . -# New commands - see migration guide -quantcoder --help -``` - -### 1.1 → 2.0 (Moderate) -```bash -git checkout gamma -pip install -e . 
-# New architecture - read docs -``` - ---- - -## 📚 Documentation by Version - -### Version 1.0 -- Original README -- Basic usage guide -- Legacy documentation - -### Version 1.1 -- Testing guide -- Security improvements -- Validation documentation - -### Version 2.0 -- [NEW_FEATURES_V4.md](./NEW_FEATURES_V4.md) - Overview -- [AUTONOMOUS_MODE.md](./AUTONOMOUS_MODE.md) - Self-learning guide -- [LIBRARY_BUILDER.md](./LIBRARY_BUILDER.md) - Library building guide -- [ARCHITECTURE_V3_MULTI_AGENT.md](./ARCHITECTURE_V3_MULTI_AGENT.md) - Multi-agent details -- [BRANCH_VERSION_MAP.md](./BRANCH_VERSION_MAP.md) - Branch overview - ---- - -## ❓ FAQ - -### Q: Which version should I use? -**A:** Depends on your needs: -- Stability → 1.0 -- Testing improvements → 1.1 -- Advanced features → 2.0 - -### Q: Is 2.0 production-ready? -**A:** It's the default development branch with solid architecture. While marked as alpha for cautious users, it represents the latest and most advanced features. - -### Q: Will 1.0 be maintained? -**A:** Yes, as stable legacy version. Critical bugs will be fixed. - -### Q: Can I run both versions? -**A:** Yes! Different packages (`quantcli` vs `quantcoder`) - no conflicts. - -### Q: How do I report bugs? -**A:** Specify version number in issues: "Bug in 1.0" vs "Bug in 2.0" - -### Q: When will 2.0 be stable? -**A:** 2.0 is already the default branch. The version numbering indicates development stage, but it's the primary branch for active development and new features. - ---- - -## 🎯 Summary Table - -| Criteria | Choose 2.0 | Choose 1.1 | Choose 1.0 | -|----------|------------|------------|------------| -| Default choice | ✅ | ❌ | ❌ | -| Latest features | ✅ | ❌ | ❌ | -| Legacy stability | ⚠️ | ⚠️ | ✅ | -| Simple workflows | ⚠️ | ✅ | ✅ | -| Complex workflows | ✅ | ❌ | ❌ | -| Autonomous generation | ✅ | ❌ | ❌ | -| Library building | ✅ | ❌ | ❌ | -| Active development | ✅ | ⚠️ | ❌ | - ---- - -**Need help choosing?** Open an issue with your use case! 
- -**Last Updated:** 2026-01-26 (**DEFAULT BRANCH: GAMMA**) -**Maintained by:** SL-MAR diff --git a/pyproject.toml b/pyproject.toml index 172ffa31..edf79848 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "quantcoder-cli" -version = "2.1.0-alpha.1" +version = "2.0.0" description = "A modern CLI coding assistant for generating QuantConnect trading algorithms from research articles with AlphaEvolve-inspired evolution" readme = "README.md" requires-python = ">=3.10" @@ -31,6 +31,9 @@ dependencies = [ "pdfplumber>=0.10.0", "spacy>=3.7.0", "openai>=1.0.0", + "anthropic>=0.18.0", + "mistralai>=0.1.0", + "aiohttp>=3.9.0", "python-dotenv>=1.0.0", "pygments>=2.17.0", "rich>=13.7.0", diff --git a/quantcoder/agents/coordinator_agent.py b/quantcoder/agents/coordinator_agent.py index 25cb28d2..f2315175 100644 --- a/quantcoder/agents/coordinator_agent.py +++ b/quantcoder/agents/coordinator_agent.py @@ -132,7 +132,7 @@ async def _create_execution_plan( import json try: plan = json.loads(response) - except json.JSONDecodeError: + except (json.JSONDecodeError, ValueError): # Fallback to default plan plan = { "components": { diff --git a/quantcoder/llm/providers.py b/quantcoder/llm/providers.py index 7f6cda58..cc12a89f 100644 --- a/quantcoder/llm/providers.py +++ b/quantcoder/llm/providers.py @@ -1,5 +1,6 @@ """LLM provider abstraction for multiple backends.""" +import os import logging from abc import ABC, abstractmethod from typing import List, Dict, Optional, AsyncIterator @@ -251,31 +252,28 @@ def get_provider_name(self) -> str: class OllamaProvider(LLMProvider): - """Ollama local LLM provider - Run models locally without API costs.""" + """Ollama provider - Local LLM support without API keys.""" def __init__( self, + api_key: str = "", # Not used, kept for interface compatibility model: str = "llama3.2", - base_url: str = "http://localhost:11434/v1" + base_url: str = None ): """ - Initialize Ollama provider for local LLM inference. + Initialize Ollama provider. Args: - model: Model identifier (e.g., llama3.2, codellama, mistral, qwen2.5-coder) - base_url: Ollama API endpoint (default: http://localhost:11434/v1) + api_key: Not used (kept for interface compatibility) + model: Model identifier (default: llama3.2) + base_url: Ollama server URL (default: http://localhost:11434) """ - try: - from openai import AsyncOpenAI - self.client = AsyncOpenAI( - api_key="ollama", # Required but not used by Ollama - base_url=base_url - ) - self.model = model - self.base_url = base_url - self.logger = logging.getLogger(self.__class__.__name__) - except ImportError: - raise ImportError("openai package not installed. Run: pip install openai") + self.model = model + self.base_url = base_url or os.environ.get( + 'OLLAMA_BASE_URL', 'http://localhost:11434' + ) + self.logger = logging.getLogger(self.__class__.__name__) + self.logger.info(f"Initialized OllamaProvider: {self.base_url}, model={self.model}") async def chat( self, @@ -284,18 +282,50 @@ async def chat( max_tokens: int = 2000, **kwargs ) -> str: - """Generate chat completion with local Ollama model.""" + """Generate chat completion with Ollama.""" try: - response = await self.client.chat.completions.create( - model=self.model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - **kwargs - ) - return response.choices[0].message.content + import aiohttp + except ImportError: + raise ImportError("aiohttp package not installed. 
Run: pip install aiohttp") + + url = f"{self.base_url}/api/chat" + payload = { + "model": self.model, + "messages": messages, + "stream": False, + "options": { + "temperature": temperature, + "num_predict": max_tokens + } + } + + try: + async with aiohttp.ClientSession() as session: + async with session.post(url, json=payload, timeout=aiohttp.ClientTimeout(total=300)) as response: + response.raise_for_status() + result = await response.json() + + # Extract response text + if 'message' in result and 'content' in result['message']: + text = result['message']['content'] + elif 'response' in result: + text = result['response'] + else: + raise ValueError(f"Unexpected response format: {list(result.keys())}") + + self.logger.info(f"Ollama response received ({len(text)} chars)") + return text.strip() + + except aiohttp.ClientConnectorError as e: + error_msg = f"Failed to connect to Ollama at {self.base_url}. Is Ollama running? Error: {e}" + self.logger.error(error_msg) + raise ConnectionError(error_msg) from e + except aiohttp.ClientResponseError as e: + error_msg = f"Ollama API error: {e.status} - {e.message}" + self.logger.error(error_msg) + raise except Exception as e: - self.logger.error(f"Ollama API error: {e}") + self.logger.error(f"Ollama error: {e}") raise def get_model_name(self) -> str: @@ -328,7 +358,7 @@ class LLMFactory: def create( cls, provider: str, - api_key: Optional[str] = None, + api_key: str = "", model: Optional[str] = None, base_url: Optional[str] = None ) -> LLMProvider: @@ -346,8 +376,7 @@ def create( Example: >>> llm = LLMFactory.create("anthropic", api_key="sk-...") - >>> llm = LLMFactory.create("ollama", model="codellama") - >>> llm = LLMFactory.create("ollama", model="qwen2.5-coder", base_url="http://localhost:11434/v1") + >>> llm = LLMFactory.create("ollama", model="llama3.2") """ provider = provider.lower() @@ -388,6 +417,7 @@ def get_recommended_for_task(cls, task_type: str) -> str: "general": "deepseek", # Cost-effective for general tasks "coordination": "anthropic", # Sonnet for orchestration "risk": "anthropic", # Sonnet for nuanced risk decisions + "local": "ollama", # Local LLM, no API key required } return recommendations.get(task_type, "anthropic") diff --git a/reorganize-branches.sh b/reorganize-branches.sh deleted file mode 100755 index 0b329bfb..00000000 --- a/reorganize-branches.sh +++ /dev/null @@ -1,66 +0,0 @@ -#!/bin/bash -# QuantCoder Branch Reorganization Script -# This script creates clean branch names: main, beta, gamma - -set -e - -echo "🔄 QuantCoder Branch Reorganization" -echo "====================================" -echo "" - -# Check if we're in the right repo -if [ ! -d ".git" ]; then - echo "❌ Error: Not in a git repository" - exit 1 -fi - -echo "📍 Current branches:" -git branch -r -echo "" - -# Ask for confirmation -read -p "This will create new branches (main, beta, gamma). Continue? (y/n) " -n 1 -r -echo "" -if [[ ! $REPLY =~ ^[Yy]$ ]]; then - echo "Cancelled." - exit 0 -fi - -echo "" -echo "Step 1: Fetch all branches..." -git fetch --all - -echo "" -echo "Step 2: Create beta branch from refactor/modernize-2025..." -git checkout refactor/modernize-2025 2>/dev/null || git checkout -b beta origin/refactor/modernize-2025 -git checkout -b beta-clean -git push origin beta-clean:beta -echo "✓ Beta branch created" - -echo "" -echo "Step 3: Create gamma branch from current work..." 
-git checkout claude/refactor-quantcoder-cli-JwrsM 2>/dev/null || git checkout -b gamma origin/claude/refactor-quantcoder-cli-JwrsM -git checkout -b gamma-clean -git push origin gamma-clean:gamma -echo "✓ Gamma branch created" - -echo "" -echo "Step 4: Verify main branch exists..." -git checkout main -echo "✓ Main branch ready" - -echo "" -echo "✅ Branch reorganization complete!" -echo "" -echo "New branches:" -echo " • main (v1.0.0) - Stable" -echo " • beta (v1.1.0-beta.1) - Testing" -echo " • gamma (v2.0.0-alpha.1) - Latest" -echo "" -echo "Next steps:" -echo "1. Verify the new branches on GitHub" -echo "2. Update your local git config if needed" -echo "3. Optionally delete old branches:" -echo " git push origin --delete claude/refactor-quantcoder-cli-JwrsM" -echo " git push origin --delete refactor/modernize-2025" -echo "" diff --git a/requirements-legacy.txt b/requirements-legacy.txt deleted file mode 100644 index 80b111e7..00000000 --- a/requirements-legacy.txt +++ /dev/null @@ -1,72 +0,0 @@ -aiohappyeyeballs==2.6.1 -aiohttp==3.11.14 -aiosignal==1.3.2 -annotated-types==0.7.0 -anyio==4.9.0 -attrs==25.3.0 -blis==1.2.0 -catalogue==2.0.10 -certifi==2025.1.31 -cffi==1.17.1 -charset-normalizer==3.4.1 -click==8.1.8 -cloudpathlib==0.21.0 -colorama==0.4.6 -confection==0.1.5 -cryptography==44.0.2 -cymem==2.0.11 -distro==1.9.0 -en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.8.0/en_core_web_sm-3.8.0-py3-none-any.whl#sha256=1932429db727d4bff3deed6b34cfc05df17794f4a52eeb26cf8928f7c1a0fb85 -frozenlist==1.5.0 -h11==0.14.0 -httpcore==1.0.7 -httpx==0.28.1 -idna==3.10 -inquirerpy==0.3.4 -Jinja2==3.1.6 -jiter==0.9.0 -langcodes==3.5.0 -language_data==1.3.0 -marisa-trie==1.2.1 -markdown-it-py==3.0.0 -MarkupSafe==3.0.2 -mdurl==0.1.2 -multidict==6.2.0 -murmurhash==1.0.12 -numpy==2.2.4 -openai==0.28.0 -packaging==24.2 -pdfminer.six==20231228 -pdfplumber==0.11.5 -pfzy==0.3.4 -pillow==11.1.0 -preshed==3.0.9 -prompt_toolkit==3.0.50 -propcache==0.3.0 -pycparser==2.22 -pydantic==2.10.6 -pydantic_core==2.27.2 -Pygments==2.19.1 -pypdfium2==4.30.1 -python-dotenv==1.0.1 --e git+https://github.com/SL-Mar/QuantCoder@805ce90efa33525247fbc8680c3b2bd8839e90e4#egg=quantcli -requests==2.32.3 -rich==13.9.4 -setuptools==77.0.3 -shellingham==1.5.4 -smart-open==7.1.0 -sniffio==1.3.1 -spacy==3.8.4 -spacy-legacy==3.0.12 -spacy-loggers==1.0.5 -srsly==2.5.1 -thinc==8.3.4 -tqdm==4.67.1 -typer==0.15.2 -typing_extensions==4.12.2 -urllib3==2.3.0 -wasabi==1.1.3 -wcwidth==0.2.13 -weasel==0.4.1 -wrapt==1.17.2 -yarl==1.18.3 diff --git a/requirements.txt b/requirements.txt index 17c451a4..f2633a54 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,4 +1,4 @@ -# QuantCoder CLI v3.0 Requirements +# QuantCoder CLI v2.1.0 Requirements # Multi-Agent System with MCP Support # Core Dependencies diff --git a/tests/test_llm_providers.py b/tests/test_llm_providers.py index b5db5076..0514f8f0 100644 --- a/tests/test_llm_providers.py +++ b/tests/test_llm_providers.py @@ -255,57 +255,51 @@ async def test_chat_error(self, mock_client_class): class TestOllamaProvider: """Tests for OllamaProvider class.""" - @patch('openai.AsyncOpenAI') - def test_init_defaults(self, mock_client_class): + def test_init_defaults(self): """Test provider initialization with defaults.""" provider = OllamaProvider() assert provider.model == "llama3.2" - assert provider.base_url == "http://localhost:11434/v1" + assert provider.base_url == "http://localhost:11434" assert provider.get_provider_name() == "ollama" - 
mock_client_class.assert_called_with( - api_key="ollama", - base_url="http://localhost:11434/v1" - ) - @patch('openai.AsyncOpenAI') - def test_init_custom_config(self, mock_client_class): + def test_init_custom_config(self): """Test provider with custom configuration.""" provider = OllamaProvider( model="codellama", - base_url="http://192.168.1.100:11434/v1" + base_url="http://192.168.1.100:11434" ) assert provider.model == "codellama" assert provider.get_model_name() == "codellama" + assert provider.base_url == "http://192.168.1.100:11434" - @patch('openai.AsyncOpenAI') @pytest.mark.asyncio - async def test_chat_success(self, mock_client_class): + async def test_chat_success(self): """Test successful chat completion with local Ollama.""" - mock_client = MagicMock() - mock_response = MagicMock() - mock_response.choices = [MagicMock(message=MagicMock(content="Ollama response"))] - mock_client.chat.completions.create = AsyncMock(return_value=mock_response) - mock_client_class.return_value = mock_client - provider = OllamaProvider() - result = await provider.chat( - messages=[{"role": "user", "content": "Hello"}] - ) - - assert result == "Ollama response" - - @patch('openai.AsyncOpenAI') - @pytest.mark.asyncio - async def test_chat_connection_error(self, mock_client_class): - """Test chat error when Ollama is not running.""" - mock_client = MagicMock() - mock_client.chat.completions.create = AsyncMock( - side_effect=Exception("Connection refused") - ) - mock_client_class.return_value = mock_client + with patch('aiohttp.ClientSession') as mock_session_class: + mock_response = AsyncMock() + mock_response.raise_for_status = MagicMock() + mock_response.json = AsyncMock(return_value={ + "message": {"content": "Ollama response"} + }) + + mock_session = MagicMock() + mock_session.post = MagicMock(return_value=AsyncMock( + __aenter__=AsyncMock(return_value=mock_response), + __aexit__=AsyncMock() + )) + mock_session_class.return_value.__aenter__ = AsyncMock(return_value=mock_session) + mock_session_class.return_value.__aexit__ = AsyncMock() + + result = await provider.chat( + messages=[{"role": "user", "content": "Hello"}] + ) + + assert result == "Ollama response" + + def test_init_with_env_base_url(self, monkeypatch): + """Test provider uses OLLAMA_BASE_URL env var.""" + monkeypatch.setenv('OLLAMA_BASE_URL', 'http://custom:11434') provider = OllamaProvider() - - with pytest.raises(Exception) as exc_info: - await provider.chat(messages=[{"role": "user", "content": "Hello"}]) - assert "Connection refused" in str(exc_info.value) + assert provider.base_url == 'http://custom:11434'
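+
+    def test_init_explicit_base_url_overrides_env(self, monkeypatch):
+        """Illustrative companion check: an explicit base_url should take
+        precedence over OLLAMA_BASE_URL, since __init__ only falls back to
+        the env var when no base_url is passed."""
+        monkeypatch.setenv('OLLAMA_BASE_URL', 'http://env-host:11434')
+        provider = OllamaProvider(base_url='http://explicit:11434')
+        assert provider.base_url == 'http://explicit:11434'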