A collection of apps, web services, and command line tools built around the FlyFun Euro AIP dataset. Each project consumes the shared airports.db SQLite database generated by the rzflight/euro_aip tooling and presents the information in different contexts—iOS/macOS, web, ForeFlight content packs, and LLM integrations.
This project also serves as an experiment in leveraging AI coding tools (Cursor, Claude Code) to their fullest—from architecture design to implementation to testing. See How This Was Built for the approaches used.
- Shared aviation dataset – Uses the `euro_aip` models from the rzflight project to normalize European AIP data, runway details, and border crossing requirements.
- Multiple frontends – SwiftUI client for iOS/macOS, TypeScript web explorer with Leaflet maps, and an AI-powered chatbot for natural language queries.
- LLM integrations – MCP server for Claude/ChatGPT tool use, plus a LangGraph-based aviation agent powering the web chatbot.
- Data enrichment – Parses customs notifications into structured data, computes persona-based relevance scores from reviews and AIP data.
- Pilot-focused exports – Command line utilities generate ForeFlight content packs, KML overlays, and other artifacts for flight planning.
| Path | Description |
|---|---|
| `app/` | SwiftUI app for iOS/macOS. See `app/README.md` for details. |
| `web/` | FastAPI backend + TypeScript frontend with interactive maps and chatbot. See `web/README.md`. |
| `mcp_server/` | FastMCP server exposing airport/route/rules tools to LLMs. See `mcp_server/README.md`. |
| `shared/` | Shared Python modules: aviation agent, filtering, GA friendliness scoring, rules manager. |
| `tools/` | CLI utilities for ForeFlight exports, KML generation, AIP processing, and data enrichment. |
| `configs/` | Configuration files for aviation agent behavior and model selection. |
| `designs/` | Architecture and design documents for major features. |
| `data/` | SQLite databases, Excel sheets, and reference data. |
The system enriches raw airport data with computed metadata:
Notification Parsing – Extracts structured notification requirements from free-text customs/PPR fields:
- Hours of notice required (e.g., "24h", "48h")
- Day-specific rules (weekends, holidays)
- Contact methods and operating hours
See designs/NOTIFICATION_PARSING_DESIGN.md for details.
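The parsing idea can be sketched roughly like this — note that the dataclass fields and regexes below are illustrative assumptions, not the actual implementation (which lives in `shared/` and is specified in the design doc):

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class NotificationRule:
    """Hypothetical structured result for a free-text customs/PPR field."""
    hours_notice: Optional[int] = None
    weekend_only: bool = False

def parse_notification(text: str) -> NotificationRule:
    """Extract notice requirements from free text like '24 HR prior notice'."""
    rule = NotificationRule()
    # Match patterns like "24h", "48 hr", "24 hours prior notice"
    m = re.search(r"(\d{1,3})\s*(?:h|hr|hrs|hours?)\b", text, re.IGNORECASE)
    if m:
        rule.hours_notice = int(m.group(1))
    # Flag day-specific rules such as weekend notice requirements
    rule.weekend_only = bool(re.search(r"weekend|sat(urday)?|sun(day)?", text, re.IGNORECASE))
    return rule

print(parse_notification("Customs on request, 24 HR prior notice, weekends PPR"))
```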
GA Friendliness Scoring – Computes persona-based relevance scores for airports:
- Parses pilot reviews and AIP data to extract feature signals
- Weights features by pilot persona (e.g., IFR touring, VFR day trips)
- Produces quartile-based rankings for airport recommendations
See designs/GA_FRIENDLINESS_DESIGN.md for the scoring algorithm.
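A minimal sketch of persona-weighted scoring — the personas, feature names, and weights below are invented for illustration; the real feature extraction and quartile ranking are specified in the design doc:

```python
# Hypothetical persona weight tables; real ones come from reviews and AIP data.
PERSONA_WEIGHTS = {
    "ifr_touring": {"has_ils": 3.0, "has_avgas": 2.0, "customs": 2.5},
    "vfr_day_trip": {"restaurant": 3.0, "has_avgas": 1.5, "low_landing_fee": 2.0},
}

def score_airport(features: dict[str, bool], persona: str) -> float:
    """Weighted sum of boolean feature signals for one pilot persona."""
    weights = PERSONA_WEIGHTS[persona]
    return sum(w for feat, w in weights.items() if features.get(feat))

features = {"has_ils": True, "has_avgas": True, "customs": False, "restaurant": True}
print(score_airport(features, "ifr_touring"))   # 3.0 + 2.0 = 5.0
```

Scores like these can then be bucketed into quartiles per persona to drive the rankings mentioned above.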
All services use a single `.env` file in the repository root:

```bash
cp env.sample .env
${EDITOR:-vi} .env   # Set WORKING_DIR, API keys, etc.
source .env
```

Key variables:

- `WORKING_DIR` – Absolute path to the repository
- `AIRPORTS_DB` – Path to `airports.db`
- `OPENAI_API_KEY` – Required for LLM features
- `AVIATION_AGENT_CONFIG` – Config file name (see `configs/aviation_agent/`)

See `env.sample` for the full list.
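As an example of how a service might consume these variables (the `data/airports.db` fallback path is an assumption for illustration; services read whatever `.env` defines):

```python
import os
from pathlib import Path

def resolve_db_path(env: dict) -> Path:
    """Resolve the shared database path from environment variables.

    Falls back to WORKING_DIR/data/airports.db when AIRPORTS_DB is unset
    (hypothetical fallback, for illustration only).
    """
    working_dir = Path(env.get("WORKING_DIR", "."))
    return Path(env.get("AIRPORTS_DB", working_dir / "data" / "airports.db"))

# In a real service this would read os.environ after `source .env`
print(resolve_db_path(dict(os.environ)))
```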
The web app includes an AI chatbot specialized in European airports and flying rules. This is an experiment in building a domain-specific assistant that can answer pilot questions accurately while also driving the UI dynamically.
Goals:
- Specialized knowledge – Answer questions about airport facilities, customs requirements, notification rules, and country-specific regulations across Europe.
- UI-driven by conversation – The chatbot can update the map, apply filters, and display airport details based on the discussion—reducing the need for users to navigate complex UI controls. Ask "show me airports with AVGAS near Lyon" and the map responds.
- Continuous improvement – Flexible configuration (`configs/aviation_agent/`) and behavior tests allow iterating on prompts, model selection, and agent logic to improve answer quality over time.
See designs/LLM_AGENT_DESIGN.md for the LangGraph agent architecture and designs/AVIATION_AGENT_CONFIGURATION_DESIGN.md for tuning behavior.
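To picture the conversation-driven UI, the agent can emit structured actions alongside its text answer. Here is a hypothetical payload sketch for a query like "show me airports with AVGAS near Lyon" — the field names are assumptions; the actual format is defined in the design docs:

```python
import json

def make_ui_action(filters: dict, center: tuple[float, float], zoom: int) -> str:
    """Build an illustrative map-update payload the chatbot could stream to the UI."""
    return json.dumps({
        "type": "map_update",
        "filters": filters,                              # e.g. {"fuel": "AVGAS"}
        "center": {"lat": center[0], "lon": center[1]},  # e.g. near Lyon
        "zoom": zoom,
    })

print(make_ui_action({"fuel": "AVGAS"}, (45.76, 4.84), 8))
```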
Note: This repository uses Git LFS to store large binary files (airports.db). Install Git LFS before cloning.
```bash
# Install Git LFS (one-time)
brew install git-lfs                 # macOS
# or: sudo apt-get install git-lfs   # Ubuntu/Debian
git lfs install

git clone https://github.com/downle/flyfun-apps.git
cd flyfun-apps
git lfs pull   # Download LFS files

# Set up Python environment
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e "git+https://github.com/roznet/rzflight@main#egg=euro_aip&subdirectory=euro_aip"
```
```bash
# Configure environment
cp env.sample .env && ${EDITOR:-vi} .env
source .env

# Run the web server
uvicorn web.server.main:app --reload
# API: http://localhost:8000 | UI: http://localhost:8000/static/
```

See designs/CHATBOT_WEBUI_DESIGN.md for chatbot architecture.
```bash
cd mcp_server
python main.py
```

Add to Claude or ChatGPT via their MCP configuration using `mcp-flyfun.json`.
A full Docker Compose setup is provided for production deployment of the web server and MCP server. See designs/DOCKER_DEPLOYMENT.md for configuration and deployment instructions.
Open app/FlyFunEuroAIP/FlyFunEuroAIP.xcodeproj in Xcode. See designs/IOS_APP_DESIGN.md.
```bash
python tools/foreflight.py PointOfEntry -c airports --database $AIRPORTS_DB
```

Detailed design documents are in `designs/`:

- `CHATBOT_WEBUI_DESIGN.md` – LangGraph agent architecture, streaming, UI payload format
- `UI_FILTER_STATE_DESIGN.md` – Frontend state management, Zustand store, visualization engine
- `GA_FRIENDLINESS_DESIGN.md` – Persona-based relevance scoring system
- `NOTIFICATION_PARSING_DESIGN.md` – Customs/PPR notification extraction
- `LLM_AGENT_DESIGN.md` – MCP tools and agent planning
- `IOS_APP_DESIGN.md` – SwiftUI app architecture
- `AVIATION_AGENT_CONFIGURATION_DESIGN.md` – Agent behavior and model configuration
Pull requests and issues are welcome—please include reproducible steps when reporting data discrepancies. Released under the MIT License. For questions, open a GitHub issue or reach out via flyfun.aero.
Happy flying!
This project is partly an experiment in building a full LLM-based application with multiple frontends (iOS, web, CLI) while extensively leveraging modern AI coding tools. The entire codebase—backend, frontends, data pipelines, and this documentation—was developed using Cursor and Claude Code.
1. **Architecture validation via AI commands** – Used Cursor commands to validate architecture choices and implementation decisions as features were being built. The AI would review proposed changes against existing patterns and flag inconsistencies.
2. **Design documents before implementation** – Extensive use of planning and design documents (see `designs/`) to iterate on architecture before writing code. Designs were cross-checked between models (Claude, Cursor, ChatGPT) to surface blind spots and alternative approaches. These documents also serve as context anchors: when starting work on a new feature, loading the relevant design document gives the AI a clean, complete context with all architectural decisions and component interactions, avoiding context drift over long sessions.
3. **MCP servers for framework accuracy** – Used MCP servers providing documentation for specific frameworks (SwiftUI, LangChain, LangGraph) to avoid hallucination on API usage and ensure code follows current best practices rather than outdated patterns.
4. **Test-driven AI development** – Focused on maintaining high-value tests that the AI runs constantly during development. This provides immediate feedback on whether code changes break existing functionality and keeps the AI grounded in working implementations.
5. **Cross-model review cycles** – Frequent reviews where one model critiques another's implementation: Claude would produce review documents of Cursor's code, and vice versa. This adversarial approach catches issues that a single model might miss and produces more robust designs.
6. **Parallel agents via git worktrees** – Used git worktrees to run multiple AI agents in parallel on different components: for instance, one agent on the iOS app, another on the LLM agent, and a third on the web frontend. Each agent was given explicit rules to focus only on its assigned area and to produce request documents when needing functionality from another component (e.g., requesting a shared library feature). This mimics a team workflow, prevents agents from making conflicting changes, and delivered significant gains in development speed and scale.
7. **Living documentation** – Regular use of the AI to review and compare README files and design documents against the actual codebase, keeping documentation in sync with implementation changes. This ensures the design docs remain accurate context anchors rather than drifting into obsolescence.
Using the approaches above, here's what was built in the first 7 weeks of development:
| Language | Lines | Details |
|---|---|---|
| Python | 34,869 | |
| ↳ shared/ | 19,182 | Aviation agent, filtering, GA friendliness |
| ↳ tests/ | 6,219 | Test suite |
| ↳ tools/ | 4,979 | CLI utilities |
| ↳ web/server/ | 3,340 | FastAPI backend |
| ↳ mcp_server/ | 287 | MCP server |
| TypeScript | 7,722 | Web frontend (web/client/ts/) |
| Swift | 10,980 | iOS app |
| Design docs | 16,383 | Architecture documentation |
| Category | Lines |
|---|---|
| Source code | ~53,500 |
| Documentation | ~16,400 |
| Total | ~70,000 |