A real-time, end-to-end system for Search and Rescue operations that suggests context-aware follow-up questions and extracts actionable clues during interviews with missing persons' contacts.
In Search and Rescue (SAR) operations, time pressure and inexperience can lead to missed opportunities during interviews with a missing person's friends and family. This system leverages large language models (LLMs), agentic design patterns, and integration with the IntelliSAR platform to assist interviewers in surfacing more complete and relevant information. It compiles key insights into a structured clue log, ready for human review, refinement, and dissemination to the rest of the team.
Ultimate Goal: Accelerate clue discovery and reduce the likelihood of critical information being overlooked in time-sensitive SAR missions.
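As a rough illustration of the agentic design pattern involved, the sketch below wires a single follow-up-question step into a LangGraph state graph. It is a minimal, hypothetical example: the state fields, node name, and the canned question stand in for the project's actual graph and LLM calls.

```python
# Minimal LangGraph sketch of a follow-up-question step (illustrative only).
from typing import TypedDict

from langgraph.graph import END, StateGraph


class InterviewState(TypedDict):
    transcript: str                  # rolling interview transcript
    suggested_questions: list[str]   # follow-up questions surfaced so far


def suggest_follow_up(state: InterviewState) -> dict:
    # In the real system an LLM call (via LangChain) would generate this;
    # a canned question keeps the sketch runnable offline.
    question = "When and where was the person last seen?"
    return {"suggested_questions": state["suggested_questions"] + [question]}


graph = StateGraph(InterviewState)
graph.add_node("suggest_follow_up", suggest_follow_up)
graph.set_entry_point("suggest_follow_up")
graph.add_edge("suggest_follow_up", END)
app = graph.compile()

result = app.invoke({"transcript": "She mentioned hiking near the ridge.",
                     "suggested_questions": []})
print(result["suggested_questions"])
```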
- Python-based transcription and processing server
- WebRTC real-time audio streaming
- Vosk speech-to-text processing (see the sketch after this list)
- WebSocket communication
- Structured data models with Pydantic
- React + TypeScript user interface
- Vite development server
- TailwindCSS styling
- Real-time WebRTC audio capture and streaming
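As a rough sketch of the Vosk speech-to-text step listed above (the model directory name and the 16 kHz mono PCM input are assumptions; the actual pipeline streams audio over WebRTC rather than reading a file):

```python
# Illustrative Vosk transcription loop: feed 16-bit, 16 kHz mono PCM chunks to a
# recogniser and print partial and final results. The model path is an example;
# use whichever model was downloaded into backend/vosk_models/.
import json
import wave

from vosk import KaldiRecognizer, Model

model = Model("backend/vosk_models/vosk-model-small-en-us-0.15")
recognizer = KaldiRecognizer(model, 16000)
recognizer.SetWords(True)

with wave.open("interview.wav", "rb") as audio:  # placeholder 16 kHz mono recording
    while chunk := audio.readframes(4000):
        if recognizer.AcceptWaveform(chunk):
            print(json.loads(recognizer.Result())["text"])            # final segment
        else:
            print(json.loads(recognizer.PartialResult())["partial"])  # live partial

print(json.loads(recognizer.FinalResult())["text"])  # flush the last segment
```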
- AI/ML: LangGraph, LangChain, Langfuse
- Backend: Python 3.13+, WebRTC (aiortc), WebSockets, Vosk
- Frontend: React 19, TypeScript, Vite, TailwindCSS
- Real-time: WebRTC for audio streaming
- Data: Pydantic for structured models
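For instance, a structured clue entry might look roughly like the Pydantic model below (the field names are illustrative, not the project's actual schema):

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field


class Clue(BaseModel):
    """Illustrative clue record; the real schemas live in the backend models."""

    summary: str                                # short description of the clue
    source_quote: str                           # supporting quote from the transcript
    confidence: float = Field(ge=0.0, le=1.0)   # extraction confidence
    follow_up_question: str | None = None       # optional suggested follow-up
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
```

Structured output along these lines is what gets compiled into the clue log for human review.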
- Python 3.13+
- Node.js (for frontend)
- pnpm (package manager)
- mkcert (for SSL certificates)
- uv (Python package manager)
- just (command runner, used for the certificate task below)
Generate local SSL certificates for HTTPS (required for WebRTC):
```bash
just create-cert
```

Copy `.env.example` to `.env`, then install the dependencies and set up the database:

```bash
cd backend
uv sync
uv run alembic upgrade head
```

```bash
cd frontend
pnpm install
```

The system uses Vosk for speech recognition. Download a model from https://alphacephei.com/vosk/models and place it in `backend/vosk_models/`.

In the root of the repo, install the pre-commit hooks:

```bash
pre-commit install
```

Start the backend:

```bash
cd backend
uv run ./src/main.py
```

Start the frontend:

```bash
cd frontend
pnpm dev
```

The frontend will be available at https://localhost:5173, and the backend WebRTC server runs on the configured port.
To view an interactive diagram of the architecture, run this from the repo root:

```bash
docker run -it --rm -p 8080:8080 -v "$(pwd)/docs:/usr/local/structurizr" structurizr/lite
```

Then open http://localhost:8080.
To run the LLM eval tests, run this in the backend/ directory:

```bash
uv run deepeval test run -m "llm" ./src/interview_helper/ai_analysis/eval
```
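An individual eval marked with the `llm` marker might look roughly like the DeepEval test below (the test name, inputs, and metric choice are illustrative, not the project's actual evals):

```python
# Illustrative DeepEval test, collected by `deepeval test run -m "llm" ...`.
import pytest
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase


@pytest.mark.llm
def test_follow_up_question_is_relevant():
    test_case = LLMTestCase(
        input="Interviewee: 'He said he might take the old quarry trail.'",
        actual_output="Do you know when he planned to start on the quarry trail?",
    )
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```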
To autogenerate a migration with Alembic, run this in the backend/ directory:

```bash
uv run alembic revision --autogenerate -m "<message>"
```
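Review the generated revision (typically under `alembic/versions/`) before applying it with `uv run alembic upgrade head`. An autogenerated file has roughly this shape (the revision IDs, table, and column names below are placeholders):

```python
"""add priority column to clues (illustrative example)

Revision ID: 1a2b3c4d5e6f
Revises: 0f1e2d3c4b5a
"""
import sqlalchemy as sa
from alembic import op

revision = "1a2b3c4d5e6f"
down_revision = "0f1e2d3c4b5a"
branch_labels = None
depends_on = None


def upgrade() -> None:
    op.add_column("clues", sa.Column("priority", sa.Integer(), nullable=True))


def downgrade() -> None:
    op.drop_column("clues", "priority")
```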
