MythForge is a local chat server built on FastAPI and llama-cpp-python. It exposes a simple web UI and a JSON API for interactive story generation and experimentation with llama models.
- Web interface served from the `ui/` directory
- Character-based conversations with persistent memory
- Goals automatically created and updated throughout chat sessions
- System prompts and goal-oriented prompts
- Clean text-based UI (HTML/JS/CSS) designed for PC and iPad use
- Configurable model launch parameters via `model_settings.json`
- `mythforge/` – Python backend implementation
- `ui/` – Static files for the browser UI
- `models/` – place your `.gguf` model files here
- `chats/` – per-chat history storage
- `global_prompts/` – system prompt files
- `server_logs/` – JSON event logs
MythForge requires Python 3.10 or later. Install dependencies using pip:

```
pip install fastapi uvicorn llama-cpp-python pydantic
```

Additional packages may be needed depending on your configuration.
1. Copy or symlink your llama model into the `models/` directory.
2. Launch the server with `RunMythForge.bat`, or run:

   ```
   python -m uvicorn mythforge.main:app --host 0.0.0.0 --port 8000
   ```

3. Open `http://YourLocalIP:8000/MythForgeUI.html` in your browser to access the UI.
MythForge exposes a JSON API for programmatic access. Examples:
- `POST /chat/send` – Send a message to the assistant
- `GET /settings/` – Retrieve current model settings
- `PUT /settings/` – Update generation parameters
- `GET /prompts/` – List all global prompts
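As a sketch of programmatic access, the snippet below calls `POST /chat/send` using only Python's standard library. Note that the `chat_id` and `message` field names are illustrative assumptions, not confirmed by the endpoint list above; check the request models in `mythforge/` for the actual schema.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumes the server from the Quick Start


def build_payload(message: str, chat_id: str = "default") -> dict:
    """Assemble a JSON body for POST /chat/send.

    NOTE: these field names are assumptions for illustration; adjust
    them to match MythForge's real request schema.
    """
    return {"chat_id": chat_id, "message": message}


def send_message(message: str, chat_id: str = "default") -> dict:
    """Send a chat message and return the decoded JSON response."""
    body = json.dumps(build_payload(message, chat_id)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/send",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# With a running server, a call might look like:
# reply = send_message("Tell me a story about a dragon.")
```

The same pattern applies to the settings endpoints: issue a `GET` or `PUT` against `/settings/` with a JSON body of the parameters to change.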
Model parameters can be changed by editing `model_settings.json`. Global prompts added under `global_prompts/` become available in the UI. Logs are stored under `server_logs/`.
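A hypothetical `model_settings.json` might look like the following. The exact keys depend on MythForge's settings schema, so treat these field names as assumptions; they mirror common llama-cpp-python launch parameters:

```json
{
  "model_path": "models/your-model.gguf",
  "n_ctx": 4096,
  "n_gpu_layers": 0,
  "temperature": 0.8,
  "top_p": 0.95
}
```

Restart the server (or update via `PUT /settings/`) for changes to take effect.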

