This project demonstrates how to use a Large Language Model (LLM) to explain and reason about robot navigation paths, integrating environmental context and user questions.
- Python 3.8+ (recommended: 3.11)
- Ollama (local LLM server)
- llama3.2 model (or another compatible model installed in Ollama)
- make (optional, for easier automation)
- git (for cloning the repository)
- Clone the repository:

  ```bash
  git clone https://github.com/Vlor999/HCI.git
  cd HCI
  ```
- Install Ollama and the LLM model:
  - Download and install Ollama for your OS (Windows, macOS, or Linux).
  - Start Ollama (if it is not already running):

    ```bash
    ollama serve
    ```

  - Pull the model (e.g., llama3.2):

    ```bash
    ollama pull llama3.2
    ```

  A minimal Python sketch of querying this local server is shown after the setup steps below.
- Initialize the project structure and install Python dependencies:

  ```bash
  make init
  make install
  ```

  This will:
  - Create necessary folders (`src/`, `tests/`, `data/`, `log/`, etc.)
  - Install Python dependencies in a virtual environment
- (Optional) Format the code:

  ```bash
  make format
  ```
- Run the project:

  ```bash
  make run
  ```

  or manually:

  ```bash
  .venv/bin/python main.py
  ```
- Run the tests:

  ```bash
  make test
  ```
- (Optional) Run coverage and view the HTML report:

  ```bash
  make coverage
  # Then open htmlcov/index.html in your browser
  ```
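With Ollama serving and the model pulled, the program queries the model locally. The following is a minimal, illustrative sketch only (the project's real client code lives in `src/`); it assumes Ollama's standard REST endpoint `POST http://localhost:11434/api/generate` and the `llama3.2` model:

```python
# Illustrative sketch: query a locally running Ollama server.
# This is not the project's actual client code.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def ask_llm(prompt: str, model: str = "llama3.2") -> str:
    """Send a single prompt to Ollama and return the complete response text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete JSON reply instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_llm("Explain why a robot might avoid a muddy segment of its path."))
```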
- When running, the program will display the current robot path and context.
- You can ask multiple questions about the path and its conditions.
- Type `exit` or `quit` to end the session.
- After exiting, a Markdown log of the conversation will be saved in the `log/` directory.
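The question/answer loop and Markdown logging described above could be sketched roughly as follows; the function name `ask_llm` and the log file naming are illustrative assumptions, not the project's actual API (see `src/io_console.py` and `src/robotPathExplanation.py` for the real implementation):

```python
# Illustrative sketch of the interactive loop and Markdown logging described above.
from datetime import datetime
from pathlib import Path

def run_session(ask_llm):
    """Ask questions until the user types 'exit' or 'quit', then save a Markdown log."""
    transcript = ["# Conversation log", ""]
    while True:
        question = input("Question about the path (exit/quit to stop): ").strip()
        if question.lower() in {"exit", "quit"}:
            break
        answer = ask_llm(question)  # e.g. the ask_llm() sketch shown earlier
        print(answer)
        transcript += [f"**Q:** {question}", "", f"**A:** {answer}", ""]

    log_dir = Path("log")
    log_dir.mkdir(exist_ok=True)
    log_file = log_dir / f"session_{datetime.now():%Y%m%d_%H%M%S}.md"
    log_file.write_text("\n".join(transcript), encoding="utf-8")
    print(f"Conversation saved to {log_file}")
```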
```text
src/              # Source code (robotPathExplanation.py, path.py, io_console.py, etc.)
tests/            # Unit tests
data/             # Example path data (JSON)
log/              # Conversation logs (Markdown)
doc/              # Documentation and roadmap
evaluation/       # Evaluation scripts and results
Makefile          # Automation commands
requirements.txt
```
- You can edit `data/paths.json` to add or modify path scenarios.
- The LLM model name can be changed in the code if you use a different one.
- The `.gitignore` file ensures that data, logs, and virtual environments are not committed.
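After editing `data/paths.json`, a quick way to confirm it still parses is a plain `json.load`; this is a generic check assuming standard JSON, since the exact scenario schema is defined by the project's existing file:

```python
# Quick sanity check that data/paths.json is still valid JSON after editing.
# The expected schema is defined by the project; this only verifies parsing.
import json
from pathlib import Path

path_file = Path("data/paths.json")
with path_file.open(encoding="utf-8") as f:
    scenarios = json.load(f)  # raises json.JSONDecodeError if the file is malformed

print(f"Loaded {path_file}: top-level type is {type(scenarios).__name__}")
```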
- Ollama not running: Make sure `ollama serve` is active and the model is pulled.
- Port conflicts: Only one Ollama server should run at a time on port 11434.
- Python errors: Ensure you are using the virtual environment (`.venv`).
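As a quick diagnostic (not part of the project), you can check whether an Ollama server is reachable on port 11434 and list the pulled models via Ollama's standard `GET /api/tags` endpoint:

```python
# Generic diagnostic: is Ollama reachable on port 11434, and which models are pulled?
import json
import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = [m["name"] for m in json.loads(resp.read()).get("models", [])]
        print("Ollama is running. Pulled models:", ", ".join(models) or "(none)")
except urllib.error.URLError:
    print("Ollama is not reachable; start it with `ollama serve`.")
```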
For more details, see the documentation in `doc/` or the comments in the source files.
This project is licensed under the MIT License.