Lhammai CLI

✨ Interact with any LLM from your terminal


Lhammai CLI lets you interact with any LLM directly from your terminal through a simple, intuitive interface. Powered by the any-llm library, it connects seamlessly to a wide range of LLM providers, including OpenAI, Anthropic, and local servers such as Ollama and llamafile. For the full list of supported providers, see the official any-llm documentation.

The name Lhammai comes from the "Lhammas" (Noldorin for "account of tongues"), a work of fictional sociolinguistics written by J. R. R. Tolkien in 1937.

Getting Started

Prerequisites

You'll need a recent version of Python with pip (or uv, if you're installing from source) and access to at least one LLM provider; the examples below use a local Ollama server.

Installation

You can install the package from PyPI using pip (recommended):

pip install "lhammai-cli[ollama]"

From Source

  1. Clone the repository and navigate to the source directory:

    git clone https://github.com/dpoulopoulos/lhammai-cli.git && cd lhammai-cli
  2. Install the dependencies using uv:

    uv sync --group ollama
  3. Activate the virtual environment:

    source .venv/bin/activate
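Alternatively, you can skip step 3 and let uv run commands inside the project's environment for you:

uv run lhammai Hello!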

Note

This installs the necessary dependencies to communicate with a local model via Ollama.

Usage

To begin, you'll need to run the Ollama server. For this example, you can use Docker for a quick setup.

Warning

This approach has limitations, especially on a Mac: Docker Desktop doesn't support GPU passthrough, so it's better to run Ollama as a standalone application there. For more detailed instructions, check the official Ollama documentation.
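As a reference point, a native macOS setup could look like the following, assuming you install Ollama through Homebrew (see the official Ollama documentation for authoritative instructions):

brew install ollama
ollama serve

Then pull and run a model from a second terminal with ollama run gemma3:4b, mirroring the Docker steps below.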

  1. Run the following command to start the Ollama server in a Docker container:

    a. CPU only:

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

    b. Nvidia GPU:

    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  2. Run a model:

    docker exec -it ollama ollama run gemma3:4b
  3. Interact with the model:

    lhammai Hello!
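If lhammai can't reach the model, you can verify that the Ollama server is listening on its default port (11434) by querying its REST API, which lists the locally available models:

curl http://localhost:11434/api/tags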

Tip

Configure the application by creating a .env file in the project root and adding your options:

cp .default.env .env
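The available options are documented in .default.env itself. Purely as an illustration, a configuration pointing at the local Ollama setup above might look like this; LHAMMAI_MODEL and LHAMMAI_API_BASE are hypothetical names, so check .default.env for the real ones:

# Hypothetical example; the actual variable names live in .default.env.
LHAMMAI_MODEL=ollama/gemma3:4b
LHAMMAI_API_BASE=http://localhost:11434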

You can also pipe content to lhammai from standard input. This is useful for analyzing logs, summarizing files, etc.:

cat dev.log | lhammai -p "explain:"
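The same pattern works with any command that writes to standard output, for example drafting a commit message from staged changes with the same -p flag:

git diff --staged | lhammai -p "write a short commit message for this diff:"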

License

See the LICENSE file for details.
