Conversation

@srebrek srebrek (Collaborator) commented Jan 13, 2026

Working implementation of tool calling for local LLMs.
TODOs are left in about testing with other LLMs. Currently working correctly with gemma3:4b (example 7).

Closes #111
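
For context, here is a minimal sketch of the kind of prompt-based tool-calling loop this PR describes. None of the identifiers below are the PR's actual code; `chatOnce`, `ToolDefinition`, and the prompt wording are stand-ins, and only the behaviors (tool definitions injected once, invalid replies kept in the chat history) are taken from the commit notes further down.

```typescript
// Illustrative sketch only: every identifier below is hypothetical.

interface ToolDefinition {
  name: string;
  description: string;
  invoke: (args: Record<string, unknown>) => Promise<string>;
}

interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// Stand-in for one completion round-trip against a local model
// (e.g. gemma3:4b behind an OpenAI-compatible endpoint).
declare function chatOnce(messages: ChatMessage[]): Promise<string>;

async function runWithTools(
  userPrompt: string,
  tools: ToolDefinition[],
  maxRounds = 5,
): Promise<string> {
  // Tool definitions go into the system prompt once, up front, rather than
  // being re-appended on every loop iteration.
  const toolSpecs = tools.map(t => `- ${t.name}: ${t.description}`).join("\n");
  const messages: ChatMessage[] = [
    {
      role: "system",
      content:
        'To call a tool, reply with ONLY this JSON: {"tool": "<name>", "args": {...}}\n' +
        `Available tools:\n${toolSpecs}`,
    },
    { role: "user", content: userPrompt },
  ];

  for (let round = 0; round < maxRounds; round++) {
    const reply = await chatOnce(messages);
    // Keep every assistant reply (even invalid ones) in the history so the
    // model sees its own mistakes on the next round.
    messages.push({ role: "assistant", content: reply });

    const json = reply.match(/\{[\s\S]*\}/);
    if (!json) return reply; // no tool call: treat this as the final answer

    let feedback: string;
    try {
      const call = JSON.parse(json[0]) as { tool: string; args: Record<string, unknown> };
      const tool = tools.find(t => t.name === call.tool);
      feedback = tool ? await tool.invoke(call.args) : `Unknown tool "${call.tool}".`;
    } catch {
      feedback = "Could not parse the tool call: reply with valid JSON only.";
    }
    messages.push({ role: "user", content: `Tool result: ${feedback}` });
  }
  return "Stopped after the maximum number of tool rounds.";
}
```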

@srebrek srebrek force-pushed the feature/tool-calling-for-local-llm branch from 08f75d2 to 06f6693 on January 14, 2026, 16:44
@wisedev-pstach (Contributor) commented:

After a test, I noticed that the message about tool invocation is also printed by the notification service. That's not what we want: LLMService should be able to omit this message, since it's not a user-friendly message. Maybe some tweaks can be done in a similar way to how we process reasoning models, so we can apply a tokenType to tool messages.
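
Roughly what that could look like, assuming a streamed-token pipeline; the `TokenType` enum, the markers, and the function shapes below are guesses for illustration, not the actual codebase:

```typescript
// Hypothetical token tagging so downstream consumers (like the notification
// service) can filter tool-invocation chatter the same way reasoning output
// is filtered.

enum TokenType { Normal, Reasoning, ToolCall }

interface TaggedToken { type: TokenType; text: string; }

// Assumed marker-based detection: tag everything between tool-call markers
// as ToolCall instead of Normal.
function* tagTokens(raw: Iterable<string>): Generator<TaggedToken> {
  let inToolCall = false;
  for (const text of raw) {
    if (text.includes("<tool_call>")) inToolCall = true;
    yield { type: inToolCall ? TokenType.ToolCall : TokenType.Normal, text };
    if (text.includes("</tool_call>")) inToolCall = false;
  }
}

// The notification service would then surface only user-facing text,
// skipping tool-call (and reasoning) tokens.
function shouldNotify(token: TaggedToken): boolean {
  return token.type === TokenType.Normal;
}
```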

- Prevent tool definition duplication in the system prompt during subsequent loop iterations.
- Refine system prompt to enforce format more effectively.
- Add invalid chat responses during ToolCalling to the chat, so the model sees them in the next prompts.
- Mark first message as processed after processing.
- Remove unnecessary parsing methods.
- Add messages about invalid formatting to the prompt so the model sees what went wrong.
- Format and clean the code.
- OpenAICompatibleService and LLMService use tool classes defined in the Domain.
- Extract JSON-tool parse logic to a helper function (see the sketch after this list).
@srebrek srebrek force-pushed the feature/tool-calling-for-local-llm branch from 6d49789 to 89f3382 on January 27, 2026, 10:39
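
As referenced in the commit list above, here is a sketch of what the extracted JSON-tool parse helper plus the invalid-format feedback might look like; the `ParseResult` shape and the error wording are illustrative assumptions, not the PR's code:

```typescript
// Hypothetical helper: extract a {"tool": ..., "args": ...} call from raw
// model output. On malformed input it returns a diagnostic string that the
// calling loop can append to the chat, so the model sees what went wrong.

interface ToolCall { tool: string; args: Record<string, unknown>; }

type ParseResult =
  | { kind: "call"; call: ToolCall }       // well-formed tool call
  | { kind: "none" }                       // plain answer, no tool call
  | { kind: "invalid"; feedback: string }; // looked like JSON but malformed

function parseToolCall(reply: string): ParseResult {
  // Grab the first {...} span; local models often wrap it in prose or fences.
  const match = reply.match(/\{[\s\S]*\}/);
  if (!match) return { kind: "none" };
  try {
    const parsed = JSON.parse(match[0]);
    if (typeof parsed.tool !== "string" || typeof parsed.args !== "object" || parsed.args === null) {
      return {
        kind: "invalid",
        feedback: 'Invalid tool call: expected {"tool": "<name>", "args": {...}}.',
      };
    }
    return { kind: "call", call: { tool: parsed.tool, args: parsed.args } };
  } catch (e) {
    return { kind: "invalid", feedback: `Invalid tool call JSON: ${(e as Error).message}` };
  }
}
```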

Development

Successfully merging this pull request may close these issues.

No tool calling feature for local models
