Running multiple tools with Ollamasharp #323
This question is absolutely legit, and thanks for the kind words. Let me begin this way: the primary goal of OllamaSharp is to cover 100% of the Ollama API, providing a type-safe and easy-to-use layer to work with that API. On top of that, there's the Chat class. The way this chat class is implemented today, it automatically executes the tools the AI model wants to have executed in a batch and then feeds the results back to the model, which then has the chance to respond with the information that resulted from the tool call(s). It is important to note here that the model has no chance to iterate further, as the execution ends as soon as the model is done with its message.

Your case reads a bit more agentic: the model should reflect on the tool results and decide whether or not to continue iterating (maybe calling other tools, or changing the arguments and retrying tool calls) until it is satisfied with the results and feels confident enough to stop iterating and write a response. This is more than the current Chat class offers. I would encourage you to build this with a pretty simple loop combined with a system prompt that tells the model to signal some kind of stop word that you can react to in code to stop the iteration.
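To make the loop idea a bit more concrete, here is a minimal sketch of such an agentic loop. It assumes OllamaSharp's `OllamaApiClient` and `Chat.SendAsync`; the model name, the stop word `DONE`, the iteration cap, and the exact constructor/overload signatures are illustrative assumptions, not a definitive implementation.

```csharp
// Minimal sketch of the suggested loop, assuming OllamaSharp's Chat API
// (OllamaApiClient, Chat, SendAsync). Model name, stop word and exact
// signatures are illustrative and may differ per version/setup.
using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3.1" // any tool-capable model
};

var systemPrompt =
    "Use the available tools as often as you need. Reflect on their results " +
    "and keep iterating until you are confident in your answer. " +
    "End your final message with the word DONE.";

var chat = new Chat(ollama, systemPrompt);

var prompt = "Find out which of our servers has the highest CPU load.";
var answer = "";

// Simple agent loop: keep prompting until the model emits the stop word
// or a hard cap of iterations is reached.
for (var turn = 0; turn < 10; turn++)
{
    answer = "";
    await foreach (var token in chat.SendAsync(prompt))
        answer += token;

    if (answer.Contains("DONE", StringComparison.OrdinalIgnoreCase))
        break; // the model signaled it is finished

    // Not done yet: ask the model to reflect on the tool results and continue.
    prompt = "Continue.";
}

Console.WriteLine(answer.Replace("DONE", "").Trim());
```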
It's fun seeing your response now, as I just finished working on and pushing this, which allows sequential tool calling. It allows not only multiple tool calls in a single message but also one tool call after another across multiple messages, if the model wants to do so.
Try updating to 5.4.6; chances are it already implements what you need. With this, you should be able to drop the stop word entirely, as OllamaSharp runs an internal recursion over the messages and will automatically stop the moment the AI model does not request any more tool calls.
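For reference, a rough sketch of what that path could look like: you hand the tools to `SendAsync` and the `Chat` instance executes them and recurses over the messages until the model stops requesting tool calls, so no manual loop or stop word is needed. The `[OllamaTool]` attribute, the generated `GetWeatherTool` class, and the model name are assumptions about the current API surface and may differ by version.

```csharp
// Sketch only: assumes OllamaSharp >= 5.4.6 with source-generated tools.
// Attribute name, generated class name and SendAsync overload are assumptions.
using OllamaSharp;

public static class Tools
{
    // The [OllamaTool] source generator is assumed to emit a GetWeatherTool class.
    [OllamaTool]
    public static string GetWeather(string city) => $"It is 6 °C and cloudy in {city}.";
}

public static class Program
{
    public static async Task Main()
    {
        var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
        {
            SelectedModel = "llama3.1" // any tool-capable model
        };

        var chat = new Chat(ollama);

        // The Chat instance executes requested tools itself and keeps feeding
        // the results back to the model, one round after another, until the
        // model no longer asks for tool calls and writes its final answer.
        await foreach (var token in chat.SendAsync(
            "Compare the weather in Berlin and Oslo.",
            new object[] { new GetWeatherTool() }))
        {
            Console.Write(token);
        }
    }
}
```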