A plugin for LLM that provides access to IO Intelligence models with full tool calling support.
- 31 IO Intelligence models including Llama, Qwen, DeepSeek, Gemma, and more
- Complete tool calling support - works with all LLM tools
- Text-based tool call parsing - innovative approach for models that simulate tool calls
- Streaming and non-streaming support
- Vision model support for image analysis
- Embedding models for text embeddings
llm install llm-io-intelligence
Set your IO Intelligence API key:
llm keys set ionet
# Paste your API key when prompted
Or set it as an environment variable:
export IONET_API_KEY="your-api-key-here"   # or IOINTELLIGENCE_API_KEY, as used in the troubleshooting section below
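To confirm the plugin and key are working, a quick prompt against one of the models from the table below is enough:

# Quick smoke test after installation
llm -m llama-3.3-70b "Say hello"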
This plugin provides complete tool calling support for IO Intelligence models. All LLM tools work seamlessly:
# Get LLM version
llm --tool llm_version "What version?" --td
# Get current time
llm --tool llm_time "What time is it?" --td
# Install math tools
llm install llm-tools-simpleeval
# Use mathematical calculations
llm --tool simple_eval "Calculate 15 * 23 + 7" --td
# Install SQLite tools
llm install llm-tools-sqlite
# Query a database
llm -T 'SQLite("database.db")' "Show me all users" --td
# Install JavaScript tools
llm install llm-tools-quickjs
# Execute JavaScript code
llm --tool quickjs "Calculate factorial of 5" --td
# Powerful math calculator (one-liner)
llm -m llama-3.3-70b --functions 'def calc(expression): import math; return eval(expression, {"math": math, "sqrt": math.sqrt, "sin": math.sin, "pi": math.pi})' --td 'Calculate sqrt(144) + sin(pi/2) * 10'
# Simple operations
llm --functions 'def add(a, b): return a + b' --td 'Add 15 and 27'
# String manipulation
llm --functions 'def reverse(text): return text[::-1]' --td 'Reverse the word "hello"'
# Create a functions file
echo 'def calc(expression): import math; return eval(expression, {"math": math, "sqrt": math.sqrt, "sin": math.sin, "pi": math.pi})' > my_functions.py
# Use the functions file
llm -m qwen3-235b --functions my_functions.py --td 'What is the area of a circle with radius 5?'
Model ID | Full Name | Context Length | Tool Support |
---|---|---|---|
`llama-3.3-70b` | meta-llama/Llama-3.3-70B-Instruct | 128K | ✅ Full |
`qwen3-235b` | Qwen/Qwen3-235B-A22B-FP8 | 32K | ✅ Full |
`llama-3.2-90b-vision` | meta-llama/Llama-3.2-90B-Vision-Instruct | 16K | ✅ Full |
`llama-4-maverick-17b` | meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | 430K | ✅ Full |
`llama-3.1-nemotron-70b` | neuralmagic/Llama-3.1-Nemotron-70B-Instruct-HF-FP8-dynamic | 128K | ✅ Full |
`deepseek-r1` | deepseek-ai/DeepSeek-R1 | 128K | ❌ Server config |
`phi-4` | microsoft/phi-4 | 16K | ❌ Server config |
And 24 more models - see the full list with `llm models list`
Vision models:
- `llama-3.2-90b-vision` - Image analysis and understanding
- `qwen2-vl-7b` - Vision-language model

Embedding models:
- `bge-multilingual-gemma2` - Multilingual embeddings
- `mxbai-embed-large-v1` - Large context embeddings
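To try the embedding models, the standard `llm embed` command should work once the plugin is installed, for example (model ID taken from the list above):

# Embed a single string and print the vector as a JSON array
llm embed -m bge-multilingual-gemma2 -c "Hello world"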
llm -m llama-3.3-70b "Explain quantum computing"
# Mathematical calculation
llm -m llama-3.3-70b --tool simple_eval "What's the square root of 12345?" --td
# Database query
llm -m llama-3.3-70b -T 'SQLite("data.db")' "Show top 5 customers by revenue" --td
# Custom function calculation
llm -m qwen3-235b --functions 'def calc(expression): import math; return eval(expression, {"math": math, "sqrt": math.sqrt, "sin": math.sin, "pi": math.pi})' --td 'Calculate sqrt(144) + sin(pi/2) * 10'
llm -m llama-3.2-90b-vision "Describe this image" -a image.jpg
import llm
from llm_tools_sqlite import SQLite

# Get model
model = llm.get_model("llama-3.3-70b")

# Use with tools
response = model.prompt(
    "Show me all users with age > 25",
    tools=[SQLite("database.db")]
)
print(response.text())
# Check tool calls
for tool_call in response.tool_calls():
    print(f"Tool: {tool_call.name}, Args: {tool_call.arguments}")
# Using custom functions
import math

def calc(expression):
    """Powerful calculator with math functions"""
    safe_dict = {"math": math, "sqrt": math.sqrt, "sin": math.sin, "pi": math.pi}
    return str(eval(expression, safe_dict))

# Register as tool and use
response = model.prompt(
    "Calculate the circumference of a circle with radius 10",
    tools=[calc]  # Functions can be used directly as tools
)
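The vision models can also be driven from Python. Here is a minimal sketch assuming LLM's standard attachment API (`llm.Attachment`) and a local image.jpg:

import llm

# Attach a local image for the vision model to analyze
vision_model = llm.get_model("llama-3.2-90b-vision")
response = vision_model.prompt(
    "Describe this image",
    attachments=[llm.Attachment(path="image.jpg")]
)
print(response.text())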
This plugin implements an innovative text-based tool call parsing approach:
- Tool definitions sent - Proper OpenAI-compatible tool schemas are sent to the API
- Model outputs JSON - Models emit tool calls as JSON text like `{"name": "tool_name", "arguments": {}}`
- Text parsing - The plugin detects and parses these JSON patterns in the model output
- Tool execution - Parsed text is converted into actual `ToolCall` objects, which are executed
- Results returned - Tool results are fed back to the model for the final response
This approach bridges the gap between IO Intelligence's text-based responses and LLM's tool calling framework.
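Conceptually, the parsing step is a small JSON scanner over the model's text. The sketch below only illustrates the idea; the `extract_tool_calls` helper and its regex are invented for this example, not the plugin's actual code:

import json
import re

# Matches flat {"name": ..., "arguments": {...}} blobs; deeply nested
# arguments would need a real JSON parser rather than a regex.
TOOL_CALL_RE = re.compile(r'\{"name":\s*"[^"]+",\s*"arguments":\s*\{.*?\}\}', re.DOTALL)

def extract_tool_calls(text):
    """Return (name, arguments) pairs for each tool-call blob found in text."""
    calls = []
    for match in TOOL_CALL_RE.finditer(text):
        try:
            data = json.loads(match.group(0))
            calls.append((data["name"], data.get("arguments", {})))
        except json.JSONDecodeError:
            continue  # skip fragments that merely look like tool calls
    return calls

print(extract_tool_calls('Sure! {"name": "llm_time", "arguments": {}}'))
# [('llm_time', {})]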
✅ Working Tools:
- `llm_version`, `llm_time` - Built-in LLM tools
- `simple_eval` - Mathematical expressions
- `SQLite` - Database queries and schema inspection
- `quickjs` - JavaScript code execution
- `Datasette` - Remote database queries
- Custom functions - Inline Python functions via `--functions`
✅ All tool types supported:
- Simple tools (no parameters)
- Parameterized tools (with arguments)
- Complex toolbox-style tools (multiple methods; see the sketch after this list)
- Async and sync tools
- Inline functions (one-liners and file-based)
- Custom eval functions with math libraries
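For the toolbox style, here is a minimal sketch of what a multi-method tool can look like, assuming LLM's `llm.Toolbox` base class (the `Notes` class and its methods are invented for this example):

import llm

class Notes(llm.Toolbox):
    """Hypothetical toolbox exposing several related methods as tools."""

    def __init__(self):
        self._notes = []

    def add_note(self, text: str) -> str:
        "Store a note and confirm."
        self._notes.append(text)
        return f"Stored note #{len(self._notes)}"

    def list_notes(self) -> str:
        "Return all stored notes."
        return "\n".join(self._notes) or "(no notes yet)"

# Usage with automatic tool execution, e.g.:
# llm.get_model("llama-3.3-70b").chain("Add a note saying hi", tools=[Notes()])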
✅ Tested Models:
- `llama-3.3-70b` - Full tool support, multiple verification calls
- `qwen3-235b` - Full tool support, clean execution with thinking process
- `llama-3.2-90b-vision` - Vision + tool calling combined
- `llama-4-maverick-17b` - Advanced tool calling capabilities
# Set temperature
llm -m llama-3.3-70b -o temperature 0.8 "Creative writing task"
# Set max tokens
llm -m llama-3.3-70b -o max_tokens 1000 "Long explanation needed"
# Enable reasoning content (for compatible models)
llm -m deepseek-r1 -o reasoning_content true "Complex problem"
# Set default model
llm models default llama-3.3-70b
# Now you can omit -m
llm "Hello world"
# Test basic functionality
python debug_tool_execution.py
# Test SQLite integration
python test_sqlite_real.py
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
If tools aren't working:
- Check model support - Use models like `llama-3.3-70b` or `qwen3-235b` that support tool calling
- Verify API key - Ensure `IOINTELLIGENCE_API_KEY` is set correctly
- Use debug mode - Add the `--td` flag to see tool call details
- Check tool installation - Ensure tool plugins are installed
- Parameter names - For custom functions, use parameter names the model expects (e.g., `expression`, not `expr`)
"does not support tools"
- Use a tool-compatible model (see table above)"API key not found"
- Set theIOINTELLIGENCE_API_KEY
environment variable"Chain limit exceeded"
- Model made too many tool calls (safety limit)"unexpected keyword argument"
- Check function parameter names match model expectations"missing required argument"
- Ensure function parameters are properly defined
Apache License 2.0