## Challenges of Building AI Agents

Main pain points:

- **Long-running & complex workflows**: Orchestrating multi-step tool calls, handling timeouts, and maintaining state consistency across operations
- **Fault tolerance & recovery**: Gracefully handling agent failures, network issues, and LLM API errors without losing conversation state
- **Real-time streaming**: Building infrastructure for streaming responses while handling backpressure and connection management
- **State persistence & isolation**: Managing conversation history and agent memory per user without a complex database setup
- **Observability & debugging**: Understanding agent decision-making, tracking tool usage, and debugging complex behaviors

## How Rivet Solves This

Rivet provides a complete actor-based runtime designed specifically for stateful AI agents, addressing each challenge in turn:

**Long-running & complex workflows**: Actors handle multi-step operations with built-in state management. Tool calls execute within the actor context, maintaining consistency across operations, and workflows can span hours or days without losing state.

```typescript
const aiAgent = actor({
  state: {
    messages: [] as Array<{ role: string; content: string }>,
    // Plain objects serialize cleanly as durable actor state
    toolResults: {} as Record<string, any>,
    workflowStep: ""
  },

  actions: {
    executeWorkflow: async (c, steps: ToolCall[]) => {
      for (const step of steps) {
        c.state.workflowStep = step.id;
        const result = await executeTool(step);
        c.state.toolResults[step.id] = result;

        // State is automatically persisted between steps
        if (result.requiresApproval) {
          await c.sleep(3600000); // Wait for approval
        }
      }
    }
  }
});
```

**Fault tolerance & recovery**: State automatically persists to durable storage. If an agent crashes, it resumes exactly where it left off, with full conversation history and context intact, so network failures and LLM API errors don't cost you progress.

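The recover-without-losing-history property can be sketched outside the actor runtime. Everything below is illustrative, not the Rivet API: `callLLM` is a stand-in stub that fails twice before succeeding, and `chatWithRetry` shows the underlying pattern of mutating conversation state only after a call succeeds, so transient LLM API errors never corrupt history.

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Stand-in for a flaky LLM API: fails twice, then succeeds.
let attempts = 0;
async function callLLM(messages: Message[]): Promise<string> {
  attempts++;
  if (attempts < 3) throw new Error("transient API error");
  return `reply to: ${messages[messages.length - 1].content}`;
}

// Conversation state is only mutated after a successful call,
// so a failed attempt never corrupts or loses history.
async function chatWithRetry(
  state: { messages: Message[] },
  userMessage: string,
  maxRetries = 5
): Promise<string> {
  state.messages.push({ role: "user", content: userMessage });
  for (let attempt = 1; ; attempt++) {
    try {
      const reply = await callLLM(state.messages);
      state.messages.push({ role: "assistant", content: reply });
      return reply;
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Exponential backoff before the next attempt
      await new Promise((r) => setTimeout(r, 2 ** attempt * 10));
    }
  }
}
```

In Rivet the durable state plus automatic restarts give you this behavior without writing the retry bookkeeping yourself.
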
**Real-time streaming**: Built-in WebSocket support with automatic connection management. Stream LLM responses directly to clients without building custom infrastructure; backpressure and reconnection are handled automatically. Learn more about [events](/docs/actors/events).

```typescript
// Stream response chunks directly to connected clients
c.broadcast("stream", { chunk: responseChunk });
```

**State persistence & isolation**: Each agent actor is automatically isolated per user or conversation. State persists without external databases: conversation history, tool results, and context live in actor memory with automatic durability. Read about [actor lifecycle](/docs/actors/lifecycle).

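A rough mental model for per-key isolation, as a plain-TypeScript sketch. This is not the Rivet API: `getOrCreateAgent` and the in-memory `Map` are invented stand-ins for the runtime's keyed actor lookup and durable storage.

```typescript
type AgentState = { messages: string[] };

// Registry mapping a conversation key to its own isolated state.
// In Rivet, this lookup and the state's durability are provided
// by the runtime; a Map stands in here for illustration.
const agents = new Map<string, AgentState>();

function getOrCreateAgent(conversationId: string): AgentState {
  let state = agents.get(conversationId);
  if (!state) {
    state = { messages: [] };
    agents.set(conversationId, state);
  }
  return state;
}
```

Two different conversation IDs resolve to independent histories, while repeated lookups with the same ID return the same state.
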
**Observability & debugging**: Full visibility into agent behavior through structured logging, metrics, and state inspection. Track every tool call, decision point, and state change, and debug production issues with complete audit trails.

```typescript
// Actions and state changes are traced automatically;
// custom structured log entries can be added alongside
c.log("Tool executed", { tool: toolName, durationMs, result });
```

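The audit-trail idea can also be sketched without the runtime; `tracedTool` below is an invented helper, not a Rivet API, that wraps any tool call and records an entry whether it succeeds or fails:

```typescript
type AuditEntry = {
  at: number;         // timestamp (ms since epoch)
  tool: string;       // which tool the agent invoked
  durationMs: number; // how long the call took
  ok: boolean;        // whether it succeeded
};

const auditTrail: AuditEntry[] = [];

// Wrap a tool call so every invocation is recorded,
// successes and failures alike.
async function tracedTool<T>(
  tool: string,
  run: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    const result = await run();
    auditTrail.push({ at: start, tool, durationMs: Date.now() - start, ok: true });
    return result;
  } catch (err) {
    auditTrail.push({ at: start, tool, durationMs: Date.now() - start, ok: false });
    throw err;
  }
}
```

Replaying `auditTrail` after an incident shows exactly which tools ran, in what order, and where things went wrong.
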
**Bonus - No cold starts**: Agents hibernate when idle and wake instantly when needed, keeping conversation context ready without paying for idle compute. See [actions](/docs/actors/actions) for more details.

## Full Example Projects
