
Conversation

@leehuwuj
Collaborator

@leehuwuj leehuwuj commented May 2, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a new workflow engine for the server, replacing the previous engine.
  • Refactor

    • Unified and streamlined workflow event handling and streaming logic for improved maintainability.
    • Updated event definitions and input structures for workflows.
    • Simplified type usage and removed deprecated types and dependencies.
  • Chores

    • Updated ignore patterns to exclude Python-related files and directories from formatting and linting.
    • Adjusted dependencies to use the new workflow package.

@changeset-bot

changeset-bot bot commented May 2, 2025

🦋 Changeset detected

Latest commit: 352d40b

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package
Name Type
@llamaindex/server Minor


@coderabbitai

coderabbitai bot commented May 2, 2025

Walkthrough

This update migrates the server workflow engine from "@llama-flow/core" to "@llamaindex/workflow", refactors event and workflow handling to align with the new engine, and updates related type definitions, stream processing utilities, and dependencies. It also improves ignore patterns for Python artifacts and virtual environments in development tooling configuration files.

Changes

File(s) Change Summary
.changeset/smart-clocks-guess.md Documents the switch to a new workflow engine and deprecation of the old engine for "@llamaindex/server".
.prettierignore, eslint.config.mjs Added ignore patterns for Python directories, caches, and virtual environments.
packages/server/package.json Replaces "@llama-flow/core" dependency with "@llamaindex/workflow".
packages/server/src/events.ts Refactors event classes to factory-based event constants using workflowEvent. Updates event creation logic.
packages/server/src/handlers/chat.ts Refactors chat handler to use typed workflow input, supports aborting on disconnect, and streams output with new data stream utility.
packages/server/src/types.ts Removes ServerWorkflow type, updates WorkflowFactory to use new Workflow type from "@llamaindex/workflow".
packages/server/src/utils/workflow.ts Refactors runWorkflow to a unified workflow execution model, replaces custom streaming logic with a transform pipeline.
packages/server/src/utils/gen-ui.ts Updates imports to use "@llamaindex/workflow" and changes event collection logic to use stream.toArray().
packages/server/src/index.ts Removes export of toStreamGenerator.
packages/server/src/utils/stream.ts Adds toDataStream utility to convert workflow event streams to streaming responses, introduces StreamCallbacks interface.
packages/server/src/utils/suggestion.ts Updates streaming interface type, renames parameters, and exports generateNextQuestions function.
packages/server/src/utils/request.ts Changes pipeStreamToResponse to accept ReadableStream instead of Response.
packages/server/src/server.ts Changes workflowFactory type annotation to use new Workflow type.
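The new `toDataStream` utility above can be pictured with a small, self-contained sketch. This is illustrative only: the `WorkflowEventLike` shape and the `"agentStream"` type tag are assumptions for demonstration, not the actual event types used by `@llamaindex/workflow` or the PR's real implementation.

```typescript
// Illustrative sketch: pipe workflow events through a TransformStream,
// emitting text deltas as plain chunks and other events as annotations.
type WorkflowEventLike = { type: string; data: unknown };

function toDataStreamSketch(
  events: ReadableStream<WorkflowEventLike>,
): ReadableStream<string> {
  return events.pipeThrough(
    new TransformStream<WorkflowEventLike, string>({
      transform(event, controller) {
        if (event.type === "agentStream") {
          // Text deltas pass through as plain chunks.
          controller.enqueue(String(event.data));
        } else {
          // Other events are serialized as annotation lines.
          controller.enqueue(`\n[annotation] ${JSON.stringify(event.data)}\n`);
        }
      },
    }),
  );
}
```

The resulting `ReadableStream<string>` can then be handed to `pipeStreamToResponse`, matching the signature change noted for `packages/server/src/utils/request.ts`.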

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Server
    participant WorkflowEngine as "@llamaindex/workflow"
    participant DataStream

    User->>Server: Sends chat message(s)
    Server->>Server: Build chatHistory, validate input
    Server->>WorkflowEngine: Create workflow via workflowFactory
    Server->>WorkflowEngine: runWorkflow(workflow, input, abortSignal)
    WorkflowEngine-->>Server: Emits event stream
    Server->>DataStream: toDataStream(eventStream, callbacks)
    DataStream-->>User: Streams assistant responses and annotations
    User-->>Server: (Disconnect triggers abort)

Poem

In the warren where workflows hop anew,
Old engines retire, and fresh ones debut.
Streams now flow with data so bright,
Python caches vanish out of sight.
Events are refactored, the code feels clean—
A rabbit’s delight in a server serene!
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aeeccd8 and 352d40b.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (1)
  • packages/server/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/server/package.json
⏰ Context from checks skipped due to timeout of 90000ms (27)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)


@leehuwuj leehuwuj requested a review from thucpn May 2, 2025 10:44
leehuwuj added 9 commits May 5, 2025 09:45
…nt. Clean up unused imports and improve error handling for missing user input.
…ling for missing user input. Replace workflowInputEvent with startAgentEvent and enhance chat history management. Add callbacks for suggested questions event.
…duce new `code_workflow.ts` for managing code generation and updates, and create a factory function in `workflow.ts`. Modify TypeScript helper to copy all `.ts` files instead of just `workflow.ts`. Update chat handler to utilize `AgentInputData` for improved data handling.
…on, replacing the previous context creation method. This change simplifies the workflow stream initialization and enhances code clarity.
…ce event handling in code_workflow.ts and workflow.ts. Update grammar in enhancedPrompt for clarity and improve response handling in agentStreamEvent.
…mprove memory management. Replace custom event classes with workflowEvent for better clarity and maintainability. Update workflow definition to utilize getWorkflow function, enhancing code organization and readability.
…ents

- Introduced `doc-workflow.ts` for managing document generation and updates.
- Created `code-workflow.ts` for code artifact management.
- Enhanced event handling with `workflowEvent` for better clarity and maintainability.
- Updated `README-template.md` to include setup instructions and use cases for new workflows.
- Modified `workflow.ts` to allow switching between code and document workflows.
- Improved grammar and clarity in prompts and comments throughout the code.
…or improved event handling. Introduce workflowToEngineResponseStream function to streamline the processing of workflow events and enhance error handling. Update return statement in runWorkflow to utilize the new stream implementation.
@leehuwuj leehuwuj requested review from marcusschiesser and thucpn May 5, 2025 09:32
@leehuwuj leehuwuj marked this pull request as ready for review May 5, 2025 09:56

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

♻️ Duplicate comments (3)
.changeset/smart-clocks-guess.md (1)

1-6: Duplicate: Reminder to bump version

This changeset records a minor bump for @llamaindex/server. Ensure the version in packages/server/package.json is updated accordingly as per your release process.

.changeset/yummy-moles-lick.md (1)

1-6: Duplicate: Consolidate patch changesets

Refer to the consolidation suggestion in fluffy-otters-refuse.md regarding merging patch-level changesets for create-llama.

packages/create-llama/templates/components/workflows/typescript/artifacts/code-workflow.ts (1)

167-171: Same brittle JSON extraction logic as in doc-workflow

Reuse the improved pattern & error handling suggested earlier to reduce duplicated pitfalls.

🧹 Nitpick comments (9)
.prettierignore (1)

10-15: Align Prettier ignore patterns for Python caches

As with ESLint, the patterns **/*.mypy_cache/**, **/*.venv/**, and **/*.ruff_cache/** may not catch the dot-prefixed cache directories (e.g., .mypy_cache). Update to:

- **/*.mypy_cache/**
- **/*.venv/**
- **/*.ruff_cache/**
+ **/.mypy_cache/**
+ **/.venv/**
+ **/.ruff_cache/**

and ensure these entries mirror the ESLint ignore configuration for consistency.

.changeset/fluffy-otters-refuse.md (1)

1-6: Consolidate patch changesets for create-llama

There are multiple patch changesets for create-llama in this PR (fluffy-otters-refuse.md and yummy-moles-lick.md). Consider merging them into a single changeset to simplify version history and changelog generation.

packages/create-llama/templates/components/workflows/typescript/artifacts/workflow.ts (1)

1-9: Simple and effective workflow factory implementation.

The workflow factory implementation provides a clean way to switch between different artifact workflows (document and code).

Some suggestions for improvement:

  • Consider using a more specific type for reqBody instead of any to improve type safety
  • Standardizing the file naming format across imports would be beneficial (notice "code_workflow" vs "doc-workflow")
- export const workflowFactory = async (reqBody: any) => {
+ export const workflowFactory = async (reqBody: Record<string, unknown>) => {
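The difference the `Record<string, unknown>` suggestion buys can be shown in a standalone sketch (the `workflow` field and the `"code"` default are hypothetical, loosely based on the README's note that the code workflow is used when nothing is specified):

```typescript
// With `any`, typos and unchecked property access compile silently;
// `Record<string, unknown>` forces an explicit narrowing step before use.
function readWorkflowKind(reqBody: Record<string, unknown>): string {
  const kind = reqBody["workflow"];
  // Hypothetical default: fall back to the code workflow.
  return typeof kind === "string" ? kind : "code";
}
```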
packages/create-llama/templates/components/workflows/typescript/artifacts/README-template.md (2)

7-9: Specify languages in code blocks for better syntax highlighting.

Fenced code blocks should include a language identifier for proper syntax highlighting.

- ```
+ ```bash
  npm install

- ```
+ ```bash
  npm run dev

- ```
+ ```bash
  npm run generate:ui

Also applies to: 13-15, 27-29

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

7-7: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

40-46: Fix grammar and provide clearer example.

There are a few grammatical issues in this section.

- You can start by sending an request on the [chat UI](http://localhost:3000) or you can test the `/api/chat` endpoint with the following curl request:
+ You can start by sending a request on the [chat UI](http://localhost:3000) or you can test the `/api/chat` endpoint with the following curl request:
🧰 Tools
🪛 LanguageTool

[misspelling] ~40-~40: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...flow is used. You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

packages/create-llama/templates/components/workflows/typescript/artifacts/doc-workflow.ts (1)

246-252: docMatch regex misses new-line after the language tag

/```(markdown|html)([\s\S]*)```/ consumes the language token but not the required end-of-line, so docMatch[2] starts with \n, and more importantly it will not match if the LLM forgets the language hint.

-const docMatch = response.text.match(/```(markdown|html)([\s\S]*)```/);
+const docMatch = response.text.match(/```(?:markdown|html)\s*([\s\S]*?)```/i);

Using ?: (non-capturing) plus \s* makes the pattern more tolerant and avoids the super-greedy last-``` capture.
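A quick side-by-side check of the two patterns (the sample string is a hypothetical LLM response used only for illustration):

```typescript
// Old pattern: greedy capture, language tag captured, newline kept.
const oldRegex = /```(markdown|html)([\s\S]*)```/;
// Suggested pattern: non-capturing tag, \s* swallows the newline,
// lazy body capture, /i tolerates casing like "Markdown".
const newRegex = /```(?:markdown|html)\s*([\s\S]*?)```/i;

// Hypothetical LLM output containing a fenced markdown document.
const sample = "Here is the doc:\n```markdown\n# Title\nBody text.\n```\ndone";

const oldMatch = sample.match(oldRegex); // group 2 starts with "\n"
const newMatch = sample.match(newRegex); // group 1 starts at "# Title"
```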

packages/create-llama/templates/components/workflows/typescript/artifacts/code-workflow.ts (1)

272-295: Mismatch between detected language and emitted artifact

If the caller leaves language empty, we still know the language from the regex (codeMatch[1]). Use that as a fallback:

- language: planData.requirement.language || "",
+ language:
+   planData.requirement.language ??
+   (codeMatch ? codeMatch[1].toLowerCase() : ""),

Prevents blank language fields that downstream UIs may not handle.

packages/create-llama/templates/components/workflows/typescript/financial_report/workflow.ts (1)

101-103: Allocate the tools array once to avoid unnecessary re-creation

tools is rebuilt every time inputEvent fires, although its composition never changes. Creating it once when the workflow is instantiated reduces noise and a (tiny) amount of GC pressure.

-  workflow.handle([inputEvent], async ({ data }) => {
-    const { sendEvent, state } = getContext();
-    const chatHistory = data.input;
-    const tools = [codeInterpreterTool, documentGeneratorTool, queryEngineTool];
+  const tools = [codeInterpreterTool, documentGeneratorTool, queryEngineTool];
+
+  workflow.handle([inputEvent], async ({ data }) => {
+    const { sendEvent, state } = getContext();
+    const chatHistory = data.input;
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)

326-338: Do not pass an already-consumed stream as raw

raw: stream is forwarded after the for await loop has exhausted the iterator, so downstream consumers will receive an unusable/empty stream.
Consider omitting raw or cloning the chunks while iterating so the raw data is still available.
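One way to keep `raw` usable is the standard `ReadableStream.tee()`, sketched below with hypothetical names (this is not the workflow's actual code; it only demonstrates the cloning approach):

```typescript
// Illustrative: split the stream before consuming it, so one branch can
// be iterated while the untouched branch is still forwarded as `raw`.
async function consumeKeepingRaw(
  stream: ReadableStream<string>,
): Promise<{ text: string; raw: ReadableStream<string> }> {
  const [forIteration, raw] = stream.tee();
  let text = "";
  const reader = forIteration.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;
  }
  // `raw` has not been read from, so downstream consumers still get data.
  return { text, raw };
}
```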

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1357c42 and 8a2f1d1.

📒 Files selected for processing (18)
  • .changeset/fluffy-otters-refuse.md (1 hunks)
  • .changeset/smart-clocks-guess.md (1 hunks)
  • .changeset/yummy-moles-lick.md (1 hunks)
  • .prettierignore (1 hunks)
  • eslint.config.mjs (1 hunks)
  • packages/create-llama/helpers/typescript.ts (1 hunks)
  • packages/create-llama/questions/simple.ts (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/artifacts/README-template.md (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/artifacts/code-workflow.ts (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/artifacts/doc-workflow.ts (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/artifacts/workflow.ts (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/deep_research/README-template.md (1 hunks)
  • packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (5 hunks)
  • packages/create-llama/templates/components/workflows/typescript/financial_report/workflow.ts (7 hunks)
  • packages/server/src/events.ts (6 hunks)
  • packages/server/src/handlers/chat.ts (2 hunks)
  • packages/server/src/types.ts (1 hunks)
  • packages/server/src/utils/workflow.ts (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (5)
packages/create-llama/helpers/typescript.ts (1)
packages/create-llama/helpers/copy.ts (1)
  • copy (13-49)
packages/create-llama/templates/components/workflows/typescript/artifacts/workflow.ts (1)
packages/create-llama/templates/components/workflows/typescript/artifacts/doc-workflow.ts (1)
  • createDocumentArtifactWorkflow (60-334)
packages/create-llama/templates/components/workflows/typescript/artifacts/code-workflow.ts (1)
packages/server/src/events.ts (2)
  • artifactEvent (120-123)
  • extractLastArtifact (188-215)
packages/server/src/utils/workflow.ts (2)
packages/server/src/types.ts (1)
  • ServerWorkflow (7-7)
packages/server/src/events.ts (1)
  • sourceEvent (28-28)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)
packages/server/src/events.ts (1)
  • toSourceEvent (59-67)
🪛 LanguageTool
packages/create-llama/templates/components/workflows/typescript/deep_research/README-template.md

[style] ~34-~34: Consider an alternative adjective to strengthen your wording.
Context: ...ings.ts). ## Custom UI Components For Deep Research, we have a custom component lo...

(DEEP_PROFOUND)

packages/create-llama/templates/components/workflows/typescript/artifacts/README-template.md

[uncategorized] ~35-~35: Possible missing comma found.
Context: ... To generate code and display it in the UI like Vercel's v0. - [Document Workflow]...

(AI_HYDRA_LEO_MISSING_COMMA)


[typographical] ~38-~38: It appears that a comma is missing.
Context: ...h artifact workflow to use. Without any changes the Code Workflow is used. You can sta...

(DURING_THAT_TIME_COMMA)


[misspelling] ~40-~40: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...flow is used. You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

🪛 markdownlint-cli2 (0.17.2)
packages/create-llama/templates/components/workflows/typescript/artifacts/README-template.md

7-7: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


13-13: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


27-27: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

🪛 Biome (1.9.4)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts

[error] 114-114: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)


[error] 116-116: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)

🔇 Additional comments (15)
.changeset/fluffy-otters-refuse.md (1)

1-6: Approve new changeset for create-llama

This changeset correctly marks a patch update for create-llama, documenting the addition of an artifact use case for the TypeScript template. No code or public APIs are modified.

.changeset/yummy-moles-lick.md (1)

1-6: Approve new changeset for create-llama

This changeset properly documents a patch update for create-llama reflecting the updated TypeScript use cases using the new workflow engine.

packages/create-llama/templates/components/workflows/typescript/deep_research/README-template.md (1)

34-34: Path alignment for UI components looks good.

This change updates the file path from components/deep_research_event.jsx to components/ui_event.jsx, which aligns with the broader refactoring of workflows from class-based to functional, middleware-driven architectures in this PR.

🧰 Tools
🪛 LanguageTool

[style] ~34-~34: Consider an alternative adjective to strengthen your wording.
Context: ...ings.ts). ## Custom UI Components For Deep Research, we have a custom component lo...

(DEEP_PROFOUND)

packages/create-llama/questions/simple.ts (1)

65-77: Good addition to support TypeScript artifact workflows.

Removing the conditional check that previously restricted language selection for artifacts enables users to create TypeScript-based artifact workflows. This change is consistent with the introduction of new TypeScript artifact workflows in this PR.

packages/create-llama/helpers/typescript.ts (1)

34-43: Essential update to support multiple TypeScript workflow files.

This change updates the copy pattern from just workflow.ts to all TypeScript files (*.ts) in the workflow template directory. This is necessary to support the new TypeScript artifact workflows which include multiple files like code-workflow.ts, doc-workflow.ts, and other supporting files.

packages/server/src/types.ts (2)

1-1: Simplified import statement.

Removing unused types (AgentInputData and AgentWorkflowContext) from the import makes the code cleaner and more maintainable.


7-7:

✅ Verification successful

Streamlined ServerWorkflow type definition.

Simplifying the type definition to a straightforward union of Workflow | AgentWorkflow without generic parameters aligns with the refactored workflow system using event factories instead of classes.

Since this simplifies the type, please verify that all code using ServerWorkflow works correctly with this change, especially in places where specific generic parameters might have been expected.


🏁 Script executed:

#!/bin/bash
# Check for usage of ServerWorkflow to ensure compatibility with the simplified type
rg -A 3 "ServerWorkflow" --type ts

Length of output: 2056


All ServerWorkflow usages verified—no compatibility issues found

Verified that simplifying ServerWorkflow to Workflow | AgentWorkflow does not break any existing usage:

  • packages/server/src/types.ts: type declaration remains correct.
  • packages/server/src/utils/workflow.ts: imported and passed through factory functions without generic assumptions.
  • packages/server/src/server.ts: referenced in LlamaIndexServerOptions without expecting type parameters.

No further changes required.

packages/server/src/handlers/chat.ts (3)

3-4: Improved type imports.

Good update to import specialized types from LlamaIndex for better type safety.


27-33: Well-structured input creation with proper typing.

The refactoring to create a structured workflowInput of type AgentInputData improves code clarity and type safety.


35-37: Clear workflow creation and execution.

The workflow instantiation and execution is now more explicit, with clear separation between creating the workflow and running it with the input.

packages/server/src/events.ts (5)

5-5: Good refactoring to factory-based event creation.

Adding the workflowEvent import and using it to create the source event constant improves consistency with the new architecture.

Also applies to: 28-28


30-39: Well-structured agent run event data.

The agent run event data structure is clear and the factory-created event constant is consistent with the new architecture.


63-66: Clean conversion to factory-based event creation.

The toSourceEvent function has been updated to use the factory pattern, which is consistent with the overall refactoring.


76-89: Consistent agent run event creation.

The toAgentRunEvent function now uses the factory pattern consistently with the other event creation functions.


120-123: Well-structured artifact event definition.

The artifact event is now defined using the factory pattern, which is consistent with the other events in the file.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)

114-116: Replace {} with void for empty payload types

The empty object type {} should be avoided as it matches any non-null value. Use void to better indicate events with no payload data.

-const planResearchEvent = workflowEvent<{}>();
-const reportEvent = workflowEvent<{}>();
+const planResearchEvent = workflowEvent<void>();
+const reportEvent = workflowEvent<void>();
🧰 Tools
🪛 Biome (1.9.4)

[error] 114-114: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)


[error] 116-116: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)
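The lint rule can be reproduced in isolation; this standalone sketch (unrelated to the workflow package) shows why `{}` gives no type safety for "no payload":

```typescript
// `{}` matches any non-nullable value, so arbitrary payloads slip through:
type EmptyObject = {};
const aNumber: EmptyObject = 42;     // compiles
const aString: EmptyObject = "oops"; // compiles
// `void` admits only undefined, making "no payload" explicit:
const nothing: void = undefined;
```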

🧹 Nitpick comments (1)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)

161-192: Effective implementation of the start agent event handler

The handler properly initializes the workflow state, performs document retrieval, emits UI events, and sends source events using toSourceEvent. This follows good practices for asynchronous workflow management.

-    state.memory.set(chatHistory);
+    await state.memory.set(chatHistory);

Consider making the memory set operation awaited for consistency with other async operations, since ChatMemoryBuffer's set method might be asynchronous.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8a2f1d1 and 5c9f0a9.

📒 Files selected for processing (1)
  • packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (5 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)
packages/server/src/events.ts (1)
  • toSourceEvent (59-67)
🪛 Biome (1.9.4)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts

[error] 114-114: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)


[error] 116-116: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)

⏰ Context from checks skipped due to timeout of 90000ms (27)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files)
🔇 Additional comments (7)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (7)

147-159: Good use of stateful middleware pattern

The stateful middleware pattern with createStatefulMiddleware is a clean approach that separates state management from workflow logic. The state initialization is well structured with all necessary workflow state properties.


181-181: Good integration with source event system

The use of toSourceEvent properly converts retrieved nodes into a standardized event format, which aligns well with the server's event system.


234-254: Watch for potential double counting of questions

The current implementation increments totalQuestions before research starts, but doesn't increment it again after completion. This avoids the double counting issue mentioned in previous reviews.


194-267: Elegant handling of plan research logic

The plan research handler effectively:

  1. Notifies the UI of analysis progress
  2. Creates a research plan based on context and memory
  3. Handles the different decision paths (cancel, research, report)
  4. Properly uses the stream.until pattern to wait for research results
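The `stream.until` pattern amounts to consuming events until a predicate matches. A plain-TypeScript analogue (the real workflow stream API may differ) looks like:

```typescript
// Sketch of an "until" helper: collect items from an async stream until the
// predicate matches. Note the matching item itself is dropped, which is the
// behavior discussed elsewhere in this review.
async function until<T>(
  stream: AsyncIterable<T>,
  done: (item: T) => boolean,
): Promise<T[]> {
  const seen: T[] = [];
  for await (const item of stream) {
    if (done(item)) break; // the terminating item is not collected
    seen.push(item);
  }
  return seen;
}

async function* events() {
  yield { type: "research", id: 1 };
  yield { type: "research", id: 2 };
  yield { type: "stop", id: 3 };
}

const collected = await until(events(), (e) => e.type === "stop");
console.log(collected.length); // 2
```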

269-308: Clean asynchronous handling of research questions

The research event handler properly:

  1. Updates UI state for the specific question being researched
  2. Calls the answer question function with the necessary context
  3. Updates memory and research results with the answer
  4. Sends a UI event with the complete answer data

310-349: Effective streaming implementation for report generation

The report event handler correctly:

  1. Builds appropriate messages for the LLM using memory and the report prompt
  2. Streams the response in chunks, updating the UI with each chunk
  3. Properly finalizes the workflow with the stop agent event

354-420: Well-structured helper functions at the end of the file

The helper functions are properly:

  1. Isolated from the main workflow logic
  2. Organized at the end of the file as suggested in previous comments
  3. Implemented with clear input/output contracts and minimal side effects
  4. Making appropriate use of the LLM and prompt templates

…usage and error handling. Update toDataStreamResponse function to toDataStream and enhance callback functionality for better stream management in workflow.ts.
@leehuwuj leehuwuj force-pushed the lee/use-llamaflow branch from 8d3053d to df62ed0 Compare May 7, 2025 04:51

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
packages/server/src/utils/stream.ts (2)

45-53: Unbounded fullContent accumulation can exhaust memory on long streams

fullContent += content keeps a full copy of everything streamed. For long agent responses this can exhaust the heap, especially on serverless platforms.

If the sole purpose is to build the last assistant message for suggested-question generation, consider:

  1. Truncating to a reasonable max length, or
  2. Collecting only the final chunk if the agent guarantees the full answer is sent in one stopAgentEvent, or
  3. Streaming the content into a temporary buffer and releasing it right after onCompletion.
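Option 1 can be sketched as a bounded accumulator. The limit value below is an arbitrary example, not something from the PR:

```typescript
// Cap the accumulated completion text so long-running streams cannot grow the
// buffer without bound. MAX_COMPLETION_CHARS is an illustrative default.
const MAX_COMPLETION_CHARS = 32_000;

function makeAccumulator(limit = MAX_COMPLETION_CHARS) {
  let full = "";
  let truncated = false;
  return {
    push(delta: string) {
      if (full.length >= limit) {
        truncated = true;
        return;
      }
      full = (full + delta).slice(0, limit);
      if (full.length === limit) truncated = true;
    },
    value: () => full,
    wasTruncated: () => truncated,
  };
}

const acc = makeAccumulator(10);
acc.push("hello ");
acc.push("world and more");
console.log(acc.value()); // "hello worl"
console.log(acc.wasTruncated()); // true
```

A truncated tail is usually acceptable for suggested-question generation, which only needs the gist of the answer.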

58-63: Error path skips user callback when onError itself throws

If userCallbacks?.onError throws, the subsequent dataStream.writeData(errorMessage) is skipped and the promise rejects, surfacing as “Unhandled error in stream”.
Wrap the user callback in a try/catch to guarantee we still emit an error chunk.
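A sketch of that guard, with illustrative writer and callback names rather than the PR's exact API:

```typescript
// Guard the user-supplied onError callback so a throw inside it cannot
// prevent the error chunk from reaching the stream.
type Writer = { writeData: (chunk: string) => void };

function emitError(
  error: Error,
  writer: Writer,
  onError?: (err: Error) => void,
) {
  try {
    onError?.(error);
  } catch (callbackError) {
    // Swallow callback failures; the client must still receive an error chunk.
    console.error("onError callback threw:", callbackError);
  }
  writer.writeData(`error: ${error.message}`);
}

const chunks: string[] = [];
emitError(
  new Error("boom"),
  { writeData: (c) => chunks.push(c) },
  () => {
    throw new Error("callback failed");
  },
);
// chunks now contains ["error: boom"] despite the throwing callback
```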

packages/server/src/utils/workflow.ts (2)

60-63: until(stopAgentEvent) may swallow the terminating event

until drops the element that satisfies the predicate. If callers expect to receive the final stopAgentEvent (e.g. to show typing-indicator → stopped), it will never reach them. Consider takeUntilInclusive or enqueue the event manually after the transformation.
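An inclusive variant can be sketched in plain TypeScript (illustrative, not the library API): it forwards the terminating event to the consumer before ending the stream.

```typescript
// Like "until", but the matching item is yielded before the stream ends, so
// callers still observe the final stop event.
async function* takeUntilInclusive<T>(
  stream: AsyncIterable<T>,
  done: (item: T) => boolean,
): AsyncGenerator<T> {
  for await (const item of stream) {
    yield item;
    if (done(item)) return; // yield the match, then stop
  }
}

async function* run() {
  yield "delta";
  yield "stop";
  yield "never-seen";
}

const out: string[] = [];
for await (const e of takeUntilInclusive(run(), (e) => e === "stop")) {
  out.push(e);
}
console.log(out); // ["delta", "stop"]
```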


92-95: downloadLlamaCloudFilesFromNodes fire-and-forget loses error visibility

Failures inside downloadFile are swallowed, making debugging difficult.
Either:

  1. await downloadLlamaCloudFilesFromNodes(...) so the transform’s catch handles errors, or
  2. Wrap the call in its own try/catch and enqueue an error annotation.
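Option 2 can be sketched as follows; `downloadFile` here is a stand-in for the real helper, and the background call stays fire-and-forget but routes failures to a handler instead of dropping them:

```typescript
// Keep the download non-blocking but attach a catch so failures are logged
// (or surfaced as annotations) instead of silently lost.
async function downloadFile(url: string): Promise<void> {
  if (!url.startsWith("https://")) throw new Error(`bad url: ${url}`);
}

function downloadInBackground(urls: string[], onError: (e: Error) => void) {
  for (const url of urls) {
    // Intentionally not awaited; rejections are routed to onError.
    void downloadFile(url).catch((e) => onError(e as Error));
  }
}

const errors: string[] = [];
downloadInBackground(["https://ok", "ftp://bad"], (e) => errors.push(e.message));
// Give the rejected promise a tick to settle before inspecting errors.
await new Promise((r) => setTimeout(r, 0));
console.log(errors); // ["bad url: ftp://bad"]
```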
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc175c1 and df62ed0.

📒 Files selected for processing (3)
  • packages/server/src/utils/request.ts (1 hunks)
  • packages/server/src/utils/stream.ts (1 hunks)
  • packages/server/src/utils/workflow.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/server/src/utils/workflow.ts (4)
packages/server/src/types.ts (1)
  • ServerWorkflow (7-7)
packages/server/src/utils/stream.ts (1)
  • toDataStream (32-72)
packages/server/src/utils/suggestion.ts (1)
  • sendSuggestedQuestionsEvent (18-29)
packages/server/src/events.ts (3)
  • toAgentRunEvent (65-86)
  • toSourceEvent (55-63)
  • sourceEvent (24-24)
🔇 Additional comments (1)
packages/server/src/utils/workflow.ts (1)

35-39:

❓ Verification inconclusive

chatHistory field may be silently dropped by startAgentEvent

Previous discussions (see earlier PR comments) noted that startAgentEvent’s schema only accepts { userInput }. Passing an extra chatHistory property compiles but is discarded at runtime, so downstream tools never see the history.

Double-check the latest @llamaindex/workflow typings; if chatHistory is still unsupported, remove it and transmit history via a dedicated custom event or extend the event upstream.


🏁 Script executed:

#!/bin/bash
set -e

# Find all occurrences of startAgentEvent and its imports
rg "startAgentEvent" -n .
rg "import.*startAgentEvent" -n .

Length of output: 1310


🏁 Script executed:

#!/bin/bash
set -e

echo "Attempting to fetch type definitions for @llamaindex/workflow from unpkg..."

# List of potential type definition file paths in the package
files=(
  "index.d.ts"
  "dist/index.d.ts"
  "dist/esm/index.d.ts"
  "esm/index.d.ts"
  "types/index.d.ts"
)

for file in "${files[@]}"; do
  url="https://unpkg.com/@llamaindex/workflow@latest/$file"
  status=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  echo "Checking $file: HTTP $status"
  if [ "$status" -eq 200 ]; then
    echo "=== Contents of $file (first 200 lines) ==="
    curl -s "$url" | sed -n '1,200p'
    exit 0
  fi
done

echo "No type definition file found in known locations."
exit 1

Length of output: 1952


🏁 Script executed:

#!/bin/bash
set -e

PKG="@llamaindex/workflow"
echo "Fetching package metadata for $PKG..."
meta=$(curl -s "https://registry.npmjs.org/${PKG}")
latest=$(echo "$meta" | jq -r '."dist-tags".latest')
echo "Latest version: $latest"

tarball=$(echo "$meta" | jq -r ".versions[\"$latest\"].dist.tarball")
echo "Tarball URL: $tarball"

echo "Downloading and extracting tarball..."
mkdir -p /tmp/pkg
curl -sL "$tarball" | tar -xz -C /tmp/pkg

echo "Listing TypeScript declaration files:"
find /tmp/pkg/package -type f -name "*.d.ts"

echo "Searching for startAgentEvent definitions:"
grep -R "startAgentEvent" -n /tmp/pkg/package

Length of output: 117769


🏁 Script executed:

#!/bin/bash
set -e

# Fetch tarball URL for version 1.1.0
meta=$(curl -s https://registry.npmjs.org/@llamaindex/workflow)
tarball=$(echo "$meta" | jq -r '.versions["1.1.0"].dist.tarball')

# Download and extract
mkdir -p /tmp/pkg
curl -sL "$tarball" | tar -xz -C /tmp/pkg

# List declaration files
echo "=== .d.ts files in package ==="
find /tmp/pkg/package -type f -name "*.d.ts"

# Search for startAgentEvent definition
echo "=== Occurrences of startAgentEvent ==="
grep -R -n "startAgentEvent" /tmp/pkg/package || echo "No occurrences found"

Length of output: 117637


Verify chatHistory support in startAgentEvent API

We attempted to locate a startAgentEvent definition in the @llamaindex/workflow@1.1.0 package (including its TypeScript declarations) but found no mention of a chatHistory field. If startAgentEvent still only accepts { userInput } at runtime, the extra chatHistory property will be discarded.

Please manually confirm:

  • The shape of startAgentEvent in @llamaindex/workflow@1.1.0 (types and runtime schema).
  • Whether the chatHistory property is dropped at runtime.

If it is unsupported:

  • Remove chatHistory from the .with() call in packages/server/src/utils/workflow.ts.
  • Transmit the history via a dedicated custom event or extend the workflow event schema upstream.

… usage and improve error handling. Update toDataStream function in stream.ts for better data streaming and processing. Enhance imports from @llamaindex/workflow for consistency.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/server/src/utils/stream.ts (1)

25-47: ⚠️ Potential issue

Stream never explicitly closes – downstream consumers may wait indefinitely

The toDataStream function doesn't explicitly close the stream after processing completes. This could lead to hanging HTTP responses if consumers wait for a closure signal.

Add an explicit close after processing the stream:

    execute: async (dataStreamWriter: DataStreamWriter) => {
      for await (const event of stream) {
        if (agentStreamEvent.include(event) && event.data.delta) {
          const content = event.data.delta;
          if (content) {
            dataStreamWriter.write(formatDataStreamPart("text", content));
          }
        } else {
          dataStreamWriter.writeMessageAnnotation(event.data as JSONValue);
        }
      }
+     // Explicitly close the stream to prevent hanging responses
+     dataStreamWriter.close();
    },
🧹 Nitpick comments (4)
packages/server/src/utils/stream.ts (2)

9-18: Unused interface in current implementation

The StreamCallbacks interface is defined but not used in the toDataStream function below. Consider either:

  1. Implementing these callbacks in toDataStream to provide better control for consumers, or
  2. Removing the unused interface if it's not needed for future implementations

36-38: Handle empty arrays or non-object annotations

The code assumes event.data is always a valid JSON value when writing message annotations. Consider adding validation to prevent potential runtime errors.

        } else {
-         dataStreamWriter.writeMessageAnnotation(event.data as JSONValue);
+         // Ensure event.data is a valid non-null object before writing annotation
+         if (event.data && typeof event.data === 'object') {
+           dataStreamWriter.writeMessageAnnotation(event.data as JSONValue);
+         }
        }
packages/server/src/handlers/chat.ts (2)

40-48: Set content type header for streaming response

When streaming responses, it's important to set the appropriate content type header. While pipeStreamToResponse handles the mechanics of streaming, it doesn't set headers.

    const workflow = await workflowFactory(body);
    const workflowEventStream = await runWorkflow(
      workflow,
      workflowInput,
      abortController.signal,
    );

+   // Set appropriate content type for streaming response
+   res.setHeader('Content-Type', 'text/event-stream');
+   res.setHeader('Cache-Control', 'no-cache');
+   res.setHeader('Connection', 'keep-alive');
    const dataStream = toDataStream(workflowEventStream);
    pipeStreamToResponse(res, dataStream);

47-48: Handle potential errors from stream conversion and piping

The code doesn't handle potential errors that might occur during stream conversion or piping. Consider adding error handling for these operations.

-   const dataStream = toDataStream(workflowEventStream);
-   pipeStreamToResponse(res, dataStream);
+   try {
+     const dataStream = toDataStream(workflowEventStream);
+     await pipeStreamToResponse(res, dataStream);
+   } catch (streamError) {
+     console.error("Streaming error:", streamError);
+     // Only send error response if headers haven't been sent yet
+     if (!res.headersSent) {
+       return sendJSONResponse(res, 500, {
+         detail: (streamError as Error).message || "Error streaming response",
+       });
+     }
+   }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df62ed0 and 30c3da2.

📒 Files selected for processing (5)
  • packages/server/src/handlers/chat.ts (2 hunks)
  • packages/server/src/server.ts (2 hunks)
  • packages/server/src/types.ts (1 hunks)
  • packages/server/src/utils/stream.ts (1 hunks)
  • packages/server/src/utils/workflow.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/server/src/server.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/server/src/types.ts
  • packages/server/src/utils/workflow.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/server/src/handlers/chat.ts (4)
packages/create-llama/templates/components/workflows/typescript/artifacts/workflow.ts (1)
  • workflowFactory (4-9)
packages/server/src/utils/workflow.ts (1)
  • runWorkflow (25-44)
packages/server/src/utils/stream.ts (1)
  • toDataStream (25-47)
packages/server/src/utils/request.ts (1)
  • pipeStreamToResponse (30-41)
🔇 Additional comments (3)
packages/server/src/handlers/chat.ts (3)

29-35: Properly structured workflow input

The refactoring to use a defined AgentInputData interface improves code clarity and type safety. The mapping of messages to the required format is clean and efficient.


37-39: Excellent addition of abort handling

Adding abort controller support ensures resources are properly released when the client disconnects, preventing server-side process hanging.
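The pattern described here can be sketched with a minimal stand-in for Node's ServerResponse (the EventEmitter below is illustrative): tie an AbortController to the response's close event so a long-running workflow can observe client disconnects.

```typescript
import { EventEmitter } from "node:events";

// Bind an AbortSignal to the "close" event of a response-like emitter.
function bindAbortToResponse(res: EventEmitter): AbortSignal {
  const controller = new AbortController();
  res.once("close", () => controller.abort());
  return controller.signal;
}

const fakeRes = new EventEmitter();
const signal = bindAbortToResponse(fakeRes);
console.log(signal.aborted); // false
fakeRes.emit("close");
console.log(signal.aborted); // true
```

The signal can then be passed into the workflow run so downstream loops can check `signal.aborted` or listen for the `abort` event.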


50-50: Improved error logging

Adding the "Chat handler error:" prefix to error logs makes it easier to identify the source of errors in logs.

leehuwuj added 3 commits May 7, 2025 15:22
…factor toDataStream in stream.ts to support callback options for onStart, onText, and onFinal events. Export generateNextQuestions function in suggestion.ts for improved accessibility.
…ates to enhance consistency and organization. Update package.json to include @llamaindex/workflow version 1.1.0. Remove commented-out code in gen-ui.ts for cleaner implementation.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (8)
packages/server/src/utils/stream.ts (2)

35-40: Consider using a more specific type for the stream parameter

The use of AsyncIterable<WorkflowEventData<unknown>> compiles, but the element type could be narrowed to a union of the known event data types if they are available, giving downstream consumers stronger typing than unknown.

69-72: 🛠️ Refactor suggestion

Ensure stream is explicitly closed after onFinal callback

The stream may not be properly closed if the onFinal callback itself writes to the stream, which could cause hanging HTTP responses. Consider adding an explicit close call after the callback completes.

      // Call onFinal with the complete text when stream ends
      if (callbacks?.onFinal) {
        await callbacks.onFinal(completionText, dataStreamWriter);
+       // Ensure stream is properly closed after all callbacks complete
+       dataStreamWriter.close();
      }
packages/server/src/utils/workflow.ts (3)

57-63: ⚠️ Potential issue

Potential security issue with tool inputs exposure

Directly stringifying and including toolKwargs in the log may expose sensitive information like API keys or personal data that might be passed to tools.

-            const inputString = JSON.stringify(event.data.toolKwargs);
-            transformedEvent = toAgentRunEvent({
-              agent: event.data.agentName,
-              text: `Using tool: '${event.data.toolName}' with inputs: '${inputString}'`,
-              type: "text",
-            });
+            // Sanitize and limit tool input exposure
+            const argKeys = Object.keys(event.data.toolKwargs).join(", ");
+            transformedEvent = toAgentRunEvent({
+              agent: event.data.agentName,
+              text: `Using tool: '${event.data.toolName}' with arguments: ${argKeys}`,
+              type: "text",
+            });

65-76: 🛠️ Refactor suggestion

Unsafe type casting bypasses type safety

The double type assertion as unknown as NodeWithScore<Metadata>[] bypasses TypeScript's type safety mechanisms. Consider using proper validation before casting.

-              const sourceNodes =
-                rawOutput.sourceNodes as unknown as NodeWithScore<Metadata>[];
-              transformedEvent = toSourceEvent(sourceNodes);
+              // Validate the sourceNodes structure before casting
+              if (Array.isArray(rawOutput.sourceNodes) && 
+                  rawOutput.sourceNodes.every(node => 
+                    node && typeof node === 'object' && 
+                    'node' in node && 'score' in node && 'metadata' in node)) {
+                const sourceNodes = rawOutput.sourceNodes as NodeWithScore<Metadata>[];
+                transformedEvent = toSourceEvent(sourceNodes);
+              } else {
+                console.warn("Invalid sourceNodes format in AgentToolCallResult");
+              }

52-85: 🛠️ Refactor suggestion

Add error handling to the transform function

Missing error handling in the transform function could cause the entire stream to terminate if an error occurs during transformation.

        async transform(event, controller) {
+         try {
          console.log("event", event.data);
          let transformedEvent = event;

          // Handle agent events from AgentToolCall
          if (agentToolCallEvent.include(event)) {
            const inputString = JSON.stringify(event.data.toolKwargs);
            transformedEvent = toAgentRunEvent({
              agent: event.data.agentName,
              text: `Using tool: '${event.data.toolName}' with inputs: '${inputString}'`,
              type: "text",
            });
          }
          // Handle source nodes from AgentToolCallResult
          else if (agentToolCallResultEvent.include(event)) {
            const rawOutput = event.data.raw;
            if (
              rawOutput &&
              typeof rawOutput === "object" &&
              "sourceNodes" in rawOutput // TODO: better use Zod to validate and extract sourceNodes from toolCallResult
            ) {
              const sourceNodes =
                rawOutput.sourceNodes as unknown as NodeWithScore<Metadata>[];
              transformedEvent = toSourceEvent(sourceNodes);
            }
          }
          // Post-process for llama-cloud files
          if (sourceEvent.include(transformedEvent)) {
            const sourceNodesForDownload = transformedEvent.data.data.nodes; // These are SourceEventNode[]
            downloadLlamaCloudFilesFromNodes(sourceNodesForDownload); // download files in background
          }

          controller.enqueue(transformedEvent);
+         } catch (error) {
+           console.error("Error transforming workflow event:", error);
+           // Enqueue the original event to ensure stream continues
+           controller.enqueue(event);
+         }
        },
packages/create-llama/templates/components/workflows/typescript/financial_report/workflow.ts (2)

107-123: 🛠️ Refactor suggestion

Add fallback for non-streaming LLM responses

If the LLM does not support streaming (responseGenerator is undefined), the accumulated response string stays empty, resulting in an empty answer for the user even though toolCallResponse.response is available.

      const generator = toolCallResponse.responseGenerator;
      let response = "";
      if (generator) {
        for await (const chunk of generator) {
          response += chunk.delta;
          sendEvent(
            agentStreamEvent.with({
              delta: chunk.delta,
              response,
              currentAgentName: "LLM", // Or derive from context if needed
              raw: chunk.raw,
            }),
          );
        }
+     } else if (toolCallResponse.response) {
+       // Fallback for non-streaming models
+       response = toolCallResponse.response;
+       sendEvent(
+         agentStreamEvent.with({
+           delta: response,
+           response,
+           currentAgentName: "LLM",
+           raw: null,
+         }),
+       );
      }
      return stopAgentEvent.with({ result: response });

264-283: 🛠️ Refactor suggestion

Add error handling for tool calls

Tool calls can fail due to external factors like network issues, quota limits, or runtime errors. Without try/catch, an unhandled exception will terminate the entire workflow and appear as a 500 error to the user.

    // Call the code interpreter tools if needed
    if (toolCalls.length > 0) {
+     try {
        const toolMsgs = await callTools({
          tools: [codeInterpreterTool],
          toolCalls,
          writeEvent: (text, step) => {
            sendEvent(
              toAgentRunEvent({
                agent: "Analyst",
                text,
                type: toolCalls.length > 1 ? "progress" : "text",
                current: step,
                total: toolCalls.length,
              }),
            );
          },
        });
        for (const toolMsg of toolMsgs) {
          state.memory.put(toolMsg);
        }
+     } catch (error) {
+       sendEvent(
+         agentStreamEvent.with({
+           delta: String(error),
+           response: String(error),
+           currentAgentName: "Analyst",
+           raw: null,
+         }),
+       );
+       // Add error message to memory and continue the workflow
+       state.memory.put({
+         role: "assistant",
+         content: `Error during analysis: ${String(error)}`,
+       });
+     }
    }
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)

116-118: Replace {} with void for empty payload types

The use of {} as a type is flagged by Biome because it matches any non-null/undefined value. Using void better communicates that these events carry no data and will satisfy the linter.

-const planResearchEvent = workflowEvent<{}>();
+const planResearchEvent = workflowEvent<void>();
 const researchEvent = workflowEvent<ResearchQuestion>();
-const reportEvent = workflowEvent<{}>();
+const reportEvent = workflowEvent<void>();
🧰 Tools
🪛 Biome (1.9.4)

[error] 116-116: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)


[error] 118-118: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)

🧹 Nitpick comments (1)
packages/server/src/utils/workflow.ts (1)

53-53: Remove debug logging from production code

The console.log statement should be removed or wrapped in a debug flag condition to avoid noisy logs in production.

-          console.log("event", event.data);
+          // Only log in development/debug mode
+          if (process.env.NODE_ENV === 'development') {
+            console.log("event", event.data);
+          }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 30c3da2 and de1f4dd.

📒 Files selected for processing (8)
  • packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (5 hunks)
  • packages/create-llama/templates/components/workflows/typescript/financial_report/workflow.ts (7 hunks)
  • packages/server/package.json (1 hunks)
  • packages/server/src/handlers/chat.ts (2 hunks)
  • packages/server/src/utils/gen-ui.ts (2 hunks)
  • packages/server/src/utils/stream.ts (1 hunks)
  • packages/server/src/utils/suggestion.ts (2 hunks)
  • packages/server/src/utils/workflow.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/server/package.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/server/src/utils/gen-ui.ts
  • packages/server/src/utils/suggestion.ts
🧰 Additional context used
🧬 Code Graph Analysis (2)
packages/server/src/utils/workflow.ts (1)
packages/server/src/events.ts (3)
  • toAgentRunEvent (65-86)
  • toSourceEvent (55-63)
  • sourceEvent (24-24)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (1)
packages/server/src/events.ts (1)
  • toSourceEvent (55-63)
🪛 Biome (1.9.4)
packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts

[error] 116-116: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)


[error] 118-118: Don't use '{}' as a type.

Prefer explicitly define the object shape. '{}' means "any non-nullable value".

(lint/complexity/noBannedTypes)

🔇 Additional comments (17)
packages/server/src/handlers/chat.ts (6)

23-26: Type mapping ensures proper message format conversion

The mapping creates a properly typed chatHistory array from Vercel AI's Message type to LlamaIndex's MessageType, ensuring compatibility with the workflow API.


34-37: Well-structured workflow input object with chat history

The input object correctly includes both the user's immediate input and the full chat history, providing context for the workflow processing.


39-40: Good resource management with AbortController

The implementation properly handles client disconnections using an AbortController, which prevents server resources from being wasted if the client disconnects during a long-running workflow.


43-47: Clean workflow execution with proper parameter passing

The handler correctly awaits the workflow factory result and executes it with appropriate inputs and abort signal. The refactored code is more concise and easier to understand than the previous implementation.


49-59: Effective use of streaming callbacks

The implementation effectively uses the new toDataStream utility with lifecycle callbacks, particularly the onFinal callback that updates chat history and triggers suggested questions when the completion finishes.


62-62: Improved error logging with context

Adding "Chat handler error:" prefix to error logs helps with identifying the source of errors in logs, making debugging easier.

packages/server/src/utils/stream.ts (2)

12-27: Well-documented callback interface for stream lifecycle

The StreamCallbacks interface provides clear, well-documented callbacks for important stream lifecycle events, making it easier for consumers to hook into different parts of the streaming process.
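The shape of such an interface can be sketched as follows. The callback names (onStart/onText/onFinal) come from the PR discussion; the consumer function is a simplified illustration, not the package's implementation.

```typescript
// Lifecycle callbacks for a text stream.
interface StreamCallbacks {
  /** Called once before the first chunk is processed. */
  onStart?: () => void | Promise<void>;
  /** Called for every text delta as it streams. */
  onText?: (delta: string) => void | Promise<void>;
  /** Called once with the full completion text after the stream ends. */
  onFinal?: (completion: string) => void | Promise<void>;
}

// Illustrative consumer that drives the callbacks over a list of deltas.
async function consume(deltas: string[], cb: StreamCallbacks) {
  await cb.onStart?.();
  let full = "";
  for (const d of deltas) {
    full += d;
    await cb.onText?.(d);
  }
  await cb.onFinal?.(full);
  return full;
}

const seen: string[] = [];
const full = await consume(["Hello, ", "world"], {
  onText: (d) => {
    seen.push(d);
  },
});
console.log(full); // "Hello, world"
```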


74-78: Good error handling with informative messages

The error handling distinguishes between known Error instances and unknown errors, providing informative error messages for stream consumers.

packages/server/src/utils/workflow.ts (1)

30-44: Clean workflow execution with proper validation

The refactored runWorkflow function correctly validates user input and uses the new run function with appropriate events. The use of until with the abort signal is a clean way to handle early termination.

packages/create-llama/templates/components/workflows/typescript/financial_report/workflow.ts (3)

55-59: Clean event definitions using workflowEvent factory

The refactored code effectively uses the workflowEvent factory function to define typed events, which is cleaner and more maintainable than class-based event definitions.
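The factory style can be illustrated with a minimal re-implementation. The real `workflowEvent` from `@llamaindex/workflow` is richer; this sketch only shows the `.with()`/`.include()` shape the review refers to.

```typescript
// Minimal tagged-event factory: .with() builds instances, .include() narrows
// unknown events back to their typed payload.
type EventInstance<T> = { tag: symbol; data: T };

function workflowEvent<T>() {
  const tag = Symbol("workflowEvent");
  return {
    with: (data: T): EventInstance<T> => ({ tag, data }),
    include: (e: { tag: symbol }): e is EventInstance<T> => e.tag === tag,
  };
}

const researchEvent = workflowEvent<{ question: string }>();
const reportEvent = workflowEvent<void>();

const e = researchEvent.with({ question: "What changed?" });
console.log(researchEvent.include(e)); // true
console.log(reportEvent.include(e)); // false
```

Compared with class-based events, the factory keeps the event type and its runtime tag in one declaration, so handlers can narrow with `include` instead of `instanceof`.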


73-79: Good validation of LLM capabilities

The code properly validates that the LLM supports tool calls before proceeding, which prevents runtime errors when attempting to use tool functionality.
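Such a check can be sketched as a fail-fast guard. The `supportToolCall` flag mirrors the LlamaIndex convention mentioned above; the llm shape here is a simplified stand-in.

```typescript
// Fail fast when the configured LLM cannot call tools.
type ToolCapableLLM = { supportToolCall?: boolean; model?: string };

function assertToolCallSupport(llm: ToolCapableLLM): void {
  if (!llm.supportToolCall) {
    throw new Error(
      `LLM ${llm.model ?? "(unknown)"} does not support tool calls; ` +
        "choose a tool-capable model for this workflow.",
    );
  }
}

assertToolCallSupport({ supportToolCall: true, model: "gpt-4o" }); // ok
try {
  assertToolCallSupport({ model: "tiny-model" });
} catch (err) {
  console.log((err as Error).message.startsWith("LLM tiny-model")); // true
}
```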


81-97: Effective middleware-based state management

The refactored code uses middleware-based state management for the workflow, which provides a clean way to manage chat memory and other state across event handlers.

packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts (5)

27-27: LGTM: Clean workflow factory implementation

The workflow factory has been simplified to use the getWorkflow function, which aligns well with the functional approach of the refactored codebase.


147-354: Function-based workflow implementation looks good

The refactoring from a class-based to a function-based implementation using middleware and event handlers follows best practices for the new workflow engine. The code is well-structured with clear separation of concerns.


183-183: Good use of the toSourceEvent utility

Using the toSourceEvent utility from @llamaindex/server to transform retrieved nodes is a nice integration with the server package, supporting the PR objective.


356-422: Good organization of helper functions

Extracting helper functions and placing them at the end of the file improves readability and organization, as previously suggested in review comments.


236-257:

✅ Verification successful

Check incrementation of totalQuestions

Make sure that totalQuestions is only incremented once. Previous review comments indicated it was being incremented twice, though I only see one increment at line 237 in the current implementation.


🏁 Script executed:

#!/bin/bash
# Check if totalQuestions is incremented elsewhere in the file
rg "totalQuestions \+=" packages/create-llama/templates/components/workflows/typescript/deep_research/workflow.ts

Length of output: 171


No duplicate totalQuestions increment found – no action needed

Verification via ripgrep confirms that state.totalQuestions is only incremented once at line 237 in workflow.ts. There are no other occurrences of totalQuestions += in this file, so the counter isn't being incremented twice.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 30d7ea5 and aeeccd8.

📒 Files selected for processing (2)
  • packages/create-llama/templates/components/agents/typescript/financial_report/workflow/fin-report.ts (1 hunks)
  • packages/create-llama/templates/components/agents/typescript/form_filling/workflow/form-filling.ts (1 hunks)


4 participants