
Conversation

@leehuwuj (Collaborator) commented May 14, 2025

Summary by CodeRabbit

  • New Features
    • Introduced separate code generator and document generator use cases, replacing the previous artifacts use case.
    • Added new UI component for visualizing document generation workflow stages.
    • Added asynchronous workflow factory functions for code and document generator templates in Python and TypeScript.
  • Documentation
    • Updated and added README templates for code generator and document generator projects with improved setup and usage instructions.
  • Bug Fixes
    • Updated language model configuration to use "gpt-4.1" by default.
  • Refactor
    • Split artifact workflow logic into distinct code and document generator workflows for both Python and TypeScript templates.
    • Updated input handling types and conversion for workflow events.
  • Tests
    • Updated test cases to reflect the new code generator and document generator use cases.

changeset-bot bot commented May 14, 2025

🦋 Changeset detected

Latest commit: 66ee887

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package

| Name | Type |
| --- | --- |
| create-llama | Patch |


coderabbitai bot commented May 14, 2025

Walkthrough

This change splits the previous "artifacts" use case into two separate use cases: "code_generator" and "document_generator". It updates types, tests, prompts, workflow logic, documentation, and UI components to reflect this separation. Obsolete artifact-related files are removed, and new workflow and UI files are introduced for each new use case. Dependency versions and model identifiers are also updated.
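The split described in the walkthrough can be sketched as a small dispatch from use-case name to factory function. This is an illustrative sketch only; the function and registry names below are hypothetical stand-ins, not the actual template code:

```python
# Illustrative stand-ins for the real template factories, which instantiate
# CodeArtifactWorkflow / DocumentArtifactWorkflow respectively.
def create_code_generator_workflow(chat_request):
    return {"use_case": "code_generator", "request": chat_request}


def create_document_generator_workflow(chat_request):
    return {"use_case": "document_generator", "request": chat_request}


# The single "artifacts" entry is replaced by two explicit entries.
WORKFLOW_FACTORIES = {
    "code_generator": create_code_generator_workflow,
    "document_generator": create_document_generator_workflow,
}


def workflow_for(use_case, chat_request):
    """Look up and invoke the factory for the selected use case."""
    try:
        factory = WORKFLOW_FACTORIES[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}")
    return factory(chat_request)
```

Keeping the two factories separate (rather than branching inside one "artifacts" factory) lets each use case evolve its own prompts, events, and UI independently.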

Changes

| File(s) | Change Summary |
| --- | --- |
| `.changeset/calm-women-repair.md` | Adds a changeset describing the split of the "artifacts" use case into "code_generator" and "document_generator". |
| `packages/create-llama/helpers/types.ts`, `packages/create-llama/questions/simple.ts` | Updates types and prompts: removes the "artifacts" use case, adds "code_generator" and "document_generator" with corresponding logic and configuration. |
| `packages/create-llama/e2e/python/resolve_dependencies.spec.ts`, `packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts` | Updates test suites to remove "artifacts" and add the "code_generator" and "document_generator" use cases, adjusting test logic accordingly. |
| `packages/create-llama/helpers/typescript.ts` | Updates conditional logic to handle the new use cases instead of "artifacts", and updates dependency versions for `@llamaindex/openai` and `@llamaindex/readers`. |
| `packages/create-llama/templates/components/use-cases/python/artifacts/workflow.py`, `packages/create-llama/templates/components/use-cases/typescript/artifacts/src/app/workflow.ts` | Removes the old artifact workflow files for both Python and TypeScript. |
| `packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py`, `packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py` | Adds workflow factory functions for the new Python use cases, each instantiating the appropriate workflow with an OpenAI LLM configured to "gpt-4.1". |
| `packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts`, `packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts` | Adds workflow factory functions for the new TypeScript use cases, updates event types to use `MessageContent` instead of `string`, and adjusts workflow creation accordingly. |
| `packages/create-llama/templates/components/ui/use-cases/document_generator/ui_event.jsx` | Adds a new React component to visualize document generation workflow progress with a dynamic stage-based UI and smooth transitions. |
| `packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md`, `packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md`, `packages/create-llama/templates/components/use-cases/typescript/code_generator/README-template.md`, `packages/create-llama/templates/components/use-cases/typescript/document_generator/README-template.md` | Adds and updates README templates for the new use cases, clarifying usage and workflow customization, and providing example commands. |
| `packages/create-llama/templates/types/llamaindexserver/nextjs/src/app/settings.ts` | Updates the LLM model identifier from "gpt-4o-mini" to "gpt-4.1". |
| `packages/create-llama/templates/types/llamaindexserver/nextjs/package.json` | Changes dependency versions to caret ranges and adds the `llamaindex` package dependency. |
| `packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts` | Updates user input and query types to use `MessageContent` and `QueryType` instead of strings, improving type safety. |
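The `MessageContent` typing change noted above can be illustrated with a minimal sketch. The union shape below is an assumption for illustration only; the real type is exported by the `llamaindex` package:

```typescript
// Hypothetical stand-in for llamaindex's MessageContent: plain text or
// a list of typed content blocks.
type TextBlock = { type: "text"; text: string };
type ImageBlock = { type: "image_url"; url: string };
type MessageContent = string | Array<TextBlock | ImageBlock>;

// An event field typed as MessageContent instead of string keeps
// multimodal input intact rather than forcing an early coercion.
interface UserInputEvent {
  userInput: MessageContent;
}

// Recover the text portion regardless of shape.
function contentToText(content: MessageContent): string {
  if (typeof content === "string") return content;
  return content
    .filter((block): block is TextBlock => block.type === "text")
    .map((block) => block.text)
    .join("\n");
}

const event: UserInputEvent = {
  userInput: [
    { type: "text", text: "Generate a landing page" },
    { type: "image_url", url: "https://example.com/mock.png" },
  ],
};
```

Under this assumed shape, string inputs pass through unchanged while block lists are reduced to their text parts, which is why widening the event types improves type safety without breaking plain-text callers.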

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant App
    participant WorkflowFactory
    participant Workflow

    User->>App: Selects use case ("code_generator" or "document_generator")
    App->>WorkflowFactory: Passes chat/request data
    WorkflowFactory->>Workflow: Instantiates workflow for selected use case
    Workflow-->>App: Handles workflow logic (code or document generation)
    App-->>User: Displays progress and results (via UI component)
```

Possibly related PRs

  • run-llama/create-llama#595: Introduced the original "artifacts" use case and a new workflow for TypeScript, which is now split and refined in this PR.
  • run-llama/create-llama#586: Added the original "artifacts" use case and its Python workflow, which this PR replaces with two distinct use cases.
  • run-llama/create-llama#570: Related dependency version updates for the llamaindex server Next.js template, complementing changes in this PR.

Suggested reviewers

  • marcusschiesser
  • thucpn

Poem

In the warren of code, a split is declared,
Artifacts divided, two new paths prepared.
Code and docs now each have their say,
Workflows and prompts, all neatly arrayed.
🐇 With README and tests, the journey’s begun—
Two use cases now, where once there was one!


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 2cbad52 and 66ee887.

📒 Files selected for processing (2)
  • packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (4 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts
  • packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts
⏰ Context from checks skipped due to timeout of 90000ms (55)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: lint


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (17)
packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)

15-19: Add documentation and consider removing unnecessary async

The new workflowFactory function lacks documentation comments that would help other developers understand its purpose in the document generator workflow. Additionally, the function is marked as async but doesn't contain any await operations, making the async keyword unnecessary in this case.

```diff
-export const workflowFactory = async (reqBody: any) => {
+/**
+ * Creates a document artifact workflow instance with the provided request body
+ * @param reqBody The request data containing chat history and other information
+ * @returns A document generation workflow instance
+ */
+export const workflowFactory = (reqBody: any) => {
   const workflow = createDocumentArtifactWorkflow(reqBody);

   return workflow;
 };
```
packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md (1)

36-38: Clarify the workflow description

The second sentence should provide more specific guidance about what aspects of the workflow can be modified.

```diff
-To update the workflow, you can modify the code in [`workflow.py`](app/workflow.py).
+To update the document generation process, prompt templates, or UI events, you can modify the code in [`workflow.py`](app/workflow.py).
```
🧰 Tools
🪛 LanguageTool

[uncategorized] ~38-~38: A punctuation mark might be missing here.
Context: ...he workflow, you can modify the code in workflow.py. You can...

(AI_EN_LECTOR_MISSING_PUNCTUATION)

packages/create-llama/templates/components/use-cases/typescript/code_generator/README-template.md (1)

33-35: Fix missing article and add more specific guidance

The description is missing the article "an" before "app" and would benefit from more specific guidance about what aspects of the workflow can be modified.

```diff
-AI-powered code generator that can help you generate app with a chat interface, code editor and app preview.
+AI-powered code generator that can help you generate an app with a chat interface, code editor and app preview.

-To update the workflow, you can modify the code in [`workflow.ts`](app/workflow.ts).
+To update the code generation process, prompt templates, or UI events, you can modify the code in [`workflow.ts`](src/app/workflow.ts).
```
🧰 Tools
🪛 LanguageTool

[uncategorized] ~33-~33: Possible missing article found.
Context: ...de generator that can help you generate app with a chat interface, code editor and ...

(AI_HYDRA_LEO_MISSING_AN)

packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (1)

30-36: Consider making the model configurable.

The workflow factory function correctly creates the CodeArtifactWorkflow instance, but it hardcodes the OpenAI model to "gpt-4.1". Consider making this configurable via parameters or environment variables for better flexibility.

Additionally, adding a docstring to explain the function's purpose and parameters would improve readability.

```diff
+def create_workflow(chat_request: ChatRequest) -> Workflow:
+    """
+    Create a code generation workflow instance.
+
+    Args:
+        chat_request: The chat request containing messages and configuration.
+
+    Returns:
+        A configured CodeArtifactWorkflow instance.
+    """
     workflow = CodeArtifactWorkflow(
-        llm=OpenAI(model="gpt-4.1"),
+        llm=OpenAI(model=os.getenv("CODE_GENERATOR_MODEL", "gpt-4.1")),
         chat_request=chat_request,
         timeout=120.0,
     )
     return workflow
```

Don't forget to add the import for os if you implement this change.
packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (1)

30-36: Consider making the model configurable and adding documentation.

The workflow factory function correctly creates the DocumentArtifactWorkflow instance, but similarly to the code generator workflow, it hardcodes the OpenAI model to "gpt-4.1". Consider making this configurable for better flexibility.

Adding a docstring would also improve code documentation:

```diff
+def create_workflow(chat_request: ChatRequest) -> Workflow:
+    """
+    Create a document generation workflow instance.
+
+    Args:
+        chat_request: The chat request containing messages and configuration.
+
+    Returns:
+        A configured DocumentArtifactWorkflow instance.
+    """
     workflow = DocumentArtifactWorkflow(
-        llm=OpenAI(model="gpt-4.1"),
+        llm=OpenAI(model=os.getenv("DOCUMENT_GENERATOR_MODEL", "gpt-4.1")),
         chat_request=chat_request,
         timeout=120.0,
     )
     return workflow
```

Don't forget to add the import for os if you implement this change.

packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (5)

5-11: Add language specifier to fenced code blocks.

The code block on line 10 is missing a language specifier, which would improve syntax highlighting in markdown renderers.

````diff
-```
+```shell
````
13-28: Missing "Second" step in the instructions.

The instructions jump from "First, setup the environment" to "Then, run the development server" without a "Second" step, which makes the numbering inconsistent.

````diff
 First, setup the environment with uv:

 > **_Note:_** This step is not needed if you are using the dev-container.

 ```shell
 uv sync
 ```

 Then check the parameters that have been pre-configured in the .env file in this directory.
 Make sure you have set the OPENAI_API_KEY for the LLM.

-Then, run the development server:
+Second, run the development server:
````

Also, add a language specifier to the code block on lines 27-28:

````diff
-```
+```shell
 uv run fastapi run
````
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

26-26: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


34-36: Grammar error in use case description.

There's a missing article in the description of the use case.

```diff
-AI-powered code generator that can help you generate app with a chat interface, code editor and app preview.
+AI-powered code generator that can help you generate an app with a chat interface, code editor and app preview.
```
🧰 Tools
🪛 LanguageTool

[uncategorized] ~35-~35: Possible missing article found.
Context: ...de generator that can help you generate app with a chat interface, code editor and ...

(AI_HYDRA_LEO_MISSING_AN)


39-45: Fix grammatical error and add language specifier to code block.

There's a grammar error in line 39 and the code block is missing a language specifier.

```diff
-You can start by sending an request on the [chat UI](http://localhost:8000) or you can test the `/api/chat` endpoint with the following curl request:
+You can start by sending a request on the [chat UI](http://localhost:8000) or you can test the `/api/chat` endpoint with the following curl request:
```

````diff
-```
+```shell
 curl --location 'localhost:8000/api/chat' \
 --header 'Content-Type: application/json' \
 --data '{ "messages": [{ "role": "user", "content": "Create a report comparing the finances of Apple and Tesla" }] }'
````

Additionally, consider updating the example prompt to be more relevant to code generation rather than financial reporting.

🧰 Tools
🪛 LanguageTool

[misspelling] ~39-~39: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...workflow.py). You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

🪛 markdownlint-cli2 (0.17.2)

41-41: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


51-55: Add language specifier to code block.

The code block is missing a language specifier.

````diff
 You can also generate a new code for the workflow using LLM by running the following command:

-```
+```shell
 uv run generate_ui
````

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

53-53: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)
packages/create-llama/templates/components/use-cases/typescript/document_generator/README-template.md (5)

5-9: Add language specifier to code block.

The code block is missing a language specifier.

````diff
 First, install the dependencies:

-```
+```bash
 npm install
````

---

11-15: Fix instruction numbering and add language specifier.

The instructions skip from "First" to "Third" without a "Second" step, which is confusing. Also, the code block needs a language specifier.

````diff
-Third, run the development server:
+Second, run the development server:

-```
+```bash
 npm run dev
````

---

23-29: Fix grammar and add language specifier.

There's a minor grammar issue in the UI components description, and the code block needs a language specifier.

```diff
-We have a custom component located in `components/ui_event.jsx`. This is used to display the state of artifact workflows in UI. You can regenerate a new UI component from the workflow event schema by running the following command:
+We have a custom component located in `components/ui_event.jsx`. This is used to display the state of artifact workflows in the UI. You can regenerate a new UI component from the workflow event schema by running the following command:
```

````diff
-```
+```bash
 npm run generate:ui
````

🧰 Tools
🪛 LanguageTool

[uncategorized] ~25-~25: You might be missing the article “the” here.
Context: ...play the state of artifact workflows in UI. You can regenerate a new UI component ...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

---

`33-36`: **Fix grammar error and correct file path.**

There's a missing article in the use case description and an inconsistent file path reference.

```diff
-AI-powered document generator that can help you generate documents with a chat interface and simple markdown editor.
+AI-powered document generator that can help you generate documents with a chat interface and a simple markdown editor.

-To update the workflow, you can modify the code in [`workflow.ts`](app/workflow.ts).
+To update the workflow, you can modify the code in [`workflow.ts`](src/app/workflow.ts).
```

37-43: Fix grammar error in request instructions.

There's a grammar error in the request instructions.

```diff
-You can start by sending an request on the [chat UI](http://localhost:3000) or you can test the `/api/chat` endpoint with the following curl request:
+You can start by sending a request on the [chat UI](http://localhost:3000) or you can test the `/api/chat` endpoint with the following curl request:
```
🧰 Tools
🪛 LanguageTool

[misspelling] ~37-~37: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...workflow.ts). You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

packages/create-llama/templates/components/ui/use-cases/document_generator/ui_event.jsx (2)

29-48: Consider adding type definitions for the event prop.

While the component logic is sound, adding TypeScript type definitions would improve code robustness and developer experience.

```jsx
// Add at the top of the file
/**
 * @typedef {Object} WorkflowEvent
 * @property {('plan'|'generate'|'completed')} state - The current state of the workflow
 * @property {string} [requirement] - The current requirement being processed (optional)
 */

/**
 * @param {Object} props
 * @param {WorkflowEvent} props.event - The workflow event object
 */
function ArtifactWorkflowCard({ event }) {
  // ...existing code
}
```

128-137: Consider adding error handling for unexpected event states.

The component doesn't explicitly handle unexpected event states (beyond "plan" and "generate"). While the current implementation gracefully returns null when the state doesn't match, a more explicit approach might be beneficial.

```diff
 export default function Component({ events }) {
   const aggregateEvents = () => {
     if (!events || events.length === 0) return null;
-    return events[events.length - 1];
+    const lastEvent = events[events.length - 1];
+    // Validate that the event state is one of the expected values
+    if (lastEvent && lastEvent.state && !['plan', 'generate', 'completed'].includes(lastEvent.state)) {
+      console.warn(`Unexpected workflow state: ${lastEvent.state}`);
+    }
+    return lastEvent;
   };

   const event = aggregateEvents();

   return <ArtifactWorkflowCard event={event} />;
 }
```
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b3eb0ba and ecd0a98.

📒 Files selected for processing (18)
  • .changeset/calm-women-repair.md (1 hunks)
  • packages/create-llama/e2e/python/resolve_dependencies.spec.ts (1 hunks)
  • packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (2 hunks)
  • packages/create-llama/helpers/types.ts (1 hunks)
  • packages/create-llama/helpers/typescript.ts (1 hunks)
  • packages/create-llama/questions/simple.ts (4 hunks)
  • packages/create-llama/templates/components/ui/use-cases/document_generator/ui_event.jsx (1 hunks)
  • packages/create-llama/templates/components/use-cases/python/artifacts/workflow.py (0 hunks)
  • packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (1 hunks)
  • packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (2 hunks)
  • packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md (1 hunks)
  • packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (2 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/artifacts/src/app/workflow.ts (0 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/code_generator/README-template.md (1 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/document_generator/README-template.md (1 hunks)
  • packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1 hunks)
  • packages/create-llama/templates/types/llamaindexserver/nextjs/src/app/settings.ts (1 hunks)
💤 Files with no reviewable changes (2)
  • packages/create-llama/templates/components/use-cases/python/artifacts/workflow.py
  • packages/create-llama/templates/components/use-cases/typescript/artifacts/src/app/workflow.ts
🧰 Additional context used
🧬 Code Graph Analysis (4)
packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)
packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)
  • workflowFactory (15-19)
packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)
packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)
  • workflowFactory (15-19)
packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (2)
packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (1)
  • create_workflow (30-36)
python/llama-index-server/tests/api/test_event_stream.py (1)
  • chat_request (23-26)
packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (3)
packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (1)
  • create_workflow (30-36)
python/llama-index-server/tests/api/test_event_stream.py (1)
  • chat_request (23-26)
python/llama-index-server/llama_index/server/api/models.py (1)
  • ChatRequest (32-41)
🪛 LanguageTool
packages/create-llama/templates/components/use-cases/typescript/code_generator/README-template.md

[uncategorized] ~33-~33: Possible missing article found.
Context: ...de generator that can help you generate app with a chat interface, code editor and ...

(AI_HYDRA_LEO_MISSING_AN)

packages/create-llama/templates/components/use-cases/python/document_generator/README-template.md

[uncategorized] ~38-~38: A punctuation mark might be missing here.
Context: ...he workflow, you can modify the code in workflow.py. You can...

(AI_EN_LECTOR_MISSING_PUNCTUATION)

packages/create-llama/templates/components/use-cases/typescript/document_generator/README-template.md

[uncategorized] ~25-~25: You might be missing the article “the” here.
Context: ...play the state of artifact workflows in UI. You can regenerate a new UI component ...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[misspelling] ~37-~37: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...workflow.ts). You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md

[uncategorized] ~35-~35: Possible missing article found.
Context: ...de generator that can help you generate app with a chat interface, code editor and ...

(AI_HYDRA_LEO_MISSING_AN)


[uncategorized] ~37-~37: A punctuation mark might be missing here.
Context: ...he workflow, you can modify the code in workflow.py. You can...

(AI_EN_LECTOR_MISSING_PUNCTUATION)


[misspelling] ~39-~39: Use “a” instead of ‘an’ if the following word doesn’t start with a vowel sound, e.g. ‘a sentence’, ‘a university’.
Context: ...workflow.py). You can start by sending an request on the [chat UI](http://localho...

(EN_A_VS_AN)

🪛 markdownlint-cli2 (0.17.2)
packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md

26-26: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


41-41: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


53-53: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms (56)
  • GitHub Check: build
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: lint
🔇 Additional comments (20)
.changeset/calm-women-repair.md (1)

1-6: LGTM! The changeset properly documents the PR purpose.

The changeset correctly describes the split of the artifacts use case into code_generator and document_generator, which aligns with the PR title and objectives. This is important for tracking and versioning.

packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)

15-19: Implementation of workflowFactory looks good.

The workflowFactory function follows the same pattern as used in the document_generator workflow. It properly takes a request body parameter and returns the created workflow.
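As a rough sketch of the pattern being reviewed (the names `createWorkflow`, `RequestBody`, and the echo logic are illustrative, not the template's actual API), such a per-request factory could look like:

```typescript
// Hypothetical sketch: a factory that builds a fresh workflow per request body.
type RequestBody = { messages: Array<{ role: string; content: string }> };

type Workflow = { run: (input: string) => string };

function createWorkflow(body: RequestBody): Workflow {
  // The real template wires up LLM settings and workflow events here;
  // this stub just echoes the last user message for illustration.
  const last = body.messages[body.messages.length - 1]?.content ?? "";
  return { run: (input: string) => `${input}:${last}` };
}

export function workflowFactory(body: RequestBody): Workflow {
  return createWorkflow(body);
}
```

The point of the pattern is that each incoming request gets its own workflow instance, so per-request state never leaks between chats.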

packages/create-llama/helpers/types.ts (1)

61-62:

✅ Verification successful

Type definition updated correctly to support the split use cases.

The TemplateUseCase type has been properly updated to replace "artifacts" with "code_generator" and "document_generator", which aligns with the overall refactoring goal of this PR.


🏁 Script executed:

#!/bin/bash
# Verify that all previous "artifacts" references are properly updated

echo "Checking for remaining 'artifacts' references that might need updating..."
rg "'artifacts'" --type=ts --type=py
rg "\"artifacts\"" --type=ts --type=py

Length of output: 246


Approve TemplateUseCase update—no remaining “artifacts” references

All occurrences of the old "artifacts" use case have been removed and replaced with "code_generator" and "document_generator", in line with the PR’s refactoring goal.

• Verified via:

  • rg "'artifacts'" --type=ts --type=py
  • rg "\"artifacts\"" --type=ts --type=py
    (no matches found)

• File confirmed: packages/create-llama/helpers/types.ts (Lines 61–62)
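For illustration, the split union and the kind of guard that replaces the old `"artifacts"` check might look like the following (only `"code_generator"` and `"document_generator"` come from the diff; the other union members are assumptions):

```typescript
// Sketch of the updated use-case union; members other than the two new
// generators are assumed for illustration.
type TemplateUseCase =
  | "agentic_rag"
  | "deep_research"
  | "code_generator"
  | "document_generator";

// Conditions that previously checked for "artifacts" now match both halves.
function isArtifactUseCase(useCase: TemplateUseCase): boolean {
  return useCase === "code_generator" || useCase === "document_generator";
}
```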

packages/create-llama/e2e/python/resolve_dependencies.spec.ts (1)

21-22: LGTM! Properly updated use cases in test suite

The changes correctly replace the "artifacts" use case with the split "code_generator" and "document_generator" use cases in the test suite, consistent with the overall refactoring approach.

packages/create-llama/helpers/typescript.ts (1)

80-80: Updated conditional checks for new use cases correctly.

The condition has been properly updated to check for the new code_generator and document_generator use cases instead of the original artifacts use case. This change aligns with the refactoring goal of splitting the artifacts use case.

packages/create-llama/e2e/typescript/resolve_dependencies.spec.ts (2)

26-27: Successfully updated use cases array.

The test suite correctly replaces artifacts with the two new separate use cases: code_generator and document_generator.


86-86: Properly updated condition for skipping test cases.

The condition has been correctly updated to skip the llamaParse test for both new use cases (code_generator and document_generator) instead of just the original artifacts use case.

packages/create-llama/templates/components/use-cases/python/code_generator/workflow.py (1)

9-9: Added required OpenAI import for LLM initialization.

The import for OpenAI LLM is correctly added to support the new workflow factory function.

packages/create-llama/templates/components/use-cases/python/document_generator/workflow.py (1)

7-7: Added required OpenAI import for LLM initialization.

The import for OpenAI LLM is correctly added to support the new workflow factory function.

packages/create-llama/templates/components/use-cases/python/code_generator/README-template.md (2)

1-3: LGTM - Good introduction with important links.

The introduction effectively provides links to LlamaIndex and Workflows, which are essential for users to understand the project's foundation.


57-65: LGTM - Good resources section.

The Learn More section provides valuable resources for users to deepen their understanding of LlamaIndex and related tools.

packages/create-llama/templates/components/use-cases/typescript/document_generator/README-template.md (2)

1-3: LGTM - Good introduction with important links.

The introduction effectively provides links to LlamaIndex and Create Llama, which are essential for users to understand the project's foundation.


45-53: LGTM - Good resources section.

The Learn More section provides valuable resources for users to deepen their understanding of LlamaIndex and related tools.

packages/create-llama/questions/simple.ts (4)

9-14: LGTM - Clean type definition update for new use cases.

The AppType definition has been appropriately updated to replace "artifacts" with the two new specialized use cases: "code_generator" and "document_generator".


50-59: LGTM - Well-defined choices for new use cases.

The choices have been updated with clear titles and descriptions for the new Code Generator and Document Generator options, replacing the previous Artifacts option.


84-84: LGTM - Correctly updated condition for LlamaCloud services exclusion.

The condition has been properly updated to exclude both "code_generator" and "document_generator" from the LlamaCloud services prompt, maintaining the previous behavior for the new use cases.


161-172: LGTM - Properly configured new use cases in the lookup object.

Both new use cases have been added to the lookup object with the appropriate configuration, maintaining consistency with the other use cases and providing the necessary MODEL_GPT41 configuration.
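A minimal sketch of such a lookup entry, assuming the field names shown here (the `"gpt-4.1"` default comes from the PR summary; `UseCaseConfig` and `modelConfig` are illustrative):

```typescript
// Hedged sketch of the per-use-case lookup described above.
const MODEL_GPT41 = "gpt-4.1";

type UseCaseConfig = { modelConfig: { model: string } };

const useCaseLookup: Record<string, UseCaseConfig> = {
  code_generator: { modelConfig: { model: MODEL_GPT41 } },
  document_generator: { modelConfig: { model: MODEL_GPT41 } },
};
```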

packages/create-llama/templates/components/ui/use-cases/document_generator/ui_event.jsx (3)

1-9: LGTM - Good imports setup.

The component imports all necessary UI components and React hooks needed for the document generator workflow visualization.


10-27: LGTM - Well-organized stage metadata.

The STAGE_META constant provides a clear and organized way to manage the different visual elements for each workflow stage, making the component maintainable and easy to understand.
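The stage-metadata pattern can be sketched as a typed record keyed by stage name (the stage names and fields below are assumptions for illustration, not the component's actual values):

```typescript
// Illustrative STAGE_META-style lookup: one metadata object per workflow stage.
type Stage = "plan" | "generate" | "completed";

const STAGE_META: Record<Stage, { title: string; badgeText: string }> = {
  plan: { title: "Planning", badgeText: "In progress" },
  generate: { title: "Generating", badgeText: "In progress" },
  completed: { title: "Completed", badgeText: "Done" },
};

function stageTitle(stage: Stage): string {
  return STAGE_META[stage].title;
}
```

Keeping all per-stage visuals in one record like this is what makes adding or renaming a stage a single-line change.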


49-126: LGTM - Elegant implementation of workflow card UI.

The ArtifactWorkflowCard component handles different workflow states elegantly with appropriate visual feedback and smooth transitions. The conditional rendering based on the workflow state provides a clear user experience.

@leehuwuj leehuwuj force-pushed the lee/separate-artifacts branch from 82fced3 to d0ae22e Compare May 14, 2025 03:41
- Removed unnecessary string conversion for userInput in code_generator and deep_research workflows.
- Updated userRequest type to MessageContent for better type safety.
- Cleaned up the UI event component by removing redundant indicatorClassName logic.
@leehuwuj leehuwuj merged commit 1df8cfb into main May 15, 2025
60 checks passed