
Conversation

@anik120
Contributor

@anik120 anik120 commented Nov 9, 2025

Description

This will be helpful for folks who are new to llama-stack

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Added an interactive tutorial that walks through Llama Stack features: health checks, server info, available models, safety shields, tool groups, tools, inference examples, API reference, integrations, and a "Try It Now" recap; supports interactive pauses and a non-interactive mode (example invocations are sketched after this summary).
  • Documentation

    • README updated to insert the optional tutorial step into the local run sequence and renumber subsequent steps.
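
For readers trying the tutorial, invocations along these lines follow from the flags described above; this is a sketch based on the summary, not the script's documented interface:

```bash
# Run from the repository root, with the Llama Stack server already running.
./scripts/llama_stack_tutorial.sh            # interactive: pauses between sections
./scripts/llama_stack_tutorial.sh --no-wait  # non-interactive: runs straight through
./scripts/llama_stack_tutorial.sh --help     # prints usage information
```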

@coderabbitai
Contributor

coderabbitai bot commented Nov 9, 2025

Walkthrough

Adds a new interactive Bash tutorial at scripts/llama_stack_tutorial.sh that walks users through Llama Stack HTTP endpoints (health, version, models, tools, safety, inference examples) with optional pauses and a --no-wait flag; updates README.md to include this tutorial as an optional step and renumbers subsequent steps.
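
For orientation, here is a minimal sketch of how a script like this typically wires up the --no-wait flag, the jq fallback, and an interactive pause helper. The names LLAMA_STACK_URL, JQ_CMD, and wait_for_user follow the walkthrough above, but the actual implementation in scripts/llama_stack_tutorial.sh may differ in its details.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Assumed default; the review notes the script targets a local server on port 8321.
LLAMA_STACK_URL="${LLAMA_STACK_URL:-http://localhost:8321}"
NO_WAIT=false

# Parse the flags described in the walkthrough.
for arg in "$@"; do
    case "$arg" in
        --no-wait) NO_WAIT=true ;;
        --help)    echo "Usage: $0 [--no-wait] [--help]"; exit 0 ;;
        *)         echo "Unknown option: $arg" >&2; exit 1 ;;
    esac
done

# Prefer jq for pretty-printing JSON; fall back to cat when it is missing.
if command -v jq &> /dev/null; then
    JQ_CMD="jq ."
else
    echo "jq not found; output will be unformatted."
    JQ_CMD="cat"
fi

# Pause between sections unless --no-wait was given.
wait_for_user() {
    if [ "$NO_WAIT" = false ]; then
        read -rp "Press Enter to continue..."
    fi
}
```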

Changes

  • Documentation Update (README.md): Inserted an optional tutorial step to run scripts/llama_stack_tutorial.sh in the "Run LCS locally" sequence and updated subsequent step numbering.
  • New Tutorial Script (scripts/llama_stack_tutorial.sh): Added an interactive Bash tutorial script. Defines LLAMA_STACK_URL and color codes, supports --no-wait and --help, detects jq (falls back to cat), provides helpers (print_section, print_header, wait_for_user, run_command), queries endpoints (/health, /version, /models, /tools, /safety, etc.), formats output when possible, and offers interactive pauses and a recap.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (runs script)
    participant Script as llama_stack_tutorial.sh
    participant API as LLAMA_STACK_URL (HTTP API)
    participant Formatter as jq / cat

    User->>Script: start (optionally --no-wait)
    Script->>API: GET /health, /version, /models, /tools, /safety, /inference...
    API-->>Script: JSON responses
    Script->>Formatter: format JSON (jq if available, else cat)
    Formatter-->>Script: formatted output
    Script-->>User: display section, pause (if interactive)
    alt user continues
        User->>Script: proceed to next section
    else user exits
        User->>Script: quit
    end
    Script-->>User: final recap
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • Areas to focus on:
    • Correctness of endpoint URLs and curl options
    • Robustness of jq detection and fallback formatting
    • Flag parsing (--no-wait, --help) and interactive pause behavior
    • Handling of API failures and edge-case responses (one possible approach is sketched after this list)
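
On that last point, one way to surface failures early is a small wrapper around curl; check_endpoint is a hypothetical helper name, not something the script is known to define, and it reuses the LLAMA_STACK_URL and JQ_CMD variables mentioned in the walkthrough.

```bash
# Hypothetical helper: fail loudly when an endpoint is unreachable or returns an HTTP error.
check_endpoint() {
    local path=$1
    local response
    if ! response=$(curl -sS --fail --max-time 10 "${LLAMA_STACK_URL}${path}"); then
        echo "Request to ${path} failed; is the server running at ${LLAMA_STACK_URL}?" >&2
        return 1
    fi
    printf '%s\n' "$response" | $JQ_CMD
}

# Example: stop the tutorial early if the health endpoint cannot be reached.
check_endpoint "/health" || exit 1
```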

Suggested labels

ok-to-test

Suggested reviewers

  • tisnik

Poem

🐇 I hopped through endpoints, whiskers bright,
Curl'd some health checks in the moonlight.
Models, tools, a gentle tour —
Pause, then sprint, explore some more.
Run the script and hop with delight!

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title clearly and specifically describes the main change: adding a tutorial script for llama-stack to help new users. It directly relates to the two primary file changes (README.md updates and new tutorial script).

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 00755b0 and 28cf6df.

📒 Files selected for processing (2)
  • README.md (1 hunks)
  • scripts/llama_stack_tutorial.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • README.md
  • scripts/llama_stack_tutorial.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)
  • GitHub Check: e2e_tests (azure)


@openshift-ci

openshift-ci bot commented Nov 9, 2025

Hi @anik120. Thanks for your PR.

I'm waiting for a member of the GitHub organization to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
scripts/llama_stack_tutorial.sh (1)

69-76: Refactor repeated jq availability checks into a helper function.

The jq availability check is performed multiple times (lines 70, 153, 199), and command -v jq &> /dev/null is duplicated. This violates DRY and makes maintenance harder if the check logic needs to change.

```diff
+# Helper function to check if jq is available
+has_jq() {
+    command -v jq &> /dev/null
+}
+
-# Check if jq is available
-if ! command -v jq &> /dev/null; then
+if ! has_jq; then
     echo "⚠️  Warning: 'jq' is not installed. Output will be less formatted."
     echo "   Install with: apt-get install jq (Ubuntu/Debian), yum install jq (RHEL/CentOS), brew install jq (macOS)"
     JQ_CMD="cat"
 else
     JQ_CMD="jq ."
 fi
```

...and replace subsequent checks:
```diff
-if command -v jq &> /dev/null; then
+if has_jq; then
     # ... processing ...
 fi
```

Also applies to: 153-172, 199-203

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a5c345 and 37a5e2e.

📒 Files selected for processing (2)
  • README.md (1 hunks)
  • scripts/llama_stack_tutorial.sh (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: e2e_tests (ci)
  • GitHub Check: e2e_tests (azure)
🔇 Additional comments (2)
scripts/llama_stack_tutorial.sh (2)

43-49: eval pattern poses security risk; verify controlled usage.

The run_command function uses eval to execute dynamically constructed commands. While the commands are internally controlled here, eval is inherently risky and difficult to audit.

Consider alternatives:

  1. Use bash arrays with command substitution instead of eval
  2. Document explicitly why eval is necessary (e.g., supporting command expansion in $cmd)
  3. Add input validation/escaping if external input is ever passed to run_command

Current usage appears safe since $cmd is only constructed internally, but monitor for future changes that might introduce external input.
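
A sketch of the array-style alternative from point 1; the run_command name matches the review comment, but the real helper's signature and output handling may differ.

```bash
# Pass the command and its arguments directly; "$@" preserves each argument verbatim,
# so no eval and no re-parsing of a command string is needed.
run_command() {
    local description=$1
    shift
    echo "▶ ${description}"
    "$@" | $JQ_CMD
}

# Example call (the endpoint path is illustrative):
run_command "List available models" curl -s "${LLAMA_STACK_URL}/models"
```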


1-330: Comprehensive tutorial script with clear educational structure.

The script effectively guides users through Llama Stack features with good UX patterns: interactive pauses, color formatting, fallback handling for missing dependencies, and command-line options. Content is well-organized and pedagogically sound.

Minor observation: The script assumes LLAMA_STACK_URL (hardcoded to localhost:8321) is accessible and running. Consider adding a brief connectivity check at startup to provide early feedback if the server is unreachable. This would improve the user experience when the server is not running.
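
For the suggested connectivity check, something along these lines could run near the top of the script; the /health path follows the endpoints listed in the walkthrough, and the exact route exposed by the server may differ.

```bash
# Fail fast with a helpful message if the server is not reachable.
if ! curl -sf --max-time 5 "${LLAMA_STACK_URL}/health" > /dev/null; then
    echo "❌ Could not reach Llama Stack at ${LLAMA_STACK_URL}."
    echo "   Start the server (or adjust LLAMA_STACK_URL) and re-run the tutorial."
    exit 1
fi
```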

Contributor

@tisnik tisnik left a comment

Interesting, why not use it?

@tisnik
Contributor

tisnik commented Nov 10, 2025

@anik120 LGTM, but please look at the shellcheck bug (unused RED control code).
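
For reference, ShellCheck reports an assigned-but-unused variable as SC2034. If RED is kept around for future colored output, a directive like the following silences the warning; the literal value shown is just a typical ANSI red escape, not necessarily what the script defines.

```bash
# shellcheck disable=SC2034  # RED is intentionally kept for future error output
RED='\033[0;31m'
```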

anik120 force-pushed the add-llama-stack-tutorial-script branch from 37a5e2e to 00755b0 on November 10, 2025 at 23:14.
@anik120
Contributor Author

anik120 commented Nov 10, 2025

@tisnik looks like CodeRabbit is happy now.

I also realized the prerequisites section could use some info about getting an OpenAI API key, but I've kept that concern separate from this PR and opened a new PR #777.

anik120 force-pushed the add-llama-stack-tutorial-script branch from 00755b0 to 28cf6df on November 12, 2025 at 14:14.