Conversation

@xmican10 xmican10 commented Oct 31, 2025

Description

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Tests
    • Added a new response fixture that provides a minimal HTTP JSON response for integration tests.
    • Added integration tests for health endpoints covering liveness and readiness probes: uninitialized error handling, mixed-provider health translation, and fully healthy scenarios validating status flags, messages, and provider reporting.

coderabbitai bot commented Oct 31, 2025

Walkthrough

Added a minimal FastAPI Response pytest fixture and a new integration test module exercising /health liveness and readiness probes with mocked AsyncLlamaStackClientHolder scenarios (uninitialized, mixed-provider statuses, and healthy).

Changes

Cohort / File(s) Summary
Test Infrastructure
tests/integration/conftest.py
Added Response import and a new test_response pytest fixture returning a minimal Response(status_code=200, media_type="application/json").
Health Endpoint Integration Tests
tests/integration/endpoints/test_health_integration.py
New integration tests for /health: test_health_liveness, test_health_readiness_config_error, test_health_readiness, and test_health_readiness_provider_statuses. Adds mock_llama_stack_client_health fixture that mocks AsyncLlamaStackClientHolder and its .inspect.version behavior to simulate uninitialized, mixed, and healthy provider states.

Sequence Diagram(s)

sequenceDiagram
    participant T as Test
    participant F as Mock Fixture
    participant H as AsyncLlamaStackClientHolder
    participant P as Health Probe

    rect rgb(220,235,255)
    Note over T,P: Liveness check
    T->>F: prepare holder (healthy/uninitialized)
    T->>P: call liveness_probe_get_method()
    P->>H: query holder state
    H-->>P: holder ref / state
    P-->>T: 200 {"alive": true}
    end

    rect rgb(255,235,230)
    Note over T,P: Readiness - uninitialized
    T->>F: arrange uninitialized holder
    T->>P: call readiness_probe_get_method()
    P->>H: inspect/version check (async)
    H-->>P: raises / not initialised
    P-->>T: 503 + providers list (id "unknown", init failure)
    end

    rect rgb(230,255,235)
    Note over T,P: Readiness - mixed/healthy providers
    T->>F: arrange mocked provider statuses
    T->>P: call readiness_probe_get_method()
    P->>H: inspect returns provider health entries
    H-->>P: provider statuses
    P-->>T: 200 + readiness payload (ready, providers list)
    end
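The three readiness scenarios in the diagram can be sketched with stdlib mocks alone. Here `check_readiness` is a hypothetical stand-in for the real `readiness_probe_get_method`, and the provider-entry shape (`provider_id` plus a `health` dict) is an assumption based on the walkthrough, not the repository's exact API:

```python
"""Sketch of the readiness flow above, using only asyncio and unittest.mock."""
import asyncio
from unittest.mock import AsyncMock, Mock


async def check_readiness(client) -> dict:
    """Translate mocked provider health entries into a readiness payload."""
    try:
        providers = await client.providers.list()
    except RuntimeError:
        # Uninitialized client: report 503 with an "unknown" provider entry.
        return {
            "status": 503,
            "ready": False,
            "providers": [{"provider_id": "unknown", "status": "error"}],
        }
    unhealthy = [p for p in providers if p.health["status"] != "ok"]
    return {
        "status": 200 if not unhealthy else 503,
        "ready": not unhealthy,
        "providers": [{"provider_id": p.provider_id, **p.health} for p in unhealthy],
    }


def demo() -> tuple[dict, dict]:
    """Run the healthy and uninitialized scenarios from the diagram."""
    healthy = AsyncMock()
    healthy.providers.list.return_value = [
        Mock(provider_id="healthy-provider", health={"status": "ok", "message": ""})
    ]
    broken = AsyncMock()
    broken.providers.list.side_effect = RuntimeError("client not initialised")
    return asyncio.run(check_readiness(healthy)), asyncio.run(check_readiness(broken))
```

The healthy path reports `ready` with an empty unhealthy-providers list, while the uninitialized path mirrors the 503 + "unknown" provider entry described in the diagram.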

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review async mocking of AsyncLlamaStackClientHolder and .inspect.version behavior.
  • Verify test assertions for status codes, provider ids/status/message shapes.
  • Ensure new test_response fixture name/scope doesn't conflict with existing fixtures.

Possibly related PRs

Suggested reviewers

  • radofuchs
  • tisnik

Poem

🐰 I hopped into tests today,
Mocked a client along the way,
Liveness chirped a cheery tune,
Readiness checked each provider soon,
Hooray — the health checks pass, hooray! 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title "Adding Lcore 714 health endpoint" directly aligns with the primary changes in the pull request. The new test module test_health_integration.py adds comprehensive integration tests for health endpoint functionality, including tests for liveness and readiness probe methods, and the supporting fixture added to conftest.py enables these tests. The title is clear, specific, and concise: it references the ticket identifier and describes the main feature being added. A reviewer scanning the commit history would immediately understand that this PR introduces health endpoint testing.
  • Docstring Coverage ✅ Passed: docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/endpoints/test_health_integration.py (1)

61-117: Load the configuration fixture for readiness checks.

reset_configuration_state clears configuration._configuration before every test, so both readiness scenarios currently run with an unloaded config. The tests still pass because the code falls back to an empty provider list, but that makes the assertions vacuous and will break once get_providers_health_statuses enforces a loaded configuration (it raises LogicError today). Please pull in test_config so the tests exercise the real wiring they document.

 async def test_health_readiness_config_error(
-    test_response: Response,
+    test_config: AppConfig,
+    test_response: Response,
     test_auth: AuthTuple,
 ) -> None:
@@
 async def test_health_readiness(
     mock_llama_stack_client_health: AsyncMockType,
-    test_response: Response,
+    test_config: AppConfig,
+    test_response: Response,
     test_auth: AuthTuple,
 ) -> None:
@@
-    _ = mock_llama_stack_client_health
+    _ = (mock_llama_stack_client_health, test_config)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 980e871 and 9e02f79.

📒 Files selected for processing (2)
  • tests/integration/conftest.py (2 hunks)
  • tests/integration/endpoints/test_health_integration.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use logger = logging.getLogger(name) for module logging after import logging
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @AbstractMethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed

Files:

  • tests/integration/endpoints/test_health_integration.py
  • tests/integration/conftest.py
tests/{unit,integration}/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/{unit,integration}/**/*.py: Use pytest for all unit and integration tests
Do not use unittest in tests; pytest is the standard

Files:

  • tests/integration/endpoints/test_health_integration.py
  • tests/integration/conftest.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests

Files:

  • tests/integration/endpoints/test_health_integration.py
  • tests/integration/conftest.py
tests/**/conftest.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use conftest.py for shared pytest fixtures

Files:

  • tests/integration/conftest.py
🧠 Learnings (1)
📚 Learning: 2025-09-18T16:46:33.353Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-18T16:46:33.353Z
Learning: Applies to tests/**/conftest.py : Use conftest.py for shared pytest fixtures

Applied to files:

  • tests/integration/conftest.py
🧬 Code graph analysis (1)
tests/integration/endpoints/test_health_integration.py (3)
src/configuration.py (1)
  • configuration (73-77)
src/app/endpoints/health.py (2)
  • liveness_probe_get_method (126-140)
  • readiness_probe_get_method (80-112)
src/client.py (1)
  • get_client (49-55)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)
  • GitHub Check: e2e_tests (azure)


@radofuchs radofuchs left a comment


LGTM

@radofuchs radofuchs requested a review from tisnik October 31, 2025 12:29

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e02f79 and 6adf5fc.

📒 Files selected for processing (1)
  • tests/integration/endpoints/test_health_integration.py (1 hunks)
🧬 Code graph analysis (1)
tests/integration/endpoints/test_health_integration.py (2)
src/app/endpoints/health.py (3)
  • liveness_probe_get_method (126-140)
  • readiness_probe_get_method (80-112)
  • get_providers_health_statuses (29-63)
src/client.py (1)
  • get_client (49-55)
🪛 GitHub Actions: Python linter
tests/integration/endpoints/test_health_integration.py

[warning] 16-16: pylint C0411: third party import 'llama_stack.providers.datatypes.HealthStatus' should be placed before first party imports; wrong-import-order


Comment on lines +157 to +183
@pytest.mark.asyncio
async def test_health_readiness(
    mock_llama_stack_client_health: AsyncMockType,
    test_response: Response,
    test_auth: AuthTuple,
) -> None:
    """Test that readiness probe endpoint returns readiness status.

    This integration test verifies:
    - Endpoint handler integrates with configuration system
    - Configuration values are correctly accessed
    - Real noop authentication is used
    - Response structure matches expected format

    Args:
        mock_llama_stack_client_health: Mocked Llama Stack client
        test_response: FastAPI response object
        test_auth: noop authentication tuple
    """
    _ = mock_llama_stack_client_health

    result = await readiness_probe_get_method(auth=test_auth, response=test_response)

    # Verify that service returns readiness response
    assert result.ready is True
    assert result.reason == "All providers are healthy"
    assert result.providers is not None
⚠️ Potential issue | 🔴 Critical

Stub healthy providers before asserting readiness.
With no return value configured for mock_llama_stack_client_health.providers.list, get_providers_health_statuses() hits the exception path, returns an "unknown" error entry, and readiness_probe_get_method() keeps ready false—so the assertions on Lines 181-183 fail. Set an explicit healthy provider list in this test.

 async def test_health_readiness(
     mock_llama_stack_client_health: AsyncMockType,
+    mocker: MockerFixture,
     test_response: Response,
     test_auth: AuthTuple,
 ) -> None:
@@
-    Args:
-        mock_llama_stack_client_health: Mocked Llama Stack client
-        test_response: FastAPI response object
-        test_auth: noop authentication tuple
+    Args:
+        mock_llama_stack_client_health: Mocked Llama Stack client
+        mocker: pytest-mock fixture for creating mock objects
+        test_response: FastAPI response object
+        test_auth: noop authentication tuple
@@
-    _ = mock_llama_stack_client_health
+    mock_llama_stack_client_health.providers.list.return_value = [
+        mocker.Mock(
+            provider_id="healthy-provider",
+            health={"status": "ok", "message": ""},
+        )
+    ]
 
     result = await readiness_probe_get_method(auth=test_auth, response=test_response)
🤖 Prompt for AI Agents
In tests/integration/endpoints/test_health_integration.py around lines 157 to
183, the mocked Llama Stack client does not have a configured return for
mock_llama_stack_client_health.providers.list causing
get_providers_health_statuses() to return an "unknown" error and readiness to be
false; fix by stubbing the mock to return an explicit healthy providers list
before calling readiness_probe_get_method (e.g., set
mock_llama_stack_client_health.providers.list to return a list/dict representing
one or more healthy providers so the probe sees all providers healthy and the
assertions for ready=True, reason="All providers are healthy", and providers
non-None will pass).
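The failure mode flagged above is easy to reproduce with stdlib mocks alone (a sketch, not the repository's code): awaiting an unconfigured AsyncMock call yields another generated mock rather than a list of provider entries, so downstream health translation never sees real statuses.

```python
"""Shows why leaving .providers.list unconfigured breaks healthy-path assertions."""
import asyncio
from unittest.mock import AsyncMock, Mock


def demo():
    """Compare an unconfigured and an explicitly stubbed providers.list call."""
    unconfigured = AsyncMock()
    # No return_value configured: the awaited result is a generated Mock,
    # not provider entries, so code expecting a provider list misbehaves.
    implicit = asyncio.run(unconfigured.providers.list())

    configured = AsyncMock()
    configured.providers.list.return_value = [
        Mock(provider_id="healthy-provider", health={"status": "ok", "message": ""})
    ]
    explicit = asyncio.run(configured.providers.list())
    return implicit, explicit
```

Only the explicitly stubbed client produces a list whose entries carry the `provider_id` and `health` shape the readiness assertions depend on.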


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
tests/integration/endpoints/test_health_integration.py (2)

3-16: Fix import grouping to unblock lint.

The third-party import llama_stack.providers.datatypes should be grouped with other third-party imports before first-party imports to satisfy pylint C0411.


157-178: Stub healthy providers before asserting readiness.

Without configuring mock_llama_stack_client_health.providers.list.return_value, get_providers_health_statuses() hits the exception path and returns an error status, causing the test assertions to fail.

🧹 Nitpick comments (3)
tests/integration/endpoints/test_health_integration.py (3)

19-22: Refine fixture return type annotation.

The return type Generator[Any, None, None] is too generic. Since the fixture yields an AsyncMockType, consider using Generator[AsyncMockType, None, None] for better type safety.

Apply this diff:

 @pytest.fixture(name="mock_llama_stack_client_health")
 def mock_llama_stack_client_fixture(
     mocker: MockerFixture,
-) -> Generator[Any, None, None]:
+) -> Generator[AsyncMockType, None, None]:

42-57: Clarify unused test_config parameter.

The test_config parameter is assigned to _ on line 57, suggesting it's unused. If this fixture is required only for its side effects (e.g., loading application configuration), consider clarifying this in the docstring. Otherwise, remove the parameter if it's not needed.


180-183: Strengthen assertion for healthy providers scenario.

When all providers are healthy, result.providers should contain an empty list (since it only includes unhealthy providers). The assertion assert result.providers is not None is weak and doesn't verify the expected behavior.

Apply this diff:

     # Verify that service returns readiness response
     assert result.ready is True
     assert result.reason == "All providers are healthy"
-    assert result.providers is not None
+    assert result.providers == []
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6adf5fc and 64ce25a.

📒 Files selected for processing (1)
  • tests/integration/endpoints/test_health_integration.py (1 hunks)
🧬 Code graph analysis (1)
tests/integration/endpoints/test_health_integration.py (3)
src/configuration.py (2)
  • configuration (73-77)
  • AppConfig (39-181)
src/app/endpoints/health.py (3)
  • liveness_probe_get_method (126-140)
  • readiness_probe_get_method (80-112)
  • get_providers_health_statuses (29-63)
src/client.py (1)
  • get_client (49-55)
🔇 Additional comments (2)
tests/integration/endpoints/test_health_integration.py (2)

65-119: LGTM!

The test properly configures the mock with mixed health statuses and verifies correct mapping of provider IDs, statuses, and messages. The index-based assertions ensure the function preserves provider order as returned by the client.


121-155: LGTM!

This test effectively validates the error handling path by exercising the uninitialized client scenario. The comprehensive assertions verify the 503 status code, error structure, and detailed error message.


@tisnik tisnik left a comment


LGTM

@tisnik tisnik merged commit 5d280c9 into lightspeed-core:main Oct 31, 2025
18 of 20 checks passed