
Conversation

@radofuchs radofuchs commented Nov 4, 2025

Description

Fix RHELAI tests and add assert messages to make test debugging easier.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Added support for RHEL AI models with dynamic configuration via environment variables.
    • Added explicit embedding model support.
  • Bug Fixes

    • Enhanced error handling for streaming responses with HTTP status validation.
  • Tests

    • Improved test validation with better error messages and context-driven assertions.
    • Updated test scenarios to require authorization headers for streaming queries.
  • Chores

    • Added push trigger to the end-to-end test workflow.

@radofuchs radofuchs requested review from are-ces and tisnik November 4, 2025 13:46

coderabbitai bot commented Nov 4, 2025

Walkthrough

This PR updates RHEL AI e2e tests to use environment-variable-driven model configuration instead of hardcoded values, enhances error handling in streaming responses, and refines model validation with context-aware matching and improved failure diagnostics.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Workflow Automation**<br>`.github/workflows/e2e_tests_rhelai.yaml` | Adds a push trigger to the GitHub Actions workflow, enabling automated test runs on push events alongside the existing schedule and workflow_dispatch triggers. |
| **Test Configuration**<br>`tests/e2e/configs/run-rhelai.yaml` | Replaces the hardcoded GPT-4 Turbo provider_shield_id with an environment variable reference; shifts from OpenAI/GPT-4 to a vllm-based LLM driven by `${env.RHEL_AI_MODEL}`; adds an explicit sentence-transformers embedding model entry. |
| **Test Validation Steps**<br>`tests/e2e/features/steps/info.py` | Refines model structure validation from a first-model assumption to context-driven provider/resource matching; enhances error messages with specific provider and resource identifiers; strengthens all LLM field assertions. |
| **Stream Response Handling**<br>`tests/e2e/features/steps/llm_query_response.py` | Adds an explicit `raise_for_status()` call after parsing streaming response data to catch HTTP errors before subsequent assertions. |
| **Feature Scenarios**<br>`tests/e2e/features/streaming_query.feature` | Replaces queries without authorization headers with authorized header usage across scenarios; removes some explicit 200 status code assertions while retaining streamed response fragment checks. |
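As a rough sketch of the configuration change described above, the env-driven model entries in `tests/e2e/configs/run-rhelai.yaml` might look something like this (the exact keys and the embedding model name are assumptions for illustration, not the actual file contents):

```yaml
# Hypothetical sketch, not the actual run-rhelai.yaml contents.
models:
  - model_id: ${env.RHEL_AI_MODEL}   # resolved from the CI environment at runtime
    provider_id: vllm
    model_type: llm
  - model_id: all-mpnet-base-v2      # explicit embedding model entry
    provider_id: sentence-transformers
    model_type: embedding
```

With this shape, the same config file serves any RHEL AI deployment: only the `RHEL_AI_MODEL` environment variable in the workflow changes, not the checked-in YAML.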

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Model validation refactor in info.py — Logic shift from positional assumption to context-driven filtering requires careful verification of matching logic.
  • Error handling insertion in llm_query_response.py — New raise_for_status() call alters control flow; verify it doesn't suppress or prematurely exit expected test paths.
  • Environment variable substitution in config — Confirm ${env.RHEL_AI_MODEL} is properly resolved in test environment and doesn't break fallback scenarios.
  • Authorization header and status code changes in feature file — Verify removed status code assertions don't mask HTTP failures that should be caught elsewhere.


Suggested reviewers

  • are-ces
  • tisnik

Poem

🐰 Hops through configs with care,
Env vars flutter in the air,
Validation now sees what's true,
Context-aware through and through,
Errors caught before they fly,
Tests now streaming way up high! 🚀

Pre-merge checks and finishing touches

✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately reflects the main changes: fixing RHELAI tests (workflow trigger addition and configuration changes) and adding assert messages with detailed failure information for debugging. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/e2e_tests_rhelai.yaml (1)

5-5: Consider adding branch filters to the push trigger.

Running RHEL AI E2E tests on every push to every branch could create significant CI overhead. Consider restricting to specific branches:

```diff
 on:
-  push:
+  push:
+    branches:
+      - main
+      - 'release/**'
   schedule:
     - cron: "0 0 * * *"  # Runs once a day at midnight UTC
   workflow_dispatch:
```

Alternatively, use path filters if tests should only run when specific files change.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60ba8ec and 153e4bd.

📒 Files selected for processing (5)
  • .github/workflows/e2e_tests_rhelai.yaml (1 hunks)
  • tests/e2e/configs/run-rhelai.yaml (1 hunks)
  • tests/e2e/features/steps/info.py (2 hunks)
  • tests/e2e/features/steps/llm_query_response.py (1 hunks)
  • tests/e2e/features/streaming_query.feature (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Write E2E tests as Gherkin feature files for behave

Files:

  • tests/e2e/features/streaming_query.feature
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use `logger = logging.getLogger(__name__)` for module logging after `import logging`
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @abstractmethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed

Files:

  • tests/e2e/features/steps/llm_query_response.py
  • tests/e2e/features/steps/info.py
tests/e2e/features/steps/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Place behave step definitions under tests/e2e/features/steps/

Files:

  • tests/e2e/features/steps/llm_query_response.py
  • tests/e2e/features/steps/info.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests

Files:

  • tests/e2e/features/steps/llm_query_response.py
  • tests/e2e/features/steps/info.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build-pr
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: e2e_tests (azure)
  • GitHub Check: e2e_tests (ci)
🔇 Additional comments (7)
tests/e2e/features/streaming_query.feature (2)

49-49: LGTM - Improved consistency.

The change to use authorization header aligns with other scenarios in the file and ensures consistent test coverage.


60-62: LGTM - Strengthened test assertion.

The change to check for specific fragments in the streamed response provides more meaningful validation than just a status code check, improving test debugging as per the PR objectives.

tests/e2e/features/steps/llm_query_response.py (1)

18-19: LGTM - Enhanced error handling.

Adding raise_for_status() provides clearer failure diagnostics by distinguishing HTTP errors from streaming logic failures, directly supporting the PR's goal of easier test debugging.
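As a rough illustration of the pattern (the actual step implementation in llm_query_response.py is not shown here; the helper name and the manually built Response object are hypothetical), `raise_for_status()` turns a 4xx/5xx streaming response into an `HTTPError` before any fragment assertions run:

```python
import requests


def check_streaming_ok(response: requests.Response) -> None:
    """Fail fast on HTTP errors before inspecting streamed fragments.

    raise_for_status() raises requests.HTTPError for 4xx/5xx codes, so an
    auth or server failure surfaces as an explicit HTTP error instead of a
    confusing assertion on missing response fragments.
    """
    response.raise_for_status()


# Minimal demonstration with a manually constructed Response object.
resp = requests.Response()
resp.status_code = 500
try:
    check_streaming_ok(resp)
    raised = False
except requests.HTTPError:
    raised = True
print(raised)  # → True
```

The payoff in a test suite is diagnostic: a 401 from a missing authorization header fails with "401 Client Error" rather than an opaque "expected fragment not found in stream".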

tests/e2e/features/steps/info.py (2)

32-80: LGTM - Excellent improvement to model validation.

The refactor to use context-driven model validation with enhanced error messages significantly improves test debugging capabilities. The detailed assertions that show both expected and actual values align perfectly with the PR objectives.

Ensure that context.default_model and context.default_provider are properly initialized in the before_all hook to avoid AttributeError during test execution.
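Context-driven matching of the kind described above can be sketched as follows (the field names `provider_id` and `identifier`, and the sample data, are assumptions for illustration, not the actual response schema validated in info.py):

```python
def find_model(models: list[dict], provider: str, model_id: str) -> dict:
    """Locate the model entry matching the expected provider/identifier pair.

    Unlike a first-model assumption (models[0]), this filters by the expected
    provider and identifier, and the assert message lists what was actually
    available so a mismatch is easy to debug.
    """
    matches = [
        m for m in models
        if m.get("provider_id") == provider and m.get("identifier") == model_id
    ]
    assert matches, (
        f"Model '{model_id}' from provider '{provider}' not found; "
        f"available: {[(m.get('provider_id'), m.get('identifier')) for m in models]}"
    )
    return matches[0]


# Hypothetical sample data mimicking a /models response.
models = [
    {"provider_id": "vllm", "identifier": "rhelai-model", "model_type": "llm"},
    {"provider_id": "sentence-transformers", "identifier": "all-mpnet-base-v2",
     "model_type": "embedding"},
]
print(find_model(models, "vllm", "rhelai-model")["model_type"])  # → llm
```

In a behave step, `provider` and `model_id` would come from `context.default_provider` and `context.default_model`, which is why those attributes must be initialized before the step runs.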


82-114: LGTM - Consistent validation improvements.

The shield validation updates mirror the model validation enhancements, maintaining consistency across the test suite and providing better debugging information.

tests/e2e/configs/run-rhelai.yaml (2)

130-133: LGTM - Environment-driven shield configuration.

Using ${env.RHEL_AI_MODEL} makes the shield configuration flexible and aligns with the workflow environment variable setup.


134-144: LGTM - Simplified model configuration.

The shift to environment-driven model configuration with a single vllm LLM and explicit embedding model improves maintainability and aligns with the RHEL AI deployment model. The configuration is clear and well-documented.

Copy link
Contributor

@tisnik tisnik left a comment


LGTM

@tisnik tisnik merged commit dbe7714 into lightspeed-core:main Nov 4, 2025
20 of 22 checks passed