fix RHELAI tests and add assert messages to make test debugging easier #757
Conversation
Walkthrough

This PR updates RHEL AI e2e tests to use environment-variable-driven model configuration instead of hardcoded values, enhances error handling in streaming responses, and refines model validation with context-aware matching and improved failure diagnostics.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/e2e_tests_rhelai.yaml (1)
5-5: Consider adding branch filters to the push trigger. Running RHEL AI E2E tests on every push to every branch could create significant CI overhead. Consider restricting to specific branches:
```diff
 on:
-  push:
+  push:
+    branches:
+      - main
+      - 'release/**'
   schedule:
     - cron: "0 0 * * *" # Runs once a day at midnight UTC
   workflow_dispatch:
```

Alternatively, use path filters if tests should only run when specific files change.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .github/workflows/e2e_tests_rhelai.yaml (1 hunks)
- tests/e2e/configs/run-rhelai.yaml (1 hunks)
- tests/e2e/features/steps/info.py (2 hunks)
- tests/e2e/features/steps/llm_query_response.py (1 hunks)
- tests/e2e/features/streaming_query.feature (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature
📄 CodeRabbit inference engine (CLAUDE.md)
Write E2E tests as Gherkin feature files for behave
Files:
tests/e2e/features/streaming_query.feature
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use logger = logging.getLogger(__name__) for module logging after import logging
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @abstractmethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed
Files:
tests/e2e/features/steps/llm_query_response.py
tests/e2e/features/steps/info.py
tests/e2e/features/steps/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place behave step definitions under tests/e2e/features/steps/
Files:
tests/e2e/features/steps/llm_query_response.py
tests/e2e/features/steps/info.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests
Files:
tests/e2e/features/steps/llm_query_response.py
tests/e2e/features/steps/info.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-pr
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
- GitHub Check: e2e_tests (azure)
- GitHub Check: e2e_tests (ci)
🔇 Additional comments (7)
tests/e2e/features/streaming_query.feature (2)
49-49: LGTM - Improved consistency. The change to use the authorization header aligns with other scenarios in the file and ensures consistent test coverage.
60-62: LGTM - Strengthened test assertion. The change to check for specific fragments in the streamed response provides more meaningful validation than just a status code check, improving test debugging as per the PR objectives.
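As an illustration of the kind of step that could back this assertion, here is a minimal behave sketch; the step text, the "fragment" table column, and the context attributes are assumptions for illustration, not the repository's actual step definitions.

```python
from behave import then


@then("the streamed response should contain fragments")
def check_streamed_fragments(context) -> None:
    """Check that every expected fragment appears in the streamed body."""
    body = context.response.text  # full body accumulated from the stream (assumed attribute)
    for row in context.table:
        fragment = row["fragment"]
        assert fragment in body, (
            f"Expected fragment '{fragment}' not found in streamed response; "
            f"got: {body[:500]}"
        )
```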
tests/e2e/features/steps/llm_query_response.py (1)
18-19: LGTM - Enhanced error handling. Adding raise_for_status() provides clearer failure diagnostics by distinguishing HTTP errors from streaming logic failures, directly supporting the PR's goal of easier test debugging.
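A minimal sketch of that pattern, assuming a requests-based streaming step; the endpoint path, payload shape, and context attributes are placeholders rather than the suite's real values.

```python
import requests


def stream_query(context) -> str:
    """Send a streaming query and return the accumulated response body."""
    response = requests.post(
        f"{context.base_url}/v1/streaming_query",  # hypothetical endpoint and attribute
        json={"query": context.query},
        stream=True,
        timeout=60,
    )
    # Fail fast on 4xx/5xx so HTTP errors are reported as HTTP errors,
    # not as confusing downstream streaming/parsing failures.
    response.raise_for_status()
    return "".join(
        chunk.decode("utf-8")
        for chunk in response.iter_content(chunk_size=None)
        if chunk
    )
```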
tests/e2e/features/steps/info.py (2)

32-80: LGTM - Excellent improvement to model validation. The refactor to use context-driven model validation with enhanced error messages significantly improves test debugging capabilities. The detailed assertions that show both expected and actual values align perfectly with the PR objectives.
Ensure that context.default_model and context.default_provider are properly initialized in the before_all hook to avoid AttributeError during test execution.
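A sketch of what that initialization could look like in a behave environment.py, assuming the values come from environment variables; RHEL_AI_PROVIDER and the fallback defaults are assumptions for illustration, while RHEL_AI_MODEL matches the ${env.RHEL_AI_MODEL} reference in the config.

```python
import os


def before_all(context) -> None:
    """Seed default model/provider on the behave context from the environment."""
    # RHEL_AI_PROVIDER and the "vllm" fallback are assumed names, not confirmed values.
    context.default_model = os.environ.get("RHEL_AI_MODEL", "")
    context.default_provider = os.environ.get("RHEL_AI_PROVIDER", "vllm")
    assert context.default_model, (
        "RHEL_AI_MODEL must be set so model-validation steps can compare "
        "against the expected default model"
    )
```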
82-114: LGTM - Consistent validation improvements. The shield validation updates mirror the model validation enhancements, maintaining consistency across the test suite and providing better debugging information.
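For illustration, an assertion in this style reports both the expected and the actual values on failure; the response shape, the "identifier" field name, and the use of the model name as the shield identifier are assumptions here, not the suite's actual helpers.

```python
def check_expected_shield_listed(context, shields: list[dict]) -> None:
    """Assert the expected shield appears in the service's shield list."""
    identifiers = [shield.get("identifier") for shield in shields]
    # Report expected vs. actual so a failing run explains itself.
    assert context.default_model in identifiers, (
        f"Expected shield '{context.default_model}' not found; "
        f"available shields: {identifiers}"
    )
```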
tests/e2e/configs/run-rhelai.yaml (2)
130-133: LGTM - Environment-driven shield configuration. Using ${env.RHEL_AI_MODEL} makes the shield configuration flexible and aligns with the workflow environment variable setup.
134-144: LGTM - Simplified model configuration. The shift to environment-driven model configuration with a single vllm LLM and explicit embedding model improves maintainability and aligns with the RHEL AI deployment model. The configuration is clear and well-documented.
tisnik
left a comment
LGTM
Description
fix RHELAI tests and add assert messages to make test debugging easier
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores