LCORE-166: add e2e tests for the endpoints #425
Conversation
Walkthrough

Introduces Behave-based E2E testing: adds behave.ini, multiple new feature specs, new/updated step definitions, removes an obsolete feature, and updates the test list. Step modules are mostly scaffolds/placeholders. Some prior REST tests are split into dedicated features (health, info), and query steps are refactored to a modular DSL.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor QA as Behave Scenario
    participant ST as Step: llm_query_response
    participant HTTP as HTTP Client
    participant SVC as Service (/v1)
    QA->>ST: Given I change the system prompt to "..."
    ST-->>QA: context.system_prompt set
    QA->>ST: And I modify the request body by removing "system_query"
    ST-->>QA: context.request adjusted
    QA->>ST: When I use "query" to ask question "Say hello"
    ST->>HTTP: POST /v1/query (body from context)
    HTTP->>SVC: Request
    SVC-->>HTTP: 200/422/4xx
    HTTP-->>ST: Response
    ST-->>QA: context.response stored
    QA->>ST: Then I wait for the response to be completed
    ST-->>QA: Placeholder wait complete
```
```mermaid
sequenceDiagram
    autonumber
    actor QA as Behave Scenario
    participant H as Steps: health/info/common_http
    participant SVC as Service
    QA->>H: Given The llama-stack connection is disrupted
    H-->>QA: Disruption placeholder
    QA->>H: When I access readiness
    H->>SVC: GET /v1/readiness
    SVC-->>H: 503
    H-->>QA: Assert status 503
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/e2e/features/info.feature (2)
1-47: Enable this feature file (use tags instead of comments) and normalize scenario names/steps

Right now all scenarios are commented out, so Behave won’t discover or run anything. If step implementations are landing later, prefer tagging scenarios as @wip (and excluding them via behave.ini) instead of commenting them out. Also fix the duplicate scenario names, normalize the “openapi.json” step to use the REST API prefix, and remove the accidental double-space in the disruption step.
Apply this diff to make the tests visible but safely skipped by default:
```diff
-# Feature: Info endpoint API tests
-#TODO: fix test
-
-# Background:
-# Given The service is started locally
-# And REST API service hostname is localhost
-# And REST API service port is 8080
-# And REST API service prefix is /v1
-
-# Scenario: Check if the OpenAPI endpoint works as expected
-# Given The system is in default state
-# When I access endpoint "openapi.json" using HTTP GET method
-# Then The status code of the response is 200
-# And The body of the response contains OpenAPI
-
-# Scenario: Check if info endpoint is working
-# Given The system is in default state
-# When I access REST API endpoint "info" using HTTP GET method
-# Then The status code of the response is 200
-# And The body of the response has proper name "lightspeed_stack" and version "0.2.0"
-
-# Scenario: Check if models endpoint is working
-# Given The system is in default state
-# When I access REST API endpoint "models" using HTTP GET method
-# Then The status code of the response is 200
-# And The body of the response contains gpt
-
-# Scenario: Check if models endpoint is working
-# Given The system is in default state
-# And The llama-stack connection is disrupted
-# When I access REST API endpoint "models" using HTTP GET method
-# Then The status code of the response is 503
-
-# Scenario: Check if metrics endpoint is working
-# Given The system is in default state
-# When I access REST API endpoint "metrics" using HTTP GET method
-# Then The status code of the response is 200
-# And The body of the response has proper metrics
-
-# Scenario: Check if metrics endpoint is working
-# Given The system is in default state
-# And The llama-stack connection is disrupted
-# When I access REST API endpoint "metrics" using HTTP GET method
-# Then The status code of the response is 500
+@e2e @info @wip
+Feature: Info endpoint API tests
+  Verify OpenAPI, info, models, and metrics endpoints under normal and disrupted conditions.
+
+  Background:
+    Given The service is started locally
+    And REST API service hostname is localhost
+    And REST API service port is 8080
+    And REST API service prefix is /v1
+
+  Scenario: OpenAPI endpoint returns spec
+    Given The system is in default state
+    When I access REST API endpoint "openapi.json" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response contains OpenAPI
+
+  Scenario: Info endpoint returns name and version
+    Given The system is in default state
+    When I access REST API endpoint "info" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response has proper name "lightspeed_stack" and version "0.2.0"
+
+  Scenario: Models endpoint returns available models
+    Given The system is in default state
+    When I access REST API endpoint "models" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response contains gpt
+
+  Scenario: Models endpoint returns 503 when llama-stack connection is disrupted
+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    When I access REST API endpoint "models" using HTTP GET method
+    Then The status code of the response is 503
+
+  Scenario: Metrics endpoint returns metrics
+    Given The system is in default state
+    When I access REST API endpoint "metrics" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response has proper metrics
+
+  Scenario: Metrics endpoint returns 500 when llama-stack connection is disrupted
+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    When I access REST API endpoint "metrics" using HTTP GET method
+    Then The status code of the response is 500
```

I can also update behave.ini to exclude @wip by default and add a fast path to run only @ready scenarios. Want me to send that patch?
1-47: Add missing step definitions, exclude @wip, and register info.feature

Before un-commenting the scenarios in tests/e2e/features/info.feature:

Implement step definitions for these phrases in tests/e2e/features/steps/:

- REST API service hostname is {hostname}
- REST API service port is {port}
- REST API service prefix is {prefix}
- I access endpoint {endpoint} using HTTP GET method
- The body of the response contains OpenAPI
- The body of the response has proper name {name} and version {version}

Update your behave.ini to exclude @wip by default. For example:

```ini
[behave]
paths = tests/e2e/features
tags = not @wip
```

If your CI runner uses tests/e2e/test_list.txt, add: tests/e2e/features/info.feature
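As a starting point, the connection-detail phrases listed above could be backed by logic like the sketch below. It assumes the suite stores hostname, port, and prefix on Behave's context object; set_connection, base_url, and the attribute names are illustrative, not taken from the PR:

```python
from types import SimpleNamespace


def set_connection(context, hostname: str, port: int, prefix: str) -> None:
    """Store REST API connection details on the behave context."""
    context.hostname = hostname
    context.port = port
    # Normalize so later URL building never doubles or drops slashes.
    context.api_prefix = "/" + prefix.strip("/")


def base_url(context) -> str:
    """Build the service base URL from the stored connection details."""
    return f"http://{context.hostname}:{context.port}{context.api_prefix}"


# Stand-in for behave's Context in this sketch:
ctx = SimpleNamespace()
set_connection(ctx, "localhost", 8080, "/v1")
print(base_url(ctx))  # http://localhost:8080/v1
```

In the real suite, set_connection would be called from @given/@step-decorated functions, and GET/POST helpers would build URLs only through base_url.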
🧹 Nitpick comments (33)
behave.ini (1)
1-2: behave.ini is correctly wired; consider excluding WIP features by default

Good baseline. To prevent partially implemented features from running in CI, exclude @wip by default.
Apply this diff:
```diff
 [behave]
-paths = tests/e2e/features
+paths = tests/e2e/features
+# Exclude work-in-progress scenarios/features by default.
+tags = ~@wip
```

tests/e2e/features/steps/conversation.py (3)
1-1: Clarify module docstring

These are conversation-specific steps; update the docstring for clarity.
-"""Implementation of common test steps.""" +"""Conversation API step definitions."""
7-12: Avoid false positives: make the step assert minimally useful or mark as pending

Right now the step only checks that a response exists; it can pass even if the conversation payload is wrong.
Proposed minimal assertions:
@then("The proper conversation is returned") def check_returned_conversation(context: Context) -> None: """Check the conversation in response.""" - # TODO: add step implementation - assert context.response is not None + assert context.response is not None, "Request needs to be performed first" + assert context.response.status_code == 200, f"Unexpected status {context.response.status_code}" + body = context.response.json() + assert isinstance(body, dict), f"Expected JSON object, got: {type(body).__name__}" + # TODO(radofuchs): Replace with exact contract checks (e.g., 'id', 'title', etc.) + assert body, "Empty conversation payload"If the exact schema is known (conversation id, title, timestamps), I can wire it in this step. Do you want me to enforce a strict schema here?
14-18: Make “deleted conversation is not found” actually verify 404

Same issue: only checks non-None.
@then("the deleted conversation is not found") def check_deleted_conversation(context: Context) -> None: """Check whether the deleted conversation is gone.""" - # TODO: add step implementation - assert context.response is not None + assert context.response is not None, "Request needs to be performed first" + assert context.response.status_code == 404, f"Expected 404, got {context.response.status_code}"tests/e2e/features/steps/common_http.py (6)
24-31: Normalize endpoint construction to avoid double slashes and ensure consistency

Some GET helpers build URLs with string concatenation that can introduce double slashes or rely on callers to include/exclude a leading slash. You already have normalize_endpoint imported—use it consistently.
Apply these diffs:
```diff
@@
-    context.response = requests.get(
-        f"http://{hostname}:{port}/{endpoint}",
+    endpoint = normalize_endpoint(endpoint)
+    context.response = requests.get(
+        f"http://{hostname}:{port}{endpoint}",
         data=body,
         timeout=DEFAULT_TIMEOUT,
     )
@@
-    context.response = requests.get(
-        f"http://{hostname}:{port}/{endpoint}",
+    endpoint = normalize_endpoint(endpoint)
+    context.response = requests.get(
+        f"http://{hostname}:{port}{endpoint}",
         json=json.loads(context.text),
         timeout=DEFAULT_TIMEOUT,
     )
@@
-    context.response = requests.get(
-        f"http://{hostname}:{port}/{endpoint}",
+    endpoint = normalize_endpoint(endpoint)
+    context.response = requests.get(
+        f"http://{hostname}:{port}{endpoint}",
         params=params,
         timeout=DEFAULT_TIMEOUT,
     )
@@
-    context.response = requests.get(
-        f"http://{hostname}:{port}/{endpoint}/{path}",
+    endpoint = normalize_endpoint(endpoint)
+    path = normalize_endpoint(path)
+    context.response = requests.get(
+        f"http://{hostname}:{port}{endpoint}{path}",
         timeout=DEFAULT_TIMEOUT,
     )
@@
-    context.response = requests.get(
-        f"http://{hostname}:{port}/{endpoint}", timeout=DEFAULT_TIMEOUT
-    )
+    endpoint = normalize_endpoint(endpoint)
+    context.response = requests.get(
+        f"http://{hostname}:{port}{endpoint}", timeout=DEFAULT_TIMEOUT
+    )
```
114-121: Improve failure message for status code check

Make the assertion message explicit.
```diff
-    assert (
-        context.response.status_code == status
-    ), f"Status code is {context.response.status_code}"
+    actual = context.response.status_code
+    assert actual == status, f"Expected status {status}, got {actual}"
```
186-195: Avoid shadowing the imported json module; use clearer variable names

Using json as a local variable name obscures the imported module and harms readability.
```diff
-    expected_body = json.loads(context.text).copy()
-    result = context.response.json().copy()
+    expected_body = json.loads(context.text).copy()
+    body = context.response.json().copy()
@@
-        result.pop(field, None)
+        body.pop(field, None)
@@
-    assert result == expected_body, f"got:\n{result}\nwant:\n{expected_body}"
+    assert body == expected_body, f"got:\n{body}\nwant:\n{expected_body}"
```
282-304: Avoid shadowing the imported json module in attribute checks

Same issue here. Prefer body to json.
```diff
-    json = context.response.json()
-    assert json is not None
+    body = context.response.json()
+    assert body is not None
@@
-    assert attribute in json, f"Attribute {attribute} is not returned by the service"
+    assert attribute in body, f"Attribute {attribute} is not returned by the service"
```

```diff
-    json = context.response.json()
-    assert json is not None
+    body = context.response.json()
+    assert body is not None
@@
-    assert attribute in json, f"Attribute {attribute} is not returned by the service"
-    value = json[attribute]
+    assert attribute in body, f"Attribute {attribute} is not returned by the service"
+    value = body[attribute]
```
197-213: Relax overly strict step patterns for hostnames/endpoints

Using {hostname:w} and {endpoint:w} disallows valid values like 127.0.0.1 or paths with slashes. Prefer untyped tokens and rely on normalize_endpoint.
-@step("REST API service hostname is {hostname:w}") +@step("REST API service hostname is {hostname}") @@ -@step("REST API service port is {port:d}") +@step("REST API service port is {port:d}") @@ -@step("REST API service prefix is {prefix}") +@step("REST API service prefix is {prefix}")And for POST endpoint:
-@when("I access endpoint {endpoint:w} using HTTP POST method") +@when("I access endpoint {endpoint} using HTTP POST method")
215-242: Connection failure vs expected 5xx behavior in disruption scenarios

In disruption scenarios (service stopped/unavailable), requests.get/post will raise exceptions rather than return 5xx, causing steps to error before assertions. Consider:
- Keeping current behavior and adding separate steps that assert connection failure, or
- Wrapping calls to capture exceptions into a faux response object with status_code 503 for tests that expect 503.
Pick one strategy to keep tests deterministic across environments.
I can add a small helper ResponseOrError wrapper and update these steps if you confirm the desired semantics.
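A minimal sketch of the second option, assuming steps only ever inspect .status_code and .json() on the stored response. FauxResponse and safe_get are illustrative names, not code from the PR:

```python
from typing import Optional

import requests


class FauxResponse:
    """Stand-in response used when the connection itself fails."""

    def __init__(self, status_code: int = 503, body: Optional[dict] = None) -> None:
        self.status_code = status_code
        self._body = body or {}

    def json(self) -> dict:
        return self._body


def safe_get(url: str, timeout: float = 10.0):
    """GET that degrades connection failures into a faux 503 response.

    A real requests.Response is returned on success; on connection errors
    the caller sees a FauxResponse with status_code 503 instead of an
    exception, so assertion steps can run deterministically.
    """
    try:
        return requests.get(url, timeout=timeout)
    except requests.exceptions.ConnectionError as exc:
        return FauxResponse(body={"detail": f"connection failed: {exc}"})
```

The trade-off: steps that genuinely expect a connection refusal can no longer distinguish it from a real 503, which is why a dedicated "connection fails" step (option one) may be the cleaner semantics.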
tests/e2e/features/steps/health.py (3)
1-1: Clarify module docstring

Make it specific to health checks.
-"""Implementation of common test steps.""" +"""Health endpoint step definitions."""
7-12: Mark step as pending to avoid false positives

Asserting context is not None will always pass. Mark as pending or implement the disruption to produce a predictable 5xx.
@given("The llama-stack connection is disrupted") def llama_stack_connection_broken(context: Context) -> None: """Break llama_stack connection.""" - # TODO: add step implementation - assert context is not None + # TODO: Implement llama-stack disruption (fault injection / provider disable). + raise NotImplementedError("TODO: Implement step: The llama-stack connection is disrupted")
14-18: Mark step as pending to avoid false positives

Same here—better to fail explicitly until implemented.
@given("the service is stopped") def stop_service(context: Context) -> None: """Stop service.""" - # TODO: add step implementation - assert context is not None + # TODO: Implement service stop (or dependency disruption) in test env. + raise NotImplementedError("TODO: Implement step: the service is stopped")tests/e2e/features/health.feature (1)
1-53: Grammar nit: “report proper ... state” → “reports proper ... state”

If you keep prose, fix minor grammar in scenario titles (shown in the diff above).
tests/e2e/features/steps/feedback.py (4)
5-6: Missing import for JSON payload handling

Docstring states JSON payload comes from context.text, but json is not imported. You’ll need it for parsing when you wire payload handling.
```diff
-from behave.runner import Context
-import requests
+from behave.runner import Context
+import json
+import requests
```
7-8: Consider de-duplicating DEFAULT_TIMEOUT across step modules

DEFAULT_TIMEOUT appears in multiple modules. Centralizing it (e.g., a shared steps/utils module) avoids drift.
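For instance, a tiny shared module that every step file imports from (the path and the numeric values below are assumptions for illustration, not from the PR):

```python
# tests/e2e/features/steps/constants.py -- assumed path, illustrative values.
# Single source of truth for HTTP timeouts used by the step modules.
DEFAULT_TIMEOUT = 10       # seconds: plain REST endpoints
DEFAULT_LLM_TIMEOUT = 90   # seconds: LLM-backed endpoints respond slower
```

Each step module then does `from .constants import DEFAULT_TIMEOUT`, so a future tuning change happens in one place.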
11-16: Avoid function name collision with common_http version (readability)

There’s already an access_rest_api_endpoint_post in steps/common_http.py. Different step patterns are okay, but duplicating function names harms discoverability in stack traces.
Consider renaming to access_feedback_endpoint_post or similar.
34-39: Unimplemented “disable feedback” step

This is a stub. If the API supports toggling feedback (e.g., header, config, or dedicated endpoint), wire it now or tag the scenario with @wip until implemented. I can help implement once you confirm the intended mechanism.
Would you like me to implement this via a control endpoint (e.g., POST /v1/feedback/disable) or by injecting a header (e.g., X-Feedback-Enabled: false) into context.headers?
tests/e2e/features/steps/info.py (2)
10-16: Implement actual checks for name/system_prompt and version

Currently this only stores values on context. Minimal assertions improve the value of the step without overfitting.
```diff
 def check_name_version(context: Context, system_prompt: str, version: str) -> None:
     """Check proper name and version number."""
-    context.system_prompt = system_prompt
-    context.version = version
-    # TODO: add step implementation
-    assert context is not None
+    assert context.response is not None, "No HTTP response available"
+    body = context.response.json()
+    # Some services use "name", others expose "system_prompt".
+    if "name" in body:
+        assert body["name"] == system_prompt, f'Expected name "{system_prompt}", got "{body["name"]}"'
+    elif "system_prompt" in body:
+        assert body["system_prompt"] == system_prompt, f'Expected system_prompt "{system_prompt}", got "{body["system_prompt"]}"'
+    else:
+        raise AssertionError('Neither "name" nor "system_prompt" present in response body')
+    assert "version" in body, 'Missing "version" in response body'
+    assert str(body["version"]) == version, f'Expected version "{version}", got "{body["version"]}"'
```
18-22: Add minimal metrics validation

Even as a scaffold, assert the structure to catch regressions.
```diff
 def check_metrics(context: Context) -> None:
     """Check proper metrics."""
-    # TODO: add step implementation
-    assert context is not None
+    assert context.response is not None, "No HTTP response available"
+    body = context.response.json()
+    assert "metrics" in body, 'Missing "metrics" in response body'
+    assert isinstance(body["metrics"], dict), '"metrics" should be an object/dict'
```

tests/e2e/features/steps/auth.py (3)
14-18: Implement header removal to make the step useful

If downstream POST steps honor context.headers, removing Authorization here will exercise the unauthenticated path.
@given("I remove the auth header") def remove_auth_header(context: Context) -> None: """Remove the auth header.""" - # TODO: add step implementation - assert context is not None + headers = dict(getattr(context, "headers", {})) + headers = {k: v for k, v in headers.items() if k.lower() != "authorization"} + context.headers = headers
21-26: Implement unauthorized header mutation

Setting an invalid/forbidden token makes the scenario deterministic.
@given("I modify the auth header so that the user is not authorized") def modify_auth_header(context: Context) -> None: """Modify the auth header making the user unauthorized.""" - # TODO: add step implementation - assert context is not None + headers = dict(getattr(context, "headers", {})) + headers["Authorization"] = "Bearer invalid_or_unauthorized_token" + context.headers = headers
7-12: Add a minimal assertion for the username check

Asserting presence and non-empty value provides immediate value without coupling to a specific user.
@then("The body of the response has proper username") def check_body_username(context: Context) -> None: """Check that the username is correct in response.""" - # TODO: add step implementation - assert context is not None + assert context.response is not None + body = context.response.json() + assert "username" in body and str(body["username"]).strip(), 'Missing or empty "username" in response'tests/e2e/features/authorized.feature (1)
1-35: Entire feature is commented out; prefer @wip tag over commenting

Keeping the file fully commented means Behave won’t pick anything up and reviewers cannot see step binding errors. Use @wip to exclude from CI while retaining validation in local runs and IDEs.
Apply this diff to enable and tag as wip, and fix the grammar in the step:
```diff
-# Feature: Authorized endpoint API tests
-# TODO: fix test
+@wip
+Feature: Authorized endpoint API tests
+  # TODO: finalize step implementations

-# Background:
-# Given The service is started locally
-# And REST API service hostname is localhost
-# And REST API service port is 8080
-# And REST API service prefix is /v1
+  Background:
+    Given The service is started locally
+    And REST API service hostname is localhost
+    And REST API service port is 8080
+    And REST API service prefix is /v1

-# Scenario: Check if the OpenAPI endpoint works as expected
-# Given The system is in default state
-# When I access endpoint "authorized" using HTTP POST method
-# Then The status code of the response is 200
-# And The body of the response has proper username
+  Scenario: Check if the OpenAPI endpoint works as expected
+    Given The system is in default state
+    When I access endpoint "authorized" using HTTP POST method
+    Then The status code of the response is 200
+    And The body of the response has proper username

-# Scenario: Check if LLM responds to sent question with error when not authenticated
-# Given The system is in default state
-# And I remove the auth header
-# When I access endpoint "authorized" using HTTP POST method
-# Then The status code of the response is 400
-# And The body of the response is the following
-# """
-# {"detail": "Unauthorized: No auth header found"}
-# """
+  Scenario: Returns error when not authenticated
+    Given The system is in default state
+    And I remove the auth header
+    When I access endpoint "authorized" using HTTP POST method
+    Then The status code of the response is 400
+    And The body of the response is the following
+      """
+      {"detail": "Unauthorized: No auth header found"}
+      """

-# Scenario: Check if LLM responds to sent question with error when not authorized
-# Given The system is in default state
-# And I modify the auth header so that the user is it authorized
-# When I access endpoint "authorized" using HTTP POST method
-# Then The status code of the response is 403
-# And The body of the response is the following
-# """
-# {"detail": "Forbidden: User is not authorized to access this resource"}
-# """
+  Scenario: Returns error when not authorized
+    Given The system is in default state
+    And I modify the auth header so that the user is not authorized
+    When I access endpoint "authorized" using HTTP POST method
+    Then The status code of the response is 403
+    And The body of the response is the following
+      """
+      {"detail": "Forbidden: User is not authorized to access this resource"}
+      """
```

Follow-up: ensure step definitions wire headers into HTTP requests so these scenarios work as intended.
tests/e2e/features/steps/llm_query_response.py (3)
18-24: Make “remove field” step actually influence the request body

Currently it only stores a field name that is never used. Track a set of removed fields for later application in ask_question.
```diff
 @step('I modify the request body by removing the "{field}"')
 def remove_field_prompt(context: Context, field: str) -> None:
     """Remove the field from the prompt."""
-    context.field = field
-    # TODO: add step implementation
-    assert context is not None
+    removed = getattr(context, "removed_fields", set())
+    removed.add(field)
+    context.removed_fields = removed
```
32-39: Honor system_prompt, removed fields, and optional headers in the POST

This aligns the DSL steps with the outbound request and enables auth scenarios to work.
```diff
 @step('I use "{endpoint}" to ask question "{question}"')
 def ask_question(context: Context, endpoint: str, question: str) -> None:
     """Call the service REST API endpoint with question."""
     base = f"http://{context.hostname}:{context.port}"
     path = f"{context.api_prefix}/{endpoint}".replace("//", "/")
     url = base + path
-    data = {"query": question}
-    context.response = requests.post(url, json=data, timeout=DEFAULT_LLM_TIMEOUT)
+    data = {"query": question}
+    # Include system_prompt if set
+    if hasattr(context, "system_prompt"):
+        data["system_prompt"] = context.system_prompt
+    # Apply removed fields if configured
+    for field in getattr(context, "removed_fields", set()):
+        data.pop(field, None)
+    # Pass headers if present (for auth scenarios)
+    headers = getattr(context, "headers", None)
+    context.response = requests.post(
+        url, json=data, headers=headers, timeout=DEFAULT_LLM_TIMEOUT
+    )
```
42-57: LLM response format checks look good; consider softening truncated check if field is optional

These assertions are straightforward. If the API ever omits "truncated", consider using response_json.get("truncated", False) to avoid KeyError. Not required now.
tests/e2e/features/conversations.feature (1)
17-21: Grammar: “does not finds/exists” → “does not find/exist” (future-proofing for when uncommented)

Fix minor grammar in scenario titles for clarity.
```diff
-# Scenario: Check if conversations endpoint does not finds the conversation when it does not exists
+# Scenario: Check if conversations endpoint does not find the conversation when it does not exist

-# Scenario: Check if conversations/delete endpoint does not finds the conversation when it does not exists
+# Scenario: Check if conversations/delete endpoint does not find the conversation when it does not exist
```

Also applies to: 39-43
tests/e2e/features/query.feature (1)
51-55: Typo in step text: “user is it authorized”

Tighten phrasing.
```diff
-# And I modify the auth header so that the user is it authorized
+# And I modify the auth header so that the user is not authorized
```

tests/e2e/features/feedback.feature (1)
11-24: Step naming consistency for POST variants

Two patterns are used for POSTing to feedback:
- “I access endpoint "feedback" using HTTP POST with conversation ID …”
- “I access endpoint "feedback" using HTTP POST method” (without conversation id)
Ensure step names/args match the underlying step defs to reduce duplication and surprises when you uncomment. Consider one canonical step with optional conversation-id argument.
Also applies to: 64-90
tests/e2e/features/info.feature (3)
16-21: Avoid hardcoding the service version in a long-lived E2E

Hardcoding "0.2.0" will cause routine failures on version bumps. Prefer a semantic check or a dynamic source (env var or API contract).
Two viable options:
- Option A: semantic version check
```diff
-    And The body of the response has proper name "lightspeed_stack" and version "0.2.0"
+    And The body of the response has proper name "lightspeed_stack" and a semantic version
```
- Option B: read from environment
```diff
-    And The body of the response has proper name "lightspeed_stack" and version "0.2.0"
+    And The body of the response has proper name "lightspeed_stack" and version from env var "LS_STACK_VERSION"
```

Please confirm which step variant is already implemented (if any) in tests/e2e/features/steps/info.py; otherwise I can add the step definition.
22-27: Make the models assertion resilient to environment variability

“contains gpt” assumes a specific provider/model is configured. In CI or different deployments this may not hold. Recommend asserting on structure or non-emptiness.
Example tweak:
```diff
-    And The body of the response contains gpt
+    And The body of the response lists at least 1 model
```
41-43: Fix the extra space to avoid step mismatch

There’s a double space after “And” which can cause the step not to match.
```diff
-    And  The llama-stack connection is disrupted
+    And The llama-stack connection is disrupted
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (18)
- behave.ini (1 hunks)
- tests/e2e/features/authorized.feature (1 hunks)
- tests/e2e/features/conversations.feature (1 hunks)
- tests/e2e/features/feedback.feature (1 hunks)
- tests/e2e/features/health.feature (1 hunks)
- tests/e2e/features/info.feature (1 hunks)
- tests/e2e/features/llm_interface.feature (0 hunks)
- tests/e2e/features/query.feature (1 hunks)
- tests/e2e/features/rest_api.feature (0 hunks)
- tests/e2e/features/steps/auth.py (1 hunks)
- tests/e2e/features/steps/common_http.py (2 hunks)
- tests/e2e/features/steps/conversation.py (1 hunks)
- tests/e2e/features/steps/feedback.py (1 hunks)
- tests/e2e/features/steps/health.py (1 hunks)
- tests/e2e/features/steps/info.py (1 hunks)
- tests/e2e/features/steps/llm_query_response.py (1 hunks)
- tests/e2e/features/streaming_query.feature (1 hunks)
- tests/e2e/test_list.txt (1 hunks)
💤 Files with no reviewable changes (2)
- tests/e2e/features/llm_interface.feature
- tests/e2e/features/rest_api.feature
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/e2e/features/steps/feedback.py (1)
tests/e2e/features/steps/common_http.py (1)
access_rest_api_endpoint_post(245-262)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (8)
tests/e2e/features/steps/common_http.py (3)
6-6: Decorator unification looks good

Switching to import step alongside then/when is consistent with the step usage below.
175-185: Docstring improvements look good

The extended docstring clarifies the behavior and parameter roles.
32-52: Confirm GET-with-body usage is intentional

GET requests with bodies are not widely supported and may be dropped by proxies. If the SUT expects payload, consider POST for those endpoints or ensure the infrastructure preserves bodies on GET.
Would you like me to add parallel POST-based steps (keeping GET for backward compatibility) and adjust the feature files accordingly?
tests/e2e/features/health.feature (1)
1-53: Prerequisite steps and helpers verified

The step definitions for “The service is started locally” and “The system is in default state” are implemented in tests/e2e/features/steps/common.py, and the helper functions normalize_endpoint and validate_json are defined in tests/e2e/utils/utils.py. No further action required.
tests/e2e/features/steps/auth.py (1)
3-5: Ensure request-sending steps pass headers

For the above header steps to have effect, callers like common_http and llm_query_response must forward context.headers to requests.post. See my comment in llm_query_response.py with a concrete patch; common_http.py should be updated similarly.
Would you like me to open a follow-up PR to wire headers into all HTTP helpers?
tests/e2e/features/steps/llm_query_response.py (3)
10-16: System prompt step is fine as a scaffold

Setting context.system_prompt is lightweight and aligns with the proposed usage above.
59-81: Fragment assertions are clear and actionable

Good use of the scenario table and precise failure messages.
3-8: Unify timeouts/constants and headers handling across HTTP helpers

DEFAULT_LLM_TIMEOUT is fine here, but consider centralizing timeouts. Also, ensure other helpers (e.g., tests/e2e/features/steps/common_http.py) pass context.headers as well for consistency with auth steps.
To update common_http.py similarly, apply a patch like:
# in access_rest_api_endpoint_post(...)
headers = getattr(context, "headers", None)
context.response = requests.post(url, json=data, headers=headers, timeout=DEFAULT_TIMEOUT)
# Feature: conversations endpoint API tests
#TODO: fix test
Uncomment and tag the Feature header to make the file parsable (keep scenarios commented for now)
Behave requires a real Feature: header to parse the file. With it commented, the file is invalid. Suggest tagging as @wip and keeping the rest commented until steps land.
Apply:
-# Feature: conversations endpoint API tests
-#TODO: fix test
+@wip
+Feature: Conversations endpoint API tests
+# TODO: fix test
🤖 Prompt for AI Agents
In tests/e2e/features/conversations.feature around lines 1 to 3, the Feature
header is commented out which makes the file unparsable by Behave; restore a
proper Feature header by uncommenting and prefixing it with a tag (e.g. add
"@wip" on the line above) while leaving the individual scenarios commented for
now so the file is valid but inactive until steps are implemented.
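A minimal parseable shape, assuming Behave's default Gherkin parser, could look like the following while the scenarios stay commented (the scenario name and steps shown are illustrative, not from the PR):

```gherkin
@wip
Feature: Conversations endpoint API tests

  # TODO: re-enable once the step definitions land
  # Scenario: Retrieve an existing conversation
  #   Given The system is in default state
  #   When I access endpoint "conversations" using HTTP GET
  #   Then The status code of the response is 200
```

A feature file with zero uncommented scenarios parses cleanly, and the @wip tag lets the runner exclude it until the steps exist.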
# Feature: feedback endpoint API tests
Uncomment and tag the Feature header to avoid parser failures
-# Feature: feedback endpoint API tests
+@wip
+Feature: feedback endpoint API tests
🤖 Prompt for AI Agents
In tests/e2e/features/feedback.feature around lines 1 to 3 the Feature header is
commented out which causes the Cucumber parser to fail; uncomment the line and
add a tag (e.g. @feedback) directly above the Feature declaration so the file
begins with a tag line followed by an uncommented "Feature: feedback endpoint
API tests" header.
# Scenario: Check if feedback endpoint is not working when not authorized
# Given The system is in default state
# And I remove the auth header
# When I access endpoint "feedback" using HTTP POST with conversation ID conversationID
# """
# {
#   "llm_response": "bar",
#   "sentiment": -1,
#   "user_feedback": "Not satisfied with the response quality",
#   "user_question": "random question"
# }
# """
# Then The status code of the response is 400
# And The body of the response is the following
# """
# {"response": "feedback received"}
# """
🛠️ Refactor suggestion
Unauthorized scenario: status/body mismatch
For an unauthorized request, asserting 400 with body {"response":"feedback received"} doesn’t match the intent. Align with how other features assert unauthorized (e.g., 400 with a clear error, or 401).
If you align to the query.feature unauthorized example:
-# Then The status code of the response is 400
-# And The body of the response is the following
-# """
-# {"response": "feedback received"}
-# """
+# Then The status code of the response is 400
+# And The body of the response is the following
+# """
+# {"detail": "Unauthorized: No auth header found"}
+# """If the API uses 401 for missing auth:
-# Then The status code of the response is 400
+# Then The status code of the response is 401
🤖 Prompt for AI Agents
In tests/e2e/features/feedback.feature around lines 28-45 update the
unauthorized scenario to match other features (like query.feature): change the
expected HTTP status and response body to the same unauthorized contract used
elsewhere — e.g. assert status 401 (or 400 if your API consistently uses 400 for
missing auth) and set the expected response body to the standard error JSON used
by other tests (replace {"response":"feedback received"} with the common
unauthorized error payload). Ensure the scenario matches the exact status code
and JSON shape used across the test suite.
# Scenario: Check if feedback endpoint is not working when feedback is disabled
# Given The system is in default state
# And I disable the feedback
# When I access endpoint "feedback" using HTTP POST with conversation ID conversationID
# """
# {
#   "llm_response": "bar",
#   "sentiment": -1,
#   "user_feedback": "Not satisfied with the response quality",
#   "user_question": "random question"
# }
# """
# Then The status code of the response is 403
# And The body of the response is the following
# """
# {"response": "feedback received"}
# """
🛠️ Refactor suggestion
“Feedback disabled” scenario: success body used with 403
A 403 should have an error payload, not {"response": "feedback received"}.
-# Then The status code of the response is 403
-# And The body of the response is the following
-# """
-# {"response": "feedback received"}
-# """
+# Then The status code of the response is 403
+# And The body of the response is the following
+# """
+# {"detail": "Forbidden: Feedback is disabled"}
+# """📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Scenario: Check if feedback endpoint is not working when feedback is disabled | |
| # Given The system is in default state | |
| # And I disable the feedback | |
| # When I access endpoint "feedback" using HTTP POST with conversation ID conversationID | |
| # """ | |
| # { | |
| # "llm_response": "bar", | |
| # "sentiment": -1, | |
| # "user_feedback": "Not satisfied with the response quality", | |
| # "user_question": "random question" | |
| # } | |
| # """ | |
| # Then The status code of the response is 403 | |
| # And The body of the response is the following | |
| # """ | |
| # {"response": "feedback received"} | |
| # """ | |
| # Scenario: Check if feedback endpoint is not working when feedback is disabled | |
| # Given The system is in default state | |
| # And I disable the feedback | |
| # When I access endpoint "feedback" using HTTP POST with conversation ID conversationID | |
| # """ | |
| # { | |
| # "llm_response": "bar", | |
| # "sentiment": -1, | |
| # "user_feedback": "Not satisfied with the response quality", | |
| # "user_question": "random question" | |
| # } | |
| # """ | |
| # Then The status code of the response is 403 | |
| # And The body of the response is the following | |
| # """ | |
| # {"detail": "Forbidden: Feedback is disabled"} | |
| # """ |
🤖 Prompt for AI Agents
tests/e2e/features/feedback.feature around lines 46 to 63: the scenario for
"Feedback disabled" currently asserts a success body {"response": "feedback
received"} while the response status is 403; update the expected response body
to an appropriate error payload (e.g. a JSON error object such as
{"error":"feedback disabled"} or the project’s standard error schema) so the
test asserts the correct error payload for a 403; keep the rest of the steps
as-is.
# Then The status code of the response is 422
# And The body of the response is the following
# """
# { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", }
# """
Invalid JSON due to trailing comma in error body
Remove the trailing comma.
-# { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", }
+# { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required" }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", } | |
| # """ | |
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required" } | |
| # """ |
🤖 Prompt for AI Agents
In tests/e2e/features/feedback.feature around lines 75 to 79 the commented
expected response JSON has a trailing comma after the "Field required" value
which makes the JSON invalid; remove that trailing comma so the JSON in the
triple-quoted response block is valid (e.g. end the object without the extra
comma).
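The underlying issue is easy to confirm: strict JSON (RFC 8259, and Python's json module, which validation helpers like the suite's validate_json presumably use) rejects trailing commas outright. A quick illustration:

```python
import json

# expected error body with and without the trailing comma
bad = '{ "type": "missing", "msg": "Field required", }'
good = '{ "type": "missing", "msg": "Field required" }'

def parses(payload):
    """Return True if the payload is valid strict JSON."""
    try:
        json.loads(payload)
        return True
    except json.JSONDecodeError:
        return False
```

Any step that compares the response body by parsing both sides would therefore fail on the commented expectation before reaching the actual assertion.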
| """Send POST HTTP request with JSON payload to tested service. | ||
| The JSON payload is retrieved from `context.text` attribute, | ||
| which must not be None. The response is stored in | ||
| `context.response` attribute. | ||
| """ |
Docstring and implementation diverge; POST sends no body and ignores conversation_id.
Right now the request doesn’t send any payload and the assert message is misleading. If this step is intended to include conversation_id in the JSON body (and optionally merge with context.text), implement it explicitly.
Right now the request doesn’t send any payload and the assert message is misleading. If this step is intended to include conversation_id in the JSON body (and optionally merge with context.text), implement it explicitly.
"""Send POST HTTP request with JSON payload to tested service.
The JSON payload is retrieved from `context.text` attribute,
which must not be None. The response is stored in
`context.response` attribute.
"""
@@
- assert conversation_id is not None, "Payload needs to be specified"
- # TODO: finish the conversation ID handling
+ assert conversation_id, "Conversation ID must be specified"
+ # Build payload, merging provided JSON with conversation_id
+ data = {}
+ if getattr(context, "text", None):
+ try:
+ data = json.loads(context.text)
+ except ValueError as e:
+ raise AssertionError(f"Invalid JSON payload in context.text: {e}")
+ # Ensure conversation_id is present in the payload
+ data.setdefault("conversation_id", conversation_id)
# perform REST API call
- context.response = requests.post(url, timeout=DEFAULT_TIMEOUT)
+ context.response = requests.post(url, json=data, timeout=DEFAULT_TIMEOUT)

Also applies to: 27-31
🤖 Prompt for AI Agents
In tests/e2e/features/steps/feedback.py around lines 17-22 (and similarly lines
27-31), the docstring says a JSON payload from context.text is sent but the
current POST sends no body and ignores conversation_id; change the step to build
a JSON payload that includes conversation_id (if present) and, if context.text
is not None, parse it as JSON and merge its keys into the payload (or set a
"text" field if parsing fails), then send that payload as the POST body; also
update the assert to check that context.text is not None before attempting to
parse and make the assertion message reflect that the payload was required/was
merged rather than claiming a body was sent when it wasn't.
@then(
    "The body of the response has proper name {system_prompt:w} and version {version:w}"
)
def check_name_version(context: Context, system_prompt: str, version: str) -> None:
🛠️ Refactor suggestion
Version parsing will fail with dots; quote both parameters.
parse’s {…:w} won’t match semantic versions (e.g., 1.2.3) or names with non-word chars. Use quoted parameters to accept broader input.
-@then(
- "The body of the response has proper name {system_prompt:w} and version {version:w}"
-)
+@then(
+ 'The body of the response has proper name "{system_prompt}" and version "{version}"'
+)
🤖 Prompt for AI Agents
In tests/e2e/features/steps/info.py around lines 7 to 10, the step pattern uses
parse’s {…:w} which rejects dots and other non-word characters; replace the
unquoted {system_prompt:w} and {version:w} with quoted parameters (e.g.
"\"{system_prompt}\"" and "\"{version}\"" or simply "{system_prompt}" and
"{version}" depending on your step syntax) so the parser accepts semantic
versions and names with non-word chars, and keep the function signature
unchanged to receive those string values.
# Feature: streaming_query endpoint API tests
#TODO: fix test
Uncomment and tag the Feature header to ensure the file is a valid Gherkin document
-# Feature: streaming_query endpoint API tests
-#TODO: fix test
+@wip
+Feature: streaming_query endpoint API tests
+# TODO: fix test
🤖 Prompt for AI Agents
In tests/e2e/features/streaming_query.feature around lines 1 to 3, the Feature
header is commented out and the file is thus not a valid Gherkin document;
uncomment the Feature line and add an appropriate tag (e.g. @streaming_query) on
the line above the Feature header so the file begins with the tag then "Feature:
streaming_query endpoint API tests" and remove the TODO comment or convert it
into a Gherkin comment below the Feature header.
# Scenario: Check if LLM responds for streaming_query request with error for malformed request
# Given The system is in default state
# And I modify the request body by removing the "query"
# And I use "streaming_query" to ask question "Say hello"
# When I wait for the response to be completed
# Then The status code of the response is 422
# And The body of the response is the following
# """
# { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
# """
🛠️ Refactor suggestion
Invalid JSON (trailing comma) and possible field-name mismatch
As with query.feature, remove the trailing comma and decide whether the missing field is “query” or “system_query” for consistency.
Option A (missing "query"):
-# { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
+# { "type": "missing", "loc": [ "body", "query" ], "msg": "Field required" }Option B (missing "system_query"):
-# And I modify the request body by removing the "query"
+# And I modify the request body by removing the "system_query"
-# { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
+# { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required" }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Scenario: Check if LLM responds for streaming_query request with error for malformed request | |
| # Given The system is in default state | |
| # And I modify the request body by removing the "query" | |
| # And I use "streaming_query" to ask question "Say hello" | |
| # When I wait for the response to be completed | |
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", } | |
| # """ | |
| # Scenario: Check if LLM responds for streaming_query request with error for malformed request | |
| # Given The system is in default state | |
| # And I modify the request body by removing the "query" | |
| # And I use "streaming_query" to ask question "Say hello" | |
| # When I wait for the response to be completed | |
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "query" ], "msg": "Field required" } | |
| # """ |
| # Scenario: Check if LLM responds for streaming_query request with error for malformed request | |
| # Given The system is in default state | |
| # And I modify the request body by removing the "query" | |
| # And I use "streaming_query" to ask question "Say hello" | |
| # When I wait for the response to be completed | |
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", } | |
| # """ | |
| # Scenario: Check if LLM responds for streaming_query request with error for malformed request | |
| # Given The system is in default state | |
| # And I modify the request body by removing the "system_query" | |
| # And I use "streaming_query" to ask question "Say hello" | |
| # When I wait for the response to be completed | |
| # Then The status code of the response is 422 | |
| # And The body of the response is the following | |
| # """ | |
| # { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required" } | |
| # """ |
🤖 Prompt for AI Agents
In tests/e2e/features/streaming_query.feature around lines 30-39, the commented
expected JSON has a trailing comma (invalid JSON) and an inconsistent
missing-field name; remove the trailing comma and update the expected error body
to match the actual application validation field name used elsewhere (choose
either "query" or "system_query" to match query.feature), then update the
scenario steps to reflect that chosen missing field so the test expects a valid
JSON error like { "type": "missing", "loc": ["body","<chosen_field>"], "msg":
"Field required" } and re-enable the scenario.
features/authorized.feature
features/conversations.feature
features/feedback.feature
features/health.feature
features/info.feature
features/query.feature
features/streaming_query.feature
💡 Verification agent
🧩 Analysis chain
Guard against Behave parse errors: referenced features currently have commented-out Feature headers
All newly added feature files in this list have their entire contents commented out (including the “Feature:” line). Behave will likely fail to parse them if executed. Either (a) uncomment at least the Feature line in each file (recommended; see per-file comments), or (b) temporarily exclude them from execution by commenting out these entries until steps are available.
Run this script to verify that each listed feature actually contains an uncommented Feature: header and exists at the expected path:
If you prefer to temporarily skip executing these new features, apply this diff to comment them out in the list:
-features/authorized.feature
-features/conversations.feature
-features/feedback.feature
-features/health.feature
-features/info.feature
-features/query.feature
-features/streaming_query.feature
+# features/authorized.feature
+# features/conversations.feature
+# features/feedback.feature
+# features/health.feature
+# features/info.feature
+# features/query.feature
+# features/streaming_query.feature

🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Checking behave.ini (if present)…"
fd -HI -t f behave.ini | xargs -I{} sh -c 'echo "--- {} ---"; cat "{}"; echo'
echo
echo "Ensuring all entries in tests/e2e/test_list.txt exist at repo-root relative paths (tests/e2e/<entry>)…"
while IFS= read -r relpath; do
[[ -z "$relpath" ]] && continue
[[ "$relpath" =~ ^# ]] && continue
path="tests/e2e/${relpath}"
if [[ -f "$path" ]]; then
echo "OK: $path"
else
echo "MISSING: $path" >&2
fi
done < tests/e2e/test_list.txt
echo
echo "Checking each referenced file has an uncommented 'Feature:' header…"
while IFS= read -r relpath; do
[[ -z "$relpath" ]] && continue
[[ "$relpath" =~ ^# ]] && continue
path="tests/e2e/${relpath}"
[[ -f "$path" ]] || continue
if rg -nP '^\s*Feature\s*:' "$path" >/dev/null; then
echo "OK Feature header: $path"
else
echo "NO Feature header found (or commented): $path" >&2
echo "Hint: ensure the first non-comment, non-empty line starts with 'Feature:'"
fi
done < tests/e2e/test_list.txt

Length of output: 8086
Uncomment missing ‘Feature:’ headers to avoid Behave parse errors
The following newly added feature files currently lack an uncommented Feature: header and will fail to parse:
- tests/e2e/features/authorized.feature
- tests/e2e/features/conversations.feature
- tests/e2e/features/feedback.feature
- tests/e2e/features/health.feature
- tests/e2e/features/info.feature
- tests/e2e/features/query.feature
- tests/e2e/features/streaming_query.feature
Recommended fixes (choose one):
• Uncomment the Feature: line (and any required content) in each file so that Behave can parse them.
– Ensure the first non-empty, non-comment line of each file is:
Feature: <descriptive title>
• Or, temporarily disable these files in your test list by commenting out their entries in tests/e2e/test_list.txt until you’ve implemented their step definitions:
- features/authorized.feature
- features/conversations.feature
- features/feedback.feature
- features/health.feature
- features/info.feature
- features/query.feature
- features/streaming_query.feature
+# features/authorized.feature
+# features/conversations.feature
+# features/feedback.feature
+# features/health.feature
+# features/info.feature
+# features/query.feature
+# features/streaming_query.feature

🤖 Prompt for AI Agents
In tests/e2e/test_list.txt around lines 2 to 8, the referenced feature files
lack an uncommented top-level "Feature:" header which causes Behave parse
errors; either open each listed file (tests/e2e/features/authorized.feature,
conversations.feature, feedback.feature, health.feature, info.feature,
query.feature, streaming_query.feature) and make the first non-empty,
non-comment line read "Feature: <descriptive title>" (and add any required
content/comments below), or temporarily remove/comment out those entries in
tests/e2e/test_list.txt so Behave won't attempt to load them until their step
definitions are implemented.
tisnik
left a comment
LGTM, thank you
Description
New e2e tests added to the feature files; the step implementations will follow in subsequent PRs.