2 changes: 2 additions & 0 deletions behave.ini
@@ -0,0 +1,2 @@
[behave]
paths = tests/e2e/features
34 changes: 34 additions & 0 deletions tests/e2e/features/authorized.feature
@@ -0,0 +1,34 @@
# Feature: Authorized endpoint API tests
# TODO: fix test

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1

# Scenario: Check if the OpenAPI endpoint works as expected
# Given The system is in default state
# When I access endpoint "authorized" using HTTP POST method
# Then The status code of the response is 200
# And The body of the response has proper username

# Scenario: Check if LLM responds to sent question with error when not authenticated
# Given The system is in default state
# And I remove the auth header
# When I access endpoint "authorized" using HTTP POST method
# Then The status code of the response is 400
# And The body of the response is the following
# """
# {"detail": "Unauthorized: No auth header found"}
# """

# Scenario: Check if LLM responds to sent question with error when not authorized
# Given The system is in default state
# And I modify the auth header so that the user is not authorized
# When I access endpoint "authorized" using HTTP POST method
# Then The status code of the response is 403
# And The body of the response is the following
# """
# {"detail": "Forbidden: User is not authorized to access this resource"}
# """
54 changes: 54 additions & 0 deletions tests/e2e/features/conversations.feature
@@ -0,0 +1,54 @@
# Feature: conversations endpoint API tests
#TODO: fix test

Comment on lines +1 to +3
⚠️ Potential issue

Uncomment and tag the Feature header to make the file parsable (keep scenarios commented for now)

Behave requires a real Feature: header to parse the file. With it commented, the file is invalid. Suggest tagging as @wip and keeping the rest commented until steps land.

Apply:

-# Feature: conversations endpoint API tests
-#TODO: fix test
+@wip
+Feature: Conversations endpoint API tests
+# TODO: fix test
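Excluding `@wip`-tagged features at run time relies on behave's tag expressions, where a leading `~` negates a tag; a simplified sketch of that selection logic (for illustration only, not behave's actual parser):

```python
def select_feature(feature_tags: set, tag_expression: str) -> bool:
    """Return True if a feature with the given tags should run."""
    # "~@wip" means: run only features NOT tagged @wip.
    if tag_expression.startswith("~"):
        return tag_expression.lstrip("~@") not in feature_tags
    return tag_expression.lstrip("@") in feature_tags
```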

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1


# Scenario: Check if conversations endpoint finds the correct conversation when it exists
# Given The system is in default state
# When I access REST API endpoint "conversations" using HTTP GET method
# Then The status code of the response is 200
# And the proper conversation is returned

# Scenario: Check if conversations endpoint does not find the conversation when it does not exist
# Given The system is in default state
# When I access REST API endpoint "conversations" using HTTP GET method
# Then The status code of the response is 404

# Scenario: Check if conversations endpoint fails when conversation id is not provided
# Given The system is in default state
# When I access REST API endpoint "conversations" using HTTP GET method
# Then The status code of the response is 422

# Scenario: Check if conversations endpoint fails when service is unavailable
# Given The system is in default state
# And the service is stopped
# When I access REST API endpoint "conversations" using HTTP GET method
# Then The status code of the response is 503

# Scenario: Check if conversations/delete endpoint finds the correct conversation when it exists
# Given The system is in default state
# When I access REST API endpoint "conversations/delete" using HTTP GET method
# Then The status code of the response is 200
# And the deleted conversation is not found

# Scenario: Check if conversations/delete endpoint does not find the conversation when it does not exist
# Given The system is in default state
# When I access REST API endpoint "conversations/delete" using HTTP GET method
# Then The status code of the response is 404

# Scenario: Check if conversations/delete endpoint fails when conversation id is not provided
# Given The system is in default state
# When I access REST API endpoint "conversations/delete" using HTTP GET method
# Then The status code of the response is 422

# Scenario: Check if conversations/delete endpoint fails when service is unavailable
# Given The system is in default state
# And the service is stopped
# When I access REST API endpoint "conversations/delete" using HTTP GET method
# Then The status code of the response is 503

90 changes: 90 additions & 0 deletions tests/e2e/features/feedback.feature
@@ -0,0 +1,90 @@
# Feature: feedback endpoint API tests


Comment on lines +1 to +3

⚠️ Potential issue

Uncomment and tag the Feature header to avoid parser failures

-# Feature: feedback endpoint API tests
+@wip
+Feature: feedback endpoint API tests

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1


# Scenario: Check if feedback endpoint is working
# Given The system is in default state
# When I access endpoint "feedback" using HTTP POST with conversation ID conversationID
# """
# {
# "llm_response": "bar",
# "sentiment": -1,
# "user_feedback": "Not satisfied with the response quality",
# "user_question": "random question"
# }
# """
# Then The status code of the response is 200
# And The body of the response is the following
# """
# {"response": "feedback received"}
# """

# Scenario: Check if feedback endpoint is not working when not authorized
# Given The system is in default state
# And I remove the auth header
# When I access endpoint "feedback" using HTTP POST with conversation ID conversationID
# """
# {
# "llm_response": "bar",
# "sentiment": -1,
# "user_feedback": "Not satisfied with the response quality",
# "user_question": "random question"
# }
# """
# Then The status code of the response is 400
# And The body of the response is the following
# """
# {"response": "feedback received"}
# """

Comment on lines +28 to +45

🛠️ Refactor suggestion

Unauthorized scenario: status/body mismatch

For an unauthorized request, asserting 400 with body {"response":"feedback received"} doesn’t match the intent. Align with how other features assert unauthorized (e.g., 400 with a clear error, or 401).

If you align to the query.feature unauthorized example:

-#      Then The status code of the response is 400
-#      And The body of the response is the following
-#           """
-#           {"response": "feedback received"}
-#           """
+#      Then The status code of the response is 400
+#      And The body of the response is the following
+#           """
+#           {"detail": "Unauthorized: No auth header found"}
+#           """

If the API uses 401 for missing auth:

-#      Then The status code of the response is 400
+#      Then The status code of the response is 401

# Scenario: Check if feedback endpoint is not working when feedback is disabled
# Given The system is in default state
# And I disable the feedback
# When I access endpoint "feedback" using HTTP POST with conversation ID conversationID
# """
# {
# "llm_response": "bar",
# "sentiment": -1,
# "user_feedback": "Not satisfied with the response quality",
# "user_question": "random question"
# }
# """
# Then The status code of the response is 403
# And The body of the response is the following
# """
# {"response": "feedback received"}
# """

Comment on lines +46 to +63
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion

“Feedback disabled” scenario: success body used with 403

A 403 should have an error payload, not {"response": "feedback received"}.

-#      Then The status code of the response is 403
-#      And The body of the response is the following
-#           """
-#           {"response": "feedback received"}
-#           """
+#      Then The status code of the response is 403
+#      And The body of the response is the following
+#           """
+#           {"detail": "Forbidden: Feedback is disabled"}
+#           """

# Scenario: Check if feedback endpoint fails with incorrect body format when conversationID is not present
# Given The system is in default state
# When I access endpoint "feedback" using HTTP POST method
# """
# {
# "llm_response": "bar",
# "sentiment": -1,
# "user_feedback": "Not satisfied with the response quality",
# "user_question": "random question"
# }
# """
# Then The status code of the response is 422
# And The body of the response is the following
# """
# { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", }
# """
Comment on lines +75 to +79

⚠️ Potential issue

Invalid JSON due to trailing comma in error body

Remove the trailing comma.

-#           { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", }
+#           { "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required" }
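The point is easy to verify with Python's `json` module, whose strict parser rejects trailing commas:

```python
import json

# The commented-out expected body above, with and without the trailing comma.
bad = '{ "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required", }'
good = '{ "type": "missing", "loc": [ "body", "conversation_id" ], "msg": "Field required" }'

def parses(payload: str) -> bool:
    try:
        json.loads(payload)
        return True
    except json.JSONDecodeError:
        return False
```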


# Scenario: Check if feedback/status endpoint is working
# Given The system is in default state
# When I access REST API endpoint "feedback/status" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response is the following
# """
# {"functionality": "feedback", "status": { "enabled": true}}
# """


53 changes: 53 additions & 0 deletions tests/e2e/features/health.feature
@@ -0,0 +1,53 @@
# Feature: Health endpoint API tests
#TODO: fix test

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1


# Scenario: Check if service report proper readiness state
# Given The system is in default state
# When I access endpoint "readiness" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response has the following schema
# """
# {
# "ready": "bool",
# "reason": "str",
# "providers": "list[str]"
# }
# """
# And The body of the response is the following
# """
# {"ready": true, "reason": "All providers are healthy", "providers": []}
# """

# Scenario: Check if service report proper readiness state when llama stack is not available
# Given The system is in default state
# And The llama-stack connection is disrupted
# When I access endpoint "readiness" using HTTP GET method
# Then The status code of the response is 503

# Scenario: Check if service report proper liveness state
# Given The system is in default state
# When I access endpoint "liveness" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response has the following schema
# """
# {
# "alive": "bool"
# }
# """
# And The body of the response is the following
# """
# {"alive":true}
# """

# Scenario: Check if service report proper liveness state when llama stack is not available
# Given The system is in default state
# And The llama-stack connection is disrupted
# When I access endpoint "liveness" using HTTP GET method
# Then The status code of the response is 503
Comment on lines +1 to +53

🛠️ Refactor suggestion

Feature is entirely commented out; re-enable with @wip instead of commenting

As-is, Behave will not discover or run any scenario. Prefer tagging as @wip and excluding via behave.ini (tags = ~@wip) so the file stays syntactically valid and visible.

Apply this diff to re-enable the feature (and mark WIP):

-# Feature: Health endpoint API tests
-#TODO: fix test
+@wip
+Feature: Health endpoint API tests
+  # TODO: Replace steps with real service control once available.

-#   Background:
-#     Given The service is started locally
-#     And REST API service hostname is localhost
-#     And REST API service port is 8080
-#     And REST API service prefix is /v1
+  Background:
+    Given The service is started locally
+    And REST API service hostname is localhost
+    And REST API service port is 8080
+    And REST API service prefix is /v1

-#   Scenario: Check if service report proper readiness state
-#     Given The system is in default state
-#     When I access endpoint "readiness" using HTTP GET method
-#     Then The status code of the response is 200
-#     And The body of the response has the following schema
-#           """
-#           {
-#               "ready": "bool",
-#               "reason": "str",
-#               "providers": "list[str]"
-#           }
-#           """
-#       And The body of the response is the following
-#           """
-#           {"ready": true, "reason": "All providers are healthy", "providers": []}
-#           """
+  Scenario: Check if service reports proper readiness state
+    Given The system is in default state
+    When I access endpoint "readiness" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response has the following schema
+      """
+      {
+        "ready": "bool",
+        "reason": "str",
+        "providers": "list[str]"
+      }
+      """
+    And The body of the response is the following
+      """
+      {"ready": true, "reason": "All providers are healthy", "providers": []}
+      """

-#   Scenario: Check if service report proper readiness state when llama stack is not available
-#     Given The system is in default state
-#     And The llama-stack connection is disrupted
-#     When I access endpoint "readiness" using HTTP GET method
-#     Then The status code of the response is 503
+  Scenario: Readiness when llama-stack is disrupted
+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    When I access endpoint "readiness" using HTTP GET method
+    Then The status code of the response is 503

-#   Scenario: Check if service report proper liveness state
-#     Given The system is in default state
-#     When I access endpoint "liveness" using HTTP GET method
-#     Then The status code of the response is 200
-#     And The body of the response has the following schema
-#           """
-#           {
-#               "alive": "bool"
-#           }
-#           """
-#     And The body of the response is the following
-#           """
-#           {"alive":true}
-#           """
+  Scenario: Check if service reports proper liveness state
+    Given The system is in default state
+    When I access endpoint "liveness" using HTTP GET method
+    Then The status code of the response is 200
+    And The body of the response has the following schema
+      """
+      {
+        "alive": "bool"
+      }
+      """
+    And The body of the response is the following
+      """
+      {"alive": true}
+      """

-# Scenario: Check if service report proper liveness state when llama stack is not available
-#     Given The system is in default state
-#     And The llama-stack connection is disrupted
-#     When I access endpoint "liveness" using HTTP GET method
-#     Then The status code of the response is 503
+  Scenario: Liveness when llama-stack is disrupted
+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    When I access endpoint "liveness" using HTTP GET method
+    Then The status code of the response is 503
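The `has the following schema` step used above implies mapping type names like `"bool"` and `"list[str]"` to runtime checks; one possible stdlib sketch (the mapping and helper are assumptions about the step implementation):

```python
# Map the schema's type names to Python types; "list[str]" is checked only
# as list here for brevity -- element types are not inspected.
TYPES = {"bool": bool, "str": str, "list[str]": list}

def matches_schema(body: dict, schema: dict) -> bool:
    return all(
        key in body and isinstance(body[key], TYPES[type_name])
        for key, type_name in schema.items()
    )
```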

46 changes: 46 additions & 0 deletions tests/e2e/features/info.feature
@@ -0,0 +1,46 @@
# Feature: Info endpoint API tests
#TODO: fix test

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1

# Scenario: Check if the OpenAPI endpoint works as expected
# Given The system is in default state
# When I access endpoint "openapi.json" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response contains OpenAPI

# Scenario: Check if info endpoint is working
# Given The system is in default state
# When I access REST API endpoint "info" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response has proper name "lightspeed_stack" and version "0.2.0"

# Scenario: Check if models endpoint is working
# Given The system is in default state
# When I access REST API endpoint "models" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response contains gpt


# Scenario: Check if models endpoint fails when llama-stack is unavailable
# Given The system is in default state
# And The llama-stack connection is disrupted
# When I access REST API endpoint "models" using HTTP GET method
# Then The status code of the response is 503

# Scenario: Check if metrics endpoint is working
# Given The system is in default state
# When I access REST API endpoint "metrics" using HTTP GET method
# Then The status code of the response is 200
# And The body of the response has proper metrics

# Scenario: Check if metrics endpoint fails when llama-stack is unavailable
# Given The system is in default state
# And The llama-stack connection is disrupted
# When I access REST API endpoint "metrics" using HTTP GET method
# Then The status code of the response is 500

18 changes: 0 additions & 18 deletions tests/e2e/features/llm_interface.feature

This file was deleted.

60 changes: 60 additions & 0 deletions tests/e2e/features/query.feature
@@ -0,0 +1,60 @@
# Feature: Query endpoint API tests
#TODO: fix test

Comment on lines +1 to +3

⚠️ Potential issue

Uncomment and tag the Feature header to avoid parser errors

Same issue as other features: the fully-commented file is not a valid feature for Behave.

-# Feature: Query endpoint API tests
-#TODO: fix test
+@wip
+Feature: Query endpoint API tests
+# TODO: fix test

# Background:
# Given The service is started locally
# And REST API service hostname is localhost
# And REST API service port is 8080
# And REST API service prefix is /v1


# Scenario: Check if LLM responds to sent question
# Given The system is in default state
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 200
# And The response should have proper LLM response format
# And The response should contain following fragments
# | Fragments in LLM response |
# | Hello |

# Scenario: Check if LLM responds to sent question with different system prompt
# Given The system is in default state
# And I change the system prompt to "new system prompt"
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 200
# And The response should have proper LLM response format
# And The response should contain following fragments
# | Fragments in LLM response |
# | Hello |

# Scenario: Check if LLM responds with error for malformed request
# Given The system is in default state
# And I modify the request body by removing the "query"
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 422
# And The body of the response is the following
# """
# { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
# """

Comment on lines +30 to +39

🛠️ Refactor suggestion

Mismatch between removed field and expected error JSON; trailing comma makes JSON invalid

Step says you remove "query" but the error body points to "system_query". Also the trailing comma before the closing brace invalidates JSON.

Two options—pick one for consistency with the API:

  • If you intend to remove "query", fix the JSON’s loc to "query".
  • If you intend to remove "system_query", fix the step text accordingly.

In either case, remove the trailing comma.
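The trailing-comma problem is easy to verify with Python's standard-library JSON parser. A quick illustration (not part of the test suite; the field name `query` here follows Option A):

```python
import json

# The expected body as written in the feature file: the comma before
# the closing brace makes it invalid JSON, so json.loads rejects it.
invalid = '{ "type": "missing", "loc": [ "body", "query" ], "msg": "Field required", }'
try:
    json.loads(invalid)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False

# Without the trailing comma the same payload parses cleanly.
valid = '{ "type": "missing", "loc": [ "body", "query" ], "msg": "Field required" }'
error = json.loads(valid)
print(parsed_ok, error["loc"])
```

Since behave's multiline doc strings are compared against the service's actual response body, an unparseable expected payload would make the step fail (or silently fall back to string comparison) regardless of which field name is chosen.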

Option A (keep step, fix JSON):

-#           { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
+#           { "type": "missing", "loc": [ "body", "query" ], "msg": "Field required" }

Option B (change step to system_query, keep JSON key):

-#     And I modify the request body by removing the "query"
+#     And I modify the request body by removing the "system_query"
-#           { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
+#           { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required" }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In tests/e2e/features/query.feature around lines 30 to 39, the scenario removes
the "query" field but the expected error JSON refers to "system_query" and
includes a trailing comma making the JSON invalid; pick one of the two fixes:
either update the expected JSON loc to "query" and remove the trailing comma so
it becomes valid JSON, or change the step text to say you remove "system_query"
(keeping the JSON key) and remove the trailing comma; ensure the chosen variant
is consistent (step name matches JSON loc) and that the JSON has no trailing
comma.

# Scenario: Check if LLM responds to sent question with error when not authenticated
# Given The system is in default state
# And I remove the auth header
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 200
# Then The status code of the response is 400
# And The body of the response is the following
# """
# {"detail": "Unauthorized: No auth header found"}
# """

Comment on lines +40 to +50
⚠️ Potential issue

Conflicting assertions: double Then with 200 and 400 status codes

The “not authenticated” scenario asserts 200 and then 400. Keep only the expected error status.

-#      Then The status code of the response is 200
-#       Then The status code of the response is 400
+#      Then The status code of the response is 400

If the API standardizes on 401 for missing auth, adjust accordingly:

-#      Then The status code of the response is 400
+#      Then The status code of the response is 401
📝 Committable suggestion

Suggested change
# Scenario: Check if LLM responds to sent question with error when not authenticated
# Given The system is in default state
# And I remove the auth header
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 200
# Then The status code of the response is 400
# And The body of the response is the following
# """
# {"detail": "Unauthorized: No auth header found"}
# """
# Scenario: Check if LLM responds to sent question with error when not authenticated
# Given The system is in default state
# And I remove the auth header
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 401
# And The body of the response is the following
# """
# {"detail": "Unauthorized: No auth header found"}
# """
🤖 Prompt for AI Agents
In tests/e2e/features/query.feature around lines 40 to 50, the scenario for
unauthenticated requests contains conflicting assertions for response status
(both 200 and 400); remove the incorrect 200 assertion and keep only the
expected error status (adjust to 401 if your API uses 401 for missing auth), and
update the expected response body assertion to match the standardized error
payload (e.g., {"detail":"Unauthorized: No auth header found"} or the API's
canonical message).
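On the 400-vs-401 question, the standard HTTP semantics are worth restating (a stdlib sketch, independent of how this particular service is implemented): 401 Unauthorized means the request lacked valid credentials, while 403 Forbidden means the caller was authenticated but lacks permission, which matches the separate "not authorized" scenario below.

```python
from http import HTTPStatus

# 401: no/invalid credentials were supplied (the "missing auth header" case)
# 403: credentials were fine, but the user may not access this resource
missing_auth = HTTPStatus.UNAUTHORIZED
not_permitted = HTTPStatus.FORBIDDEN
print(missing_auth.value, missing_auth.phrase)
print(not_permitted.value, not_permitted.phrase)
```

If the service follows these conventions, the unauthenticated scenario should assert 401 rather than 400; if it deliberately returns 400 for a missing header, the feature file should say so in a comment to avoid this confusion recurring.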

# Scenario: Check if LLM responds to sent question with error when not authorized
# Given The system is in default state
#      And I modify the auth header so that the user is not authorized
# When I use "query" to ask question "Say hello"
# Then The status code of the response is 403
# And The body of the response is the following
# """
# {"detail": "Forbidden: User is not authorized to access this resource"}
# """
