Update the dependencies and fix gate issues (black syntax) #39
Conversation
This patch updates the dependencies of this repository. The primary reason is that openai version 1.99.7 has the following problem:

```
  from llama_stack.providers.utils.inference.openai_compat import (
  File "/rag-content/.venv/lib64/python3.12/site-packages/llama_stack/providers/utils/inference/openai_compat.py", line 37, in <module>
    from openai.types.chat import (
ImportError: cannot import name 'ChatCompletionMessageToolCall' from 'openai.types.chat' (/rag-content/.venv/lib64/python3.12/site-packages/openai/types/chat/__init__.py). Did you mean: 'ChatCompletionMessageToolCallParam'?
```

This has been fixed in later versions: openai/openai-python@ba54e03

Signed-off-by: Lucas Alvares Gomes <[email protected]>
Walkthrough

Purely stylistic and typing updates in scripts/query_rag.py: quote-style normalization, trailing commas, comment/annotation tweaks, and minor logging configuration formatting. No functional or control-flow changes.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
Pre-merge checks: 1 passed, 2 warnings.
Actionable comments posted: 1
🧹 Nitpick comments (5)
scripts/query_rag.py (5)
45-46: Use getattr for cleaner, safer metadata access

Slight readability/type improvement; avoids hasattr and works the same.

```diff
-"metadata": node.metadata if hasattr(node, "metadata") else {},
+"metadata": getattr(node, "metadata", {}),
```

And similarly below:

```diff
-"metadata": node.metadata if hasattr(node, "metadata") else {},
+"metadata": getattr(node, "metadata", {}),
```

Also applies to: 95-96
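The getattr-with-default pattern the reviewer suggests can be shown in isolation. A minimal sketch with stand-in classes (these are hypothetical, not the script's actual node types):

```python
class PlainNode:
    """Stand-in node without a metadata attribute."""
    pass

class RichNode:
    """Stand-in node that carries metadata."""
    metadata = {"source": "docs/index.md"}

# getattr(obj, name, default) replaces the two-step pattern
#   obj.metadata if hasattr(obj, "metadata") else {}
# with a single lookup that falls back to the default.
plain = getattr(PlainNode(), "metadata", {})  # attribute missing -> {}
rich = getattr(RichNode(), "metadata", {})    # attribute present -> its value

print(plain)  # {}
print(rich)   # {'source': 'docs/index.md'}
```

Besides brevity, this avoids the double attribute lookup that hasattr followed by direct access performs.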
90-90: Replace type: ignore with explicit typing

Prefer tightening types at assignment instead of suppressing the checker.

```diff
-for node in nodes:  # type: ignore
+for node in nodes:
```

To support this, annotate/cast at retrieval (outside this hunk):

```python
from typing import Sequence, cast
from llama_index.core.schema import NodeWithScore

nodes = cast(Sequence[NodeWithScore], retriever.retrieve(args.query))
```

Please confirm the project uses a LlamaIndex version where NodeWithScore is available/named as above; adjust the import if it has been moved or renamed.
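The cast pattern itself can be demonstrated without llama_index. A stdlib-only sketch using a stand-in dataclass (NodeWithScore here is a placeholder, not the real schema class):

```python
from dataclasses import dataclass
from typing import Sequence, cast

@dataclass
class NodeWithScore:  # stand-in for llama_index.core.schema.NodeWithScore
    text: str
    score: float

def retrieve(query: str) -> object:
    """Simulates a retriever whose declared return type is too loose."""
    return [NodeWithScore("chunk about " + query, 0.92)]

# cast() narrows the static type so a checker accepts attribute access
# without `# type: ignore`; it performs no runtime conversion or check.
nodes = cast(Sequence[NodeWithScore], retrieve("operators"))
scores = [node.score for node in nodes]
print(scores)  # [0.92]
```

Because cast is purely a static-typing hint, it is only safe when the runtime value really has the asserted shape; a wrong cast silences the checker without fixing anything.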
204-204: Minor: prefer getattr for score extraction

Equivalent behavior with slightly cleaner syntax.

```diff
-"score": score.score if hasattr(score, "score") else score,
+"score": getattr(score, "score", score),
```
206-207: Optionally propagate chunk metadata when available

If res.metadata provides per-chunk metadata (often under "metadatas"), include it to aid callers.

Example (conceptual):

```python
metadatas = md.get("metadatas") or [{}] * len(md["chunks"])
for i, (_id, chunk, score) in enumerate(
    zip(md["document_ids"], md["chunks"], md["scores"])
):
    node_data = {
        "id": _id,
        "score": getattr(score, "score", score),
        "text": chunk,
        "metadata": metadatas[i],
    }
```
267-269: Logging setup: good JSON/stderr separation; consider force=True

The current change correctly keeps stdout clean in JSON mode. If this script can be invoked from environments that pre-configure logging, add force=True to avoid duplicate handlers/formatters.

```diff
 logging.basicConfig(
     level=logging.ERROR,
     format="%(levelname)s: %(message)s",
     stream=sys.stderr,  # Send logs to stderr to keep stdout clean for JSON
+    force=True,
 )
 ...
-logging.basicConfig(level=logging.INFO, format="%(message)s")
+logging.basicConfig(level=logging.INFO, format="%(message)s", force=True)
```

Verify your minimum Python version is 3.8+ (force was added in 3.8).

Also applies to: 273-273
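The force=True behavior is easy to verify in isolation. A minimal sketch simulating a host that already configured logging before the script runs:

```python
import logging
import sys

# Simulate a host environment that already configured the root logger.
logging.basicConfig(level=logging.INFO, format="%(message)s")

# Without force=True this second call would be a silent no-op, because
# basicConfig does nothing once the root logger has handlers. With
# force=True (Python 3.8+) the existing handlers are removed first and
# the new configuration is applied.
logging.basicConfig(
    level=logging.ERROR,
    format="%(levelname)s: %(message)s",
    stream=sys.stderr,  # keep stdout clean for JSON output
    force=True,
)

root = logging.getLogger()
print(len(root.handlers))           # 1 -- no duplicate handlers
print(root.level == logging.ERROR)  # True -- new level took effect
```

Without force=True, the second basicConfig call would leave the root logger at INFO with the original formatter, which is exactly the duplicate-configuration hazard the review points out.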
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
scripts/query_rag.py (9 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Pylinter
- GitHub Check: build-and-push-dev
- GitHub Check: mypy
🔇 Additional comments (1)
scripts/query_rag.py (1)
63-64: LGTM: JSON shape consistency

Trailing-comma/empty-list normalization improves diff hygiene and keeps the response schema stable. No behavior change.
Also applies to: 78-79, 88-89, 172-173, 188-189, 198-199
tisnik left a comment: LGTM
Description
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Style
Chores
No user-facing changes. Functionality, inputs/outputs, and error handling remain unchanged.