Refactor application structure to use LangGraph DeepAgents #423
base: enhance
Conversation
…ve outdated langchain packages
Review these changes at https://app.gitnotebooks.com/AlphaSphereDotAI/chattr/pull/423
Reviewer's Guide

This PR refactors the application to replace the custom state graph, memory, and LLM setup with the DeepAgents multi-agent framework, streamlines prompt and model initialization, and updates dependencies and GUI rendering for improved maintainability.

Sequence diagram for response generation with DeepAgents

```mermaid
sequenceDiagram
    participant User
    participant App
    participant DeepAgents
    participant MongoDB
    User->>App: Send message
    App->>DeepAgents: Start agent stream (with message)
    DeepAgents->>MongoDB: Checkpoint state
    DeepAgents-->>App: Stream response
    App-->>User: Display response
```
Class diagram for refactored App structure using DeepAgents

```mermaid
classDiagram
    class App {
        Settings settings
        AsyncMongoDBSaver _checkpointer
        CompiledStateGraph _deep_agent
        list~BaseTool~ _tools
        +async create(settings: Settings) App
        +_setup_deepagents() CompiledStateGraph
        +_setup_prompt() str
        +_setup_model() BaseChatModel
        +draw_graph() Path
        +gui() Blocks
        +async generate_response(message: str) AIMessage
    }
    App --> AsyncMongoDBSaver : uses as checkpointer
    App --> CompiledStateGraph : uses as deep_agent
    App --> BaseTool : uses tools
    App --> Settings : has settings
    class AsyncMongoDBSaver
    class CompiledStateGraph
    class BaseTool
    class Settings
    class BaseChatModel
    class Blocks
    class AIMessage
```
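For orientation, here is a minimal sketch of what the new `_setup_deepagents` step could look like. It assumes the `deepagents` package's `create_deep_agent` helper and the keyword names (`tools`, `instructions`, `model`) used in recent examples; the release pinned by this PR may differ, and the checkpointer wiring in particular is an assumption, not something confirmed by the diff.

```python
from deepagents import create_deep_agent  # assumed deepagents API


def build_deep_agent(tools: list, prompt: str, model):
    """Sketch of the _setup_deepagents step: wrap the existing tool list, the
    output of _setup_prompt(), and the chat model from _setup_model() into a
    compiled DeepAgents graph."""
    return create_deep_agent(
        tools=tools,
        instructions=prompt,
        model=model,
        # Assumption: the AsyncMongoDBSaver checkpointer is attached either via
        # a `checkpointer=` keyword (newer deepagents releases) or when the
        # graph is compiled/invoked.
    )
```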
File-Level Changes
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch; please check the settings in the CodeRabbit UI. You can disable this status message through CodeRabbit's configuration.

Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
Summary of Changes

Hello @MH0386, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request introduces a significant refactoring of the application's underlying architecture by adopting the LangGraph DeepAgents framework.

Highlights
Here's the code health analysis summary for the commits in this pull request.

Analysis Summary
Hey there - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/chattr/app/builder.py:74` </location>
<code_context>
- cls._model = cls._llm.bind_tools(cls._tools, parallel_tool_calls=False)
- cls._memory = await cls._setup_memory()
- cls._graph = cls._setup_graph()
+ cls._checkpointer = AsyncMongoDBSaver.from_conn_string("localhost:27017")
+ cls._deep_agent = cls._setup_deepagents()
return cls()
</code_context>
<issue_to_address>
**suggestion:** Hardcoded MongoDB connection string may reduce deployment flexibility.
Consider sourcing the connection string from configuration or environment variables to support multiple deployment environments.
```suggestion
import os
mongo_conn_string = os.environ.get("MONGODB_CONN_STRING", "mongodb://localhost:27017")
cls._checkpointer = AsyncMongoDBSaver.from_conn_string(mongo_conn_string)
```
</issue_to_address>
### Comment 2
<location> `src/chattr/app/builder.py:116` </location>
<code_context>
- api_key=cls.settings.model.api_key,
- temperature=cls.settings.model.temperature,
- )
+ return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
except Exception as e:
_msg = f"Failed to initialize ChatOpenAI model: {e}"
</code_context>
<issue_to_address>
**suggestion:** Model initialization is now hardcoded to Gemini; consider configurability.
Making the model and provider configurable will allow easier support for future changes or additional options.
Suggested implementation:
```python
try:
return init_chat_model(
cls.settings.model.name,
model_provider=cls.settings.model.provider
)
except Exception as e:
_msg = f"Failed to initialize ChatOpenAI model: {e}"
logger.error(_msg)
raise Error(_msg) from e
```
Ensure that `cls.settings.model` has the attributes `name` and `provider` set appropriately, either via configuration files or environment variables. If these attributes do not exist, you will need to add them to your model settings class and update any configuration logic accordingly.
</issue_to_address>
```diff
- cls._model = cls._llm.bind_tools(cls._tools, parallel_tool_calls=False)
- cls._memory = await cls._setup_memory()
- cls._graph = cls._setup_graph()
+ cls._checkpointer = AsyncMongoDBSaver.from_conn_string("localhost:27017")
```
suggestion: Hardcoded MongoDB connection string may reduce deployment flexibility.
Consider sourcing the connection string from configuration or environment variables to support multiple deployment environments.
```suggestion
import os
mongo_conn_string = os.environ.get("MONGODB_CONN_STRING", "mongodb://localhost:27017")
cls._checkpointer = AsyncMongoDBSaver.from_conn_string(mongo_conn_string)
```
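Since the app already carries a `Settings` object, an alternative to the raw environment lookup is a dedicated settings field. The sketch below assumes pydantic-settings and a hypothetical `mongo_uri` field read from a `MONGO_URI` environment variable; neither exists in the current `Settings` schema.

```python
from pydantic_settings import BaseSettings


class DatabaseSettings(BaseSettings):
    """Hypothetical settings section; the field name and env var are assumptions."""

    # Read from the MONGO_URI environment variable, falling back to a local instance.
    mongo_uri: str = "mongodb://localhost:27017"


# Sketch of the call site in App.create (attribute path is hypothetical):
#   cls._checkpointer = AsyncMongoDBSaver.from_conn_string(cls.settings.database.mongo_uri)
```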
```diff
- api_key=cls.settings.model.api_key,
- temperature=cls.settings.model.temperature,
- )
+ return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
```
suggestion: Model initialization is now hardcoded to Gemini; consider configurability.
Making the model and provider configurable will allow easier support for future changes or additional options.
Suggested implementation:
```python
try:
    return init_chat_model(
        cls.settings.model.name,
        model_provider=cls.settings.model.provider
    )
except Exception as e:
    _msg = f"Failed to initialize ChatOpenAI model: {e}"
    logger.error(_msg)
    raise Error(_msg) from e
```

Ensure that cls.settings.model has the attributes name and provider set appropriately, either via configuration files or environment variables. If these attributes do not exist, you will need to add them to your model settings class and update any configuration logic accordingly.
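To make that suggestion concrete, here is a sketch of how `ModelSettings` could expose the two fields referenced above. The field names, env-var prefix, and defaults are assumptions; the defaults simply mirror the values currently hardcoded in the PR.

```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class ModelSettings(BaseSettings):
    """Hypothetical schema for the model section of the app settings."""

    model_config = SettingsConfigDict(env_prefix="MODEL_")

    name: str = "gemini-2.5-flash"   # overridable via MODEL_NAME
    provider: str = "google_genai"   # overridable via MODEL_PROVIDER
    api_key: str | None = None       # overridable via MODEL_API_KEY
    temperature: float = 0.7         # overridable via MODEL_TEMPERATURE
```

With this in place, `_setup_model` can call `init_chat_model(cls.settings.model.name, model_provider=cls.settings.model.provider)` as in the suggestion above.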
🧪 CI Insights: Here's what we observed from your CI run for a4c374b. 🟢 All jobs passed! But CI Insights is watching 👀
Review the following changes in direct dependencies. Learn more about Socket for GitHub.
Code Review
This pull request successfully refactors the application to use DeepAgents, which simplifies the overall architecture. However, the refactoring has introduced several critical regressions by hardcoding values for the database connection, model configuration, and conversation thread_id. These hardcoded values remove configurability and, in the case of thread_id, will cause all users to share a single conversation state. These issues must be addressed before merging.
```diff
- api_key=cls.settings.model.api_key,
- temperature=cls.settings.model.temperature,
- )
+ return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
```
The chat model is hardcoded to use gemini-2.5-flash with the google_genai provider. This bypasses the ModelSettings configuration, removing the flexibility to switch models. This functionality should be restored by using the values from cls.settings.model. You may need to update ModelSettings to include a provider field.
```python
last_agent_message: AIMessage | None = None
async for response in cls._deep_agent.astream(
    State(messages=[HumanMessage(content=message)], mem0_user_id="1"),
    RunnableConfig(configurable={"thread_id": "1"}),
```
The thread_id is hardcoded to "1". This is a critical flaw that will cause all users to share the same conversation history, as the checkpointer uses this ID to save and load state. Each conversation requires a unique thread_id. This should be implemented using a session management mechanism, such as Gradio's gr.State, to assign and track a unique ID for each user session.
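A minimal sketch of the session-scoped ID described above, assuming a Gradio `Blocks` GUI and a hypothetical `generate_response(message, thread_id)` signature (the current method takes only `message`); inside `generate_response`, the ID would then flow into `RunnableConfig(configurable={"thread_id": thread_id})`.

```python
import uuid

import gradio as gr

# App is the refactored application class from this PR.


async def respond(message: str, history: list, thread_id: str | None):
    # Lazily create one thread_id per browser session; gr.State keeps it per user.
    thread_id = thread_id or str(uuid.uuid4())
    reply = await App.generate_response(message, thread_id=thread_id)  # hypothetical signature
    return "", history + [(message, reply.content)], thread_id


with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    box = gr.Textbox()
    thread_id = gr.State(None)  # unique per session once set
    box.submit(respond, [box, chatbot, thread_id], [box, chatbot, thread_id])
```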
```diff
- cls._model = cls._llm.bind_tools(cls._tools, parallel_tool_calls=False)
- cls._memory = await cls._setup_memory()
- cls._graph = cls._setup_graph()
+ cls._checkpointer = AsyncMongoDBSaver.from_conn_string("localhost:27017")
```
```diff
- )
+ return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
  except Exception as e:
      _msg = f"Failed to initialize ChatOpenAI model: {e}"
```
```diff
- async for response in cls._graph.astream(
+ last_agent_message: AIMessage | None = None
+ async for response in cls._deep_agent.astream(
+     State(messages=[HumanMessage(content=message)], mem0_user_id="1"),
```
The mem0_user_id parameter appears to be a remnant from the previous mem0 implementation. With DeepAgents using thread_id for session management, this parameter is likely unused. It should be removed from the State class definition and this call to improve clarity and remove dead code.
```diff
- State(messages=[HumanMessage(content=message)], mem0_user_id="1"),
+ State(messages=[HumanMessage(content=message)]),
```
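If the field is dropped, the agent state reduces to the standard LangGraph message-accumulating schema; a sketch, assuming `State` is a TypedDict-style graph state:

```python
from typing import Annotated

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages
from typing_extensions import TypedDict


class State(TypedDict):
    """Graph state after removing the unused mem0_user_id field."""

    messages: Annotated[list[AnyMessage], add_messages]
```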
🔍 Vulnerabilities of
| Attribute | Value |
| --- | --- |
| digest | sha256:b71ded86335e6e5e6385a3edbaa561acabe6efacf84a1dc22f1abb40d43b7b7a |
| vulnerabilities | |
| platform | linux/amd64 |
| size | 328 MB |
| packages | 521 |
# Dockerfile (28:28)

```dockerfile
COPY --from=builder --chown=nonroot:nonroot --chmod=555 /home/nonroot/.local/ /home/nonroot/.local/
```
Refactor the application by moving GUI components, improving memory handling, and updating error handling. Enhance Docker and CI configurations, streamline logging, and integrate new dependencies. Address style issues and improve overall code organization for better maintainability.
Summary by Sourcery
Refactor the core application to leverage the DeepAgents multi-agent system and simplify the existing state graph, memory handling, and model setup.
New Features:
Enhancements:
Build: