
Conversation

AlexEnrique (Contributor)

This PR closes #2095

Summary

  • [BREAKING CHANGE]: `history_processors` now replace the message history with their result.
  • Updated the docs with warnings about the new behavior and about issues related to slicing the message history.
  • Added tests covering the new behavior.

Concerns

The new behavior also processes/replaces the current ModelRequest being sent to the LLM. This may lead to unexpected behavior for users of the library; for example, the following history_processor must prepend messages[-1:] so that the model request currently being sent is not removed.

from pydantic_ai.messages import ModelMessage, ModelRequest, SystemPromptPart

def process_previous_answers(messages: list[ModelMessage]) -> list[ModelMessage]:
    # Keep the last message (the current request) and add a new system prompt
    return messages[-1:] + [ModelRequest(parts=[SystemPromptPart(content='Processed answer')])]

Although this may be unexpected, changing the behavior to keep the last model request as-is might also surprise users. It conflicted with existing tests, and I thought it would be better to keep it this way and let library users handle this themselves.

I've added a warning to the docs, but maybe I need to emphasize this more.
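
For reference, here's a minimal sketch of how such a processor could be wired into an agent via the history_processors parameter; the model string and prompt are placeholders I'm adding purely for illustration, and it reuses the process_previous_answers function from the snippet above.

from pydantic_ai import Agent

# Illustrative only: placeholder model string and prompt; `process_previous_answers`
# is the history processor defined in the snippet above.
agent = Agent('openai:gpt-4o', history_processors=[process_previous_answers])
result = agent.run_sync('What is the capital of France?')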

cc: @Kludex

hyperlint-ai bot (Contributor) commented Jul 26, 2025

PR Change Summary

Updated the message history processing behavior in the library, introducing a breaking change and enhancing the documentation to address potential user concerns.

  • Replaced message history with results from the history processor, introducing a breaking change.
  • Updated documentation with warnings about new behavior and issues related to message history slicing.
  • Added tests to validate the new message history processing behavior.

Modified Files

  • docs/message-history.md

How can I customize these reviews?

Check out the Hyperlint AI Reviewer docs for more information on how to customize the review.

If you just want to ignore it on this PR, you can add the hyperlint-ignore label to the PR. Future changes won't trigger a Hyperlint review.

Note that for link checks, we only check the first 30 links in a file and we cache the results for several hours (so if you just added a page, you might run into this). We recommend adding the hyperlint-ignore label to the PR to skip the link check for this PR.

@Wh1isper (Contributor) left a comment


LGTM

DouweM assigned DouweM and Kludex and unassigned DouweM on Jul 28, 2025
DouweM (Collaborator) commented Jul 28, 2025

@AlexEnrique Thanks Alex! The code looks good to me but I'd like for @Kludex to have a look at the concern you mention in the description as he built the feature. He's out this week though, so expect a more careful review of the implications early next week!

Kludex (Member) commented Aug 4, 2025

Thanks folks. I think we should add a note on our breaking changes page in the documentation as well.

AlexEnrique (Contributor, Author) commented Aug 4, 2025

@Kludex There is a warning at the beginning of the Processing Message History section. Isn't that enough?

I'll copy it here to save you a search:

!!! warning "History processors replace the message history"
    History processors replace the message history in the state with the processed messages, including the new user prompt part.
    This means that if you want to keep the original message history, you need to make a copy of it.
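
For example, here's a minimal sketch of that "make a copy" advice, assuming you stash a copy in a separate list before returning the processed messages; the kept_history name and the trimming logic are illustrative, not taken from the docs.

import copy

from pydantic_ai.messages import ModelMessage

kept_history: list[ModelMessage] = []  # illustrative place to keep the originals

def keep_copy_then_trim(messages: list[ModelMessage]) -> list[ModelMessage]:
    # Copy the incoming messages before the state is replaced with the processed result
    kept_history.extend(copy.deepcopy(messages))
    # Only the current request survives in the processed history
    return messages[-1:]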

Kludex (Member) commented Aug 7, 2025

I forgot about this PR; I'll merge tomorrow.

Kludex enabled auto-merge (squash) on August 11, 2025 at 12:16
Kludex merged commit ecafb25 into pydantic:main on Aug 11, 2025
17 checks passed
ethanabrooks added a commit to reflectionai/pydantic-ai that referenced this pull request Aug 20, 2025
* Add `priority` `service_tier` to `OpenAIModelSettings` and respect it in `OpenAIResponsesModel` (pydantic#2368)

* Add an example of using RunContext to pass data among tools (pydantic#2316)

Co-authored-by: Douwe Maan <[email protected]>

* Rename gemini-2.5-flash-lite-preview-06-17 to gemini-2.5-flash-lite as it's out of preview (pydantic#2387)

* Fix toggleable toolset example so toolset state is not shared across agent runs (pydantic#2396)

* Support custom thinking tags specified on the model profile (pydantic#2364)

Co-authored-by: jescudero <[email protected]>
Co-authored-by: Douwe Maan <[email protected]>

* Add convenience functions to handle AG-UI requests with request-specific deps (pydantic#2397)

* docs: add missing optional packages in `install.md` (pydantic#2412)

* Include default values in tool arguments JSON schema (pydantic#2418)

* Fix "test_download_item_no_content_type test fails on macOS" (pydantic#2404)

* Allow string format, pattern and others in OpenAI strict JSON mode (pydantic#2420)

* Let more `BaseModel`s use OpenAI strict JSON mode by defaulting to `additionalProperties=False` (pydantic#2419)

* BREAKING CHANGE: Change type of 'source' field on EvaluationResult (pydantic#2388)

Co-authored-by: Douwe Maan <[email protected]>

* Fix ImageUrl, VideoUrl, AudioUrl and DocumentUrl not being serializable (pydantic#2422)

* BREAKING CHANGE: Support printing reasons in the console output for pydantic-evals (pydantic#2163)

* Document performance implications of async vs sync tools (pydantic#2298)

Co-authored-by: Douwe Maan <[email protected]>

* Mention that tools become toolset internally (pydantic#2395)

Co-authored-by: Douwe Maan <[email protected]>

* Fix tests for Logfire>=3.22.0 (pydantic#2346)

* tests: speed up the test suite (pydantic#2414)

* google: add more information about schema on union (pydantic#2426)

* typo in output docs (pydantic#2427)

* Deprecate `GeminiModel` in favor of `GoogleModel` (pydantic#2416)

* Use `httpx` on `GoogleProvider` (pydantic#2438)

* Remove older deprecated models and add new model of Anthropic (pydantic#2435)

* Remove `next()` method from `Graph` (pydantic#2440)

* BREAKING CHANGE: Remove `data` from `FinalResult` (pydantic#2443)

* BREAKING CHANGE: Remove `get_data` and `validate_structured_result` from `StreamedRunResult` (pydantic#2445)

* docs: add `griffe_warnings_deprecated` (pydantic#2444)

* BREAKING CHANGE: Remove `format_as_xml` module (pydantic#2446)

* BREAKING CHANGE: Remove `result_type` parameter and similar from `Agent` (pydantic#2441)

* Deprecate `GoogleGLAProvider` and `GoogleVertexProvider` (pydantic#2450)

* BREAKING CHANGE: drop 4 months old deprecation warnings (pydantic#2451)

* Automatically use OpenAI strict mode for strict-compatible native output types (pydantic#2447)

* Make `InlineDefsJsonSchemaTransformer` public (pydantic#2455)

* Send `ThinkingPart`s back to Anthropic used through Bedrock (pydantic#2454)

* Bump boto3 to support `AWS_BEARER_TOKEN_BEDROCK` API key env var (pydantic#2456)

* Add new Heroku models (pydantic#2459)

* Add `builtin_tools` to `Agent` (pydantic#2102)

Co-authored-by: Marcelo Trylesinski <[email protected]>
Co-authored-by: Douwe Maan <[email protected]>

* Bump mcp-run-python (pydantic#2470)

* Remove fail_under from top-level coverage config so <100% html-coverage step doesn't end CI run (pydantic#2475)

* Add AbstractAgent, WrapperAgent, Agent.event_stream_handler, Toolset.id, Agent.override(tools=...) in preparation for Temporal (pydantic#2458)

* Let toolsets be built dynamically based on run context (pydantic#2366)

Co-authored-by: Douwe Maan <[email protected]>

* Add ToolsetFunc to API docs (fix CI) (pydantic#2486)

* tests: change time of evals example (pydantic#2501)

* ci: remove html and xml reports (pydantic#2491)

* fix: Add gpt-5 models to reasoning model detection for temperature parameter handling (pydantic#2483)

Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Douwe Maan <[email protected]>
Co-authored-by: Marcelo Trylesinski <[email protected]>

* History processor replaces message history (pydantic#2324)

Co-authored-by: Marcelo Trylesinski <[email protected]>

* ci: split test suite (pydantic#2436)

Co-authored-by: Douwe Maan <[email protected]>

* ci: use the right install command (pydantic#2506)

* Update config.yaml (pydantic#2514)

* Skip testing flaky evals example (pydantic#2518)

* Fix error when parsing usage details for video without audio track in Google models (pydantic#2507)

* Make OpenAIResponsesModelSettings.openai_builtin_tools work again (pydantic#2520)

* Let Agent be run in a Temporal workflow by moving model requests, tool calls, and MCP to Temporal activities (pydantic#2225)

* Install only dev in CI (pydantic#2523)

* Improve CLAUDE.md (pydantic#2524)

* Add best practices regarding to coverage to CLAUDE.md (pydantic#2527)

* Add support for `"openai-responses"` model inference string (pydantic#2528)

Co-authored-by: Claude <[email protected]>

* docs: Confident AI (pydantic#2529)

* chore: mention what to do with the documentation when deprecating a class (pydantic#2530)

* chore: drop hyperlint (pydantic#2531)

* ci: improve matrix readability (pydantic#2532)

* Add pip to dev deps for PyCharm (pydantic#2533)

Co-authored-by: Marcelo Trylesinski <[email protected]>

* Add genai-prices to dev deps and a basic test (pydantic#2537)

* Add `--durations=100` to all pytest calls in CI (pydantic#2534)

* Cleanup snapshot in test_evaluate_async_logfire (pydantic#2538)

* Make some minor tweaks to the temporal docs (pydantic#2522)

Co-authored-by: Douwe Maan <[email protected]>

* Add new OpenAI GPT-5 models (pydantic#2503)

* Fix `FallbackModel` to respect each model's model settings (pydantic#2540)

* Add support for OpenAI verbosity parameter in Responses API (pydantic#2493)

Co-authored-by: Claude <[email protected]>
Co-authored-by: Douwe Maan <[email protected]>

* Add `UsageLimits.count_tokens_before_request` using Gemini `count_tokens` API (pydantic#2137)

Co-authored-by: Douwe Maan <[email protected]>

* chore: Fix uv.lock (pydantic#2546)

* Stop calling MCP server `get_tools` ahead of `agent run` span (pydantic#2545)

* Disable instrumentation by default in tests (pydantic#2535)

Co-authored-by: Marcelo Trylesinski <[email protected]>

* Only wrap necessary parts of type aliases in forward annotations (pydantic#2548)

* Remove anthropic-beta default header set in `AnthropicModel` (pydantic#2544)

Co-authored-by: Marcelo Trylesinski <[email protected]>

* docs: Clarify why AG-UI example links are on localhost (pydantic#2549)

* chore: Fix path to agent class in CLAUDE.md (pydantic#2550)

* Ignore leading whitespace when streaming from Qwen or DeepSeek (pydantic#2554)

* Ask model to try again if it produced a response without text or tool calls, only thinking (pydantic#2556)

Co-authored-by: Douwe Maan <[email protected]>

* chore: Improve Temporal test to check trace as tree instead of list (pydantic#2559)

* Fix: Forward max_uses parameter to Anthropic WebSearchTool (pydantic#2561)

* Let message history end on ModelResponse and execute pending tool calls (pydantic#2562)

* Fix type issues

* skip tests requiring API keys

* add `google-genai` dependency

* add other provider deps

* add pragma: no cover for untested logic

---------

Co-authored-by: akenar <[email protected]>
Co-authored-by: Tony Woland <[email protected]>
Co-authored-by: Douwe Maan <[email protected]>
Co-authored-by: Yi-Chen Lin <[email protected]>
Co-authored-by: José I. Escudero <[email protected]>
Co-authored-by: jescudero <[email protected]>
Co-authored-by: Marcelo Trylesinski <[email protected]>
Co-authored-by: William Easton <[email protected]>
Co-authored-by: David Montague <[email protected]>
Co-authored-by: Guillermo <[email protected]>
Co-authored-by: Hamza Farhan <[email protected]>
Co-authored-by: Mohamed Amine Zghal <[email protected]>
Co-authored-by: Yinon Ehrlich <[email protected]>
Co-authored-by: Matthew Brandman <[email protected]>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Co-authored-by: Douwe Maan <[email protected]>
Co-authored-by: Alex Enrique <[email protected]>
Co-authored-by: Jerry Yan <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: Mayank <[email protected]>
Co-authored-by: Alex Hall <[email protected]>
Co-authored-by: Jerry Lin <[email protected]>
Co-authored-by: Raymond Xu <[email protected]>
Co-authored-by: kauabh <[email protected]>
Co-authored-by: Victorien <[email protected]>
Co-authored-by: Ethan Brooks <[email protected]>
Co-authored-by: eballesteros <[email protected]>


Development

Successfully merging this pull request may close these issues.

Save history_processors's result for next round model request
