
History processors cause missing messages from capture_run_messages and {all,new}_messages() #2914

@DouweM

Description

Likely introduced by #2324.

The example code below produces the results shown here.

You can see that under with_processor, capture_run_messages is missing the model response and still contains the unmodified original message, while all_messages and new_messages are both missing the model request carrying the new user prompt.

This is likely caused by ctx.state.message_history getting out of sync with the messages contextvar used by capture_run_messages, and possibly by new_message_index as well (although the index is probably only part of the problem when the processor returns a modified history whose length differs from the one it received).

{
    'without_processor': {
        'capture_run_messages': [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Original message',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 935505, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='foobar',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 951969, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelResponse(
                parts=[TextPart(content='success (no tool calls)')],
                usage=RequestUsage(input_tokens=53, output_tokens=4),
                model_name='test',
                timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 952445, tzinfo=datetime.timezone.utc),
            ),
        ],
        'all_messages': [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Original message',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 935505, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='foobar',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 951969, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelResponse(
                parts=[TextPart(content='success (no tool calls)')],
                usage=RequestUsage(input_tokens=53, output_tokens=4),
                model_name='test',
                timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 952445, tzinfo=datetime.timezone.utc),
            ),
        ],
        'new_messages': [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='foobar',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 951969, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelResponse(
                parts=[TextPart(content='success (no tool calls)')],
                usage=RequestUsage(input_tokens=53, output_tokens=4),
                model_name='test',
                timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 952445, tzinfo=datetime.timezone.utc),
            ),
        ],
    },
    'with_processor': {
        'capture_run_messages': [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Original message',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 935505, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='foobar',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 954449, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
        ],
        'all_messages': [
            ModelRequest(
                parts=[
                    UserPromptPart(
                        content='Modified message',
                        timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 964587, tzinfo=datetime.timezone.utc),
                    )
                ]
            ),
            ModelResponse(
                parts=[TextPart(content='success (no tool calls)')],
                usage=RequestUsage(input_tokens=52, output_tokens=4),
                model_name='test',
                timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 965188, tzinfo=datetime.timezone.utc),
            ),
        ],
        'new_messages': [
            ModelResponse(
                parts=[TextPart(content='success (no tool calls)')],
                usage=RequestUsage(input_tokens=52, output_tokens=4),
                model_name='test',
                timestamp=datetime.datetime(2025, 9, 15, 21, 59, 27, 965188, tzinfo=datetime.timezone.utc),
            )
        ],
    },
}
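
To make the suspected mechanism concrete, here is a minimal sketch using plain strings (an assumption about the mechanics, not pydantic-ai's actual internals): capture_run_messages presumably hands back a reference to the run's message list, so when a history processor returns a brand-new list, later appends land on the new object and the captured list never sees the model response; and if new_messages() is roughly all_messages()[new_message_index:], a replaced, shorter history drops the request that carried the new user prompt.

from typing import Callable, Optional


def run_with_capture(
    history: list[str],
    processor: Optional[Callable[[list[str]], list[str]]] = None,
) -> tuple[list[str], list[str]]:
    captured = history  # what capture_run_messages would hand back
    working = history   # what the run mutates (ctx.state.message_history)
    working.append('user: foobar')
    if processor is not None:
        # The processor returns a *new* list, so `captured` no longer aliases `working`
        working = processor(working)
    working.append('model: success')
    return captured, working


captured, working = run_with_capture(['user: original'], lambda msgs: ['user: modified'])
print(captured)  # ['user: original', 'user: foobar'] -- the response is missing
print(working)   # ['user: modified', 'model: success']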

Example Code

from pydantic_ai import Agent, capture_run_messages
from pydantic_ai.messages import ModelMessage, ModelRequest, UserPromptPart

history: list[ModelMessage] = [
    ModelRequest(parts=[UserPromptPart(content='Original message')]),
]

agent = Agent('test')

# Baseline run without a history processor
with capture_run_messages() as messages1:
    result1 = agent.run_sync('foobar', message_history=history)


def return_new_history(messages: list[ModelMessage]) -> list[ModelMessage]:
    # History processor that discards the incoming messages and returns a brand-new list
    return [
        ModelRequest(parts=[UserPromptPart(content='Modified message')]),
    ]


agent = Agent('test', history_processors=[return_new_history])

# Same run, but with the history processor installed
with capture_run_messages() as messages2:
    result2 = agent.run_sync('foobar', message_history=history)

results = {
    'without_processor': {
        'capture_run_messages': messages1,
        'all_messages': result1.all_messages(),
        'new_messages': result1.new_messages(),
    },
    'with_processor': {
        'capture_run_messages': messages2,
        'all_messages': result2.all_messages(),
        'new_messages': result2.new_messages(),
    },
}

print(results)
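
A possible workaround sketch (untested, and assuming the processor receives the run's live message list): mutate entries in place and return the same list object with the same length, so the reference captured by capture_run_messages keeps seeing later appends and the original new_message_index still points at the right boundary. The modify_in_place name is illustrative, not part of the library.

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelRequest, UserPromptPart


def modify_in_place(messages: list[ModelMessage]) -> list[ModelMessage]:
    # Rewrite the first request's prompt but keep the list's identity and length
    if messages and isinstance(messages[0], ModelRequest):
        messages[0] = ModelRequest(parts=[UserPromptPart(content='Modified message')])
    return messages


agent = Agent('test', history_processors=[modify_in_place])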

Python, Pydantic AI & LLM client version

main
