Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
I have a rather complex schema with many optional parts. The schema represents "knowledge" to be extracted from natural language, with many options, e.g. entities, facets, actions, dates and times, and more. I have this schema working pretty well with TypeChat but for various reasons I'm looking to replace that with Pydantic AI, if I can get it to work well.
While trying to make this work, using Azure OpenAI, I found two issues:
- The descriptions and docstrings I put in the schema using `Field(description="...")`, although I see they are passed along quite deep in the code, don't appear to affect the model's behavior. I need to repeat the same info in the prompt text. Is this expected?
- Sometimes the model ignores the type annotations. E.g., a field annotated with `DateTime` (a struct defined in the same file) was given a string value (a date/time formatted using the ISO standard), causing your validation to fail. (I'm glad the validation worked; it would have been harder to debug in my own code. :-)
Are these just known issues with the models? Is there something that could be done about it? Does Pydantic AI attempt to retry the request with an additional section in the prompt indicating the problem? (LLMs can often improve their behavior -- at the cost of extra tokens, for sure.)
I'm totally happy to update my prompts if this is the way it's supposed to be, but both of these surprised me and appear to violate the promise of using type annotations in the schema. (Maybe I didn't read the docs end to end -- but who does. :-)
Example Code
Python, Pydantic AI & LLM client version
Python 3.12.11, pydantic_ai_slim 0.4.9, pydantic_core 2.33.2, openai 1.95.1.