
[Bug]: LiteLLM SDK Client does not send out metadata upstream #12997

@emerzon

Description


What happened?

When using the LiteLLM SDK's `completion` client, the `metadata` field is silently ignored and never forwarded upstream.

Sample code:

import litellm

litellm.api_base = ""  # Replace with your LiteLLM server URL
litellm.api_key = "sk-xxx"
litellm._turn_on_debug()

# Create a completion
response = litellm.completion(
    model="gpt-4.1-nano",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about the ocean"},
    ],
    metadata={"session_id": "abc-123"}
)

# Print the completion
print(response.choices[0].message["content"])

This produces the following debug output; the metadata is dropped and never forwarded:

18:06:08 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
18:06:08 - LiteLLM:DEBUG: utils.py:348 - litellm.completion(model='gpt-4.1-nano', messages=[{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'Write a haiku about the ocean'}], metadata={'session_id': 'abc-123'})
18:06:08 - LiteLLM:DEBUG: utils.py:348 - 

18:06:08 - LiteLLM:DEBUG: litellm_logging.py:475 - self.optional_params: {}
18:06:08 - LiteLLM:DEBUG: utils.py:348 - SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
18:06:08 - LiteLLM:INFO: utils.py:3230 - 
LiteLLM completion() model= gpt-4.1-nano; provider = openai
18:06:08 - LiteLLM:DEBUG: utils.py:3233 - 
LiteLLM: Params passed to completion() {'model': 'gpt-4.1-nano', 'functions': None, 'function_call': None, 'temperature': None, 'top_p': None, 'n': None, 'stream': None, 'stream_options': None, 'stop': None, 'max_tokens': None, 'max_completion_tokens': None, 'modalities': None, 'prediction': None, 'audio': None, 'presence_penalty': None, 'frequency_penalty': None, 'logit_bias': None, 'user': None, 'custom_llm_provider': 'openai', 'response_format': None, 'seed': None, 'tools': None, 'tool_choice': None, 'max_retries': None, 'logprobs': None, 'top_logprobs': None, 'extra_headers': None, 'api_version': None, 'parallel_tool_calls': None, 'drop_params': None, 'allowed_openai_params': None, 'reasoning_effort': None, 'additional_drop_params': None, 'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'Write a haiku about the ocean burning'}], 'thinking': None, 'web_search_options': None}
18:06:08 - LiteLLM:DEBUG: utils.py:3236 - 
LiteLLM: Non-Default params passed to completion() {}
18:06:08 - LiteLLM:DEBUG: utils.py:348 - Final returned optional params: {'extra_body': {}}
18:06:08 - LiteLLM:DEBUG: litellm_logging.py:475 - self.optional_params: {'extra_body': {}}
18:06:08 - LiteLLM:DEBUG: utils.py:4592 - checking potential_model_names in litellm.model_cost: {'split_model': 'gpt-4.1-nano', 'combined_model_name': 'openai/gpt-4.1-nano', 'stripped_model_name': 'gpt-4.1-nano', 'combined_stripped_model_name': 'openai/gpt-4.1-nano', 'custom_llm_provider': 'openai'}
18:06:08 - LiteLLM:DEBUG: litellm_logging.py:923 - 

POST Request Sent from LiteLLM:
curl -X POST \
https://llm \
-d '{'model': 'gpt-4.1-nano', 'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'Write a haiku about the ocean burning'}], 'extra_body': {}}'
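The debug curl above shows that the outgoing payload carries an empty `extra_body` while the top-level `metadata` kwarg is dropped. One possible client-side workaround (an assumption, not a confirmed fix) is to tunnel the metadata through `extra_body`, since `extra_body` keys appear to be forwarded verbatim into the POST body. The sketch below illustrates the merge without a live call; `build_payload` is a hypothetical stand-in for what LiteLLM does internally, not its actual code:

```python
# Hypothetical stand-in for the payload merge: extra_body keys are
# merged into the outgoing JSON, so metadata tunneled through it
# should reach the proxy. This is NOT LiteLLM's internal function.
def build_payload(model, messages, extra_body=None):
    payload = {"model": model, "messages": messages}
    payload.update(extra_body or {})
    return payload

payload = build_payload(
    "gpt-4.1-nano",
    [{"role": "user", "content": "Write a haiku about the ocean"}],
    extra_body={"metadata": {"session_id": "abc-123"}},
)
print(payload["metadata"])  # {'session_id': 'abc-123'}
```

With the real client, the equivalent call would be `litellm.completion(..., extra_body={"metadata": {"session_id": "abc-123"}})`; whether the LiteLLM server honors metadata delivered this way is an assumption that would need verifying.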


Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.74.8

Twitter / LinkedIn details

@emersongomesma


Labels: bug (Something isn't working)
