The Feature
Support fetching prompts from Langfuse and injecting trace metadata (e.g. trace ID, span ID) into the prompt before sending the request to the LLM.
This would allow teams using Langfuse with LiteLLM to automatically enrich prompts with trace context, enabling better observability and end-to-end traceability in multi-service environments.
Motivation, pitch
I’m working on a Langfuse-integrated system where prompts are version-controlled and stored in Langfuse. I’d like to use these prompts during LLM calls made via LiteLLM, while also injecting trace metadata (like trace_id, span_id, user_id) into the prompt for better observability and auditability.
Currently, this requires manually fetching the prompt from Langfuse and writing custom logic to inject trace info, which adds boilerplate and increases the risk of inconsistency.
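For context, here is roughly what that manual wiring looks like today (a sketch only, assuming the Langfuse v2 Python SDK; the prompt name, its variables, and the trace-context system message are illustrative, hand-rolled glue rather than any library feature):

```python
import json

import litellm
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment.
langfuse = Langfuse()

# 1. Fetch the version-controlled prompt (assumes a prompt named "my-prompt"
#    with a {{topic}} variable exists in Langfuse).
prompt = langfuse.get_prompt("my-prompt")
compiled_prompt = prompt.compile(topic="observability")

# 2. Create a trace and span by hand so their IDs can be injected into the prompt.
trace = langfuse.trace(name="my-request", user_id="user-123")
span = trace.span(name="llm-call")
trace_context = json.dumps(
    {"trace_id": trace.id, "span_id": span.id, "user_id": "user-123"}
)

# 3. Custom glue: splice the trace context into the messages as a system message.
messages = [
    {"role": "system", "content": compiled_prompt},
    {"role": "system", "content": f"Trace context: {trace_context}"},
    {"role": "user", "content": "How do I trace this call end to end?"},
]

response = litellm.completion(model="gpt-4", messages=messages)
```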
Having native support in LiteLLM to:
- Fetch prompts by ID from Langfuse
- Optionally inject trace metadata (as a system message or JSON block)
- Use the resulting prompt for the LLM call

…would make integration smoother and improve production readiness for Langfuse + LiteLLM stacks.
Proposed interface (example):
```python
import litellm
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the version-controlled prompt from Langfuse and fill in its variables.
prompt = langfuse.get_prompt("prompt_id")
compiled_prompt = prompt.compile(**prompt_variables)

messages = [
    {"role": "system", "content": compiled_prompt},
    {"role": "user", "content": user_input},
]

response = litellm.completion(
    model="gpt-4",
    messages=messages,
    **prompt.config,         # 👈 reuse the model config stored with the prompt
    langfuse_prompt=prompt,  # 👈 proposed: link the call to the Langfuse prompt
)
```
LiteLLM is hiring a founding backend engineer. Are you interested in joining us and shipping to all our users?
No