feat: #1831 Add opt-in cost tracking for LiteLLM models #1832
base: main
Conversation
Unlocking the access itself is fine, but we'd like to hold off on relying on internal modules from a different package.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Thanks for updating this PR. However, I still think it's not a great idea to rely on `_litellm_cost`, even though it's optional.
"""Whether to include usage chunk. | ||
Only available for Chat Completions API.""" | ||
|
||
track_cost: bool | None = None |
This is only relevant to LiteLLM use cases. Even if we decide to add this feature, I don't think this property belongs here. If we add something like this, we may want to introduce a `LiteLLMSettings` class, which for now would hold only `track_cost`, and pass it to the `LiteLLMModel` constructor as a new argument.
I actually like this idea. I thought it might be too big a change, but if it's welcome I'll go ahead with it.
```python
        ):
            if isinstance(event, ResponseCompletedEvent):
                # Extract cost if it was attached by LiteLLM model.
                cost = getattr(event.response, "_litellm_cost", None)
```
Sorry, but I still hesitate to rely on LiteLLM's underscored property.
How about storing the cost in a `_last_stream_cost` attribute on the `LitellmModel` instance instead, and having run.py extract it from there?
Resolves #1831

Adds cost tracking for LiteLLM models when users enable it via `ModelSettings(track_cost=True)`.

Changes

- Add `cost` field to `Usage` class
- Add `track_cost` setting to `ModelSettings`. Opt-in (defaults to `False`)
- `_hidden_params["response_cost"]`
- `litellm.completion_cost()`