retrieving OpenAI tokens for monitoring #1288
I'm applying guardrails to a chatbot based on gpt-4o mini. I'm testing with a pretty basic setup right now, with only input and output rails. The issue I'm facing is that even with verbose logs enabled (in CLI chat, a server, or just running a single query through a Python file), total tokens, prompt tokens, and completion tokens are all reported as 0. I'd like to be able to see how many tokens each response and each guardrail is using. Can anyone help with enabling this? For context, my config is very simple:
```yaml
instructions:
  ...
rails:
  ...
```

```python
load_dotenv("")

config = RailsConfig.from_path("./config")

async def stream_response(messages):
    ...

messages = [{
...
asyncio.run(stream_response(messages))

info = rails.explain()
```
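For what it's worth, once the counts are populated, one way to break usage down per rail is to walk the list of LLM calls that `rails.explain()` returns. Here's a minimal self-contained sketch with mock data; the field names (`task`, `prompt_tokens`, etc.) and task labels are assumptions based on the snippet above, not verified against the library:

```python
from dataclasses import dataclass

# Stand-in for the per-call info objects returned by rails.explain().llm_calls;
# in real usage these come from nemoguardrails, and the field names here are
# assumptions for illustration.
@dataclass
class LLMCallInfo:
    task: str
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int

def summarize_token_usage(llm_calls):
    """Print one row per LLM call and return the grand total of tokens."""
    total = 0
    for call in llm_calls:
        print(f"{call.task}: prompt={call.prompt_tokens} "
              f"completion={call.completion_tokens} total={call.total_tokens}")
        total += call.total_tokens
    return total

# Mock data: one input rail, the main generation, one output rail.
calls = [
    LLMCallInfo("self_check_input", 120, 1, 121),
    LLMCallInfo("general", 250, 40, 290),
    LLMCallInfo("self_check_output", 160, 1, 161),
]
print("grand total:", summarize_token_usage(calls))
```

In a real setup you'd pass `rails.explain().llm_calls` instead of the mock list, which also lets you see how much of your budget the rails themselves consume versus the main response.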
Replies: 1 comment
Just noticed you resolved exactly this issue in your most recent merge. I just pulled the updated files from the develop branch and set `streaming_usage: True` in my config, and all is well!!
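For anyone else landing here, a minimal sketch of what that config change might look like, assuming the flag sits at the top level of `config.yml` (check the docs for the exact placement in your version):

```yaml
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

streaming_usage: True
```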