What are the requirements to stream a custom LLM with NeMo Guardrails?

The custom LLM works with NeMo Guardrails without streaming. However, I'm having trouble implementing streaming with `stream_async` or the `StreamingHandler`. Has anyone tried this, and are there extra steps needed to stream a custom LLM compared to GPT or a LangChain model through NeMo Guardrails?
Reply:

Hi @nicolemzh! The NeMo LLM provider is a good example that also supports streaming: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/llm/providers/nemollm.py. A custom LLM needs to implement streaming in the same way. Let me know if you need more guidance.
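For reference, here is a minimal sketch of what a streaming-capable custom provider can look like. The class name `MyStreamingLLM`, the stub token loop, and the provider name are hypothetical; nemollm.py linked above is the authoritative pattern. The sketch assumes the LangChain convention that NeMo Guardrails builds on: implement `_astream` and push each token through `run_manager.on_llm_new_token`, which is how an attached streaming callback (such as `StreamingHandler`) receives chunks.

```python
# A minimal sketch, not a drop-in implementation. MyStreamingLLM and the
# stub token loop are hypothetical placeholders for a real model call.
from typing import Any, AsyncIterator, List, Optional

from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk

from nemoguardrails.llm.providers import register_llm_provider


class MyStreamingLLM(LLM):
    """Hypothetical custom LLM that yields tokens as they are produced."""

    @property
    def _llm_type(self) -> str:
        return "my_streaming_llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Non-streaming fallback; replace with your real model call.
        return "stub completion for: " + prompt[:40]

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        # Replace this loop with your model's real token stream.
        for token in ["stub ", "streamed ", "completion"]:
            chunk = GenerationChunk(text=token)
            if run_manager:
                # This callback is what forwards tokens to the streaming
                # handler that NeMo Guardrails attaches to the LLM call.
                await run_manager.on_llm_new_token(token, chunk=chunk)
            yield chunk


# Make the custom LLM available as an engine in the rails config.
register_llm_provider("my_streaming_llm", MyStreamingLLM)
```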
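Once the provider is registered, consuming the stream looks the same as with the built-in engines. The snippet below is a usage sketch, assuming a config directory whose main model uses the engine name `my_streaming_llm` and whose config.yml enables `streaming: True`; the `./config` path is an assumption.

```python
import asyncio

from nemoguardrails import LLMRails, RailsConfig


async def main():
    # Assumes ./config points the main model at the registered engine
    # and enables streaming in config.yml.
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)
    async for chunk in rails.stream_async(
        messages=[{"role": "user", "content": "Hello!"}]
    ):
        print(chunk, end="", flush=True)


asyncio.run(main())
```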