TextGenerationOptions is a parameter of ITextGeneration.GenerateTextAsync, but at the moment it does not appear to be used anywhere.
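For reference, the abstraction currently has roughly this shape (the exact return type, namespaces, and signature may differ between versions); the options argument is already in place here, it just never receives values from the higher-level API:

```csharp
using System.Collections.Generic;
using System.Threading;

// Approximate shape of the existing interface; the options parameter exists
// but is not populated when the call comes from AskAsync.
public interface ITextGeneration
{
    IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = default);
}
```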
For hosted API services such as OpenAI ChatGPT, the stop sequence is not very important. For local model inference, however, the model will keep generating output indefinitely unless a stop sequence is provided.
Could you please expose TextGenerationOptions in the AskAsync API so users can configure these settings themselves? It would help a lot with local LLM inference integration. A rough sketch of the requested usage is shown below.
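For illustration only: the options argument on AskAsync does not exist today and is exactly what is being requested. The TextGenerationOptions property names (MaxTokens, StopSequences) follow the existing type but may differ by version, the namespaces may vary, and GetConfiguredMemory is a hypothetical stand-in for however the memory instance is built with a local LLM backend.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.AI;

// Hypothetical helper standing in for an IKernelMemory instance that has already
// been configured with a local LLM text generator; replace with your own setup.
IKernelMemory memory = GetConfiguredMemory();

var options = new TextGenerationOptions
{
    // Local models keep generating until the context window is exhausted unless
    // a token limit and stop sequences are supplied.
    MaxTokens = 512,
    StopSequences = new List<string> { "</s>", "### User:" }
};

// Proposed overload (not available today): forward the options down to
// ITextGeneration.GenerateTextAsync instead of dropping them.
var answer = await memory.AskAsync("What is in my documents?", options: options);
Console.WriteLine(answer.Result);
```

With something like this, anyone running a local model could set the stop sequences and token limits appropriate for their model's prompt template without changing the library itself.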