Hello,
If I want to use gpt-o1 with Semantic Kernel chat completion, I need to use `AzureOpenAIPromptExecutionSettings` and set its `SetNewMaxCompletionTokensEnabled` property to `true`, because gpt-o1 requires it. However, when Kernel Memory sends a RAG prompt to an LLM, it uses `OpenAIPromptExecutionSettings`, which does not support that property, so I cannot use gpt-o1 with Kernel Memory. Is this limitation intentional, or would it be okay for me to open an issue and contribute a fix? (I'm asking in case there is a reason it was left this way.)