### Name and Version

LLamaSharp 0.23.0

Relevant change: https://github.com/SciSharp/LLamaSharp/pull/1140

### Operating systems

Windows

### GGML backends

CUDA

### Hardware

2x RTX 3090

### Models

gemma-3-27b-it-Q4_K_M.gguf

### Problem description & steps to reproduce

Instead of returning

### First Bad Commit

AFAIK https://github.com/ggml-org/llama.cpp/commit/e0dbec0bc6cd4b6230cda7a6ed1e9dac08d1600b

### Relevant log output

```shell
decode: failed to prepare ubatch
llama_decode: failed to decode, ret = -3
```