I'm getting an AccessViolationException in LlamaWeights.LoadFromFile(), coming from LLama.Native.NativeApi.llama_model_meta_count(LLama.Native.SafeLlamaModelHandle), when the model file does not exist:
[LLamaSharp Native] [Info] NativeLibraryConfig Description:
- Path:
- PreferCuda: True
- PreferredAvxLevel: AVX2
- AllowFallback: True
- SkipCheck: False
- Logging: True
- SearchDirectories and Priorities: { ./bin/Debug/net8.0/, ./ }
[LLamaSharp Native] [Info] Detected OS Platform: WINDOWS
[LLamaSharp Native] [Info] Detected cuda major version 12.
[LLamaSharp Native] [Info] ./bin/Debug/net8.0/runtimes/win-x64/native/cuda12/libllama.dll is selected and loaded successfully.
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA RTX A2000 8GB Laptop GPU, compute capability 8.6
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
at LLama.Native.NativeApi.llama_model_meta_count(LLama.Native.SafeLlamaModelHandle)
--------------------------------
at LLama.Native.SafeLlamaModelHandle.get_MetadataCount()
at LLama.Native.SafeLlamaModelHandle.ReadMetadata()
at LLama.LLamaWeights..ctor(LLama.Native.SafeLlamaModelHandle)
at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
at Program+<Main>d__0.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
at Program.Main(System.String[])
at Program.<Main>(System.String[])
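For reference, here's a minimal sketch of the call that triggers it. The model path and parameter values below are illustrative, not my exact setup:

```csharp
using LLama;
using LLama.Common;

// Hypothetical path: the file does not exist on disk.
var modelPath = @"C:\models\does-not-exist.gguf";

var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,   // illustrative values
    GpuLayerCount = 20
};

// Instead of a managed exception, this crashes the process with the
// AccessViolationException shown in the stack trace above.
using var weights = LLamaWeights.LoadFromFile(parameters);
```

Checking File.Exists(modelPath) before calling LoadFromFile avoids the crash on my side, but it would be nicer if the failed native load surfaced as a managed exception rather than an AccessViolationException.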