Feature request
langchain.llms.LlamaCpp wraps llama_cpp, which recently added an n_gpu_layers argument for offloading model layers to the GPU. It would be great to expose this argument in the wrapper.
Current workaround:
# llama_cpp's Llama implements __getstate__/__setstate__ for pickling, and
# __setstate__ re-initializes the underlying model, so round-tripping the
# state with a modified kwarg forces a reload with the new setting applied.
llm = LlamaCpp(...)
state = llm.client.__getstate__()
state["n_gpu_layers"] = n_gpu_layers
llm.client.__setstate__(state)  # reloads the model with GPU offloading
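
Once the wrapper exposes the argument, it could presumably just be forwarded to the llama_cpp client at construction time, making the reload round-trip unnecessary. A hypothetical sketch of the resulting usage (the parameter is not yet supported; the path and layer count are illustrative):

# Hypothetical usage once LlamaCpp forwards n_gpu_layers to llama_cpp.Llama.
# model_path is an example path; 32 is an illustrative number of layers to
# offload to the GPU.
llm = LlamaCpp(model_path="./models/model.bin", n_gpu_layers=32)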
Motivation
Your contribution