🚀 The feature, motivation and pitch
PPO and a number of other LLM fine-tuning techniques require autoregressive generation as part of the training process. When using vLLM to speed up the autoregressive generation part of the training loop, is there an efficient way to update the weights of the LLM? Specifically, in the case of LoRA fine-tuning, is there a way to efficiently swap out the adapters without having to save them to the filesystem?
Alternatives
Efficient LoRA adapter update
Possible workaround without any code change: save adapters to an in-memory file-system (e.g., /dev/shm) and point to that directory in each LoRARequest. This workaround:
- Avoids disk read/write bottleneck and SSD wear.
- Still incurs the overhead of safetensors serialization and deserialization.
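A minimal sketch of this workaround, assuming a PPO training loop that holds a `peft_model` (a PeftModel) and a colocated vLLM `LLM` instance; argument names follow the v0.4.0 `LoRARequest` signature and may differ in other releases:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

ADAPTER_DIR = "/dev/shm/ppo_adapter"  # tmpfs: no disk I/O, no SSD wear


def refresh_adapter(peft_model, step: int) -> LoRARequest:
    """Save the current adapter to tmpfs and build a LoRARequest for it."""
    # Still pays the safetensors serialization/deserialization cost,
    # but entirely in RAM.
    peft_model.save_pretrained(ADAPTER_DIR)
    # A fresh lora_int_id keeps vLLM from reusing a previously cached adapter.
    return LoRARequest(
        lora_name=f"ppo_adapter_step_{step}",
        lora_int_id=step + 1,
        lora_local_path=ADAPTER_DIR,
    )


# Inside the rollout phase of the training loop:
#   llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
#   req = refresh_adapter(peft_model, step)
#   outputs = llm.generate(prompts, SamplingParams(max_tokens=64), lora_request=req)
```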
Proposed change: modify LoRARequest to allow adapters to be specified as a dictionary of tensors.
- Modify the class definition of LoRARequest (a sketch of the extended class follows this list):
  - mark `lora_local_path: str` as optional
  - add a new optional `lora_tensors: dict[str, torch.Tensor]` attribute.
- Modify the WorkerLoRAManager `_load_lora` implementation (vllm/lora/worker_manager.py):
  - verify that the given LoRARequest specifies exactly one of `lora_local_path` and `lora_tensors`.
  - optionally, move the logic for checking `unexpected_modules` into a separate method.
  - if `lora_tensors` is provided in the LoRARequest:
    - check for `unexpected_modules` in the given dict of tensors.
    - invoke `from_lora_tensors` instead of `from_local_checkpoint`.
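A minimal sketch of the extended request class under this proposal; the `lora_tensors` field is the proposed addition, and the validation shown is illustrative:

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class LoRARequest:
    lora_name: str
    lora_int_id: int
    # Exactly one of the two fields below should be provided.
    lora_local_path: Optional[str] = None
    lora_tensors: Optional[dict[str, torch.Tensor]] = None

    def __post_init__(self) -> None:
        if (self.lora_local_path is None) == (self.lora_tensors is None):
            raise ValueError(
                "LoRARequest must specify exactly one of "
                "lora_local_path or lora_tensors.")
```

Keeping both fields on the same request class preserves backward compatibility: existing callers that pass `lora_local_path` are unaffected, while in-process callers can pass the tensor dict directly.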
Alternative approach: non-LoRA parameter update
- OpenRLHF replaces vLLM model parameters with in-memory tensors by overriding `hf_model_weights_iterator` and invoking `load_weights` for each tensor in the dict. (source, patch)
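Loosely sketched, the idea looks like the following. This is not OpenRLHF's actual patch; it assumes a vLLM build whose model `load_weights` accepts an iterable of `(name, tensor)` pairs and exposes the model through the engine's worker internals, both of which are version-dependent:

```python
from typing import Dict, Iterator, Tuple

import torch


def iter_weights(state_dict: Dict[str, torch.Tensor]
                 ) -> Iterator[Tuple[str, torch.Tensor]]:
    """Yield (name, tensor) pairs, standing in for a weights-file iterator."""
    yield from state_dict.items()


def push_weights(llm, state_dict: Dict[str, torch.Tensor]) -> None:
    # Reaching into private engine attributes is what makes this approach
    # brittle across vLLM releases.
    model = llm.llm_engine.model_executor.driver_worker.model_runner.model
    model.load_weights(iter_weights(state_dict))
```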
Additional context
LLM fine-tuning objectives such as PPO require autoregressive text generation during training, and the generation must use a reasonably up-to-date copy of the model.
As of the v0.4.0 vLLM release, when instantiating a vLLM LoRARequest, the LoRA adapters are specified through the `lora_local_path: str` attribute. (source) In the LoRA PPO example above, if the vLLM instance is on the same machine as the peft training loop, sending a new copy of the adapter weights to vLLM would require the following steps:
- Invoke `peft.PeftModel.save_pretrained` to save the adapter tensor state dict (as `folder_name/adapter_model.safetensors`) to a local path on disk. Behind the scenes, this method would:
  - Invoke `peft.utils.get_peft_model_state_dict` to obtain the tensor dict, and then
  - Invoke `safetensors.torch.save_file` to serialize the lora tensors dict to the filesystem. (serialization overhead; both calls are sketched after this list)
- Instantiate a vLLM LoRARequest and set its `lora_local_path` attribute to the updated value.
- Send this LoRARequest to the vLLM Engine. Behind the scenes, vLLM would:
  - Invoke `LoRAModel.from_local_checkpoint` (source):
    - Verify that all `target_modules` listed in the peft config are supported.
    - Load the lora tensors dict from the filesystem into CPU memory. (deserialization overhead)
    - If additional embedding tensors are provided, load these into CPU memory as well.
    - Invoke `LoRAModel.from_lora_tensors` (source) to instantiate the LoRAModel.
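For reference, the two internal calls that account for the serialization overhead can be reproduced directly. A sketch, where `peft_model` stands in for the trainer's PeftModel:

```python
import os

import peft
import safetensors.torch

ADAPTER_DIR = "/dev/shm/ppo_adapter"
os.makedirs(ADAPTER_DIR, exist_ok=True)

# Essentially what PeftModel.save_pretrained does for a LoRA adapter:
lora_state_dict = peft.utils.get_peft_model_state_dict(peft_model)
safetensors.torch.save_file(
    lora_state_dict, os.path.join(ADAPTER_DIR, "adapter_model.safetensors"))
```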
If the proposed alternative is adopted, the new workflow would be:
- Invoke `peft.utils.get_peft_model_state_dict` on the LoRA model to obtain the lora tensors dict (the same dict that is written to disk in the workaround above).
- Instantiate a vLLM LoRARequest and include a pointer to this lora tensors dict.
- Send this LoRARequest to the vLLM Engine. Behind the scenes, vLLM would:
  - Invoke `LoRAModel.from_lora_tensors` (source) to instantiate the updated LoRAModel.
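Under the proposal, the calling side of the training loop would reduce to something like the sketch below; the `lora_tensors` argument is the proposed addition and does not exist in current vLLM releases:

```python
from peft.utils import get_peft_model_state_dict
from vllm.lora.request import LoRARequest


def make_in_memory_lora_request(peft_model, step: int) -> LoRARequest:
    """Build a LoRARequest directly from in-memory adapter tensors."""
    lora_tensors = get_peft_model_state_dict(peft_model)
    return LoRARequest(
        lora_name=f"ppo_adapter_step_{step}",
        lora_int_id=step + 1,
        lora_tensors=lora_tensors,  # proposed attribute; no filesystem round-trip
    )
```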
Related Issues
The idea of adding new LoRA adapters without restarting vLLM is related to #3308 with some differences:
- LoRA adapters in this feature request are in memory on the same machine as the one running the vLLM server, whereas #3308 proposes loading new adapters from disk.
- This feature request primarily addresses the vLLM Python API, whereas #3308 addresses the OpenAI-compatible HTTP API.
If the changes proposed in this feature request are merged, these features could eventually be added to the vLLM OpenAI-compatible HTTP API as well, for example to allow trusted remote users to add LoRA adapters to a vLLM server without first writing the adapters to a filesystem on the server.