Closed
Labels
feature request (New feature or request), new-model (Requests to new models)
Description
Dear Authors,
Thank you so much for your wonderful work. If I am running LLaVA (https://github.com/haotian-liu/LLaVA/blob/main/llava/model/llava.py), a multimodal LLM built on LLaMA by adding an image encoder, what is the most convenient way to incorporate vLLM?
I think I can follow the instructions at https://vllm.readthedocs.io/en/latest/models/adding_model.html. Is there a more convenient way?