Anything you want to discuss about vllm.
Can the current benchmark_serving.py be used with a multimodal LLM (e.g. LLaVA) and image input? The existing code in backend_request_func.py sends the request in the format shown below. Is it possible to make it support image input?
import os

# Current request construction in backend_request_func.py:
# the prompt is sent as a plain text string in a single user message.
payload = {
    "model": request_func_input.model,
    "messages": [
        {
            "role": "user",
            "content": request_func_input.prompt,
        },
    ],
    "temperature": 0.0,
    "max_tokens": request_func_input.output_len,
    "stream": True,
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}",
}
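For reference, a minimal sketch of how the payload could be extended for image input, using the OpenAI-compatible chat completions format where the message content is a list of text and image parts. Note that the `image_path` attribute on the request input and the `encode_image_base64` helper are hypothetical additions for illustration; they are not part of benchmark_serving.py or backend_request_func.py today. The headers would stay the same.

import base64


def encode_image_base64(image_path: str) -> str:
    """Read a local image and return it as a base64 data URL."""
    with open(image_path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{data}"


def build_multimodal_payload(request_func_input) -> dict:
    """Build a chat payload whose user message carries text plus an image."""
    return {
        "model": request_func_input.model,
        "messages": [
            {
                "role": "user",
                # Content becomes a list of parts instead of a plain string.
                "content": [
                    {"type": "text", "text": request_func_input.prompt},
                    {
                        "type": "image_url",
                        "image_url": {
                            # Hypothetical field holding a local image path.
                            "url": encode_image_base64(request_func_input.image_path),
                        },
                    },
                ],
            },
        ],
        "temperature": 0.0,
        "max_tokens": request_func_input.output_len,
        "stream": True,
    }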
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.