### Your current environment

```text
The output of `python collect_env.py`
```

### How would you like to use vllm

Is there a way to serve an embedding model and an LLM at the same time with vLLM? If so, what should I do?
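One possible approach (not confirmed in this issue, and the model names and flags below are illustrative and may differ across vLLM versions) is to run two independent vLLM API server processes, one per model, each bound to its own port:

```shell
# Hypothetical sketch: serve an embedding model and a chat LLM side by side
# by launching two separate vLLM server processes on different ports.
# Model names and CLI flags are assumptions; check `vllm serve --help`
# for your installed version.
vllm serve BAAI/bge-m3 --task embed --port 8001 &
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000 &
```

Clients would then target the matching endpoint, e.g. `http://localhost:8001/v1/embeddings` for embeddings and `http://localhost:8000/v1/chat/completions` for generation. Note that each process holds its own GPU memory, so you may need to tune per-process memory limits when both run on one GPU.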