diff --git a/docs/usage/v1_guide.md b/docs/usage/v1_guide.md index 03f313aaef0f..ca27f4dfc8ef 100644 --- a/docs/usage/v1_guide.md +++ b/docs/usage/v1_guide.md @@ -1,6 +1,8 @@ # vLLM V1 -**We have started the process of deprecating V0. Please read [RFC #18571](https://github.com/vllm-project/vllm/issues/18571) for more details.** +!!! important + + We have started the process of deprecating V0. Please read [RFC #18571](https://github.com/vllm-project/vllm/issues/18571) for more details. V1 is now enabled by default for all supported use cases, and we will gradually enable it for every use case we plan to support. Please share any feedback on [GitHub](https://github.com/vllm-project/vllm) or in the [vLLM Slack](https://inviter.co/vllm-slack). @@ -32,53 +34,92 @@ Upgrade to vLLM’s Core Architecture](https://blog.vllm.ai/2025/01/27/v1-alpha- This living user guide outlines a few known **important changes and limitations** introduced by vLLM V1. The team has been working actively to bring V1 as the default engine, therefore this guide will be updated constantly as more features get supported on vLLM V1. -### Supports Overview -#### Hardware +## Current Status + +For each item, our progress towards V1 support falls into one of the following states: + +- **🚀 Optimized**: Nearly fully optimized, with no further work currently planned. +- **🟢 Functional**: Fully operational, with ongoing optimizations. +- **🚧 WIP**: Under active development. +- **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs). +- **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later. +- **🔴 Deprecated**: Not planned for V1 unless there is strong demand. + +### Hardware + +| Hardware | Status | +|------------|------------------------------------| +| **NVIDIA** | 🚀 | +| **AMD** | 🟢 | +| **TPU** | 🟢 | +| **CPU** | 🟢 (x86) 🟡 (MacOS) | + +!!! note + + More hardware platforms may be supported via plugins, e.g.: + + - [vllm-ascend](https://github.com/vllm-project/vllm-ascend) + - [vllm-spyre](https://github.com/vllm-project/vllm-spyre) + - [vllm-openvino](https://github.com/vllm-project/vllm-openvino) + + Please check their corresponding repositories for more details. + +### Models + +| Model Type | Status | +|-----------------|-----------------------------------------------------------------------------------| +| **Decoder-only Models** | 🚀 Optimized | +| **Encoder-Decoder Models** | 🟠 Delayed | +| **Embedding Models** | 🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188)) | +| **Mamba Models** | 🚧 WIP ([PR #19327](https://github.com/vllm-project/vllm/pull/19327)) | +| **Multimodal Models** | 🟢 Functional | -| Hardware | Status | -|----------|------------------------------------------| -| **NVIDIA** | 🚀 Natively Supported | -| **AMD** | 🚧 WIP | -| **TPU** | 🚧 WIP | -| **CPU** | 🚧 WIP | +vLLM V1 currently excludes model architectures with the `SupportsV0Only` protocol, +and the majority fall into the following categories: -#### Feature / Model +**Embedding Models** +The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188). 
-| Feature / Model | Status | +Later, we will consider using [hidden states processor](https://github.com/vllm-project/vllm/issues/12249), +which is based on [global logits processor](https://github.com/vllm-project/vllm/pull/13360) +to enable simultaneous generation and embedding using the same engine instance in V1. + +**Mamba Models** +Models using selective state-space mechanisms instead of standard transformer attention (e.g., `MambaForCausalLM`, `JambaForCausalLM`) +will be supported via [PR #19327](https://github.com/vllm-project/vllm/pull/19327). + +**Encoder-Decoder Models** +vLLM V1 is currently optimized for decoder-only transformers. +Models requiring cross-attention between separate encoder and decoder are not yet supported (e.g., `BartForConditionalGeneration`, `MllamaForConditionalGeneration`). + +For a complete list of supported models, see the [list of supported models](https://docs.vllm.ai/en/latest/models/supported_models.html). + +### Features + +| Feature | Status | |-----------------|-----------------------------------------------------------------------------------| -| **Prefix Caching** | 🚀 Optimized | -| **Chunked Prefill** | 🚀 Optimized | +| **Prefix Caching** | 🚀 Optimized | +| **Chunked Prefill** | 🚀 Optimized | | **LoRA** | 🚀 Optimized | | **Logprobs Calculation** | 🟢 Functional | -| **Multimodal Models** | 🟢 Functional | | **FP8 KV Cache** | 🟢 Functional on Hopper devices ([PR #15191](https://github.com/vllm-project/vllm/pull/15191))| | **Spec Decode** | 🚧 WIP ([PR #13933](https://github.com/vllm-project/vllm/pull/13933))| | **Prompt Logprobs with Prefix Caching** | 🟡 Planned ([RFC #13414](https://github.com/vllm-project/vllm/issues/13414))| | **Structured Output Alternative Backends** | 🟢 Functional | -| **Embedding Models** | 🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188)) | -| **Mamba Models** | 🟡 Planned | -| **Encoder-Decoder Models** | 🟠 Delayed | | **Request-level Structured Output Backend** | 🔴 Deprecated | | **best_of** | 🔴 Deprecated ([RFC #13361](https://github.com/vllm-project/vllm/issues/13361))| | **Per-Request Logits Processors** | 🔴 Deprecated ([RFC #13360](https://github.com/vllm-project/vllm/pull/13360)) | | **GPU <> CPU KV Cache Swapping** | 🔴 Deprecated | -- **🚀 Optimized**: Nearly fully optimized, with no further work currently planned. -- **🟢 Functional**: Fully operational, with ongoing optimizations. -- **🚧 WIP**: Under active development. -- **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs). -- **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later. -- **🔴 Deprecated**: Not planned for V1 unless there is strong demand. +!!! note -**Note**: vLLM V1’s unified scheduler treats both prompt and output tokens the same -way by using a simple dictionary (e.g., `{request_id: num_tokens}`) to dynamically -allocate a fixed token budget per request, enabling features like chunked prefills, -prefix caching, and speculative decoding without a strict separation between prefill -and decode phases. + vLLM V1’s unified scheduler treats both prompt and output tokens the same + way by using a simple dictionary (e.g., `{request_id: num_tokens}`) to dynamically + allocate a fixed token budget per request, enabling features like chunked prefills, + prefix caching, and speculative decoding without a strict separation between prefill + and decode phases. 
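+
+The note above can be made concrete with a minimal, illustrative sketch. The
+`schedule_step` helper and its inputs below are hypothetical (they are not part
+of vLLM's API); the sketch only shows how a fixed per-step token budget can be
+split across requests regardless of whether their tokens are prompt or output
+tokens:
+
+```python
+# Illustrative sketch only; not vLLM's actual scheduler code.
+# `waiting` maps request_id -> tokens the request still needs this step
+# (remaining prompt tokens for a prefill, or 1 for a decode step).
+def schedule_step(waiting: dict[str, int], token_budget: int) -> dict[str, int]:
+    scheduled: dict[str, int] = {}
+    for request_id, num_tokens in waiting.items():
+        if token_budget == 0:
+            break
+        # A long prompt only receives as many tokens as the remaining budget
+        # allows, which is what makes chunked prefill fall out naturally.
+        num_scheduled = min(num_tokens, token_budget)
+        scheduled[request_id] = num_scheduled
+        token_budget -= num_scheduled
+    return scheduled
+
+# One request mid-prefill and one decoding request, with a budget of 8 tokens:
+print(schedule_step({"req-0": 13, "req-1": 1}, token_budget=8))
+# {'req-0': 8}  (in this simplified sketch, req-1 waits for the next step)
+```
+
+Because prompt and output tokens draw from the same budget, no hard prefill/decode
+phase boundary is needed; the real scheduler layers policies such as prefix-cache
+reuse and speculative decoding on top of this basic budgeting loop.
+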
-### Semantic Changes and Deprecated Features
-
-#### Logprobs
+#### Semantic Changes to Logprobs
 
 vLLM V1 supports logprobs and prompt logprobs. However, there are some important semantic
 differences compared to V0:
@@ -96,6 +137,14 @@ Support for logprobs with post-sampling adjustments is in progress and will be a
 
 Currently prompt logprobs are only supported when prefix caching is turned off via `--no-enable-prefix-caching`. In a future release, prompt logprobs will be compatible with prefix caching, but a recomputation will be triggered to recover the full prompt logprobs even upon a prefix cache hit. See details in [RFC #13414](https://github.com/vllm-project/vllm/issues/13414).
 
+#### WIP Features
+
+These features are already supported in vLLM V1, but their optimization is still
+in progress.
+
+- **Spec Decode**: Currently, only ngram-based spec decode is supported in V1. There
+  will be follow-up work to support other types of spec decode (e.g., see [PR #13933](https://github.com/vllm-project/vllm/pull/13933)). We will prioritize support for Eagle and MTP over draft-model-based spec decode.
+
 #### Deprecated Features
 
 As part of the major architectural rework in vLLM V1, several legacy features have been deprecated.
 
@@ -115,39 +164,4 @@ to handle request preemptions.
 
 **Structured Output features**
 
-- **Request-level Structured Output Backend**: Deprecated, alternative backends
-  (outlines, guidance) with fallbacks is WIP.
-### Feature & Model Support in Progress
-
-Although we have re-implemented and partially optimized many features and models from V0 in vLLM V1, optimization work is still ongoing for some, and others remain unsupported.
-
-#### Features to Be Optimized
-
-These features are already supported in vLLM V1, but their optimization is still
-in progress.
-
-- **Spec Decode**: Currently, only ngram-based spec decode is supported in V1. There
-  will be follow-up work to support other types of spec decode (e.g., see [PR #13933](https://github.com/vllm-project/vllm/pull/13933)). We will prioritize the support for Eagle, MTP compared to draft model based spec decode.
-
-- **Multimodal Models**: V1 is almost fully compatible with V0 except that interleaved modality input is not supported yet.
-  See [here](https://github.com/orgs/vllm-project/projects/8) for the status of upcoming features and optimizations.
-
-#### Models to Be Supported
-
-vLLM V1 currently excludes model architectures with the `SupportsV0Only` protocol,
-and the majority fall into the following categories. V1 support for these models will be added eventually.
-
-**Embedding Models**
-The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188).
-
-Later, we will consider using [hidden states processor](https://github.com/vllm-project/vllm/issues/12249), which is based on [global logits processor](https://github.com/vllm-project/vllm/pull/13360) to enable simultaneous generation and embedding using the same engine instance in V1.
-
-**Mamba Models**
-Models using selective state-space mechanisms (instead of standard transformer attention)
-are not yet supported (e.g., `MambaForCausalLM`, `JambaForCausalLM`).
-
-**Encoder-Decoder Models**
-vLLM V1 is currently optimized for decoder-only transformers. Models requiring
- cross-attention between separate encoder and decoder are not yet supported (e.g., `BartForConditionalGeneration`, `MllamaForConditionalGeneration`).
-
-For a complete list of supported models, see the [list of supported models](https://docs.vllm.ai/en/latest/models/supported_models.html).
+- **Request-level Structured Output Backend**: Deprecated; alternative backends (outlines, guidance) with fallbacks are now supported.
diff --git a/vllm/engine/arg_utils.py b/vllm/engine/arg_utils.py
index f28f7cba4625..38d567acfd8a 100644
--- a/vllm/engine/arg_utils.py
+++ b/vllm/engine/arg_utils.py
@@ -1440,7 +1440,8 @@ def _is_v1_supported_oracle(self, model_config: ModelConfig) -> bool:
             _raise_or_fallback(feature_name=name, recommend_to_remove=False)
             return False
 
-        # Non-[CUDA, TPU] may be supported on V1, but off by default for now.
+        # Non-[CUDA, TPU, x86 CPU] may be supported on V1,
+        # but off by default for now.
         v0_hardware = not any(
             (current_platform.is_cuda_alike(), current_platform.is_tpu(),
              (current_platform.is_cpu()