fix: creating a quickstart.md, readme, and making other updates #3197
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed
Changes from all commits (4 commits):
- dec35a5: Apply UX fixes: improved README and quickstart documentation (athreesh)
- b22f94c: Add quickstart.md with comprehensive UX improvements (athreesh)
- 59ba6c2: Move quickstart.md to docs/ and update all references (athreesh)
- 14c6c35: Apply remaining UX improvements from old branch (athreesh)

# NVIDIA Dynamo

High-throughput, low-latency inference framework for serving generative AI models across multi-node distributed environments.

Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor parallelism solves the capacity problem by spreading each layer across many GPUs, and sometimes many servers, but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.

Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang).

<p align="center">
  <img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" width="600" />
</p>

Built in Rust for performance and Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.

## Latest News

* [0.5.0] KVBM (KV Cache Block Manager) support is now available in Dynamo for enhanced memory management and KV cache offloading from HBM to remote storage

## Framework Support Matrix

| Feature | vLLM | SGLang | TensorRT-LLM |
|---------|------|--------|--------------|
| [**Disaggregated Serving**](/docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ |
| [**Conditional Disaggregation**](/docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 |
| [**KV-Aware Routing**](/docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ |
| [**Load Based Planner**](/docs/architecture/load_planner.md) | 🚧 | 🚧 | 🚧 |
| [**SLA-Based Planner**](/docs/architecture/sla_planner.md) | ✅ | ✅ | ✅ |
| [**KVBM**](/docs/architecture/kvbm_architecture.md) | ✅ | 🚧 | ✅ |

To learn more about each framework and their capabilities, check out each framework's README:

- **[vLLM](components/backends/vllm/README.md)**
- **[SGLang](components/backends/sglang/README.md)**
- **[TensorRT-LLM](components/backends/trtllm/README.md)**

# Quick Start

**New to Dynamo?** Start with the **[Complete Quickstart Guide](docs/quickstart.md)** (recommended).

## Local Development

### Prerequisites

- Ubuntu 24.04 (recommended) or a compatible Linux distribution
- NVIDIA GPU with CUDA support
- Docker and Docker Compose

### 1. Install Dynamo

```bash
# Install uv (recommended Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create a virtual environment and install Dynamo
uv venv venv
source venv/bin/activate
uv pip install "ai-dynamo[sglang]"  # or [vllm], [trtllm]
```
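
If you want to sanity-check the install before going further, a quick look at the resolved package works; this is a minimal sketch assuming only a standard `uv` environment:

```bash
# Confirm ai-dynamo resolved into the active virtual environment
uv pip show ai-dynamo

# Print the installed version via standard packaging metadata
python -c "import importlib.metadata as md; print(md.version('ai-dynamo'))"
```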

### 2. Start Infrastructure Services

```bash
# Start etcd and NATS (required for distributed communication)
docker compose -f deploy/docker-compose.yml up -d
```

### 3. Run Your First Model

```bash
# Terminal 1: Start the frontend
# (pass --tls-cert-path and --tls-key-path to serve HTTPS instead of HTTP)
python -m dynamo.frontend --http-port 8000

# Terminal 2: Start a backend worker
python -m dynamo.sglang --model-path deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

> **Review comment:** Minor, but a 0.6B model will download much faster. Less waiting around. I believe Qwen3 0.6B is also vLLM's default nowadays.

### 4. Test It

```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 50}'
```
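
To get tokens back as soon as the engine issues them, the same request can be run with `"stream": true` and `curl -N` so output isn't buffered:

```bash
# Streaming variant of the request above
curl -N localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 50, "stream": true}'
```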

## Kubernetes Deployment

**Production deployments:** see the **[Kubernetes Quickstart](docs/quickstart.md#kubernetes-quickstart)**.

```bash
# Install platform
export NAMESPACE=dynamo-kubernetes
export RELEASE_VERSION=0.5.0

helm fetch https://helm.ngc.nvidia.com/nvidia/ai-dynamo/charts/dynamo-crds-${RELEASE_VERSION}.tgz
helm install dynamo-crds dynamo-crds-${RELEASE_VERSION}.tgz --namespace default

helm fetch https://helm.ngc.nvidia.com/nvidia/ai-dynamo/charts/dynamo-platform-${RELEASE_VERSION}.tgz
helm install dynamo-platform dynamo-platform-${RELEASE_VERSION}.tgz --namespace ${NAMESPACE} --create-namespace

# Deploy model (example: vLLM aggregated)
kubectl apply -f components/backends/vllm/deploy/agg.yaml -n ${NAMESPACE}

# Test the deployment
kubectl port-forward svc/agg-vllm-frontend 8000:8000 -n ${NAMESPACE}
curl http://localhost:8000/v1/models
```
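
Before port-forwarding, you may want to wait for the platform and model pods to come up. A generic check (exact pod names depend on the chart and the manifest you applied):

```bash
# Watch pod status in the namespace
kubectl get pods -n ${NAMESPACE}

# Block until all pods report Ready (timeout value is illustrative)
kubectl wait --for=condition=Ready pods --all -n ${NAMESPACE} --timeout=600s
```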

**Detailed Kubernetes deployment guide**: [Kubernetes Documentation](docs/kubernetes/README.md)

## Next Steps

- Follow the [Quickstart Guide](docs/kubernetes/README.md) to deploy on Kubernetes.
- Check out [Backends](components/backends) to deploy various workflow configurations (e.g., SGLang with router, vLLM with disaggregated serving).
- Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations.

# Benchmarking

Dynamo provides comprehensive benchmarking tools to evaluate and optimize your deployments:

* **[Benchmarking Guide](docs/benchmarks/benchmarking.md)** – Compare deployment topologies (aggregated vs. disaggregated vs. vanilla vLLM) using GenAI-Perf
* **[Pre-Deployment Profiling](docs/benchmarks/pre_deployment_profiling.md)** – Optimize configurations before deployment to meet SLA requirements

# Supported Engines

Dynamo supports multiple inference engines. Choose your preferred backend:

| Engine | Install | Run Command | Notes |
|--------|---------|-------------|-------|
| **vLLM** | `uv pip install ai-dynamo[vllm]` | `python -m dynamo.vllm --model Qwen/Qwen3-0.6B` | Use `--context-length <value>` if the KV cache doesn't fit in memory. Set `CUDA_VISIBLE_DEVICES` to choose GPUs. |
| **SGLang** | `uv pip install ai-dynamo[sglang]` | `python -m dynamo.sglang --model-path deepseek-ai/DeepSeek-R1-Distill-Llama-8B` | Requires `apt install -y libnuma-dev`. |
| **TensorRT-LLM** | `uv pip install ai-dynamo[trtllm]` | `python -m dynamo.trtllm --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B` | Requires the NVIDIA PyTorch container. See [TensorRT-LLM Quickstart](docs/quickstart.md#tensorrt-llm-backend) for setup. |

**Detailed engine guides**: [vLLM](components/backends/vllm/README.md) | [SGLang](components/backends/sglang/README.md) | [TensorRT-LLM](components/backends/trtllm/README.md)
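
Tying the table together, a minimal local run with the vLLM backend might look like the following, assuming the `[vllm]` extra is installed and etcd/NATS are already running (model and port are illustrative):

```bash
# Terminal 1: OpenAI-compatible frontend
python -m dynamo.frontend --http-port 8000

# Terminal 2: vLLM worker, pinned to one GPU via CUDA_VISIBLE_DEVICES
CUDA_VISIBLE_DEVICES=0 python -m dynamo.vllm --model Qwen/Qwen3-0.6B
```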

# Development

<details>
<summary><strong>Building from Source</strong> (click to expand)</summary>

**For contributors and advanced users**

### Prerequisites

**Ubuntu:**

```bash
sudo apt install -y build-essential libhwloc-dev libudev-dev pkg-config libclang-dev protobuf-compiler python3-dev cmake
```

**macOS:**

- [Homebrew](https://brew.sh/)

```bash
# If brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

- [Xcode](https://developer.apple.com/xcode/)

```bash
brew install cmake protobuf

# Check that Metal is accessible
xcrun -sdk macosx metal
```

If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.

### Install Rust

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```

### Create a Python virtual environment

Follow the [uv installation](https://docs.astral.sh/uv/#installation) guide if you don't have `uv` installed, then create a virtual environment and activate it:

```bash
uv venv dynamo
source dynamo/bin/activate
```

### Install build tools

```bash
uv pip install pip maturin
```

[Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.

### Build the Rust bindings

```bash
cd lib/bindings/python
maturin develop --uv
```
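
To confirm the bindings built and landed in the active environment, a quick import works; treating `dynamo` as the top-level package name is an assumption based on the `python -m dynamo.*` entry points used throughout this README:

```bash
# Hypothetical smoke test: exits cleanly only if the bindings import
python -c "import dynamo; print('dynamo bindings OK')"
```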

### Install the wheel

```bash
cd $PROJECT_ROOT
uv pip install .
# For development, use
```

Remember that nats and etcd must be running (see earlier).

Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.
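
For example, to capture debug-level logs from a one-off frontend run (the filter syntax follows `RUST_LOG`, so per-module levels should work too):

```bash
# Enable debug logging for a single invocation
DYN_LOG=debug python -m dynamo.frontend --http-port 8000
```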

If you use VS Code or Cursor, we have a `.devcontainer` folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [.devcontainer README](.devcontainer/README.md) for details.

</details>

> **Review comment:** Leading slash links break on GitHub. Change `/docs/...` to `docs/...`.