Commit 12a7b83

docs: Examples README/restructuring, framework READMEs, EKS examples (#2174)
1 parent e542f00 commit 12a7b83

File tree

12 files changed: +85 -5 lines changed


components/backends/trtllm/README.md

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
 ## Table of Contents
 - [Feature Support Matrix](#feature-support-matrix)
 - [Quick Start](#quick-start)
-- [Single Node Examples](#single-node-deployments)
+- [Single Node Examples](#single-node-examples)
 - [Advanced Examples](#advanced-examples)
 - [Disaggregation Strategy](#disaggregation-strategy)
 - [KV Cache Transfer](#kv-cache-transfer-in-disaggregated-serving)
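This TOC fix corrects a broken anchor: GitHub derives heading anchors from the heading text itself, so a heading `## Single Node Examples` resolves to `#single-node-examples`, not `#single-node-deployments`. The slug rule can be sketched roughly as follows (`github_slug` is a hypothetical helper approximating GitHub's behavior, not its exact implementation):

```python
import re

def github_slug(heading: str) -> str:
    # Approximate GitHub heading-anchor rule: lowercase the text,
    # drop punctuation, and turn spaces into hyphens.
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)
    return slug.replace(" ", "-")

print(github_slug("Single Node Examples"))  # -> single-node-examples
```

This is why renaming a section silently breaks any TOC entry that still points at the old slug.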

docs/architecture/distributed_runtime.md

Lines changed: 5 additions & 4 deletions
@@ -30,11 +30,12 @@ While theoretically each `DistributedRuntime` can have multiple `Namespace`s as
 
 For example, a typical deployment configuration (like `components/backends/vllm/deploy/agg.yaml` or `components/backends/sglang/deploy/agg.yaml`) has multiple workers:
 
-- `Frontend`: Starts an HTTP server and handles incoming requests. The HTTP server routes all requests to the worker components.
-- `VllmDecodeWorker`: Performs the actual decode computation using the vLLM engine through the `DecodeWorkerHandler`.
-- `VllmPrefillWorker` (in disaggregated deployments): Performs prefill computation using the vLLM engine through the `PrefillWorkerHandler`.
+- `Frontend`: Starts an HTTP server and handles incoming requests. The HTTP server routes all requests to the `Processor`.
+- `Processor`: When a new request arrives, `Processor` applies the chat template and performs the tokenization.
+  Then, it routes the request to the `Worker`.
+- `Worker` components (e.g., `VllmDecodeWorker`, `SGLangDecodeWorker`, `TrtllmWorker`): Perform the actual computation using their respective engines (vLLM, SGLang, TensorRT-LLM).
 
-Since the workers are deployed in different processes, each of them has its own `DistributedRuntime`. Within their own `DistributedRuntime`, they all share the same `Namespace` (e.g., `vllm-agg`, `vllm-disagg`, `sglang-agg`). Then, under their namespace, they have their own `Component`s: `Frontend` uses the `make_engine` function which handles HTTP serving and routing automatically, while worker components like `VllmDecodeWorker` and `VllmPrefillWorker` create components with names like `worker`, `decode`, or `prefill` and register endpoints like `generate` and `clear_kv_blocks`. The `Frontend` component doesn't explicitly create endpoints - instead, the `make_engine` function handles the HTTP server and worker discovery. Worker components create their endpoints programmatically using the `component.endpoint()` method and use their respective handler classes (`DecodeWorkerHandler` or `PrefillWorkerHandler`) to process requests. Their `DistributedRuntime`s are initialized in their respective main functions, their `Namespace`s are configured in the deployment YAML, their `Component`s are created programmatically (e.g., `runtime.namespace("vllm-agg").component("worker")`), and their `Endpoint`s are created using the `component.endpoint()` method.
+Since the workers are deployed in different processes, each of them has its own `DistributedRuntime`. Within their own `DistributedRuntime`, they all share the same `Namespace` (e.g., `vllm-agg`, `sglang-agg`). Then, under their namespace, they have their own `Component`s: `Frontend` uses the `make_engine` function which handles HTTP serving and routing automatically, while worker components create components with names like `worker`, `decode`, or `prefill` and register endpoints like `generate`, `flush_cache`, or `clear_kv_blocks`. The `Frontend` component doesn't explicitly create endpoints - instead, the `make_engine` function handles the HTTP server and worker discovery. Worker components create their endpoints programmatically using the `component.endpoint()` method. Their `DistributedRuntime`s are initialized in their respective main functions, their `Namespace`s are configured in the deployment YAML, their `Component`s are created programmatically (e.g., `runtime.namespace("dynamo").component("worker")`), and their `Endpoint`s are created using the `component.endpoint()` method.
 
 ## Initialization
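The updated paragraph describes a four-level hierarchy: each process owns a `DistributedRuntime`, runtimes share a `Namespace` name, and components register endpoints under it. A minimal pure-Python sketch of that shape (illustrative stand-in classes, not the real dynamo runtime bindings):

```python
# Stand-in classes mirroring the DistributedRuntime -> Namespace ->
# Component -> Endpoint hierarchy described in the docs. Hypothetical
# sketch only; the real dynamo bindings differ.

class Endpoint:
    def __init__(self, name):
        self.name = name

class Component:
    def __init__(self, name):
        self.name = name
        self._endpoints = {}

    def endpoint(self, name):
        # Mirrors the `component.endpoint()` call workers use to
        # register endpoints like `generate` or `clear_kv_blocks`.
        return self._endpoints.setdefault(name, Endpoint(name))

class Namespace:
    def __init__(self, name):
        self.name = name
        self._components = {}

    def component(self, name):
        return self._components.setdefault(name, Component(name))

class DistributedRuntime:
    def __init__(self):
        self._namespaces = {}

    def namespace(self, name):
        return self._namespaces.setdefault(name, Namespace(name))

# Each worker process builds its own runtime but uses the same
# namespace name, e.g. runtime.namespace("dynamo").component("worker").
runtime = DistributedRuntime()
worker = runtime.namespace("dynamo").component("worker")
generate = worker.endpoint("generate")
print(generate.name)  # -> generate
```

The point of the sketch is that the namespace string, not shared memory, is what ties separately deployed processes together.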

examples/README.md

Lines changed: 79 additions & 0 deletions
<!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Dynamo Examples

This directory contains practical examples demonstrating how to deploy and use Dynamo for distributed LLM inference. Each example includes setup instructions, configuration files, and explanations to help you understand different deployment patterns and use cases.

> **Want to see a specific example?**
> Open a [GitHub issue](https://github.com/ai-dynamo/dynamo/issues) to request an example you'd like to see, or [open a pull request](https://github.com/ai-dynamo/dynamo/pulls) if you'd like to contribute your own!

## Basics & Tutorials

Learn fundamental Dynamo concepts through these introductory examples:

- **[Quickstart](basics/quickstart/README.md)** - Simple aggregated serving example with the vLLM backend
- **[Disaggregated Serving](basics/disaggregated_serving/README.md)** - Prefill/decode separation for enhanced performance and scalability
- **[Multi-node](basics/multinode/README.md)** - Distributed inference across multiple nodes and GPUs
- **[Multimodal](basics/multimodal/README.md)** - Multimodal model deployment with E/P/D disaggregated serving

## Deployment Examples

Platform-specific deployment guides for production environments:

- **[Amazon EKS](deployments/EKS/)** - Deploy Dynamo on Amazon Elastic Kubernetes Service
- **[Azure AKS](deployments/AKS/)** - Deploy Dynamo on Azure Kubernetes Service
- **[Router Standalone](deployments/router_standalone/)** - Standalone router deployment patterns
- **Amazon ECS** - _Coming soon_
- **Google GKE** - _Coming soon_
- **Ray** - _Coming soon_
- **NVIDIA Cloud Functions (NVCF)** - _Coming soon_

## Runtime Examples

Low-level runtime examples for developers working with the Python/Rust bindings:

- **[Hello World](runtime/hello_world/README.md)** - Minimal Dynamo runtime service demonstrating basic concepts

## Getting Started

1. **Choose your deployment pattern**: Start with the [Quickstart](basics/quickstart/README.md) for a simple local deployment, or explore [Disaggregated Serving](basics/disaggregated_serving/README.md) for advanced architectures.

2. **Set up prerequisites**: Most examples require etcd and NATS services. You can start them using:
   ```bash
   docker compose -f deploy/metrics/docker-compose.yml up -d
   ```

3. **Follow the example**: Each directory contains detailed setup instructions and configuration files specific to that deployment pattern.
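Once the compose stack is up, you can sanity-check that both services are listening. A sketch assuming the default client ports (etcd on 2379, NATS on 4222) and that `nc` is available:

```shell
# Probe the default service ports: etcd (2379) and NATS (4222).
# Prints one status line per port; "closed" means the service is
# not reachable on localhost.
for port in 2379 4222; do
  if nc -z localhost "$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```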
## Prerequisites

Before running any examples, ensure you have:

- **Docker & Docker Compose** - For containerized services
- **CUDA-compatible GPU** - For LLM inference (except hello_world, which does not require a GPU)
- **Python 3.9+** - For client scripts and utilities
- **Kubernetes cluster** - For any cloud deployment/K8s examples

## Framework Support

These examples show how Dynamo broadly works using major inference engines.

If you want to see advanced, framework-specific deployment patterns and best practices, check out the [Components Workflows](../components/backends/) directory:

- **[vLLM](../components/backends/vllm/)** - vLLM-specific deployment and configuration
- **[SGLang](../components/backends/sglang/)** - SGLang integration examples and workflows
- **[TensorRT-LLM](../components/backends/trtllm/)** - TensorRT-LLM workflows and optimizations
4 files renamed without changes.
