diff --git a/components/backends/trtllm/README.md b/components/backends/trtllm/README.md
index 4bb490cb17..3a5b495dce 100644
--- a/components/backends/trtllm/README.md
+++ b/components/backends/trtllm/README.md
@@ -36,7 +36,7 @@ git checkout $(git describe --tags $(git rev-list --tags --max-count=1))
 ## Table of Contents
 - [Feature Support Matrix](#feature-support-matrix)
 - [Quick Start](#quick-start)
-- [Single Node Examples](#single-node-deployments)
+- [Single Node Examples](#single-node-examples)
 - [Advanced Examples](#advanced-examples)
 - [Disaggregation Strategy](#disaggregation-strategy)
 - [KV Cache Transfer](#kv-cache-transfer-in-disaggregated-serving)
diff --git a/docs/architecture/distributed_runtime.md b/docs/architecture/distributed_runtime.md
index 8754fa87eb..54fd09e6e1 100644
--- a/docs/architecture/distributed_runtime.md
+++ b/docs/architecture/distributed_runtime.md
@@ -30,11 +30,12 @@ While theoretically each `DistributedRuntime` can have multiple `Namespace`s as
 For example, a typical deployment configuration (like `components/backends/vllm/deploy/agg.yaml` or `components/backends/sglang/deploy/agg.yaml`) has multiple workers:
-- `Frontend`: Starts an HTTP server and handles incoming requests. The HTTP server routes all requests to the worker components.
-- `VllmDecodeWorker`: Performs the actual decode computation using the vLLM engine through the `DecodeWorkerHandler`.
-- `VllmPrefillWorker` (in disaggregated deployments): Performs prefill computation using the vLLM engine through the `PrefillWorkerHandler`.
+- `Frontend`: Starts an HTTP server and handles incoming requests. The HTTP server routes all requests to the `Processor`.
+- `Processor`: When a new request arrives, `Processor` applies the chat template and performs the tokenization.
+Then, it routes the request to the `Worker`.
+- `Worker` components (e.g., `VllmDecodeWorker`, `SGLangDecodeWorker`, `TrtllmWorker`): Perform the actual computation using their respective engines (vLLM, SGLang, TensorRT-LLM).
-Since the workers are deployed in different processes, each of them has its own `DistributedRuntime`. Within their own `DistributedRuntime`, they all share the same `Namespace` (e.g., `vllm-agg`, `vllm-disagg`, `sglang-agg`). Then, under their namespace, they have their own `Component`s: `Frontend` uses the `make_engine` function which handles HTTP serving and routing automatically, while worker components like `VllmDecodeWorker` and `VllmPrefillWorker` create components with names like `worker`, `decode`, or `prefill` and register endpoints like `generate` and `clear_kv_blocks`. The `Frontend` component doesn't explicitly create endpoints - instead, the `make_engine` function handles the HTTP server and worker discovery. Worker components create their endpoints programmatically using the `component.endpoint()` method and use their respective handler classes (`DecodeWorkerHandler` or `PrefillWorkerHandler`) to process requests. Their `DistributedRuntime`s are initialized in their respective main functions, their `Namespace`s are configured in the deployment YAML, their `Component`s are created programmatically (e.g., `runtime.namespace("vllm-agg").component("worker")`), and their `Endpoint`s are created using the `component.endpoint()` method.
+Since the workers are deployed in different processes, each of them has its own `DistributedRuntime`. Within their own `DistributedRuntime`, they all share the same `Namespace` (e.g., `vllm-agg`, `sglang-agg`). Then, under their namespace, they have their own `Component`s: `Frontend` uses the `make_engine` function which handles HTTP serving and routing automatically, while worker components create components with names like `worker`, `decode`, or `prefill` and register endpoints like `generate`, `flush_cache`, or `clear_kv_blocks`.
+The `Frontend` component doesn't explicitly create endpoints - instead, the `make_engine` function handles the HTTP server and worker discovery. Worker components create their endpoints programmatically using the `component.endpoint()` method. Their `DistributedRuntime`s are initialized in their respective main functions, their `Namespace`s are configured in the deployment YAML, their `Component`s are created programmatically (e.g., `runtime.namespace("dynamo").component("worker")`), and their `Endpoint`s are created using the `component.endpoint()` method.
 
 ## Initialization
diff --git a/examples/README.md b/examples/README.md
new file mode 100644
index 0000000000..13fdfe5ad2
--- /dev/null
+++ b/examples/README.md
@@ -0,0 +1,79 @@
+
+# Dynamo Examples
+
+This directory contains practical examples demonstrating how to deploy and use Dynamo for distributed LLM inference. Each example includes setup instructions, configuration files, and explanations to help you understand different deployment patterns and use cases.
+
+> **Want to see a specific example?**
+> Open a [GitHub issue](https://github.com/ai-dynamo/dynamo/issues) to request an example you'd like to see, or [open a pull request](https://github.com/ai-dynamo/dynamo/pulls) if you'd like to contribute your own!
+
+## Basics & Tutorials
+
+Learn fundamental Dynamo concepts through these introductory examples:
+
+- **[Quickstart](basics/quickstart/README.md)** - Simple aggregated serving example with vLLM backend
+- **[Disaggregated Serving](basics/disaggregated_serving/README.md)** - Prefill/decode separation for enhanced performance and scalability
+- **[Multi-node](basics/multinode/README.md)** - Distributed inference across multiple nodes and GPUs
+- **[Multimodal](basics/multimodal/README.md)** - Multimodal model deployment with E/P/D disaggregated serving
+
+## Deployment Examples
+
+Platform-specific deployment guides for production environments:
+
+- **[Amazon EKS](deployments/EKS/)** - Deploy Dynamo on Amazon Elastic Kubernetes Service
+- **[Azure AKS](deployments/AKS/)** - Deploy Dynamo on Azure Kubernetes Service
+- **[Router Standalone](deployments/router_standalone/)** - Standalone router deployment patterns
+- **Amazon ECS** - _Coming soon_
+- **Google GKE** - _Coming soon_
+- **Ray** - _Coming soon_
+- **NVIDIA Cloud Functions (NVCF)** - _Coming soon_
+
+## Runtime Examples
+
+Low-level runtime examples for developers using Python<>Rust bindings:
+
+- **[Hello World](runtime/hello_world/README.md)** - Minimal Dynamo runtime service demonstrating basic concepts
+
+## Getting Started
+
+1. **Choose your deployment pattern**: Start with the [Quickstart](basics/quickstart/README.md) for a simple local deployment, or explore [Disaggregated Serving](basics/disaggregated_serving/README.md) for advanced architectures.
+
+2. **Set up prerequisites**: Most examples require etcd and NATS services. You can start them using:
+   ```bash
+   docker compose -f deploy/metrics/docker-compose.yml up -d
+   ```
+
+3. **Follow the example**: Each directory contains detailed setup instructions and configuration files specific to that deployment pattern.
+
+## Prerequisites
+
+Before running any examples, ensure you have:
+
+- **Docker & Docker Compose** - For containerized services
+- **CUDA-compatible GPU** - For LLM inference (except hello_world, which does not require a GPU)
+- **Python 3.9+** - For client scripts and utilities
+- **Kubernetes cluster** - For any cloud deployment/K8s examples
+
+## Framework Support
+
+These examples show how Dynamo broadly works using major inference engines.
+
+If you want to see advanced, framework-specific deployment patterns and best practices, check out the [Components Workflows](../components/backends/) directory:
+- **[vLLM](../components/backends/vllm/)** – vLLM-specific deployment and configuration
+- **[SGLang](../components/backends/sglang/)** – SGLang integration examples and workflows
+- **[TensorRT-LLM](../components/backends/trtllm/)** – TensorRT-LLM workflows and optimizations
\ No newline at end of file
diff --git a/examples/multimodal/README.md b/examples/basics/multimodal/README.md
similarity index 100%
rename from examples/multimodal/README.md
rename to examples/basics/multimodal/README.md
diff --git a/examples/deployments/AKS-deployment.md b/examples/deployments/AKS/AKS-deployment.md
similarity index 100%
rename from examples/deployments/AKS-deployment.md
rename to examples/deployments/AKS/AKS-deployment.md
diff --git a/examples/router_standalone/README.md b/examples/deployments/router_standalone/README.md
similarity index 100%
rename from examples/router_standalone/README.md
rename to examples/deployments/router_standalone/README.md
diff --git a/examples/router_standalone/__init__.py b/examples/deployments/router_standalone/__init__.py
similarity index 100%
rename from examples/router_standalone/__init__.py
rename to examples/deployments/router_standalone/__init__.py
diff --git a/examples/router_standalone/api.py b/examples/deployments/router_standalone/api.py
similarity index 100%
rename from examples/router_standalone/api.py
rename to examples/deployments/router_standalone/api.py
diff --git a/examples/router_standalone/perf.sh b/examples/deployments/router_standalone/perf.sh
similarity index 100%
rename from examples/router_standalone/perf.sh
rename to examples/deployments/router_standalone/perf.sh
diff --git a/examples/router_standalone/ping.sh b/examples/deployments/router_standalone/ping.sh
similarity index 100%
rename from examples/router_standalone/ping.sh
rename to examples/deployments/router_standalone/ping.sh
diff --git a/examples/router_standalone/router.py b/examples/deployments/router_standalone/router.py
similarity index 100%
rename from examples/router_standalone/router.py
rename to examples/deployments/router_standalone/router.py
diff --git a/examples/router_standalone/worker.py b/examples/deployments/router_standalone/worker.py
similarity index 100%
rename from examples/router_standalone/worker.py
rename to examples/deployments/router_standalone/worker.py
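The `DistributedRuntime` → `Namespace` → `Component` → `Endpoint` hierarchy that the `distributed_runtime.md` hunk above describes can be sketched with a toy model. This is plain Python, not the real Dynamo bindings: only the call chain (`runtime.namespace("dynamo").component("worker").endpoint("generate")`) mirrors the docs; the class internals here are illustrative assumptions.

```python
# Toy model of the DistributedRuntime -> Namespace -> Component -> Endpoint
# hierarchy from docs/architecture/distributed_runtime.md. NOT the real
# Dynamo API; only the call chain mirrors the documentation.

class Endpoint:
    def __init__(self, component, name):
        self.component = component
        self.name = name

    @property
    def path(self):
        # Fully qualified endpoint path, e.g. "dynamo/worker/generate".
        return f"{self.component.namespace.name}/{self.component.name}/{self.name}"


class Component:
    def __init__(self, namespace, name):
        self.namespace = namespace
        self.name = name
        self._endpoints = {}

    def endpoint(self, name):
        # Worker components register endpoints like "generate" programmatically.
        return self._endpoints.setdefault(name, Endpoint(self, name))


class Namespace:
    def __init__(self, name):
        self.name = name
        self._components = {}

    def component(self, name):
        return self._components.setdefault(name, Component(self, name))


class DistributedRuntime:
    # Each worker process owns its own runtime; workers in the same
    # deployment share a namespace name (e.g. "vllm-agg").
    def __init__(self):
        self._namespaces = {}

    def namespace(self, name):
        return self._namespaces.setdefault(name, Namespace(name))


runtime = DistributedRuntime()
generate = runtime.namespace("dynamo").component("worker").endpoint("generate")
print(generate.path)  # dynamo/worker/generate
```

Repeated lookups return the same object (`setdefault`), which is why a second call to `runtime.namespace("dynamo")` in the same process addresses the same namespace rather than creating a new one.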