An early-stage, vendor-agnostic Go SDK for managing clusterable, GPU-accelerated compute across cloud providers.
- Define a clean, minimal interface for cloud compute primitives: `Instance`, `Storage`, `FirewallRule`, `InstanceType`, `Location`
- Enable clusterable GPU workloads across multiple providers, with shared semantics and L3 network guarantees. (WIP)
All cloud integrations must follow our Security Requirements, which define:
- Network Security: Default "deny all inbound, allow all outbound" model
- Cluster Security: Internal instance communication with external isolation
- Data Protection: Encryption requirements for data at rest and in transit
- Implementation Guidelines: Security checklists for cloud provider integrations
See SECURITY.md for complete security specifications and implementation requirements.
- Version: `v1` — internal interface, open-sourced
  - Current scope: core types + interfaces + tests
  - Cloud provider implementations are internal-only for now
- `v2` will be shaped by feedback and contributions from the community
- Operating System: Currently supports Ubuntu 22 only
- Architecture: Designed for GPU-accelerated compute workloads
- Access Method: Requires SSH server and SSH key-based authentication
- System Requirements: Requires systemd to be running and accessible
- NVIDIA Cloud Partners (NCPs) looking to offer Brev-compatible GPU compute
- Infra teams building cluster-aware systems or abstractions on raw compute
- Cloud providers interested in contributing to a shared interface for accelerated compute
- Compute brokers & marketplaces (aggregators) offering multi-cloud compute
- V1 Design Notes: Design decisions, known quirks, and AWS-inspired patterns in the v1 API
- Architecture Overview: How the Cloud SDK fits into Brev's overall architecture
- Security Requirements: Security specifications and implementation requirements
- How to Add a Provider: Step-by-step guide to implement a new cloud provider using the Lambda Labs example
This is a foundation — we're opening it early to learn with the community and shape a clean, composable v2. If you're building GPU compute infrastructure or tooling, we'd love your input.