HyperBEAM OS is an advanced automation tool for building and running secure virtual machine images with AMD SEV-SNP (Secure Nested Paging) support. The project features a modern, modular architecture with a facade pattern for simplified operations, dependency injection for testability, and comprehensive workflow orchestration.
HyperBEAM OS provides a complete development and deployment environment for secure VMs with SEV-SNP attestation. The tool automates complex workflows including:
- Environment Setup: Initializes build directories, installs dependencies, and configures the host system
- SNP Integration: Downloads, builds, and integrates AMD SNP packages (kernel, OVMF, QEMU)
- VM Image Creation: Builds secure base and guest VM images with dm-verity integrity protection
- Attestation Support: Includes attestation server and digest calculation tools for secure measurements
- Release Management: Packages and distributes complete VM releases
- Development Workflows: Provides streamlined development and testing environments
The project uses modern Python architecture with dependency injection, facade patterns, and modular service layers for maintainability and testability.
HyperBEAM OS follows a layered architecture:
- Build Orchestration: Coordinates complex build workflows
- VM Management: Handles VM lifecycle and configuration
- Service Interfaces: Defines contracts for all services
- Dependency Injection: Manages service dependencies and lifecycle
- Main Facade: Provides complete workflows (setup, development, release)
- Setup Facade: Environment initialization and verification
- Build Facade: Build orchestration and status monitoring
- VM Facade: VM lifecycle management
- Release Facade: Package creation and distribution
- Configuration Service: Centralized configuration management
- Command Execution: Safe command execution with error handling
- Docker Service: Container build and management operations
- File System Service: File and directory operations
- Dependency Service: System dependency management
- CLI Handler: Argument parsing and command dispatch
- Error Handling: Comprehensive error management with user-friendly messages
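The dependency-injection layer described above can be sketched as a minimal container. This is an illustrative sketch only: names like `IConfigService` are hypothetical, and the real implementation lives in `src/core/di_container.py`.

```python
class ServiceContainer:
    """Minimal DI container: interfaces map to factories, resolved as singletons."""

    def __init__(self):
        self._factories = {}
        self._singletons = {}

    def register(self, interface, factory):
        # factory receives the container so services can resolve their own deps
        self._factories[interface] = factory

    def resolve(self, interface):
        if interface not in self._singletons:
            self._singletons[interface] = self._factories[interface](self)
        return self._singletons[interface]


# Hypothetical service contract and implementation for illustration
class IConfigService: ...

class ConfigService(IConfigService):
    def __init__(self, container):
        self.debug = False


container = ServiceContainer()
container.register(IConfigService, ConfigService)
svc = container.resolve(IConfigService)
assert container.resolve(IConfigService) is svc  # singleton lifecycle
```

The singleton behavior shown here is what lets facades share one configuration instance across services.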
- Quick Setup: One-command environment initialization and system build
- Development Workflow: Streamlined build-and-test cycle for development
- Release Workflow: Automated build, test, and packaging for production
- Demo Workflow: Easy demonstration and showcase capabilities
- Automated Dependencies: Installs and configures system dependencies
- Host System Setup: Configures SEV-SNP host environment
- Build Directory Management: Creates and manages build artifacts
- Environment Validation: Verifies system readiness and configuration
- SNP Package Building: Builds kernel, OVMF, and QEMU from source
- Base Image Creation: Creates foundational VM images with initramfs
- Guest Image Building: Builds application-specific guest content
- Integrity Protection: Implements dm-verity for tamper detection
- SEV-SNP Support: Full AMD Secure Nested Paging integration
- Attestation Framework: Built-in attestation server and measurement tools
- Secure Boot: OVMF-based secure boot configuration
- Memory Encryption: Transparent memory encryption support
- Package Creation: Creates distributable release packages
- Remote Downloads: Downloads and installs remote releases
- Version Management: Tracks and manages multiple release versions
- Deployment Ready: Production-ready deployment packages
- QEMU Integration: Advanced QEMU configuration and management
- SSH Access: Built-in SSH connectivity to running VMs
- Port Forwarding: Configurable network access and port mapping
- Resource Management: CPU, memory, and disk resource configuration
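How the resource and port-forwarding settings translate into QEMU flags can be sketched as below. This is not the project's actual `launch.sh`; it is a hedged illustration of how values like `vcpu_count`, `memory_mb`, and `vm_port` typically map to a QEMU command line.

```python
def qemu_args(vcpus=12, memory_mb=204800, ssh_port=2222, hb_port=80):
    """Assemble the resource and user-mode port-forwarding portion of a
    QEMU invocation from config-style values (illustrative only)."""
    hostfwd = f"user,hostfwd=tcp::{ssh_port}-:22,hostfwd=tcp::{hb_port}-:80"
    return [
        "-smp", str(vcpus),                      # vCPU count
        "-m", f"{memory_mb}M",                   # guest memory
        "-netdev", f"{hostfwd},id=net0",         # SSH + HyperBEAM forwarding
        "-device", "virtio-net-pci,netdev=net0", # paravirtual NIC
    ]


args = qemu_args(vcpus=4, memory_mb=2048)
assert args[args.index("-smp") + 1] == "4"
```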
Some BIOS settings are required in order to use SEV-SNP. The exact names differ from machine to machine, but make sure to check the following options:
- Secure Nested Paging: Enable SNP.
- Secure Memory Encryption: Enable SME (not strictly required for running SNP guests).
- SNP Memory Coverage: Must be enabled to reserve space for the Reverse Map Page Table (RMP).
- Minimum SEV non-ES ASID: This value should be greater than 1 to allow SEV-ES and SEV-SNP to be enabled.
Processor Settings
- Virtualization Technology: Enabled
- IOMMU Support: Enabled
- Secure Memory Encryption: Enabled
- Minimum SEV non-ES ASID: 100
- Secure Nested Paging: Enabled
- SNP Memory Coverage: Enabled
- Transparent Secure Memory Encryption: Disabled
System Security
- TPM Security: On
- TPM Hierarchy: Enabled
TPM Advanced Settings
- TPM PPI Bypass Provision: Disabled
- TPM PPI Bypass Clear: Disabled
- TPM2 Algorithm Selection: SHA256
After configuring your BIOS settings, verify that your system is properly configured for SEV-SNP operations:
Confirm you're running the SNP-enabled kernel:
uname -r
Expected Output
6.9.0-rc7-snp-host-05b10142ac6a
Verify that SEV capabilities are available in your CPU:
grep -w sev /proc/cpuinfo
Expected Output
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm flush_l1d sme sev sev_es sev_snp
Verify that SEV features are enabled in KVM:
cat /sys/module/kvm_amd/parameters/sev
cat /sys/module/kvm_amd/parameters/sev_es
cat /sys/module/kvm_amd/parameters/sev_snp
Expected Output
Y
Y
Y
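The three checks above can be automated. The sketch below assumes the `kvm_amd` module exposes its parameters as `Y`/`N` (some kernels use `1`/`0`); only the pure parsing helper is exercised here, since the `/sys` paths exist only on a configured host.

```python
from pathlib import Path

PARAMS = ("sev", "sev_es", "sev_snp")


def param_enabled(raw: str) -> bool:
    """Interpret a kvm_amd module parameter value ('Y'/'N' or '1'/'0')."""
    return raw.strip() in {"Y", "1"}


def check_kvm_sev(base="/sys/module/kvm_amd/parameters"):
    """Read all three SEV parameters; call this on the SNP host itself."""
    return {p: param_enabled(Path(base, p).read_text()) for p in PARAMS}


assert param_enabled("Y\n") and not param_enabled("N\n")
```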
Verify TPM 2.0 is detected and accessible:
sudo dmesg | grep -i tpm
sudo ls -l /dev/tpm0
Expected Output
[ 0.000000] efi: ACPI=0x6effe000 ACPI 2.0=0x6effe014 TPMFinalLog=0x6ed9c000 MEMATTR=0x615545a0 SMBIOS=0x69898000 SMBIOS 3.0=0x69896000 MOKvar=0x67bc0000 RNG=0x6ef0d020 TPMEventLog=0x3d2a6020
[ 0.004228] ACPI: SSDT 0x000000006EF22000 000623 (v02 DELL Tpm2Tabl 00001000 INTL 20210331)
[ 0.004230] ACPI: TPM2 0x000000006EF21000 00004C (v04 DELL PE_SC3 00000002 DELL 00000001)
[ 0.004258] ACPI: Reserving TPM2 table memory at [mem 0x6ef21000-0x6ef2104b]
[ 5.207945] tpm_tis MSFT0101:00: 2.0 TPM (device-id 0xFC, rev-id 1)
crw-rw---- 1 tss root 10, 224 Jul 23 16:29 /dev/tpm0
Test TPM operations to ensure proper communication:
# Install TPM tools if not present
sudo apt-get install tpm2-tools
# Test TPM functionality
sudo tpm2_readclock
Expected Output
time: 9476751982
clock_info:
clock: 2675690062
reset_count: 71
restart_count: 0
safe: yes
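If you script this check, the `tpm2_readclock` output can be parsed into a dictionary. The sketch below is a simple line-oriented parser, shown against the sample output above; it is not part of HyperBEAM OS itself.

```python
def parse_readclock(output: str) -> dict:
    """Parse tpm2_readclock's 'key: value' lines into a flat dict."""
    fields = {}
    for line in output.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if not value:  # section header such as "clock_info:"
            continue
        fields[key] = int(value) if value.isdigit() else value
    return fields


sample = """\
time: 9476751982
clock_info:
  clock: 2675690062
  reset_count: 71
  restart_count: 0
  safe: yes
"""
info = parse_readclock(sample)
assert info["reset_count"] == 71 and info["safe"] == "yes"
```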
Check that SEV-SNP is properly enabled in the kernel:
sudo dmesg | grep -i 'sev\|snp'
sudo ls /sys/module/kvm_amd/parameters | grep sev
Expected Output
[ 0.000000] Linux version 6.9.0-rc7-snp-host-05b10142ac6a (root@8a011408bee6) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #2 SMP Thu May 30 18:35:46 UTC 2024
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.9.0-rc7-snp-host-05b10142ac6a root=UUID=eb1d7853-1fec-4bc0-b7a3-5d987b6d0119 ro serial console=ttyS1,115200n8 modprobe.blacklist=bnxt_re modprobe.blacklist=rndis_host
[ 0.000000] SEV-SNP: RMP table physical range [0x000000601d200000 - 0x000000607dafffff]
[ 0.003786] SEV-SNP: Reserving start/end of RMP table on a 2MB boundary [0x000000607da00000]
[ 0.240623] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.9.0-rc7-snp-host-05b10142ac6a root=UUID=eb1d7853-1fec-4bc0-b7a3-5d987b6d0119 ro serial console=ttyS1,115200n8 modprobe.blacklist=bnxt_re modprobe.blacklist=rndis_host
[ 0.240689] Unknown kernel command line parameters "serial BOOT_IMAGE=/boot/vmlinuz-6.9.0-rc7-snp-host-05b10142ac6a", will be passed to user space.
[ 4.246111] AMD-Vi: IOMMU SNP support enabled.
[ 4.630436] AMD-Vi: Extended features (0xa5bf7320a2294aee, 0x1d): PPR X2APIC NX [5] IA GA PC GA_vAPIC SNP
[ 4.651376] AMD-Vi: Force to disable Virtual APIC due to SNP
[ 5.900355] BOOT_IMAGE=/boot/vmlinuz-6.9.0-rc7-snp-host-05b10142ac6a
[ 6.404977] usb usb1: Manufacturer: Linux 6.9.0-rc7-snp-host-05b10142ac6a xhci-hcd
[ 6.490104] usb usb2: Manufacturer: Linux 6.9.0-rc7-snp-host-05b10142ac6a xhci-hcd
[ 6.505333] usb usb3: Manufacturer: Linux 6.9.0-rc7-snp-host-05b10142ac6a xhci-hcd
[ 6.520267] usb usb4: Manufacturer: Linux 6.9.0-rc7-snp-host-05b10142ac6a xhci-hcd
[ 9.628320] ccp 0000:01:00.5: sev enabled
[ 13.720990] ccp 0000:01:00.5: SEV API:1.55 build:38
[ 13.721003] ccp 0000:01:00.5: SEV-SNP API:1.55 build:38
[ 13.734549] kvm_amd: SEV enabled (ASIDs 100 - 1006)
[ 13.734552] kvm_amd: SEV-ES enabled (ASIDs 1 - 99)
[ 13.734555] kvm_amd: SEV-SNP enabled (ASIDs 1 - 99)
sev
sev_es
sev_snp
Use the snphost tool for comprehensive validation:
sudo modprobe msr
sudo snphost ok
Expected Output
[ PASS ] - AMD CPU
[ PASS ] - Microcode support
[ PASS ] - Secure Memory Encryption (SME)
[ PASS ] - SME: Enabled in MSR
[ PASS ] - Secure Encrypted Virtualization (SEV)
[ PASS ] - SEV firmware version: 1.55
[ PASS ] - Encrypted State (SEV-ES)
[ PASS ] - SEV-ES initialized
[ PASS ] - SEV initialized: Initialized, no guests running
[ PASS ] - Secure Nested Paging (SEV-SNP)
[ PASS ] - VM Permission Levels
[ PASS ] - Number of VMPLs: 4
[ PASS ] - SNP: Enabled in MSR
[ PASS ] - SNP initialized
[ PASS ] - RMP table addresses: 0x601d200000 - 0x607dafffff
[ PASS ] - RMP table initialized
[ PASS ] - Alias check: Completed since last system update, no aliasing addresses
[ PASS ] - Physical address bit reduction: 6
[ PASS ] - C-bit location: 51
[ PASS ] - Number of encrypted guests supported simultaneously: 1006
[ PASS ] - Minimum ASID value for SEV-enabled, SEV-ES disabled guest: 100
[ PASS ] - /dev/sev readable
[ PASS ] - /dev/sev writable
[ PASS ] - Page flush MSR: DISABLED
[ PASS ] - KVM supported: API version: 12
[ PASS ] - SEV enabled in KVM
[ PASS ] - SEV-ES enabled in KVM
[ PASS ] - SEV-SNP enabled in KVM
[ PASS ] - Memlock resource limit: Soft: 50438688768 | Hard: 50438688768
[ PASS ] - Comparing TCB values: TCB versions match
Platform TCB version: TCB Version:
Microcode: 72
SNP: 22
TEE: 0
Boot Loader: 9
FMC: None
Reported TCB version: TCB Version:
Microcode: 72
SNP: 22
TEE: 0
Boot Loader: 9
FMC: None
All checks should return [ PASS ] status for a properly configured environment.
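The final check snphost performs, comparing platform and reported TCB versions, amounts to a field-by-field comparison. A minimal sketch of that comparison, using the component values from the output above:

```python
def tcb_mismatches(platform: dict, reported: dict) -> list:
    """Return the names of TCB components whose values differ."""
    return [k for k in platform if platform[k] != reported.get(k)]


platform = {"Microcode": 72, "SNP": 22, "TEE": 0, "Boot Loader": 9}
reported = dict(platform)

# An empty list corresponds to snphost's "TCB versions match" PASS
assert tcb_mismatches(platform, reported) == []
```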
HyperBEAM OS uses a centralized configuration system defined in `config.py` that provides type-safe, structured configuration management.
The configuration is organized into several key areas:
- Directory Configuration (`DirectoryConfig`): Defines all build and output directories
- Build Configuration (`BuildConfig`): Controls build options, branches, and feature flags
- VM Configuration (`VMConfig`): Virtual machine settings including CPU, memory, and security options
- Network Configuration (`NetworkConfig`): VM networking and SSH connectivity settings
# Branch Configuration
hb_branch = "edge" # HyperBEAM branch for builds
ao_branch = "tillathehun0/cu-experimental" # AO branch for local CU (DEPRECATED)
# Virtualization and Debug Features
debug = False # Enable SSH access for development (False = black box VM)
enable_kvm = True # Enable KVM acceleration
enable_tpm = True # Enable TPM 2.0 support
enable_gpu = False # Enable GPU passthrough support
# Image Configuration
base_image = "base.qcow2" # Base VM image filename
guest_image = "guest.qcow2" # Guest VM image filename
# External Repositories
gpu_admin_tools_repo = "https://github.com/permaweb/gpu-admin-tools" # GPU tools repository
# Hardware Configuration
host_cpu_family = "Genoa" # Host CPU family (AMD Genoa)
vcpu_count = 12 # Number of virtual CPUs
memory_mb = 204800 # Memory allocation in MB (~200GB)
# SEV-SNP Security Configuration
guest_features = "0x1" # Guest feature flags
platform_info = "0x3" # Platform information
guest_policy = "0x30000" # SEV-SNP guest policy
family_id = "00000000000000000000000000000000" # 32-char family identifier
image_id = "00000000000000000000000000000000" # 32-char image identifier
# Kernel Configuration
cmdline = "console=ttyS0 earlyprintk=serial root=/dev/sda" # Kernel command line
# VM Network Configuration
vm_host = "localhost" # VM host address for SSH
vm_port = 2222 # SSH port forwarding
vm_user = "ubuntu" # Default SSH username
hb_port = 80 # HyperBEAM service port
qemu_port = 4444 # QEMU management port
# TCB Version Components
bootloader = 9 # Bootloader TCB version
tee = 0 # TEE TCB version
snp = 22 # SNP TCB version
microcode = 72 # Microcode TCB version
reserved = [0, 0, 0, 0] # Reserved TCB fields
# QEMU Configuration
launch_script = "./launch.sh" # QEMU launch script path
snp_params = "-sev-snp" # SNP-specific QEMU parameters
# SNP Release Configuration
release_url = "https://github.com/permaweb/hb-os/releases/download/v1.0.0/snp-release.tar.gz"
# Build Dependencies (automatically installed during init)
dependencies = [
"build-essential", "git", "python3", "python3-venv", "ninja-build",
"libglib2.0-dev", "uuid-dev", "iasl", "nasm", "python-is-python3",
"flex", "bison", "openssl", "libssl-dev", "libelf-dev", "bc",
"libncurses-dev", "gawk", "dkms", "libudev-dev", "libpci-dev",
"libiberty-dev", "autoconf", "llvm", "cpio", "zstd", "debhelper",
"rsync", "wget", "python3-tomli"
]
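The `guest_policy` value above is a bit field. The sketch below decodes it following the bit layout published in AMD's SEV-SNP firmware ABI specification (the field names are mine, and you should confirm the layout against your firmware version before relying on it):

```python
def decode_guest_policy(policy: int) -> dict:
    """Split an SEV-SNP guest-policy value into its bit fields
    (layout per the AMD SEV-SNP ABI spec; verify for your firmware)."""
    return {
        "abi_minor": policy & 0xFF,                     # bits 7:0
        "abi_major": (policy >> 8) & 0xFF,              # bits 15:8
        "smt_allowed": bool(policy >> 16 & 1),          # bit 16
        "reserved_must_be_1": bool(policy >> 17 & 1),   # bit 17
        "migrate_ma": bool(policy >> 18 & 1),           # bit 18
        "debug_allowed": bool(policy >> 19 & 1),        # bit 19
    }


flags = decode_guest_policy(0x30000)  # the default shown above
assert flags["smt_allowed"] and not flags["debug_allowed"]
```

With this reading, `0x30000` permits SMT and sets the mandatory reserved bit, while leaving debug disabled.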
You can modify `config.py` to customize the build and runtime behavior:
- Development Branches: Change `hb_branch` for different HyperBEAM releases
- VM Resources: Adjust `vcpu_count` and `memory_mb` based on your hardware
- Debug Mode: Set `debug = True` to enable SSH access for development (False creates a black box VM)
- GPU Support: Enable `enable_gpu = True` for GPU passthrough capabilities
- Network Ports: Modify `vm_port` and other ports to avoid conflicts
- SEV-SNP Security: Update `guest_policy`, `family_id`, and `image_id` for production deployments
- TCB Versions: Modify TCB component versions to match your platform requirements
- Kernel Command Line: Customize `cmdline` for specific kernel parameters
- QEMU Parameters: Adjust the QEMU launch script and SNP parameters
- SNP Dependencies: Add or remove build dependencies for custom environments
- File Paths: The directory structure is managed automatically but can be customized if needed
- Debug Mode: When `debug = False` (production), the VM runs as a completely isolated black box with no external access points, providing maximum security isolation. When `debug = True` (development), SSH access is enabled for debugging and development, which reduces security isolation but allows development workflows.
Some settings can be overridden at runtime without modifying `config.py`:
- Branch selection via CLI arguments: `--hb-branch` and `--ao-branch`
- Resource allocation through facade system parameters
- Debug mode via environment variables or facade configuration
The configuration system includes:
- Type Safety: Uses Python dataclasses for compile-time validation
- Path Validation: Automatically creates and validates directory structures
- Default Values: Provides sensible defaults for all options
- Environment Integration: Seamlessly integrates with the facade system
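The dataclass-based approach can be illustrated with a small sketch. This is not the project's actual `VMConfig` definition, only an example of the pattern (defaults taken from the values shown earlier, validation logic hypothetical):

```python
from dataclasses import dataclass


@dataclass
class VMConfig:
    """Illustrative shape of a type-safe config section."""
    vcpu_count: int = 12
    memory_mb: int = 204800
    debug: bool = False
    enable_tpm: bool = True

    def __post_init__(self):
        # Example validation: reject obviously unusable resource values
        if self.vcpu_count < 1 or self.memory_mb < 512:
            raise ValueError("VM resources below minimum")


cfg = VMConfig(debug=True)
assert cfg.enable_tpm and cfg.vcpu_count == 12
```

Because fields are typed and validated at construction, a bad value fails immediately rather than deep inside a build step.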
For detailed configuration options, see the `config.py` file and the `ConfigurationService` class in `src/services/configuration_service.py`.
HyperBEAM OS provides a command-line interface for all operations:
./run <command> [options]
./run help # Display detailed help information
- `init` - Initialize the complete build environment: `./run init [--snp-release PATH]`
  - Creates build directories and installs dependencies
  - Downloads and extracts SNP release packages
  - Builds attestation server and digest calculator tools
  - Configures host system for SEV-SNP operations
- `setup_host` - Configure the host system for SEV-SNP: `./run setup_host`
- `setup_gpu` - Configure GPU passthrough for confidential computing: `./run setup_gpu`
- `build_snp_release` - Build SNP packages (kernel, OVMF, QEMU) from source: `./run build_snp_release`
- `build_base` - Build the base VM image with kernel and initramfs: `./run build_base`
- `build_guest` - Build the guest image with application content: `./run build_guest [--hb-branch BRANCH] [--ao-branch BRANCH]`
- `start` - Start the VM with the current configuration: `./run start [--data-disk PATH]`
- `start_release` - Start the VM using packaged release files: `./run start_release [--data-disk PATH]`
- `ssh` - Connect to the running VM via SSH: `./run ssh`
- `package_release` - Create a distributable release package: `./run package_release`
- `download_release` - Download and install a remote release: `./run download_release --url URL`
- `clean` - Clean up build artifacts and temporary files: `./run clean`
- `help` - Display comprehensive help information: `./run help`
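The dispatch step behind `./run <command>` can be sketched as a simple lookup table. This is an illustration of the pattern, not the actual `cli_handler.py` implementation (the handler functions here are placeholders):

```python
def cmd_init(args):
    # Placeholder: the real handler calls into the setup facade
    return f"init {' '.join(args)}".strip()


def cmd_clean(args):
    return "clean"


COMMANDS = {"init": cmd_init, "clean": cmd_clean}


def dispatch(argv):
    """Route argv[0] to its handler, with a friendly fallback."""
    if not argv or argv[0] not in COMMANDS:
        return "usage: ./run <command> [options]"
    return COMMANDS[argv[0]](argv[1:])


assert dispatch(["init", "--snp-release", "pkg.tar.gz"]) == "init --snp-release pkg.tar.gz"
```

A table like this keeps adding a new command to a one-line registration rather than a growing if/elif chain.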
# Complete environment setup and build
./run init
./run build_base
./run build_guest
./run start
# Prerequisites: you have already run ./run init and ./run build_base
# Set debug = True (config.py)
# Development cycle
./run build_guest --hb-branch feature-branch
./run start --data-disk /path/to/dev-disk.img # (Optional: --data-disk for non-volatile storage)
ssh -p 2222 root@localhost # Connect to test your changes (password: hb)
# Build complete system for release
./run build_snp_release # If building from source (run ./run init again with the new snp-release build)
./run build_base
./run build_guest --hb-branch release-v1.0
./run package_release
# Test the release
./run start_release
# Download and run a release
./run download_release --url https://releases.hyperbeam.com/v1.0.0/release.tar.gz
./run start_release --data-disk /mnt/storage.img # (Optional: --data-disk for non-volatile storage)
# Clean up build artifacts
./run clean
# Verify environment setup
./run init --help # Check available options
For programmatic usage and advanced workflows, HyperBEAM OS provides a facade system:
from src.core.service_factory import get_service_container
from src.core.facade_interfaces import IHyperBeamFacade
# Get the main facade
container = get_service_container()
hyperbeam = container.resolve(IHyperBeamFacade)
# Complete workflows
hyperbeam.quick_setup() # Full environment setup
hyperbeam.development_workflow() # Build and start for development
release_path = hyperbeam.release_workflow() # Build and package for release
hyperbeam.demo_workflow() # Run demonstration
# System status and monitoring
status = hyperbeam.get_system_status()
hyperbeam.print_status_report()
from src.core.facade_interfaces import IBuildFacade, IVMFacade
build_facade = container.resolve(IBuildFacade)
vm_facade = container.resolve(IVMFacade)
# Targeted operations
build_facade.build_guest_image(hb_branch="experimental")
vm_facade.create_and_start_vm(data_disk="/path/to/disk.img")
See `examples/FACADE_GUIDE.md` for comprehensive facade documentation.
hb-os/
├── 📁 src/                              # Main source code
│   ├── 📁 cli/                          # Command Line Interface
│   │   └── cli_handler.py               # Argument parsing and command dispatch
│   ├── 📁 core/                         # Core business logic
│   │   ├── build_orchestrator.py        # Build workflow coordination
│   │   ├── build_content.py             # Guest content building
│   │   ├── build_initramfs.py           # Initramfs creation
│   │   ├── build_snp_packages.py        # SNP package building
│   │   ├── create_new_vm.py             # VM image creation
│   │   ├── create_vm_config.py          # VM configuration generation
│   │   ├── di_container.py              # Dependency injection container
│   │   ├── facade_interfaces.py         # Facade pattern interfaces
│   │   ├── service_factory.py           # Service registration and creation
│   │   ├── service_interfaces.py        # Service contracts
│   │   ├── setup_guest.py               # Guest setup and dm-verity
│   │   ├── vm_manager.py                # VM lifecycle management
│   │   └── initialization.py            # Environment initialization
│   ├── 📁 facades/                      # High-level workflow facades
│   │   ├── build_facade.py              # Build operations facade
│   │   ├── hyperbeam_facade.py          # Main orchestration facade
│   │   ├── release_facade.py            # Release management facade
│   │   ├── setup_facade.py              # Environment setup facade
│   │   └── vm_facade.py                 # VM management facade
│   ├── 📁 services/                     # Low-level services
│   │   ├── command_execution_service.py # Command execution
│   │   ├── configuration_service.py     # Configuration management
│   │   ├── dependencies.py              # Dependency installation
│   │   ├── docker_service.py            # Docker operations
│   │   ├── filesystem_service.py        # File system operations
│   │   ├── release_manager.py           # Release packaging
│   │   └── snp_component_service.py     # SNP component management
│   └── 📁 utils/                        # Utility functions and helpers
│       └── utils.py                     # Common utilities and error handling
├── 📁 config/                           # Configuration management
│   └── config.py                        # Type-safe configuration classes
├── 📁 examples/                         # Usage examples and documentation
│   ├── FACADE_GUIDE.md                  # Comprehensive facade usage guide
│   ├── example_facade_usage.py          # Facade system examples
│   ├── test_release_package.py          # Release testing script
│   └── vm-config-template.toml          # VM configuration template
├── 📁 resources/                        # Build resources and templates
│   ├── content.Dockerfile               # Guest content container definition
│   ├── initramfs.Dockerfile             # Initramfs build container
│   ├── init.sh                          # VM initialization script
│   ├── hyperbeam.service                # HyperBEAM systemd service
│   ├── cu.service                       # Compute unit service
│   └── template-user-data               # Cloud-init template
├── 📁 scripts/                          # Build and setup scripts
│   ├── base_setup.sh                    # Base system setup
│   ├── gpu_passthrough.sh               # GPU passthrough configuration
│   ├── init.sh                          # Environment initialization
│   └── install.sh                       # Installation script
├── 📁 tools/                            # Attestation and security tools
│   ├── 📁 attestation_server/           # Rust-based attestation server
│   │   ├── Cargo.toml                   # Rust project configuration
│   │   └── src/                         # Attestation server source
│   └── 📁 digest_calc/                  # Measurement digest calculator
│       ├── Cargo.toml                   # Rust project configuration
│       └── src/                         # Digest calculator source
├── run                                  # Main entry point script
├── launch.sh                            # QEMU VM launcher
├── config.py                            # Global configuration
├── LICENSE                              # License information
└── README.md                            # This documentation
- Handles argument parsing, command validation, and dispatch
- Provides user-friendly error messages and help documentation
- Entry point for all CLI operations
- Contains the main business logic and workflow orchestration
- Implements dependency injection for testable, modular code
- Manages complex build processes and VM lifecycle operations
- Provides simplified APIs for complex multi-step operations
- Implements the facade pattern for better usability
- Orchestrates services to provide complete workflows
- Low-level services for system operations (file I/O, commands, Docker)
- Encapsulates external dependencies behind clean interfaces
- Provides consistent error handling and logging
- Attestation Server: Rust-based server for SEV-SNP attestation
- Digest Calculator: Computes measurement digests for integrity verification
- Built using Cargo and integrated into the Python build system
- Container definitions, initialization scripts, and system configurations
- Templates and configuration files for VM and service setup
- Shell scripts for system-level operations and environment setup