Hypervisor
Oxide’s hardware virtual machine solution is built on bhyve, an open source Virtual Machine Monitor (VMM) on illumos. The underlying technologies for the software stack also include:
- Helios: Oxide’s illumos distribution, serving as the operating system for the host CPU in server sleds
- Propolis: Oxide’s homegrown Rust-based userspace for bhyve
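To make the division of labor concrete, the sketch below models the general userspace-VMM pattern Propolis follows: the kernel VMM (bhyve) executes guest vCPUs, while a userspace process maps guest memory, emulates devices, and services VM exits. The trait, types, and function names here are illustrative stand-ins, not the real Propolis or bhyve interfaces.

```rust
// Illustrative stand-ins only; not the real Propolis or bhyve interfaces.

/// Reasons control returns from the kernel VMM to userspace.
enum VmExit {
    PioRead { port: u16 },              // guest read an I/O port
    MmioWrite { addr: u64, data: u64 }, // guest wrote emulated MMIO
    Halted,
}

/// The slice of the kernel VMM that a userspace VMM drives (hypothetical).
trait KernelVmm {
    fn map_guest_memory(&mut self, gpa: u64, len: usize);
    fn run_vcpu(&mut self, vcpu: u32) -> VmExit; // blocks until the next exit
}

/// Device emulation lives in userspace: handle each exit, then resume.
fn run_loop(vmm: &mut dyn KernelVmm, vcpu: u32) {
    loop {
        match vmm.run_vcpu(vcpu) {
            VmExit::PioRead { port } => {
                // e.g., emulate a serial UART register read
                let _ = port;
            }
            VmExit::MmioWrite { addr, data } => {
                // e.g., a VirtIO queue notification from the guest
                let _ = (addr, data);
            }
            VmExit::Halted => break,
        }
    }
}

/// A no-op stand-in so the sketch runs end to end.
struct DummyVmm;
impl KernelVmm for DummyVmm {
    fn map_guest_memory(&mut self, _gpa: u64, _len: usize) {}
    fn run_vcpu(&mut self, _vcpu: u32) -> VmExit {
        VmExit::Halted
    }
}

fn main() {
    let mut vmm = DummyVmm;
    vmm.map_guest_memory(0, 1 << 30); // pretend to map 1 GiB of guest RAM
    run_loop(&mut vmm, 0);
}
```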
Guest Workload Support
Oxide supports guest images that meet the following criteria:
- Guest OS: major Linux distributions and Windows
- Boot mode: OS images enabled for UEFI booting
- Device emulation: x86 images with VirtIO driver support
Network booting via the PXE protocol is also supported.
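As a concrete illustration of these criteria, the sketch below runs a pre-flight check over a hypothetical image descriptor. The ImageSpec type and its fields are invented for this example and are not part of any Oxide API.

```rust
// A toy pre-flight check mirroring the criteria above; ImageSpec is
// a hypothetical descriptor invented for illustration.

#[derive(Debug)]
struct ImageSpec {
    os: &'static str,        // e.g. "ubuntu-22.04", "windows-server-2022"
    uefi_boot: bool,         // image must boot via UEFI, not legacy BIOS
    arch: &'static str,      // device emulation targets x86
    has_virtio_drivers: bool,
}

fn check_image(img: &ImageSpec) -> Result<(), String> {
    if !img.uefi_boot {
        return Err(format!("{}: legacy BIOS boot is not supported; enable UEFI", img.os));
    }
    if img.arch != "x86_64" {
        return Err(format!("{}: only x86 images are supported", img.os));
    }
    if !img.has_virtio_drivers {
        return Err(format!("{}: VirtIO drivers are required for emulated devices", img.os));
    }
    Ok(())
}

fn main() {
    let img = ImageSpec {
        os: "ubuntu-22.04",
        uefi_boot: true,
        arch: "x86_64",
        has_virtio_drivers: true,
    };
    println!("{:?}", check_image(&img));
}
```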
Guest Facilities
The guest facilities available include standard remote access mechanisms such as SSH for Linux and Remote Desktop Protocol (RDP) for Windows. Serial console access is also available, allowing direct interaction with instances.
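The serial console also lends itself to programmatic use. The sketch below tails a console stream over a WebSocket connection using the tungstenite crate; the endpoint URL, its path, and the absence of authentication are hypothetical simplifications for illustration, not the documented interface.

```rust
// A minimal sketch of tailing an instance's serial console, assuming a
// WebSocket-style streaming endpoint. The URL is a hypothetical placeholder.
// Requires the tungstenite crate (0.20+) with a TLS feature enabled.
use tungstenite::{connect, Message};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "wss://rack.example.com/v1/instances/my-vm/serial-console/stream";
    let (mut socket, _response) = connect(url)?;
    loop {
        // Console output arrives as frames; write binary data straight through.
        match socket.read()? {
            Message::Binary(bytes) => print!("{}", String::from_utf8_lossy(&bytes)),
            Message::Close(_) => break,
            _ => {}
        }
    }
    Ok(())
}
```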
Storage
Physical Layer
Each server sled in the Oxide rack includes SSDs in two form factors:
- U.2 or U.3 devices (10x): store all user data and internal data (e.g., metadata, control plane data, software images)
- M.2 devices (2x): store a limited amount of internal data (e.g., boot images, memory dumps)
The physical disks form a common pool of resources distributed across the rack that backs the virtual block storage service.
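To give a feel for how that pooling works, here is a toy placement routine for a distributed virtual disk (described under Service Layer below), which picks backends on three distinct sleds drawn from the pool. The PooledDisk type and the placement rule are illustrative assumptions, not the control plane’s actual allocator.

```rust
// A toy allocator over the pooled disks; invented for illustration only.
use std::collections::HashSet;

#[derive(Debug, Clone)]
struct PooledDisk {
    sled_id: u32,
    free_bytes: u64,
}

/// Pick one disk on each of three distinct sleds with enough free space.
fn place_replicas(pool: &[PooledDisk], size: u64) -> Option<Vec<PooledDisk>> {
    let mut used_sleds = HashSet::new();
    let mut picks = Vec::new();
    for disk in pool {
        // insert() returns false if this sled was already chosen.
        if disk.free_bytes >= size && used_sleds.insert(disk.sled_id) {
            picks.push(disk.clone());
            if picks.len() == 3 {
                return Some(picks);
            }
        }
    }
    None // not enough distinct sleds with capacity
}

fn main() {
    let pool = vec![
        PooledDisk { sled_id: 1, free_bytes: 1 << 40 },
        PooledDisk { sled_id: 1, free_bytes: 1 << 40 }, // same sled: skipped
        PooledDisk { sled_id: 2, free_bytes: 1 << 40 },
        PooledDisk { sled_id: 3, free_bytes: 1 << 40 },
    ];
    println!("{:?}", place_replicas(&pool, 64 << 30));
}
```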
Service Layer
Virtual disks can be provisioned and attached to instances for guest system read/write. The block storage service comprises two components:
- disk downstairs: reside with the target disk storage backend and provide access to it via the network for those upstairs
- disk upstairs: reside with the server sled using the storage, making requests across the network to some number of downstairs replicas
Disk upstairs and downstairs communicate over a network protocol for both block data and related metadata across all the server sleds within the rack.
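A simplified model of this split is sketched below: the upstairs fans each guest write out to its downstairs replicas over the network and acknowledges it once enough replicas have committed. All types here are invented for illustration, and the acknowledgment rule (any one replica, with offline replicas brought back in sync by repair) is a simplifying assumption chosen to match the availability behavior in the table below, not the actual on-wire protocol.

```rust
// An invented model of the upstairs/downstairs split; not the real protocol.

/// One replica of a virtual disk, living on some sled's physical storage.
struct Downstairs {
    online: bool,
}

impl Downstairs {
    /// Persist one block; a real downstairs writes to its ZFS-backed volume.
    fn write(&mut self, _lba: u64, _data: &[u8]) -> bool {
        self.online
    }
}

/// The guest-facing side: fans I/O out to its replicas over the network.
struct Upstairs {
    replicas: Vec<Downstairs>,
}

impl Upstairs {
    /// Acknowledge the guest once at least one replica has the write;
    /// offline replicas are assumed to be resynced later by repair.
    fn guest_write(&mut self, lba: u64, data: &[u8]) -> Result<usize, &'static str> {
        let acks = self
            .replicas
            .iter_mut()
            .map(|r| r.write(lba, data))
            .filter(|&ok| ok)
            .count();
        if acks > 0 {
            Ok(acks)
        } else {
            Err("no replica acknowledged the write")
        }
    }
}

fn main() {
    let mut up = Upstairs {
        replicas: vec![
            Downstairs { online: true },
            Downstairs { online: true },
            Downstairs { online: false }, // one backend offline: I/O continues
        ],
    };
    println!("{:?}", up.guest_write(0, &[0u8; 512]));
}
```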
Oxide offers two types of disk backends: distributed disks and local disks. The two differ in their performance, availability, and durability characteristics:
| Characteristic | Distributed disks | Local disks |
|---|---|---|
| Data redundancy | 3 backend copies per virtual disk | 1 backend (no redundancy) |
| Backend | Standard ZFS volumes | Lightweight ZFS volumes |
| Locality | Backends may be located on any three sleds in the rack | Backend must be on the same sled as the attaching instance |
| Durability | No data loss as long as two of the three backends remain intact | Total data loss if the only backend is lost or corrupted |
| Availability | Guest read/write not interrupted when up to two backends are offline | Guest read/write prohibited when the only backend is offline |
| Other features | Compression, encryption, snapshotting, repair | Compression, encryption |
| Workload suitability | Persistent data with high reliability guarantees | Temporary data with very high IOPS requirements |
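Read as code, the availability rows translate into a small check. The DiskBackend enum and the thresholds below are modeled directly from the table above and nothing else.

```rust
// A toy restatement of the availability rows in the table above.

enum DiskBackend {
    Distributed, // three backend copies spread across sleds
    Local,       // a single backend on the instance's own sled
}

/// Whether guest read/write can proceed with `offline` backends unreachable.
fn io_available(backend: &DiskBackend, offline: usize) -> bool {
    match backend {
        // Guest I/O is uninterrupted with up to two of three backends offline.
        DiskBackend::Distributed => offline <= 2,
        // Any outage of the single backend stops guest I/O.
        DiskBackend::Local => offline == 0,
    }
}

fn main() {
    assert!(io_available(&DiskBackend::Distributed, 2));
    assert!(!io_available(&DiskBackend::Distributed, 3));
    assert!(!io_available(&DiskBackend::Local, 1));
    println!("checks match the table");
}
```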