September 9th, 2025

vCluster

Platform v4.4 and vCluster 0.28 - Auto Nodes, Auto Snapshots and More

vCluster Platform v4.4 - Auto Nodes

In our last release, we announced Private Nodes for virtual clusters, marking the most significant shift in how vCluster operates since its inception in 2021. With Private Nodes, the control plane is hosted on a shared Kubernetes cluster, while worker nodes can be joined directly to a virtual cluster, completely isolated from other tenants and without being part of the host cluster. These nodes exist solely within the virtual cluster, enabling stronger separation and more flexible architecture.

Now, we’re taking Private Nodes to the next level with Auto Nodes. Powered by the popular open-source project Karpenter and baked directly into the vCluster binary, Auto Nodes makes it easier than ever to dynamically provision nodes on demand. We’ve simplified configuration, removed cloud-specific limitations, and enabled support for any environment—whether you’re running in public clouds, on-premises, bare metal, or across multiple clouds. Auto Nodes brings the power of Karpenter to everyone, everywhere.

It’s based on the same open-source engine that powers EKS Auto Mode, but without the EKS-only limitations—bringing dynamic, on-demand provisioning to any Kubernetes environment, across any cloud or on-prem setup. Auto Nodes is easy to get started with, yet highly customizable for advanced use cases. If you’ve wanted to use Karpenter outside of AWS or simplify your setup without giving up power, Auto Nodes delivers.

Auto Nodes is live today—here’s what you can already do:

  1. Set up a Node Provider in vCluster Platform: We support Terraform/OpenTofu, KubeVirt, and NVIDIA BCM.

  2. Define Node Types (use Terraform/OpenTofu Node Provider for any infra that has a Terraform provider, including public clouds but also private cloud environments such as OpenStack, MAAS, etc.) or use our Terraform-based quickstart templates for AWS, GCP and Azure.

  3. Add autoNodes config section to a vcluster.yaml (example snippet below).

  4. See Karpenter in action as it dynamically creates NodeClaims and the vCluster Platform fulfills them using the Node Types available to achieve perfectly right-sized clusters.

  5. Bonus: Auto Nodes will also handle environment details beyond the nodes themselves, including setting up LoadBalancers and VPCs as needed. More details in our sample repos and the docs.

# vcluster.yaml with Auto Nodes configured
privateNodes:
  enabled: true
  autoNodes:
    dynamic:
      - name: gcp-nodes
        provider: gcp
        requirements:
          - property: instance-type
            operator: In
            values: ["e2-standard-4", "e2-standard-8"]
      - name: aws-nodes
        provider: aws
        requirements:
          - property: instance-type
            operator: In
            values: ["t3.medium", "t3.large"]
      - name: private-cloud-openstack-nodes
        provider: openstack
        requirements:
          - property: os
            value: ubuntu
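When pods can’t be scheduled, Karpenter surfaces its provisioning decision as a NodeClaim, which the vCluster Platform then fulfills using the available Node Types. The object below is an illustrative sketch only: the name is hypothetical, and in practice NodeClaims are created by Karpenter itself rather than by hand.

```yaml
# Illustrative NodeClaim (hypothetical name); Karpenter creates these
# automatically when pending pods need capacity.
apiVersion: karpenter.sh/v1
kind: NodeClaim
metadata:
  name: gcp-nodes-x7k2p
spec:
  requirements:
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["e2-standard-4"]
```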

Demo

To see Auto Nodes in action with GCP nodes, watch the video below for a quick demo.

Caption: Auto scaling virtual cluster on GCP based on workload demand

Get Started

To harness the full power of Karpenter and Terraform in your environment, we recommend forking our starter repositories and adjusting them to meet your requirements.

Public clouds are a great choice for maximum flexibility and scale, but they aren’t always the perfect fit, especially when it comes to data sovereignty, low-latency local compute, or bare metal use cases.

We’ve developed additional integrations to bring the power of Auto Nodes to your on-prem and bare metal environments:

  • NVIDIA BCM is the management layer for DGX SuperPOD, NVIDIA’s flagship supercomputing offering: a full-stack data center platform that includes industry-leading computing, storage, networking, and software for building on-prem AI factories.

  • KubeVirt enables you to break up large bare metal servers into smaller isolated VMs on demand.

Coming Soon: vCluster Standalone

Auto Nodes marks another milestone in our tenancy model journey, but we’re not done yet. Stay tuned for the next step in tenant isolation: vCluster Standalone.

This deployment model allows you to run vCluster as a binary on a single control plane node, or on multiple control plane nodes for HA, similar to how traditional distributions such as RKE2 or OpenShift are deployed. vCluster is designed to run control planes in containers, but many of our customers face the Day 0 challenge of first having to spin up a host cluster to run the vCluster control plane pods. With vCluster Standalone, we answer our customers’ demand to help them spin up and manage this host cluster with the same tooling and support they receive for their containerized vCluster deployments. Stay tuned for this exciting announcement, planned for October 1.

Other Changes

Notable Improvements

Auto Snapshots

Protect your virtual clusters with automated backups via vCluster Snapshots. Platform-managed snapshots run on your defined schedule, storing virtual cluster state in S3-compatible buckets or OCI registries. Configure custom retention policies to match your backup requirements and restore virtual clusters from snapshots quickly. Check out the documentation to get started.
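To make the moving parts concrete, here is a rough sketch of what a scheduled snapshot configuration could look like. The field names below are assumptions for illustration, not the actual schema, so consult the snapshot documentation for the authoritative configuration.

```yaml
# Hypothetical sketch only — field names are assumptions, not the real schema.
snapshots:
  schedule: "0 2 * * *"    # cron syntax: daily at 02:00
  retention: 7             # keep the last 7 snapshots
  # S3-compatible bucket (placeholder name); OCI registries also work
  storage: "s3://my-backup-bucket/my-vcluster"
```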

In our upcoming releases, we will unveil more capabilities around including persistent storage in these snapshots; for now, they are limited to control plane state (Kubernetes resources).

Require Template Flow

The new Require Template flow improves the tenant self-service experience in the vCluster Platform UI by highlighting available vCluster Templates in projects. Assign icons to templates to make them instantly recognizable.

Read more about projects and self-service virtual clusters in the docs.

Air-Gapped Platform UI

Air-gapped deployments come with a unique set of challenges, from ensuring your clusters are truly airtight, to pre-populating container images, to securely accessing the environment. We’re introducing a new UI setting that gives you fine-grained control over which external URLs the vCluster Platform UI can access. This helps you validate that what you see in the web interface matches what your pods can access.

Read more about locking down the vCluster Platform UI.

vCluster v0.28

In parallel with Platform v4.4, we’ve also launched vCluster v0.28. This release includes support for Auto Nodes and other updates mentioned above, plus additional bug fixes and enhancements.

Other Announcements & Changes

  • The Isolated Control Plane feature, configured with experimental.isolatedControlPlane, was deprecated in v0.27 and has been removed in v0.28.

  • The experimental feature Generic Sync has been removed as of v0.28.

  • The External Secrets Operator integration configuration has been restructured. Custom resources have migrated to either the fromHost or toHost section, and the enabled option has been removed. This clarifies the direction of sync and matches our standard integration config pattern. The previous configuration is now deprecated and will be removed in v0.29. For more information, please see the docs page.
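As a rough sketch of the new shape of the External Secrets Operator integration config: the fromHost/toHost split is confirmed above, but the nested field names here are assumptions for illustration, so check the docs page for the authoritative schema.

```yaml
# Illustrative only — sync direction is now explicit via toHost/fromHost.
integrations:
  externalSecrets:
    sync:
      toHost:             # resources created in the vCluster, synced to the host
        externalSecrets:
          enabled: true
      fromHost:           # resources defined on the host, visible in the vCluster
        clusterStores:
          enabled: true
```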

For a list of additional fixes and smaller changes, please refer to the Platform release notes and the vCluster release notes.

What's Next

This release marks another milestone in our tenancy model journey. Stay tuned for the next step in tenant isolation: vCluster Standalone.

For detailed documentation and migration guides, visit vcluster.com/docs.