October 28th, 2025

vCluster

Platform v4.5 & vCluster v0.30 - Secure Cloud Bursting, On-Prem Networking, and Persistent Volume Snapshots

We’re excited to roll out vCluster Platform v4.5 and vCluster v0.30, two big releases packed with features that push Kubernetes tenancy even further.

From hybrid flexibility to stronger isolation and smarter automation, these updates are another step toward delivering the most powerful and production-ready tenancy platform in the Kubernetes ecosystem.

Platform v4.5 - vCluster VPN, Netris integration, UI Kubectl Shell, and more

vCluster VPN (Virtual Private Network)

With our Future of Kubernetes Tenancy launch series, we just wrapped up the most significant shift yet in how virtual clusters can operate, introducing two completely new ways of isolating tenants: vCluster Private Nodes and vCluster Standalone.

  • With Private Nodes, the control plane is hosted on a shared Kubernetes cluster, while worker nodes can be joined directly into a virtual cluster.

  • vCluster Standalone takes this further and allows you to run the control plane on dedicated nodes, solving the “Cluster One” problem.

Both Private Nodes and Standalone require the control plane to be reachable by the nodes, typically via a LoadBalancer or Ingress, so that nodes can register themselves. This is easy when the control plane and nodes share the same physical network, but becomes much harder when they don’t.

vCluster VPN creates a secure, private connection between the virtual cluster control plane and Private Nodes using networking technology developed by Tailscale. This eliminates the need to expose the virtual cluster control plane directly. Instead, you can create an overlay network for control plane ↔ node and node ↔ node communication.

This makes vCluster VPN perfectly suited for scenarios where you intend to join nodes from different sources. A common challenge of on-prem Kubernetes clusters is providing burst capacity. Auto Nodes and vCluster VPN enable you to automatically provision additional cloud-backed nodes when demand exceeds local capacity, and vCluster VPN takes care of the networking between all nodes in the virtual cluster, regardless of their location.

Let’s walk through setting up a burst-to-cloud virtual cluster:

First, create NodeProviders for your on-prem infrastructure (for example, OpenStack) and for a cloud provider like AWS.

Next, create a virtual cluster with two node pools and vCluster VPN:

# vcluster.yaml
privateNodes:
  enabled: true
  # Expose the control plane privately to nodes using vCluster VPN
  vpn:
    enabled: true
    # Create an overlay network over all nodes in addition to direct control plane communication
    nodeToNode:
      enabled: true
  autoNodes:
    - provider: openstack
      static:
        # Ensure we always have at least 10 large on-prem nodes in our cluster
        - name: on-prem-nodepool
          quantity: 10
          nodeTypeSelector:
            - property: instance-type
              value: "lg"
    - provider: aws
      # Dynamically join EC2 instances when workloads exceed our on-prem capacity
      dynamic:
        - name: cloud-nodepool
          nodeTypeSelector:
            - property: instance-type
              value: "t3.xlarge"
          limits:
            nodes: 20 # Enforce a maximum of 20 nodes in this NodePool
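
If you deploy with the vCluster CLI, rolling this out looks like creating any other virtual cluster; the virtual cluster name and file path below are just placeholders:

vcluster create my-vcluster --values vcluster.yaml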

Auto Nodes Improvements

In addition to vCluster VPN, this release brings many convenience features and improvements to Auto Nodes. We’re upgrading our Terraform Quickstart NodeProviders for AWS, Azure, and GCP so that the resulting clusters behave more like traditional cloud Kubernetes clusters, deploying the native cloud controller managers and CSI drivers by default.

To achieve this, we’re introducing optional NodeEnvironments for the Terraform NodeProvider. NodeEnvironments are created once per provider per virtual cluster. They enable you to provision cluster-wide resources, such as VPCs, security groups, and firewalls, as well as control-plane-specific deployments inside the virtual cluster, such as cloud controller managers or CSI drivers.

Reflecting the importance of NodeEnvironments, we’ve updated the vcluster.yaml in v0.30 to allow easy, central configuration of environments:

# vcluster.yaml
privateNodes:
  enabled: true
  autoNodes:
    # Configure the relevant NodeProvider's environment and NodePools directly
    - provider: aws
      properties: # global properties, available in both NodeEnvironments and NodeClaims
        region: us-east-1
      dynamic:
        - name: cpu-pool
          nodeTypeSelector:
            key: instance-type
            operator: "In"
            values: ["t3.large", "t3.xlarge"]

IMPORTANT: You need to update the vcluster.yaml when migrating your virtual cluster from v0.29 to v0.30. Please take a look at the docs for the full specification.

UI Kubectl Shell

Platform administrators and users alike often just need to run a couple of kubectl commands against a cluster to troubleshoot an issue or look up a specific piece of information. We’re now making that really easy to do within the vCluster Platform UI. Instead of generating a new kubeconfig, downloading it, plugging it into kubectl, and cleaning up afterwards just to run kubectl get nodes, you can now connect to your virtual cluster right in your browser.

The Kubectl Shell will create a new pod in your virtual cluster with a specifically crafted kubeconfig already mounted and ready to go. The shell comes preinstalled with common utilities like kubectl, helm, jq/yq, curl, and nslookup. Quickly run a couple of commands against your virtual cluster and rest assured that the pod is cleaned up automatically after 15 minutes of inactivity. Check out the docs for more information.
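
To give a sense of what that looks like, here are the kinds of commands you might run once the shell opens; this is ordinary kubectl and helm usage, nothing vCluster-specific:

# Inspect nodes and workloads inside the virtual cluster
kubectl get nodes -o wide
kubectl get pods -A
# List Helm releases and verify in-cluster DNS
helm list -A
nslookup kubernetes.default.svc.cluster.local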

Netris Partnership - Cloud-style Network Automation for Private Datacenters

We’re excited to announce our strategic partnership with Netris, the company bringing cloud-style networking to private environments and on-prem datacenters. vCluster now integrates deeply with Netris and can provide hard physical tenant isolation on the data plane. Isolating networks is a crucial aspect of clustering GPUs, as a lot of the value of GenAI lies in the model parameters. The combination of vCluster and Netris lets you keep data private while still giving you maximum flexibility and maintainability, helping you dynamically distribute access to GPUs across your external and internal tenants.

Get started by reusing your existing tenant-a-net Netris Server Cluster and automatically joining nodes into it by setting up a virtual cluster with this vcluster.yaml:

# vcluster.yaml
integrations:
  # Enable Netris integration and authenticate
  netris:
    enabled: true
    connector: netris-credentials
privateNodes:
  enabled: true
  autoNodes:
    # Automatically join nodes with GPUs to the Netris Server Cluster for tenant A
    - provider: bcm
      properties:
        netris.vcluster.com/server-cluster: tenant-a-net
      dynamic:
        - name: gpu-pool
          nodeTypeSelector:
            - key: "bcm.vcluster.com/gpu-type"
              value: "h100"

Keep an eye out for future releases as we expand our partnership with Netris. The next step is to integrate vCluster even more deeply and let you manage all network configuration right in your vcluster.yaml.

Read more about the integration in the docs and our partnership announcement.

Other Announcements & Changes

  • AWS RDS Database connectors are now able to provision and authenticate using workload identity (IRSA and Pod Identity) directly through the vCluster Platform. Learn more in the docs.

Breaking Changes

  • As mentioned above, you need to take action when upgrading Auto Nodes-backed virtual clusters from v0.29 to v0.30. Please consult the documentation.

For a list of additional fixes and smaller changes, please refer to the release notes. For detailed documentation and migration guides, visit vcluster.com/docs/platform.

vCluster v0.30 - Volume Snapshots & K8s 1.34 Support

Persistent Volume Snapshots

In vCluster v0.25, we introduced the Snapshot & Restore feature, which allowed taking a backup of etcd via the vCluster CLI and exporting it to locations like S3 or OCI registries. Then, in Platform v4.4 and vCluster v0.28, we expanded substantially on that by adding support inside the Platform, letting users schedule snapshots on a regular basis and configure where and for how long they are stored.

Now we are introducing another feature: Persistent Volume Snapshots. By integrating the upstream Kubernetes Volume Snapshot feature, vCluster can include snapshots of persistent volumes, which are created and stored automatically by the relevant CSI driver. Combined with auto-snapshots, this gives you recurring, stable backups that cover disaster recovery at whatever cadence you need, including your workloads’ persistent data.

To use the new feature, first install an upstream CSI driver that supports snapshots and configure a default VolumeSnapshotClass. Then pass the --include-volumes flag when creating a snapshot with the CLI:

vcluster snapshot create my-vcluster "s3://my-s3-bucket/snap-1.tar.gz" --include-volumes 

Or, if you’re using auto-snapshots, set the volumes.enabled config to true:

external:
  platform:
    autoSnapshot:
      enabled: true
      schedule: "0 * * * *"
      volumes:
        enabled: true

Now, when a snapshot completes, it will include a backup of any volumes that are compatible with the CSI drivers installed in the cluster. The volume data is stored by the CSI driver in its own storage location, which is separate from the primary location of your snapshot. For example, when using AWS’s EBS CSI driver, volumes are backed up inside EBS storage, even though the primary snapshot may live in an OCI registry.
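
For reference, the default VolumeSnapshotClass prerequisite mentioned above is a standard upstream Kubernetes resource, not something vCluster-specific. A minimal example for the AWS EBS CSI driver might look like this (the class name is illustrative):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapshot-class
  annotations:
    # Mark this class as the default used for volume snapshots
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: ebs.csi.aws.com
deletionPolicy: Delete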

To restore, simply add the --restore-volumes flag, and the volumes will be re-populated inside the new virtual cluster:

vcluster restore my-vcluster "s3://my-s3-bucket/snap-1.tar.gz" --restore-volumes

Note that this feature is currently in Beta, is not recommended for mission-critical environments, and has some limitations. We plan to expand it over time, so stay tuned for further enhancements that will make it usable across an even wider variety of deployment models and infrastructure.

Other Announcements & Changes

  • We have added support for Kubernetes 1.34, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release.

  • As Kubernetes slowly transitions from Endpoints to EndpointSlices, vCluster now supports both. Syncing EndpointSlices to a host cluster can now be done in the same manner as Endpoints, as sketched below.
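
As a rough sketch of what enabling this could look like in vcluster.yaml, assuming the EndpointSlice syncer follows the same naming pattern as the existing Endpoints option (the exact key name is an assumption, so check the docs for the final spelling):

sync:
  toHost:
    # Assumed key, mirroring the existing Endpoints syncer option
    endpointSlices:
      enabled: true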

For a list of additional fixes and smaller changes, please refer to the release notes. For detailed documentation and migration guides, visit vcluster.com/docs.