August 11th, 2025

vCluster

vCluster v0.27 - Dedicated Clusters with Private Nodes

The first installment of our Future of Kubernetes launch series is here, and we couldn’t be more excited. Today, we introduce Private Nodes, a powerful new tenancy model that gives users the option to join Kubernetes nodes directly into a virtual cluster. With this feature, all of your workloads run in an isolated environment, separate from other virtual clusters, which provides stronger security and avoids potential interference or scheduling issues. In addition, when using private nodes, objects no longer need to be synced between the virtual and host cluster. The virtual cluster behaves much more like a traditional cluster: completely separate nodes, separate CNI, separate CSI, and so on. A virtual cluster with private nodes is effectively an entirely separate single-tenant cluster. The main difference from a traditional cluster is that the control plane runs inside a container rather than as a binary directly on a dedicated control plane node.

This is a giant shift in how vCluster can operate, and it opens up a host of new possibilities. Let’s take a high-level look at setting up a virtual cluster with private nodes. For brevity, only partial configuration is shown here.

First, we enable privateNodes in the vcluster.yaml:

privateNodes: 
  enabled: true 
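
Then we deploy the virtual cluster with this configuration. A minimal sketch, assuming a virtual cluster named my-vcluster and the CLI's standard --values flag:

# Deploy (or upgrade) the virtual cluster using the configuration above
vcluster create my-vcluster --values vcluster.yaml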

Once created, we join our private worker nodes to the new virtual cluster. To do this, we need to follow two steps:

  1. Connect to our virtual cluster and generate a join command:

vcluster connect my-vcluster 
# Now within the vCluster context, run: 
vcluster token create --expires=1h 
  2. Now SSH into the node you would like to join as a private node and run the command that you received from the previous step. It looks similar to this:

curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh - 

Run the above command on any node and it will join the cluster. That’s it! The vCluster control plane can also manage and automatically upgrade each node when the control plane itself is upgraded, or you can choose to manage the nodes independently.
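
If you opt into managed upgrades, this is toggled in vcluster.yaml as well. A hypothetical sketch; the autoUpgrade field name is an assumption here, so check the documentation for the actual option:

privateNodes:
  enabled: true
  autoUpgrade:
    enabled: true # assumed option: upgrade nodes automatically with the control plane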

Please see the documentation for more information and configuration options.

Coming Soon: Auto Nodes For Private Nodes

Private Nodes give you isolation, but we won’t stop there. One of our key drivers has always been to make Kubernetes architecture more efficient and less wasteful with resources. As you may have noticed, Private Nodes lets you use vCluster to create single-tenant clusters, which can increase the risk of underutilized nodes and wasted resources. That’s why we’re working on an even more exciting feature that builds on top of Private Nodes: we call it Auto Nodes.

With the next release, vCluster will be able to dynamically provision private nodes for each virtual cluster using a built-in Karpenter instance. Karpenter is the open source Kubernetes node autoscaler developed by AWS. With vCluster Auto Nodes, your virtual cluster will be able to scale on auto-pilot just like EKS Auto Mode does in AWS, but it will work anywhere, not just in AWS with EC2 nodes. Imagine auto-provisioning nodes from different cloud providers, in your private cloud, and even on bare metal.

Combine Private Nodes and Auto Nodes, and you get the best of both worlds: maximum hard isolation and dynamic clusters with maximum infrastructure utilization, backed by the speed, flexibility, and automation benefits of virtual clusters — all within your own data centers, cloud accounts and even in hybrid deployments.

To make sure you’re the first to know when Auto Nodes becomes available, subscribe to our Launch Series: The Future of Kubernetes Tenancy.

Other Changes

Notable Improvements

  • Our CIS hardening guide is now live. Based on the industry-standard CIS Benchmark, this guide outlines best practices tailored to vCluster’s unique architecture. By following it, you can apply relevant controls that improve your overall security posture and better align with your security requirements.

  • The vCluster CLI now has a command to review and rotate certificates, including the client, server, and CA certificates. Client certificates remain valid for only one year by default. See the docs or run vcluster certs -h for more information. This command already works with private nodes but requires additional steps.
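
For example, a quick sketch of how this can look; the exact subcommand names are assumptions here, so confirm them with vcluster certs -h:

vcluster certs check my-vcluster  # review certificate expiration dates (assumed subcommand)
vcluster certs rotate my-vcluster # rotate the client and server certificates (assumed subcommand)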

  • We have added support for Kubernetes 1.33, enabling users to take advantage of the latest enhancements, security updates, and performance improvements in the upstream Kubernetes release.
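
To try the new release on a single virtual cluster, the Kubernetes version can be pinned in vcluster.yaml. A minimal sketch, assuming the controlPlane.distro.k8s.version field; see the docs for the exact option and supported values:

controlPlane:
  distro:
    k8s:
      version: v1.33.0 # assumed field and tag; check the docs for supported versions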

Other Announcements & Changes

  • Breaking Change: The sync.toHost.pods.rewriteHosts.initContainer configuration has been migrated from a string to our standard image object format. This only needs to be addressed if you are currently using a custom image. Please see the docs for more information.
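
As a hypothetical before-and-after sketch, assuming the registry/repository/tag image fields used elsewhere in vcluster.yaml (the docs have the authoritative schema):

# Before: a plain image string (deprecated)
sync:
  toHost:
    pods:
      rewriteHosts:
        initContainer:
          image: my-registry.example.com/my-alpine:3.20

# After: the standard image object format
sync:
  toHost:
    pods:
      rewriteHosts:
        initContainer:
          image:
            registry: my-registry.example.com # hypothetical custom registry
            repository: my-alpine             # hypothetical repository
            tag: "3.20"                       # hypothetical tag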

  • The Isolated Control Plane feature, configured with experimental.isolatedControlPlane, has been deprecated as of v0.27 and will be removed in v0.28.

  • In an upcoming minor release we will be upgrading etcd to v3.6. The upstream etcd docs state that before upgrading to v3.6, you must already be on v3.5.20 or higher; otherwise, failures may occur. That version of etcd was introduced in vCluster v0.24.2, so please plan to upgrade to at least vCluster v0.24.2 in the near future.

  • The External Secrets Operator integration configuration has been reformatted. Custom resources have been migrated to either the fromHost or toHost section, and the enabled option has been removed. This clarifies the direction of sync and matches our standard integration config pattern. The previous configuration is now deprecated and will be removed in v0.29. For more information, please see the docs page.
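
As a purely illustrative sketch of the new shape (the field names under sync are assumptions; the docs page has the authoritative schema):

integrations:
  externalSecrets:
    enabled: true
    sync:
      toHost:
        externalSecrets: {} # assumed: ExternalSecrets sync from the virtual cluster to the host
      fromHost:
        clusterStores: {}   # assumed: ClusterSecretStores sync from the host into the virtual cluster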

  • When syncing CRDs between the virtual and host clusters, you can now specify the API version. While not a best practice, this can provide a valuable alternative for specific use cases. See the docs for more details.
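
For example, a hypothetical sketch of pinning the API version for a synced custom resource (the version field name is an assumption; see the docs for the actual syntax):

sync:
  toHost:
    customResources:
      certificates.cert-manager.io: # hypothetical custom resource to sync
        enabled: true
        version: v1 # assumed field: pin the synced API version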

For a list of additional fixes and smaller changes, please refer to the release notes.