
New Features in Kubernetes 1.33: An Overview

 

Released: April 23, 2025

Release history: https://kubernetes.io/releases/

Kubernetes v1.33 was officially released on April 23, 2025. The release team dubbed this release “Octarine: The Color of Magic”, inspired by the idea that even as Kubernetes becomes more mature and well-understood, it retains a bit of “magic” in what it enables. In practical terms, v1.33 brings a host of new stable features, some powerful enhancements now in beta, and a collection of alpha previews that hint at future capabilities.

In this post, we concentrate on the major features of Kubernetes 1.33. These include functionalities graduating to General Availability (GA), improvements that simplify cluster operations and application management, and significant changes in defaults and deprecations. Let’s dive into the top features first.

Top Features

Sidecar Containers (Stable)

KEP-753: Sidecar Containers

What is it? The sidecar container pattern is now an official stable feature in Kubernetes 1.33. Sidecar containers are auxiliary containers in a Pod that provide supporting functionality (such as logging agents, proxies, or metrics collectors) alongside the main application containers. Kubernetes implements sidecars as a special class of init containers with restartPolicy: Always, ensuring that sidecars start before the main app containers, stay running throughout the Pod’s lifecycle, and only terminate after the primary containers have stopped. Sidecars can also have their own health probes, and their Out-of-Memory kill priorities are aligned with those of the primary containers so they are not killed prematurely.

When to Use It: Use sidecar containers whenever you have a helper process that should run alongside your application. Common examples include log shippers, service mesh proxies, or data loaders that need to run continuously. With first-class sidecar support now GA, you can rely on Kubernetes to manage sidecar startup and shutdown ordering automatically, which is especially helpful for ensuring your main app doesn’t start before its helpers are ready (and vice versa for shutdown).

How to Implement It: To use this feature, define the supporting container as a sidecar in your Pod spec. In Kubernetes 1.33, this is done by declaring the container under initContainers with restartPolicy: Always – Kubernetes now treats such an init container as a sidecar. The sidecar starts first, and Kubernetes only begins starting the regular containers once the sidecar is up and running. No special feature gates or flags are needed – sidecars are enabled by default in 1.33 since the feature is GA.
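For illustration, here is a minimal Pod manifest using a native sidecar (the container names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
  - name: log-shipper
    image: example.com/log-shipper:latest   # placeholder image
    restartPolicy: Always                    # restartPolicy: Always marks this init container as a sidecar
  containers:
  - name: app
    image: example.com/app:latest            # placeholder image

The log-shipper container is started before app, keeps running for the Pod’s lifetime, and is stopped only after app terminates.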

In-Place Pod Resource Resizing (Beta)

KEP-1287: In-Place Update of Pod Resources

What is it? Kubernetes 1.33 introduces beta support for in-place vertical scaling of Pods, meaning you can adjust the CPU and memory resources of running Pods without needing to delete and recreate them. Before this feature, if you updated the resource requests/limits for a Deployment or StatefulSet, Kubernetes would replace each Pod to apply the new resources. Now, with in-place Pod resource updates (sometimes called in-place Pod resizing), the kubelet can directly update a running container’s resources when possible. This allows for scaling a Pod up or down in resources without restarting it.

When to Use It: This feature is especially useful for stateful or long-running processes where restarting the Pod to change resources is disruptive. For example, database Pods or in-memory caches can now be given more memory or CPU on the fly to handle increased load, or scaled down during low-demand periods, all without downtime. It also enables more graceful auto-scaling scenarios – a Vertical Pod Autoscaler could eventually adjust resources in place. Overall, use in-place resizing whenever you need to adjust resources on the fly and want to avoid the cost of restarting your application.

How to Implement It: In-place resource resizing is available as a beta feature in 1.33 and is enabled by default (the InPlacePodVerticalScaling feature gate is on by default), so you don’t have to take special action to turn it on in most clusters. To use it, update the resource requests/limits of a running Pod (in 1.33 this goes through the Pod’s resize subresource) and Kubernetes will attempt to update the existing Pod’s resources in place; note that editing the pod template of a Deployment or StatefulSet still rolls out replacement Pods as usual. Certain changes (like decreasing memory limits) might still require a restart or be disallowed for safety. For the full details on how in-place updates work and any limitations (such as each container’s resizePolicy, which controls whether resizing a given resource requires a container restart), refer to KEP-1287: In-Place Update of Pod Resources.
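A sketch of the relevant Pod fields, with an optional per-resource resizePolicy (all names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: resizable-app
spec:
  containers:
  - name: app
    image: example.com/app:latest       # placeholder image
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired        # CPU can be resized in place without a restart
    - resourceName: memory
      restartPolicy: RestartContainer   # memory changes restart this container
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi

A resize is then requested by patching the Pod’s resources (in 1.33 via the Pod’s resize subresource, e.g. kubectl patch with --subresource resize).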

Improved Job Handling: Per-Index Backoff & Success Policy (Stable)

KEP-3850: Backoff Limit Per Index for Indexed Jobs and KEP-3998: Job Completion Policy

What is it? Kubernetes 1.33 brings significant improvements to Jobs, particularly Indexed Jobs (Jobs that create multiple parallel Pods with specific index numbers). Two related features have graduated to GA:

  • Backoff limit per index: You can now specify a retry limit per index for Indexed Jobs. Previously, the backoffLimit (how many times to retry failed Pods before considering the Job failed) was global to the whole Job. With this enhancement, each indexed Pod gets its own retry counter, so exhausting the retries for one index marks only that index as failed, while the other indices continue to run independently.

  • Job success policy: Kubernetes now supports a .spec.successPolicy for Jobs, allowing more control over what “success” means. You can specify conditions like “at least N Pods succeed” or “these specific indices must succeed” before marking the overall Job as complete. This is useful for workflows where partial completion is acceptable or only certain parts of a job are critical.

When to Use It: These job features are great for batch processing, analytics, or CI/CD pipelines. Use per-index backoff limits if you have an Indexed Job where some tasks might inherently be more error-prone – now one flaky index won’t derail the entire job after hitting a retry limit. Use the custom success policy for cases like: only one out of many workers needs to succeed (e.g., a leader election scenario), or you have a set of simulations where any X out of Y successes is enough to consider the run successful. Essentially, these features give you finer-grained control to make Jobs behave in a way that matches your workload’s requirements.

How to Implement It: When defining an Indexed Job (spec.completionMode: Indexed), you can add the field spec.backoffLimitPerIndex to set a retry limit per index (if not set, the global backoffLimit behavior applies). For the success policy, add spec.successPolicy with one or more rules, each listing the required successful indexes (succeededIndexes), a minimum number of successful indexes (succeededCount), or both, depending on your needs. Both features are Stable in 1.33, so no feature gates are required – they work out of the box once you upgrade. For further reading on these enhancements, see KEP-3850: Backoff Limit Per Index for Indexed Jobs and KEP-3998: Job Completion Policy.
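A combined sketch of both fields on one Indexed Job (all names, counts, and the image are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-example
spec:
  completions: 10
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 2          # each index may fail up to 2 times, independently of the others
  maxFailedIndexes: 3              # optional: mark the whole Job failed once more than 3 indexes fail
  successPolicy:
    rules:
    - succeededIndexes: "0-9"
      succeededCount: 5            # complete once any 5 of indexes 0-9 have succeeded
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example.com/worker:latest   # placeholder image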

Bound Service Account Token Security Improvements (Stable)

KEP-4193: Bound Service Account Token Improvements

What is it? Kubernetes 1.33 strengthens the security of Service Account Tokens. The feature for bound service account tokens was introduced in earlier releases and now comes with additional improvements that are GA in 1.33. These include embedding a unique token identifier (a JWT ID, or jti claim) and the node name (when applicable) into the token claims, and Kubernetes can now also bind a token directly to a specific node. In essence, a token issued to a Pod is even more tightly scoped: if the Pod (or the node the token is bound to) is deleted, the token becomes invalid, and the embedded identifiers make any misuse easier to trace and audit.

When to Use It: Use these enhanced bound tokens by default for all your service accounts (Kubernetes does this automatically for tokens issued via the TokenRequest API). They greatly reduce the risk of token theft or misuse. For example, in multi-tenant clusters or any security-sensitive environment, if a token is somehow captured from one Pod, the bound token features ensure it can’t be used in another context – it’s tied to the original Pod (and optionally its node). This means an attacker can’t easily reuse that token to impersonate the service account elsewhere.

How to Implement It: There isn’t much you need to do – as of 1.33, whenever you create a token for a service account (e.g., kubectl create token or automatically via a projected volume mounted in a Pod), the token will include these additional claims. Make sure you are not using legacy unbound tokens (the older secret-based tokens), which are deprecated in favor of bound tokens. If you want to verify the behavior, you can inspect a bound token’s JWT claims (e.g., via the TokenReview API or a JWT decoding tool) to see the JWT ID (jti) and node information in the token’s Kubernetes claims. The improvements are enabled by default in 1.33 since the feature is stable. Further details can be found in KEP-4193: Bound Service Account Token Improvements.
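For instance, a Pod can request a bound, audience-scoped token via a projected volume (the service account, audience, and lifetime here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: my-app           # placeholder service account
  containers:
  - name: app
    image: example.com/app:latest      # placeholder image
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: my-audience        # illustrative audience
          expirationSeconds: 3600      # one-hour token, rotated automatically by the kubelet

The token written to /var/run/secrets/tokens/token is a bound token tied to this Pod and carries the new claims described above.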

Multiple Service CIDRs

KEP-1880: Multiple Service CIDRs

What is it? Kubernetes 1.33 addresses the problem of running out of ClusterIP addresses by introducing Multiple Service CIDRs as a stable feature. Traditionally, a cluster had a single range of IP addresses (CIDR) for Services of type ClusterIP, and if you exhausted that range, you were in trouble. Now, the Kubernetes networking API includes a new object type, ServiceCIDR (and an IPAddress resource) that allows cluster operators to add additional Service IP ranges on the fly. The IP address allocation logic has been updated to handle multiple ranges seamlessly. This means you can dynamically grow the pool of service IPs without rebuilding your cluster networking from scratch.

When to Use It: This feature is especially useful for large clusters or long-lived clusters where the number of Services might grow beyond the initial IP allocation. If you’re planning a cluster where thousands of Services could be created (for instance, multi-tenant clusters or environments with heavy microservice architectures), you might hit the limit of a /16 CIDR. With Multiple Service CIDRs, you can plan for expansion by adding a second CIDR (or more) when needed. Use this whenever you foresee scale issues with service IPs, or as a safety measure so that running out of IPs does not cause service creation failures.

How to Implement It: After upgrading to 1.33, a cluster administrator can create a new ServiceCIDR object via the Kubernetes API to announce an additional IP range. The API server’s IP allocator will then treat that range as part of the pool for allocating ClusterIPs. Existing Services keep their IPs, and new Services can get IPs from any of the configured ranges. This is all handled internally by Kubernetes once configured – Services themselves don’t need to know that multiple ranges exist. Since this is a GA feature, it’s enabled by default; however, the exact procedure for adding a CIDR may depend on your network provider plugin, so consult its documentation for any additional steps. The design and API changes are described in KEP-1880: Multiple Service CIDRs.
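A minimal ServiceCIDR manifest might look like this sketch (the object name and range are illustrative; with the feature GA in 1.33 it is served under networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-range
spec:
  cidrs:
  - 10.100.0.0/16          # additional range available for ClusterIP allocation

Once created, you can list the configured ranges with kubectl get servicecidrs.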

Topology-Aware Routing with PreferClose

KEP-4444: Traffic Distribution for Services and KEP-2433: Topology Aware Routing

What is it? Topology-aware routing for Services graduates to GA in Kubernetes 1.33, with the addition of a new traffic distribution policy called PreferClose. This feature builds on EndpointSlices and topology hints introduced in earlier releases. In essence, Kubernetes can now prefer routing Service traffic to endpoints that are topologically “close” to the client Pod (for example, in the same zone) to reduce latency and cross-zone data transfer costs. By setting the spec.trafficDistribution: PreferClose on a Service, the kube-proxy (or other networking implementations) will try to send traffic to endpoints in the same zone as the source, falling back to others only if needed.

When to Use It: Use topology-aware routing for any multi-zone (or multi-region) Kubernetes cluster where reducing inter-zone traffic is beneficial. This is common in distributed systems where you want to minimize latency or cloud egress costs. For example, if you have a Service that fronts pods in each availability zone, enabling PreferClose ensures that consumers in each zone mostly talk to the local pods. It’s also useful for regulatory or data sovereignty reasons – keeping traffic within a zone or region when possible. Keep in mind this is a preference, not a guarantee: if an endpoint in the same zone is not available, traffic can still go cross-zone.

How to Implement It: Topology-aware routing is stable in 1.33 and relies on EndpointSlice hints (which have been stable since 1.21). To use it, you simply add the field to your Service definition: for example:

spec:
  trafficDistribution: PreferClose

Kubernetes will handle the rest by annotating EndpointSlices with topology hints and configuring kube-proxy accordingly. There is no feature gate required as it’s GA. Ensure your cluster has EndpointSlice controllers enabled (they are on by default) and that your networking plugin supports the hints (the standard kube-proxy does). You can read more in KEP-4444: Traffic Distribution for Services and KEP-2433: Topology-Aware Routing.

Kubectl User Preferences via .kuberc (Alpha)

KEP-3104: Separate kubectl user preferences from cluster configs

What is it? Kubernetes 1.33 introduces a quality-of-life improvement for the command-line tool kubectl: a new optional configuration file named .kuberc for storing user preferences. Prior to this, your kubectl configuration (usually in ~/.kube/config) mixed cluster connection info with preferences like default output format or custom aliases. With this new feature (currently alpha), you can keep things like kubectl aliases, default flags, and behavior overrides in a separate .kuberc file, leaving the kubeconfig purely for cluster credentials and context info. This separation makes it easier to share or reuse your personal settings across different clusters/environments without merging them with cluster credentials.

When to Use It: If you frequently customize your kubectl experience (like setting aliases for long commands, or always using server-side apply by default, etc.), .kuberc will be useful. It’s especially handy for teams – you might share a common .kuberc template with best-practice aliases or defaults, and each team member can use it regardless of which cluster they’re working with. Essentially, use this feature to standardize and port your CLI preferences easily across systems.

How to Implement It: Since this is an alpha feature in 1.33, you need to opt in. First, enable it by setting an environment variable KUBECTL_KUBERC=true (or starting kubectl with that env var set). Then create a file at ~/.kube/kuberc (or another location of your choice) and put your preferences there. The syntax supports things like specifying default flags or alias mappings. You can also specify a custom location for the preferences file by using the --kuberc=/path/to/file flag with kubectl. Note that as an alpha feature, this capability might change in future releases and is disabled by default. Check out KEP-3104: Separate kubectl user preferences from cluster configs for details on what can go in the .kuberc file and the motivation behind it.
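As a rough sketch of what such a file can contain – the schema is alpha (KEP-3104) and may still change, and the alias and default shown here are purely illustrative:

apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
defaults:
- command: apply
  options:
  - name: server-side
    default: "true"          # make kubectl apply use server-side apply by default
aliases:
- name: getn
  command: get
  options:
  - name: output
    default: json            # this alias prints resources as JSON unless overridden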

Notable Changes

While the above are the headline features of Kubernetes 1.33, there are some important changes and deprecations to be aware of:

  • Deprecation of the Endpoints API: The traditional Endpoints API (core/v1 Endpoints objects) is now officially deprecated. Kubernetes has long had the replacement EndpointSlice API (GA since v1.21), which is more scalable and supports modern features like dual-stack. In 1.33, if you directly use the Endpoints API, you will receive warnings, and you should plan to migrate to EndpointSlices. Most users who just use Services don’t need to do anything (the system will handle it), but any custom scripts or controllers that depend on Endpoints should switch to EndpointSlices. More info can be found in KEP-4974: Deprecate v1.Endpoints.

  • User Namespaces (Beta) enabled by default: User Namespaces support in Pods – a long-standing security enhancement – is still in beta as of 1.33, but notably, it is now enabled by default on clusters. This feature (first introduced in alpha in v1.25) lets you run Pods with isolated user ID mappings, so that UID 0 inside a container no longer maps to root on the host. In 1.33, the UserNamespacesSupport feature gate is on by default, though it won’t affect existing Pods unless you explicitly opt a Pod in by setting spec.hostUsers: false (see the short manifest sketch after this list). It’s a major step toward making this security feature mainstream. (See KEP-127: Support User Namespaces for background.)

  • Removal of deprecated features: Some legacy or alpha features have been removed in 1.33. Notably, the ancient gitRepo volume type (deprecated since v1.11) has finally been removed from the kubelet. This in-tree volume plugin allowed embedding a git checkout as a volume, but it had security issues. If you still use gitRepo volumes, you’ll need to switch to alternatives (such as an initContainer running git clone, or a dedicated operator), since kubelets 1.33+ no longer support it (unless a special feature gate is set to re-enable it temporarily). For details, refer to KEP-5040: Remove gitRepo volume driver. Another removal is host networking support for Windows Pods (an alpha feature from 1.26 that didn’t pan out) – that capability has been withdrawn due to technical challenges, and the code is removed in 1.33.

  • Default behavior changes: There are a few subtle changes in defaults. For example, the .status.nodeInfo.kubeProxyVersion field in Node objects has been removed (it was deprecated and not reliably populated). Also, the procMount field for Pods (which controls how the /proc filesystem is exposed to a container) is now beta and enabled by default, as part of improving support for running unprivileged containers with user namespaces. These changes may not affect most users, but they are part of Kubernetes’ continuous cleanup and security hardening efforts.
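As a quick illustration of the user-namespace opt-in mentioned above (pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false                     # run this Pod in its own user namespace
  containers:
  - name: app
    image: example.com/app:latest      # placeholder image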

Minor Updates

In addition to the major features and changes above, Kubernetes 1.33 includes many smaller improvements. Here are a few notable minor updates:

  • nftables Backend for kube-proxy (GA): Kubernetes 1.33 graduates the nftables mode for kube-proxy to stable, an alternative backend to the long-standing iptables mode that can improve the performance and scalability of Service networking on Linux. (By default, kube-proxy still uses iptables for broad compatibility, but you can opt in to nftables now that it’s GA – see the configuration sketch after this list.)

  • Volume Populators (GA): A feature that allows pre-populating Persistent Volumes with data (from sources other than snapshots or PVC clones) has graduated to stable. Volume Populators let you use custom data sources for PVCs via a dataSourceRef. This is useful for scenarios like creating a PVC from an image or a database backup custom resource. To use it, you need to install the appropriate VolumePopulator CRDs/controllers for your data source types. (See KEP-1495: Generic Data Populators for more details.)

  • Asynchronous Scheduler Preemption (Beta): The scheduler in 1.33 got smarter with async preemption. Instead of pausing scheduling while evicting lower-priority Pods to make room for a high-priority one, the scheduler can now preempt in the background and keep scheduling other pods in parallel. This leads to better scheduling throughput in clusters with frequent preemptions (the feature was alpha in 1.32, now beta). No configuration needed; it’s on by default as a beta feature. (See KEP-4832: Asynchronous Preemption for details.)

  • Windows Networking – Direct Server Return (Beta): Windows nodes see improvement in 1.33 with Direct Server Return (DSR) support in kube-proxy moving to beta. DSR allows return traffic from a service backend to skip going back through the load balancer, which can reduce latency and load. If you run Kubernetes on Windows and use Services with external load balancers, this can improve network performance. (Tracked in KEP-5100.)

  • Kubectl Subresource Support (GA): The kubectl command line now fully supports interacting with subresources like /status and /scale via the --subresource flag, which has graduated to stable in 1.33. This means you can do things like kubectl get deployment/mydep --subresource=status or kubectl patch deployment/mydep --subresource=scale in a consistent way. It’s a nice improvement for scripting and client usage, ensuring you don’t always need separate commands or API calls for common subresource actions (KEP-2590 covers this change).
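For the nftables opt-in mentioned above, the relevant setting in the kube-proxy configuration is a one-liner; how you deliver this configuration depends on how kube-proxy is deployed in your cluster:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"           # switch from the default iptables mode

Equivalently, kube-proxy can be started with --proxy-mode=nftables.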

(And many more small enhancements, fixes, and performance improvements are included – see the official release notes for a full list.)

Conclusion

Kubernetes 1.33 “Octarine” brings a mix of features graduating to stability and innovative new capabilities in progress. We’ve highlighted the major updates: from sidecar containers and in-place Pod resizing, to enhanced job controls, improved security defaults, and smarter networking.

If you’re upgrading to v1.33, be sure to review the deprecations (like the Endpoints API) and adjust your usage where necessary. Embrace the new stable features to simplify your workflows – for example, use sidecars natively instead of workarounds, and consider enabling user namespaces for better security isolation.

As always, Kubernetes’ evolution is guided by its vibrant open-source community. The enhancements in 1.33 are the result of many KEPs and contributions (64, to be exact!) from across SIGs and working groups. For further reading, you can check out the official Kubernetes 1.33 release notes and the detailed KEP documents linked above to deep-dive into any specific feature. Happy upgrading, and enjoy the new features in Kubernetes 1.33!
