Published September 22, 2021

Three key predictions about enterprise Kubernetes you should know about

Tenry Fu
CEO & Co-Founder

As we head into the last part of the year after the “year the physical world came to a standstill”, and even though predictions are usually published around December for the year ahead, I thought I’d share some of what we have been hearing from customers and what I believe will be important trends for the Kubernetes (K8s) ecosystem going forward. In addition, our inaugural annual adoption report was just published, with some interesting key takeaways. So let’s dive in:

1. The rise of multi-cluster & multi-distro

The increased use of diverse deployments with multiple clusters is a natural consequence of the organic growth and popularity of containers in larger organizations. As development teams move beyond experimenting with DIY K8s platforms customized to their own unique requirements, production-grade governance and streamlining is the next phase. As enterprises adopt K8s as a mainstream container orchestration platform, there will be multiple clusters owned by different teams, potentially in different environments, even with K8s solutions from different vendors (including the public cloud managed K8s services). Ownership then shifts to IT Operations and to Kubernetes management solutions that now have to look after multiple diverse development efforts.

As K8s adoption increases and enterprises’ operational models shift to focus on scalable production environments scattered across different locations, forcing a single distribution may not be ideal, especially when public cloud is all about using the services best suited to each environment. This optionality across distros and K8s stacks, without jeopardizing consistency and efficiency across all operations, will require a more sophisticated management approach based on declarative (desired-state) models. We are starting to see the industry race in this direction, although some implementations built on proprietary orchestration technologies may carry more baggage, as this is a big paradigm shift.
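To make the desired-state idea concrete, here is a minimal Go sketch of how a declarative multi-cluster model might look: the platform team records what each cluster should run, and a reconciler works out the actions needed to converge the observed state toward it. The types, fields and cluster names below are hypothetical, for illustration only; they do not represent any specific product’s API.

```go
// Minimal sketch of a declarative (desired-state) multi-cluster model.
// The platform team declares what each cluster should look like; a reconciler
// compares that against what is actually observed and emits corrective actions.
package main

import "fmt"

// ClusterSpec is the desired state for one managed cluster (hypothetical).
type ClusterSpec struct {
	Name    string
	Distro  string   // e.g. "eks", "aks", "upstream"
	Version string   // desired Kubernetes version
	Addons  []string // add-ons that must be present
}

// ClusterStatus is what is actually observed in the field.
type ClusterStatus struct {
	Version string
	Addons  []string
}

// Reconcile returns the actions needed to drive observed state toward desired state.
func Reconcile(spec ClusterSpec, status ClusterStatus) []string {
	var actions []string
	if spec.Version != status.Version {
		actions = append(actions, fmt.Sprintf("upgrade %s to %s", spec.Name, spec.Version))
	}
	observed := map[string]bool{}
	for _, a := range status.Addons {
		observed[a] = true
	}
	for _, a := range spec.Addons {
		if !observed[a] {
			actions = append(actions, fmt.Sprintf("install %s on %s", a, spec.Name))
		}
	}
	return actions
}

func main() {
	spec := ClusterSpec{Name: "edge-store-042", Distro: "upstream", Version: "1.21.4", Addons: []string{"istio", "prometheus"}}
	status := ClusterStatus{Version: "1.20.9", Addons: []string{"prometheus"}}
	for _, action := range Reconcile(spec, status) {
		fmt.Println(action)
	}
}
```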

2. Service mesh will play an increasingly important role in the application lifecycle

Ultimately, K8s is just a new class of infrastructure, or application middleware, for running container applications. As more applications reach production, application lifecycle management inevitably becomes important. And in the K8s world, an application is no longer a simple monolith but a collection of microservices, sometimes tens or even hundreds of them. This means traditional Application Performance Monitoring (APM) and logging strategies may not be enough for microservice-based applications. Service mesh can help solve some of these emerging observability problems, and can also support automated deployment/update models with traffic flow control and canary updates.
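As a rough illustration of the canary pattern a service mesh automates, the Go sketch below routes a small, configurable percentage of requests to a new service version while the rest continue to hit the stable one. The service names and weight are made up; in a real mesh, the sidecar proxies apply this split per request based on declarative routing rules rather than application code.

```go
// Sketch of weight-based canary routing: send ~N% of requests to the new
// version and the rest to the stable one. In practice a service mesh data
// plane does this transparently; this only illustrates the behavior.
package main

import (
	"fmt"
	"math/rand"
)

// route picks a backend version for one request given a canary weight (0-100).
func route(canaryWeight int) string {
	if rand.Intn(100) < canaryWeight {
		return "reviews-v2" // canary (hypothetical service name)
	}
	return "reviews-v1" // stable
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[route(10)]++ // shift roughly 10% of traffic to the canary
	}
	fmt.Println(counts) // roughly 900 stable / 100 canary
}
```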

When some of these applications get deep into their production cycle, a common requirement is Disaster Recovery/High Availability (DR/HA). Today, most application DR/HA is limited to a single K8s cluster: the cluster can run on fault-tolerant infrastructure, such as worker nodes spread across multiple availability zones, and the application’s microservices can run several replicas to avoid a single point of failure. However, if the application needs DR/HA across locations, it will need to be deployed across multiple clusters in different environments. Service mesh can also help orchestrate and secure such east-west and north-south traffic across cluster boundaries at the application services level. It can even help application services securely connect to external services such as cloud-hosted PaaS services or existing on-prem services. A modern container strategy should therefore not only handle K8s infrastructure, but also take into account the application lifecycle across multiple clusters and environments.
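For a flavor of what cross-location DR/HA means at the application level, here is a simplified Go sketch that health-checks a service in a primary cluster and fails over to a secondary cluster in another region. The endpoints are hypothetical, and in practice a service mesh or global load balancer would perform this health checking and traffic steering.

```go
// Simplified sketch of application-level failover across two clusters in
// different locations. Endpoints are hypothetical; real deployments delegate
// this to a mesh gateway or global load balancer.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthy returns true if the endpoint answers its health check in time.
func healthy(endpoint string) bool {
	client := http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	primary := "https://checkout.us-east.example.com"   // cluster in region A (hypothetical)
	secondary := "https://checkout.eu-west.example.com" // cluster in region B (hypothetical)

	target := primary
	if !healthy(primary) && healthy(secondary) {
		target = secondary // fail over when the primary cluster is unreachable
	}
	fmt.Println("routing traffic to:", target)
}
```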

3. Bare metal and edge deployments

Besides data centers and public clouds, we are starting to see more and more enterprises exploring running K8s on bare metal machines. This reduces complexity and operational cost (especially when hypervisor licenses are taken into account), while offering better performance and capabilities for specific use cases: running K8s on bare metal gives applications direct access to physical devices such as GPUs, SmartNICs, high-IOPS storage or crypto ASICs, which can yield performance improvements of 5-10% in some cases. With containers already doing a decent job of application isolation, further isolation via hypervisors and VMs becomes unnecessary. Running K8s on bare metal also paves the way to a converged solution in which both containers and VMs are managed by K8s (aka container-native virtualization), avoiding running VMs with nested virtualization and its adverse performance impact.
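As an example of the direct device access mentioned above, the Go sketch below builds a pod spec that requests one GPU through the extended resource name advertised by a device plugin (assumed here to be nvidia.com/gpu); the pod and image names are illustrative only, not a recommended configuration.

```go
// Sketch of a pod spec requesting direct access to a physical GPU on a bare
// metal node, via the extended resource exposed by a device plugin.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inference-worker"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "model-server",
				Image: "example.com/model-server:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// Extended resource advertised by the GPU device plugin.
						corev1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
					},
				},
			}},
		},
	}

	gpus := pod.Spec.Containers[0].Resources.Limits["nvidia.com/gpu"]
	fmt.Printf("%s requests %s GPU(s)\n", pod.Name, gpus.String())
}
```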

Furthermore, with more and more data being generated and processed at the network edge, it is neither efficient nor economical to send it back to the public cloud or an on-prem data center: the latency and bandwidth constraints can lead to serious user experience problems. And beyond the classic futuristic example of self-driving cars, we are seeing more K8s clusters deployed at edge locations such as smart retail stores, restaurants, airports, cruise ships, hospitals, oil & gas fields and 5G base stations. These edge locations by design may not have dedicated IT personnel to deploy and manage K8s, and may not have reliable or fast internet access, so a flexible, centralized K8s management approach that can manage the complete stack of (preferably bare metal) clusters the same way as non-bare-metal ones is very quickly becoming a real market demand.

In the coming months, we will be talking more about multi-cluster, bare metal and service mesh as we continue to enhance our own offering with new features. In the meantime, don’t forget to download our newly published 2021 Kubernetes adoption report and register for our upcoming webinar.

Tags:
Enterprise Scale
Bare Metal
Edge Computing
Networking