
Kubernetes turns 10: How it steered cloud-native computing for the last decade - and what's next

Like Linux, Kubernetes is a testament to the power of open-source collaboration and innovation. How would we manage without it?
Written by Steven Vaughan-Nichols, Senior Contributing Editor
rvbox/Getty Images

If you did away with Linux, the cloud, containers, or Kubernetes, you wouldn't recognize today's technology world. Linux is the operating system foundation for all of it; the cloud gives us access to all its applications and resources; containers are where those apps live; and Kubernetes orchestrates all the containers. Remove any one of them, and we're living and working in a more primitive realm.  

Few technologies have had as profound an impact on the ever-evolving landscape of cloud-native computing as Kubernetes. As it celebrates its 10th anniversary, Kubernetes stands as a testament to the power of open-source collaboration and innovation. From its humble beginnings at Google to becoming the de facto standard for container orchestration, Kubernetes has transformed how we deploy, manage, and scale applications. 

You don't have to take my word for it. In Pure Storage's recently released The Voice of Kubernetes Experts Report 2024, the company found that "over the next five years, 80% of new applications will be built in cloud-native platforms." I'm surprised it's that low.  

You see, Kubernetes has changed how we do computing. As Liz Rice, chief open source officer at Isovalent, an eBPF-based networking, security, and observability company, told me, it has fundamentally changed how we approach networking and security: 

Kubernetes is fundamentally dynamic. Pods can scale up and down in response to demand, and workloads can be scheduled and rescheduled onto different machines. So, although networking between Kubernetes workloads uses IP packets, IP addresses are only meaningful in the short term because they get used and reused for different workloads at different times. This means traditional networking and security tools that identify traffic based on ports and IP addresses are no longer sufficient. We need tooling that maps ephemeral IP addresses to meaningful Kubernetes identities, such as pods, services, namespaces, and nodes.
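
To make that concrete, here is a minimal sketch of identity-based policy using the official Kubernetes Python client. The namespace, labels, and port are illustrative placeholders, and Cilium itself goes further with its own eBPF-powered policy resources; the point is simply that the rule selects workloads by labels and namespaces, never by IP address.

```python
# Minimal sketch: a NetworkPolicy that identifies traffic by Kubernetes
# identity (labels, namespaces) rather than by IP addresses.
# Assumes the official "kubernetes" Python client and a working kubeconfig;
# the namespace, labels, and port below are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        # Which pods this policy protects: selected by label, not by address.
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # Who may connect: again an identity (label selector), not an IP.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```

Because the selectors name workloads rather than addresses, the policy keeps working no matter how often pods are rescheduled and their IPs reused.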

Let's journey through the past decade to understand where Kubernetes started, how it's shaped the cloud-native world, and what lies ahead.

Kubernetes' genesis

The story of Kubernetes begins in the early 2010s at Google, where engineers were grappling with the challenges of managing large-scale containerized applications. Everyone recognized how important containers were and that we needed a way to manage them. 

Inside Google, engineers already knew how important orchestrating containers was. After all, Google had been using containers long before Docker made them popular. When Google engineers Craig McLuckie, Joe Beda, and Brendan Burns first pitched the idea in 2013 to Urs Hölzle, then Google's head of technical infrastructure, he replied, "So let me get this straight. You want to build an external version of the Borg task scheduler. One of our most important competitive advantages. The one we don't even talk about externally. And, on top of that, you want to open-source it?"

Yes, yes, they did. Eventually, they persuaded Hölzle it was a good idea.

Why? McLuckie explained:  

We always believed that open-sourcing Kubernetes was the right way to go, bringing many benefits to the project. For one, feedback loops were essentially instantaneous -- if there was a problem or something didn't work quite right, we knew about it immediately. But most importantly, we were able to work with lots of great engineers, many of whom really understood the needs of businesses who would benefit from deploying containers. It was a virtuous cycle: the work of talented engineers led to more interest in the project, which further increased the rate of improvement and usage.

So it was that in early June 2014, at the first DockerCon, "The Container Orchestration War" began. Apache Mesos, Red Hat's GearD, Docker Libswarm, Facebook's Tupperware, and Kubernetes were all announced. As Brad Rydzewski, then the founder of Drone.io, said: "What I learned at #dockercon: Everyone is building their own orchestration platform. Seriously. Everyone." 

Rydzewski wasn't wrong. More orchestration programs quickly followed.

Even in those early days, though, I thought Kubernetes would be the clear winner. Since it had been inspired by Google's Borg container management program, which had been used since 2003, it had a maturity the other programs lacked. 

Kubernetes quickly gained traction. The name "Kubernetes" comes from the Greek word for "helmsman" or "pilot," symbolizing its role in steering containerized applications. The Kubernetes logo, a seven-spoked ship's wheel, pays homage to the project's Borg heritage and its original name, Seven of Nine (a friendly Borg from Star Trek), which was dropped for obvious trademark reasons. 

Rapid adoption and community growth

Kubernetes' open-source nature and robust feature set made it an instant hit among developers and enterprises. By 2015, Kubernetes had reached version 1.0, and Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF), with Kubernetes as its seed technology. This move was pivotal in fostering a vibrant community around Kubernetes, leading to rapid innovation and widespread adoption.

Other container orchestration programs are still with us, but when Amazon Web Services (AWS) announced Elastic Container Service for Kubernetes (EKS) in 2017, everyone could read the writing on the wall: Kubernetes would dominate the cloud-native world. 

Meanwhile, the CNCF nurtured the Kubernetes ecosystem. Today, hundreds of cloud-native programs depend on Kubernetes, and no major cloud provider runs without it. It has become the go-to container orchestration platform.

Transforming cloud-native development

Kubernetes' impact on cloud-native development cannot be overstated. It introduced a new paradigm for deploying and managing applications, enabling developers to focus on writing code rather than worrying about infrastructure. Kubernetes abstracts away the complexities of container orchestration, providing features like automated rollouts and rollbacks, self-healing, and horizontal scaling.
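
In practice, a developer declares the desired state, say, "run three replicas of this container and roll out new versions without downtime," and Kubernetes' controllers do the rest, restarting failed pods and swapping them out gradually during upgrades. Here is a minimal, illustrative sketch using the official Kubernetes Python client; the names and container image are placeholders.

```python
# Illustrative sketch: declaring desired state with the official Kubernetes
# Python client. Kubernetes' controllers then handle self-healing (keeping
# three replicas alive) and automated rolling updates. Names and the image
# are placeholders.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: Kubernetes keeps three pods running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(max_surge=1, max_unavailable=0),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.27")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Horizontal scaling is just as declarative: raise the replica count (or attach a HorizontalPodAutoscaler) and Kubernetes reconciles reality to match.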

Another key Kubernetes advantage is its portability. Applications deployed on Kubernetes can run on any cloud provider or on-premises infrastructure, making it an ideal choice for hybrid and multi-cloud environments. Indeed, the hybrid cloud lives and dies by Kubernetes. This flexibility has been a game-changer for enterprises, allowing them to avoid vendor lock-in and optimize their cloud strategies.

Over the years, Kubernetes has spawned a rich ecosystem of tools and projects that extend its capabilities. These include Helm, the Kubernetes package manager that simplifies application deployment and management with reusable charts, and Prometheus, the monitoring and alerting toolkit that has become the standard way to keep an eye on Kubernetes environments.
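
To give a flavor of what a chart buys you, the sketch below drives the Helm CLI from Python purely for illustration. It assumes Helm is installed and uses the public Bitnami nginx chart, with its replicaCount value standing in for any chart parameter you might override.

```python
# Illustration only: installing a parameterized, reusable Helm chart.
# Assumes the Helm CLI is installed; the Bitnami repository, chart name,
# and replicaCount value are examples from one popular public chart.
import subprocess

# Register the chart repository.
subprocess.run(
    ["helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami"],
    check=True,
)

# Install (or upgrade) a release, overriding one of the chart's default values.
subprocess.run(
    ["helm", "upgrade", "--install", "my-web", "bitnami/nginx",
     "--set", "replicaCount=2"],
    check=True,
)
```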

The rise of Kubernetes has also given birth to new paradigms like GitOps, which leverages Git as the single source of truth for declarative infrastructure and application management. 
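
At its heart, GitOps is a reconciliation loop: watch a Git repository of declarative manifests and keep the cluster in sync with it. Production tools such as Argo CD and Flux do this far more robustly, but a deliberately simplified sketch of the idea, with placeholder repository and manifest paths, might look like this:

```python
# Deliberately simplified GitOps loop: Git is the single source of truth,
# and the cluster is repeatedly reconciled to match it. Real tools (Argo CD,
# Flux) add health checks, pruning, RBAC, and drift detection.
import subprocess
import time

REPO_DIR = "/srv/gitops-repo"        # placeholder: local clone of the config repo
MANIFESTS = f"{REPO_DIR}/manifests"  # placeholder: directory of YAML manifests

while True:
    # Pull the latest declared state from Git.
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)

    # kubectl diff exits 0 when the cluster already matches, 1 when it drifts.
    drift = subprocess.run(["kubectl", "diff", "-f", MANIFESTS])
    if drift.returncode == 1:
        # Reconcile: apply whatever Git says the cluster should look like.
        subprocess.run(["kubectl", "apply", "-f", MANIFESTS], check=True)

    time.sleep(60)  # poll interval; real controllers also react to webhooks
```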

The future of Kubernetes

Looking ahead, Kubernetes shows no signs of slowing down. The platform continues to evolve, with new features and enhancements being added regularly. The Kubernetes community is exploring ways to simplify the user experience, improve security, and enhance scalability.

Ville Aikas, Chainguard co-founder and one of Kubernetes' creators, observed: 

We have this massive CNCF landscape that's bloomed, which is a wonderful thing in terms of all the diversity of tooling and infrastructure options it gives to platform teams. But I think it also creates a bunch of choices that have to be made in order to operate Kubernetes – and that landscape has gotten huge. I always felt that one of the core reasons Kubernetes became so popular was its Application Programming Interface (API) is so simple and that the cognitive load to use it is relatively low. As Kubernetes continues to mature, it needs to somehow retain the simplicity of its mental model and usability of its API.

That's easier said than done. Juggling Kubernetes and cloud-native programming paradigms has become increasingly difficult. 

As Shahar Azulay, CEO and co-founder of Groundcover, an eBPF performance monitoring company, said: 

Kubernetes has demonstrated its ability to manage diverse tasks effectively, but its complexity requires considerable setup and ongoing maintenance. Similar to how Linux developed into a reliable operating system, I expect Kubernetes to transform into a more user-friendly abstraction layer. As Kubernetes adoption continues to grow a decade in, the need for efficiency and cost optimization becomes increasingly critical.

Looking ahead, Isovalent's Rice said:

We're already seeing Kubernetes being used in more hybrid environments alongside legacy workloads and in edge devices. The vision of Cilium [the open-source, eBPF-based cloud-native networking, observability, and security project] is that an application developer should not need to know or care where the services they want to interact with are running: connectivity and security should all be handled in the platform layer.

Another exciting development on the horizon is the integration of Kubernetes with serverless computing. Projects such as Kubeless and Fission are bringing serverless capabilities to Kubernetes, allowing developers to build and deploy functions-as-a-service (FaaS) on top of their existing Kubernetes clusters. This fusion of serverless and Kubernetes promises to unlock new possibilities for cloud-native applications.

Edge computing and Kubernetes combos are also growing. As more devices and applications move to the edge, Kubernetes is being adapted to support edge deployments. The Kubernetes community is working on projects like KubeEdge, MicroK8s, and Red Hat Device Edge to enable lightweight, efficient Kubernetes clusters that can run on edge devices.

Kubernetes' future is bright. With ongoing innovation and a thriving ecosystem, Kubernetes is poised to keep shaping the cloud-native landscape for years to come. So raise a toast to a decade of Kubernetes, and here's to another 10 years of innovation, collaboration, and excellence in container orchestration.
