Google’s Kubernetes Engine 1.10 platform for deploying production-ready containerized applications is now generally available.
The company on May 21 announced availability of the technology along with plans to soon introduce several related high-availability, management and automation features designed to help organizations better manage enterprise workloads on Kubernetes Engine environments.
“Here on the Kubernetes Engine team, we’ve been thinking about challenges such as security, networking, logging and monitoring that are critical to enterprises for a long time,” said Yoshi Tamura, product manager, Google Kubernetes Engine, in a blog post. So in parallel with the general availability of Kubernetes Engine 1.10, Google has decided to release several new features to support enterprise use cases, Tamura said.
Kubernetes is a Google-designed container management platform that the company released to the open source community in June 2014. It is designed to help organizations automate the deployment, scaling and management of containerized applications. The technology allows administrators to group, or cluster, all the containers that make up an application or service so they become easier to discover and manage.
Since the technology became available in June 2014, numerous enterprises have adopted Kubernetes to manage containerized applications. In the blog post, Tamura quoted an external analyst’s estimate that 54 percent of Fortune 100 enterprises from industries such as finance, media and manufacturing currently use Kubernetes.
Google Kubernetes Engine (GKE) is a hosted, fully managed version of the platform that organizations can use to manage their container environments. The new features that will soon become available with Kubernetes Engine 1.10 include a shared virtual private cloud (VPC) capability, new classes of persistent disks for high availability, and new automated scaling and repair functions.
Tamura described the Shared VPC feature as making it easier for multiple teams within a large organization to share physical resources in the cloud while maintaining a logical separation between the teams. Enterprise administrators can use the feature to delegate tasks—such as creating and managing virtual machine instances and container clusters—to project administrators while retaining full administrative control over network resources.
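As a rough sketch of what that delegation model looks like in practice — the project IDs here are hypothetical placeholders, and the exact workflow may differ for a given organization — a Shared VPC setup with the gcloud CLI might resemble:

```shell
# Enable Shared VPC on the host project that owns the network.
# Run by an organization-level admin; "host-proj" is a placeholder project ID.
gcloud compute shared-vpc enable host-proj

# Attach a team's service project ("team-a-proj", also a placeholder) so its
# project admins can create VM instances and container clusters that use the
# host project's network, while network control stays with the host project.
gcloud compute shared-vpc associated-projects add team-a-proj \
    --host-project host-proj
```

The team’s project administrators then create resources in their own project as usual; the enterprise network admin never has to hand over control of subnets, routes or firewall rules.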
Google will introduce support for high availability via new so-called Regional Persistent Disks that allow administrators to synchronously replicate data between two zones within a larger Google cloud region. The technology handles replication at the storage layer, eliminating the need for enterprises to implement it at the application level, Tamura said.
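A minimal sketch of creating such a disk with the gcloud CLI follows; the disk name, size and zone pair are illustrative, and depending on the release stage the command may require the beta component (`gcloud beta compute ...`):

```shell
# Create a regional persistent disk whose data is synchronously replicated
# across two zones of the us-central1 region ("ha-data-disk" is a placeholder).
gcloud compute disks create ha-data-disk \
    --region us-central1 \
    --replica-zones us-central1-a,us-central1-b \
    --size 200GB
```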
Also soon to be available is regional cluster support that will allow enterprises to spread their Kubernetes container cluster across up to three zones in a Google cloud region. The feature will be especially useful for organizations that want to deploy application clusters with very high uptime requirements, according to Google.
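A regional cluster can be requested at creation time by passing a region rather than a single zone; the sketch below uses a placeholder cluster name, and flag availability may depend on the gcloud release channel:

```shell
# Create a regional cluster with its control plane and nodes spread across
# the zones of us-central1 ("my-regional-cluster" is a placeholder name).
gcloud container clusters create my-regional-cluster \
    --region us-central1 \
    --num-nodes 1   # nodes per zone, so three zones yield three nodes total
```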
Meanwhile, a newly arriving Horizontal Pod Autoscaler feature will allow organizations to configure workloads that automatically scale based on specific trigger conditions. The new features will become available alongside Google Kubernetes Engine 1.10 in the coming weeks, Tamura said.
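The simplest form of this — a sketch, assuming an existing deployment named "web" (a placeholder) and CPU utilization as the trigger condition — attaches an autoscaler with kubectl:

```shell
# Attach a Horizontal Pod Autoscaler to an existing deployment so Kubernetes
# adds or removes replicas as average CPU utilization crosses 75 percent.
# "web" is a hypothetical deployment name.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=75

# Inspect the autoscaler's current target, utilization and replica count.
kubectl get hpa web
```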