Find out the most important concepts about Kubernetes

Whenever I hear someone say that the Cloud is 'just someone else's computer', it brings a smile to my face, so much so that this was one of the first points I chose for my article on 'Cloud Myths'.

The truth is, yes: excluding the case of private clouds, the cloud obviously includes the infrastructure of a third party. However, the mistake is not in saying that the cloud is someone else's computer, but in saying it is just someone else's computer.

By its very concept, associating the cloud, with its endless possibilities, with the word 'just' seems quite contradictory. The cloud enables real innovation, so much so that it has completely changed the way companies structure technology and do business.

Not long ago, who could have imagined that the same tools used by large corporations would be available, and financially viable, for small and medium-sized businesses? That critical resources such as servers could be just a click away and available in a matter of minutes, enabling on-demand provisioning, elasticity, and automation?

Well, going back to the central question of this article: do you know what Kubernetes is and why it is important? To answer this, we first must understand another key concept: Linux containers.

Containers are sets of processes isolated from the rest of the system. A container creates a complete execution environment, bundling an application and all its dependencies, such as libraries, other binaries, and the configuration files required to run it, into one package. By isolating the application in this way, differences between operating system distributions, and even between infrastructures, can be abstracted away.

It's important not to confuse containers with virtualization. For example, a physical server running five virtual machines would have one hypervisor and five separate operating systems running on it. In the case of a server running applications in containers, there would be a single operating system, and each container shares the operating system's kernel with the others, even though they remain individually isolated. What does this mean in practical terms? Well, while a container may be just over 10 megabytes in size, a virtual machine, which will typically include an entire operating system, will probably require several gigabytes.

Containers have many benefits. For example, while a virtual machine takes several minutes to launch and load its applications, containerized applications can be instantiated almost immediately. Another great advantage is the modular approach containers make possible: there is no need to run a complex application entirely within a single container; it is perfectly possible to split the application into modules, for example by separating the database from the front end. This type of architecture is known as microservices, and applications developed using this model are much simpler to manage. Since each module is relatively simple and has well-defined interfaces and operations, it is perfectly possible to update each module individually without having to rebuild the application as a whole.

But what about Kubernetes?

To put it simply, containers are extremely practical and are being extensively adopted by organizations that want more agility or that are moving toward a DevOps approach. This ease of use ends up creating a challenge: once you start using them, containers can grow in number very quickly, multiplying at an astonishing speed!

This is where Kubernetes comes in. Originally created and developed by Google engineers, it is an open-source platform used to orchestrate and manage container clusters, eliminating most of the manual processes required to deploy and scale containerized applications. In other words, when it is necessary to organize Linux containers into clusters, which can run on public, private, or hybrid clouds, Kubernetes helps manage those clusters easily and effectively.

In fact, Kubernetes organizes containers into groups called 'pods', which solves many of the problems related to their proliferation. Pods create an extra layer of abstraction, making it much easier to control the workload and to provide the services needed for the containers' operation, such as networking and storage.
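To make this concrete, here is a minimal sketch of what a pod looks like when declared in YAML. The names and the container image are hypothetical, chosen only for illustration; any image would work:

```yaml
# A minimal, hypothetical pod manifest: one pod wrapping a single
# container and exposing port 80 inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # hypothetical pod name
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx:1.25 # hypothetical image; any container image works here
      ports:
        - containerPort: 80
```

Even in this tiny example you can see the abstraction at work: the pod describes what should run, while Kubernetes decides where and how it runs.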

What can I do with Kubernetes?

If you've understood the modular approach used by containers, it's easy to imagine that a production environment will include multiple containers distributed over multiple hosts. With the orchestration power of Kubernetes, it is much simpler to create and manage application services spanning multiple containers, to schedule how those containers run across the cluster, to scale them, and to manage their health over time. Of course, Kubernetes also allows integration with security, networking, storage, monitoring, and other services.

Amongst other functionalities, Kubernetes allows you to:

  • Orchestrate containers on multiple hosts, on public, private, or hybrid clouds.
  • Optimize hardware use, maximizing the availability of resources to run the applications.
  • Scale container applications and related resources with greater agility.
  • Manage and automate most deployments and application updates.
  • Ensure application health and self-healing for containers, with automatic placement, restart, replication, and scaling.

Some important concepts

If you want to understand how the structure of Kubernetes works, you will of course need to understand some specific concepts and terms; a complete glossary is available in the official Kubernetes documentation. The most important ones are:

  • API Server: An essential component, the API server serves the Kubernetes API using JSON over HTTP, providing both the internal and external interface to Kubernetes. It processes and validates REST requests and updates the state of API objects in etcd, thereby allowing clients to configure workloads and containers on worker nodes.
  • Controller Manager: The process that runs the major Kubernetes controllers, such as the DaemonSet Controller and the Replication Controller. The controllers communicate with the API server to create, update, and delete the resources that they manage, such as pods.
  • Scheduler: The component that selects which node an unscheduled pod (the basic entity managed by the Scheduler) runs on, based on resource availability. The Scheduler tracks resource usage on each node to ensure that the scheduled workload does not exceed the available resources. To this end, the Scheduler must be aware of resource requirements, resource availability, and other restrictions and policy directives provided by the user, such as quality-of-service (QoS) requirements, data locality, and so forth. In essence, the Scheduler's function is to match the "supply" of resources with the "demand" of the workload.
  • Node: A node is a worker machine in Kubernetes. A node can be a virtual or physical machine, depending on the cluster. It has the services necessary to run pods and is managed by the control plane components. Services on a node include a container runtime (such as Docker), the kubelet, and kube-proxy.
  • Pod: The smallest and simplest object in Kubernetes. A pod represents a set of running containers on your cluster.
  • ReplicaSet: The next generation of the ReplicationController, a ReplicaSet ensures that a specified number of pod replicas are running at any given time.
  • Kubelet: An agent that runs on each node in the cluster. It ensures that the containers described in a pod are running.
  • Kubectl: A command-line tool for communicating with a Kubernetes API server.
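Several of these pieces come together in a single short manifest. As a sketch (the names and image below are hypothetical), a ReplicaSet keeping three replicas of a pod running could be declared like this:

```yaml
# Hypothetical ReplicaSet: the Controller Manager keeps three identical
# pod replicas running; the Scheduler places each pod on a node, and
# the kubelet on that node starts its containers.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs            # hypothetical name
spec:
  replicas: 3             # desired number of pod replicas
  selector:
    matchLabels:
      app: web            # pods with this label belong to the ReplicaSet
  template:               # pod template used to create the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # hypothetical image
          ports:
            - containerPort: 80
```

Submitting this file to the API server with `kubectl apply -f` illustrates the division of labor described above: if one of the three pods dies, the controller notices the drift from the desired state and creates a replacement automatically.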

Trivia time: 7 of 9’s role in Kubernetes origin

As a good Trekker, I would never pass up the opportunity to talk about how a beloved character influenced such an important technology. At its origin at Google, Kubernetes had another codename! It was Project Seven, a reference to Seven of Nine from Star Trek: Voyager, a former Borg drone who, after some initial conflicts with Captain Janeway, joined Voyager's crew and became one of its top officers.


Cláudio Dodt

Cloud Evangelist in Brazil