Can’t Docker work in Kubernetes anymore? Has K8s abandoned Docker?

Pankaj kushwaha
8 min read · Jan 12, 2021


I recently came across an article (a bit late, admittedly) published on the official Kubernetes blog on Dec 2, 2020, announcing that Docker container support would be phased out in next year’s releases. Here is the link:

https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/

The year 2013
In 2013, Docker was publicly unveiled at PyCon for the first time. It brought a new method of software delivery: shipping software as container images. To build their own image, engineers only need a simple docker build command and a docker push to publish it to Docker Hub; a simple docker run then starts their service from the specified image.
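The workflow described above boils down to three commands (the image and account names here are placeholders, not from the original article):

```shell
# Build an image from a Dockerfile in the current directory
docker build -t myuser/myapp:1.0 .

# Publish it to Docker Hub (requires docker login first)
docker push myuser/myapp:1.0

# Run a container from that image anywhere Docker is installed
docker run -d -p 8080:80 myuser/myapp:1.0
```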

In this way, the problems caused by differences in software runtime environments can be solved efficiently, achieving the goal of “Build once, run anywhere.”

Since then, Docker has become largely synonymous with containers and has become the container era’s king.

The year 2014
In 2014, Kubernetes was introduced by Google to solve the Docker container orchestration issue in large-scale scenarios.

This was a natural choice: Docker was the most popular, and essentially the only, container runtime at the time. Through its support for the Docker container runtime, Kubernetes attracted a large number of users.

At the same time, Google and the Kubernetes community worked closely with Docker. The official blog includes the following:

We will continue to build out the feature set, while collaborating with the Docker community to incorporate the best ideas from Kubernetes into Docker.

An update on container support on Google Cloud Platform[1]

Kubernetes is an open source manager for Docker containers, based on Google’s years of experience running containers at Internet scale. Docker provides the full container stack that Kubernetes schedules into, and aims to move critical functionality upstream and align libswarm with the Kubernetes framework.

Welcome Microsoft, RedHat, IBM, Docker and more to the Kubernetes community[2]

In the same month, Google also gave a talk at DockerCon introducing Kubernetes, which received widespread attention.

Docker Inc. also launched its own container orchestration tool, libswarm (later swarmkit), around that time.

The year 2015
In 2015, Docker and other leaders in the container industry jointly founded the Open Container Initiative (OCI), a project under the Linux Foundation.

OCI maintains two main specifications:

Container runtime specification (runtime-spec): how to run a container from a filesystem bundle unpacked on disk.

Container image specification (image-spec): how to package a container image into a filesystem bundle that an OCI runtime can run.

As a founding member, Docker donated its own container image format and its runtime (now runc) to the OCI.
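As an illustration, a container can be run directly with the OCI reference runtime runc, without Docker, from a filesystem bundle (the directory layout and use of busybox here are my own example, not from the article):

```shell
# An OCI bundle is just a directory containing a rootfs and a config.json
mkdir -p mycontainer/rootfs

# Populate the rootfs, e.g. by exporting an existing image's filesystem
docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -

cd mycontainer
runc spec            # generate a default OCI runtime-spec config.json
sudo runc run demo   # run the bundle as a container named "demo"
```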

The year 2016
Docker v1.12 was released in June 2016, bringing with it Swarm mode, Docker’s built-in solution for multi-host, multi-container orchestration. It is worth noting the design principles of Docker v1.12:

Simple yet powerful

Resilient

Secure

Optional features and backward compatibility

Thus, you can choose whether to enable Swarm mode through configuration, without worrying about side effects.

In December 2016, Kubernetes released the CRI (Container Runtime Interface). Part of the motivation was that Kubernetes had tried to support another container runtime, rkt, a project led by CoreOS, but doing so required writing a lot of compatibility code, and maintaining a similar shim for every new runtime did not scale. So a unified CRI interface was released: any runtime that implements CRI can be used directly as Kubernetes’ underlying runtime.
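To get a feel for what CRI standardizes, crictl (a CLI for CRI-compatible runtimes) can talk to any such runtime over its socket; the socket path below is the containerd default and is an assumption about your node:

```shell
# Point crictl at the runtime's CRI endpoint
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

crictl version   # query the runtime via CRI's Version RPC
crictl pods      # list pod sandboxes
crictl ps        # list containers
crictl images    # list images via the CRI ImageService
```

The same commands work unchanged against cri-o or any other CRI runtime, which is exactly the point of the interface.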

Of course, the container orchestration war was also eventually won by Kubernetes in 2016.

The year 2017
In 2017, Docker donated containerd[3], its core container runtime that had been split out after Docker v1.11, to the CNCF[4].

In 2017, Docker’s network component libnetwork added CNI support; at the same time, Kubernetes introduced IPVS-based service load balancing, reusing the ipvs-related code Docker had written for Docker Swarm[5]. In v1.18, however, these dependencies were removed.

Containerd support was added to Kubernetes[6] in November of the same year.

The year 2018
Kubernetes’ containerd integration officially went GA[7] in 2018.

The previous architecture (cri-containerd 1.0):

[Figure: cri-containerd 1.0 architecture]

The new architecture (containerd 1.1, with CRI support built into containerd as a plugin):

[Figure: containerd 1.1 architecture]

The year 2019
In 2019, rkt, the other container runtime mentioned above, was archived by the CNCF and ended its mission; in the same year, Mirantis acquired Docker’s enterprise services business.

The year 2020


Kubernetes announced the start of the countdown to removing dockershim, which was widely mistaken to mean that Docker could no longer be used.


Summary
Kubernetes chose Docker as its container runtime because there was no other choice at the time, and choosing Docker brought it more users. So it provided built-in support for the Docker runtime from the beginning.

In fact, when Docker was created, “orchestration” was not taken into account, and of course neither was Kubernetes, because it did not yet exist.

Dockershim has always been a compatibility shim maintained by the Kubernetes community to make Docker work as a supported container runtime. The so-called abandonment this time just means that Kubernetes will stop maintaining dockershim in the Kubernetes code repository, so that it only needs to maintain the CRI as originally intended, and any CRI-compatible runtime can serve as the Kubernetes runtime.

When Kubernetes proposed the CRI, someone suggested that Docker implement it. But this poses a dilemma: even if Docker implemented CRI, it still would not be a pure container runtime, since it includes a significant number of features beyond a “pure underlying container runtime.”

So containerd, the low-level container runtime separated out of Docker, emerged. It is a leaner choice as the Kubernetes container runtime.

Docker itself uses containerd as its underlying container runtime, and in production many cloud providers and companies use containerd as their Kubernetes runtime, which has verified containerd’s stability firsthand.

Now both the Kubernetes and Docker communities agree that containerd is mature enough to be used directly as Kubernetes’ runtime, without going through dockershim and Docker. This also signals that Docker’s pledge to provide Kubernetes with a modern container runtime has finally been fulfilled.

And what happens to dockershim after this? It will be removed from the Kubernetes code repository in a future version, but Mirantis has entered into a partnership with Docker[8] and will jointly maintain the dockershim feature going forward, to keep supporting Docker as a Kubernetes container runtime.

In other words, if you are using the open source Docker Engine, the dockershim project will be available as an open source component, and you will be able to continue using it with Kubernetes; it will only involve a minor configuration change, which will be documented.
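For a node migrated to containerd, the “minor configuration change” essentially amounts to pointing the kubelet at a different CRI endpoint. A sketch, assuming kubelet flags of that era and containerd’s default socket path:

```shell
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  ...
```

Managed Kubernetes services typically make this switch for you when you select a containerd node image.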

Q&A
Q: What is the effect of the abandonment of dockershim maintenance by Kubernetes this time?

A: There is no impact for ordinary users, and not much impact for engineers who build on top of Kubernetes. Cluster administrators need to decide whether to switch their container runtime to a CRI-compatible one, such as containerd, in future versions. If you don’t want to change runtimes, that’s fine too: Mirantis and Docker will jointly maintain dockershim and provide it as an open source component.

Q: Can Docker no longer be used?

A: Docker is still an excellent container platform for local development or standalone deployment. It offers a friendly user interface and a rich feature set. Docker has also entered into a partnership with AWS, so the Docker CLI can interact directly with AWS. In addition, Docker can still be used as a Kubernetes container runtime; its support is not being dropped immediately.

Q: I heard Podman might take this chance to take the lead?

A: Think again. Podman does not implement CRI, so it does not qualify as a Kubernetes container runtime. I use Podman personally, and we also provide support for Podman in the KIND project, but frankly it is just a CLI tool, useful in certain cases: for example, it can be used for local debugging when your Kubernetes runtime is cri-o.

To summarize:
This article mainly reviewed the history of Docker and Kubernetes, and explained that Kubernetes is only dropping support for the dockershim component this time. Going forward, the recommended Kubernetes runtime is a CRI-compatible low-level runtime such as containerd.

Mirantis, together with Docker, will maintain dockershim and offer it as an open source component.

Docker is still a great tool to build, test, and deploy locally.

Reference
[1]An update on container support on Google Cloud Platform: https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html

[2]Welcome Microsoft, RedHat, IBM, Docker and more to the Kubernetes community: https://cloudplatform.googleblog.com/2014/07/welcome-microsoft-redhat-ibm-docker-and-more-to-the-kubernetes-community.html

[3]containerd: https://github.com/containerd/containerd/

[4]CNCF website: https://www.cncf.io/

[5]Add IPvs load balancing in K8S: https://github.com/kubernetes/kubernetes/pull/46580

[6]Containerd Brings More Container Runtime Options for Kubernetes: https://kubernetes.io/blog/2017/11/containerd-container-runtime-options-kubernetes/

[7]Kubernetes Containerd Integration Goes GA: https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/

[8]Mirantis to take over support of Kubernetes dockershim: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/

I like to learn new and better ways of doing things when working at scale; feel free to ask questions and make suggestions.
Also, check out another story on this.
Thanks for reading.


Written by Pankaj kushwaha

Database/System Administrator | DevOps | Cloud Specialist
