What should you do now that Kubernetes is abandoning Docker?
I recently came across an article (a bit late, admittedly) published on the official Kubernetes blog on December 2nd, announcing that Docker container support would be phased out in an upcoming release. Here is the link:
https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/
It means the built-in dockershim is about to be removed from Kubernetes. Dockershim is the component inside the kubelet that provides CRI support for Docker. In the future, if you want to keep using Docker with Kubernetes, you will need to run dockershim externally or switch to another CRI implementation.
Now, a word about Podman. Podman is a container management tool from Red Hat, similar to Docker but without a daemon. It offers a command-line front end whose instructions closely mirror Docker's: the vast majority of its CLI is identical to the Docker CLI, so you can basically use Docker commands with Podman.
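Because the two CLIs line up, migration is often just textual substitution. As an illustrative sketch (the `docker_to_podman` helper below is hypothetical, not a Podman tool):

```shell
# Hypothetical helper: rewrite a docker invocation as the equivalent podman
# one. Since the two CLIs share subcommands and flags for most operations,
# a plain textual substitution usually suffices.
docker_to_podman() {
  printf '%s\n' "$1" | sed 's/^docker /podman /'
}

docker_to_podman "docker run -d -p 8080:80 nginx"
# prints: podman run -d -p 8080:80 nginx
```

In practice, many users simply set `alias docker=podman` and carry on.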
CentOS 8, released in 2019, ships with Podman pre-installed as the replacement for Docker.
Of course, it will take time to see whether Podman is universally adopted by the market. After all, in server operations and maintenance, reliability has always been the primary concern. Just look at how long CentOS 8 has been out, while many businesses are still running CentOS 6.
Podman's official website: https://podman.io/
Kubernetes recently announced in its latest changelog that it will deprecate Docker as a container runtime starting with Kubernetes 1.20. The news made quite a splash in the cloud-native community, and it sparked heated discussion among several members of the Rancher technology group as well.
Why does Kubernetes plan to forsake Docker? To answer that, we first need a brief look at dockershim.
Dockershim is a bridge service that lets Kubernetes communicate with Docker. The kubelet uses dockershim to provide CRI support for Docker (Docker itself does not implement the CRI). But to this day, maintaining dockershim has been a heavy burden for operators and developers. The Kubernetes team therefore suggests that you consider moving to a container runtime that provides a complete CRI implementation (compatible with v1alpha1 or v1). As a consequence, built-in support for Docker as a container runtime is being removed.
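To make the role of the CRI concrete, here is a conceptual Python sketch of the two service surfaces the kubelet talks to. The method names mirror the real CRI gRPC services (RuntimeService and ImageService), but the simplified signatures and the in-memory FakeRuntime are illustrations of mine, not the actual protobuf API.

```python
from abc import ABC, abstractmethod
import itertools

class ImageService(ABC):
    @abstractmethod
    def pull_image(self, image_ref: str) -> str:
        """Pull an image and return its id."""

class RuntimeService(ABC):
    @abstractmethod
    def run_pod_sandbox(self, pod_name: str) -> str:
        """Create the pod's isolation environment, return a sandbox id."""
    @abstractmethod
    def create_container(self, sandbox_id: str, image_id: str) -> str: ...
    @abstractmethod
    def start_container(self, container_id: str) -> None: ...

# Toy in-memory runtime: any implementation of these interfaces
# (containerd, CRI-O, ...) can sit behind the kubelet's CRI socket.
class FakeRuntime(ImageService, RuntimeService):
    def __init__(self):
        self._ids = itertools.count(1)
        self.running = set()
    def pull_image(self, image_ref):
        return f"img-{next(self._ids)}"
    def run_pod_sandbox(self, pod_name):
        return f"sandbox-{next(self._ids)}"
    def create_container(self, sandbox_id, image_id):
        return f"ctr-{next(self._ids)}"
    def start_container(self, container_id):
        self.running.add(container_id)

# The kubelet's flow, reduced to its essence:
rt = FakeRuntime()
image = rt.pull_image("nginx:1.21")
sandbox = rt.run_pod_sandbox("web-pod")
ctr = rt.create_container(sandbox, image)
rt.start_container(ctr)
print(ctr in rt.running)  # → True
```

The point is that the kubelet never cares which runtime implements these calls; dockershim existed only because Docker did not speak this protocol natively.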
But don’t get too concerned. Below I answer some common questions:
1. If the kubelet no longer uses Docker as the container runtime, is there an alternative?
In a Kubernetes cluster, the container runtime is responsible for pulling and running container images. Docker is just one widely used container runtime. After Docker is deprecated, there are two common options: containerd and CRI-O.
Containerd is an industry-standard container runtime that is simple, robust, and portable. It can manage the complete container lifecycle on a host.
CRI-O is a container runtime launched by Red Hat that aims to provide an integration path between OCI-conformant runtimes and the kubelet. In the second half of this article we will compare containerd and CRI-O to guide your choice of container runtime.
2. Can I still use Docker in Kubernetes 1.20?
Yes. If you use Docker as the runtime in 1.20, the kubelet only prints a warning to its log at startup. Kubernetes will drop dockershim as early as version 1.23, released at the end of 2021.
3. Can my existing Docker images still be used?
Yes. Actually, the images Docker builds are not Docker-specific images; they conform to the OCI (Open Container Initiative) image specification. No matter which tool you use to build an image, any OCI-compliant image looks the same to Kubernetes. Both containerd and CRI-O can pull and run these images. So you can keep building images with Docker, and they will continue to work under containerd and CRI-O.
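To see why images are runtime-agnostic, look at what an image actually is: a JSON manifest plus content-addressed layers, as defined by the OCI image spec. The manifest below is a hand-written example (the digests are fake), but its mediaType fields are the ones the spec defines, and every CRI runtime reads this same structure:

```python
import json

# A minimal, hand-written OCI image manifest (fake digests). Docker, containerd
# and CRI-O all consume this same structure; nothing in it is Docker-specific.
manifest_json = """
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
    "size": 1469
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:1111111111111111111111111111111111111111111111111111111111111111",
      "size": 2811478
    }
  ]
}
"""

manifest = json.loads(manifest_json)
print(manifest["mediaType"])    # the OCI manifest media type
print(len(manifest["layers"]))  # → 1
```

Any runtime that understands these media types can fetch the config and layers by digest and assemble the container filesystem; which tool produced the manifest is irrelevant.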
4. Which CRI implementation should I use?
This is a more complex question that depends on many variables. If you are already proficient with Docker, moving to containerd should be a reasonably easy option, and containerd offers better performance and lower overhead. Of course, you can also explore other projects in the CNCF landscape to find a better fit.
eBay ran a series of performance tests on containerd and CRI-O, comparing the time taken to create, start, stop, and remove containers. As the figure shows, containerd performs well in every category except container start, and its overall time is shorter than CRI-O's.
5. What should I look out for when changing CRI implementations?
Although the underlying containerization code is the same between Docker and most CRIs (including containerd), there are a few differences around the edges. Some common items to keep in mind when migrating:
Logging configuration
Runtime resource constraints
Node provisioning scripts that call docker or use Docker through its control socket
Kubectl plugins that call the docker CLI or the Docker control socket
Kubernetes tools that require direct access to Docker (e.g. kube-imagepuller)
Docker configuration options such as registry mirrors and insecure registries
Other support scripts or daemons that expect Docker to be available outside of Kubernetes (e.g. monitoring or security agents)
GPUs or special hardware, and how they integrate with Kubernetes and your runtime
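One way to work through the checklist above is to grep your provisioning scripts and manifests for direct Docker dependencies before migrating. The helper below is a hypothetical sketch of mine, not an official tool:

```shell
# Hypothetical audit helper: list files under a directory that reference the
# Docker CLI or the Docker control socket, both of which disappear once the
# node runtime is containerd or CRI-O.
find_docker_refs() {
  grep -rl -e '/var/run/docker\.sock' -e '\bdocker ' "$1" 2>/dev/null
}

# Usage: find_docker_refs /etc/kubernetes
```

Anything this turns up (log shippers mounting the Docker socket, scripts shelling out to docker) needs an equivalent under the new runtime.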
If you use Kubernetes resource requests/limits or file-based log collection via DaemonSets, they will continue to work the same way; but if you have customized your Docker configuration, you will need to adapt it for your new container runtime where possible.
Another thing to look out for is anything that is expected to run for system maintenance, or to build images from inside a container: these will no longer work. For the former, you can use the crictl tool as a drop-in substitute (see the mapping from docker CLI to crictl); for the latter, you can use newer image build options that do not need Docker, such as img, buildah, kaniko, or buildkit-cli-for-kubectl.
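As a quick sketch of that docker-to-crictl mapping (a hypothetical lookup helper covering only a few common subcommands; consult the official mapping for the full list):

```shell
# Hypothetical lookup: map a few common docker subcommands to their crictl
# equivalents. Note that image *building* has no crictl counterpart; use a
# dedicated builder such as buildah, kaniko, or img instead.
crictl_equivalent() {
  case "$1" in
    ps)      echo "crictl ps" ;;
    images)  echo "crictl images" ;;
    logs)    echo "crictl logs" ;;
    exec)    echo "crictl exec" ;;
    inspect) echo "crictl inspect" ;;
    build)   echo "(none - use buildah/kaniko/img/buildkit)" ;;
    *)       echo "unknown" ;;
  esac
}

crictl_equivalent ps     # prints: crictl ps
crictl_equivalent build  # prints the no-equivalent note
```

The day-to-day debugging commands carry over almost one-to-one; it is the build workflow that genuinely changes.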
If you are migrating to containerd, start with its documentation to see which configuration options are available as you move things over.
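For reference, the single change most Docker-to-containerd migrations need is enabling the systemd cgroup driver, so that containerd matches the kubelet's cgroupDriver setting. A minimal sketch of the relevant fragment of /etc/containerd/config.toml (section names follow the containerd 1.4/1.5-era layout; verify against your installed version):

```toml
# Fragment of /etc/containerd/config.toml (version 2 config format).
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Match the kubelet's cgroup driver (cgroupDriver: systemd).
  SystemdCgroup = true
```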
See the Kubernetes container runtimes documentation for guidance on using containerd and CRI-O with Kubernetes.
I like learning new and better ways of doing things when working at scale, so feel free to ask questions and make suggestions.
Also, check out another story on this.
Thanks for reading this.