Note: This article largely borrows from my previous writeup on installing K8s 1.7 on CentOS.

Getting a local K8s cluster up and running is one of the first "baby steps" of toddling into the K8s ecosystem. While it would often be considered easier (and safer) to get K8s set up in a cluster of virtual machines (VMs), it does not really give you the same degree of advantage and flexibility as running K8s "natively" on your own host. That is why, when we decided to upgrade our Integration Platform for K8s 1.7 compatibility, I decided to go with a native K8s installation for my dev environment: a laptop running Ubuntu 16.04 on 16GB RAM and a ? CPU. While I could run three (at most four) reasonably powerful VMs in there as my dev K8s cluster, I could do much better in terms of resource saving (the cluster would be just a few low-footprint services, rather than resource-eating VMs) as well as ease of operation and management (start the services, and the cluster is ready in seconds). Whenever I wanted to try multi-node stuff like node groups or zones with fail-over, I could simply hook up one or two secondary (worker) VMs to get things going, and shut them down when I'm done.

I had already been running K8s 1.2 on my machine, but the upgrade would not have been easy, as it's a hyperjump from 1.2 to 1.7. Luckily, the K8s guys had written their installation scripts in an amazingly structured and intuitive way, so I could get everything running with around one day's struggle (most of which went into understanding the command flow and modifying it to suit Ubuntu, and my particular requirements). I could easily have utilized the official kubeadm guide or the Juju-based installer for Ubuntu, but I wanted to get at least a basic idea of how things get glued together; additionally, I wanted an easily reproducible installation from scratch, with minimal dependencies on external packages or installers, so that I could upgrade my cluster any time I desire, by directly building the latest stable (or beta, or even alpha, if it comes to that) off the K8s source.
I started with the CentOS cluster installer, and gradually modified it to suit Ubuntu 16.04 (luckily the changes were minimal). Most of the things were in line with the installation for CentOS, including the artifact build (make at the source root) and the modifications to exclude custom downloads for etcd, flanneld and docker (I built the former two from their sources, and the latter I had already installed via the apt package manager). However, the service configuration scripts had to be modified slightly to suit the Ubuntu (more precisely, Debian) filesystem; hunks like +cat /lib/systemd/system/kube-scheduler. in the patches, for instance, point the generated systemd unit files at the Debian location (/lib/systemd/system). In my case I also run the apiserver on port 8090 rather than 8080 (to leave room for other application servers that are fond of 8080), hence I had to make some additional changes to propagate the port change throughout the K8s platform.

So, in summary, I had to utilize the following set of patches to get everything in place: one that skips the etcd, flannel and docker binary downloads (a diff against cluster/centos/build.sh), plus the service-script and port changes described above. If necessary, you could utilize them by applying them on top of the v1.7.2-beta.0 tag (possibly even a different tag) of the K8s source using the git apply command.
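To give a feel for what the service-script changes amount to: CentOS keeps vendor systemd units under /usr/lib/systemd/system, whereas Debian/Ubuntu keeps them under /lib/systemd/system, which is presumably what the +cat /lib/systemd/system/kube-scheduler. hunk above is about. Below is a minimal sketch of an Ubuntu-flavoured unit-generation step; the file locations, environment file and unit contents are illustrative assumptions, not the actual upstream script.

```bash
# Illustrative sketch only (not the upstream cluster script): generate the
# scheduler unit under Debian/Ubuntu's /lib/systemd/system instead of
# CentOS's /usr/lib/systemd/system. Binary and config paths are placeholders.
cat <<EOF >/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Pick up the new unit and start it.
systemctl daemon-reload
systemctl enable --now kube-scheduler
```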
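The port propagation is conceptually simple: the apiserver's insecure (plain-HTTP) port, 8080 by default and controlled by --insecure-port in the 1.7 line, moves to 8090, and every component that talks to the master has to be pointed at the new value. A rough sketch follows, assuming CentOS-style per-component config files; the variable names and file locations are placeholders for wherever your installer renders its configuration.

```bash
# Sketch only: variable names / config locations are placeholders.

# In the apiserver's config: serve the insecure HTTP API on 8090 instead of
# the default 8080.
KUBE_API_ARGS="--insecure-port=8090 --insecure-bind-address=127.0.0.1"

# In the controller-manager, scheduler and kube-proxy configs, point at the
# new port (the kubelet is configured separately, e.g. via its kubeconfig).
KUBE_MASTER="--master=http://127.0.0.1:8090"

# Quick sanity check once everything is up:
kubectl -s http://127.0.0.1:8090 get nodes
```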
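Putting it all together, reproducing the build from a clean source tree is roughly a matter of checking out the tag, applying the patches and running the top-level make; the patch path below is a placeholder for wherever you keep them.

```bash
# Placeholder patch location; substitute your own.
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout v1.7.2-beta.0            # or a different tag, as noted above
git apply /path/to/ubuntu-patches/*.patch
make                                  # artifact build at the source root
```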