[Update]: Make sure that you are using a recent Linux distro, preferably Ubuntu 22.04
Make sure that you have installed Docker (or another supported container runtime)
1. Add the Kubernetes signing key to your server. Before running this command, make sure curl is installed on your machine
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
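On Ubuntu 22.04, apt-key is deprecated. An alternative sketch, assuming the /etc/apt/keyrings directory convention, is to store the key in a dedicated keyring and reference it from the sources entry (this replaces both Step 1 and Step 2):

```shell
# Store the key in a dedicated keyring instead of the deprecated apt-key store
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

# Reference the keyring explicitly via signed-by in the repository entry
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] http://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
```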
2. Add the Kubernetes repository, as it is not included in Ubuntu's default repositories
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
3. Install the Kubernetes tools you will need to manage the cluster once it is up and running
sudo apt-get update && sudo apt-get install kubeadm kubelet kubectl
4. Put the Kubernetes packages on hold so automatic upgrades do not change them until everything is configured and running
sudo apt-mark hold kubeadm kubelet kubectl
5. If you plan to run the Kubernetes master node on the same server as the worker nodes, you will have to remove the master taint or add tolerations to the Pods when they are configured (see the note at the end). Set the hostname of the server you are operating on to master-machine
sudo hostnamectl set-hostname master-machine
6. Initialize the cluster, registering the Pod network IP range with the Kubernetes admin controller
sudo kubeadm init --pod-network-cidr=FirstDigits.SecondDigits.0.0/16
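The placeholder above stands for a private /16 range that must not overlap your host network; Flannel's default, used later in this guide, is 10.244.0.0/16. A minimal bash sketch (the valid_cidr16 helper name is hypothetical) to sanity-check the value before passing it to kubeadm:

```shell
#!/usr/bin/env bash
# Hypothetical helper: checks that a string looks like a dotted-quad /16 CIDR
# of the form A.B.0.0/16 with each octet in the 0-255 range.
valid_cidr16() {
  local re='^([0-9]{1,3})\.([0-9]{1,3})\.0\.0/16$'
  [[ "$1" =~ $re ]] || return 1
  (( BASH_REMATCH[1] <= 255 && BASH_REMATCH[2] <= 255 ))
}

valid_cidr16 "10.244.0.0/16" && echo "CIDR looks sane"
```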
7. After Step 6 completes, follow the instructions printed by kubeadm to complete the remaining setup. The output from Step 6 will look like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join YourLocalIPAddress:Port --token [HashToken] \
    --discovery-token-ca-cert-hash [Algorithm]:[TokenHere]
8. Create the Pod communication network so that Pods can communicate with each other (run as the regular user that owns $HOME/.kube/config, so sudo is not needed; the flannel repository has moved from coreos to flannel-io)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
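After applying the manifest, you can check that the Flannel DaemonSet pods come up before joining workers. The namespace depends on the Flannel version (newer manifests use kube-flannel; older ones deployed into kube-system), so this is a sketch assuming a recent manifest:

```shell
# List the Flannel pods; all kube-flannel-ds pods should reach Running state
kubectl get pods -n kube-flannel -o wide

# If the namespace is empty, the manifest may have used kube-system instead:
kubectl get pods -n kube-system -l app=flannel
```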
9. Now that you have followed Steps 6 through 8, it's time to join the worker node (the machine/server you want the Pods to run on) to the admin controller. The admin controller oversees every node in the cluster; each node can run multiple Pods, and each Pod can run multiple deployments (Docker containers).
- Pay attention to Step 6: the values used in Steps 7 and 9 come directly from its output. Step 8 is the exception; there you decide which network add-on to use, and the Step 6 output includes a URL where you can read about the available options.
sudo kubeadm join --discovery-token [TokenFromStep6Output] --discovery-token-ca-cert-hash sha256:[HashFromStep6Output] [ApiServerAddressFromStep6Output]
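If you have lost the Step 6 output, the join command can be regenerated at any time on the master node:

```shell
# Creates a fresh bootstrap token and prints the complete "kubeadm join ..."
# command, including the token and CA certificate hash, ready to copy-paste.
sudo kubeadm token create --print-join-command
```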
[Note] If everything goes well, you will eventually see output that says:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
[Note] If you want to allow Pods to be scheduled on the master node server
kubectl taint nodes --all node-role.kubernetes.io/master-
(On newer Kubernetes versions the taint is named node-role.kubernetes.io/control-plane- instead.)
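You can verify that the taint was actually removed before deploying workloads:

```shell
# Show the taints on every node; "Taints: <none>" means Pods
# can now be scheduled on that node, including the master.
kubectl describe nodes | grep -i taints
```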
[Note] Alternative form of the Step 6 command, setting an explicit control-plane endpoint (replace localIPV4 with the node's local IPv4 address) and the CIDR that Flannel expects:
sudo kubeadm init --control-plane-endpoint=localIPV4 --pod-network-cidr=10.244.0.0/16 --v=5
# Run this on the master node when pods are stuck in Pending:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[Troubleshooting] Example preflight failure when a node tries to join as an additional control-plane instance without the shared certificates:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied.

[failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

To see the stack trace of this error execute with --v=5 or higher
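A hedged sketch of one common fix, assuming you control the existing master: re-upload the control-plane certificates there and pass the printed key when joining:

```shell
# On the existing master: upload the control-plane certificates to the cluster
# and print the decryption key (valid for a limited time).
sudo kubeadm init phase upload-certs --upload-certs

# On the joining node: add --control-plane and the certificate key to the
# join command from the Step 6 output (placeholders kept as in that output).
sudo kubeadm join YourLocalIPAddress:Port --token [HashToken] \
  --discovery-token-ca-cert-hash [Algorithm]:[TokenHere] \
  --control-plane --certificate-key [KeyPrintedAbove]
```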