- Use curl to explore which Pods are present in the kube-system namespace
Answer
kubectl proxy --port 8080 &
curl http://localhost:8080/api/v1/namespaces/kube-system/pods
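- Optional: a quick way to trim the JSON response down to just the Pod names, a minimal sketch assuming jq is installed on the node
curl -s http://localhost:8080/api/v1/namespaces/kube-system/pods | jq -r '.items[].metadata.name'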
- Run a deployment that starts one nginx web server Pod on all cluster nodes
Answer
Sample YAML for a DaemonSet
$ vi nginx-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    k8s-app: nginx-label
spec:
  selector:
    matchLabels:
      name: nginx-ds
  template:
    metadata:
      labels:
        name: nginx-ds
    spec:
      containers:
      - name: nginx-ds
        image: nginx
$ kubectl create -f nginx-daemonset.yaml
daemonset.apps/nginx-ds created
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-2bjhg 1/1 Running 0 28s 10.44.0.1 vb-worker1.example.com <none> <none>
- alternative answer:
# List your nodes
$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vb-master.example.com Ready master 8d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vb-master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=
vb-worker1.example.com Ready <none> 8d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vb-worker1.example.com,kubernetes.io/os=linux
# Add a specific label to the node so that the app will be scheduled there
$ kubectl label nodes <node-name> node-app=nginx
# Generate the deployment YAML with a dry run
$ kubectl create deployment test-nginx --image nginx --dry-run=client -o yaml > test-nginx.yaml
# Edit the YAML and set the nodeSelector
$ vi test-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test-nginx
  name: test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      nodeSelector: # --> add this
        node-app: nginx # --> add this
status: {}
# Create deployment with yaml
$ kubectl create -f test-nginx.yaml
# Make sure apps run on Worker nodes
$ kubectl get pod -owide
- Configure a 2 GiB persistent storage solution that uses a permanent directory on the host that runs the Pod. Configure a Deployment that runs the httpd web server and mounts the storage on /var/www.
Answer
- Create PV (pv.yaml)
$ kubectl create -f pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"
- Create PVC (pvc.yaml)
$ kubectl create -f pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
- Create the deployment YAML with a dry run (lab-httpd.yaml)
$ kubectl create deployment lab-httpd --image=httpd --dry-run=client -o yaml > lab-httpd.yaml
- Edit the deployment YAML as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lab-httpd
  name: lab-httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lab-httpd
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lab-httpd
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: task-pv-claim
      containers:
      - image: httpd
        name: httpd
        volumeMounts:
        - name: task-pv-storage
          mountPath: "/var/www"
        resources: {}
status: {}
$ kubectl create -f lab-httpd.yaml
$ kubectl get pod,pv,pvc
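- Optional verification, assuming the deployment came up with a single ready Pod: confirm the volume really is mounted at /var/www
POD=$(kubectl get pod -l app=lab-httpd -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -- df -h /var/www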
- Create two services: myservice should be exposing port 9376 and forward to targetport 80, and mydb should be exposing port 9377 and forward to port 80.
- Create a Pod that will start a busybox container that will sleep for 3600 seconds, but only if the aforesaid services are available.
- To test that it is working, start the init container Pod before starting the services.
Answer
- Create the services by defining YAML (check the documentation)
- service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 9376
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 9377
    targetPort: 80
- Create the Pod with init container
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  labels:
    app: initapp
spec:
  containers:
  - name: main-container
    image: busybox
    command: ['sh', '-c', 'echo main app running && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
  - name: init-db
    image: busybox
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
- Pod with init-container
# Create pod with init-container
kubectl create -f init-pod.yaml
# Create the service
kubectl create -f service.yaml
# Check the pod and service
kubectl get pod
kubectl get service
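- When the Pod is started before the Services exist it should sit in Init:0/2; once both Services are created it moves to Running. A quick way to watch the transition:
kubectl get pod init-pod -w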
- Configure worker1.example.com in such a way that no new Pods will be scheduled on it, but existing Pods will not be moved away from it.
- Mark worker1.example.com with the node label disk=ssd. Start a new nginx Pod that uses nodeAffinity to be scheduled on nodes that have the label disk=ssd set. Start the Pod and see what happens
- Remove all restrictions from worker1.example.com
Answer
Answer1
- Cordon worker1 so that no new Pods will be scheduled on it
# Cordon
kubectl cordon worker1.example.com
node/worker1.example.com cordoned
# Check after the cordon. Make sure worker1 is now `SchedulingDisabled`
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 18d v1.18.2
worker1.example.com Ready,SchedulingDisabled <none> 18d v1.18.2
# Cordon doesn't affect currently running Pods
kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-2bjhg 1/1 Running 0 10d 10.44.0.1 worker1.example.com <none> <none>
- Mark worker1 with label disk=ssd
# Label node
kubectl label nodes worker1.example.com disk=ssd
node/worker1.example.com labeled
# Check the current labels
kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master.example.com Ready master 18d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vb-master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=
worker1.example.com Ready,SchedulingDisabled <none> 18d v1.18.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vb-worker1.example.com,kubernetes.io/os=linux,disk=ssd
- Start a Pod with a nodeSelector disk=ssd (the simpler alternative to the requested nodeAffinity; a nodeAffinity variant is sketched after this answer). Create the YAML and run the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-nginx
spec:
  containers:
  - name: sched-nginx
    image: nginx
  nodeSelector:
    disk: ssd
# Run the pod
kubectl create -f scheduled-nginx.yaml
# Check the pod
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
scheduled-nginx 0/1 Pending 0 7s <none> <none> <none> <none>
# See what happened behind the `Pending` status
kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Warning FailedScheduling pod/scheduled-nginx 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) were unschedulable.
<unknown> Warning FailedScheduling pod/scheduled-nginx 0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) were unschedulable.
- Notice that the Pod is Pending, since the selected node (worker1) is currently cordoned (SchedulingDisabled)
- After that, remove the cordon. Notice that the Pod is then scheduled normally onto the node
# Uncordon worker1
kubectl uncordon worker1.example.com
node/worker1.example.com uncordoned
# Check the pod status
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
scheduled-nginx 1/1 Running 0 4m17s 10.44.0.2 worker1.example.com <none> <none>
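- The question asks for nodeAffinity rather than a plain nodeSelector; below is a minimal sketch of an equivalent Pod spec using requiredDuringSchedulingIgnoredDuringExecution (the Pod name affinity-nginx is just an example)
apiVersion: v1
kind: Pod
metadata:
  name: affinity-nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx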
- Create a user account named bob that can authenticate to the Kubernetes cluster. Ensure that you can log in as user bob and create Pods as this user
Answer
# Create user bob on Linux layer
sudo useradd -G wheel bob
sudo passwd bob
# Switch to user bob
su - bob
# Prepare for the Kubernetes config
cd ~bob
mkdir .kube
sudo cp -ip /etc/kubernetes/admin.conf ~bob/.kube/config
# Make sure you can access the Kubernetes API
kubectl get pod
# Prepare for the Kubernetes bob user
openssl genrsa -out bob.key 2048
openssl req -new -key bob.key -out bob.csr -subj "/CN=bob/O=staff"
sudo openssl x509 -req -in bob.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out bob.crt -days 365
ls -lth
# Add new credential to kubectl config
kubectl config view
kubectl config set-credentials bob --client-certificate=./bob.crt --client-key=./bob.key
kubectl config view
# Create a Default Context for the new user (bob)
kubectl config set-context bob-context --cluster=kubernetes --namespace=staff --user=bob
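# The staff namespace referenced by this context and by the Role below has to exist; assuming it was not created earlier in this lab
kubectl create namespace staff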
# Configure Role RBAC to define a staff role
vim staff-role.yaml
--
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staff
  name: staff-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
--
kubectl create -f staff-role.yaml
kubectl get role staff-role -n staff
# Bind a user to the new role
vim rolebind.yaml
--
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staff-rolebinding-for-bob
  namespace: staff
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: staff-role
  apiGroup: rbac.authorization.k8s.io
--
kubectl create -f rolebind.yaml
# Test it
kubectl get pods --context=bob-context
kubectl create deployment nginx-bob --image nginx --context bob-context
- Replace the default plugin that you are using in your cluster with the Calico network plugin
Answer
# If you were previously using WeaveNet as your network plugin, list its pods as follows
kubectl get pods --all-namespaces -l name=weave-net
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system weave-net-cc498 2/2 Running 18 21d
kube-system weave-net-x86h6 2/2 Running 10 21d
# Delete the weavenet
kubectl describe pod weave-net-cc498 -n kube-system | grep -i controlled
Controlled By: DaemonSet/weave-net
kubectl get daemonset -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 21d
weave-net 2 2 2 2 2 <none> 21d
kubectl delete daemonset weave-net -n kube-system
# Go to the Manual and install Calico
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
# Install the Calico
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# Restart all nodes
ssh root@master reboot
ssh root@worker1 reboot
# Make sure all pods running with the new Calico network plugin
kubectl get pod --all-namespaces -o wide
- Add an additional node vb-worker2.example.com to the Kubernetes cluster
- Mark node vb-worker1.example.com such that all Pods currently running on it will be evicted, and no new Pods will be scheduled on it
- Make a backup of the etcd database
- Configure node vb-worker2.example.com to start a static Pod that runs an Nginx web server
Answer
- Add an additional node vb-worker2.example.com to the Kubernetes cluster
# (Install Docker CE)
## Set up the repository
### Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository
yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.8 \
docker-ce-cli-19.03.8
## Create /etc/docker
mkdir /etc/docker
# Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
- Disable swap and turn off firewalld
# Turn off firewalld
systemctl disable --now firewalld
systemctl status firewalld
# Disable swap
free -h
vi /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
# Swapoff
swapoff /dev/mapper/centos-swap
free -h
- Set the /etc/hosts
vi /etc/hosts
192.168.11.25 vb-master.example.com control master # --> Change the IP to match your lab environment
192.168.11.26 vb-worker1.example.com worker1 # --> Change the IP to match your lab environment
192.168.11.30 vb-worker2.example.com worker2 # --> Change the IP to match your lab environment
# Installing kubeadm, kubelet, kubectl
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Install kubelet kubeadm kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet
systemctl enable kubelet
systemctl start kubelet
# Letting iptables see bridged traffic
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#letting-iptables-see-bridged-traffic
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
- Prepare the token and the CA certificate hash
# Check whether a valid token already exists
kubeadm token list
(no output means no valid token exists, so create a new one)
# Create new Token
kubeadm token create
W0530 12:07:50.412967 19703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
z6gm3h.obhxwurl186tuf61
kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
z6gm3h.obhxwurl186tuf61 23h 2020-05-31T12:07:50+09:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
# Define TOKEN variable
export TOKEN=z6gm3h.obhxwurl186tuf61
echo $TOKEN
# Generate cert-hash
# https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform DER 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
48702924fba8473e1a9ce11263c96348715a9700263f57cc41d8244ff4175854 # --> Put this as HASH variable
export HASH=48702924fba8473e1a9ce11263c96348715a9700263f57cc41d8244ff4175854
# Breakdown of the above one-liner into several steps
# Generate public key from Kubernetes CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt > worker2-public.key
file worker2-public.key
worker2-public.key: ASCII text
# Convert the public key to DER (Distinguished Encoding Rules) format
openssl rsa -pubin -outform DER < worker2-public.key > worker2-public.key.DER
writing RSA key
file worker2-public.key.DER
worker2-public.key.DER: data
(this file is unreadable)
# Create message digest sha256
openssl dgst -sha256 -hex worker2-public.key.DER
SHA256(worker2-public.key.DER)= 48702924fba8473e1a9ce11263c96348715a9700263f57cc41d8244ff4175854
--> This result should match the hash obtained with the one-liner above
- Join the new worker node to the cluster (Yeeha! This step runs on the worker node)
# login to the worker2
echo $TOKEN # Get this value from Master
echo $HASH # Get this value from Master
kubeadm join --token z6gm3h.obhxwurl186tuf61 192.168.11.25:6443 --discovery-token-ca-cert-hash sha256:48702924fba8473e1a9ce11263c96348715a9700263f57cc41d8244ff4175854
W0530 14:33:40.192321 1654 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- Check the Nodes
- (If you are using WeaveNet as the networking plugin, there may be a network issue after joining the new worker node. You need to fix the iptables issue as shown below)
kubectl get nodes
--> Worker2 is not Ready
kubectl get pod -l name=weave-net -n kube-system
--> The weave-net pod for worker2 keeps failing
--> The iptables rules are not set up properly on the weave-net side
# Fix the IP tables on Worker2 node
iptables -t nat -I KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
# And finally all nodes Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
vb-master.example.com Ready master 21d v1.18.2
vb-worker1.example.com Ready <none> 21d v1.18.2
vb-worker2.example.com Ready <none> 62m v1.18.3
- Mark node vb-worker1.example.com such that all Pods currently running on it will be evicted, and no new Pods will be scheduled on it
kubectl cordon vb-worker1.example.com
node/vb-worker1.example.com cordoned
kubectl get nodes
NAME STATUS ROLES AGE VERSION
vb-master.example.com Ready master 21d v1.18.2
vb-worker1.example.com Ready,SchedulingDisabled <none> 21d v1.18.2
vb-worker2.example.com Ready <none> 80m v1.18.3
kubectl drain vb-worker1.example.com --grace-period 0 --force --ignore-daemonsets
node/vb-worker1.example.com already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-hxpjt, kube-system/weave-net-vbnhw
evicting pod dev/test-busybox-d7b97f5b7-ftrcm
evicting pod kube-system/coredns-66bff467f8-hj7zk
pod/coredns-66bff467f8-hj7zk evicted
pod/test-busybox-d7b97f5b7-ftrcm evicted
node/vb-worker1.example.com evicted
kubectl get pods --all-namespaces -owide
- Make a backup of the etcd database
#Backup:
ETCDCTL_API=3 etcdctl snapshot save mysnapshot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
Snapshot saved at mysnapshot.db
ETCDCTL_API=3 etcdctl --write-out=table snapshot status mysnapshot.db
+----------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| cfdeb76e | 3629469 | 1603 | 3.9 MB |
+----------+----------+------------+------------+
- Configure node vb-worker2.example.com to start a static Pod that runs an Nginx web server
# Choose the node that should run the static Pod (this walkthrough uses worker1; for the task itself use vb-worker2)
ssh worker1
# Find out the default kubelet settings on the worker node
ps aux | grep kubelet | grep yaml
root 611 4.5 6.2 923340 63236 ? Ssl May12 1178:37 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
--> Here we notice the /var/lib/kubelet/config.yaml
cat /var/lib/kubelet/config.yaml | grep -i staticpod
staticPodPath: /etc/kubernetes/manifests
--> Static Pod manifests are stored in this directory
# Prepare for the static pod yaml
cd /etc/kubernetes/manifests
cat <<EOF > static-web.yaml
apiVersion: v1
kind: Pod
metadata:
name: static-web
labels:
role: myrole
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
protocol: TCP
EOF
# Restart the Kubelet
systemctl restart kubelet
# Back to the Master node and check the pod
exit
kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default static-web-vb-worker1.example.com 1/1 Running 0 91s
- Use kubeadm to create a cluster. control.example.com is set up as the cluster controller node and worker{1..3} are set up as worker nodes
- The task is complete if kubectl get nodes shows all nodes in a Ready state
Answer
- Initialize control-plane
# switch to root user (su -)
# usermod -aG wheel student
# execute the kubeadm init
kubeadm init
# Take memo the kubeadm join Token and Hash
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
kubeadm join 192.168.11.14:6443 --token 121svj.8urd4tdpt7r51p5n \
--discovery-token-ca-cert-hash sha256:996d622a652b5ae512c1df35bf1560c252f0f70af8efee1a893849e5a7155231
# To set up kubectl access for a user other than root
su - student
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Test the kubectl command
kubectl cluster-info
kubectl get pods
kubectl get nodes
- Install the Network plugin on Control-Plane
# In this case, choose the WeaveNet plugin
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Check whether Master/ControlPlane is Ready
kubectl get nodes
- Joining worker node to the cluster (execute at each workernode)
# Switch to worker node
ssh worker1
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
exit
kubectl get nodes
- Create a Pod that runs the latest version of the alpine image. This Pod should be configured to sleep 3600 seconds and it should be created in the mynamespace namespace. Make sure that the Pod is automatically restarted if it fails.
Answer
[student@control]$ kubectl create namespace mynamespace
namespace/mynamespace created
[student@control]$ kubectl get namespace
NAME STATUS AGE
default Active 22d
dev Active 13d
kube-node-lease Active 22d
kube-public Active 22d
kube-system Active 22d
mynamespace Active 36s
# Search the documentation for Pod examples and create the YAML
[student@control]$ vim alpine-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
  namespace: mynamespace
spec:
  restartPolicy: Always
  containers:
  - name: alpine-test
    image: alpine
    command:
    - sleep
    - "3600"
[student@control]$ kubectl create -f alpine-test.yaml
pod/alpine-test created
[student@control]$ kubectl get pod -n mynamespace
NAME READY STATUS RESTARTS AGE
alpine-test 1/1 Running 0 15s
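- A hedged imperative alternative (assuming kubectl v1.18+, where kubectl run creates a plain Pod and --restart=Always is the default restart policy)
kubectl run alpine-test --image=alpine --restart=Always -n mynamespace -- sleep 3600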
- Configure a Pod that runs 2 containers. The first container should create the file /data/runfile-test.txt. The second container should only start once this file has been created. The second container should run the sleep 3600 command as its task
Answer
# Create yaml for pod with init-container
vi pod-with-initcontainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', "sleep 3600"]
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "mkdir /data; touch /data/runfile-test.txt"]
[student@control]$ kubectl create -f pod-with-initcontainer.yaml
pod/myapp-pod created
[student@control]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 19s
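- Note: in the answer above the file only exists inside the init container's own filesystem. If the file should actually be visible to the second container, a shared emptyDir volume is one way to do it. A sketch, with hypothetical names:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-shared
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  initContainers:
  - name: create-file
    image: busybox:1.28
    command: ['sh', '-c', 'touch /data/runfile-test.txt']
    volumeMounts:
    - name: workdir
      mountPath: /data
  containers:
  - name: main
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /data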
- Create a Persistent Volume that uses local host storage. This PV should be accessible from all namespaces. Run a Pod with the name pv-pod-test that uses this persistent volume from the "myvol-test" namespace
Answer
- Create PV (pv.yaml)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-test
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
- Create PVC (pvc.yaml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
- Create pod (pv-pod-test.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod-test
  namespace: myvol-test
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim-test
  containers:
  - name: task-pv-container
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
# First, create the namespace
[student@control]$ kubectl create namespace myvol-test
namespace/myvol-test created
[student@control]$ kubectl get ns
NAME STATUS AGE
default Active 22d
dev Active 14d
kube-node-lease Active 22d
kube-public Active 22d
kube-system Active 22d
mynamespace Active 77m
myvol-test Active 4s
# Switch the working namespace; this makes sure your default namespace is set to myvol-test
# and avoids creating resources in the wrong namespace
[student@control]$ kubectl config set-context --current --namespace myvol-test
Context "kubernetes-admin@kubernetes" modified.
[student@control]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin myvol-test
[student@control]$ kubectl create -f pv.yaml
persistentvolume/task-pv-volume-test created
[student@control]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume-test 1Gi RWO Retain Available 4s
[student@control]$ kubectl create -f pvc.yaml
persistentvolumeclaim/task-pv-claim-test created
[student@control]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim-test Bound task-pv-volume-test 1Gi RWO 8s
[student@control]$ kubectl create -f pv-pod-test.yaml
pod/pv-pod-test created
[student@control]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
pv-pod-test 0/1 ContainerCreating 0 10s
[student@control]$ kubectl get pod -n myvol-test
NAME READY STATUS RESTARTS AGE
pv-pod-test 1/1 Running 0 23s
- In the run-once-test namespace, run a Pod with the name xxazz-pod-test, using the alpine image and the command sleep 3600. Create the namespace if needed. Ensure that the task in the Pod runs once, and after running it once, the Pod stops.
Answer
- Create the pod (pod-once.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: xxazz-pod-test
  namespace: run-once-test
spec:
  restartPolicy: Never
  containers:
  - name: xxaxx-container
    image: alpine
    command:
    - sleep
    - "3600"
# Create namespace
[student@control]$ kubectl create namespace run-once-test
namespace/run-once-test created
[student@control]$ kubectl get ns
NAME STATUS AGE
default Active 22d
dev Active 14d
kube-node-lease Active 22d
kube-public Active 22d
kube-system Active 22d
mynamespace Active 101m
myvol-test Active 23m
run-once-test Active 6s
# Switch to run-once-test namespace
[student@control]$ kubectl config set-context --current --namespace run-once-test
Context "kubernetes-admin@kubernetes" modified.
[student@control]$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin run-once-test
# Create pod
[student@control]$ kubectl create -f pod-once.yaml
pod/xxazz-pod-test created
[student@control]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
xxazz-pod-test 1/1 Running 0 2m5s
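- A Job is a hedged alternative reading of "runs once and then stops" (the Job name xxazz-job-test is just an example)
kubectl create job xxazz-job-test --image=alpine -n run-once-test -- sleep 3600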
- Create a Deployment that runs Nginx, based on the 1.14 version. After creating it, enable recording, and perform a rolling upgrade to the latest version of Nginx. After successfully performing the upgrade, undo the upgrade again.
Answer
kubectl create deployment nginx-test --image=nginx:1.14
kubectl --record deployment.apps/nginx-test set image deployment.v1.apps/nginx-test nginx=nginx:latest
kubectl rollout status deployment nginx-test
kubectl rollout undo deployment nginx-test
[student@control]$ kubectl create deployment nginx-test --image=nginx:1.14
deployment.apps/nginx-test created
[student@control]$ kubectl --record deployment.apps/nginx-test set image deployment.v1.apps/nginx-test nginx=nginx:latest
deployment.apps/nginx-test image updated
[student@control]$ kubectl rollout status deployment nginx-test
deployment "nginx-test" successfully rolled out
[student@control]$ kubectl rollout history deployment nginx-test
deployment.apps/nginx-test
REVISION CHANGE-CAUSE
1 <none>
2 kubectl deployment.apps/nginx-test set image deployment.v1.apps/nginx-test nginx=nginx:latest --record=true
[student@control]$ kubectl rollout undo deployment nginx-test
deployment.apps/nginx-test rolled back
[student@control]$ kubectl rollout status deployment nginx-test
deployment "nginx-test" successfully rolled out
- Find all Kubernetes objects in all namespaces that have the label k8s-app set to the value kube-dns
Answer
[student@control]$ kubectl get all --all-namespaces -l k8s-app=kube-dns
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-66bff467f8-clj6j 1/1 Running 0 8h
kube-system pod/coredns-66bff467f8-smt97 1/1 Running 0 7h28m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 22d
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 22d
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-66bff467f8 2 2 2 22d
- Create a ConfigMap that defines the variable myuser-test=mypassword-test. Create a Pod that runs alpine, and uses this variable from the ConfigMap
Answer
# Create configmap
kubectl create configmap special-config --from-literal=myuser-test=mypassword-test
kubectl get cm
NAME DATA AGE
special-config 1 8s
- Create the pod (config-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: config-pod
spec:
  containers:
  - name: test-container
    image: alpine
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: myuser-test
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: myuser-test
kubectl create -f config-pod.yaml
pod/config-pod created
kubectl get pod
NAME READY STATUS RESTARTS AGE
config-pod 0/1 ContainerCreating 0 2s
myapp-pod 1/1 Running 1 92m
nginx-test-67b797db8-nxvhn 1/1 Running 0 14m
kubectl logs config-pod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=config-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
myuser-test=mypassword-test
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
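- Because the container only runs env and then exits, the Pod is restarted repeatedly (the CrashLoopBackOff visible in the next exercise's output). If that is not wanted, one option is to set the restart policy in the Pod spec, a minimal sketch:
spec:
  restartPolicy: Never   # the env command exits immediately; do not restart the container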
- Create a solution that runs multiple Pods in parallel. The solution should start Nginx, and ensure that it is started on every node in the cluster in a way that if a new node is added, an Nginx Pod is automatically added to that node as well
Answer
- Create the pod-daemon.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemon
spec:
  selector:
    matchLabels:
      name: nginx-daemon
  template:
    metadata:
      labels:
        name: nginx-daemon
    spec:
      containers:
      - name: nginx-container
        image: nginx
[student@control]$ kubectl create -f pod-daemon.yaml
daemonset.apps/nginx-daemon created
[student@control]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
config-pod 0/1 CrashLoopBackOff 6 10m 10.36.0.6 vb-worker2.example.com <none> <none>
myapp-pod 1/1 Running 1 103m 10.36.0.2 vb-worker2.example.com <none> <none>
nginx-daemon-cjn6f 1/1 Running 0 35s 10.44.0.2 vb-worker1.example.com <none> <none>
nginx-daemon-tpw6r 1/1 Running 0 35s 10.36.0.7 vb-worker2.example.com <none> <none>
nginx-test-67b797db8-nxvhn 1/1 Running 0 24m 10.36.0.5 vb-worker2.example.com <none> <none>
- Mark node worker2 as unavailable. Ensure that all Pods are moved away from the local node and started again somewhere else
- After successfully executing this task, make sure worker2 can be used again
Answer
kubectl get nodes
kubectl cordon worker2
kubectl drain worker2 --force --delete-local-data --grace-period 0 --ignore-daemonsets
kubectl uncordon worker2
kubectl get nodes
- Put the node worker2 in maintenance mode, such that no new Pods will be scheduled on it
- After successfully executing this task, undo it
Answer
kubectl cordon worker2
kubectl get nodes
kubectl uncordon worker2
- Create a backup of the Etcd database. API version 3 is used for the current database. Write the backup to /var/exam/etcd-backup-test
Answer
mkdir -p /var/exam
ETCDCTL_API=3 etcdctl snapshot save /var/exam/etcd-backup-test --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
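# Optional verification of the snapshot file, using the same etcdctl tooling as above
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /var/exam/etcd-backup-test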
- Start a Pod that runs the busybox image. Use the name busy33-test for this Pod. Expose this Pod on a cluster IP address. Configure the Pod and Service such that DNS name resolution is possible, and use the nslookup command to look up the names of both. Write the output of the DNS lookup command to the file /var/exam/dnsnames-test.txt
Answer
- Create the busy33.yaml (note: this walkthrough uses the name busy33 rather than busy33-test)
apiVersion: v1
kind: Pod
metadata:
  name: busy33
  labels:
    app: busy33
spec:
  containers:
  - name: myapp-container
    image: busybox
    command:
    - sleep
    - "10000"
kubectl create -f busy33.yaml
kubectl get pod
NAME READY STATUS RESTARTS AGE
busy33 1/1 Running 0 8s
kubectl expose pod busy33 --port=3333
service/busy33 exposed
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
busy33 ClusterIP 10.108.137.81 <none> 3333/TCP 5s
# Make sure kube-dns is running
kubectl get pod -n kube-system -l k8s-app=kube-dns
# Disable firewall on all workernode
ssh root@worker1 systemctl stop firewalld
ssh root@worker2 systemctl stop firewalld
kubectl exec busy33 -- nslookup busy33
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: busy33.dev.svc.cluster.local
Address: 10.108.137.81
*** Can't find busy33.svc.cluster.local: No answer
*** Can't find busy33.cluster.local: No answer
*** Can't find busy33.ntkyo1.kn.home.ne.jp: No answer
*** Can't find busy33.example.com: No answer
*** Can't find busy33.dev.svc.cluster.local: No answer
*** Can't find busy33.svc.cluster.local: No answer
*** Can't find busy33.cluster.local: No answer
*** Can't find busy33.ntkyo1.kn.home.ne.jp: No answer
*** Can't find busy33.example.com: No answer
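- The task also asks to write the lookup output to a file; a minimal way to capture it (assuming /var/exam exists on the node you run this from, or create it first)
mkdir -p /var/exam
kubectl exec busy33 -- nslookup busy33 > /var/exam/dnsnames-test.txt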
- Configure your node worker1 to automatically start a Pod that runs an Nginx webserver, using the name auto-web-test. Put the manifest file in /etc/kubernetes/manifests
Answer
https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/
ssh root@worker1
ps aux | grep kubelet
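- A sketch of the remaining steps, mirroring the static-web example earlier in these notes (run on worker1, inside the kubelet's staticPodPath)
cat <<EOF > /etc/kubernetes/manifests/auto-web-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: auto-web-test
spec:
  containers:
  - name: auto-web-test
    image: nginx
    ports:
    - containerPort: 80
EOF
systemctl restart kubelet
# Back on the control node, the static Pod should appear with the node name appended to auto-web-test
kubectl get pod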
- Find the Pod with the Highest CPU load and write its name to the file /var/exam/cpu-pods-test.txt
Answer
kubectl top pod --all-namespaces | sort -n -k 3
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default static-web-vb-worker1.example.com 0m 1Mi
kube-system kube-scheduler-vb-master.example.com 10m 31Mi
kube-system kube-controller-manager-vb-master.example.com 38m 70Mi
kube-system etcd-vb-master.example.com 49m 294Mi
kube-system kube-apiserver-vb-master.example.com 90m 336Mi
# From the above result, write the Pod which has the highest CPU load
echo "kube-apiserver-vb-master.example.com" >> /var/exam/cpu-pods-test.txt
- Create a deployment as follows:
- Deployment name: nginx-app
- Image: nginx
- Replica number: 5
- Namespace: dev1
Answer
# Create the namespace
kubectl create namespace dev1
namespace/dev1 created
# Switch your working namespace to dev1
kubectl config set-context --current --namespace=dev1
Context "kubernetes-admin@kubernetes" modified.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@kubernetes kubernetes kubernetes-admin dev1
# Create the deployment
kubectl create deployment nginx-app --image=nginx
deployment.apps/nginx-app created
# Edit the deployment
kubectl edit deployment nginx-app
spec:
replicas: 5 # --> Change the replicas value to 5
# press :wq! to exit and save the edit
deployment.apps/nginx-app edited
# Make sure deployment are replicated to 5
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-app-b8b875889-85ff8 1/1 Running 0 79s
nginx-app-b8b875889-b4v27 1/1 Running 0 79s
nginx-app-b8b875889-g9djm 1/1 Running 0 2m49s
nginx-app-b8b875889-p64wt 1/1 Running 0 79s
nginx-app-b8b875889-q999n 1/1 Running 0 79s
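- Instead of editing the Deployment, kubectl scale gives the same result in one step
kubectl scale deployment nginx-app --replicas=5 -n dev1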
- You are developing an application Pod with the following specification:
- Name: busybox-special-app007
- Image: busybox
- Running command: sleep 10000
- Pod only run on a node with label: app=busybox
- Mark worker1 node with label: app=busybox
Answer
# Label worker1 node: app=busybox
kubectl label nodes vb-worker1.example.com app=busybox
node/vb-worker1.example.com labeled
kubectl get nodes --show-labels | grep app=busybox
vb-worker1.example.com Ready <none> 24d v1.18.2 app=busybox,beta.kubernetes.io/os=linux
# Create yaml for busybox Pod
vi busybox-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-special-app007
spec:
  containers:
  - name: busybox-special-app007
    image: busybox
    command:
    - sleep
    - "10000"
  nodeSelector:
    app: busybox
# Create the pod with kubectl
kubectl create -f busybox-pod.yaml
pod/busybox-special-app007 created
# Check that the pod runs on the node with label app=busybox
kubectl get pod -owide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
busybox-special-app007 1/1 Running 0 60s 10.44.0.12 vb-worker1.example.com <none> <none> <none>
- Create multiple persistent volumes as follows:
- web-pv001
- storage capacity = 10 MebiBytes
- pv can be Read and Write in Once
- pv is mounted from local server at path = /mnt/data-web
- db-pv002
- storage capacity = 10 MebiBytes
- pv can be Read and Write from Many sources
- pv is mounted from local server at path = /mnt/data-db
- backup-pv003
- storage capacity = 100 MebiBytes
- ReadWriteOnce
- mounted from local server = /mnt/backup-text
- backup-pv004
- storage capacity = 300 MebiBytes
- ReadWriteMany
- mounted from local server = /mnt/backup-blob
- Sort the PVs by capacity using the kubectl command. Don't use any sorting command other than kubectl's built-in sort functionality.
Answer
- Create yaml for all pvs
- web-pv001.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv001
spec:
  capacity:
    storage: 10Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data-web"
- db-pv002.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv002
spec:
  capacity:
    storage: 10Mi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data-db"
- backup-pv003.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv003
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/backup-text"
- backup-pv004.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-pv004
spec:
  capacity:
    storage: 300Mi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/backup-blob"
# Create all the pvs
kubectl create -f web-pv001.yaml -f db-pv002.yaml -f backup-pv003.yaml -f backup-pv004.yaml
persistentvolume/web-pv001 created
persistentvolume/db-pv002 created
persistentvolume/backup-pv003 created
persistentvolume/backup-pv004 created
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
db-pv002 10Mi RWX Retain Available 9m32s
web-pv001 10Mi RWO Retain Available 9m32s
backup-pv003 100Mi RWO Retain Available 9m32s
backup-pv004 300Mi RWX Retain Available 7m10s
Get familiar with:
- kubectl explain
- kubectl cheatsheet
- How to switch namespaces in Kubernetes?
kubectl config set-context --current --namespace=<namespace>
$ kubectl config current-context # check your current context
$ kubectl config set-context <context-of-question> --namespace=<namespace-of-question>
$ kubectl config current-context
$ kubectl config view
$ kubectl config get-contexts
$ kubectl config view | grep namespace
- When using kubectl for investigations and troubleshooting, utilize the wide output; it gives you more details
$ kubectl get pods -o wide --show-labels --all-namespaces
- In kubectl, utilize --all-namespaces to ensure deployments, pods, and objects are in the right namespace and in the right desired state
- For events and troubleshooting, utilize kubectl describe
$ kubectl describe pods <PODID>
- The '-o yaml' flag in conjunction with --dry-run allows you to create a manifest template from an imperative spec; combined with --edit, it allows you to modify the object before creation
kubectl create service clusterip my-svc -o yaml --dry-run=client > /tmp/srv.yaml
kubectl create --edit -f /tmp/srv.yaml
#terapod
192.168.11.14 master.example.com
192.168.11.15 worker1.example.com
192.168.11.16 worker2.example.com
192.168.11.5 putihfj
192.168.11.19 putihfj-back
192.168.11.25 vb-master.example.com
192.168.11.26 vb-worker1.example.com
# Command to start a virtual machine from the command line
/usr/lib/vmware/bin/vmplayer /root/vmware/kb-worker1/kb-worker1.vmx
https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-BF62D91D-0647-45EA-9448-83D14DC28A1C.html
# But I would recommend VirtualBox ^^;