
Wednesday, August 10

Certified Kubernetes Administrator (CKA) Exam Model Paper 2022, Version 1.24


 CKA Kubernetes Administrator exam info 2022

Software Version: Kubernetes v1.24

Pre-Setup:

Once you've gained access to your terminal, it might be wise to spend a minute setting up your environment. You could set these:

 

alias k=kubectl  # will already be pre-configured

 

export do="--dry-run=client -o yaml"  # k get pod x $do

Q1: Create Service
Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

Solution:

First add the named port to the nginx container (for example with kubectl edit deployment front-end), then expose the deployment:

kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort --protocol=TCP
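A minimal sketch of the ports section to add under the existing nginx container in the deployment's pod template (only the ports block is new; the surrounding fields already exist):

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
          protocol: TCP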

 

Q2: Scale Deployment

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Scale the deployment web-server to 6 pods

Solution:

kubectl scale deployment web-server --replicas=6
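A quick sanity check afterwards (assuming the deployment is in the current namespace):

kubectl get deployment web-server   # READY should show 6/6 once the new pods start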

 

Q3: Assign a Pod to a node

Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Schedule a pod as follows:

· Name: nginx-kusc00401

· Image: nginx

· Node selector: disk=ssd

Solution:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
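You can also generate the skeleton with the $do alias from the pre-setup and then add the nodeSelector by hand (the file name pod.yaml is arbitrary):

k run nginx-kusc00401 --image=nginx $do > pod.yaml
# add the nodeSelector under spec, then:
k apply -f pod.yaml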

 

Q4: Check how many nodes are healthy

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

Solution:

kubectl describe nodes $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep -i taints | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
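If the one-liner feels risky under exam pressure, count in two steps and write the difference yourself:

kubectl get nodes | grep -w Ready | wc -l                      # all Ready nodes
kubectl describe nodes | grep -i taints | grep -c NoSchedule   # nodes tainted NoSchedule
echo <difference> > /opt/KUSC00402/kusc00402.txt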

 

Q5: Create Persistent Volume

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-config.

Solution:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /srv/app-config
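Apply and verify (assuming the manifest was saved as pv.yaml):

kubectl apply -f pv.yaml
kubectl get pv app-config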

 

Q6: Create PersistentVolumeClaim with Pod

Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolumeClaim:

· Name: pv-volume

· Class: csi-hostpath-sc

· Capacity: 10Mi

Create a new Pod which mounts the PersistentVolumeClaim as a volume:

· Name: web-server

· Image: nginx

· Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

 

Solution:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi


Pod Creation:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: web-server
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

 

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change:

kubectl edit pvc pv-volume --record

# change spec.resources.requests.storage from 10Mi to 70Mi

:wq
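An equivalent kubectl patch one-liner (--record is deprecated in v1.24 but still accepted):

kubectl patch pvc pv-volume --type merge -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record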

 

Q7: Monitor Pod logs

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Monitor the logs of pod foobar and:

· Extract log lines corresponding to error unable-to-access-website

· Write them to /opt/KUTR00101/bar

Solution:

kubectl logs foobar | grep -i unable-to-access-website > /opt/KUTR00101/bar

 

Q8: Cluster troubleshooting

Task weight: 7%

Use context: kubectl config use-context k8s-c1-H


A Kubernetes worker node, named wk8s-node-0, is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

Solution:

kubectl get nodes

ssh root@wk8s-node-0

systemctl status kubelet    # the kubelet is typically stopped or failing

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet    # ensures the fix survives a reboot

exit    # return to the student terminal and re-check

kubectl get nodes
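If the kubelet still fails to start, its journal usually names the cause:

journalctl -u kubelet --no-pager | tail -n 30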

 

Q9: RBAC
Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types:

·       Deployment

·       StatefulSet

·       DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.


Limited to namespace app-team1, bind the new ClusterRole to the new ServiceAccount cicd-token.

 

Solution:


1.  kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets

2.  kubectl create sa cicd-token -n app-team1

3.  kubectl create rolebinding cicd-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1

Because the binding must be limited to namespace app-team1, a RoleBinding (not a ClusterRoleBinding) is used; --serviceaccount takes the form namespace:name.
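Verify with an impersonated permission check:

kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # yes
kubectl auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # no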


Q10: Mark a node as unavailable
Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.

Solution:


kubectl cordon ek8s-node-1

kubectl drain ek8s-node-1 --ignore-daemonsets --delete-emptydir-data  --force
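The node then reports SchedulingDisabled and should carry no workload pods:

kubectl get node ek8s-node-1
kubectl get pods -A -o wide | grep ek8s-node-1   # only DaemonSet pods, if any, remain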


Q11: Upgrading Kubernetes nodes
Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Given an existing Kubernetes cluster running version 1.24.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.24.2.

You are also expected to upgrade kubelet and kubectl on the master node.

Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other add-ons.

Solution:


1.  kubectl cordon master-node-1

2.  kubectl drain master-node-1 --delete-emptydir-data --ignore-daemonsets --force

3.  ssh master-node-1

4.  apt-get update

5.  apt-get install -y kubeadm=1.24.2-00

6.  kubeadm version

7.  kubeadm upgrade plan

8.  kubeadm upgrade apply v1.24.2 --etcd-upgrade=false

9.  apt-get install -y kubelet=1.24.2-00 kubectl=1.24.2-00

10. sudo systemctl daemon-reload

11. sudo systemctl restart kubelet

12. exit

13. kubectl uncordon master-node-1

14. kubectl get nodes


Q12: etcd backup and restore
Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.

The following TLS certificates/key are supplied for connecting to the server with etcdctl:

·       CA certificate: /opt/KUIN00601/ca.crt

·       Client certificate: /opt/KUIN00601/etcd-client.crt

·       Client key: /opt/KUIN00601/etcd-client.key

Solution:


1.  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db

2.  ETCDCTL_API=3 etcdctl --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db

NOTE: If you get any permission denied error, run the RESTORE command as root user.
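You can verify the saved snapshot with etcdctl's status subcommand:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /srv/data/etcd-snapshot.db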


Q13: Create a NetworkPolicy within a namespace

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a new NetworkPolicy named allow-port-from-namespace to allow Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:

·       does not allow access to Pods not listening on port 9000.

·       does not allow access from Pods not in namespace my-app

Solution:


 

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    - namespaceSelector:
        matchLabels:
          project: my-app
    ports:
    - protocol: TCP
      port: 9000
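Check that the source namespace actually carries the label the namespaceSelector matches on (project=my-app here), then apply (the manifest file name is arbitrary):

kubectl get ns --show-labels
kubectl apply -f networkpolicy.yaml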


Q14: Create Ingress
Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a new nginx Ingress resource as follows:

·       Name: ping

·       Namespace: ing-internal

·       Exposing service hi on path /hi using service port 5678

 

The availability of service hi can be checked using the following command, which should return hi:
curl -kL /hi

Solution:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
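Apply and confirm the Ingress exists in the right namespace (assuming the manifest was saved as ingress.yaml):

kubectl apply -f ingress.yaml
kubectl get ingress ping -n ing-internal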


Q15: Create a Pod with multiple containers
Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a pod named kucc8 with a single app container for each of the following images running inside it: nginx and memcached.

Solution:


apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: memcached
    image: memcached
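Once applied, both containers should come up:

kubectl get pod kucc8   # READY should show 2/2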


Q16: Find the Pod with the highest CPU usage
Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

 

From the pods with the label name=cpu-loader, find the pod running the highest CPU workload and write its name to the file /opt/KUTR00401/KURT00401.txt (which already exists).

Solution:


kubectl top pods -l name=cpu-loader --sort-by=cpu | head -n 2 | tail -n 1 | awk '{print $1}' > /opt/KUTR00401/KURT00401.txt

        (OR)

kubectl top pods -l name=cpu-loader -A --sort-by=cpu

echo 'pod name' > /opt/KUTR00401/KURT00401.txt


Q17: Add a sidecar container
Task weight: 7%

Use context: kubectl config use-context k8s-c1-H

Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task
Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command:

/bin/sh -c tail -n+1 -f /var/log/legacy-app.log

Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

Don’t modify the existing container.
Don’t modify the path of the log file; both containers must access it at /var/log/legacy-app.log.

Solution:


apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
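Because containers cannot be added to a running Pod in place, recreate it with the updated spec and then confirm the sidecar streams the log (assuming the spec above was saved as pod.yaml):

kubectl replace --force -f pod.yaml
kubectl logs legacy-app -c busybox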


 

                              ================  END  ===================

:: Linux - Legends ::