Kubernetes

Kubernetes interview questions 4

Q63) List the Kubernetes volume types you are aware of.

The following are some of the Kubernetes volume types which are widely used:

NFS: An NFS (Network File System) volume lets an existing NFS share be mounted into your pod. When you remove the pod from the node, the NFS volume is only unmounted; its contents are not erased.

Flocker: Flocker is an open-source manager for data volumes in clustered containers, used to control and administer data volumes. Through a Flocker volume, a user can mount an existing Flocker dataset into a pod. If no such dataset is available in Flocker, the user has to create it first through the Flocker API.


emptyDir: An emptyDir volume is created once a pod is assigned to a node, and it stays active as long as the pod is alive and running on that particular node. The volume contains nothing in its initial state; containers in the pod can read and write files in it. The data present in the volume is erased once the pod is removed from that node.

AWS Elastic Block Store: This volume mounts an Amazon Web Services Elastic Block Store volume into your pod. Even if you remove the pod from the node, the data in the volume remains.

GCE Persistent Disk: This volume mounts a Google Compute Engine Persistent Disk into your pod. As with AWS Elastic Block Store, the data in the volume remains even after removing the pod from the node.


hostPath: A hostPath volume mounts a directory or file from the host node's filesystem into your pod.

RBD: A Rados Block Device volume lets an RBD (Ceph) block device be mounted into your pod. As with AWS Elastic Block Store and GCE Persistent Disk volumes, the data in the volume remains even after the pod is removed from the node.
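The volume types above are all declared the same way in a pod spec. A minimal sketch, combining two of them (emptyDir and hostPath); all names, images and paths here are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch          # mounted emptyDir, erased with the pod
      mountPath: /cache
    - name: host-logs        # mounted hostPath, survives the pod
      mountPath: /host-logs
  volumes:
  - name: scratch
    emptyDir: {}
  - name: host-logs
    hostPath:
      path: /var/log         # directory on the node's own filesystem
```

Other types such as nfs, awsElasticBlockStore, gcePersistentDisk and rbd are plugged into the same `volumes:` list with their own type-specific fields.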

Q64) What do you mean by Persistent Volume?

A Persistent Volume (PV) is a piece of network storage in the cluster, provisioned and controlled by the administrator. A PV exists independently of any individual pod, so its data outlives the pods that use it.
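A minimal sketch of an administrator-provisioned PV backed by NFS; the capacity, server address and export path are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi             # size offered by this volume
  accessModes:
  - ReadWriteOnce            # mountable read-write by a single node
  nfs:
    server: nfs.example.com  # hypothetical NFS server
    path: /exports/data      # hypothetical export path
```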

Q65) What do you mean by Persistent Volume Claim?

A Persistent Volume Claim (PVC) is a request for storage, which Kubernetes fulfils by binding the claim to a Persistent Volume and making the storage available to pods. The user is not expected to know the details of the underlying provisioning, and the claim has to be created in the same namespace as the pod that uses it.
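A minimal PVC sketch that could bind to a PV like the one above; the name, namespace and size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: default         # must match the namespace of the pod using it
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi           # ask for at least this much storage
```

A pod then references the claim by name under `volumes:` with a `persistentVolumeClaim` entry, without needing to know which PV backs it.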


Q66) Define Secrets in Kubernetes.

As the name implies, secrets hold sensitive information, in this context the login credentials of a user. Secrets are Kubernetes objects which store sensitive information such as usernames and passwords. Note that the values are stored base64-encoded, not encrypted by default, so access to secrets should be restricted.

Q67) How do you create secrets in Kubernetes?

Secrets can be created in various ways in Kubernetes. Some of them are:

Through Text (txt) files

Through YAML files

To create secrets from such files, the user supplies the username and password to a kubectl command, and the secret file has to be saved in the corresponding file format.
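A minimal YAML sketch of a secret; the name and credentials are hypothetical, and the values under `data:` must be base64-encoded. The same secret could also be created imperatively with `kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=password`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical secret name
type: Opaque
data:
  username: YWRtaW4=         # base64 of "admin"
  password: cGFzc3dvcmQ=     # base64 of "password"
```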

Q68) Explain the Network Policy in Kubernetes.


A network policy contains a set of rules governing information transfer between pods: it defines how pods in the same namespace communicate with one another, and also how they communicate with other network endpoints. Network policy support has to be enabled in the API server when configuring it at runtime. Using the resources available in a network policy, you select pods with labels and set rules that permit traffic to those pods.
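The label-selection-plus-rules pattern described above can be sketched as follows; the policy name, labels and port are hypothetical. This example allows ingress to pods labelled `app: backend` only from pods labelled `app: frontend`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:               # which pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:           # which pods may send traffic in
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080             # only this port is permitted
```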

Q69) What will happen while adding new API to Kubernetes?

Adding a fresh API to Kubernetes provides extra features, so a new API improves the functionality of Kubernetes. However, it also increases the cost and maintenance of the entire system, so the cost and complexity of the system need to be kept in check. This is achieved by defining clear conventions and scope for the new API.


Q70) How do you make changes in the API?

Changes to the API server have to be made by the Kubernetes team members. They are responsible for adding a new API without affecting the functions of the existing system.

Q71) What are the API versions available? Explain.

Kubernetes supports several API versions in order to support multiple structures. Versioning is done at the Alpha, Beta and Stable levels, and the features at each level meet different standards.

Alpha-level versions have alpha in their names (for example, v1alpha1). This level is prone to errors, and support for a feature can be dropped at any time, so it is suitable only for short-lived testing.


Beta-level versions contain beta in their names (for example, v1beta1). Code at this level is considered firm because it has been well tested, and the user can look for support at any time in case of errors. Even so, this level is not recommended for commercial (production) applications.

Stable-level versions receive many updates, and users should stay on the most recent one. Generally the version name will be vX, where v refers to the version and X is an integer (for example, v1).
Q72) Explain the kubectl command.

kubectl is a command-line interface for communicating with a Kubernetes cluster. It is used to control and administer the pods present in the cluster. To communicate with the cluster, the user runs kubectl commands locally; these commands are used to create, inspect and manage the cluster and the Kubernetes objects in it.


Q73) What are the kubectl commands you are aware of?

kubectl apply

kubectl annotate

kubectl attach

kubectl api-versions

kubectl autoscale

kubectl config

kubectl cluster-info

kubectl cluster-info dump

kubectl config set-cluster

kubectl config get-clusters

kubectl config set-credentials

Q74) Using the create command along with kubectl, what can be created?

The user can create several things using the create command with kubectl. They are:

Creating a namespace

Creating deployment

Creating secrets

Creating secret generic

Creating secret docker registry

Creating quota

Creating service account

Creating node port

Creating load balancer

Creating Cluster IP
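The items in the list above map directly onto kubectl create subcommands. A hedged sketch, where every name, image and credential is a placeholder:

```shell
# Namespace and deployment
kubectl create namespace demo
kubectl create deployment web --image=nginx

# Secrets: generic and docker-registry flavours
kubectl create secret generic db-creds --from-literal=password=s3cret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=bob --docker-password=pw

# Quota and service account
kubectl create quota demo-quota --hard=pods=10
kubectl create serviceaccount ci-bot

# Services: NodePort, LoadBalancer and ClusterIP
kubectl create service nodeport web-np --tcp=80:80
kubectl create service loadbalancer web-lb --tcp=80:80
kubectl create service clusterip web-ip --tcp=80:80
```

These commands require a reachable cluster and appropriate permissions.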

Q75) What is kubectl drain?

The kubectl drain command is used to drain a specific node for maintenance. Once this command is issued, the node is marked unschedulable and its pods are evicted, so no new containers are assigned to it. After maintenance is complete, the node is made available again with kubectl uncordon.
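A sketch of the maintenance cycle described above; the node name is a placeholder, and the flags shown are commonly needed when daemonset pods or emptyDir data are present:

```shell
# Mark the node unschedulable and evict its pods
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ...perform maintenance on the node...

# Return the node to service so new pods can be scheduled on it
kubectl uncordon node-1
```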


Q76) How do you create an application in Kubernetes?

Creating an application in Kubernetes starts with creating an application image in Docker, since Docker is essential for Kubernetes to run containers. The user can obtain an image in either of two ways: download an existing one, or build one from a Dockerfile. Since many images are openly available, an existing image can be downloaded from Docker Hub and stored in a local Docker registry.

To create a new application from a Dockerfile, the user first writes the Dockerfile. Once the image is built and completely tested, it can be deployed to a container.
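The build-push-deploy flow described above can be sketched as follows; the registry address, image name and tag are hypothetical:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t registry.example.com/myapp:1.0 .

# Push it to a registry the cluster can pull from
docker push registry.example.com/myapp:1.0

# Run it on the cluster as a deployment
kubectl create deployment myapp --image=registry.example.com/myapp:1.0
```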


Q77) What do you mean by application deployment in Kubernetes?

Deployment is the process of transferring images to containers and assigning those images to pods in the Kubernetes cluster. Application deployment automatically sets up the application cluster, creating the pods, replication controller, replica set and the service for the deployment. The cluster setup is organized so as to ensure proper communication between the pods.

This setup also provides a load balancer to distribute traffic between pods. Pods exchange information with one another through Kubernetes objects.
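The deployment-plus-load-balancer arrangement described above can be sketched as a Deployment and a Service in one manifest; names, image and replica count are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # three pods behind the service
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer         # distributes traffic across the pods
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```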

Q78) Define Autoscaling in Kubernetes.

One of the important features of Kubernetes is autoscaling. Autoscaling can be defined as scaling the nodes according to the demand for service response: through this feature, the cluster increases the number of nodes as service demand rises and decreases them when demand falls. At the time of writing, this feature is supported in Google Container Engine and Google Compute Engine, with AWS expected to provide it as well.
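Node autoscaling itself is configured at the cloud-provider level, but Kubernetes also autoscales at the pod level through the Horizontal Pod Autoscaler, which follows the same demand-driven principle. A minimal HPA sketch; the target deployment name and thresholds are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:            # what gets scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%
```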


Q79) How will you do monitoring in Kubernetes?

Monitoring is needed to manage larger clusters and is another important facility in Kubernetes. Several tools are available for it; monitoring through Prometheus is a famous and widely used approach. Prometheus not only monitors but also comes with an alerting system, and it is available as open source. Prometheus was developed at SoundCloud. It can handle multi-dimensional data more precisely than many other methods. Prometheus needs some additional components to do monitoring. They are:

Prometheus node exporter

Grafana

Ranch-eye

InfluxDB

Prom ranch exporter
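To tie the components above together, Prometheus is pointed at its targets through a scrape configuration. A minimal hypothetical prometheus.yml fragment; the job name, host names and port (9100 is the node exporter's conventional port) are examples:

```yaml
global:
  scrape_interval: 15s       # how often to pull metrics
scrape_configs:
- job_name: node-exporter
  static_configs:
  - targets: ['node-1:9100', 'node-2:9100']   # hypothetical cluster nodes
```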

Q80) What is Kubernetes Log?


Kubernetes container logs are quite similar to Docker container logs, but Kubernetes allows users to view the logs of deployed, i.e. running, pods. Through the following fields in Kubernetes, we can get even more specific information:

Kubernetes container name

Kubernetes pod name

Kubernetes namespace

Kubernetes UID

Docker image name
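Viewing the logs of running pods is done with kubectl logs; a sketch where the pod, container and namespace names are placeholders:

```shell
# Logs of a pod's single container
kubectl logs my-pod

# A specific container in a multi-container pod
kubectl logs my-pod -c my-container

# A pod in another namespace, including its previous (crashed) instance
kubectl logs my-pod -n my-namespace --previous

# Stream (follow) logs as they are written
kubectl logs -f my-pod
```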

Q81) What do you know about Sematext Docker Agent?

The Sematext Docker Agent is well known among present-day developers. It is a log collection agent with metrics and events: it runs as a small container on every Docker host and gathers metrics, events and logs for all containers and cluster nodes. If the core services are deployed in Docker containers, it observes every container, including the containers for Kubernetes core services.
