Kubernetes Flashcards

1
Q

container runtime

A

a k8s component, the underlying software that is used to run containers, e.g. docker

2
Q

a pod is what in k8s?

A

k8s object

3
Q

to see each pod’s node

A

kubectl get po -o wide

4
Q

create yml of pod quickly

A

kubectl run redis --image=redis123 --dry-run=client -o yaml

5
Q

edit pod

A
  1. use existing yml
  2. extract into yml and recreate the pod: k get po po-name -o yaml > pod.yml
  3. k edit works only for these pod properties:
    spec.containers[*].image
    spec.initContainers[*].image
    spec.activeDeadlineSeconds
    spec.tolerations
    spec.terminationGracePeriodSeconds
6
Q

ReplicaSet (prev. ReplicationController)
what is the difference between the two?

A
  1. high availability
  2. load balancing across nodes

selector: lets the ReplicaSet manage pods that were not created by it directly; the ReplicaSet requires a selector, the ReplicationController does not

7
Q

edit replicaset

A

k replace -f xxx.yml
k scale --replicas=6 -f rs-definition.yml
k scale --replicas=6 replicaset my-rs

8
Q

get version of a k object

A

k explain replicaset

9
Q

quick delete multiple pods

A

in one line: k delete po po1 po2 po3 po4

10
Q

deployment vs rs

A

deployment contains replicaset, rs contains pod

11
Q

--all-namespaces
--label

A

short form: -A
-l "tier=db"

12
Q

Cert Tip: Imperative Command

A
13
Q

Run an instance of the image webapp-color and publish port 8080 on the container to 8282 on the host.

A

docker run -p 8282:8080 webapp-color

14
Q

lightweight docker image

A

python:3.6-alpine, based on Alpine instead of Debian

15
Q

Practice test Docker images

A

answer is missing

16
Q

docker ps vs docker ps -a

A

-a lists all containers, including stopped ones
a container automatically exits when its task/process is done; the task is defined by "CMD". The process has to be something like a web server or db server, not "bash"

17
Q

docker run ubuntu

A

will exit, but:
docker run ubuntu [cmd]
docker run ubuntu sleep 5 lasts for 5 secs
or:
CMD sleep 5
CMD ["sleep", "5"]

or:
ENTRYPOINT ["sleep"]
docker run ubuntu-sleeper 10

or:
ENTRYPOINT ["sleep"]
CMD ["5"] -> default value

or: modify at runtime
docker run --entrypoint sleep2.0 ubuntu-sleeper 10

18
Q

k replace --force -f x.yml

A

deletes and recreates the resource (e.g. pods) from the file

19
Q

docker run --name ubuntu-container --entrypoint sleep2.0 ubuntu-sleeper 10 in a pod definition

A

command: ["sleep2.0"]
args: ["10"]
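
For context, a minimal pod sketch using these fields (pod and image names follow the card's example): command overrides the image's ENTRYPOINT, args overrides its CMD.

  apiVersion: v1
  kind: Pod
  metadata:
    name: ubuntu-sleeper-pod
  spec:
    containers:
    - name: ubuntu-container
      image: ubuntu-sleeper
      command: ["sleep2.0"]   # replaces ENTRYPOINT ["sleep"]
      args: ["10"]            # replaces CMD ["5"]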

20
Q

imperative vs declarative

A

imperative: k create configmap (all options on the command line)
declarative: k apply -f xxx.yml (desired state in a file; k create -f is the imperative object-configuration variant)

21
Q

convert base64

A

echo -n 'paswrd' | base64
echo -n 'paswrd' | base64 --decode

22
Q

ubuntu install

A

apt-get install

23
Q

list processes on docker host / inside container

security context

A

ps aux
the same containerized process has different PIDs on the host and inside the container -> process isolation
by default processes run as root, but root inside a container is not the same as root on the host

to change root's capabilities:
docker run --cap-add MAC_ADMIN ubuntu
or --cap-drop
or --privileged

24
Q

get user inside pod

A

k exec po-name -- whoami

25
Q

resource: CPU unit, memory unit, 1G vs 1Gi

A

1 vCPU; 0.1 vCPU == 100m (milli vCPU); minimum 1m
1 G = gigabyte = 1,000,000,000 bytes
1 Gi = gibibyte = 1,073,741,824 bytes (2 to the power of 30)

26
Q

can cpu or mem exceed the limit?

A

CPU cannot, it is throttled. Memory can, but the container will eventually be terminated (OOM kill)

27
Q

what are the resource requests and limits by default? best CPU configuration?

A

no requests and no limits by default. best practice for CPU: set requests but no limits

28
Q

when no request but a limit is set, what is the request then?

A

request = limit, set automatically by k8s

29
Q

set default resource limits/requests globally for all newly created pods

A

LimitRange object

30
Q

restrict the total amount of resources

A

ResourceQuota object: hard limits/requests for the current namespace
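
For reference, minimal sketches of both objects (names and values are illustrative):

  apiVersion: v1
  kind: LimitRange
  metadata:
    name: cpu-resource-constraint
  spec:
    limits:
    - type: Container
      default:            # default limit per container
        cpu: 500m
      defaultRequest:     # default request per container
        cpu: 500m
      max:
        cpu: "1"
      min:
        cpu: 100m
  ---
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: my-resource-quota
  spec:
    hard:
      requests.cpu: "4"
      requests.memory: 4Gi
      limits.cpu: "10"
      limits.memory: 10Gi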
31
Q

check the reason for a failed pod

A

describe the po and check Last State and its Reason: field

32
Q

default sa: what is it? is there a way to disable mounting of the sa token?

A

it is automatically mounted into every pod of the ns; it has very restricted permissions, enough only for basic kubectl queries. disable with automountServiceAccountToken: false in the pod spec

33
Q

how to change the sa of a pod from default? of a deployment?

A

change it in the spec and recreate the pod! for a deployment, no need to recreate, just edit (it rolls out new pods)

34
Q

since 1.24 the sa token is no longer created automatically; how to create one?

A

k create token sa-name, which has a 1h expiry by default, or (not recommended, no time boundary) create a kubernetes.io/service-account-token type secret for the account as before

35
Q

check the tokens of a sa / check the taints of a no

A

describe the sa and check Tokens:; describe the node and check Taints:

36
Q

check the sa of a pod/deploy

A

describe the pod and find Service Account:

37
Q

change the sa of a deploy from default

A

go to spec.template.spec and add serviceAccountName: sa-name

38
Q

taint a node / untaint a node

A

k taint no no-name app=blue:taint-effect
to remove the taint, add a minus at the end: k taint no no-name app=blue:taint-effect-

39
Q

what is restricted by taints and tolerations?

A

they only restrict the node. a pod with a matching toleration is not guaranteed to be scheduled on the tainted node

40
Q

check where the po runs

A

-o wide

41
Q

node selector; label a node

A

nodeSelector in pod.spec (very simple, only label: value pairs)
k label no no-name key=value

42
Q

node affinity

A

ensure a pod is hosted on a particular node; more expressive than nodeSelector

43
Q

requiredDuringSchedulingIgnoredDuringExecution

A

during scheduling the pod must find a matching node; if none is found, it is not scheduled. during execution, if the node's labels change so the condition no longer matches, it is ignored and the pod keeps running. if a requiredDuringExecution variant is defined, the pod will be evicted
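
A minimal pod sketch with node affinity (the size label and its values are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: with-node-affinity
  spec:
    containers:
    - name: nginx
      image: nginx
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: In
              values: ["large", "medium"]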
44
Q

multi-container pods

A

e.g. logging agent + web server; they need to share the same lifecycle, storage (volumes) and network

45
Q

multi-container pod patterns

A

sidecar: logging agent + web server
adapter: before sending logs to a central server, convert them into a unified format
ambassador: outsource connection logic to a separate container, e.g. the app always connects to a local database and the ambassador container proxies the request to the right db for the current stage

46
Q

check pod conditions

A

k describe po, check the Conditions section

47
Q

readiness probes

A

check whether a pod's ready status is really true or false. it is application specific, e.g. an http test on /api/ready, a check whether a particular TCP socket is listening, or exec of a custom script

48
Q

liveness probes

A

check whether a container is healthy: an http test on /api/healthy, a check whether a particular TCP socket is listening, or exec of a custom script
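
A minimal sketch with both probes on one container (paths, ports and timings are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: webapp
  spec:
    containers:
    - name: webapp
      image: webapp
      readinessProbe:
        httpGet:
          path: /api/ready
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
      livenessProbe:
        tcpSocket:
          port: 8080
        failureThreshold: 8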
49
Q

docker run -d event-simulator

A

detached mode, without printing the log

50
Q

print the log of a multi-container pod

A

k logs -f po-name container-name

51
Q

metrics server

A

1. one per k8s cluster
2. no historical data, in-memory only

52
Q

with metrics-server, what can you do?

A

k top node
k top po

53
Q

get things based on a label; get all pods' labels

A

k get po --selector app=App1
k get po -l app=App1
k get po --show-labels

54
Q

annotation

A

used to record other details for informational purposes, phone numbers etc., or for other integration purposes

55
Q

check the status of each revision

A

kubectl rollout history deployment nginx --revision=1
record the change cause: kubectl set image deployment nginx nginx=nginx:1.17 --record

56
Q

check a deploy after editing

A

kubectl rollout status deployment nginx
kubectl rollout history deployment nginx

57
Q

set deploy image

A

k set image deploy frontend simple-webapp=kodecloud/webapp-color:v2

58
Q

use a pod to run 3+2, how to get the output?

A

k logs po-name

59
Q

job: completions, parallelism

A

a job creates pods to run a one-time task
completions: the number of pods that must finish successfully; pods keep being created until that many complete
parallelism: how many pods run at the same time
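
A minimal job sketch for a one-time task like the 3+2 example (names are illustrative):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: math-add-job
  spec:
    completions: 3     # 3 pods must finish successfully
    parallelism: 2     # at most 2 run at once
    template:
      spec:
        containers:
        - name: math-add
          image: ubuntu
          command: ["expr", "3", "+", "2"]
        restartPolicy: Never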
60
Q

cronjob: schedule; how does its yaml differ from a job's?

A

a job can be scheduled; schedule: takes a cron-format string
yaml: spec.schedule and spec.jobTemplate.spec (the job's own spec)
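
A minimal cronjob sketch; note the job's own spec sits under jobTemplate (name and image are illustrative):

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: reporting-cron-job
  spec:
    schedule: "*/1 * * * *"   # cron format: every minute
    jobTemplate:
      spec:
        template:
          spec:
            containers:
            - name: reporting-tool
              image: reporting-tool
            restartPolicy: Never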
61
Q

check a job's success history, attempts

A

describe the job and check Pods Statuses:

62
Q

k8s networking: how to access a pod's ip?

A

inside a node, pods are reachable on their ip directly

63
Q

k8s service use cases

A

a service is in fact a virtual server inside the node; it has its own IP address, called the cluster IP of the service
NodePort: listens on a port on the node and forwards requests on that port to a port on the pod running the web app
ClusterIP: a virtual ip inside the cluster to allow communication between services, such as frontend to backend
LoadBalancer: provisions a load balancer for our app on supported cloud providers, to distribute load across the different web servers in the frontend tier

64
Q

nodePort: targetPort, port, nodePort

A

targetPort: port on the pod
port: port on the service (the service is in fact a virtual server inside the node)
nodePort: the port on the node for external access, between 30000 and 32767
only port is mandatory; targetPort defaults to port
to access: node ip + node port
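
A minimal NodePort service sketch tying the three ports together (names and numbers are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: webapp-service
  spec:
    type: NodePort
    selector:
      app: webapp
    ports:
    - port: 80          # on the service
      targetPort: 8080  # on the pod
      nodePort: 30080   # on the node, 30000-32767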
65
Q

ClusterIP: use case, targetPort, port

A

between the tiers of a web app: frontend, backend, db. targetPort is the port the backend exposes; port is where the service is exposed

67
Q

what is an endpoint of a svc?

A

endpoints are the pod addresses picked up by the service's label selector. they can be used to check whether our svc's selector is set correctly

68
Q

describe a frontend/backend setup

A

there is a web server serving the frontend to users, an app server serving the backend API, and a db server. the user sends a request to the web server, the web server sends a request to the API server, then the API server gets data from the db and returns it to the frontend

69
Q

ingress vs egress

A

the direction of a request, not of the response. if a netpol only defines ingress, the response traffic is automatically allowed. if the pod needs to send requests out to another server, an egress rule must be configured in the netpol

70
Q

connectivity by default

A

all allow: every pod can communicate with all others

71
Q

how to restrict?

A

use a network policy, e.g. only allow access from api-pod on port 3306. by default everything is connected, but once a netpol selects a po, that po denies by default the covered traffic type (ingress or egress)
network policies are not supported by every network solution on k8s; Flannel does not support them, check the documentation. you can still create them, but they won't be enforced and there is no error msg
a netpol ingress rule can select sources via podSelector, namespaceSelector, or ipBlock for servers outside the cluster

72
Q

netpol yaml rules

A

each - starts a new rule; separate rules are combined with OR
within one rule, to/from and ports are combined with AND
"- to: ... ports: ..." in one item is one rule (AND); "- to:" followed by "- ports:" is two rules (OR)
TCP and UDP need separate entries inside ports:
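
A sketch of one combined rule (from and ports are ANDed; the three selectors under from are ORed; labels and the cidr are illustrative):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: db-policy
  spec:
    podSelector:
      matchLabels:
        role: db
    policyTypes: ["Ingress"]
    ingress:
    - from:
      - podSelector:
          matchLabels:
            name: api-pod
      - namespaceSelector:
          matchLabels:
            name: prod
      - ipBlock:
          cidr: 192.168.5.10/32
      ports:
      - protocol: TCP
        port: 3306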
73
Q

pod A can ping pod B?

A

ping doesn't prove connectivity on a given port; it uses the ICMP protocol, not TCP

74
Q

k8s ingress controller vs load balancer vs nginx server?

A

the ingress controller contains a load balancer + nginx server (or any other load balancing solution) + other functions. it is a k8s deploy

75
Q

the k8s ingress controller's role in k8s?

A

the ingress controller gives the apps deployed in k8s a single accessible url, on which you can configure routes to different services within your cluster based on the url path. it can also implement ssl security. it has to be published as a nodePort or loadBalancer svc itself, and that is a one-time config

76
Q

to inspect an ing: check the logs of the ingress pod and find a wrong default backend

A

--default-backend-service=green-space/default-backend-service
redeploy the ingress controller with the above change

77
Q

term for ingress rules?

A

Ingress resources, kind Ingress. an ingress rule is defined per host/domain name (the base url), e.g. http://www.my-store.com/

78
Q

3 types of ingress rules defined in yaml

A

spec.backend directly (a single backend, no rules)
spec.rules.http with paths, for one host/domain name
spec.rules with -host entries, for multiple hosts
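
A minimal single-host sketch with two paths (host and service names are illustrative):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: ingress-wear-watch
  spec:
    rules:
    - host: www.my-store.com
      http:
        paths:
        - path: /wear
          pathType: Prefix
          backend:
            service:
              name: wear-service
              port:
                number: 80
        - path: /watch
          pathType: Prefix
          backend:
            service:
              name: watch-service
              port:
                number: 80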
79
Q

nginx.ingress.kubernetes.io/rewrite-target: /$2, replace("/something(/|$)(.*)", "/$2")

A

regular expression capturing group

80
Q

docker copy-on-write mechanism

A

image layers are read-only; the container layer is read-write. when you modify app.py, you can still modify it, but it is first copied up to the container layer

81
Q

when a container is removed, the container layer is gone; how to persist data?

A

use volumes. volume mounting: docker volume create data_volume creates a volume locally under /var/lib/docker/volumes
docker run -v data_volume:/path-in-container mysql
if you don't create the volume before running the container, it is created automatically

82
Q

external data source?

A

bind mounting: use the host path directly
docker run -v /full-path-to-data:/path-in-container mysql
newer syntax: docker run --mount type=bind,source=/full-path-to-data,target=/path-in-container mysql

83
Q

what implements the image layers? what implements volume mounting?

A

storage drivers: e.g. AUFS, ZFS, Overlay
volume drivers: Azure File Storage, NetApp, RexRay
docker run -it --name mysql --volume-driver rexray/ebs --mount src=ebs-vol,target=/var/lib/mysql mysql

84
Q

docker containers are meant to be transient

A

meaning they are meant to live for a short period of time

85
Q

pod.spec.volumes

A

not a nice way: a file path on the node (host), hostPath with type: Directory
better: use EBS instead, volumes.awsElasticBlockStore

86
Q

persistent volumes

A

to manage volumes centrally: a cluster-wide pool of storage volumes configured by admins. developers use it by creating a PVC. to bind a specific PV, use a selector in the PVC yaml. PVC and PV are a 1-to-1 mapping. if no PV is available, the PVC stays Pending
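
Minimal PV and PVC sketches (capacity, path and names are illustrative; a hostPath PV is for practice only):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-vol1
  spec:
    accessModes: ["ReadWriteOnce"]
    capacity:
      storage: 1Gi
    hostPath:
      path: /tmp/data
    persistentVolumeReclaimPolicy: Retain
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 500Mi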
87
Q

what happens when a pvc is deleted?

A

persistentVolumeReclaimPolicy: Retain/Delete/Recycle
Retain: the pv is retained and cannot be reused by another pvc
Delete: the pv is deleted as well
Recycle: the data is scrubbed and the pv reused for other pvcs

88
Q

authentication vs authorization of the kube-apiserver

A

who can access: static password files, static token files, certs, external identity providers like LDAP, SAs
what can they do: RBAC authorization

89
Q

all communication between the components around the kube-apiserver is secured by ???

A

tls certificates

90
Q

auth: account types

A

two types of account: user and sa. users cannot be created in k8s, an external identity provider is needed, for example. SAs can be created

91
Q

kubeconfig

A

three things: clusters, contexts, users
a context is what links the other two
k config view: to list it
k config use-context: to change it

92
Q

context

A

can also specify a namespace along with the user and cluster

93
Q

access api groups

A

curl https://localhost:6443 -k to list api groups
curl https://localhost:6443/apis -k | grep name
to get more, you need to authenticate with --key, --cert, --cacert etc., OR use kubectl proxy: it launches a proxy service locally on port 8001 and uses the credentials from the kubeconfig file, so no further auth is needed

94
Q

curl: check a svc

A

-m 5 sets a timeout
curl -m 5 jupiter-crew-svc:8080 (svc name:port)
for a nodePort: curl nodeIp:nodePort

95
Q

wget vs curl

A

both do http requests. wget -O- frontend:80 (-O- prints the content)
wget focuses on downloading files; curl serves more general purposes and by default doesn't download anything

96
Q

kubectl proxy

A

an http proxy service created by the kubectl utility to access the kube-apiserver without auth

97
Q

know the preferred version of an api group

A

k proxy & (listens on port 8001 by default; & runs it in the background)
curl localhost:8001/apis/authorization.k8s.io

98
Q

kube-apiserver --authorization-mode=Node,RBAC,Webhook

A

defines the order in which authorization modes are tried; by default it is AlwaysAllow

99
Q

role

A

apiGroups: [""] (empty means the core group)
resources: ["pods"]
verbs: ["get", "create"]
even down to specific pods: resourceNames: ["blue-po", "red-po"]

100
Q

rolebinding

A

binds a role to a user
subjects: the user details
roleRef: the role details
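
Minimal sketches of both (user and pod names are illustrative; note resourceNames narrows verbs like get/update, not create):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: developer
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "update"]
    resourceNames: ["blue-po", "red-po"]   # optional, narrows to specific pods
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: dev-user-binding
  subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: developer
    apiGroup: rbac.authorization.k8s.io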
101
Q

check my permissions; as admin you can check others' permissions

A

k auth can-i delete nodes
k auth can-i create pods --as dev-user

102
Q

clusterRole

A

a role is namespaced, but some resources are not namespaced but cluster scoped, such as no, pv, ns. a clusterRole can also authorize a user to access pods across all ns

103
Q

get the api groups and resource names

A

k api-resources

104
Q

get the admission controllers enabled by default

A

k -n kube-system exec kube-apiserver-pod-name -- kube-apiserver -h | grep enable-admission-plugins
check the second paragraph: admission plugins that should be enabled in addition to default enabled ones (...here all the default enabled ones!!!)
to edit in a kubeadm setup where it runs as a po: vi /etc/kubernetes/manifests/kube-apiserver.yaml, or follow the documentation

105
Q

vim search

A

press / to enter search mode, type the pattern, n jumps to the next occurrence
https://monovm.com/blog/how-to-search-in-vim-editor/#:~:text=Press%20the%20%22%2F%22%20key%20to%20enter%20search%20mode.,occurrence%20of%20the%20character%20string.

106
Q

two types of admission controllers

A

mutating: DefaultStorageClass, NamespaceAutoProvision
validating: NamespaceExists
mutating controllers are invoked first, followed by validating ones

107
Q

mutating/validating admission webhook server

A

two special admission controllers that support external admission logic; we point them to our own server, within or outside the k8s cluster, running our own code & logic. on k8s: kind: ValidatingWebhookConfiguration. the webhook sends an AdmissionReview object to the server, and the server returns allowed true or false
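
A minimal sketch pointing at an in-cluster webhook service (namespace, names and the ca bundle are illustrative):

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: pod-policy.example.com
  webhooks:
  - name: pod-policy.example.com
    clientConfig:
      service:
        namespace: webhook-ns
        name: webhook-service
      caBundle: <base64-encoded-ca-cert>
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None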
108
Q

api versions alpha/beta: how to enable? enable api groups

A

not enabled by default; in ExecStart= add --runtime-config=batch/v2alpha1

109
Q

beta vs alpha vs GA (stable)

A

beta has e2e tests, only minor bugs, and a commitment to move to GA (generally available)

110
Q

what is the preferred version?

A

multiple versions can be deployed to a k8s cluster, but when you run k get deploy, only the preferred version is returned. to know the preferred version: k explain deploy, or curl /apis/batch
storage version: only one version can be defined as the storage version; when an object of another version is created, it is converted to the storage version before being stored in etcd
storage and preferred versions are usually the same, but they don't necessarily have to be

111
Q

api deprecation policy

A

1. api elements may only be removed by incrementing the version of the api group
2. api objects must be able to round-trip between api versions in a given release without information loss, with the exception of whole REST resources that do not exist in some versions
3. an api version cannot be deprecated by a less stable one: v2alpha1 won't deprecate the GA v1 version; v2 can
4. beta versions are supported for at least 9 months or 3 releases; GA for 12 months or 3 releases
5. when a release supports both a new and a previous version (v1beta2, v1beta1), after 1 release the storage/preferred version can be changed to the new version

112
Q

k convert

A

k convert -f old-yaml-file --output-version new-version: outputs the new version's yaml
the kubectl convert plugin needs to be installed

113
Q

get the short name of a resource

A

k api-resources

114
Q

what happens in the etcd db when a resource is created? what is a controller?

A

the object is stored in the etcd data store. a controller keeps monitoring etcd and e.g. creates the replica set as defined. each resource has a controller (a deploy has the deployment controller), written in Go

115
Q

define a custom resource other than deploy, replicaset etc.

A

use a CRD. it only allows you to create the resource via k; it doesn't really do anything by itself. to actually do things, a custom controller is needed
scope: Namespaced or not
group: the api group, e.g. flights.com
names: kind, singular, plural, shortNames
versions + the schema of its yaml
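
A minimal CRD sketch for the flights.com example (kind and schema fields are illustrative):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: flighttickets.flights.com
  spec:
    group: flights.com
    scope: Namespaced
    names:
      kind: FlightTicket
      singular: flightticket
      plural: flighttickets
      shortNames: ["ft"]
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                number:
                  type: integer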
116
Q

custom controller

A

a loop that monitors and reconciles. usually written in Go, as that is easier (client-go ships queueing and caching). it can be in python, but that is expensive: you need to build your own queueing and caching mechanisms

117
Q

operator framework

A

CRD + custom controller packaged together; see operatorhub.io

118
Q

helm concept

A

a package manager for k8s apps

119
Q

install helm per os; get the linux version

A

with the snap utility: snap install helm
apt-based systems such as Debian or Ubuntu: apt-get install helm
pkg-based systems: pkg install helm
to get the linux version: lsb_release -a or cat /etc/*release*

120
Q

Chart.yaml

A

holds info about the helm chart: the version, name etc.
artifacthub.io: the chart repository; to search it: helm search hub wordpress
to search another repo, add it first: helm repo add bitnami https://charts.bitnami.com/bitnami
then: helm search repo wordpress
helm repo list: list the added repos
helm repo update: refresh the chart list from remote

121
Q

find the resources installed by a certain helm release, using a selector

A

kubectl get all -n NAMESPACE --selector=release=RELEASE_NAME

122
Q

helm install command

A

helm install release-name chart-name/chart-directory: downloads the chart, extracts it, installs it locally
release: each install of a chart is called a release; it is like an id
to list packages: helm list
to uninstall a package: helm uninstall release-name
to only download/extract a helm chart: helm pull --untar bitnami/wordpress

123
Q

app creation order

A

pv, pvc, po, svc! create the pvc before the pod

124
Q

remove a stuck pvc

A

delete the finalizers in the yaml metadata; use /finalizers to search in the vim editor

125
Q

networkpolicy: check connectivity?

A

root@controlplane:~$ kubectl exec -it webapp-color -- sh
/opt # nc -v -z -w 5 secure-service 80

126
Q

create a temporary pod

A

kubectl run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
--rm: delete the po after it exits; run, not exec!!!

127
Q

exec a command in a pod

A

k -n moon exec web-moon-c77655cc-dc8v4 -- find /usr/share/nginx/html

128
Q

vim: multi-line edit

A

Ctrl-V: block select, Shift-I: insert, Esc: apply

129
Q

containerPort:

A

ports.containerPort: list of ports to expose from the container. exposing a port here gives the system additional information about the network connections a container uses, but it is primarily informational!!! NOT specifying a port here DOES NOT prevent that port from being exposed

130
Q

tcp vs http

A

http is a layer higher than tcp. TCP checks the basic network connection; http checks the app-level connection

131
Q

assign a po to a no

A

spec.nodeSelector with labels
spec.nodeName with the node name

132
Q

run as root; add a capability

A

spec.securityContext.runAsUser: 0
spec.containers[].securityContext.capabilities.add
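
A minimal sketch (image and capability are illustrative); capabilities can only be set at the container level:

  apiVersion: v1
  kind: Pod
  metadata:
    name: ubuntu-sleeper
  spec:
    securityContext:
      runAsUser: 0            # pod level, applies to all containers
    containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "4800"]
      securityContext:
        capabilities:
          add: ["SYS_TIME"]   # container level only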
133
Q

echo to a log

A

while true; do echo "$(date) Hi I am shell" >> date.txt; sleep 5; done;

134
Q

kubectl get event --field-selector involvedObject.name=probe-ckad-aom

A

get the events whose involved object (e.g. a pod) is named probe-ckad-aom; a field selector, not a label

135
Q

get the first field

A

kubectl get ns | cut -d' ' -f1

136
Q

batch processing

A

k -n sun label pod -l type=runner protected=true # select pods with label type=runner and label them all protected
k get po --selector type=runner --no-headers | awk '{print $1}' | xargs -I {} kubectl label po {} protected=true --namespace=sun

137
Q

search pods based on text

A

k describe po | grep happy-shop -A10 -B10
grep 10 lines before and after!

138
Q

cm: --from-env-file vs --from-file

A

--from-env-file: each key=value line becomes its own entry
--from-file: the whole file content becomes the value and the file name its key, OR choose the key: --from-file=key=path
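
A quick illustration (file names are made up):

  # app.env contains lines like APP_COLOR=blue
  kubectl create configmap app-config --from-env-file=app.env

  # one entry: key "nginx.conf", value = the whole file content
  kubectl create configmap nginx-conf --from-file=nginx.conf

  # same, but stored under the key "my-key"
  kubectl create configmap nginx-conf2 --from-file=my-key=nginx.conf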
139
Q

istio: what is it?

A

a service mesh is the pattern; it manages communication between microservices. istio is an implementation of it

140
Q

what are its functions and why?

A

to provide in-cluster security in a microservice architecture. it introduces:
1. the sidecar pattern with the Envoy proxy: it separates non-business logic from the microservice. logic like communication configuration (COMM), security logic (SEC), retry logic (R), metrics & tracing logic (MT) can be configured once from the control plane (istiod) for all microservices (pods)
2. traffic split: canary deployments (e.g. a 90/10 split between versions)

141
Q

how to configure istio?

A

two crds, VirtualService and DestinationRule, both handled by istiod. by configuring istiod, we configure the proxies

142
Q

istio ingress gateway

A

the entrypoint to your cluster; an alternative to the nginx ingress controller. configured as a crd: Gateway
143