Pluralsight - Getting Started with Google Kubernetes Engine Flashcards

1
Q

Containers

A

Containers run on top of the VMs, so there is no need to wait for a VM (i.e. its OS) to boot: you just start the container, and it launches almost instantaneously.
They help scale individual application components without scaling or impacting the application as a whole.

Containers usually exist separately for each part of the application, and images are layered, which reduces costs because fewer files need to be stored and shipped. E.g. when updating a container, the differences can be stored in a new layer, and only that difference is copied across to create the new app version. The container does not include the whole app plus the changes; it contains only the changes, which are applied on top of the original base layer.

2
Q

Image

A

The application itself + its dependencies (libraries).

A container is a running instance of an image.

You can use pre-existing images from Artifact Registry, or build your own image and keep it private. Note: you can also apply IAM rules to the images.

3
Q

Docker

A

Used to create and run applications in containers

4
Q

Dockerfile

A

Typically consists of 4 instructions:
1. FROM –> creates a base layer, e.g. pulled from a public repo
2. COPY . /app –> adds a new layer containing files copied in from your build tool's working directory
3. RUN make /app –> builds the application using the make command and puts the result into that layer
4. CMD python /app/app.py –> specifies what command should be run within the container when it starts
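Put together, a minimal Dockerfile following these four instructions might look like the sketch below; the base image, paths, and requirements file are illustrative assumptions, not from the course:

```dockerfile
# Base layer pulled from a public registry (illustrative image and tag)
FROM python:3.11-slim
# New layer: copy the build context into /app inside the image
COPY . /app
# Build layer: install dependencies (assumes a requirements.txt exists)
RUN pip install -r /app/requirements.txt
# Command executed when a container is started from this image
CMD ["python", "/app/app.py"]
```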

5
Q

Kubernetes

A

If containers are on different VMs and need to communicate with each other internally, or if we run out of VM resources for all the containers we create, we can scale up and manage the containers using Kubernetes.

Declarative configuration is used; this reduces errors because the desired state is documented once and Kubernetes continuously tries to adhere to it.

Declarative management is what tells Kubernetes how to deal with the objects.

Objects have 2 key elements:
1. Spec (the desired state)
2. Status (the current state)

Containers in the same Pod share the same resources and storage.

The Control Plane manages the Pods: it makes sure the Pods are in accordance with the configs specified by the user.

A cluster in this context is all the underlying VMs: the master node/VM runs the Control Plane; the other nodes/VMs (worker nodes) run the Pods.

6
Q

Control Plane components

A
  1. kube-apiserver - handles kubectl commands; it's the single element the user interacts with directly. Before it can be used, kubectl must be configured! kubectl config view lets you view the config of the kubectl command itself
  2. etcd - the cluster's DB –> cluster config data, where Pods are running, etc.
  3. kube-scheduler –> schedules Pods onto nodes; it knows the state of all the nodes, so it knows where each Pod needs to be launched.
  4. kube-controller-manager –> monitors the cluster state and makes changes to the cluster to achieve the desired state
  5. kube-cloud-manager (upstream name: cloud-controller-manager) –> responsible for bringing in cloud features, like load balancing
7
Q

kubectl configs and commands

A
  1. To see the list of Pods in a cluster:
    kubectl get pods
  2. Connect kubectl to the GKE cluster, i.e. retrieve credentials for the specified cluster:
    gcloud container clusters get-credentials cluster_name --region region_name
    This command writes the cluster's credentials into the config file in the .kube directory under $HOME.
    It needs to be run just once per cluster. When we switch to a new cluster, the config needs to be updated again.
  3. kubectl command structure:
    kubectl [what-you-want-to-do] [on-which-type-of-object] [object-name] [flags]
    kubectl [get/describe/logs/exec] [pods/deployments/nodes] [object-name] [flags]
8
Q

Introspection

A

Debugging problems while the code is running.
Gathering info about containers, Pods, services, and other objects within the cluster.

Gather info about your app:
- get –> gives the status of an object (Running, Pending, etc.; CrashLoopBackOff is an error indicating the Pod has been restarted repeatedly yet still can't start, which usually points to an error in its configuration files)
- describe –> investigate a Pod in detail: shows the Pod's labels, resource requirements, volumes (storage), and IP
- exec –> run commands in a shell inside the container
- logs –> a way to see what's happening inside a Pod; stdout is standard output to the console, stderr carries error messages

9
Q

Lab:
Cloud Build

A
  1. nano quickstart.sh
  2. #!/bin/sh
    echo "Hello, world! The time is $(date)."
  3. nano Dockerfile
  4. FROM alpine
    COPY quickstart.sh /
    CMD ["/quickstart.sh"]
  5. In Cloud Shell, run the following command to make the quickstart.sh script executable:
    chmod +x quickstart.sh
  6. Create an Artifact Registry repo:
    gcloud artifacts repositories create quickstart-docker-repo --repository-format=docker \
    --location=us-east1 --description="Docker repository"
  7. Build the Docker container image in Cloud Build:
    gcloud builds submit --tag us-east1-docker.pkg.dev/${DEVSHELL_PROJECT_ID}/quickstart-docker-repo/quickstart-image:tag1
  8. Build containers
    - create a cloudbuild.yaml file with the following:
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: [ 'build', '-t', 'YourRegionHere-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1', '.' ]
    images:
    - 'YourRegionHere-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1'
  9. Run the commands below to set a region variable and insert that value into the YAML file. This file instructs Cloud Build to use Docker to build an image using the Dockerfile specification in the current local directory:
    export REGION=us-east1
    sed -i "s/YourRegionHere/$REGION/g" cloudbuild.yaml
  10. To start a Cloud Build using cloudbuild.yaml as the build configuration file:
    gcloud builds submit --config cloudbuild.yaml
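The placeholder substitution in steps 8 and 9 can be verified locally without gcloud; the file written below mirrors the lab's cloudbuild.yaml:

```shell
# Recreate the lab's cloudbuild.yaml with the placeholder region
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'YourRegionHere-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1', '.' ]
images:
- 'YourRegionHere-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1'
EOF

# Replace the placeholder with the chosen region, in place
export REGION=us-east1
sed -i "s/YourRegionHere/$REGION/g" cloudbuild.yaml

# Both image references should now point at us-east1
grep -c 'us-east1-docker.pkg.dev' cloudbuild.yaml
```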
10
Q

Theory from programme leaders

A

Containerisation = microservices

Why use containers
If we don't use VMs, we use more hardware. Using VMs we can run multiple isolated workloads on the same hardware, but VMs cannot respond to client demand as quickly because the OS takes time to start (earlier it could be 5-10 minutes).
Moreover, multiple applications installed on the same machine compromise security because more ports need to be opened. All apps need to restart if the VM is restarted. One app could, for example, take all the memory on startup, preventing other apps from starting.

Solution
Create a packaging system where all the libraries, binaries, and metadata are bundled into a package rather than installed on the OS. Everything needed by the app lives in one of these packages, which are meant to run in memory.
New problem: the OS wouldn't know how to connect with the package.

Solution 2
Create container layer that translates what’s in the packages to the OS. This is the ‘Interpretation layer’.

Optimise OS space problem
People wanted to optimise the use of OS, they wanted multiple packages to exist in the OS.

Solution 3
For each package, a new process is created so it can launch. You can now set limits on the amount of VM resources that each app may use.
Now the apps start in seconds, and the amount of hardware required is also reduced because instead of having just 1 app on the VM, we can have e.g. 10.

Orchestration problem
As the number of apps increased, it became harder to scale them up and down when needed, and harder to manage them. You would need to manage the IP of the VM as well as the IPs of all the apps on the VM, because each app acts as its own object.

Solution 4
To automate the management, Google originally developed Borg, a container orchestration system (the predecessor of Kubernetes).

Dockerfile
A Dockerfile is needed to tell the VM how to launch the app.
1. The app is developed, e.g. in Python
2. Imagine it uses something from Linux or some other components
3. FROM ubuntu:18.04 –> cannot be changed later; the app is created on top of this base layer. It can be taken from the public Docker repo or a URL.
4. COPY . /app –> copy everything from the current directory and dump it into the /app directory in the Ubuntu layer
5. RUN make /app –> build/install all app components
6. CMD python /app/app.py –> launch the app

Docker continued
- you have ONE image, but all containers are created based on that image
1. Prepare the Dockerfile
2. Build an image: docker build -t image_name . –> the dot says that all the files Docker needs are in the current working directory
3. Run Docker: docker run -p 8080:80 image_name
4. Can push to Container Registry: docker push gcr.io/your-project-id/image_name

GKE modes
Autopilot mode:
- RECOMMENDED unless you need specific configurations
- pay per Pod (per launched workload; a Pod is a grouping of containers, so that we can refer to e.g. 2 containers under one common IP, used when the containers need to communicate and work in tandem; Pods remove individual administration of containers; containers within a Pod share the same network namespace, so they communicate with each other via localhost/internal IPs)
- all is configured/reshaped for you, like when we don’t know how big the cluster should be
- use autopilot to let clusters learn, then transition to standard mode if you want more control
- optimised for production
- SSH to pods is removed
Standard mode:
- paying per VM
- more customisations available

11
Q

Object management

A

You define objects using manifest files in YAML or JSON.

A manifest defines the desired state for the object (e.g. a Pod).

It is recommended to keep the manifest files in version control to track all the changes made to the objects.

Objects have:
- apiVersion
- metadata
- name
- uid (unique identifier)
- labels (can be set through a kubectl command)

Object controllers
- control several Pods at the same time rather than each Pod individually, using the manifest files
- This can mean having a Deployment, which is basically one YAML file that makes sure the desired state of the Pods in a cluster is maintained.
- Deployments are good for web apps

Don't install software directly on a container; install it by building IMAGES with the software that you need and then redeploy them
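As a sketch of such a manifest, a minimal Deployment might look like this (all names and the image are illustrative, not from the course):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # illustrative name
  labels:
    app: web
spec:                         # the desired state the controller maintains
  replicas: 3                 # keep 3 Pods running at all times
  selector:
    matchLabels:
      app: web
  template:                   # Pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest   # illustrative image
```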

12
Q

LAB:
GKE deployment

A

Clusters can be created across a region or in a single zone. A single zone is the default. When you deploy across a region the nodes are deployed to three separate zones and the total number of nodes deployed will be three times higher.

To let users access your deployment, you can expose it to external traffic

13
Q

LAB:
GKE deployment using Cloud Shell

A
  1. Create a Kubernetes cluster:
    gcloud container clusters create-auto $my_cluster --region $my_region
  2. To create a kubeconfig file with the credentials of the current user (to allow authentication) and provide the endpoint details for a specific cluster:
    gcloud container clusters get-credentials $my_cluster --region $my_region
  3. Inspect the current cluster:
    kubectl config view
  4. Print out the cluster information for the active context:
    kubectl cluster-info –> this tells you what is running where (which IPs etc.)
  5. Print the active context:
    kubectl config current-context –> e.g. gke_qwiklabs-gcp-00-b461c71df8e0_us-east4_autopilot-cluster-1
  6. To list the contexts, including the active cluster and the user:
    kubectl config get-contexts
  7. To change the active context/cluster:
    kubectl config use-context gke_${DEVSHELL_PROJECT_ID}_us-east4_autopilot-cluster-1
    You can use this approach to switch the active context when your kubeconfig file has the credentials and configuration for several clusters already populated. It requires the full name of the cluster, which includes the gke prefix, the project ID, the location, and the display name, all concatenated with underscores.
  8. Deploy Pods to the cluster:
    kubectl create deployment --image nginx nginx-1
  9. View deployed Pods:
    kubectl get pods
  10. To view the complete details of a Pod:
    kubectl describe pod $my_nginx_pod
  11. A service is required to expose a Pod to clients outside the cluster:
    kubectl expose pod $my_nginx_pod --port 80 --type LoadBalancer
  12. Connect to a Pod to adjust settings (not recommended; do it to the image instead):
    we clone the repo with existing configs for the Pod (manifest file) to make life easier and create a soft link to it, then run the following to deploy an existing manifest from the repo:
    kubectl apply -f ./new-nginx-pod.yaml
  13. Can then SSH (sort of) into the Pod to install stuff on it:
    kubectl exec -it new-nginx -- /bin/bash
  14. Install the nano editor and update the OS first, then create a new file in the specified directory with the desired output message. Exit the Pod shell.
    Set up port forwarding from Cloud Shell to the nginx Pod:
    kubectl port-forward new-nginx 10081:80
  15. In a new Cloud Shell window, test this through the forwarded local port:
    curl http://127.0.0.1:10081/test.html
  16. In a new Cloud Shell window, view logs:
    kubectl logs new-nginx -f --timestamps
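The manifest referenced in step 12 (new-nginx-pod.yaml) could look roughly like this sketch; the lab's actual file may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: new-nginx           # matches the Pod name used in steps 13-16
  labels:
    name: new-nginx
spec:
  containers:
  - name: new-nginx
    image: nginx            # illustrative image tag
    ports:
    - containerPort: 80     # the port forwarded in step 14
```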
14
Q

Alerting policy definition

A

Set of conditions that you want to monitor.

15
Q

Internal HTTPs LB

A

Cymbal Superstore’s GKE cluster requires an internal http(s) load balancer. You are creating the configuration files required for this resource.
What is the proper setting for this scenario?

A. Annotate your ingress object with an ingress.class of “gce.”
B. Configure your service object with a type: LoadBalancer.
C. Annotate your service object with a neg reference.
D. Implement custom static routes in your VPC.

C is correct: annotating the service object with a Network Endpoint Group (NEG) reference is necessary when you're configuring an internal HTTP(S) load balancer for a GKE cluster.
A NEG is a resource in Google Cloud that represents a group of network endpoints, which can be VM instances, Kubernetes Pods, or container endpoints managed by a GKE cluster.
By annotating the service object with a NEG reference, you’re essentially telling the internal HTTP(S) load balancer which endpoints to route traffic to within the GKE cluster.
This configuration enables the internal load balancer to distribute traffic to the appropriate Pods or containers running in the GKE cluster.

Option A is incorrect because an ingress.class of "gce" provisions an external HTTP(S) load balancer; an internal one uses the "gce-internal" ingress class. Option B, configuring the service object with a type: LoadBalancer, is incorrect because by default it would create an external passthrough load balancer, not an internal HTTP(S) one. Option D, implementing custom static routes in your VPC, is also incorrect, as it does not directly relate to configuring an internal HTTP(S) load balancer for a GKE cluster.
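As a sketch, the NEG annotation on a Service manifest looks like this (the service name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: checkout-service                       # illustrative name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # tells GKE to create NEGs for this Service
spec:
  type: ClusterIP
  selector:
    app: checkout
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # port the container listens on
```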

16
Q

What Kubernetes object provides access to logic running in your cluster via endpoints that you define?

A

Services:
A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them. It enables other applications within or outside the Kubernetes cluster to communicate with your application’s Pods using well-defined endpoints.

By creating a Service object in Kubernetes and defining the appropriate selectors or endpoints, you can expose your application’s functionality to other components within the cluster or to external users or systems. This abstraction allows for decoupling the application logic from its network details, facilitating service discovery, load balancing, and routing of traffic to the correct Pods based on the defined endpoints.

Services vs deployments
While Deployments manage the lifecycle of Pods and ensure their desired state, Services provide access to the logic running within those Pods via endpoints. Services act as an abstraction layer above Pods, enabling network communication and load balancing across multiple Pods that belong to the same Service.

In summary, Deployments manage the creation, scaling, and updating of Pods, while Services provide a stable endpoint for accessing the functionality exposed by those Pods. Each serves a distinct role in the Kubernetes ecosystem, with Deployments managing application instances and Services facilitating network communication.
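A minimal Service manifest sketch showing how endpoints are defined via a label selector (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service    # stable endpoint name inside the cluster
spec:
  selector:
    app: web           # routes traffic to any Pod labeled app: web
                       # (e.g. Pods managed by a Deployment)
  ports:
  - port: 80           # port clients connect to
    targetPort: 8080   # port the application container listens on
```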

17
Q

Cymbal Superstore’s supply chain management system has been deployed and is working well. You are tasked with monitoring the system’s resources so you can react quickly to any problems. You want to ensure the CPU usage of each of your Compute Engine instances in us-central1 remains below 60%. You want an incident created if it exceeds this value for 5 minutes. You need to configure the proper alerting policy for this scenario. What should you do?

A

A. Choose resource type of VM instance and metric of CPU load; condition triggers if any time series violates; condition is below; threshold is .60; for 5 minutes.
B. Choose resource type of VM instance and metric of CPU utilization; condition triggers if all time series violate; condition is above; threshold is .60; for 5 minutes.
C. Choose resource type of VM instance and metric of CPU utilization; condition triggers if any time series violates; condition is below; threshold is .60; for 5 minutes.
D. Choose resource type of VM instance and metric of CPU utilization; condition triggers if any time series violates; condition is above; threshold is .60; for 5 minutes.

Correct answer: D. The relevant metric is CPU utilization (not CPU load), the condition must be above the threshold, and an incident should be created if any time series violates it for 5 minutes.
Answer B isn't correct because "all time series violate" means every instance would have to exceed 60% simultaneously before an incident is created, rather than any single instance. Answers A and C use the wrong condition direction (below), and A also uses the wrong metric.
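Option D could be expressed as a policy file (usable with gcloud alpha monitoring policies create --policy-from-file, assuming that alpha surface is available in your gcloud version); field names follow the Cloud Monitoring AlertPolicy resource, and display names are illustrative:

```yaml
# cpu-policy.yaml - sketch of the alerting policy in option D
displayName: "CPU utilization above 60% (us-central1)"   # illustrative name
combiner: OR
conditions:
- displayName: "VM CPU > 60% for 5 minutes"
  conditionThreshold:
    filter: >
      resource.type = "gce_instance" AND
      metric.type = "compute.googleapis.com/instance/cpu/utilization"
    comparison: COMPARISON_GT    # condition is "above"
    thresholdValue: 0.6          # 60% utilization
    duration: 300s               # sustained for 5 minutes
    trigger:
      count: 1                   # any single violating time series is enough
```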

18
Q

You need to configure access to Cloud
Spanner from the GKE cluster that is
supporting Cymbal Superstore’s
ecommerce microservices application.
You want to specify an account type to
set the proper permissions.

A

A. Assign permissions to a Google account referenced
by the application.
B. Assign permissions through a Google Workspace
account referenced by the application.
C. Assign permissions through service account
referenced by the application.

D. Assign permissions through a Cloud Identity account
referenced by the application.

C is correct: service accounts are the Google-recommended account type for applications performing service-to-service authentication.
Explanations for why the other options are wrong:

A. Assign permissions to a Google account referenced by the application.

Google accounts are typically associated with individual users and are not recommended for service-to-service authentication within Google Cloud Platform (GCP). Using a Google account for this purpose would not provide the necessary security and manageability.
B. Assign permissions through a Google Workspace account referenced by the application.

Google Workspace accounts (formerly G Suite) are meant for user access to Google Workspace applications like Gmail, Drive, and Calendar. They are not designed for service-to-service authentication in GCP.
D. Assign permissions through a Cloud Identity account referenced by the application.

Cloud Identity accounts are primarily used for identity and access management within an organization. They are not intended for service-to-service authentication in GCP.

19
Q

Kubernetes Pod

A

A group of containers: we can put several containers (with their dependencies) together in one Pod if they need to communicate with each other and share resources.

20
Q

BigTable use case

A

Great for Time-Series data

21
Q

You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over Internal IP without creating additional routes. You need to set up VPC and the 2 subnets. Which configuration meets these requirements?

A

A. Create a single custom VPC with 2 subnets. Create each subnet in a different region and with a different CIDR range.

If you create more than one subnet in a VPC, the CIDR blocks of the subnets cannot overlap. Subnets in the same VPC can reach each other over internal IPs without additional routes, even when they are in different regions.

22
Q

Billing roles

A

Billing Account Administrator
The Billing Account Administrator role grants the IT department the permissions to associate projects with billing accounts, turn off billing for the projects, and view the credit card information for the accounts that they resell to their customers.
It does not give them permissions to view the contents of the projects.

Billing Account User
The Billing Account User role gives the service account the permissions to enable billing (associate projects with the organization’s billing account for all projects in the organization) and thereby permit the service account to enable APIs that require billing to be enabled.