Section-A Flashcards
Study Section A (106 cards)
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer
D. Use separate backend pools for each API path behind the load balancer
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?
A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore
A. Load data into Google BigQuery
Explanation:
BigQuery is Google’s serverless, highly scalable, low-cost enterprise data warehouse. It is queried with standard SQL, which matches the experience of analysts who only know a SQL interface.
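For illustration, analysts can keep working in plain SQL once the data is loaded. A minimal sketch assuming the google-cloud-bigquery Python client and a hypothetical table `mydataset.sales`:

```python
from google.cloud import bigquery

# The client picks up the project and credentials from the environment
# (e.g. GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client()

# Standard SQL, exactly what the business analysts already know.
# `mydataset.sales` is a hypothetical table name used for illustration.
query = """
    SELECT region, SUM(amount) AS total_sales
    FROM `mydataset.sales`
    GROUP BY region
    ORDER BY total_sales DESC
"""

for row in client.query(query).result():
    print(row.region, row.total_sales)
```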
The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)
A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
A. Port the application code to run on Google App Engine
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
Reference:
https://cloud.google.com/appengine/docs/standard/java/tools/uploadingana
A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?
A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching
A. The session variable is local to just a single instance
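The fix is to keep session state in shared storage rather than in an instance-local variable, so every App Engine instance sees the same "already viewed" list. A hedged sketch using Redis (for example, Memorystore) as the shared store; the host, key names, and helper functions here are illustrative assumptions, not part of the original question:

```python
import redis

# Shared store reachable by every instance (the host is a placeholder).
store = redis.Redis(host="10.0.0.3", port=6379)

def mark_viewed(user_id: str, article_id: str) -> None:
    # A per-user set of viewed article IDs, visible to all instances.
    store.sadd(f"viewed:{user_id}", article_id)

def unseen_articles(user_id: str, article_ids: list[str]) -> list[str]:
    viewed = {v.decode() for v in store.smembers(f"viewed:{user_id}")}
    return [a for a in article_ids if a not in viewed]
```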
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?
A. Direct them to download and install the Google StackDriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features
C. Help them define their requirements and assess viable logging tools
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company’s web hosting platform. Improvements to the QA/Test processes have already accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)
A. Introduce a green-blue deployment model
B. Replace the QA environment with canary releases
C. Fragment the monolithic platform into microservices
D. Reduce the platform’s dependency on relational database systems
E. Replace the platform’s relational database systems with a NoSQL database
A. Introduce a green-blue deployment model
C. Fragment the monolithic platform into microservices
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
```
FROM ubuntu:16.04
COPY . /src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
```
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? (Choose two.)
A. Remove Python after running pip
B. Remove dependencies from requirements.txt
C. Use a slimmed-down base image like Alpine Linux
D. Use larger machine types for your Google Container Engine node pools
E. Copy the source after the package dependencies (Python and pip) are installed
C. Use a slimmed-down base image like Alpine Linux
E. Copy the source after the package dependencies (Python and pip) are installed
Explanation:
Deployment speed can be improved by limiting the size of the uploaded app, limiting the complexity of the build in the Dockerfile, and ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
Answer : CE
Reference:
https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU
https://www.alpinelinux.org/about/
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.
What should you do?
A. Deploy fewer changes to production
B. Deploy smaller changes to production
C. Increase the load on your test and staging environments
D. Deploy changes to a small subset of users before rolling out to production
D. Deploy changes to a small subset of users before rolling out to production
Answer : D
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases.
What should you do?
A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
Answer : D
Reference:
https://cloud.google.com/trace/docs/quickstart#find
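Illustrative only: one way to get a per-service latency breakdown today is to wrap each service call in a trace span and export spans to Cloud Trace (the successor to Stackdriver Trace). This sketch assumes the `opentelemetry-sdk` and `opentelemetry-exporter-gcp-trace` packages; the span names are placeholders:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

# Export spans to Cloud Trace so slow hops show up in the trace waterfall.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(request_id: str) -> None:
    # Each microservice wraps its work in a span; the trace view then
    # shows which service contributes most to the end-to-end latency.
    with tracer.start_as_current_span("news-feed.render"):
        with tracer.start_as_current_span("user-service.get_profile"):
            pass  # call the user service here
        with tracer.start_as_current_span("story-service.get_stories"):
            pass  # call the story service here
```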
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.
What should you do?
A. Use a different database
B. Choose larger instances for your database
C. Create snapshots of your database more regularly
D. Implement routinely scheduled failovers of your databases
D. Implement routinely scheduled failovers of your databases
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.
Which approach should you use?
A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
Explanation:
Stackdriver Logging provides the ability to filter, search, and view logs from your cloud and open-source application services. It lets you define metrics based on log contents that are incorporated into dashboards and alerts, and it lets you export logs to BigQuery, Google Cloud Storage, and Pub/Sub.
https://cloud.google.com/stackdriver/
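As an illustration, an export sink to BigQuery can be created programmatically. A minimal sketch assuming the google-cloud-logging Python client, a hypothetical project `my-project`, and a pre-existing dataset `metrics_archive` whose access grants the sink's writer identity permission to insert rows:

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()

# Hypothetical destination dataset; it must already exist.
destination = "bigquery.googleapis.com/projects/my-project/datasets/metrics_archive"

sink = client.sink(
    "metrics-archive-sink",
    filter_='resource.type="gce_instance"',  # narrow the filter as needed
    destination=destination,
)
sink.create()
```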
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?
A. Google Cloud Dedicated Interconnect
B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network
A. Google Cloud Dedicated Interconnect
Explanation:
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.
Benefits:
- Traffic between your on-premises network and your VPC network doesn’t traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
- Your VPC network’s internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don’t need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
- You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
- The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google’s network.
Reference: https://cloud.google.com/interconnect/docs/details/dedicated
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process.
What should you do?
A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor’s view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
Answer : D
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?
A. In the source code
B. In an environment variable
C. In a secret management system
D. In a config file that has restricted access through ACLs
C. In a secret management system
Answer : C
Reference:
https://cloud.google.com/kms/docs/secret-management
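For instance, with Secret Manager (one possible secret management system on GCP), each microservice can fetch the database credential at startup instead of baking it into code or config. A minimal sketch assuming the google-cloud-secret-manager Python client and a hypothetical secret named `db-password`:

```python
from google.cloud import secretmanager

def get_db_password(project_id: str) -> str:
    client = secretmanager.SecretManagerServiceClient()
    # "latest" resolves to the newest enabled version of the secret.
    name = f"projects/{project_id}/secrets/db-password/versions/latest"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")
```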
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager.
What are two business risks of migrating to Cloud Deployment Manager? (Choose two.)
A. Cloud Deployment Manager uses Python
B. Cloud Deployment Manager APIs could be deprecated in the future
C. Cloud Deployment Manager is unfamiliar to the company’s engineers
D. Cloud Deployment Manager requires a Google APIs service account to run
E. Cloud Deployment Manager can be used to permanently delete cloud resources
F. Cloud Deployment Manager only supports automation of Google Cloud resources
B. Cloud Deployment Manager APIs could be deprecated in the future
F. Cloud Deployment Manager only supports automation of Google Cloud resources
Answer : BF
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
1. Be based on open-source technology for cloud portability
2. Dynamically scale compute capacity based on demand
3. Support continuous software delivery
4. Run multiple segregated copies of the same application stack
5. Deploy application bundles using dynamic templates
6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
A. Google Kubernetes Engine, Jenkins, and Helm
B. Google Kubernetes Engine and Cloud Load Balancing
C. Google Kubernetes Engine and Cloud Deployment Manager
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing
Explanation:
Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines. Kubernetes Engine is a hosted version of
Kubernetes, a powerful cluster manager and orchestration system for containers.
When you need to set up a continuous delivery (CD) pipeline, deploying Jenkins on Kubernetes Engine provides important benefits over a standard VM-based deployment.
Incorrect Answers:
A: Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. Use Helm to find and deploy popular software packaged as charts and to manage releases of Kubernetes packages.
Reference:
https://cloud.google.com/solutions/jenkins-on-kubernetes-engine
You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted.
What should you do?
A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory
B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance
D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance
Explanation:
A startup script or a shutdown script is specified through the metadata server, using the startup-script or shutdown-script metadata keys.
Answer : C
Reference:
https://cloud.google.com/compute/docs/startupscript
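For reference, the script set under the `shutdown-script` metadata key runs when the instance is preempted; code inside the VM can also ask the metadata server whether preemption has begun. A hedged sketch of that check; the graceful-shutdown step is a placeholder and not part of the original question:

```python
import requests

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/preempted"
HEADERS = {"Metadata-Flavor": "Google"}

def is_preempted() -> bool:
    # Returns "TRUE" once Compute Engine has begun preempting the VM.
    resp = requests.get(METADATA_URL, headers=HEADERS, timeout=2)
    return resp.text.strip() == "TRUE"

if is_preempted():
    # Placeholder: flush queues, close connections, checkpoint state, etc.
    print("Preemption detected; shutting down gracefully.")
```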
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier.
How should you configure the network?
A. Add each tier to a different subnetwork
B. Set up software based firewalls on individual VMs
C. Add tags to each tier and set up routes to allow the desired traffic flow
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow
Explanation:
Google Cloud Platform (GCP) controls traffic between instances through firewall rules and network tags. Rules and tags can be defined once and used across all regions.
Reference:
https://cloud.google.com/docs/compare/openstack/
https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team.
Which three actions should you take? (Choose three.)
A. Use Stackdriver Logging to search for the module log entries
B. Read the debug GCE Activity log using the API or Cloud Console
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
D. Identify whether a live migration event of the failed server occurred, using the activity log
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen
A. Use Stackdriver Logging to search for the module log entries
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
Answer : ACE
You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week.
What should you do?
A. Log in to a server, and iterate on the fix locally
B. Revert the source code change, and rerun the deployment pipeline
C. Log into the servers with the bad code change, and swap in the previous code
D. Change the instance group template to the previous one, and delete all instances
B. Revert the source code change, and rerun the deployment pipeline
Answer : B
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
A. Multiple Organizations with multiple Folders
B. Multiple Organizations, one for each department
C. A single Organization with Folders for each department
D. A single Organization with multiple projects, each with a central owner
C. A single Organization with Folders for each department
Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. You can use folders to group projects under an organization in a hierarchy. For example, your organization might contain multiple departments, each with its own set of GCP resources. Folders allow you to group these resources on a per-department basis. Folders are used to group resources that share common IAM policies. While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent.
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-folders
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace. What should you do?
[Stack trace image not shown]
A. Upload missing JAR files and redeploy your application.
B. Digitally sign all of your JAR files and redeploy your application
C. Recompile the CLoakedServlet class using an MD5 hash instead of SHA1
B. Digitally sign all of your JAR files and redeploy your application
You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by proving a message was sent by a specific user.
What should you do?
A. Tag messages client side with the originating user identifier and the destination user.
B. Encrypt the message client side using block-based encryption with a shared key.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user’s private key.
D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user’s private key.
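In practice this means signing (not merely encrypting) each message with the sender’s private key and verifying it with the corresponding public key. A minimal sketch using Ed25519 from the Python `cryptography` package; the algorithm choice and message text are illustrative assumptions:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each user holds a private key on their own device; the server and other
# clients only ever see the corresponding public key.
sender_key = Ed25519PrivateKey.generate()
public_key = sender_key.public_key()

message = b"hello from alice"
signature = sender_key.sign(message)

try:
    public_key.verify(signature, message)  # raises if forged or altered
    print("Message was genuinely sent by this user.")
except InvalidSignature:
    print("Spoofed or tampered message.")
```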
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.
What should they do?
A. Configure their replication to use UDP.
B. Configure a Google Cloud Dedicated Interconnect.
C. Restore their database daily using Google Cloud SQL.
D. Add additional VPN connections and load balance them.
E. Send the replicated transaction to Google Cloud Pub/Sub.
B. Configure a Google Cloud Dedicated Interconnect.