Topic 1 Flashcards
(194 cards)
Question #: 64
Topic #: 1
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation B. Create the Key object for each Entity and run multiple get operations, one operation for each entity C. Use the identifiers to create a query filter and run a batch query operation D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
https://www.examtopics.com/discussions/google/view/7290-exam-professional-cloud-architect-topic-1-question-64/
A. Create the Key object for each Entity and run a batch get operation
https://cloud.google.com/datastore/docs/best-practices
Use batch operations for your reads, writes, and deletes instead of single operations. Batch operations are more efficient because they perform multiple operations with the same overhead as a single operation.
Firestore in Datastore mode supports batch versions of the operations which allow it to operate on multiple objects in a single Datastore mode call.
Such batch calls are faster than making separate calls for each individual entity because they incur the overhead for only one service call. If multiple entity groups are involved, the work for all the groups is performed in parallel on the server side.
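As a rough illustration of what a batch get looks like at the API level, the Datastore lookup method accepts multiple keys in a single call. The project ID, kind, and numeric IDs below are hypothetical; in application code you would normally use the client library's batch get rather than calling the REST API directly.
# One lookup call fetches several root entities at once (hypothetical project and keys)
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://datastore.googleapis.com/v1/projects/my-project:lookup" \
  -d '{"keys": [
        {"path": [{"kind": "Task", "id": "5730082031140864"}]},
        {"path": [{"kind": "Task", "id": "5744378509164544"}]}
      ]}'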
Question #: 160
Topic #: 1
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A. • Append metadata to file body • Compress individual files • Name files with serverName-Timestamp • Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket. B. • Batch every 10,000 events with a single manifest file for metadata • Compress event files and manifest file into a single archive file • Name files using serverName-EventSequence • Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket. C. • Compress individual files • Name files with serverName-EventSequence • Save files to one bucket • Set custom metadata headers for each object after saving D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
https://www.examtopics.com/discussions/google/view/54369-exam-professional-cloud-architect-topic-1-question-160/
D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
https://cloud.google.com/storage/docs/request-rate#naming-convention
“A longer randomized prefix provides more effective auto-scaling when ramping to very high read and write rates. For example, a 1-character prefix using a random hex value provides effective auto-scaling from the initial 5000/1000 reads/writes per second up to roughly 80000/16000 reads/writes per second, because the prefix has 16 potential values. If your use case does not need higher rates than this, a 1-character randomized prefix is just as effective at ramping up request rates as a 2-character or longer randomized prefix.”
Example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-05-10-12-00-01/file3
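A minimal sketch of how a writer could apply such a prefix, assuming a hypothetical bucket named event-archive; openssl is used here only to generate the random hex value:
# Compress the event file, then upload it under a random 6-hex-character prefix
gzip server42-2016-05-10-12-00-00.json
gsutil cp server42-2016-05-10-12-00-00.json.gz \
  "gs://event-archive/$(openssl rand -hex 3)/server42-2016-05-10-12-00-00.json.gz"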
Question #: 131
Topic #: 1
Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don’t expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL. B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner. C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL. D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
https://www.examtopics.com/discussions/google/view/56615-exam-professional-cloud-architect-topic-1-question-131/
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions
https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functions
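For reference, the linked setup guide places Cloud Functions behind an external HTTP(S) load balancer through a serverless network endpoint group (NEG). A minimal sketch with hypothetical names (api-handler, api-neg, api-backend); the URL map, target proxy, and forwarding rule are created afterwards as in any HTTP(S) load balancer setup:
gcloud compute network-endpoint-groups create api-neg \
  --region=us-central1 --network-endpoint-type=serverless \
  --cloud-function-name=api-handler
gcloud compute backend-services create api-backend --global --load-balancing-scheme=EXTERNAL
gcloud compute backend-services add-backend api-backend --global \
  --network-endpoint-group=api-neg --network-endpoint-group-region=us-central1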
Question #: 79
Topic #: 1
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console. B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command. C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command. D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
https://www.examtopics.com/discussions/google/view/7323-exam-professional-cloud-architect-topic-1-question-79/
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
How does Horizontal Pod Autoscaler work with Cluster Autoscaler?
Horizontal Pod Autoscaler changes the deployment’s or replicaset’s number of replicas based on the current CPU load. If the load increases, HPA will create new replicas, for which there may or may not be enough space in the cluster. If there are not enough resources, CA will try to bring up some nodes, so that the HPA-created pods have a place to run. If the load decreases, HPA will stop some of the replicas. As a result, some nodes may become underutilized or completely empty, and then CA will terminate such unneeded nodes.
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
“Caution: Do not enable Compute Engine autoscaling for managed instance groups for your cluster nodes. GKE’s cluster autoscaler is separate from Compute Engine autoscaling”
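A minimal sketch of the two pieces working together, with hypothetical names (deployment web, cluster my-cluster); the HorizontalPodAutoscaler scales Pod replicas on CPU, and the cluster autoscaler then adds or removes nodes so the replicas have somewhere to run:
kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=20
gcloud container clusters update my-cluster --zone=us-central1-a \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 --node-pool=default-pool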
Question #: 22
Topic #: 1
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
FROM ubuntu:16.04
COPY . /src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? (Choose two.)
A. Remove Python after running pip B. Remove dependencies from requirements.txt C. Use a slimmed-down base image like Alpine Linux D. Use larger machine types for your Google Container Engine node pools E. Copy the source after the package dependencies (Python and pip) are installed
https://www.examtopics.com/discussions/google/view/54406-exam-professional-cloud-architect-topic-1-question-22/
C. Use a slimmed-down base image like Alpine Linux
E. Copy the source after the package dependencies (Python and pip) are installed
C. Use a slimmed-down base image like Alpine Linux: The ubuntu:16.04 image is a full-fledged operating system, which means it’s larger and takes longer to download and build. Alpine Linux is a minimal distribution designed for containers, resulting in significantly smaller images and faster deployments.
E. Copy the source after the package dependencies (Python and pip) are installed: Docker builds images in layers. Each RUN, COPY, and ADD instruction creates a new layer. By copying the source code after installing dependencies, you can take advantage of Docker’s caching mechanism. If your source code changes, only the layers related to the source code need to be rebuilt, not the layers related to dependencies.
Question #: 61
Topic #: 1
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment. B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment. C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment. D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
https://www.examtopics.com/discussions/google/view/6330-exam-professional-cloud-architect-topic-1-question-61/
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
gcloud command to create K8s cluster https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster
Create a Google Kubernetes Engine (GKE) cluster: You can use the Google Cloud Console or the gcloud command-line tool to create a GKE cluster, which will provide the underlying infrastructure for running your application.
Deploy the application to the cluster: You can use the kubectl command-line tool to apply the Kubernetes Deployment file provided by the development team to the cluster: kubectl apply -f deployment.yaml
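Putting those two steps together, a minimal sketch with hypothetical names:
gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml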
Compute: Instances in Multiple Zones >= 99.9% SLA
Cloud SQL: built-in high availability (HA) option, which supports a 99.95% SLA.
Question #: 5
Topic #: 1
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?
A. Direct them to download and install the Google StackDriver logging agent B. Send them a list of online resources about logging best practices C. Help them define their requirements and assess viable logging tools D. Help them upgrade their current tool to take advantage of any new features
https://www.examtopics.com/discussions/google/view/6837-exam-professional-cloud-architect-topic-1-question-5/
C. Help them define their requirements and assess viable logging tools
The correct answer is C. Help them define their requirements and assess viable logging tools.
Explanation:
The development team has expressed the need for a better logging tool for their new cloud-based product. As a cloud architect, it is your role to help them find a solution that meets their needs.
Option A, directing them to download and install the Google StackDriver logging agent, is not the best solution as it assumes that StackDriver logging will be the best fit for their needs without proper evaluation of their requirements.
Option B, sending them a list of online resources about logging best practices, may be helpful, but it does not address their specific needs.
Option D, helping them upgrade their current tool, may not be the best solution either since they have already expressed their concerns that their current tool will not meet their needs.
Option C, helping them define their requirements and assess viable logging tools, is the best option. This involves understanding their needs and gathering requirements, evaluating different logging tools available in the market, and selecting the best tool that meets their needs.
By working with the development team to identify their requirements, you can help them choose a logging tool that will enable them to capture errors and analyze their historical log data efficiently. This may involve evaluating various cloud-based logging solutions, such as StackDriver, Splunk, or ELK stack, and comparing their features, functionality, and pricing to identify the most suitable option for their needs.
In summary, the best course of action is to understand the development team’s needs, help them define their requirements, and assess viable logging tools to find a solution that meets their needs.
Question #: 173
Topic #: 1
The operations team in your company wants to save Cloud VPN log events for one year. You need to configure the cloud infrastructure to save the logs. What should you do?
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save. B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you want to save. C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period. D. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
https://www.examtopics.com/discussions/google/view/68684-exam-professional-cloud-architect-topic-1-question-173/
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.
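A rough sketch of that setup with hypothetical names; the filter assumes Cloud VPN gateway logs use the vpn_gateway resource type, a one-year retention or lifecycle policy would be set on the bucket separately, and the sink's writer service account needs object creation access on the bucket:
gsutil mb -l us-central1 gs://vpn-log-archive
gcloud logging sinks create vpn-logs-sink \
  storage.googleapis.com/vpn-log-archive \
  --log-filter='resource.type="vpn_gateway"'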
Question #: 167
Topic #: 1
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10 B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10 C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
https://www.examtopics.com/discussions/google/view/7073-exam-professional-cloud-architect-topic-1-question-167/
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler
A is incorrect because gcloud container clusters resize (https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize) only performs a one-time manual resize of the node pool; it does not make the cluster scale with demand. B is incorrect because add-tags (https://cloud.google.com/sdk/gcloud/reference/compute/instances/add-tags) only attaches network tags to an instance and has no effect on autoscaling. D would technically enable autoscaling, but it creates a brand-new cluster and requires redeploying the application, which is unnecessary because a cluster is already running. C enables the cluster autoscaler on the existing cluster with minimum and maximum node counts (https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/update); the flags in the original option text were mis-formatted and the alpha command has since been superseded by the GA gcloud container clusters update command, but C is still the intended answer. When you see this question at a test center, select C.
Question #: 85
Topic #: 1
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group. B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group. C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country- group. D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
https://www.examtopics.com/discussions/google/view/6457-exam-professional-cloud-architect-topic-1-question-85/
A. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups as members. Grant the ‘all_analysts’ group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
The question requires that analysts from each country can only view their own dataset, so BigQuery dataViewer cannot be assigned at the project level. Only A limits each analyst to querying and viewing the data they are allowed to see.
The dataViewer role can be applied to a dataset, a table, or a view.
jobUser can be applied only at the project level, not at the dataset level.
https://cloud.google.com/bigquery/docs/access-control#bigquery.dataViewer
https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser
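A sketch of the two grants with hypothetical project, dataset, and group names; project-level jobUser lets analysts run queries, while dataset-level READER (the dataset equivalent of dataViewer) limits what each country group can see:
gcloud projects add-iam-policy-binding analytics-prj \
  --member="group:all-analysts@example.com" --role="roles/bigquery.jobUser"
bq show --format=prettyjson analytics-prj:country_de > country_de.json
# add {"role": "READER", "groupByEmail": "de-analysts@example.com"} to the "access" list, then:
bq update --source country_de.json analytics-prj:country_de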
Question #: 76
Topic #: 1
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google- recommended way for your application to authenticate to the required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles. C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM. D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
https://www.examtopics.com/discussions/google/view/11818-exam-professional-cloud-architect-topic-1-question-76/
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
The Google-recommended way for your application to authenticate to Cloud Pub/Sub and other Google Cloud services when running on Compute Engine VMs is to use the VM's attached service account. Every Compute Engine VM runs as a service account (the Compute Engine default service account unless you attach a dedicated one), and client libraries on the VM obtain its credentials automatically through Application Default Credentials. To authenticate to Cloud Pub/Sub, grant that service account the appropriate Pub/Sub IAM roles.
Option B, ensuring that VM service accounts do not have access to Cloud Pub/Sub and using VM access scopes instead, would not work because access scopes are a legacy mechanism that only limits which APIs a VM can call; they do not grant IAM roles, and the service account still needs the appropriate permissions.
Option C, generating an OAuth2 access token for accessing Cloud Pub/Sub, encrypting it, and storing it in Cloud Storage for access from each VM, would not be a suitable solution because it would require manual management of access tokens, which can be error-prone and insecure.
Option D, creating a gateway to Cloud Pub/Sub using a Cloud Function and granting the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles, would not be a suitable solution because it would not allow the application to directly authenticate to Cloud Pub/Sub.
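A minimal sketch with hypothetical names (my-project, pubsub-writer, app-vm): create a dedicated service account, grant it the Pub/Sub publisher role, and attach it to the VM so the client libraries pick up its credentials automatically:
gcloud iam service-accounts create pubsub-writer --project=my-project
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:pubsub-writer@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"
gcloud compute instances create app-vm --zone=us-central1-a \
  --service-account=pubsub-writer@my-project.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform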
Question #: 140
Topic #: 1
Your company has a Kubernetes application that pulls messages from Pub/Sub and stores them in Filestore. Because the application is simple, it was deployed as a single pod. The infrastructure team has analyzed Pub/Sub metrics and discovered that the application cannot process the messages in real time. Most of them wait for minutes before being processed. You need to scale the elaboration process that is I/O-intensive. What should you do?
A. Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure Kubernetes autoscaling deployment. B. Configure a Kubernetes autoscaling deployment based on the subscription/push_request_latencies metric. C. Use the --enable-autoscaling flag when you create the Kubernetes cluster. D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.
https://www.examtopics.com/discussions/google/view/60396-exam-professional-cloud-architect-topic-1-question-140/
D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.
num_undelivered_messages metric can indicate if subscribers are keeping up with message submissions.
https://cloud.google.com/pubsub/docs/monitoring#monitoring_the_backlog
Subscription Metric: Scaling based on the subscription/num_undelivered_messages metric directly ties the scaling behavior to the number of unprocessed messages in Pub/Sub. This ensures that your application scales out when there are more messages to process and scales in when the queue is short.
Relevant Metric: This metric is relevant for an I/O-intensive application that processes messages from Pub/Sub, ensuring that the scaling is directly responsive to the message processing demand.
Question #: 122
Topic #: 1
You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods. B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods. C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error. D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error.
https://www.examtopics.com/discussions/google/view/56425-exam-professional-cloud-architect-topic-1-question-122/
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
According to the reference, answer should be A.
https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
But doesn't updating the cluster require downtime?
No, it does not require shutting down the cluster: https://cloud.google.com/stackdriver/docs/solutions/gke/installing#console_1
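A rough sketch of enabling the integration on a running cluster with hypothetical names; the exact flag names depend on your gcloud and GKE versions, and older releases used --enable-stackdriver-kubernetes instead:
gcloud container clusters update my-cluster --zone=us-central1-a \
  --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM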
Question #: 162
Topic #: 1
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region.
What steps must you take?
A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region. B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region. C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the US-East region D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
https://www.examtopics.com/discussions/google/view/7018-exam-professional-cloud-architect-topic-1-question-162/
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
Some argue instead for B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
D is correct. A and B attach the copied file system to a new VM rather than cleanly setting it up as the root disk of a new, easily repeatable instance. Option C on its own is not enough, because an image created with dd still has to be brought onto the GCP platform before gcloud or the Cloud Console can create a VM from it.
=> Why not B?
https://cloud.google.com/compute/docs/instances/create-start-instance#createsnapshot
This clearly states that a snapshot can be used to create a VM instance, and that a custom image is only needed when creating many instances. Here we are creating only one.
=> You can't use a snapshot created in another project.
=> According to the documentation we now can: https://cloud.google.com/compute/docs/disks/create-snapshots
=> Only if it's in the same zone: https://cloud.google.com/compute/docs/disks/manage-snapshots#sharing_snapshots
"Note: The disk must be in the same zone as the instance."
But that is not the case here: the copy goes to a different region and a different project, hence you must use a Cloud Storage bucket, which is why D is the answer.
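A sketch of the answer D workflow with hypothetical project, bucket, and resource names; the snapshot and image are created in the production project, the image is exported to Cloud Storage, and the copy is then imported and launched in the other project and region:
gcloud compute disks snapshot prod-vm --zone=us-central1-a \
  --snapshot-names=prod-root-snap --project=prod-project
gcloud compute images create prod-root-image --source-snapshot=prod-root-snap --project=prod-project
gcloud compute images export --image=prod-root-image \
  --destination-uri=gs://transfer-bucket/prod-root.tar.gz --project=prod-project
gcloud compute images create prod-root-image \
  --source-uri=gs://transfer-bucket/prod-root.tar.gz --project=dr-project
gcloud compute instances create prod-copy --zone=us-east1-b \
  --image=prod-root-image --project=dr-project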
Question #: 84
Topic #: 1
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for new
VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies. B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image. C. Use Puppet to create the managed instance group and install the OS package dependencies. D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
https://www.examtopics.com/discussions/google/view/6873-exam-professional-cloud-architect-topic-1-question-84/
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image
Managed instance groups are a way to manage a group of Compute Engine instances as a single entity. If you want to automate the creation of a managed instance group, you can use tools such as Terraform, Deployment Manager, or Puppet to automate the process.
To minimize the startup time for new VMs in the instance group, you should create a custom VM image with all of the OS package dependencies pre-installed. This will allow you to create new VMs from the custom image, which will significantly reduce the startup time compared to installing the dependencies on each VM individually. You can then use Deployment Manager to create the managed instance group with the custom VM image.
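The answer uses Deployment Manager, but the resources a Deployment Manager template would declare map to roughly these gcloud calls (hypothetical names); the key point is that the instance template references the pre-baked image, so new VMs boot without installing packages:
gcloud compute images create app-base-image \
  --source-disk=prep-vm --source-disk-zone=us-central1-a
gcloud compute instance-templates create app-template \
  --image=app-base-image --image-project=my-project --machine-type=e2-standard-2
gcloud compute instance-groups managed create app-mig \
  --template=app-template --size=3 --zone=us-central1-a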
Question #: 60
Topic #: 1
You need to set up Microsoft SQL Server on GCP. Management requires that there’s no downtime in case of a data center outage in any of the zones within a
GCP region. What should you do?
A. Configure a Cloud SQL instance with high availability enabled. B. Configure a Cloud Spanner instance with a regional instance configuration. C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets. D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
https://www.examtopics.com/discussions/google/view/6443-exam-professional-cloud-architect-topic-1-question-60/
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
could also be A. Configure a Cloud SQL instance with high availability enabled.
Cloud SQL offers a high availability configuration and now supports Microsoft SQL Server;
please see:
https://cloud.google.com/sql/docs/sqlserver/high-availability?_ga=2.30855355.-503483612.1582800507
The correct approach is D: Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Here's why this is the best option:
* SQL Server Always On Availability Groups: This solution provides high availability by automatically failing over to another node in the event of a failure. It's specifically designed for SQL Server and ensures minimal downtime in case of outages.
* Windows Failover Clustering: By configuring Windows Failover Clustering with Always On Availability Groups, you can achieve high availability by ensuring that SQL Server can fail over to another node in case of a zone or node failure.
* Placing nodes in different zones: By deploying nodes in different zones within the same region, you ensure that your setup is protected from zone-level outages. If one zone experiences a failure, the other zone can take over without downtime.
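A rough sketch of placing the availability group nodes in separate zones; the SQL Server image family and project below are examples and should be verified with the list command before use:
gcloud compute images list --project=windows-sql-cloud --filter="family~sql-ent-2019"
gcloud compute instances create sql-node-1 --zone=us-central1-a \
  --machine-type=n2-standard-8 \
  --image-family=sql-ent-2019-win-2019 --image-project=windows-sql-cloud
gcloud compute instances create sql-node-2 --zone=us-central1-b \
  --machine-type=n2-standard-8 \
  --image-family=sql-ent-2019-win-2019 --image-project=windows-sql-cloud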
Question #: 49
Topic #: 1
Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?
A. The effective policy is determined only by the policy set at the node B. The effective policy is the policy set at the node and restricted by the policies of its ancestors C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
https://www.examtopics.com/discussions/google/view/6846-exam-professional-cloud-architect-topic-1-question-49/
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors
https://cloud.google.com/iam/docs/resource-hierarchy-access-control
Question #: 46
Topic #: 1
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do?
A. Grant your colleague the IAM role of project Viewer B. Perform a rolling restart on the instance group C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
https://www.examtopics.com/discussions/google/view/6953-exam-professional-cloud-architect-topic-1-question-46/
C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys
The key element in C is disabling the health check, so that autohealing stops restarting the servers automatically.
But before that, the actual troubleshooting step is to check the Cloud console -> Instance template -> Metadata and see whether a startup script is there; if so, review it and possibly remove it. (Consider the case where a script in the metadata is causing the VM to restart.)
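A rough sketch of those two steps with hypothetical names; --clear-autohealing removes the autohealing policy (availability of the flag depends on your gcloud version), and ssh-keys.txt is assumed to contain the existing project-wide keys plus the colleague's public key so that no current keys are overwritten:
gcloud compute instance-groups managed update my-mig --zone=us-central1-a --clear-autohealing
# ssh-keys.txt = current project ssh-keys metadata value with the new key appended
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt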
Question #: 6
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvements to the QA/Test processes accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)
A. Introduce a green-blue deployment model B. Replace the QA environment with canary releases C. Fragment the monolithic platform into microservices D. Reduce the platform's dependency on relational database systems E. Replace the platform's relational database systems with a NoSQL database
https://www.examtopics.com/discussions/google/view/54383-exam-professional-cloud-architect-topic-1-question-6/
A. Introduce a green-blue deployment model
C. Fragment the monolithic platform into microservices
https://circleci.com/blog/canary-vs-blue-green-downtime/
A. Introduce a green-blue deployment model: This approach involves having two identical environments for the platform, one that is currently live (blue environment) and another that is inactive (green environment). When a new deployment is ready, it is first deployed to the green environment where it can be tested and verified before traffic is switched over to it. This approach reduces the impact of any errors or issues that may arise during deployment since traffic is still being served by the currently live environment. If any issues are identified during testing, the deployment can be rolled back without any impact on users. Once the green environment is verified to be working correctly, traffic is switched over to it, and the blue environment becomes the inactive one. This approach is particularly useful for high-traffic applications where downtime during deployments is not acceptable.
B. Replacing the QA environment with canary releases is not an additional safeguard here: canary releases complement QA rather than replace it, and removing the QA stage would discard the process that already achieved an 80% reduction in rollbacks.
C. Fragmenting the monolithic platform into microservices makes each deployment smaller and independent, so an erroneous release affects only one service and can be rolled back in isolation; this reduces both the number and the impact of unplanned rollbacks.
D. Reducing the platform’s dependency on relational database systems could improve the platform’s scalability and performance, but it may not necessarily reduce the number of unplanned rollbacks of erroneous production deployments.
E. Similarly, replacing the platform’s relational database systems with a NoSQL database could improve the platform’s scalability and performance, but it may not necessarily reduce the number of unplanned rollbacks of erroneous production deployments.
In summary, the most effective additional approaches to reduce the number of unplanned rollbacks are to introduce a blue-green deployment model and to fragment the monolithic platform into microservices (A and C).
Question #: 21
Topic #: 1
Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load.
What should you do?
A. Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones D. Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of expected load
https://www.examtopics.com/discussions/google/view/7128-exam-professional-cloud-architect-topic-1-question-21/
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
https://cloud.google.com/architecture/scalable-and-resilient-apps?hl=en#test_your_resilience
Test your resilience
It's critical to test that your app responds to failures in the way you expect. The overarching theme is that the best way to avoid failure is to introduce failure and learn from it.
Simulating and introducing failures is complex. In addition to verifying the behavior of your app or service, you must also ensure that expected alerts are generated, and appropriate metrics are generated. We recommend a structured approach, where you introduce simple failures and then escalate.
For example, you might proceed as follows, validating and documenting behavior at each stage:
Introduce intermittent failures. Block access to dependencies of the service. Block all network communication. Terminate hosts.
For details, see the Breaking your systems to make them unbreakable video from Google Cloud Next 2019.
If you’re using a service mesh like Istio to manage your app services, you can inject faults at the application layer instead of killing pods or machines, or you can inject corrupting packets at the TCP layer. You can introduce delays to simulate network latency or an overloaded upstream system. You can also introduce aborts, which mimic failures in upstream systems.
Question #: 55
Topic #: 1
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage. C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage. D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
https://www.examtopics.com/discussions/google/view/7043-exam-professional-cloud-architect-topic-1-question-55/
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
Transfer Appliance lets you quickly and securely transfer large amounts of data to Google Cloud Platform via a high capacity storage server that you lease from Google and ship to our datacenter. Transfer Appliance is recommended for data that exceeds 20 TB or would take more than a week to upload.
https://cloud.google.com/transfer-appliance/docs/2.2/overview
Question #: 74
Topic #: 1
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications?
A. Cloud Spanner, because it is globally distributed B. Cloud SQL, because it is a fully managed relational database C. Cloud Firestore, because it offers real-time synchronization across devices D. BigQuery, because it is designed for large-scale processing of tabular data
https://www.examtopics.com/discussions/google/view/11817-exam-professional-cloud-architect-topic-1-question-74/
D. BigQuery, because it is designed for large-scale processing of tabular data
4 reasons to choose BigQuery (supports petabytes of data); see the sample query after this list:
- OLAP Data
- Relational DB (SQL)
- 100s of TB data
- Analytics and Reporting
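For example, an analyst could run an aggregate report over a hypothetical marketing events table directly with standard SQL:
bq query --use_legacy_sql=false \
  'SELECT campaign, COUNT(*) AS events FROM `my-project.marketing.web_events` GROUP BY campaign ORDER BY events DESC LIMIT 10'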
Question #: 29
Topic #: 1
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?
A. In the source code B. In an environment variable C. In a secret management system D. In a config file that has restricted access through ACLs
https://www.examtopics.com/discussions/google/view/7200-exam-professional-cloud-architect-topic-1-question-29/
C. In a secret management system
https://cloud.google.com/kms/docs/secret-management
When designing a distributed application with microservices, it is important to ensure that credentials for accessing the database back-end are stored securely. The storage location should be accessible by the microservices but not by anyone else who is unauthorized.
Out of the options provided, the best option for storing credentials securely is C: In a secret management system. A secret management system is a centralized system that stores and manages sensitive information, such as passwords, API keys, and certificates. This system provides a secure way to manage the credentials, which can be accessed by authorized microservices as needed.
Using A: In the source code, is not a secure way to store credentials because the source code can be accessed by anyone who has access to the code repository. This includes not only authorized developers but also potentially unauthorized users who have gained access to the repository.
Using B: In an environment variable, is better than storing the credentials in source code but still not as secure as using a secret management system. Environment variables can be accessed by any process running on the same machine, so if an attacker gains access to the machine, they could potentially access the credentials stored in environment variables.
Using D: In a config file that has restricted access through ACLs, is better than storing the credentials in source code or environment variables, but it still has limitations. While the access control list (ACL) can restrict access to the config file, it may not be as secure as using a secret management system. Additionally, managing access control lists for multiple microservices can become cumbersome and error-prone.
In summary, when storing credentials for distributed microservices, it is best to use a centralized secret management system that provides secure and controlled access to the credentials.
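On Google Cloud, Secret Manager is one such system. A minimal sketch with a hypothetical secret name and service account; each microservice's service account is granted secretAccessor and reads the secret at startup:
gcloud secrets create db-password --replication-policy="automatic"
echo -n "s3cr3t-value" | gcloud secrets versions add db-password --data-file=-
gcloud secrets add-iam-policy-binding db-password \
  --member="serviceAccount:orders-svc@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
gcloud secrets versions access latest --secret=db-password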
Question #: 67
Topic #: 1
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?
A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases. B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases. C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to 'IfNotPresent' in the staging namespace, and then promote it to the production namespace after testing. D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with 'latest'. 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to 'Always'. Restart the pods to automatically deploy new production releases.
https://www.examtopics.com/discussions/google/view/6890-exam-professional-cloud-architect-topic-1-question-67/
C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to ‘IfNotPresent’ in the staging namespace, and then promote it to the production namespace after testing.
C is correct. Images are built from the production branch, tagged with a version number, and pushed to Container Registry; with imagePullPolicy set to 'IfNotPresent', nodes reuse an already-pulled image for a given tag and only pull when a new version tag is deployed.
C is the best choice. You can create a GKE cluster with just one small node pool and use separate namespaces for staging and production; changes are tested in staging before being promoted.
It should be option C because, in the real world, GKE is the best fit for such a case: it bin-packs a small workload (0.1 CPU cores and 128 MB) onto shared nodes to maximize machine utilization and gives reliable, repeatable rollouts. It is the best option among the four.
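A minimal sketch of the C workflow with hypothetical names; the image is tagged with an explicit version, deployed to the staging namespace, and the same immutable tag is promoted to production after testing:
docker build -t gcr.io/my-project/webapp:1.4.2 .
docker push gcr.io/my-project/webapp:1.4.2
kubectl -n staging set image deployment/webapp webapp=gcr.io/my-project/webapp:1.4.2
# after verification in staging, promote the identical tag
kubectl -n production set image deployment/webapp webapp=gcr.io/my-project/webapp:1.4.2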