Whizlabs, Practice Questions Flashcards
Google Cloud Certified Professional Cloud Architect
You are working for a Startup company as a Solutions Architect. Recently an application was deployed to production. There is a requirement to monitor the key performance indicators like CPU, memory, and Disk IOPS for the application, and also a dashboard needs to be set up where metrics are visible to the entire team. Which service will you use?
A. Use Cloud monitoring to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
B. Use Cloud Logging to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
C. Use Third-party service from marketplace to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
D. Use Cloud Trace to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
Option A is correct. Cloud Monitoring provides detailed visibility into the application by monitoring key performance indicators such as CPU, memory, and disk IOPS. You can create dashboards to visualize performance and share them with the team to give everyone detailed visibility into application performance.
Option B is incorrect because Cloud Logging is a fully managed service that allows you to store, search, and analyze logs.
Option C is incorrect because there is no need to use a third-party service; Cloud Monitoring covers such requirements.
Option D is incorrect because Cloud Trace is used to detect latency issues in your application.
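For reference, dashboards can also be defined as code and created with gcloud; a minimal sketch, assuming the dashboard definition is saved locally as dashboard.json (a hypothetical file name):
gcloud monitoring dashboards create --config-from-file=dashboard.json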
You are working as a Solutions Architect for a large enterprise. They are using the GKE cluster for their production workload. In the upcoming weeks, they are expecting a huge traffic increase and thus want to enable autoscaling on the GKE cluster. What is the command to enable autoscaling on the existing GKE cluster?
A. gcloud container clusters update cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
B. gcloud container clusters create cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
C. You cannot enable autoscaling on existing GKE cluster
D. gcloud container clusters update cluster-name --no-enable-autoscaling --node-pool pool-name [--zone compute-zone] [--project project-id]
Option A is correct. It is the right command to enable autoscaling on an existing GKE cluster: gcloud container clusters update cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
Option B is incorrect because it is used to create a new GKE cluster with autoscaling enabled.
Option C is incorrect because you can enable autoscaling on an existing GKE cluster.
Option D is incorrect because that command disables autoscaling on a GKE cluster.
There is a requirement to make some files from a Google Cloud Storage bucket publicly available to customers. Which of the below commands will you use to make some objects publicly available?
A. gsutil acl ch -u allUsers:R gs://new-project-bucket/example.png
B. gsutil signurl -d 10m keyfile.json gs://new-project-bucket/example.png
C. gsutil acl ch -g my-domain.org:R gs://gcs.my-domain.org
D. gsutil requesterpays get gs://new-project-bucket
Option A is correct. This is the right command to make specific files publicly available from a Google Cloud Storage bucket. https://cloud.google.com/storage/docs/gsutil/commands/acl
Option B is incorrect because this command generates a Signed URL, which is mostly used to share private content securely for a limited period of time.
Option C is incorrect because it is used when you have to share objects with a particular G Suite domain.
Option D is incorrect because this command enables the Requester Pays feature on the bucket.
You are working as a Solutions Architect for a Startup company that is planning to migrate an on-premise application to Google Cloud. They want to transfer a large number of files to Google Cloud Storage using the gsutil command line. How can you speed up the transfer process?
A. Use -m option with gsutil command
B. Use -o option with gsutil command
C. Use du option with gsutil command
D. Use mb option with gsutil command
Option A is correct. When you have to transfer a large number of files from on-premises to Cloud Storage using gsutil, -m is the best option as it enables parallel (multi-threaded, multi-process) copying. https://cloud.google.com/storage/docs/gsutil/commands/cp
Option B is incorrect because -o is used to override boto configuration values, typically when copying a single very large file (for example, to tune parallel composite uploads).
Option C is incorrect because du is used to report object size usage.
Option D is incorrect because mb is used to create a bucket.
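As an illustration, a bulk upload of a local directory (the local path and bucket name below are placeholders) might look like:
gsutil -m cp -r /data/files gs://my-migration-bucket/
The -m flag runs the copy with parallel threads and processes, which significantly speeds up transfers of many small files.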
You are working as a Solutions Architect with a large enterprise that is planning to migrate its application from the AWS cloud to the GCP cloud. There is a requirement to copy data from the AWS S3 bucket to Google Cloud Storage using a command-line utility. How will you fulfill this requirement?
A. Add AWS credentials in the boto configuration file and use the gsutil command to copy data
B. Configure the AWS credentials in gcloud configuration and use the gsutil command to copy files
C. First, download the S3 data using the AWS command-line utility and then copy files to Google cloud storage using gsutil commands
D. Use --s3 flag with gsutil commands to supply AWS credentials while copying files to Google cloud storage
Option A is correct. You can directly use an AWS S3 bucket as the source or destination with the gsutil command-line utility; you just have to put the AWS credentials in the [Credentials] section of the .boto configuration file. https://cloud.google.com/storage/docs/interoperability Options B & D are incorrect because there are no such commands.
Option C is a possible approach, but adding the AWS credentials to the .boto file is the preferred and easier way.
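For illustration, the relevant part of the .boto file holds the AWS keys (placeholder values shown), after which gsutil can read s3:// URLs directly:
[Credentials]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
gsutil -m cp -r s3://source-s3-bucket gs://destination-gcs-bucket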
A Financial Organization has been growing at a rapid rate, and dealing with massive data sets has become an issue. The management has decided to move from on-premises to Google Cloud to meet the scaling demands. The data analysts are looking for services that can analyze massive amounts of data, run SQL queries, and perform data manipulation and visualization in Python. Which Google Cloud services can fulfill the requirements?
A. Use Bigquery to run the SQL queries and use Cloud Datalab for detailed data manipulation and visualization in Python.
B. Use Bigtable to run SQL queries and use Cloud Datalab for detailed data manipulation and visualization in Python.
C. Use Datastore to analyze massive data and use Dataprep for data manipulation and visualization in Python.
D. Use Cloud Spanner to analyze massive data and use Data Studio for data manipulation and visualization in python.
Option A is correct. BigQuery can analyze large amounts of data and lets you run SQL queries, while Cloud Datalab provides detailed data manipulation and visualization in Python.
Option B is incorrect. Cloud Bigtable is Google's NoSQL Big Data database service and it doesn't support SQL queries; use it when you need low latency for high-volume writes and reads.
Option C is incorrect. Cloud Datastore is a NoSQL document database built for automatic scaling, high performance, and ease of application development, which is not suitable for the current scenario, and Dataprep is a data service for visually exploring, cleaning, and preparing structured and unstructured datasets of any size with the ease of clicks (UI), not code.
Option D is incorrect. The workload is analytics, so BigQuery is the right choice, and Data Studio is a reporting and dashboarding service, not a Python analysis environment. Reference(s): https://cloud.google.com/solutions/time-series/analyzing-financial-time-series-using-bigquery-and-cloud-datalab https://cloud.google.com/datalab/docs/ https://cloud.google.com/bigquery/
Your organization deals with a huge amount of data and lately, it has become time-consuming and complicated to handle the ever-increasing data volume that needs to be protected and classified based on data sensitivity. The management has set the objective to automate data quarantine and classification system using Google Cloud Platform services. Please select the services that would achieve the objective.
A. Cloud Storage, Cloud Function, Cloud Pub/Sub, DLP API
B. Cloud Storage, Cloud Function, VPC Service control, Cloud Pub/sub
C. Cloud Storage, Cloud Function, Cloud Armor, DLP API
D. Cloud Storage, Cloud Pub/Sub, Cloud Classifier, Cloud Function
Option A is the correct choice because the data is uploaded to Cloud Storage; we then create buckets, for example classification_bucket_1 (for sensitive information) and classification_bucket_2 (for non-sensitive information), use a Cloud Function to invoke the DLP API when files are uploaded to Cloud Storage, use a Cloud Pub/Sub topic and subscription to notify when file processing is completed, and use Cloud DLP to understand and manage sensitive data (classification).
Option B is incorrect because VPC Service Controls do not help with data classification; the better choice is the Cloud DLP API. VPC Service Controls allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to constrain data within a VPC and help mitigate data exfiltration.
Option C is incorrect because Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google's global infrastructure and security systems, which does not fulfill the objective set by the management.
Option D is incorrect because Cloud Classifier is a fictitious service. Using the Cloud DLP API serves the purpose of classifying data; Cloud DLP helps you better understand and manage sensitive (protected) data. The pipeline works in these steps: you upload files to Cloud Storage, a Cloud Function is invoked, the DLP API inspects and classifies the data, and the file is moved to the appropriate bucket. Read more about it here: https://cloud.google.com/solutions/automating-classification-of-data-uploaded-to-cloud-storage https://cloud.google.com/dlp/
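As a small illustration of the notification step, the completion topic and subscription the Cloud Function publishes to can be created up front (the names below are placeholders):
gcloud pubsub topics create classification-done
gcloud pubsub subscriptions create classification-done-sub --topic=classification-done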
You are working as a Solutions Architect for a large media company that is planning to migrate its on-premise data warehouse to Google Cloud BigQuery. As a part of the migration, you want to write some migration scripts to interact with BigQuery. Which Command Line utility will you use?
A. gsutil
B. bq
C. gcloud
D. kubectl
Option B is correct. bq is the command-line tool for BigQuery and can be used to perform operations on BigQuery.
Option A is incorrect because gsutil is used to interact with Google Cloud Storage.
Option C is incorrect because BigQuery has its own command-line utility (bq).
Option D is incorrect because kubectl is used to manage Kubernetes.
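For example, a migration script could run a standard-SQL query and load data with bq (the project, dataset, table, and bucket names here are placeholders):
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `my-project.my_dataset.my_table`'
bq load --autodetect --source_format=CSV my_dataset.my_table gs://my-bucket/data.csv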
You are working as a Solutions Architect for a startup company that has recently started using Google cloud for their development environment. The developers want to know if they can persist data on Cloud shell, so they can use Cloud shell for their day to day tasks. What will you suggest to them?
A. Cloud shell can persist up to 10GB data
B. Cloud Shell can persist up to 5GB data
C. Cloud shell data is ephemeral
D. You can attach an additional persistent disk to the Cloud shell
Option B is correct. Cloud Shell comes with 5GB of persistent disk space mounted to your $HOME directory where you can keep your data. This persistent disk persists between your sessions.
Option A is incorrect because Cloud shell comes with 5GB of persistent disk
Option C is incorrect because you can persist data on the Cloud shell
Option D is incorrect because you cannot attach an additional persistent disk to the cloud shell session
You are working as a Solutions Architect with a startup company that is planning to use Google Cloud Storage as a backup location for its on-premises application data. There is a requirement to sync a directory from an on-premises server to a Google Cloud bucket. Which gsutil option will you use to sync the data on a daily basis?
A. Use lsync option with gsutil
B. Use rsync option with gsutil
C. Use -m option with gsutil
D. Use mb option with gsutil
Option B is correct. The rsync option is used to sync data between buckets/directories. With rsync, only the changed data from the source is copied to the destination bucket. https://cloud.google.com/storage/docs/gsutil/commands/rsync
Option A is incorrect because there is no option like lsync
Option C is incorrect because it is used for parallel multithreading copying
Option D is incorrect because mb option is used to create a bucket
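As an illustration, a daily sync from an on-premises directory to a bucket (the path, bucket name, and schedule are placeholders) could be driven by a cron entry like:
0 2 * * * gsutil -m rsync -r /backup/data gs://my-backup-bucket/data
Adding the -d flag would also delete destination objects that no longer exist in the source, so use it with care.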
You are working as a DevOps engineer for an enterprise. Recently one of the microservices was facing intermittent database connectivity issues. This issue was rarely seen and whenever this problem occurs it triggers a few lines in the log file. There is a requirement to set up alerting for such a scenario. What will you do?
A. Use Cloud trace and setup alerting policies
B. Use Cloud logging to set up log-based metrics and set up alerting policies.
C. Manually monitor the log file
D. Use Cloud profiler to set up log-based metrics and set up alerting policies.
Option B is correct. You can set up a log-based metric based on entries in the log files. For example, you can count the number of occurrences of a specific line entry in the log file and create a metric based on that count. You can also set up alerting policies on the metric that fire if the count goes beyond a threshold value. https://cloud.google.com/logging/docs/logs-based-metrics
Option A is incorrect because Cloud trace is used to detect the latency issues in your application
Option C is incorrect because you need to automate this procedure and also setup required alerting
Option D is incorrect because Cloud Profiler helps you to analyze the CPU and memory usage of your functions in the application
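A log-based counter metric can also be created from the command line; a minimal sketch, assuming the problem lines contain the text "database connection failed" (the metric name and filter are placeholders):
gcloud logging metrics create db-connection-errors --description="Count of DB connection failures" --log-filter='resource.type="gce_instance" AND textPayload:"database connection failed"'
An alerting policy can then be attached to this metric in Cloud Monitoring.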
Your company is migrating the application from AWS to Google Cloud. There is a requirement to copy the data from the AWS S3 bucket to the Google Cloud Storage bucket. Which transfer service would you use to migrate the data to Google Cloud in the easiest way?
A. Storage Transfer Appliance
B. gsutil utility
C. Storage Transfer Service
D. S3cmd
Option C is correct. Storage Transfer Service is used to quickly transfer data from another cloud provider to a Google Cloud Storage bucket using the Console.
Option B could also be used, but no command-line requirement is mentioned and the question asks for the easiest way.
Option A is incorrect because Transfer Appliance is used to transfer data from on-premises data centers offline.
Option D is incorrect because S3cmd is a third-party tool used for the AWS S3 service.
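For illustration, and assuming a recent gcloud release that includes the transfer command group, an S3-to-GCS transfer job could be created from the command line as well (bucket names and the credentials file are placeholders):
gcloud transfer jobs create s3://source-s3-bucket gs://destination-gcs-bucket --source-creds-file=aws-creds.json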
You are running a web application on a Compute Engine VM that is using the LAMP stack. There is a requirement to monitor the HTTP response latency of the application, diagnose, and get notified whenever the response latency reaches a defined threshold. Which GCP service will you use?
A. Use Cloud monitoring and setup alerting policies
B. Use Cloud monitoring and setup uptime checks
C. Use Cloud Trace and setup alerting policies
D. Use Cloud Logging and setup uptime checks
Option C is correct. You can use Cloud Trace to set up and track a latency-based metric that monitors the HTTP response latency, and set up an alerting policy on this metric that sends an alert when the defined threshold is reached. https://cloud.google.com/trace
Option B is incorrect because uptime checks are used to check system availability. Options A & D are incorrect because Cloud Trace, not Cloud Monitoring or Cloud Logging, is the service used to detect latency issues in your application.
You are using gcloud command-line utility to interact with Google Cloud resources. There is a requirement to create multiple gcloud configurations for managing resources. What is the command to create a gcloud configuration?
A. gcloud config create example-config
B. gcloud config configurations activate example-config
C. gcloud configurations create example_config
D. gcloud config configurations create example-config
Option D is correct. gcloud config configurations create is the right command to create a new gcloud configuration. Options A & C are incorrect because those commands are not valid.
Option B is incorrect because it is used to activate an existing gcloud configuration. Ref URL: https://cloud.google.com/sdk/gcloud/reference/topic/configurations
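A typical workflow with multiple configurations (the configuration and project names below are placeholders) looks like:
gcloud config configurations create example-config
gcloud config configurations activate example-config
gcloud config set project my-project
gcloud config configurations list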
You are using Cloud shell for accessing Google cloud resources and for your day to day tasks. There is a requirement to install some packages when the Cloud Shell boots. How will you fulfill this requirement?
A. Schedule a cronjob on restart
B. Add the script in the $HOME/.bashrc file
C. Add the script in the $HOME/.profile file
D. Add the script in the $HOME/.customize_environment file
Option D is correct. To install packages or run a bash script when Cloud Shell boots, put the script in the $HOME/.customize_environment file. This installs the required packages, and you can view the execution logs in /var/log/customize_environment. https://cloud.google.com/shell/docs/configuring-cloud-shell#environment_customization All other options are invalid with respect to Cloud Shell.
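A minimal sketch of such a file, assuming you want git and tree preinstalled (the package choice is illustrative):
#!/bin/sh
apt-get update
apt-get install -y git tree
The script is executed automatically each time the Cloud Shell instance boots.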
You are working for a company that is using Google Cloud for its production workload. As per their new security policy, all Admin Activity logs must be retained for at least 5 years and will be accessed once a year for auditing purposes. How will you ensure that all IAM Admin Activity logs are stored for at least 5 years while keeping cost low?
A. Create a sink to Cloud Storage bucket with Coldline as a storage class
B. Create a sink to BigQuery
C. Create a sink to Pub/Sub
D. Store it in Cloud logging itself
Option A is correct. All Admin Activity logs are enabled by default and stored in Cloud Logging. The default retention period for Admin Activity logs is 400 days. If you want to store logs for a longer period, you must create a sink. In our case, since logs will be accessed only once a year for auditing purposes, a Cloud Storage sink with the Coldline storage class is the most suitable and cost-effective option.
Option B is incorrect because BigQuery is not a cost-effective solution for this access pattern.
Option C is incorrect because Pub/Sub is not used for long-term storage.
Option D is incorrect because the Cloud Logging default retention period is 400 days.
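For illustration, an export of Admin Activity logs to a bucket could be created like this (the sink and bucket names are placeholders):
gcloud logging sinks create admin-activity-sink storage.googleapis.com/my-audit-logs-bucket --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'
Remember to grant the sink's writer identity object-creation access on the bucket and to set the bucket's default storage class to Coldline.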
Your company recently performed an audit on your production GCP project. The audit revealed that recently an SSH port was opened to the world on a compute engine VM. The management has requested entire details of the API call made. How will you provide detailed information?
A. Navigate to the Logs viewer section from the console, select VM Instance as a resource and search for the required entry
B. Navigate to the Stackdriver trace section from the console, select GCE Network as a resource and search for the required entry
C. Connect to the compute engine VM and check system logs for API call information
D. Navigate to the Stackdriver monitoring section from the console, select GCE Network as a resource and search for the required entry
Option A is correct. All the IAM admin-related activity logs are stored in the Logs Viewer section of Cloud Logging. You can see the entire details of an API call made against a resource in the Logs Viewer; for example, you can see which network tags were added to a particular VM in this section.
Option B is incorrect because Stackdriver Trace is used to collect latency details from applications.
Option C is incorrect because system logs only contain logs related to the operating system, not to Google Cloud resources.
Option D is incorrect because Stackdriver Monitoring is used to monitor CPU, memory, disk, or other custom metrics.
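As an illustration, an Admin Activity search in the Logs Viewer for firewall changes could use a filter like the following (the field values are illustrative):
logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"compute.firewalls"
The matching entry shows the caller identity, timestamp, and the full request payload of the API call.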
You are working as a Solutions Architect for a large Media Company. They are using BigQuery for their data warehouse purpose with multiple datasets in it. There is a requirement that a data scientist wants full access to a particular dataset only on which he can run queries against the data. How will you assign appropriate IAM permissions keeping the least privilege principle in mind?
A. Grant bigquery.dataEditor at the required dataset level and bigquery.user at the project level
B. Grant bigquery.dataEditor and bigquery.user at the project level
C. Grant bigquery.dataEditor at the project level and bigquery.user at the required dataset level
D. Grant bigquery.admin at required dataset level and bigquery.user at the project level
Option A is correct. bigquery.dataEditor on the required dataset grants write access to that particular dataset only, and bigquery.user at the project level grants access to run query jobs in the project. https://cloud.google.com/bigquery/docs/access-control All other options are incorrect because they grant broader access than the requirement calls for.
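A sketch of the project-level part with gcloud (the user and project IDs are placeholders):
gcloud projects add-iam-policy-binding my-project --member='user:data-scientist@example.com' --role='roles/bigquery.user'
The dataset-level bigquery.dataEditor grant can be made from the console, or by exporting the dataset's access list with bq show --format=prettyjson and applying the edited file with bq update.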
You are working for a large enterprise as a DevSecOps engineer. They are running several applications on compute engine VM. The database credentials required by an application are stored in the Cloud Secret Manager service. As per the best practices, what is the recommended approach for the application to authenticate with Google Secret manager service in order to obtain the credentials?
A. Ensure that the service account used by the VM’s have appropriate Cloud Secret Manager IAM roles and VM’s have proper access scopes
B. Ensure that the VM’s have full access scope to all Cloud APIs and do not have access to Cloud Secret Manager service in IAM roles
C. Generate OAuth token with appropriate IAM permissions and use it in your application
D. Create a service account and access key with appropriate IAM roles attached to access secrets and use that access key in your application
Option A is correct. For an application running on a Compute Engine VM to access Cloud services, you should use a service account attached to the VM. If you are using the default service account, you need to set the access scopes for the APIs and also attach the appropriate IAM roles to the service account. https://googleapis.dev/python/google-api-core/latest/auth.html https://cloud.google.com/compute/docs/access/service-accounts
Option B is incorrect because you also need to attach IAM roles to the service account along with the required Cloud API access scopes.
Option C is incorrect because, as per Google's recommended best practices, you should use a service account attached to the service.
Option D is incorrect because, as per Google's recommended best practices, you should use a service account attached to the service rather than an exported key.
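For illustration, attaching a dedicated service account and the cloud-platform scope at VM creation time (the names are placeholders) could look like:
gcloud compute instances create app-vm --service-account=app-sa@my-project.iam.gserviceaccount.com --scopes=https://www.googleapis.com/auth/cloud-platform --zone=us-central1-a
IAM roles such as roles/secretmanager.secretAccessor are then granted to that service account rather than embedding credentials in the application.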
You have been hired as a Solutions Architect by a large enterprise that has several departments such as HR, development, and finance. There is a requirement to control IAM policies for each department separately but centrally. Which hierarchy should you use?
A. A single organization with separate folders for each department
B. A separate organization for each department
C. A single organization with a separate project for each department
D. A separate organization with multiple folders
Option A is correct. As per Google's recommended best practices, you should have multiple folders within an organization, one for each department. Each department can have multiple teams and projects. By using folders, you can group resources for each department that share common IAM policies. For example, if you have multiple projects for the HR department and want to assign the Compute Instance Admin role to a user for each project in that department, you can assign the role to the user at the HR folder level, which grants access to each project within the HR folder. https://cloud.google.com/resource-manager/docs/creating-managing-folders
Option B is incorrect because you cannot manage IAM Policies centrally if you create separate Organization for each department
Option C is incorrect because each department can have multiple teams and multiple projects under it. So it will become difficult to manage IAM policy centrally for each project within the department
Option D is incorrect because you cannot manage IAM Policies centrally if you create separate Organizations for each department.
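A sketch of creating a folder and granting a role at the folder level (the organization ID, folder ID, and user are placeholders):
gcloud resource-manager folders create --display-name="HR" --organization=123456789
gcloud resource-manager folders add-iam-policy-binding 987654321 --member='user:hr-admin@example.com' --role='roles/compute.instanceAdmin.v1'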
You are working for a company as a Solutions Architect. They want to develop a new application that will have two environments, development and production. The initial requirement is that all the resources deployed in development and production must be able to communicate with each other using the same RFC 1918 address space. How will you fulfill the requirement considering the least privilege principle?
A. Create a separate project for each environment and Use shared VPC
B. Create a single GCP project and single VPC for both environments
C. Create a separate project for each environment and create individual VPC in each project with VPC peering
D. Create a separate project and use direct peering
Answer: A. Shared VPC allows you to share a single VPC in one project with other projects within an organization, called service projects. By using Shared VPC, the resources in a service project can be deployed in the shared VPC and will use the same IP range from it. The main advantage of Shared VPC is that we can delegate administrative responsibilities, such as creating and managing resources, while using one common VPC, which allows each team to manage its own resources individually with proper access control. In our case, we will create a VPC in the production project, which will be called the host project, and share it with the development project, which will be called a service project. https://cloud.google.com/vpc/docs/shared-vpc
Option B is incorrect because if we use a single project and VPC for both environments, we cannot segregate access control; for example, giving someone access to create resources only in development but not in production is not possible with a single project and the same VPC.
Option C is incorrect because we want the same RFC 1918 address space, and VPC peering is used to connect two different VPCs.
Option D is incorrect because direct peering is a connection between the on-premises network and Google's edge network.
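A sketch of the Shared VPC setup with gcloud (the project IDs are placeholders), assuming you hold the Shared VPC Admin role at the organization or folder level:
gcloud compute shared-vpc enable prod-host-project
gcloud compute shared-vpc associated-projects add dev-service-project --host-project=prod-host-project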
You are working as a Consultant with a large finance company that is planning to migrate petabytes of data from its on-premises data centre to Google Cloud Storage. They have 1 Gbps network connectivity from on-premises to Google Cloud. Which option will you recommend to transfer the data?
A. Storage Transfer Service
B. Transfer Appliance
C. gsutil command-line tool
D. Transfer Service for On-premise
Answer: B. Since they have petabytes of data to transfer, Transfer Appliance is the best option. Transfer Appliance is an offline data transfer service in which data is shipped on a physical appliance that comes in two sizes: a 100TB version and a 480TB version.
Option A is incorrect because Storage Transfer Service scales to the available bandwidth, and the available bandwidth here is only 1 Gbps, which is too low to transfer petabytes of data.
Option C is incorrect because they have petabytes of data to transfer, and using the gsutil command-line utility would take a long time even with good bandwidth.
Option D is incorrect because it is used when the data is in the terabyte range. Reference: https://cloud.google.com/storage-transfer/docs/on-prem-overview https://cloud.google.com/transfer-appliance/docs/4.0/overview
You are working for a large enterprise as a Solutions architect. They are running several applications on the Compute Engine in Development, Staging, and Production environments. The CTO has informed you that Development and Staging environments are not used on weekends and must be shut down on weekends for cost savings. How will you automate this procedure?
A. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the applied tags. Write a Cron Job in Cloud Scheduler which will invoke cloud functions endpoint on weekends only.
B. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the tags. Write a Cron Job in Cloud Tasks which will invoke cloud functions endpoint on weekends only.
C. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the applied tags. Write a Cron Job in Cloud build which will invoke cloud functions endpoint on weekends only.
D. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the applied tags. Write a Cron Job in Cloud Run which will invoke cloud functions endpoint on weekends only.
Answer: A
Apply tags to the development and staging Compute Engine VMs. Write a Cloud Function in any preferred language that filters the VMs based on the applied tags and shuts them down. Select HTTP as the trigger type while configuring the Cloud Function, and write a cron job in Cloud Scheduler that triggers the HTTP endpoint on weekends only. https://cloud.google.com/scheduler/docs
Option B is incorrect because Cloud Task is used for management of a large number of distributed tasks
Option C is incorrect because Cloud Build is used to create CICD pipelines
Option D is incorrect because Cloud Run is used to run Containers where the entire infrastructure management is fully handled by GCP
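A sketch of the scheduling part, assuming an HTTP-triggered function and a Saturday 00:00 schedule (the job name, cron expression, and URL are placeholders):
gcloud scheduler jobs create http shutdown-dev-staging --schedule="0 0 * * 6" --uri=https://REGION-PROJECT_ID.cloudfunctions.net/shutdown-vms --http-method=POST
Adjust the cron expression and function URL to match your actual shutdown window and deployment.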
For this question, refer to the Dress4Win case study: https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win-rev2 In the initial phase of migration, how will you isolate the development and test environments?
A. Create a separate project for testing and separate project for development
B. Create a single VPC for all environments, separated by subnets
C. Create a VPC network for development and separate VPC network for testing
D. You cannot isolate access between different environments in Google cloud
Answer: A
As per IAM best practices, you should create a separate project for each environment to isolate them. https://cloud.google.com/blog/products/gcp/iam-best-practice-guides-available-now
Option B is incorrect because, as per IAM best practices, you should create a separate project for each environment rather than separating environments by subnets in a single VPC.
Option C is incorrect because you cannot isolate the environments by creating two VPCs in the same project; anyone with permission to start/stop VMs can stop the VMs of both environments if they are in the same project.
Option D is incorrect because you can isolate environments by creating a separate project for each.
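For illustration, the environment split could start with two projects (the project IDs and folder ID are placeholders):
gcloud projects create dress4win-dev --folder=123456789
gcloud projects create dress4win-test --folder=123456789
IAM roles are then granted separately on each project.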