GCL Flashcards

1
Q

You are migrating workloads to the cloud. The goal of the migration is to serve customers worldwide as quickly as possible. According to local regulations, certain data is required to be stored in a specific geographic area, and it can be served worldwide. You need to design the architecture and deployment for your workloads.
What should you do?

A. Select a public cloud provider that is only active in the required geographic area
B. Select a private cloud provider that globally replicates data storage for fast data access
C. Select a public cloud provider that guarantees data location in the required geographic area
D. Select a private cloud provider that is only active in the required geographic area

A

To serve customers worldwide while adhering to local regulations regarding data storage in specific geographic areas, the most suitable option would be:

C. Select a public cloud provider that guarantees data location in the required geographic area.

Explanation:

  1. Public Cloud Provider: Using a public cloud provider allows for global reach, enabling you to serve customers worldwide efficiently.
  2. Guarantees Data Location in Required Geographic Area: This option ensures that the chosen public cloud provider guarantees data storage in the specific geographic area as required by local regulations. This ensures compliance with data residency and sovereignty requirements.

By choosing a public cloud provider that ensures data location in the required geographic area, you can achieve a balance between global reach for your services and compliance with local data storage regulations.

2
Q

Your organization needs a large amount of extra computing power within the next two weeks.
After those two weeks, the need for the additional resources will end.
Which is the most cost-effective approach?
A. Use a committed use discount to reserve a very powerful virtual machine
B. Purchase one very powerful physical computer
C. Start a very powerful virtual machine without using a committed use discount
D. Purchase multiple physical computers and scale workload across them

A

For a short-term need of extra computing power within the next two weeks, the most cost-effective approach would typically be:

C. Start a very powerful virtual machine without using a committed use discount.

Explanation:

  1. Very Powerful Virtual Machine: Opting for a powerful virtual machine is efficient for short-term, high-compute needs. Virtual machines can be provisioned quickly and scaled up or down based on demand, making them a flexible choice for temporary requirements.
  2. No Committed Use Discount: Since the need for additional resources is only for a short period (two weeks), committing to a longer-term usage with a discount (as in option A) may not be the most cost-effective approach, as it might lead to underutilization and unnecessary costs once the two-week requirement ends.

Purchasing physical computers (option B) or multiple physical computers (option D) may be costly and time-consuming, and the resources may go underutilized after the short-term need ends, making them less cost-effective for this scenario.

In summary, starting a powerful virtual machine without committing to a long-term contract or discount is likely the most cost-effective approach given the short-term nature of the computing power need.
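
As an illustrative sketch, this is what provisioning such a short-lived, on-demand VM can look like with the google-cloud-compute Python client. The project ID, zone, machine type, and instance name below are placeholders, not values from the question:

    from google.cloud import compute_v1

    def create_burst_vm(project: str, zone: str) -> None:
        # Boot disk based on a public Debian image family.
        disk = compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
                disk_size_gb=50,
            ),
        )
        instance = compute_v1.Instance(
            name="burst-vm",
            machine_type=f"zones/{zone}/machineTypes/c2-standard-60",
            disks=[disk],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        operation = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        operation.result()  # Wait for the create operation to finish.

Deleting the instance after the two weeks (InstancesClient().delete) stops all charges, which is exactly the flexibility a committed use discount would take away.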

3
Q

Your organization needs to plan its cloud infrastructure expenditures.
Which should your organization do?
A. Review cloud resource costs frequently, because costs change often based on use
B. Review cloud resource costs annually as part of planning your organization’s overall budget
C. If your organization uses only cloud resources, infrastructure costs are no longer part of your overall budget
D. Involve fewer people in cloud resource planning than your organization did for on-premises resource planning

A

When planning cloud infrastructure expenditures, it is important to recognize that cloud costs are consumption-based and fluctuate with actual usage. The most appropriate approach is:

A. Review cloud resource costs frequently, because costs change often based on use.

Explanation:

  1. Frequent Cost Review: Cloud billing follows a pay-as-you-go model, so spending rises and falls with resource consumption. Frequent reviews (supported by budgets, alerts, and cost dashboards) let the organization catch unexpected spikes, right-size resources, and keep forecasts accurate.
  2. Part of Overall Budgeting: Cloud infrastructure costs should still be integrated into the organization’s overall budgeting process; the point is that an annual checkpoint alone cannot keep up with usage-driven changes.

Option B (annual review) mirrors traditional capital-expenditure planning, but with usage-based pricing, reviewing costs only once a year can hide months of overspending.

Option C is incorrect: cloud resource costs remain part of the overall budget even if the organization uses only cloud resources.

Option D is not recommended. Cloud resource planning should involve at least as many people as on-premises planning; a cross-functional team brings diverse perspectives on resource needs and cost implications.

4
Q

The operating systems of some of your organization’s virtual machines may have a security vulnerability.
How can your organization most effectively identify all virtual machines that do not have the latest security update?
A. View the Security Command Center to identify virtual machines running vulnerable disk images
B. View the Compliance Reports Manager to identify and download a recent PCI audit
C. View the Security Command Center to identify virtual machines started more than 2 weeks ago
D. View the Compliance Reports Manager to identify and download a recent SOC 1 audit

A

To effectively identify all virtual machines that do not have the latest security update, the most appropriate option is:

A. View the Security Command Center to identify virtual machines running vulnerable disk images.

Explanation:

  1. Security Command Center: The Security Command Center is a tool that provides centralized visibility into security information and vulnerabilities across your infrastructure. In this context, using the Security Command Center to identify virtual machines running vulnerable disk images is a logical choice for identifying those that may not have the latest security updates.
  2. Identifying Vulnerable Disk Images: By using the Security Command Center, you can scan and identify virtual machines running disk images with known vulnerabilities. This allows your organization to prioritize updating those virtual machines and ensuring they have the latest security updates.

Options B and D involve Compliance Reports Manager and audits related to compliance (PCI and SOC 1). While compliance audits are important for regulatory adherence, they may not directly address identifying specific vulnerabilities or outdated security updates on virtual machines.

Option C, viewing virtual machines started more than 2 weeks ago, is not a direct approach to identifying security vulnerabilities or outdated security updates. It doesn’t provide specific information about the security status of the virtual machines in question.
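
As a brief sketch of what this looks like in practice, the Security Command Center Python client can enumerate active findings across an organization. The organization ID below is a placeholder:

    from google.cloud import securitycenter

    client = securitycenter.SecurityCenterClient()
    # The "-" source wildcard searches findings from all sources.
    all_sources = "organizations/1234567890/sources/-"
    results = client.list_findings(
        request={"parent": all_sources, "filter": 'state="ACTIVE"'}
    )
    for result in results:
        print(result.finding.category, result.finding.resource_name)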

5
Q

You are currently managing workloads running on Windows Server for which your company owns the licenses. Your workloads are only needed during working hours, which allows you to shut down the instances during the weekend. Your Windows Server licenses are up for renewal in a month, and you want to optimize your license cost.
What should you do?
A. Renew your licenses for an additional period of 3 years. Negotiate a cost reduction with your current hosting provider wherein infrastructure cost is reduced when workloads are not in use
B. Renew your licenses for an additional period of 2 years. Negotiate a cost reduction by committing to an automatic renewal of the licenses at the end of the 2 year period
C. Migrate the workloads to Compute Engine with a bring-your-own-license (BYOL) model
D. Migrate the workloads to Compute Engine with a pay-as-you-go (PAYG) model

A

Because the licenses expire in a month, bringing your own licenses would force you to renew them anyway. The most suitable option is:

D. Migrate the workloads to Compute Engine with a pay-as-you-go (PAYG) model.

Explanation:

  1. No License Renewal Needed: With PAYG, the Windows Server license fee is included in the per-second price of the running instance, so the expiring licenses do not need to be renewed at all.
  2. Pay Only for Usage: Since the workloads run only during working hours and the instances are shut down outside them, PAYG charges, including the license component, stop whenever the instances are stopped. License cost becomes proportional to actual usage.

Options A and B lock you into multi-year license renewals, which is exactly the fixed cost you are trying to optimize away.

Option C (BYOL) would still require renewing the expiring licenses, and bringing existing Windows Server licenses to Compute Engine also requires sole-tenant nodes, which adds cost and complexity.

In summary, migrating to Compute Engine with a PAYG model eliminates the license renewal and aligns license cost with the hours the workloads actually run.

6
Q

Your organization runs a distributed application on Compute Engine virtual machines. Your organization needs redundancy, but it also needs extremely fast communication (less than 10 milliseconds) between the parts of the application in different virtual machines.
Where should your organization locate these virtual machines?
A. In a single zone within a single region
B. In different zones within a single region
C. In multiple regions, using one zone per region
D. In multiple regions, using multiple zones per region

A

For achieving redundancy and extremely fast communication (less than 10 milliseconds) between parts of a distributed application in different virtual machines, the most suitable option would be:

B. In different zones within a single region.

Explanation:

  1. Redundancy: Placing the virtual machines in different zones within a single region provides redundancy. If one zone experiences an issue or failure, the application can continue running in another zone within the same region, ensuring high availability and reliability.
  2. Fast Communication: Keeping the virtual machines in different zones within a single region allows for fast communication (less than 10 milliseconds) between parts of the application. Zones within a region are geographically close, minimizing latency and ensuring speedy communication.

Option A (in a single zone within a single region) doesn’t provide the desired level of redundancy, as a failure in that zone could lead to downtime.

Option C (multiple regions, using one zone per region) and Option D (multiple regions, using multiple zones per region) might introduce higher latency due to the geographical distance between regions or potential inter-region communication delays, which could exceed the specified requirement of less than 10 milliseconds for communication.

Therefore, for a balance of redundancy and fast communication, placing the virtual machines in different zones within a single region is the optimal choice.

7
Q

An organization decides to migrate their on-premises environment to the cloud. They need to determine which resource components still need to be assigned ownership.
Which two functions does a public cloud provider own? (Choose two.)
A. Hardware maintenance
B. Infrastructure architecture
C. Infrastructure deployment automation
D. Hardware capacity management
E. Fixing application security issues

A

In a typical public cloud service model, the responsibilities between the cloud provider and the cloud customer are divided based on the service type. For Infrastructure as a Service (IaaS), the cloud provider typically owns functions related to hardware maintenance and hardware capacity management. For Platform as a Service (PaaS) and Software as a Service (SaaS), the provider takes care of additional layers of the stack, including infrastructure architecture and automation, as well as application security issues. Therefore, the two functions a public cloud provider generally owns are:

A. Hardware maintenance
- Public cloud providers are responsible for maintaining and managing the physical hardware, including servers, storage, networking, etc., to ensure reliability and performance of the cloud infrastructure.

D. Hardware capacity management
- The cloud provider manages and optimizes hardware capacity to ensure that resources are available to meet the needs of various cloud customers without any performance degradation.

While it’s important to note that the specific responsibilities can vary based on the cloud provider and the service model being used (IaaS, PaaS, SaaS), these functions are generally owned by the public cloud provider in a traditional cloud service model.

8
Q

You are a program manager within a Software as a Service (SaaS) company that offers rendering software for animation studios. Your team needs the ability to allow scenes to be scheduled at will and to be interrupted at any time to restart later. Any individual scene rendering takes less than 12 hours to complete, and there is no service-level agreement (SLA) for the completion time for all scenes. Results will be stored in a global Cloud Storage bucket. The compute resources are not bound to any single geographical location. This software needs to run on Google Cloud in a cost-optimized way.
What should you do?
A. Deploy the application on Compute Engine using preemptible instances
B. Develop the application so it can run in an unmanaged instance group
C. Create a reservation for the minimum number of Compute Engine instances you will use
D. Start more instances with fewer virtual central processing units (vCPUs) instead of fewer instances with more vCPUs

A

For a cost-optimized and efficient approach to running the rendering software in a Software as a Service (SaaS) environment on Google Cloud, the most suitable option would be:

A. Deploy the application on Compute Engine using preemptible instances.

Explanation:

  1. Preemptible Instances: Preemptible instances are cost-effective and suitable for short-lived, interruptible workloads like rendering scenes. Since individual scene rendering takes less than 12 hours and can be interrupted and restarted later, preemptible instances are a good fit. They are considerably cheaper than regular instances but can be terminated by the system at any time with a 30-second notice.
  2. Cost Optimization: Preemptible instances are cost-effective due to their lower price, making them ideal for rendering workloads. Even if an instance is terminated, you can set up the software to handle interruptions gracefully and restart the rendering process.

Option B (unmanaged instance group) might not be the best fit, as preemptible instances offer more cost savings and flexibility in this scenario.

Option C (creating a reservation for a minimum number of instances) may not align well with the variable workload demands and the need for cost optimization.

Option D (starting more instances with fewer vCPUs) may not be the most cost-effective approach as it’s generally better to use preemptible instances for this type of workload, which can provide the needed resources at a lower cost.

In summary, using preemptible instances on Compute Engine is a cost-effective and efficient solution for running rendering workloads with the ability to schedule and restart scenes while storing results in a global Cloud Storage bucket.
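
The only change relative to a regular Compute Engine instance is its scheduling policy. A minimal sketch with the google-cloud-compute Python client (instance name and machine type are placeholders):

    from google.cloud import compute_v1

    def make_preemptible(instance: compute_v1.Instance) -> compute_v1.Instance:
        # Preemptible VMs are billed at a deep discount but can be reclaimed
        # at any time with a 30-second warning, so the renderer must
        # checkpoint scenes and be safe to restart.
        instance.scheduling = compute_v1.Scheduling(preemptible=True)
        return instance

    vm = make_preemptible(
        compute_v1.Instance(
            name="render-worker-1",
            machine_type="zones/us-central1-a/machineTypes/n2-standard-16",
        )
    )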

9
Q

Your manager wants to restrict all virtual machines from communicating with the internet, with resources in another network, or with any resource outside Compute Engine. It is expected that different teams will create new folders and projects in the near future.
How would you restrict all virtual machines from having an external IP address?
A. Define an organization policy at the root organization node to restrict virtual machine instances from having an external IP address
B. Define an organization policy on all existing folders to define a constraint to restrict virtual machine instances from having an external IP address
C. Define an organization policy on all existing projects to restrict virtual machine instances from having an external IP address
D. Communicate with the different teams and agree that each time a virtual machine is created, it must be configured without an external IP address

A

To restrict all virtual machines from having an external IP address in a way that accommodates future projects and teams, the most appropriate option would be:

A. Define an organization policy at the root organization node to restrict virtual machine instances from having an external IP address.

Explanation:

  1. Organization Policy at the Root Level: Defining the policy at the root organization node ensures that the restriction is enforced across the entire organization, including current and future projects and folders. This approach ensures consistency and adherence to the policy organization-wide.
  2. Future Projects and Teams: By setting the policy at the root organization node, you ensure that any new folders or projects created in the future will inherit this policy, simplifying management and ensuring compliance without needing explicit communication with every team.

Option B (defining an organization policy on all existing folders) and Option C (defining an organization policy on all existing projects) would require applying the policy individually to each folder or project, making it less scalable and more prone to oversight as new folders or projects are added.

Option D (communicating with different teams to configure virtual machines without an external IP address each time) is not a scalable solution and can lead to inconsistent implementation and potential security risks if overlooked by teams.

In summary, defining an organization policy at the root organization node is the most effective way to ensure consistent enforcement of restricting virtual machine instances from having an external IP address across the organization, including current and future projects and teams.
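
A sketch of what setting this constraint can look like with the google-cloud-org-policy Python client; the organization ID is a placeholder, and the exact message shapes should be verified against the library version in use:

    from google.cloud import orgpolicy_v2

    client = orgpolicy_v2.OrgPolicyClient()
    policy = orgpolicy_v2.Policy(
        # List constraint controlling which VMs may have external IPs;
        # deny_all forbids external IPs everywhere in the organization.
        name="organizations/1234567890/policies/compute.vmExternalIpAccess",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(deny_all=True)]
        ),
    )
    client.create_policy(parent="organizations/1234567890", policy=policy)

Because the policy is attached to the organization node, every existing and future folder and project inherits it automatically.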

10
Q

Your multinational organization has servers running mission-critical workloads on its premises around the world. You want to be able to manage these workloads consistently and centrally, and you want to stop managing infrastructure.
What should your organization do?
A. Migrate the workloads to a public cloud
B. Migrate the workloads to a central office building
C. Migrate the workloads to multiple local co-location facilities
D. Migrate the workloads to multiple local private clouds

A

To centralize workload management, eliminate the need to manage infrastructure, and achieve consistent management across a multinational organization, the most suitable option is:

A. Migrate the workloads to a public cloud.

Explanation:

  1. Centralized Management: Public clouds provide centralized management tools and platforms that allow you to manage workloads consistently from a central location. These platforms offer centralized monitoring, scaling, security, and more, allowing for efficient management without the need to manage physical infrastructure across diverse locations.
  2. Eliminate Infrastructure Management: By migrating to a public cloud, the organization can offload the responsibility of managing the underlying infrastructure, including hardware, networking, and storage, to the cloud service provider. This allows the organization to focus on managing the workloads and applications, reducing the burden of infrastructure management.
  3. Global Reach: Public clouds have a global presence with data centers located around the world. This enables the organization to place workloads close to their end-users, optimizing performance and reducing latency.

Options B, C, and D involve managing infrastructure in various ways, which goes against the goal of stopping infrastructure management. Option A (migrating to a public cloud) aligns with the organization’s objective of centralizing management and eliminating the need to manage physical infrastructure while providing a global reach for workloads.

11
Q

Your organization stores highly sensitive data on-premises that cannot be sent over the public internet. The data must be processed both on-premises and in the cloud.
What should your organization do?
A. Configure Identity-Aware Proxy (IAP) in your Google Cloud VPC network
B. Create a Cloud VPN tunnel between Google Cloud and your data center
C. Order a Partner Interconnect connection with your network provider
D. Enable Private Google Access in your Google Cloud VPC network

A

Because the data cannot be sent over the public internet, any internet-based connection is ruled out. The most appropriate option is:

C. Order a Partner Interconnect connection with your network provider.

Explanation:

  1. Private Connectivity: Partner Interconnect provides a private, dedicated connection between your on-premises data center and your Google Cloud VPC network through a supported service provider. Traffic never traverses the public internet, which satisfies the requirement for highly sensitive data.
  2. Hybrid Cloud Processing: With the interconnect in place, data can move securely between the on-premises environment and Google Cloud, enabling the data to be processed in both locations.

Option A (Identity-Aware Proxy) controls user access to applications; it does not provide a private network path between on-premises and cloud environments.

Option B (Cloud VPN) encrypts traffic, but the tunnel still travels over the public internet, which violates the requirement.

Option D (Private Google Access) lets VM instances without external IP addresses reach Google APIs and services; it does not connect an on-premises network to Google Cloud.

In summary, a Partner Interconnect connection provides the private, non-internet connectivity needed to process highly sensitive data both on-premises and in the cloud.

12
Q

Your company’s development team is building an application that will be deployed on Cloud Run. You are designing a CI/CD pipeline so that any new version of the application can be deployed in the fewest number of steps possible using the CI/CD pipeline you are designing. You need to select a storage location for the images of the application after the CI part of your pipeline has built them.
What should you do?
A. Create a Compute Engine image containing the application
B. Store the images in Container Registry
C. Store the images in Cloud Storage
D. Create a Compute Engine disk containing the application

A

For storing images of the application in the CI/CD pipeline for efficient deployment on Cloud Run, the most appropriate option is:

B. Store the images in Container Registry.

Explanation:

  1. Container Registry: Container Registry is designed specifically for storing container images, making it a suitable choice for storing application images in a containerized environment like Cloud Run.
  2. Efficient Deployment: Cloud Run is designed to deploy containerized applications. By storing the application images in Container Registry, you streamline the deployment process, making it easy to deploy new versions of the application to Cloud Run.

Option A (Compute Engine image) and Option D (Compute Engine disk) are not appropriate for deploying applications on Cloud Run, which is a serverless container-based service.

Option C (Cloud Storage) is a viable option for storing various types of files, including container images, but Container Registry is specifically tailored for storing and managing container images, making it the more appropriate choice for containerized applications intended for deployment on Cloud Run.

In summary, storing the application images in Container Registry ensures an efficient deployment process for Cloud Run, enabling quick and streamlined deployment of new versions of the application.

13
Q

Each of the three cloud service models - infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) - offers a different tradeoff between flexibility for the customer and the level of management handled by the cloud provider.
Why would SaaS be the right choice of service model?
A. You want a balance between flexibility for the customer and the level of management by the cloud provider
B. You want to minimize the level of management by the customer
C. You want to maximize flexibility for the customer.
D. You want to be able to shift your emphasis between flexibility and management by the cloud provider as business needs change

A

The right service model depends on an organization’s specific requirements. Given the options provided, SaaS is the correct choice when the objective is to minimize the level of management by the customer while still getting full use of the software:

B. You want to minimize the level of management by the customer.

Explanation:

  1. SaaS Minimizes Management: In a SaaS model, the cloud provider manages almost everything, including infrastructure, software, updates, security, and maintenance. Customers simply use the software through a web browser, without the need to manage underlying technical complexities.
  2. Ease of Use: SaaS provides an easy-to-use solution where customers can access and use the software without worrying about the backend infrastructure, making it highly convenient and reducing the management burden on the customer.

While options A, C, and D may align with other objectives and use cases, if the primary goal is to minimize the level of management and focus on using the software without getting involved in its technical aspects, then SaaS is the right choice.

14
Q

As your organization increases its release velocity, the VM-based application upgrades take a long time to perform rolling updates due to OS boot times. You need to make the application deployments faster.
What should your organization do?
A. Migrate your VMs to the cloud, and add more resources to them
B. Convert your applications into containers
C. Increase the resources of your VMs
D. Automate your upgrade rollouts

A

To accelerate application deployments and improve release velocity by minimizing OS boot times and simplifying the deployment process, the most effective approach would be:

B. Convert your applications into containers.

Explanation:

  1. Containerization: Containers provide a lightweight, portable, and consistent environment for applications. They encapsulate the application, its dependencies, and configurations, making it easy to run consistently across various environments without worrying about differences in underlying systems or boot times.
  2. Faster Deployments: Containers can be started, stopped, and scaled very quickly since they share the host OS kernel. This significantly reduces deployment times compared to traditional VM-based deployments, where OS boot times can be a bottleneck.
  3. Portability and Consistency: Containers can be run on any system that supports the container runtime, ensuring consistent behavior and reducing the risk of deployment-related issues.

Option A (adding more resources to VMs) and Option C (increasing the resources of VMs) may alleviate some performance issues but won’t address the fundamental problem of long OS boot times and the agility required for faster deployments.

Option D (automating upgrade rollouts) is important and should be part of the solution, but it may not address the root issue of long OS boot times that significantly impact deployment speed.

In summary, converting applications into containers (Option B) is the most effective way to improve application deployment speed and release velocity by minimizing OS boot times and enabling faster, more efficient deployments. Additionally, automating upgrade rollouts (Option D) can further enhance deployment efficiency and consistency.

15
Q

Your organization uses Active Directory to authenticate users. Users’ Google account access must be removed when their Active Directory account is terminated.
How should your organization meet this requirement?
A. Configure two-factor authentication in the Google domain
B. Remove the Google account from all IAM policies
C. Configure BeyondCorp and Identity-Aware Proxy in the Google domain
D. Configure single sign-on in the Google domain

A

To ensure that users’ Google account access is removed when their Active Directory account is terminated, the most appropriate option is:

D. Configure single sign-on in the Google domain.

Explanation:

  1. Single Sign-On (SSO): SSO allows users to sign in to multiple applications using a single set of credentials. When configured with Active Directory, it ensures that access to Google accounts is tied to the Active Directory account. When an Active Directory account is terminated, access to associated Google accounts can be automatically revoked.
  2. Integration with Active Directory: By integrating SSO with Active Directory, the termination of an Active Directory account will effectively disable the user’s access to the Google domain, ensuring compliance with the requirement.

Option A (configuring two-factor authentication) is a security measure but does not directly address the requirement to remove Google account access when an Active Directory account is terminated.

Option B (removing the Google account from all IAM policies) is related to Google Cloud IAM (Identity and Access Management) and may not be directly tied to Active Directory account termination.

Option C (configuring BeyondCorp and Identity-Aware Proxy) is a security model but does not specifically address the synchronization of account terminations between Active Directory and Google accounts.

In summary, configuring single sign-on (SSO) in the Google domain, integrating it with Active Directory, is the most appropriate approach to ensure that Google account access is removed when the corresponding Active Directory account is terminated.

16
Q

Your company has recently acquired three growing startups in three different countries. You want to reduce overhead in infrastructure management and keep your costs low without sacrificing security and quality of service to your customers.
How should you meet these requirements?
A. Host all your subsidiaries’ services on-premises together with your existing services.
B. Host all your subsidiaries’ services together with your existing services on the public cloud.
C. Build a homogenous infrastructure at each subsidiary, and invest in training their engineers.
D. Build a homogenous infrastructure at each subsidiary, and invest in hiring more engineers.

A

To reduce overhead in infrastructure management, keep costs low, maintain security, and ensure the quality of service for customers across recently acquired startups in different countries, the most effective approach would be:

B. Host all your subsidiaries’ services together with your existing services on the public cloud.

Explanation:

  1. Public Cloud Benefits: Leveraging the public cloud allows for reduced infrastructure management overhead as the cloud provider handles the underlying infrastructure, including maintenance, updates, and security. It also offers scalability and flexibility based on demand, helping to control costs and adapt to growth efficiently.
  2. Consolidation and Integration: By hosting all services, including those of the acquired subsidiaries, on a unified public cloud platform, you can consolidate resources, reduce complexity, and improve integration across different parts of the organization.
  3. Cost Efficiency: Public cloud providers often offer cost-effective solutions with pay-as-you-go models, allowing you to manage costs effectively. Additionally, shared resources and centralized management lead to cost savings compared to separate on-premises or localized infrastructures.

Options A, C, and D involve building or maintaining separate infrastructures at each subsidiary, which can lead to increased complexity, higher costs, and challenges in maintaining consistency, security, and quality of service.

In summary, hosting all subsidiaries’ services, along with existing services, on the public cloud offers a scalable, cost-effective, and streamlined approach to infrastructure management while ensuring security and quality of service.

17
Q

What is the difference between Standard and Coldline storage?
A. Coldline storage is for data for which a slow transfer rate is acceptable.
B. Standard and Coldline storage have different durability guarantees.
C. Standard and Coldline storage use different APIs.
D. Coldline storage is for infrequently accessed data.

A

The difference between Standard and Coldline storage in Google Cloud is best described by:

D. Coldline storage is for infrequently accessed data.

Explanation:

  1. Standard Storage:
    • Standard storage is designed for data that is accessed frequently (“hot” data).
    • It has the highest at-rest storage cost but no retrieval fees and no minimum storage duration.
    • Ideal for data that is read and updated often, such as content served to users and active workloads.
  2. Coldline Storage:
    • Coldline storage is intended for data that is accessed infrequently (roughly less often than once a quarter).
    • It offers a much lower at-rest storage cost but adds per-GB retrieval fees and a 90-day minimum storage duration.
    • Access latency is the same as Standard storage; the difference is the cost model, which favors long-lived, rarely accessed data.

Options A, B, and C are not accurate explanations for the difference between Standard and Coldline storage:

  • Option A (slow transfer rate): Coldline storage is not about transfer rate; it’s about infrequent access to data.
  • Option B (durability guarantees): Both Standard and Coldline storage have the same durability guarantees, meaning data is extremely durable in both storage classes.
  • Option C (different APIs): Both storage classes use the same APIs for access and management; the difference is in usage and pricing based on the storage class selected.
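
As a small illustration with the google-cloud-storage Python client (bucket and object names are placeholders), the storage class is just an attribute of a bucket or object, and the read/write API is identical across classes:

    from google.cloud import storage

    client = storage.Client()

    # New bucket whose default storage class is Coldline.
    bucket = client.bucket("example-archive-bucket")
    bucket.storage_class = "COLDLINE"
    client.create_bucket(bucket, location="US")

    # Move an existing object to Coldline. Reads keep working unchanged,
    # but Coldline retrievals incur a per-GB retrieval fee.
    blob = client.bucket("example-live-bucket").blob("logs/2023.csv")
    blob.update_storage_class("COLDLINE")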
18
Q

What would provide near-unlimited availability of computing resources without requiring your organization to procure and provision new equipment?
A. Public cloud
B. Containers
C. Private cloud
D. Microservices

A

To provide near-unlimited availability of computing resources without requiring your organization to procure and provision new equipment, the most appropriate option is:

A. Public cloud.

Explanation:

  1. Public Cloud:
    • Public cloud services offer vast and scalable computing resources provided by cloud service providers like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and others.
    • Public cloud environments allow organizations to access computing resources on-demand without the need to purchase and manage physical infrastructure.
    • Scalability is a key feature, enabling organizations to easily scale up or down based on demand, ensuring near-unlimited availability of computing resources.
  2. Containers, Private Cloud, and Microservices:
    • While containers, private cloud, and microservices can provide scalability and flexibility, they may still be limited by the physical infrastructure of an organization or have specific scalability constraints compared to the virtually unlimited resources available in public clouds.

In summary, public cloud platforms offer the ability to access near-unlimited computing resources without the need for organizations to procure and provision new physical equipment, making it the most suitable option for achieving high availability and scalability.

19
Q

You are a program manager for a team of developers who are building an event-driven application to allow users to follow one another’s activities in the app. Each time a user adds himself as a follower of another user, a write occurs in the real-time database.
The developers will develop a lightweight piece of code that can respond to database writes and generate a notification to let the appropriate users know that they have gained new followers. The code should integrate with other cloud services such as Pub/Sub, Firebase, and Cloud APIs to streamline the orchestration process. The application requires a platform that automatically manages underlying infrastructure and scales to zero when there is no activity.
Which primary compute resource should your developers select, given these requirements?
A. Google Kubernetes Engine
B. Cloud Functions
C. App Engine flexible environment
D. Compute Engine

A

Given the requirements of building an event-driven application that automatically manages underlying infrastructure, scales to zero during periods of inactivity, and integrates with various cloud services for orchestration, the most suitable primary compute resource would be:

B. Cloud Functions

Explanation:

  1. Event-Driven Architecture: Cloud Functions are designed for event-driven, serverless computing. They respond to events, such as database writes, making them ideal for triggering actions like generating notifications whenever a user gains new followers.
  2. Automated Infrastructure Management: Cloud Functions abstract away infrastructure management, automatically scaling up or down based on the number of events and activity, thus meeting the requirement of automatically managing the underlying infrastructure.
  3. Integration with Cloud Services: Cloud Functions can seamlessly integrate with other cloud services such as Pub/Sub, Firebase, and Cloud APIs, allowing for streamlined orchestration of processes and interactions with different parts of the application.
  4. Cost-Efficiency: Cloud Functions follow a pay-as-you-go model, incurring costs only when they’re triggered by events. When there’s no activity, they scale down to zero, ensuring cost-efficiency during periods of inactivity.

Google Kubernetes Engine (A) and App Engine flexible environment (C) can both run this kind of workload, but neither scales to zero when there is no activity: the flexible environment always keeps at least one instance running, and a GKE cluster keeps nodes provisioned. Cloud Functions aligns most closely with the lightweight, event-driven, serverless requirements and incurs no cost during inactivity.

Compute Engine (D) is not the optimal choice for this scenario as it involves manual infrastructure management and does not align well with the requirement of automatically managing and scaling based on events and activity.
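
A minimal sketch of such a function using the Functions Framework for Python, assuming here that each new-follower write is relayed through a Pub/Sub topic (the payload fields are illustrative, not part of the question):

    import base64
    import json

    import functions_framework

    @functions_framework.cloud_event
    def notify_new_follower(cloud_event):
        # Pub/Sub delivers the message body base64-encoded inside the event.
        payload = base64.b64decode(cloud_event.data["message"]["data"])
        event = json.loads(payload)
        # Hypothetical fields written by the app on each follow action.
        print(f"{event['follower']} now follows {event['followed']}")

Deployed to Cloud Functions, this code runs only when an event arrives and scales to zero in between, so there is no idle infrastructure to pay for.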

20
Q

Your organization is developing an application that will capture a large amount of data from millions of different sensor devices spread all around the world. Your organization needs a database that is suitable for worldwide, high-speed data storage of a large amount of unstructured data.
Which Google Cloud product should your organization choose?
A. Firestore
B. Cloud Data Fusion
C. Cloud SQL
D. Cloud Bigtable

A

For capturing a large amount of unstructured data from millions of sensor devices spread worldwide and needing high-speed data storage, the most suitable Google Cloud product is:

D. Cloud Bigtable

Explanation:

  1. Scalability and High-Speed Data Storage: Cloud Bigtable is designed for handling large-scale, high-throughput workloads with a focus on performance and scalability. It is a NoSQL, massively scalable, and highly available database service that can handle massive amounts of unstructured data.
  2. Global Deployment: Cloud Bigtable supports worldwide deployment, enabling efficient data ingestion from millions of sensor devices spread across the globe. It can manage the high-speed, high-volume writes and reads necessary for such a use case.

Firestore (A) is a NoSQL document database that offers scalability and real-time synchronization but may not be as suitable for extremely high-speed and high-volume unstructured data storage compared to Cloud Bigtable.

Cloud Data Fusion (B) is a fully managed, cloud-native data integration service but is more focused on data integration and transformation rather than high-speed unstructured data storage.

Cloud SQL (C) is a fully managed relational database service, which is not ideal for unstructured data storage and may not have the scalability and performance needed for this use case.

In summary, Cloud Bigtable is the appropriate Google Cloud product for worldwide, high-speed data storage of a large amount of unstructured data from millions of sensor devices.
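
A write-path sketch with the google-cloud-bigtable Python client; the instance, table, column family, and row-key layout are assumptions for illustration. In practice the row key should combine device ID and timestamp so writes spread evenly across nodes:

    from google.cloud import bigtable

    client = bigtable.Client(project="example-project")
    table = client.instance("sensors").table("readings")

    # Row key: device ID plus timestamp, for even key distribution.
    row = table.direct_row(b"device-4711#2024-01-01T00:00:00Z")
    row.set_cell("measurements", "temperature", b"21.5")
    row.set_cell("measurements", "humidity", b"40")
    row.commit()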

21
Q

Your organization needs to build streaming data pipelines. You don’t want to manage the individual servers that do the data processing in the pipelines. Instead, you want a managed service that will automatically scale with the amount of data to be processed.
Which Google Cloud product or feature should your organization choose?
A. Pub/Sub
B. Dataflow
C. Data Catalog
D. Dataprep by Trifacta

A

For building streaming data pipelines without managing individual servers and ensuring automatic scaling with the data volume, the most suitable Google Cloud product is:

B. Dataflow

Explanation:

  1. Managed Service and Automatic Scaling: Google Cloud Dataflow is a fully managed service that allows you to design, deploy, and monitor data processing pipelines. It automatically handles server provisioning, scaling, and managing the infrastructure based on the incoming data volume, ensuring you don’t have to manage individual servers.
  2. Stream Processing: Dataflow supports stream processing, making it an ideal choice for building streaming data pipelines. It can handle real-time data processing with scalability based on the incoming data stream.

Pub/Sub (A) is a messaging service and can be used in conjunction with Dataflow for ingesting and delivering messages to the data processing pipeline.

Data Catalog (C) is a fully managed and scalable metadata management service, primarily used for discovering and managing metadata across an organization. It is not specifically designed for building and managing streaming data pipelines.

Dataprep by Trifacta (D) is a cloud-based service for cleaning, enriching, and transforming raw data into a usable format. While it’s useful for data preparation, it’s not focused on building and managing streaming data pipelines.

In summary, Google Cloud Dataflow is the appropriate choice for building streaming data pipelines without managing individual servers, ensuring automatic scaling based on the volume of incoming data.
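
A minimal streaming pipeline sketch using the Apache Beam Python SDK, which Dataflow executes; the project, region, topics, and bucket are placeholders:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        streaming=True,
        runner="DataflowRunner",  # Dataflow provisions and autoscales workers.
        project="example-project",
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/raw-events")
            | "Transform" >> beam.Map(lambda raw: raw.decode("utf-8").upper())
            | "Encode" >> beam.Map(lambda text: text.encode("utf-8"))
            | "Write" >> beam.io.WriteToPubSub(
                topic="projects/example-project/topics/clean-events")
        )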

22
Q

Your organization is building an application running in Google Cloud. Currently, software builds, tests, and regular deployments are done manually, but you want to reduce work for the team. Your organization wants to use Google Cloud managed solutions to automate your build, testing, and deployment process.
Which Google Cloud product or feature should your organization use?
A. Cloud Scheduler
B. Cloud Code
C. Cloud Build
D. Cloud Deployment Manager

A

To automate the build, testing, and deployment process in Google Cloud, the most appropriate Google Cloud product is:

C. Cloud Build

Explanation:

  1. Automated Build and Test: Cloud Build is a fully managed continuous integration and continuous deployment (CI/CD) platform that automates the build and test processes. It allows you to automatically build, test, and validate code changes upon every commit or triggered event.
  2. Integration with Other Services: Cloud Build integrates with other Google Cloud services and tools, making it easy to set up pipelines that automate your development workflows.

Cloud Scheduler (A) is a fully managed cron job scheduler, which is useful for invoking services at specified intervals, but it doesn’t directly handle the build, test, and deployment automation process.

Cloud Code (B) is an extension for IDEs like Visual Studio Code and IntelliJ IDEA that helps with writing, deploying, and debugging cloud-native applications. While it assists in the development process, it’s not a standalone automation solution for build, test, and deployment.

Cloud Deployment Manager (D) is a tool to define, deploy, and manage infrastructure in Google Cloud, helping in creating and managing cloud resources in a declarative manner. While it’s essential for infrastructure deployment, it’s not focused on automating the entire build, test, and deployment process.

In summary, Cloud Build is the appropriate Google Cloud product for automating the build, testing, and deployment process, streamlining the development workflow, and reducing manual work for the team.
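
Builds are normally described declaratively and run by a trigger on each commit, but as a sketch the same build can be submitted through the Cloud Build Python client (project ID and image name are placeholders; a real build would also attach a source archive):

    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()
    build = cloudbuild_v1.Build(
        steps=[
            cloudbuild_v1.BuildStep(
                name="gcr.io/cloud-builders/docker",
                args=["build", "-t", "gcr.io/example-project/app:latest", "."],
            )
        ],
        images=["gcr.io/example-project/app:latest"],
    )
    # create_build returns a long-running operation; result() waits for it.
    operation = client.create_build(project_id="example-project", build=build)
    print(operation.result().status)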

23
Q

Which Google Cloud product can report on and maintain compliance on your entire Google Cloud organization to cover multiple projects?
A. Cloud Logging
B. Identity and Access Management
C. Google Cloud Armor
D. Security Command Center

A

The Google Cloud product that can report on and maintain compliance for your entire Google Cloud organization covering multiple projects is:

D. Security Command Center

Explanation:

  1. Security Command Center: Google Cloud Security Command Center (SCC) is a security and risk management platform that helps you gain centralized visibility into your security posture across your Google Cloud environment. It provides security and compliance insights and enables monitoring, detection, and response to security threats and vulnerabilities.

Cloud Logging (A) is a tool for storing, searching, analyzing, and alerting on log data. While it’s essential for monitoring and analyzing logs, it’s not primarily focused on reporting and maintaining compliance across the entire organization.

Identity and Access Management (B) is a critical component for controlling access and permissions within Google Cloud, but it’s more focused on access control than reporting and maintaining compliance at an organizational level.

Google Cloud Armor (C) is a DDoS (Distributed Denial of Service) and application defense service, providing security for web applications and services. It’s not specifically designed for reporting and maintaining compliance across multiple projects at an organizational level.

In summary, Security Command Center (D) is the Google Cloud product that provides centralized visibility and management of security and compliance across the entire Google Cloud organization, covering multiple projects.

24
Q

Your organization needs to establish private network connectivity between its on-premises network and its workloads running in Google Cloud. You need to be able to set up the connection as soon as possible.
Which Google Cloud product or feature should you use?
A. Cloud Interconnect
B. Direct Peering
C. Cloud VPN
D. Cloud CDN

A

To establish private network connectivity between your on-premises network and workloads running in Google Cloud quickly, the most appropriate Google Cloud product or feature is:

C. Cloud VPN (Virtual Private Network)

Explanation:

  1. Private Network Connectivity: Cloud VPN provides a secure and encrypted connection between your on-premises network and your virtual private cloud (VPC) network in Google Cloud. It allows you to securely connect your on-premises network to your Google Cloud workloads.
  2. Quick Setup: Cloud VPN is relatively easy and quick to set up, allowing you to establish the connection promptly.

Cloud Interconnect (A) and Direct Peering (B) are both valid options for private network connectivity but may involve longer setup times and additional configurations compared to Cloud VPN, making Cloud VPN more suitable when the requirement is to set up the connection quickly.

Cloud CDN (D) is a content delivery network service and is not related to setting up a private network connection between on-premises and Google Cloud.

In summary, to establish private network connectivity quickly between your on-premises network and workloads in Google Cloud, Cloud VPN is the most appropriate choice.

25
Q

Your organization is developing a mobile app and wants to select a fully featured cloud-based compute platform for it.
Which Google Cloud product or feature should your organization use?
A. Google Kubernetes Engine
B. Firebase
C. Cloud Functions
D. App Engine

A

For developing a mobile app and selecting a fully featured cloud-based compute platform, the most appropriate Google Cloud product is:

B. Firebase

Explanation:

  1. Firebase: Firebase is a comprehensive mobile and web application development platform provided by Google. It offers a wide range of features including real-time database, authentication, hosting, analytics, machine learning, and more, making it ideal for developing and managing mobile apps.
  2. Mobile App Development: Firebase is specifically designed to support mobile app development, offering features that facilitate authentication, real-time database updates, cloud messaging, and other functionalities critical for mobile apps.

Google Kubernetes Engine (A) is a managed Kubernetes service and is better suited for deploying, managing, and orchestrating containerized applications. While it can be used to support a mobile app backend, it may be more complex than necessary for a mobile app development scenario.

Cloud Functions (C) is a serverless compute service that allows developers to run event-driven functions in response to events. While useful for backend logic and processing, it may not cover the broader set of features required for a fully featured cloud-based compute platform for mobile app development.

App Engine (D) is a platform-as-a-service (PaaS) offering that enables the deployment and scaling of applications. It is well-suited for web applications and backends, but Firebase is more specialized and comprehensive for mobile app development needs.

In summary, Firebase is the most suitable Google Cloud product for a fully featured cloud-based compute platform specifically designed for mobile app development, providing a wide array of features critical for mobile apps.

26
Q

Your company has been using a shared facility for data storage and will be migrating to Google Cloud. One of the internal applications uses Linux custom images that need to be migrated.
Which Google Cloud product should you use to maintain the custom images?
A. App Engine flexible environment
B. Compute Engine
C. App Engine standard environment
D. Google Kubernetes Engine

A

To maintain the custom Linux images during the migration to Google Cloud, the most appropriate Google Cloud product is:

B. Compute Engine

Explanation:

  1. Compute Engine: Compute Engine allows you to create and manage custom Linux images easily. You can create, customize, and store your custom Linux images, including any specific configurations or software setups needed for your internal application. These custom images can then be used to create and manage virtual machines (VMs) in Google Cloud.

App Engine flexible environment (A) and App Engine standard environment (C) are platform-as-a-service (PaaS) offerings designed for deploying applications without having direct control over the underlying infrastructure. These environments do not provide direct support for managing custom Linux images like Compute Engine does.

Google Kubernetes Engine (D) is a managed Kubernetes service and is more focused on orchestrating and managing containerized applications using Kubernetes. It is not designed for managing custom Linux images in the same way Compute Engine is.

In summary, Compute Engine is the most suitable Google Cloud product for maintaining custom Linux images during the migration, providing flexibility and control over the custom images needed for your internal application.
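
As an illustrative sketch, a custom image can be captured from a migrated disk with the google-cloud-compute Python client (project, zone, disk, and image names are placeholders):

    from google.cloud import compute_v1

    image_client = compute_v1.ImagesClient()
    image = compute_v1.Image(
        name="internal-app-base-v1",
        # Capture the migrated Linux boot disk as a reusable custom image.
        source_disk="projects/example-project/zones/us-central1-a/disks/migrated-disk",
    )
    operation = image_client.insert(project="example-project", image_resource=image)
    operation.result()  # New VMs can now be created from this custom image.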

27
Q

Your organization wants to migrate its data management solutions to Google Cloud because it needs to dynamically scale up or down and to run transactional SQL queries against historical data at scale.
Which Google Cloud product or service should your organization use?
A. BigQuery
B. Cloud Bigtable
C. Pub/Sub
D. Cloud Spanner

A

To dynamically scale up or down and run transactional SQL queries against historical data at scale, the most appropriate Google Cloud product or service is:

D. Cloud Spanner

Explanation:

  1. Cloud Spanner: Cloud Spanner is a globally distributed, horizontally scalable, strongly consistent, and relational database service. It provides the ability to scale up or down dynamically based on workload demands. It allows you to run transactional SQL queries against historical data at scale, ensuring consistent, ACID-compliant transactions.
  2. Transactional SQL Queries: Cloud Spanner supports SQL-based queries, making it suitable for transactional workloads where you need to perform SQL queries against historical data while maintaining strong consistency.

BigQuery (A) is an excellent choice for running analytical queries on large datasets, but it may not be the best fit for transactional SQL queries or for dynamic scaling up or down based on workload demands.

Cloud Bigtable (B) is a high-throughput, scalable NoSQL database, but it is more suitable for handling high-velocity, high-volume analytical workloads rather than transactional SQL queries.

Pub/Sub (C) is a messaging service for building event-driven systems and real-time analytics, but it’s not a database solution that allows transactional SQL queries against historical data.

In summary, for dynamically scaling and running transactional SQL queries against historical data at scale, Cloud Spanner is the most appropriate Google Cloud product.
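
A small sketch of a transactional read-modify-write with the google-cloud-spanner Python client; the instance, database, and schema are placeholders:

    from google.cloud import spanner

    database = (
        spanner.Client(project="example-project")
        .instance("example-instance")
        .database("orders")
    )

    def apply_credit(transaction):
        # Everything in this function runs in a single ACID transaction.
        row = transaction.execute_sql(
            "SELECT balance FROM accounts WHERE id = @id",
            params={"id": 42},
            param_types={"id": spanner.param_types.INT64},
        ).one()
        transaction.execute_update(
            "UPDATE accounts SET balance = @b WHERE id = @id",
            params={"b": row[0] + 10, "id": 42},
            param_types={"b": spanner.param_types.INT64,
                         "id": spanner.param_types.INT64},
        )

    database.run_in_transaction(apply_credit)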

28
Q

Your organization needs to categorize objects in a large group of static images using machine learning. Which Google Cloud product or service should your organization use?
A. BigQuery ML
B. AutoML Video Intelligence
C. Cloud Vision API
D. AutoML Tables

A

To categorize objects in a large group of static images using machine learning, the most appropriate Google Cloud product or service is:

C. Cloud Vision API

Explanation:

  1. Cloud Vision API: Cloud Vision API is a powerful and efficient image analysis tool that can be used to categorize and annotate images. It can detect and identify objects, faces, logos, labels, and more within images, making it ideal for categorizing objects in a large group of static images.

BigQuery ML (A) is a machine learning service that is more suitable for working with structured data and performing machine learning tasks directly within BigQuery using SQL. It is not specifically designed for image categorization.

AutoML Video Intelligence (B) is designed for training machine learning models specifically for videos, not static images. It’s not the most suitable choice for this scenario.

AutoML Tables (D) is used for structured tabular data and is not designed for image categorization tasks.

In summary, Cloud Vision API is the most appropriate Google Cloud product for categorizing objects in a large group of static images using machine learning.
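
As a concrete sketch, categorizing one of the static images with the google-cloud-vision Python client could look like this; the Cloud Storage URI is a placeholder:

  from google.cloud import vision

  client = vision.ImageAnnotatorClient()

  # Placeholder URI pointing at one of the static images.
  image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/photo1.jpg"))

  # Label detection returns the categories Vision finds in the image.
  response = client.label_detection(image=image)
  for label in response.label_annotations:
      print(label.description, round(label.score, 2))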

29
Q

Your organization runs all its workloads on Compute Engine virtual machine instances. Your organization has a security requirement: the virtual machines are not allowed to access the public internet. The workloads running on those virtual machines need to access BigQuery and Cloud Storage, using their publicly accessible interfaces, without violating the security requirement.
Which Google Cloud product or feature should your organization use?
A. Identity-Aware Proxy
B. Cloud NAT (network address translation)
C. VPC internal load balancers
D. Private Google Access

A

To enable the workloads running on Compute Engine virtual machine instances to access BigQuery and Cloud Storage using their publicly accessible interfaces without allowing access to the public internet, the most appropriate Google Cloud product or feature is:

D. Private Google Access

Explanation:

  1. Private Google Access: Private Google Access allows virtual machine instances without public IP addresses to reach Google APIs and services such as BigQuery and Cloud Storage using their publicly accessible interfaces. It enables the workloads to access these services without violating the security requirement of not allowing access to the public internet.
  2. Identity-Aware Proxy (A): Identity-Aware Proxy is used to control access to applications and VMs, providing secure access to your applications without exposing them to the public internet. However, it’s not directly related to enabling access to public Google services without public IP addresses.
  3. Cloud NAT (B): Cloud NAT is used to provide internet connectivity to instances that do not have a public IP address. However, in this scenario, the goal is to access Google services without exposing the VMs to the public internet.
  4. VPC Internal Load Balancers (C): VPC Internal Load Balancers are used to load balance traffic within a VPC. They do not specifically address the requirement of allowing access to public Google services without public IP addresses.

In summary, to meet the security requirement and enable access to BigQuery and Cloud Storage through their publicly accessible interfaces without allowing access to the public internet, Private Google Access (option D) is the most appropriate Google Cloud product or feature.

30
Q

Which Google Cloud product is designed to reduce the risks of handling personally identifiable information (PII)?
A. Cloud Storage
B. Google Cloud Armor
C. Cloud Data Loss Prevention
D. Secret Manager

A

The Google Cloud product designed to reduce the risks of handling personally identifiable information (PII) is:

C. Cloud Data Loss Prevention

Explanation:

  1. Cloud Data Loss Prevention (DLP): Cloud DLP is a comprehensive service that helps you discover, classify, and protect sensitive data, including personally identifiable information (PII). It provides tools to automatically scan and identify PII and other sensitive information, allowing you to apply appropriate controls and protections to mitigate the risks associated with handling such data.
  2. Cloud Storage (A): Cloud Storage is a scalable object storage solution, but it does not specifically focus on data loss prevention or protection of PII.
  3. Google Cloud Armor (B): Google Cloud Armor is a DDoS and application defense service, focused on protecting against application vulnerabilities and DDoS attacks. It’s not specifically designed to handle or protect PII.
  4. Secret Manager (D): Secret Manager is designed for securely storing API keys, passwords, certificates, and other sensitive data. While it enhances security, it’s not primarily focused on PII protection.

In summary, Cloud Data Loss Prevention (Cloud DLP) is the most appropriate Google Cloud product for reducing the risks associated with handling personally identifiable information (PII) and ensuring proper protection and management of sensitive data.
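
For illustration, a minimal inspection request with the google-cloud-dlp Python client might look like the following; the project ID and sample text are placeholders:

  from google.cloud import dlp_v2

  dlp = dlp_v2.DlpServiceClient()
  parent = "projects/my-project"  # placeholder project ID

  # Scan a piece of text for common PII infoTypes.
  response = dlp.inspect_content(
      request={
          "parent": parent,
          "inspect_config": {
              "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
          },
          "item": {"value": "Contact Jane at jane@example.com or 555-0100."},
      }
  )
  for finding in response.result.findings:
      print(finding.info_type.name, finding.likelihood)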

31
Q

Your organization is migrating to Google Cloud. As part of that effort, it needs to move terabytes of data from on-premises file servers to Cloud Storage. Your organization wants the migration process to be automated and to be managed by Google. Your organization has an existing Dedicated Interconnect connection that it wants to use. Which Google Cloud product or feature should your organization use?
A. Storage Transfer Service
B. Migrate for Anthos
C. BigQuery Data Transfer Service
D. Transfer Appliance

A

To automate and manage the migration of terabytes of data from on-premises file servers to Google Cloud Storage using an existing Dedicated Interconnect connection, the most appropriate Google Cloud product or feature is:

A. Storage Transfer Service

Explanation:

  1. Storage Transfer Service: Storage Transfer Service allows you to transfer large amounts of data from on-premises file servers or other cloud providers to Google Cloud Storage. It supports scheduling and automating these transfers, making it a suitable choice for automating the migration process. The service can leverage an existing Dedicated Interconnect connection to facilitate the migration securely and efficiently.
  2. Migrate for Anthos (B): Migrate for Anthos is a service designed to migrate virtual machines and their workloads to Google Kubernetes Engine (GKE). It’s not specifically designed for migrating large amounts of data from file servers to Google Cloud Storage.
  3. BigQuery Data Transfer Service (C): BigQuery Data Transfer Service is designed to transfer data into BigQuery for analysis. It’s not appropriate for moving terabytes of data from on-premises file servers to Cloud Storage.
  4. Transfer Appliance (D): Transfer Appliance is a physical hardware device that allows you to securely and quickly move large amounts of data to Google Cloud Storage. However, it’s not necessary to use this physical appliance when you have an existing Dedicated Interconnect connection, as Storage Transfer Service can accomplish the transfer over the network.

In summary, Storage Transfer Service (option A) is the most appropriate Google Cloud product for automating and managing the migration of terabytes of data from on-premises file servers to Google Cloud Storage using an existing Dedicated Interconnect connection.
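
As a hedged sketch, creating such a transfer job with the google-cloud-storage-transfer Python client might look roughly like this, assuming a transfer agent pool has already been set up for the on-premises file servers; every name below is a placeholder:

  from google.cloud import storage_transfer

  client = storage_transfer.StorageTransferServiceClient()

  # Placeholder project, agent pool, source directory, and destination bucket.
  job = client.create_transfer_job(
      {
          "transfer_job": {
              "project_id": "my-project",
              "status": storage_transfer.TransferJob.Status.ENABLED,
              "transfer_spec": {
                  "source_agent_pool_name": "projects/my-project/agentPools/on-prem-pool",
                  "posix_data_source": {"root_directory": "/mnt/fileserver/data"},
                  "gcs_data_sink": {"bucket_name": "my-migration-bucket"},
              },
          }
      }
  )
  print(job.name)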

32
Q

Your organization needs to analyze data in order to gather insights into its daily operations. You only want to pay for the data you store and the queries you perform. Which Google Cloud product should your organization choose for its data analytics warehouse?
A. Cloud SQL
B. Dataproc
C. Cloud Spanner
D. BigQuery

A

To perform data analytics, gathering insights into daily operations while only paying for the data stored and the queries performed, the most appropriate Google Cloud product is:

D. BigQuery

Explanation:

  1. BigQuery: BigQuery is a fully-managed, serverless, and highly scalable data warehouse designed for analyzing large datasets. With BigQuery, you only pay for the data you store and the queries you run, making it cost-effective and suitable for analytical workloads. It’s optimized for running fast SQL queries over large datasets and provides real-time insights into your data.
  2. Cloud SQL (A): Cloud SQL is a managed relational database service, suitable for traditional transactional applications. It’s not designed for data analytics and may not be cost-effective for large-scale analytical workloads.
  3. Dataproc (B): Dataproc is a fast, easy-to-use, fully-managed cloud service for running Apache Spark and Apache Hadoop clusters. It’s suitable for big data processing and analysis but requires cluster management and may not provide the same serverless and cost-effective model as BigQuery.
  4. Cloud Spanner (C): Cloud Spanner is a globally distributed, horizontally scalable, and strongly consistent relational database service. It’s designed to provide transactional consistency at global scale and may not be the most cost-effective choice for data analytics.

In summary, for cost-effective data analytics with a pay-as-you-go model, where you only pay for the data stored and the queries performed, BigQuery (option D) is the most suitable Google Cloud product for building a data analytics warehouse.
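
To make the pay-per-query model concrete, here is a minimal sketch with the google-cloud-bigquery Python client; the table and column names are placeholders:

  from google.cloud import bigquery

  client = bigquery.Client()

  # You are billed for the bytes this query scans plus storage, not for idle servers.
  query = """
      SELECT store_id, SUM(amount) AS daily_revenue
      FROM `my-project.ops.sales`
      WHERE sale_date = CURRENT_DATE()
      GROUP BY store_id
  """
  for row in client.query(query).result():
      print(row.store_id, row.daily_revenue)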

33
Q

Your organization wants to run a container-based application on Google Cloud. This application is expected to increase in complexity. You have a security need for fine-grained control of traffic between the containers. You also have an operational need to exercise fine-grained control over the application’s scaling policies.
What Google Cloud product or feature should your organization use?
A. Google Kubernetes Engine cluster
B. App Engine
C. Cloud Run
D. Compute Engine virtual machines

A

To run a container-based application with fine-grained control of traffic between containers and operational control over scaling policies, the most appropriate Google Cloud product or feature is:

A. Google Kubernetes Engine (GKE) cluster

Explanation:

  1. Fine-Grained Traffic Control: GKE allows fine-grained control over network policies and traffic between containers using Kubernetes Network Policies. This enables you to define and enforce specific communication rules between containers within the cluster.
  2. Operational Control over Scaling Policies: GKE provides comprehensive control over scaling policies and strategies through Kubernetes. Kubernetes offers features like Horizontal Pod Autoscaling and Vertical Pod Autoscaling, allowing you to control scaling based on metrics such as CPU usage, memory usage, etc.
  3. Container Orchestration: GKE is designed to orchestrate and manage containerized applications effectively, providing features for deploying, managing, and scaling containers.

App Engine (B) is a managed platform-as-a-service (PaaS) offering that abstracts away much of the infrastructure, so it does not expose fine-grained control over networking between containers or over scaling policies. While it’s efficient for many use cases, it does not provide the level of control these requirements demand.

Cloud Run (C) is a fully managed serverless platform for running stateless containers. While it offers autoscaling based on demand, it might not provide the fine-grained control over traffic between containers as requested.

Compute Engine (D) is a flexible infrastructure as a service (IaaS) option where you have full control over the virtual machines, but managing the scaling policies and traffic between containers at the fine-grained level would require significant manual configuration and may not be the most efficient choice for this scenario.

In summary, Google Kubernetes Engine (GKE) cluster (option A) is the most appropriate Google Cloud product to fulfill the requirements of fine-grained control over traffic between containers and operational control over scaling policies for your container-based application.
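
As a sketch of the traffic-control piece, the following uses the official kubernetes Python client to create a NetworkPolicy that only lets frontend pods reach backend pods; the labels and namespace are placeholders:

  from kubernetes import client, config

  config.load_kube_config()  # assumes kubectl already points at the GKE cluster

  # Allow ingress to "backend" pods only from "frontend" pods.
  policy = client.V1NetworkPolicy(
      metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
      spec=client.V1NetworkPolicySpec(
          pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
          ingress=[
              client.V1NetworkPolicyIngressRule(
                  _from=[
                      client.V1NetworkPolicyPeer(
                          pod_selector=client.V1LabelSelector(
                              match_labels={"app": "frontend"}
                          )
                      )
                  ]
              )
          ],
      ),
  )
  client.NetworkingV1Api().create_namespaced_network_policy(
      namespace="default", body=policy
  )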

34
Q

Which Google Cloud product or feature makes specific recommendations based on security risks and compliance violations?
A. Google Cloud firewalls
B. Security Command Center
C. Cloud Deployment Manager
D. Google Cloud Armor

A

The Google Cloud product or feature that makes specific recommendations based on security risks and compliance violations is:

B. Security Command Center

Explanation:

  1. Security Command Center: Security Command Center is a comprehensive security and risk management platform that provides insights into the security posture of your Google Cloud environment. It helps you identify and prioritize security risks and compliance violations by providing specific recommendations and actionable insights to improve your security posture.

Google Cloud firewalls (A) allow you to control incoming and outgoing traffic to your instances. However, they are more about configuring network rules and access rather than providing specific recommendations based on security risks and compliance violations.

Cloud Deployment Manager (C) is a tool for defining, deploying, and managing Google Cloud infrastructure. It’s not focused on security recommendations based on risks or compliance violations.

Google Cloud Armor (D) is a DDoS and application defense service. While it offers protection against various security threats, it does not provide specific recommendations based on security risks and compliance violations.

In summary, Security Command Center (option B) is the Google Cloud product or feature that provides specific recommendations based on security risks and compliance violations, helping you improve the security posture of your environment.

35
Q

Which Google Cloud product provides a consistent platform for multi-cloud application deployments and extends other Google Cloud services to your organization’s environment?
A. Google Kubernetes Engine
B. Virtual Public Cloud
C. Compute Engine
D. Anthos

A

The Google Cloud product that provides a consistent platform for multi-cloud application deployments and extends other Google Cloud services to your organization’s environment is:

D. Anthos

Explanation:

  1. Anthos: Anthos is a modern application management platform that provides a consistent and unified way to deploy, manage, and operate applications across various environments, including on-premises, in the cloud, and across multiple clouds. It allows you to extend Google Cloud services to your organization’s environment, ensuring consistency and ease of deployment across diverse infrastructures.

Google Kubernetes Engine (A) is a managed Kubernetes service and a part of Anthos, but Anthos encompasses a broader set of capabilities beyond just Kubernetes management.

Virtual Public Cloud (B) is not a recognized Google Cloud product or service; it appears to be a distractor for Virtual Private Cloud (VPC), which is a networking construct rather than a multi-cloud application platform.

Compute Engine (C) is an Infrastructure as a Service (IaaS) offering by Google Cloud, providing virtual machines for various computing needs. However, it does not specifically offer a consistent platform for multi-cloud application deployments or extend Google Cloud services to other environments.

In summary, Anthos (option D) is the Google Cloud product that provides a consistent platform for multi-cloud application deployments and extends other Google Cloud services to your organization’s environment.

36
Q

Your organization is developing an application that will manage payments and online bank accounts located around the world. The most critical requirement for your database is that each transaction is handled consistently. Your organization anticipates almost unlimited growth in the amount of data stored.
Which Google Cloud product should your organization choose?
A. Cloud SQL
B. Cloud Storage
C. Firestore
D. Cloud Spanner

A

For an application managing payments and online bank accounts with a critical requirement for consistent transaction handling and anticipating almost unlimited data growth, the most appropriate Google Cloud product is:

D. Cloud Spanner

Explanation:

  1. Cloud Spanner: Cloud Spanner is a globally distributed, horizontally scalable, and strongly consistent relational database service. It provides consistent transactions across the globe and is designed to handle a large amount of data while guaranteeing strong consistency. This makes it ideal for applications dealing with financial transactions that require high consistency.
  2. Cloud SQL (A): Cloud SQL is a fully-managed relational database service. While it provides consistency, it may not be as suitable for handling almost unlimited data growth and may not offer the same level of scalability and globally consistent transactions as Cloud Spanner.
  3. Cloud Storage (B): Cloud Storage is an object storage service designed for storing and retrieving any amount of data. However, it does not provide transaction handling or consistency features required for managing payments and bank accounts.
  4. Firestore (C): Firestore is a NoSQL document database that offers real-time updates and scalability, but it may not provide the same level of strong consistency required for critical financial transactions.

In summary, for an application managing payments and online bank accounts with a critical requirement for consistent transaction handling and anticipating almost unlimited data growth, Cloud Spanner (option D) is the most suitable Google Cloud product, offering globally distributed, highly consistent transactions across a large amount of data.
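
For illustration, here is a minimal sketch of an atomic funds transfer using run_in_transaction from the google-cloud-spanner Python client; the instance, database, table, and account IDs are placeholders:

  from google.cloud import spanner

  database = spanner.Client().instance("payments").database("accounts-db")  # placeholders

  def transfer(transaction, source, target, amount):
      # Both updates commit atomically, or neither does (ACID).
      transaction.execute_update(
          "UPDATE Accounts SET balance = balance - @amt WHERE account_id = @id",
          params={"amt": amount, "id": source},
          param_types={
              "amt": spanner.param_types.INT64,
              "id": spanner.param_types.STRING,
          },
      )
      transaction.execute_update(
          "UPDATE Accounts SET balance = balance + @amt WHERE account_id = @id",
          params={"amt": amount, "id": target},
          param_types={
              "amt": spanner.param_types.INT64,
              "id": spanner.param_types.STRING,
          },
      )

  database.run_in_transaction(transfer, "acct-001", "acct-002", 250)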

37
Q

Your organization wants an economical solution to store data such as files, graphical images, and videos and to access and share them securely.
Which Google Cloud product or service should your organization use?
A. Cloud Storage
B. Cloud SQL
C. Cloud Spanner
D. BigQuery

A

For an economical solution to store data such as files, graphical images, and videos securely and access them, the most appropriate Google Cloud product or service is:

A. Cloud Storage

Explanation:

  1. Cloud Storage: Cloud Storage is a highly durable and scalable object storage service that allows you to store various types of data, including files, images, videos, and more. It’s cost-effective, offers secure and reliable storage, and allows you to control access to the stored data through fine-grained access controls.
  2. Cloud SQL (B): Cloud SQL is a fully-managed relational database service. While it’s great for structured data storage and retrieval, it may not be the most economical solution for storing large files, images, and videos.
  3. Cloud Spanner (C): Cloud Spanner is a globally distributed, horizontally scalable, and strongly consistent relational database service. It’s designed for transactional and structured data, making it less suitable for storing files, images, and videos.
  4. BigQuery (D): BigQuery is a serverless, highly scalable, and cost-effective data warehouse for running fast, SQL-like queries over large datasets. However, it’s not designed for storing files or multimedia data.

In summary, for an economical solution to store data such as files, graphical images, and videos securely and access them, Cloud Storage (option A) is the most suitable Google Cloud product. It provides a scalable, secure, and cost-effective solution for storing various types of data while allowing secure access and sharing.
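
As a concrete sketch, storing a video and sharing it securely with a time-limited signed URL via the google-cloud-storage Python client could look like this, assuming the client runs with service account credentials that can sign URLs; the bucket and file names are placeholders:

  from datetime import timedelta

  from google.cloud import storage

  bucket = storage.Client().bucket("my-media-bucket")  # placeholder bucket name

  # Upload a video file as an object.
  blob = bucket.blob("videos/launch.mp4")
  blob.upload_from_filename("launch.mp4")

  # Share it securely for one hour instead of making it public.
  url = blob.generate_signed_url(expiration=timedelta(hours=1))
  print(url)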

38
Q

Your organization wants to predict the behavior of visitors to its public website. To do that, you have decided to build a machine learning model. Your team has database-related skills but only basic machine learning skills, and would like to use those database skills.
Which Google Cloud product or feature should your organization choose?
A. BigQuery ML
B. LookML
C. TensorFlow
D. Cloud SQL

A

To predict the behavior of visitors to your organization’s public website and leverage database-related skills, the most appropriate Google Cloud product or feature is:

A. BigQuery ML

Explanation:

  1. BigQuery ML: BigQuery ML is a fully-managed, serverless machine learning tool integrated with BigQuery. It allows you to build and deploy machine learning models using standard SQL queries, making it accessible to individuals with database-related skills. This feature enables predictive analytics directly within BigQuery without needing to use separate machine learning platforms or programming languages.
  2. LookML (B): LookML is a modeling language for Looker, a business intelligence and data analytics platform. It is not a machine learning tool for building predictive models.
  3. TensorFlow (C): TensorFlow is an open-source machine learning framework that allows you to build and train custom machine learning models. While powerful, it requires programming and machine learning expertise, which may be beyond the basic machine learning skills of the team.
  4. Cloud SQL (D): Cloud SQL is a fully-managed relational database service. It is not a machine learning tool and does not directly support the creation of predictive models.

In summary, for your organization’s specific scenario where database-related skills are prominent and basic machine learning skills are preferred, BigQuery ML (option A) is the most appropriate Google Cloud product. It allows for machine learning model creation and prediction directly within BigQuery using SQL, enabling predictive analytics without extensive machine learning expertise.
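
For illustration, training a model with nothing but SQL might look like the sketch below, run through the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders:

  from google.cloud import bigquery

  client = bigquery.Client()

  # CREATE MODEL is plain SQL, so the team's database skills carry over directly.
  client.query("""
      CREATE OR REPLACE MODEL `my-project.web.visitor_model`
      OPTIONS (model_type = 'logistic_reg', input_label_cols = ['made_purchase']) AS
      SELECT made_purchase, pages_viewed, session_seconds, referrer
      FROM `my-project.web.sessions`
  """).result()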

39
Q

Your organization needs to restrict access to a Cloud Storage bucket. Only employees who are based in Canada should be allowed to view the contents.
What is the most effective and efficient way to satisfy this requirement?
A. Deploy the Cloud Storage bucket to a Google Cloud region in Canada
B. Configure Google Cloud Armor to allow access to the bucket only from IP addresses based in Canada
C. Give each employee who is based in Canada access to the bucket
D. Create a group consisting of all Canada-based employees, and give the group access to the bucket

A

The most effective and efficient way to restrict access to a Cloud Storage bucket so that only employees based in Canada can view its contents is:

D. Create a group consisting of all Canada-based employees, and give the group access to the bucket

Explanation:

  1. Group-based access (Option D): Granting an IAM role (for example, Storage Object Viewer) to a single Google Group is both effective and efficient: membership is managed in one place, access follows the employee rather than their network location, and joiners and leavers are handled by updating the group.
  2. Google Cloud Armor (Option B): Cloud Armor security policies attach to external load balancer backends, not directly to a Cloud Storage bucket. IP-based geo filtering would also admit anyone located in Canada, not just employees, while locking out employees who travel.
  3. Deploying the bucket to a region in Canada (Option A): The bucket’s location controls where the data is stored, not who can access it.
  4. Giving each employee individual access (Option C): Per-user grants work but require manual maintenance for every personnel change, which is inefficient for a large workforce.

In summary, creating a group of all Canada-based employees and giving the group access to the bucket (Option D) is the most effective and efficient way to satisfy the requirement.
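
As a rough illustration of the group-based approach, here is a minimal sketch using the google-cloud-storage Python client; the bucket name and group address are placeholders:

  from google.cloud import storage

  bucket = storage.Client().get_bucket("restricted-bucket")  # placeholder bucket name

  # Grant read access to a single group; membership is managed in the directory.
  policy = bucket.get_iam_policy(requested_policy_version=3)
  policy.bindings.append(
      {
          "role": "roles/storage.objectViewer",
          "members": {"group:canada-employees@example.com"},  # placeholder group
      }
  )
  bucket.set_iam_policy(policy)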

40
Q

Your organization is moving an application to Google Cloud. As part of that effort, it needs to migrate the application’s working database from another cloud provider to Cloud SQL. The database runs on the MySQL engine. The migration must cause minimal disruption to users. Data must be secured while in transit.
Which should your organization use?
A. BigQuery Data Transfer Service
B. MySQL batch insert
C. Database Migration Service
D. Cloud Composer

A

To migrate the application’s working database from another cloud provider to Cloud SQL with minimal disruption to users and ensuring data security in transit for a MySQL database, the most appropriate option is:

C. Database Migration Service

Explanation:

  1. Database Migration Service (Option C): Google Cloud’s Database Migration Service is specifically designed for migrating databases from various sources, including other cloud providers, to Cloud SQL. It supports MySQL as a source and facilitates a seamless and secure migration with minimal disruption to users. It handles the data migration and ensures data is secured during transit.
  2. BigQuery Data Transfer Service (Option A): BigQuery Data Transfer Service is not intended for migrating a MySQL database from another cloud provider to Cloud SQL. It’s primarily used for loading data into BigQuery for analysis.
  3. MySQL batch insert (Option B): MySQL batch insert is a method for inserting data into a MySQL database efficiently. However, it’s not a migration solution and doesn’t facilitate the secure migration of an entire database from one cloud provider to another.
  4. Cloud Composer (Option D): Cloud Composer is a fully managed workflow orchestration service that allows you to author, schedule, and monitor pipelines. It’s not directly related to database migration, especially for the described use case of migrating a MySQL database.

In summary, for migrating a MySQL database from another cloud provider to Cloud SQL while ensuring data security in transit and minimal disruption to users, Database Migration Service (Option C) is the most suitable choice.

41
Q

Your organization is developing and deploying an application on Google Cloud. Tracking your Google Cloud spending needs to stay as simple as possible.
What should you do to ensure that workloads in the development environment are fully isolated from production workloads?
A. Apply a unique tag to development resources
B. Associate the development resources with their own network
C. Associate the development resources with their own billing account
D. Put the development resources in their own project

A

To ensure that workloads in the development environment are fully isolated from production workloads and to simplify tracking Google Cloud spending, the most effective approach is:

D. Put the development resources in their own project

Explanation:

  1. Putting development resources in their own project (Option D): This is a recommended practice to ensure complete isolation of development workloads from production. Each project has its own set of resources and configurations, allowing for clear separation and simplifying billing and tracking of costs.
  2. Applying a unique tag to development resources (Option A): While tagging can help in tracking and organizing resources, it doesn’t provide the same level of isolation as placing resources in separate projects.
  3. Associating development resources with their own network (Option B): Network isolation is important for security, but placing development resources in a separate project offers a higher level of isolation and simplifies cost tracking.
  4. Associating development resources with their own billing account (Option C): While you can associate resources with different billing accounts, this can be more complex and might not provide the desired isolation and simplicity in tracking spending for development workloads.

In summary, putting the development resources in their own project (Option D) is the most effective way to ensure full isolation of workloads in the development environment from production and simplify tracking Google Cloud spending.

42
Q

Your company is running the majority of its workloads in a co-located data center. The workloads are running on virtual machines (VMs) on top of a hypervisor and use either Linux or Windows server editions. As part of your company’s transformation strategy, you need to modernize workloads as much as possible by adopting cloud-native technologies. You need to migrate the workloads into Google Cloud.
What should you do?
A. Export the VMs into VMDK format, and import them into Compute Engine
B. Export the VMs into VMDK format, and import them into Google Cloud VMware Engine
C. Migrate the workloads using Migrate for Compute Engine
D. Migrate the workloads using Migrate for Anthos

A

To modernize workloads running on virtual machines in a co-located data center and migrate them into Google Cloud, the most appropriate approach is:

D. Migrate the workloads using Migrate for Anthos

Explanation:

  1. Migrate for Anthos (Option D): Migrate for Anthos is designed to modernize and migrate workloads running on virtual machines (VMs) by containerizing them and managing them using Kubernetes. It helps you transform VM-based applications into containerized applications, which is a crucial step toward adopting cloud-native technologies.
  2. Export VMs into VMDK format and import them into Compute Engine (Option A): This method involves converting VMs to VMDK format and importing them into Compute Engine. While this can be a migration strategy, it doesn’t inherently modernize the workloads or take advantage of cloud-native technologies.
  3. Export VMs into VMDK format and import them into Google Cloud VMware Engine (Option B): Google Cloud VMware Engine is a VMware-based service on Google Cloud. While this can facilitate VM migration, it’s not aimed at modernizing workloads or adopting cloud-native technologies.
  4. Migrate the workloads using Migrate for Compute Engine (Option C): Migrate for Compute Engine is used for migrating existing physical or virtual servers to Compute Engine instances. It’s suitable for lift-and-shift migrations but does not inherently modernize the workloads.

In summary, to modernize workloads by adopting cloud-native technologies during migration, Migrate for Anthos (Option D) is the most suitable approach. It helps containerize the workloads and manage them using Kubernetes, aligning with the goal of modernization.

43
Q

Your organization is running all its workloads in a private cloud on top of a hypervisor. Your organization has decided it wants to move to Google Cloud as quickly as possible. Your organization wants minimal changes to the current environment, while using the maximum amount of managed services Google offers.
What should your organization do?
A. Migrate the workloads to Google Cloud VMware Engine
B. Migrate the workloads to Compute Engine
C. Migrate the workloads to Bare Metal Solution
D. Migrate the workloads to Google Kubernetes Engine

A

To move to Google Cloud as quickly as possible with minimal changes to the current environment and utilizing maximum managed services, the most appropriate approach is:

A. Migrate the workloads to Google Cloud VMware Engine

Explanation:

  1. Migrate to Google Cloud VMware Engine (Option A): Google Cloud VMware Engine is a fully-managed VMware service that enables organizations to migrate and run their VMware workloads natively on Google Cloud. It allows minimal changes to the current environment as it provides a VMware-based platform in the cloud, ensuring a smooth migration while leveraging managed services.
  2. Migrate to Compute Engine (Option B): While Compute Engine is a viable option, it requires more changes and configurations compared to migrating to Google Cloud VMware Engine. It is not as aligned with the goal of minimal changes and leveraging maximum managed services.
  3. Migrate to Bare Metal Solution (Option C): Bare Metal Solution is an option for running workloads directly on physical hardware provided by Google Cloud. However, this may not be the quickest migration path and might require substantial changes and configurations.
  4. Migrate to Google Kubernetes Engine (Option D): Migrating workloads to Kubernetes (GKE) could be beneficial for modernization and leveraging managed Kubernetes services, but it may require more changes and modernization efforts compared to moving to Google Cloud VMware Engine.

In summary, migrating workloads to Google Cloud VMware Engine (Option A) is the most suitable approach for a quick transition with minimal changes to the current environment while utilizing maximum managed services offered by Google.

44
Q

Your organization is releasing its first publicly available application in Google Cloud. The application is critical to your business and customers and requires a 2-hour SLA.
How should your organization set up support to minimize costs?
A. Enroll in Premium Support
B. Enroll in Enhanced Support
C. Enroll in Standard Support
D. Enroll in Basic Support

A

To meet a 2-hour response SLA for a business-critical application while minimizing costs, the most suitable option is:

B. Enroll in Enhanced Support

Explanation:

  1. Enhanced Support (Option B): Enhanced Support targets a 1-hour response time for critical-impact (P1) cases, which satisfies a 2-hour SLA at a lower price than Premium Support. This makes it the least expensive tier that still meets the requirement.
  2. Premium Support (Option A): Premium Support offers the fastest response times (15 minutes for P1) and additional services such as a Technical Account Manager, but it costs considerably more than a 2-hour SLA requires.
  3. Standard Support (Option C): Standard Support’s fastest target response time is 4 hours, which does not meet a 2-hour SLA for a business-critical application.
  4. Basic Support (Option D): Basic Support covers billing questions only and offers no technical support response targets, so it cannot meet the requirement.

In summary, Enhanced Support (Option B) is the lowest-cost support tier whose response times satisfy the 2-hour SLA.

45
Q

Your organization offers public mobile apps and websites. You want to migrate to a Google Cloud-based solution for checking and maintaining your users’ usernames and passwords and controlling their access to different resources based on their identity.
Which should your organization choose?
A. VPN tunnels
B. Identity Platform
C. Compute Engine firewall rules
D. Private Google Access

A

To migrate to a Google Cloud-based solution for managing user identities, passwords, and controlling access to resources, the most suitable option is:

B. Identity Platform

Explanation:

  1. Identity Platform (Option B): Google Cloud Identity Platform is a robust, feature-rich solution that allows you to manage user identities, authentication, and authorization in a secure and scalable manner. It provides services like user authentication, multi-factor authentication, and secure access to resources based on identity.
  2. VPN Tunnels (Option A): VPN tunnels are used to securely connect your on-premises network to Google Cloud, but they are not the appropriate solution for managing user identities and access control.
  3. Compute Engine Firewall Rules (Option C): Compute Engine firewall rules are used to control inbound and outbound traffic to instances (virtual machines). While they are essential for network security, they are not designed for managing user identities and access control at the user level.
  4. Private Google Access (Option D): Private Google Access is used to access Google services without exposing data to the public internet. It’s not related to managing user identities and controlling access based on user identity.

In summary, for managing user identities, passwords, and controlling access to resources based on identity for mobile apps and websites, Google Cloud Identity Platform (Option B) is the most suitable choice. It provides a comprehensive solution for identity management and access control in the cloud.

46
Q

Which Google Cloud service or feature lets you build machine learning models using Standard SQL and data in a data warehouse?
A. BigQuery ML
B. TensorFlow
C. AutoML Tables
D. Cloud Bigtable ML

A

The Google Cloud service that allows you to build machine learning models using Standard SQL and data in a data warehouse is:

A. BigQuery ML

Explanation:

  1. BigQuery ML (Option A): BigQuery ML is a fully-managed service within BigQuery that allows you to create machine learning models directly using SQL queries. It’s designed to build models and perform machine learning on data stored in BigQuery, making it convenient for data analysis and model creation using Standard SQL.
  2. TensorFlow (Option B): TensorFlow is an open-source machine learning framework by Google. While it’s a powerful tool for building machine learning models, it typically involves writing code in Python or other languages and is not directly related to SQL-based model building.
  3. AutoML Tables (Option C): AutoML Tables is a service by Google Cloud that automates the process of training machine learning models on structured tabular data. However, it is not directly related to SQL-based model creation.
  4. Cloud Bigtable ML (Option D): Cloud Bigtable is a NoSQL database service. There is no direct service named “Cloud Bigtable ML” for building machine learning models; it is primarily focused on storing and managing large amounts of data.

In summary, to build machine learning models using Standard SQL and data in a data warehouse, BigQuery ML (Option A) is the correct choice.
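
Complementing the training example earlier in the deck, prediction is also plain SQL via ML.PREDICT; the model and table names below are placeholders:

  from google.cloud import bigquery

  client = bigquery.Client()

  # Score new rows with the trained model; output columns are named predicted_<label>.
  rows = client.query("""
      SELECT visitor_id, predicted_made_purchase
      FROM ML.PREDICT(
          MODEL `my-project.web.visitor_model`,
          (SELECT visitor_id, pages_viewed, session_seconds, referrer
           FROM `my-project.web.new_sessions`)
      )
  """).result()
  for row in rows:
      print(row.visitor_id, row.predicted_made_purchase)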

47
Q

Your organization runs an application on virtual machines in Google Cloud. This application processes incoming images. This activity takes hours to create a result for each image. The workload for this application normally stays at a certain baseline level, but at regular intervals it spikes to a much greater workload.
Your organization needs to control the cost to run this application.
What should your organization do?
A. Purchase committed use discounts for the baseline load
B. Purchase committed use discounts for the expected spike load
C. Leverage sustained use discounts for your virtual machines
D. Run the workload on preemptible VM instances

A

To control the cost of running the application that processes images with regular workload spikes, the most appropriate option is:

A. Purchase committed use discounts for the baseline load

Explanation:

  1. Purchase committed use discounts for the baseline load (Option A): This option involves purchasing committed use discounts for the baseline workload, ensuring cost savings for the consistent and expected load. Committed use discounts provide cost predictability and savings for a specified usage commitment.
  2. Purchase committed use discounts for the expected spike load (Option B): This may lead to overcommitting resources and incurring unnecessary costs during non-spike periods. It’s better to optimize for the typical workload and handle the spikes separately.
  3. Leverage sustained use discounts for your virtual machines (Option C): While sustained use discounts offer savings based on consistent usage, they might not be the most cost-effective option for a workload with significant spikes.
  4. Run the workload on preemptible VM instances (Option D): Preemptible VM instances are substantially cheaper but have a limited lifespan and can be terminated at any time. They are not suitable for a workload that takes hours to process an image, as interruptions can lead to data loss or disrupted processing.

In summary, purchasing committed use discounts for the baseline load (Option A) is the most effective approach to control costs for the application processing images, considering the expected workload pattern with regular spikes.

48
Q

Your organization is developing a plan for migrating to Google Cloud.
What is a best practice when initially configuring your Google Cloud environment?
A. Create a project via Google Cloud Console per department in your company
B. Define your resource hierarchy with an organization node on top
C. Create projects based on team members’ requests
D. Make every member of your company the project owner

A

A best practice when initially configuring your Google Cloud environment is:

B. Define your resource hierarchy with an organization node on top

Explanation:

  1. Defining a resource hierarchy with an organization node (Option B): Organizing your resources under a structured hierarchy with an organization node provides a clear and organized approach. It helps in managing access control, policies, and permissions across different projects and teams.
  2. Creating a project via Google Cloud Console per department (Option A): While creating projects for each department can be useful, structuring them under an organization node offers centralized management and control.
  3. Creating projects based on team members’ requests (Option C): Creating projects based solely on requests may lead to a fragmented and unorganized environment. It’s important to have a structured approach to resource provisioning.
  4. Making every member of your company the project owner (Option D): This is not a recommended practice as it can lead to security and access control issues. Assigning appropriate roles and permissions based on responsibilities is crucial for security and effective management.

In summary, defining a resource hierarchy with an organization node on top (Option B) is a best practice to ensure a well-structured and organized Google Cloud environment, enabling effective management and control of resources and access.

49
Q

Your organization runs many workloads in different Google Cloud projects, each linked to the same billing account. Each project’s workload costs can vary from month to month, but the overall combined cost of all projects is relatively stable. Your organization needs to optimize its cost.
What should your organization do?
A. Purchase a commitment per project for each project’s usual minimum
B. Create a billing account per project, and link each project to a different billing account
C. Turn on committed use discount sharing, and create a commitment for the combined usage
D. Move all workloads from all different projects into one single consolidated project

A

To optimize cost for workloads in different Google Cloud projects linked to the same billing account, the most suitable option is:

C. Turn on committed use discount sharing, and create a commitment for the combined usage

Explanation:

  1. Turn on committed use discount sharing (Option C): This option allows you to share committed use discounts across projects, consolidating usage and cost commitments. It optimizes cost by aggregating the commitment for combined usage while still maintaining separate projects.
  2. Purchase a commitment per project (Option A): While this can provide commitments for each project’s minimum usage, it may not be as cost-effective as sharing committed use discounts across projects.
  3. Create a billing account per project (Option B): This approach may lead to administrative complexity and is not suitable for optimizing cost across multiple projects.
  4. Move all workloads into one single consolidated project (Option D): This may not be feasible or practical, especially if there are valid reasons to keep workloads separated into different projects, and it might not align with the organization’s structure or requirements.

In summary, to optimize cost for workloads in different Google Cloud projects with varying costs but stable overall combined cost, turning on committed use discount sharing and creating a commitment for the combined usage (Option C) is the most appropriate approach.

50
Q

How should a multinational organization that is migrating to Google Cloud consider security and privacy regulations to ensure that it is in compliance with global standards?
A. Comply with data security and privacy regulations in each geographical region
B. Comply with regional standards for data security and privacy, because they supersede all international regulations
C. Comply with international standards for data security and privacy, because they supersede all regional regulations
D. Comply with regional data security regulations, because they’re more complex than privacy standards

A

When a multinational organization is migrating to Google Cloud, it should consider security and privacy regulations to ensure compliance with global standards by:

A. Complying with data security and privacy regulations in each geographical region

Explanation:

  1. Comply with data security and privacy regulations in each geographical region (Option A): This approach involves understanding and adhering to the specific security and privacy regulations of each geographical region where the organization operates or stores data. This ensures compliance with the local laws and regulations governing data privacy and security.
  2. Comply with regional standards for data security and privacy (Option B): While regional standards are important, it’s equally crucial to comply with international standards where applicable. Different regions might have their own regulations, but international standards can provide a baseline for compliance across regions.
  3. Comply with international standards for data security and privacy (Option C): International standards provide a broad framework, but they may not cover all nuances and specifics of regional regulations. It’s important to consider and comply with both international and regional standards to ensure comprehensive compliance.
  4. Comply with regional data security regulations (Option D): This option may overlook privacy standards, which are equally important. A comprehensive approach should consider both data security and privacy regulations in each region.

In summary, option A, complying with data security and privacy regulations in each geographical region, is the best approach to ensure compliance with global standards by addressing regional variations in data security and privacy regulations.

51
Q

Your organization wants to optimize its use of Google Cloud’s discounts on virtual machine-based workloads. You plan to use 200 CPUs constantly for the next 3 years, and you forecast that spikes of up to 300 CPUs will occur approximately 30% of the time. What should you choose?
A. 1-year committed use discount for 200 CPUs
B. 3-year committed use discount for 300 CPUs
C. 3-year committed use discount for 200 CPUs
D. Regular pay-as-you-go pricing

A

To optimize the use of Google Cloud’s discounts for this workload, you should choose:

C. 3-year committed use discount for 200 CPUs

Explanation:

  • Commit to the constant baseline. The 200 CPUs run continuously for the full 3 years, so a 3-year commitment, which carries the deepest discount, fits them exactly.
  • Do not commit to the spikes. A commitment is billed whether or not the resources are used. Committing to 300 CPUs (Option B) means paying the committed rate for 100 extra CPUs 100% of the time, even though they are needed only about 30% of the time; running the spike capacity on demand is cheaper.
  • A 1-year commitment (Option A) forgoes the larger 3-year discount for a need that is known to last 3 years, and pay-as-you-go pricing (Option D) gives up the discount on the stable baseline entirely.

In summary, a 3-year committed use discount for 200 CPUs (Option C), with spikes covered at on-demand rates, is the most cost-effective choice.
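
A back-of-the-envelope comparison makes the trade-off visible. The discount rate below is an illustrative assumption, not a published price:

  # Illustrative cost comparison; the discount rate is an assumption, not a quote.
  on_demand = 1.00       # relative hourly cost of 1 CPU on demand
  cud_3yr = 0.45         # assumed 3-year committed-use rate (~55% discount)
  spike_fraction = 0.30  # spikes to 300 CPUs about 30% of the time

  # Option B: commit to 300 CPUs; the commitment is billed around the clock.
  hourly_commit_300 = 300 * cud_3yr                                     # 135.0

  # Option C: commit to the 200-CPU baseline, pay on demand for the spikes.
  hourly_commit_200 = 200 * cud_3yr + 100 * on_demand * spike_fraction  # 120.0

  print(hourly_commit_300, hourly_commit_200)  # Option C wins at these rates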

52
Q

Your organization needs to minimize how much it pays for data traffic from the Google network to the internet. What should your organization do?
A. Choose the Standard network service tier.
B. Choose the Premium network service tier.
C. Deploy Cloud VPN.
D. Deploy Cloud NAT.

A

To minimize what it pays for data traffic from the Google network to the internet, your organization should:

A. Choose the Standard network service tier

Explanation:

  1. Standard network service tier (Option A): Network Service Tiers let you choose how traffic leaves Google’s network. With the Standard tier, egress traffic is delivered over the public internet rather than Google’s global backbone and is billed at lower rates, which directly reduces internet egress costs.
  2. Premium network service tier (Option B): The Premium tier routes traffic over Google’s backbone for better performance and reliability, but at higher egress prices, so it does not minimize cost.
  3. Cloud VPN (Option C): Cloud VPN provides secure connectivity between Google Cloud and an on-premises network; it does not lower internet egress pricing.
  4. Cloud NAT (Option D): Cloud NAT lets instances without external IP addresses initiate connections to the internet, but it changes neither the volume of egress traffic nor the rate at which that traffic is billed.

In summary, choosing the Standard network service tier (Option A) is the most direct way to reduce the cost of data traffic from the Google network to the internet.

53
Q

Your organization wants to migrate your on-premises environment to Google Cloud. The on-premises environment consists of containers and virtual machine instances. Which Google Cloud products can help to migrate the container images and the virtual machine disks?
A. Compute Engine and Filestore
B. Artifact Registry and Cloud Storage
C. Dataflow and BigQuery
D. Pub/Sub and Cloud Storage

A

To migrate container images and virtual machine disks from an on-premises environment to Google Cloud, the appropriate Google Cloud products are:

B. Artifact Registry and Cloud Storage

Explanation:

  1. Artifact Registry (Option B): Artifact Registry is a fully managed service for storing, managing, and securing artifacts. You can use it to store container images, making it a suitable choice for migrating container images.
  2. Cloud Storage (Option B): Cloud Storage is a scalable object storage solution that can be used to store and manage virtual machine disks during migration. It provides secure, durable, and highly available storage for various data types.
    • You can use Cloud Storage to upload and manage virtual machine disks during the migration process.

Options A, C, and D are not the most appropriate choices for migrating container images and virtual machine disks:

  • Compute Engine and Filestore (Option A): While Compute Engine can be used for virtual machine instances, Filestore is not typically used for migrating virtual machine disks or container images.
  • Dataflow and BigQuery (Option C): Dataflow is used for stream and batch processing, and BigQuery is a fully managed, serverless, and highly scalable data warehouse. They are not relevant for migrating container images or virtual machine disks.
  • Pub/Sub and Cloud Storage (Option D): Pub/Sub is a messaging service, and while Cloud Storage is relevant for migrating virtual machine disks, Pub/Sub is not used for migrating container images or virtual machine disks.
54
Q

Your company security team manages access control to production systems using an LDAP directory group.
How is this access control managed in the Google Cloud production project?
A. Assign the proper role to the Service Account in the project’s IAM Policy
B. Grant each user the roles/iam.serviceAccountUser role on a service account that exists in the Google Group.
C. Assign the proper role to the Google Group in the project’s IAM Policy.
D. Create the project in a folder with the same name as the LDAP directory group.

A

To manage access control for Google Cloud production projects using an LDAP directory group, the appropriate approach is:

C. Assign the proper role to the Google Group in the project’s IAM Policy.

Explanation:

  1. Assign the proper role to the Google Group (Option C): In Google Cloud IAM (Identity and Access Management), you can assign roles to Google Groups, including the LDAP directory group. By doing this, all members of the LDAP directory group are granted the specified roles within the Google Cloud project.
    • This ensures that users in the LDAP directory group have the necessary access based on the assigned IAM roles in the Google Cloud project.

Option A, assigning the proper role to the Service Account in the project’s IAM Policy, and Option B, granting each user the roles/iam.serviceAccountUser role on a service account that exists in the Google Group, are not the typical or best methods for managing access control based on an LDAP directory group.

  • Option D, creating the project in a folder with the same name as the LDAP directory group, is not a standard practice and does not directly relate to IAM and access control for Google Cloud projects.
55
Q

Your organization wants to be sure that its expenditures on cloud services are in line with the budget. Which two Google Cloud cost management features help your organization gain greater visibility into its cloud resource costs? (Choose two.)
A. Billing dashboards
B. Resource labels
C. Sustained use discounts
D. Financial governance policies
E. Payments profile

A

To gain greater visibility into Google Cloud resource costs and manage expenditures effectively, your organization should utilize the following Google Cloud cost management features:

A. Billing dashboards: Billing dashboards provide an overview of your spending, usage, and costs, allowing you to analyze and track your expenses, optimize resource usage, and stay within your budget.

B. Resource labels: Resource labels are customizable metadata that can be applied to your cloud resources. By using labels, you can organize and track spending for various resources, enabling better cost allocation and analysis.

Additional options like sustained use discounts (Option C) and payments profile (Option E) do not directly provide visibility into cloud resource costs, while financial governance policies (Option D) are part of the broader cost management strategy but do not provide specific visibility into costs.
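
As a small sketch of resource labels in practice, the following labels a Cloud Storage bucket with the google-cloud-storage Python client so its costs can be broken down in billing reports; the bucket name and label values are placeholders:

  from google.cloud import storage

  bucket = storage.Client().get_bucket("ops-data-bucket")  # placeholder bucket name

  # Labels flow into billing exports, enabling cost breakdowns by team and environment.
  bucket.labels = {"team": "analytics", "environment": "production"}
  bucket.patch()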

56
Q

Your organization needs to ensure that the Google Cloud resources of each of your departments are segregated from one another. Each department has several environments of its own: development, testing, and production. Which strategy should your organization choose?
A. Create a project per department, and create a folder per environment in each project.
B. Create a folder per department, and create a project per environment in each folder.
C. Create a Cloud Identity domain per department, and create a project per environment in each domain.
D. Create a Cloud Identity domain per environment, and create a project per department in each domain.

A

To ensure that Google Cloud resources for each department are segregated and organized according to the given scenario, the most suitable strategy would be:

B. Create a folder per department, and create a project per environment in each folder.

Explanation:

  • Create a folder per department: This approach ensures that each department has its own dedicated folder, enabling segregation and organization of resources specific to that department.
  • Create a project per environment in each folder: Within each department’s folder, create separate projects for each environment (development, testing, and production). This allows further segregation based on the environment and provides flexibility for managing resources associated with different stages of development.

Option A (creating a project per department and a folder per environment in each project) is less efficient as it could result in unnecessary project proliferation and increased administrative overhead.

Option C (creating a Cloud Identity domain per department and a project per environment in each domain) and Option D (creating a Cloud Identity domain per environment and a project per department in each domain) are not suitable for organizing resources per the provided scenario and could lead to complexity and management challenges.
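
As a hedged sketch of this layout, the google-cloud-resource-manager Python client can create one project per environment under a department’s folder; the folder number and project IDs are placeholders:

  from google.cloud import resourcemanager_v3

  client = resourcemanager_v3.ProjectsClient()

  # One project per environment, parented to the department's folder.
  for env in ("dev", "test", "prod"):
      project = resourcemanager_v3.Project(
          project_id=f"finance-{env}",  # placeholder project IDs
          display_name=f"Finance {env}",
          parent="folders/123456789",   # placeholder folder number
      )
      client.create_project(project=project).result()  # wait for the operation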

58
Q

Your organization is defining the resource hierarchy for its new application in Google Cloud. You need separate development and production environments. The production environment will be deployed in Compute Engine in two regions. Which structure should your organization choose?
A. Create a single project for all environments. Use labels to segregate resources by environment.
B. Create a single project for all environments. Use tags to segregate resources by environment.
C. Create one project for the development environment and one project for the production environment.
D. Create two projects for the development environment and two projects for the production environment (one for each region).

A

To manage separate development and production environments, with production deployed in Compute Engine across two regions, the most suitable structure is:

C. Create one project for the development environment and one project for the production environment.

Explanation:

  • Separate Projects per Environment: Projects are the standard boundary for separating environments. One project for development and one for production gives each environment its own IAM policies, quotas, and cost reporting.
  • Multi-Region Within One Project: Projects are global, so a single production project can contain Compute Engine resources in both regions. A separate project per region is unnecessary.

Options A and B (using labels or tags within a single project) do not provide the isolation and independent access control needed to keep development and production apart.

Option D (two projects per environment, one per region) creates needless project sprawl, because regional deployment is handled within a single project. Therefore, it's less suitable.

59
Q

Your organization meant to purchase a 3-year Committed Use Discount, but accidentally purchased a 1-year Committed Use Discount instead. What should your organization do?
A. Contact your financial institution.
B. Contact Trust and Safety.
C. Contact Cloud Billing Support.
D. Contact Technical Support.

A

In the scenario where your organization accidentally purchased a 1-year Committed Use Discount instead of the intended 3-year Committed Use Discount, the appropriate action to rectify the situation would be:

C. Contact Cloud Billing Support.

Explanation:

  • Cloud Billing Support: Cloud Billing Support is the designated support channel to address billing-related concerns, inquiries, and issues. They can assist you with resolving billing discrepancies, updating commitments, and making the necessary adjustments to align with your intended purchase.

Options A, B, and D do not directly relate to addressing or rectifying billing errors or discrepancies. It’s essential to reach out to the appropriate support channel, which is Cloud Billing Support, to correct the mistake and ensure the intended Committed Use Discount is applied correctly.

60
Q

Your organization needs to allow a production job to have access to a BigQuery dataset. The production job is running on a Compute Engine instance that is part of an instance group.
What should be included in the IAM Policy on the BigQuery dataset?
A. The Compute Engine instance group
B. The project that owns the Compute Engine instance
C. The Compute Engine service account
D. The Compute Engine instance

A

To allow a production job running on a Compute Engine instance that is part of an instance group to have access to a BigQuery dataset, you should include:

C. The Compute Engine service account

Explanation:

  • Compute Engine Service Account: Each Compute Engine instance is associated with a service account that represents the identity of the instance. You would grant access to this service account in the IAM policy on the BigQuery dataset to allow the production job running on the instances in the instance group to access the dataset.

Options A and D are not the appropriate entities to grant access to a BigQuery dataset. The instance group (A) or individual instances (D) do not represent the identity needed for access.

Option B (The project that owns the Compute Engine instance) is not the most precise approach. While the project is related to the ownership, granting access to the service account associated with the Compute Engine instances is a more accurate method to allow access to the BigQuery dataset.
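
A minimal sketch of adding that service account to a dataset's access entries with the BigQuery Python client (the dataset name and service account email are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("my-project.prod_dataset")  # hypothetical dataset

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        # The service account attached to the instances in the group.
        entity_id="prod-job@my-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```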

61
Q

Your team is publishing research results and needs to make large amounts of data available to other researchers within the professional community and the public at minimum cost.
How should you host the data?
A. Use a Cloud Storage bucket and enable “Requester Pays.”
B. Use a Cloud Storage bucket and provide Signed URLs for the data files.
C. Use a Cloud Storage bucket and set up a Cloud Interconnect connection to allow access to the data.
D. Host the data on-premises, and set up a Cloud Interconnect connection to allow access to the data.

A

To make large amounts of data available to other researchers within the professional community and the public at minimum cost, the most suitable option is:

A. Use a Cloud Storage bucket and enable “Requester Pays”.

Explanation:

  • Cloud Storage Bucket: Utilizing a Cloud Storage bucket is a cost-effective and scalable solution for hosting large amounts of data.
  • Requester Pays: By enabling “Requester Pays” on the Cloud Storage bucket, the cost of data access is shifted to the requestor, making it financially sustainable for the organization while providing public access to the data.

Option B (providing Signed URLs) adds operational overhead, since time-limited URLs must be generated and distributed for every requester, and the bucket owner still pays for all data egress.

Options C and D (using Cloud Interconnect) are not necessary for making data available to the public and can add unnecessary complexity and costs, which are contrary to the goal of minimizing costs for public data access.
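
Turning the setting on is a small change with the Cloud Storage Python client; a sketch, with a hypothetical bucket name:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("research-results-data")  # hypothetical bucket

bucket.requester_pays = True  # requesters are billed for access charges
bucket.patch()                # persist the updated bucket metadata
```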

62
Q

Your company needs to segment Google Cloud resources used by each team from the others. The teams’ efforts are changing frequently, and you need to reduce operational risk and maintain cost visibility. Which approach does Google recommend?
A. One project per team.
B. One organization per team.
C. One project that contains all of each team’s resources.
D. One top-level folder per team.

A

To segment Google Cloud resources used by each team and reduce operational risk while maintaining cost visibility, Google recommends:

A. One project per team.

Explanation:

  • One Project Per Team: Creating separate projects for each team allows for isolation and segmentation of resources. It helps to minimize operational risk by providing clear boundaries for each team’s work. Additionally, it enables cost visibility, monitoring, and management specific to each team’s project.

While using folders (Option D) can be beneficial for organizing and managing resources, they may not provide the same level of isolation and autonomy as separate projects. Creating one organization per team (Option B) might be too complex and cumbersome, and having one project for all teams (Option C) could result in a lack of resource isolation and clear cost visibility. Therefore, one project per team is a recommended approach to achieve the desired segmentation and manageability.

63
Q

How do Migrate for Compute Engine and Migrate for Anthos differ?
A. Unlike Migrate for Anthos, Migrate for Compute Engine assumes that the migration source is VMware vSphere.
B. Migrate for Compute Engine charges for ingress, but Migrate for Anthos does not.
C. Migrate for Compute Engine is closed source, and Migrate for Anthos is open source.
D. Migrate for Anthos migrates to containers, and Migrate for Compute Engine migrates to virtual machines.

A

The key difference between Migrate for Compute Engine and Migrate for Anthos is:

D. Migrate for Anthos migrates to containers, and Migrate for Compute Engine migrates to virtual machines.

Explanation:

  • Migrate for Anthos: It is a migration tool that helps you modernize and migrate applications to containers running on Google Kubernetes Engine (GKE). It allows for the migration of applications from on-premises or other clouds to GKE, enabling the applications to run in containers and benefit from container orchestration.
  • Migrate for Compute Engine: This tool is designed to migrate virtual machines (VMs) from on-premises environments (such as VMware vSphere) or other clouds directly to Compute Engine in Google Cloud Platform. It focuses on lifting and shifting virtual machines to Google Cloud’s Compute Engine infrastructure.

Options A, B, and C are incorrect:

  • Option A is incorrect because both Migrate for Compute Engine and Migrate for Anthos support migration from different sources, not just VMware vSphere.
  • Option B is incorrect because both solutions may have associated costs, and the cost structure might vary depending on the specific use case, but it’s not a fundamental difference between the two.
  • Option C is incorrect because the openness or closedness of the source code is not a differentiating factor between the two. Both may involve proprietary components or be based on open-source technologies.

64
Q

Your large and frequently changing organization’s user information is stored in an on-premises LDAP database. The database includes user passwords and group and organization membership.
How should your organization provision Google accounts and groups to access Google Cloud resources?
A. Replicate the LDAP infrastructure on Compute Engine
B. Use the Firebase Authentication REST API to create users
C. Use Google Cloud Directory Sync to create users
D. Use the Identity Platform REST API to create users

A

For provisioning Google accounts and groups to access Google Cloud resources based on user information stored in an on-premises LDAP database, the most suitable option is:

C. Use Google Cloud Directory Sync to create users.

Explanation:

  • Google Cloud Directory Sync (GCDS): GCDS is a synchronization tool that synchronizes user and group details from your LDAP server (including on-premises LDAP databases) to your Google Workspace or Google Cloud Identity domain. This allows for automated provisioning and de-provisioning of users and groups in Google Cloud based on the LDAP data. Password synchronization and other attributes can also be managed using GCDS.
  • Option A (Replicate the LDAP infrastructure on Compute Engine) would be feasible but might be overly complex and expensive, especially if only synchronization is required.
  • Option B (Use the Firebase Authentication REST API to create users) is more suitable for Firebase-related authentication, but it might not integrate directly with Google Workspace or Google Cloud Identity for enterprise-level provisioning.
  • Option D (Use the Identity Platform REST API to create users) targets customer identity and access management for your applications’ end users; it is not intended for provisioning a workforce directory from LDAP into Cloud Identity.

Therefore, using GCDS (Option C) is the most appropriate choice for syncing user and group information from an on-premises LDAP database to Google Cloud and enabling access to Google Cloud resources.

65
Q

Your organization recently migrated its compute workloads to Google Cloud. You want these workloads in Google Cloud to privately and securely access your large volume of on-premises data, and you also want to minimize latency.
What should your organization do?
A. Use Storage Transfer Service to securely make your data available to Google Cloud
B. Create a VPC between your on-premises data center and your Google resources
C. Peer your on-premises data center to Google’s Edge Network
D. Use Transfer Appliance to securely make your data available to Google Cloud

A

For securely accessing your large volume of on-premises data from Google Cloud while minimizing latency, the most appropriate option is:

B. Create a VPC between your on-premises data center and your Google resources.

Explanation:

  • VPC (Virtual Private Cloud): Creating a VPC and connecting it to your on-premises data center (for example, over Cloud VPN or Dedicated Interconnect) extends your on-premises network into Google Cloud in a secure and private manner. This enables your Google Cloud workloads to privately and securely access your on-premises data over a dedicated network path, which also helps minimize latency.
  • Option A (Using Storage Transfer Service) is more suitable for securely transferring data from on-premises to Google Cloud Storage but may not provide the desired private and low-latency access.
  • Option C (Peering your on-premises data center to Google’s Edge Network) can improve connectivity and reduce latency but may not provide the required level of security and privacy for accessing on-premises data.
  • Option D (Using Transfer Appliance) is a valid method for securely transferring large volumes of on-premises data to Google Cloud, but it might not provide the real-time and low-latency access needed for active workloads.

Creating a VPC and establishing a secure, private connection between your on-premises data center and Google Cloud (Option B) is the most appropriate solution for accessing your on-premises data securely while minimizing latency.

66
Q

Your organization consists of many teams. Each team has many Google Cloud projects. Your organization wants to simplify the management of identity and access policies for these projects.
How can you group these projects to meet this goal?
A. Group each team’s projects into a separate domain
B. Assign labels based on the virtual machines that are part of each team’s projects
C. Use folders to group each team’s projects
D. Group each team’s projects into a separate organization node

A

To simplify the management of identity and access policies for your organization’s projects, you can group these projects using option:

C. Use folders to group each team’s projects.

Explanation:

  • Folders in Google Cloud: Folders are a way to organize your Google Cloud projects hierarchically, allowing you to group projects based on teams, departments, or any other organizational structure. By organizing projects into folders, you can apply policies and permissions at the folder level, making it easier to manage access and permissions across projects within each folder.
  • Option A (Grouping into a separate domain) is not a typical approach within Google Cloud for organizing projects. Domains usually refer to Google Workspace or Cloud Identity domains used for identity and email management, not for grouping projects.
  • Option B (Assigning labels based on virtual machines) is a method for organizing resources based on certain characteristics, but it is not designed to group and manage projects or set access policies at that level.
  • Option D (Grouping into a separate organization node) isn’t a typical approach within Google Cloud. Google Cloud projects are typically organized within a Google Cloud Organization, which provides the necessary hierarchy and structure to manage projects effectively.

Using folders to group each team’s projects (Option C) is the most appropriate and standard approach to simplify management, apply access policies, and organize projects within Google Cloud.
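
The practical benefit is that a single IAM binding on a folder applies to every project inside it. A sketch with the Resource Manager v3 client, using a hypothetical folder ID and group:

```python
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.FoldersClient()
resource = "folders/123456789012"  # hypothetical team folder

policy = client.get_iam_policy(request={"resource": resource})
# Grant the team's group view access to every project under the folder.
policy.bindings.add(
    role="roles/viewer",
    members=["group:data-team@example.com"],
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```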

67
Q

An organization needs to categorize text-based customer reviews on their website using a pre-trained machine learning model.
Which Google Cloud product or service should the organization use?
A. Cloud Natural Language API
B. Dialogflow
C. Recommendations AI
D. TensorFlow

A

For categorizing text-based customer reviews, the appropriate Google Cloud product is:

A. Cloud Natural Language API

Explanation:
- Cloud Natural Language API: This service is designed for understanding and categorizing text using machine learning. It provides features such as entity analysis, sentiment analysis, content classification, and more. For categorizing customer reviews, you can utilize the content classification feature to classify reviews into relevant categories.

While TensorFlow (Option D) is a powerful open-source machine learning framework, it is not a pre-trained service like the Cloud Natural Language API. To use TensorFlow effectively, you would need to develop, train, and fine-tune your machine learning model, which is not necessary if you can achieve your goal with a pre-trained service like the Cloud Natural Language API.
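
A minimal sketch of classifying a review with the Cloud Natural Language Python client (classification needs a reasonable amount of text, so very short snippets may be rejected):

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

review = (
    "The checkout flow on the website was confusing and the search results "
    "were not relevant, but the delivery arrived quickly and customer "
    "support resolved my billing question within a few minutes."
)
document = language_v1.Document(
    content=review,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# classify_text assigns pre-trained content categories with confidence scores.
response = client.classify_text(document=document)
for category in response.categories:
    print(category.name, round(category.confidence, 2))
```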

68
Q

An organization is planning its cloud expenditure.
What should the organization do to control costs?
A. Consider cloud resource costs as capital expenditure in annual planning.
B. Use only cloud resources; they have no cloud infrastructure costs.
C. Review cloud resource costs frequently because costs depend on usage.
D. Assess cloud resources costs only when SLO is not met by their cloud provider.

A

C. Review cloud resource costs frequently because costs depend on usage.

Explanation:
- Cloud resource costs are variable and can change based on usage, so it’s essential to review them frequently to ensure they align with the organization’s budget and financial plans.
- Regular cost monitoring allows the organization to make adjustments, optimize resource usage, and control costs effectively.
- Treating cloud costs as operational expenditure (OpEx) and reviewing them frequently is a common practice in managing cloud expenses efficiently.

69
Q

An organization is searching for an open-source machine learning platform to build and deploy their own custom machine learning applications using TPUs.
Which Google Cloud product or service should the organization use?
A. TensorFlow
B. BigQuery ML
C. Vision API
D. AutoML Vision

A

A. TensorFlow

Explanation:
- TensorFlow is an open-source machine learning framework developed by the Google Brain team. It supports the use of TPUs (Tensor Processing Units), which are custom hardware accelerators designed by Google for machine learning workloads.
- Using TensorFlow, the organization can build and deploy custom machine learning models, taking advantage of TPUs to accelerate computations and improve performance.
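
A sketch of what TPU-backed training setup looks like in TensorFlow 2.x, assuming the code runs on a TPU VM (use the TPU's name instead of "local" for a separately provisioned TPU node):

```python
import tensorflow as tf

# Discover and initialize the attached TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Anything built inside the strategy scope is replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```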

70
Q

What is an example of unstructured data that organizations can capture from social media?
A. Post comments
B. Tagging
C. Profile picture
D. Location

A

A. Post comments

Explanation:
- Post comments on social media are typically free-form text or messages that users write to express their opinions, thoughts, or reactions.
- Post comments are a common example of unstructured data because they do not follow a predefined data model or schema. Each comment can vary in length, content, language, and sentiment.
- Other options like “B. Tagging,” “C. Profile picture,” and “D. Location” are also data captured from social media, but they are generally considered semi-structured or structured data, depending on the context and how the data is organized and stored.

71
Q

An organization relies on online seasonal sales for the majority of their annual revenue.
Why should the organization use App Engine for their customer app?
A. Automatically adjusts physical inventory in real time
B. Autoscales during peaks in demand
C. Runs maintenance during seasonal sales
D. Recommends the right products to customers

A

B. Autoscales during peaks in demand

Explanation:
- App Engine is a fully managed, serverless platform that automatically scales based on the traffic and demand your application receives. This is particularly valuable during seasonal sales when there might be a significant increase in customer traffic.
- Autoscaling ensures that the application can handle the increased load without any manual intervention, providing a seamless and responsive experience to users during high-demand periods.
- Options like “A. Automatically adjusts physical inventory in real time” and “D. Recommends the right products to customers” pertain to specific functionalities which App Engine, as a platform, does not inherently provide.
- “C. Runs maintenance during seasonal sales” does not align with the autoscaling benefit of App Engine and is not an ideal practice during critical sales periods.

72
Q

An organization is using machine learning to make predictions. One of their datasets mistakenly includes mislabeled data.
How will the prediction be impacted?
A. Increased risk of privacy leaks
B. Increased risk of inaccuracy
C. Decreased model compatibility
D. Decreased model training time

A

B. Increased risk of inaccuracy

Explanation:
- Mislabeled data in a dataset can mislead the machine learning model during the training process.
- Since the model learns from the data it is provided, mislabeled data can introduce noise and incorrect patterns into the model.
- This noise can significantly impact the accuracy and reliability of predictions made by the model.
- Accuracy is a critical aspect in machine learning models, and mislabeled data can cause the model to learn incorrect patterns, leading to inaccurate predictions.

Options like “A. Increased risk of privacy leaks,” “C. Decreased model compatibility,” and “D. Decreased model training time” are not directly related to mislabeled data in a dataset.
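
The effect is easy to demonstrate. A small sketch with scikit-learn on synthetic data, flipping 30% of the training labels, typically shows a clear drop in test accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Simulate labeling mistakes: flip 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
y_noisy = np.where(flip, 1 - y_tr, y_tr)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
noisy = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
print(f"test accuracy - clean labels: {clean:.2f}, mislabeled: {noisy:.2f}")
```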

73
Q

A global organization is developing an application to manage payments and online bank accounts in multiple regions. Each transaction must be handled consistently in their database, and they anticipate almost unlimited growth in the amount of data stored.
Which Google Cloud product should the organization choose?
A. Cloud SQL
B. Cloud Spanner
C. Cloud Storage
D. BigQuery

A

B. Cloud Spanner

Explanation:
- Cloud Spanner is a globally distributed, horizontally scalable, and strongly consistent relational database service.
- It’s designed to provide consistent transactions across multiple regions, making it suitable for managing payments and online bank accounts.
- Cloud Spanner offers the benefits of a traditional relational database while providing global consistency and scalability.
- Given the need for consistent handling of transactions and almost unlimited data growth, Cloud Spanner is the most appropriate choice.

Cloud SQL (option A) is a managed relational database, but its writes are served from a single region and it scales primarily vertically, so it cannot match Cloud Spanner’s global consistency and near-unlimited horizontal scalability.

Cloud Storage (option C) is an object storage service and not ideal for managing structured data and transactions like in a database.

BigQuery (option D) is a fully managed, serverless, highly scalable, and cost-effective data warehouse designed for analytical queries rather than transactional operations. It’s not the best fit for handling payments and online bank accounts with consistent transactions.
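
A sketch of the kind of strongly consistent transaction this implies, using the Cloud Spanner Python client with a hypothetical instance, database, and Accounts table:

```python
from google.cloud import spanner

client = spanner.Client()
database = client.instance("payments-instance").database("bank")  # hypothetical

def transfer(transaction):
    # Both updates commit atomically, or neither does, even across regions.
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 'A'"
    )
    transaction.execute_update(
        "UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 'B'"
    )

database.run_in_transaction(transfer)
```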

74
Q

An organization has servers running mission-critical workloads on-premises around the world. They want to modernize their infrastructure with a multi-cloud architecture.
What benefit could the organization experience?
A. Ability to disable regional network connectivity during cyber attacks
B. Ability to keep backups of their data on-premises in case of failure
C. Full management access to their regional infrastructure
D. Reduced likelihood of system failure during high demand events

A

D. Reduced likelihood of system failure during high demand events

Explanation:
- Adopting a multi-cloud architecture involves distributing workloads and resources across multiple cloud providers or regions, which can lead to increased redundancy and resilience.
- If one cloud provider or region experiences a failure or high demand, the organization can shift traffic or workloads to another cloud provider or region, reducing the likelihood of system failure during peak demand or outages.
- This redundancy and flexibility enhance the organization’s ability to maintain services during high demand events or unexpected failures, contributing to a more resilient and robust infrastructure.

Options A, B, and C don’t align with the typical benefits of adopting a multi-cloud architecture and may not directly relate to the primary advantages of enhanced redundancy and resilience offered by a multi-cloud approach.

75
Q

An organization needs to run frequent updates for their business app.
Why should the organization use Google Kubernetes Engine (GKE)?
A. Customer expectations can be adjusted without using marketing tools
B. Seamless changes can be made without causing any application downtime.
C. GKE handles version control seamlessly and out of the box
D. GKE is well suited for all monolithic applications

A

B. Seamless changes can be made without causing any application downtime.

Explanation:
- Google Kubernetes Engine (GKE) provides a platform for deploying, managing, and scaling containerized applications using Kubernetes.
- Kubernetes, which is the orchestration platform used by GKE, allows for seamless updates and changes to applications without causing downtime through features like rolling updates and canary releases.
- With rolling updates, a new version of an application can be gradually deployed while the old version is phased out, ensuring a smooth transition without interruption of service.
- This capability is crucial for applications that require frequent updates while maintaining continuous availability and a positive user experience.

Options A, C, and D do not specifically address the seamless updates and zero-downtime deployment features provided by Kubernetes and GKE.
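
A sketch of triggering such a rolling update with the Kubernetes Python client, assuming a hypothetical "business-app" Deployment already running on GKE:

```python
from kubernetes import client, config

# Assumes kubeconfig was populated, e.g. by
# `gcloud container clusters get-credentials`.
config.load_kube_config()
apps = client.AppsV1Api()

# Updating the pod template's image starts a rolling update: new pods
# are brought up and old ones removed only as replacements become ready.
apps.patch_namespaced_deployment(
    name="business-app",
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "app", "image": "gcr.io/my-project/business-app:v2"}
                    ]
                }
            }
        }
    },
)
```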

76
Q

An organization wants to use Apigee to manage all their application programming interfaces (APIs).
What will Apigee enable the organization to do?
A. Increase application privacy
B. Measure and track API performance
C. Analyze application development speed
D. Market and sell APIs

A

B. Measure and track API performance

Explanation:
- Apigee is an API management platform provided by Google Cloud that enables organizations to design, secure, analyze, and scale APIs effectively.
- One of the core functionalities of Apigee is to measure and track API performance and usage. It provides analytics and insights into how APIs are being used, which can be essential for optimizing performance, identifying bottlenecks, and making data-driven decisions to enhance the overall API ecosystem.
- While Apigee helps in managing APIs, it doesn’t inherently increase application privacy (A), analyze application development speed (C), or market and sell APIs (D). Its primary focus is on API management, analytics, and optimization.

77
Q

An e-commerce organization is reviewing their cloud data storage.
What type of raw data can they store in a relational database without any processing?
A. Product inventory
B. Product photographs
C. Instructional videos
D. Customer chat history

A

A. Product inventory

Explanation:
- Relational databases are well-suited for structured data, which is typically tabular and follows a specific schema.
- Product inventory information, such as product names, descriptions, prices, quantities, and other relevant details, fits well into a relational database as it can be organized in a structured format.
- On the other hand, options B, C, and D involve unstructured or semi-structured data (images, videos, chat history), which are not typically stored directly in a relational database. For these types of data, organizations often use other storage solutions like object storage (e.g., Cloud Storage) or specialized databases designed for handling such data (e.g., NoSQL databases).

78
Q

A hotel wants to modernize their legacy systems so that customers can make reservations through a mobile app.
What’s the benefit of using an application programming interface (API) to do this?
A. They do not have to develop the end-user application
B. They can deprecate their legacy systems
C. They can transform their systems to be cloud-native
D. They do not have to rewrite the legacy system

A

D. They do not have to rewrite the legacy system

Explanation:
- An API allows the hotel to expose specific functionalities and data from their legacy system without having to overhaul or rewrite the entire system.
- By using an API, the hotel can maintain and utilize their existing systems, making them accessible and usable through modern interfaces such as mobile apps.
- This approach is cost-effective and efficient as it leverages the existing system while providing a modern user experience without the need for a complete system rewrite.

79
Q

An organization wants to digitize and share large volumes of historical text and images.
Why is a public cloud a better option than an on-premises solution?
A. In-house hardware management
B. Provides physical encryption key
C. Cost-effective at scale
D. Optimizes capital expenditure

A

C. Cost-effective at scale

Explanation:
- Public clouds offer a pay-as-you-go model, which means organizations only pay for the resources they use, making it cost-effective, especially at scale.
- Public clouds eliminate the need for significant upfront capital expenditure on hardware and infrastructure, as the cloud provider manages and maintains the infrastructure.
- On-premises solutions would require substantial hardware management, maintenance costs, and upfront capital investments, making the public cloud a more attractive and cost-effective option for storing and sharing large volumes of data.

80
Q

An organization wants to develop an application that can be personalized to user preferences throughout the year.
Why should they build a cloud-native application instead of modernizing their existing on-premises application?
A. Developers can rely on the cloud provider for all source code
B. Developers can launch new features in an agile way
C. IT managers can migrate existing application architecture without needing updates
D. IT managers can accelerate capital expenditure planning

A

B. Developers can launch new features in an agile way

Explanation:
- Building a cloud-native application allows for agility in development and deployment, enabling developers to easily launch new features and updates.
- Cloud-native applications can take advantage of cloud services and resources, providing scalability, reliability, and flexibility that traditional on-premises applications may lack.
- Cloud-native applications are designed to leverage the benefits of the cloud, allowing for rapid development and deployment cycles, making it easier to personalize the application according to user preferences and changing needs.

81
Q

Which technology allows organizations to run multiple computer operating systems on a single piece of physical hardware?
A. Hypervisor
B. Containers
C. Serverless computing
D. Open source

A

A. Hypervisor

Explanation:
- A hypervisor is a technology that allows multiple virtual machines (VMs) or operating systems to run on a single physical machine, each isolated from the others.
- Containers, on the other hand, provide a lightweight and portable way to package, distribute, and run applications and their dependencies, but they share the host operating system’s kernel.
- Serverless computing is a model where the cloud provider manages the infrastructure and automatically scales and allocates resources for executing code in response to events, but it doesn’t involve running multiple operating systems on a single piece of hardware.
- Open source refers to software whose source code is available to the public and can be modified and redistributed, but it’s not directly related to running multiple operating systems on a single piece of hardware.

82
Q

An organization is making a strategic change to customer support in response to feedback. They plan to extend their helpline availability hours.
Why is the organization making this change?
A. Users expect professional expertise
B. Users require personalization
C. Users expect always-on services
D. Users require regional access

A

C. Users expect always-on services

Explanation:
- Extending helpline availability hours implies that the organization is moving towards providing round-the-clock or extended support to users.
- “Users expect always-on services” reflects the understanding that in today’s digital age, users expect services and support to be available and accessible at any time they need it, beyond regular business hours.
- This change aligns with the expectation of 24/7 support, where users can reach out for assistance or support irrespective of the time or day.

83
Q

An organization is migrating their business applications from on-premises to the cloud.
How could this impact their operations and personnel costs?
A. Reduced on-premises infrastructure management costs
B. Increased on-premises hardware maintenance costs
C. Reduced cloud software licensing costs
D. Increased cloud hardware management costs

A

The correct option is:

A. Reduced on-premises infrastructure management costs

Explanation:

  • Migrating business applications from on-premises to the cloud typically reduces on-premises infrastructure management costs, because the organization no longer needs to maintain and manage physical servers, networking equipment, cooling systems, and the staff time that goes with them.
  • Software licensing also changes shape in the cloud: providers typically offer subscription-based pricing that can be more predictable than traditional licensing models. The most direct impact on operations and personnel costs, however, is the reduced burden of managing physical infrastructure.

These changes can lead to operational efficiencies and potentially reduce overall personnel costs related to managing physical infrastructure and software licensing. However, it’s important to note that the actual impact on costs can vary based on the specific nature of the applications, the cloud provider, the migration strategy, and other factors.

84
Q

A retail company stores their product inventory in a legacy system. Often, customers find products on the company’s website and want to purchase them in-store.
However, when they arrive, they discover that the products are out of stock.
How could the company benefit from using an application programming interface (API)?
A. To create personalized product recommendations for customers
B. To optimize their on-premises legacy system stability
C. By manually linking each inventory system to the website on a case-by-case basis
D. By programmatically connecting the inventory system to their website

A

The correct option is:

D. By programmatically connecting the inventory system to their website

Explanation:

  • Utilizing an application programming interface (API) allows the retail company to programmatically connect their product inventory system (legacy system) to their website. This enables real-time or near real-time updates of product availability on the website based on the current status in their inventory system.
  • By connecting the inventory system to the website through an API, the company can provide accurate and up-to-date information to customers regarding product availability. Customers can be informed in real time whether a product is in stock or out of stock, helping to manage their expectations and provide a better shopping experience.
  • This integration helps in avoiding discrepancies where a customer might find a product available on the website but then discovers it’s out of stock when they visit the physical store.

Options A, B, and C are not relevant to addressing the specific issue of keeping the online inventory in sync with the in-store inventory to prevent customer dissatisfaction due to product unavailability.
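
As a toy illustration of what “programmatically connecting” means, a hypothetical Flask endpoint the website could call for live stock, with the legacy inventory lookup stubbed out as a dictionary:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a query against the legacy inventory system.
FAKE_INVENTORY = {"sku-123": 4, "sku-456": 0}

@app.route("/api/inventory/<sku>")
def inventory(sku: str):
    quantity = FAKE_INVENTORY.get(sku, 0)
    return jsonify({"sku": sku, "in_stock": quantity > 0, "quantity": quantity})

if __name__ == "__main__":
    app.run(port=8080)
```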

85
Q

An organization is training a machine learning model to predict extreme weather events in their country.
How should they collect data to maximize prediction accuracy?
A. Collect all weather data evenly across all cities
B. Collect all weather data primarily from at-risk cities
C. Collect extreme weather data evenly across all cities
D. Collect extreme weather data primarily from at-risk cities

A

The correct option is:

D. Collect extreme weather data primarily from at-risk cities

Explanation:

  • To maximize prediction accuracy for extreme weather events, it is important to focus on collecting data where extreme weather events are more likely to occur. This means prioritizing the collection of data from at-risk cities or regions where extreme weather events are known to frequently happen.
  • Simply collecting data evenly across all cities may dilute the relevant data for predicting extreme weather events, as not all cities or regions experience extreme weather at the same frequency or severity.
  • Focusing on extreme weather data allows the machine learning model to learn patterns and correlations specific to extreme events, enhancing its ability to predict these events accurately.
  • Collecting data primarily from at-risk cities will provide a more representative and targeted dataset for training the machine learning model to make accurate predictions about extreme weather events.

Option A is not ideal because collecting all weather data evenly dilutes the signal relevant to extreme events. Option B focuses on at-risk cities but still collects all weather data rather than the extreme events of interest. Option C targets extreme weather data but spreads collection evenly across all cities instead of concentrating on the at-risk areas where those events actually occur.

86
Q

An organization needs to search an application’s source code to identify a potential issue. The application is distributed across multiple containers.
Which Google Cloud product should the organization use?
A. Google Cloud Console
B. Cloud Trace
C. Cloud Monitoring
D. Cloud Logging

A

The correct option is:

D. Cloud Logging

Explanation:

  • Cloud Logging allows organizations to store, search, analyze, and monitor logs generated by applications running on Google Cloud, including applications distributed across multiple containers.
  • In this scenario, the potential issue is tracked down by searching the logs the application emits. Log entries capture errors, warnings, and stack traces that point back to the responsible source code, which makes Cloud Logging the right tool even when the application is distributed across many containers.
  • While Cloud Trace and Cloud Monitoring are valuable for performance tracing and metrics, they are not designed for searching log content. Cloud Logging is focused on storing and querying logs, making it the suitable product for identifying the issue.
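
A sketch of that kind of search with the Cloud Logging Python client, using a hypothetical filter for error-level entries from GKE containers:

```python
from google.cloud import logging

client = logging.Client()

# Pull recent error-level entries emitted by any GKE container.
log_filter = 'resource.type="k8s_container" AND severity>=ERROR'
for entry in client.list_entries(filter_=log_filter, max_results=20):
    print(entry.timestamp, entry.severity, entry.payload)
```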

87
Q

An organization’s web developers and operations personnel use different systems.
How will increasing communication between the teams reduce issues caused by silos?
A. By assigning blame for failures and establishing consequences
B. By combining job role responsibilities to ensure that everyone has shared access
C. By increasing data encryption to strengthen workflows
D. By emphasizing shared ownership of business outcomes

A

The correct option is:

D. By emphasizing shared ownership of business outcomes

Explanation:

  • Increasing communication between different teams, such as web developers and operations personnel, is a fundamental step towards breaking down silos within an organization.
  • By fostering collaboration and encouraging open communication, teams can better understand each other’s roles, challenges, and objectives. This, in turn, encourages a shared sense of ownership and responsibility for the organization’s business outcomes.
  • Shared ownership promotes a culture where teams work together towards common goals rather than working in isolated silos. It ensures that all members understand the bigger picture, the impact of their work on the organization, and the need to collaborate for overall success.
  • Assigning blame (option A) or combining job role responsibilities (option B) may not address the root cause of the issues caused by silos and can potentially create further tensions or inefficiencies. Increased data encryption (option C) does not directly address the collaboration and communication aspects needed to reduce silos. Emphasizing shared ownership (option D) addresses the collaborative mindset required to mitigate the effects of silos and promote a more cohesive and efficient work environment.

88
Q

How does a large hotel chain benefit from storing their customer reservation data in the cloud?
A. On-premises hardware access to transaction data
B. Real-time data transformation at scale within an on-premises database
C. Real-time business transaction accuracy at scale
D. Physical hardware access during peak demand

A

The correct option is:

C. Real-time business transaction accuracy at scale

Explanation:

  • Storing customer reservation data in the cloud allows a large hotel chain to benefit from real-time business transaction accuracy at scale. Cloud-based solutions provide the capability to process and analyze large volumes of transaction data in real time, ensuring that the business transactions related to reservations are accurate and up to date.
  • With cloud storage and processing capabilities, the hotel chain can efficiently handle a high volume of reservations, cancellations, modifications, and other transaction-related activities in real time. This enhances the overall customer experience by providing accurate and timely updates on reservations and availability.
  • On-premises hardware access to transaction data (option A) is typically slower and less scalable compared to cloud solutions. Real-time data transformation at scale within an on-premises database (option B) might not be as efficient or cost-effective as utilizing cloud-based solutions. Physical hardware access during peak demand (option D) is not a direct benefit related to storing customer reservation data in the cloud.

89
Q

An organization wants to migrate legacy applications currently hosted in their data center to the cloud. The current architecture dictates that each application needs its own operating system (OS) instead of sharing an OS.
Which infrastructure solution should they choose?
A. Virtual machines
B. Open source
C. Serverless computing
D. Containers

A

The correct option is:

A. Virtual machines

Explanation:

  • Virtual machines (VMs) would be the suitable infrastructure solution when migrating legacy applications that require each application to have its own operating system (OS). VMs provide isolation at the OS level, allowing each application to run on a dedicated virtualized environment with its own OS.
  • Open source (option B) refers to software whose source code is available for modification or enhancement by anyone, and it is not an infrastructure solution like VMs.
  • Serverless computing (option C) is a cloud computing model where the cloud provider manages the infrastructure and automatically allocates resources as needed, but it’s not focused on providing a dedicated OS for each application.
  • Containers (option D) also provide application isolation, but they share the OS kernel and are not typically used to provide separate OS environments for each application. Containers are lightweight and portable, making them a good choice for modernizing and deploying applications, especially microservices-based architectures. However, if the requirement is to have each application on its own OS, VMs would be the better choice.

90
Q

An organization wants to transform multiple types of structured and unstructured data in the cloud from various sources. The data must be readily accessible for analysis and insights.
Which cloud data storage system should the organization use?
A. Relational database
B. Private data center
C. Data field
D. Data warehouse

A

The correct option is:

D. Data warehouse

Explanation:

  • A data warehouse (option D) is designed for analyzing and querying large volumes of structured and unstructured data from various sources. It allows organizations to consolidate and transform data into a structured format suitable for analytics and reporting.
  • A relational database (option A) is more suitable for structured data and is typically optimized for transactional processing.
  • Private data center (option B) refers to on-premises data centers, which may not be the most efficient option for managing and analyzing data at scale in the cloud.
  • “Data field” (option C) does not represent a common term or recognized solution related to cloud data storage systems.

For transforming and analyzing multiple types of structured and unstructured data from various sources in the cloud, a data warehouse is the appropriate choice to ensure data accessibility and facilitate insights and analytics.

91
Q

An organization wants to use all available data to offer predictive suggestions on their website that improve over time.
Which method should the organization use?
A. Data automation
B. Trends analysis
C. Machine learning
D. Multiple regression

A

The correct option is:

C. Machine learning

Explanation:

  • Machine learning (option C) involves training models on existing data and using those models to make predictions or provide suggestions based on new data. It can improve over time as it learns from more data and refines its predictions.
  • Data automation (option A) refers to the process of automatically managing and handling data, but it may not necessarily involve learning or predictive capabilities.
  • Trends analysis (option B) involves analyzing historical data to identify patterns and trends. While this can inform predictions, it may not adapt or improve over time without a learning component.
  • Multiple regression (option D) is a statistical analysis method that predicts a variable based on the values of two or more variables. However, it does not involve learning from data to improve predictions over time.

For offering predictive suggestions that improve over time, machine learning is the most suitable method as it enables the model to learn from data and enhance its predictions as it’s exposed to more data and experiences.

92
Q
A