PCSE Professional Cloud Security Engineer Deck Flashcards
(100 cards)
What is the purpose of Google Kubernetes Engine (GKE)?
A: GKE is used to bootstrap Kubernetes, saving time and effort when scaling applications and workloads. It provides a managed environment for running Kubernetes.
What is a “node” in Kubernetes?
A: In Kubernetes, a node represents a computing instance, such as a machine. It’s where containers run. Note that in Google Cloud, a node specifically refers to a virtual machine running in Compute Engine.
What is a “Pod” in Kubernetes?
A: A Pod is the smallest deployable unit in Kubernetes. It’s a wrapper around one or more containers and represents a running process on the cluster.
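A minimal Pod manifest makes this concrete (the Pod name and labels here are illustrative, not from the deck):

```yaml
# pod.yaml — a minimal Pod wrapping a single container
apiVersion: v1
kind: Pod
metadata:
  name: web-pod        # hypothetical name
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` creates the Pod on the cluster.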
When would you have multiple containers in a single Pod?
A: You would have multiple containers in a single Pod when those containers have a hard dependency on each other and need to share networking and storage resources.
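A common example is a sidecar that reads what the main container writes. A sketch (names and images are illustrative) of two containers sharing the Pod’s network namespace and a volume:

```yaml
# Two tightly coupled containers sharing a volume inside one Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar   # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: app
    image: nginx:1.27
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-tailer           # sidecar streams what the app writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```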
What does the kubectl run command do?
A: The kubectl run command creates a Pod running the specified container image. (In kubectl versions before v1.18 it created a Deployment; today you would use kubectl create deployment for that.)
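A quick sketch of both forms (the name and image are illustrative):

```shell
# Create a single Pod running nginx (kubectl v1.18+ behavior)
kubectl run web --image=nginx:1.27 --port=80

# To get the older Deployment-creating behavior, use:
kubectl create deployment web --image=nginx:1.27
```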
What is a “Service” in Kubernetes?
A: A Service is an abstraction that defines a logical set of Pods and a policy by which to access them. It provides a stable endpoint (fixed IP address) for a group of Pods.
Why are Services important for Pods?
A: Services are important because Pod IP addresses can change over time. Services provide a stable IP address, ensuring that other parts of the application or external clients can consistently access the Pods.
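A sketch of a Service selecting Pods by label (the Service name and label are illustrative):

```yaml
# A Service giving all Pods labeled app=web one stable virtual IP
apiVersion: v1
kind: Service
metadata:
  name: web-service    # hypothetical name
spec:
  type: ClusterIP      # stable cluster-internal IP; use LoadBalancer for external access
  selector:
    app: web           # traffic goes to Pods carrying this label
  ports:
  - port: 80           # port the Service exposes
    targetPort: 80     # port the container listens on
```

As Pods come and go, the Service endpoint stays fixed; only the set of backing Pods changes.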
What is the benefit of using a rollout strategy when updating an application?
A: A rollout strategy allows for gradual updates, reducing the risk of downtime or issues when deploying new code. A rolling update creates new Pods before destroying old ones.
How do you update a running application to a new version in Kubernetes?
A: You can update a running application by changing the Deployment configuration file and applying the changes using kubectl apply, or by using kubectl rollout. Kubernetes will then roll out the changes according to the defined update strategy.
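Assuming a Deployment named web whose container is named nginx (both hypothetical), the update-and-rollback flow looks like:

```shell
# Point the Deployment at a new image and watch the rollout progress
kubectl set image deployment/web nginx=nginx:1.28
kubectl rollout status deployment/web

# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/web
```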
What is declarative configuration in Kubernetes?
A: Declarative configuration involves providing a configuration file that specifies the desired state of your application. Kubernetes then works to achieve that state, rather than issuing individual commands.
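A sketch of declarative configuration with a Deployment (names are illustrative):

```yaml
# deployment.yaml — declare the desired state; Kubernetes reconciles toward it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # desired state: three Pods, always
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
```

Run `kubectl apply -f deployment.yaml`; after editing the file (say, bumping replicas or the image tag), rerunning the same command converges the cluster to the new desired state.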
Beyond “Managed Control Plane”: How Does Google’s Handling of the GKE Control Plane in Autopilot Mode Impact Cluster Reliability and Security, and What Trade-offs Exist Compared to Self-Managed Kubernetes?
This question goes beyond the basic description of GKE managing the control plane.
It prompts consideration of:
The specific security and reliability practices Google implements.
The potential limitations or dependencies introduced by relying on a managed service.
The trade-offs between the reduced operational overhead of Autopilot and the loss of granular control.
How Does GKE’s Autopilot Mode Optimize Resource Utilization and Cost Efficiency, and What Underlying Mechanisms Are Employed to Achieve This Compared to Standard Mode?
This question delves into the practical benefits of Autopilot.
It encourages investigation of:
How Autopilot handles node provisioning and scaling based on workload demands.
The cost implications of relying on Google’s resource management.
The cost differences between Standard and Autopilot modes.
A deeper understanding of how Google optimizes resource utilization.
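A sketch of how the two modes are created with gcloud (cluster names, region, and machine type are illustrative):

```shell
# Autopilot: Google provisions and scales nodes; billing is per Pod resource request
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1

# Standard: you choose machine types and node counts, and pay per node
gcloud container clusters create my-standard-cluster \
    --region=us-central1 \
    --machine-type=e2-standard-4 \
    --num-nodes=3
```

The flag surface itself illustrates the trade-off: Autopilot removes the node-sizing decisions entirely, while Standard exposes them.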
What Are the Implications of Google Cloud’s Integrations (Load Balancing, Observability, etc.) for GKE, and How Do These Integrations Enhance or Limit the Flexibility and Portability of Kubernetes Workloads?
This question focuses on the ecosystem surrounding GKE.
It prompts consideration of:
The benefits and drawbacks of tight integration with Google Cloud services.
The potential for vendor lock-in.
How the portability of workloads is affected when relying on cloud-specific features.
How Does GKE’s Node Auto-Repair and Auto-Upgrade Functionality Contribute to Cluster Resilience and Security, and What Are the Best Practices for Managing Potential Disruptions During These Processes?
This question explores the operational aspects of GKE.
It asks you to consider:
The technical details of how these automated processes work.
Strategies for minimizing downtime during upgrades and repairs.
How to effectively monitor these processes.
How Does GKE’s Implementation of Node Pools Facilitate Workload Isolation and Resource Management, and What Are the Key Considerations for Designing and Implementing Effective Node Pool Strategies in Complex Applications?
This question dives into a more advanced GKE feature.
It encourages investigation of:
The use cases for node pools in different application architectures.
Best practices for assigning workloads to specific node pools.
The optimization of resource allocation through node pools.
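A sketch of the mechanics being asked about: creating a dedicated node pool and steering workloads to it with taints and labels (all names and sizes are illustrative):

```shell
# Add a node pool for batch workloads; the taint keeps other Pods off it,
# and the label lets batch Pods target it with a nodeSelector
gcloud container node-pools create batch-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n2-highmem-4 \
    --num-nodes=2 \
    --node-taints=workload=batch:NoSchedule \
    --node-labels=workload=batch
```

Batch Pods would then carry a matching toleration and `nodeSelector: {workload: batch}` to land on this pool.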
Beyond “Principle of Least Privilege”: How Does the Interaction Between IAM Roles and API Scopes Create a Layered Security Model, and What Are the Potential Vulnerabilities That Can Arise From Misconfigurations in This System?
This question pushes beyond the basic security advice.
It encourages exploration of:
The specific ways that IAM roles and API scopes interact and complement each other.
The potential for privilege escalation or unauthorized access due to misconfigurations.
The implications of access scopes being fixed per instance (changing them requires stopping the VM).
How Does the “Default Service Account” and Its Associated “Project Editor” Role Present a Security Risk in Production Environments, and What Strategies Can Organizations Implement to Effectively Migrate to User-Managed Service Accounts?
This question delves into the practical implications of using default settings.
It prompts consideration of:
The specific security vulnerabilities associated with the “Project Editor” role.
The challenges and best practices for transitioning to user-managed service accounts in complex environments.
Automation of the migration process.
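A sketch of the migration steps being discussed (project, account, role, and VM names are all illustrative):

```shell
# 1. Create a user-managed service account with only the roles it needs
gcloud iam service-accounts create app-sa \
    --display-name="app workload identity"

gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# 2. Attach it to the VM in place of the default service account
#    (the instance must be stopped before its service account can change)
gcloud compute instances set-service-account my-vm \
    --zone=us-central1-a \
    --service-account=app-sa@my-project.iam.gserviceaccount.com
```

Scripting these steps across an inventory of VMs is how the migration gets automated in practice.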
How Does the Per-Instance Nature of Access Scopes Impact the Scalability and Manageability of Large-Scale Google Cloud Deployments, and What Alternative Approaches Could Be Considered to Address These Challenges?
This question focuses on the operational aspects of access scopes.
It asks you to consider:
The logistical difficulties of managing access scopes across numerous VM instances.
The potential for inconsistencies or errors when configuring access scopes on a per-instance basis.
Whether any Google Cloud features mitigate this issue.
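One commonly recommended mitigation is to stop using scopes for fine-grained control at all: grant the broad cloud-platform scope at instance creation and let narrowly scoped IAM roles on the attached service account do the real authorization work. A sketch (names are illustrative):

```shell
# Broad scope on the instance; fine-grained control lives in IAM, not in scopes,
# so there is nothing scope-related to manage per instance afterward
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --service-account=app-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```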
How Do Google Cloud’s Authentication and Authorization Mechanisms, Specifically Service Accounts, IAM Roles, and API Scopes, Relate to Broader Security Frameworks and Compliance Requirements, Such as NIST or GDPR?
This question connects the technical details to broader security and compliance concerns.
It prompts consideration of:
How these Google Cloud features align with industry best practices and regulatory requirements.
The role of these features in demonstrating compliance during audits.
The auditing capabilities around these features.
How Does the Evolution From Access Scopes as the Primary Permission Mechanism to IAM Roles Reflect Google Cloud’s Emphasis on Granular Access Control, and What Are the Implications for Legacy Systems and Applications That Still Rely on Access Scopes?
This question explores the historical context and evolution of Google Cloud security features.
It asks you to consider:
The reasons behind the shift towards IAM roles.
The challenges of maintaining compatibility with legacy systems.
The best practices for updating legacy systems.
Beyond “Encrypted Tunnel”: How Does IAP TCP Forwarding’s Reliance on HTTPS and IAM Policies Enhance Security Compared to a Bastion Host, and What Are the Potential Attack Vectors and Mitigation Strategies Specific to IAP’s Architecture?
This question pushes beyond a simple comparison of the two methods.
It encourages exploration of:
The specific security mechanisms provided by IAP’s HTTPS wrapping and IAM integration.
The potential weaknesses of IAP, such as vulnerabilities in the IAM policy or HTTPS implementation.
The security differences between a long-lived bastion host and the on-demand nature of IAP.
The potential attack vectors that are unique to IAP.
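A sketch of what IAP TCP forwarding looks like in practice (VM name and zone are illustrative):

```shell
# SSH to a VM that has no external IP, through IAP's encrypted tunnel
gcloud compute ssh my-vm \
    --zone=us-central1-a \
    --tunnel-through-iap

# Forward an arbitrary TCP port (e.g. RDP on 3389) through IAP
gcloud compute start-iap-tunnel my-vm 3389 \
    --local-host-port=localhost:3389 \
    --zone=us-central1-a
```

For this to work, a firewall rule must allow ingress to the target instances from IAP’s source range 35.235.240.0/20, and the caller needs the IAP-secured Tunnel User IAM role; both are the kind of IAP-specific surface this question asks you to scrutinize.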
How Does the Choice Between a Bastion Host and IAP TCP Forwarding Impact Operational Overhead and Scalability in Large-Scale Google Cloud Environments, and What Factors Should Organizations Consider When Designing a Secure Remote Access Strategy for Instances Without Public IPs?
This question delves into the practical implications of choosing one method over the other.
It prompts consideration of:
The management and maintenance requirements of a bastion host versus IAP.
The scalability of each solution in environments with a large number of instances.
The operational overhead of managing IAM policies versus bastion host security.
The factors that influence which solution is best for a given situation.
Considering the “Defense in Depth” principle, how do the security postures of Bastion Hosts versus IAP TCP Forwarding differ in relation to potential lateral movement threats within a Google Cloud environment, and what are the implications for auditability and incident response?
- This question goes beyond simple security comparisons. It pushes for analysis of how each method affects the overall security architecture, especially in the context of advanced threats.
- It prompts consideration of:
- The potential for an attacker to pivot from a compromised bastion host to other resources.
- How IAP’s IAM-based access control limits lateral movement.
- The differences in logging and auditing capabilities between the two methods.
- How each method affects incident response.
Evaluating the trade-offs between operational complexity and security assurance, how does the implementation of IAP TCP Forwarding impact the network architecture and security policies of a large-scale Google Cloud deployment compared to a traditional Bastion Host model, and what are the long-term implications for infrastructure maintenance and scalability?
- This question delves into the practical and strategic implications of choosing one method over the other.
- It encourages exploration of:
- The changes required to network architecture and security policies when implementing IAP.
- The scalability of each solution in environments with a large number of instances.
- The long-term maintenance costs and challenges associated with each approach.
- How these methods impact the overall agility of the cloud environment.