Architecture of OpenShift Data Foundation Flashcards

(104 cards)

1
Q

What does Red Hat OpenShift Data Foundation provide?

A

Cloud-native data services for Red Hat OpenShift Container Platform (RHOCP) or any other infrastructure

2
Q

What common storage issues does OpenShift Data Foundation avoid?

A

Lack of portability, deployment burden, vendor lock-in

3
Q

How does OpenShift Data Foundation run?

A

As a Kubernetes service, following the Kubernetes operator model

4
Q

What types of storage does OpenShift Data Foundation provide access to?

A

File, block, and object storage

5
Q

What technology is OpenShift Data Foundation based on?

A

Ceph technology

6
Q

List three types of workloads supported by OpenShift Data Foundation.

A
  • Data at rest (databases, data warehouses)
  • Data in motion (pipelines)
  • Data in action (continuous deployment tools, analytics, AI, ML)
7
Q

What service does OpenShift Data Foundation use for data federation?

A

Multicloud object gateway (MCG) service based on the NooBaa project

8
Q

What is the MCG service backed by?

A

Local storage or cloud-native storage

9
Q

What storage operator does OpenShift Data Foundation use?

A

Rook-Ceph storage operator

10
Q

What are the benefits of OpenShift Data Foundation compared with NFS storage?

A
  • Reduces administrative workload
  • No manual share provisioning or persistent volume (PV) definition
  • No custom security context requirements
  • True single-node access (RWO) mode
  • Reuse of persistent volume claim (PVC) names without data loss
11
Q

What high availability (HA) capability does OpenShift Data Foundation provide?

A

HA across availability zones (AZs) in cloud environments

12
Q

What algorithm does Ceph storage use for replication?

A

Controlled replication under scalable hashing (CRUSH) algorithm

13
Q

What limitations does OpenShift Data Foundation remove compared with EBS storage?

A
  • Support for RWX access mode
  • Regional HA and portability
  • Unlimited volume size
  • Unlimited instance attachments
14
Q

What benefits does OpenShift Data Foundation provide compared with vSphere volumes?

A
  • Support for RWX access mode
  • Regional HA and portability
  • Compatible with volume snapshots
  • No vendor lock-in
15
Q

What does the OpenShift Data Foundation architecture include?

A
  • OpenShift Data Foundation
  • OpenShift Container Storage
  • Multicloud Object Gateway
16
Q

What role does the OpenShift Data Foundation operator play?

A

Meta-operator: it manages the OpenShift Container Storage and Multicloud Object Gateway operators listed in the previous card.

17
Q

What types of storage does OpenShift Container Storage provide?

A

Back-end services for file, block, and object data storage

18
Q

What does the Multicloud Object Gateway provide?

A

Multicloud object gateway (MCG) service and the NooBaa cluster resource

19
Q
A
20
Q

What are the deployment modes available for OpenShift Data Foundation?

A

Internal, External, MCG stand-alone

These modes provide flexibility to choose the best approach for each environment.

21
Q

Describe the Internal deployment mode of OpenShift Data Foundation.

A

All components run entirely within an RHOCP cluster using dynamically provisioned persistent volumes (PVs) specified by administrators in the installation wizard.

This mode is suitable for environments where complete integration within the cluster is necessary.

22
Q

What is the purpose of the External deployment mode in OpenShift Data Foundation?

A

To connect to an existing Red Hat Ceph Storage (RHCS) cluster running outside the RHOCP cluster.

This mode allows leveraging existing storage solutions without creating a new cluster.

23
Q

What does the MCG stand-alone deployment mode provide?

A

Only S3 storage without creating a Ceph cluster.

This mode is ideal for users who need S3 storage functionality without the overhead of a full Ceph setup.
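
For context, applications typically request S3 buckets from MCG through object bucket claims (OBCs, which also appear in the uninstallation cards later in this deck). A minimal sketch with illustrative names, using the MCG storage class listed in card 70:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: example-obc              # illustrative name
      namespace: my-app              # illustrative namespace
    spec:
      generateBucketName: example-bucket
      storageClassName: openshift-storage.noobaa.io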

24
Q

What deployment mode allows OpenShift Data Foundation to operate entirely within an RHOCP cluster?

A

Internal Mode Deployment

Benefits include operator-based deployments and the option to use local disk devices with the local storage operator.

25
Q

What types of infrastructure can OpenShift Data Foundation internal mode be deployed on?

A

Bare metal, Amazon Web Services (AWS), and Google Cloud

This flexibility allows for diverse deployment scenarios.

26
Q

What is the simple deployment approach in OpenShift Data Foundation?

A

Uses local storage devices, portable storage devices, vSphere volumes, or SAN volumes

Services run jointly with applications.

27
Q

When is the simple deployment approach useful?

A
  • Storage requirements are not clear
  • No infrastructure nodes exist
  • Creating an extra node instance is difficult

Commonly applied in environments with limited resources.

28
Q

What is the optimized deployment approach in OpenShift Data Foundation?

A

Uses dedicated RHOCP infrastructure nodes

Recommended when storage requirements are clear.

29
Q

In what scenarios is the optimized deployment approach recommended?

A
  • Storage requirements are clear
  • OpenShift Data Foundation runs on dedicated infrastructure nodes
  • Creating an extra node instance is straightforward

Particularly effective in cloud and virtualized environments.

30
Q

What is the external mode deployment in OpenShift Data Foundation?

A

Uses independent, external Red Hat Ceph Storage services

These services run outside the RHOCP cluster.

31
Q

When is the external mode deployment recommended?

A
  • Storage requirements are significant
  • Multiple RHOCP clusters consume storage services from a common external RHCS cluster
  • A separate team manages the external cluster

Suitable for larger, more complex environments.

32
Q

Can OpenShift Data Foundation deploy multiple storage clusters?

A

Yes, one in internal mode and one in external mode

This allows integration of both internal and external storage solutions.

33
Q

What is disaster recovery (DR) in the context of OpenShift Data Foundation?

A

The ability to recover from catastrophic events that affect application and data availability

Essential for maintaining data integrity.

34
Q

What are the DR architectures provided by OpenShift Data Foundation?

A
  • DR with stretch cluster
  • Regional-DR
  • Metro-DR

Each architecture serves distinct recovery and protection needs.

35
Q

What is the DR with stretch cluster architecture?

A

A single cluster stretched across two locations, with a third location acting as an arbiter

Provides no-data-loss protection with synchronous replication.

36
Q

When should DR with stretch cluster be used?

A

In single-region data centers with specific latency requirements

Ideal for environments where low latency is critical.

37
Q

What is regional disaster recovery (Regional-DR)?

A

Protection against geographical-scale disasters using asynchronous volume-level replication

Minimizes potential data loss.

38
Q

What is required for Regional-DR?

A
  • A hub cluster with Red Hat Advanced Cluster Management (RHACM)
  • Primary and secondary managed clusters with OpenShift Data Foundation

Ensures robust disaster recovery capabilities.

39
Q

What does Metro-DR provide?

A

No-data-loss DR protection with synchronous replication between two RHOCP clusters

Ensures data integrity across data centers.

40
Q

What is required for Metro-DR?

A
  • An external RHCS cluster
  • Two RHOCP clusters with OpenShift Data Foundation
  • A hub cluster with RHACM

Facilitates application and data mobility across clusters.

41
Q

What is the purpose of deploying OpenShift Data Foundation?

A

To deploy Red Hat OpenShift Data Foundation on premises by using the OpenShift web console.

42
Q

What is the first step in deploying OpenShift Data Foundation?

A

Install the local storage operator (optional).

43
Q

What must you do before installing the OpenShift Data Foundation operator?

A

Identify three nodes on which the OpenShift Data Foundation services are installed.

44
Q

What label must nodes have to be used by OpenShift Data Foundation?

A

cluster.ocs.openshift.io/openshift-storage=

45
Q

What is the mandatory step after installing the OpenShift Data Foundation operator?

A

Create the storage system.

46
Q

What is the default volume mode when creating a storage system?

A

Block.

47
Q

What is the minimum disk size that can be set for devices when creating a storage system?

A

100 GB.

48
Q

What option can be selected to dedicate nodes for OpenShift Data Foundation use?

A

Taint nodes.

49
Q

Fill in the blank: The default performance profile for OpenShift Data Foundation is _______.

A

balanced

50
Q

What two encryption levels can be selected in the Security and network stage?

A

Cluster-wide encryption, StorageClass encryption.

51
Q

What is the deployment type when creating a storage system using local storage devices?

A

Full deployment.

52
Q

Which storage platforms can be connected in External Mode?

A

Red Hat Ceph Storage, IBM FlashSystem Storage.

53
Q

What Python script can be downloaded for Red Hat Ceph Storage?

A

A script that retrieves the external cluster details.

54
Q

How can you verify the successful installation of OpenShift Data Foundation?

A

Go to Operators → Installed Operators and check the status of ocs-storagecluster.

55
Q

What should the status of the ocs-storagecluster object be to confirm successful installation?

A

Ready.

56
Q

What namespace should the pods be in to verify the installation of OpenShift Data Foundation?

A

openshift-storage.
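
These checks can also be run from the CLI. A minimal sketch, assuming the default object names from the cards above:

    oc get storagecluster -n openshift-storage
    oc get pods -n openshift-storage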

57
Q

True or False: Labeling nodes is mandatory before installing the OpenShift Data Foundation operator.

A

False.

58
Q

What stage allows you to review available raw capacity and selected nodes?

A

The Capacity and nodes stage.

59
Q

What option in the Advanced section allows for the selection of device types?

A

The Device type dropdown.

60
Q

What is the Maximum disks limit field used for?

A

To indicate the maximum number of PVs that can be created on a node.

61
Q

What must be done if using local volumes as storage?

A

Install the local storage operator from the OperatorHub menu.

62
Q

What is the option for using disks on all nodes when creating a local volume set?

A

Disks on all nodes.

63
Q

What does the Create StorageSystem page show when deploying by using the Multicloud Object Gateway?

A

Similar options to the previous deployment.

64
Q

Where can you view the persistent storage status in the web console?

A

Go to Storage → Data Foundation and click Storage Systems.

65
Q

What sections are displayed on the Overview → Block and File page?

A

The Status and Activity sections.

66
Q

What does the Activity section show in the storage system overview?

A

Events to create and configure resources.

67
Q

What does the Status section display in the storage system overview?

A

The status of the Storage Cluster and Data Resiliency.

68
Q

What must you verify after the installation of OpenShift Data Foundation?

A

That all data services are healthy.

69
Q

How many storage classes are created by OpenShift Data Foundation?

A

Four storage classes.

70
Q

List the storage classes created by OpenShift Data Foundation.

A
  • ocs-storagecluster-ceph-rbd
  • ocs-storagecluster-cephfs
  • openshift-storage.noobaa.io
  • ocs-storagecluster-ceph-rgw
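
As a usage sketch, a persistent volume claim against the block storage class might look like this (the claim name, namespace, and size are hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-rbd-pvc        # hypothetical name
      namespace: my-app            # hypothetical namespace
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd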

71
Q

What differs in the removal method of a storage system installation?

A

It depends on whether internal or external mode was used.

72
Q

What two annotations specify the OpenShift Data Foundation uninstallation behavior?

A
  • uninstall.ocs.openshift.io/cleanup-policy
  • uninstall.ocs.openshift.io/mode

73
Q

What happens if uninstall.ocs.openshift.io/cleanup-policy is set to delete?

A

The Rook-Ceph storage operator cleans up the physical drives and the DataDirHostPath.

74
Q

What happens if uninstall.ocs.openshift.io/mode is set to graceful?

A

The operators pause the uninstallation until PVCs and object bucket claims (OBCs) are removed manually.

75
Q

What command is used to annotate the storage cluster for the cleanup policy?

A

oc annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite
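
The mode annotation follows the same pattern; a sketch using the graceful value from card 74:

    oc annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode="graceful" --overwrite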

76
Q

What must be ensured before uninstalling OpenShift Data Foundation?

A

The cluster is in a healthy state.

77
Q

What should be deleted before uninstalling OpenShift Data Foundation?

A
  • PVCs
  • OBCs
  • Volume snapshots

78
Q

What command is used to remove the storage system?

A

oc delete -n openshift-storage storagesystem --all --wait=true

79
Q

What is the main difference in updating OpenShift Data Foundation for internal and external mode?

A

In internal mode, all components update together; in external mode, the back-end Ceph storage cluster requires a separate update.

80
Q

What should be done to enable automatic updates for OpenShift Data Foundation?

A

Enable automatic updates for minor releases and z-streams.

81
Q

What must be done before updating OpenShift Data Foundation?

A
  • Ensure RHOCP is updated
  • Ensure the cluster is healthy
  • Ensure all pods in the openshift-storage namespace are Running

82
Q

What is the process to change the subscription update channel for a major version update?

A

Update the subscription update channel to match the RHOCP version.

83
Q

How can you verify a successful installation of OpenShift Data Foundation?

A

From the Operators → Installed Operators page, check the version number and status.

84
Q

What is the main objective of deploying OpenShift Data Foundation?

A

Deploy Red Hat OpenShift Data Foundation on premises by using the CLI.

85
Q

Which tools can be used to install OpenShift Data Foundation from the command line?

A

oc or kubectl

86
Q

What are the two main components involved in the installation of OpenShift Data Foundation?

A
  • OpenShift Data Foundation operator
  • Storage cluster object

87
Q

What must be installed if local volumes are attached to the nodes?

A

The local storage operator.

88
Q

What is the purpose of the operator group object in RHOCP?

A

It selects the target namespaces in which the required RBAC access is generated for its member operators.

89
Q

What does the subscription object define in the context of an operator?

A
  • Name and namespace of the operator
  • Catalog that includes the operator data
  • Channel that determines the operator stream
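
A minimal sketch of these two objects for the OpenShift Data Foundation operator (the channel value is an assumption and must match your ODF release; the catalog names are the usual defaults):

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
        - openshift-storage
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: odf-operator
      namespace: openshift-storage
    spec:
      channel: stable-4.12            # assumption: match your ODF release
      name: odf-operator
      source: redhat-operators        # catalog that includes the operator data
      sourceNamespace: openshift-marketplace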

90
Q

What is the minimum requirement for nodes when deploying OpenShift Data Foundation?

A

A minimum of three nodes with the same number of disks, and the same disk size, type, and performance capabilities.

91
Q

How can you save on subscription costs when deploying OpenShift Data Foundation?

A

By deploying on infrastructure nodes.

92
Q

What label indicates that a node is an infrastructure node?

A

node-role.kubernetes.io/infra

93
Q

What effect does applying a taint with NoSchedule have on infrastructure nodes?

A

It prevents a pod from being scheduled on the node unless the pod tolerates the taint.

94
Q

What command is used to add a taint to an infrastructure node for OpenShift Data Foundation?

A

oc adm taint node nodename node.ocs.openshift.io/storage=true:NoSchedule

95
Q

Do OpenShift Data Foundation components require configuration for toleration in the StorageCluster resource?

A

No, they tolerate the OpenShift Data Foundation taint by default.

96
Q

What is necessary for the local storage operator to detect disks on dedicated nodes?

A

A toleration for the OpenShift Data Foundation taint.
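
A toleration matching the taint from card 94 looks like this minimal sketch (placed in the spec of whichever object needs it, as cards 96 and 102 describe):

    tolerations:
      - key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
        effect: NoSchedule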

97
Q

What command is used to add the OpenShift Data Foundation label to nodes with storage devices?

A

oc label node nodename cluster.ocs.openshift.io/openshift-storage=''

98
Q

How can you view the OpenShift Data Foundation labeled nodes?

A

oc get nodes -l cluster.ocs.openshift.io/openshift-storage=''

99
Q

What must be done to install the local storage operator when deploying OpenShift Data Foundation on RHOCP?

A

Create the OperatorGroup and Subscription objects.
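
These follow the same pattern as the OpenShift Data Foundation sketch after card 89, targeting the local storage namespace instead (the namespace and channel values are assumptions):

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: local-operator-group
      namespace: openshift-local-storage   # assumption: typical namespace
    spec:
      targetNamespaces:
        - openshift-local-storage
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: local-storage-operator
      namespace: openshift-local-storage
    spec:
      channel: stable                      # assumption
      name: local-storage-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace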

100
Q

What is the purpose of the LocalVolumeDiscovery custom resource?

A

To discover a list of potentially usable disks on the chosen set of nodes.
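
A minimal sketch of this resource, selecting the labeled storage nodes and tolerating the storage taint (the object name is illustrative):

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeDiscovery
    metadata:
      name: auto-discover-devices        # illustrative name
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
      tolerations:
        - key: node.ocs.openshift.io/storage
          operator: Equal
          value: "true"
          effect: NoSchedule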

101
Q

What does the LocalVolumeSet custom resource do?

A

It filters a set of storage volumes on the selected nodes, groups them, and creates a dedicated storage class.
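
A minimal sketch under the same assumptions as the LocalVolumeDiscovery example (names and device filters are illustrative):

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeSet
    metadata:
      name: local-block                  # illustrative name
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
      storageClassName: local-block      # the dedicated storage class it creates
      volumeMode: Block
      deviceInclusionSpec:
        deviceTypes:
          - disk
      tolerations:
        - key: node.ocs.openshift.io/storage
          operator: Equal
          value: "true"
          effect: NoSchedule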

102
Q

What must be included in the LocalVolumeDiscovery and LocalVolumeSet objects if deploying on dedicated infrastructure nodes?

A

Tolerations for the OpenShift Data Foundation taint, as shown in the sketches above.

103
Q

What is generated when creating the LocalVolumeDiscovery object?

A

The LocalVolumeDiscoveryResults object.

104
Q

How can you retrieve the LocalVolumeDiscoveryResults objects?

A

The text does not specify the retrieval command.