CIS Test 1 Flashcards

(400 cards)

1
Q

Configure / design for optimal operational efficiency. Performance analysis. Volume management, DB/application layout. ISL design for SANs. Choice of RAID type and LUNs.

A

Performance Management

2
Q

Establishes guidelines for all configurations to achieve high availability based on service level requirements.

A

Availability Management

3
Q

Disadvantage of target-based deduplication

A

Increased network bandwidth and storage capacity requirements.

4
Q

Disadvantage of source-based deduplication

A

Increased overhead on the backup client.

5
Q

Advantage of source-based deduplication

A

Reduced storage capacity and network bandwidth requirements.

6
Q

Type of deduplication where backup client sends native data to the backup device.

A

Target-based deduplication

7
Q

Type of deduplication where backup client sends only new, unique segments across the network to the backup device.

A

Source-based deduplication

8
Q

Two types of deduplication

A

1) Source-based (client) 2) Target-based (storage device)

9
Q

Type of backup usually created from the most recent full backup and all the incremental backups performed thereafter.

A

Synthetic (or constructed) backup

10
Q

Type of backup used in implementations where the production volume resources cannot be exclusively reserved for a backup process for extended periods.

A

Synthetic (or constructed) backup

11
Q

Type of backup which takes longer than an incremental backup, but is faster to restore.

A

Cumulative (or differential) backup

12
Q

Backup which copies the data that has changed since the last full backup.

A

Cumulative (or differential) backup

13
Q

A backup of the complete data on the production volumes at a certain point in time.

A

Full backup

14
Q

Additional copy of data that can be used for restore and recovery purposes.

A

Backup

15
Q

Provides the functionality to recognize and utilize an alternate I/O path to data.

A

Multipathing software

16
Q

Solutions and supporting technologies that enable business continuity and uninterrupted data availability

A

1) Eliminating single points of failure. 2) Multi-pathing software 3) Backup / restore 4) Replication

17
Q

Provides RTO of 72 hours.

A

Restore from backup tapes at a cold site.

18
Q

Provides RTO of 12 hours.

A

Restore from tapes at a hot site.

19
Q

Provides RTO of 4 hours

A

Use of data vault to a hot site.

20
Q

Provides RTO of ~1 hour.

A

Cluster production servers with controller-based disk mirroring.

21
Q

Provides RTO of a few seconds.

A

Cluster production servers with bi-directional mirroring, enabling the applications to run at both sites simultaneously.

22
Q

Recovery Time Objective (RTO)

A

Time within which systems, applications, or functions must be recovered after an outage. Amount of downtime that a business can endure and survive.

23
Q

Recovery Point Objective (RPO)

A

Point in time to which systems and data must be recovered. Amount of data loss that a business can endure.

24
Q

Site where an enterprise’s operations can be moved in the event of disaster and where the DR site infrastructure is up and running all the time.

A

Hot Site

25
Type of site where the IT infrastructure required to support DR is NOT activated.
Cold site
26
Group of servers and other necessary resources, coupled to operate as a single system.
Cluster
27
Coordinated process of restoring systems, data, and infrastructure required to support ongoing business operations in the event of a disaster.
Disaster Recovery (DR)
28
Processes and/or procedures for ensuring continued business operations.
Business Continuity (BC)
29
True or False: Having a single data model and toolset for unified storage enables a consistent management framework across many applications and workloads.
TRUE
30
True or False: In object-level access, data is accessed over a network in terms of self-contained objects, using object IDs.
TRUE
31
Required to effectively pool storage and provide data access at file level, block level, and object level.
Unified Storage
32
Provides consolidated management interface for NAS, SAN, iSCSI, FCoE, and object-based technologies.
Unified Storage
33
What is a driver for object based storage?
Increasing amounts of unstructured data
34
How is an object identified in an object based storage environment?
By a unique object ID.
35
Limit on number of files in object based storage environment
No limit
36
What is used to generate an object ID?
Hashing function
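A minimal Python sketch of content-addressed object IDs: the ID is derived by hashing the object's data, so identical content always maps to the same ID in the flat address space. The choice of SHA-256 here is an illustrative assumption; real object stores vary in hash choice.

```python
# Derive a flat-namespace object ID by hashing the object's content.
import hashlib

def make_object_id(data: bytes) -> str:
    # Content addressing: the ID is a function of the data itself.
    return hashlib.sha256(data).hexdigest()

oid = make_object_id(b"hello world")
# Identical content always yields the same object ID.
assert oid == make_object_id(b"hello world")
```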
37
Stores data in a flat address space
Object based storage
38
Combines data with rich metadata to create an "object".
Object Based Storage
39
Eight Benefits of NAS
1) Supports comprehensive access to information. 2) Provides improved efficiency. 3) Provides improved flexibility. 4) Provides centralized storage. 5) Simplifies management. 6) Enables scalability. 7) Provides high availability - through native clustering. 8) Provides security integration to environment (user authentication and authorization).
40
Two options available for FCoE cabling
Twinax copper - Optical fiber
41
Four Benefits of Fibre Channel over Ethernet (FCoE)
1) Lowers CAPEX 2) Reduces power and cooling requirements 3) Enables consolidation of network infrastructure 4) Lowers TCO
42
Enables distributed FC SAN islands to be transparently interconnected over existing IP-based local, metro, and wide area networks.
FCIP
43
FC WWNs are similar to what is used in IP networking?
MAC addresses
44
Length of a Fibre Channel address
24 bits
45
E_ports connect to what type of ports?
E_ports
46
N_ports connect to what type of ports?
F_ports
47
Fundamental purpose of a SAN
To provide compute access to storage resources
48
High end switches with a higher port count and better fault tolerance capabilities.
Directors
49
Three basic components of a SAN
1) Servers (compute systems) 2) Network infrastructure 3) Storage
50
Protocol used by a FC SAN
SCSI over FC
51
Concept-based storage networking technologies
1) Object-based storage 2) Unified storage
52
Provides interconnection between the CPU and attached devices. Latest version provides throughput of 133 MB/sec.
Peripheral Component Interconnect (PCI)
53
How can communication between compute and storage be accomplished?
By using channel or network technologies.
54
Network protocol examples
iSCSI (SCSI over IP) - FCoE (Fibre Channel over Ethernet) - FC (Fibre Channel)
55
Examples of channel protocols
PCI - IDE/ATA - SATA - SCSI
56
Protocols typically used for compute to compute communication
Ethernet or TCP/IP
57
Why is the LUN masking function implemented on the storage processor / controller?
To ensure that the volume access by servers is controlled appropriately, preventing unauthorized or accidental use in a shared environment.
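The LUN masking idea can be sketched as an access table on the storage controller, mapping each compute system's initiator port to the LUNs it may see. The WWPNs and table layout below are made-up illustrations, not a real controller's API.

```python
# Toy LUN masking table: initiator WWPN -> set of visible LUNs.
masking_table = {
    "10:00:00:00:c9:aa:bb:01": {0, 1},   # host A sees LUN 0 and LUN 1
    "10:00:00:00:c9:aa:bb:02": {2},      # host B sees only LUN 2
}

def can_access(initiator_wwpn: str, lun: int) -> bool:
    # Unknown initiators see nothing, preventing accidental access.
    return lun in masking_table.get(initiator_wwpn, set())

assert can_access("10:00:00:00:c9:aa:bb:01", 1)
assert not can_access("10:00:00:00:c9:aa:bb:02", 0)
```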
58
Where is the LUN masking function implemented?
On the storage processor / controller
59
Process that provides data access control by defining which LUNs a compute system can access.
LUN Masking
60
How is access to LUNs by a compute system controlled?
LUN masking
61
Logical Unit Number (LUN)
A unique ID assigned to each logical unit created from the RAID set.
62
What is done with Logical Units after they are created?
After creation, LUs are then assigned to the compute system for their storage requirements.
63
How are Logical Units created from a RAID set?
By partitioning the available capacity into smaller units (seen as slices of the RAID set).
64
True or False: RAID sets usually have large capacities because they combine the total capacity of individual drives in the set.
TRUE
65
How should a RAID set be created?
The RAID set should be created from drives of the same type, speed, and capacity to ensure maximum usable capacity, reliability, and consistent performance.
66
What determines the availability, capacity, and performance of a RAID set in an ISS?
Two things: 1) the number of drives in the RAID set, and 2) the RAID level.
67
How are physical disks configured in an Intelligent Storage System (ISS)?
In an ISS, physical disks are logically grouped together to form a set, called a RAID set, on which a required RAID level is applied.
68
Key advantage of an Intelligent Storage System
A read request can be serviced DIRECTLY FROM CACHE if the requested data is found in cache.
69
Four Components of an Intelligent Storage System
1) Front-end 2) Cache 3) Back-end 4) Physical Disks
70
Why are Intelligent Storage Systems needed?
Disk drives alone, even with a RAID implementation, could not meet performance requirements of today's applications.
71
RAID array highly optimized for high-performance I/O processing. Has large amounts of cache for improving I/O performance. Has multiple I/O paths. Has an operating environment that provides intelligence for managing cache.
Intelligent Storage System
72
Commonly used RAID Levels
RAID 0 - RAID 1 - Nested - RAID 3 - RAID 5 - RAID 6
73
RAID 6
Distributed parity RAID with dual parity.
74
RAID 5
Parity RAID with distributed parity across all the disks in the set.
75
RAID 3
Parity RAID with DEDICATED parity disk.
76
Nested RAID
Combinations of RAID levels. Example: RAID 1 + RAID 0
77
RAID 1
Disk mirroring
78
RAID 0
Striping with NO FAULT TOLERANCE
79
Disadvantage of Parity
Parity information is generated from data on the disks. As a result, parity is recalculated every time there is a change in data. This recalculation takes time and affects the performance during write operations.
80
What happens if a disk fails in a parity-enabled RAID set?
If one of the disks fails in a RAID set, the value of the failed disk's data is calculated by using the parity information and the data on the surviving disks.
81
What calculates parity?
Calculation of parity is a function of the RAID controller.
82
Parity calculation method
Bitwise XOR.
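A small Python sketch of XOR parity as used in parity RAID: the parity block is the bitwise XOR of the data blocks, so any single missing block can be rebuilt by XOR-ing the parity with the surviving blocks. The block contents are arbitrary illustrative bytes.

```python
# XOR parity: parity = d0 ^ d1 ^ ... ^ dn, and any one lost block
# equals the XOR of the parity with all surviving blocks.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Bitwise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from parity + survivors.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```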
83
Where / how can parity information be stored?
Parity information can be stored: 1) On separate, dedicated disk drives, or 2) Distributed across all the drives in a RAID set.
84
Why is Parity RAID less expensive than Mirroring?
Because parity overhead is only a fraction of the total capacity.
85
A mathematical construct that allows re-creation of the missing data.
Parity
86
RAID method of protecting striped data from disk failure without the cost of mirroring.
Parity
87
What is Nested RAID?
Nested RAID is mirroring implemented with striped RAID, where entire stripes of a disk set are mirrored to stripes on the other disk set.
88
Why is mirroring expensive?
Because mirroring involves duplication of data - the amount of storage capacity required is twice the amount of data being stored.
89
Why does mirroring improve read performance?
Because read requests are serviced by both disks.
90
Why does write performance deteriorate with mirroring?
Because each write request manifests as two writes on the disk drives.
91
RAID technique that improves performance because read requests are serviced by both disks, but results in diminished write performance.
Mirroring
92
RAID technique where data is stored on two different disk drives, yielding two copies of data.
Mirroring
93
Benefit of RAID striping
Allows more data to be processed in a shorter time. Performance increases, when compared to writing / retrieving data to / from one disk at a time.
94
True or False: With RAID striping, all read-write heads work simultaneously.
True.
95
Does RAID striping provide fault tolerance?
No.
96
RAID technique of spreading data across multiple drives in order to use the drives in parallel.
Striping
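The striping technique can be sketched in a few lines: successive fixed-size strips are written round-robin across the drives, which is what lets the read/write heads work in parallel. Drive count and strip size here are arbitrary illustration values.

```python
# Round-robin striping of data across n_drives in fixed-size strips.
def stripe(data: bytes, n_drives: int, strip_size: int) -> list[list[bytes]]:
    drives = [[] for _ in range(n_drives)]
    strips = [data[i:i + strip_size] for i in range(0, len(data), strip_size)]
    for i, s in enumerate(strips):
        drives[i % n_drives].append(s)   # strip i lands on drive i mod n
    return drives

layout = stripe(b"ABCDEFGH", n_drives=2, strip_size=2)
# Drive 0 holds strips 0 and 2; drive 1 holds strips 1 and 3.
assert layout == [[b"AB", b"EF"], [b"CD", b"GH"]]
```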
97
Helps implement RAID techniques of striping, mirroring, and parity.
RAID controller
98
RAID techniques that form the basis for defining various RAID levels and determine the data availability and performance of a RAID set.
Striping, mirroring, and parity
99
How does RAID technology improve storage system performance?
By serving I/Os from multiple disks simultaneously.
100
Benefits of RAID technology
1) Data protection against drive failures 2) Improved storage system performance
101
Three RAID techniques
1) Striping 2) Mirroring 3) Parity
102
Improves storage system performance by serving I/Os from multiple disks simultaneously.
Redundant Array of Independent Disks (RAID)
103
Technology which utilizes multiple disk drives as a set to provide protection, capacity, and/or performance benefits.Overcomes limitations of disk drives.
Redundant Array of Independent Disks (RAID)
104
Provides the ultra-high performance required by mission-critical applications. Very low latency per I/O. Low power requirements. Very high throughput per drive.
Solid State Drives
105
Random read/write access. Uses mechanical parts for data access. Most popular storage device with large storage capacity. Will eventually fail.
Disk Drive
106
Write Once, Read Many (WORM). Limited in capacity and speed. Popular in small, single-user environments.
Optical Disk
107
Low-cost solution for long-term data storage. Sequential data access. Subject to physical wear and tear. Subject to storage / retrieval overheads.
Tape Drives
108
Four Basic Storage Device Options
1) Tape Drive 2) Optical Disk 3) Disk Drive 4) Solid State Drive
109
Types of media that may be used by a storage device
Magnetic Media - Optical Media - Solid State Media
110
Type of storage device used is based on what?
1) Type of data 2) Rate at which it is created and used
111
A resource that stores data persistently for subsequent use.
Storage
112
Enables the OS to recognize a device and to use a standard interface (provided as an API) to access and control devices.
Device driver
113
Special software that permits the OS to interact with a specific device (e.g., printer, mouse, or hard drive).
Device driver
114
Provides a set of OS commands, library subroutines, and other tools that enable the creation and control of logical storage.
Logical Volume Managers (LVM)
115
True or False: The logical storage structures appear contiguous to the OS and applications.
TRUE
116
Have the ability to define logical storage structures that can span multiple physical devices.
Logical Volume Managers (LVM)
117
Introduce a logical layer between the OS and physical storage.
Logical Volume Manager (LVM)
118
Provides basic security for the access and usage of all managed resources.
Operating System (OS)
119
Performs basic storage management tasks while managing other underlying components, such as the file system, volume manager, and device drivers.
Operating System (OS)
120
One of the most important services provided by the OS to the application.
Data access
121
Works between the application and physical components of the compute system
Operating System (OS)
122
Monitors and responds to user actions and the environment.
Operating System (OS)
123
Controls all aspects of a computing environment.
Operating System (OS)
124
Where can files reside?
Within a disk drive, a disk partition, or a logical volume.
125
Hierarchical structure of files
File system
126
Four Logical Components of a Compute System
1) File System 2) Operating System 3) Volume Manager 4) Device Drivers
127
Checkup mechanism between two nodes in a cluster to see whether a node is up and running
Exchange Heartbeat
128
Software that connects the nodes in the cluster and provides a single-system view to the clients that are using the cluster.
Cluster service
129
Method of grouping two or more servers (also known as Nodes) and making them work together as a single system.
Server Clustering
130
Benefits of blades server technology
1) Greatly increased server density. 2) Lower power and cooling costs. 3) Easier server expansion. 4) Simplified datacenter management.
131
Provides increased server performance and availability without increase in size, cost, or complexity.
Blade server
132
Enables the addition of server modules as hot-pluggable components
Blade server
133
Consolidates power- and system-level function into a single integrated chassis.
Blade server
134
Commonly used to deploy compute systems in a CDC
Blade server technology
135
Examples of compute systems
laptops / desktops - blade servers - complex cluster of servers - mainframes
136
Connectivity outlet on a HBA
Port
137
Provide connectivity outlets, known as ports, to connect the compute systems to the storage device.
Host Bus Adapter (HBA)
138
Example of a host controller that connects compute systems to Fibre Channel storage devices.
Host Bus Adapter (HBA)
139
Type of communication handled by basic I/O devices such as keyboard, mouse, etc.
User to compute
140
Type of communication enabled using host controller or host adapter
Compute to compute / storage
141
Type of compute facilitated by I/O devices
1) User to compute 2) Compute to compute / storage
142
Physical components of compute
CPU - Memory - Input/Output (I/O) devices
143
Consists of physical components (hardware devices) and logical components (software and protocols).
Compute
144
Resource that runs applications with the help of underlying computing components.
Compute
145
Examples of DBMSs
MySQL - Oracle RDBMS - SQL Server
146
Collection of computer programs that control the creation, maintenance, and use of databases. Processes an application's request for data. Instructs the OS to retrieve the appropriate data from storage.
Database Management System (DBMS)
147
Structured way to store data in logically organized tables that are interrelated.
Database
148
Three tiers of a typical business application that uses databases.
1) Front-end tier: the application user interface. 2) Middle tier: the computing logic or the application itself. 3) Back-end tier: the underlying databases that organize the data.
149
Four types of business applications
1) Email 2) Enterprise Resource Planning (ERP) 3) Decision Support System (DSS) 4) Data Warehouse (DW)
150
Two types of management applications
1) Resource management 2) Performance tuning
151
Two types of data protection applications
1) Backup 2) Replication
152
Four Types of Applications Commonly Deployed in a CDC
1) Business applications 2) Management applications 3) Data protection applications 4) Security applications
153
Two key I/O characteristics of an application
1) Read intensive vs. write intensive 2) Sequential vs. random
154
How can manageability be achieved?
Through automation and reduction of manual intervention in common tasks.
155
Seven Key Requirements of a Data Center
1. Manageability 2. Availability 3. Performance 4. Flexibility 5. Scalability 6. Security 7. Data Integrity
156
Product which offers CDP and CRR functionality
EMC RecoverPoint
157
VNX based local replication software that creates point-in-time views or point-in-time copies of logical volumes.
EMC SnapView
158
Family of products used for full-volume and pointer-based local replication in Symmetrix storage arrays
EMC TimeFinder
159
Symmetrix software that performs SAN-based remote replication between Symmetrix and qualified storage arrays. Has full or incremental copy capabilities.
EMC Open Replicator
160
VNX-based software that enables storage array-based remote replication
EMC MirrorView
161
Offers a family of technology solutions to implement storage array based remote replication technologies.
EMC Symmetrix Remote Data Facility (SRDF)
162
SRDF
Symmetrix Remote Data Facility
163
Works with EMC Avamar and EMC Data Domain
EMC NetWorker
164
Provides centralized, automated backup and recovery operations across an enterprise. Provides both source-based and target-based deduplication capabilities by integrating with Avamar and Data Domain, respectively.
EMC NetWorker
165
Target-based data deduplication solution with Data Invulnerability Architecture
EMC Data Domain
166
How does Avamar differ from traditional backup and recovery solutions?
Avamar identifies and stores only the unique sub-file data objects.
167
Disk-based backup and recovery solution that provides inherent source-based data deduplication
EMC Avamar
168
Key ability of multi-protocol switches in a SAN environment
They can bridge FC SAN and IP SAN, a feature that enables these devices to provide connectivity between iSCSI initiators and FC storage targets. They can also extend an FC SAN over long distances through IP networks.
169
EMC Connectrix Family
1) Enterprise directors (MDS-9513, DCX) 2) Departmental switches 3) Multi-protocol routers (MP-7800B)
170
Key components of a SAN environment
Fibre Channel switches and directors
171
Provides simplified storage management and provisioning, and options such as additional replication, migration and volume configuration.
Enginuity
172
Operating environment for Symmetrix VMAX
Enginuity
173
Intelligent storage system built on Virtual Matrix architecture
EMC Symmetrix VMAX
174
Proactive strategy that enables an IT organization to effectively align the business value of information with the most appropriate and cost-effective infrastructure, from the time information is created, through its final disposition.
Information Lifecycle Management (ILM)
175
Used to restrict unauthorized HBAs in a SAN environment
Configuration of Zoning
176
Prevents data corruption on the storage array by restricting compute access to a defined set of logical devices.
LUN Masking
177
Key Capacity Management Activities
Trend and Capacity Analysis - Storage Provisioning
178
Types of Reports
Capacity Planning Report - Chargeback Report - Performance Report
179
Ensures adequate availability of resources based on their service level requirements. Manages resource allocation.
Capacity Management
180
Three Levels of Alerts Based on Severity
Information Alert - Warning Alert - Fatal Alert
181
Integral part of monitoring
Alerting
182
Key components of a CDC that should be monitored
Compute systems, network, and storage
183
Four Key Parameters to be monitored
1) Accessibility 2) Capacity 3) Performance 4) Security
184
6 Key Management Activities in a CDC
1) Monitoring and Alerting 2) Reporting 3) Availability Management 4) Capacity Management 5) Performance Management 6) Security Management
185
Dedicated volume on the SAN-attached storage at each site which stores configuration information about the CDP appliance.
Repository Volume
186
Store all data changes made to the primary storage
Journal volumes
187
Intercepts writes from the initiator and splits each write into two copies
Write Splitter
188
Storage Volumes
Repository volume, journal volume, and replication volume.
189
Runs the CDP software and manages all the aspects of local and remote replication.
CDP Appliance
190
All data changes are stored in a location separate from the primary storage. Recovery point objectives are arbitrary and need not be defined in advance of the actual recovery.
Continuous Data Protection (CDP)
191
CDP Elements
CDP Appliance - Storage Volumes - Write Splitters
192
Captures all writes and maintains consistent point in time images.
Continuous Data Protection (CDP)
193
CDP
Continuous Data Protection
194
Changes to data are continuously captured or tracked.
Continuous Data Protection (CDP)
195
Allows replication between heterogeneous vendor storage arrays over SAN/WAN
SAN Based Replication
196
Eliminates disadvantages of two-site replication. Replicates data to two remote sites.
Three Site Replication
197
Combination of local and remote replication. RPO usually on the order of hours. Low bandwidth requirements. Extended-distance solution.
Disk Buffered Storage Array Based Remote Replication
198
Three Modes of Operation for Storage Array Based Remote Replication
1) Synchronous Replication 2) Asynchronous Replication 3) Disk Buffered Replication
199
Replication is performed by the array operating environment
Storage Array based Remote Replication
200
Transactions to the source database are captured in logs, which are periodically transmitted by the source compute system to the remote compute system. The remote compute system applies these logs to the remote database.
Database Log Shipping Remote Replication
201
All writes to the source Volume Group are replicated to the target Volume Group by the LVM. Can be in synchronous or asynchronous mode.
LVM-based remote replication
202
Two Compute Based Remote Replication Methods
1) LVM-based 2) Database Log Shipping
203
Replication is done by using the CPU resources of the compute system, using software that is running on the compute.
Compute-based remote replication
204
Replication model deployed over long distances.
Asynchronous Replication
205
Write is committed to the source and immediately acknowledged to the compute system. Data is buffered at the source and transmitted to the remote replica later. Application response time is unaffected. Needs only average network bandwidth. Non-zero RPO.
Asynchronous Replication
206
Replication model rarely deployed beyond 200 km.
Synchronous replication
207
A write must be committed to the source and remote replica before it is acknowledged to the compute system. Application response time will be extended. Maximum network bandwidth must be provided at all times to minimize impact on response time.
Synchronous replication
208
Replica is behind the source by a finite time - finite RPO.
Asynchronous Replication
209
Replica is identical to source at all times - near zero RPO
Synchronous Replication
210
Two modes of remote replication
1) Synchronous 2) Asynchronous
211
Addresses risks associated with regionally driven outages. Network infrastructure is required between source and target.
Remote Replication
212
Process of creating and maintaining copies of data from a production site to remote site(s).
Remote Replication
213
Targets do not hold actual data, but hold pointers to where the data is located. Target requires only a small fraction of the size of the source volumes. Target devices are accessible at the start of session activation. Uses CoFW technology.
Pointer Based Virtual Replication
214
Two modes of pointer based full volume replication
1) Copy on First Access (deferred) 2) Full Copy Mode
215
Target device is at least as large as the source device.
Pointer-Based Full Volume Replication
216
Provides a full copy of the source data on the target. Target device is accessible for business operations as soon as the replication session has started. Point-in-Time is determined by the time of session activation.
Pointer-Based Full Volume Replication
217
Target is a full physical copy of the source device. Target is attached to the source, and data from the source is copied to the target. Target is unavailable while it is attached. Target device is at least as large as the source device.
Full Volume Mirroring
218
Used to indicate the exact address from which the data is to be read when the data is accessed from the Snap FS.
Blockmap
219
Used to keep track of blocks that are changed on the production FS after creation of the snapshot.
Bitmap
220
Briefly describe a Copy on First Write (CoFW) mechanism
If a write I/O is issued to the production FS for the first time after the creation of a snapshot, the I/O is held and the original data of the production FS corresponding to that location is moved to the snap FS (replica). Then, the new data is written to the production FS. The bitmap and blockmap are updated accordingly. Any subsequent write to the same location will not initiate the CoFW activity.
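The CoFW mechanism described above can be sketched as a toy snapshot class: the first write to a block after snapshot creation moves the original block to the snap store and sets its bitmap bit; later writes to the same block skip the copy, and snapshot reads consult the blockmap. This is an illustrative model, not any vendor's implementation.

```python
# Toy copy-on-first-write (CoFW) snapshot of a block volume.
class CowSnapshot:
    def __init__(self, production: list[bytes]):
        self.production = production
        self.bitmap = [False] * len(production)   # block changed since snap?
        self.snap_store: dict[int, bytes] = {}    # blockmap: index -> old data

    def write(self, idx: int, data: bytes) -> None:
        if not self.bitmap[idx]:                  # first write: preserve original
            self.snap_store[idx] = self.production[idx]
            self.bitmap[idx] = True
        self.production[idx] = data               # later writes skip the copy

    def read_snapshot(self, idx: int) -> bytes:
        # Changed blocks come from the snap store; untouched blocks
        # are read straight from production.
        return self.snap_store.get(idx, self.production[idx])

vol = [b"a", b"b", b"c"]
snap = CowSnapshot(vol)
snap.write(1, b"B")
snap.write(1, b"BB")                              # no second copy for same block
assert snap.read_snapshot(1) == b"b"              # snapshot still sees original
assert vol[1] == b"BB"
```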
221
Pointer-based local replication which uses the Copy on First Write (CoFW) principle. Uses a bitmap and a blockmap. Requires a fraction of the space used by the production file system (FS).
File System Snapshot
222
Form of compute-based replication where each logical partition in a logical volume is mapped to two physical partitions on two different physical volumes. A write to a logical partition is written to both physical partitions.
Logical Volume Manager based mirroring
223
Types of Storage Array based replication techniques
1) Full volume mirroring 2) Pointer based full volume replication 3) Pointer based virtual replication
224
Two type of compute based replication
1) LVM-based mirroring 2) File system snapshot
225
Two Classifications of Local Replication
1) Compute based replication 2) Storage array based replication
226
Process of replicating data within the same array or the same data center.
Local Replication
227
Objective of any continuous replication process
Reduce the RPO to zero.
228
Three Characteristics of a Good Replica
1) Recoverability 2) Restartability 3) Consistency
229
Two Types of Replicas
1) Point-in-Time (PIT) Replica (has non-zero RPO) 2) Continuous Replica (has near-zero RPO)
230
Drives choice of replica type
RPO
231
Primary purpose of replication
To enable users to have the designated data at the right place, in a state appropriate to the recovery needs.
232
Two Classifications of Replication
Local Replication - Remote Replication
233
Drivers for replication
Alternate source for backup - Fast recovery - Decision support - Testing platform - Restart from replica
234
Process of creating an EXACT COPY of data
Replication
235
3 Methods for Implementing Deduplication
1) Single Instance Storage (SIS) 2) Sub-file Deduplication 3) Compression
236
3 Benefits of Deduplication
1) Far less infrastructure is required to hold the backup images, due to the elimination of redundant data. 2) Reduces the amount of redundant content in the daily backup, enabling longer retention periods. 3) Reduces backup window and enables faster restore, enabling creation of daily full backup images.
237
True or False: Deduplication can be source-based (client) or target based (storage device).
TRUE
238
Levels of Deduplication Implementation
File Level - Block / Chunk Level
239
Technology that conserves storage capacity and/or network traffic by eliminating duplicate data.
Deduplication
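A minimal sketch of block/chunk-level deduplication: fixed-size chunks are fingerprinted by hash, each unique chunk is stored once, and repeats become references. Chunk size and the SHA-256 fingerprint are illustrative assumptions.

```python
# Chunk-level deduplication: store each unique chunk once, keyed by hash.
import hashlib

store: dict[str, bytes] = {}

def dedupe(data: bytes, chunk_size: int = 4) -> list[str]:
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # only unseen chunks consume capacity
        refs.append(fp)
    return refs

refs = dedupe(b"AAAABBBBAAAA")        # the "AAAA" chunk appears twice
assert len(refs) == 3 and len(store) == 2
```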
240
Three Backup Technology Options
1) Backup to Tape 2) Backup to Disk 3) Backup to Virtual Tape (VTL)
241
VTL
Virtual Tape Library
242
Restore Operation Steps
1) Backup client initiates the restore. 2) Backup server scans backup catalog to identify data to be restored and the client that will receive the data. 3) Backup server instructs storage node to load backup media. 4) Storage node restores the backup data to the client and sends metadata to the backup server.
243
Backup Operation Steps
1) Backup server initiates a scheduled backup. 2) Backup server instructs storage node to load backup media and instructs clients to send backup data to the storage node. 3) Storage node sends backup data to backup device and media information to the backup server. 4) Backup server updates catalog and records the status.
244
True or False: Typically, the storage node is integrated with the backup server and both are hosted on the same physical platform.
TRUE
245
Contains information about the backup process and backup metadata.
Backup Catalog
246
Stores backup data
Backup device
247
Responsible for writing data to backup device.
Storage Node
248
Manages backup operations and maintains backup catalog
Backup Server
249
Sends backup data to backup server or storage node
Backup Client
250
Four Backup Components
1) Backup Client 2) Backup Server 3) Storage Node 4) Backup Device
251
Enables a full backup copy to be created offline without disrupting the I/O operation on the production volume.
Synthetic (or constructed) full backup
252
How backups can be categorized, based on granularity
1) Full 2) Cumulative 3) Incremental
253
Backup Purposes
Disaster Recovery - Operational Backup - Archival
254
Solutions & Technologies Which Enable Business Continuity
Eliminating Single Points of Failure - Multi-pathing software - Backup - Replication (local and remote)
255
RTO
Recovery Time Objective
256
Goal of a business continuity solution
Ensure the information availability required to conduct vital business operations.
257
Object Level Protocols
REST, SOAP
258
Block Level Storage Protocols
iSCSI, FC, and FCoE
259
File Level Storage Protocols
NFS, CIFS
260
Benefits of Unified Storage
Provides consolidated multi-protocol storage. Simplifies administration. Reduces cost of storage assets, along with power, cooling, and space. Provides a highly scalable architecture.
261
Scenarios appropriate for Object Based storage
Multimedia content-rich Web applications - Archives - Cloud
262
7 Benefits of Object Based Storage
1) Automates & simplifies storage management. 2) Ensures data integrity (!!!). 3) Ensures compliance and auditability. 4) Enables easy data migration. 5) Enables self healing. 6) Facilitates intelligent replication. 7) Allows flexible scalability.
263
Communication standard used by Object Based Storage
HTTP
264
Way of exchanging messages between peers on a distributed network.
Simple Object Access Protocol (SOAP)
265
Used to retrieve information from a web site by reading web pages.
Representational State Transfer (REST)
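A REST interaction is just an HTTP verb applied to a URL. This minimal sketch (the endpoint is hypothetical) builds, but does not send, a GET request for an object using only the standard library.

```python
from urllib.request import Request

# REST addresses a stored resource with a plain HTTP verb + URL; here we
# construct a GET for a hypothetical object endpoint without sending it.
req = Request("http://storage.example.com/objects/report-2024", method="GET")
assert req.get_method() == "GET"
assert req.selector == "/objects/report-2024"   # the resource path
```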
266
Used for communication between peers in a distributed environment. Uses XML framework.
Simple Object Access Protocol (SOAP)
267
Common protocols used in object based communication
SOAP - REST
268
6 NAS Components
1) NAS Head (CPU & Memory) 2) NICs (one or more) 3) Optimized OS for managing NAS functionality 4) NFS & CIFS protocols for file sharing 5) Standard protocols to connect & manage physical disk resources (ATA, SCSI, FC) 6) Storage Array
269
NAS Head Components
CPU - Memory
270
Remote file services protocols used by NAS
CIFS - NFS
271
Dedicated high performance file server with a storage system.
Network Attached Storage (NAS)
272
Storage device connected to a network that provides FILE LEVEL data access to heterogeneous clients.
Network Attached Storage (NAS)
273
Components of FCoE
1) Converged Network Adapter (CNA) 2) FCoE Switch (Ethernet bridge + FCF) 3) Converged Enhanced Ethernet (CEE) (a.k.a. Data Center Ethernet)
274
Part of FCoE switch which encapsulates FC frames into FCoE frames and de-capsulates FCoE frames to FC frames.
Fibre Channel Forwarder (FCF)
275
Contains Ethernet bridge and Fibre Channel Forwarder.
FCoE Switch
276
Combines LAN and SAN traffic over a single 10GigE connection.
FCoE
277
Encapsulates Fibre Channel frames for transport over Enhanced Ethernet networks. Enables consolidation of SAN traffic and Ethernet traffic onto a common 10GigE infrastructure. Consolidates compute and storage communication over a single channel.
Fibre Channel over Ethernet (FCoE)
278
Transports FC block data over an existing IP infrastructure
FCIP
279
Used for data sharing over geographically dispersed SAN.
FCIP
280
IP-based storage networking technology. Combines advantages of Fibre Channel and IP. Creates a virtual FC link that connects devices in different fabrics. Distance extension solution. Tunneling protocol.
Fibre Channel over IP (FCIP)
281
Device that issues commands to a target device to perform a task.
Initiator
282
iSCSI topology with the following attributes: 1) Translates iSCSI/IP to FC, 2) iSCSI initiator configured with bridge as target, 3) Bridge acts as a virtual FC initiator, and 4) Less common iSCSI topology
Bridged iSCSI Topology
283
iSCSI topology with: 1) No FC components, and 2) iSCSI initiators connect directly to the storage array.
Native iSCSI Topology
284
Two iSCSI topologies
1) Native 2) Bridged
285
Used extensively in Disaster Recovery (DR) implementations, where data is duplicated to an alternate site.
FCIP
286
Widely adopted for connecting compute systems to storage because it is relatively inexpensive and easy to implement, especially in environments where an FC SAN does not exist.
iSCSI
287
Uses a pair of bridges (FCIP gateways) communicating over TCP/IP
Fibre Channel over IP
288
Three network adapters used in an iSCSI environment
1) Ethernet NIC card 2) TCP/IP Offload Engine (TOE) card 3) iSCSI HBA
289
Compute-based encapsulation of SCSI I/O over IP using an Ethernet NIC, TCP/IP Offload Engine, or iSCSI HBA in the compute system.
iSCSI
290
Two primary protocols that leverage IP as the transport mechanism for block level data transmission
1) iSCSI (SCSI over IP) 2) FCIP
291
A technology that provides transfer of block level data over an IP network
IP-SAN
292
Zoning that enables a specific port to be tied to the WWN of a node. Combines qualities of both WWN zoning and port zoning.
Mixed zoning
293
Also known as soft zoning. Allows the FC SAN to be re-cabled without reconfiguring the zone information. Uses World Wide Names to define zones.
WWN Zoning
294
Zoning that uses the FC addresses of the physical ports to define zones. Access to data is determined by the physical switch port to which a node is connected. Also called hard zoning.
Port zoning
295
3 type of zoning
1) Port zoning 2) WWN zoning 3) Mixed zoning
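The three zoning checks above can be contrasted in a small Python sketch (the data model is hypothetical): port zoning admits by physical switch port, WWN zoning admits by node WWN regardless of cabling, and mixed zoning ties a specific WWN to a specific port.

```python
# Sketch of the three zoning checks: a node is admitted to a zone by its
# switch port (hard/port zoning), its WWN (soft/WWN zoning), or both (mixed).
def in_zone(zone, port=None, wwn=None):
    kind = zone["type"]
    if kind == "port":
        return port in zone["ports"]
    if kind == "wwn":
        return wwn in zone["wwns"]
    # mixed zoning: a specific WWN must appear on a specific port
    return (port, wwn) in zone["pairs"]

port_zone = {"type": "port", "ports": {(1, 4)}}              # (domain, port)
wwn_zone = {"type": "wwn", "wwns": {"10:00:00:00:c9:22:fc:01"}}
mixed_zone = {"type": "mixed",
              "pairs": {((1, 4), "10:00:00:00:c9:22:fc:01")}}

assert in_zone(port_zone, port=(1, 4))
assert in_zone(wwn_zone, wwn="10:00:00:00:c9:22:fc:01")      # survives re-cabling
assert not in_zone(mixed_zone, port=(1, 5), wwn="10:00:00:00:c9:22:fc:01")
```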
296
LUN Masking is done at what level?
Array Level
297
Zoning takes place at what level?
Fabric level
298
In an FC environment, used to control server access to storage.
Zoning in conjunction with LUN masking
299
A Fibre Channel switch function that enables nodes within the fabric to be logically segmented into groups that can communicate with each other.
Zoning
300
Used to identify a group of switch ports used to connect nodes.
Area ID
301
Unique identification number provided to each switch in an FC SAN.
Domain ID
302
Unique 64-bit identifier. Static to the port, similar to a NIC's MAC address. Used to physically identify ports or nodes within an FC SAN.
World Wide Name
303
Two types of addresses used for communication in an FC SAN environment
1) Fibre Channel Address 2) World Wide Name
304
Three Components of a Fibre Channel Address
Domain ID (switch) - Area ID (port group) - Port ID (port)
305
Used to communicate between nodes within an FC SAN. Similar in functionality to an IP address on NICs. 24-bit address, dynamically assigned.
Fibre Channel Address
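The 24-bit address layout from the cards above (Domain ID, Area ID, Port ID, 8 bits each) can be parsed with simple bit shifts; the function name and example address are illustrative only.

```python
def parse_fc_address(addr: int) -> dict:
    """Split a 24-bit Fibre Channel address into its three 8-bit fields:
    Domain ID (switch), Area ID (port group), and Port ID (port)."""
    assert 0 <= addr <= 0xFFFFFF, "FC addresses are 24 bits"
    return {
        "domain_id": (addr >> 16) & 0xFF,
        "area_id": (addr >> 8) & 0xFF,
        "port_id": addr & 0xFF,
    }

fields = parse_fc_address(0x0A1B2C)   # hypothetical example address
assert fields == {"domain_id": 0x0A, "area_id": 0x1B, "port_id": 0x2C}
```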
306
A generic port that can operate as an E_port or an F_port and determines its functionality automatically during initialization.
G_port
307
A port on an FC switch that connects to an N_port
F_port (a.k.a. fabric port)
308
An FC port that forms the connection between two FC switches
E_port (a.k.a. expansion port)
309
Typically, a compute system port (HBA) or a storage array port that is connected to a switch in a switched fabric.
N_port (node port)
310
An end point in an FC fabric
N_port
311
In a switched fabric, the link between any two switches
Inter Switch Link (ISL)
312
One or more interconnected FC switches through which multiple SAN nodes can communicate.
Fibre Channel Fabric
313
Six Components of an FC SAN
1) Node Ports 2) Cables 3) Connectors 4) Interconnecting Devices 5) Storage arrays 6) SAN Management Software
314
Base protocol of FC SAN
SCSI (modified)
315
SCSI data encapsulated and transported within Fibre Channel frames
Fibre Channel Protocol (FCP)
316
FC SAN Scaling Limit
15 million devices
317
Four Benefits of FC SAN
1) Enables storage consolidation and sharing. 2) Enables centralized management. 3) Provides scalability and high performance. 4) Reduces storage and administration cost.
318
Dedicated high speed network of compute systems and shared storage devices which uses the SCSI over FC protocol. Provides block level data access.
FC SAN
319
Protocol-based storage networking technologies
Fibre Channel SAN (FC SAN) - Network Attached Storage (NAS) - Internet Protocol SAN (IP SAN) - Fibre Channel over Ethernet (FCoE)
320
Concept-based storage network technologies
Object Based Storage - Unified Storage
321
Six Storage Networking Technologies
1) Fibre Channel SAN (FC SAN) 2) Network Attached Storage (NAS) 3) Internet Protocol SAN (IP SAN) 4) Fibre Channel over Ethernet (FCoE) 5) Object Based Storage 6) Unified Storage
322
DAS Challenges
1) Limited scalability 2) Limited ability to share resources (islands of over and under utilized storage resources)
323
Benefits of DAS
1) Simple to deploy and ideal for local data provisioning 2) Low capital expense and less complexity.
324
Two classifications of DAS
1) Internal 2) External (w.r.t. the compute system)
325
Internal or external storage device, which connects directly to a compute system.
Direct Attached Storage (DAS)
326
Physical drive addressing that refers to specific locations on a drive.
Cylinder, Head, and Sector (CHS)
327
Simplifies addressing by using a linear address to access physical blocks of data.
Logical Block Addressing (LBA)
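The relationship between the two addressing schemes is the standard CHS-to-LBA conversion; this sketch uses an illustrative 16-head, 63-sectors-per-track geometry (sectors are 1-based in CHS, so the drive's first sector CHS (0, 0, 1) maps to LBA 0).

```python
def chs_to_lba(c: int, h: int, s: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Standard CHS-to-LBA conversion: linearize cylinder, head, and
    (1-based) sector into a single logical block address."""
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

# First sector of the drive: CHS (0, 0, 1) -> LBA 0.
assert chs_to_lba(0, 0, 1, 16, 63) == 0
# One full cylinder later (16 heads x 63 sectors/track = 1008 sectors).
assert chs_to_lba(1, 0, 1, 16, 63) == 1008
```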
328
CHS
Cylinder, Head, and Sector
329
LBA
Logical Block Addressing
330
True or False: File-level access has higher overhead than block-level access.
TRUE
331
In file level access where is the file system created?
On a network or at the storage
332
Two ways data can be accessed over a network
1) File Level 2) Block Level
333
In block-level access, where is the file system created?
On a compute system.
334
Protocols used for accessing data from an external storage device (or subsystems).
Fibre Channel - iSCSI
335
3 Key Infrastructure Components
1) OS (or file system) 2) Connectivity (network) 3) Storage
336
In block level access, where does the file system reside?
On the Compute
337
In file level access, where does the file system reside?
With the storage
338
Serial version of SCSI
Serial Attached SCSI (SAS)
339
Preferred storage connectivity option for high-end environments. Improved performance and scalability, but higher cost, compared to ATA.
Small Computer System Interface (SCSI)
340
Popular protocol to connect to disk drives. Supports 16-bit parallel transmission. Serial version is called Serial ATA (SATA). Both versions offer good performance at a relatively low cost.
IDE/ATA
341
A multifunction adapter which consolidates the functionality of a NIC card AND a Fibre Channel HBA onto a single adapter.
Converged Network Adapter (CNA)
342
ASIC board that performs I/O interface functions between the host and the storage, relieving the CPU from additional I/O processing workload.
Host Bus Adapter (HBA)
343
More flexible than channel technologies
Network technologies
344
Network Technology Characteristics
1) Compute system and peripheral devices are connected over a network. 2) High protocol overhead due to network connection. 3) Support transmission over long distances. 4) Protocol examples: iSCSI (SCSI over IP), FCoE, and FC
345
Characteristics of channel technology
1) Compute system and peripheral devices are connected through channel. 2) Low protocol overhead due to tight coupling. 3) Supports transmission only over short distances. 4) Protocol examples: PCI, IDE/ATA, SCSI, etc.
346
Five core elements of a CDC
1) Application 2) Database Management System (DBMS) 3) Compute 4) Storage 5) Network
347
A facility containing physical IT resources including compute, network, and storage.
Classic Data Center
348
Which key requirement of a data center is violated when an authorized storage administrator is not able to remotely login to a server in the data center?a. Scalabilityb. Flexibilityc. Securityd. Availability
d. Availability
349
Which are the key parameters that determine the performance and availability of a RAID set?
a. Number of drives in a RAID set and RAID level
350
Which key requirement of a data center refers to the ability of IT to support new business initiatives dynamically?a. Manageabilityb. Availabilityc. Capacityd. Flexibility
d. Flexibility
351
Which statement is true about FC SAN?a. Provides higher scalability as compared to DAS.b. Has limited ability to share resources.c. Enables object level access to data.d. Supports a maximum of 256 nodes.
a. Provides higher scalability as compared to DAS.
352
Which is a benefit of RAID?a. Ensures data integrity in a RAID set.b. Prevents disk failure in a RAID set.c. Improves storage system performance.d. Simplifies distribution of parity across mirrored disks.
c. Improves storage system performance.
353
Width of a PCI bus
32 bits or 64 bits
354
How can RPO by minimized in a PIT replication scenario?
By making periodic PIT replicas
355
RPO
Recovery Point Objective
356
FC-AL
Fibre Channel Arbitrated Loop
357
Compute Before Virtualization
Runs single OS per machine - Couples software and hardware tightly - May have conflicts when multiple applications run on the same machine - Underutilized resources - Inflexible and expensive
358
Compute After Virtualization
Runs multiple OSs per machine concurrently - Makes OS and applications hardware independent - Isolates VMs from each other, hence no conflicts - Improves resource utilization - Offers flexible infrastructure at low cost
359
P2V Conversion Considerations
1) Some hardware-dependent drivers and mapped drive letters might not be preserved. 2) Source machine configuration remains unchanged. 3) Source and target machines will have the same identities. 4) Applications that depend on characteristics of the hardware may not work.
360
With transparent page sharing, what happens when attempts to write on the shared page are made?
1) Generates a minor page fault. 2) Creates a private copy after write and remaps the memory.
361
A VM's paging file, which backs up the VM's RAM contents. Exists ONLY when the VM is running.
Virtual Swap File
362
Typical utilization for non-virtualized compute systems
15% to 20% utilization
363
Phases in the transformation to a VDC
1) Classic Data Center 2) Virtualize Compute 3) Virtualize Storage 4) Virtualize Network 5) Virtualize Data Center
364
Converts physical machines to VMs. Supports conversion of VMs created by third-party software to VMware VMs. Lets users convert Windows and Linux-based physical machines to VMware virtual machines. Converts VMs between VMware platforms.
VMware vCenter Converter
365
VMware vSphere Modules and Plug-ins
VMware Distributed Resource Scheduler (DRS) - VMware High Availability (HA) - VMware Data Recovery
366
Key vSphere Components
VMware ESXi - VMware vCenter Server - VMware vCenter Client - VMware vStorage VMFS
367
Infrastructure virtualization suite that provides virtualization, resource mgmt & optimization, HA, and operational automation.
VMware vSphere
368
Cold Conversion Process
1. Boot source machine from the converter boot CD and use the converter software to define conversion parameters and start the conversion. 2. Converter application creates a new VM on the destination physical machine. 3. Converter app copies volumes from the source machine to the destination machine. 4. Converter app installs the required drivers to allow the OS to boot in a VM and personalizes the VM. 5. VM is ready to run on the destination server.
369
Hot Conversion Process Steps
1. Converter server prepares the source machine for the conversion by installing the agent on the source physical machine. 2. Agent takes a snapshot of the source volume. 3. Converter server creates a VM on the destination machine. 4. Agent clones the physical disk of the source machine (using snapshot) to the virtual disk of the destination virtual machine. 5. Agent synchronizes the data and installs the required drivers to allow the OS to boot from a VM and personalize the VM. 6. VM is ready to run on the destination server.
370
Occurs while the physical machine is NOT running its OS and application. Boots the physical machine using the converter boot CD. Creates a consistent copy of the physical machine.
Cold Conversion
371
Occurs while the machine is running. Performs synchronization; copies blocks changed during the cloning period. Performs power off at source and power on at target VM. Changes IP address and machine name of the target if both machines must exist on the same network.
Hot Conversion
372
Two ways to migrate from a physical machine to a virtual machine
1) Hot Conversion 2) Cold Conversion
373
Bootable CD that contains its own OS and the converter application. The converter application is used to perform cold conversion.
P2V Converter Boot CD
374
Responsible for performing the conversion. Used in hot mode only. Installed on the physical machine to convert it to a VM.
P2V Converter Agent
375
Responsible for controlling the conversion process. Used for hot conversion only (when the source is running its OS). Pushes and installs the agent on the source machine.
P2V Converter Server
376
3 Key Components of P2V Converter
1) Converter Server 2) Converter Agent 3) Converter Boot CD
377
Benefits of P2V Converter
Reduces time needed to setup a new VM. Enables migration of legacy machine to new hardware without reinstalling the OS or application. Performs migration across heterogeneous hardware.
378
Clones data from the physical machine's disk to the VM disk. Performs system reconfiguration of the destination VM, such as changing the IP address and computer name, and installing required device drivers to enable the VM to boot.
Physical to Virtual Machine (P2V) Conversion
379
Process through which physical machines are converted into VMs.
P2V Conversion
380
Provides ability to manage physical machines running hypervisor.
Resource Management Tool
381
Enables centralized management of resources from a management server. Enables pooling of resources and allocates capacity to VMs. Communicates with hypervisors to perform management. Provides operational automation.
Resource Management Tool
382
VM to Physical Server Anti-Affinity
Allows a VM to move across different hypervisors in a cluster (e.g., for high availability or performance requirements).
383
VM to Physical Server Affinity
Specifies that a selected VM can be placed only on a particular hypervisor (e.g., for licensing reasons).
384
VM to VM Anti-Affinity
Ensures that selected VMs are NOT together on a hypervisor (e.g., for availability reasons).
385
VM to VM Affinity
Ensures that selected VMs run on the same hypervisor, to improve performance when the VMs communicate with each other heavily.
386
Last option because it causes notable performance impact.
Memory swapping
387
Swap file size
Swap file size is equal to the DIFFERENCE between the MEMORY LIMIT and the VM MEMORY RESERVATION.
388
Makes the guest OS free some of the virtual machine memory
Ballooning
389
Transparent Page Sharing
Hypervisor detects identical memory pages of VMs and maps them to the same physical page. For writes, the hypervisor treats the shared pages as copy-on-write. An attempt to write to a shared page generates a minor page fault and CREATES A PRIVATE COPY after the write, then remaps the memory.
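The sharing and copy-on-write behavior above can be sketched as a toy page table (all names and the page-hashing scheme are illustrative, not how a real hypervisor is implemented): identical pages collapse to one physical copy, and a write remaps the writer to a private copy.

```python
import hashlib

class PageTable:
    """Toy transparent page sharing: identical pages map to one physical
    copy; a write makes a private copy (copy-on-write) and remaps."""
    def __init__(self, physical):
        self.physical = physical           # shared: content hash -> page bytes
        self.map = {}                      # virtual page number -> content hash

    def load(self, vpn, contents):
        h = hashlib.sha256(contents).hexdigest()
        self.physical.setdefault(h, contents)   # share identical pages
        self.map[vpn] = h

    def write(self, vpn, contents):
        # minor fault on a shared page: create a private copy, then remap
        h = hashlib.sha256(contents).hexdigest()
        self.physical[h] = contents
        self.map[vpn] = h

physical: dict = {}
vm1, vm2 = PageTable(physical), PageTable(physical)
vm1.load(0, b"zero" * 1024)
vm2.load(0, b"zero" * 1024)
assert vm1.map[0] == vm2.map[0] and len(physical) == 1   # one shared copy
vm2.write(0, b"new!" * 1024)
assert vm1.map[0] != vm2.map[0] and len(physical) == 2   # private copy after write
```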
390
True or False: VMs can be configured with more memory than physically available.
True. Referred to as memory overcommitment.
391
Memory Management Techniques
Transparent Page Sharing - Memory Ballooning - Memory Swapping
392
What is enabled by hyper-threading?
Allows the OS to schedule two threads or processes simultaneously.
393
Makes a physical CPU appear as two logical CPUs (LCPUs).
Hyper-threading
394
Features supported by hypervisors to optimize CPU resources
Multi-core - Hyper-threading - CPU load balancing
395
Makes a physical CPU appear as two or more logical CPUs.
Hyper-threading
396
Amount of CPU or memory resources a VM or a child resource pool can have with respect to its parent's total resources.
Share
397
Maximum amount of CPU and memory a VM or a child resource pool can consume.
Limit
398
Amount of CPU and memory reserved for a VM or a child resource pool.
Reservation
399
Used to control the resources consumed by resource pools or VMs
Reservation, limit, and share
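The three controls, plus the swap-file rule from the earlier card (swap size = memory limit minus reservation), reduce to simple arithmetic; this sketch (function names and numbers hypothetical) shows the swap calculation and a proportional split of a parent pool's capacity by shares.

```python
def vswap_size(limit_mb: int, reservation_mb: int) -> int:
    """Swap file size = memory limit minus memory reservation: only the
    unreserved portion of the VM's memory may ever need to be swapped."""
    return limit_mb - reservation_mb

def divide_by_shares(capacity_mb: int, shares: dict) -> dict:
    """Shares split the parent pool's capacity proportionally among VMs."""
    total = sum(shares.values())
    return {vm: capacity_mb * s // total for vm, s in shares.items()}

assert vswap_size(limit_mb=4096, reservation_mb=1024) == 3072
assert divide_by_shares(6000, {"vm1": 2000, "vm2": 1000}) == {"vm1": 4000, "vm2": 2000}
```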
400
May be created from the parent resource pool
Child resource pool or virtual machine (VM)