EMC CIS Module 2 Flashcards

Part 2 (199 cards)

2
Q

Which is a benefit of RAID?
a. Ensures data integrity in a RAID set.
b. Prevents disk failure in a RAID set.
c. Improves storage system performance.
d. Simplifies distribution of parity across mirrored disks.

A

c. Improves storage system performance. (Module 2 Quiz)

3
Q

Which statement is true about FC SAN?
a. Provides higher scalability as compared to DAS.
b. Has limited ability to share resources.
c. Enables object level access to data.
d. Supports a maximum of 256 nodes.

A

a. Provides higher scalability as compared to DAS. (Module 2 Quiz)

4
Q

Which key requirement of a data center refers to the ability of IT to support new business initiatives dynamically?
a. Manageability
b. Availability
c. Capacity
d. Flexibility

A

d. Flexibility (Module 2 Quiz)

5
Q

Which are the key parameters that determine the performance and availability of a RAID set?
a. Number of drives in a RAID set and RAID level
b. Number of drives in a RAID set and the capacity of each drive
c. Number of RAID controllers and type of RAID implementation
d. Number of drives in a RAID set and type of RAID implementation

A

a. Number of drives in a RAID set and RAID level (Module 2 Quiz)
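Supplementary note (not from the course slides): card 5's answer states that the number of drives and the RAID level determine a RAID set's performance and availability. The sketch below shows how those two parameters translate into usable capacity and drive-failure tolerance for the common levels; the arithmetic is standard RAID behavior, and the function name and example values are illustrative only.

```python
def raid_summary(level: int, drives: int, drive_capacity_gb: float) -> dict:
    """Usable capacity and drive-failure tolerance for common RAID levels."""
    if level == 0:        # striping only, no redundancy
        usable, tolerated = drives, 0
    elif level == 1:      # two-way mirroring
        usable, tolerated = drives / 2, 1
    elif level == 5:      # single distributed parity
        usable, tolerated = drives - 1, 1
    elif level == 6:      # dual distributed parity
        usable, tolerated = drives - 2, 2
    else:
        raise ValueError("level not covered in this sketch")
    return {"usable_gb": usable * drive_capacity_gb,
            "drive_failures_tolerated": tolerated}

# A 5-drive RAID 5 set of 600 GB drives: one drive's worth of capacity goes to parity.
print(raid_summary(5, drives=5, drive_capacity_gb=600))
# {'usable_gb': 2400, 'drive_failures_tolerated': 1}
```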

6
Q

Which key requirement of a data center is violated when an authorized storage administrator is not able to remotely log in to a server in the data center?
a. Scalability
b. Flexibility
c. Security
d. Availability

A

d. Availability (Module 2 Quiz)

7
Q

A facility containing physical IT resources including compute, network, and storage.

A

Classic Data Center (2.3)

8
Q

Five core elements of a CDC

A

1) Application
2) Database Management System (DBMS)
3) Compute
4) Storage
5) Network (2.3)

9
Q

Characteristics of channel technology

A

1) Compute system and peripheral devices are connected through a channel.
2) Low protocol overhead due to tight coupling.
3) Supports transmission only over short distances.
4) Protocol examples: PCI, IDE/ATA, SCSI, etc. (2.21)

10
Q

Network Technology Characteristics

A

1) Compute system and peripheral devices are connected over a network.
2) High protocol overhead due to network connection.
3) Supports transmission over long distances.
4) Protocol examples: iSCSI (SCSI over IP), FCoE, and FC (2.21)

11
Q

More flexible than channel technologies

A

Network technologies (2.21)

12
Q

ASIC board that performs I/O interface functions between the host and the storage, relieving the CPU from additional I/O processing workload.

A

Host Bus Adapter (HBA) (2.21)

13
Q

A multifunction adapter which consolidates the functionality of a NIC card AND a Fibre Channel HBA onto a single adapter.

A

Converged Network Adapter (CNA) (2.21)

14
Q

Popular protocol to connect to disk drives.
Supports 16-bit parallel transmission.
Serial version is called Serial ATA (SATA).
Both versions offer good performance at a relatively low cost.

A

IDE/ATA (2.22)

15
Q

Preferred storage connectivity option for high-end environments.
Improved performance and scalability, but higher cost, compared to ATA.

A

Small Computer System Interface (SCSI) (2.23)

16
Q

Serial version of SCSI

A

Serial Attached SCSI (SAS) (2.23)

17
Q

In file level access, where does the file system reside?

A

With the storage (2.24)

18
Q

In block level access, where does the file system reside?

A

On the compute (2.24)

19
Q

3 Key Infrastructure Components

A

1) OS (or file system)
2) Connectivity (network)
3) Storage (2.24)

20
Q

Protocols used for accessing data from an external storage device (or subsystems).

A

Fibre Channel, iSCSI (2.24)

21
Q

Two ways data can be accessed over a network

A

1) File Level
2) Block Level (2.24)

22
Q

In block-level access, where is the file system created?

A

On a compute system. (2.24)

23
Q

In file level access, where is the file system created?

A

On a network or at the storage (2.24)

24
Q

True or False: File-level access has higher overhead than block-level access.

A

True (2.24)

25
Q

LBA

A

Logical Block Addressing (2.24)

26
CHS
Cylinder, Head, and Sector (2.24)
27
Simplifies addressing by using a linear address to access physical blocks of data.
Logical Block Addressing (LBA) (2.24)
28
Physical drive addressing that refers to specific locations on a drive.
Cylinder, Head, and Sector (CHS) (2.24)
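Supplementary note (not from the course slides): cards 25-28 contrast LBA's linear addressing with CHS's physical geometry. The standard conversion is LBA = (C x heads_per_cylinder + H) x sectors_per_track + (S - 1); a minimal sketch with illustrative geometry values:

```python
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Convert a CHS (cylinder, head, sector) address to a linear LBA.

    CHS sectors are 1-based, hence the (sector - 1) term.
    """
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Illustrative geometry: 16 heads per cylinder, 63 sectors per track.
print(chs_to_lba(cylinder=2, head=3, sector=4,
                 heads_per_cylinder=16, sectors_per_track=63))  # 2208
```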
29
Internal or external storage device, which connects directly to a compute system.
Direct Attached Storage (DAS) (2.25)
30
Two classifications of DAS
1) Internal
2) External
(w.r.t. the compute system) (2.25)
31
Benefits of DAS
1) Simple to deploy and ideal for local data provisioning
2) Low capital expense and less complexity. (2.25)
32
DAS Challenges
1) Limited scalability
2) Limited ability to share resources (islands of over- and under-utilized storage resources) (2.25)
33
Six Storage Networking Technologies
1) Fibre Channel SAN (FC SAN)
2) Network Attached Storage (NAS)
3) Internet Protocol SAN (IP SAN)
4) Fibre Channel over Ethernet (FCoE)
5) Object Based Storage
6) Unified Storage (2.26)
34
Concept-based storage network technologies
Object Based Storage
Unified Storage (2.26)
35
Protocol-based storage networking technologies
Fibre Channel SAN (FC SAN)
Network Attached Storage (NAS)
Internet Protocol SAN (IP SAN)
Fibre Channel over Ethernet (FCoE) (2.26)
36
Dedicated high speed network of compute systems and shared storage devices which uses the SCSI over FC protocol.
Provides block level data access.
FC SAN (2.27)
37
Four Benefits of FC SAN
1) Enables storage consolidation and sharing.
2) Enables centralized management.
3) Provides scalability and high performance.
4) Reduces storage and administration cost. (2.27)
38
FC SAN Scaling Limit
15 million devices (2.27)
39
SCSI data encapsulated and transported within Fibre Channel frames
Fibre Channel Protocol (FCP) (2.27)
40
Base protocol of FC SAN
SCSI (modified) (2.27)
41
Six Components of an FC SAN
1) Node Ports
2) Cables
3) Connectors
4) Interconnecting Devices
5) Storage arrays
6) SAN Management Software (2.28)
42
FC-AL
Fibre Channel Arbitrated Loop
43
One or more interconnected FC switches through which multiple SAN nodes can communicate.
Fibre Channel Fabric (2.29)
44
In a switched fabric, the link between any two switches
Inter Switch Link (ISL) (2.29)
45
An end point in an FC fabric
N_port (2.30)
46
Typically, a compute system port (HBA) or a storage array port that is connected to a switch in a switched fabric.
N_port (node port) (2.30)
47
An FC port that forms the connection between two FC switches
E_port (a.k.a. expansion port) (2.30)
48
A port on an FC switch that connects to an N_port
F_port (a.k.a. fabric port) (2.30)
49
A generic port that can operate as an E_port or an F_port and determines its functionality automatically during initialization.
G_port (2.30)
50
Used to communicate between nodes within an FC SAN.
Similar in functionality to an IP address on NICs.
24-bit address, dynamically assigned.
Fibre Channel Address (2.31)
51
Three Components of a Fibre Channel Address
Domain ID (switch)
Area ID (port group)
Port ID (port) (2.31)
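Supplementary note (not from the course slides): cards 50-51 describe the 24-bit FC address as Domain ID, Area ID, and Port ID. A minimal sketch that packs and unpacks those three one-byte fields, assuming the conventional layout with the Domain ID in the most significant byte:

```python
def pack_fc_address(domain_id: int, area_id: int, port_id: int) -> int:
    """Assemble a 24-bit FC address: Domain ID | Area ID | Port ID (one byte each)."""
    for field in (domain_id, area_id, port_id):
        if not 0 <= field <= 0xFF:
            raise ValueError("each field is a single byte (0-255)")
    return (domain_id << 16) | (area_id << 8) | port_id

def unpack_fc_address(address: int) -> tuple[int, int, int]:
    """Split a 24-bit FC address back into (domain_id, area_id, port_id)."""
    return (address >> 16) & 0xFF, (address >> 8) & 0xFF, address & 0xFF

addr = pack_fc_address(domain_id=0x0A, area_id=0x01, port_id=0x2C)
print(f"{addr:06X}")            # 0A012C
print(unpack_fc_address(addr))  # (10, 1, 44)
```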
52
Two types of addresses used for communication in an FC SAN environment
1) Fibre Channel Address
2) World Wide Name (2.31)
53
Unique 64-bit identifier.
Static to the port, similar to a NIC's MAC address.
Used to physically identify ports or nodes within an FC SAN.
World Wide Name (2.31)
54
Unique identification number provided to each switch in an FC SAN.
Domain ID (2.31)
55
Used to identify a group of switch ports used to connect nodes.
Area ID (2.31)
56
A Fibre Channel switch function that enables nodes within the fabric to be logically segmented into groups that can communicate with each other.
Zoning (2.32)
57
In an FC environment, used to control server access to storage.
Zoning in conjunction with LUN masking (2.32)
58
Zoning takes place at what level?
Fabric level (2.32)
59
LUN Masking is done at what level?
Array level (2.32)
60
3 types of zoning
1) Port zoning
2) WWN zoning
3) Mixed zoning (2.32)
61
Zoning that uses the FC addresses of the physical ports to define zones.
Access to data is determined by the physical switch port to which a node is connected.
Also called hard zoning.
Port zoning (2.32)
62
Also known as soft zoning.
Allows the FC SAN to be re-cabled without reconfiguring the zone information.
Uses World Wide Names to define zones.
WWN Zoning (2.32)
63
Zoning that enables a specific port to be tied to the WWN of a node.
Combines qualities of both WWN zoning and port zoning.
Mixed zoning (2.32)
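Supplementary note (not from the course slides): cards 56-63 define zoning as logically grouping fabric nodes that are allowed to communicate. A toy sketch of WWN zoning, with zones modeled as sets of member WWNs; the zone names and WWNs are hypothetical:

```python
# Hypothetical WWN zones; only members of the same zone may communicate.
zones = {
    "zone_app1": {"10:00:00:00:c9:20:dc:40",   # host HBA WWN
                  "50:06:01:60:3b:e0:12:34"},  # storage array port WWN
    "zone_app2": {"10:00:00:00:c9:20:dc:41",
                  "50:06:01:61:3b:e0:12:34"},
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """True if at least one zone contains both ports."""
    return any(wwn_a in members and wwn_b in members for members in zones.values())

print(can_communicate("10:00:00:00:c9:20:dc:40", "50:06:01:60:3b:e0:12:34"))  # True
print(can_communicate("10:00:00:00:c9:20:dc:40", "50:06:01:61:3b:e0:12:34"))  # False
```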
64
A technology that provides transfer of block level data over an IP network
IP-SAN (2.34)
65
Two primary protocols that leverage IP as the transport mechanism for block level data transmission
1) iSCSI (SCSI over IP)
2) FCIP (2.35)
66
Compute-based encapsulation of SCSI I/O over IP using an Ethernet NIC, TCP/IP Offload Engine, or iSCSI HBA in the compute system.
iSCSI (2.35)
67
Three network adapters used in an iSCSI environment
1) Ethernet NIC card
2) TCP/IP Offload Engine (TOE) card
3) iSCSI HBA (2.35)
68
Uses a pair of bridges (FCIP gateways) communicating over TCP/IP
Fibre Channel over IP (2.35)
69
Widely adopted for connecting compute systems to storage because it is relatively inexpensive and easy to implement, especially in environments where an FC SAN does not exist.
iSCSI (2.35)
70
Used extensively in Disaster Recovery (DR) implementations, where data is duplicated to an alternate site.
FCIP (2.35)
71
Two iSCSI topologies
1) Native
2) Bridged (2.36)
72
iSCSI topology with:
1) No FC components, and
2) iSCSI initiators connect directly to the storage array.
Native iSCSI Topology (2.36)
73
iSCSI topology with the following attributes:
1) Translates iSCSI/IP to FC,
2) iSCSI initiator configured with bridge as target,
3) Bridge acts as a virtual FC initiator, and
4) Less common iSCSI topology
Bridged iSCSI Topology (2.36)
74
Device that issues commands to a target device to perform a task.
Initiator (2.36)
75
IP-based storage networking technology.
Combines advantages of Fibre Channel and IP.
Creates virtual FC link that connects devices in a different fabric.
Distance extension solution.
Tunneling protocol.
Fibre Channel over IP (FCIP) (2.37)
76
Used for data sharing over geographically dispersed SAN.
FCIP (2.37)
77
Transports FC block data over an existing IP infrastructure
FCIP (2.37)
78
Encapsulates Fibre Channel frames for transport over Enhanced Ethernet networks.
Enables consolidation of SAN traffic and Ethernet traffic onto a common 10GigE infrastructure.
Consolidates compute to compute and compute to storage communication over a single channel.
Fibre Channel over Ethernet (FCoE) (2.38)
79
Combines LAN and SAN traffic over a single 10GigE connection.
FCoE (2.38)
80
Contains Ethernet bridge and Fibre Channel Forwarder.
FCoE Switch (2.40)
81
Part of FCoE switch which encapsulates FC frames into FCoE frames and de-capsulates FCoE frames to FC frames.
Fibre Channel Forwarder (FCF) (2.40)
82
Components of FCoE
1) Converged Network Adapter (CNA)
2) FCoE Switch (Ethernet bridge + FCF)
3) Converged Enhanced Ethernet (CEE) (a.k.a. Data Center Ethernet) (2.40)
83
Storage device connected to a network that provides FILE LEVEL data access to heterogeneous clients.
Network Attached Storage (NAS) (2.42)
84
Dedicated high performance file server with a storage system.
Network Attached Storage (NAS) (2.42)
85
Remote file services protocols used by NAS
CIFS, NFS (2.42)
86
NAS Head Components
CPU and memory (2.44)
87
6 NAS Components
1) NAS Head (CPU & Memory)
2) NICs (one or more)
3) Optimized OS for managing NAS functionality
4) NFS & CIFS protocols for file sharing
5) Standard protocols to connect & manage physical disk resources (ATA, SCSI, FC)
6) Storage Array (2.44)
88
Common protocols used in object based communication
SOAP, REST (2.47)
89
Used for communication between peers in a distributed environment.
Uses XML framework.
Simple Object Access Protocol (SOAP) (2.47)
90
Used to retrieve information from a web site by reading web pages.
Representational State Transfer (REST) (2.47)
91
Way of exchanging messages between peers on a distributed network.
Simple Object Access Protocol (SOAP) (2.47)
92
Communication standard used by Object Based Storage
HTTP (2.47)
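Supplementary note (not from the course slides): cards 88-92 note that object storage is accessed over HTTP, typically with REST (or SOAP). A hedged sketch of REST-style object access using the third-party requests library; the endpoint, bucket, and object key are hypothetical, and real object stores also require authentication headers that are omitted here:

```python
import requests

# Hypothetical REST endpoint for an object store (authentication omitted).
BASE_URL = "https://objectstore.example.com/demo-bucket"

# PUT creates (or overwrites) an object addressed by its key.
resp = requests.put(f"{BASE_URL}/reports/q1.txt", data=b"quarterly figures")
print(resp.status_code)

# GET retrieves the object by the same key.
resp = requests.get(f"{BASE_URL}/reports/q1.txt")
print(resp.content)

# DELETE removes it.
requests.delete(f"{BASE_URL}/reports/q1.txt")
```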
93
7 Benefits of Object Based Storage
1) Automates & simplifies storage management.
2) Ensures data integrity (!!!).
3) Ensures compliance and auditability.
4) Enables easy data migration.
5) Enables self healing.
6) Facilitates intelligent replication.
7) Allows flexible scalability. (2.49)
94
Scenarios appropriate for Object Based storage
Multimedia content rich Web applications
Archives
Cloud (2.49)
95
Benefits of Unified Storage
Provides consolidated multi-protocol storage.
Simplifies administration.
Reduces cost of storage assets, along with power, cooling, and space.
Provides a highly scalable architecture. (2.51)
96
File Level Storage Protocols
NFS, CIFS (2.51)
97
Block Level Storage Protocols
iSCSI, FC, and FCoE (2.51)
98
Object Level Protocols
REST, SOAP (2.51)
99
Goal of a business continuity solution
Ensure the information availability required to conduct vital business operations. (2.53)
100
RPO
Recovery Point Objective (2.55)
101
RTO
Recovery Time Objective (2.55)
102
Solutions & Technologies Which Enable Business Continuity
Eliminating Single Points of Failure
Multi-pathing software
Backup
Replication (local and remote) (2.56)
103
Backup Purposes
Disaster Recovery
Operational Backup
Archival (2.57)
104
How backups can be categorized, based on granularity
1) Full
2) Cumulative
3) Incremental (2.58)
105
Enables a full backup copy to be created offline without disrupting the I/O operation on the production volume.
Synthetic (or constructed) full backup (2.58)
106
Four Backup Components
1) Backup Client
2) Backup Server
3) Storage Node
4) Backup Device (2.59)
107
Sends backup data to backup server or storage node
Backup Client (2.59)
108
Manages backup operations and maintains backup catalog
Backup Server (2.59)
109
Responsible for writing data to backup device.
Storage Node (2.59)
110
Stores backup data
Backup device (2.59)
111
Contains information about the backup process and backup metadata.
Backup Catalog (2.59)
112
True or False: Typically, the storage node is integrated with the backup server and both are hosted on the same physical platform.
True (2.59)
113
Backup Operation Steps
1) Backup server initiates a scheduled backup.
2) Backup server instructs the storage node to load the backup media and instructs clients to send backup data to the storage node.
3) Storage node sends backup data to the backup device and media information to the backup server.
4) Backup server updates the catalog and records the status. (2.60)
114
Restore Operation Steps
1) Backup client initiates the restore.
2) Backup server scans the backup catalog to identify the data to be restored and the client that will receive the data.
3) Backup server instructs the storage node to load the backup media.
4) Storage node restores the backup data to the client and sends metadata to the backup server. (2.60)
115
VTL
Virtual Tape Library (2.61)
116
Three Backup Technology Options
1) Backup to Tape
2) Backup to Disk
3) Backup to Virtual Tape (VTL) (2.61)
117
Technology that conserves storage capacity and/or network traffic by eliminating duplicate data.
Deduplication (2.62)
118
Levels of Deduplication Implementation
File Level
Block / Chunk Level (2.62)
119
True or False: Deduplication can be source-based (client) or target based (storage device).
True (2.62)
120
3 Benefits of Deduplication
1) Far less infrastructure is required to hold the backup images, due to the elimination of redundant data.
2) Reduces the amount of redundant content in the daily backup, enabling longer retention periods.
3) Reduces backup window and enables faster restore, enabling creation of daily full backup images. (2.62)
121
3 Methods for Implementing Deduplication
1) Single Instance Storage (SIS)
2) Sub-file Deduplication
3) Compression (2.65)
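Supplementary note (not from the course slides): cards 117-121 describe block/chunk-level (sub-file) deduplication. A minimal sketch using fixed-size chunks and SHA-256 fingerprints; the chunk size and sample data are illustrative, and production systems typically use much larger (often variable-size) chunks:

```python
import hashlib

CHUNK_SIZE = 8  # illustrative; real systems use far larger chunks

def deduplicate(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Store each unique chunk once; return the fingerprint list needed to rebuild data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are not stored again
        recipe.append(digest)
    return recipe

def restore(recipe: list[str], store: dict[str, bytes]) -> bytes:
    return b"".join(store[digest] for digest in recipe)

store: dict[str, bytes] = {}
recipe = deduplicate(b"AAAAAAAA" * 3 + b"BBBBBBBB", store)
print(len(recipe), "chunks referenced,", len(store), "chunks stored")  # 4 ... 2
print(restore(recipe, store) == b"AAAAAAAA" * 3 + b"BBBBBBBB")         # True
```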
122
Process of creating an EXACT COPY of data
Replication (2.67)
123
Drivers for replication
Alternate source for backup
Fast recovery
Decision support
Testing platform
Restart from replica (2.67)
124
Two Classifications of Replication
Local Replication
Remote Replication
125
Primary purpose of replication
To enable users to have the designated data at the right place, in a state appropriate to the recovery needs. (2.67)
126
Drives choice of replica type
RPO (2.68)
127
Two Types of Replicas
1) Point-in-Time (PIT) Replica (has non-zero RPO)
2) Continuous Replica (has near-zero RPO) (2.68)
128
Three Characteristics of a Good Replica
1) Recoverability
2) Restartability
3) Consistency (2.68)
129
How can RPO be minimized in a PIT replication scenario?
By making periodic PIT replicas
130
Objective of any continuous replication process
Reduce the RPO to zero. (2.69)
131
Process of replicating data within the same array or the same data center.
Local Replication (2.70)
132
Two Classifications of Local Replication
1) Compute based replication
2) Storage array based replication (2.70)
133
Two types of compute based replication
1) LVM-based mirroring
2) File system snapshot (2.71)
134
Types of Storage Array based replication techniques
1) Full volume mirroring
2) Pointer based full volume replication
3) Pointer based virtual replication (2.70)
135
Form of compute based replication where each logical partition in a logical volume is mapped to two physical partitions on two different physical volumes.
A write to a logical partition is written to the two physical partitions.
Logical Volume Manager based mirroring (2.71)
136
Pointer-based local replication which uses the Copy on First Write (CoFW) principle.
Uses a bitmap and a blockmap.
Requires a fraction of the space used by the production file system (FS).
File System Snapshot (2.71)
137
Briefly describe a Copy on First Write (CoFW) mechanism
If a write I/O is issued to the production FS for the first time after the creation of a snapshot, the I/O is held, and the original data of the production FS corresponding to that location is moved to the snap FS (replica).
Then, the new data is written to the production FS.
The bitmap and blockmap are updated accordingly.
Any subsequent write to the same location will not initiate the CoFW activity. (2.71)
138
Used to keep track of blocks that are changed on the production FS after creation of the snapshot.
Bitmap (2.71)
139
Used to indicate the exact address from which the data is to be read when the data is accessed from the Snap FS.
Blockmap (2.71)
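Supplementary note (not from the course slides): cards 136-139 describe the Copy on First Write mechanism with its bitmap and blockmap. A toy in-memory sketch of that behavior (not any vendor's implementation):

```python
class SnapshotFS:
    """Toy Copy on First Write (CoFW) snapshot over a list of production blocks."""

    def __init__(self, production: list[bytes]):
        self.production = production
        self.snap = {}                            # snap FS: saved original blocks
        self.bitmap = [False] * len(production)   # True = block changed since snapshot
        # blockmap: where to read each block when the snapshot is accessed
        self.blockmap = {i: ("production", i) for i in range(len(production))}

    def write(self, block: int, data: bytes) -> None:
        if not self.bitmap[block]:                     # first write since the snapshot?
            self.snap[block] = self.production[block]  # move original data to the snap FS
            self.bitmap[block] = True
            self.blockmap[block] = ("snap", block)
        self.production[block] = data                  # then apply the new data

    def read_snapshot(self, block: int) -> bytes:
        location, index = self.blockmap[block]
        return self.snap[index] if location == "snap" else self.production[index]

fs = SnapshotFS([b"old0", b"old1"])
fs.write(0, b"new0")
fs.write(0, b"newer0")                            # subsequent write: no further CoFW copy
print(fs.read_snapshot(0), fs.read_snapshot(1))   # b'old0' b'old1'
```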
140
Target is a full physical copy of the source device.
Target is attached to the source and data from the source is copied to the target.
Target is unavailable while it is attached.
Target device is at least as large as the source device.
Full Volume Mirroring (2.72)
141
Provides a full copy of the source data on the target.
Target device is made accessible for business operations as soon as the replication session has started.
Point-in-Time (PIT) is determined by the time of session activation.
Target device is at least as large as the source device.
Pointer-Based Full Volume Replication (2.73)
142
Two modes of pointer based full volume replication
1) Copy on First Access (deferred)
2) Full Copy Mode (2.73)
143
Targets do not hold actual data, but hold pointers to where the data is located.
Target requires only a small fraction of the size of the source volumes.
Target devices are accessible at the start of session activation.
Uses CoFW technology.
Pointer Based Virtual Replication (2.74)
144
Process of creating and maintaining copies of data from a production site to remote site(s).
Remote Replication (2.75)
145
Addresses risks associated with regionally driven outages.
Network infrastructure is required between source and target.
Remote Replication (2.75)
146
Two modes of remote replication
1) Synchronous
2) Asynchronous (2.75)
147
Replica is identical to source at all times - near zero RPO
Synchronous Replication (2.75)
148
Replica is behind the source by a finite time - finite RPO.
Asynchronous Replication (2.75)
149
A write must be committed to the source and remote replica before it is acknowledged to the compute system.
Application response time will be extended.
Maximum network bandwidth must be provided at all times to minimize impact on response time.
Rarely deployed beyond 200 km.
Synchronous replication (2.76)
150
Write is committed to the source and immediately acknowledged to the compute system.
Data is buffered at the source and transmitted to the remote replica later.
Application response time is unaffected.
Needs only average network bandwidth.
Deployed over long distances.
Non-zero RPO
Asynchronous replication (2.76)
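Supplementary note (not from the course slides): cards 149-150 differ mainly in when the write is acknowledged to the compute system. A conceptual sketch of that difference (dictionaries stand in for the source and replica volumes; no real replication API is implied):

```python
from collections import deque

source: dict[int, bytes] = {}
replica: dict[int, bytes] = {}
pending: deque[tuple[int, bytes]] = deque()   # buffered writes for asynchronous mode

def write_synchronous(block: int, data: bytes) -> str:
    source[block] = data
    replica[block] = data           # remote commit happens before the acknowledgment,
    return "ack"                    # so response time includes the round trip

def write_asynchronous(block: int, data: bytes) -> str:
    source[block] = data
    pending.append((block, data))   # buffered; shipped to the replica later
    return "ack"                    # acknowledged immediately (hence non-zero RPO)

def drain_async_buffer() -> None:
    while pending:
        block, data = pending.popleft()
        replica[block] = data

write_synchronous(1, b"A")
write_asynchronous(2, b"B")
print(replica)                      # {1: b'A'}  (block 2 not yet replicated)
drain_async_buffer()
print(replica)                      # {1: b'A', 2: b'B'}
```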
151
Replication is done by using the CPU resources of the compute system, using software that is running on the compute.
Compute-based remote replication (2.77)
152
Two Compute Based Remote Replication Methods
1) LVM-based
2) Database Log Shipping (2.77)
153
All writes to the source Volume Group are replicated to the target Volume Group by the LVM.
Can be in synchronous or asynchronous mode
LVM-based remote replication (2.77)
154
Transactions to the source database are captured in logs, which are periodically transmitted by the source compute system to the remote compute system.
Remote compute system applies these logs to the remote database.
Database Log Shipping Remote Replication (2.77)
155
Replication is performed by the array operating environment
Storage Array based Remote Replication (2.78)
156
Three Modes of Operation for Storage Array Based Remote Replication
1) Synchronous Replication
2) Asynchronous Replication
3) Disk Buffered Replication (2.78)
157
Combination of local and remote replications.
RPO usually on the order of hours.
Low bandwidth requirements.
Extended distance solution.
Disk Buffered Storage Array Based Remote Replication (2.78)
158
Eliminates disadvantages of two-site replication.
Replicates data to two remote sites.
Three Site Replication (2.79)
159
Allows replication between heterogeneous vendor storage arrays over SAN/WAN
SAN Based Replication (2.79)
160
Changes to data are continuously captured or tracked.
Continuous Data Protection (CDP) (2.79)
161
CDP
Continuous Data Protection (2.80)
162
Captures all writes and maintains consistent point in time images.
Continuous Data Protection (CDP) (2.79)
163
CDP Elements
CDP Appliance
Storage Volumes
Write Splitters (2.80)
164
All data changes are stored in a location separate from the primary storage.
Recovery point objectives are arbitrary and need not be defined in advance of the actual recovery.
Continuous Data Protection (CDP) (2.80)
165
Runs the CDP software and manages all the aspects of local and remote replication.
CDP Appliance (2.80)
166
Storage Volumes
Repository volume, journal volume, and replication volume. (2.80)
167
Intercepts write from initiator and splits each write into two copies
Write Splitter (2.80)
168
Store all data changes on the primary storage
Journal volumes (2.80)
169
Dedicated volume on the SAN-attached storage at each site which stores configuration information about the CDP appliance.
Repository Volume (2.80)
170
6 Key Management Activities in a CDC
1) Monitoring and Alerting
2) Reporting
3) Availability Management
4) Capacity Management
5) Performance Management
6) Security Management (2.82)
171
Four Key Parameters to be monitored
1) Accessibility
2) Capacity
3) Performance
4) Security (2.83)
172
Key components of a CDC that should be monitored
Compute systems, network, and storage (2.83)
173
Integral part of monitoring
Alerting (2.85)
174
Three Levels of Alerts Based on Severity
Information Alert
Warning Alert
Fatal Alert (2.85)
175
Types of Reports
Capacity Planning Report
Chargeback Report
Performance Report (2.86)
176
Ensures adequate availability of resources based on their service level requirements.
Manages resource allocation.
Capacity Management (2.88)
177
Key Capacity Management Activities
Trend and Capacity Analysis
Storage Provisioning (2.88)
178
Prevents data corruption on the storage array by restricting compute access to a defined set of logical devices.
LUN Masking (2.90)
179
Used to restrict unauthorized HBAs in a SAN environment
Configuration of Zoning (2.90)
180
Proactive strategy that enables an IT organization to effectively align the business value of information with the most appropriate and cost-effective infrastructure, from the time information is created, through its final disposition.
Information Lifecycle Management (ILM) (2.92)
181
Intelligent storage system built on Virtual Matrix architecture
EMC Symmetrix VMAX (2.95)
182
Operating environment for Symmetrix VMAX
Enginuity (2.95)
183
Provides simplified storage management and provisioning, and options such as additional replication, migration and volume configuration.
Enginuity (2.95)
184
Key components of a SAN environment
Fibre Channel switches and directors (2.96)
185
EMC Connectrix Family
1) Enterprise directors (MDS-9513, DCX)
2) Departmental switches
3) Multi-protocol routers (MP-7800B) (2.96)
186
Key ability of multi-protocol switches in a SAN environment
They can bridge FC SAN and IP SAN, a feature that enables these devices to provide connectivity between iSCSI initiators and FC storage targets.
They can also extend an FC SAN over long distances through IP networks. (2.96)
187
Disk-based backup and recovery solution that provides inherent source-based data deduplication
EMC Avamar (2.97)
188
How does Avamar differ from traditional backup and recovery solutions?
Avamar identifies and stores only the unique sub-file data objects. (2.97)
189
Target-based data deduplication solution with Data Invulnerability Architecture
EMC Data Domain (2.97)
190
Provides centralized, automated backup and recovery operations across an enterprise.
Provides both source-based and target-based deduplication capabilities by integrating with Avamar and Data Domain, respectively.
EMC NetWorker (2.97)
191
Works with EMC Avamar and EMC Data Domain
EMC NetWorker (2.97)
192
SRDF
Symmetrix Remote Data Facility (2.98)
193
Offers a family of technology solutions to implement storage array based remote replication technologies.
EMC Symmetrix Remote Data Facility (SRDF) (2.98)
194
VNX-based software that enables storage array based remote replication
EMC MirrorView (2.98)
195
Symmetrix software that performs SAN-based remote replication between Symmetrix and qualified storage arrays.
Has full or incremental copy capabilities.
EMC Open Replicator (2.98)
196
Family of products used for full volume and pointer-based local replication in Symmetrix storage arrays
EMC TimeFinder (2.98)
197
VNX based local replication software that creates point-in-time views or point-in-time copies of logical volumes.
EMC SnapView (2.98)
198
Product which offers CDP and CRR (continuous remote replication) functionality
EMC RecoverPoint (2.98)
199
Seven Key Requirements of a Data Center
1. Manageability
2. Availability
3. Performance
4. Flexibility
5. Scalability
6. Security
7. Data Integrity (2.4)
200
How can manageability be achieved?
Through automation and reduction of manual intervention in common tasks. (2.4)