Dell/EMC CX4-series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide
Contents
Introduction
    Cluster Solution
    Cluster Hardware Requirements
        Cluster Nodes
        Cluster Storage
    Supported Cluster Configurations
        Direct-Attached Cluster
        SAN-Attached Cluster
    Other Documents You May Need
Cabling Your Cluster Hardware
    Cabling the Mouse, Keyboard, and Monitor
    Cabling the Power Supplies
    Cabling Your Cluster for Public and Private Networks
        Cabling the Public Network
        Cabling the Private Network
…
    Storage Groups Using Navisphere
    Configuring the Hard Drives on the Shared Storage System(s)
    Optional Storage Features
    Updating a Dell/EMC Storage System for Clustering
    Installing and Configuring a Failover Cluster
For more information, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com. For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Failover Cluster, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
Cluster Solution
Your cluster implements a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:
• 8-Gbps and 4-Gbps Fibre Channel technology
• …
NOTE: For more information about supported systems, HBAs, and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha. It is recommended that the NICs on each public network …
… 15 hard drives
NOTE: The DAE-OS is the first DAE enclosure that is connected to the CX4-series storage system (this applies to all of the storage systems listed above). Core software is preinstalled on the first five hard drives of the DAE-OS.
… LUN.
• EMC SAN Copy™ — Moves data between Dell/EMC storage systems without using host CPU cycles or local area network (LAN) bandwidth.
For more information about Navisphere Manager, MirrorView, SnapView, and SAN Copy, see "Installing and Configuring the Shared Storage System."
However, the direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
• For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide. • For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.
Cabling Your Cluster Hardware NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com. Cabling the Mouse, Keyboard, and Monitor When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems (the figure shows the primary power supplies on one AC power strip, or on one AC Power Distribution Unit [not shown])
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Table 2-1. Network Connections
Network Connection: Public network; Private network …
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.
Figure 2-3. …
Cabling the Private Network
The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Table 2-2. Private Network Hardware Components and Connections
Method | Hardware Components | Network …
Cabling Storage for Your Direct-Attached Cluster A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system. Figure 2-4 shows an example of a direct-attached, single cluster configuration with redundant HBA ports installed in each cluster node.
… HBA port, SP port, or tape library port.
Cabling a Two-Node Cluster to a Dell/EMC Storage System
NOTE: The Dell/EMC storage system requires at least two front-end Fibre Channel ports available on each storage processor.
1 Connect cluster node 1 to the storage system:
Install a cable from cluster node 1 HBA port 0 to the first front-end Fibre Channel port on SP-A.
Figure 2-5. Cabling a Two-Node Cluster to a CX4-120 or CX4-240 Storage System (labels: cluster node 1, cluster node 2, HBA ports (2), SP-A, SP-B, CX4-120 or CX4-240 storage system)
Figure 2-6. Cabling a Two-Node Cluster to a CX4-480 Storage System …
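The redundancy requirement behind this cabling pattern is easy to state: every node must keep a path to both storage processors even if one HBA port or one cable fails. As a minimal illustrative sketch (the node and port labels below are hypothetical, not names reported by any Dell or EMC tool), the following Python fragment records the cable map from Figures 2-5 and 2-6 and checks that rule:

```python
# Illustrative sketch of the direct-attached, two-node cabling pattern
# shown in Figures 2-5 and 2-6. All port labels are hypothetical.

# Each entry maps (node, HBA port) -> (storage processor, front-end port).
cable_map = {
    ("node1", "hba0"): ("SP-A", "fe0"),
    ("node1", "hba1"): ("SP-B", "fe0"),
    ("node2", "hba0"): ("SP-A", "fe1"),
    ("node2", "hba1"): ("SP-B", "fe1"),
}

def check_redundancy(cables):
    """Verify that every node has at least one path to each SP."""
    nodes = {node for node, _ in cables}
    for node in sorted(nodes):
        sps = {sp for (n, _), (sp, _) in cables.items() if n == node}
        if sps != {"SP-A", "SP-B"}:
            raise ValueError(f"{node} is not cabled to both storage processors")
        print(f"{node}: reaches {sorted(sps)} - OK")

check_redundancy(cable_map)
```

Note how each node consumes exactly one front-end port on each SP; this is why the node counts in the next paragraph track the number of available front-end ports.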
… Dell/EMC storage system, depending on the availability of front-end Fibre Channel ports. The CX4-120 and CX4-240 storage systems can support up to a 6-node cluster, the CX4-480 storage system can support up to an 8-node cluster, and the CX4-960 can support up to a 12-node cluster.
… SP-B.
Cabling Multiple Clusters to a Dell/EMC Storage System
The large number of available front-end Fibre Channel ports on the CX4-series storage systems also allows you to configure multiple clusters, or a mix of clusters and non-clustered servers, in a direct-attached configuration.
Cabling Two Two-Node Clusters to a Dell/EMC Storage System
The following steps are an example of how to cable two two-node clusters. The Dell/EMC storage system must have at least four front-end Fibre Channel ports available on each storage processor.
Figure 2-8 shows an example of a two-node SAN-attached cluster. Figure 2-9 shows an example of an eight-node SAN-attached cluster. Similar cabling concepts can be applied to clusters that contain a different number of nodes. NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in the connections between the cluster nodes and the storage system.
Figure 2-9. Eight-Node SAN-Attached Cluster (labels: Fibre Channel switches (2), public network, private network, cluster nodes (2-8), storage system)
Adding more cables from the storage system to the switches can increase the I/O bandwidth and the availability of data. Although the CX4-960 has a maximum of 12 front-end Fibre Channel ports per SP, only 8 of them can be connected to Fibre Channel switches.
Cabling a SAN-Attached Cluster to a Dell/EMC CX4-120 or CX4-240 Storage System
1 Connect cluster node 1 to the SAN:
Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
Figure 2-10. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-120 or CX4-240 (labels: cluster node 1, HBA ports (2), CX4-120 or CX4-240 storage system)
Cabling a SAN-Attached Cluster to the Dell/EMC CX4-480 or CX4-960 Storage System
1 Connect cluster node 1 to the SAN:
Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
Additional cables can be connected from the Fibre Channel switches to the storage system if there are available front-end Fibre Channel ports on the storage processors.
Figure 2-11. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-480 (labels: cluster node 1, HBA ports (2))
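In a SAN-attached configuration the redundancy comes from the fabric rather than from direct cables: each HBA port attaches to a different switch, and each switch attaches to front-end ports on both SPs, so each node ends up with four node-to-SP paths. The sketch below models that path count; as before, all names (sw0, sw1, the HBA and SP labels) are hypothetical placeholders for illustration only:

```python
# Illustrative sketch of the SAN-attached cabling pattern described above.
# All switch, HBA, and SP labels are hypothetical.

hba_to_switch = {
    ("node1", "hba0"): "sw0",
    ("node1", "hba1"): "sw1",
}

# Each switch is cabled to front-end ports on both storage processors.
switch_to_sp = {
    "sw0": ["SP-A", "SP-B"],
    "sw1": ["SP-A", "SP-B"],
}

def paths_for(node):
    """Enumerate every node-to-SP path that the cabling provides."""
    paths = []
    for (n, hba), sw in hba_to_switch.items():
        if n != node:
            continue
        for sp in switch_to_sp[sw]:
            paths.append((hba, sw, sp))
    return paths

for path in paths_for("node1"):
    print(" -> ".join(path))
# Four paths survive the loss of any single HBA port, switch, or SP.
```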
Figure 2-12. Cabling a SAN-Attached Cluster to the Dell/EMC CX4-960 (labels: cluster node 1, HBA ports (2), Fibre Channel switch)
Cabling Multiple SAN-Attached Clusters to a Dell/EMC Storage System
To cable multiple clusters to the storage system, connect the cluster nodes to the appropriate Fibre Channel switches and then connect the Fibre Channel switches to the appropriate storage processors on the processor enclosure.
Cabling Multiple SAN-Attached Clusters to the CX4-480 or CX4-960 Storage System
1 In the first cluster, connect cluster node 1 to the SAN:
Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
Zoning Your Dell/EMC Storage System in a Switched Environment
Dell supports only single-initiator zoning for connecting clusters to a Dell/EMC storage system in a switched environment. When using EMC PowerPath, a separate zone is created from each HBA port to the SPE.
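Single-initiator zoning means every zone contains exactly one HBA port (the initiator) together with the storage ports it must reach; two HBA ports never share a zone. The following sketch generates such a zone list from made-up WWPNs purely to illustrate the rule; it is not input to, or output from, any actual switch management interface:

```python
# Illustrative sketch of single-initiator zoning: one zone per HBA port,
# each containing that single initiator plus its target SP ports.
# All WWPNs below are made-up placeholders.

hba_ports = {
    "node1_hba0": "10:00:00:00:c9:aa:aa:01",
    "node1_hba1": "10:00:00:00:c9:aa:aa:02",
}
sp_ports = {
    "SP-A_fe0": "50:06:01:60:bb:bb:bb:01",
    "SP-B_fe0": "50:06:01:68:bb:bb:bb:01",
}

def single_initiator_zones(initiators, targets):
    """Build one zone per initiator; never put two HBA ports in one zone."""
    zones = {}
    for name, wwpn in initiators.items():
        zones[f"zone_{name}"] = [wwpn] + list(targets.values())
    return zones

for zone, members in single_initiator_zones(hba_ports, sp_ports).items():
    print(zone, members)
```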
Connecting a PowerEdge Cluster to a Tape Library To provide additional backup for your cluster, you can add tape backup devices to your cluster configuration. The Dell PowerVault™ tape libraries may contain an integrated Fibre Channel bridge or Storage Network Controller (SNC) that connects directly to your Fibre Channel switch.
NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.
Figure 2-14. Cabling a Storage System and a Tape Library (labels: cluster node, Fibre Channel switch, tape library)
Obtaining More Information
See the storage and tape backup documentation for more information on configuring these components.
Figure 2-15. Cluster Configuration Using SAN-Based Backup (labels: cluster 1, cluster 2, Fibre Channel switches (2), storage systems, tape library)
NOTE: For more information on step 3 to step 7 and step 10 to step 13, see the "Preparing your systems for clustering" section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the “Selecting a Domain Model” section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
Installation Overview Each node in your Dell Failover Cluster must be installed with the same release, edition, service pack, and processor architecture of the Windows Server operating system. For example, all nodes in your cluster may be configured with Windows Server 2003 R2, Enterprise x64 Edition.
Placing the adapters on separate buses improves availability and performance. For more information about your system's PCI bus configuration and supported HBAs, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha. Installing the Fibre Channel HBA Drivers For more information, see the EMC documentation that is included with your HBA kit.
Zoning automatically and transparently restricts access to the devices within a zone. More than one PowerEdge cluster configuration can share Dell/EMC storage system(s) in a switched fabric by using Fibre Channel switch zoning with Access Control enabled. By using Fibre Channel switches to implement zoning, you can segment the SANs to isolate heterogeneous servers and storage systems from each other.
• Create a zone for each HBA port and its target storage devices. • Each CX4-series storage processor port can be connected to a maximum of 64 HBA ports in a SAN-attached environment. • Each host can be connected to a maximum of four storage systems.
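The limits in the list above lend themselves to a simple sanity check over a planned zoning layout. The sketch below is a hypothetical illustration (the port and array names are invented); only the two numeric limits come from this section:

```python
# Illustrative check of the fan-in limits listed above. The zone data is
# hypothetical; the limits (64 HBA ports per SP port, 4 storage systems
# per host) come from the rules in this section.
from collections import defaultdict

MAX_HBA_PORTS_PER_SP_PORT = 64
MAX_ARRAYS_PER_HOST = 4

# (hba_port, sp_port, storage_system) triples describing planned zoning.
planned = [
    ("node1_hba0", "SPA_fe0", "array1"),
    ("node1_hba1", "SPB_fe0", "array1"),
]

hba_per_sp_port = defaultdict(set)
arrays_per_host = defaultdict(set)
for hba, sp_port, array in planned:
    hba_per_sp_port[sp_port].add(hba)
    arrays_per_host[hba.split("_")[0]].add(array)   # host name prefix

for sp_port, hbas in hba_per_sp_port.items():
    assert len(hbas) <= MAX_HBA_PORTS_PER_SP_PORT, sp_port
for host, arrays in arrays_per_host.items():
    assert len(arrays) <= MAX_ARRAYS_PER_HOST, host
print("planned zoning is within the documented limits")
```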
Fibre Channel topologies allow multiple clusters and stand-alone systems to share a single storage system. However, if you cannot control access to the shared storage system, you can corrupt your data. To share your Dell/EMC storage system with multiple heterogeneous host systems and restrict access to the shared storage system, you need to enable Access Control.
Access Control is enabled using Navisphere Manager. After you enable Access Control and connect to the storage system from a management station, Access Control appears in the Storage System Properties window of Navisphere Manager. After you enable Access Control, the host system can only read from and write to specific LUNs on the storage system.
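Conceptually, Access Control behaves like LUN masking: the array tracks which hosts belong to which storage group, and a host sees only the LUNs in its own group. The following sketch models that behavior with invented names; it is a conceptual model, not the Navisphere API:

```python
# Conceptual model of Access Control / storage groups, with made-up names.
# A host can read and write only the LUNs in the storage group it belongs to.

storage_groups = {
    "cluster1_sg": {"hosts": {"node1", "node2"}, "luns": {0, 1, 2}},
    "cluster2_sg": {"hosts": {"node3", "node4"}, "luns": {3, 4}},
}

def host_can_access(host, lun):
    """Return True only if some storage group grants this host the LUN."""
    return any(
        host in sg["hosts"] and lun in sg["luns"]
        for sg in storage_groups.values()
    )

print(host_can_access("node1", 0))   # True: LUN 0 is in cluster1_sg
print(host_can_access("node1", 3))   # False: LUN 3 belongs to cluster2
```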
An additional storage group feature performs the following tasks:
• Lists all of the paths from the host server to the storage group
• Displays whether each path is enabled or disabled
Each path contains the following fields:
– …
You can access Navisphere Manager through a web browser. Using Navisphere Manager, you can manage a Dell/EMC storage system either locally on the same LAN or through an Internet connection. Navisphere components (Navisphere Manager user interface (UI) and Storage Management Server) are installed on a Dell/EMC storage system.
EMC PowerPath automatically reroutes Fibre Channel I/O traffic between the host system and a Dell/EMC CX4-series storage system to any available path if a primary path fails for any reason. Additionally, PowerPath provides multiple-path load balancing, allowing you to balance the I/O traffic across multiple SP ports.
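One way to picture this behavior is round-robin dispatch over whichever paths are currently alive, with failed paths skipped automatically. The sketch below is a deliberately simplified model of that idea, not PowerPath's actual algorithm or interface:

```python
import itertools

# Simplified model of multipath failover and load balancing: I/O is spread
# round-robin over live paths, and a dead path is skipped automatically.
# This is a conceptual sketch, not PowerPath's actual algorithm.

paths = {"hba0->SP-A": True, "hba0->SP-B": True,
         "hba1->SP-A": True, "hba1->SP-B": True}

rotation = itertools.cycle(paths)

def next_live_path():
    """Pick the next path in rotation, skipping any failed ones."""
    for _ in range(len(paths)):
        path = next(rotation)
        if paths[path]:
            return path
    raise RuntimeError("no live paths: I/O cannot be delivered")

paths["hba0->SP-A"] = False          # simulate a failed primary path
for _ in range(4):
    print("I/O dispatched via", next_live_path())
```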
3 Enter the IP address of the storage management server on your storage system and then press <Enter>. NOTE: The storage management server is usually one of the SPs on your storage system. 4 In the Enterprise Storage window, click the Storage tab.
Repeat step b and step c to add additional hosts.
Click Apply.
16 Click OK to exit the Storage Group Properties dialog box.
Configuring the Hard Drives on the Shared Storage System(s)
This section provides information for configuring the hard drives on the shared storage systems.
… LUNs to the proper host systems.
Optional Storage Features
Your Dell/EMC CX4-series storage array may be configured to provide optional features that can be used in conjunction with your cluster. These features include MirrorView, SnapView, and SAN Copy.
Updating a Dell/EMC Storage System for Clustering If you are updating an existing Dell/EMC storage system to meet the cluster requirements for the shared storage subsystem, you may need to install additional Fibre Channel disk drives in the shared storage system. The size and number of drives you add depend on the RAID level you want to use and the number of Fibre Channel disk drives currently in your system.
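The arithmetic differs by RAID level because each level carries a different capacity overhead: RAID 5 gives up one drive's worth of capacity per group for parity, while RAID 1/0 mirrors every drive. As a rough worked example (assuming equal-size drives and ignoring hot spares, vault drives, and vendor rounding), the sketch below compares how many 300-GB drives each level needs to reach 1.2 TB of usable space:

```python
# Rough capacity arithmetic for sizing a drive purchase. Assumes equal-size
# drives and ignores hot spares, vault drives, and vendor rounding.

def usable_tb(num_drives, drive_tb, raid_level):
    if raid_level == "RAID5":      # one drive's worth of parity per group
        return (num_drives - 1) * drive_tb
    if raid_level == "RAID10":     # every drive is mirrored
        return num_drives // 2 * drive_tb
    raise ValueError(f"unhandled RAID level: {raid_level}")

# Example: how many 0.3-TB (300-GB) drives give at least 1.2 TB usable?
for level in ("RAID5", "RAID10"):
    n = 2
    while usable_tb(n, 0.3, level) < 1.2:
        n += 1
    print(f"{level}: {n} drives -> {usable_tb(n, 0.3, level):.1f} TB usable")
```

Under these assumptions RAID 5 reaches the target with 5 drives, while RAID 1/0 needs 8; the actual counts for your array depend on the drive sizes and RAID group rules in your configuration.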
Troubleshooting
This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.
Table A-1. General Cluster Troubleshooting
Problem: The nodes cannot access the storage system, or the …
Probable Cause: The storage system is not cabled properly to …
Table A-1. General Cluster Troubleshooting (continued)
Problem: One of the nodes takes a long time to join the cluster.
Probable Cause: The node-to-node network has failed due to a cabling or hardware failure.
Problem: One or more nodes fail to join …
Probable Cause: One of the nodes may have the Internet Connection Firewall …
Table A-1. General Cluster Troubleshooting (continued)
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started. A cluster has not been formed on the system. The system has just been booted and services are still starting.
… see the "… IP Addresses to Cluster Resources and Components" section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide. Ensure that all systems are powered on so that the NICs in the private network are available.
… Cluster Service. If you are running Windows Server 2003, this situation is normal if the cluster node does not own the cluster disk.
Problem: … unreadable or uninitialized in Windows Disk Administration.
Corrective Action: …
… Internet Connection Firewall enabled, see Microsoft Knowledge Base (KB) articles 258469 and 883398 at the Microsoft Support website at support.microsoft.com and the Microsoft Windows Server 2003 TechNet website at www.microsoft.com/technet. Configure the Internet Connection Firewall to allow communications that are required by the MSCS and the clustered applications or services.
Zoning Configuration Form
Node | HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name
Use the form when you call for technical support.
Table C-1. Cluster Information
Cluster Information | Cluster Solution
Cluster name and IP address |
Server type |
Installer |
Date installed |
Applications |
Location |
Notes |
Table C-2. Cluster Node Information
Additional Networks
Table C-3. Storage Array Information
Array | Array xPE Type | Array Service Tag Number or World Wide Name Seed | Number of Attached DAEs