Projects/DellRedHatHALinuxCluster/Documentation/Storage/MD3000i


Introduction

The following sections walk through the hardware setup, software installation, and configuration of the Dell PowerVault MD3000i storage array and cluster nodes for the Dell|Red Hat HA Linux Cluster.

NOTE: Additional documentation you may find useful:

  • The Dell PowerVault MD3000i Hardware Owner's Manual provides information about the hardware enclosure.
  • Setting Up Your PowerVault MD3000i provides an overview of setting up and cabling your storage array.
  • The PowerVault Modular Disk Storage Manager CLI Guide provides information about using the command line interface (CLI).
  • The Dell PowerVault MD3000i Resource media provides documentation for configuration and management tools, as well as the full documentation set included here.
  • The Dell PowerVault Modular Disk Storage Manager User's Guide provides instructions for using the array management software to configure RAID systems.
  • The Dell PowerVault MD Systems Support Matrix provides information on supported software and hardware for PowerVault MD systems. This document can be located on the Dell Support website at support.dell.com.

Hardware Setup

The MD3000i connects to your cluster nodes over Ethernet. For a fully redundant setup, it is recommended to use three separate subnets: two for iSCSI I/O traffic and one for management.

Cabling the Power Supplies

To ensure that the specific power requirements are satisfied, see the documentation for each component in your cluster solution. It is recommended that you adhere to the following guidelines to protect your cluster solution from power-related failures:

  • Plug each power supply into a separate AC circuit.
  • If you use optional network power switches, plug each power supply into a separate switch.
  • Use uninterruptible power supplies (UPS).
  • Consider backup generators and power from separate electrical substations.

Cabling the Management Ports

Storage management is performed through the out-of-band Ethernet management connections on the controllers. Cable the Ethernet management ports to the private network on the same subnet as the cluster private network.

Cabling the iSCSI I/O Ports

In the network-attached redundant configuration, each cluster node attaches to the storage system through redundant, industry-standard 1 Gb Ethernet switches on the IP storage area network (SAN), using either one dual-port iSCSI NIC or two single-port iSCSI NICs. If a component in the storage path fails, such as an iSCSI NIC, a cable, a switch, or a storage controller, the multipath software automatically re-routes I/O requests to the alternate path so that the storage array continues to operate without interruption. The configuration with two single-port NICs provides higher availability: a NIC failure does not cause cluster resources to move to another cluster node.

This configuration can support up to 16 hosts simultaneously and requires dual controller modules.

To cable the cluster:

  1. Connect the storage system to the iSCSI network.
    1. Install a network cable from switch 1 to controller 0 port In-0.
    2. Install a network cable from switch 1 to controller 1 port In-0.
    3. Install a network cable from switch 2 to controller 0 port In-1.
    4. Install a network cable from switch 2 to controller 1 port In-1.
  2. Connect the cluster to the iSCSI network.
    1. Install a network cable from cluster node 1 iSCSI NIC 1 (or NIC port 1) to network switch 1.
    2. Install a network cable from cluster node 1 iSCSI NIC 2 (or NIC port 2) to network switch 2.
    3. Repeat the previous two steps for each additional node.
  3. Connect each additional cluster or standalone server to the iSCSI network, similar to step 2.

[Figure fig1-310.jpg: cabling the cluster nodes and redundant Ethernet switches to the MD3000i iSCSI ports]

NOTE: The SAS out port provides SAS connection for cabling to MD1000 expansion enclosure(s).

Software Installation

The software required for your cluster is discussed here. Some of it is available from the Dell Software Repository, and all of it is available on the Dell PowerVault MD3000i Resource CD. For information on configuring access to the repository, see the section Installing Dell Community Repositories in the System section.

Dell PowerVault MD3000i Resource CD

All necessary software is available on the Dell PowerVault MD3000i Resource CD. If you do not have the Resource CD or access to the Dell Community Repositories, download the latest ISO image from the Dell Support website at support.dell.com.

Resource CD Prerequisites

Before running the Dell PowerVault MD3000i Resource CD installer, ensure that the libXp and libXtst packages and the kernel development tools are installed on each node; the installer requires them.

1. Run the following commands to ensure that the packages are installed:

[root]# yum install libXp libXtst

2. The Dell PowerVault MD3000i Resource CD requires development tools in order to install the Dell HBA drivers and Multi-Path Proxy driver. Before running the installer, ensure your node has these packages by entering the following commands:

[root]# yum install kernel-devel gcc
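
The kernel headers used to build the driver modules must match the running kernel. A quick way to confirm this before launching the installer (a minimal sketch using the standard query tools):

[root]# uname -r
[root]# rpm -q kernel-devel gcc libXp libXtst

If the kernel-devel version does not match the running kernel reported by uname -r, install the matching kernel-devel package or update and reboot into the latest kernel before proceeding.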

3. To use the PowerVault Modular Disk Storage Manager, ensure that X is running. If X is not installed, install it with the following command:

[root]# yum groupinstall "X Window System"

If X does not start up properly, run the following configuration utility:

[root]# system-config-display

Enter runlevel 5 with the following command:

[root]# init 5

Or start X with the following command:

[root]# startx

4. To view the documentation located on the Dell PowerVault MD3000i Resource CD, install a web browser on your system:

[root]# yum install firefox

Launching the Resource CD Main Menu

To install the required software:

1. Access the Dell PowerVault MD3000i Resource CD in one of two ways:

  • Insert the media into a cluster node. If X is running, the media mounts automatically under the /media mount point and launches the main menu. If the media does not mount automatically, execute the following command:
[root]# mount /dev/cdrom {/path/to/mount/point}

where {/path/to/mount/point} is a directory that is used to access the contents.

  • If you have downloaded the ISO image, it is not necessary to create physical media. Mount the ISO image and access the contents with the following command:
[root]# mount -o loop {ISO filename} {/path/to/mount/point}

where {/path/to/mount/point} is a directory that will be used to access the contents.

2. The installer launches automatically. If it does not, change to the mount point directory and run the Dell PowerVault MD3000i Resource CD Linux installation script manually:

[root]# ./autorun

If the main menu does not appear, change the directory using the following command:

[root]# cd {/path/to/mount/point/linux/}

Run the installer:

[root]# ./install.sh

The Dell PowerVault MD3000i Resource CD menu is displayed.

  ################################################################
                 Dell PowerVault MD3000i Resource CD
  ################################################################  
1. View MD3000i Readme
2. Install MD3000i Storage Manager
3. Install Multi-pathing Driver
4. Install MD3000i Documentation
5. View MD3000i Documentation
6. iSCSI Setup Instructions
7. Dell Support
8. View End User License Agreement
Enter the number to select a component from the above list.
Enter q to quit.
Enter:

Installing the PowerVault Modular Disk Storage Manager

NOTE: Perform the following steps on each management node and cluster node.

The PowerVault Modular Disk Storage Manager contains the necessary management software and host agent utilities. To launch the installer:

1. Complete the steps in Dell PowerVault MD3000i Resource CD.
2. Select Option 2. Install MD3000i Storage Manager.
3. The Java-based installer appears.
NOTE: If you encounter any errors or do not see the Java™ installer, ensure that you have completed the steps in Dell PowerVault MD3000i Resource CD. For more information, go to the main menu and view documentation.
4. Follow the instructions on the screen. To continue with installation, accept the license terms.
5. Select an installation location. The default value is /opt/dell/mdstoragemanager.
6. The Select an Installation Type window appears and displays the following options. Choose the option relevant to your node's function:

Install Option        Node Type
Typical (Full)        Select this option for a node that will be both a management node and a cluster node
Management Station    Select this option for a dedicated management node
Host                  Select this option for a cluster node

NOTE: If you choose Typical (Full) or Host, the Multi-Path Proxy driver warning is displayed. It is safe to ignore this warning. For more information, see the RDAC Multi-Path Proxy Driver section below.
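
Once the Storage Manager is installed on a management node, the graphical client can be launched from the installation directory. The path below is a sketch based on the default installation location chosen in step 5; the client script name and location may vary by Storage Manager version, so verify them against your installation directory:

[root]# /opt/dell/mdstoragemanager/client/SMclient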

RDAC Multi-Path Proxy Driver

Each cluster node has multiple paths to the same virtual disk via the iSCSI I/O networks. Without multipath handling, each path appears to the node as a separate device, and accessing the same data through different paths can lead to data corruption. The Multi-Path Proxy driver consolidates all paths to a virtual disk into a single pseudo device, so the nodes access the pseudo device rather than an individual path. If a path fails, the Multi-Path Proxy driver automatically switches to an alternate path, and the node continues to access the same data through the same pseudo device.

Red Hat Enterprise Linux 5.1 Advanced Platform includes a multipath driver as part of the base operating system (the device-mapper-multipath package). However, Dell provides a Multi-Path Proxy driver that is specific to the PowerVault MD3000i storage array, and it is recommended that you use this Dell-specific RDAC Multi-Path Proxy (MPP) driver with the Dell|Red Hat HA Linux Cluster.


NOTE: For more information on device-mapper-multipath, see Using Device-Mapper Multipath located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.
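
If you want to confirm whether the base operating system multipath components are present on a node before installing the RDAC MPP driver, the standard package and service queries can be used (a minimal sketch; the multipathd service entry exists only if device-mapper-multipath is installed):

[root]# rpm -q device-mapper-multipath
[root]# chkconfig --list multipathd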

Installing the Multi-Path Proxy Driver via MD3000i Resource CD

To install the RDAC MPP driver:

  1. Complete the steps in Dell PowerVault MD3000i Resource CD.
  2. Select Option 3. Install Multi-pathing Driver.
  3. Follow the steps on the screen to complete the installation. After it completes, the following message appears:
DKMS: install Completed
You must restart your computer for the new settings to take effect.

For additional information about installing the RDAC MPP driver, see RDACreadme.txt on the Dell PowerVault MD3000i Resource media.

Installing the Multi-Path Proxy Driver via Dell Repository

Ensure you have configured your systems to access the Dell Software Repository. See the System section for details.

Install the Multi-Path Proxy driver:

[root]# yum install linuxrdac
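
With either installation method, the driver takes effect only after a reboot. Assuming the linuxrdac package is also DKMS-based, as the Resource CD installer output above indicates for that method, you can confirm that the module set was built for the running kernel before rebooting (a minimal sketch):

[root]# dkms status
[root]# reboot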

Storage Configuration

Reconnect the iSCSI I/O Ethernet cables to your nodes; these cables were disconnected during the operating system installation described in the System section.

Initially Configuring the Shared Storage System

The following steps need to be performed only during the initial setup of the storage array. For more information on the procedures below, consult the Dell PowerVault Modular Disk Storage Manager User's Guide.

Discovering your Storage Array

NOTE: Prior to this step, ensure that the storage array management ports are connected to the private network. Refer to Cabling the Management Ports for more details.

  1. On a management node, open the PowerVault Modular Disk Storage Manager application. For more information on installing this software, refer to Installing the PowerVault Modular Disk Storage Manager.
  2. Click New at the top of the Manager window. This will launch the Add New Storage Array wizard.
  3. Choose Automatic. This option will scan your network for MD3000i Storage Arrays.
  4. When the scan is complete, your MD3000i Storage Array should be visible. If it is not, check all physical connections to ensure that the management node and the MD3000i management ports are on the same private network.

NOTE: If this process fails, consult the section Adding Storage Arrays in the Dell PowerVault Modular Disk Storage Manager User's Guide.

Initial Setup Tasks

The Initial Setup Tasks window opens when the storage array is first discovered. If it does not open, click the Initial Setup Tasks link at the top of the MD Storage Manager window. This window contains links to the basic commands needed to configure your storage array; these steps can also be completed manually at any time. Follow these steps to complete the basic configuration of your storage array. For more information on any of these steps, see the section Setting Up Your Storage Array in the Dell PowerVault Modular Disk Storage Manager User's Guide.

Perform the following steps:

  • Rename the Storage Array
  • Set a Storage Array Password
  • Set up Alert Notifications
  • Configure iSCSI Host Ports
    • See the section below for more details
  • Configure Host Access (automatic)
    • If the MD Agent is running properly on the nodes, you can configure host access from this Initial Setup Tasks wizard. However, if no host initiators are displayed, skip this step for now; you will perform manual host access setup later (see Configuring Host Access below).
  • Configure Storage Array
    • Hot Spare assignment
    • Virtual Disks and Disk Groups
      • In most cases one large Virtual Disk is adequate. However, create as many Virtual Disks as required by your application.
  • Configure Ethernet Management Ports
    • This is the last task under the Optional heading
    • Assign IP addresses on the private network.

Configuring iSCSI Host Ports

  • Assign IP addresses to the iSCSI host ports. For a fully redundant setup, it is recommended to set up two separate subnets: configure controller 0, port 0 and controller 1, port 0 on one subnet, and configure controller 0, port 1 and controller 1, port 1 on a separate subnet. The following table lists the default iSCSI host port IP addresses for reference:
MD3000i iSCSI Host Port    Network Subnet          Default Address
Controller 0, Port 0       iSCSI I/O network #1    192.168.130.101/24
Controller 0, Port 1       iSCSI I/O network #2    192.168.131.101/24
Controller 1, Port 0       iSCSI I/O network #1    192.168.130.102/24
Controller 1, Port 1       iSCSI I/O network #2    192.168.131.102/24

Updating the Firmware

For the latest PowerVault MD3000i firmware, see the Dell Support website at support.dell.com. Use the PowerVault Modular Disk Storage Manager to perform the firmware update. See the section Installing the PowerVault Modular Disk Storage Manager and the documentation that came with your storage array for more information.

Configuring Nodes for Storage Access

On each cluster node:

Assign IP Addresses to Storage NICs

  1. Identify which NIC ports have been cabled to the iSCSI I/O network.
  2. Assign IP addresses to each NIC port (a sketch of a static NIC configuration appears after the examples below).
    NOTE: Ensure the correct IP address is assigned on the correct iSCSI I/O subnet.
  3. Allow port 3260 through each node's firewall from the MD3000i target subnets. In most cases, simply allowing the entire iSCSI I/O subnets will suffice. For example, on each node, allow the two I/O subnets as follows:
[root]# iptables -I INPUT -s {iSCSI I/O network #1} -j ACCEPT
[root]# iptables -I INPUT -s {iSCSI I/O network #2} -j ACCEPT

For example:

[root]# iptables -I INPUT -s 192.168.130.0/24 -j ACCEPT
[root]# iptables -I INPUT -s 192.168.131.0/24 -j ACCEPT
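
A minimal sketch of steps 2 and 3 on one node, assuming eth1 is one of the iSCSI I/O NICs and that the standard Red Hat network scripts and iptables service are in use (the device name and address below are examples only):

[root]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.130.11
NETMASK=255.255.255.0
ONBOOT=yes

[root]# ifup eth1
[root]# service iptables save

The service iptables save command writes the rules added above to /etc/sysconfig/iptables so that they persist across reboots.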

Install and Configure iSCSI Initiator

  • Install the iSCSI initiator utilities
[root]# yum install iscsi-initiator-utils
  • Determine which NICs are connected to the iSCSI I/O networks. The utilities ethtool or mii-tool may assist in determining which NICs have link.
  • Assign IP addresses on each iSCSI I/O NIC. These should correspond to the separate subnets set up for the iSCSI I/O networks.
  • Edit the file /etc/iscsi/initiatorname.iscsi. This establishes the iSCSI qualified name (iqn). One is randomly generated for you, but modifying it to a more easily identifiable name is recommended. This iqn will be used to associate your host with a host group (a sketch of the file contents appears after this list). The industry-accepted format is:
iqn.yyyy-mm.{reversed domain name}:{unique identifier} 

For example:

iqn.2008-03.com.example:node1
  • Start the iSCSI initiator:
[root]# service iscsi start
NOTE: If the automatic host configuration was not completed, you will see the warning message "No host records found!". It is safe to ignore this message until manual host configuration has been completed.
  • Configure the iSCSI initiator to start on boot:
[root]# chkconfig iscsi on
  • Initiate a connection to the MD3000i. This step will allow your host's initiator to register on the MD3000i:
[root]# iscsiadm -m discovery -t sendtargets -p {IP address of an iSCSI host port}

For example:

[root]# iscsiadm -m discovery -t sendtargets -p 192.168.130.101
192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.131.101:3260,2 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.131.102:3260,1 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
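
On Red Hat Enterprise Linux, the iqn configured above is stored in /etc/iscsi/initiatorname.iscsi as a single InitiatorName line. The sketch below shows the expected contents for the example name used earlier, along with a command that lists the node records created by discovery (the target iqn reported by your array will differ):

[root]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2008-03.com.example:node1
[root]# iscsiadm -m node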

Completing Shared Storage System Configuration

For more information on all procedures below, consult the section About Your Hosts in the Dell PowerVault Modular Disk Storage Manager User's Guide.

Configuring Host Access

  • Create Host Group
  • Associate Host initiators with Host Group
    • Ensure all cluster nodes are part of the same Host Group
  • Associate Virtual Disk with Host Group
    • Ensure all Virtual Disks to be accessed by the cluster are part of the same Host Group

Completing Node Configuration

Once the host initiators have been associated with a host group that has Virtual Disks mapped to it, you must discover the Virtual Disks on each node.

  • To discover available targets:
[root]# iscsiadm -m discovery -t sendtargets -p {IP address of an iSCSI host port}

For example:

[root]# iscsiadm -m discovery -t sendtargets -p 192.168.130.101
192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.131.101:3260,2 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
192.168.131.102:3260,1 iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b
  • Log in to one of the targets:
[root]# iscsiadm -m node -T {target iqn} -p {IP address of an iSCSI host port} -l

where {target iqn} is one of the reported targets from the previous command. For example:

[root]# iscsiadm -m node -T iqn.1984-05.com.dell:powervault.6001c23000c98b820000000047f52d0b -p 192.168.130.101 -l
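
After the login completes, you can confirm the active iSCSI sessions and check that the virtual disks are visible as block devices. A minimal sketch (the device names depend on your configuration; with the RDAC MPP driver loaded, the multiple paths are presented as a single pseudo device):

[root]# iscsiadm -m session
[root]# cat /proc/partitions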

Verification Checklist

Verify each of the following items:

  • Physical setup
  • Physical cabling
  • Multi-Path Proxy driver (linuxrdac package) installed
  • Dell™ PowerVault™ MD3000i storage array configured
  • PowerVault MD3000i Storage Manager installed on a node
  • Virtual Disk(s) created
  • Host Group defined and mapped to Virtual Disks