Products/HA/DellRedHatHALinuxCluster/Storage/PowerVault MD3000/Storage Configuration

From DellLinuxWiki


Dell|Red Hat HA Linux > Storage > PowerVault MD3000 > Storage Configuration


Storage Configuration

Connect your SAS cables to your nodes that were disconnected during the operating system installation in Preparing the Operating System.

Initially Configuring the Shared Storage System

The following steps only need to be performed during the initial setup of the storage array.

For more information on all procedures below, consult the User's Guide on the Dell PowerVault MD3000 documentation site.

Discovering your Storage Array

NOTE: Prior to this step, ensure that the Storage Array management ports are connected to the private network. Refer to Cabling the Management Ports for more details.

  1. On a management node, open the PowerVault Modular Disk Storage Manager application. A link to this application can be found on the desktop. For more information on installing this software, refer to Software Installation.
    NOTE: If a link is not found, launch the application manually from the installed location. The default installation path is /opt/dell/mdstoragemanager/client/SMclient
  2. Click New at the top of the Manager window. This will launch the Add New Storage Array wizard.
    NOTE: If clicking New does not seem to perform an action, try using the shortcut keys <ALT>+<TAB> and select the window that is in the background, or minimize the main window.
  3. Choose Automatic. This option will scan your network for MD3000 Storage Arrays.
  4. When the scan is complete, your MD3000 Storage Array should be visible. If it is not, check all physical connections to ensure that the management node and the MD3000 management ports are on the same private network.

NOTE: If this process fails, consult the section Adding Storage Arrays in the User's Guide on the Dell PowerVault MD3000 documentation site.
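If the desktop link mentioned in step 1 is missing, the client can also be launched from its default install path. A minimal sketch, assuming the default path from the NOTE above:

```shell
# Hedged sketch: launch the Storage Manager client from its default install
# path if the desktop link is missing. The path below is the documented
# default and may differ on your system.
SMCLIENT=/opt/dell/mdstoragemanager/client/SMclient
if [ -x "$SMCLIENT" ]; then
    "$SMCLIENT" &
    msg="launched $SMCLIENT"
else
    msg="SMclient not found at $SMCLIENT"
fi
echo "$msg"
```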

Initial Setup Tasks

The Initial Setup Tasks window will open when the storage array is first discovered. This window contains links to the basic commands needed to configure your Storage Array. For more information, see the section About Your Storage Array in the User's Guide on the Dell PowerVault MD3000 documentation site.

Rename the Storage Array

Select Rename the Storage Array from the Initial Setup Tasks window, or choose Tools then Rename Storage Array.

Set a Storage Array Password

Select Set a Storage Array Password from the Initial Setup Tasks window, or choose Tools then Set or Change Password.

Configure Ethernet Management Ports

Select Configure Ethernet Management Ports from the Initial Setup Tasks window, or choose Tools then Configure Ethernet Management Ports.

Updating the Firmware

For the latest PowerVault MD3000 firmware, see the Dell Support website at support.dell.com. Use the PowerVault Modular Disk Storage Manager to perform the firmware update. For more information on any of these steps, see the section Firmware Downloads in the User's Guide on the Dell PowerVault MD3000 documentation site.

Initially Configuring Nodes for Storage Access

Start the SMagent service on all nodes before continuing.

[root]# service SMagent start
Dell Modular Disk Storage Manager Host Agent, Version 10.00.A6.04
Built Mon Sep 22 09:23:01 CDT 2008
Copyright (C) 2006 - 2008  Dell Inc. All rights reserved.
Checking device <n/a> (/dev/sg4) : Skipping
Checking device /dev/sdc (/dev/sg5) : Activating
Checking device <n/a> (/dev/sg6) : Skipping
Checking device /dev/sdd (/dev/sg7) : Activating
 
Running...    

NOTE: There is no restart or status option for the SMagent service. Even if it is already running, the start command will stop any running SMagent first.
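With more than a couple of nodes, starting the service from one host can save time. A hedged sketch, where node1 and node2 are placeholder hostnames for your cluster nodes:

```shell
# Hedged sketch: start SMagent on every node from a single host.
# 'node1' and 'node2' are placeholders; substitute your node hostnames.
cmds=""
for node in node1 node2; do
    # Swap 'echo' for the real remote invocation:
    #   ssh "$node" service SMagent start
    cmds="$cmds$node "
    echo "would run: ssh $node service SMagent start"
done
```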

Completing Shared Storage System Configuration

For detailed information on all procedures below, consult the section About Your Host in the User's Guide on the Dell PowerVault MD3000 documentation site.

Configuring Host Access

This section describes how to grant your nodes access to the storage array and create a Virtual Disk for their use.

Create Host Access

Host access can be created using the Automatic or Manual methods.

Create Host Access (Automatic)
  1. Click on the Configure tab and choose Configure Host Access (Automatic).
  2. All hosts that are running the MD SMagent service will appear here.
    NOTE: If a host does not appear here, ensure SMagent is started on all hosts with the command service SMagent start. Refresh the list of hosts by clicking on Configure and Configure Host Access (Automatic) again. If your hosts still do not show up, you must complete the steps in Create Host Access (Manual).
  3. Select all of your nodes and click Add then Ok.
  4. Select Ok a final time to complete this step.
  5. Proceed to Create Host Group.
Create Host Access (Manual)

If you completed Create Host Access (Automatic), then proceed to Create Host Group.

  1. Click on Configure then Configure Host Access (Manual).
  2. Enter the host name of one of your nodes.
  3. Select Linux from the drop-down list for Select host type and click Next.
  4. Select the SAS Addresses recorded in the step Configuring the SAS HBAs to assign them to this host name and click Next.
  5. Leave the option selected for No: This host will NOT share access to the same virtual disks with other hosts - we will configure a host group to provide shared access after all host names have been defined.
  6. Select Next to continue, and Finish to complete manual host access for this node. Select Yes when asked to define another host if needed, and repeat these steps for any remaining nodes.
  7. Proceed to Create Host Group.

Create Host Group

  1. In the Configure tab, choose Create Host Group.
  2. Enter a host group name and select both hosts to add, then select Ok.
  3. Select Ok. The host group is now created.

Create Virtual Disk

  1. In the Configure tab, select Create Disk Groups and Virtual Disks.
  2. Select Disk Group to create a new disk group, and click Next.
  3. Assign a Disk Group name, and select Automatic for a simplified configuration, or Manual for more control over the RAID setup.
  4. When this step is completed, select Create a virtual disk using the new disk group and choose Yes.
  5. Assign virtual disk capacity according to the needs of your application(s), and select Next.
  6. Leave the option Map now selected.
  7. Select the host group created in Create Host Group and click Finish.
  8. On the final screen, select No unless you need to create additional Virtual Disks.

Completing Node Configuration

The simplest way to ensure the new Virtual Disk is visible is to reboot all nodes. However, you may initiate a rescan to see the new Virtual Disk without a reboot. This can be accomplished with the mppBusRescan tool, installed with the linuxrdac package, as follows:

[root]# mppBusRescan
Starting new devices re-scan...
scan mptsas HBA host /sys/class/scsi_host/host5...
        no new device found
scan mptsas HBA host /sys/class/scsi_host/host3...
        no new device found
run /usr/sbin/mppUtil -s busscan...
scan mpp virtual host /sys/class/scsi_host/host4...
  Vendor: DELL      Model: MD Virtual Disk   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 419430400 512-byte hdwr sectors (214748 MB)
sdc: Write Protect is off
SCSI device sdc: drive cache: write back w/ FUA
SCSI device sdc: 419430400 512-byte hdwr sectors (214748 MB)
sdc: Write Protect is off
SCSI device sdc: drive cache: write back w/ FUA
sd 4:0:0:0: Attached scsi disk sdc
sd 4:0:0:0: Attached scsi generic sg6 type 0
        found 4:0:0:0->/dev/sdc
/usr/sbin/mppBusRescan is completed.
 

If you are performing this command remotely, some output will only appear on the console. You can verify these messages in the /var/log/dmesg log, or with the dmesg command.
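When working remotely, the kernel messages can be checked non-interactively. A hedged sketch that greps for the "Attached scsi disk" lines shown in the rescan output above; the sample text stands in for real dmesg output, which you would pipe through the same grep on a live node:

```shell
# Hedged sketch: after mppBusRescan, confirm the kernel attached a new disk.
# The here-string sample mirrors the rescan output above; on a node, run
#   dmesg | grep -o 'Attached scsi disk sd[a-z]*'
sample_dmesg="sd 4:0:0:0: Attached scsi disk sdc
sd 4:0:0:0: Attached scsi generic sg6 type 0"
new_disks=$(printf '%s\n' "$sample_dmesg" | grep -o 'Attached scsi disk sd[a-z]*')
echo "$new_disks"
```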

After rebooting or running the mppBusRescan command, the new Virtual Disk will be visible on all your nodes. View partitions with the command:

[root@node1]# cat /proc/partitions
major minor  #blocks  name

   8     0  142082048 sda
   8     1     104391 sda1
   8     2   20482875 sda2
 253     0   18448384 dm-0
 253     1    2031616 dm-1
   8    32  142323712 sdc

In this example, the internal disk is sda and the MD3000 Virtual Disk is sdc; the device names may differ in your configuration.
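The candidate shared disks can be picked out of /proc/partitions mechanically by skipping the internal disk and its partitions. A hedged sketch, using sample data that mirrors the listing above (on a node, feed the output of cat /proc/partitions instead; the assumption that the internal disk is sda may not hold on your hardware):

```shell
# Hedged sketch: list whole disks other than the internal sda from
# /proc/partitions output. Fields are: major minor #blocks name.
sample="   8     0  142082048 sda
   8     1     104391 sda1
 253     0   18448384 dm-0
   8    32  142323712 sdc"
# Match whole-disk names sdb..sdz only (no partition numbers, no sda).
candidates=$(printf '%s\n' "$sample" | awk '$4 ~ /^sd[b-z]$/ {print $4, $3}')
echo "$candidates"
```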

You may also use the SMdevices command to view MD3000 Virtual Disk mappings:

[root@node1]# SMdevices
Dell Modular Disk Storage Manager Devices, Version 10.01.A6.01
Built Mon Sep 22 09:20:05 CDT 2008
Copyright (C) 2006 - 2008  Dell Inc. All rights reserved.

  <n/a> (/dev/sg4) [Storage Array rh-md3000-2, Virtual Disk Access, LUN 31, Virtual Disk ID <60019b9000c2e62c00000951496f5275>]
  <n/a> (/dev/sg5) [Storage Array rh-md3000-2, Virtual Disk Access, LUN 31, Virtual Disk ID <60019b9000c2e62c00000951496f5275>]
  /dev/sdc (/dev/sg6) [Storage Array rh-md3000-2, Virtual Disk rh_virtual_disk, LUN 0, Virtual Disk ID <60019b9000c2e62c00000b2149c266ac>, Preferred Path (Controller-0): In Use] 

Repeat these steps on any nodes that do not have access to the Virtual Disk. These nodes may show the Access Virtual Disks as 20 MiB disks:

[root@node2]# cat /proc/partitions
major minor  #blocks  name
 
   8     0   71041024 sda
   8     1     104391 sda1
   8     2   20482875 sda2
 253     0   18448384 dm-0
 253     1    2031616 dm-1
   8    32      20480 sdc
   8    48      20480 sdd

In this example, sdc and sdd are Access Virtual Disks; they are not the Virtual Disk that was created. Once all the software is installed and a Virtual Disk is accessible by the node, these disks will disappear.

The two disks listed here are Access Virtual Disks only:

[root@node2]# SMdevices
Dell Modular Disk Storage Manager Devices, Version 10.01.A6.01
Built Mon Sep 22 09:20:05 CDT 2008
Copyright (C) 2006 - 2008  Dell Inc. All rights reserved.

 /dev/sdc (/dev/sg5) [Storage Array rh-md3000-2, Virtual Disk Access, LUN 31, Virtual Disk ID <60019b9000c2e62c00000951496f5275>]
 /dev/sdd (/dev/sg7) [Storage Array rh-md3000-2, Virtual Disk Access, LUN 31, Virtual Disk ID <60019b9000c2e62c00000951496f5275>]

Once all nodes have visibility to the new Virtual Disk, take one last step to verify that they all see the same SCSI ID. In some cases the new Virtual Disk may appear with a different SCSI device name on some nodes; for example, if a particular node already has an extra internal SCSI disk configured as /dev/sdc, the Virtual Disk may appear as /dev/sdd. This does not affect the configuration of the cluster, since Logical Volume Manager (LVM) will be used. Use the following command to verify that all nodes see the same Virtual Disk:

[root@node1]# scsi_id -gxs /block/sdc
ID_VENDOR=DELL
ID_MODEL=MD_Virtual_Disk
ID_REVISION=0735
ID_SERIAL=36001c23000b9858c0000a32d49c21095
ID_TYPE=disk
ID_BUS=scsi

[root@node2]# scsi_id -gxs /block/sdc
ID_VENDOR=DELL
ID_MODEL=MD_Virtual_Disk
ID_REVISION=0735
ID_SERIAL=36001c23000b9858c0000a32d49c21095
ID_TYPE=disk
ID_BUS=scsi
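The comparison can be scripted rather than eyeballed. A hedged sketch: the serials below are copied from the sample output above; on live nodes you would populate each variable with something like ssh $node 'scsi_id -gxs /block/sdc' filtered down to the ID_SERIAL line:

```shell
# Hedged sketch: compare the ID_SERIAL reported by each node. The values
# here are the sample serials from the output above; fill them from
# scsi_id on real nodes (device names may differ per node).
node1_serial="36001c23000b9858c0000a32d49c21095"
node2_serial="36001c23000b9858c0000a32d49c21095"
if [ "$node1_serial" = "$node2_serial" ]; then
    echo "all nodes see the same Virtual Disk"
else
    echo "serial mismatch: check device names on each node"
fi
```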



Continue to Appendix


