Products/HA/DellRedHatHALinuxCluster/Storage/PowerVault MD3000i/Storage Configuration




Storage Configuration

Reconnect the iSCSI I/O cables to your nodes; these were disconnected during the operating system installation in Preparing the Operating System.

Initially Configuring the Shared Storage System

The following steps only need to be performed during the initial setup of the storage array.

For more information on all of the procedures below, consult the User's Guide on the Dell PowerVault MD3000i documentation site.

Discovering your Storage Array

NOTE: Prior to this step, ensure that the Storage Array management ports are connected to the private network. Refer to Cabling the Management Ports for more details.

  1. On a management node, open the PowerVault Modular Disk Storage Manager application. A link to this application can be found on the desktop. For more information on installing this software, refer to Software Installation.
    NOTE: If a link is not found, launch it manually from the installed location (see the example below). The default installation path is /opt/dell/mdstoragemanager/client/SMclient.
  2. Click New at the top of the Manager window. This will launch the Add New Storage Array wizard.
    NOTE: If clicking New does not seem to perform an action, try using the shortcut keys <ALT>+<TAB> and select the window that is in the background, or minimize the main window.
  3. Choose Automatic. This option will scan your network for MD3000i Storage Arrays.
  4. When the scan is complete, your MD3000i Storage Array should be visible. If it is not, check all physical connections to ensure that the management node and the MD3000i management ports are on the same private network.

NOTE: If this process fails, consult the section Adding Storage Arrays in the Dell PowerVault Modular Disk Storage Manager User's Guide on the Dell PowerVault MD3000i documentation site.
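
If no desktop link is present, the client can also be launched directly from a terminal, assuming the default install path noted in step 1 (adjust the path if a different install location was chosen):

[root]# /opt/dell/mdstoragemanager/client/SMclient &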

Initial Setup Tasks

The Initial Setup Tasks window opens when the storage array is first discovered. This window contains links to the basic commands needed to configure your Storage Array. For more information, see the section About Your Storage Array in the User's Guide on the Dell PowerVault MD3000i documentation site.

Rename the Storage Array

Select Rename the Storage Array from the Initial Setup Tasks window, or choose Tools then Rename Storage Array.

Set a Storage Array Password

Select Set a Storage Array Password from the Initial Setup Tasks window, or choose Tools then Set or Change Password.

Configure Ethernet Management Ports

Select Configure Ethernet Management Ports from the Initial Setup Tasks window, or choose Tools then Configure Ethernet Management Ports.

Configuring iSCSI Host Ports

  1. Click on the iSCSI tab
  2. Under Identification and networking click on Configure iSCSI Host Ports
  3. Select each controller port from the drop-down.
  4. Assign IP addresses to the iSCSI ports. For a fully redundant setup, it is recommended to set up two separate subnets. Configure controller 0, port 0 and controller 1, port 0 on the same subnet. Configure controller 0, port 1 and controller 1, port 1 on a separate subnet. The following table displays the default iSCSI host port IP addresses for reference:
    NOTE: The MAC address of each port is displayed here; you will need it if you assign the IP address through a DHCP reservation instead of statically.
MD3000i iSCSI host port  Network Subnet        Default Address
Controller 0, Port 0     iSCSI I/O network #1  192.168.130.101/24
Controller 0, Port 1     iSCSI I/O network #2  192.168.131.101/24
Controller 1, Port 0     iSCSI I/O network #1  192.168.130.102/24
Controller 1, Port 1     iSCSI I/O network #2  192.168.131.102/24

Updating the Firmware

For the latest PowerVault MD3000i firmware, see the Dell Support website at support.dell.com. Use the PowerVault Modular Disk Storage Manager to perform the firmware update. For more information on any of these steps, see the section Firmware Downloads in the User's Guide on the Dell PowerVault MD3000i documentation site.

Initially Configuring Nodes for Storage Access

The following section is relevant to your cluster nodes.

Assign IP Addresses to Storage NICs

  1. Identify which NIC ports have been cabled to the iSCSI I/O network.
  2. Assign IP addresses to each NIC port (a sample interface configuration file is shown at the end of this section).
    NOTE: Ensure the correct IP address is assigned on the correct iSCSI I/O subnet.
  3. Allow port 3260 through each node's firewall from the MD3000i target subnets. In most cases, simply allowing the entire iSCSI I/O subnets will suffice. For example, on each node, allow the two I/O subnets as follows:
[root]# iptables -I INPUT -s {iSCSI I/O network #1} -j ACCEPT
[root]# iptables -I INPUT -s {iSCSI I/O network #2} -j ACCEPT

For example:

[root]# iptables -I INPUT -s 192.168.130.0/24 -j ACCEPT
[root]# iptables -I INPUT -s 192.168.131.0/24 -j ACCEPT

Save the new firewall rules:

[root]# service iptables save
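
For step 2 above (assigning IP addresses to the storage NICs), a minimal sketch of a static configuration for one iSCSI NIC follows. It assumes the NIC cabled to iSCSI I/O network #1 is eth1 and that 192.168.130.201 is a free address on that subnet; substitute your actual interface names and addresses, and repeat for the NIC on the second I/O subnet:

[root]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.130.201
NETMASK=255.255.255.0
ONBOOT=yes
[root]# ifup eth1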

Install and Configure iSCSI Initiator

  • Install the iSCSI initiator utilities
[root]# yum install iscsi-initiator-utils
  • Determine which NICs are connected to the iSCSI I/O networks. The utilities ethtool or mii-tool may assist in determining which NICs have link (see the sketch after this list).
  • Assign IP addresses on each iSCSI I/O NIC. These should correspond to the separate subnets set up for the iSCSI I/O networks configured in Configuring iSCSI Host Ports.
  • Edit the file /etc/iscsi/initiatorname.iscsi. This file defines the iSCSI qualified name (iqn) of the host. A random iqn is generated for you during installation, but changing it to a more easily identifiable name is recommended. This iqn will be used to associate your host with a host group. The industry-accepted format is:
Format: iqn.yyyy-mm.{reversed domain name}:{unique identifier}

For example:

iqn.2008-03.com.example:node01

Using the hostname as the {unique identifier} makes it easier to identify this host when configuring Host Access.
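
A minimal sketch of the link check and the initiator name change follows. It assumes the first iSCSI NIC is eth1, uses the example domain and hostname convention above, and relies on the file consisting of a single InitiatorName= line (output shown for a host named node01):

[root]# ethtool eth1 | grep "Link detected"
        Link detected: yes
[root]# echo "InitiatorName=iqn.2008-03.com.example:$(hostname -s)" > /etc/iscsi/initiatorname.iscsi
[root]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2008-03.com.example:node01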

Modify iSCSI Settings

  • Change the following settings in the /etc/iscsi/iscsid.conf configuration file. A sample file is provided on the MD3000i Resource CD for reference, but it is usually best to edit the default file. Change the values as follows; if any of them are missing from the configuration file, add them:
node.session.timeo.replacement_timeout = 144
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.DataDigest = None

You may place these settings at the end of the file, comment out any existing settings, or just change the values.
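
To confirm that all six values are active after editing, a quick check such as the following can be used (the pattern simply matches the setting names listed above); it should print the six settings with the values shown:

[root]# grep -E "^node\.(session\.timeo|conn\[0\])" /etc/iscsi/iscsid.conf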

Initiate Preliminary Access to the Storage

  • Start the iSCSI initiator:
[root]# service iscsi start

NOTE: At this point no targets have been discovered, so you will see the warning message "No host records found!" It is safe to ignore this message.

  • Configure the iSCSI initiator to start on boot:
[root]# chkconfig iscsi on
  • Initiate a connection to the MD3000i. This step will allow your host's initiator to register on the MD3000i:
[root]# iscsiadm -m discovery -t sendtargets -p {IP address of an iSCSI host port}

where {IP address of an iSCSI host port} is any of the four IP addresses you defined. For example:

[root]# iscsiadm -m discovery -t sendtargets -p 192.168.130.101
192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.md3000i.<truncated>
192.168.131.101:3260,2 iqn.1984-05.com.dell:powervault.md3000i.<truncated>
192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.md3000i.<truncated>
192.168.131.102:3260,1 iqn.1984-05.com.dell:powervault.md3000i.<truncated>
  • Log in to all available targets. At this point no access has been defined, so the only available target is the MD3000i Access Virtual Disk. This is a pseudo Virtual Disk that is used to define host access.
[root]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
<output truncated>
iscsiadm: initiator reported error (4 - encountered connection failure)
  Vendor: DELL      Model: MD3000i           Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
736 [RAIDarray.mpp]Host 3 Target 0 Lun 0 Is a physical device but is an Unconfig
ured Device.
  Vendor: DELL      Model: Universal Xport   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
mpp 3:0:0:31: Attached scsi generic sg4 type 0
<output truncated>
iscsiadm: Could not log into all portals. Err 8.

NOTE: It is safe to ignore the errors about logging in until access has been defined.

  • Start the SMagent on all cluster nodes:
[root]# service SMagent start
SMagent started.
Dell Modular Disk Storage Manager Host Agent, Version 10.00.A6.04    
Built Mon Sep 22 09:23:01 CDT 2008    
Copyright (C) 2006 - 2008  Dell Inc. All rights reserved.    
Checking device <n/a> (/dev/sg4) : Activating    
Checking device <n/a> (/dev/sg5) : Activating    
Checking device /dev/sdc (/dev/sg6) : Skipping    

Running...    

NOTE: There is no restart or status option for the SMagent service. Even if it is already running, the start command will stop any running SMagent first.

  • If you experience an error at this point such as the following:
The Host Context Agent did NOT start because 0 RAID controller modules were found.

This means the MD3000i Access Virtual Disk was not found. Restart the iscsi service and then start SMagent again:

[root]# service iscsi restart
[root]# service SMagent start

If you still see an error at this point, you will need to verify that all MD3000i targets are accessible from the problematic node. See Configuring iSCSI Host Ports for details on which target IP Addresses to test against.
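
One way to test this from a node is to run discovery against each of the four host-port addresses individually. The addresses below are the defaults from the table in Configuring iSCSI Host Ports; substitute the addresses you assigned:

[root]# for ip in 192.168.130.101 192.168.131.101 192.168.130.102 192.168.131.102; do iscsiadm -m discovery -t sendtargets -p $ip || echo "Cannot reach portal $ip"; done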

Completing Shared Storage System Configuration

For detailed information on all procedures below, consult the section About Your Host in the User's Guide on the Dell PowerVault MD3000i documentation site.

Configuring Host Access

This section will describe the process to create a Virtual Disk for use on your nodes.

Create Host Access

Host access can be created using the Automatic or Manual methods.

Create Host Access (Automatic)

  1. Click on the Configure tab and choose Configure Host Access (Automatic).
  2. All hosts that are running the SMagent service will appear here.
    NOTE: If a host does not appear here, ensure SMagent is started on all hosts with the command service SMagent start. Refresh the list of hosts by clicking on Configure and Configure Host Access (Automatic) again. If your hosts still do not show up, you must complete the steps in Create Host Access (Manual).
  3. Select all of your nodes and click Add then Ok.
  4. Select Ok a final time to complete this step.
  5. Proceed to Create Host Group.

Create Host Access (Manual)

If you completed Create Host Access (Automatic), then proceed to Create Host Group.

  1. Click on Configure then Configure Host Access (Manual).
  2. Enter the host name of one of your nodes.
  3. Select Linux from the drop-down list for Select host type and click Next.
  4. Select the iSCSI initiator defined in the step Install and Configure iSCSI Initiator to assign it to this host name and click Next.
  5. Leave No: This host will NOT share access to the same virtual disks with other hosts selected; a host group will be configured to provide shared access after all host names have been defined.
  6. Select Next to continue, and Finish to complete manual host access for this node. Select Yes when asked to define another host if needed, and repeat these steps for any remaining nodes.
  7. Proceed to Create Host Group.

Create Host Group

  1. In the Configure tab, choose Create Host Group.
  2. Enter a host group name and select all hosts to add, then select Ok.
  3. Select Ok. The host group is now created.

Create Virtual Disk

  1. In the Configure tab, select Create Disk Groups and Virtual Disks.
  2. Select Disk Group to create a new disk group, and click Next.
  3. Assign a Disk Group name, and select Automatic for a simplified configuration, or Manual for more control over the RAID setup.
  4. When this step is completed, select Create a virtual disk using the new disk group and choose Yes.
  5. Assign virtual disk capacity according to the needs of your application(s), and select Next.
  6. Leave the option Map now selected.
  7. Select the host group created in Create Host Group and click Finish.
  8. On the final screen, select No unless you need to create additional Virtual Disks.

Configuring CHAP on the MD3000i

There are two types of CHAP configurations supported on the MD3000i. The following table describes them.

MD3000i terminology  Authentication Type       A.K.A.                           Description
Target CHAP          Initiator Authentication  Forward, One-Way                 The initiator is authenticated by the target.
Mutual CHAP          Target Authentication     Reverse, Bi-directional, Mutual  The target is authenticated by the initiator. This method also requires Target CHAP (Initiator Authentication).
  • Target CHAP (Initiator Authentication) is the basic CHAP configuration. A password is created on the MD3000i target. Each node logs into the target with its own initiator iqn as the CHAP user ID, and uses the password created on the target as the CHAP secret.
  • Mutual CHAP (Target Authentication) is an authentication method in addition to Target CHAP (Initiator Authentication). A unique password is created for each node. The target logs into each node with its own target iqn as the CHAP user ID, and uses the password associated with each node as the CHAP secret. Target CHAP (Initiator Authentication) must also be configured in this scenario.

NOTE: CHAP is an optional security feature that is not required for iSCSI communications. You may skip this section and proceed to Completing Node Configuration if you do not wish to configure CHAP.

For more information, see the section Understanding CHAP Authentication in the Installation Guide on the PowerVault MD3000i documentation site.

Configuring Target CHAP (Initiator Authentication) on the MD3000i

  1. To enable Target CHAP (Initiator Authentication), select the iSCSI tab, then Change Target Authentication in the Authentication section.
  2. In the CHAP Settings section, uncheck None and select CHAP.
    NOTE: Be extremely careful in this selection. The software allows you to select both None and CHAP, but that configuration grants access even when CHAP authentication fails. Select only one of these options.
  3. Select CHAP Secret.
  4. Define a secret here, or click Generate Random Secret.
  5. Record this secret for use in Configuring CHAP on the Nodes later.

If you are not configuring Mutual CHAP (Target Authentication), skip to Completing Node Configuration.

Configuring Mutual CHAP (Target Authentication) on the MD3000i

  1. To enable Mutual CHAP (Target Authentication), select the iSCSI tab, then Enter Mutual Authentication Permissions in the Authentication section.
  2. All initiators will be defined here. If your initiator is not listed, see Configuring Host Access.
  3. Select an initiator, and click CHAP Secret.
  4. Define a unique secret for this initiator. This secret will be used in Configuring CHAP on the Nodes.
  5. Repeat this for all initiators.
    NOTE: The secret for each initiator must be unique and not the same as the target CHAP secret.

Completing Node Configuration

The following section is relevant to your cluster nodes.

Configuring CHAP on the Nodes

Ensure you have completed the steps in Configuring CHAP on the MD3000i prior to this section. If you are not configuring CHAP, you may skip to Connect Host to Virtual Disk.

NOTE: If any nodes were previously configured with CHAP settings, it may be necessary to perform the steps in Removing Previous CHAP Settings before continuing.

Configuring Target CHAP (Initiator Authentication) on the Nodes

Change the following settings in /etc/iscsi/iscsid.conf on each node. By default the settings are commented out, and contain default values. Uncomment each setting and change the value.

1. Enable CHAP for iSCSI sessions:

node.session.auth.authmethod = CHAP

2. Enable the CHAP username and password for iSCSI sessions that the initiator will use for authentication by the target:

node.session.auth.username = {iqn of initiator}
node.session.auth.password = {target secret}

For example:

node.session.auth.username = iqn.1994-05.com.redhat:node01
node.session.auth.password = targetsecret

3. Enable CHAP for iSCSI discovery:

discovery.sendtargets.auth.authmethod = CHAP

4. Enable the CHAP username and password for iSCSI discovery that the initiator will use for authentication by the target:

discovery.sendtargets.auth.username = {iqn of initiator}
discovery.sendtargets.auth.password = {target secret}

Example:

discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:node01
discovery.sendtargets.auth.password = targetsecret

The following values should be set for Target CHAP (Initiator Authentication):

[root]# grep -E "(^node.session.auth|^discovery.sendtargets.auth)" /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:node01
node.session.auth.password = targetsecret
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:node01
discovery.sendtargets.auth.password = targetsecret

Target CHAP (Initiator Authentication) is now complete. If Mutual CHAP (Target Authentication) will not be configured, skip to Starting the CHAP Session.

Configuring Mutual CHAP (Target Authentication) on the Nodes

NOTE: Complete the steps in Configuring Target CHAP (Initiator Authentication) on the Nodes first.

Change the following settings in /etc/iscsi/iscsid.conf on each node. Uncomment each setting and change the value.

1. Enable the CHAP username and password for iSCSI sessions that the target will use for authentication by the initiator:

node.session.auth.username_in = {iqn of MD3000i target}
node.session.auth.password_in = {initiator secret}
  • {iqn of MD3000i target} is the full iqn of the MD3000i target. To determine this value, open the MD Storage Manager and click on iSCSI then Change Target Identification. It is best to copy and paste this value. Do not use the iSCSI target alias.
  • {initiator secret} is the secret defined for this specific initiator in Configuring Mutual CHAP (Target Authentication) on the MD3000i.

For example:

node.session.auth.username_in = iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea
node.session.auth.password_in = node01secret

2. Enable the CHAP username and password for iSCSI discovery that the target will use for authentication by the initiator:

discovery.sendtargets.auth.username_in = {iqn of MD3000i target}
discovery.sendtargets.auth.password_in = {initiator secret}

Example:

discovery.sendtargets.auth.username_in = iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea
discovery.sendtargets.auth.password_in = node01secret

The following values should be set for both Target and Mutual CHAP (Initiator and Target Authentication):

[root]# grep -E "(^node.session.auth|^discovery.sendtargets.auth)" /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1994-05.com.redhat:node01
node.session.auth.password = targetsecret
node.session.auth.username_in = iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea
node.session.auth.password_in = node01secret
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:node01
discovery.sendtargets.auth.password = targetsecret
discovery.sendtargets.auth.username_in = iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea
discovery.sendtargets.auth.password_in = node01secret

Starting the CHAP Session

Simply restart the iscsi service to use the new CHAP configuration:

[root]# service iscsi restart
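
To confirm that the sessions were re-established with the new CHAP settings, list the active sessions; you should see one entry per target portal:

[root]# iscsiadm -m session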

Troubleshooting CHAP Issues

The following steps may assist in resolving any CHAP issues.

Removing Previous CHAP Settings

If you have changed CHAP settings, any new settings may not be read properly. Follow these steps to clear any previous configuration:

First stop the iscsi service:

[root]# service iscsi stop

Remove iscsi-initiator-utils:

[root]# yum remove iscsi-initiator-utils

Delete any existing session data:

[root]# rm -rf /var/lib/iscsi

Reinstall iscsi-initiator-utils:

[root]# yum install iscsi-initiator-utils

You may have noticed a warning message when the package was being removed:

warning: /etc/iscsi/iscsid.conf saved as /etc/iscsi/iscsid.conf.rpmsave

Use this to your advantage since it contains all of your existing configuration data:

[root]# cat /etc/iscsi/iscsid.conf.rpmsave > /etc/iscsi/iscsid.conf

Double-check your configuration:

[root]# grep -E "(^node.session.auth|^discovery.sendtargets.auth)" /etc/iscsi/iscsid.conf

Start the iscsi service:

[root]# service iscsi start

Debugging CHAP Communications

It may be helpful to enable debugging output during a discovery or login to ensure your new settings are being used:

[root]# iscsiadm -m node --login -d 8 2>&1 | grep -E "(CHAP|user|password)"
iscsiadm: updated 'node.session.auth.authmethod', 'None' => 'CHAP'
iscsiadm: updated 'node.session.auth.username',  => 'iqn.1994-05.com.redhat:node01'
iscsiadm: updated 'node.session.auth.password',  => 'targetsecret'
iscsiadm: updated 'node.session.auth.password_length', '0' => '12'
iscsiadm: updated 'node.session.auth.username_in',  => 'iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea'
iscsiadm: updated 'node.session.auth.password_in',  => 'node01secret'
iscsiadm: updated 'node.session.auth.password_in_length', '0' => '12'
 

Discovery contains a lot more output, some of which has been omitted:

[root]# iscsiadm -m discovery -t sendtargets -p 192.168.100.250 -d 8 2>&1 | grep -E "(CHAP|user|password)"
iscsiadm: updated 'discovery.sendtargets.auth.authmethod', 'None' => 'CHAP'
iscsiadm: updated 'discovery.sendtargets.auth.username',  => 'iqn.1994-05.com.redhat:node01'
iscsiadm: updated 'discovery.sendtargets.auth.password',  => 'targetsecret'
iscsiadm: updated 'discovery.sendtargets.auth.password_length', '0' => '12'
iscsiadm: updated 'discovery.sendtargets.auth.username_in',  => 'iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea'
iscsiadm: updated 'discovery.sendtargets.auth.password_in',  => 'node01secret'
iscsiadm: updated 'discovery.sendtargets.auth.password_in_length', '0' => '12'
iscsiadm: updated 'node.session.auth.authmethod', 'None' => 'CHAP'
iscsiadm: updated 'node.session.auth.username',  => 'iqn.1994-05.com.redhat:node01'
iscsiadm: updated 'node.session.auth.password',  => 'targetsecret'
iscsiadm: updated 'node.session.auth.password_length', '0' => '12'
iscsiadm: updated 'node.session.auth.username_in',  => 'iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea'
iscsiadm: updated 'node.session.auth.password_in',  => 'node01secret'
iscsiadm: updated 'node.session.auth.password_in_length', '0' => '12'
iscsiadm: >    AuthMethod=CHAP

Here is output from the discovery that shows what usernames the MD3000i is sending to the node:

iscsiadm: >    CHAP_N=iqn.1994-05.com.redhat:node01
iscsiadm: >    CHAP_N=iqn.1984-05.com.dell:powervault.md3000i.6001c23000b97d4200000000497093ea

Connect Host to Virtual Disk

Once the host initiators have been associated with a host group and its Virtual Disks, you must log in to the new Virtual Disk on each node. The easiest way to accomplish this is to restart the iscsi service:

[root]# service iscsi restart
Logging out of session [sid: 1, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
Logout of [sid: 1, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>: successful
Stopping iSCSI daemon:                                     [  OK  ]
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3000i.<truncated>
iscsi: registered transport (tcp)
iscsi: registered transport (iser)
  Vendor: D<5>  Vendor: DELL      Model: MD3000i           Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
DELL      Model: MD3000i           Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
  Vendor: D<5>  Vendor: DELL      Model: Universal Xport   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
DELL      Model: Universal Xport   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
mpp 5:0:0:31: Attached scsi generic sg4 type 0
mpp 6:0:0:31: Attached scsi generic sg5 type 0
  Vendor: DELL      Model: MD Virtual Disk   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 284647424 512-byte hdwr sectors (145739 MB)
sdc: Write Protect is off
SCSI device sdc: drive cache: write back w/ FUA
sd 4:0:0:0: Attached scsi disk sdc
sd 4:0:0:0: Attached scsi generic sg6 type 0

If you run this command remotely, only part of the output will appear in your terminal. You can verify the kernel messages in /var/log/messages, or with the dmesg command.

All four targets will be logged in.

A new Virtual Disk will be visible on all your nodes. View partitions with the command:

[root@node1]# cat /proc/partitions
major minor  #blocks  name

   8     0  142082048 sda
   8     1     104391 sda1
   8     2   20482875 sda2
 253     0   18448384 dm-0
 253     1    2031616 dm-1
   8    32  142323712 sdc

In this example, the internal disk is sda and the MD3000i Virtual Disk is sdc; the device names may differ in your configuration.

You may also use the SMdevices command to view MD3000i Virtual Disk mappings:

[root@node1]# SMdevices
Dell Modular Disk Storage Manager Devices, Version 10.01.A6.01
Built Mon Sep 22 09:20:05 CDT 2008
Copyright (C) 2006 - 2008  Dell Inc. All rights reserved.
 <n/a> (/dev/sg2) [Storage Array rh-md3000i-1, Virtual Disk Access, LUN 31, Virtual Disk ID <6001c23000c98b820000060e496f53d2>]
 <n/a> (/dev/sg3) [Storage Array rh-md3000i-1, Virtual Disk Access, LUN 31, Virtual Disk ID <6001c23000c98b820000060e496f53d2>]
 <n/a> (/dev/sg4) [Storage Array rh-md3000i-1, Virtual Disk Access, LUN 31, Virtual Disk ID <6001c23000c98b820000060e496f53d2>]
 <n/a> (/dev/sg5) [Storage Array rh-md3000i-1, Virtual Disk Access, LUN 31, Virtual Disk ID <6001c23000c98b820000060e496f53d2>]
 /dev/sdb (/dev/sg6) [Storage Array rh-md3000i-1, Virtual Disk clu-vrt-dsk, LUN 0, Virtual Disk ID <6001c23000c98b8200000dc049d4a961>, Preferred Path (Controller-1): In Use]

If your nodes do not see the new Virtual Disk, the simplest approach is to reboot them. Alternatively, you can initiate a rescan to detect the new Virtual Disk without a reboot using the following command:

[root@node1]# mppBusRescan
Starting new devices re-scan...
scan iSCSI software initiator host /sys/class/scsi_host/host3...
	no new device found
run /usr/sbin/mppUtil -s busscan...
  Vendor: D<5>  Vendor: DELL      Model: MD3000i           Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
DELL      Model: MD3000i           Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
  Vendor: D<5>  Vendor: DELL      Model: Universal Xport   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
DELL      Model: Universal Xport   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
mpp 5:0:0:31: Attached scsi generic sg4 type 0
mpp 6:0:0:31: Attached scsi generic sg5 type 0
  Vendor: DELL      Model: MD Virtual Disk   Rev: 0735
  Type:   Direct-Access                      ANSI SCSI revision: 05
SCSI device sdc: 284647424 512-byte hdwr sectors (145739 MB)
sdc: Write Protect is off
SCSI device sdc: drive cache: write back w/ FUA
sd 4:0:0:0: Attached scsi disk sdc
sd 4:0:0:0: Attached scsi generic sg6 type 0
scan mpp virtual host /sys/class/scsi_host/host4...
	found 4:0:0:0->/dev/sdc
/usr/sbin/mppBusRescan is completed.

In this example the new virtual disk is /dev/sdc. Note that not all nodes may see the same device name, but the size and SCSI ID will be the same.

Repeat these steps on any nodes that do not have access to the Virtual Disk.

Once all nodes can see the new Virtual Disk, verify as a final step that they all report the same SCSI ID. In some cases the new Virtual Disk may appear with a different SCSI device name on some nodes; for example, if a node already has an extra internal SCSI disk configured as /dev/sdc, the Virtual Disk may appear as /dev/sdd. This does not affect the cluster configuration, because Logical Volume Manager (LVM) will be used. Use the following command on each node to verify that they all see the same Virtual Disk:

[root@node1]# scsi_id -gxs /block/sdc
ID_VENDOR=DELL
ID_MODEL=MD_Virtual_Disk
ID_REVISION=0735
ID_SERIAL=36001c23000b9858c0000a32d49c21095
ID_TYPE=disk
ID_BUS=scsi

[root@node2]# scsi_id -gxs /block/sdc
ID_VENDOR=DELL
ID_MODEL=MD_Virtual_Disk
ID_REVISION=0735
ID_SERIAL=36001c23000b9858c0000a32d49c21095
ID_TYPE=disk
ID_BUS=scsi
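
As a convenience, the comparison can be run from a single node. This is a sketch that assumes passwordless SSH between the nodes, host names node1 and node2, and that the Virtual Disk is /dev/sdc on each node (adjust the device name where it differs):

[root@node1]# for n in node1 node2; do echo -n "$n: "; ssh $n "scsi_id -gxs /block/sdc | grep ^ID_SERIAL"; done
node1: ID_SERIAL=36001c23000b9858c0000a32d49c21095
node2: ID_SERIAL=36001c23000b9858c0000a32d49c21095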



Continue to Appendix


