Projects/DellRedHatHALinuxCluster/Documentation/Storage/PS-Series

Introduction

The following sections walk through the hardware setup, software installation, and configuration of the Dell EqualLogic PS-Series storage array and cluster nodes for the Dell|Red Hat HA Linux Cluster.

NOTE: Additional documentation you may find useful:

  • The Dell Cluster Configuration Support Matrix document provides a list of supported operating systems, hardware components, and driver or firmware versions for your Failover Cluster.
  • Dell|EqualLogic documentation:
    • Release Notes: Provides the latest information about PS Series arrays.
    • QuickStart: Describes how to set up the array hardware and create a PS Series group.
    • Group Administration: Describes how to use the Group Manager graphical user interface (GUI) to manage a PS Series group.
    • CLI Reference: Describes how to use the Group Manager command line interface (CLI) to manage a PS Series group and individual arrays.
    • Hardware Maintenance: Provides information about maintaining the array hardware.
    • Online Help: In the Group Manager GUI, expand Tools in the far left panel and then click Online Help for help on both the GUI and the CLI.

Hardware Setup

The Dell EqualLogic PS-Series arrays physically connect to your cluster using Ethernet connections. Each controller in a PS-Series array has three ports, which are used for data and management connections. All of the ports from both controllers should be attached to the same subnet.


Cabling the Power Supplies

To ensure that the specific power requirements are satisfied, see the documentation for each component in your cluster solution. It is recommended that you adhere to the following guidelines to protect your cluster solution from power-related failures:

  • Plug redundant power supplies from each cluster node or storage array into separate AC circuits.
  • If optional network power switches are used, plug cluster node power supplies into separate switches.
  • Use uninterruptible power supplies (UPS).
  • Consider backup generators and power from separate electrical substations.

Cabling the iSCSI Ports

To cable the array:

All of the iSCSI interfaces on the PS-Series array must be configured on the same network segment. To provide redundant connection paths and fault resilience in the event of a switch failure, a two-switch iSCSI network topology is recommended. In such a configuration, the inter-switch (trunk) connection should consist of multiple ports with link aggregation. Each EqualLogic controller should have two connections to one switch and one connection to the other switch, so that the active and standby controllers in the same member have their dual connections on alternate switches. This topology is illustrated for a single PS5000 array in the figure below (EqualLogic_two_switches.png).

At least two network adapter ports on each host server should be configured for iSCSI, with one of these ports connected to each of the two switches. If any component in the storage path -- such as the iSCSI NIC, the cable, the switch, or the storage controller -- fails, multipath software automatically re-routes the I/O requests to the alternate path so that the storage array and cluster node continue to communicate without interruption. A cluster node with two single-port NICs can provide higher availability; a single NIC failure in such a configuration does not cause cluster resources to move to another cluster node.

Storage Configuration

EqualLogic PS-series storage arrays include storage virtualization technology. To better understand how these arrays operate, it is helpful to be familiar with some of the terminology used to describe these arrays and their functions:

  • Member: a single PS-series array is known as a member
  • Group: a set of one or more members that can be centrally managed
  • Pool: a container of storage that can consist of the disks from one or more members
  • Volume: a LUN or virtual disk that represents a subset of the capacity of a pool
  • Non-Tiered Configuration: a group that contains a single pool
  • Tiered Configuration: a group that contains multiple pools

When a member is initialized, it can be configured with RAID 1/0, RAID 5, or RAID 5/0. The EqualLogic PS-series arrays provide automatic load balancing of volumes among members that participate in the same pool; this can improve the aggregate performance of all of the configured volumes without administrative input.

For more information on all procedures below, consult the Dell EqualLogic User's Guide.

Initialize the Array

The following steps only need to be performed during the initial setup of the Storage Array.

A serial connection is used to perform the initial configuration of the PS-Series array.

Once a serial cable is attached between the active control module and the management node, establish a serial connection:

[root]# minicom -s

Enter the submenu for Serial port setup and set the following values, according to the PS-Series Storage Arrays Quickstart:

  • Speed: 9600
  • Data: 8
  • Parity: None
  • Stopbits: 1
  • Flow control: None

Select Exit to initialize the connection
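
NOTE: If minicom is not available, any terminal emulator capable of a 9600 8N1 session will work. For example, assuming the serial cable is attached to /dev/ttyS0 and the screen package is installed, the following opens an equivalent session (a sketch only, not part of the original procedure):

[root]# screen /dev/ttyS0 9600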

Press Enter until you see a login prompt. Log in with the default user name and password of grpadmin:

login: grpadmin
Password: grpadmin

NOTE: The password characters are not displayed on the screen

The following prompt will appear the first time you log in:

It appears that the storage array has not been configured.
Would you like to configure the array now ? (y/n) [n]y

Enter y to configure the array and to proceed:

Do you want to proceed (yes | no ) [no]:y

Create a member name for your array:

Member name []:member-name

Accept the default of eth0 for the Network Interface, and enter an IP address.
NOTE: This IP address is only used internally; iSCSI discovery and login by the initiators will be performed using the group IP address configured in a later step.

Network interface [eth0]:
IP address for network interface []: 192.168.100.1
Netmask [255.255.255.0]:
Default gateway [192.168.100.1]: 192.168.100.254

Join or create a group. The group name and IP address serve as the management and iSCSI portal interface.

Group name []: group-name
Group IP address []: 192.168.100.50
Do you want to create a new group (yes | no) [yes]:y
Do you want to use the group settings shown above (yes | no) [yes]: y 

Assign the requested passwords to complete the setup utility.

You will now see a prompt with your group name:

group-name>

Configure and enable the remaining Ethernet interfaces:

group-name> member select member-name eth select 1 ipaddress 192.168.100.2
group-name> member select member-name eth select 1 netmask 255.255.255.0
group-name> member select member-name eth select 1 up
group-name> member select member-name eth select 2 ipaddress 192.168.100.3
group-name> member select member-name eth select 2 netmask 255.255.255.0
group-name> member select member-name eth select 2 up
group-name> member select member-name eth show
Name ifType          ifSpeed    Mtu   Ipaddress       Status  Errors            
---- --------------- ---------- ----- --------------- ------- -------           
eth0 ethernet-csmacd 1000000000 9000  192.168.100.1  up      0                 
eth1 ethernet-csmacd 1000000000 9000  192.168.100.2  up      0                 
eth2 ethernet-csmacd 1000000000 9000  192.168.100.3  up      0  

Log out of the group management interface:

group-name> logout
Do you really want to logout? (y/n) [n]y

Exit minicom with <Ctrl><a> followed by <x>

Management Node Configuration

The Dell EqualLogic PS-Series arrays are managed through the CLI or a Java applet served over a web interface. For more information on CLI management, see the CLI Reference. This section discusses management through the Java-based Group Manager GUI.
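
Once the group IP address has been configured, the CLI can also be reached over the network instead of the serial console. For example, assuming SSH access to the group is enabled, the following connects to the group IP address used in this document as the grpadmin account:

[root]# ssh grpadmin@192.168.100.50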

A management node must satisfy these prerequisites:

  • It is connected to the iSCSI network
  • It has the X Window System installed
  • It has a 32-bit web browser with the Java plugin installed

Install the 32-bit Firefox web browser and the Java plugin

Perform the following actions from your management node:
1. Install the 32-bit Firefox browser

[root]# yum install firefox.i386

2. Configure your management node to access the RHEL Supplementary (v. 5 for 64-bit x86_64) channel

  • Log in to rhn.redhat.com.
  • Click Systems.
  • Select your management node from the list.
    • NOTE: If your management node is not listed, use a filter such as Recently Registered in the left pane.
  • In the Subscribed channels section, click Alter Channel Subscriptions.
  • Add the RHEL Supplementary (v. 5 for 64-bit x86_64) channel and click Change Subscriptions.

3. Process outstanding RHN actions on the management node

[root]# rhn_check

4. Install the Java plugin package

[root]# yum install java-1.6.0-sun-plugin

5. Create a symbolic link to allow the plugin to function properly.
    A. Identify the path to the plugin file.

[root]# rpm -ql java-1.6.0-sun-plugin | grep ns7/libjavaplugin_oji.so

      This command will output the full path to the correct plugin file. For example:
      /usr/lib/jvm/java-1.6.0-sun-1.6.0.7/jre/plugin/i386/ns7/libjavaplugin_oji.so
    B. To install the plugin for root only:

[root]# mkdir -p ~/.mozilla/plugins
[root]# ln -s <path to file identified in step 5A> ~/.mozilla/plugins/

    C. To install the plugin globally for all users (optional):

[root]# mkdir /usr/lib/firefox-3.0.1/plugins
[root]# ln -s <path to file identified in step 5A> /usr/lib/firefox-3.0.1/plugins/ 

      NOTE: If the version of Firefox changes, then the target location in the commands above will have to change as well. You can use [root]# rpm -qa firefox to quickly verify the version.
6. Open the 32-bit firefox browser

[root]# /usr/lib/firefox-3.0.1/firefox

7. Ensure that the plugin has been successfully registered by typing about:plugins in the browser address bar. You should see a heading and table related to the Java(TM) Plug-in.

Completing Storage Configuration

Manage the PS-Series Array

To manage the array, enter the group IP address into the address bar of the 32-bit Firefox browser (with the Java plugin) configured earlier.

You will be prompted to login. Use the username grpadmin and the password you provided during array initialization.

The Group Manager displays.

Set the Member RAID Policy

  1. Upon first login, you will see a flashing warning icon with the text Unconfigured member exists. Select member to configure its RAID policy.
  2. Expand Members on the left-hand menu.
  3. Select the member. A warning message will appear prompting you to configure the RAID policy. Click Yes to do so now.
  4. Click Next on Step 1 - General Settings
  5. Pick a RAID policy on Step 2 - RAID Configuration and click Next
  6. Review your choices and click Finish.

Initial configuration is now complete.

Create Volume

A Volume is a logical disk that will be shared by the cluster nodes.

  1. Select Volumes on the left hand menu
  2. Under Activities, click Create volume
  3. Complete Step 1 - Volume Settings and click Next
  4. Provide a Volume size in Step 2 - Space reserve and click Next
  5. Although it is possible to configure Access controls in Step 3 - iSCSI Access, select No access. Refer to Set Access Control List for more details. Click Next to continue.
  6. Review your settings, and click Finish

Set Access Control List

  1. Expand Volumes on the left-hand menu
  2. Select the volume to configure
  3. Under Activities click Set access type
  4. In the Set access type window, ensure Enable shared access to the iSCSI target from multiple initiators is selected to enable your cluster nodes to use this volume, and click OK
  5. In the right-hand pane select the Access tab
  6. Click Add to create an access control record
  7. Choose one of the access methods:
    • Authenticate using CHAP: Allow nodes that authenticate to access the volume. See the section Configuring CHAP for details.
    • Limit access by IP address: Allow a specified IP address or network to access the volume.
    • Limit access to the iSCSI initiator name: Allow only a specific initiator IQN to access the volume. See the section Install and Configure iSCSI Initiator for details on setting the initiator name on the nodes.

Configuring CHAP

If you have a large number of nodes, using CHAP may simplify access management.

Target Configuration

  1. Select Group Configuration
  2. On the right-hand pane, select the iSCSI tab
  3. Under Local CHAP Accounts, click Add to create a user
  4. Enter a user name and, if desired, a password
    NOTE: Passwords must be at least 12 characters long. One will be generated for you if you do not specify one. This password will be used in the client initiator configuration.
  5. Record the Target authentication information for use in the client initiator configuration. You may also click Modify to set these values yourself
    NOTE: This password must also be at least 12 characters long.

Initiator Configuration

This process must be completed after the client software has been installed. Refer to Install and Configure iSCSI Initiator

Cluster Node Configuration

Each cluster node has multiple paths to the same virtual disk via the iSCSI I/O network. Because the cluster nodes can access the same data through different paths, uncoordinated access could lead to data corruption. A multipath I/O solution consolidates all paths to the virtual disk into a single pseudo device. The nodes access the pseudo device instead of communicating with the block device directly over a single path. If a path failure occurs, the multipath I/O solution automatically switches paths, and the node continues to access the same data through the same pseudo device.

Red Hat Enterprise Linux 5.2 Advanced Platform includes a multipath driver as part of the base operating system with the package device-mapper-multipath. This package provides a multipath I/O solution for the Dell EqualLogic PS-Series arrays.
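
The device-mapper-multipath package is normally installed as part of the base operating system. If it is missing from a node, it can be installed with yum (a sketch, assuming the node is subscribed to the appropriate RHN channel):

[root]# yum install device-mapper-multipath
[root]# rpm -q device-mapper-multipath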

Security Considerations

Firewall

If a firewall will be used between the PS-Series storage array and the cluster nodes, ensure port 3260 is allowed both in and out of the cluster nodes to the EqualLogic target. For example, to allow communications from a Group IP address, enter the following commands on each cluster node:

  • Create a new chain for your rule:
[root]# iptables --new-chain EQUALLOGIC
  • Add a rule to the normal input chain to process your chain:
[root]# iptables --insert INPUT --jump EQUALLOGIC
  • Create a rule for iSCSI (the protocol match is required for the port option; the following may be entered on one line without the "\"):
[root]# iptables --append EQUALLOGIC \
          --protocol tcp \
          --source {Group IP address} \
          --destination {IP address of cluster node} \
          --source-port 3260 --jump ACCEPT
  • Save these rules:
[root]# service iptables save
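
The default firewall configuration on RHEL 5 does not normally restrict outbound traffic, but if your OUTPUT chain is locked down, a matching rule toward the array is also needed. The following is a sketch only; adapt it to your own policy and save the rules again afterward:

[root]# iptables --append OUTPUT \
          --protocol tcp \
          --source {IP address of cluster node} \
          --destination {Group IP address} \
          --destination-port 3260 --jump ACCEPT
[root]# service iptables save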

SELinux

If SELinux is enabled and your cluster nodes are running a selinux-policy version earlier than 2.4.6-150.el5, you may be unable to log in to your iSCSI target from your initiators.

To determine if SELinux is enabled:

[root]# getenforce
Enforcing

Verify the policy version installed:

[root@node126 ~]# rpm -qa selinux-policy\*
selinux-policy-2.4.6-137.el5
selinux-policy-targeted-2.4.6-137.el5

The following is an example error message that appears when trying to log in to the target using multiple iSCSI interfaces:

Logging in to [iface: ieth0, target: iqn.2001-05.com.equallogic:XXXX-disk, portal: 192.168.100.50,3260]
Logging in to [iface: ieth1, target: iqn.2001-05.com.equallogic:XXXX-disk, portal: 192.168.100.50,3260]
iscsiadm: Could not login to [iface: ieth0, target: iqn.2001-05.com.equallogic:{XXXX}-disk, portal: 192.168.100.50,3260]: 
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not login to [iface: ieth1, target: iqn.2001-05.com.equallogic:{XXXX}-disk, portal: 192.168.100.50,3260]: 
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not log into all portals. Err 4.

If you inspect /var/log/audit/audit.log, a message similar to the following will appear:

type=AVC msg=audit(1221684460.525:20): avc:  denied  { net_raw } for  pid=9104 comm="iscsid" capability=13
scontext=root:system_r:iscsid_t:s0 tcontext=root:system_r:iscsid_t:s0 tclass=capability type=SYSCALL
msg=audit(1221684460.525:20): arch=c000003e syscall=54 success=no exit=-1 a0=9 a1=1 a2=19 a3=16b110d8
items=0 ppid=1 pid=9104 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2
comm="iscsid" exe="/sbin/iscsid" subj=root:system_r:iscsid_t:s0 key=(null)

An existing bug documents this issue:

https://bugzilla.redhat.com/show_bug.cgi?id=460398

The fix is included in selinux-policy-2.4.6-150.el5, which is available at http://people.redhat.com/dwalsh/SELinux/RHEL5. This fix will appear in a future RHEL update.

To work around this issue, set SELinux to permissive mode:

[root]# setenforce 0

To make this change persist across a reboot:

[root]# sed -i 's/^SELINUX=.*$/SELINUX=permissive/g' /etc/sysconfig/selinux

You may also use system-config-securitylevel to make this change. Select the SELinux tab, and change the SELinux Setting.
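
If you prefer to leave SELinux in enforcing mode until the updated policy package is installed, a local policy module can instead be generated from the logged denial. This is a sketch that assumes the audit2allow and semodule tools (from the policycoreutils packages) are installed and that the denial shown above is the only one affecting iscsid:

[root]# grep iscsid /var/log/audit/audit.log | audit2allow -M iscsidlocal
[root]# semodule -i iscsidlocal.pp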

Assign IP Addresses to Storage NICs

  1. Identify which NIC ports have been cabled to the iSCSI I/O network.
  2. Assign IP addresses to each NIC port.
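
The addresses can be assigned with system-config-network or by editing the interface configuration files directly. The following is a minimal sketch for one storage NIC, assuming eth1 is cabled to the iSCSI network and 192.168.100.188 is the address chosen for this node (it must be on the same subnet as the group IP address):

[root]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Storage (iSCSI) interface
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.100.188
NETMASK=255.255.255.0
ONBOOT=yes

[root]# ifup eth1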

Install and Configure iSCSI Initiator

  • Install the iSCSI initiator utilities
[root]# yum install iscsi-initiator-utils
  • Determine which NICs are connected to the iSCSI I/O networks. The utilities ethtool or mii-tool may assist in determining which NICs have link.
  • Assign IP addresses on each iSCSI I/O NIC. These should correspond to the separate subnets set up for the iSCSI I/O networks.
  • Edit the file /etc/iscsi/initiatorname.iscsi. This establishes the iSCSI qualified name (IQN). One will be randomly generated for you, but modifying it to a more easily identifiable name is recommended. This IQN will be used to associate your host with a host group (see the example file after this list). The industry-accepted format is:
Format: iqn.yyyy-mm.{reversed domain name}:{unique identifier} 

For example:

iqn.2008-08.com.example:node1
  • Start the iSCSI initiator:
[root]# service iscsi start
  • Configure the iSCSI initiator to start on boot:
[root]# chkconfig iscsi on
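
As referenced above, a minimal /etc/iscsi/initiatorname.iscsi using the example IQN would contain a single line (substitute your own node name):

[root]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2008-08.com.example:node1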

CHAP Initiator Configuration

If you are using CHAP for access control, ensure the passwords are at least 12 characters long.

On each cluster node, edit the iSCSI configuration file: /etc/iscsi/iscsid.conf

# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
#node.session.auth.username = username
#node.session.auth.password = password

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password

# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#discovery.sendtargets.auth.username_in = username_in
#discovery.sendtargets.auth.password_in = password_in

Change all references to "username" and "password" to the local CHAP user and password you added in the Target Configuration section earlier. Change all references to "username_in" and "password_in" to the target authentication information you gathered from the storage array.

In this example, "cluster" is the local CHAP user created, and "target" is Target authentication username:

# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = cluster
node.session.auth.password = clustercluster

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
node.session.auth.username_in = target
node.session.auth.password_in = targettarget

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
discovery.sendtargets.auth.username = cluster
discovery.sendtargets.auth.password = clustercluster

# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
discovery.sendtargets.auth.username_in = target
discovery.sendtargets.auth.password_in = targettarget

Restart iscsi:

[root]# service iscsi restart

Configure iSCSI Initiators for Multipath

View the current partitions visible to the kernel:

 [root]# cat /proc/partitions
 major minor  #blocks  name
 
    8     0   71041024 sda
    8     1     104391 sda1
    8     2   10241437 sda2
  253     0    8192000 dm-0
  253     1    2031616 dm-1
  253     2   24130560 dm-2
 

Verify which two storage NICs are on the same subnet as the EqualLogic target:

 [root]# ip addr show | grep 192.168
     inet 192.168.100.199/24 brd 192.168.100.255 scope global eth0
     inet 192.168.100.188/24 brd 192.168.100.255 scope global eth1

Verify connectivity to the storage target from each storage NIC:

 [root]# ping -I eth0 192.168.100.50
 [root]# ping -I eth1 192.168.100.50

Configure new iSCSI interfaces that will be associated with your storage NICs. In this example, the letter "i" is prepended to the name of the real interface for consistency; in practice, this name could be anything you choose:

 [root]# iscsiadm -m iface -I ieth0 --op=new
 [root]# iscsiadm -m iface -I ieth1 --op=new

Associate the new iSCSI interfaces with your storage NICs:

 [root]# iscsiadm -m  iface -I ieth0 --op=update -n iface.net_ifacename -v eth0
 [root]# iscsiadm -m  iface -I ieth1 --op=update -n iface.net_ifacename -v eth1

Verify these interfaces were created and associated properly:

 [root]# iscsiadm -m iface
 ieth0 tcp,default,eth0
 ieth1 tcp,default,eth1

You may also verify these changes by viewing the database:

 [root]# cat /var/lib/iscsi/ifaces/ieth0
 iface.iscsi_ifacename = ieth0
 iface.net_ifacename = eth0
 iface.hwaddress = default
 iface.transport_name = tcp
 [root]# cat /var/lib/iscsi/ifaces/ieth1
 iface.iscsi_ifacename = ieth1
 iface.net_ifacename = eth1
 iface.hwaddress = default
 iface.transport_name = tcp
 

Discover available targets on the PS-Series array:

 [root]# iscsiadm -m discovery -t st -p 192.168.100.50 -I ieth0 -I ieth1
 192.168.100.50:3260,1 iqn.2001-05.com.equallogic:0-8a0906-3a36e3f02-b990000003e48af0-your_volume_name
 192.168.100.50:3260,1 iqn.2001-05.com.equallogic:0-8a0906-3a36e3f02-b990000003e48af0-your_volume_name
 

Log in to all targets:

 [root]# iscsiadm -m node --loginall=all

View the partitions visible to the kernel again; the new iSCSI devices (sdc and sdd in this example) now appear:

 [root]# cat /proc/partitions
 major minor  #blocks  name
 
    8     0   71041024 sda
    8     1     104391 sda1
    8     2   10241437 sda2
  253     0    8192000 dm-0
  253     1    2031616 dm-1
  253     2   24130560 dm-2
    8    64   24130560 sdc
    8    80   24130560 sdd
 

Display all logged-in sessions; this should show a session for each storage NIC:

 [root]# iscsiadm -m session
 tcp: [8] 192.168.100.50:3260,1 iqn.2001-05.com.equallogic:0-8a0906-3a36e3f02-b990000003e48af0-your_volume_name
 tcp: [9] 192.168.100.50:3260,1 iqn.2001-05.com.equallogic:0-8a0906-3a36e3f02-b990000003e48af0-your_volume_name

Configure DM Multipath for the PS-Series Arrays

Enable multipath on all devices except your local disk. First, determine which device your local disk is. This can be done in a variety of ways.

If you are using LVM:

[root]# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sda2  VolGroup00 lvm2 a-     9.75G    0

In this example, the local disk is /dev/sda.

If you are not using LVM, you may use the mount command:

[root]# mount
<extra output omitted>
/dev/sda1 on /boot type ext3 (rw)

Once the local disk is determined, obtain the scsi id of that disk:

[root]# scsi_id -g -u -s /block/sda
36001e4f010fde3000f39f9f6066b384c

Now edit the multipath configuration to enable multipath on your non-local devices:

[root]# vi /etc/multipath.conf

Remove the line devnode "*"

blacklist {
        devnode "*"
}

Add the scsi id of your local disk:

blacklist {
        wwid 36001e4f010fde3000f39f9f6066b384c
}

It is also highly recommended that you disable the "user_friendly_names" option that is set by default. This allows your devices to be named according to their scsi_id, rather than a generic "mpath" name:

Change:

defaults {
        user_friendly_names yes
}

To:

defaults {
        user_friendly_names no
}
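
After these edits, the relevant portions of /etc/multipath.conf would look like the following sketch (all other sections of the file are left at their defaults; the wwid shown is the example local-disk scsi_id from above):

defaults {
        user_friendly_names no
}

blacklist {
        wwid 36001e4f010fde3000f39f9f6066b384c
}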
   

Enable multipathd on boot:

[root]# chkconfig multipathd on

Start multipathd:

[root]# service multipathd start

List multipath devices:

[root]# multipath -ll
36090a028f0e3363af08ae403000090b9dm-2 EQLOGIC,100E-00
[size=23G][features=0][hwhandler=0]
\_ round-robin 0 [prio=1][enabled]
 \_ 11:0:0:0 sdc 8:64  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 10:0:0:0 sdd 8:80  [active][ready]
 

Verify that the multipath device exists under /dev/mpath and /dev/mapper

[root]# ls /dev/mpath/
36090a028f0e3363af08ae403000090b9
[root]# ls /dev/mapper/
36090a028f0e3363af08ae403000090b9
<other devices omitted>

This multipath device should match the scsi_id for both devices on each storage NIC path:

[root]# scsi_id -x -g -u -s /block/sdc
ID_VENDOR=EQLOGIC
ID_MODEL=100E-00
ID_REVISION=4.0
ID_SERIAL=36090a028f0e3363af08ae403000090b9
ID_TYPE=disk
ID_BUS=scsi

[root]# scsi_id -x -g -u -s /block/sdd
ID_VENDOR=EQLOGIC
ID_MODEL=100E-00
ID_REVISION=4.0
ID_SERIAL=36090a028f0e3363af08ae403000090b9
ID_TYPE=disk
ID_BUS=scsi

More detailed information is available with the multipath command:

[root]# multipath -v3 -d

Using Your Multipath Device

The device to access is now under /dev/mapper. This is the device that should be used when creating partitions, adding a physical volume, or creating filesystems.

[root]# parted /dev/mapper/36090a028f0e3363af08ae403000090b9

For example, to create a partition on a new Volume that uses the entire Volume:

First, create a label for the Volume:

[root]# parted -s /dev/mapper/36090a028f0e3363af08ae403000090b9 mklabel gpt

Create a partition using the entire Volume:

[root]# parted -s /dev/mapper/36090a028f0e3363af08ae403000090b9 mkpart primary "0 -1"

You may need to run partprobe to ensure the partition is viewed by the kernel:

[root]# partprobe -s /dev/mapper/36090a028f0e3363af08ae403000090b9
/dev/dm-2: gpt partitions 1

NOTE: The "/dev/dm- devices are using internally to device-mapper, and should never be used directly.

Run kpartx to create maps for the new partition:

 [root]# kpartx -a /dev/mapper/36090a028f0e3363af08ae403000090b9

The new partition will appear with a p1 after it:

[root]# ls /dev/mapper/
36090a028f0e3363af08ae403000090b9  36090a028f0e3363af08ae403000090b9p1

Use pvcreate on the new partition:

[root]# pvcreate /dev/mapper/36090a028f0e3363af08ae403000090b9p1
Physical volume "/dev/mapper/36090a028f0e3363af08ae403000090b9p1" successfully created


NOTE: For more information on device-mapper-multipath, see Using Device-Mapper Multipath on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

Verification Checklist

Verify that each of the following items has been completed:

  • Physical setup and cabling
  • Storage array initialized
  • iSCSI NICs configured on each node
  • iSCSI initiator tools installed on each node
  • iSCSI initiator IQN set on each node to ease identification
  • Management node configured
  • Member RAID policy set
  • Volume created for the cluster
  • Access control list configured
  • Access type set to "shared"
  • Multipath I/O configured on each node
  • iSCSI discovery performed and sessions initiated on all paths
  • Logical volume created with GFS and mounted