Products/HA/DellRedHatHALinuxCluster/System

<font size=1>[[../../DellRedHatHALinuxCluster|Dell|Red Hat HA Linux]] > System</font>
=Cabling Your Cluster=
The following sections provide information on how to cable various components of your cluster.
==Pre-requisites==
Before cabling the various components of your cluster, ensure that all hardware components have been installed in the rack.  For instructions on racking your equipment, see the documentation included with your rack and components.
==Cabling the Power Supplies==
To ensure that the specific power requirements are satisfied, see the documentation for each component in your cluster solution. It is recommended that you adhere to the following guidelines to protect your cluster solution from power-related failures:
* Plug each power supply into a separate AC circuit.
* Plug each power supply into separate optional network power switches.
* Use uninterruptible power supplies (UPS).
* Consider backup generators and power from separate electrical substations.
==Cabling Your Public and Private Networks==
The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 1.
Table 1. Network Connections
{| border="1" cellpadding="20" cellspacing="0"
! Network Connection !! Description
|-
| Public Network || All connections to the client local area network (LAN). At least one public network must be configured for mixed mode (public mode and private mode) for private network failover.
|-
| Private Network || A dedicated connection for sharing cluster status information between the cluster nodes.
|}
Network adapters connected to the LAN can also provide redundancy at the communications level in case the cluster interconnect fails. If you have optional DRAC cards, cable them to the private network; in this configuration, do not use point-to-point topology, use a network switch. If you have optional network power switches, cable them to the private network. Cable any storage array Ethernet management ports to the private network.
Figure 1 shows an example of network adapter cabling in which dedicated network adapters in each node are connected to the public network and the remaining network adapters are connected to each other (for the private network).

http://docs.us.dell.com/support/edocs/systems/clusters/SE600L/en/it/network4.jpg

'''Figure 1. Example of Network Cabling Connection'''
===NIC Bonding===
Network Interface Card (NIC) bonding combines two or more NICs to provide load balancing and/or fault tolerance. Your cluster supports NIC bonding. Use the same type of NIC for each member in a bond for consistent performance. For information on configuring bonding, see the Red Hat ''Deployment Guide'' section "Channel Bonding Interfaces" at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
'''NOTE''': If dual-port network cards are used, do not bond two ports on the same adapter, as this results in a single point of failure.
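The exact bonding configuration depends on your interface names and addressing; the following is only a minimal sketch of an active-backup bond for the private network, assuming <tt>eth1</tt> and <tt>eth2</tt> are the private NICs and 192.168.0.x addressing (adjust for your environment). See the Red Hat ''Deployment Guide'' for the authoritative procedure.
 # /etc/modprobe.conf -- declare the bonding device (mode=1 is active-backup)
 alias bond0 bonding
 options bond0 mode=1 miimon=100
 
 # /etc/sysconfig/network-scripts/ifcfg-bond0 -- assumed private-network address
 DEVICE=bond0
 IPADDR=192.168.0.101
 NETMASK=255.255.255.0
 ONBOOT=yes
 BOOTPROTO=none
 
 # /etc/sysconfig/network-scripts/ifcfg-eth1 -- repeat for eth2, the second bond member
 DEVICE=eth1
 MASTER=bond0
 SLAVE=yes
 ONBOOT=yes
 BOOTPROTO=none
Restart networking ('''service network restart''') after creating the files so the bond comes up.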
===Cabling Your Public Network===
Cable any network adapters that will be providing access to HA applications for use by clients to the public network segments. You may install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.
===Cabling Your Private Network===
The private network connection to the cluster nodes is provided by a second or subsequent network adapter that is installed in each node. This network is used for intra-cluster communications. Cable all cluster network components including cluster node private NICs, network power switches, remote access controllers (DRAC/IPMI), and any storage controller management ports.
=Preparing Your Hardware=
To set up your systems for use in a Dell|Red Hat HA Linux Cluster, ensure you have the Red Hat Enterprise Linux 5.3 Advanced Platform installation media.
==Configuring the Cluster Nodes==
To prepare the cluster nodes for setup, execute the instructions in the following sections.
===Configuring Remote Access===
1. During the system initialization, use the '''<Ctrl><E>''' key combination to go to the Remote Access Configuration Utility menu when the following message appears:
  Remote Access Configuration Utility 1.05
  Copyright 2006 Dell Inc. All Rights Reserved
   
  Baseboard Management Controller Revision 1.33
  Remote Access Controller Revision (Build 06.05.12) 1.0
  Primary Backplane Firmware Revision 1.05
   
  IP Address: 192.168.0.120
  Netmask: 255.255.255. 0
  Gateway: 192.168.0.1
  Press <Ctrl-E> for Remote Access Setup within 5 sec....
'''NOTE''': Version numbers on your servers may not be the same as these examples.
2. The Remote Access Configuration Utility menu appears.
  +-------------------- Remote Access Configuration Utility ---------------------+
  ¦              Copyright 2006 Dell Inc. All Rights Reserved 1.05               ¦
  ¦                                                                              ¦
  +------------------------------------------------------------------------------+
  +------------------------------------------------------------------------------+
  ¦ Baseboard Management Controller Revision                       1.33          ¦
  ¦ Remote Access Controller Revision (Build 06.05.12)             1.0           ¦
  ¦ Primary Backplane Firmware Revision                            1.05          ¦
  ¦ --------------------------------------------------------------------------   ¦
  ¦                                                                              ¦
  ¦ IPMI Over LAN ................................................ Off           ¦
  ¦ NIC Selection ................................................ Dedicated     ¦
  ¦ '''LAN Parameters ............................................... <ENTER>'''       ¦
  ¦ Advanced LAN Parameters ...................................... <ENTER>       ¦
  ¦ Virtual Media Configuration .................................. <ENTER>       ¦
  ¦ LAN User Configuration ....................................... <ENTER>       ¦
  ¦ Reset To Default ............................................. <ENTER>       ¦
  ¦ System Event Log Menu ........................................ <ENTER>       ¦
3. If you have purchased the PowerEdge systems with Dell Remote Access Controller (DRAC) cards, ensure the '''IPMI Over LAN''' option is set to ''off''.  If '''IPMI Over LAN''' is enabled, an additional network interface card (NIC) may be required for a fully-redundant setup. This issue occurs as IPMI uses one of the on-board NICs, thereby preventing redundant network configuration without additional NICs.  For instructions on using IPMI for your cluster nodes, see [[#Additional Dell Configuration|Additional Dell Configuration]].
<br>'''NOTE''': If you are using IPMI instead of a DRAC card for fencing on a PowerEdge 1950 system, a fully redundant setup is not possible due to the limited amount of slots for additional cards.
4. In the ''Remote Access Configuration Utility'' menu, select '''LAN Parameters''' and assign an IP address or select DHCP.
<br>'''NOTE''': If you are using the DHCP option, record the MAC Address from this menu for use in assigning a static IP through DHCP later. For instructions on configuring a DHCP server, see the Red Hat ''Deployment Guide'' section ''21.2. Configuring a DHCP Server'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
<br>'''NOTE''': Ensure the IP address assigned is on the same subnet as the cluster private network, since the nodes communicate with the remote access controllers for cluster fencing operations.
   +-------------------- Remote Access Configuration Utility ---------------------+
   ¦              Copyright 2006 Dell Inc. All Rights Reserved 1.05               ¦
   ¦                                                                              ¦
   +------------------------------------------------------------------------------+
   +------------------------------------------------------------------------------+
   ¦ Baseboard Management Controller Revision                       1.33          ¦
   ¦ Remote Access Controller Revision (Build 06.05.12)             1.0           ¦
   ¦ Primary Backplane Firmware Revision                            1.05          ¦
   ¦ --------------------------------------------------------------------------   ¦
   ¦         +--------------------------------------------------------+           ¦
   ¦ IPMI Ove¦ RMCP+ Encryption Key ............ <ENTER>              ¦           ¦
   ¦ NIC Sele¦ ---------------------------------------------------- _ ¦icated     ¦
   ¦ LAN Para¦ IP Address Source ............... DHCP                 ¦TER>       ¦
   ¦ Advanced¦ Ethernet IP Address ............. 192.168. 0 .120      ¦TER>       ¦
   ¦ Virtual ¦ MAC Address ..................... 00:18:8B:38:5E:F5    ¦TER>       ¦
   ¦ LAN User¦ Subnet Mask ..................... 255.255.255. 0       ¦TER>       ¦
   ¦ Reset To¦ Default Gateway ................. 192.168. 0 .  1    ¦ ¦TER>       ¦
   ¦ System E¦ ---------------------------------------------------- ¦ ¦TER>       ¦
   ¦         ¦ VLAN Enable ..................... Off                ¦ ¦           ¦
   ¦         ¦ VLAN ID ......................... 0001                 ¦           ¦
   ¦         +--------------------------------------------------------+           ¦
5. Save changes and Exit
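If you selected DHCP as the IP address source in step 4, you can reserve a fixed address for each remote access controller on your DHCP server. A minimal sketch of an <tt>/etc/dhcpd.conf</tt> host entry, reusing the example MAC address and private-network addressing shown above (substitute your own values):
 host node1-drac {
     hardware ethernet 00:18:8B:38:5E:F5;   # MAC address recorded from the LAN Parameters screen
     fixed-address 192.168.0.120;           # address on the cluster private network
 }
Restart the <tt>dhcpd</tt> service after editing the file, then confirm the controller picks up the reserved address.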
==Configuring the Internal Drives==
If you have added new physical disks to your system or are setting up the internal drives in a RAID configuration, you must configure the RAID using the ''RAID controller's BIOS configuration utility'' before you can install the operating system.
For the best balance of fault tolerance and performance, it is recommended that you use RAID 1 for the internal disks. For more information on RAID configurations, see the documentation for your specific RAID controller.
<br>'''NOTE''': If you are not using a Dell PERC RAID solution and want to configure fault tolerance, use the software RAID included with the Red Hat Enterprise Linux operating system. For instructions, see section ''4.5. Configuring Software RAID'' in the Red Hat ''Deployment Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
=Preparing the Operating System=
'''NOTICE''': Disconnect all storage cables during any operating system installation. Failure to do so can result in lost data, boot sector installation issues, and multipath drive ordering problems.
==Accessing the Red Hat Network==
If you do not have an RHN account, log on to [http://www.redhat.com/register/ www.redhat.com/register/] and create an account. The cluster nodes are shipped with RHN subscription information. If you are unable to locate the registration information, contact Dell Support.
==Determine the Operating System Status==
If your PowerEdge system shipped with the Red Hat Enterprise Linux 5 operating system installed from the factory, you must upgrade to the Red Hat Enterprise Linux 5.3 operating system. To upgrade the operating system, see [[#Updating Your Operating System|Updating Your Operating System]]. If you are performing a fresh installation instead, continue to [[#Installing the Red Hat Enterprise Linux Operating System|Installing the Red Hat Enterprise Linux Operating System]].
==Installing the Red Hat Enterprise Linux Operating System==
If the operating system media is not included with your PowerEdge system, download ''Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)'' ISO image(s) from the Red Hat website at [http://rhn.redhat.com rhn.redhat.com]. If you require physical media, contact your Dell sales representative.  After obtaining the correct ISO image(s) or physical media, choose one of the following installation methods.
===Installing the Operating System Using Physical Media===
Create physical media for installation. For more information, see the section ''Can You Install Using the CD-ROM or DVD?'' in the Red Hat ''Installation Guide'' on the Red Hat website at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
===Installing the Operating System Over the Network===
Create a network installation server. For more information, see the section ''Preparing for a Network Installation'' in the Red Hat ''Installation Guide'' on the Red Hat website at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
===Using a Kickstart Installation===
Your Dell|Red Hat HA Linux Cluster system can be easily installed with the kickstart script located at '''http://linux.dell.com/files/ha_linux/ks.cfg'''. However, your nodes must be connected to the Internet to use this script. If your nodes do not have internet access, you will need to create your own kickstart file. See [[#Creating Your Own Kickstart File|Creating Your Own Kickstart File]].  For complete details on kickstart, see the section ''Kickstart Installations'' in the ''Red Hat Installation Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
===Creating Your Own Kickstart File===
You may download a kickstart script and modify it to meet your needs; this is also necessary if your cluster nodes do not have direct internet access.
Download a kickstart script (e.g. <tt>ks.cfg</tt>) from [http://linux.dell.com/files/ha_linux/ linux.dell.com/files/ha_linux].
For more information on creating a kickstart installation, see the section ''Creating a Kickstart Installation'' in the ''Red Hat Installation Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
For information on using the kickstart GUI, see the section ''Kickstart Configurator'' of the ''Red Hat Installation Guide'' on [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
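As a point of reference, the part of a cluster kickstart file that differs most from a standard installation is the package selection; a minimal, illustrative fragment is shown below. This is not the ks.cfg published on linux.dell.com -- the installation source URL is a placeholder, and partitioning, networking, and passwords must still be filled in for your environment. The group names match the channels listed in [[#Registering Your Nodes With RHN|Registering Your Nodes With RHN]].
 # Illustrative kickstart fragment only -- complete the remaining directives for your site
 install
 url --url http://your-install-server/rhel5/x86_64
 
 %packages
 @ Base
 @ Clustering
 @ Cluster Storage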
===Using the Network Installation===
You can use one of the network installation methods described in the following sections to install your Red Hat Enterprise Linux operating system.
<br><br>'''NOTE''': Use Pre-Boot Execution Environment (PXE) for easy installation of multiple nodes. For more information on setting up PXE, see the section ''PXE Network Installations'' in the ''Red Hat Installation Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
====Using PXE Boot to Install the Operating System From the Network====
Configure an entry that points to the network installation server you configured in [[#Installing_the_Operating_System_Over_the_Network|Installing the Operating System Over the Network]] using the kickstart that you created in [[#Creating_Your_Own_Kickstart_File|Creating Your Own Kickstart File]].
Example of a PXE entry:
  label Dell_Red_Hat_Cluster_Node
     menu label node1 RHEL 5.3
     kernel images/os/linux/redhat/rhel/5.3/x86_64/vmlinuz
     append initrd=images/os/linux/redhat/rhel/5.3/x86_64/initrd.img
====Using Media to Install the Operating System From the Network====
You can boot from any RHEL 5.3 CD1 or DVD and specify your network installation server using the askmethod parameter.  When the RHEL boot prompt appears, enter the following:
  boot: '''linux askmethod'''
You can also use the kickstart file that you created in ''Using a Kickstart Installation''. For example:
  boot: '''linux ks=http://local-server.private.lan/kickstart/ks.cfg'''
If your nodes have access to linux.dell.com, you may use the kickstart file there:
  boot: '''linux ks=http://linux.dell.com/files/ha_linux/ks.cfg'''
===Synchronize Node System Clocks===
To ensure best cluster performance, synchronize the clocks on all the cluster nodes. If the clocks on the cluster nodes are not synchronized, the inode time stamps will be updated unnecessarily, severely impacting cluster performance. It is recommended to use Network Time Protocol (NTP) to synchronize the nodes.  For more information on configuring system time and NTP, see the section ''Date and Time Configuration'' in the Red Hat ''Deployment Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
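On Red Hat Enterprise Linux 5 the <tt>ntp</tt> package provides the <tt>ntpd</tt> service. A minimal sketch, assuming the nodes can reach a public or local time server (replace <tt>pool.ntp.org</tt> with your own time source if the nodes have no internet access):
  [root]# '''yum install ntp'''
  [root]# '''chkconfig ntpd on'''
  [root]# '''ntpdate pool.ntp.org'''
  [root]# '''service ntpd start'''
Running '''ntpdate''' once before starting the daemon brings a badly skewed clock into range immediately.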
===Registering Your Nodes With RHN===
Register all nodes with Red Hat Network (RHN) at rhn.redhat.com by executing the following command:
  [root]# '''rhn_register'''
<br>'''NOTE''': If you encounter a message that indicates the node is already registered with RHN, you do not need to register it again.
For more information, see the section ''RHN Registration'' in the ''Red Hat Installation Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
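When registering several nodes, it can be quicker to register non-interactively with the <tt>rhnreg_ks</tt> utility. A sketch using username/password credentials (an activation key can be used instead; see the RHN documentation):
  [root]# '''rhnreg_ks --username={RHN login} --password={RHN password} --profilename={node fully qualified hostname}'''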
Your installation number provides access to all necessary RHN Channels.
<br>Verify that all nodes are configured on the following RHN Channels:
* '''Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)'''
* '''RHEL Clustering (v. 5 for 64-bit x86_64)'''
* '''RHEL Cluster-Storage (v. 5 for 64-bit x86_64)'''
Execute the following command on each node to verify that they are registered:
  [root]# '''yum repolist'''
If all three channels are not registered:
# Log in to ''rhn.redhat.com''.
# Click '''Systems'''.
# If your cluster nodes are not listed, use a filter like '''Recently Registered''' in the left pane.
# Select your cluster node from the list.
# In the ''Subscribed channels'' section, click '''Alter Channel Subscriptions'''.
# Select all channels and click '''Change Subscriptions'''.
<br>'''NOTE''': For more information on registering with RHN and subscribing to channels, see the Red Hat Network documentation website at [http://www.redhat.com/docs/manuals/RHNetwork/ redhat.com/docs/manuals/RHNetwork].
===Updating Your Operating System===
After you have registered the cluster nodes with RHN, you can use the '''<tt>yum</tt>''' utility to manage software updates. Red Hat Enterprise Linux operating systems also include a GUI-based tool called '''<tt>system-config-packages</tt>'''. To access '''<tt>system-config-packages</tt>''', go to '''Applications -> Add/Remove Software'''.
If needed, install system-config-packages with the command:
  [root]# '''yum install pirut'''
Update to the latest software with the following command:
  [root]# '''yum update'''
This command updates all packages for your operating system to the latest version.  The Dell|Red Hat HA Linux Cluster system has been tested and qualified with Red Hat Enterprise Linux 5.3.

'''NOTE:''' If this update installs a new kernel on your host, the nodes must be rebooted in order to load the new kernel.
===Configuring the Firewall===
To configure your firewall:
1. Ensure that all nodes can communicate with each other by host name and IP address.  See the section [[#Verify Node Connectivity|Verify Node Connectivity]] for details. For more information, see the section ''Before Configuring a Red Hat Cluster'' in the ''Red Hat Deployment Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
<br>'''NOTE''': If you are not using DNS to resolve names, edit the local host file at <tt>/etc/hosts</tt> on each node and create entries for each node. Ensure that the local host file of each node contains entries for all nodes.
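For example, the <tt>/etc/hosts</tt> file on each node of a two-node cluster might contain entries such as the following (hypothetical names and addresses -- use your own):
  127.0.0.1       localhost.localdomain localhost
  192.168.0.101   node1.example.com node1
  192.168.0.102   node2.example.com node2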
2. Configure necessary IP traffic between the nodes. Execute the following commands on each node to allow all cluster communications between the nodes:
  [root]# '''iptables -I INPUT -s {cluster private network} -j ACCEPT'''
For example:
  [root]# '''iptables -I INPUT -s 172.16.0.0/16 -j ACCEPT'''
Save changes to the firewall:
  [root]# '''service iptables save'''
This allows all traffic from each node. However, if you require more control over the security between the nodes, you can allow only specific ports that are needed. For more information on firewalls, see the section ''Firewalls'' in the ''Red Hat Enterprise Linux Deployment Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ www.redhat.com/docs/manuals/enterprise/].
For information about IP port numbers, protocols, and components used by Red Hat Clustering, see the section ''Enabling IP Ports on Cluster Nodes'' in the Red Hat Enterprise Linux ''Cluster Administration Guide'' at [http://www.redhat.com/docs/manuals/enterprise/ redhat.com/docs/manuals/enterprise/].
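As an illustration of the more restrictive approach, the rules below open only the ports commonly documented for Red Hat Cluster Suite: cman/openais on 5404-5405/udp, ricci on 11111/tcp, modclusterd on 16851/tcp, and dlm on 21064/tcp. Verify this list against the ''Enabling IP Ports on Cluster Nodes'' section before relying on it, and substitute your own private network range:
  [root]# '''iptables -I INPUT -s 172.16.0.0/16 -p udp --dport 5404:5405 -j ACCEPT'''
  [root]# '''iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 11111 -j ACCEPT'''
  [root]# '''iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 16851 -j ACCEPT'''
  [root]# '''iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 21064 -j ACCEPT'''
  [root]# '''service iptables save'''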
===Verify Node Connectivity===
Perform the following to verify connectivity:

On node1:
 [root@node1]# '''ping {node2 fully qualified hostname}'''
For example:
 [root@node1]# '''ping node2.example.com'''

On node2:
 [root@node2]# '''ping {node1 fully qualified hostname}'''

On the management node:
 [root@management]# '''ping {node1 fully qualified hostname}'''
 [root@management]# '''ping {node2 fully qualified hostname}'''
           .
           .
           .
 [root@management]# '''ping {node''N'' fully qualified hostname}'''

Repeat these steps for all nodes to verify connectivity.

==Cluster Groups==
Execute the following command to verify access to the correct groups:
  [root]# '''yum grouplist | grep Cluster'''

The output should be:
 Clustering
 Cluster Storage

If you do not see these package groups listed here, then see [[#Registering_Your_Nodes_With_RHN|Registering Your Nodes With RHN]] to subscribe to the proper child channels.
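Once both groups appear in the '''yum grouplist''' output, they can be installed on each node with a single command (you can also defer this to the later cluster configuration steps):
  [root]# '''yum groupinstall "Clustering" "Cluster Storage"'''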
==Dell Community Repositories==
Your Dell PowerEdge systems can be managed using the '''<tt>yum</tt>''' utility that already manages packages for Red Hat Enterprise Linux 5.3 on your nodes.  This provides easy access to any Dell-provided Linux software, Dell PowerEdge firmware updates, some DKMS drivers, and other open source software. For more information, visit [http://linux.dell.com/repo/community/ linux.dell.com/repo/community/].
If your nodes do not have access to the internet, skip this section and perform manual firmware updates and software installation to your PowerEdge systems as needed. Visit [http://support.dell.com/ support.dell.com] for information.
To install the Dell Community Software Repository, use the following command:
  [root]# '''wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash'''

To install the Dell OMSA Repository, use the following command:
  [root]# '''wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash'''

To install the Dell Community firmware repository, use the following command:
  [root]# '''wget -q -O - http://linux.dell.com/repo/firmware/bootstrap.cgi | bash'''
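Once the firmware repository is bootstrapped, firmware inventory and updates are driven through <tt>yum</tt> and the firmware-tools packages. The package and command names below follow the instructions published on linux.dell.com for the community firmware repository; treat them as a sketch and check the repository page for the current procedure:
  [root]# '''yum install dell_ft_install'''
  [root]# '''yum install $(bootstrap_firmware)'''
  [root]# '''update_firmware'''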
 
==Additional Dell Configuration==

===Configure Dell Remote Access Controller===
If your nodes have a Dell Remote Access Controller (DRAC), you can manage it from your systems, or you can ssh/telnet into the DRAC directly.  To install the software components for the DRAC, see the section [[#Dell Community Repositories|Dell Community Repositories]] first.  You may also obtain the software from [http://support.dell.com support.dell.com].

Install the DRAC software with the following command:
  [root]# '''yum install srvadmin-rac5'''

If you need the complete OpenManage functionality, use this command instead:
  [root]# '''yum install srvadmin-all'''

====Enabling Telnet on the DRAC====
With RHEL 5.3 and greater, the nodes can fence a DRAC via SSH.  By default SSH is enabled on the DRACs, but telnet is disabled.  If you do not want to use SSH, you will need to enable telnet on each DRAC.  Skip this section if you plan to configure SSH for DRAC fencing, as detailed in [[../Cluster#Configure Cluster for DRAC SSH Fencing|Configure Cluster for DRAC SSH Fencing]].

1. Connect to each DRAC with the following command:
  [root]# '''ssh {IP address of DRAC}'''
For example:
  [root]# '''ssh 192.168.120.100'''
2. Repeat this process on all nodes with DRACs.
<br>3. Enable telnet with the following command:
  [root@drac]# '''racadm config -g cfgSerial -o cfgSerialTelnetEnable 1'''
'''NOTE''': You can also use the web interface or Dell OpenManage™ to configure your DRACs.

For more information, see the [http://support.dell.com/support/edocs/software/smdrac3/ DRAC documentation].

===Configure Intelligent Platform Management Interface===
If your nodes do not have a Dell Remote Access Controller, you can use IPMI instead.  However, it is best to use a dedicated NIC for IPMI.

To enable IPMI, perform the following steps on all the nodes:

* Install the OpenIPMI packages:
  [root]# '''yum install OpenIPMI OpenIPMI-tools'''

* Start the IPMI service:
  [root]# '''service ipmi start'''
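After the service is running, you can confirm that a node's BMC answers over the LAN with <tt>ipmitool</tt> (provided by the OpenIPMI tools or a separate <tt>ipmitool</tt> package, depending on your minor release). A sketch, run from another node against the IPMI address you configured in [[#Configuring Remote Access|Configuring Remote Access]], with placeholder credentials:
  [root]# '''ipmitool -I lanplus -H {IPMI IP address} -U {IPMI user} -P {IPMI password} chassis power status'''
The IPMI fencing agent used by the cluster reaches the BMC over this same interface, so this is a quick way to verify that fencing will be able to reach each node.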
=Verification Checklist=
{| border="1" cellpadding="20" cellspacing="0"
|-
|Physical setup
|&nbsp;
|-
|Physical cabling
|&nbsp;
|-
|Remote Access Controllers configured in the BIOS
|&nbsp;
|-
|Operating Systems installed and updated
|&nbsp;
|-
|Red Hat Network registration
|&nbsp;
|-
|Firewall Configuration
|&nbsp;
|-
|DRAC or IPMI software installation
|&nbsp;
|}
----
'''Continue to the next section [[../Storage|Storage]]'''

----
<font size=1>[[../../DellRedHatHALinuxCluster|Dell|Red Hat HA Linux]] > System</font>
Latest revision as of 21:45, 24 April 2009

Dell|Red Hat HA Linux > System

Contents

Cabling Your Cluster

The following sections provide information on how to cable various components of your cluster.

Pre-requisites

Before cabling the various components of your cluster, ensure that all hardware components have been installed in the rack. For instructions on racking your equipment, see the documentation included with your rack and components.

Cabling the Power Supplies

To ensure that the specific power requirements are satisfied, see the documentation for each component in your cluster solution. It is recommended that you adhere to the following guidelines to protect your cluster solution from power-related failures:

  • Plug each power supply into a separate AC circuit.
  • Plug each power supply into separate optional network power switches.
  • Use uninterruptible power supplies (UPS).
  • Consider backup generators and power from separate electrical substations.

Cabling Your Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 1.

Table 1. Network Connections

Network Connection Description
Public Network All connections to the client local area network (LAN).

At least one public network must be configured for mixed mode (public mode and private mode) for private network failover.

Private Network A dedicated connection for sharing cluster status information between the cluster nodes.

Network adapters connected to the LAN can also provide redundancy at the communications level in case the cluster interconnect fails. If you have optional DRAC cards, cable them to the private network. In this configuration, do not use point-to-point topology, use a network switch If you have optional network power switches, cable them to the private network. Cable any Storage Array Ethernet management ports to the private network.

Figure 1 shows an example of network adapter cabling in which dedicated network adapters in each node are connected to the public network and the remaining network adapters are connected to each other (for the private network).


network4.jpg

Figure 1. Example of Network Cabling Connection

NIC Bonding

Network Interface Card (NIC) bonding combines two or more NICs to provide load balancing and/or fault tolerance. Your cluster supports NIC bonding. Use the same type of NIC for each member in a bond for consistent performance. For information on configuring bonding, see the Red Hat Deployment Guide section "Channel Bonding Interfaces” at www.redhat.com/docs/manuals/enterprise/.

NOTE: If dual-port network cards are used, do not bond two ports on the same adapter, as this results in a single point of failure.

Cabling Your Public Network

Cable any network adapters that will be providing access to HA applications for use by clients to the public network segments. You may install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling Your Private Network

The private network connection to the cluster nodes is provided by a second or subsequent network adapter that is installed in each node. This network is used for intra-cluster communications. Cable all cluster network components including cluster node private NICs, network power switches, remote access controllers (DRAC/IPMI), and any storage controller management ports.

Preparing Your Hardware

To setup your systems for use in a Dell|Red Hat HA Linux Cluster, ensure you have the Red Hat Enterprise Linux 5.3 Advanced Platform installation media.

Configuring the Cluster Nodes

To prepare the cluster nodes for setup, execute the instructions in the following sections.

Configuring Remote Access

1. During the system initialization, use the <Ctrl><E> key combination to go to the Remote Access Configuration Utility menu when the following message appears:

Remote Access Configuration Utility 1.05
Copyright 2006 Dell Inc. All Rights Reserved

Baseboard management Controller Revision 1.33
Remote Access Controller Revision (Build 06.05.12) 1.0
Primary Backplane Firmware Revision 1.05

IP Address: 192.168.0.120
Netmask: 255.255.255. 0
Gateway: 192.168.0.1
Press <Ctrl-E> for Remote Access Setup within 5 sec....

NOTE: Version numbers on your servers may not be same as these examples.

2. The Remote Access Configuration Utility menu appears.

+-------------------- Remote Access Configuration Utility ---------------------+
¦              Copyright 2006 Dell Inc. All Rights Reserved 1.05               ¦
¦                                                                              ¦
+------------------------------------------------------------------------------+
+------------------------------------------------------------------------------+
¦ Baseboard Management Controller Revision                       1.33          ¦
¦ Remote Access Controller Revision (Build 06.05.12)             1.0           ¦
¦ Primary Backplane Firmware Revision                            1.05          ¦
¦ --------------------------------------------------------------------------   ¦
¦                                                                              ¦
¦ IPMI Over LAN ................................................ Off           ¦
¦ NIC Selection ................................................ Dedicated     ¦
¦ LAN Parameters ............................................... <ENTER>       ¦
¦ Advanced LAN Parameters ...................................... <ENTER>       ¦
¦ Virtual Media Configuration .................................. <ENTER>       ¦
¦ LAN User Configuration ....................................... <ENTER>       ¦
¦ Reset To Default ............................................. <ENTER>       ¦
¦ System Event Log Menu ........................................ <ENTER>       ¦

3. If you have purchased the PowerEdge systems with Dell Remote Access Controller (DRAC) cards, ensure the IPMI Over LAN option is set to off. If IPMI Over LAN is enabled, an additional network interface card (NIC) may be required for a fully-redundant setup. This issue occurs as IPMI uses one of the on-board NICs, thereby preventing redundant network configuration without additional NICs. For instructions on using IPMI for your cluster nodes, see Additional Dell Configuration.
NOTE: If you are using IPMI instead of a DRAC card for fencing on a PowerEdge 1950 system, a fully redundant setup is not possible due to the limited amount of slots for additional cards.

4. In the Remote Access Configuration Utility menu, select LAN Parameters and assign an IP address or select DHCP.
NOTE: If you are using the DHCP option, record the MAC Address from this menu for use in assigning a static IP through DHCP later. For instructions on configuring a DHCP server, see the Red Hat Deployment Guide section 21.2. Configuring a DHCP Server at www.redhat.com/docs/manuals/enterprise/.
NOTE: Ensure the IP address assigned is on the same subnet as the cluster private network, since the nodes communicate with the remote access controllers for cluster fencing operations.

 +-------------------- Remote Access Configuration Utility ---------------------+
 ¦              Copyright 2006 Dell Inc. All Rights Reserved 1.05               ¦
 ¦                                                                              ¦
 +------------------------------------------------------------------------------+
 +------------------------------------------------------------------------------+
 ¦ Baseboard Management Controller Revision                       1.33          ¦
 ¦ Remote Access Controller Revision (Build 06.05.12)             1.0           ¦
 ¦ Primary Backplane Firmware Revision                            1.05          ¦
 ¦ --------------------------------------------------------------------------   ¦
 ¦         +--------------------------------------------------------+           ¦
 ¦ IPMI Ove¦ RMCP+ Encryption Key ............ <ENTER>              ¦           ¦
 ¦ NIC Sele¦ ---------------------------------------------------- _ ¦icated     ¦
 ¦ LAN Para¦ IP Address Source ............... DHCP                 ¦TER>       ¦
 ¦ Advanced¦ Ethernet IP Address ............. 192.168. 0 .120      ¦TER>       ¦
 ¦ Virtual ¦ MAC Address ..................... 00:18:8B:38:5E:F5    ¦TER>       ¦
 ¦ LAN User¦ Subnet Mask ..................... 255.255.255. 0       ¦TER>       ¦
 ¦ Reset To¦ Default Gateway ................. 192.168. 0 .  1    ¦ ¦TER>       ¦
 ¦ System E¦ ---------------------------------------------------- ¦ ¦TER>       ¦
 ¦         ¦ VLAN Enable ..................... Off                ¦ ¦           ¦
 ¦         ¦ VLAN ID ......................... 0001                 ¦           ¦
 ¦         +--------------------------------------------------------+           ¦

5. Save changes and Exit

Configuring the Internal Drives

If you have added new physical disks to your system or are setting up the internal drives in a RAID configuration, you must configure the RAID using the RAID controller's BIOS configuration utility before you can install the operating system.

For the best balance of fault tolerance and performance, it is recommended that you use RAID 1 for the internal disks. For more information on RAID configurations, see the documentation for your specific RAID controller.
NOTE: If you are not using Dell PERC RAID solution and want to configure fault tolerance, use software RAID included with the Red Hat Enterprise Linux operating system. For more instructions, see section 4.5. Configuring Software RAID in the the Red Hat Deployment Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise//.

Preparing the Operating System

NOTICE: Disconnect all storage cables during any operating system installation. Failure to do so can result in lost data, boot sector installation issues, and multipath drive ordering problems.

Accessing the Red Hat Network

If you do not have an RHN account, log on to www.redhat.com/register/ and create an account. The cluster nodes are shipped with RHN subscription information. If you are unable to locate the registration information, contact Dell Support.

Determine the Operating System Status

If your PowerEdge system shipped with Red Hat Enterprise Linux 5 operating system installed from the factory, you must upgrade to Red Hat Enterprise Linux 5.3 operating system. To upgrade the operating system, see Updating Your Operating System. Continue to Installing the Red Hat Enterprise Linux Operating System for instructions on a full install.

Installing the Red Hat Enterprise Linux Operating System

If the operating system media is not included with your PowerEdge system, download Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) ISO image(s) from the Red Hat website at rhn.redhat.com. If you require physical media, contact your Dell sales representative. After obtaining the correct ISO image(s) or physical media, choose one of the following installation methods.

Installing the Operating System Using Physical Media

Create physical media for installation. For more information, see the section Can You Install Using the CD-ROM or DVD? in the Red Hat Installation Guide on the Red Hat website at www.redhat.com/docs/manuals/enterprise//.

Installing the Operating System Over the Network

Create a network installation server. For more information, see the section Preparing for a Network Installation in the Red Hat Installation Guide on the Red Hat website at www.redhat.com/docs/manuals/enterprise//.

Using a Kickstart Installation

Your Dell|Red Hat HA Linux Cluster system can be easily installed with the kickstart script located at http://linux.dell.com/files/ha_linux/ks.cfg. However, your nodes must be connected to the Internet to use this script. If your nodes do not have internet access, you will need to create your own kickstart file. See Creating Your Own Kickstart. For complete details on Kickstart, see section Kickstart Installations in the Red Hat Installation Guide on www.redhat.com/docs/manuals/enterprise/.

Creating Your Own Kickstart File

You may download a kickstart script and modify it, either to meet your specific needs or because your cluster nodes do not have direct Internet access.

Download a kickstart script (e.g. ks.cfg) from linux.dell.com/files/ha_linux.

For more information on creating a kickstart installation, see the section "Creating a Kickstart Installation" in the Red Hat Installation Guide at www.redhat.com/docs/manuals/enterprise/.

For information on using the kickstart GUI, see the section Kickstart Configurator in the Red Hat Installation Guide at www.redhat.com/docs/manuals/enterprise/.
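For reference, a kickstart file is a plain-text list of installation directives. The fragment below is only an illustrative sketch, not the contents of the Dell-provided ks.cfg; the installation URL, root password, time zone, and package groups are placeholders that you must adapt to your environment (the cluster group names can be checked with yum grouplist):

install
url --url http://local-server.private.lan/rhel5.3/x86_64
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw {root password}
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@Base
@Clustering
@Cluster Storage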

Using the Network Installation

You can use one of the network installation methods described in the following sections to install your Red Hat Enterprise Linux operating system.

NOTE: Use Pre-Boot Execution Environment (PXE) for easy installation of multiple nodes. For more information on setting up PXE, see the section PXE Network Installations in the Red Hat Installation Guide at www.redhat.com/docs/manuals/enterprise/.

Using PXE Boot to Install the Operating System From the Network

Configure an entry that points to the network installation server you configured in Installing the Operating System Over the Network, using the kickstart file that you created in Creating Your Own Kickstart File.

Example of a PXE entry:

label Dell_Red_Hat_Cluster_Node
    menu label node1 RHEL 5.3
    kernel images/os/linux/redhat/rhel/5.3/x86_64/vmlinuz
    append initrd=images/os/linux/redhat/rhel/5.3/x86_64/initrd.img
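To have this PXE entry start the kickstart automatically, the location of the kickstart file can be added to the append line. The server name below is only an example; substitute your own web server, or the linux.dell.com URL shown later in this guide:

    append initrd=images/os/linux/redhat/rhel/5.3/x86_64/initrd.img ks=http://local-server.private.lan/kickstart/ks.cfg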

Using Media to Install the Operating System From the Network

You can boot from the RHEL 5.3 CD 1 or DVD and specify your network installation server using the askmethod parameter. When the RHEL boot prompt appears, enter the following:

boot: linux askmethod

You can also specify the kickstart file that you created in Creating Your Own Kickstart File. For example:

boot: linux ks=http://local-server.private.lan/kickstart/ks.cfg

If your nodes have access to linux.dell.com, you may use the kickstart file there:

boot: linux ks=http://linux.dell.com/files/ha_linux/ks.cfg

Synchronize Node System Clocks

To ensure the best cluster performance, synchronize the clocks on all the cluster nodes. If the clocks on the cluster nodes are not synchronized, inode time stamps are updated unnecessarily, severely impacting cluster performance. It is recommended that you use Network Time Protocol (NTP) to synchronize the nodes. For more information on configuring system time and NTP, see the section Date and Time Configuration in the Red Hat Deployment Guide at www.redhat.com/docs/manuals/enterprise/.
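On RHEL 5.3 the NTP service can be installed and enabled with the following commands. This assumes the nodes can reach the time servers listed in /etc/ntp.conf (by default, public NTP pool servers); if they cannot, edit that file to point at a local time source first:

[root]# yum install ntp
[root]# chkconfig ntpd on
[root]# service ntpd start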

Registering Your Nodes With RHN

Register all nodes with Red Hat Network (RHN) at rhn.redhat.com by executing the following command:

[root]# rhn_register


NOTE: If you encounter a message that indicates the node is already registered with RHN, you do not need to register it again.

For more information, see the section RHN Registration in the Red Hat Installation Guide at www.redhat.com/docs/manuals/enterprise/.

Your installation number provides access to all necessary RHN Channels.
Verify that all nodes are configured on the following RHN Channels:

  • Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)
  • RHEL Clustering (v. 5 for 64-bit x86_64)
  • RHEL Cluster-Storage (v. 5 for 64-bit x86_64)

Execute the following command on each node to verify that it is subscribed to these channels:

[root]# yum repolist

If all three channels are not listed:

  1. Log in to rhn.redhat.com.
  2. Click Systems.
  3. If your cluster nodes are not listed, use a filter such as Recently Registered in the left pane.
  4. Select your cluster node from the list.
  5. In the Subscribed channels section, click Alter Channel Subscriptions.
  6. Select all channels and click Change Subscriptions.


NOTE: For more information on registering with RHN and subscribing to channels, see the Red Hat Network documentation at redhat.com/docs/manuals/RHNetwork.

Updating Your Operating System

After you have registered the cluster nodes with RHN, you can use the yum utility to manage software updates. Red Hat Enterprise Linux operating systems also include a GUI-based tool called system-config-packages. To access system-config-packages, go to Applications -> Add/Remove Software.

If needed, install system-config-packages with the command:

[root]# yum install pirut

Update to the latest software with the following command:

[root]# yum update

This command updates all packages for your operating system to the latest version. The Dell|Red Hat HA Linux Cluster system has been tested and qualified with Red Hat Enterprise Linux 5.3.

NOTE: If this update installs a new kernel on your nodes, they must be rebooted to load the new kernel.

Configuring the Firewall

To configure your firewall:

1. Ensure that all nodes can communicate with each other by host name and IP address. See the section Verify Node Connectivity for details. For more information, see the section Before Configuring a Red Hat Cluster in the Red Hat Deployment Guide at www.redhat.com/docs/manuals/enterprise/.
NOTE: If you are not using DNS to resolve names, edit the local host file at /etc/hosts on each node and ensure that it contains entries for all nodes in the cluster.
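The entries below are only an illustration; the addresses reuse the private network from the example in step 2, and the host names are placeholders, so substitute your own IP addresses and fully qualified host names:

172.16.0.1    node1.example.com    node1
172.16.0.2    node2.example.com    node2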

2. Configure the necessary IP traffic between the nodes. Execute the following command on each node to allow all cluster communications between the nodes:

[root]# iptables -I INPUT -s {cluster private network} -j ACCEPT

For example:

[root]# iptables -I INPUT -s 172.16.0.0/16 -j ACCEPT

Save changes to the firewall:

[root]# service iptables save

This allows all traffic from each node. However, if you require more control over the security between the nodes, you can open only the specific ports that are needed. For more information on firewalls, see the section Firewalls in the Red Hat Enterprise Linux Deployment Guide at www.redhat.com/docs/manuals/enterprise/.

For information about IP port numbers, protocols, and components used by Red Hat Clustering, see the section Enabling IP Ports on Cluster Nodes in the Red Hat Enterprise Linux Cluster Administration Guide at redhat.com/docs/manuals/enterprise/.
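As an illustration of opening only specific ports, the commands below allow the port numbers commonly documented for Red Hat Clustering on RHEL 5 (openais on 5404-5405/udp, ricci on 11111/tcp, dlm on 21064/tcp, and modclusterd on 16851/tcp) from the example private network. Verify the exact port list against the Cluster Administration Guide for your release:

[root]# iptables -I INPUT -s 172.16.0.0/16 -p udp --dport 5404:5405 -j ACCEPT
[root]# iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 11111 -j ACCEPT
[root]# iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 21064 -j ACCEPT
[root]# iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 16851 -j ACCEPT
[root]# service iptables save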

Verify Node Connectivity

Perform the following to verify connectivity:

On node1:

[root@node1]# ping {node2 fully qualified hostname}

For example:

[root@node1]# ping node2.example.com

On node2:

[root@node2]# ping {node1 fully qualified hostname}

On the management node:

[root@management]# ping {node1 fully qualified hostname}
[root@management]# ping {node2 fully qualified hostname}
          .
          .
          .
[root@management]# ping {nodeN fully qualified hostname}

Repeat these steps for all nodes to verify connectivity.

Cluster Groups

Execute the following command to verify access to the correct groups:

[root]# yum grouplist | grep Cluster

The output should be:

Clustering
Cluster Storage

If you do not see these package groups listed, see Registering Your Nodes With RHN to subscribe to the proper child channels.

Dell Community Repositories

Your Dell PowerEdge systems can be managed using the yum utility that already manages packages for Red Hat Enterprise Linux 5.3 on your nodes. This provides easy access to Dell-provided Linux software, Dell PowerEdge firmware updates, some DKMS drivers, and other open source software. For more information, visit linux.dell.com/repo/community/.

If your nodes do not have access to the Internet, skip this section and perform manual firmware updates and software installation on your PowerEdge systems as needed. Visit support.dell.com for information.
NOTE: The repositories are community-supported and are not officially supported by Dell. Use the linux-poweredge mailing list on lists.us.dell.com for repository support.

Installing Dell Community Repositories


NOTE: Ensure that you enter each of the CLI commands below on a single line.

To install the Dell Community Software Repository, use the following command:

[root]# wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash

To install the Dell OMSA Repository, use the following command:

[root]# wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash

To install the Dell Community firmware repository, use the following command:

[root]# wget -q -O - http://linux.dell.com/repo/firmware/bootstrap.cgi | bash

Additional Dell Configuration

Configure Dell Remote Access Controller

If your nodes have a Dell Remote Access Controller (DRAC), you can manage it from your systems, or you can SSH or telnet into the DRAC directly. To install the software components for the DRAC, see the section Installing Dell Community Repositories first. You may also obtain the software from support.dell.com.

Install the DRAC software with the following command:

[root]# yum install srvadmin-rac5

If you need the complete OpenManage functionality, use this command instead:

[root]# yum install srvadmin-all

Enabling Telnet on the DRAC

With RHEL 5.3 and later, the nodes can fence through a DRAC using SSH. SSH is enabled on the DRACs by default, but telnet is disabled. If you do not want to use SSH, you must enable telnet on each DRAC. Skip this section if you plan to configure SSH for DRAC fencing, as detailed in Configure Cluster for DRAC SSH Fencing.

1. Connect to each DRAC with the following command:

[root]# ssh {IP address of DRAC}

For example:

[root]# ssh 192.168.120.100

2. Enable telnet with the following command:

[root@drac]# racadm config -g cfgSerial -o cfgSerialTelnetEnable 1

3. Repeat steps 1 and 2 on each node that has a DRAC.
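To confirm that telnet is now enabled, you can try connecting to the DRAC from one of the nodes, using the example IP address from step 1 (install the telnet client package first if it is not present):

[root]# telnet 192.168.120.100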

NOTE: You can also use the web interface or Dell OpenManage™ to configure your DRACs.

For more information, see the DRAC documentation.

Configure Intelligent Platform Management Interface

If your nodes do not have a Dell Remote Access Controller, you can use IPMI instead. However, it is best to use a dedicated NIC for IPMI.

To enable IPMI, perform the following steps on all the nodes:

  • Install the OpenIPMI packages:
[root]# yum install OpenIPMI OpenIPMI-tools
  • Start the IPMI service:
[root]# service ipmi start
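Once the service is running, a quick local check that the baseboard management controller responds can be made with ipmitool (provided by OpenIPMI-tools or by a separate ipmitool package, depending on your update level); the exact output varies with your hardware:

[root]# ipmitool chassis status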

Verification Checklist

Item                                                Verified
Physical setup                                      [ ]
Physical cabling                                    [ ]
Remote Access Controllers configured in the BIOS    [ ]
Operating Systems installed and updated             [ ]
Red Hat Network registration                        [ ]
Firewall Configuration                              [ ]
DRAC or IPMI software installation                  [ ]



Continue to the next section: Storage.



