Products/HA/DellRedHatHALinuxCluster/System


Cabling Your Cluster

The following sections provide information on how to cable various components of your cluster.

Prerequisites

Before cabling the various components of your cluster, ensure that all items have been opened and racked. For instructions on racking your equipment, see the documentation included with your rack and components.

Cabling the Power Supplies

To ensure that the specific power requirements are satisfied, see the documentation for each component in your cluster solution. It is recommended that you adhere to the following guidelines to protect your cluster solution from power-related failures:

  • Plug each power supply into a separate AC circuit.
  • Plug each power supply into separate optional network power switches.
  • Use uninterruptible power supplies (UPS).
  • Consider backup generators and power from separate electrical substations.

Cabling Your Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 2-1.

Table 2-1. Network Connections

Network Connection   Description
Public Network       All connections to the client local area network (LAN). At least one public network must be configured for mixed mode (public mode and private mode) to allow private network failover.
Private Network      A dedicated connection for sharing cluster status information between the cluster nodes.

Network adapters connected to the LAN can also provide redundancy at the communications level in case the cluster interconnect fails. If you have optional DRAC cards, cable them to the private network; in this configuration, use a network switch rather than a point-to-point topology. If you have optional network power switches, cable them to the private network as well. Also cable any storage array Ethernet management ports to the private network.

Figure 2-3 shows an example of network adapter cabling in which dedicated network adapters in each node are connected to the public network and the remaining network adapters are connected to each other (for the private network).

Figure 2-3. Example of Network Cabling Connection


NIC Bonding

Network Interface Card (NIC) bonding combines two or more NICs to provide load balancing and/or fault tolerance. Your cluster supports NIC bonding. For consistent performance, use identical NICs in a bond. For information on configuring bonding, see the Red Hat Deployment Guide section "Channel Bonding Interfaces" at www.redhat.com/docs/manuals/enterprise/.

NOTE: If dual-port network cards are used, do not bond together two ports on a single adapter, as this results in a single point of failure.
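As an illustration, a minimal active-backup bond on Red Hat Enterprise Linux 5 might look like the following sketch (the interface names, IP address, and bonding options are examples; adapt them to your network):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-eth0 (create a similar file for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

# /etc/modprobe.conf -- load the bonding driver in active-backup mode (mode=1)
alias bond0 bonding
options bond0 mode=1 miimon=100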

Cabling Your Public Network

Cable any network adapters that will be providing access to HA applications for use by clients to the public network segments. You may install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling Your Private Network

The private network connection to the cluster nodes is provided by a second or subsequent network adapter that is installed in each node. This network is used for intra-cluster communications. Cable all cluster network components including cluster node private NICs, network power switches, remote access controllers (DRAC/IPMI), and any storage controller management ports.

Preparing Your Hardware

To set up your systems for use in a Dell|Red Hat HA Linux Cluster, ensure that you have the Red Hat Enterprise Linux 5.1 Advanced Platform installation media.

Configuring the Cluster Nodes

To prepare the cluster nodes for setup, execute the instructions in the following sections.

Configuring Remote Access

1. During the system's power-on self-test (POST), press the <Ctrl><E> key combination to enter the Remote Access Configuration Utility when the following message appears:

Remote Access Configuration Utility 1.05
Copyright 2006 Dell Inc. All Rights Reserved

Baseboard management Controller Revision 1.33
Remote Access Controller Revision (Build 06.05.12) 1.0
Primary Backplane Firmware Revision 1.05

IP Address: 172. 16. 0 .102
Netmask: 255.255.255. 0
Gateway: 192. 168. 0 .254
Press <Ctrl-E> for Remote Access Setup within 5 sec....

2. The Remote Access Configuration Utility menu appears.

┌──────────────────── Remote Access Configuration Utility ─────────────────────┐
│              Copyright 2006 Dell Inc. All Rights Reserved 1.05               │
│                                                                              │
└──────────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ Baseboard Management Controller Revision                       1.33          │
│ Remote Access Controller Revision (Build 06.05.12)             1.0           │
│ Primary Backplane Firmware Revision                            1.05          │
│ ──────────────────────────────────────────────────────────────────────────   │
│                                                                              │
│ IPMI Over LAN ................................................ Off           │
│ NIC Selection ................................................ Dedicated     │
│ LAN Parameters ............................................... <ENTER>       │
│ Advanced LAN Parameters ...................................... <ENTER>       │
│ Virtual Media Configuration .................................. <ENTER>       │
│ LAN User Configuration ....................................... <ENTER>       │
│ Reset To Default ............................................. <ENTER>       │
│ System Event Log Menu ........................................ <ENTER>       │

3. If you purchased PowerEdge systems with Dell Remote Access Controller (DRAC) cards, ensure that the IPMI Over LAN option is set to Off. If IPMI Over LAN is enabled, an additional network interface card (NIC) may be required for a fully redundant setup, because IPMI uses one of the on-board NICs and thereby prevents a redundant network configuration without additional NICs.
NOTE: If you are using IPMI instead of a DRAC card for fencing on a PowerEdge 1950 system, a fully redundant setup is not possible due to the limited number of slots for additional cards.

4. In the Remote Access Configuration Utility menu, select LAN Parameters and assign an IP address or select DHCP.
NOTE: If you are using the DHCP option, the IP address must be assigned by a DHCP server. Record the MAC Address from this menu so that you can assign a static IP address through DHCP later (see the example after step 5). For instructions on configuring a DHCP server, see the Red Hat Deployment Guide section 21.2. Configuring a DHCP Server at www.redhat.com/docs/manuals/enterprise/.
NOTE: Ensure the IP address assigned is on the same subnet as the cluster private network, as the nodes communicate with the remote access controllers as part of cluster fencing operations.

 ┌──────────────────── Remote Access Configuration Utility ─────────────────────┐
 │              Copyright 2006 Dell Inc. All Rights Reserved 1.05               │
 │                                                                              │
 └──────────────────────────────────────────────────────────────────────────────┘
 ┌──────────────────────────────────────────────────────────────────────────────┐
 │ Baseboard Management Controller Revision                       1.33          │
 │ Remote Access Controller Revision (Build 06.05.12)             1.0           │
 │ Primary Backplane Firmware Revision                            1.05          │
 │ ──────────────────────────────────────────────────────────────────────────   │
 │         ┌────────────────────────────────────────────────────────┐           │
 │ IPMI Ove│ RMCP+ Encryption Key ............ <ENTER>              │           │
 │ NIC Sele│ ──────────────────────────────────────────────────── _ │icated     │
 │ LAN Para│ IP Address Source ............... DHCP                 │TER>       │
 │ Advanced│ Ethernet IP Address ............. 172. 16. 0 .102      │TER>       │
 │ Virtual │ MAC Address ..................... 00:18:8B:38:5E:F5    │TER>       │
 │ LAN User│ Subnet Mask ..................... 255.255.255. 0       │TER>       │
 │ Reset To│ Default Gateway ................. 172. 16. 0 .254    ▒ │TER>       │
 │ System E│ ──────────────────────────────────────────────────── ▒ │TER>       │
 │         │ VLAN Enable ..................... Off                ▒ │           │
 │         │ VLAN ID ......................... 0001                 │           │
 │         └────────────────────────────────────────────────────────┘           │

5. Save the changes and exit the utility.
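If you selected the DHCP option in step 4, you can pin the MAC address you recorded to a fixed IP address on your DHCP server. A minimal /etc/dhcpd.conf host entry might look like the following sketch (the MAC address is the one from the example screen above; the host name and IP address are placeholders):

host node1-drac {
    hardware ethernet 00:18:8B:38:5E:F5;
    fixed-address 172.16.0.102;
}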

Additional Configuration

You must configure your DRAC cards for telnet communication with the fencing agents. DRAC cards are shipped with secure shell (SSH) enabled and telnet disabled by default. Support for SSH in the fencing agents is planned for a future release.

1. Connect to each DRAC with the following command:

[root]# ssh {IP address of DRAC}

For example:

[root]# ssh 192.168.120.100

2. Repeat this process on all nodes with DRACs.
3. Enable telnet with the following command:

[root@drac]# racadm config -g cfgSerial -o cfgSerialTelnetEnable 1

NOTE: If you cannot connect to the DRAC using SSH at this time, perform the procedure in this section after your nodes are configured. You can also use the web interface or Dell OpenManage™ to configure your DRACs. For more information on configuring your DRAC, see the documentation on support.dell.com.
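To confirm that telnet is now enabled, you can read the setting back (a quick sanity check; the exact output format varies by DRAC firmware revision):

[root@drac]# racadm getconfig -g cfgSerial

Look for cfgSerialTelnetEnable=1 in the output.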

Configuring the Internal Drives

If you have added new physical disks to your system or are setting up the internal drives in a RAID configuration, you must configure the RAID using the RAID controller's BIOS configuration utility before you can install the operating system.

For the best balance of fault tolerance and performance, it is recommended that you use RAID 1 for the internal disks. For more information on RAID configurations, see the documentation for your specific RAID controller.
NOTE: If you are not using a Dell PERC RAID solution and want to configure fault tolerance, use the software RAID included with the Red Hat Enterprise Linux operating system. For instructions, see section 4.5. Configuring Software RAID in the Red Hat Deployment Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.
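For reference, a minimal software RAID 1 sketch using mdadm follows (the device and partition names are examples only; creating an array destroys any existing data on the listed partitions):

[root]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
[root]# mdadm --detail /dev/md0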

Preparing the Operating System

NOTICE: Disconnect all storage cables during any operating system installation. Failure to do so can result in lost data, boot sector installation issues, and multipath drive ordering problems.

Accessing the Red Hat Network

If you do not have an RHN account, log on to www.redhat.com/register/ and create an account. The cluster nodes are shipped with RHN subscription information. If you are unable to locate the registration information, contact Dell Support.

Determine the Operating System Status

If your PowerEdge system shipped from the factory with the Red Hat Enterprise Linux 5 operating system installed, you must upgrade it to Red Hat Enterprise Linux 5.1. To upgrade the operating system, see "Updating Your Operating System". To install the operating system on your PowerEdge systems, see the next section, "Installing the Red Hat Enterprise Linux 5.1 Operating System".

Installing the Red Hat Enterprise Linux 5.1 Operating System

If the operating system media is not included with your PowerEdge system, download Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) ISO image(s) from the Red Hat website at rhn.redhat.com. If you require physical media, contact your Dell sales representative. After obtaining the correct ISO image(s) or physical media, choose one of the following installation methods.

Installing the Operating System Using Physical Media

Create physical media for installation. For more information, see the section Can You Install Using the CD-ROM or DVD? in the Red Hat Installation Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

NOTE: You can also use the Dell Systems Build and Update Utility media to install the operating system. To download the latest media, go to the Dell Support website at support.dell.com.

Installing the Operating System Through the Network

Create a network installation server. For more information, see the section Preparing for a Network Installation in the Red Hat Installation Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

Using a Kickstart Installation

Your Dell|Red Hat HA Linux Cluster system can be installed easily with the kickstart script located at http://linux.dell.com/files/ha_linux/ks.cfg. However, your nodes must be connected to the Internet to use this script. If your nodes do not have Internet access, create your own kickstart file; see "Creating Your Own Kickstart File". For complete details on kickstart, see the section Kickstart Installations in the Red Hat Installation Guide at www.redhat.com/docs/manuals/enterprise/.

Creating Your Own Kickstart File

If your cluster nodes do not have direct Internet access, or if the standard script does not meet your needs, download a kickstart script and modify it accordingly.

Download a kickstart script from http://linux.dell.com/files/ha_linux/.

For more information on creating a kickstart installation, see the section "Creating a Kickstart Installation" in the Red Hat Installation Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

For information on using the kickstart GUI, see the section Kickstart Configurator of the Red Hat Installation Guide on www.redhat.com/docs/manuals/enterprise/.
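As a starting point, the skeleton of a RHEL 5.1 kickstart file might look like the following sketch (every value shown is a placeholder to adapt to your environment; the package groups match the ones installed later in this guide):

install
url --url http://installserver.example.com/rhel/5.1/x86_64
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw --iscrypted <password hash>
firewall --enabled
authconfig --enableshadow --enablemd5
timezone --utc America/Chicago
bootloader --location=mbr
clearpart --all --initlabel
autopart

%packages
@base
@Clustering
@Cluster Storage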

Using the Network Installation

You can use one of the network installation methods described in the following sections to install your Red Hat Enterprise Linux operating system.

NOTE: Use the Pre-Boot Execution Environment (PXE) for easy installation of multiple nodes. For more information on setting up PXE, see the section "PXE Network Installations" in the Red Hat Installation Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.


Using PXE Boot to Install the Operating System From the Network

Configure an entry that points to the network installation server you configured in "Installing the Operating System Through the Network" using the kickstart that you created in "Creating Your Own Kickstart File".

Example of a PXE entry:

label Dell_Red_Hat_Cluster_Node
    menu label node1 RHEL 5.1
    kernel images/os/linux/redhat/rhel/5.1/x86_64/vmlinuz
    append initrd=images/os/linux/redhat/rhel/5.1/x86_64/initrd.img
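This entry typically lives in your PXE server's configuration file (with pxelinux, commonly /tftpboot/pxelinux.cfg/default; the exact path depends on your PXE setup). To make the installation unattended, you can also append the kickstart location to the same entry, for example:

    append initrd=images/os/linux/redhat/rhel/5.1/x86_64/initrd.img ks=http://linux.dell.com/files/ha_linux/ks.cfg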

Using Media to Install the Operating System From the Network

You can boot from the RHEL 5.1 CD 1 or DVD and specify your network installation server using the askmethod parameter. When the RHEL boot prompt appears, enter the following:

boot: linux askmethod

You can also use the kickstart file that you created in Using a Kickstart Installation. For example:

boot: linux ks=http://linux.dell.com/files/ha_linux/ks.cfg


NOTE: If you created your own kickstart file, specify its location here instead.

Synchronize Node System Clocks

Synchronize the clocks on all nodes. It is recommended that you use the Network Time Protocol (NTP) to synchronize the nodes. For more information on configuring system time and NTP, see the section Date and Time Configuration in the Red Hat Deployment Guide at www.redhat.com/docs/manuals/enterprise/.
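On Red Hat Enterprise Linux 5, enabling NTP typically amounts to the following minimal sketch (choose time servers appropriate for your site in /etc/ntp.conf):

[root]# yum install ntp
[root]# chkconfig ntpd on
[root]# service ntpd start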

Registering Your Nodes With RHN

Register all nodes with Red Hat Network (RHN) at rhn.redhat.com by executing the following command:

[root]# rhn_register


NOTE: If you encounter a message that indicates the node is already registered with RHN, you do not need to register it again.

For more information, see the section "RHN Registration" in the Red Hat Installation Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

Your installation number provides access to all necessary RHN Channels.
Verify that all nodes are configured on the following RHN Channels:

  • Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)
  • RHEL Clustering (v. 5 for 64-bit x86_64)
  • RHEL Cluster-Storage (v. 5 for 64-bit x86_64)

Execute the following command on each node to verify that they are registered:

[root]# echo "repo list" | yum shell


NOTE: A future version of yum will support the command yum repolist.

If all three channels are not registered:

  1. Log in to rhn.redhat.com.
  2. Click Systems.
  3. If your cluster nodes are not listed, use a filter such as Recently Registered in the left pane.
  4. Select your cluster node from the list.
  5. In the Subscribed channels section, click Alter Channel Subscriptions.
  6. Select all channels and click Change Subscriptions.


NOTE: For more information on registering with RHN and subscribing to channels, see the Red Hat Network documentation website at www.redhat.com/docs/manuals/RHNetwork/.


Updating Your Operating System

After you have registered the cluster nodes with RHN, you can use the yum utility to manage software updates. Red Hat Enterprise Linux also includes a GUI-based tool called system-config-packages (available at Applications→ Add/Remove Software), which provides a front end for the yum utility.

If needed, install system-config-packages with the command:

[root]# yum install pirut

Update to the latest software with the following command:

[root]# yum update

This command updates all packages for your operating system to the latest version. The Dell|Red Hat HA Linux Cluster system has been tested and qualified with Red Hat Enterprise Linux 5.1 operating system.

Configuring the Firewall

To configure your firewall:

1. Ensure that all nodes can communicate with each other by host name and IP address. See the Troubleshooting section under "Verifying Node Connectivity" for details. For more information, see the section "Before Configuring a Red Hat Cluster" in the Red Hat Deployment Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.
NOTE: If you are not using DNS to resolve names, edit the local host file at /etc/hosts on each node and create entries for all nodes, so that every node can resolve every other node.
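For example, hypothetical /etc/hosts entries for a two-node cluster (the host names and addresses are placeholders; use your actual node names and private network addresses):

172.16.0.1    node1.example.com node1
172.16.0.2    node2.example.com node2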

2. Configure necessary IP traffic between the nodes. Execute the following commands on each node to allow all cluster communications between the nodes:

[root]# iptables -I INPUT -s {cluster private network} -j ACCEPT

For example:

[root]# iptables -I INPUT -s 172.16.0.0/16 -j ACCEPT
[root]# service iptables save

This allows all traffic from each node. However, if you require more control over the security between the nodes, you can allow only the specific ports that are needed, as illustrated below. For more information on firewalls, see the section "Firewalls" in the Red Hat Enterprise Linux Deployment Guide located on the Red Hat website at www.redhat.com/docs/manuals/enterprise/.

For information about IP port numbers, protocols, and components used by Red Hat Cluster Suite, see the section Enabling IP Ports on Cluster Nodes in the Red Hat Enterprise Linux 5.1 Cluster Administration Guide located on the Red Hat website at www.redhat.com.
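As an illustration of the more restrictive approach, the following sketch opens only the openais ports (UDP 5404 and 5405) and the distributed lock manager port (TCP 21064); verify the full port list against the Cluster Administration Guide referenced above before relying on it:

[root]# iptables -I INPUT -s 172.16.0.0/16 -p udp --dport 5404:5405 -j ACCEPT
[root]# iptables -I INPUT -s 172.16.0.0/16 -p tcp --dport 21064 -j ACCEPT
[root]# service iptables save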

Installing Cluster Groups

If your systems were installed with the kickstart file, they should already have the correct cluster groups. Execute the following command to verify that the correct groups are present, or to install them:

[root]# yum groupinstall "Clustering" "Cluster Storage"

Dell Community Repositories

Your Dell PowerEdge systems can be managed using the yum utility that already manages packages for Red Hat Enterprise Linux 5.1 on your nodes. This provides easy access to Dell-provided Linux software, Dell PowerEdge firmware updates, some DKMS drivers, and other open source software. For more information, visit linux.dell.com/repo/software/.

If your nodes do not have access to the Internet, skip this section and perform manual firmware updates and software installation on your PowerEdge systems as needed. Visit support.dell.com for information.
NOTE: The repositories are community supported and are not officially supported by Dell. For repository support, use the linux-poweredge mailing list at lists.us.dell.com.

Installing Dell Community Repositories


NOTE: Enter each of the CLI commands mentioned below on a single line.

To install the Dell Community Software Repository, use the following command:

[root]# wget -q -O - http://linux.dell.com/repo/software/bootstrap.cgi | bash

To install the Dell Community Hardware Repository, use the following command:

[root]# wget -q -O - http://linux.dell.com/repo/hardware/bootstrap.cgi | bash

(Optional) To leverage the repositories to install OpenManage DRAC components, use the following command:

[root]# yum install srvadmin-rac5

OpenManage gives you the ability to use the racadm command to manage your DRAC components; the command above installs only the components required for that. If you need the complete OpenManage functionality, use this command instead:

[root]# yum install srvadmin-all

Verification Checklist

Item                                                    Verified
Physical setup                                          [ ]
Physical cabling                                        [ ]
Remote access controllers configured in the BIOS        [ ]
DRAC telnet enabled (if applicable)                     [ ]
Network power switches configured (if applicable)       [ ]
Operating systems installed and updated                 [ ]

Troubleshooting

Physical Connectivity

Check all your physical connections. Disconnect each cable, inspect it for damage, and then reconnect it firmly. If problems persist, swap the cable with a known-good one to determine whether the issue follows the cable or lies with the device.
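You can also check link state from the operating system (assuming the ethtool package is installed; replace eth0 with the interface in question). On a healthy connection, the output reports Link detected: yes.

[root@node1]# ethtool eth0 | grep "Link detected"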

Verifying Node Connectivity

Perform the following to verify connectivity:

On node1:

[root@node1]# ping {node2 fully qualified hostname}

For example:

[root@node1]# ping node2.example.com

On node2:

[root@node2]# ping {node1 fully qualified hostname}

On the management node:

[root@management]# ping {node1 fully qualified hostname}
[root@management]# ping {node2 fully qualified hostname}
          .
          .
          .
[root@management]# ping {nodeN fully qualified hostname}

Repeat these steps for all nodes to verify connectivity.
