
Thursday, February 27, 2014

Removing a Node from the Cluster

A series of steps must be followed to remove a node properly:

  • You cannot simply remove the node from the cluster.
  • The Oracle Central Inventory on each node has information about all nodes.
  • The Oracle Cluster Registry (OCR) contains information about all nodes.


On each node in the cluster, the Oracle Central Inventory contains information about all the nodes of the cluster, and the binary OCR and voting disk also record each node's membership. Because this metadata is distributed across every node, you cannot simply disconnect a node; you must follow the removal procedure so that the remaining nodes' inventories and the OCR are updated.
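
You can see this cluster-wide metadata directly. As an illustration (a sketch assuming the central inventory is at the default Linux location, /u01/app/oraInventory; home and node names are examples), the inventory entry for the Grid home carries the full node list:

[grid@host01]$ grep -A 4 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="host01"/>
      <NODE NAME="host02"/>
      <NODE NAME="host03"/>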


Deleting a node from the cluster is a multiple-step process. Some commands are run on the node to be deleted and other commands are run on an existing node of the cluster. Some commands are run by the root user and other commands are run by the Oracle Clusterware software owner’s account.

When passing arguments to a command, sometimes the existing node is passed, sometimes the node to be removed is passed, and at other times a complete list of remaining nodes is passed as an argument. This requires special attention to detail to avoid making mistakes during the process.


Deleting a Node from the Cluster

In this example, we have a three-node Oracle RAC cluster and want to delete node host03.
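
Before starting, confirm the current cluster membership from any node (illustrative output; node numbers may differ):

[grid@host01]$ olsnodes -n
host01  1
host02  2
host03  3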



1. Verify the location of the Oracle Clusterware home (Grid_home).
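
One way to find it (a sketch, assuming a typical 11.2 Linux installation) is to read the Oracle Local Registry pointer file, which records the Clusterware home:

[root@host01]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/11.2.0/grid/cdata/host01.olr
crs_home=/u01/app/11.2.0/grid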
2. From a node that will remain, run the following as root to expire the Cluster Synchronization Service (CSS) lease on the node that you are deleting:

[root@host01]# crsctl unpin css -n host03

The crsctl unpin command will fail if Cluster Synchronization Services (CSS) is not running on the node being deleted. Run the olsnodes -s -t command to show whether the node is active and whether it is pinned. If the node is not pinned, go to step 3.
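
Illustrative output (node names and states are examples; if the last column already shows Unpinned, the unpin step can be skipped):

[root@host01]# olsnodes -s -t
host01  Active  Unpinned
host02  Active  Unpinned
host03  Active  Unpinned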

3. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on each node to be deleted:

[root@host03]# ./rootcrs.pl -delete -force

Note: This procedure assumes that the node to be removed can be accessed. If you cannot execute commands on the node to be removed, you must manually stop and remove the VIP resource using the following commands as root from any node that you are not deleting:

# srvctl stop vip -i vip_name -f

# srvctl remove vip -i vip_name -f


where vip_name is the name of the Virtual IP (VIP) resource for the node to be deleted.
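
For example, if the VIP resource for host03 is named host03-vip (a hypothetical but typical name; confirm the actual name with srvctl config vip -n host03, or look for ora.host03.vip in crsctl stat res -t output):

# srvctl stop vip -i host03-vip -f
# srvctl remove vip -i host03-vip -f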

4. From a node that will remain, delete the node from the cluster with the following command run as root:

[root@host01]# crsctl delete node -n host03
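
After this step, the deleted node should no longer appear in the cluster membership (illustrative check):

[root@host01]# olsnodes
host01
host02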

5. On the node to be deleted, as the user that installed Oracle Clusterware, run the following command from the Grid_home/oui/bin directory:

[grid@host03]$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={host03}" CRS=TRUE -local

6. On the node that you are deleting, run one of the following commands as the user that installed Oracle Clusterware, depending on whether the Grid home is shared:

A. If you have a shared home:

[grid@host03]$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2.0/grid
B. For a nonshared home, deinstall the Oracle Clusterware home by running the deinstall tool from the Grid_home/deinstall directory:

[grid@host03]$ ./deinstall -local

7. On any remaining node, as the Grid software owner, update the node list:

[grid@host01]$ cd Grid_home/oui/bin

[grid@host01]$ ./runInstaller -updateNodeList \
ORACLE_HOME=/u01/app/11.2.0/grid \
"CLUSTER_NODES={host01,host02}" CRS=TRUE


8. On any remaining node, verify that the specified nodes have been deleted from the cluster.

[grid@host01]$ cluvfy stage -post nodedel -n host03 [-verbose]


Deleting a Node from a Cluster (GNS in Use)

If your cluster uses GNS, only the following steps from the previous procedure are needed:

3. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on each node to be deleted.
4. From a node that will remain, delete the node from the cluster with crsctl delete node.
7. On any remaining node, as the Grid software owner, update the node list.
8. On any remaining node, verify with cluvfy that the node has been deleted from the cluster.



Deleting a Node from a Cluster
------------------------------------------------------------------
(Node 1) # /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac2
(Node 1) # /u01/app/11.2.0/grid/bin/olsnodes -s -t
(Node 2) # cd /u01/app/11.2.0/grid/crs/install
(Node 2) # ./rootcrs.pl -delete -force
(Node 1) # /u01/app/11.2.0/grid/bin/crsctl delete node -n rac2

(Node 2) $ cd /u01/app/11.2.0/grid/oui/bin
(Node 2) $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -local
(Node 2) $ ./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2.0/grid


(Node 1) $ cd /u01/app/11.2.0/grid/oui/bin
(Node 1) $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE
(Node 1) $ cluvfy stage -post nodedel -n rac2 [-verbose]

Adding a Node to the Cluster

The following method adds only Oracle Clusterware to the new node; it does not extend the Oracle database home or create a database instance.
The addNode.sh shell script is used to add nodes to an existing Oracle Clusterware environment. It:
  • Runs without a graphical interface
  • Does not perform the prerequisite operating system tasks
You can use a variety of methods to add and delete nodes in an Oracle Clusterware environment:
  • Silent cloning procedures: Copy images of an Oracle Clusterware installation to other nodes with identical hardware by using the clone.pl script, creating new clusters.
  • Enterprise Manager (EM) Grid Control: Provides a graphical interface and automated wizards for the cloning procedures.
  • addNode.sh: Invokes a subset of OUI functionality.



Special attention must be given to the procedures because some steps are performed on the existing nodes, whereas other steps are performed on the nodes that are being added or removed.

Prerequisite Steps for Running addNode.sh

  • Make physical connections: networking, storage, and other
  • Install the operating system.
  • Perform the Oracle Clusterware installation prerequisite tasks:
    • Check system requirements.
    • Check network requirements.
    • Install the required operating system packages.
    • Set kernel parameters.
    • Create groups and users.
    • Create the required directories.
    • Configure the installation owner’s shell limits.
    • Configure Secure Shell (SSH) and enable user equivalency (a quick verification command is shown after this list).
  • Verify the installation with the Cluster Verification Utility (cluvfy) from an existing node:
    • Perform a post-hardware and operating system check:
      • [grid@host01]$ cluvfy stage -post hwos -n host03
    • Perform a detailed properties comparison of one existing reference node to the new node:
      • [grid@host01]$ cluvfy comp peer -refnode host01 -n host03 -orainv oinstall -osdba asmdba -verbose
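
SSH user equivalency (the prerequisite noted above) can also be verified on its own with cluvfy's administrative-privileges component check; this extra check is not part of the original procedure, but the command is standard cluvfy syntax:

[grid@host01]$ cluvfy comp admprv -n host03 -o user_equiv -verbose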
Adding a Node with addNode.sh
  • Ensure that Oracle Clusterware is successfully installed on at least one node.
  • Verify the integrity of the cluster and the node to be added (host03) with:
    • [grid@host01]$ cluvfy stage -pre nodeadd -n host03
  • Run addNode.sh to add host03 to the existing cluster.
    • Without GNS:
      • [grid@host01]$ cd Grid_home/oui/bin
[grid@host01]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={host03}" \
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={host03-vip}"

    • With GNS:
      • [grid@host01]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={host03}"
  • When addNode.sh completes, run the root scripts it prompts for (typically orainstRoot.sh and root.sh) as root on the new node.
  • Perform integrity checks on the cluster.
    • [grid@host01]$ cluvfy stage -post nodeadd -n host03 -verbose


Practice
-------------------------------


Adding a Node to Clusterware
-----------------------------

$ cluvfy stage -post hwos -n rac2

$ cluvfy comp peer -refnode rac1 -n rac2 -orainv oinstall -osdba asmdba -verbose

Adding a Node with addNode.sh
---------------------------------
$ cluvfy stage -pre nodeadd -n rac2
$ cd /u01/app/11.2.0/grid/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
$ cluvfy stage -post nodeadd -n rac2 -verbose


