
Monday, March 10, 2014

Cloning Grid Infrastructure

What Is Cloning?
Cloning is the process of copying an existing Oracle Clusterware installation to a different location. It:
  • Requires a successful installation as a baseline
  • Can be used to create new clusters
  • Cannot be used to remove nodes from the cluster
  • Does not perform the operating system prerequisite tasks required before an installation
  • Is useful to build many clusters in an organization
The cloning procedure is responsible for the work that would have been done by the Oracle Universal Installer (OUI) utility. It does not automate the prerequisite work that must be done on each node before installing the Oracle software.
Benefits of Cloning Grid Infrastructure
The following are some of the benefits of cloning Oracle Grid Infrastructure. It:
  • Can be completed in silent mode from a Secure Shell (SSH) terminal session
  • Contains all patches applied to the original installation
  • Can be done very quickly
  • Is a guaranteed method of repeating the same installation on multiple clusters


Objectives

  • Describe the cloning process
  • Describe the clone.pl script and its variables
  • Perform a clone of Oracle Grid Infrastructure to a new cluster
  • Extend an existing cluster by cloning

Topics

  1. Preparing the Oracle Clusterware Home for Cloning
  2. Cloning to Create a New Oracle Clusterware Environment
  3. Cloning to Extend Oracle Clusterware to More Nodes

Refer to the links below:

Cloning Oracle Clusterware



Cloning to Extend Oracle Clusterware to More Nodes




5.      Run the orainstRoot.sh script on each new node.
        # /u01/app/oraInventory/orainstRoot.sh

6.      Run the addNode.sh script on the source node.
        $ /u01/app/11.2.0/grid/oui/bin/addNode.sh -silent \
          "CLUSTER_NEW_NODES={host02}" \
          "CLUSTER_NEW_VIRTUAL_HOSTNAMES={host02-vip}" \
          "CLUSTER_NEW_VIPS={host02-vip}" \
          -noCopy CRS_ADDNODE=true

7.      Copy the following files from node 1, on which you ran addNode.sh, to node 2:
        <Grid_home>/crs/install/crsconfig_addparams
        <Grid_home>/crs/install/crsconfig_params
        <Grid_home>/gpnp

8.      Run the <Grid_home>/root.sh script on the new node.
        # /u01/app/11.2.0/grid/root.sh

9.      Navigate to the <Oracle_home>/oui/bin directory on the first node and run the
        addNode.sh script:
        $ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={host02}"

10.     Run the <Oracle_home>/root.sh script on node 2 as root.

11.     Run the <Grid_home>/crs/install/rootcrs.pl script on node 2 as root.

12.     Run cluvfy to validate the installation.
        $ cluvfy stage -post nodeadd -n host02 -verbose
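Step 7 above copies three configuration paths from the source node to the new node. A small script makes that repeatable. Below is a hedged sketch: it uses plain cp into local demo directories (and fabricates the source tree, including a stand-in gpnp/profile.xml) so it can be dry-run safely; in real use you would point SRC at the Grid home and replace cp with scp to host02.

```shell
#!/bin/sh
# Sketch of step 7: copy the node-configuration files to the new node.
# SRC/DEST default to throwaway demo directories for a safe local dry-run;
# for real use set SRC to the Grid home and copy with scp to the new host.
SRC=${SRC:-/tmp/demo_gridhome}
DEST=${DEST:-/tmp/demo_host02_gridhome}

# Demo scaffolding only: fabricate a minimal source tree (remove for real use).
mkdir -p "$SRC/crs/install" "$SRC/gpnp"
touch "$SRC/crs/install/crsconfig_addparams" "$SRC/crs/install/crsconfig_params"
touch "$SRC/gpnp/profile.xml"   # stand-in for the real gpnp contents

# The actual copies from step 7.
mkdir -p "$DEST/crs/install"
cp "$SRC/crs/install/crsconfig_addparams" "$DEST/crs/install/"
cp "$SRC/crs/install/crsconfig_params"    "$DEST/crs/install/"
cp -r "$SRC/gpnp" "$DEST/"
```

The same three cp lines become scp invocations (e.g. `scp ... host02:<Grid_home>/crs/install/`) when run against the real cluster.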

Cloning to Create a New Oracle Clusterware Environment



The following procedure uses cloning to create a new cluster:
1.      Prepare the new cluster nodes. (See the lesson titled “Grid Infrastructure Installation” for details.)
  • Check system requirements.
  • Check network requirements.
  • Install the required operating system packages.
  • Set kernel parameters.
  • Create groups and users.
  • Create the required directories.
  • Configure installation owner shell limits.
  • Configure block devices for Oracle Clusterware devices.
  • Configure SSH and enable user equivalency.
  • Use the Cluster Verification Utility (cluvfy) to check prerequisites.
2.      Deploy Oracle Clusterware on each of the destination nodes.
A.   Extract the TAR file created earlier.
# mkdir -p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0
# tar -zxvf /tmp/crs111060.tgz
B.   Change the ownership of files and create Oracle Inventory.
# chown -R grid:oinstall /u01/app/11.2.0/grid
# mkdir -p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory
C.   Remove any network files from /u01/app/11.2.0/grid/network/admin.
$ rm /u01/app/11.2.0/grid/network/admin/*
When you run the chown -R command on the Grid home, it clears the setuid and setgid information from the oracle binary. It also clears setuid from the following binaries:
<Grid_home>/bin/extjob
<Grid_home>/bin/jssu
<Grid_home>/bin/oradism
Run the following commands to restore the cleared information:
# chmod u+s <Grid_home>/bin/oracle
# chmod g+s <Grid_home>/bin/oracle
# chmod u+s <Grid_home>/bin/extjob
# chmod u+s <Grid_home>/bin/jssu
# chmod u+s <Grid_home>/bin/oradism
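The five chmod commands above can be collapsed into one short script. The sketch below defaults GRID_HOME to a throwaway demo tree (with empty stand-in binaries, an assumption for safe dry-running); point it at /u01/app/11.2.0/grid for real use and delete the scaffolding section.

```shell
#!/bin/sh
# Sketch: restore the setuid/setgid bits cleared by chown -R, in one pass.
GRID_HOME=${GRID_HOME:-/tmp/demo_grid}

# Demo scaffolding only: fabricate stand-in binaries as left after chown -R.
mkdir -p "$GRID_HOME/bin"
for b in oracle extjob jssu oradism; do
  touch "$GRID_HOME/bin/$b"
  chmod 0755 "$GRID_HOME/bin/$b"
done

# The actual restore: oracle needs setuid + setgid, the helpers setuid only.
chmod u+s,g+s "$GRID_HOME/bin/oracle"
for b in extjob jssu oradism; do
  chmod u+s "$GRID_HOME/bin/$b"
done
```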

clone.pl Script
Both cloning to a new cluster and cloning to extend an existing cluster use the clone.pl Perl script, which:
  • Can be run from the command line
  • Can be invoked from a shell script
  • Accepts many parameters as input
  • Is executed by the Perl interpreter
# perl <Grid_home>/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02




The clone.pl script accepts four environment variables (E01 through E04) and two cluster parameters (C01 and C02) as input:

Symbol  Variable            Description
E01     ORACLE_BASE         The location of the Oracle base directory
E02     ORACLE_HOME         The location of the Oracle Grid Infrastructure home; this directory must exist and be owned by the Oracle operating system group, oinstall
E03     ORACLE_HOME_NAME    The name of the Oracle Grid Infrastructure home
E04     INVENTORY_LOCATION  The location of the Oracle Inventory
C01     CLUSTER_NODES       The short node names for the nodes in the cluster
C02     LOCAL_NODE          The short name of the local node
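The nested quoting in the C01/C02 settings is easy to get wrong, so here is a minimal sketch of how it behaves: the outer double quotes are consumed by the shell at assignment time, while the escaped \" pair survives inside the stored value, so clone.pl receives the -O argument with CLUSTER_NODES still grouped as a single quoted unit. (The node names and the /tmp output file are illustrative.)

```shell
#!/bin/sh
# Sketch: inspect the literal value the C01 variable holds after assignment.
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"

# Record and show the stored value; the inner quotes are preserved.
printf '%s\n' "$C01" > /tmp/c01_demo.txt
cat /tmp/c01_demo.txt
# prints: -O'"CLUSTER_NODES={node1,node2}"'
```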

3.      Create a shell script to invoke clone.pl, supplying the input values.
#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`

E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}   # the script defines GRID_HOME, not ORACLE_HOME
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory

#C00="-O'-debug'"
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"'"

perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02
4.      Run the script created in step 3 on each node.
$ /tmp/my-clone-script.sh
5.      Prepare the crsconfig_params file.
6.      Run the cluvfy utility to validate the installation.
$ cluvfy stage -post crsinst -n all -verbose
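Step 4 above (running the generated script on each node) is often dispatched from a single node. A dry-run sketch, assuming the hypothetical node names node1 and node2 and the /tmp/my-clone-script.sh path from the example; it only prints the scp/ssh commands it would run and records them in a file, rather than executing anything remotely:

```shell
#!/bin/sh
# Sketch: dry-run dispatcher for the clone script. Echoes the commands that
# would push and run the script on each node; remove the echoes to execute.
NODES="node1 node2"                 # assumption: your cluster node names
SCRIPT=/tmp/my-clone-script.sh      # the script created in step 3

: > /tmp/clone_dispatch.txt         # start a fresh command log
for n in $NODES; do
  echo "scp $SCRIPT ${n}:$SCRIPT" | tee -a /tmp/clone_dispatch.txt
  echo "ssh $n sh $SCRIPT"        | tee -a /tmp/clone_dispatch.txt
done
```

This assumes SSH user equivalency is already configured (a prerequisite from step 1).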

Log Files Generated During Cloning
/u01/app/oraInventory/logs/cloneActions<timestamp>.log
/u01/app/oraInventory/logs/oraInstall<timestamp>.err
/u01/app/oraInventory/logs/oraInstall<timestamp>.out
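Because each cloning run stamps its log with a timestamp, the quickest way to find the log for the latest attempt is to sort by modification time. A sketch, using a demo log directory seeded with two fake logs (an assumption, so the snippet can be dry-run); point LOGDIR at /u01/app/oraInventory/logs in real use and drop the scaffolding:

```shell
#!/bin/sh
# Sketch: show the most recent cloneActions log.
LOGDIR=${LOGDIR:-/tmp/demo_inventory/logs}

# Demo scaffolding only: fabricate two logs with distinct timestamps.
mkdir -p "$LOGDIR"
touch -t 201403100900 "$LOGDIR/cloneActions2014-03-10_09-00.log"
touch -t 201403101000 "$LOGDIR/cloneActions2014-03-10_10-00.log"

# Newest first, keep the top entry.
ls -t "$LOGDIR"/cloneActions*.log | head -1 > /tmp/latest_clone_log.txt
cat /tmp/latest_clone_log.txt
```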

Preparing the Oracle Clusterware Home for Cloning

The following procedure is used to prepare the Oracle Clusterware home for cloning:

  1. Install Oracle Clusterware on the first machine.
    • Use the Oracle Universal Installer (OUI) GUI interactively.
    • Install patches that are required (for example, 11.1.0.n).
    • Apply one-off patches, if necessary.
  2. Shut down Oracle Clusterware.
    • # crsctl stop crs -wait
  3. Make a copy of the Oracle Clusterware home.
    • # mkdir /stagecrs
    • # cp -prf /u01/app/11.2.0/grid /stagecrs
  4. Remove files that pertain only to the source node.
    • # cd /stagecrs/grid
    • # rm -rf log/<hostname>
    • # rm -rf gpnp/<hostname>
    • # find gpnp -type f -exec rm -f {} \;
    • # rm -rf root.sh*
    • # rm -rf gpnp/*
    • # rm -rf crs/init/*
    • # rm -rf cdata/*
    • # rm -rf crf/*
    • # rm -rf network/admin/*.ora
    • # find . -name '*.ouibak' -exec rm {} \;
    • # find . -name '*.ouibak.1' -exec rm {} \;
    • # find cfgtoollogs -type f -exec rm -f {} \;
  5. Create an archive of the source.
    • # cd /stagecrs/grid
    • # tar -zcvpf /tmp/crs111060.tgz .
  6. Restart Oracle Clusterware.
    • # crsctl start crs
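The node-specific cleanup in step 4 is tedious to type by hand, so it can be gathered into one script. The sketch below runs against a demo staging tree it fabricates itself (file names like listener.ora and oui.ouibak are illustrative), so it can be exercised safely; for real use set STAGE=/stagecrs/grid and delete the scaffolding section.

```shell
#!/bin/sh
# Sketch: step 4's cleanup of source-node-specific files, as one script.
STAGE=${STAGE:-/tmp/demo_stage/grid}

# Demo scaffolding only: fabricate a copied home with node-specific files.
mkdir -p "$STAGE/log/host01" "$STAGE/gpnp/host01" "$STAGE/crs/init" \
         "$STAGE/cdata" "$STAGE/crf" "$STAGE/network/admin" "$STAGE/cfgtoollogs"
touch "$STAGE/root.sh" "$STAGE/network/admin/listener.ora" \
      "$STAGE/cfgtoollogs/cfg.log" "$STAGE/oui.ouibak"

# The actual cleanup, mirroring the individual commands in step 4.
cd "$STAGE" || exit 1
rm -rf log/* gpnp/* crs/init/* cdata/* crf/*
rm -f  root.sh* network/admin/*.ora
find . \( -name '*.ouibak' -o -name '*.ouibak.1' \) -exec rm -f {} \;
find cfgtoollogs -type f -exec rm -f {} \;
```

The directories themselves are kept (only their contents are removed), which matches the intent of step 4: the archived home must still contain the expected directory structure.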