Tuesday, March 11, 2014

Global Cache Service


GCS (Global Cache Service) and GES (Global Enqueue Service), which are implemented by RAC background processes, play the key role in implementing Cache Fusion.
GCS ensures a single system image of the data even though the data is accessed by multiple instances. The GCS and GES are integrated components of Real Application Clusters that coordinate simultaneous access to the shared database and to shared resources within the database and database cache.
GES and GCS together maintain a Global Resource Directory (GRD) to record information about resources and enqueues. The GRD resides in memory and is distributed across all instances, with each instance managing a portion of the directory. This distributed design is a key factor in the fault tolerance of RAC.

The coordination of concurrent tasks within a shared cache server is called synchronization. Synchronization takes place over the private interconnect and involves heavy message transfers. Two types of resources require synchronization: data blocks and enqueues.
GCS maintains the modes for blocks in the global role and is responsible for block transfers between instances. LMS processes handle the GCS messages and do the bulk of the GCS processing.
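
You can see the processes that implement these services from the operating system on any RAC node. A quick illustration (the exact process names depend on your instance name):

$ ps -ef | grep -E 'ora_(lmon|lmd|lms)' | grep -v grep

Each instance runs its own LMON (Global Enqueue Service monitor), LMD (Global Enqueue Service daemon), and LMS (Global Cache Service) background processes.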


An enqueue is a shared memory structure that serializes access to database resources; it can be local or global. Oracle uses enqueues in three modes: null (N) mode, share (S) mode, and exclusive (X) mode. Blocks are the primary structures for reading and writing into and out of buffers, and they are often the most requested resources.

GES handles the synchronization of the dictionary cache, library cache, transaction locks, and DDL locks; in other words, GES manages enqueues other than data blocks. To synchronize access to the data dictionary cache, latches are used in exclusive mode and in single-node cluster databases, whereas global enqueues are used in cluster database mode.
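
As a hedged example of observing global enqueues, you can query the GV$GES_ENQUEUE view from any instance (this assumes SYSDBA access on a running RAC database; column names are as documented for 11g):

$ sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, resource_name1, grant_level, request_level
FROM   gv$ges_enqueue
WHERE  ROWNUM <= 10;
EOF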

Necessity of Global Resources





In single-instance environments, locking coordinates access to a common resource such as a row in a table. Locking prevents two processes from changing the same resource (or row) at the same time.

In RAC environments, internode synchronization is critical because it maintains proper coordination between processes on different nodes, preventing them from changing the same resource at the same time. Internode synchronization guarantees that each instance sees the most recent version of a block in its buffer cache.

Note: Without cache coordination, two instances could modify the same block independently, leading to inconsistent data; RAC prevents this problem.

Monday, March 10, 2014

Clusters and Scalability

If your application scales transparently on SMP machines, it is realistic to expect it to scale well on RAC, without having to make any changes to the application code.

RAC eliminates the database instance, and the node itself, as a single point of failure, and ensures database integrity in the case of such failures.

The following are some scalability examples:
  • Allow more simultaneous batch processes.
  • Allow larger degrees of parallelism and more parallel executions to occur.
  • Allow large increases in the number of connected users in online transaction processing (OLTP) systems.
 

Benefits of Using RAC

Oracle Real Application Clusters (RAC) enables high utilization of a cluster of standard, low-cost modular servers such as blades.

RAC offers automatic workload management for services. Services are groups or classifications of applications that comprise business components corresponding to application workloads. Services in RAC enable continuous, uninterrupted database operations and provide support for multiple services on multiple instances. You assign services to run on one or more instances, and alternate instances can serve as backup instances. If a primary instance fails, the Oracle server moves the services from the failed instance to a surviving alternate instance. The Oracle server also automatically load-balances connections across instances hosting a service.
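
As an illustration of service placement, a service can be created with srvctl by naming a preferred instance and an available (backup) instance. A minimal sketch, using a hypothetical database orcl with instances orcl1 and orcl2:

$ srvctl add service -d orcl -s oltp_svc -r orcl1 -a orcl2
$ srvctl start service -d orcl -s oltp_svc
$ srvctl status service -d orcl -s oltp_svc

If orcl1 fails, the oltp_svc service is relocated to orcl2, the available instance.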

RAC harnesses the power of multiple low-cost computers to serve as a single large computer for database processing, and provides the only viable alternative to large-scale symmetric multiprocessing (SMP) for all types of applications.

RAC, which is based on a shared disk architecture, can grow and shrink on demand without the need to artificially partition data among the servers of your cluster. 
 
RAC also offers a single-button addition of servers to a cluster. Thus, you can easily add a server to or remove a server from the database.

Cloning Grid Infrastructure

What Is Cloning?
Cloning is the process of copying an existing Oracle Clusterware installation to a different location. It:
  • Requires a successful installation as a baseline
  • Can be used to create new clusters
  • Cannot be used to remove nodes from the cluster
  • Does not perform the operating system prerequisite tasks for an installation
  • Is useful to build many clusters in an organization
The cloning procedure is responsible for the work that would have been done by the Oracle Universal Installer (OUI) utility. It does not automate the prerequisite work that must be done on each node before installing the Oracle software.
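As an illustration of the baseline that cloning starts from, the existing Grid home is typically copied, cleaned of node-specific files (logs, host-specific configuration), and then archived. A minimal sketch, assuming the source home is /u01/app/11.2.0/grid and producing the archive name used later in this post:

# cp -prf /u01/app/11.2.0/grid /tmp/grid
# cd /tmp
# tar -zcvf /tmp/crs111060.tgz grid

Removing the node-specific files from the copy before creating the archive is covered in the Oracle cloning documentation referenced below.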
Benefits of Cloning Grid Infrastructure
The following are some of the benefits of cloning Oracle Grid Infrastructure. Cloning:
  • Can be completed in silent mode from a Secure Shell (SSH) terminal session
  • Contains all patches applied to the original installation
  • Can be done very quickly
  • Is a guaranteed method of repeating the same installation on multiple clusters


Objectives

  • Describe the cloning process
  • Describe the clone.pl script and its variables
  • Perform a clone of Oracle Grid Infrastructure to a new cluster
  • Extend an existing cluster by cloning

Topics

  1. Preparing the Oracle Clusterware Home for Cloning
  2. Cloning to Create a New Oracle Clusterware Environment
  3. Cloning to Extend Oracle Clusterware to More Nodes

Refer to the links below:

Cloning Oracle Clusterware



Cloning to Extend Oracle Clusterware to More Nodes




5.         Run the orainstRoot.sh script on each new node.
# /u01/app/oraInventory/orainstRoot.sh

6.         Run the addNode.sh script on the source node.
$ /u01/app/11.2.0/grid/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={host02}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={host02-vip}" "CLUSTER_NEW_VIPS={host02-vip}" \
  CRS_ADDNODE=true -noCopy

7.         Copy the following files from node 1, on which you ran addNode.sh, to node 2:
<Grid_home>/crs/install/crsconfig_addparams
<Grid_home>/crs/install/crsconfig_params
<Grid_home>/gpnp
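For example, the copy can be done with scp (a sketch assuming passwordless SSH between the nodes and identical Grid home paths):

# scp -p /u01/app/11.2.0/grid/crs/install/crsconfig_addparams \
       /u01/app/11.2.0/grid/crs/install/crsconfig_params \
       host02:/u01/app/11.2.0/grid/crs/install/
# scp -pr /u01/app/11.2.0/grid/gpnp host02:/u01/app/11.2.0/grid/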
8.            Run the <Grid_home>/root.sh script on the new node.
      # /u01/app/11.2.0/grid/root.sh
9.         Navigate to the <Oracle_home>/oui/bin directory on the first node and run the
            addNode.sh script:
            $ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={host02}"
10.       Run the <Oracle_home>/root.sh script on node 2 as root.
11.       Run the <Grid_home>/crs/install/rootcrs.pl script on node 2 as root.
12.       Run cluvfy to validate the installation.
            $ cluvfy stage -post nodeadd -n host02 -verbose

Cloning to Create a New Oracle Clusterware Environment



The following procedure uses cloning to create a new cluster:
1.      Prepare the new cluster nodes. (See the lesson titled “Grid Infrastructure Installation” for details.)
  • Check system requirements.
  • Check network requirements.
  • Install the required operating system packages.
  • Set kernel parameters.
  • Create groups and users.
  • Create the required directories.
  • Configure installation owner shell limits.
  • Configure block devices for Oracle Clusterware devices.
  • Configure SSH and enable user equivalency.
  • Use the Cluster Verification Utility (cluvfy) to check prerequisites.
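A hedged example of that final check, run before the software is deployed (node names are placeholders):

$ cluvfy stage -pre crsinst -n host01,host02 -verbose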
2.      Deploy Oracle Clusterware on each of the destination nodes.
A.   Extract the TAR file created earlier.
# mkdir -p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0
# tar -zxvf /tmp/crs111060.tgz
B.   Change the ownership of files and create Oracle Inventory.
# chown -R grid:oinstall /u01/app/11.2.0/grid
# mkdir -p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory
C.   Remove any network files from /u01/app/11.2.0/grid/network/admin.
$ rm /u01/app/11.2.0/grid/network/admin/*
When you run the last of the preceding commands on the Grid home, it clears setuid and setgid information from the Oracle binary. It also clears setuid from the following binaries:
<Grid_home>/bin/extjob
<Grid_home>/bin/jssu
<Grid_home>/bin/oradism
Run the following commands to restore the cleared information:
# chmod u+s <Grid_home>/bin/oracle
# chmod g+s <Grid_home>/bin/oracle
# chmod u+s <Grid_home>/bin/extjob
# chmod u+s <Grid_home>/bin/jssu
# chmod u+s <Grid_home>/bin/oradism

clone.pl Script
Both cloning to a new cluster and cloning to extend an existing cluster use the clone.pl Perl script, which:
  • Can be used on the command line
  • Can be contained in a shell script
  • Accepts many parameters as input
  • Is invoked by the Perl interpreter
# perl <Grid_home>/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02




The clone.pl script accepts four environment variables (E01 through E04) and two cluster parameters (C01 and C02) as input:

Symbol   Variable             Description
E01      ORACLE_BASE          The location of the Oracle base directory
E02      ORACLE_HOME          The location of the Oracle Grid Infrastructure home. This directory location must exist and must be owned by the Oracle operating system group: oinstall
E03      ORACLE_HOME_NAME     The name of the Oracle Grid Infrastructure home
E04      INVENTORY_LOCATION   The location of the Oracle Inventory
C01      CLUSTER_NODES        The short node names for the nodes in the cluster
C02      LOCAL_NODE           The short name of the local node

3.      Create a shell script to invoke clone.pl, supplying the required input:
#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`

E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory

#C00="-O'-debug'"
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"'"

perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02
4.      Run the script created in step 3 on each node.
$ /tmp/my-clone-script.sh
5.      Prepare the crsconfig_params file.
6.      Run the cluvfy utility to validate the installation.
$ cluvfy stage -post crsinst -n all -verbose

Log Files Generated During Cloning
/u01/app/oraInventory/logs/cloneActions<timestamp>.log
/u01/app/oraInventory/logs/oraInstall<timestamp>.err
/u01/app/oraInventory/logs/oraInstall<timestamp>.out
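
After a cloning run, these logs are the first place to look for problems. A simple illustration (the <timestamp> portion of the file names varies per run):

$ grep -i error /u01/app/oraInventory/logs/cloneActions*.log
$ grep -i error /u01/app/oraInventory/logs/oraInstall*.err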