Monday, March 10, 2014

Preparing the Oracle Clusterware Home for Cloning

The following procedure is used to prepare the Oracle Clusterware home for cloning:

  1. Install Oracle Clusterware on the first machine.
    • Use the Oracle Universal Installer (OUI) GUI interactively.
    • Install patches that are required (for example, 11.1.0.n).
    • Apply one-off patches, if necessary.
  2. Shut down Oracle Clusterware.
    • # crsctl stop crs -wait
  3. Make a copy of the Oracle Clusterware home.
    • # mkdir /stagecrs
    • # cp -prf /u01/app/11.2.0/grid /stagecrs
  4. Remove files that pertain only to the source node.
    • # cd /stagecrs/grid
    • # rm -rf log/<hostname>
    • # rm -rf gpnp/<hostname>
    • # find gpnp -type f -exec rm -f {} \;
    • # rm -rf root.sh*
    • # rm -rf gpnp/*
    • # rm -rf crs/init/*
    • # rm -rf cdata/*
    • # rm -rf crf/*
    • # rm -rf network/admin/*.ora
    • # find . -name '*.ouibak' -exec rm {} \;
    • # find . -name '*.ouibak.1' -exec rm {} \;
    • # find cfgtoollogs -type f -exec rm -f {} \;
  5. Create an archive of the source.
    • # cd /stagecrs/grid
    • # tar -zcvpf /tmp/crs111060.tgz .
  6. Restart Oracle Clusterware.
    • # crsctl start crs


DHCP and DNS Configuration for GNS

In a static configuration, all addresses are assigned by administrative action and given names that resolve with whatever name service is provided for the environment. This is universal historic practice because there has been no realistic alternative.

One result is significant turnaround time to obtain the address, and to make the name resolvable. This is undesirable for dynamic reassignment of nodes from cluster to cluster and function to function.

DHCP provides for dynamic configuration of the host IP address but does not provide a good way to produce meaningful names that are useful to external clients. As a result, it is rarely used in server complexes, because the point of a server is to provide service and clients need to be able to find the server.

This is solved in the current release by providing a service (GNS) for resolving names in the cluster, and defining GNS to the DNS service used by the clients.


Objectives

  • Configure or communicate DHCP configuration needs in support of GNS
  • Configure or communicate DNS configuration needs in support of GNS

Topics

  1. GNS Overview
  2. How to configure DHCP Service for Grid Infrastructure GNS
  3. How to configure DNS for Grid Infrastructure GNS

Videos (download and then play)

  1. How to configure DHCP Service for Grid Infrastructure GNS
  2. How to configure DNS for Grid Infrastructure GNS
  3. How to Configure SCAN on DNS
  4. How to install Grid Infrastructure with SCAN configured through DNS



Refer to the links below:

Configuring Networks for Oracle Grid Infrastructure and Oracle RAC



How to configure DNS for Grid Infrastructure GNS


DNS Concepts

  • Host name resolution uses the gethostbyname family of library calls.
  • These calls do a configurable search of name space providers, typically including:
    • Local /etc/hosts file
    • DNS service
    • Other directory services such as NIS or LDAP
  • DNS traffic is sent as UDP packets, usually to port 53.
  • A query may ask for particular types of record, or all records for a name.
  • Name resolution for IPv4 uses A records for addresses, and CNAME records for aliases that must be re-resolved.
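
As an illustration of the record types involved, the dig utility can query a name server directly. This is a small sketch, using the example DNS server address 10.228.212.3 from the next section and hypothetical host names in us.example.com:

  $ dig @10.228.212.3 myhost.us.example.com A +short       (look up the A record, i.e. the address)
  $ dig @10.228.212.3 myalias.us.example.com CNAME +short  (look up an alias; the answer must be re-resolved)
  $ dig @10.228.212.3 myhost.us.example.com ANY            (ask for all records held for the name)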

DNS Configuration: Example

The following is assumed about the environment:

  • The cluster subdomain is cluster01.us.example.com.
  • The address GNS will listen on is 10.228.212.2, port 53.
  • The address for the new DNS server is 10.228.212.3.
  • The parent name servers are M.N.P.Q and W.X.Y.Z.

A summary of the steps is shown as follows:

  1. As root, install the BIND (DNS) rpm: # rpm -ivh bind-9.2.4-30.rpm
  2. Configure delegation in /etc/named.conf (a sample delegation block is sketched after this list).
  3. Populate the cache file: $ dig . ns > /var/named/db.cache
  4. Populate /var/named/db.127.0.0 to handle reverse lookups.
  5. Start the name service (named): # /etc/init.d/named start
  6. Modify /etc/resolv.conf on all nodes.
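
Step 2 is the piece specific to GNS: the cluster subdomain must be delegated (or forwarded) to the GNS VIP. The following is a minimal /etc/named.conf sketch using the assumed values above (subdomain cluster01.us.example.com, GNS VIP 10.228.212.2, parent name servers M.N.P.Q and W.X.Y.Z); the exact statements vary with the BIND version and site standards:

  options {
      directory "/var/named";
      forwarders { M.N.P.Q; W.X.Y.Z; };        # forward everything else to the parent name servers
  };

  zone "." in {
      type hint;
      file "db.cache";                         # root hints populated in step 3
  };

  zone "0.0.127.in-addr.arpa" in {
      type master;
      file "db.127.0.0";                       # loopback reverse lookups from step 4
  };

  zone "cluster01.us.example.com" in {
      type forward;
      forward only;
      forwarders { 10.228.212.2 port 53; };    # delegate the cluster subdomain to the GNS VIP
  };
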
Videos (download and then play)

How to configure DNS for Grid Infrastructure GNS



How to configure DHCP Service for Grid Infrastructure GNS


  • With DHCP, a host needing an address sends a broadcast message to the network.
  • A DHCP server on the network responds to the request, and assigns an address, along with other information such as:
    • What gateway to use
    • What DNS servers to use
    • What domain to use
  • In the request, a host typically sends a client identifier, usually the MAC address of the interface in question.
  • The identifier sent by Clusterware is not a MAC address, but a VIP resource name such as ora.hostname.vip.
  • Because the IP address is not bound to a fixed MAC address, Clusterware can move it between hosts as needed.

DHCP Configuration: Example

Assumptions about the environment:

  • The hosts have known addresses on the public network.
  • DHCP provides one address per node plus three for the SCAN.
  • The subnet and netmask on the interface to be serviced are 10.228.212.0/255.255.252.0.
  • The address range that the DHCP server will serve is 10.228.212.10 through 10.228.215.254.
  • The gateway address is 10.228.212.1.
  • The name server is known to the cluster nodes.
  • The domain your hosts are in is us.example.com.
  • You have root access.
  • You have the RPM for the DHCP server if it is not already installed.



To install and configure DHCP, perform the following steps:

  • As the root user, install the DHCP rpm: # rpm -ivh dhcp-3.0.1-62.EL4.rpm
  • The DHCP configuration file is /etc/dhcpd.conf. In the current example, the minimal configuration for the public network will look similar to the following:
subnet 10.228.212.0 netmask 255.255.252.0 {
  default-lease-time 43200;
  max-lease-time 86400;
  option subnet-mask 255.255.252.0;
  option broadcast-address 10.228.215.255;
  option routers 10.228.212.1;
  option domain-name-servers M.N.P.Q, W.X.Y.Z;
  option domain-name "us.example.com";
  pool {
    range 10.228.212.10 10.228.215.254;
  }
}


  • Start the DHCP service: # /etc/init.d/dhcpd start

If you encounter any issues, check /var/log/messages for errors. You can adjust the lease times in the subnet declaration to suit your needs.


Videos (download and then play)

How to configure DHCP for Grid Infrastructure GNS




GNS Overview


To properly configure GNS to work for clients, it is necessary to configure the higher-level DNS to forward or delegate a subdomain to the cluster, and the cluster must run GNS on an address known to the DNS by number. This GNS address is maintained as a VIP in the cluster, which runs on a single node, and a GNSD process follows that VIP around the cluster and services names in the subdomain. To fully implement GNS, you need four things:

  • DHCP service for the public network in question
  • A single assigned address in the public network for the cluster to use as the GNS VIP
  • A forward from the higher-level DNS for the cluster to the GNS VIP
  • A running cluster with properly configured GNS
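
With those pieces in place, GNS itself is registered in the cluster with srvctl (normally done through OUI at install time, or manually afterwards). A minimal sketch using the example GNS VIP and subdomain, registering the resource, starting it, and then verifying its status and configuration; the exact option names vary slightly between releases:

  # srvctl add gns -i 10.228.212.2 -d cluster01.us.example.com
  # srvctl start gns
  # srvctl status gns
  # srvctl config gns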

Sunday, March 9, 2014

Administering ASM Cluster File Systems

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.

The ASM feature of Oracle Database has been extended in Oracle Grid Infrastructure 11g Release 2 to include support for a general-purpose cluster file system, the ASM Cluster File System (ACFS). To understand the operation of this feature, some terminology needs to be defined and explained. At the operating system (OS) level, the ASM instance provides the disk group, which is a logical container for physical disk space. The disk group can hold ASM database files and ASM dynamic volume files. The ASM Dynamic Volume Manager (ADVM) presents the volume device file to the operating system as a block device. The mkfs utility can be used to create an ACFS file system in the volume device file.
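
As an illustration of that flow, here is a minimal sketch of creating a volume and a file system on Linux; the disk group name DATA, the volume name testvol, the size, and the mount point are assumptions for the example, and ADVM generates the actual device name suffix:

  $ asmcmd volcreate -G DATA -s 10G testvol    (create an ADVM volume in the DATA disk group, as the grid owner)
  $ asmcmd volinfo -G DATA testvol             (shows the generated device, e.g. /dev/asm/testvol-<n>)
  # mkfs -t acfs /dev/asm/testvol-<n>          (create the ACFS file system, as root)
  # mkdir -p /u01/app/acfsmounts/testvol
  # mount -t acfs /dev/asm/testvol-<n> /u01/app/acfsmounts/testvol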

Four OS kernel modules loaded in the OS provide the data service. On Linux, they are: oracleasm, the ASM module; oracleadvm, the ASM dynamic volume manager module; oracleoks, the kernel services module; and oracleacfs, the ASM file system module. These modules provide the ASM Cluster File System, ACFS snapshots, the ADVM, and cluster services. The ASM volumes are presented to the OS as a device file at /dev/asm/<volume name>-<number>.

The volume device file appears as another ASM file to the ASM instance and the asmcmd utility. The ASM layers are transparent to the OS file system commands. Only the files and directories created in ACFS and the ACFS snapshots are visible to the OS file system commands. Other file system types, such as ext3 and NTFS, may be created in an ADVM volume using the mkfs command on Linux and advmutil commands on Windows.

Objectives

  • Administer ASM Dynamic Volume Manager
  • Manage ASM volumes
  • Implement ASM Cluster File System
  • Manage ASM Cluster File System (ACFS)
  • Use ACFS Snapshots
  • Use command-line tools to manage ACFS


Topics


  1. ACFS and ADVM Architecture Overview
  2. ASM Cluster File System
  3. ADVM Processes
  4. Striping Inside the Volume
  5. Creating an ACFS Volume
  6. Extending ASMCMD for Dynamic Volumes 
  7. ACFS Snapshots
  8. ACFS Replication



Refer to the links below:

Introduction to Oracle ACFS
Oracle ACFS Advanced Topics
Administering ACFS




ACFS Replication

Oracle ACFS replication enables replication of Oracle ACFS file systems across the network to a remote site, providing disaster-recovery capability for the file system. Oracle ACFS replication can only be configured for Oracle RAC systems. The source Oracle ACFS file system of an Oracle ACFS replication is referred to as a primary file system. The target Oracle ACFS file system of an Oracle ACFS replication is referred to as a standby file system.

A site can host both primary and standby file systems. For example, if there are cluster sites A and B, a primary file system hosted at site A can be replicated to a standby file system at site B. Also, a primary file system hosted at site B can be replicated to a standby file system at site A. However, the same ACFS file system cannot be used as both a primary and a standby file system.

Oracle ACFS replication captures file system changes written to disk for a primary file system and records the changes in files called replication logs. These logs are transported to the site hosting the associated standby file system where background processes read the logs and apply the changes recorded in the logs to the standby file system. After the changes recorded in a replication log have been successfully applied to the standby file system, the replication log is deleted from the sites hosting the primary and standby file systems.

It is critical that there is enough disk space available on both sites hosting the primary and the standby file systems to contain the replication logs.

ACFS Replication Requirements

Before using replication on a file system, ensure that you have checked the following:

  • There is sufficient network bandwidth to support replication between the primary and standby file systems.
  • The configuration of the sites hosting the primary and standby file systems allows the standby file system to keep up with the rate of change on the primary file system.
  • The standby file system has sufficient capacity to manage the replication logs.
  • There is sufficient storage capacity to hold excess replication logs that might collect on the primary and the standby file systems when the standby file system cannot process replication logs quickly enough. For example, this situation can occur during network problems or maintenance on the site hosting the standby file system.
  • The primary file system must have a minimum size of 4 GB for each node that is mounting the file system. The standby file system must have a minimum size of 4 GB and should be sized appropriately for the amount of data being replicated and the space necessary for the replication logs sent from the primary file system.

Before replicating an ACFS file system, a replication configuration must be established that identifies information such as the sites hosting the primary and standby file systems, the file system to be replicated, the mount point of the file system, and, if desired, a list of tags.

To use Oracle ACFS replication functionality on Linux, the disk group compatibility attributes for ASM and ADVM must be set to 11.2.0.2 or higher for the disk groups that contain the primary and standby file systems. To configure replication and manage replicated Oracle ACFS file systems, use the acfsutil repl command-line functions.
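
For example, assuming a disk group named DATA, the attributes could be advanced with SQL*Plus as follows (compatibility attributes can only be raised, never lowered, and compatible.asm must be advanced before compatible.advm):

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'  = '11.2.0.2';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '11.2.0.2';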

Managing ACFS Replication

The basic steps for managing ACFS replication are:

1. Determine the storage capacity necessary for replication on the sites hosting the primary and standby file systems.

2. Set up usernames, service names, and tags.

SQL> CREATE USER primary_admin IDENTIFIED BY primary_passwd;
SQL> GRANT sysasm, sysdba TO primary_admin;

primary_repl_site=(DESCRIPTION=
  (ADDRESS=(PROTOCOL=tcp)(HOST=primary1.example.com)(PORT=1521))
  (ADDRESS=(PROTOCOL=tcp)(HOST=primary2.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=primary_service)))

standby_repl_site=(DESCRIPTION= ...


3. Configure the site hosting the standby file system.

$ /sbin/acfsutil repl init standby \
    -p primary_admin/primary_passwd@primary_repl_site \
    -c standby_repl_service /standby/repl_data

4. Configure the site hosting the primary file system.

$ /sbin/acfsutil repl init primary \
    -s standby_admin/standby_passwd@standby_repl_site \
    -m /standby/repl_data -c primary_repl_service \
    /acfsmounts/repl_data


5. Monitor information about replication on the file system.

6. Manage replication background processes.
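
For steps 5 and 6, acfsutil provides repl subcommands for monitoring replication and controlling the background processes. A brief sketch, using the primary mount point from the example above:

$ /sbin/acfsutil repl info -c /acfsmounts/repl_data     (show the replication configuration and state)
$ /sbin/acfsutil repl bg stop /acfsmounts/repl_data     (stop the replication background processes)
$ /sbin/acfsutil repl bg start /acfsmounts/repl_data    (restart them)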



ACFS Backups

  • An ACFS file system may be backed up using:
    • Standard OS file system backup tools
    • Oracle Secure Backup
    • Third-party backup tools
  • ACFS snapshots present a stable point-in-time view.
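
For example, a snapshot can be taken, backed up with any file-based tool, and then removed; this is a sketch assuming a file system mounted at /acfsmounts/repl_data and a hypothetical snapshot name backup_snap:

$ /sbin/acfsutil snap create backup_snap /acfsmounts/repl_data              (point-in-time snapshot)
$ tar -zcvf /tmp/backup_snap.tgz -C /acfsmounts/repl_data/.ACFS/snaps/backup_snap .   (back up the snapshot contents)
$ /sbin/acfsutil snap delete backup_snap /acfsmounts/repl_data              (remove the snapshot when done)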


ACFS Performance

ACFS performance benefits from:

  • Distribution and load balancing of ASM file segments
  • ACFS file extents distributed across ASM file segments
  • User and metadata caching
  • In-memory updates of transaction logs