Showing posts with label Administering oracle Clusterware. Show all posts

Tuesday, March 4, 2014

Managing SCAN VIP and SCAN Listener Resources


To view SCAN VIP configuration:
# srvctl config scan

To view SCAN LISTENER configuration:
# srvctl config scan_listener


To add a SCAN VIP resource:
$ srvctl add scan -n cluster01-scan

To remove Clusterware resources from SCAN VIPs:
$ srvctl remove scan [-f]

To add a SCAN listener resource:
$ srvctl add scan_listener
$ srvctl add scan_listener -p 65535 ## using nondefault port number ##

To remove Clusterware resources from all SCAN listeners:
$ srvctl remove scan_listener [-f]

The srvctl modify scan command modifies the SCAN VIP configuration to match that of another SCAN VIP:
$ srvctl modify scan -n cluster01-scan

The srvctl modify scan_listener -u command modifies the configuration information for all SCAN listeners to match the current SCAN VIP configuration:
$ srvctl modify scan_listener -u



Changing the Interconnect Adapter


On a single node in the cluster, add the new global interface specification:
$ oifcfg setif -global eth2/192.0.2.0:cluster_interconnect

Verify the changes with oifcfg getif and then stop Clusterware on all nodes by running the following command as root on each node:
# oifcfg getif
# crsctl stop crs

Assign the network address to the new network adapters on all nodes using ifconfig:
# ifconfig eth2 192.0.2.15 netmask 255.255.255.0 broadcast 192.0.2.255

Remove the former adapter/subnet specification and restart Clusterware:
$ oifcfg delif -global eth1/192.168.1.0
# crsctl start crs
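The whole interconnect change can be previewed as one script. This is a dry-run sketch only: oifcfg, crsctl, and ifconfig are stubbed with echo so the sequence can be printed without touching a cluster; the interface names and subnets are the ones from the steps above, and the real crsctl/ifconfig calls must run as root on every node.

```shell
# Stub the clusterware tools with echo -- this previews the call
# sequence; it does not change a real cluster.
oifcfg()   { echo "oifcfg $*"; }
crsctl()   { echo "crsctl $*"; }
ifconfig() { echo "ifconfig $*"; }

steps=$(
  oifcfg setif -global eth2/192.0.2.0:cluster_interconnect    # one node only
  oifcfg getif                                                # verify the change
  crsctl stop crs                                             # every node, as root
  ifconfig eth2 192.0.2.15 netmask 255.255.255.0 broadcast 192.0.2.255   # every node
  oifcfg delif -global eth1/192.168.1.0                       # only after eth2 is verified
  crsctl start crs                                            # every node, as root
)
echo "$steps"
```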

Changing the Public VIP Addresses for Non-GPnP Clusters

Stop all services running on the node whose VIP address you want to change:
$ srvctl stop service -d RDBA -s sales,oltp -n host01

Confirm the current IP address for the VIP address:
$ srvctl config vip -n host01

Stop the VIP address:
# srvctl stop vip -n host01 -f

Verify that the VIP address is no longer running by running the ifconfig -a command.

Make necessary changes to the /etc/hosts file on all nodes and make necessary domain name server (DNS) changes to associate the new IP address with the old host name.

Modify node applications and provide a new VIP address:
# srvctl modify nodeapps -n host01 -A 192.168.2.125/255.255.255.0/eth0

Start the node VIP.
# srvctl start vip -n host01

Repeat the steps for each node in the cluster.

Run cluvfy to verify node connectivity between all the nodes for which your cluster is configured:
$ cluvfy comp nodecon -n all -verbose
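The per-node steps above lend themselves to a small helper function. A sketch only: srvctl is stubbed with echo so it runs without a cluster, the database/service/node names and the address are the examples from this section, and the /etc/hosts and DNS edits remain a manual step.

```shell
# srvctl stubbed with echo: previews the call sequence only.
srvctl() { echo "srvctl $*"; }

change_vip() {   # $1 = node name, $2 = new address/netmask/interface
  srvctl stop service -d RDBA -s sales,oltp -n "$1"
  srvctl config vip -n "$1"          # note the old address first
  srvctl stop vip -n "$1" -f
  # ...update /etc/hosts and DNS for the new address here (manual step)...
  srvctl modify nodeapps -n "$1" -A "$2"
  srvctl start vip -n "$1"
}

calls=$(change_vip host01 192.168.2.125/255.255.255.0/eth0)
echo "$calls"
```

Repeat the helper for each node, then finish with the cluvfy check shown above.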


Recovering the OCR by Using Physical Backups

Locate a physical backup:
$ ocrconfig -showbackup

Stop the Oracle Clusterware stack on all nodes:
# crsctl stop cluster -all

Stop Oracle High Availability Services on all nodes:
# crsctl stop crs

Restore the physical OCR backup:
# ocrconfig -restore /u01/app/.../cdata/cluster01/day.ocr

Restart Oracle High Availability Services on all nodes:
# crsctl start crs

Check the OCR integrity:
$ cluvfy comp ocr -n all



Related Link:

Managing Oracle Cluster Registry and Voting Disks


Adding, Replacing, and Repairing OCR Locations

Add an OCR location to either ASM or other storage device:
# ocrconfig -add +DATA2
# ocrconfig -add /dev/sde1

To replace the current OCR location:
# ocrconfig -replace /dev/sde1 -replacement +DATA2

To repair OCR configuration, run this command on the node on which you have stopped Oracle Clusterware:
# ocrconfig -repair -add +DATA1
(You cannot perform this operation on a node on which Oracle Clusterware is running.)


Removing an Oracle Cluster Registry Location

Do not perform an OCR removal unless there is at least one other active OCR location online, or you will get an error.

# ocrconfig -delete +DATA2
# ocrconfig -delete /dev/sde1

To perform a physical backup:
# ocrconfig -manualbackup

To display a list of manual backups:
$ ocrconfig -showbackup manual

You can check the status of OLR by using ocrcheck:
# ocrcheck -local


Locating the OCR Automatic Backups
----------------------------------
$ ocrconfig -showbackup auto

The backup frequency and retention policies are:
Every four hours: CRS keeps the last three copies.
At the end of every day: CRS keeps the last two copies.
At the end of every week: CRS keeps the last two copies.
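Those retention rules correspond to a small, fixed set of file names in the backup directory (by default $GRID_HOME/cdata/<cluster_name> on the node performing the backups). The mock listing below shows the typical names; treat them as illustrative and rely on ocrconfig -showbackup for the authoritative list.

```shell
# Build a throwaway mock of an automatic OCR backup directory:
#   backup00-02.ocr     - the last three 4-hourly backups
#   day.ocr, day_.ocr   - the last two daily backups
#   week.ocr, week_.ocr - the last two weekly backups
mockdir=$(mktemp -d)
for f in backup00.ocr backup01.ocr backup02.ocr day.ocr day_.ocr week.ocr week_.ocr; do
  : > "$mockdir/$f"
done
listing=$(ls "$mockdir" | sort)
echo "$listing"
rm -rf "$mockdir"
```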

Changing the Automatic OCR Backup Location

# ocrconfig -backuploc <path to shared CFS or NFS>

Determining the Location of Oracle Clusterware Configuration Files



To determine the location of the voting disk:
$ crsctl query css votedisk

To determine the location of the OCR:
$ ocrcheck -config

Checking the Integrity of Oracle Clusterware Configuration Files

$ cluvfy comp ocr -n all -verbose
$ ocrcheck

Oracle Interface Configuration Tool - oifcfg

  • The oifcfg command-line interface helps you to define and administer network interfaces.
  • oifcfg is a command-line tool for both single-instance Oracle Database and Oracle RAC environments.
  • The oifcfg utility can direct components to use specific network interfaces, and it can be used to retrieve component configuration information.

$ oifcfg getif
eth2 10.220.52.0 global cluster_interconnect
eth0 10.220.16.0 global public
 
Determining the Current Network Settings

To determine the list of interfaces available to the cluster:
$ oifcfg iflist -p -n

To determine the public and private interfaces that have been configured:
$ oifcfg getif

To determine the Virtual IP (VIP) host name, VIP address, VIP subnet mask, and VIP interface name:
$ srvctl config nodeapps -a
 

Wednesday, February 26, 2014

Administering Oracle Clusterware


Backing Up and Recovering the Voting Disk

  • In Oracle Clusterware 11g Release 2, voting disk data is backed up automatically in the OCR as part of any configuration change.
  • Voting disk data is automatically restored to any added voting disks.
  • To add or remove voting disks on non–Automatic Storage Management (ASM) storage, use the following commands:
    # crsctl delete css votedisk path_to_voting_disk
    # crsctl add css votedisk path_to_voting_disk

Note: You can migrate voting disks from non-ASM storage options to ASM without taking down the cluster. To use an ASM disk group to manage the voting disks, you must set the COMPATIBLE.ASM attribute to 11.2.0.0.

Checking the Integrity of Oracle Clusterware Configuration Files

Use the cluvfy utility or the ocrcheck command to check the integrity of the OCR.

$ cluvfy comp ocr -n all -verbose

$ ocrcheck 


Related Link:

Managing Oracle Cluster Registry and Voting Disks

Tuesday, February 25, 2014

Determining the Location of Oracle Clusterware Configuration Files

The two primary configuration file types for Oracle Clusterware are the voting disk and the Oracle Cluster Registry (OCR).

The location of the OCR file can be determined by using the cat /etc/oracle/ocr.loc command. 

To determine the location of the voting disk:
$ crsctl query css votedisk
##  STATE  File Universal Id                File Name  Disk group
--  -----  -----------------                ---------- ----------
 1. ONLINE 8c2e45d734c64f8abf9f136990f3daf8 (ASMDISK01) [DATA]
 2. ONLINE 99bc153df3b84fb4bf071d916089fd4a (ASMDISK02) [DATA]
 3. ONLINE 0b090b6b19154fc1bf5913bc70340921 (ASMDISK03) [DATA]
Located 3 voting disk(s).


To determine the location of the OCR:
$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :      +DATA

Controlling Oracle High Availability Services

The crsctl utility is used to invoke certain OHASD functions.
  • To stop Oracle High Availability Services:
    • Stop the Clusterware stack:
      • # crsctl stop cluster
    • Stop Oracle High Availability Services on the local server:
      • # crsctl stop crs

  • To display Oracle High Availability Services automatic startup configuration:
    • # crsctl config crs
If you intend to stop Oracle Clusterware on all or a list of nodes, then use the crsctl stop cluster command, because it prevents certain resources from being relocated to other servers in the cluster before the Oracle Clusterware stack is stopped on a particular server. If you must stop the Oracle High Availability Services on one or more nodes, then wait until the crsctl stop cluster command completes and then run the crsctl stop crs command on any particular nodes, as necessary.
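That ordering can be sketched as a script: the stack is stopped cluster-wide first, and only then is OHAS stopped per node. crsctl is stubbed with echo so no cluster is needed, and the node list is hypothetical.

```shell
# crsctl stubbed with echo; the real commands must be run as root.
crsctl() { echo "crsctl $*"; }

nodes="host01 host02"              # hypothetical node list
shutdown_seq=$(
  crsctl stop cluster -all         # 1. stop the Clusterware stack everywhere
  for n in $nodes; do              # 2. only after that completes, stop OHAS
    crsctl stop crs                #    on each node (run locally on $n)
  done
)
echo "$shutdown_seq"
```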

Use the crsctl config crs command to display Oracle High Availability Services automatic startup configuration. 


To determine the overall health on a specific node:

$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

To check the viability of Cluster Synchronization Services (CSS) across nodes:

$ crsctl check cluster -all
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Controlling Oracle Clusterware


  • To start or stop Oracle Clusterware on a specific node: 
    • # crsctl start cluster 
    • # crsctl stop cluster 

  • To enable or disable Oracle Clusterware on a specific node:
    • # crsctl enable crs
    • # crsctl disable crs

Managing Clusterware with Enterprise Manager


Enterprise Manager Database Control provides facilities to manage Oracle Clusterware. This includes the ability to register and manage resources.

A typical illustration of the management interface shows how resources can be dynamically relocated from one node to another within your cluster. In this case, my_resource is relocated from host02 to host01.

Managing Oracle Clusterware

  • Command-line utilities
    • crsctl manages clusterware-related operations:
      • Starting and stopping Oracle Clusterware
      • Enabling and disabling Oracle Clusterware daemons
      • Registering cluster resources
    • srvctl manages Oracle resource–related operations:
      • Starting and stopping database instances and services
    • oifcfg can be used to define and administer network interfaces.
  • Enterprise Manager
    • Browser-based graphical user interface
    • Enterprise Manager cluster management available in:
      • Database control – within the cluster
      • Grid control – through a centralized management server

Friday, November 2, 2012

To migrate OCR to Oracle ASM using OCRCONFIG

  • Ensure the upgrade to Oracle Clusterware 11g release 2 (11.2) is complete. Run the following command to verify the current running version: 
    • $ crsctl query crs activeversion 

  • Use the Oracle ASM Configuration Assistant (ASMCA) to configure and start Oracle ASM on all nodes in the cluster. 


  • To add OCR to an Oracle ASM disk group, ensure that the Oracle Clusterware stack is running and run the following command as root: 
    • Before that, shut down the database and check the RDBMS and other compatibility attributes through ASMCA
        # ocrconfig -add +new_disk_group
    • You can run this command more than once if you add multiple OCR locations. You can have up to five OCR locations. However, each successive run must point to a different disk group.

  • To remove storage configurations no longer in use, run the following command as root:
    # ocrconfig -delete old_storage_location
    Run the above command for every configured OCR.

The following example shows how to migrate two OCRs to Oracle ASM using OCRCONFIG.
# ocrconfig -add +new_disk_group
# ocrconfig -delete /dev/raw/raw2
# ocrconfig -delete /dev/raw/raw1






Restoring Voting Disks


If all of the voting disks are corrupted, then you can restore them, as follows:
1.      Restore OCR
This step is necessary only if OCR is also corrupted or otherwise unavailable, such as if OCR is on Oracle ASM and the disk group is no longer available.

2.      Run the following command as root from only one node to start the Oracle Clusterware stack in exclusive mode, which does not require voting files to be present or usable:
        # crsctl start crs -excl
3.      Run the crsctl query css votedisk command to retrieve the list of voting files currently defined, similar to the following:
$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7c54856e98474f61bf349401e7c9fb95 (/dev/sdb1) [DATA]
This list may be empty if all voting disks were corrupted, or may have entries that are marked as status 3 or OFF.
4.      Depending on where you store your voting files, do one of the following:
·         If the voting disks are stored in Oracle ASM, then run the following command to migrate the voting disks to the Oracle ASM disk group you specify:
    crsctl replace votedisk +asm_disk_group
The Oracle ASM disk group to which you migrate the voting files must exist in Oracle ASM. You can use this command whether the voting disks were stored in Oracle ASM or some other storage device.
·         If you did not store voting disks in Oracle ASM, then run the following command using the File Universal Identifier (FUID) obtained in the previous step:
    $ crsctl delete css votedisk FUID
       Add a voting disk, as follows:
    $ crsctl add css votedisk path_to_voting_disk

5.  Stop the Oracle Clusterware stack as root:

      # crsctl stop crs
Note:
If the Oracle Clusterware stack is running in exclusive mode, then use the -f option to force the shutdown of the stack.
6.      Restart the Oracle Clusterware stack in normal mode as root:
      # crsctl start crs

Assignment of IP Addresses and SCAN


Assignment of IP Addresses
·         Before installing Oracle Clusterware you will want to define the public and private adapter IP addresses. There are two options you can choose with respect to IP address assignment. You can choose static IP addresses for each adapter which will require the assignment of several static IP addresses.
·         Alternatively you can use Oracle Grid Naming Service (GNS) which will require the assignment of one static IP address. GNS will then assign dynamic IP addresses to the nodes of the cluster.
·         When installing Oracle Clusterware you will be prompted to identify the private and public adapters.  Also you should be aware if you perform any maintenance or change the network adapters that the same adapter name (for example ETH1) should be used for the public and private adapters on each server.
·         When assigning the IP address for the private adapter you need to make sure that you use a non-routable IP address, and make sure that all interfaces are configured for the correct speed, duplex and so on.

The VIP, GNS and Single Client Access Name (SCAN)
Several networking components are associated with Oracle Clusterware and Oracle RAC. These include the VIP, the GNS and the SCAN.

The VIP
·         The purpose of the Virtual IP (VIP) is to provide for high availability.
·         A VIP address is assigned to a given node.
·         When that node goes down, the VIP is moved to one of the surviving nodes.
·         This new VIP is re-ARPed. As a result, new connections are sent to a surviving node, and connections to the failed node receive an error and attempt to reconnect (assuming the application handles the errors gracefully, which can be a problem).
·         For clients connected to the failed node, this speeds up failure detection considerably, because they do not have to wait for a TCP/IP timeout, thus facilitating a quicker failover.
·         When assigning an IP address to the VIP make sure that it is on the same subnet as the default gateway.


Grid Naming Service (GNS)
·         GNS is essentially a DNS for the Oracle Cluster.
·         GNS is an optional networking component of Oracle Clusterware 11g Release 2 that provides for dynamic name resolution in the cluster.
·         GNS removes the need to have static IP addresses assigned to the network nodes.
·         GNS also removes the need to request VIPs if the cluster changes. Instead only one static IP is required which is the GNS virtual IP address.
·         In conjunction with your DHCP server, GNS will then assign dynamic IP addresses/VIPs to the cluster nodes as required. The end result is GNS makes it easier to add and remove cluster nodes.

SCAN
·         Oracle Clusterware 11g Release 2 introduced the Single Client Access Name (SCAN).
·         The SCAN address provides a single address for clients to use when accessing the cluster.
·         Before SCAN, if the cluster changed, then the client TNSNAMES.ORA files (or other tns connect strings like Easy Connect strings) would need to change.
·         The SCAN is a fully qualified domain name that typically resolves to three separate IP addresses. 
·         The SCAN provides a stable address that does not change regardless of the number of nodes on the cluster. 
·         The SCAN also provides for a highly available address, much like the VIPs. The SCAN simplifies network connectivity for applications using Easy Connect strings or JDBC thin client connection strings.
·         If you are using GNS, then you have DHCP support and GNS will assign the addresses to the SCAN dynamically. If you are not using GNS, then the SCAN will be defined in the DNS server and access requests will be resolved to one of three different IP addresses by the DNS. These addresses resolve to the SCAN listeners, and the SCAN listener will resolve the connect request to one of the nodes of the cluster. The SCAN listener load balances connection requests to the least busy instance as they come in.
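The round-robin behavior described above can be illustrated with a toy loop; the 192.0.2.x addresses are hypothetical, and in a real cluster the rotation is performed by DNS (or GNS), not by the client.

```shell
# Toy model of DNS round-robin over three SCAN addresses.
rr=$(
  i=0
  while [ $i -lt 6 ]; do
    case $((i % 3)) in
      0) ip=192.0.2.101 ;;
      1) ip=192.0.2.102 ;;
      2) ip=192.0.2.103 ;;
    esac
    echo "connection $((i + 1)) -> $ip"
    i=$((i + 1))
  done
)
echo "$rr"
```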

The following diagram displays the relationships between GNS, SCAN, Listeners and the Oracle Cluster and its associated databases.

 
Administering SCAN Resources

As a Clusterware/RAC administrator you may need to administer SCAN resources on your cluster. This can include:
  • Adding a SCAN VIP resource.
Use the srvctl command to add a SCAN VIP resource as seen in this example where we add a new resource called rac03-scan. Note that this command will create the same number of SCAN VIP resources as there are IP addresses that the SCAN resolves to. If you are using DHCP or GPnP, then Oracle will configure 3 SCAN VIPs. Other options in the command allow you to define the network number, subnet, and so on:
srvctl add scan -n rac03-scan
  • Removing a SCAN VIP resource.
Use the srvctl command to remove an existing SCAN VIP from the cluster as seen in this example:
srvctl remove scan -f
  • You may want to add a SCAN listener resource. To do so use the srvctl command as seen in the following example. Note in the example that we are assigning the listener to a non-default port:
srvctl add scan_listener -p 65001
  • You may need to remove Clusterware resources from the SCAN listeners. Use the srvctl command to perform this activity as seen here:
srvctl remove scan_listener -f
  • You may need to modify the SCAN VIP so that it matches another SCAN VIP. To do so use srvctl again as seen here:
srvctl modify scan -n new_scan_name
  • If you have changed the SCAN VIP configuration you will need to update the SCAN listeners too. Use the srvctl command to perform this action:
srvctl modify scan_listener -u
  • After making changes to the SCAN listener, you can verify those changes with the srvctl command as seen in this example:
srvctl config scan_listener

 Changing the Public VIP

While Oracle Database 11g Release 2 generally relies on the SCAN addresses (and as a result VIP changes should be less frequent in theory) there may still be times that you need to change the public VIP address. To do so, follow these steps:
·         Stop all services on the node whose VIP you want to change with the srvctl command, as seen in this example:

srvctl stop service -d rdba -s sales,oltp -n rac1
  • Make sure you know the current IP address for the VIP by using the srvctl command as seen here:
srvctl config vip -n rac1
  • Now stop the VIP on the node you will be changing with the srvctl command as seen here:
srvctl stop vip -n rac1
·         Make the network changes that are required for the new VIP to be identified on the network. This would include changing the /etc/hosts file, the DNS and so on.
  • Using the srvctl command, change the VIP address to the new address. Note that you will include the IP address of the VIP, the network subnet-mask and the adapter name as seen in this example:
srvctl modify nodeapps -n rac1 -A 192.168.2.100/255.255.255.0/eth0
  • Now, start the VIP on the new node, again using the srvctl command:
srvctl start vip -n rac1
  • Repeat steps 1-6 for each node of the cluster whose VIP you wish to change.
     
  • Use the cluvfy utility to verify node connectivity as seen in this example:
cluvfy comp nodecon -n all -verbose


Clusterware Networking


Clusterware Networking
When designing your cluster architecture you will need to consider the physical networking that will be required. You will need to consider the network adapters, high availability architectures, use of jumbo frames and assignment of IP addresses. Let’s look at each of these topics in more detail. 

Network Adapters

Each server to be used in an Oracle Clusterware configuration will require a minimum of two separate network components. These two adapters are
  • The public network adapter 
  • The private cluster interconnect
Public Network Adapter
·         The first is the public network adapter. 
·         The public network adapter must support TCP/IP and is generally connected to the local internet via a switch.
·         Administrators and users will connect to the database via the public network adapter.
·         Many servers have more than one public network adapter and these are often “bonded”. Bonding provides the ability to combine each public network adapter into one logical network adapter.
·         Bonding provides more throughput and thus better performance. Performance on the public network is quite important for the overall performance of the database.
·         High performing networks make for faster file transfers (for example backups or export/import) and faster transfer of large amounts of application data.

Private Network Adapter
·         The second required network adapter is required to act as the cluster interconnect.
·         The cluster interconnect should support TCP/IP and UDP (when using UNIX or Windows platforms).
·         The cluster interconnect is a local connection between the individual servers, and should not be part of a public network.
·         The private interconnect is for local traffic between the clustered servers. Because the private interconnect traffic should not mix with public traffic, the private interconnect connections should have their own switches.
·         Cross-over cables are not supported, and you must ensure that your configuration is supported by Oracle. Bonded interfaces are supported for improved throughput.



Once you have configured Oracle Clusterware and installed RAC databases you may wish to confirm that the RAC databases are using the correct adapter for the Private Network Adapter.

·         You can use the V$CLUSTER_INTERCONNECT view from any node of the RAC database.
·         You can also use the Oracle oradebug command to determine this information. You can access oradebug through the Oracle SQL*Plus utility.
·         You can then use the oradebug setmypid and oradebug ipc commands to determine if the correct adapter is being used for the private interconnect.
·         Here is an example:
/u01>sqlplus / as sysdba
SQL> oradebug setmypid
SQL> oradebug ipc




No Single Points of Failure Please

It is very important to avoid a single point of failure when it comes to your networking. This includes making sure you have redundant interfaces, redundant switches, redundant everything to ensure that no one piece can break.

Administering RAC Network Settings

Oracle Clusterware/RAC administrators will want to be able to monitor and administer network settings related to the cluster. Tools like oifcfg and srvctl are available for these tasks.
Using oifcfg you can:
  • List the interfaces available to the cluster as seen in this example:
oifcfg iflist -p -n
  • Determine the public and private interfaces that have been configured for use as seen in this example:
oifcfg getif

You can use the srvctl command to determine the VIP hostname, address, subnet mask and interface name as seen in this example:

srvctl config nodeapps -a





Changing the Private Interconnect Adapter
Recall that all RAC Clusters have at least two types of network adapters. The first is the public adapter and the second is the private interconnect. You may have cause to change the specification for the interconnect adapter (such as changing the IP address). Here are the instructions to do just that:
1.      Log on to one node of the cluster. Add the new global interface specification using the oifcfg command as seen here:
               oifcfg setif -global eth2/192.0.2.0:cluster_interconnect
2.      Check the changes to make sure they took effect with the oifcfg command as seen here:
               oifcfg getif
3.      Stop the cluster using the crsctl command. Execute this on each node of the cluster:
               crsctl stop crs
4.      On each node, use the ifconfig command to assign the network address to the adapters:
               ifconfig eth2 192.0.2.15 netmask 255.255.255.0 broadcast 192.0.2.255
5.      Remove the previous adapter specification using the oifcfg command as seen in this example:
               oifcfg delif -global eth1/192.168.1.0

Do not execute this step until you are sure that the replacement interface has been properly added into your cluster.
6.      Restart the Clusterware using the crsctl command:
               crsctl start crs










High Availability Networking Architectures
Your networking configurations should be architected for high availability. In a best case situation, both the private interconnect and the public interconnect would have bonded, redundant network cards and redundant switches. As a reminder, the interface names on each node of the cluster (and slots) need to be the same. Thus if the private interconnect is on eth1 on node one, it needs to be on eth1 on the remaining nodes.

Jumbo Frames
·         Ethernet packages network messages in “frames” which can be of a variable size.
·         The frame size is called the MTU or the maximum transmission unit. When a message is larger than the MTU size (typically 1500 bytes), then it is split into multiple messages.
·         This splitting of messages incurs additional overhead and network traffic. This can lead to RAC performance problems. In Oracle two main factors can influence the maximum size of a message:
§  DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT (MBRC) impacts the maximum size of a message for the global cache. For example, a DB_BLOCK_SIZE of 8k and an MBRC of 32 will result in a maximum message size of 256k. This will result in approximately 170 separate packets. Using Jumbo Frames (9k packet sizes) would result in only approximately 28 packets.
§  PARALLEL_EXECUTION_MESSAGE_SIZE (defaults to 16384 bytes) determines the maximum size of a message used in parallel execution. These messages can range from 2k to over 64k.
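The packet arithmetic behind those figures can be checked directly. This is a rough ceiling-division estimate that ignores per-packet protocol headers; it yields 175 and 30 packets, in the same ballpark as the approximate figures quoted above.

```shell
# Packets per 256 KB global cache message at standard vs jumbo MTU.
block_size=8192                                # DB_BLOCK_SIZE
mbrc=32                                        # DB_FILE_MULTIBLOCK_READ_COUNT
msg=$((block_size * mbrc))                     # 262144 bytes (256 KB)
std_pkts=$(( (msg + 1500 - 1) / 1500 ))        # ceiling division, 1500-byte MTU
jumbo_pkts=$(( (msg + 9000 - 1) / 9000 ))      # ceiling division, 9000-byte MTU
echo "$msg-byte message: $std_pkts packets at MTU 1500, $jumbo_pkts at MTU 9000"
```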
One solution to these issues is to configure the cluster interconnect to use Jumbo Frames. When you configure Jumbo Frames the Ethernet frame size can be as large as 9k in size. Configuring for Jumbo Frames requires some careful configuration. The following steps are required to configure jumbo frames (these steps might vary based on your hardware and operating system):
  • Check Oracle Metalink for more information on implementing Jumbo Frames. Check for information specific to your hardware, operating system and any bugs that might exist.
  • Configure the host's network adapter with a persistent MTU size of 9000. On a Unix system you might use a command such as ifconfig eth1 mtu 9000 (substitute your interconnect interface name).
  • Check your vendor NIC configuration requirements. The NIC may well require some configuration.
  • Ensure your switches will support the larger frame size and configure the LAN switches to increase the MTU for Jumbo Frame support.
  • You can use traceroute for some basic configuration testing as seen in this example where we do a traceroute with a 9k packet size and a 9001 byte packet size:
               traceroute -F linux01.myhost.com 9000

traceroute -F linux01.myhost.com 9001
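On RHEL-style Linux systems, the persistent MTU from step 2 is typically set in the interface's ifcfg file. A sketch only: the interface name and address below are assumptions for illustration, so substitute your own private interconnect NIC and addressing.

```
# /etc/sysconfig/network-scripts/ifcfg-eth1   (sketch; eth1 and the
# 192.168.1.x address are assumptions for illustration)
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
MTU=9000
```

After restarting the interface, verify the setting with ifconfig eth1 or ip link show eth1.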
Note that Jumbo Frames do not adhere to any single standard. As a result, Jumbo Frame interoperability between switches can be problematic and can require advanced networking skills to troubleshoot. Also keep in mind that the smallest MTU used by any device in a given network path determines the maximum MTU for all traffic travelling along that path. As with all changes, make sure that you completely test your configuration before implementing it in production.