11g Release 2 RAC – Network configuration

Oracle RAC requires, at minimum, 2 physical network devices to be configured per node. These are used as the server's public network interface (public IP) and the cluster interconnect (private IP). Additionally, an extra IP address is reserved for use by the Oracle cluster software (virtual IP). So in summary, there are at minimum 2 physical network devices and 3 IP addresses per RAC node.

Oracle 11g RAC introduces the concept of the single client access name, or SCAN. The idea behind this is that applications and clients connect to a single domain name when connecting to a RAC database. A minimum of 3 IP addresses is configured for this SCAN name, which enables applications to keep connecting transparently to the RAC database, without any reconfiguration of their database connection settings, when RAC nodes are added or removed.
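For illustration, a client-side tnsnames.ora entry using the SCAN might look like the following (the alias, SCAN host name, and service name here are examples based on the names used in this article, not fixed values):

```
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = solarac-scan.niradj.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb.niradj.com)
    )
  )
```

Because the client resolves only the SCAN name, this entry stays valid regardless of how many nodes currently make up the cluster.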

This affords an extra level of availability, without needing changes to application servers/client connectivity settings (possibly involving downtime), when the RAC database node configuration is changed. The table below contains a simple summary of the network requirements we will be configuring:

Component     Physical network interface      IP address
-----------   -----------------------------   ----------------------------------
Public IP     Yes, 1 per node (minimum)       Yes, 1 per node
Private IP    Yes, 1 per node (minimum)       Yes, 1 per node
Virtual IP    No                              Yes, 1 per node
SCAN          No                              Yes, 3 total for cluster (minimum)

*Note: The minimum number of physical network interfaces excludes redundancy/failure tolerance for network devices. In most production environments, configuring redundancy for this component is a mandatory prerequisite.

All the IP addresses specified above must be registered with the DNS servers in order to be usable. (It is possible to configure 11g RAC without a working DNS server, but this is a workaround, is not recommended for live production environments, and will not be covered here.)

An example /etc/hosts configuration is shown here (the IP addresses below are illustrative placeholders; substitute the addresses assigned in your own environment):

127.0.0.1      localhost
192.168.1.101  solarac1        solarac1.niradj.com       loghost
10.0.0.101     solarac1-priv   solarac1-priv.niradj.com
10.0.1.101     solarac1-priv2  solarac1-priv2.niradj.com
192.168.1.102  solarac2        solarac2.niradj.com
10.0.0.102     solarac2-priv   solarac2-priv.niradj.com
10.0.1.102     solarac2-priv2  solarac2-priv2.niradj.com
192.168.1.110  solastorage     solastorage.niradj.com
192.168.1.111  solarac-scan    solarac-scan.niradj.com
192.168.1.112  solarac-scan    solarac-scan.niradj.com
192.168.1.113  solarac-scan    solarac-scan.niradj.com

Note that the public IPs and private IPs must be located on different subnets (that is, on separate network segments). The purpose of this is to ensure that the private IPs are used only for communication among the RAC nodes, and that public connections (for example, from applications or user client programs) do not use these network interfaces.

The virtual IPs (VIPs) are not actual physical interfaces; like the SCAN IPs, they are services managed by the Grid Infrastructure, and they must be configured on the same subnet as the public network interface.
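Once the Grid Infrastructure is installed, the VIP and SCAN configuration can be inspected with srvctl. A sketch (the node name is this article's example, and this assumes the grid user's environment is set up):

```
# Show the VIP configured for a given node
srvctl config vip -n solarac1

# Show the SCAN name and its three IP addresses
srvctl config scan
```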

Additionally, since Oracle RAC involves not only communication between the database and applications, but also heavy inter-node communication via the Oracle Grid Infrastructure, it is recommended to modify the maximum transmission unit (MTU) not only on the network interfaces used, but also on any switches, load balancers, etc. in the environment. By default, UNIX inter-instance communication is handled via UDP (User Datagram Protocol).

The basic idea is as follows: the RAC database will normally be expected to transfer database blocks between nodes as SQL statements that reference the same row (or to be precise, data in the same database block) are executed on 2 or more nodes (depending on your configuration).

The default Oracle database block size is 8192 bytes (8k), and it can be configured substantially larger (up to 32k). However, the default MTU is 1500 bytes, meaning that even a default-sized Oracle database block cannot be transmitted in one go via the network interfaces. The block is instead 'parcelled' into several pieces (packets) and then transmitted over the network.

Not only does this increase the workload (CPU and memory) required to break the data into chunks, transmit them via the network, and reassemble them at the other end into a valid block before handing it to the database for processing, but a transmission error in any single packet may require the whole exchange to be re-attempted. To improve the overall performance of interconnect traffic, increasing this default MTU value is highly recommended for RAC implementations.
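To make the arithmetic concrete, the sketch below (plain Python, using a simplified model in which every packet carries a 20-byte IPv4 header and an 8-byte UDP header) counts the packets needed to carry one database block at different MTUs:

```python
import math

# Simplified model: each packet carries an IPv4 header (20 bytes) and a
# UDP header (8 bytes), leaving mtu - 28 bytes of usable payload.
IP_HEADER = 20
UDP_HEADER = 8

def packets_per_block(block_size, mtu):
    """Approximate number of packets needed to move one database
    block of block_size bytes over a link with the given MTU."""
    payload = mtu - IP_HEADER - UDP_HEADER
    return math.ceil(block_size / payload)

if __name__ == "__main__":
    for mtu in (1500, 9000):
        print(f"8k block at MTU {mtu}: {packets_per_block(8192, mtu)} packet(s)")
```

With the default MTU of 1500, an 8k block needs 6 packets; at an MTU of 9000 it fits in a single packet.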

A simple example of setting a modified MTU value is shown here for illustration (please note, this will NOT currently work in a VMware Workstation environment):

On the server, edit the /kernel/drv/e1000g.conf file (this assumes that e1000g<number> devices, rather than ce<number> or bge<number>, are used for network communications) as follows; this will allow maximum transmission sizes (frames) of up to 16k:
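The relevant e1000g driver parameter is MaxFrameSize, which takes a comma-separated list of per-instance values (0 = frames up to 1500 bytes, 1 = up to 4k, 2 = up to 8k, 3 = up to 16k). A line similar to the following (here enabling 16k frames on the first four driver instances; adjust the list length to match your instances) would be added or modified in the file:

```
MaxFrameSize=3,3,3,3;
```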


Next, edit the /etc/hostname.e1000g<number> files to add an extra clause for the mtu size, similar to the following:

solarac1-priv mtu 9000

 Next, reboot the server, and verify that the changes have taken place by running an ifconfig -a on the RAC node:

e1000g0: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu 9000 index 2

Here we can see that the MTU size for this interface has been successfully modified to 9000 bytes, which is sufficient to accommodate an 8k block size RAC database. A higher value may need to be set depending on your database block size. Additionally, configuration changes to switches are not shown here, as these tend to be proprietary to the brand of network peripheral used (best bet: get your network vendor to advise on this).
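As an end-to-end check, large packets can be sent across the interconnect between the nodes. On Solaris, ping in statistics mode accepts a data size and packet count (the hostname below is this article's example private interface):

```
# From solarac1, send 5 packets with 8972 data bytes each
# (8972 data + 8 ICMP header + 20 IP header = 9000 bytes on the wire)
ping -s solarac2-priv 8972 5
```

If any device along the private network path still has a smaller MTU, these packets will be fragmented or dropped, which makes this a quick way to spot a switch that was missed during configuration.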


About oracletempspace

I'm an Oracle enthusiast, whose work revolves around consulting, designing, implementing and generally helping businesses get the most out of Oracle Database and related products.
This entry was posted in Oracle 11g Release 2, Oracle RAC, Oracle Solaris 10, RAC network configuration. Bookmark the permalink.
