
How to Prepare for a Real Application Clusters Install in Oracle 12c

By Chris Ruel, Michael Wessler

Each OS has its own configuration for a Real Application Clusters (RAC) install to use with Oracle 12c databases. It's virtually impossible to cover everything here, but some basics can get you started.

That said, a few pieces of advice apply across platforms:

  • Thoroughly read the Oracle Grid Infrastructure installation and deployment guide for your specific OS. What applies on one OS may not fly on another.

  • Be consistent across all nodes when naming users, groups, group IDs, and user IDs. Make sure the same user owns all the Oracle software components.

    For example, on Linux, oracle is typically the account that owns the Oracle software installation. Create this user exactly the same way on every node. Linux has at least two OS groups for Oracle (dba and oinstall); these must be identical on all nodes.

    For the users and groups, this applies to the group ID (gid) and user ID (uid) as well. The gid and uid maintain permissions at the OS level; if they aren't identical across nodes, permissions won't be maintained correctly, and the cluster won't function.
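A minimal sketch of what that consistency check looks like in practice. The creation commands are shown as comments because they must run as root; the IDs 54321/54322 and the sample `id` output strings are illustrative assumptions, not Oracle-mandated values:

```shell
# Assumed setup (run as root on EVERY node, with the same IDs each time):
#   groupadd -g 54321 oinstall
#   groupadd -g 54322 dba
#   useradd  -u 54321 -g oinstall -G dba oracle
# Then verify: `id oracle` must print the same uid/gid line on every node.
# The strings below stand in for output you'd capture with: ssh nodeN 'id oracle'
node1_id='uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)'
node2_id='uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)'
[ "$node1_id" = "$node2_id" ] && echo "uid/gid match" || echo "uid/gid MISMATCH"
```

Any mismatch in the comparison is a sign to fix the user or group definitions before going any further with the install.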

  • Set up the hosts file correctly. This goes for all RAC installations. The clustering software uses the hosts file during installation and to maintain communications. A domain name server (DNS) doesn't substitute for this. You can add the host configuration to DNS if you want, but make sure the hosts file itself is properly configured.

    Here’s an example of what a two-node RAC host file may look like:

    127.0.0.1 localhost.localdomain localhost
    192.168.100.11 node1-priv.perptech.com node1-priv # node1 private
    192.168.100.12 node2-priv.perptech.com node2-priv # node2 private
    192.168.200.11 node1.perptech.com node1 # node1 public
    192.168.200.12 node2.perptech.com node2 # node2 public
    192.168.200.21 node1-vip.perptech.com node1-vip # node1 virtual
    192.168.200.22 node2-vip.perptech.com node2-vip # node2 virtual

    • Each cluster node connects to the others through a private, high-speed network (the cluster interconnect).

    • The public IP used for all user communication to the nodes isn’t related to the interconnect.

    • Each cluster node also has a virtual IP address that binds to the public NIC. If a node fails, the failed node’s IP address can be reassigned to another node so applications can keep accessing the database through the same IP address.

      As of Oracle 11gR2, this is handled by a cluster networking component called a SCAN (Single Client Access Name). Three VIPs on the network are assigned to one SCAN name (typically the name of your cluster), and that one SCAN name is then used for all client communication. The three VIPs can float across the nodes to provide constant connectivity and failover capability.
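A quick sanity check over the example hosts file above, confirming that every public node name has a matching virtual IP (-vip) entry. The node names come from the example; the SCAN name shown in the comment is a hypothetical one for your own cluster:

```shell
# Public and VIP entries copied from the example hosts file above
HOSTS='192.168.200.11 node1.perptech.com node1
192.168.200.12 node2.perptech.com node2
192.168.200.21 node1-vip.perptech.com node1-vip
192.168.200.22 node2-vip.perptech.com node2-vip'

# Every public node should have a corresponding -vip entry
for n in node1 node2; do
  echo "$HOSTS" | grep -q " ${n}-vip" && echo "$n: vip OK" || echo "$n: vip MISSING"
done

# The three SCAN VIPs (11gR2+) live in DNS, not the hosts file; check
# resolution with something like: nslookup cluster-scan.perptech.com
```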

  • When using Oracle Grid Infrastructure, install it in a directory that isn't a subdirectory of your Oracle base. For example:

    ORACLE_BASE=/u01/app/oracle
    ORA_CRS_HOME=/u01/app/grid

    You must set many permissions under the Grid Infrastructure home for special root access. You don’t want those settings to interfere with the database software installation.
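The separation can be verified with a simple path check, a sketch using the example directories above:

```shell
# Confirm the Grid Infrastructure home is NOT underneath ORACLE_BASE
ORACLE_BASE=/u01/app/oracle
ORA_CRS_HOME=/u01/app/grid

case "$ORA_CRS_HOME" in
  "$ORACLE_BASE"/*) echo "WARNING: grid home is inside ORACLE_BASE" ;;
  *)                echo "OK: grid home is outside ORACLE_BASE" ;;
esac
```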

  • When using Oracle Grid Infrastructure, correctly set the permissions on the underlying storage devices used for the ASM disk groups. If the permissions are wrong, you can't complete the installation; after a node reboot, the clustering services may fail to rejoin the cluster, or the node may continually reboot itself.
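On Linux, device ownership reverts to root at every reboot unless something reapplies it, which is why a udev rule is a common way to make the ASM disk permissions persistent. This is a hedged config-fragment example; the device match (sdb1) and the grid:asmadmin owner/group are assumptions you'd replace with your own devices and the user that owns Grid Infrastructure:

```
# Example rule in /etc/udev/rules.d/99-oracle-asm.rules
# Reapplies owner/group/mode to the ASM candidate disk on every boot
KERNEL=="sdb1", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660"
```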

  • Configure the nodes in your cluster to be able to use the following:

    • rsh or ssh (ssh is recommended if you're on 10gR1 or greater.)

    • rcp or scp (scp is recommended if you're on 10gR1 or greater.)

    • User equivalence for nonpassword authentication

The communication and copying features are needed for software installation and patching. They aren't required for RAC to operate afterward, so you can disable them after the install if leaving them open violates company security policy.
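User equivalence boils down to passwordless ssh between the oracle accounts on all nodes. A hedged sketch follows; the key is generated in a temporary directory here so the example is safe to run anywhere, and the real-cluster commands (with the assumed node names node1/node2 from the hosts example) are shown as comments:

```shell
# Demonstration: generate an ssh keypair with no passphrase in a temp dir
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -q -f "$KEYDIR/id_rsa"
ls "$KEYDIR"

# On a real cluster, as the oracle user on each node:
#   ssh-keygen -t rsa                # accept the default location (~/.ssh)
#   ssh-copy-id oracle@node1         # repeat for every node, including itself
#   ssh-copy-id oracle@node2
#   ssh oracle@node2 date            # should return without a password prompt
```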

Oracle 12c Real Application Clusters for high availability

RAC helps with high availability by providing redundancy in your environment — specifically, redundant Oracle instances. One instance in a multi-instance environment can be removed for OS, hardware, or Oracle software maintenance without disrupting the application.

However, make sure your expectations meet what RAC can deliver:

  • RAC doesn’t cover all points of failure. It definitely helps harden against node failure and instance failure. Unfortunately, it can’t help with SAN failure, interconnect failure, or user error.

  • RAC isn’t typically considered a disaster-protection solution. If your entire site is compromised by wind, fire, or water, RAC is going with it.

Extended Real Application Clusters and Oracle 12c

New developments are happening in an approach called Extended RAC. This RAC solution can protect against total site loss while providing all the other RAC features. As network transmission speeds increase, running a cluster with instances in geographically remote locations becomes feasible.

This configuration requires high-speed SAN mirroring and a network transmission medium called dark fiber. Dark fiber is a private, direct connection between two remote sites that can handle multiple network transmissions at once over the same cable by using varying light frequencies.

At press time, Extended RAC appears to have distance limitations: the farther apart the sites, the higher the latency, and latency turns into cluster performance degradation. We’ve been unable to find any definitive documentation on the distance limits; the degradation appears to depend heavily on your type of connection. Some sites use repeaters to extend the distance even further.

In the meantime, if you need a remote site configured for disaster recovery, you may want to consider Data Guard. It can offer a lot of the features that Extended RAC does but at a fraction of the cost with no real distance limits.