I was installing RAC, and during the clusterware install I picked the wrong interfaces for public and private. I had a 10.x.x.x public IP on eth0 and a 192.x.x.x private IP (the interconnect) on eth1. The VIP was also in the 10.x.x.x range. During the install I chose eth1 as the public interface. Right after the install I lost my connection to the machine via the 10.x.x.x IP. What had happened was that I now had a 10.x.x.x address on both eth0 and eth1, which was messing up the routing. The solution? Simply modify the VIP in the cluster configuration.
There's actually a Metalink article about this. Here are the essential commands:
srvctl stop nodeapps -n NODE1
srvctl stop nodeapps -n NODE2
srvctl modify nodeapps -n NODE1 -A 10.5.5.101/255.255.255.0/eth0
srvctl modify nodeapps -n NODE2 -A 10.5.5.102/255.255.255.0/eth0
srvctl start nodeapps -n NODE1
srvctl start nodeapps -n NODE2

They worked just fine. So if you ever mess up the interfaces, this is how you fix it. If you need to change the private interface, then you need to use oifcfg. To verify your current settings use:
oifcfg getif
eth0 10.5.5.0 global public
eth1 192.168.0.0 global cluster_interconnect
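For the interconnect itself, a minimal sketch of re-registering the private interface looks like the following. This assumes the eth1 interface and 192.168.0.0 subnet from the example above (substitute your own), and it should be run as the clusterware owner; these are cluster configuration commands, so exact behavior can vary by clusterware version:

```shell
# Drop the existing global registration for the private interface (eth1
# and the subnet are the example values from this post -- use your own).
oifcfg delif -global eth1

# Re-register eth1 on the correct subnet as the cluster interconnect.
oifcfg setif -global eth1/192.168.0.0:cluster_interconnect

# Confirm the new registration.
oifcfg getif
```

Note that oifcfg's subcommand for adding an interface registration is setif, not addif.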
And use delif/setif to remove and re-create your private interface. References: Changing VIP, Changing interconnect.