Remote DBAs often face node reboots or node evictions in a Real Application Clusters (RAC) environment. A node reboot is performed by CRS to maintain consistency in the cluster environment by removing a node that is facing a critical issue.
A critical problem could be a node not responding via a network heartbeat, a node not responding via a disk heartbeat, a hung node, or a hung ocssd.bin process, etc. There could be many more reasons for a node eviction, but some of them are common and repetitive. Here, I am listing the:-
Top 4 Reasons for Node Reboot or Node Eviction in Real Application Cluster (RAC) Environment:
Whenever a database administrator faces a node reboot issue, the first things to look at should be /var/log/messages and the OS Watcher logs of the database node that was rebooted.
/var/log/messages will give you an actual picture of the reboot: the exact time of the restart, the status of resources like swap and RAM, etc.
1. High Load on Database Server: In my experience, in 70 to 80 cases out of 100, high load on the system was the reason for the node eviction. One common scenario: due to high load, the RAM and swap space of the DB node get exhausted, the system stops working, and it finally reboots.
So, every time you see a node eviction, start the investigation with /var/log/messages and analyze the OS Watcher logs. Below is a situation where a database node was rebooted due to high load.
/var/log/messages output from a Database Node just before Node eviction:
From the message above, we can see that the system had only 4 kB of free swap out of 24 GB of swap space. This means the system had neither RAM nor swap left for processing, which caused a reboot. The same picture is also clear from the OS Watcher output for the system.
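As a sketch, scanning the syslog for memory-exhaustion evidence around the reboot time can be done like this (the log path and the exact kernel message wording vary by distribution; this is an illustration, not a complete diagnostic):

```shell
# Sketch: scan a syslog-style file for out-of-memory and swap-exhaustion
# evidence. On some distributions the file is /var/log/syslog rather
# than /var/log/messages.
scan_messages() {
    logfile="$1"
    # "oom-killer", "Out of memory" and "Killed process" are the usual
    # kernel messages when RAM and swap run out
    grep -iE 'oom-killer|out of memory|killed process|swap' "$logfile" | tail -20
}

# Example: scan_messages /var/log/messages
```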
How to avoid a node reboot due to high load:
The simplest and best way to avoid this is to use Oracle Database Resource Manager (DBRM). DBRM helps by giving the database more control over hardware resources and their allocation. The DBA should set up resource consumer groups and a resource plan and use them as per requirements. On Exadata systems, the DBA can use I/O Resource Manager (IORM) to set up resource allocation among multiple database instances.
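As a minimal sketch of a DBRM setup (the plan and group names DAYTIME_PLAN and BATCH_GRP are illustrative, and the CPU percentages are arbitrary), a plan that caps a batch workload's CPU share could be created like this:

```sql
-- Sketch only: a minimal DBRM plan capping CPU for a batch consumer
-- group; all names and percentages here are illustrative.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'BATCH_GRP', comment => 'batch jobs');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'DAYTIME_PLAN', comment => 'protect OLTP from batch load');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'BATCH_GRP',
    comment => 'batch capped at 20% CPU', mgmt_p1 => 20);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else', mgmt_p1 => 80);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Activate the plan:
ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN';
```

Sessions still have to be mapped to the consumer group (for example with DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING) before the directive takes effect for them.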
2. Voting Disk not Reachable: Another reason for a node reboot is that the clusterware is not able to access a minimum number of the voting files. When a node aborts for this reason, the node alert log will show a CRS-1606 error.
Here is a scenario where the voting disk was not reachable:
There could be two reasons for this issue:
A. Connection to the voting disk is interrupted.
B. Only one voting disk is in use and the clusterware version is below the fix level for known bug 13869978.
How to solve a voting disk outage:
There could be many reasons why the voting disk is not reachable. Here is a general approach for the DBA to follow:
1. Use the command "crsctl query css votedisk" on a node where clusterware is up to get a list of all the voting files.
2. Check that each node can access the devices underlying each voting file.
3. Check that the permissions on each voting file/disk have not been changed.
4. Check OS, SAN, and storage logs for any errors from the time of the incident.
5. Apply the fix for bug 13869978 if only one voting disk is in use; check My Oracle Support for the patch sets and versions that include the fix.
If any voting files or underlying devices are not currently accessible from any node, work with the storage administrator and/or system administrator to resolve it at the storage and/or OS level.
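Steps 2 and 3 above can be sketched as a small shell check (the device path passed in is an assumption; obtain the real paths from "crsctl query css votedisk"):

```shell
# Sketch: confirm a voting file/device is accessible from this node and
# show its permissions. Get the real paths with:
#   crsctl query css votedisk   (on a node where clusterware is up)
check_votedisk() {
    dev="$1"
    # permissions are typically grid:oinstall (or oracle:oinstall), mode 660
    ls -l "$dev"
    # read a small amount of data to prove OS-level access to the device
    if dd if="$dev" of=/dev/null bs=1M count=1 2>/dev/null; then
        echo "OK: $dev readable"
    else
        echo "FAIL: $dev not readable" >&2
        return 1
    fi
}

# Example: check_votedisk /dev/mapper/votedisk1
```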
3. Missed Network Connection between Nodes: In technical terms, this is called a missed network heartbeat (NHB). Whenever there is a communication gap, or no communication at all, between nodes on the private network (interconnect) due to a network outage or some other reason, a node aborts itself to avoid a "split brain" situation. The most common (but not exclusive) cause of a missed NHB is a network problem communicating over the private interconnect.
Suggestions to troubleshoot a missed network heartbeat:
1. Check OS statistics from the evicted node from the time of the eviction. The DBA can use OS Watcher to look at OS stats at the time of the issue; check oswnetstat and oswprvtnet for network-related issues.
2. Validate the interconnect network setup with the help of the network administrator.
3. Check communication over the private network.
4. Check that the OS network settings are correct by running the RACcheck tool.
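One concrete check for step 3 above is to scan "netstat -i" output for interfaces reporting errors or drops. This is a sketch: the column positions assume the standard Linux net-tools layout, and the name of the private interconnect NIC differs per cluster.

```shell
# Sketch: flag NICs reporting receive/transmit errors or drops in
# "netstat -i" output; on a healthy interconnect these counters should
# stay at (or very near) zero.
check_nic_errors() {
    # expects "netstat -i" output on stdin; skips the two header lines
    # and prints any interface with nonzero RX-ERR/RX-DRP/TX-ERR/TX-DRP
    awk 'NR > 2 && ($4 > 0 || $5 > 0 || $8 > 0 || $9 > 0) {print "WARN:", $0}'
}

# Example: netstat -i | check_nic_errors
```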
4. Database or ASM Instance Hang: Sometimes a database or ASM instance hang can cause a node reboot. In such cases, the instance hangs and is terminated afterwards, which causes either a cluster reboot or a node eviction. The DBA should check the alert logs of the database and ASM instances for any hang situation that might have caused the issue.
Database Alert log file entry for Database Hang Situation:
At the same time, resources at the cluster level start failing and the node evicts itself. Real Application Cluster log files:
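A quick scan of a database or ASM alert log for hang-related entries can be sketched as follows. The error codes shown (ORA-29770, ORA-00494) are common hang indicators, not an exhaustive list, and the alert log location in the example is illustrative; it varies by version and diagnostic_dest.

```shell
# Sketch: scan a database/ASM alert log for common hang-related entries.
# ORA-29770 (global enqueue process hung, raised by LMHB) and ORA-00494
# (enqueue held for too long) often appear before hang-triggered
# instance terminations.
scan_alert_log() {
    grep -iE 'ORA-29770|ORA-00494|terminating the instance|hung' "$1" | tail -20
}

# Example (illustrative path):
# scan_alert_log /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log
```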
So, I believe this could be due to some bug in the database.
In a few cases, bugs are indeed the reason for a node reboot. The bug may be at the database level, the ASM level, or the Real Application Cluster level. Here, after the initial investigation from the database administrator's side, the DBA should open an SR with Oracle Support.
Please share any other common reasons for node eviction in a RAC environment in the comments section of this post.