The Oracle Local Registry (OLR) contains node-specific information required by OHASD. Because the OCR files reside in an ASM disk group, the clusterware cannot access the OCR to find the cluster resource information while the stack is still starting, since the ASM instance is also down at that point; the OLR supplies the locally stored information needed to bootstrap the stack.
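To see where the OLR is stored on a node, the local registry can be inspected with ocrcheck; a minimal sketch, assuming a Grid home of /u01/app/11.2.0/grid and run as root (output details vary by installation):

/u01/app/11.2.0/grid/bin/ocrcheck -local
cat /etc/oracle/olr.loc      # pointer file recording the OLR location on Linux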
- evmd: The Event Manager daemon. This process also starts the racgevt process to manage FAN server callouts.
- ocssd: Manages cluster node membership and runs as the oracle user; failure of this process results in a node restart.
The Oracle Cluster Synchronization Services daemon (CSSD) manages cluster node membership, while the Cluster Ready Services daemon (CRSD) is the main engine for maintaining the availability of resources.
- Step 1: Prepare the New Cluster Nodes.
- Step 2: Deploy the Oracle Grid Infrastructure Home on the Destination Nodes.
- Step 3: Run the clone.pl Script on Each Destination Node (see the sketch after this list).
- Step 4: Launch the Configuration Wizard.
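As an illustration of Step 3, the clone.pl script under the Grid home's clone/bin directory is typically run on each destination node roughly as follows; the paths, Oracle home name, and inventory location are illustrative assumptions and must match your environment:

cd /u01/app/11.2.0/grid/clone/bin
perl clone.pl ORACLE_BASE=/u01/app/grid \
    ORACLE_HOME=/u01/app/11.2.0/grid \
    ORACLE_HOME_NAME=OraGridHome1 \
    INVENTORY_LOCATION=/u01/app/oraInventory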
Split brain is often used to describe the scenario in which two or more nodes in a cluster lose connectivity with one another but then continue to operate independently of each other, including acquiring logical or physical resources, under the incorrect assumption that the other process(es) are no longer operational or using those resources.
The CRS daemon (crsd) manages cluster resources based on configuration information that is stored in Oracle Cluster Registry (OCR) for each resource. This includes start, stop, monitor, and failover operations. The crsd process generates events when the status of a resource changes.
There are two RAC processes that decide on node evictions and initiate them on almost all platforms. 1. OCSSD: This process is primarily responsible for inter-node health monitoring and instance endpoint recovery. It runs as the oracle user.
Nodeapps are a standard set of Oracle application services that are automatically launched for RAC (Real Application Clusters). The following services are launched by nodeapps:
- Virtual IP (VIP)
- Oracle Net Listener
- Global Services Daemon (GSD)
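A quick way to see the state and configuration of the nodeapps on a cluster is srvctl; on older releases you may need to add -n <node_name>:

srvctl status nodeapps
srvctl config nodeapps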
The Voting Disk contains information to determine which nodes are active members of the cluster at any given moment. In Oracle RAC 10g Release 2, these storage locations can be files on a shared file system such as OCFS2, or they can be raw devices or block devices configured on Linux.
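The configured voting disks can be listed with crsctl; the output will reflect your own storage locations:

crsctl query css votedisk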
In Oracle 10g RAC and 11gR1 RAC, Oracle Clusterware and ASM are installed in different Oracle homes, and the Clusterware has to be up before the ASM instance can be started, because the ASM instance uses the Clusterware to access the shared storage. The OCR and voting disk of 11gR2 Clusterware can be stored in ASM.
Brief explanation of the startup sequence: the inittab entry is what triggers the Oracle High Availability Services daemon. When a node of an Oracle Clusterware cluster starts, OHASD is started by platform-specific means, such as init.d on Linux. OHASD is the root for bringing up Oracle Clusterware.
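On 11gR2 Linux systems this entry typically looks like the following; the exact runlevels and script path can vary by platform and version:

h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null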
Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager. ASM groups the disks in your storage system into one or more disk groups. You manage a small set of disk groups and ASM automates the placement of the database files within those disk groups.
This location contains log files and diagnostic messages for Oracle Clusterware. In addition, there are the following supplemental CRS log locations:
- $ORA_CRS_HOME/crs/log: Contains trace files for the CRS resources.
- $ORA_CRS_HOME/crs/init: Contains trace files of the CRS daemon during startup.
Log in to the system as the root, oracle, or grid user and execute the command. Only a few of the processes are listed here; from the output you can easily see that /u01/app/11.2.0.3/grid is the GRID_HOME on this system. Set GRID_HOME and you are ready for remote DBA activities.
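The actual command is not shown above; a common approach (purely illustrative, assuming the stack is up and the grep pattern is an assumption) is to list the running clusterware daemons and read the Grid home from their path:

ps -ef | grep -E 'ohasd|crsd|ocssd' | grep -v grep
export GRID_HOME=/u01/app/11.2.0.3/grid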
Use the crsctl query crs releaseversion command to display the version of the Oracle Clusterware software stored in the binaries on the local node.
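For example, run the following from Grid_home/bin on the node in question; the related command crsctl query crs activeversion reports the version the cluster is actively running rather than the version stored in the binaries:

crsctl query crs releaseversion
crsctl query crs activeversion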
- Querying CRS Resource Status.
- Starting Cluster Resources by Using the crsctl Command.
- Stopping Cluster Resources by Using the crsctl Command.
- Starting Database Instances by Using the srvctl Command.
- Stopping Database Instances by Using the srvctl Command.
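Hedged examples of the operations listed above; the database name orcl and instance orcl1 are hypothetical, so substitute your own, and the cluster-wide commands are run as root from Grid_home/bin:

crsctl status resource -t                  # query CRS resource status
crsctl start cluster -all                  # start cluster resources on all nodes
crsctl stop cluster -all                   # stop cluster resources on all nodes
srvctl start instance -d orcl -i orcl1     # start a database instance
srvctl stop instance -d orcl -i orcl1      # stop a database instance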
To stop cluster resources of the local node, run the following command:

[root@dbn01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbn01'
CRS-2673: Attempting to stop 'ora...
The CRSCTL utility is present in the Grid_home/bin location. Among other things, it can be used to:
- Check the Clusterware on all nodes or single node.
- Start and stop the Clusterware on all nodes or Single node.
- Get hostname with CRSCTL.
- Add and Delete resource with Crsctl commands.
- Get the cluster configuration.
- Get cluster name.
- Configure the CRS.
- Enable and disable the CRS.
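Hedged examples corresponding to several of the operations listed above; run them from Grid_home/bin, and note that commands that stop or modify the stack generally require root:

crsctl check cluster -all     # check the Clusterware on all nodes
crsctl check crs              # check the Clusterware on the local node
crsctl start crs              # start the Clusterware on the local node
crsctl stop crs               # stop the Clusterware on the local node
crsctl get hostname           # get the hostname of the local node
crsctl enable crs             # enable automatic startup of the CRS stack
crsctl disable crs            # disable automatic startup of the CRS stack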
Assuming you have not started the cluster stack and only OHASD is running:
- Start CRS in exclusive mode on any one of the nodes.
- Add the new disks to the ASM disk group.
- Identify the latest backup.
- Restore the OCR from automatic backup.
- Start the CRS in exclusive mode.
- Replace the voting disk from automatic backup.
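A hedged sketch of the commands behind the steps above; the disk group name +DATA and the backup file path (under the default Grid_home/cdata/<cluster_name> location) are illustrative assumptions, and the commands are run as root from Grid_home/bin:

crsctl start crs -excl -nocrs                 # start the stack in exclusive mode (11.2.0.2+)
ocrconfig -showbackup                         # identify the latest automatic OCR backup
ocrconfig -restore /u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr   # restore the OCR
crsctl replace votedisk +DATA                 # re-create the voting disk in the ASM disk group
crsctl stop crs -f                            # stop the exclusive-mode stack
crsctl start crs                              # restart the stack normally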
The CRSCTL stop command stops Oracle Restart, and the CRSCTL disable command ensures that the components managed by Oracle Restart do not restart automatically. The CRSCTL enable command re-enables automatic restart, and the CRSCTL start command restarts Oracle Restart.
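In an Oracle Restart (standalone Grid Infrastructure) environment these correspond to the "has" variants of crsctl; a minimal sketch, run as root or the grid software owner depending on your configuration:

crsctl stop has
crsctl disable has
crsctl enable has
crsctl start has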
- crsctl stop cluster: stops the Clusterware stack (HA services) on the local node.
- crsctl stop cluster -all: stops the Clusterware stack on all nodes.
- crsctl stop crs: stops Oracle High Availability Services on the local node.
CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects. Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster, such as checking the health of the cluster across all nodes.
SOLUTION
- Stop the OHAS stack (as "grid" OS user):
- Connect as root user (different session) and unlock the Oracle Grid Infrastructure Standalone installation as follows:
- Then relink the Oracle Grid Infrastructure Standalone installation as follows (as grid user):
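A hedged sketch of the three steps above for an 11.2 standalone (Oracle Restart) Grid home at /u01/app/11.2.0/grid; the paths are illustrative assumptions, and the exact procedure should be confirmed against the relinking documentation for your version:

crsctl stop has                                          # step 1, as the grid user
/u01/app/11.2.0/grid/crs/install/roothas.pl -unlock      # step 2, as root in a different session
export ORACLE_HOME=/u01/app/11.2.0/grid                  # step 3, as the grid user
$ORACLE_HOME/bin/relink all
# Afterwards the home is typically locked again and the stack restarted.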