Oracle Cluster Registry, OCR File and Voting Disk Administration by Example - (Oracle 10g), Part One

Expert Oracle Tips by Jeff Hunter

March 21, 2011

 
Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when installed on servers running the same operating system, enables those servers to be bound together to operate and function as a single server, or cluster. This infrastructure simplifies the requirements for an Oracle Real Application Clusters (RAC) database by providing cluster software that is tightly integrated with the Oracle Database.

Oracle Clusterware requires two critical components: a voting disk to record node membership information, and the Oracle Cluster Registry (OCR) to record cluster configuration information.

Voting Disk

The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node membership and status. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome in which two or more instances attempt to control the RAC database. This can occur in cases where there is a break in communication between nodes through the interconnect.

The voting disk must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing) but you must have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a maximum of 32 voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.

A node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.
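The majority rule above is simple integer arithmetic: with N voting disks, a node must see floor(N/2) + 1 of them, so the cluster survives floor((N-1)/2) voting disk failures. A quick illustrative check (plain shell, not an Oracle utility):

```shell
#!/bin/sh
# For N configured voting disks, a node must access a strict majority.
# required  = N/2 + 1  (integer division)
# tolerated = N - required  (voting disk failures the cluster survives)
quorum() {
    n=$1
    required=$(( n / 2 + 1 ))
    tolerated=$(( n - required ))
    echo "$n disk(s): need $required, survive $tolerated failure(s)"
}
quorum 1   # 1 disk(s): need 1, survive 0 failure(s)
quorum 3   # 3 disk(s): need 2, survive 1 failure(s)
quorum 5   # 5 disk(s): need 3, survive 2 failure(s)
```

This is why an even number of voting disks buys nothing: four disks tolerate only one failure, the same as three.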

Oracle Cluster Registry (OCR)

The OCR maintains cluster configuration information as well as configuration information about any cluster database within the cluster. It is the repository of configuration information for the cluster, managing information such as the cluster node list and the instance-to-node mapping. This configuration information is used by many of the processes that make up CRS, as well as by other cluster-aware applications, which use the repository to share information among themselves. Some of the main components included in the OCR are:

  • Node membership information 
  • Database instance, node, and other mapping information 
  • ASM (if configured) 
  • Application resource profiles such as VIP addresses, services, etc. 
  • Service characteristics 
  • Information about processes that Oracle Clusterware controls 
  • Information about any third-party applications controlled by CRS (10g R2 and later) 
    

The OCR stores configuration information in a series of key-value pairs within a directory tree structure. To view the contents of the OCR in a human-readable format, run the ocrdump command. This will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE.

The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single OCR, then you should use external mirroring to provide redundancy. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).

This article provides a detailed look at how to administer the two critical Oracle Clusterware components: the voting disk and the Oracle Cluster Registry (OCR). The examples described in this guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform. (Note: it is highly recommended to take a backup of the voting disk and OCR file before making any changes! Instructions are included in this guide on how to perform backups of the voting disk and OCR file.)
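Before any of the changes below, take those backups. A minimal dry-run sketch of the 10g-era approach (ocrconfig -export for a logical OCR backup, dd for a copy of the voting disk); the voting disk path is this article's example configuration, BACKUP_DIR is a hypothetical location, and the script only prints the commands (remove the echo and run as root to execute them):

```shell
#!/bin/sh
# Sketch: print the backup commands to run (as root) before changing
# the voting disk or OCR. VOTING_DISK is from this article's example
# configuration; BACKUP_DIR is a hypothetical backup location.
VOTING_DISK=/u02/oradata/racdb/CSSFile
BACKUP_DIR=/u03/crs_backup
STAMP=$(date +%Y%m%d_%H%M%S)

print_backup_cmds() {
    # Logical OCR export; restore later with: ocrconfig -import <file>
    echo "ocrconfig -export $BACKUP_DIR/ocr_export_$STAMP.dmp"
    # Voting disk copy via dd, the documented 10g method
    echo "dd if=$VOTING_DISK of=$BACKUP_DIR/CSSFile_$STAMP.bak"
}
print_backup_cmds
```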

CRS_home

 The Oracle Clusterware binaries used in this article (i.e. crs_stat, ocrcheck, crsctl, etc.) are executed from the Oracle Clusterware home directory, which for the purpose of this article is /u01/app/crs. The environment variable $ORA_CRS_HOME is set to this directory for both the oracle and root user accounts, and its bin directory is included in the $PATH:

[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck
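If $ORA_CRS_HOME is not already set for an account, the usual fix is a couple of lines in the user's profile (for example ~/.bash_profile); the path shown is this article's CRS home:

```shell
# Append to ~/.bash_profile for both the oracle and root accounts
# (the path is this article's Oracle Clusterware home):
export ORA_CRS_HOME=/u01/app/crs
export PATH=$ORA_CRS_HOME/bin:$PATH
```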


Example Configuration

The example configuration used in this article consists of a two-node RAC with a clustered database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle instance named racdb1 and racdb2 respectively. For a detailed guide on building the example clustered database environment, please see my article entitled "Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)."

The example Oracle Clusterware environment is configured with a single voting disk and a single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the oracle user in the oinstall group with 0644 permissions, while the OCR file is owned by root in the oinstall group with 0640 permissions:

[oracle@racnode1 ~]$ ls -l /u02/oradata/racdb
total 16608
-rw-r--r-- 1 oracle oinstall 10240000 Aug 26 22:43 CSSFile
drwxr-xr-x 2 oracle oinstall     3896 Aug 26 23:45 dbs/
-rw-r----- 1 root   oinstall  6836224 Sep  3 23:47 OCRFile 

Check Current OCR File

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

Check Current Voting Disk

[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

Preparation

To prepare for the examples used in this guide, five new iSCSI volumes were created on the SAN; they will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW devices:

Five New iSCSI Volumes and their Local Device Name Mappings

iSCSI Target Name                          Local Device Name          Disk Size
iqn.2006-01.com.openfiler:racdb.ocr1       /dev/iscsi/ocr1/part       512 MB
iqn.2006-01.com.openfiler:racdb.ocr2       /dev/iscsi/ocr2/part       512 MB
iqn.2006-01.com.openfiler:racdb.voting1    /dev/iscsi/voting1/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting2    /dev/iscsi/voting2/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting3    /dev/iscsi/voting3/part    32 MB

After the new iSCSI volumes have been created on the SAN, they need to be configured for access and bound to RAW devices on all Oracle RAC nodes in the database cluster.

  • Step 1: From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from the SAN which will be used to store the voting disks and OCR files.
[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

[root@racnode2 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3
  • Step 2: Manually login to the new iSCSI targets from all Oracle RAC nodes in the cluster.
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l
  • Step 3: Create a single primary partition on each of the five new iSCSI volumes that spans the entire disk. Perform this from only one of the Oracle RAC nodes in the cluster:
[root@racnode1 ~]# fdisk /dev/iscsi/ocr1/part
[root@racnode1 ~]# fdisk /dev/iscsi/ocr2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting1/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting3/part
  • Step 4: Re-scan the SCSI bus from all Oracle RAC nodes in the cluster:
[root@racnode2 ~]# partprobe
  • Step 5: Create a shell script (/usr/local/bin/setup_raw_devices.sh) on all Oracle RAC nodes in the cluster to bind the five Oracle Clusterware component devices to RAW devices as follows:
#!/bin/bash
# +---------------------------------------------------------+
# | FILE: /usr/local/bin/setup_raw_devices.sh               |
# +---------------------------------------------------------+

# +---------------------------------------------------------+
# | Bind OCR files to RAW device files.                     |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
/bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
sleep 3
/bin/chown root:oinstall /dev/raw/raw1
/bin/chown root:oinstall /dev/raw/raw2
/bin/chmod 0640 /dev/raw/raw1
/bin/chmod 0640 /dev/raw/raw2

# +---------------------------------------------------------+
# | Bind voting disks to RAW device files.                  |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
/bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
/bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
sleep 3
/bin/chown oracle:oinstall /dev/raw/raw3
/bin/chown oracle:oinstall /dev/raw/raw4
/bin/chown oracle:oinstall /dev/raw/raw5
/bin/chmod 0644 /dev/raw/raw3
/bin/chmod 0644 /dev/raw/raw4
/bin/chmod 0644 /dev/raw/raw5

From all Oracle RAC nodes in the cluster, make the new shell script executable:

[root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
[root@racnode2 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh

Manually execute the new shell script from all Oracle RAC nodes in the cluster to bind the voting disks to RAW devices:

[root@racnode1 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1:  bound to major 8, minor 97
/dev/raw/raw2:  bound to major 8, minor 17
/dev/raw/raw3:  bound to major 8, minor 1
/dev/raw/raw4:  bound to major 8, minor 49
/dev/raw/raw5:  bound to major 8, minor 33

[root@racnode2 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1:  bound to major 8, minor 65
/dev/raw/raw2:  bound to major 8, minor 49
/dev/raw/raw3:  bound to major 8, minor 33
/dev/raw/raw4:  bound to major 8, minor 1
/dev/raw/raw5:  bound to major 8, minor 17

Check that the character (RAW) devices were created from all Oracle RAC nodes in the cluster:

[root@racnode1 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode2 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode1 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 97
/dev/raw/raw2:  bound to major 8, minor 17
/dev/raw/raw3:  bound to major 8, minor 1
/dev/raw/raw4:  bound to major 8, minor 49
/dev/raw/raw5:  bound to major 8, minor 33

[root@racnode2 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 65
/dev/raw/raw2:  bound to major 8, minor 49
/dev/raw/raw3:  bound to major 8, minor 33
/dev/raw/raw4:  bound to major 8, minor 1
/dev/raw/raw5:  bound to major 8, minor 17

Include the new shell script in /etc/rc.local on all Oracle RAC nodes in the cluster so that it runs on each boot:

[root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
[root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
  • Step 6: Once the raw devices are created, use the dd command to zero out each device and ensure that no stale data remains on it. Only perform this action from one of the Oracle RAC nodes in the cluster:
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
dd: writing to '/dev/raw/raw1': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
dd: writing to '/dev/raw/raw2': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
dd: writing to '/dev/raw/raw3': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
dd: writing to '/dev/raw/raw4': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
dd: writing to '/dev/raw/raw5': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s
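The per-target logins in Step 2 and the per-device zeroing in Step 6 are repetitive enough to script. A dry-run sketch that only prints the commands (remove the echo and run as root to execute them); note that bs=1M is an addition of mine: dd then moves 1 MB blocks instead of the 512-byte default, which finishes the zeroing far faster than the ~700 kB/s seen above:

```shell
#!/bin/sh
# Dry run: print the iscsiadm logins (Step 2) and the dd zeroing
# commands (Step 6) as loops instead of ten hand-typed commands.
print_prep_cmds() {
    portal=192.168.2.195
    for vol in ocr1 ocr2 voting1 voting2 voting3; do
        echo "iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.$vol -p $portal -l"
    done
    for n in 1 2 3 4 5; do
        # bs=1M (not in the original commands) speeds up the zeroing
        echo "dd if=/dev/zero of=/dev/raw/raw$n bs=1M"
    done
}
print_prep_cmds
```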

Administering the OCR File

View OCR Configuration Information

Two methods exist to verify how many OCR files are configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile  <-- OCR (primary)
                                    Device/File integrity check succeeded

                                    Device/File not configured  <-- OCR Mirror (not configured)

         Cluster registry integrity check succeeded

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
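Since ocr.loc is a plain key=value file, its entries can be pulled out with standard text tools when scripting checks. Illustrative only: the snippet writes a copy of the file shown above to a temporary location and extracts the configured locations with awk:

```shell
#!/bin/sh
# Parse ocrconfig_loc (and ocrmirrorconfig_loc, if present) out of an
# ocr.loc-style file. Contents below are copied from the output above.
OCRLOC=$(mktemp)
cat > "$OCRLOC" <<'EOF'
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE
EOF

primary=$(awk -F= '$1 == "ocrconfig_loc" { print $2 }' "$OCRLOC")
mirror=$(awk -F= '$1 == "ocrmirrorconfig_loc" { print $2 }' "$OCRLOC")
echo "primary OCR: $primary"
echo "mirror OCR : ${mirror:-<not configured>}"
rm -f "$OCRLOC"
```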

To view the actual contents of the OCR in a human-readable format, run the ocrdump command. This command requires the CRS stack to be running. Running the ocrdump command will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE:

[root@racnode1 ~]# ocrdump
[root@racnode1 ~]# ls -l OCRDUMPFILE
-rw-r--r-- 1 root root 250304 Oct  2 22:46 OCRDUMPFILE

The ocrdump utility also allows for different output options:

#
# Write OCR contents to specified file name.
#
[root@racnode1 ~]# ocrdump /tmp/`hostname`_ocrdump_`date +%m%d%y:%H%M`


#
# Print OCR contents to the screen.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css


#
# Write OCR contents out to XML format.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml
 

Add an OCR File

Starting with Oracle Clusterware 10g Release 2 (10.2), users now have the ability to multiplex (mirror) the OCR. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. To avoid simultaneous loss of multiple OCR files, each copy of the OCR should be placed on a shared storage device that does not share any components (controller, interconnect, and so on) with the storage devices used for the other OCR file.

Before attempting to add a mirrored OCR, determine how many OCR files are currently configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile  <-- OCR (primary)
                                    Device/File integrity check succeeded

                                    Device/File not configured  <-- OCR Mirror (not configured yet)

         Cluster registry integrity check succeeded

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE

The results above indicate that I have only one OCR file and that it is located on an OCFS2 file system. Since we are allowed a maximum of two OCR locations, I intend to create an OCR mirror and locate it on the same OCFS2 file system in the same directory as the primary OCR. Please note that I am doing this only for the sake of brevity; the OCR mirror should always be placed on a separate device from the primary OCR file to guard against a single point of failure.

Note that the Oracle Clusterware stack can remain online and running on all nodes in the cluster while adding, replacing, or removing an OCR location, so these operations do not require any system downtime.

Note:  The operations performed in this section affect the OCR for the entire cluster. However, the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or on which Oracle Clusterware is not running, so you should avoid shutting down nodes while modifying the OCR with the ocrconfig command. If, for any reason, any of the nodes in the cluster are shut down while the OCR is being modified, you will need to perform a repair on the stopped node before it can be brought online to join the cluster. Please see the section "Repair an OCR File on a Local Node" for instructions on repairing the OCR file on the affected node.

You can add an OCR mirror after an upgrade or after completing the Oracle Clusterware installation. The Oracle Universal Installer (OUI) allows you to configure either one or two OCR locations during the installation of Oracle Clusterware. If you already mirror the OCR, then you do not need to add a new OCR location; Oracle Clusterware automatically manages two OCRs when you configure normal redundancy for the OCR. As previously mentioned, Oracle RAC environments do not support more than two OCR locations; a primary OCR and a secondary (mirrored) OCR.

Run the following command to add or relocate an OCR mirror using either destination_file or disk to designate the target location of the additional OCR:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

You must be logged in as the root user to run the ocrconfig command. 

Please note that ocrconfig -replace is the only supported way to add or relocate OCR files and mirrors. Attempting to copy the existing OCR file to a new location and then manually updating the file pointer in ocr.loc is not supported and will simply fail to work.

For example:

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Configure the shared OCR destination_file/disk before 
# attempting to create the new ocrmirror on it. This example 
# creates a destination_file on an OCFS2 file system. 
# Failure to pre-configure the new destination_file/disk 
# before attempting to run ocrconfig will result in the 
# following error:
# 
#     PROT-21: Invalid parameter
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror
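The pre-creation steps above can be wrapped in a small helper so the PROT-21 trap is harder to hit when adding future OCR destinations. A dry-run sketch of my own (not an Oracle-supplied tool); it prints the commands, and removing the echo would execute them as root:

```shell
#!/bin/sh
# Sketch: print the commands that pre-create an OCR destination file
# so that ocrconfig -replace will accept it (avoiding PROT-21).
prep_ocr_dest() {
    f=$1
    echo "cp /dev/null $f"
    echo "chown root:oinstall $f"   # same effect as chown root + chgrp oinstall
    echo "chmod 640 $f"
}
prep_ocr_dest /u02/oradata/racdb/OCRFile_mirror
```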

#
# Add new OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror

After adding the new OCR mirror, check that it can be seen from all nodes in the cluster:

#
# Verify new OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror  <-- New OCR Mirror
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded


[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file  getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror


#
# Verify new OCR mirror from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror  <-- New OCR Mirror
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded


[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file  getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

As mentioned earlier, you can have at most two OCR files in the cluster; the primary OCR and a single OCR mirror. Attempting to add an extra mirror will actually relocate the current OCR mirror to the new location specified in the command:

[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror2  <-- Mirror was Relocated!
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

Relocate an OCR File

Just as we were able to add a new ocrmirror while the CRS stack was online, the same holds true when relocating an OCR file or OCR mirror; the operation therefore requires no system downtime.

You can relocate the OCR only when it is mirrored; a mirror copy of the OCR file is required to move the OCR online. If there is no mirror copy of the OCR, first create one using the instructions in the previous section.

Attempting to relocate the OCR when no OCR mirror exists will produce the following error:

ocrconfig -replace ocr /u02/oradata/racdb/OCRFile
PROT-16: Internal Error

If the OCR mirror is not required in the cluster after relocating the OCR, it can be safely removed.

Run the following command as the root account to relocate the current OCR file to a new location using either destination_file or disk to designate the new target location for the OCR:

ocrconfig -replace ocr <destination_file>
ocrconfig -replace ocr <disk>

Run the following command as the root account to relocate the current OCR mirror to a new location using either destination_file or disk to designate the new target location for the OCR mirror:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

The following example assumes the OCR is mirrored and demonstrates how to relocate the current OCR file (/u02/oradata/racdb/OCRFile) from the OCFS2 file system to a new raw device (/dev/raw/raw1):

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify current OCR configuration.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile  <-- Current OCR to Relocate
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

#
# Verify new raw storage device exists, is configured with 
# the correct permissions, and can be seen from all nodes 
# in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

[root@racnode2 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

#
# Clear out the contents from the new raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1

#
# Relocate primary OCR file to new raw device. Note that
# there is no deletion of the old OCR file but simply a
# replacement.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1

After relocating the OCR file, check that the change can be seen from all nodes in the cluster:

#
# Verify new OCR file from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1  <-- Relocated OCR
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror


#
# Verify new OCR file from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /dev/raw/raw1  <-- Relocated OCR
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

After verifying the relocation was successful, remove the old OCR file at the OS level:

[root@racnode1 ~]# rm -v /u02/oradata/racdb/OCRFile
removed '/u02/oradata/racdb/OCRFile'
 

In part two of this series, we will continue on with our exploration of the administration of the OCR file, with methods to repair and remove it, and then on to backup and recovery of the OCR file.

 



 

 
 
 

 

 
   

 Copyright © 1996-2016 by Burleson. All rights reserved.


Oracle® is the registered trademark of Oracle Corporation. SQL Server® is the registered trademark of Microsoft Corporation. 
Many of the designations used by computer vendors to distinguish their products are claimed as Trademarks
 

 
