The following is an overview of Oracle RAC technology. Oracle RAC turns a single-server database, with one physical and logical instance, into a multi-server database system that still presents a single logical database. It is an active-active cluster solution from Oracle that allows a database to be served by two or more servers while sharing a single disk location. The Clusterware binds all the nodes together and keeps all operations in check using a heartbeat that runs across the network and across the shared storage. Oracle RAC also uses Cache Fusion to manage data blocks in memory efficiently.
The connection between these nodes is usually a physical fibre cable that carries a constant stream of high-speed UDP packets across a private network connection. It does this to synchronise block state information (the state of data blocks in memory) and other information relevant to the RDBMS's internal coordination. This connection is usually referred to as the private interconnect. In a VM environment, the interconnect is virtual and is susceptible to the same resource constraints as any other virtualised hardware. The cluster interconnect is a vital component of Cache Fusion.
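To see which network interfaces the clusterware has registered as public and as the private interconnect, the oifcfg utility can be queried (a quick check, assuming it is run from the grid home on a cluster node):
$GRID_HOME/bin/oifcfg getif
Each interface is listed with its subnet and whether it is flagged as public or cluster_interconnect.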
RAC's benefits include scalable systems, distributed load and better resilience in the event of hardware failure. The same cannot be said with certainty in a VM environment if the nodes are housed on one physical server. RAC's drawbacks include new complexities in backup and restoration, patching and upgrading, a multitude of potential fault points, the network becoming a factor in system throughput, and a new stack of interdependent components that the administrator must understand.
Oracle RAC is a “living” system in that it is usually self-correcting, self-analysing and self-managing as far as its own well-being is concerned. I say “usually” because it can fail from time to time for reasons outside its own control. The reasons I've discovered include resource starvation, at which point RAC is no longer able to carry out its own duties and requires a DBA to intervene. There are other issues that occur which I’ve yet to encounter. For a comprehensive list of frequently asked RAC questions, see Oracle Metalink Note 220970.1.
Below are some diagrams giving a general view of a cluster system: they are interesting and do provide contextual information; however, an administrator is not required to know them to carry out basic RAC operations.
Overview of cluster architecture
(Image Source:Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E41959-03)
Oracle RAC processes and starting order
(Image Source: Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E41959-03)
1. The cluster begins by starting OHASD
2. The local server stack is started
3. The Cluster Ready Services stack is started
For a more detailed startup list – see note 1053147.1
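A quick way to confirm that this stack has come up (a minimal check, assuming the commands are run as a privileged user on the node in question):
$GRID_HOME/bin/crsctl check has
$GRID_HOME/bin/crsctl check crs
The first command confirms that OHASD itself is healthy; the second checks the Cluster Ready Services stack (CRS, CSS and EVM).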
Healthy Cluster Resources state on an active 2-node cluster
Cluster Resources
ora.DATA.dg / Oracle Diskgroup Resource – This is the Oracle disk group resource, which serves as a proxy for the disk group. This disk group is the gateway to the raw partitions used by the Oracle ASM instance. This is what the cluster's database instances engage with, through ASM, when transacting with the shared storage.
$GRID_HOME/bin/srvctl status diskgroup -g diskgroupname -a
SQL> l
  1* select name, mount_status, header_status, path, total_mb from v$asm_disk
SQL> /

NAME          MOUNT_S HEADER_STATU PATH                   TOTAL_MB
------------- ------- ------------ --------------------- ----------
              CLOSED  PROVISIONED  \\.\ORCLDISKOCR1               0
DATA_0000     CACHED  MEMBER       \\.\ORCLDISKDATA1         614400
H_DRIVE_0000  CACHED  MEMBER       \\.\ORCLDISKHDRIVE0       358400
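The query above lists individual ASM disks; to see capacity at the disk group level, a similar query can be run from the same SQL*Plus session against the standard ASM dynamic view v$asm_diskgroup (a small sketch):
select name, state, total_mb, free_mb from v$asm_diskgroup;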
ora.H_DRIVE.dg – see ora.DATA.dg. H_DRIVE is the disk that is accessible in Windows; it holds the scripts and backups of the database. It is visible to the administrator through "My Computer" in a Windows environment.
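Its state can be checked in the same way as the DATA disk group, for example:
$GRID_HOME/bin/srvctl status diskgroup -g H_DRIVE -a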
ora.LISTENER.lsnr / Oracle Listener – This resource presents the listener on each node to the cluster for access.
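The listener resources across the cluster can be checked with:
$GRID_HOME/bin/srvctl status listener
This reports which nodes each listener is currently running on.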
ora.asm / Oracle ASM Instance – This is the Oracle ASM instance which underpins the database instances (instances 1 and 2).
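The ASM instances themselves can be checked with (a minimal check):
$GRID_HOME/bin/srvctl status asm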
ora.net1.network / Oracle Network Resource – This resource monitors the public network interface. All the SCAN and VIP related resources are dependent on this component to function.
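Its current state can be inspected directly with crsctl (an example; the resource is named ora.net1.network here because only one network is defined):
$GRID_HOME/bin/crsctl stat res ora.net1.network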
ora.ons / Oracle Notification Service – This resource manages ONS, which publishes cluster state change events (such as node and instance up/down notifications) to registered clients.
(Taken from http://ranjeetwipro.blogspot.co.za/2011/01/oracle-notification-services.html)
The config file for ONS can be found here:
E:\OracleGrid\11.2.0.3\opmn\conf\ons.config
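ONS runs as part of the node applications, together with the node VIP and network resources; their combined status can be checked with:
$GRID_HOME/bin/srvctl status nodeapps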
ora.registry.acfs / Oracle Automatic Storage Management Cluster File System – This resource manages the ACFS file system registration process, which Oracle uses to present and share the database files to all nodes in the Clusterware. If this resource is down or unavailable, the database is not available and the mount point can vanish from the Windows file system.
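Its registration state can be confirmed with (an example):
$GRID_HOME/bin/crsctl stat res ora.registry.acfs
Registered ACFS file systems can also be listed with the acfsutil registry command, assuming the ACFS drivers are installed on the node.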
ora.LISTENER_SCAN#.lsnr / Oracle SCAN listener resource – This resource is paired with a SCAN VIP resource. It listens for requests aimed at the SCAN address (the cluster name) and redirects clients to the listener on a node that has capacity. There are three of these listeners, listening on the addresses registered against the cluster SCAN name.
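The SCAN listeners can be checked across the cluster with:
$GRID_HOME/bin/srvctl status scan_listener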
ora.cvu – This resource monitors the health of the cluster at regular intervals by running cluvfy (the Cluster Verification Utility).
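Its state can be checked with (assuming Grid Infrastructure 11.2.0.2 or later, where srvctl supports the cvu object):
$GRID_HOME/bin/srvctl status cvu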
ora.database.db - This is the database resource that represents the database instance.
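The database resource and its instances can be checked with (substituting the database name, as described later in this document):
$GRID_HOME/bin/srvctl status database -d <DATABASENAME>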
ora.scan#.vip – This resource is the other half of the SCAN pairing: the virtual IP address on which the corresponding SCAN listener listens.
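The SCAN VIP configuration and current placement can be viewed with:
$GRID_HOME/bin/srvctl config scan
$GRID_HOME/bin/srvctl status scan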
ora.MACHINE.vip / Oracle VIP resource – This resource is a virtual IP for the node. This IP is used for failover and other RAC management processes. Virtual IPs provide a way to connect to the cluster without timeout issues.
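A node's VIP can be checked with (substituting the node name, for example node3db01 used further below):
$GRID_HOME/bin/srvctl status vip -n <nodename>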
Common RAC Tasks and Problems
Notes
crsctl is a clusterware control utility that allows an administrator to carry out administrative actions against individual clusterware components on one or all registered nodes in the cluster. In other words: crsctl has global reach.
srvctl is a clusterware control utility that allows an administrator to carry out administrative actions against individual clusterware components on the node of invocation. It can be used to manage local cluster resources without directly impacting other nodes.
As per the documentation, crsctl is generally not recommended for operations against resources prefixed with ora.* unless you are certain of the effect. It is better to use srvctl and perform operations on the node of interest.
(http://docs.oracle.com/database/121/CWADD/crsref.htm#CWADD91143)
crsctl is dependent on the OHASD daemon (Oracle High Availability Services daemon), so if OHASD is down, crsctl will not function.
Actions
Check the cluster status
crsctl stat res -t
ora.ons is OFFLINE – turn it back on
crsctl start resource ora.ons -n node3db01
(FORMAT: crsctl start resource <resourcename*> -n <nodename>)
*Resource name can be found using crsctl stat res -t
ora.LISTENER.lsnr is OFFLINE – turn it back on
srvctl start listener
Starting up the entire cluster
crsctl start cluster -all
Stopping the entire cluster
crsctl stop cluster -all
Find the diskgroup status
$GRID_HOME/bin/srvctl status diskgroup -g DATA -a
Disk Group DATA is running on nodedb02,nodedb01
Disk Group DATA is enabled
Stopping the local database instance
srvctl stop instance -d <DATABASENAME> -i <INSTANCENAME>
Starting the local database instance
srvctl start instance -d <DATABASENAME> -i <INSTANCENAME>
*Database name is the DBNAME found within the parameter file.
*Instance name is the name of the database instance rendered on a given node.
In a single-instance database, the instance and the database usually share the same name; in RAC they are different, with the instance name usually made up of the database name followed by the node number:
database: MYDB
instance on node 1: MYDB1
instance on node 2: MYDB2
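For example, to stop the instance on node 1 of this hypothetical MYDB database and then confirm what is still running:
srvctl stop instance -d MYDB -i MYDB1
srvctl status database -d MYDB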