
Converting a Standalone Database to RAC Using the rconfig Utility

RCONFIG UTILITY
—————–

> Oracle introduced the rconfig utility in Oracle Database 10g.

> The prerequisite for rconfig is that the database storage area must be located on either
a cluster file system (CFS) or ASM.

———————————————————————-

#su - oracle
rac1>$crsctl check crs
$ps -ef|grep smon
$srvctl stop database -d hrms
(stopping the existing database is optional; it can remain up and running while the new
database is created)

>>>Creating a standalone database

#xhost +
#su - oracle
$dbca (invoke the Database Configuration Assistant)
.Oracle single instance database
.Next
.Create a database
.Next
.General Purpose
.Global database name: prod
next
next
.Use the same password for all accounts
.Confirm password
next
.Select ASM
next
.Password: racdba
.Select one disk group
next
.Use common location for all database files
next
.Click on Browse
.Select ASM_DG_FRA
.Click on OK
.Enable archiving
.Edit archive mode parameters
.Remove the entries
.Click on OK
next>next>next>next>Finish>OK

Note: To find a file location when the instance is not running:

$find /u01 -name "alert_*.log"

rac1># su - oracle
$ ps -ef|grep smon
$ sqlplus / as sysdba

SQL> select name,open_mode,log_mode from v$database;
SQL> show parameter cluster
SQL> select name from v$controlfile;
SQL> select name from v$datafile;
SQL> select member from v$logfile;

# su - oracle
$cd $ORACLE_HOME/assistants
$ls
$cd rconfig
$ls
$cd sampleXMLs/
$ls
$cp ConvertToRAC.xml ~
$cd
$ls
$vi ConvertToRAC.xml

Specify the current Oracle home of the non-RAC database for SourceDBHome:

<n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_home</n:SourceDBHome>

Specify the Oracle home where the RAC database should be configured:

<n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_home</n:TargetDBHome>

Specify the SID of the non-RAC database and its credentials:

<n:SourceDBInfo SID="prod">
<n:Credentials>
<n:User>sys</n:User>
<n:Password>racdba</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:SourceDBInfo>

Note: the ASMInfo element is required only if the current non-RAC database uses ASM storage.

<n:ASMInfo SID="+ASM1">
<n:Credentials>
<n:User>sys</n:User>
<n:Password>racdba</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:ASMInfo>

Specify the prefix for the RAC instances and the list of nodes:

<n:NodeList>
<n:Node name="rac1"/>
<n:Node name="rac2"/>
</n:NodeList>
<n:InstancePrefix>prod</n:InstancePrefix>

The RAC database must be on shared storage:

<n:SharedStorage type="ASM">

Specify the database area location to be configured:

<n:TargetDatabaseArea>+ASM_DG_DATA</n:TargetDatabaseArea>

Specify the flash recovery area:

<n:TargetFlashRecoveryArea>+ASM_DG_FRA</n:TargetFlashRecoveryArea>

</n:SharedStorage>

$ rconfig ConvertToRAC.xml
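
Before the real conversion, rconfig can be asked to only verify the prerequisites: the
Convert element in the sample XML has a verify attribute that accepts YES, NO, or ONLY
(a sketch; ONLY checks the environment without performing the conversion):

<n:Convert verify="ONLY">
$ rconfig ConvertToRAC.xml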
—————————————————————————————————

Troubleshooting Oracle Clusterware and collecting diagnostic information from CRS_HOME and DB_HOME using diagcollection.pl in RAC

>The default location of the clusterware alert log is $ORA_CRS_HOME/log/<hostname>:
$cd $ORA_CRS_HOME/log
$ls
$cd lnx01
$ls
$tail -50 alertlnx01.log | more

—————————————————————–
Collecting diagnostic information from oracle home
—————————————————————–
>diagcollection.pl must be run as the root user:
lnx01]# export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_home
#cd $ORA_CRS_HOME/bin
bin]# ./diagcollection.pl --collect --crs $ORA_CRS_HOME
#ls *.gz
#mv *.gz $HOME
#cd
#ls
#gunzip ocrData_lnx01.tar.gz
#tar -xvf ocrData_lnx01.tar

———————————————————
Collecting diagnostic information from ORACLE_HOME
———————————————————————-

lnx01]#export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_home
#cd $ORA_CRS_HOME/bin
#./diagcollection.pl --collect --oh $ORACLE_HOME
#ls *.gz
#mv *.gz $HOME
#cd
#ls
#gunzip oraData_lnx01.tar.gz
#ls
#tar -xvf oraData_lnx01.tar
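
The collected archives can be cleaned up afterwards; diagcollection.pl also has a --clean
option (a sketch, run from the directory where the collection was taken):

#cd $ORA_CRS_HOME/bin
#./diagcollection.pl --clean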

SCAN IP and SCAN Name (Single Client Access Name) in 11gR2 RAC
——————————————————————————–
>In 10g and 11gR1, during node additions and deletions we had to modify the client service
entries manually. To overcome this problem, in 11gR2 Oracle introduced the SCAN name and SCAN IPs.
>Irrespective of the number of nodes, Oracle recommends having 3 SCAN IPs with a single SCAN name,
which is resolved to any of the SCAN IPs in a round-robin manner.

Note: the subnet mask of the public IP, virtual IP, and SCAN IP should be the same.

>During Grid Infrastructure installation, for every SCAN IP Oracle creates one SCAN VIP and one SCAN
listener.

>Each SCAN VIP and its SCAN listener form a pair.

>SCAN IPs can be placed either in /etc/hosts or in a DNS server (a verification sketch follows).
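
A quick way to check how the SCAN name resolves (a sketch, assuming the SCAN is registered
in DNS as cluster-scan.oracle.com; the two srvctl commands are 11gR2 syntax):

$nslookup cluster-scan.oracle.com (run it repeatedly; DNS hands out the three SCAN IPs round-robin)
$srvctl config scan (lists the SCAN VIPs known to the cluster)
$srvctl config scan_listener (lists the SCAN listeners)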

———————————————————
Service entries in 11gR2
———————————————————
TEST=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=cluster-scan.oracle.com)(PORT=1521))
)
(LOAD_BALANCE=yes)
(CONNECT_DATA=
(SERVICE_NAME=TEST)
)
)

———————————————————-
/etc/hosts
———————————————————-
192.16.128.1 lnx01 lnx01.oracle.com
192.16.128.2 lnx02 lnx02.oracle.com

192.16.128.10 lnx01-priv
192.16.128.11 lnx02-priv

192.16.128.50 lnx01-vip
192.16.128.51 lnx02-vip

192.16.128.100 cluster-scan
192.16.128.101 cluster-scan
192.16.128.102 cluster-scan

————————————————————
DNS Entries
————————————————————
@ IN SOA dns.oracle.com. root.oracle.com. (
; serial, refresh, retry, expire, and minimum TTL values go here
)
@ IN NS dns.oracle.com.
localhost IN A 127.0.0.1
dns IN A 192.16.128.200
lnx01 IN A 192.16.128.1
lnx02 IN A 192.16.128.2
lnx01-vip IN A 192.16.128.50
lnx02-vip IN A 192.16.128.51
cluster-scan IN A 192.16.128.100
cluster-scan IN A 192.16.128.101
cluster-scan IN A 192.16.128.102

———————————————————–
On the RAC Nodes
———————————————————–
#vi /etc/resolv.conf
nameserver 192.16.128.200

RAC Backup and Recovery
—————————-
1.RMAN
———–
>Full backup
>Incremental/Differential Backup
>Compressed Backup

Configuration Modes:

>No catalog
>catalog

————————–
2.Physical Backup
————————–
>Cold Backup/offline backup/consistent backup
>Hot Backup/online backup/inconsistent backup

—————————-
3.Logical Backup
—————————-
Traditional logical utilities, i.e.:
>exp
>imp

and

Datapump Utilities
>expdp
>impdp

Note: If the database storage area location is ASM, then only logical and RMAN backups
are possible.
>If the database storage area location is CFS, all of the above backups are possible.
>In a RAC system, see that the RMAN channels are equally distributed among all the instances.
>In some environments an instance is totally dedicated to RMAN backups.

Example: OCR backup
$ocrconfig -export <file_name> (logical backup, 10g and 11g)
#ocrconfig -manualbackup (on-demand physical backup, available from 11g)
Note: online backup of the OCR is possible in RAC.
Voting disk backup:
$dd command (a sketch follows)
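
A minimal sketch of a voting disk backup with dd, assuming the voting disk sits on the
raw device /dev/raw/raw2 (hypothetical path; query the real location first):

$crsctl query css votedisk
#dd if=/dev/raw/raw2 of=/u01/backup/votedisk.bak bs=4k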

PATCHING AND UPGRADING 10.2.0.1 RAC Environment to 10.2.0.4
—————————————————————————————-

>A patch is a bug fix.

>A collection of bug fixes is called a patch set.

>The different types of patches released by Oracle are:
1.Interim patches / one-off patches
2.Patch sets
3.Critical patch updates (CPUs)
4.Patch set updates (PSUs)
5.CRS bundle patches

>All of the above patches can be installed using the OPatch utility, except patch sets.

>Patch sets are installed by invoking runInstaller.

>CRS bundle patches are patches that fix bugs in the clusterware.

>The clusterware can be patched in two ways:
1.Rolling upgrade
2.Non-rolling upgrade

>In case of a rolling upgrade, we bring down all the services on the node on which we
wish to install the patch set. This is a node-by-node activity.

>In case of a non-rolling upgrade, we bring down the entire cluster and install the patch set.

>To know the list of patches installed in the CRS_HOME:
$ opatch lsinventory -detail -oh $ORA_CRS_HOME

>To know the list of patches installed in the ORACLE_HOME:
$ opatch lsinventory -detail -oh $ORACLE_HOME

————————————————————————–

Pre-patch Considerations and Recommendations
——————————————————————————
>Take a backup of the Oracle inventory
>Take a backup of the clusterware binaries and the Oracle binaries (a sketch follows the
oraInst.loc commands below)
>Take a backup of the Oracle database

Linux:
$cat /etc/oraInst.loc
Others:
$cat /var/opt/oracle/oraInst.loc
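
A minimal sketch of the inventory and binary backups, assuming the inventory and homes
used throughout these notes (take the actual inventory path from oraInst.loc):

#tar -cvf /backup/oraInventory.tar /u01/app/oracle/oraInventory
#tar -cvf /backup/crs_home.tar /u01/app/oracle/product/10.2.0/crs_home
#tar -cvf /backup/db_home.tar /u01/app/oracle/product/10.2.0/db_home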

lnx01]# cd /opt
opt]# unzip p63………x86.zip
#ls
#cp Readme.html /root/Desktop
Open the readme in a browser (Mozilla/Firefox):
File > Open File > Desktop > Readme.html

$su - oracle
$export ORACLE_SID=hrms1
$emctl stop dbconsole
$ssh lnx02
$emctl stop dbconsole
$exit
lnx01]$isqlplusctl stop (to stop iSQL*Plus)
$srvctl stop service -d hrms
$srvctl stop database -d hrms
$ps -ef
$srvctl stop asm -n lnx01
$srvctl stop asm -n lnx02
$srvctl stop nodeapps -n lnx01
$srvctl stop nodeapps -n lnx02
$exit
lnx01]#/etc/init.d/init.crs stop (to stop the cluster on the first node)
#ssh lnx02 /etc/init.d/init.crs stop (to stop the cluster on the second node)
$crs_stat -t
$crsctl query crs softwareversion
$crsctl query crs activeversion

lnx01]#xhost +
#su - oracle
$cd /opt/Disk1
disk1]$ls
$./runInstaller
change the path
Name: OraCrs10g_home
Run the script as the root user:
lnx01]#/u01/app/oracle/product/10.2.0/crs_home/bin/crsctl stop crs
lnx01]#/u01/app/oracle/product/10.2.0/crs_home/install/root102.sh
lnx01]# su - oracle
$crsctl check crs
$ps -ef|grep smon
$crsctl query crs softwareversion
$crsctl query crs activeversion
lnx01]ssh lnx02 /u01/app/oracle/product/10.2.0/crs_home/bin/crsctl stop crs
lnx01]ssh lnx02 /u01/app/oracle/product/10.2.0/crs_home/install/root102.sh
Note: Upgrade Completed Successfully
exit>yes
$sqlplus -v (10.2.0.1)
Now we will install the patch set on the Oracle Home.
Note: At the time of installing the patch set on the ORACLE_HOME, the cluster must be up and running.

lnx01]$ export ORACLE_SID=hrms1
$emctl stop dbconsole
$isqlplusctl stop
$srvctl stop service -d hrms
$srvctl stop database -d hrms
$srvctl stop asm -n lnx01
$srvctl stop asm -n lnx02
$srvctl stop listener -n lnx01
$srvctl stop listener -n lnx02
$cd /opt/Disk1/
$./runInstaller
Next > Next > (until you get the Install button)
Install
(execute script on both the nodes)
exit > ok > yes
# exit

Now we will upgrade the database. First:

lnx01]$srvctl start listener -n lnx01
lnx02]$srvctl start listener -n lnx02
lnx01]$srvctl start asm -n lnx01
lnx01]$srvctl start asm -n lnx02
$export ORACLE_SID=hrms1
$sqlplus / as sysdba
SQL>startup nomount
SQL>alter system set cluster_database=false scope=spfile;
SQL>shut immediate
SQL>startup upgrade
SQL>@$ORACLE_HOME/rdbms/admin/catupgrd.sql (script to upgrade the database)
After the completion of the upgrade, do the following:
SQL>shut immediate
SQL>startup
SQL>select count(*) from dba_objects where status='INVALID';
SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL>select comp_name,version,status from dba_registry;
SQL>alter system set cluster_database=true scope=spfile;
SQL>exit
$srvctl start database -d hrms
$srvctl start service -d hrms
$emca -upgrade db -cluster
(this will upgrade the Enterprise Manager console)

Creating Stored Scripts
——————————

RMAN>list script names; (to see existing script names)

RMAN>create script bkp
{backup datafile 4;} (local script method)

RMAN>create global script bkp1
{backup database;}

RMAN>list script names;

RMAN>print script bkp;

RMAN>print script bkp1; (to see script contents)

RMAN>run {execute script bkp;}
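
Stored scripts live in the recovery catalog, so RMAN must be connected to a catalog for
the commands above to work (a sketch; the catalog owner rman/rman@catdb is hypothetical):

$rman target / catalog rman/rman@catdb
RMAN>create script bkp {backup datafile 4;}
RMAN>run {execute script bkp;}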

——————————————————
Taking Incremental Backup
——————————————————

RMAN>backup incremental level 0 database;

RMAN>backup incremental level 1 database; (differential: copies changes since the most recent
level 0 or level 1 backup)

RMAN>backup incremental level 2 database;

RMAN>backup incremental level 1 cumulative database; (cumulative: copies all changes since the
last level 0 backup)

Adding a node to the existing RAC environment

STEPS: Configure the hardware and operating system on the new node, then:

1>Propagate the clusterware to the new node by executing addNode.sh from $ORA_CRS_HOME/oui/bin
(a sketch of this step follows the list)

2>Reconfigure virtual IPs by invoking vipca

3>Propagate the Oracle binaries to the new node by executing
addNode.sh from $ORACLE_HOME/oui/bin

4>Reconfigure the listener by invoking netca

5>Add an instance by invoking dbca

dbca—->Instance Management—->Add instance
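
A sketch of step 1, assuming the new node is rac3 with private name rac3-priv and VIP
rac3-vip (hypothetical names; addNode.sh can also be run interactively without arguments):

$cd $ORA_CRS_HOME/oui/bin
$./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" \
"CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}" \
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"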

11gR2 RAC New Features

>ASM has been integrated with the clusterware binaries, i.e. the Grid Infrastructure home.
>Oracle has re-architected the Grid Infrastructure into two stacks:
1.Oracle High Availability Services stack
2.Cluster Ready Services stack
>We cannot place the OCR and voting files on raw partitions (we can place them in ASM disk groups).
>CTSS has been introduced to synchronize date and time across the cluster nodes.
>Oracle has introduced the SCAN name and SCAN IPs. The SCAN name and SCAN IPs can be placed
either in /etc/hosts or in a DNS server.

NOTE: If the SCAN IPs are placed in /etc/hosts, only one SCAN IP will be enabled.
If placed in DNS, all three will be enabled.

>SSH configuration is automated.
>We can start and stop the cluster on all nodes with a single command (see the sketch below).
>Oracle has introduced the SCAN listener; for every SCAN IP it creates one SCAN VIP
and one SCAN listener.
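
A short sketch of the single-command start/stop (11gR2 crsctl syntax, run as root from
the Grid Infrastructure home):

#crsctl stop cluster -all (stops the CRS stack on all nodes)
#crsctl start cluster -all (starts the CRS stack on all nodes)
#crsctl check cluster -all (verifies the stack on every node)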