Useful Standby database commands

To see if the MRP is running or not

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
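For illustration, on a standby where redo apply is running, the output typically includes an MRP0 row (a hypothetical example; your process list and statuses will vary):

PROCESS   STATUS
--------- ------------
ARCH      CONNECTED
ARCH      CONNECTED
RFS       IDLE
MRP0      APPLYING_LOG

If no MRP0 (or MRP) row is returned, managed recovery is not running.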

How to check if Active Data Guard is already enabled:

SELECT 'Using Active Data Guard' ADG FROM V$MANAGED_STANDBY M, V$DATABASE D WHERE M.PROCESS LIKE 'MRP%' AND D.OPEN_MODE='READ ONLY';

ADG
-----------------------
Using Active Data Guard                        (if enabled)

no rows selected                               (if not enabled)

To start Redo Apply, issue the following statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
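If standby redo logs are configured, you can optionally start redo apply in real-time apply mode instead, so redo is applied as it arrives rather than after each log switch:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;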

To stop recovery of a standby database:

SQL>   ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

On primary:

SQL> SELECT THREAD# "Thread", SEQUENCE# "Last Sequence Generated"
FROM V$ARCHIVED_LOG
WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#) ORDER BY 1;

On Standby:

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

Check Archive gap on Standby:

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

Check the max log sequence on Primary DB

SELECT Max(sequence#) FROM   v$log_history;

Check the last log applied on STANDBY

SELECT thread#, Max(sequence#) "Last Standby Seq Applied"
FROM v$archived_log WHERE applied = 'YES' GROUP BY thread# ORDER BY 1;

SQL> SELECT * FROM V$DATAGUARD_STATS WHERE NAME='transport lag';

SQL> SELECT * FROM V$DATAGUARD_STATS WHERE NAME='apply lag';
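Illustrative output (values are made up; VALUE is an interval of the form +DD HH:MI:SS):

NAME          VALUE          UNIT
------------- -------------- ------------------------------
apply lag     +00 00:00:07   day(2) to second(0) interval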

On primary (where dblink_stby is a database link on the primary that points to the standby database):

SQL> select scn_to_timestamp((select current_scn from v$database))-scn_to_timestamp((select current_scn from v$database@dblink_stby)) from dual;

The value returned from the query indicates the number of seconds that data on the standby lags behind the current position of the primary database.

Find Redo GAP: (on STANDBY)

SQL> select * from v$archive_gap;

THREAD#    LOW_SEQUENCE#    HIGH_SEQUENCE#
-------    -------------    --------------
      2              222               222
      3              341               342

Identify missing archive logs from above output:

SQL> select name from v$archived_log where thread# = 2 and dest_id = 2 and sequence# = 222;


Clone Database from Snapshot Standby Database

In 11g, Oracle introduced the Snapshot Standby Database feature for Data Guard. A common question is whether it is possible to create a database clone from such a Snapshot Standby, for example to preserve the changes made while the Physical Standby was temporarily opened as a READ-WRITE (test) database. The walkthrough below shows that this is indeed possible :-)

DGMGRL> convert database physt to snapshot standby;
Converting database "physt" to a Snapshot Standby database, please wait...
Database "physt" converted successfully

DGMGRL> show configuration

Configuration - myconf

 Protection Mode: MaxPerformance
 Databases:
 prima - Primary database
 physt - Snapshot standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
I continue to prepare a Clone Database by creating an spfile and a password file for it. Then I start it into NOMOUNT:
uhesse $ sqlplus sys/oracle@clon as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Dec 21 17:00:26 2010
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  313860096 bytes
Fixed Size                  1336232 bytes
Variable Size             209718360 bytes
Database Buffers           96468992 bytes
Redo Buffers                6336512 bytes
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
The version in use is 11gR2, but it should work with 11gR1 the same way. Now RMAN is used to create the Clone from the Snapshot Standby:
RMAN> connect target sys/oracle@physt
connected to target database: PRIMA (DBID=1970640575)

RMAN> connect auxiliary sys/oracle@clon
connected to auxiliary database: CLON (not mounted)

RMAN> duplicate target database to clon from active database;

Starting Duplicate Db at 21-DEC-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=96 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/21/2010 17:03:19
RMAN-05541: no archived logs found in target database

RMAN> exit
Oops! I need archivelogs on the Snapshot Standby. Note that they are in the same directory but have a different resetlogs identifier than the archivelogs created while the database was a Physical Standby Database.
uhesse $ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Dec 21 17:03:49 2010
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system switch logfile;
System altered.

SQL> select name from v$archived_log;

NAME
--------------------------------------------------------------------------------
/home/oracle/physt/archive/1_12_738347903.dbf
/home/oracle/physt/archive/1_8_738347903.dbf
/home/oracle/physt/archive/1_7_738347903.dbf
/home/oracle/physt/archive/1_9_738347903.dbf
/home/oracle/physt/archive/1_11_738347903.dbf
/home/oracle/physt/archive/1_10_738347903.dbf
/home/oracle/physt/archive/1_13_738347903.dbf
/home/oracle/physt/archive/1_14_738347903.dbf
/home/oracle/physt/archive/1_15_738347903.dbf
/home/oracle/physt/archive/1_1_738349098.dbf

10 rows selected.
Again we go with the duplicate:
RMAN> connect target sys/oracle@physt
connected to target database: PRIMA (DBID=1970640575)

RMAN> connect auxiliary sys/oracle@clon
connected to auxiliary database: CLON (not mounted)

RMAN> duplicate target database to clon from active database;

Starting Duplicate Db at 21-DEC-10
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=95 device type=DISK

contents of Memory Script:
{
 sql clone "alter system set  db_name =
 ''PRIMA'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
 sql clone "alter system set  db_unique_name =
 ''CLON'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
 shutdown clone immediate;
 startup clone force nomount
 backup as copy current controlfile auxiliary format  '/home/oracle/clon/control01.ctl';
 alter clone database mount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''PRIMA'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''CLON'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     313860096 bytes

Fixed Size                     1336232 bytes
Variable Size                209718360 bytes
Database Buffers              96468992 bytes
Redo Buffers                   6336512 bytes

Starting backup at 21-DEC-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_physt.f tag=TAG20101221T172429 RECID=6 STAMP=738350670
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 21-DEC-10

database mounted

contents of Memory Script:
{
 set newname for datafile  1 to
 "/home/oracle/clon/system01.dbf";
 set newname for datafile  2 to
 "/home/oracle/clon/sysaux01.dbf";
 set newname for datafile  3 to
 "/home/oracle/clon/undotbs01.dbf";
 set newname for datafile  4 to
 "/home/oracle/clon/users01.dbf";
 backup as copy reuse
 datafile  1 auxiliary format
 "/home/oracle/clon/system01.dbf"   datafile
 2 auxiliary format
 "/home/oracle/clon/sysaux01.dbf"   datafile
 3 auxiliary format
 "/home/oracle/clon/undotbs01.dbf"   datafile
 4 auxiliary format
 "/home/oracle/clon/users01.dbf"   ;
 sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 21-DEC-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/home/oracle/physt/system01.dbf
output file name=/home/oracle/clon/system01.dbf tag=TAG20101221T172436
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/home/oracle/physt/sysaux01.dbf
output file name=/home/oracle/clon/sysaux01.dbf tag=TAG20101221T172436
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/home/oracle/physt/undotbs01.dbf
output file name=/home/oracle/clon/undotbs01.dbf tag=TAG20101221T172436
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/home/oracle/physt/users01.dbf
output file name=/home/oracle/clon/users01.dbf tag=TAG20101221T172436
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 21-DEC-10

sql statement: alter system archive log current

contents of Memory Script:
{
 backup as copy reuse
 archivelog like  "/home/oracle/physt/archive/1_3_738349098.dbf" auxiliary format
 "/home/oracle/clon/archive/1_3_738349098.dbf"   ;
 catalog clone archivelog  "/home/oracle/clon/archive/1_3_738349098.dbf";
 switch clone datafile all;
}
executing Memory Script

Starting backup at 21-DEC-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=3 RECID=12 STAMP=738350751
output file name=/home/oracle/clon/archive/1_3_738349098.dbf RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:15
Finished backup at 21-DEC-10

cataloged archived log
archived log file name=/home/oracle/clon/archive/1_3_738349098.dbf RECID=12 STAMP=738350766

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=738350766 file name=/home/oracle/clon/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=738350767 file name=/home/oracle/clon/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=738350767 file name=/home/oracle/clon/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=9 STAMP=738350767 file name=/home/oracle/clon/users01.dbf

contents of Memory Script:
{
 set until scn  226204;
 recover
 clone database
 delete archivelog
 ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 21-DEC-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=95 device type=DISK

starting media recovery

archived log for thread 1 with sequence 3 is already on disk as file /home/oracle/clon/archive/1_3_738349098.dbf
archived log file name=/home/oracle/clon/archive/1_3_738349098.dbf thread=1 sequence=3
media recovery complete, elapsed time: 00:00:00
Finished recover at 21-DEC-10

contents of Memory Script:
{
 shutdown clone immediate;
 startup clone nomount;
 sql clone "alter system set  db_name =
 ''CLON'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
 sql clone "alter system reset  db_unique_name scope=spfile";
 shutdown clone immediate;
 startup clone nomount;
}
executing Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     313860096 bytes

Fixed Size                     1336232 bytes
Variable Size                209718360 bytes
Database Buffers              96468992 bytes
Redo Buffers                   6336512 bytes

sql statement: alter system set  db_name =  ''CLON'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     313860096 bytes

Fixed Size                     1336232 bytes
Variable Size                209718360 bytes
Database Buffers              96468992 bytes
Redo Buffers                   6336512 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLON" RESETLOGS ARCHIVELOG
 MAXLOGFILES     26
 MAXLOGMEMBERS      5
 MAXDATAFILES      100
 MAXINSTANCES     1
 MAXLOGHISTORY      292
 LOGFILE
 GROUP  1 ( '/home/oracle/clon/log_g1m1.rdo' ) SIZE 100 M  REUSE,
 GROUP  2 ( '/home/oracle/clon/log_g2m1.rdo' ) SIZE 100 M  REUSE
 DATAFILE
 '/home/oracle/clon/system01.dbf'
 CHARACTER SET WE8ISO8859P1

contents of Memory Script:
{
 set newname for tempfile  1 to
 "/home/oracle/clon/temp01.dbt";
 switch clone tempfile all;
 catalog clone datafilecopy  "/home/oracle/clon/sysaux01.dbf",
 "/home/oracle/clon/undotbs01.dbf",
 "/home/oracle/clon/users01.dbf";
 switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /home/oracle/clon/temp01.dbt in control file

cataloged datafile copy
datafile copy file name=/home/oracle/clon/sysaux01.dbf RECID=1 STAMP=738350787
cataloged datafile copy
datafile copy file name=/home/oracle/clon/undotbs01.dbf RECID=2 STAMP=738350787
cataloged datafile copy
datafile copy file name=/home/oracle/clon/users01.dbf RECID=3 STAMP=738350787

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=738350787 file name=/home/oracle/clon/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=738350787 file name=/home/oracle/clon/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=738350787 file name=/home/oracle/clon/users01.dbf

contents of Memory Script:
{
 Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 21-DEC-10
So this was pretty much the same as an ordinary clone from a production database, as expected. Now we turn the Snapshot Standby back into a Physical Standby, which discards all changes made there (Flashback Database is used internally). The clone database remains, now an independent database with a different name and DBID from the primary and the Snapshot Standby.
DGMGRL> convert database physt to physical standby;
Converting database "physt" to a Physical Standby database, please wait...
Operation requires shutdown of instance "physt" on database "physt"
Shutting down instance "physt"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "physt" on database "physt"
Starting instance "physt"...
ORACLE instance started.
Database mounted.
Continuing to convert database "physt" ...
Operation requires shutdown of instance "physt" on database "physt"
Shutting down instance "physt"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "physt" on database "physt"
Starting instance "physt"...
ORACLE instance started.
Database mounted.
Database "physt" converted successfully
DGMGRL> show configuration

Configuration - myconf

 Protection Mode: MaxPerformance
 Databases:
 prima - Primary database
 physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
SQL> connect sys/oracle@clon as sysdba
Connected.
SQL> select name,dbid,database_role from v$database;

NAME            DBID DATABASE_ROLE
--------- ---------- ----------------
CLON      2202079427 PRIMARY

SQL> connect sys/oracle@physt as sysdba
Connected.
SQL> select name,dbid,database_role from v$database;

NAME            DBID DATABASE_ROLE
--------- ---------- ----------------
PRIMA     1970640575 PHYSICAL STANDBY

SQL> connect sys/oracle@prima as sysdba
Connected.
SQL> select name,dbid,database_role from v$database;

NAME            DBID DATABASE_ROLE
--------- ---------- ----------------
PRIMA     1970640575 PRIMARY
Conclusion: With 11g, we can comfortably transform a Physical Standby into a fully changeable Snapshot Standby, clone that modified database into an autonomous database to keep the changes, and afterwards convert the Snapshot Standby back into a Physical Standby. During the whole operation, redo from the primary database is still received (but not applied) on the standby site.

Find the delay of standby from primary

Here is a quick way to find the current timestamp from the current SCN.

http://www.chandu208.blogspot.com/2012/03/finding-current-scn-of-database.html

On primary:

SQL> select to_char(CURRENT_SCN) from v$database;

TO_CHAR(CURRENT_SCN)
--------------------
12345678909

SQL> select scn_to_timestamp(12345678909) from dual;

SCN_TO_TIMESTAMP(12345678909)
-----------------------------
19-JUN-11 08.40.27.000000000 AM

SQL>

And do the same on standby database, then find the difference in time.

If the DELAY attribute is set on the log archive destination, account for it when comparing the timestamps; the value can be found in the parameter file (LOG_ARCHIVE_DEST_n).

Thanks !!

Data Guard Configuration / Setup

Primary d/b: prac1 and Standby d/b: standby

Step-1:- On the primary database prac1, enable force logging.

SQL> alter database force logging;

Step-2:- Create a password file for the primary d/b.

$ orapwd file=orapwprac1 password=oracle entries=5 force=y

Step-3:- Configure Standby redo-logs.

The standby redo logs should be at least as large as the online redo logs of the primary database (prac1), and it is recommended to create one more standby redo log group per thread than the number of online redo log groups.

Size of the log file can be obtained from

SQL> select bytes/1024/1024 from v$log;
Add standby logfile accordingly:-

>alter database add standby logfile group 4 ('/$ORACLE_HOME/prac1/redo04.log') size 50M;

>alter database add standby logfile group 5 ('/$ORACLE_HOME/prac1/redo05.log') size 50M;

>alter database add standby logfile group 6 ('/$ORACLE_HOME/prac1/redo06.log') size 50M;
To check standby redo logs:-

sql> select group#,status from v$standby_log;
See:  Usage, Benefits and Limitations of Standby Redo Logs (SRL) (Doc ID 219344.1)
Step-4:- Set Primary d/b Initialization parameters.

Edit the pfile of primary d/b prac1 i.e. initprac1.ora in $ORACLE_HOME/dbs

DB_NAME=prac1
DB_UNIQUE_NAME=prac1
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prac1,standby)'
CONTROL_FILES='/home/leo/oracle/product/10.2.0/db_1/prac1/control01.ctl','/home/leo/oracle/product/10.2.0/db_1/prac1/control02.ctl','/home/leo/oracle/product/10.2.0/db_1/prac1/control03.ctl'
LOG_ARCHIVE_DEST_1='LOCATION=/home/leo/oracle/product/10.2.0/db_1/prac1/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prac1'
LOG_ARCHIVE_DEST_2='SERVICE=standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=standby
FAL_CLIENT=prac1
DB_FILE_NAME_CONVERT='/home/leo/standby/','/home/leo/oracle/product/10.2.0/db_1/prac1/'
LOG_FILE_NAME_CONVERT='/home/leo/standby/','/home/leo/oracle/product/10.2.0/db_1/prac1/'
STANDBY_FILE_MANAGEMENT=AUTO

Step – 5:-  In prac1:

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

SQL> ALTER DATABASE ARCHIVELOG;

SQL> ALTER DATABASE OPEN;

Step-6:- Create a Backup Copy of the Primary Database Datafiles (if using RMAN active database duplication, no backup is needed)

Start up the database, then:

$ export ORACLE_SID=prac1

$ rman target / nocatalog

RMAN> backup database;

RMAN> shutdown immediate;

Alternatively, you can take a physical (cold) backup of the database by copying the datafiles to the standby server.

Step-7:- Create a Control File for the Standby Database

>startup mount;

> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/leo/control01.ctl';

> ALTER DATABASE OPEN;

Step-8:- Prepare an Initialization Parameter File for the Standby Database

Copy the primary database parameter file to the standby database.

In primary database:-

>startup pfile='/home/leo/oracle/product/10.2.0/db_1/dbs/initprac1.ora';

>create spfile from pfile;

>create pfile='/home/leo/oracle/product/10.2.0/db_1/dbs/initstandby.ora' from spfile;

Step-9:- Set initialization parameters on the physical standby database

DB_NAME=prac1
DB_UNIQUE_NAME=standby
LOG_ARCHIVE_CONFIG='DG_CONFIG=(prac1,standby)'
CONTROL_FILES='/home/leo/standby/control1.ctl'
DB_FILE_NAME_CONVERT='/home/leo/oracle/product/10.2.0/db_1/prac1/','/home/leo/standby/'
LOG_FILE_NAME_CONVERT='/home/leo/oracle/product/10.2.0/db_1/prac1/','/home/leo/standby/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/home/leo/standby/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=standby'
LOG_ARCHIVE_DEST_2='SERVICE=prac1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prac1'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=prac1
FAL_CLIENT=standby

Step-10:-  Copy Files from the Primary System to the Standby System

Physically copy datafiles from primary d/b to the location at standby

Prac1> cp -v *.dbf ~/standby

Step-11:- Create standby database password file

Dbs> orapwd file=orapwstandby password=oracle entries=5 force=y

Step-12:- Set up listeners for "prac1" and "standby", as well as tnsnames entries for "prac1" and "standby".

Step-13:- Start the physical standby database.

At standby database:

SQL> startup mount;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Step-14:- Test the Data Guard configuration.

In primary d/b

>alter system switch logfile;

>archive log list;

In standby db

>archive log list

The log sequence number should be the same for both databases

ex:

ON PRIMARY :

SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD#   MAX(SEQUENCE#)
-------   --------------
      1             6045

or

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
          6045

ON STANDBY:

SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD#   MAX(SEQUENCE#)
-------   --------------
      1             6045

or

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
          6045

Here, the maximum sequence# generated on the primary database is 6045 and the maximum sequence# applied on the standby database is also 6045, which means that the standby database is in sync with the primary database.

Ref:  Creating a Physical Standby Database (Doc ID 1475344.1)

Active Data Guard

Oracle Active Data Guard enhances the performance of production databases by offloading resource intensive operations to one or more standby databases, protecting interactive users and critical business tasks from the impact of long-running operations. Active Data Guard enables a physical standby database to be used for real-time reporting with minimal latency, and compared to traditional replication methods is simple to use, transparently supports all datatypes, and offers very high performance.

Oracle Active Data Guard enables read-only access to a physical standby database for queries, sorting, reporting, web-based access, etc., while continuously applying changes received from the production database.
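A minimal sketch of enabling real-time query on an existing physical standby (assumes the standby is mounted with redo apply running, and that the Active Data Guard option is licensed):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

With redo apply restarted while the database is open READ ONLY, V$DATABASE.OPEN_MODE shows READ ONLY WITH APPLY.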

Active Data Guard also enables the use of fast incremental backups when offloading backups to a standby database, and can provide additional benefits of high availability and disaster protection against planned or unplanned outages at the production site.
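For the fast incremental backup use case, block change tracking can be enabled on the physical standby (this itself requires the Active Data Guard option; the file path below is only an example):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/stby/bct.chg';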

Features

Physical Standby with Real-Time Query

Fast Incremental Backup on Physical Standby

Benefits

  • Increase performance: Offload unpredictable workloads to an up-to-date replica of the production database
  • Simplify operations: Eliminate the management complexity that accompanies traditional replication solutions
  • Eliminate compromise: The reporting replica is up to date and online at all times, which is not possible with traditional storage mirroring technology
  • Reduce cost: An Oracle Active Data Guard physical standby database can also provide disaster recovery and/or serve as a test database, with no additional storage or servers required
  • Enables the standby database to be used for real-time reporting
  • Offloads backup operations
  • Insulates critical operations from unexpected system impacts
  • Provides high availability and disaster protection

Data guard GAP Detection and Resolution

Overview:

An archive gap is a range of missing archived redo logs created whenever the standby system is unable to receive the next archived redo log generated by the primary  db.

For example, an archive gap occurs when the network becomes unavailable and automatic archiving from the primary database to the standby database stops. When the network is available again, automatic transmission of the redo data from the primary database to the failed standby database resumes.

Methods of Gap Resolution:

Data Guard provides two methods for gap resolution: Automatic and FAL (Fetch Archive Log). The automatic method requires no configuration, while FAL requires configuration via init.ora parameters.

Automatic Gap Resolution:

In both 9.0.1 and 9.2.0 Automatic Gap Resolution is implemented during log transport processing. As the LGWR or ARCH process begins to send redo over to the standby, the sequence number of the log being archived is compared to the last sequence received by the RFS process on the standby. If the RFS process detects that the archive log being received is greater than the last sequence received plus one, then the RFS will piggyback a request to the primary to send the missing archive logs. Since the standby destination requesting the gap resolution is already defined by the LOG_ARCHIVE_DEST_n parameter on the primary, the ARCH process on the primary sends the logs to the standby and notifies the LGWR that the gaps have been resolved.

Starting in 9.2.0, automatic gap resolution has been enhanced. In addition to the above, the ARCH process on the primary database polls all standby databases every minute to see if there is a gap in the sequence of archived redo logs. If a gap is detected then the ARCH process sends the missing archived redo log files to the standby databases that reported the gap. Once the gap is resolved, the LGWR process is notified that the site is up to date.

FAL Gap Resolution:

As the RFS process on the standby receives an archived log, it updates the standby controlfile with the name and location of the file. Once the MRP (Media Recovery Process) sees the update to the controlfile, it attempts to recover that file. If the MRP process finds that the archived log is missing or is corrupt, FAL is called to resolve the gap or obtain a new copy. Since MRP has no direct communications link with the primary, it must use the FAL_SERVER and FAL_CLIENT initialization parameters to resolve the gap. Both of these parameters must be set in the standby init.ora. The two parameters are defined as:

FAL_SERVER: An Oracle Net service name that exists in the standby tnsnames.ora file and points to the primary database listener. The FAL_SERVER parameter can contain a comma-delimited list of locations that should be attempted during gap resolution.

FAL_CLIENT: An Oracle Net service name that exists in the primary tnsnames.ora file and points to the standby database listener. The value of FAL_CLIENT should also be listed as the service in a remote archive destination pointing to the standby.

Once MRP needs to resolve a gap, it uses the value from FAL_SERVER to call the primary database. Once communication with the primary has been established, MRP passes the FAL_CLIENT value to the primary ARCH process. The primary ARCH process locates the remote archive destination with the corresponding service name and ships the missing archived redo logs. If the first destination listed in FAL_SERVER is unable to resolve the gap, the next destination is attempted until either the gap is resolved or all FAL_SERVER destinations have been tried.
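As a sketch, reusing the prac1/standby names from the setup section later in this document, the standby init.ora would contain:

FAL_SERVER=prac1
FAL_CLIENT=standby

Here prac1 and standby are Oracle Net service names resolvable on the standby host.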

As of 9.2.0 FAL Gap Resolution only works with Physical Standby databases as the process is tied to MRP. Gap recovery on a logical standby database is  handled through the heartbeat mechanism.

Simulating Gap Recovery

The following steps can be used to illustrate and verify both automatic and FAL gap recovery. As the steps involve shutting down the standby database, which can impact disaster recovery, it is recommended to perform these procedures in a test environment.

Automatic Gap Resolution:

  1. Shutdown the physical standby database.
  2. Determine the current sequence on the primary database.
  3. Perform at least three log switches on the primary database.
  4. Verify that the logs did not get transferred to the standby archive dest.
  5. Start the standby database.
  6. Perform a log switch on the primary and verify that the gap gets resolved on the standby (a sketch of these commands follows).
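A rough sketch of the commands behind these steps (directory and thread numbers are illustrative):

-- On the standby
SQL> SHUTDOWN IMMEDIATE;

-- On the primary: note the current sequence, then force a few switches
SQL> SELECT THREAD#, SEQUENCE# FROM V$LOG WHERE STATUS='CURRENT';
SQL> ALTER SYSTEM SWITCH LOGFILE;   -- repeat at least three times

-- Restart the standby and redo apply, then switch once more on the primary
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- On the standby, confirm the gap is detected and then resolved
SQL> SELECT * FROM V$ARCHIVE_GAP;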

FAL Gap Resolution:

  1. In the standby init.ora define the fal_server and fal_client parameters.
  2. Bounce the standby database so that the parameters are put into effect.
  3. Perform three log switches on the primary database.
  4. In the standby_archive_dest directory, delete the middle archive log on the standby.
  5. Start managed recovery and verify that the gap is resolved by FAL_SERVER and FAL_CLIENT.

Manually Resolving a Gap:

In some rare cases it might be necessary to manually resolve gaps. The following section describes how to query the appropriate views to determine if a gap exists.

On your physical standby database:

Query the V$ARCHIVE_GAP view:

SQL> SELECT * FROM V$ARCHIVE_GAP;

THREAD#   LOW_SEQUENCE#  HIGH_SEQUENCE#
-------   -------------  --------------
      1             443             446

The query results show that your physical standby database is currently missing logs from sequence 443 to sequence 446 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo logs on your primary database:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1  AND  SEQUENCE# BETWEEN 443 AND 446;

NAME
-------------------------------------------------------
/u01/oradata/arch/arch_1_443.arc
/u01/oradata/arch/arch_1_444.arc
/u01/oradata/arch/arch_1_445.arc

Copy the logs returned by the query to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE command.

SQL> ALTER DATABASE REGISTER LOGFILE '/u01/oradata/stby/arch/arch_1_443.arc';

SQL> ALTER DATABASE REGISTER LOGFILE '/u01/oradata/stby/arch/arch_1_444.arc';

SQL> ALTER DATABASE REGISTER LOGFILE '/u01/oradata/stby/arch/arch_1_445.arc';

Once the log files have been registered in the standby controlfile, you can restart the MRP process.
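For example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;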

On a logical standby database:

Query the DBA_LOGSTDBY_LOG view.

SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L    WHERE NEXT_CHANGE# NOT IN  (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# =THREAD#)  ORDER BY THREAD#,SEQUENCE#;

THREAD#   SEQUENCE#   FILE_NAME
-------   ---------   -----------------------------------------------
      1         451   /u01/oradata/logical_stby/arch/arch_1_451.arc
      1         453   /u01/oradata/logical_stby/arch/arch_1_453.arc

Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby db.

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/logical_stby/arch/arch_1_452.arc';

After you register these logs on the logical standby database, you can restart log apply services.

Data Guard Protection Modes

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) forms an extension to the Oracle RDBMS and provides organizations with high availability, data protection, and disaster recovery for enterprise databases.

One of those new features in Oracle9i Release 2 is the ability for the DBA to place the database into one of the following protection modes:

  • Maximum Protection
  • Maximum Availability
  • Maximum Performance

A Data Guard configuration always runs in one of the three protection modes listed above. Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database.

Log Transport Services

Log Transport Services enables and controls the automated transfer of redo data within a Data Guard configuration from the primary site to each of its standby sites.

Log transport services also control the level of data protection for your database. The DBA will configure log transport services to balance data protection and availability against database performance. Log transport services also coordinate with Log Apply Services and Role Transition Services for switchover and failover operations.

Maximum Protection Mode

Maximum Protection mode offers the ultimate in data protection. It guarantees no data loss will occur in the event the primary database fails. In order to provide this level of protection, the redo data needed to recover each transaction must be written to both the local (online) redo log and to a standby redo log on at least one standby database before the transaction can be committed. In order to guarantee no loss of data can occur, the primary database will shut down if a fault prevents it from writing its redo data to at least one remote standby redo log.

In a multiple-instance RAC database environment, Data Guard will shut down the primary database if it is unable to write the redo data to at least one properly configured database instance (see minimum requirements below).

In order to participate in Maximum Protection mode:

  • At least one standby instance has to be configured with standby redo logs.
  • When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.
NOTE: It is highly recommended that a Data Guard configuration operating in Maximum Protection mode contain at least two physical standby databases that meet the requirements listed above. That way, the primary database can continue processing if one of the physical standby databases cannot receive redo data from the primary database. If only one standby database is configured with the minimum requirements listed above, the primary database will shut down when that physical standby database cannot receive redo data from the primary database!
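As a sketch, a primary destination that meets these requirements and the statements to raise the protection mode could look like this (the standby service name is an assumption; the primary must be mounted, not open, when raising to Maximum Protection):

LOG_ARCHIVE_DEST_2='SERVICE=standby LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
SQL> ALTER DATABASE OPEN;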

Maximum Availability Mode

Maximum Availability mode provides the highest level of data protection that is possible without affecting the availability of the primary database. This protection mode is very similar to Maximum Protection in that a transaction will not commit until the redo data needed to recover that transaction is written to both the local (online) redo log and to at least one remote standby redo log. Unlike Maximum Protection mode, however, the primary database will not shut down if a fault prevents it from writing its redo data to a remote standby redo log. Instead, the primary database operates in Maximum Performance mode until the fault is corrected and all log gaps have been resolved. After all log gaps have been resolved, the primary database automatically resumes operating in Maximum Availability mode.

NOTE: Please note that Maximum Availability mode guarantees that no data will be lost if the primary fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Just like Maximum Protection mode, Maximum Availability requires:

  • At least one standby instance has to be configured with standby redo logs.
  • When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.
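The corresponding statement on the primary is shown below; depending on the version, the primary may need to be mounted (not open) when raising the protection mode.

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;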

Maximum Performance Mode

Maximum Performance mode is the default protection mode and provides the highest level of data protection that is possible without affecting the performance or availability of the primary database. With this protection mode, a transaction is committed as soon as the redo data needed to recover the transaction is written to the local (online) redo log.
When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, log transport services can be set to use either LGWR / ASYNC or ARCH. To reduce the amount of data loss on the standby destination if the primary database were to fail, set the LGWR and ASYNC attributes. Using this configuration, the primary database writes its redo stream to the standby redo logs on the standby database asynchronously with respect to the commitment of the transactions that create the redo data. When the nodes are connected with sufficient bandwidth, this mode provides a level of data protection that approaches that of Maximum Availability mode with minimal impact on primary database performance.

Note that the use of standby redo logs while operating in Maximum Performance mode is only necessary when configuring log transport services to use LGWR. When log transport services is configured to use ARCH, standby redo logs are not required.
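An illustrative asynchronous destination for this mode (again using a hypothetical standby service name):

LOG_ARCHIVE_DEST_2='SERVICE=standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'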

SQL> alter database set standby database to maximize protection;
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;

PROTECTION_MODE     PROTECTION_LEVEL    DATABASE_ROLE
------------------- ------------------- --------------
MAXIMUM PROTECTION  MAXIMUM PROTECTION  PRIMARY

SUMMARY:

1) Maximum Protection (zero data loss)
  • Redo synchronously transported to standby database
  • Redo must be applied to at least one standby before transactions on primary can be committed
  • Processing on primary is suspended if no standby is available

2) Maximum Availability (minimal data loss)
  • Similar to Maximum Protection mode
  • If no standby database is available, processing continues on primary

3) Maximum Performance (default)
  • Redo asynchronously shipped to standby database
  • If no standby database is available, processing continues on primary


Read More: TRANSPORT: Data Guard Protection Modes (Doc ID 239100.1)

Managing Standby Database

Start the standby database

startup nomount;

alter database mount standby database;

alter database recover managed standby database disconnect;

Shutdown the standby database

alter database recover managed standby database cancel;

shutdown immediate;

Registering the archive logs manually

alter database register or replace logfile '/opt/oracle/archive/oradb_1_1234_48540.arc';

Opening standby database in READ ONLY mode

alter database recover managed standby database cancel;

alter database open read only;

From READ ONLY mode to Managed recovery
a. Terminate all the active user sessions on the standby database

b. Restart managed recovery:

alter database recover managed standby database disconnect;

Standby Database Creation

Steps to create the physical standby database:
1. Take the backup of production database
2. Create standby controlfile
3. Copy the backup, standby controlfile and init.ora files to standby box
4. Create necessary directories
5. Edit the init.ora file
6. Mount the database
7. Restore the database
8. Start the recovery

1. Take the backup of production database
rman> run {
allocate channel c1 type disk format '/backup/%d_rman_bkp_%T_p%p_s%s' maxpiecesize 1800M;
allocate channel c2 type disk format '/backup/%d_rman_bkp_%T_p%p_s%s' maxpiecesize 1800M;
allocate channel c3 type disk format '/backup/%d_rman_bkp_%T_p%p_s%s' maxpiecesize 1800M;
allocate channel c4 type disk format '/backup/%d_rman_bkp_%T_p%p_s%s' maxpiecesize 1800M;
backup as compressed backupset database plus archivelog;
}

2. Create standby controlfile
sql> alter database create standby controlfile as '/backup/standby_controlfile.ctl';

3. Copy the backup, standby controlfile and init.ora files to standby box
use ftp or scp to copy the files

4. Create necessary directories
$ mkdir adump bdump cdump udump datafiles controlfiles redofiles archive

5. Edit the init.ora file
a. Change the controlfile path
b. Change the paths of all dump_dest parameters
c. Change the LOG_ARCHIVE_DEST path
d. Add the below 3 parameters
DB_FILE_NAME_CONVERT='/old_path1/','/new_path1/','/old_path2/','/new_path2/'
LOG_FILE_NAME_CONVERT='/old_path/','/new_path/'
STANDBY_FILE_MANAGEMENT='AUTO'

6. Mount the database
sql> startup mount pfile='init.ora'

7. Restore the database
rman> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
restore database;
}

8. Start the recovery
sql> alter database recover managed standby database disconnect;

Applying RMAN Incremental Backup To Standby Database

 We can use RMAN incremental backup to roll forward the physical standby database in the following situations:

1. when an archive log sequence is missing
2. when a large number of archive logs need to be applied

Steps:

1. Take the current SCN of the standby database

SQL> select to_char(current_scn) scn from v$database;
SCN
—————
23339995

2. Stop the redo apply on the standby database

SQL> alter database recover managed standby database cancel;

3. Take the incremental backup of the production database from the current SCN of the standby database

RMAN> run {
allocate channel c1 type disk format '/backup/%d_incr_s%s_p%p' maxpiecesize 2g;
allocate channel c2 type disk format '/backup/%d_incr_s%s_p%p' maxpiecesize 2g;
allocate channel c3 type disk format '/backup/%d_incr_s%s_p%p' maxpiecesize 2g;
allocate channel c4 type disk format '/backup/%d_incr_s%s_p%p' maxpiecesize 2g;
backup as compressed backupset incremental from scn 23339995 database;
}

4. Transfer the incremental backup to the standby system

$ scp /backup/* username@standbyhost:/tmp/

5. Catalog all the incremental backup pieces to the standby database

RMAN> catalog start with '/tmp/';

6. Apply incremental backup to the standby database

RMAN> recover database noredo;

7. Create a new standby controlfile at the production database, copy it to the standby system, and replace the current standby controlfile with the new one.

SQL> alter database create standby controlfile as '/backup/standby_incr.ctl';
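A minimal sketch of swapping in the new controlfile on the standby (the path /tmp/standby_incr.ctl is an assumption based on step 4 above):

SQL> shutdown immediate;
SQL> startup nomount;

RMAN> connect target /
RMAN> restore standby controlfile from '/tmp/standby_incr.ctl';
RMAN> alter database mount;

If the datafile names recorded in the restored controlfile still point to the primary's paths, catalog the standby datafile copies (RMAN> catalog start with '<standby datafile directory>';) and run RMAN> switch database to copy; before restarting redo apply.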

8. Start redo apply on the physical standby database

SQL> alter database recover managed standby database disconnect;

Note:
If you have added a new datafile to the production database and it has not been created on the standby, you have to restore that datafile.
RMAN> restore datafile 56,57,58;