Thursday, January 27, 2022

Changing Data Guard protection mode

Changing the Data Guard protection mode from Maximum Performance to Maximum Availability

alter system set log_archive_dest_2='SERVICE=oradb_s2 LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) NET_TIMEOUT=30 REOPEN=50 DB_UNIQUE_NAME=oradb_s2';

The previous Maximum Performance setting used ASYNC transport:

SERVICE=oradb_s2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradb_s2

NET_TIMEOUT – Specifies the number of seconds the primary database log writer waits for a response from the redo transport (LNS) process before terminating the connection and marking the standby destination as failed. The default value is 30 seconds.

REOPEN – Specifies the number of seconds the log writer waits before attempting to access a previously failed standby destination. The default is 300 seconds; here it is set to 50.

Note: Shut down the primary database and restart it in mount mode if the protection mode is being set to Maximum Protection or being changed from Maximum Performance to Maximum Availability. If the primary database is an Oracle Real Application Clusters database, shut down all of the instances, then start and mount a single instance.

Now, shut down the database and restart it in mount mode:

SHUTDOWN IMMEDIATE

STARTUP MOUNT

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

alter database open;

SELECT NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE FROM V$DATABASE;
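For reference, reverting to Maximum Performance follows the same pattern with the ASYNC attributes shown earlier; lowering the protection mode does not require a restart. A sketch:

```sql
-- Sketch: reverting to Maximum Performance (no restart needed to lower the mode)
ALTER SYSTEM SET log_archive_dest_2='SERVICE=oradb_s2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradb_s2';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SELECT NAME, OPEN_MODE, DATABASE_ROLE, PROTECTION_MODE FROM V$DATABASE;
```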




To validate the contents of an online log file on the standby, dump it with the VALIDATE option:

alter system dump logfile '/u01/app/oracle/fra/ORADB_S2/onlinelog/o1_mf_5_ho2pm6m3_.log' validate;


PR00 (PID:16647): Media Recovery Log /u01/app/oracle/fra/ORADB_S2/archivelog/2022_01_27/o1_mf_1_124_jz4w8rdo_.arc
Errors with log /u01/app/oracle/fra/ORADB_S2/archivelog/2022_01_27/o1_mf_1_124_jz4w8rdo_.arc
PR00 (PID:16647): MRP0: Background Media Recovery terminated with error 328
2022-01-27T16:58:26.816034+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_16647.trc:
ORA-00328: archived log ends at change 3830620, need later change 4142985
ORA-00334: archived log: '/u01/app/oracle/fra/ORADB_S2/archivelog/2022_01_27/o1_mf_1_124_jz4w8rdo_.arc'
PR00 (PID:16647): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!

col name for a80
col thread# for 99
col sequence# for 9999
col archived for a5
col status for a10
select name, thread#, sequence#, archived, applied, status from v$archived_log where 4142985 between FIRST_CHANGE# and NEXT_CHANGE#;
select name, thread#, sequence#, archived, applied, status from v$archived_log where 3830620 between FIRST_CHANGE# and NEXT_CHANGE#;

After converting from Maximum Performance to Maximum Availability, the standby is out of sync and media recovery terminates with the error above (ORA-00328): the available archived log ends at SCN 3830620, while recovery needs redo up to SCN 4142985.

Since there is no other way to close this gap, let's roll the standby forward and see the result.
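The roll-forward itself is not shown in this post; a typical SCN-based incremental approach is sketched below. The SCN is taken from the error above, and the backup location is a placeholder for illustration.

```sql
-- On the standby: stop managed recovery and note the SCN to roll forward from
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SELECT CURRENT_SCN FROM V$DATABASE;

-- On the primary (RMAN): take an incremental backup starting from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 3830620 DATABASE FORMAT '/backup/rollfwd_%U';

-- On the standby (RMAN): catalog the pieces, apply the incremental, restart MRP
RMAN> CATALOG START WITH '/backup/rollfwd_';
RMAN> RECOVER DATABASE NOREDO;
```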

Friday, January 7, 2022

What happens on the DR side if space is exhausted there during tablespace creation on the primary


oradb    --> primary DB
oradb_s2 --> standby DB

SQL> CREATE BIGFILE TABLESPACE TESTTBS2 DATAFILE '/backup/testtbs02.dbf' SIZE 200M AUTOEXTEND ON NEXT 500M MAXSIZE UNLIMITED;

Tablespace created.

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      oradb.localdomain     


Below is the alert log from the DR side when space on the standby is exhausted while the tablespace is being created on the primary.
WARNING: File being created with same name as in Primary
Existing file may be overwritten
2022-01-07T22:09:20.122725+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_4229.trc:
ORA-27072: File I/O error
Additional information: 4
Additional information: 12288
Additional information: 237568
2022-01-07T22:09:20.179041+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_4229.trc:
ORA-19502: write error on file "/backup/testtbs02.dbf", block number 12288 (block size=8192)
ORA-27072: File I/O error
Additional information: 4
Additional information: 12288
Additional information: 237568
File #14 added to control file as 'UNNAMED00014'.
Originally created as:
'/backup/testtbs02.dbf'
Recovery was unable to create the file as:
'/backup/testtbs02.dbf'
PR00 (PID:4229): MRP0: Background Media Recovery terminated with error 1274
2022-01-07T22:09:20.343651+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_4229.trc:
ORA-01274: cannot add data file that was originally created as '/backup/testtbs02.dbf'
PR00 (PID:4229): Managed Standby Recovery not using Real Time Apply
2022-01-07T22:09:22.638703+04:00
Recovery interrupted!

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
Recovered data files to a consistent state at change 3262529
stopping change tracking
2022-01-07T22:09:23.132280+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_4229.trc:
ORA-01274: cannot add data file that was originally created as '/backup/testtbs02.dbf'
2022-01-07T22:09:23.184841+04:00
Background Media Recovery process shutdown (oradb_s2)


If we forcefully restart the MRP, it terminates again; below is the output from the alert log.

alter database recover managed standby database disconnect
2022-01-07T22:17:03.094994+04:00
Attempt to start background Managed Standby Recovery process (oradb_s2)
Starting background process MRP0
2022-01-07T22:17:03.116141+04:00
MRP0 started with pid=60, OS id=5031
2022-01-07T22:17:03.118382+04:00
Background Managed Standby Recovery process started (oradb_s2)
2022-01-07T22:17:08.146408+04:00
 Started logmerger process
2022-01-07T22:17:08.164904+04:00

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
PR00 (PID:5037): Managed Standby Recovery starting Real Time Apply
2022-01-07T22:17:08.216985+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_dbw0_1947.trc:
ORA-01186: file 14 failed verification tests
ORA-01157: cannot identify/lock data file 14 - see DBWR trace file
ORA-01111: name for data file 14 is unknown - rename to correct file
ORA-01110: data file 14: '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
2022-01-07T22:17:08.217231+04:00
File 14 not verified due to error ORA-01157
2022-01-07T22:17:08.218311+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_dbw0_1947.trc:
ORA-01157: cannot identify/lock data file 203 - see DBWR trace file
ORA-01110: data file 203: '/u01/app/oracle/oradata/ORADB/pdb1/temp01.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2022-01-07T22:17:08.431502+04:00
max_pdb is 3
PR00 (PID:5037): MRP0: Background Media Recovery terminated with error 1111
2022-01-07T22:17:08.462884+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_pr00_5037.trc:
ORA-01111: name for data file 14 is unknown - rename to correct file
ORA-01110: data file 14: '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
ORA-01157: cannot identify/lock data file 14 - see DBWR trace file
ORA-01111: name for data file 14 is unknown - rename to correct file
ORA-01110: data file 14: '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
PR00 (PID:5037): Managed Standby Recovery not using Real Time Apply
stopping change tracking
2022-01-07T22:17:08.626152+04:00
Recovery Slave PR00 previously exited with exception 1111
2022-01-07T22:17:08.674098+04:00
Errors in file /u01/app/oracle/diag/rdbms/oradb_s2/oradb_s2/trace/oradb_s2_mrp0_5031.trc:
ORA-01111: name for data file 14 is unknown - rename to correct file
ORA-01110: data file 14: '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
ORA-01157: cannot identify/lock data file 14 - see DBWR trace file
ORA-01111: name for data file 14 is unknown - rename to correct file
ORA-01110: data file 14: '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
2022-01-07T22:17:08.674347+04:00
Background Media Recovery process shutdown (oradb_s2)
2022-01-07T22:17:09.182195+04:00
Completed: alter database recover managed standby database disconnect
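The post stops at the error, but the standard way out is to recreate datafile 14 manually on the standby and then restart recovery. A sketch, where the target path is an assumption (any location with enough free space):

```sql
-- Sketch: fix the UNNAMED datafile on the standby (target path is an assumption)
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;

ALTER DATABASE CREATE DATAFILE
  '/u01/app/oracle/product/19.0.0/db_1/dbs/UNNAMED00014'
  AS '/u01/app/oracle/oradata/ORADB_S2/testtbs02.dbf';

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```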


DATAGUARD


select name,db_unique_name,open_mode,log_mode,database_role from v$database;

alter database recover managed standby database disconnect;

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
  (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#;

select PROCESS, PID, STATUS, THREAD#, SEQUENCE# from gv$managed_standby;

select PROCESS, PID, STATUS, THREAD#, SEQUENCE# from gv$managed_standby where STATUS='IDLE';

select PROCESS, PID, STATUS, THREAD#, SEQUENCE# from gv$managed_standby where PROCESS='MRP0';

set lines 555
col DEST_NAME for a30
 col DESTINATION for a30
col ERROR for a30

 select INST_ID,DEST_ID,DEST_NAME,DESTINATION,ERROR,STATUS from gv$archive_dest where DESTINATION IS NOT NULL;
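In addition to the queries above, v$dataguard_stats reports lag figures directly on the standby (column availability may vary slightly by version):

```sql
-- On the standby: transport and apply lag
SELECT NAME, VALUE, TIME_COMPUTED
FROM   V$DATAGUARD_STATS
WHERE  NAME IN ('transport lag', 'apply lag');
```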

RMAN


RMAN Retention Policy

The RMAN retention policy determines how long backups are kept and how many copies are retained.

The retention policy can be defined in terms of:

  • REDUNDANCY - number of copies to retain
  • RECOVERY WINDOW - number of days to retain backups

The retention policy is set with the CONFIGURE RETENTION POLICY command. The default is:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1;

To specify a recovery window of 1 day use:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS; 

Backup files can be marked as expired or obsolete.

  • Expired - RMAN has performed a crosscheck and the file cannot be found
  • Obsolete - based on the retention policy the file is not needed for recovery

The backup retention policy only applies to full or level 0 datafile and control file backups.

The retention policy does not directly affect archived redo logs and incremental level 1 backups. These files become obsolete when no full backups exist that need them.

Datafile backup sets cannot be deleted until all datafile backups within the backup set are obsolete.

Obsolete backups can be identified using the REPORT OBSOLETE command and deleted using the DELETE OBSOLETE command.
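The usual housekeeping cycle, as a short sketch (not output from this system):

```sql
RMAN> CROSSCHECK BACKUP;           -- mark missing pieces EXPIRED
RMAN> DELETE EXPIRED BACKUP;       -- remove expired records from the repository
RMAN> REPORT OBSOLETE;             -- list backups no longer needed per policy
RMAN> DELETE OBSOLETE;             -- delete them
```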

The retention policy can also be set to NONE:

RMAN> CONFIGURE RETENTION POLICY TO NONE;

To revert to the default value use:

RMAN> CONFIGURE RETENTION POLICY CLEAR;

RMAN> crosscheck backup;

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/ORADB/autobackup/2022_01_07/o1_mf_s_1093368491_jxjhzcyy_.bkp RECID=10 STAMP=1093368491
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/media/sf_DR_TEST/ORADB_20220107_11_1_FULL RECID=11 STAMP=1093370071
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/media/sf_DR_TEST/ORADB_20220107_12_1_FULL RECID=12 STAMP=1093370127
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/media/sf_DR_TEST/ORADB_20220107_13_1_FULL RECID=13 STAMP=1093370154
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/ORADB/autobackup/2022_01_07/o1_mf_s_1093370178_jxjkn50v_.bkp RECID=14 STAMP=1093370181
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/media/sf_DR_TEST/ORADB_20220107_15_1_ARCHIVE RECID=15 STAMP=1093370187
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/media/sf_DR_TEST/ORADB_20220107_16_1_CONTROL RECID=16 STAMP=1093370199
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/u01/app/oracle/fra/ORADB/autobackup/2022_01_07/o1_mf_s_1093370201_jxjknsmo_.bkp RECID=17 STAMP=1093370201
Crosschecked 8 objects

RMAN Compression

How does RMAN compression work?

The goal of this section is to explain:

1. Null compression

2. Unused block compression

3. Binary compression
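The post body is missing here. As a brief sketch: null compression (blocks that have never been used are skipped) and unused block compression (blocks not currently holding data are skipped) happen automatically when backup sets are created; binary compression is the one enabled explicitly. The BASIC algorithm requires no separate license option, while others may:

```sql
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;
```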




Sunday, January 2, 2022

ORA-00742: Log read detects lost write in thread 1 sequence 104 block 30681


You can encounter this error if the database server was stopped abruptly without a clean shutdown, for example due to a power failure or a pulled power cable.

-bash-4.4$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Jan 2 22:49:25 2022
Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  281016480 bytes
Fixed Size                  8882336 bytes
Variable Size             218103808 bytes
Database Buffers           50331648 bytes
Redo Buffers                3698688 bytes
Database mounted.
ORA-00742: Log read detects lost write in thread 1 sequence 104 block 30681
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/testdb/redo02.log'

SQL> select status from v$instance;

STATUS
------------
MOUNTED

2. Check the status of log group 2 in v$log and v$logfile; notice that group 2 is the current log file.

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                                       IS_     CON_ID
---------- ------- ------- ------------------------------------------------------------ --- ----------
         1         ONLINE  /u01/app/oracle/oradata/testdb/redo01.log                    NO           0
         2         ONLINE  /u01/app/oracle/oradata/testdb/redo02.log                    NO           0
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ --------- ----------
         2          1        104   52428800        512          1 NO  CURRENT                 479386 02-JAN-22   9.2954E+18                    0
         1          1        103   52428800        512          1 NO  INACTIVE                479343 02-JAN-22       479386 02-JAN-22          0

3. Try to clear the unarchived log file; this fails because log group 2 is current and needed for crash recovery:
SQL> alter database clear unarchived logfile group 2;
alter database clear unarchived logfile group 2
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance testdb (thread 1)
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/testdb/redo02.log'

4. So we perform manual recovery on the database. Since log group 2 is current, it was never archived; the redo needed for the requested sequence 104 is still in the current online log, so I supply the redo02.log file location manually at the prompt. Media recovery completes, and the database can then be opened with RESETLOGS.

SQL> recover database until cancel;
ORA-00279: change 479386 generated at 01/02/2022 00:06:42 needed for thread 1
ORA-00289: suggestion : /u01/app/oracle/fast_recovery_area/TESTDB/archivelog/2022_01_02/o1_mf_1_104_%u_.arc
ORA-00280: change 479386 for thread 1 is in sequence #104


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/oradata/testdb/redo02.log
Log applied.
Media recovery complete.

SQL> alter database open resetlogs;

Database altered.


-bash-4.4$ ps -ef | grep pmon
  oracle  1309     1   0 22:49:47 ?           0:00 ora_pmon_testdb
-bash-4.4$


Saturday, January 1, 2022

Manual DB creation

 Manual Oracle 19c database creation on Oracle Solaris 11.4

SunOS kolsol.localdomain 5.11 11.4.0.15.0 i86pc i386 i86pc

If the GUI-mode dbca is not an option, we can create a database manually (dbca also has a silent mode). This exercise is mainly to understand how Oracle uses its internal scripts (.bsq files) to create a database when we issue the CREATE DATABASE command.

A .bsq file is read by the instance and is used to create the SYSTEM tablespace and all of the tables, clusters, indexes, sequences, and initial data required by the kernel code to store and maintain the metadata for user-created objects.

Below are the prerequisites to create a database:
1. Create the required directories.
mkdir -p /u01/app/oracle/admin/testdb/adump
mkdir -p /u01/app/oracle/oradata/testdb/
mkdir -p /u01/app/oracle/fast_recovery_area
chmod 755 /u01/app/oracle/admin/testdb/adump
chmod 755 /u01/app/oracle/oradata/testdb/
chmod 755 /u01/app/oracle/fast_recovery_area

2. Create the pfile in the $ORACLE_HOME/dbs location with the name inittestdb.ora:
*.audit_file_dest='/u01/app/oracle/admin/testdb/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.compatible='19.5.0.0.0'
*.control_files='/u01/app/oracle/oradata/testdb/control01.ctl','/u01/app/oracle/control02.ctl'
*.db_block_size=8192
*.db_domain='localdomain'
*.db_name='testdb'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.log_archive_format='%t_%s_%r.arc'
*.processes=300
*.recyclebin='OFF'
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'

-bash-4.4$ export ORACLE_SID=testdb
-bash-4.4$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
-bash-4.4$ sqlplus / as sysdba
-bash: sqlplus: command not found
-bash-4.4$ export PATH=$PATH:$ORACLE_HOME/bin
-bash-4.4$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jan 1 17:01:48 2022
Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area  281016480 bytes
Fixed Size                  8882336 bytes
Variable Size             218103808 bytes
Database Buffers           50331648 bytes
Redo Buffers                3698688 bytes
SQL>

Check whether the alert log has been created:
/u01/app/oracle/diag/rdbms/testdb/testdb/trace
-bash-4.4$ ls -ltr alert_testdb.log
-rw-r-----   1 oracle   oinstall    6669 Jan  1 17:03 alert_testdb.log
It has been created.

From the alert log we can find the following details:
ORACLE_HOME:    /u01/app/oracle/product/19.0.0/dbhome_1
System name:    SunOS
Node name:      kolsol.localdomain
Release:        5.11
Version:        11.4.0.15.0
Machine:        i86pc
Using parameter settings in server-side pfile /u01/app/oracle/product/19.0.0/dbhome_1/dbs/inittestdb.ora

With the instance started in NOMOUNT mode, now create the database:

SQL> CREATE DATABASE testdb
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/testdb/redo01.log') SIZE 50M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 50
MAXDATAFILES 100
MAXINSTANCES 1
DATAFILE '/u01/app/oracle/oradata/testdb/system01.dbf' SIZE 100M autoextend on
SYSAUX DATAFILE '/u01/app/oracle/oradata/testdb/sysaux01.dbf' SIZE 100M autoextend on
DEFAULT TABLESPACE users datafile '/u01/app/oracle/oradata/testdb/users01.dbf' size 100m autoextend on
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/app/oracle/oradata/testdb/temp01.dbf' SIZE 50m
UNDO TABLESPACE undotbs1
DATAFILE '/u01/app/oracle/oradata/testdb/undotbs01.dbf'
SIZE 200M;
CREATE DATABASE testdb
*
ERROR at line 1:
ORA-01518: CREATE DATABASE must specify more than one log file

Here we can see that at least two online redo log files are required to create a database: one can be written to while the other is being archived.
Let's add one more log group and create the database again.

SQL> CREATE DATABASE testdb
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/testdb/redo01.log') SIZE 50M,
GROUP 2 ('/u01/app/oracle/oradata/testdb/redo02.log') SIZE 50M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 50
MAXDATAFILES 100
MAXINSTANCES 1
DATAFILE '/u01/app/oracle/oradata/testdb/system01.dbf' SIZE 100M autoextend on
SYSAUX DATAFILE '/u01/app/oracle/oradata/testdb/sysaux01.dbf' SIZE 100M autoextend on
DEFAULT TABLESPACE users datafile '/u01/app/oracle/oradata/testdb/users01.dbf' size 100m autoextend on
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/app/oracle/oradata/testdb/temp01.dbf' SIZE 50m
UNDO TABLESPACE undotbs1
DATAFILE '/u01/app/oracle/oradata/testdb/undotbs01.dbf'
SIZE 200M;
CREATE DATABASE testdb
*
ERROR at line 1:
ORA-01501: CREATE DATABASE failed
ORA-00200: control file could not be created
ORA-00202: control file: '/u01/app/oracle/oradata/testdb/control01.ctl'
ORA-27038: created file already exists
Additional information: 1

The first attempt had already created the control files and redo log files. Remove those files, then restart the instance in NOMOUNT and try again:
SQL> startup nomount
ORACLE instance started.

Total System Global Area  281016480 bytes
Fixed Size                  8882336 bytes
Variable Size             218103808 bytes
Database Buffers           50331648 bytes
Redo Buffers                3698688 bytes
SQL> CREATE DATABASE testdb
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/testdb/redo01.log') SIZE 50M,
GROUP 2 ('/u01/app/oracle/oradata/testdb/redo02.log') SIZE 50M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 50
MAXDATAFILES 100
MAXINSTANCES 1
DATAFILE '/u01/app/oracle/oradata/testdb/system01.dbf' SIZE 100M autoextend on
SYSAUX DATAFILE '/u01/app/oracle/oradata/testdb/sysaux01.dbf' SIZE 100M autoextend on
DEFAULT TABLESPACE users datafile '/u01/app/oracle/oradata/testdb/users01.dbf' size 100m autoextend on
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/app/oracle/oradata/testdb/temp01.dbf' SIZE 50m
UNDO TABLESPACE undotbs1
DATAFILE '/u01/app/oracle/oradata/testdb/undotbs01.dbf'
SIZE 200M;

Database created.

From the alert log we can see the database being created by processing a series of .bsq files:

processing ?/rdbms/admin/dcore.bsq
create tablespace SYSTEM datafile  '/u01/app/oracle/oradata/testdb/system01.dbf' SIZE 100M autoextend on

  default storage (initial 10K next 10K) EXTENT MANAGEMENT DICTIONARY online
2022-01-01T17:18:00.134237+05:30
Endian type of dictionary set to little
Completed: create tablespace SYSTEM datafile  '/u01/app/oracle/oradata/testdb/system01.dbf' SIZE 100M autoextend on

  default storage (initial 10K next 10K) EXTENT MANAGEMENT DICTIONARY online
create rollback segment SYSTEM tablespace SYSTEM
  storage (initial 50K next 50K)
Completed: create rollback segment SYSTEM tablespace SYSTEM
  storage (initial 50K next 50K)

alter tablespace system force logging
Completed: alter tablespace system force logging
2022-01-01T17:18:03.687898+05:30
processing ?/rdbms/admin/dsqlddl.bsq
processing ?/rdbms/admin/dmanage.bsq
CREATE TABLESPACE sysaux DATAFILE  '/u01/app/oracle/oradata/testdb/sysaux01.dbf' SIZE 100M autoextend on

  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE FORCE LOGGING
2022-01-01T17:18:07.513338+05:30
Completed: CREATE TABLESPACE sysaux DATAFILE  '/u01/app/oracle/oradata/testdb/sysaux01.dbf' SIZE 100M autoextend on

  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE FORCE LOGGING

alter tablespace system default compress for all operations
Completed: alter tablespace system default compress for all operations
alter tablespace sysaux default compress for all operations
Completed: alter tablespace sysaux default compress for all operations
2022-01-01T17:18:07.676503+05:30
processing ?/rdbms/admin/dplsql.bsq
processing ?/rdbms/admin/dtxnspc.bsq
CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE  '/u01/app/oracle/oradata/testdb/undotbs01.dbf'
SIZE 200M
2022-01-01T17:18:15.295970+05:30
[1543] Successfully onlined Undo Tablespace 2.
Completed: CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE  '/u01/app/oracle/oradata/testdb/undotbs01.dbf'
SIZE 200M
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE  '/u01/app/oracle/oradata/testdb/temp01.dbf' SIZE 50m

Completed: CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE  '/u01/app/oracle/oradata/testdb/temp01.dbf' SIZE 50m

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP"
Completed: ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP"
CREATE  TABLESPACE "USERS" DATAFILE  '/u01/app/oracle/oradata/testdb/users01.dbf' size 100m autoextend on
 SEGMENT SPACE MANAGEMENT AUTO
2022-01-01T17:18:18.232890+05:30

2022-01-01T17:18:18.232890+05:30
Completed: CREATE  TABLESPACE "USERS" DATAFILE  '/u01/app/oracle/oradata/testdb/users01.dbf' size 100m autoextend on
 SEGMENT SPACE MANAGEMENT AUTO
ALTER DATABASE DEFAULT TABLESPACE "USERS"
Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
2022-01-01T17:18:18.434077+05:30
processing ?/rdbms/admin/dfmap.bsq
processing ?/rdbms/admin/denv.bsq
processing ?/rdbms/admin/drac.bsq
processing ?/rdbms/admin/dsec.bsq
2022-01-01T17:18:21.857237+05:30
processing ?/rdbms/admin/doptim.bsq
processing ?/rdbms/admin/dobj.bsq
2022-01-01T17:18:23.372012+05:30
processing ?/rdbms/admin/djava.bsq
processing ?/rdbms/admin/dpart.bsq
processing ?/rdbms/admin/drep.bsq
2022-01-01T17:18:26.086262+05:30
processing ?/rdbms/admin/daw.bsq
processing ?/rdbms/admin/dsummgt.bsq
2022-01-01T17:18:27.439197+05:30
processing ?/rdbms/admin/dtools.bsq
processing ?/rdbms/admin/dexttab.bsq
processing ?/rdbms/admin/ddm.bsq
2022-01-01T17:18:28.683148+05:30
processing ?/rdbms/admin/dlmnr.bsq
processing ?/rdbms/admin/ddst.bsq
processing ?/rdbms/admin/dfba.bsq
2022-01-01T17:18:29.786777+05:30
processing ?/rdbms/admin/dpstdy.bsq
processing ?/rdbms/admin/drupg.bsq
processing ?/rdbms/admin/dtlog.bsq
processing ?/rdbms/admin/dmisc.bsq
processing ?/rdbms/admin/dhcs.bsq

Successfully created internal service SYS$BACKGROUND at open
Successfully created internal service SYS$USERS at open

Now, check in the database:
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
TESTDB    READ WRITE

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
testdb           OPEN

POST STEPS:

The post-creation scripts below build the data dictionary views required for a fully functional database:
catalog.sql
catproc.sql
pupbld.sql

Until these scripts are run, the base dictionary tables exist, but views such as DBA_USERS are not yet available.

@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
Run pupbld.sql as the SYSTEM user:
@?/sqlplus/admin/pupbld.sql

Generate the password file:
bash-4.4$ orapwd file=orapwdtestdb password=oracle321! force=y entries=10

OPW-00029: Password complexity failed for SYS user : Password must not be too simple.
bash-4.4$ orapwd file=orapwdtestdb password=oracle$321! force=y entries=10

OPW-00029: Password complexity failed for SYS user : Password must not be too simple.

Using format=12 (the 12c-style password file format), a simple password is accepted:
orapwd file=orapwdtestdb password=oracle force=y format=12 entries=10 
-rw-r-----   1 oracle   oinstall    2048 Jan  1 18:58 orapwtestdb

Now create spfile

bash-4.4$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jan 1 19:00:06 2022
Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.5.0.0.0

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      testdb.localdomain
SQL>
SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string

SQL> create spfile from pfile;

File created.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  281016480 bytes
Fixed Size                  8882336 bytes
Variable Size             218103808 bytes
Database Buffers           50331648 bytes
Redo Buffers                3698688 bytes
Database mounted.
Database opened.
SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/19.0.0
                                                 /dbhome_1/dbs/spfiletestdb.ora

SQL> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga            boolean     FALSE
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     TRUE
sga_max_size                         big integer 268M
sga_min_size                         big integer 0
sga_target                           big integer 0
unified_audit_sga_queue_size         integer     1048576
SQL>
SQL> show parameter pga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 2G
pga_aggregate_target                 big integer 10M

I did not specify SGA and PGA targets in the pfile; the above are the default values the instance came up with.


Now the database is ready. This procedure is for testing purposes only.

Wednesday, December 15, 2021

Database cloning

 Database cloning using RMAN from existing full database backups

Create a pfile with a minimal set of parameters and start the instance in NOMOUNT using it.
Since the existing backup was taken from a CDB (container database), include the enable_pluggable_database parameter, as shown below.

*.audit_file_dest='/u01/app/oracle/admin/racnew/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.db_block_size=8192
*.db_name='racnew'
*.enable_pluggable_database=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racnewXDB)'
*.log_archive_dest_1='LOCATION=+FRA'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+FRA'

RMAN COMMAND:
DUPLICATE DATABASE TO racnew BACKUP LOCATION '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB' NOFILENAMECHECK;

[oracle@srv1 dbs]$ rman auxiliary /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Dec 15 10:36:57 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: RACNEW (not mounted)

RMAN> DUPLICATE DATABASE TO racnew BACKUP LOCATION '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB' NOFILENAMECHECK;

Starting Duplicate Db at 15-DEC-2021 10:36:59
searching for database ID
found backup of database ID 2633676207

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''+FRA/RACNEW/CONTROLFILE/current.296.1091356675'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''RAC'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''racnew'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_6_1_CONTROL';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+FRA/RACNEW/CONTROLFILE/current.296.1091356675'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''RAC'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''racnew'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

Starting restore at 15-DEC-2021 10:38:47
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1939 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+FRA/RACNEW/CONTROLFILE/current.296.1091356675
Finished restore at 15-DEC-2021 10:38:50

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1939 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   set until scn  3327291;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 15-DEC-2021 10:38:59
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00007 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00009 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_1_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_1_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:05
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_3_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_3_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00010 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00011 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00012 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00013 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00014 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00016 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00017 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_2_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_2_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 15-DEC-2021 10:41:26

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=16 STAMP=1091356886 file name=+DATA/RACNEW/DATAFILE/system.282.1091356741
datafile 3 switched to datafile copy
input datafile copy RECID=17 STAMP=1091356886 file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741
datafile 4 switched to datafile copy
input datafile copy RECID=18 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741
datafile 5 switched to datafile copy
input datafile copy RECID=19 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/system.288.1091356807
datafile 6 switched to datafile copy
input datafile copy RECID=20 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807
datafile 7 switched to datafile copy
input datafile copy RECID=21 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/users.286.1091356741
datafile 8 switched to datafile copy
input datafile copy RECID=22 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807
datafile 9 switched to datafile copy
input datafile copy RECID=23 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741
datafile 10 switched to datafile copy
input datafile copy RECID=24 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/system.292.1091356841
datafile 11 switched to datafile copy
input datafile copy RECID=25 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841
datafile 12 switched to datafile copy
input datafile copy RECID=26 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841
datafile 13 switched to datafile copy
input datafile copy RECID=27 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841
datafile 14 switched to datafile copy
input datafile copy RECID=28 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843
datafile 16 switched to datafile copy
input datafile copy RECID=29 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841
datafile 17 switched to datafile copy
input datafile copy RECID=30 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841

contents of Memory Script:
{
   set until scn  3327291;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 15-DEC-2021 10:41:31
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=14
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=2 sequence=6
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=15
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=2 sequence=7
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_5_1_ARCHIVE
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_5_1_ARCHIVE tag=RACDB_ARCHIVE
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_14.297.1091356897 thread=1 sequence=14
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_6.298.1091356897 thread=2 sequence=6
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_14.297.1091356897 RECID=1 STAMP=1091356896
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_15.299.1091356897 thread=1 sequence=15
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_6.298.1091356897 RECID=2 STAMP=1091356896
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_7.300.1091356897 thread=2 sequence=7
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_15.299.1091356897 RECID=3 STAMP=1091356896
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_7.300.1091356897 RECID=4 STAMP=1091356896
media recovery complete, elapsed time: 00:00:02
Finished recover at 15-DEC-2021 10:41:39
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''RACNEW'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''RACNEW'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "RACNEW" RESETLOGS ARCHIVELOG
  MAXLOGFILES    192
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES    32
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 200 M ,
  GROUP     2  SIZE 200 M
 DATAFILE
  '+DATA/RACNEW/DATAFILE/system.282.1091356741',
  '+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/system.288.1091356807',
  '+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/system.292.1091356841'
 CHARACTER SET AL32UTF8

sql statement: ALTER DATABASE ADD LOGFILE

  INSTANCE 'i2'
  GROUP     3  SIZE 200 M ,
  GROUP     4  SIZE 200 M

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "+DATA/RACNEW/DATAFILE/sysaux.283.1091356741",
 "+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741",
 "+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807",
 "+DATA/RACNEW/DATAFILE/users.286.1091356741",
 "+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807",
 "+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file
renamed tempfile 2 to +DATA in control file
renamed tempfile 3 to +DATA in control file

cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741 RECID=1 STAMP=1091356952
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741 RECID=2 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807 RECID=3 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/users.286.1091356741 RECID=4 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807 RECID=5 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741 RECID=6 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841 RECID=7 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841 RECID=8 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841 RECID=9 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843 RECID=10 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841 RECID=11 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841 RECID=12 STAMP=1091356955

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=1091356952 file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=1091356953 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/users.286.1091356741
datafile 8 switched to datafile copy
input datafile copy RECID=5 STAMP=1091356953 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807
datafile 9 switched to datafile copy
input datafile copy RECID=6 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741
datafile 11 switched to datafile copy
input datafile copy RECID=7 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841
datafile 12 switched to datafile copy
input datafile copy RECID=8 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843
datafile 16 switched to datafile copy
input datafile copy RECID=11 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841
datafile 17 switched to datafile copy
input datafile copy RECID=12 STAMP=1091356955 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Cannot remove created server parameter file
Finished Duplicate Db at 15-DEC-2021 10:44:06

Now we convert this single-instance database to RAC.

[oracle@srv1 dbs]$ mv initracnew.ora initracnew1.ora

[oracle@srv1 dbs]$ vi initracnew1.ora   --> add the cluster parameters below
*.cluster_database=true
racnew2.thread=2
racnew1.thread=1
racnew2.undo_tablespace='UNDOTBS2'
racnew1.undo_tablespace='UNDOTBS1'
racnew2.instance_number=2
racnew1.instance_number=1

Add an entry for the instance to the oratab file:
racnew1:/u01/app/oracle/product/19.0.0/db_1:N

Create the audit file destination (adump) directory on node 2, wherever you want to start the second instance:
mkdir -p /u01/app/oracle/admin/racnew/adump

[oracle@srv1 dbs]$
[oracle@srv1 dbs]$ . oraenv
ORACLE_SID = [racnew] ? racnew1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@srv1 dbs]$

[oracle@srv1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 15 11:28:19 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup pfile=initracnew1.ora
ORACLE instance started.

Total System Global Area  721419208 bytes
Fixed Size                  9141192 bytes
Variable Size             654311424 bytes
Database Buffers           50331648 bytes
Redo Buffers                7634944 bytes
Database mounted.
Database opened.

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL>
SQL> create spfile='+DATA' from pfile='initracnew1.ora';

File created.

Note the location of SPFILE from ASM

ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   DEC 15 11:00:00  Y    spfile.300.1091360199
ASMCMD> pwd
+DATA/RACNEW/PARAMETERFILE

spfile='+DATA/RACNEW/PARAMETERFILE/spfile.300.1091360199'

Now add the database and its instances to the cluster configuration:

srvctl add database -db racnew -oraclehome $ORACLE_HOME -dbtype RAC -spfile '+DATA/RACNEW/PARAMETERFILE/spfile.300.1091360199' -role PRIMARY -startoption OPEN -stopoption IMMEDIATE -dbname RACNEW -diskgroup DATA,FRA
srvctl add instance -db racnew -i racnew1 -n srv1.localdomain
srvctl add instance -db racnew -i racnew2 -n srv2.localdomain

Now shut down the instance that was started with pfile=initracnew1.ora from the SQL prompt, then start the database with srvctl:

srvctl start database -d racnew

[oracle@srv1 dbs]$ srvctl status database -d racnew -v -f
Instance racnew1 is running on node srv1. Instance status: Open.
Instance racnew2 is running on node srv2. Instance status: Open.
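
As a cross-check from SQL*Plus, gv$instance should now report both instances (a minimal query; instance and host names will match your environment):

```sql
-- one row per running instance across the cluster
select inst_id, instance_name, host_name, status
  from gv$instance
 order by inst_id;
```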

Monday, November 15, 2021

Dataguard

We are using the MOS note below to upgrade databases from 12.1/12.2 to 19c (19.12) and, in parallel, migrate them as PDBs into a container database.

The process below is for a RAC environment:

Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration ( Doc ID 2273304.1 )

1. Before starting the upgrade, copy the PFILE and password file from the existing 12.1/12.2 home to the new 19.12 home on the DR side:

cp /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/inittestdbdr1.ora /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs

cp /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/orapwtestdbdr1.ora /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs

The database was upgraded successfully, but the DR database went out of sync at step 17 of Doc ID 2273304.1.

We started seeing the issue when we plugged in the XML manifest to create the PDB on the primary database and then started MRP on the standby.


Saturday, September 18, 2021

Jenkins in Oracle Database Task Automation


Now we’re ready to finish setting up the node via the Jenkins UI. In Jenkins, go to Manage Jenkins, then Manage Nodes, and click New Node. Give your agent node a name, select Permanent Agent, and click OK. There are a variety of options here for customizing the node; all we care about right now is the Launch Method.






Sunday, September 12, 2021

Important Points

 Important points to be noted from daily activities:-

While performing a re-org or Oracle home move on a production database:

    i) When moving a production database to a new Oracle home, add the TNS entries for both the primary and standby databases to the new homes on both the PRIMARY and STANDBY sides.

    ii) While moving tables, indexes, LOBs, or table partitions during a re-org, monitor the RECO disk group closely; flashback logs accumulate there as well. If DR sync-up stalls, the RECO disk group can fill up.
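
A quick way to watch RECO (and the other disk groups) during the re-org; this is an illustrative query against the standard v$asm_diskgroup view:

```sql
-- free space per ASM disk group; watch RECO while archive/flashback logs accumulate
select name, total_mb, free_mb,
       round(free_mb / total_mb * 100, 1) pct_free
  from v$asm_diskgroup
 order by name;
```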

------------------------------------------------------------------------------------------------------

If you cannot drop a tablespace because of BIN$ (recycle-bin) objects showing up in dba_indexes, try dropping those indexes. If they will not drop, first drop the table's BIN$ constraints (visible in dba_constraints), then drop the BIN$ indexes.
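
The offending recycle-bin objects can be located as sketched below (TS_NAME is a placeholder tablespace name); purging the recycle bin first is often the simplest way out:

```sql
-- recycle-bin indexes still sitting in the tablespace
select owner, index_name, table_name
  from dba_indexes
 where index_name like 'BIN$%'
   and tablespace_name = 'TS_NAME';

-- recycle-bin constraints on the underlying tables
select owner, constraint_name, table_name
  from dba_constraints
 where constraint_name like 'BIN$%';

-- often simpler: empty the recycle bin, then retry the drop
purge dba_recyclebin;
```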

select sum(bytes) from v$sgastat where pool = 'shared pool';

The shared pool holds many other structures that are outside the scope of the corresponding parameter. SHARED_POOL_SIZE is typically the largest contributor to the shared pool as reported by the SUM(BYTES), but it is not the only contributor.

In Oracle 10g and later, the SHARED_POOL_SIZE parameter controls the size of the shared pool, whereas in Oracle9i and before it was just the largest contributor to it. You should review your actual 9i (and earlier) shared pool size based on V$SGASTAT and use that figure to set SHARED_POOL_SIZE in Oracle 10g and above.
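
To see which components make up that total, the shared pool can be broken down from V$SGASTAT (illustrative; the FETCH FIRST clause needs 12c or later):

```sql
-- top contributors to the shared pool
select name, bytes
  from v$sgastat
 where pool = 'shared pool'
 order by bytes desc
 fetch first 10 rows only;
```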


SELECT DBTIMEZONE AS "Database Time Zone", SESSIONTIMEZONE AS "Session Time Zone"

FROM dual;



Wednesday, September 1, 2021

SQL_TUNING


Turning on 10046 tracing for the sql:

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

Or, if the SQL is already running, you can turn tracing on for its sql_id:
How to trace a sql_id:
1. alter system set events 'sql_trace [sql:3s1yukp05bzg7] bind=true, wait=true';
2. Execute the query.
3. alter system set events 'sql_trace off';
4. Find the trace file.
You do not need to leave the trace on; collecting for about 10 minutes is enough.
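
To find the resulting trace, v$diag_info gives both the trace directory and the current session's trace file (standard on 11g and later):

```sql
select name, value
  from v$diag_info
 where name in ('Diag Trace', 'Default Trace File');
```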

Explain plan of a sql query:

SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT * from soe.employee;
select * from table(dbms_xplan.display);

----------invisible index demo------------------------------
SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT /*+ use_invisible_indexes */ * from soe.employee;
select * from table(dbms_xplan.display);

--------------Indexes presence on a table------------------
SELECT aic.index_owner, aic.table_name, aic.index_name,
       listagg(aic.column_name, ',') within group (order by aic.column_position) cols
  FROM all_ind_columns aic
 WHERE aic.table_name = 'TABLE_NAME'
 GROUP BY aic.index_owner, aic.table_name, aic.index_name
 ORDER BY aic.index_owner, aic.table_name;

----------SQL Tracing-------------
1. Set the below trace at session level to get trace :
alter session set tracefile_identifier='06502';
alter session set events '06502 trace name errorstack level 3';
2. Run the plsql block and generate the error.
3. To close the trace set the following
alter session set events '06502 trace name context off';
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

---------Query execution time check-----------
------this query will give the avg execution time per each execution ---taking calculations from gv$sql executions count and elapsed time

WITH
p AS (
SELECT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL
 UNION
SELECT plan_hash_value
  FROM dba_hist_sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions > 0
 GROUP BY
       plan_hash_value ),
a AS (
SELECT plan_hash_value,
       SUM(elapsed_time_total)/SUM(executions_total) avg_et_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions_total > 0
 GROUP BY
       plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(NVL(m.avg_et_secs, a.avg_et_secs)/1e6, 3) avg_et_secs
  FROM p, m, a
 WHERE p.plan_hash_value = m.plan_hash_value(+)
   AND p.plan_hash_value = a.plan_hash_value(+)
 ORDER BY
       avg_et_secs NULLS LAST;
--------------------------------------------------------------

---------sql plan changes app --------
set lines 200
set pagesize 200
col execs for 999,999,999
col avg_etime for 999,999.999 heading avg_exec|time(s)
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
col SQL_PROFILE for a45

set verify off
col PLAN_HASH_VALUE for 9999999999 heading 'Plan Hash'
col PARSING_SCHEMA_NAME for a10 heading 'Parsing Schema'
col END_INTERVAL_TIME for a30

SELECT STAT.SNAP_ID, STAT.SQL_ID, PLAN_HASH_VALUE, PARSING_SCHEMA_NAME,
       round(ELAPSED_TIME_DELTA/1000000,2) exec_sec, SS.END_INTERVAL_TIME, SQL_PROFILE
  FROM DBA_HIST_SQLSTAT STAT, DBA_HIST_SQLTEXT TXT, DBA_HIST_SNAPSHOT SS
 WHERE STAT.SQL_ID = TXT.SQL_ID
   AND STAT.DBID = TXT.DBID
   AND SS.DBID = STAT.DBID
   AND SS.INSTANCE_NUMBER = STAT.INSTANCE_NUMBER
   AND STAT.SNAP_ID = SS.SNAP_ID
   AND STAT.INSTANCE_NUMBER = 1
   AND SS.BEGIN_INTERVAL_TIME >= sysdate-7
   AND UPPER(STAT.SQL_ID) = upper('&sqlid')
 ORDER BY stat.snap_id desc
/

select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where upper(sql_id) like upper('&&sqlid')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1,2,begin_interval_time 
/
PROMPT +-------------------------------------------------+
PROMPT | Execution times of the various plans in history |
PROMPT +-------------------------------------------------+

set lines 200 pages 200
col execs for 999,999,999

col etime for 999,999,999.9
col avg_etime for 999,999.999
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999.9
col avg_pio for 9,999,999.9
col begin_interval_time for a30
col node for 99999
--break on plan_hash_value on startup_time skip 1
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/


=====================check plan diff sql id

set linesize 200
set pagesize 40
col sql_plan_hash_value for 9999999999 heading 'Sql|Plan Hash'
col rows_processed for 999999999 heading 'Rows|Processed'
col SORTS for 9999999

set verify off
col last_load for a19
col plan_hash_value for 9999999999 heading "Plan|Hash Value"
select plan_hash_value,to_char(LAST_LOAD_TIME,'DD-MON-YY HH24:MI:SS') last_load,SORTS,FETCHES,EXECUTIONS,PARSE_CALLS,DISK_READS,DIRECT_WRITES,BUFFER_GETS,ROWS_PROCESSED,HASH_VALUE,OBJECT_STATUS from gv$sqlarea where SQL_ID = '&&sqlid';

PROMPT +---------------------------------------+
PROMPT | &&sqlid Query Last 5 plan history    
PROMPT +---------------------------------------+

set lines 200
set pagesize 200
col snap_id for 999999
col instance_number for 9999
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999
col SQL_PROFILE for a32
col begin_interval_time for a26
col node for 99999
--define sqlid=&1
set verify off
select * from (select ss.snap_id, ss.instance_number node, to_char(begin_interval_time,'DD-MON-YY HH24:MI:SS') Begin_Interval, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id ='&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
--and begin_interval_time > sysdate-1/24
order by begin_interval_time desc,1, 2)
where rownum <= 5
/

PROMPT +---------------------------------------+
PROMPT | &&sqlid Avg Exec Plan History
PROMPT +---------------------------------------+

set lines 200 pages 200
col execs for 999,999,999
col etime for 999,999,999 heading 'Exec_Time(sec)'
col avg_etime for 999,990.999 heading 'Avg |Exec_Time(sec)'
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999 heading 'Avg | Logical IO'
col avg_pio for 9,999,999  heading 'Avg | Physical IO'
col begin_interval_time for a30
col node for 99999
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/
==================================


=====stale stat sql id ============

set lines 500
col table_owner for a15
col table_name for a30
col partition_name for a30
select distinct b.table_owner, b.table_name, b.partition_name, b.inserts, b.updates, b.deletes, b.TRUNCATED,c.STALE_STATS,
to_char(b.timestamp, 'mm/dd/yyyy hh24:mi') timestamp, to_char(c.last_analyzed, 'mm/dd/yyyy hh24:mi') last_analyzed,
c.num_rows
from (select distinct sql_id, object#, object_name, object_owner from gv$sql_plan where sql_id = '&&sqlid' UNION select distinct sql_id, object#, object_name, object_owner from dba_hist_sql_plan where sql_id = '&&sqlid') a
, sys.dba_tab_modifications b, dba_tab_statistics c
where a.sql_id = '&&sqlid'
and  a.OBJECT_OWNER = b.table_owner
and  a.OBJECT_NAME = b.table_name
and  b.table_owner = c.owner
and  b.table_name  = c.table_name
and  NVL(b.partition_name,'NONE') = NVL(c.partition_name,'NONE')
and b.table_name is not null
order by b.table_owner, b.table_name, b.partition_name;
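If the query flags a table with STALE_STATS = 'YES', the statistics can be refreshed with DBMS_STATS; the owner and table names below are placeholders:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',   -- placeholder schema
    tabname => 'ORDERS',      -- placeholder table
    cascade => TRUE);         -- also refresh index statistics
END;
/
```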
========================================================

====== Plan change report ==============


with samples as 
 (select *
  from dba_hist_sqlstat st
  join dba_hist_snapshot sn
  using (snap_id, instance_number) 
  where
  --  sql_id = '&&sqlid'
  --  parsing_schema_name = 'schema'
  --  and module != 'DBMS_SCHEDULER' -- exclude sql tuning advisor tasks
   begin_interval_time between sysdate - '&num_days' and sysdate
  and executions_delta > 0),
 

/* just statements that had at least 2 different plans during that time */
  sql_ids as 
   (select sql_id,
    count(distinct plan_hash_value) plancount
    from samples
    group by sql_id
    having count(distinct plan_hash_value) > 1),

/* per combination of sql_id and plan_hash_value, elapsed times per execution */
    plan_stats as 
     (select sql_id,
      plan_hash_value,
      min(parsing_schema_name) parsing_schema_name,
      count(snap_id) snap_count,
      max(end_interval_time) last_seen,
      min(begin_interval_time) first_seen,
      sum(executions_delta) total_execs,
      sum(elapsed_time_delta) / sum(executions_delta) elapsed_per_exec_thisplan
      from sql_ids
      join samples
      using (sql_id)
      group by sql_id, plan_hash_value),

/* how much different is the elapsed time most recently encountered from other elapsed times in the measurement interval? */
      elapsed_time_diffs as 
       (select p.*,
        elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc) elapsed_per_exec_diff,
        (elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc)) / elapsed_per_exec_thisplan elapsed_per_exec_diff_ratio
        from plan_stats p),

/* statements whose elapsed-time difference may matter; the actual threshold filter is applied in the final select */
        impacted_sql_ids as 
         (select *
          from elapsed_time_diffs ),

/* for those statements, get all required information */
          all_info as
           (select sql_id,
            plan_hash_value,
        --  parsing_schema_name,
            snap_count,
            last_seen,
            first_seen,
            total_execs,
            round(elapsed_per_exec_thisplan / 1e6, 2) elapsed_per_exec_thisplan,
            round(elapsed_per_exec_diff / 1e6, 2) elapsed_per_exec_diff,
            round(100 * elapsed_per_exec_diff_ratio, 2) elapsed_per_exec_diff_pct,
            round(max(abs(elapsed_per_exec_diff_ratio))
              over(partition by sql_id), 2) * 100 max_abs_diff,
            round(max(elapsed_per_exec_diff_ratio) over(partition by sql_id), 2) * 100 max_diff,
            'select * from table(dbms_xplan.display_awr(sql_id=>''' || sql_id ||
            ''', plan_hash_value=>' || plan_hash_value || '));' xplan
            from elapsed_time_diffs
            where sql_id in (select sql_id from impacted_sql_ids))

/* format the output */
            select
             a.sql_id,
             a.plan_hash_value,
            -- a.parsing_schema_name,
             a.snap_count,
             a.total_execs,
             to_char(a.elapsed_per_exec_thisplan, '999999.99') elapsed_per_exec_thisplan,
             to_char(a.elapsed_per_exec_diff, '999999.99') elapsed_per_exec_diff,
             to_char(a.elapsed_per_exec_diff_pct, '999999.99') elapsed_per_exec_diff_pct,
             to_char(a.first_seen, 'dd-mon-yy hh24:mi') first_seen,
             to_char(a.last_seen, 'dd-mon-yy hh24:mi') last_seen
             --a.xplan
             from all_info a
             where sql_id in (select distinct sql_id from all_info where elapsed_per_exec_diff_pct < -50)
             order by sql_id, elapsed_per_exec_diff_pct;
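When the report shows a regressed statement whose good plan is still in the cursor cache, one option is to pin that plan as an accepted SQL plan baseline; the sql_id and plan hash below are placeholders:

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  -- load the known-good plan from the cursor cache as an accepted baseline
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => 'fnp95yw1hcp7a',  -- placeholder sql_id
         plan_hash_value => 1234567890);      -- placeholder plan hash
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
```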
=============================

=====explain plan from sql_id==============
SELECT t.*
FROM gv$sql s, table(DBMS_XPLAN.DISPLAY_CURSOR(s.sql_id, s.child_number)) t 
WHERE s.sql_id='&&sql_id';
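If the cursor has already aged out of the shared pool, the historical plans can instead be listed from AWR (assuming the statement was captured in a snapshot):

```sql
SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('&&sql_id'));
```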
=============================


+++++++++++++++++++++++++++++++++++++++++++++++


To check unindexed foreign key columns in the database:
column columns format a30 word_wrapped
column table_name format a15 word_wrapped
column constraint_name format a15 word_wrapped
select table_name, constraint_name,
cname1 || nvl2(cname2,','||cname2,null) ||
nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
columns
from ( select b.table_name,
b.constraint_name,
max(decode( position, 1, column_name, null )) cname1,
 max(decode( position, 2, column_name, null )) cname2,
 max(decode( position, 3, column_name, null )) cname3,
 max(decode( position, 4, column_name, null )) cname4,
 max(decode( position, 5, column_name, null )) cname5,
 max(decode( position, 6, column_name, null )) cname6,
 max(decode( position, 7, column_name, null )) cname7,
 max(decode( position, 8, column_name, null )) cname8,
 count(*) col_cnt
 from (select substr(table_name,1,30) table_name,
 substr(constraint_name,1,30) constraint_name,
 substr(column_name,1,30) column_name,
 position
 from user_cons_columns ) a,
 user_constraints b
 where a.constraint_name = b.constraint_name
 and b.constraint_type = 'R'
 group by b.table_name, b.constraint_name
 ) cons
 where col_cnt > ALL
 ( select count(*)
 from user_ind_columns i,
 user_indexes ui
 where i.table_name = cons.table_name
 and i.column_name in (cname1, cname2, cname3, cname4,
 cname5, cname6, cname7, cname8 )
 and i.column_position <= cons.col_cnt
 and ui.table_name = i.table_name
 and ui.index_name = i.index_name
 and ui.index_type IN ('NORMAL','NORMAL/REV')
 group by i.index_name
 );
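For each constraint the query returns, the usual fix is an index whose leading columns match the foreign key columns; the table, column, and index names below are placeholders:

```sql
-- EMP / DEPT_ID are placeholder names for an unindexed FK reported above
CREATE INDEX emp_dept_fk_ix ON emp (dept_id);
```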


Blocking session:
============
select
(select username from v$session where sid=a.sid) blocker,
a.sid,
' is blocking ',
(select username from v$session where sid=b.sid) blockee,
b.sid
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
 and a.id1 = b.id1
 and a.id2 = b.id2;
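Once the blocker is identified, its session can be terminated if appropriate; the serial# must be looked up first (the sid and serial# below are placeholders):

```sql
SELECT sid, serial# FROM v$session WHERE sid = 123;  -- placeholder sid from the query above
ALTER SYSTEM KILL SESSION '123,45678';               -- placeholder 'sid,serial#'
-- On RAC, append the instance id: ALTER SYSTEM KILL SESSION '123,45678,@2';
```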


-- Event 10730 traces row-level security (VPD/FGAC) policy predicates; choose level 1, 2, or 3:
ALTER SESSION SET EVENTS '10730 trace name context forever, level 1';

perfstats_query:
SELECT
    SYSDATE,
    sql_id,
    sql_fulltext,
    hash_value,
    parsing_schema_name,
    module,
    first_load_time,
    last_active_time,
    parse_calls,
    executions,
    round(cpu_time / (executions * 1000000) ) AS cputime,
    round(user_io_wait_time / (executions * 1000000) ) AS iowait,
    round(elapsed_time / (executions * 1000000),2 ) AS elaptimesecs
FROM
    gv$sqlarea
WHERE
    executions != 0
    AND parsing_schema_name NOT IN (
        'SYS',
        'SYSTEM',
        'DBSNMP',
        'PERFSTATS'
    )
        AND module NOT IN (
        'SQL Developer',
        'Toad'
    )
    and round(elapsed_time/(executions*1000000)) > 3;

========
col BEGIN_INTERVAL_TIME for a30
select snap_id,BEGIN_INTERVAL_TIME,END_INTERVAL_TIME from dba_hist_snapshot where BEGIN_INTERVAL_TIME > systimestamp - 1 order by BEGIN_INTERVAL_TIME;
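If the interval of interest falls between two snapshots, a manual snapshot can be taken so that DBA_HIST_SQLSTAT captures the current deltas:

```sql
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```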

select snap_id,instance_number inst_id,sql_id,plan_hash_value,parsing_schema_name,EXECUTIONS_TOTAL,EXECUTIONS_DELTA,ELAPSED_TIME_TOTAL,ELAPSED_TIME_DELTA from DBA_HIST_SQLSTAT where sql_id=TRIM('&&sql_id.');

 select sql_id,plan_hash_value,elapsed_time,executions from gv$sql where sql_id =TRIM('&&sql_id.');

select Inst_id,SQL_FULLTEXT,SQL_ID,EXECUTIONS,ELAPSED_TIME from gv$sqlarea where sql_id =TRIM('&&sql_id.') ;


SELECT stat.snap_id, stat.sql_id, plan_hash_value, parsing_schema_name,
       elapsed_time_total, executions_total, elapsed_time_delta,
       nvl(executions_delta,0) executions_delta,
       round(elapsed_time_delta/1000000,2) exec_sec,
       ss.end_interval_time, sql_profile
FROM   dba_hist_sqlstat stat, dba_hist_sqltext txt, dba_hist_snapshot ss
WHERE  stat.sql_id = txt.sql_id
AND    stat.dbid = txt.dbid
AND    ss.dbid = stat.dbid
AND    ss.instance_number = stat.instance_number
AND    stat.snap_id = ss.snap_id
AND    stat.instance_number = 1
AND    ss.begin_interval_time >= sysdate - 10
AND    upper(stat.sql_id) = upper('&sqlid')
ORDER BY stat.snap_id desc
/

FIG project queries

##### Service add & Load Balancing on Add Service ####### srvctl add service -s wcccdmt.farmersinsurance.com -r wcccdmtx1,wcccdmtx2,wcc...