Wednesday, December 15, 2021

Database cloning

Database cloning using RMAN DUPLICATE from existing full database backups taken by RMAN

Create a pfile with a limited set of parameters and start the instance in NOMOUNT using it (a startup sketch follows the parameter list below).
Because the existing backup was taken from a CDB (container) database, include the enable_pluggable_database parameter as highlighted below.

*.audit_file_dest='/u01/app/oracle/admin/racnew/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.db_block_size=8192
*.db_name='racnew'
*.enable_pluggable_database=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racnewXDB)'
*.log_archive_dest_1='LOCATION=+FRA'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+FRA'
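
A minimal sketch of the NOMOUNT startup described above, assuming the pfile is saved as initracnew.ora under $ORACLE_HOME/dbs:

[oracle@srv1 dbs]$ export ORACLE_SID=racnew
[oracle@srv1 dbs]$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/product/19.0.0/db_1/dbs/initracnew.ora'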

RMAN COMMAND:
DUPLICATE DATABASE TO racnew BACKUP LOCATION '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB' NOFILENAMECHECK;

[oracle@srv1 dbs]$ rman auxiliary /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Dec 15 10:36:57 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: RACNEW (not mounted)

RMAN> DUPLICATE DATABASE TO racnew BACKUP LOCATION '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB' NOFILENAMECHECK;

Starting Duplicate Db at 15-DEC-2021 10:36:59
searching for database ID
found backup of database ID 2633676207

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''+FRA/RACNEW/CONTROLFILE/current.296.1091356675'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''RAC'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''racnew'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_6_1_CONTROL';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+FRA/RACNEW/CONTROLFILE/current.296.1091356675'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''RAC'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''racnew'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

Starting restore at 15-DEC-2021 10:38:47
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1939 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02
output file name=+FRA/RACNEW/CONTROLFILE/current.296.1091356675
Finished restore at 15-DEC-2021 10:38:50

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1939 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   set until scn  3327291;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 15-DEC-2021 10:38:59
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00007 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00009 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_1_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_1_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:05
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_3_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_3_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00010 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00011 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00012 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00013 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00014 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00016 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00017 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_2_1_FULL
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_2_1_FULL tag=RACDB_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:45
Finished restore at 15-DEC-2021 10:41:26

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=16 STAMP=1091356886 file name=+DATA/RACNEW/DATAFILE/system.282.1091356741
datafile 3 switched to datafile copy
input datafile copy RECID=17 STAMP=1091356886 file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741
datafile 4 switched to datafile copy
input datafile copy RECID=18 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741
datafile 5 switched to datafile copy
input datafile copy RECID=19 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/system.288.1091356807
datafile 6 switched to datafile copy
input datafile copy RECID=20 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807
datafile 7 switched to datafile copy
input datafile copy RECID=21 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/users.286.1091356741
datafile 8 switched to datafile copy
input datafile copy RECID=22 STAMP=1091356887 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807
datafile 9 switched to datafile copy
input datafile copy RECID=23 STAMP=1091356887 file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741
datafile 10 switched to datafile copy
input datafile copy RECID=24 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/system.292.1091356841
datafile 11 switched to datafile copy
input datafile copy RECID=25 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841
datafile 12 switched to datafile copy
input datafile copy RECID=26 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841
datafile 13 switched to datafile copy
input datafile copy RECID=27 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841
datafile 14 switched to datafile copy
input datafile copy RECID=28 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843
datafile 16 switched to datafile copy
input datafile copy RECID=29 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841
datafile 17 switched to datafile copy
input datafile copy RECID=30 STAMP=1091356888 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841

contents of Memory Script:
{
   set until scn  3327291;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 15-DEC-2021 10:41:31
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=14
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=2 sequence=6
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=15
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=2 sequence=7
channel ORA_AUX_DISK_1: reading from backup piece /media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_5_1_ARCHIVE
channel ORA_AUX_DISK_1: piece handle=/media/sf_Ahmed/backups_testdir/RESTRICT_GOLDCOPY_TESTPDB/RAC_20211213_5_1_ARCHIVE tag=RACDB_ARCHIVE
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_14.297.1091356897 thread=1 sequence=14
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_6.298.1091356897 thread=2 sequence=6
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_14.297.1091356897 RECID=1 STAMP=1091356896
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_15.299.1091356897 thread=1 sequence=15
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_6.298.1091356897 RECID=2 STAMP=1091356896
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_7.300.1091356897 thread=2 sequence=7
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_1_seq_15.299.1091356897 RECID=3 STAMP=1091356896
channel clone_default: deleting archived log(s)
archived log file name=+FRA/RACNEW/ARCHIVELOG/2021_12_15/thread_2_seq_7.300.1091356897 RECID=4 STAMP=1091356896
media recovery complete, elapsed time: 00:00:02
Finished recover at 15-DEC-2021 10:41:39
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''RACNEW'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''RACNEW'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area     666892632 bytes

Fixed Size                     9140568 bytes
Variable Size                599785472 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7634944 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "RACNEW" RESETLOGS ARCHIVELOG
  MAXLOGFILES    192
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES    32
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 200 M ,
  GROUP     2  SIZE 200 M
 DATAFILE
  '+DATA/RACNEW/DATAFILE/system.282.1091356741',
  '+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/system.288.1091356807',
  '+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/system.292.1091356841'
 CHARACTER SET AL32UTF8

sql statement: ALTER DATABASE ADD LOGFILE

  INSTANCE 'i2'
  GROUP     3  SIZE 200 M ,
  GROUP     4  SIZE 200 M

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "+DATA/RACNEW/DATAFILE/sysaux.283.1091356741",
 "+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741",
 "+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807",
 "+DATA/RACNEW/DATAFILE/users.286.1091356741",
 "+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807",
 "+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841",
 "+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file
renamed tempfile 2 to +DATA in control file
renamed tempfile 3 to +DATA in control file

cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741 RECID=1 STAMP=1091356952
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741 RECID=2 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807 RECID=3 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/users.286.1091356741 RECID=4 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807 RECID=5 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741 RECID=6 STAMP=1091356953
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841 RECID=7 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841 RECID=8 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841 RECID=9 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843 RECID=10 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841 RECID=11 STAMP=1091356954
cataloged datafile copy
datafile copy file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841 RECID=12 STAMP=1091356955

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=1091356952 file name=+DATA/RACNEW/DATAFILE/sysaux.283.1091356741
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/undotbs1.284.1091356741
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=1091356953 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/sysaux.287.1091356807
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/users.286.1091356741
datafile 8 switched to datafile copy
input datafile copy RECID=5 STAMP=1091356953 file name=+DATA/RACNEW/AA5804C6F40D3D1FE0534738A8C0DEDA/DATAFILE/undotbs1.289.1091356807
datafile 9 switched to datafile copy
input datafile copy RECID=6 STAMP=1091356953 file name=+DATA/RACNEW/DATAFILE/undotbs2.285.1091356741
datafile 11 switched to datafile copy
input datafile copy RECID=7 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/sysaux.291.1091356841
datafile 12 switched to datafile copy
input datafile copy RECID=8 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undotbs1.293.1091356841
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/undo_2.294.1091356841
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/users.296.1091356843
datafile 16 switched to datafile copy
input datafile copy RECID=11 STAMP=1091356954 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/dboard.290.1091356841
datafile 17 switched to datafile copy
input datafile copy RECID=12 STAMP=1091356955 file name=+DATA/RACNEW/AA5834B17FFE6058E0534738A8C0829B/DATAFILE/test_ts.295.1091356841

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Cannot remove created server parameter file
Finished Duplicate Db at 15-DEC-2021 10:44:06

Now convert this restored single-instance database to RAC.

[oracle@srv1 dbs]$ mv initracnew.ora initracnew1.ora

[oracle@srv1 dbs]$ vi initracnew1.ora   --> add the cluster parameters below
*.cluster_database=true
racnew2.thread=2
racnew1.thread=1
racnew2.undo_tablespace='UNDOTBS2'
racnew1.undo_tablespace='UNDOTBS1'
racnew2.instance_number=2
racnew1.instance_number=1

Add an entry for the instance to the oratab file:
racnew1:/u01/app/oracle/product/19.0.0/db_1:N

Create the audit file destination directory on node 2 (or wherever you want to start the second instance):
mkdir -p /u01/app/oracle/admin/racnew/adump

[oracle@srv1 dbs]$
[oracle@srv1 dbs]$ . oraenv
ORACLE_SID = [racnew] ? racnew1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@srv1 dbs]$

[oracle@srv1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 15 11:28:19 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup pfile=initracnew1.ora
ORACLE instance started.

Total System Global Area  721419208 bytes
Fixed Size                  9141192 bytes
Variable Size             654311424 bytes
Database Buffers           50331648 bytes
Redo Buffers                7634944 bytes
Database mounted.
Database opened.

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string
SQL>
SQL> create spfile='+DATA' from pfile='initracnew1.ora';

File created.

Note the location of the SPFILE in ASM:

ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   DEC 15 11:00:00  Y    spfile.300.1091360199
ASMCMD> pwd
+DATA/RACNEW/PARAMETERFILE

spfile='+DATA/RACNEW/PARAMETERFILE/spfile.300.1091360199'

Now add the database to the cluster:

srvctl add database -db racnew -oraclehome $ORACLE_HOME -dbtype RAC -spfile '+DATA/RACNEW/PARAMETERFILE/spfile.300.1091360199' -role PRIMARY -startoption OPEN -stopoption IMMEDIATE -dbname RACNEW -diskgroup DATA,FRA
srvctl add instance -db racnew -i racnew1 -n srv1.localdomain
srvctl add instance -db racnew -i racnew2 -n srv2.localdomain

Now stop the database that is running with pfile=initracnew1.ora from the SQL prompt, then start it with SRVCTL, as shown below.
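
For example, from SQL*Plus on node 1:

SQL> shutdown immediate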

srvctl start database -d racnew

[oracle@srv1 dbs]$ srvctl status database -d racnew -v -f
Instance racnew1 is running on node srv1. Instance status: Open.
Instance racnew2 is running on node srv2. Instance status: Open.

Monday, November 15, 2021

Dataguard

We are using the document below for a database upgrade from 12.1/12.2 to 19c (19.12), migrating the database into a container as a PDB in the same operation.

The process below is for a RAC environment:

Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Doc ID 2273304.1)

1. Before starting the upgrade, copy the PFILE and password file from the existing 12.1/12.2 home to the new 19.12 home on the DR side:

cp /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/inittestdbdr1.ora /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs

cp /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/orapwtestdbdr1.ora /u01/app/oracle/product/19.0.0.0/dbhome_1/dbs

The database upgraded successfully, but the DR database went out of sync at step 17 of Doc ID 2273304.1.

The issue appeared when we plugged the XML file in to create the PDB on the primary and then started MRP on the standby, as sketched below.
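
A rough sketch of that step, with a hypothetical PDB name and XML path (the exact commands are in Doc ID 2273304.1). On the primary:

SQL> create pluggable database testdbpdb using '/u01/stage/testdb.xml' nocopy tempfile reuse;

Then on the standby, restart managed recovery (MRP):

SQL> alter database recover managed standby database disconnect from session;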


Saturday, September 18, 2021

Jenkins in Oracle Database Task Automation


Now we’re ready to finish setting up the node via the Jenkins UI. In Jenkins, go to Manage Jenkins, then Manage Nodes, then click New Node. Here you can give your agent node a name, then select Permanent Agent and click OK. There are a variety of options you can use here to customize your node. All we care about right now is the Launch Method.
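
If you choose to launch the agent by connecting it to the controller, the node page shows a launch command along the lines of the hedged example below (host, node name, secret, and work directory here are placeholders):

java -jar agent.jar -jnlpUrl http://jenkins.example.com:8080/computer/oracle-db-node/jenkins-agent.jnlp -secret <secret> -workDir /home/oracle/jenkins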






Sunday, September 12, 2021

Important Points

Important points to be noted from daily activities:

While performing a re-org or Oracle home movement on a production database:

    i) When moving a production database to a new Oracle home, add the TNS entries for both the primary and standby databases to the new homes on both the PRIMARY and STANDBY sides.

    ii) When performing a re-org (table/index/LOB/partition moves), monitor the RECO disk group closely; flashback logs also accumulate there. If the DR sync-up is not keeping pace, the RECO disk group may fill up. A monitoring query is sketched below.
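
A quick way to watch disk group usage from SQL*Plus (standard v$asm_diskgroup columns):

select name, total_mb, free_mb, round(free_mb/total_mb*100,2) pct_free from v$asm_diskgroup;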

------------------------------------------------------------------------------------------------------

If you are unable to drop a tablespace because of BIN$ objects in dba_indexes, try dropping those indexes; if they will not drop, first drop the table's BIN$ constraints (from dba_constraints), then drop the BIN$ indexes. A sketch follows.
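
A sketch of that drop order, with hypothetical owner and object names:

-- find the recycle-bin constraints first
select owner, table_name, constraint_name from dba_constraints where constraint_name like 'BIN$%' and owner = 'APPUSER';
alter table appuser.orders drop constraint "BIN$abcdef0123==$0";
-- then drop the recycle-bin indexes
drop index appuser."BIN$fedcba3210==$0";
-- then retry the tablespace drop
drop tablespace old_ts including contents and datafiles;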

select sum(bytes) from v$sgastat where pool = 'shared pool';

The shared pool holds many other structures that are outside the scope of the corresponding parameter. SHARED_POOL_SIZE is typically the largest contributor to the shared pool as reported by SUM(BYTES), but it is not the only contributor.

In Oracle 10g, the SHARED_POOL_SIZE parameter controls the size of the shared pool, whereas in Oracle 9i and earlier it was just the largest contributor to it. Review your actual 9i (and earlier) shared pool size from V$SGASTAT and use that figure to set SHARED_POOL_SIZE in Oracle 10g and above.


SELECT DBTIMEZONE AS "Database Time Zone", SESSIONTIMEZONE AS "Session Time Zone" FROM dual;



Wednesday, September 1, 2021

SQL_TUNING


Turning on 10046 tracing for the current session:

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

Or, if the SQL is already running, you can turn tracing on for its sql_id:
How to trace a sql_id:
1. ALTER SYSTEM SET EVENTS 'sql_trace [sql: 3s1yukp05bzg7] bind=true, wait=true';
2. Execute the query.
3. alter system set events 'sql_trace off';
4. Find the trace file (see the query below).
You do not need to leave the trace on; collecting for about 10 minutes is enough.
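
To locate the trace file for the current session:

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';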

Explain plan of a sql query:

SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT * from soe.employee;
select * from table(dbms_xplan.display);

----------invisible index demo------------------------------
SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT * /*+ use_invisible_indexes */ from soe.employee;
select * from table(dbms_xplan.display);

--------------Indexes presence on a table------------------
SELECT aic.index_owner, aic.table_name, aic.INDEX_NAME, listagg(aic.column_name,',') within group (order by aic.column_position) cols FROM all_ind_columns aic where  aic.table_name='TABLE_NAME' group by aic.index_owner, aic.table_name, aic.INDEX_NAME order by aic.index_owner, aic.table_name ;

----------SQL Tracing-------------
1. Set the below trace at session level to get trace :
alter session set tracefile_identifier='06502';
alter session set events '06502 trace name errorstack level 3';
2. Run the plsql block and generate the error.
3. To close the trace set the following
alter session set events '06502 trace name context off';
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

---------Query execution time check-----------
------ this query gives the average elapsed time per execution, calculated from the execution counts and elapsed times in gv$sql and dba_hist_sqlstat

WITH
p AS (
SELECT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL
 UNION
SELECT plan_hash_value
  FROM dba_hist_sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions > 0
 GROUP BY
       plan_hash_value ),
a AS (
SELECT plan_hash_value,
       SUM(elapsed_time_total)/SUM(executions_total) avg_et_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions_total > 0
 GROUP BY
       plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(NVL(m.avg_et_secs, a.avg_et_secs)/1e6, 3) avg_et_secs
  FROM p, m, a
 WHERE p.plan_hash_value = m.plan_hash_value(+)
   AND p.plan_hash_value = a.plan_hash_value(+)
 ORDER BY
       avg_et_secs NULLS LAST;
--------------------------------------------------------------

---------sql plan changes app --------
set lines 200
set pagesize 200
col execs for 999,999,999
col avg_etime for 999,999.999 heading avg_exec|time(s)
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
col SQL_PROFILE for a45

set verify off
col PLAN_HASH_VALUE for 9999999999 heading 'Plan Hash'
col PARSING_SCHEMA_NAME for a10 heading 'Parsing Schema'
col END_INTERVAL_TIME for a30

SELECT STAT.SNAP_ID,STAT.SQL_ID, PLAN_HASH_VALUE, PARSING_SCHEMA_NAME, round(ELAPSED_TIME_DELTA/1000000,2) exec_sec, SS.END_INTERVAL_TIME,SQL_PROFILE FROM DBA_HIST_SQLSTAT STAT, DBA_HIST_SQLTEXT TXT, DBA_HIST_SNAPSHOT SS WHERE STAT.SQL_ID = TXT.SQL_ID AND STAT.DBID = TXT.DBID AND SS.DBID = STAT.DBID AND SS.INSTANCE_NUMBER = STAT.INSTANCE_NUMBER AND STAT.SNAP_ID = SS.SNAP_ID AND  STAT.INSTANCE_NUMBER = 1 AND SS.BEGIN_INTERVAL_TIME >= sysdate-7 AND UPPER(STAT.SQL_ID) =  upper('&sqlid') ORDER BY stat.snap_id desc
/

select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where upper(sql_id) like upper('&&sqlid')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1,2,begin_interval_time 
/
PROMPT +-------------------------------------------------+
PROMPT | Execution times of the various plans in history |
PROMPT +-------------------------------------------------+

set lines 200 pages 200
col execs for 999,999,999

col etime for 999,999,999.9
col avg_etime for 999,999.999
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999.9
col avg_pio for 9,999,999.9
col begin_interval_time for a30
col node for 99999
--break on plan_hash_value on startup_time skip 1
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/


=====================check plan diff sql id

set linesize 200
set pagesize 40
col sql_plan_hash_value for 9999999999 heading 'Sql|Plan Hash'
col rows_processed for 999999999 heading 'Rows|Processed'
col SORTS for 9999999

set verify off
col last_load for a19
col plan_hash_value for 9999999999 heading "Plan|Hash Value"
select plan_hash_value,to_char(LAST_LOAD_TIME,'DD-MON-YY HH24:MI:SS') last_load,SORTS,FETCHES,EXECUTIONS,PARSE_CALLS,DISK_READS,DIRECT_WRITES,BUFFER_GETS,ROWS_PROCESSED,HASH_VALUE,OBJECT_STATUS from gv$sqlarea where SQL_ID = '&&sqlid';

PROMPT +---------------------------------------+
PROMPT | &&sqlid Query Last 5 plan history    
PROMPT +---------------------------------------+

set lines 200
set pagesize 200
col snap_id for 999999
col instance_number for 9999
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999
col SQL_PROFILE for a32
col begin_interval_time for a26
col node for 99999
--define sqlid=&1
set verify off
select * from (select ss.snap_id, ss.instance_number node, to_char(begin_interval_time,'DD-MON-YY HH24:MI:SS') Begin_Interval, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id ='&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
--and begin_interval_time > sysdate-1/24
order by begin_interval_time desc,1, 2)
where rownum <= 5
/

PROMPT +---------------------------------------+
PROMPT | &&sqlid Avg Exec Plan History
PROMPT +---------------------------------------+

set lines 200 pages 200
col execs for 999,999,999
col etime for 999,999,999 heading 'Exec_Time(sec)'
col avg_etime for 999,990.999 heading 'Avg |Exec_Time(sec)'
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999 heading 'Avg | Logical IO'
col avg_pio for 9,999,999  heading 'Avg | Physical IO'
col begin_interval_time for a30
col node for 99999
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/
==================================

------------processing sql id ------------------------


set linesize 200
set pagesize 40
col sql_plan_hash_value for 9999999999 heading 'Sql|Plan Hash'
col rows_processed for 999999999 heading 'Rows|Processed'

set verify off
col last_load for a19 heading "Last Load Time"
col plan_hash_value for 9999999999 heading "Plan|Hash Value"
select plan_hash_value,to_char(LAST_LOAD_TIME,'DD-MON-YY HH24:MI:SS') last_load,SORTS,FETCHES,EXECUTIONS,PARSE_CALLS,DISK_READS,DIRECT_WRITES,BUFFER_GETS,ROWS_PROCESSED,HASH_VALUE,OBJECT_STATUS from gv$sqlarea where SQL_ID = '&&sqlid';
=========================================

=====stale stat sql id ============

set lines 500
col table_owner for a15
col table_name for a30
col partition_name for a30
select distinct b.table_owner, b.table_name, b.partition_name, b.inserts, b.updates, b.deletes, b.TRUNCATED,c.STALE_STATS,
to_char(b.timestamp, 'mm/dd/yyyy hh24:mi') timestamp, to_char(c.last_analyzed, 'mm/dd/yyyy hh24:mi') last_analyzed,
c.num_rows
from (select distinct sql_id, object#, object_name, object_owner from gv$sql_plan where sql_id = '&&sqlid' UNION select distinct sql_id, object#, object_name, object_owner from dba_hist_sql_plan where sql_id = '&&sqlid') a
, sys.dba_tab_modifications b, dba_tab_statistics c
where a.sql_id = '&&sqlid'
and  a.OBJECT_OWNER = b.table_owner
and  a.OBJECT_NAME = b.table_name
and  b.table_owner = c.owner
and  b.table_name  = c.table_name
and  NVL(b.partition_name,'NONE') = NVL(c.partition_name,'NONE')
and b.table_name is not null
order by b.table_owner, b.table_name, b.partition_name;
========================================================

======Plan change REport ==============


with samples as 
 (select *
  from dba_hist_sqlstat st
  join dba_hist_snapshot sn
  using (snap_id, instance_number) 
  where 
  --  sql_id='sqlid'
-- parsing_schema_name = 'schema'
  --and module != 'DBMS_SCHEDULER' -- exclude sql tuning tasks
   begin_interval_time between sysdate - '&num_days' and sysdate
  and executions_delta > 0),
 

/* just statements that had at least 2 different plans during that time */
  sql_ids as 
   (select sql_id,
    count(distinct plan_hash_value) plancount
    from samples
    group by sql_id
    having count(distinct plan_hash_value) > 1),

/* per combination of sql_id and plan_hash_value, elapsed times per execution */
    plan_stats as 
     (select sql_id,
      plan_hash_value,
      min(parsing_schema_name),
      count(snap_id) snap_count,
      max(end_interval_time) last_seen,
      min(begin_interval_time) first_seen,
      sum(executions_delta) total_execs,
      sum(elapsed_time_delta) / sum(executions_delta) elapsed_per_exec_thisplan
      from sql_ids
      join samples
      using (sql_id)
      group by sql_id, plan_hash_value),

/* how much different is the elapsed time most recently encountered from other elapsed times in the measurement interval? */
      elapsed_time_diffs as 
       (select p.*,
        elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc) elapsed_per_exec_diff,
        (elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc)) / elapsed_per_exec_thisplan elapsed_per_exec_diff_ratio
        from plan_stats p),

/* consider just statements for which the difference is bigger than our configured threshold */
        impacted_sql_ids as 
         (select *
          from elapsed_time_diffs ),

/* for those statements, get all required information */
          all_info as
           (select sql_id,
            plan_hash_value,
        --    parsing_schema_name,
            snap_count,
            last_seen,
first_seen,
total_execs,
            round(elapsed_per_exec_thisplan / 1e6, 2) elapsed_per_exec_thisplan,
            round(elapsed_per_exec_diff / 1e6, 2) elapsed_per_exec_diff,
            round(100 * elapsed_per_exec_diff_ratio, 2) elapsed_per_exec_diff_pct,
            round(max(abs(elapsed_per_exec_diff_ratio))
              over(partition by sql_id), 2) * 100 max_abs_diff,
            round(max(elapsed_per_exec_diff_ratio) over(partition by sql_id), 2) * 100 max_diff,
            'select * from table(dbms_xplan.display_awr(sql_id=>''' || sql_id ||
            ''', plan_hash_value=>' || plan_hash_value || '));' xplan
            from elapsed_time_diffs
            where sql_id in (select sql_id from impacted_sql_ids))

/* format the output */
            select 
             a.sql_id,
plan_hash_value,
            -- parsing_schema_name,
             a.snap_count,
total_execs,
     to_char(a.elapsed_per_exec_thisplan, '999999.99') elapsed_per_exec_thisplan,
             to_char(a.elapsed_per_exec_diff, '999999.99') elapsed_per_exec_diff,
             to_char(a.elapsed_per_exec_diff_pct, '999999.99') elapsed_per_exec_diff_pct,
to_char(first_seen, 'dd-mon-yy hh24:mi') first_seen,
to_char(last_seen, 'dd-mon-yy hh24:mi') last_seen
             --xplan
             from all_info a where sql_id in (select distinct sql_id from all_info where elapsed_per_exec_diff_pct < -50)
             order by sql_id, elapsed_per_exec_diff_pct;
=============================

=====explain plan from sql_id==============
SELECT t.*
FROM gv$sql s, table(DBMS_XPLAN.DISPLAY_CURSOR(s.sql_id, s.child_number)) t 
WHERE s.sql_id='&&sql_id';
=============================

Stale Stat check query

select distinct b.table_owner, b.table_name, b.partition_name, b.inserts, b.updates, b.deletes, b.TRUNCATED,c.STALE_STATS,
to_char(b.timestamp, 'mm/dd/yyyy hh24:mi') timestamp, to_char(c.last_analyzed, 'mm/dd/yyyy hh24:mi') last_analyzed,
c.num_rows
from (select distinct sql_id, object#, object_name, object_owner from gv$sql_plan where sql_id = '&&sqlid' UNION select distinct sql_id, object#, object_name, object_owner from dba_hist_sql_plan where sql_id = '&&sqlid') a
, sys.dba_tab_modifications b, dba_tab_statistics c
where a.sql_id = '&&sqlid'
and  a.OBJECT_OWNER = b.table_owner
and  a.OBJECT_NAME = b.table_name
and  b.table_owner = c.owner
and  b.table_name  = c.table_name
and  NVL(b.partition_name,'NONE') = NVL(c.partition_name,'NONE')
and b.table_name is not null
order by b.table_owner, b.table_name, b.partition_name;

+++++++++++++++++++++++++++++++++++++++++++++++


To check for unindexed foreign key columns in the database:
column columns format a30 word_wrapped
column table_name format a15 word_wrapped
column constraint_name format a15 word_wrapped
select table_name, constraint_name,
cname1 || nvl2(cname2,','||cname2,null) ||
nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
columns
from ( select b.table_name,
b.constraint_name,
max(decode( position, 1, column_name, null )) cname1,
 max(decode( position, 2, column_name, null )) cname2,
 max(decode( position, 3, column_name, null )) cname3,
 max(decode( position, 4, column_name, null )) cname4,
 max(decode( position, 5, column_name, null )) cname5,
 max(decode( position, 6, column_name, null )) cname6,
 max(decode( position, 7, column_name, null )) cname7,
 max(decode( position, 8, column_name, null )) cname8,
 count(*) col_cnt
 from (select substr(table_name,1,30) table_name,
 substr(constraint_name,1,30) constraint_name,
 substr(column_name,1,30) column_name,
 position
 from user_cons_columns ) a,
 user_constraints b
 where a.constraint_name = b.constraint_name
 and b.constraint_type = 'R'
 group by b.table_name, b.constraint_name
 ) cons
 where col_cnt > ALL
 ( select count(*)
 from user_ind_columns i,
 user_indexes ui
 where i.table_name = cons.table_name
 and i.column_name in (cname1, cname2, cname3, cname4,
 cname5, cname6, cname7, cname8 )
 and i.column_position <= cons.col_cnt
 and ui.table_name = i.table_name
 and ui.index_name = i.index_name
 and ui.index_type IN ('NORMAL','NORMAL/REV')
 group by i.index_name
 );


Blocking session:
============
select
(select username from v$session where sid=a.sid) blocker,
a.sid,
' is blocking ',
(select username from v$session where sid=b.sid) blockee,
b.sid
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
 and a.id1 = b.id1
 and a.id2 = b.id2;


ALTER SESSION SET EVENTS '10730 trace name context forever, level N';   ----> N is 1, 2, or 3

perfstats_query:
SELECT
    SYSDATE,
    sql_id,
    sql_fulltext,
    hash_value,
    parsing_schema_name,
    module,
    first_load_time,
    last_active_time,
    parse_calls,
    executions,
    round(cpu_time / (executions * 1000000) ) AS cputime,
    round(user_io_wait_time / (executions * 1000000) ) AS iowait,
    round(elapsed_time / (executions * 1000000),2 ) AS elaptimesecs
FROM
    gv$sqlarea
WHERE
    executions != 0
    AND parsing_schema_name NOT IN (
        'SYS',
        'SYSTEM',
        'DBSNMP',
        'PERFSTATS'
    )
        AND module NOT IN (
        'SQL Developer',
        'Toad'
    )
    and round(elapsed_time/(executions*1000000))>3;

========
col BEGIN_INTERVAL_TIME for a30
select snap_id,BEGIN_INTERVAL_TIME,END_INTERVAL_TIME from dba_hist_snapshot where BEGIN_INTERVAL_TIME > systimestamp -1 order by BEGIN_INTERVAL_TIME;

select snap_id,instance_number inst_id,sql_id,plan_hash_value,parsing_schema_name,EXECUTIONS_TOTAL,EXECUTIONS_DELTA,ELAPSED_TIME_TOTAL,ELAPSED_TIME_DELTA from DBA_HIST_SQLSTAT where sql_id=TRIM('&&sql_id.');

 select sql_id,plan_hash_value,elapsed_time,executions from gv$sql where sql_id =TRIM('&&sql_id.');

select Inst_id,SQL_FULLTEXT,SQL_ID,EXECUTIONS,ELAPSED_TIME from gv$sqlarea where sql_id =TRIM('&&sql_id.') ;


SELECT STAT.SNAP_ID,STAT.SQL_ID, PLAN_HASH_VALUE, PARSING_SCHEMA_NAME,elapsed_time_total,executions_total,elapsed_time_delta, nvl(executions_delta,0) executions_delta ,round(ELAPSED_TIME_DELTA/1000000,2) exec_sec, SS.END_INTERVAL_TIME,SQL_PROFILE FROM DBA_HIST_SQLSTAT STAT, DBA_HIST_SQLTEXT TXT, DBA_HIST_SNAPSHOT SS 
WHERE STAT.SQL_ID = TXT.SQL_ID AND STAT.DBID = TXT.DBID AND SS.DBID = STAT.DBID AND SS.INSTANCE_NUMBER = STAT.INSTANCE_NUMBER AND STAT.SNAP_ID = SS.SNAP_ID AND  STAT.INSTANCE_NUMBER = 1 AND SS.BEGIN_INTERVAL_TIME >= sysdate-10 AND UPPER(STAT.SQL_ID) =  upper('&sqlid') ORDER BY stat.snap_id desc
/

Monday, August 30, 2021

Table Fragmentation

Checking the deleted (free) space in a table (assumes an 8 KB block size):

SELECT BLOCKS, BLOCKS*8192/1024 TOTAL_SIZE_KB, AVG_SPACE, round(BLOCKS*AVG_SPACE/1024,2) FREE_SPACE_KB FROM USER_TABLES WHERE TABLE_NAME='EMPLOYEE';
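
If the table shows a lot of free space, one way to reclaim it is a segment shrink; a sketch, assuming enabling row movement is acceptable for this table:

ALTER TABLE employee ENABLE ROW MOVEMENT;
ALTER TABLE employee SHRINK SPACE CASCADE;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMPLOYEE');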

Monday, August 23, 2021

Shell Scripting

 Shell Scripting Handy

User creation at OS level (usercreate_os.sh)

#!/bin/bash

#this script creates an account on the local system.

#you will be prompted for the account name and password

#Ask for username

read -p 'Enter the username: ' USER_NAME

#Ask for the real name

read -p 'Enter the name of the person who is this account for: ' COMMENT

#ask for the password

read -p 'Enter the password to use for the account: ' PASSWORD

#create the username

useradd -c "${COMMENT}" -m ${USER_NAME}

#set the password for the username

echo ${PASSWORD} | passwd --stdin ${USER_NAME}

#force password change on first login

passwd -e ${USER_NAME}

######################################################################

Script2:

RANDOM: each time this parameter is referenced, a random integer between 0 and 32767 is generated. The sequence of random numbers may be initialized by assigning a value to RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.


[oracle@kolkata02 ~]$ echo ${RANDOM}

24092

[oracle@kolkata02 ~]$ echo ${RANDOM}

1748

[oracle@kolkata02 ~]$ echo ${RANDOM}

2398

[oracle@kolkata02 ~]$ !v   ----> re-runs the most recent command starting with 'v' (here vi, reopening the last edited file)

#!/bin/bash

#This scripts generates a list of random passwords

#A random number as a password

PASSWORD=${RANDOM}

echo "${PASSWORD}"

#Three random numbers together

PASSWORD="${RANDOM}${RANDOM}${RANDOM}"

echo "${PASSWORD}"

#use the current date/time as the basis for the password

PASSWORD=$(date +%s)    ---->'+' is for format and %s is for seconds since 1970 UTC

echo "${PASSWORD}"

#use nano seconds to act as randomization

PASSWORD=$(date +%s%N)   --->%N is the nano seconds

echo "${PASSWORD}"

# A better password

PASSWORD=$(date +%s%N | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better passsword

PASSWORD=$(date +%s%N${RANDOM}${RANDOM} | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better passsword

Special_character=$(echo '!@#$%^&*()_+' | fold -w1 | shuf | head -c1)

echo "${PASSWORD}${Special_character}"  --->here special character will be appended

++++++++++++++++++++++++
[oracle@kolkata02 ~]$ echo "1" >> cheksumdata.txt   ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
[oracle@kolkata02 ~]$ echo "2" >> cheksumdata.txt       ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
asdfdsdfasdf34343434
1
2
++++++++++++++++++++++++
head -2 /etc/passwd
head -n1 /etc/passwd
head -n -1 /etc/passwd
head -c1 /etc/passwd
head -c2 /etc/passwd
echo "testing" | head -c2
date +%s%N | sha256sum | head -c32
++++++++++++++++++++++++
Parameter is the variable used inside the shell script.
Argument is the value passed to the parameter.
${0} ---> positional parameter that holds the name of the script itself
[oracle@kolkata02 ~]$ which head        ---->which -a head
/usr/bin/head
${#} ---> the number of arguments passed to the script
${@} ---> used in for loops when we don't know how many arguments will be passed; each argument stays a separate word
${*} ---> combines all the user inputs/arguments into a single argument
[oracle@kolkata02 ~]$ for username in zameer naseer ayaan
> do 
> echo hi ${username}
> done
hi zameer
hi naseer
hi ayaan
[oracle@kolkata02 ~]$

#!/bin/bash
echo 'you executed this command: '${0}
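
A small sketch tying the positional parameters together (save as posdemo.sh and run it with a few arguments):

#!/bin/bash
#demonstrate ${0}, ${#}, ${@} and ${*}
echo "script name: ${0}"
echo "argument count: ${#}"
for ARG in "${@}"    #"${@}" keeps each argument as a separate word
do
  echo "argument: ${ARG}"
done
echo "all arguments as one word: ${*}"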

Other constructs worth knowing: true, sleep, shift, and while loops; a sketch combining them follows.
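
A minimal sketch that processes every argument with while and shift:

#!/bin/bash
#loop until all positional parameters are consumed
while [[ "${#}" -gt 0 ]]
do
  echo "processing: ${1}"
  shift        #drop ${1}; the old ${2} becomes ${1}
  sleep 1      #pause one second between arguments
done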
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#This script demonstrates I/O redirection
#standard input
#standard output
#standard error
#redirect STDOUT to a file
file="/tmp/data"
head -n1 /etc/passwd > ${file}

#redirect STDIN to a program
read LINE < ${file}
echo "the line contains : ${file}"

#we can change the password of the user by redirecting the output from password file to passwd
[oracle@kolkata02 ~]$ echo "secret" > password
[oracle@kolkata02 ~]$ cat password
secret
[root@kolkata02 oracle]# sudo passwd --stdin testuser1 < password
Changing password for user testuser1.
passwd: all authentication tokens updated successfully.
# ">" overwrite the existing content in a file
head -n3 /etc/passwd > ${file}
echo contents of the file ${file} is:
cat ${file}

#redirect STDOUT to a file, appending to the file
echo "${RANDOM} ${RANDOM}" >> ${file}
echo "${RANDOM} ${RANDOM}" >> ${file}
echo
echo "contents of the file: ${file}"
cat ${file}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[oracle@kolkata02 ~]$ read x < /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$ read x 0< /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out    --->this will not redirect the error to the file head.out
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$ cat head.out
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>head.err  ---> this will write the error into the file head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
The 2>>head.err below appends the error instead of overwriting:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
To send standard output and standard error to the same file, there is an old syntax and a new syntax:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.both 2>&1
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
The new syntax for the same operation is:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile &> head.both
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
&>> ---> will append the output

===============Below is to display the abended process logs================

#!/bin/bash
BASE_DIR=/u01/app/oracle/scripts
log_dir=$BASE_DIR/log_dir
gg_processes=$BASE_DIR/log_dir/gg_processes.log
gg_process_Detail=$BASE_DIR/gg_process_Detail.txt
Abended_ggprocess=$BASE_DIR/Abended_ggprocess.txt
gg_process_log_output=$BASE_DIR/gg_process_log_output.txt
rm ${gg_process_log_output}
echo $ORACLE_HOME
echo $GG_HOME
cd $GG_HOME
pwd

$GG_HOME/ggsci <<EOF > ${gg_processes}
info all
exit
EOF

cat ${gg_processes} | grep ABENDED | awk '{ print $3 }' > ${Abended_ggprocess}

while read line
do
#echo -e "\n"
#echo -e "The below is the output of last four lines of ${line} GG Process log, please check\n" >>${gg_process_log_output}
echo -e "The below is the output of last four lines of `echo -e "\e[1;31m ${line} \e[0m"`  GG Process log, please check\n" >>${gg_process_log_output}
tail -4 /u01/app/oracle/product/ogg/dirrpt/${line}.rpt >> ${gg_process_log_output}
echo -e "\n" >> ${gg_process_log_output}
done <${BASE_DIR}/Abended_ggprocess.txt

========================================================


QFSDP Patching

Step 1: Cell patching in rolling mode.

Step 2: Downtime patching for the components below, in order:

            First the IB switches,

            then the GI home patch,

            then the ORACLE homes,

            then the yum upgrade,

            then datapatch for all databases. If Oracle's automation script has issues with CDB databases, run datapatch manually on each database, as sketched below.
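
A sketch of the manual datapatch run, per database home (for a CDB, datapatch by default covers CDB$ROOT and all open PDBs):

[oracle@srv1 ~]$ cd $ORACLE_HOME/OPatch
[oracle@srv1 OPatch]$ ./datapatch -verbose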

Sunday, August 22, 2021

DataPump - expdp/impdp

 Expdp with Query option: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=expdp_zamdbtdb_tblbkup_new.log

tables=schema1.table1

parallel=10

query=schema1.table1:"where column1 NOT IN (SELECT column2 FROM schema1.table2 where column3= 'value')"

cluster=N

 Impdp: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=imdp_zamdbtdb_tblbkup_new.log

tables=schema1.table1

table_exists_action=replace

parallel=10

cluster=N

Regular Commands:

select directory_name,directory_path from dba_directories where directory_name='DATA_PUMP';

select sum(bytes)/1024/1024/1024 from dba_segments;

create directory DATA_PUMP as '/opt/backups';

grant read,write on directory DATA_PUMP to system;

grant all on directory DATA_PUMP to public;

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log tables=schema1.table1 parallel=16 exclude=statistics

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics

nohup expdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics &

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=impdp_test_tblbkp.log schemas=schema1 TABLE_EXISTS_ACTION=REPLACE parallel=16 exclude=statistics &

Cluster=N --> use this in a RAC environment when the export/import runs to/from a local (non-shared) mount point

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_schema=schema_old_name:schema_new_name remap_tablespace=OLD_TABLESPACE_NAME:NEW_TABLESPACE_NAME parallel=5 Cluster=N &

impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_table=OLD_TABLE_NAME:NEW_TABLE_NAME remap_schema=schema_old_name:schema_new_name parallel=5

Connecting using database service:

nohup expdp system/'password'@kol-scan:1521/zamdbtdb.localdomain parfile=expdp_tablebkup_01.par &
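
To monitor or reattach to a running Data Pump job (the job name below is a placeholder; look up the real one first):

select owner_name, job_name, state from dba_datapump_jobs;

expdp system/'password' attach=SYS_EXPORT_SCHEMA_01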




Database Upgrade (autoupgrade.jar)

The most recent version of AutoUpgrade can be downloaded from My Oracle Support note 2485457.1.



Copy autoupgrade.jar to any convenient location:

[oracle@kolkata02 auto_upgrade]$ cp /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar /home/oracle/auto_upgrade

Create a sample config file using :

[oracle@kolkata02 auto_upgrade]$ java -jar /home/oracle/auto_upgrade/autoupgrade.jar -create_sample_file config

Created sample configuration file /home/oracle/auto_upgrade/sample_config.cfg

Now create your own config file using sample_config.cfg

[oracle@kolkata02 ~]$ vi /home/oracle/config_zamdbtdb.cfg

global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/zamdbtdb1

upg1.log_dir=/home/oracle/auto_upgrade/logs/zamdbtdb1

upg1.sid=zamdbtdb1

upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1

upg1.target_cdb=CNTESTDB1

upg1.target_pdb_name=zamdbtdbx         --> for prechecks (analyze mode), remove this target_pdb_name parameter

upg1.target_pdb_copy_option=file_name_convert=('+DATA1/ZAMDBTDB','+DATA1','+RECO1/ZAMDBTDB','+RECO1','+FRA/ZAMDBTDB','+FRA') --> use this option to upgrade with a copy of the datafiles; otherwise remove it

upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1

upg1.start_time=now

upg1.upgrade_node=kolkata02.localdomain

upg1.run_utlrp=yes

upg1.timezone_upg=yes

Save it, then run the prechecks:

nohup java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &

Otherwise you can run it in console mode as below: (PRECHECKS)

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze

The prechecks succeeded; you can review the details in the HTML report:

[oracle@kolkata02 prechecks]$ pwd

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/102/prechecks

[oracle@kolkata02 prechecks]$ firefox zamdbtdb_preupgrade.html

Now start the actual upgrade, in either noconsole or console mode:
nohup java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &   --> noconsole mode

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy --> console mode
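
While the job runs, the AutoUpgrade console accepts commands such as lsj (list jobs) and status -job <n>, both shown below. If a job stops with an error, it can usually be resumed after the cause is fixed; a sketch using the job number reported by lsj:

upg> resume -job 103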


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   2
End time:        N/A
Last update:     2021-08-22T13:01:35.143
Stage:           PRECHECKS
Operation:       PREPARING
Status:          RUNNING
Pending stages:  8
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         1 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Checks

Error Details:
None


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   136
End time:        N/A
Last update:     2021-08-22T15:14:45.923
Stage:           POSTFIXUPS
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  3
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        2 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/postfixups
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
+---------+---------------+-------+
| DATABASE|          FIXUP| STATUS|
+---------+---------------+-------+
|zamdbtdb1|POST_DICTIONARY|STARTED|
+---------+---------------+-------+

Error Details:
None

upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   177
End time:        N/A
Last update:     2021-08-22T15:57:39.973
Stage:           NONCDBTOPDB
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  1
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        16 min
    POSTUPGRADE       <1 min
    NONCDBTOPDB       26 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Executing noncdb_to_pdb.sql

Error Details:
None

At this point the noncdb_to_pdb conversion is running; it is in the final stage, where utlrp.sql executes.

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
[oracle@kolkata02 noncdbtopdb]$ ls -ltr
total 444
-rwx------. 1 oracle oinstall      0 Aug 22 15:31 noncdb_to_pdb_zamdbtdb.log.lck
-rwx------. 1 oracle dba        8292 Aug 22 15:32 zamdbtdbx.xml
-rwx------. 1 oracle oinstall    702 Aug 22 15:33 createpdb_zamdbtdb.log
-rwx------. 1 oracle oinstall 376832 Aug 22 15:44 noncdbtopdb_zamdbtdb.log
-rwx------. 1 oracle oinstall  60924 Aug 22 16:54 noncdb_to_pdb_zamdbtdb.log

[oracle@kolkata02 noncdbtopdb]$ tail -100f noncdb_to_pdb_zamdbtdb.log
2021-08-22 16:54:12.033 INFO [(SQLPATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_SID=CNTESTDB1), (ORACLE_UNQNAME=zamdbtdb), (ORACLE_PATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_BASE=/u01/app/oracle), (TWO_TASK=N/A), (ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1), (TNS_ADMIN=N/A), (LDAP_ADMIN=N/A), (PERL5LIB=N/A), (WORKDIR=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb)] - ExecutionEnv.addEnvToProcess
2021-08-22 16:54:12.034 INFO Starting - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.034 INFO Finished - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.036 INFO End Setting Oracle Environment - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.036 INFO Begin Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO End Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO Executing SQL [SELECT COUNT(*) FROM sys.obj$ WHERE status IN (4, 5, 6);] in [CNTESTDB1, container:zamdbtdbx] - ExecuteSql$SQLClient.run
2021-08-22 16:54:12.628 INFO Progress was detected in noncdb_to_pdb.sql script execution due to fewer invalid objects[10] present in the pdb - NonCDBToPDBSQL$CheckProgress.run

Errors in database [zamdbtdb1]
Stage     [NONCDBTOPDB]
Operation [STOPPED]
Status    [ERROR]
Info    [
Error: UPG-1699
[Unexpected exception error]
Cause: Error finding error definition, contact Oracle Support
For further details, see the log file located at /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]

-------------------------------------------------
Logs: [/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]
-------------------------------------------------
upg>
upg> lsj
+----+---------+-----------+---------+------+--------------+--------+--------+
|Job#|  DB_NAME|      STAGE|OPERATION|STATUS|    START_TIME| UPDATED| MESSAGE|
+----+---------+-----------+---------+------+--------------+--------+--------+
| 103|zamdbtdb1|NONCDBTOPDB|  STOPPED| ERROR|21/08/22 13:00|17:06:27|UPG-1699|
+----+---------+-----------+---------+------+--------------+--------+--------+
Total jobs 1

At this step we found that the upgrade itself completed successfully and the PDB plug-in also finished. But due to PDB plug-in violations, the PDB stayed in RESTRICTED state and would not come to OPEN.
So, identifying the plug-in violations and resolving them allowed the PDB to open normally.

2021-08-22 15:13:58.793 INFO [Upgrading] is [100%] completed for [zamdbtdb]
+---------+--------------------------------+
|CONTAINER|                      PERCENTAGE|
+---------+--------------------------------+
| zamdbtdb|SUCCESSFULLY UPGRADED [zamdbtdb]|
+---------+--------------------------------+
2021-08-22 15:13:58.940 INFO Error opening file [/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/initzamdbtdb1.ora] for reading
2021-08-22 15:14:01.626 INFO Creating spfile completed with success
2021-08-22 15:14:01.627 INFO SUCCESSFULLY UPGRADED [zamdbtdb]
2021-08-22 15:14:01.755 INFO zamdbtdb Return status is SUCCESS
2021-08-22 15:14:24.484 INFO Analyzing zamdbtdb1, 11 checks will run using 2 threads
2021-08-22 15:14:42.616 INFO Using /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg to identify required fixups
2021-08-22 15:14:42.714 INFO Content of the checklist /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg is:
2021-08-22 15:31:21.106 INFO Guarantee Restore Point (GRP) successfully removed [ZAMDBTDB][AUTOUPGRADE_9212_ZAMDBTDB1122010]
2021-08-22 15:33:38.901 INFO No entry was found for [zamdbtdb1:/u01/app/oracle/product/19.0.0.0/dbhome_1] in /etc/oratab
2021-08-22 17:06:16.137 INFO /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora
2021-08-22 17:06:26.615 ERROR Dispatcher failed: AutoUpgException [ERROR3007#Errors executing [CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora';

CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora'
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
] [zamdbtdb1]]

select name, cause, type, status, action, message, time from pdb_plug_in_violations;   ---> run this while connected to the CDB (CNTESTDB); it revealed the action plan below

Some interim patches were installed in the PDB but not in the CDB: when Database Release Update 19.12.0.0.210720 (32904851) was applied, datapatch was not run at the CDB level. The upgraded PDB was plugged in with all the interim patches already installed, so the violation was reported as "Not installed in the CDB but installed in the PDB".
Running the datapatch command at the CDB level resolved the violations, and the PDB then came to the OPEN state.
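
For reference, running datapatch at the CDB level typically looks like the sketch below (standard OPatch location assumed; run from the 19c home against the CDB):

[oracle@kolkata02 ~]$ cd $ORACLE_HOME/OPatch
[oracle@kolkata02 OPatch]$ ./datapatch -verbose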
The error details are saved under "E:\zameer_workspace\AutomationScripts\DBUpgrade\database_upgrade_steps".
SQL> alter pluggable database ZAMDBTDBX close instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      MOUNTED
SQL>
SQL>
SQL> alter pluggable database ZAMDBTDBX open  instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      READ WRITE NO

SQL>  alter pluggable database ZAMDBTDBX save state instances=all;

Pluggable database altered.
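
The saved state can be confirmed with a quick query (DBA_PDB_SAVED_STATES is the standard view for this):

SQL> select con_name, instance_name, state from dba_pdb_saved_states;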
[oracle@kolkata02 ~]$ srvctl status database -d cntestdb -v -f
Instance CNTESTDB1 is running on node kolkata02 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.
Instance CNTESTDB2 is running on node kolkata03 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.
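
In RAC, the PDB open mode can also be confirmed across all instances with a minimal check:

SQL> select inst_id, name, open_mode, restricted from gv$pdbs where name='ZAMDBTDBX';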
