Monday, August 30, 2021

Table Fragmentation

 Checking the deleted (free) space in a table (8192 assumes an 8 KB block size):

SELECT BLOCKS, BLOCKS*8192/1024 TOTAL_SIZE_KB, AVG_SPACE, round(BLOCKS*AVG_SPACE/1024,2) FREE_SPACE_KB FROM USER_TABLES WHERE TABLE_NAME='EMPLOYEE';
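
A hedged shell wrapper for the same check (the connection and table name are placeholders; note that AVG_SPACE is populated by ANALYZE rather than DBMS_STATS, so refresh it first):

#!/bin/bash
# Minimal sketch: refresh AVG_SPACE with ANALYZE, then run the free-space check.
# scott/tiger and EMPLOYEE are placeholders; 8192 assumes an 8 KB block size.
sqlplus -s scott/tiger <<'EOF'
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
SELECT BLOCKS, BLOCKS*8192/1024 TOTAL_SIZE_KB, AVG_SPACE,
       ROUND(BLOCKS*AVG_SPACE/1024,2) FREE_SPACE_KB
FROM   USER_TABLES WHERE TABLE_NAME='EMPLOYEE';
EOF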

Monday, August 23, 2021

Shell Scripting

 Shell Scripting Handy

User creation at OS level (usercreate_os.sh)

#!/bin/bash

#this script creates an account on the local system.

#you will be prompted for the account name and password

#Ask for username

read -p 'Enter the username: ' USER_NAME

#Ask for the real name

read -p 'Enter the name of the person this account is for: ' COMMENT

#ask for the password

read -p 'Enter the password to use for the account: ' PASSWORD

#create the username

useradd -c "${COMMENT}" -m "${USER_NAME}"

#set the password for the username

echo "${PASSWORD}" | passwd --stdin "${USER_NAME}"

#force password change on first login

passwd -e "${USER_NAME}"
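
A hedged variant of the same script with basic guards added (assumes a RHEL-style system where passwd supports --stdin):

#!/bin/bash
# Variant with basic guards; same flow as the script above.

# Require superuser privileges, since useradd/passwd need them.
if [[ "${UID}" -ne 0 ]]
then
  echo 'Please run with sudo or as root.' >&2
  exit 1
fi

read -p 'Enter the username: ' USER_NAME
read -p 'Enter the name of the person this account is for: ' COMMENT
read -s -p 'Enter the password to use for the account: ' PASSWORD
echo

useradd -c "${COMMENT}" -m "${USER_NAME}"

# Abort if the account could not be created.
if [[ "${?}" -ne 0 ]]
then
  echo 'The account could not be created.' >&2
  exit 1
fi

echo "${PASSWORD}" | passwd --stdin "${USER_NAME}"
passwd -e "${USER_NAME}"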

######################################################################

Script2:

 RANDOM: Each time this parameter is referenced, a random integer between 0 and 32767 is generated. The sequence of random numbers may be initialized by assigning a value to RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.


[oracle@kolkata02 ~]$ echo ${RANDOM}

24092

[oracle@kolkata02 ~]$ echo ${RANDOM}

1748

[oracle@kolkata02 ~]$ echo ${RANDOM}

2398

[oracle@kolkata02 ~]$ !v   ----> re-runs the most recent command starting with 'v' (here it reopens the file last edited in vi)

#!/bin/bash

#This script generates a list of random passwords

#A random number as a password

PASSWORD=${RANDOM}

echo "${PASSWORD}"

#Three random numbers together

PASSWORD="${RANDOM}${RANDOM}${RANDOM}"

echo "${PASSWORD}"

#use the current date/time as the basis for the password

PASSWORD=$(date +%s)    ---->'+' introduces the format string and %s is the seconds since 1970 UTC (the epoch)

echo "${PASSWORD}"

#use nanoseconds to add randomization

PASSWORD=$(date +%s%N)   --->%N is the nanoseconds

echo "${PASSWORD}"

# A better password

PASSWORD=$(date +%s%N | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better password

PASSWORD=$(date +%s%N${RANDOM}${RANDOM} | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better password

Special_character=$(echo '!@#$%^&*()_+' | fold -w1 | shuf | head -c1)

echo "${PASSWORD}${Special_character}"  --->here special character will be appended

++++++++++++++++++++++++
[oracle@kolkata02 ~]$ echo "1" >> cheksumdata.txt   ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
[oracle@kolkata02 ~]$ echo "2" >> cheksumdata.txt       ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
asdfdsdfasdf34343434
1
2
++++++++++++++++++++++++
head -2 /etc/passwd                     # first two lines
head -n1 /etc/passwd                    # first line
head -n -1 /etc/passwd                  # everything except the last line
head -c1 /etc/passwd                    # first byte
head -c2 /etc/passwd                    # first two bytes
echo "testing" | head -c2               # first two bytes of the piped input ("te")
date +%s%N | sha256sum | head -c32      # first 32 characters of the hash
++++++++++++++++++++++++
A parameter is a variable used inside the shell script.
An argument is the value passed to the parameter.
${0} ---> positional parameter that holds the script name itself
[oracle@kolkata02 ~]$ which head        ---->or use: which -a head
/usr/bin/head
${#} ---> tells the number of arguments passed to the script
${@} ---> expands each argument as a separate word; use this in for loops when we don't know how many arguments will be passed
${*} ---> considers/combines all the arguments into a single word
[oracle@kolkata02 ~]$ for username in zameer naseer ayaan
> do 
> echo hi ${username}
> done
hi zameer
hi naseer
hi ayaan
[oracle@kolkata02 ~]$

#!/bin/bash
echo 'you executed this command: '${0}

Related built-ins worth knowing (see the sketch below): true, sleep, shift, while loop
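
A minimal sketch tying these together with the positional parameters (names are illustrative):

#!/bin/bash
# Combines ${0}, ${#}, ${@} with shift in a while loop.
echo "You executed: ${0} with ${#} argument(s)."

# "${@}" expands each argument as a separate word.
for ARG in "${@}"
do
  echo "for-loop argument: ${ARG}"
done

# shift discards ${1} and renumbers the remaining arguments.
while [[ "${#}" -gt 0 ]]
do
  echo "processing: ${1}"
  shift
done

# true always succeeds, so "while true" loops forever; sleep throttles it:
# while true; do date; sleep 1; done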
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#This script demonstrates I/O redirection
#standard input
#standard output
#standard error
#redirect STDOUT to a file
file="/tmp/data"
head -n1 /etc/passwd > ${file}

#redirect STDIN to a program
read LINE < ${file}
echo "the line contains : ${file}"

#we can change a user's password by redirecting the contents of a file into passwd
[oracle@kolkata02 ~]$ echo "secret" > password
[oracle@kolkata02 ~]$ cat password
secret
[root@kolkata02 oracle]# sudo passwd --stdin testuser1 < password
Changing password for user testuser1.
passwd: all authentication tokens updated successfully.
# ">" overwrite the existing content in a file
head -n3 /etc/passwd > ${file}
echo contents of the file ${file} is:
cat ${file}

#redirect STDOUT to a file, appending to the file
echo "${RANDOM} ${RANDOM}" >> ${file}
echo "${RANDOM} ${RANDOM}" >> ${file}
echo
echo "contents of the file: ${file}"
cat ${file}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[oracle@kolkata02 ~]$ read x < /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$ read x 0< /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out    --->this will not redirect the error to the file head.out
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$ cat head.out
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>head.err  ---> this will write the error into the file head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
The 2>>head.err below will append the error:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
What if we want to send standard output and standard error to the same file? Below are the old syntax and the new syntax for that.
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.both 2>&1
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
The new syntax for the same operation is as below:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile &> head.both
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
&>> ---> will append the output
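
A compact recap of the redirection operators shown above (file names are placeholders):

head -n1 /etc/passwd > out.txt                  # STDOUT, overwrite
head -n1 /etc/passwd >> out.txt                 # STDOUT, append
head -n1 /fakefile 2> err.txt                   # STDERR, overwrite
head -n1 /fakefile 2>> err.txt                  # STDERR, append
head -n1 /etc/passwd /fakefile > both.txt 2>&1  # both streams (old syntax)
head -n1 /etc/passwd /fakefile &> both.txt      # both streams (new syntax)
head -n1 /etc/passwd /fakefile &>> both.txt     # both streams, append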

===============Script to display logs of ABENDED GoldenGate processes================

#!/bin/bash
BASE_DIR=/u01/app/oracle/scripts
log_dir=$BASE_DIR/log_dir
gg_processes=$BASE_DIR/log_dir/gg_processes.log
gg_process_Detail=$BASE_DIR/gg_process_Detail.txt
Abended_ggprocess=$BASE_DIR/Abended_ggprocess.txt
gg_process_log_output=$BASE_DIR/gg_process_log_output.txt
rm -f ${gg_process_log_output}
echo $ORACLE_HOME
echo $GG_HOME
cd $GG_HOME
pwd

$GG_HOME/ggsci <<EOF > ${gg_processes}
info all
exit
EOF

grep ABENDED ${gg_processes} | awk '{ print $3 }' > ${Abended_ggprocess}

while read line
do
#echo -e "\n"
#echo -e "The below is the output of last four lines of ${line} GG Process log, please check\n" >>${gg_process_log_output}
echo -e "The below is the output of last four lines of `echo -e "\e[1;31m ${line} \e[0m"`  GG Process log, please check\n" >>${gg_process_log_output}
tail -4 /u01/app/oracle/product/ogg/dirrpt/${line}.rpt >> ${gg_process_log_output}
echo -e "\n" >> ${gg_process_log_output}
done <${BASE_DIR}/Abended_ggprocess.txt
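
If the report should also be mailed out, a hedged extension of the script above (the recipient address is a placeholder):

# Mail the collected output only if at least one abended process was found.
if [[ -s "${Abended_ggprocess}" ]]
then
  mailx -s "GoldenGate ABENDED processes on $(hostname)" dba-team@example.com < "${gg_process_log_output}"
fi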

========================================================


QFSDP Patching

Step 1: Cell patching in rolling mode

Step 2: Downtime patching for the below, in order:

            First the IB switch patching

            then the GI home patch

            then the ORACLE home patching

            then yum upgrade

            then apply datapatch to all databases; if Oracle's automation script gives trouble for CDB databases, run datapatch manually on each database.
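
Running datapatch manually, as in the last step, typically looks like the sketch below (the SID and home path are placeholders taken from this environment):

export ORACLE_SID=zamdbtdb1
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1
cd $ORACLE_HOME/OPatch
# -verbose prints each SQL patch action as it is applied
./datapatch -verbose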

Sunday, August 22, 2021

DataPump - expdp/impdp

 Expdp with Query option: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=expdp_zamdbtdb_tblbkup_new.log

tables=schema1.table1

parallel=10

query=schema1.table1:"where column1 NOT IN (SELECT column2 FROM schema1.table2 where column3= 'value')"

cluster=N

 Impdp: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=impdp_zamdbtdb_tblbkup_new.log

tables=schema.table1

table_exists_action=replace

parallel=10

cluster=N

Regular Commands:

select directory_name,directory_path from dba_directories where directory_name='DATA_PUMP';

select sum(bytes)/1024/1024/1024 from dba_segments;

create directory DATA_PUMP as '/opt/backups';

grant read,write on directory DATA_PUMP to system;

grant all on directory DATA_PUMP to public;

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log tables=schema1.table1 parallel=16 exclude=statistics

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics

nohup expdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics &

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=impdp_test_tblbkp.log schemas=schema1 TABLE_EXISTS_ACTION=REPLACE parallel=16 exclude=statistics &

Cluster=N -->use this in a RAC environment when the export/import runs to/from a local (non-shared) mount point

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_schema=schema_old_name:schema_new_name remap_tablespace=OLD_TABLESPACE_NAME:NEW_TABLESPACE_NAME parallel=5 Cluster=N &

impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_table=OLD_TABLE_NAME:NEW_TABLE_NAME remap_schema=schema_old_name:schema_new_name parallel=5

Connecting using database service:

nohup expdp system/'password'@kol-scan:1521/zamdbtdb.localdomain parfile=expdp_tablebkup_01.par &
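
To check on a long-running job, attach to it or query the job views; a hedged sketch (the job name is hypothetical):

# Attach to a running Data Pump job to see its status interactively.
expdp system/'password' attach=SYS_EXPORT_SCHEMA_01

# Or query the job views from SQL*Plus:
sqlplus -s / as sysdba <<'EOF'
select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
EOF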




Database Upgrade (autoupgrade.jar)

The most recent version of AutoUpgrade can be downloaded from My Oracle Support note 2485457.1



Copy autoupgrade.jar to any convenient location

[oracle@kolkata02 auto_upgrade]$ cp /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar /home/oracle/auto_upgrade

Create a sample config file using:

[oracle@kolkata02 auto_upgrade]$ java -jar /home/oracle/auto_upgrade/autoupgrade.jar -create_sample_file config

Created sample configuration file /home/oracle/auto_upgrade/sample_config.cfg

Now create your own config file using sample_config.cfg

[oracle@kolkata02 ~]$ vi /home/oracle/config_zamdbtdb.cfg

global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/zamdbtdb1

upg1.log_dir=/home/oracle/auto_upgrade/logs/zamdbtdb1

upg1.sid=zamdbtdb1

upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1

upg1.target_cdb=CNTESTDB1

upg1.target_pdb_name=zamdbtdbx         -->for prechecks remove this target_pdb_name

upg1.target_pdb_copy_option=file_name_convert=('+DATA1/ZAMDBTDB','+DATA1','+RECO1/ZAMDBTDB','+RECO1','+FRA/ZAMDBTDB','+FRA') -->If you want to upgrade with copy use this option, otherwise remove this option

upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1

upg1.start_time=now

upg1.upgrade_node=kolkata02.localdomain

upg1.run_utlrp=yes

upg1.timezone_upg=yes

Save it, then run the prechecks:

nohup java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &

Or run the prechecks in console mode as below:

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze

Prechecks succeeded; you can verify the results in the HTML report:

 [oracle@kolkata02 prechecks]$ pwd

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/102/prechecks

[oracle@kolkata02 prechecks]$ firefox zamdbtdb_preupgrade.html

Now start the actual upgrade, in either noconsole or console mode:
nohup java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &   ---->This is noconsole mode

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -->This is console mode


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   2
End time:        N/A
Last update:     2021-08-22T13:01:35.143
Stage:           PRECHECKS
Operation:       PREPARING
Status:          RUNNING
Pending stages:  8
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         1 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Checks

Error Details:
None


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   136
End time:        N/A
Last update:     2021-08-22T15:14:45.923
Stage:           POSTFIXUPS
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  3
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        2 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/postfixups
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
+---------+---------------+-------+
| DATABASE|          FIXUP| STATUS|
+---------+---------------+-------+
|zamdbtdb1|POST_DICTIONARY|STARTED|
+---------+---------------+-------+

Error Details:
None

upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   177
End time:        N/A
Last update:     2021-08-22T15:57:39.973
Stage:           NONCDBTOPDB
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  1
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        16 min
    POSTUPGRADE       <1 min
    NONCDBTOPDB       26 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Executing noncdb_to_pdb.sql

Error Details:
None

Currently the noncdb_to_pdb conversion is running; it is in the last stage, where utlrp.sql runs

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
[oracle@kolkata02 noncdbtopdb]$ ls -ltr
total 444
-rwx------. 1 oracle oinstall      0 Aug 22 15:31 noncdb_to_pdb_zamdbtdb.log.lck
-rwx------. 1 oracle dba        8292 Aug 22 15:32 zamdbtdbx.xml
-rwx------. 1 oracle oinstall    702 Aug 22 15:33 createpdb_zamdbtdb.log
-rwx------. 1 oracle oinstall 376832 Aug 22 15:44 noncdbtopdb_zamdbtdb.log
-rwx------. 1 oracle oinstall  60924 Aug 22 16:54 noncdb_to_pdb_zamdbtdb.log

[oracle@kolkata02 noncdbtopdb]$ tail -100f noncdb_to_pdb_zamdbtdb.log
2021-08-22 16:54:12.033 INFO [(SQLPATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_SID=CNTESTDB1), (ORACLE_UNQNAME=zamdbtdb), (ORACLE_PATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_BASE=/u01/app/oracle), (TWO_TASK=N/A), (ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1), (TNS_ADMIN=N/A), (LDAP_ADMIN=N/A), (PERL5LIB=N/A), (WORKDIR=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb)] - ExecutionEnv.addEnvToProcess
2021-08-22 16:54:12.034 INFO Starting - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.034 INFO Finished - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.036 INFO End Setting Oracle Environment - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.036 INFO Begin Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO End Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO Executing SQL [SELECT COUNT(*) FROM sys.obj$ WHERE status IN (4, 5, 6);] in [CNTESTDB1, container:zamdbtdbx] - ExecuteSql$SQLClient.run
2021-08-22 16:54:12.628 INFO Progress was detected in noncdb_to_pdb.sql script execution due to fewer invalid objects[10] present in the pdb - NonCDBToPDBSQL$CheckProgress.run

Errors in database [zamdbtdb1]
Stage     [NONCDBTOPDB]
Operation [STOPPED]
Status    [ERROR]
Info    [
Error: UPG-1699
[Unexpected exception error]
Cause: Error finding error definition, contact Oracle Support
For further details, see the log file located at /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]

-------------------------------------------------
Logs: [/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]
-------------------------------------------------
upg>
upg> lsj
+----+---------+-----------+---------+------+--------------+--------+--------+
|Job#|  DB_NAME|      STAGE|OPERATION|STATUS|    START_TIME| UPDATED| MESSAGE|
+----+---------+-----------+---------+------+--------------+--------+--------+
| 103|zamdbtdb1|NONCDBTOPDB|  STOPPED| ERROR|21/08/22 13:00|17:06:27|UPG-1699|
+----+---------+-----------+---------+------+--------------+--------+--------+
Total jobs 1

At this step we found that the upgrade completed successfully and the PDB plug-in also succeeded. But due to PDB plug-in violations, the PDB stayed in RESTRICTED state and would not come to OPEN state.
Finding and resolving the PDB plug-in violations resulted in the PDB opening successfully in OPEN state.

2021-08-22 15:13:58.793 INFO [Upgrading] is [100%] completed for [zamdbtdb]
+---------+--------------------------------+
|CONTAINER|                      PERCENTAGE|
+---------+--------------------------------+
| zamdbtdb|SUCCESSFULLY UPGRADED [zamdbtdb]|
+---------+--------------------------------+
2021-08-22 15:13:58.940 INFO Error opening file [/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/initzamdbtdb1.ora] for reading
2021-08-22 15:14:01.626 INFO Creating spfile completed with success
2021-08-22 15:14:01.627 INFO SUCCESSFULLY UPGRADED [zamdbtdb]
2021-08-22 15:14:01.755 INFO zamdbtdb Return status is SUCCESS
2021-08-22 15:14:24.484 INFO Analyzing zamdbtdb1, 11 checks will run using 2 threads
2021-08-22 15:14:42.616 INFO Using /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg to identify required fixups
2021-08-22 15:14:42.714 INFO Content of the checklist /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg is:
2021-08-22 15:31:21.106 INFO Guarantee Restore Point (GRP) successfully removed [ZAMDBTDB][AUTOUPGRADE_9212_ZAMDBTDB1122010]
2021-08-22 15:33:38.901 INFO No entry was found for [zamdbtdb1:/u01/app/oracle/product/19.0.0.0/dbhome_1] in /etc/oratab
2021-08-22 17:06:16.137 INFO /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora
2021-08-22 17:06:26.615 ERROR Dispatcher failed: AutoUpgException [ERROR3007#Errors executing [CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora';

CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora'
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
] [zamdbtdb1]]

select name, cause, type, status,action,message,time from pdb_plug_in_violations;   --->run this while connected to the CDB (CNTESTDB); it gave the action plan below

Some interim patches were installed in the PDB but not in the CDB: when I applied 32904851 (Database Release Update 19.12.0.0.210720), I had not run DATAPATCH at the CDB level. The upgraded PDB, once plugged into the CDB, brought all its interim patches with it, so the violation read "Not installed in the CDB but installed in the PDB".
Running datapatch at the CDB level resolved the issue, and the PDB then came to OPEN state.
The error details are placed in location "E:\zameer_workspace\AutomationScripts\DBUpgrade\database_upgrade_steps"
SQL> alter pluggable database ZAMDBTDBX close instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      MOUNTED
SQL>
SQL>
SQL> alter pluggable database ZAMDBTDBX open  instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      READ WRITE NO

SQL>  alter pluggable database ZAMDBTDBX save state instances=all;

Pluggable database altered.
[oracle@kolkata02 ~]$ srvctl status database -d cntestdb -v -f
Instance CNTESTDB1 is running on node kolkata02 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.
Instance CNTESTDB2 is running on node kolkata03 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.

Saturday, August 21, 2021

EXADATA Handy

To check the OS version and details

 dcli -l root -g ~/dbs_group " cat /etc/redhat-release"

dcli -l root -g ~/dbs_group imageinfo | grep -i ' image version'

dcli -l root -g ~/dbs_group "ipmitool sunoem version"

dcli -l root -g ~/dbs_group "/opt/oracle.cellos/CheckHWnFWProfile -c strict"

dcli -l root -g ~/dbs_group "ipmitool sunoem cli 'show faulty'"

dcli -l root -g ~/ibswitch_lst "version |grep -i version"

dcli -l root -g ~/all_group 'for cable in `ls /sys/class/net/ |grep ^eth`; do printf "$cable: "; cat /sys/class/net/$cable/carrier ; done'

dcli -l root -g /root/dbs_group dbmcli -e list alerthistory where endTime=null and alertShortName=Hardware and alertType=stateful and severity=critical

dcli -l root -g ~/all_group "uptime"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs softwareversion"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releasepatch"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releaseversion"

dcli -l oracle -g ~/dbs_group "/u01/app/19.0.0.0/grid/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.1.0.2/DbHome_1/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/opatch lspatches"

ssh hostname-ilom  -->then press enter

grep -i huge /proc/meminfo   --> shows the HugePages totals and Hugepagesize

sh hugepages.sh --> computes suggested HugePages settings; the script is provided by MOS note 401749.1, where it is intended to compute the values for HugePages

dbmcli --> then run: list alerthistory



CDB database Handy

Connect to the container (datapatch should run at container level, not at PDB level):

set the cluster_database parameter to false

then shut down the container

start the container as a single instance in upgrade mode, then

 Alter pluggable database all open upgrade ---> use this when we want to run datapatch after patching
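
A minimal sketch of that sequence, assuming a RAC CDB and the 19c home used elsewhere in these notes (names are placeholders):

export ORACLE_SID=CNTESTDB1
export ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1

# stop the other RAC instances first (e.g. with srvctl), then:
sqlplus -s / as sysdba <<'EOF'
alter system set cluster_database=false scope=spfile sid='*';
shutdown immediate
startup upgrade
alter pluggable database all open upgrade;
exit
EOF

# run datapatch from the target home, at the container level
$ORACLE_HOME/OPatch/datapatch -verbose

sqlplus -s / as sysdba <<'EOF'
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate
exit
EOF
# then restart normally, e.g. srvctl start database -d <db_unique_name>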




Friday, August 20, 2021

Daily DBA Handy

Upload attachments to ORACLE SUPPORT TEAM

curl -v -T "awrrpt_2_29210_29212.html" -o output -u "zameerbasha.b@gmail.com" "https://transport.oracle.com/upload/issue/3-22054156781/"

Connecting using Database service:

sqlplus sys/'password'@kol-scan:61000/pdb_name.localdomain.com as sysdba

Deleting records from a table in chunks

begin
LOOP
DELETE FROM schema_name.table_name
WHERE column_name= 'value'
and ROWNUM <50000;
EXIT WHEN sql%ROWCOUNT = 0;
commit;
END LOOP;
COMMIT;
END;
/

Execution of sql files in nohup mode:

nohup sqlplus "/ as sysdba" @filename.sql &

nohup sqlplus username/'password'@tnsname as sysdba @filename.sql &

Long running session check:

SELECT
    a.sql_fulltext,
    a.sql_id,
    b.last_call_et,
    b.sid,
    b.serial#,
    b.username,
    b.inst_id,
    b.machine,
    b.module,
    b.client_identifier,
    b.action,
    b.osuser,
    b.program,
    b.event,
    b.final_blocking_session,
    b.status
FROM
    gv$sql       a,
    gv$session   b
WHERE 
--b.client_identifier like '%test%' and
--b.machine='machinename.domain.com' and
    a.sql_id = b.sql_id
    AND b.osuser !='oracle'
    AND b.username != 'SYS'
    AND b.status= 'ACTIVE'
ORDER BY
    b.last_call_et DESC;

Table Index check:

SELECT
    aic.index_owner,
    aic.table_name,
    aic.index_name,
    LISTAGG(aic.column_name, ',') WITHIN GROUP(
        ORDER BY
            aic.column_position
    ) cols
FROM
    all_ind_columns aic
WHERE
    aic.table_name = 'TABLE_NAME'
GROUP BY
    aic.index_owner,
    aic.table_name,
    aic.index_name
ORDER BY
    aic.index_owner,
    aic.table_name;

Blocking Session Check:
 SELECT
    b.inst_id,
    lpad('--->', DECODE(a.request, 0, 0, 5))
    || a.sid sid,
    b.serial#,
    b.sql_id,
    b.prev_sql_id,
    a.id1,
    a.id2,
    a.lmode,
    a.block,
    a.request,
    DECODE(a.type, 'MR', 'Media Recovery', 'RT', 'Redo Thread', 'UN', 'User Name', 'TX', 'Transaction','TM', 'DML', 'UL', 'PL/SQL User Lock'
    , 'DX', 'Distributed Xaction', 'CF', 'Control File', 'IS', 'Instance State', 'FS', 'File Set', 'IR', 'Instance Recovery', 'ST', 'Disk Space Transaction'
    , 'TS', 'Temp Segment', 'IV', 'Library Cache Invalidation', 'LS', 'Log Start or Switch', 'RW', 'Row Wait', 'SQ', 'Sequence Number'
    , 'TE', 'Extend Table', 'TT', 'Temp Table', a.type) lock_type,
    b.program,
    b.osuser,
    b.username,
    b.status,
    b.module,
    b.action,
    b.logon_time,
    b.last_call_et,
    'alter system kill session '
    || ''''
    || a.sid
    || ', '
    || b.serial#
    || ''''
    || ' immediate;' kill_session,
    DECODE(object_type, NULL, NULL, 'Dbms_Rowid.rowid_create(1, '
                                    || row_wait_obj#
                                    || ', '
                                    || row_wait_file#
                                    || ', '
                                    || row_wait_block#
                                    || ', '
                                    || row_wait_row#
                                    || ')') row_id
FROM
    gv$lock       a,
    gv$session    b,
    dba_objects   o
WHERE
    ( a.id1,
      a.id2 ) IN (
        SELECT
            id1,
            id2
        FROM
            gv$lock
        WHERE
            lmode = 0
    )
    AND a.inst_id = b.inst_id
    AND a.sid = b.sid
    AND o.object_id (+) = DECODE(b.row_wait_obj#, - 1, NULL, b.row_wait_obj#)
ORDER BY
    a.id1,
    a.id2,
    a.request;

select a.inst_id, a.sid, a.serial#, a.process, a.logon_time, c.object_name
from gv$session a, gv$locked_object b, dba_objects c
where b.object_id = c.object_id
and a.sid = b.session_id
and OBJECT_NAME like '%LINK%'  order by a.logon_time;

Gather Stats:
exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'schema_owner' ,tabname => 'table_name', cascade => true, estimate_percent => dbms_stats.auto_sample_size, degree => 15);

execute dbms_stats.gather_schema_stats(ownname => 'schema_owner',ESTIMATE_PERCENT =>dbms_stats.auto_sample_size,CASCADE => TRUE,degree => 15);

 select 'alter system kill session '''|| s.sid|| ','|| s.serial#|| ''' immediate;' from gv$session S where status='INACTIVE';

Tablespace create from source to target :
select 'CREATE BIGFILE TABLESPACE ' || tablespace_name || ' DATAFILE ''+DATA1'' SIZE 200M AUTOEXTEND ON NEXT 500M MAXSIZE UNLIMITED;' from dba_tablespaces where tablespace_name not in ('SYSTEM','SYSAUX','TEMP'); 

FRA SPACE CHECK:
col name for a32
col size_m for 999,999,999
col used_m for 999,999,999
col pct_used for 999
SELECT name
, ceil( space_limit / 1024 / 1024) SIZE_M
, ceil( space_used  / 1024 / 1024) USED_M
, decode( nvl( space_used, 0),
0, 0
, ceil ( ( space_used / space_limit) * 100) ) PCT_USED
FROM v$recovery_file_dest
ORDER BY name
/
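
If PCT_USED runs high, one common remedy is to enlarge the FRA; a hedged sketch (the size value is a placeholder):

sqlplus -s / as sysdba <<'EOF'
-- enlarge the fast recovery area
alter system set db_recovery_file_dest_size = 200G scope=both sid='*';
EOF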

Grep Command:
grep sga_max_size diag/rdbms/*/*/trace/alert* | sort | uniq
grep pga_aggregate_limit diag/rdbms/*/*/trace/alert* | sort | uniq


Undo Usage:
----------------
1. To check the current size of the Undo tablespace:

select sum(a.bytes) as undo_size from v$datafile a, v$tablespace b, dba_tablespaces c where c.contents = 'UNDO' and c.status = 'ONLINE' and b.name = c.tablespace_name and a.ts# = b.ts#;

2. To check the free space (unallocated) space within Undo tablespace:

select sum(bytes)/1024/1024 "mb" from dba_free_space where tablespace_name ='<undo tablespace name>';
3. To check the reusable (expired) space within the allocated Undo tablespace:


select tablespace_name , sum(blocks)*8/(1024)  reusable_space from dba_undo_extents where status='EXPIRED'  group by  tablespace_name;

4. To Check the space allocated in the Undo tablespace:

select tablespace_name , sum(blocks)*8/(1024)  space_in_use from dba_undo_extents where status IN ('ACTIVE','UNEXPIRED') group by  tablespace_name;

with free_sz as (
  select tablespace_name, sum(f.bytes)/1048576/1024 free_gb
  from dba_free_space f
  group by tablespace_name
), a as (
  select tablespace_name
       , sum(case when status = 'EXPIRED' then blocks end)*8/1048576 reusable_space_gb
       , sum(case when status in ('ACTIVE', 'UNEXPIRED') then blocks end)*8/1048576 allocated_gb
  from dba_undo_extents
  where status in ('ACTIVE', 'EXPIRED', 'UNEXPIRED')
  group by tablespace_name
), undo_sz as (
  select tablespace_name, df.user_bytes/1048576/1024 user_sz_gb
  from dba_tablespaces ts
  join dba_data_files df using (tablespace_name)
  where ts.contents = 'UNDO' and ts.status = 'ONLINE'
)
select tablespace_name, user_sz_gb, free_gb, reusable_space_gb, allocated_gb
     , free_gb + reusable_space_gb + allocated_gb total
from undo_sz
join free_sz using (tablespace_name)
join a using (tablespace_name);

select name "FEATURE", first_usage_date "FROM", last_usage_date "TO"
       from DBA_FEATURE_USAGE_STATISTICS
       where name like '%OLAP%';

select value from v$diag_info where name ='Diag Trace';  --> alert log location

column usr                 format a20       Heading 'Osuser/User'
column sid                 format 9999      heading 'S|I|D'
column program             format A29       heading 'program'
column stat                format A1        heading 'S|t|a|t'
column serial              format 999999   heading 'Sr#'
column machine             format A20       heading 'Machine'
column logical             format A19       heading '      Logical|    Gets / Chgs'
column module              format A30       heading 'Module'
column sess_detail         format A14       heading 'Sess_details'

set pagesize 300
col client_identifier for a30
select s.inst_id||':('||s.sid||','||s.serial#||')' sess_detail,s.sql_id,s.machine,s.osuser||' '||s.username usr,s.logon_time,s.program,s.event,s.status,s.client_identifier,s.MODULE 
from gv$session s,gv$process p 
where p.addr=s.paddr and s.sql_id='a9gvfh5hx9u98' ;

Trace File name finder:
column trace new_val T
select c.value || '/' || d.instance_name || '_ora_' ||
a.spid || '.trc' ||
case when e.value is not null then '_'||e.value end trace
from v$process a, v$session b, v$parameter c, v$instance d, v$parameter e
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
and e.name = 'tracefile_identifier';

A query to discover the process ID (PID) associated with my dedicated server (the SPID from
V$PROCESS is the operating-system PID of the process that was used during the execution of that query):

select a.spid dedicated_server, b.process clientpid
from v$process a, v$session b
where a.addr = b.paddr
and b.sid = sys_context('userenv','sid');

Top N objects: largest-objects finder query

with
  seg as (
     select
       owner,segment_name
      ,segment_type
      ,tablespace_name
      ,sum(blocks) blocks
      ,sum(bytes)  bytes
     from dba_segments s
     where  segment_type not in (
       'TYPE2 UNDO'
      ,'ROLLBACK'
      ,'SYSTEM STATISTICS'
     )
     and segment_name not like 'BIN$%' --not in recyclebin
     and owner in ('NIMBUS')-- you can specify schema here
     group by owner,segment_name,segment_type,tablespace_name
  )
 ,segs as (
     select
       owner,segment_name
      ,case when segment_name like 'DR$%$%' then 'CTX INDEX' else segment_type end segment_type
      ,tablespace_name
      ,case
         when segment_name like 'DR$%$%'
           then (select table_owner||'.'||table_name from dba_indexes i where i.owner=s.owner and i.index_name = substr(segment_name,4,length(segment_name)-5))
         when segment_type in ('TABLE','TABLE PARTITION','TABLE SUBPARTITION')
            then owner||'.'||segment_name
         when segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION')
            then (select i.table_owner||'.'||i.table_name from dba_indexes i where i.owner=s.owner and i.index_name=s.segment_name)
         when segment_type in ('LOBSEGMENT','LOB PARTITION','LOB SUBPARTITION')
            then (select l.owner||'.'||l.TABLE_NAME from dba_lobs l where l.segment_name = s.segment_name and l.owner = s.owner)
         when segment_type = 'LOBINDEX'
            then (select l.owner||'.'||l.TABLE_NAME from dba_lobs l where l.index_name = s.segment_name and l.owner = s.owner)
         when segment_type = 'NESTED TABLE'
            then (select nt.owner||'.'||nt.parent_table_name from dba_nested_tables nt where nt.owner=s.owner and nt.table_name=s.segment_name)
         when segment_type = 'CLUSTER'
            then (select min(owner||'.'||table_name) from dba_tables t where t.owner=s.owner and t.cluster_name=s.segment_name and rownum=1)
       end table_name
      ,blocks
      ,bytes
     from seg s
  )
 ,so as (
     select
       segs.owner
      ,substr(segs.table_name,instr(segs.table_name,'.')+1) TABLE_NAME
      ,sum(segs.bytes)/1024/1024/1024  total_Size_GB
      ,sum(segs.blocks) total_blocks
      ,sum(case when segs.segment_type in ('TABLE','TABLE PARTITION','TABLE SUBPARTITION','NESTED TABLE','CLUSTER') then segs.bytes/1024/1024/1024 end) tab_size_GB
      ,sum(case when segs.segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION','CTX INDEX') then segs.bytes/1024/1024/1024 end) ind_size_GB
      ,sum(case when segs.segment_type in ('CTX INDEX') then segs.bytes end) ctx_size
      ,sum(case when segs.segment_type in ('LOBSEGMENT','LOBINDEX','LOB PARTITION','LOB SUBPARTITION') then segs.bytes/1024/1024/1024 end) lob_size_GB
     from segs
     group by owner,table_name
  )
 ,tops as (
     select
           dense_rank()over (order by total_Size_GB desc) rnk
          ,so.*
     from so
  )
select *
from tops
where rnk<=20;

Fragmented table finder query:

select 
 table_name,round(((blocks*8)/1024/1024),2) "size (gb)" , 
 round(((num_rows*avg_row_len/1024))/1024/1024,2) "actual_data (gb)",
 round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) "wasted_space (gb)",
 round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) "reclaimable space %",
 partitioned
from 
 dba_tables
where 
 (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
order by 4 desc;

Parent-child table relationship finder query:
with list_of_pks as (
  select owner, table_name, constraint_name as pk_constraint_name
  from all_constraints
  where constraint_type in( 'P', 'U')
), pcc_tables( owner, table_name, parent_table, pk_constraint_name ) as (
  select owner, table_name, null, pk_constraint_name
  from list_of_pks
  where owner = 'NIMBUS' and table_name in ('SUBSCRIPTION' , 'IADINFO','SUBSCRIPTION_SERVICEINFO','SERVICEORDER','SOVERSION') -- replace these values here
  union all
  select a.owner, a.table_name, c.table_name, a.pk_constraint_name
  from list_of_pks a
    join all_constraints b on a.table_name=b.table_name and a.owner=b.owner
    join pcc_tables c on b.r_owner= c.owner and b.r_constraint_name=c.pk_constraint_name
)
select *
from pcc_tables;

++++++++++++++++++++++++++++
Schema Objects drop:
++++++++++++++++++++++++++++

select count(1),username,osuser,machine from gv$session where username='PCBHUSER' group by username,osuser,machine;

alter session set current_schema=PCBHUSER;

----------------------------------------------
create or replace procedure PCBHUSER.DB_DROP  as
--code to drop all objects in a schema
--please confirm and recheck the schema name and TNS details
BEGIN
begin
execute immediate 'ALTER SESSION FORCE PARALLEL DDL';
execute immediate 'ALTER SESSION FORCE PARALLEL DML';
end;
begin
execute immediate 'purge recyclebin';
end;
begin
FOR I IN (select object_type,object_name from user_objects where object_type='TABLE')
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP '||I.OBJECT_TYPE||' '||I.OBJECT_NAME||' CASCADE CONSTRAINTS PURGE';
EXCEPTION WHEN OTHERS THEN NULL;
END;
END LOOP;
END;
BEGIN
FOR J IN (select object_type,object_name from user_objects where object_type!='TABLE' and object_name not in ('DB_DROP'))
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP '||J.OBJECT_TYPE||' '||J.OBJECT_NAME||'';
EXCEPTION WHEN OTHERS THEN NULL;
END;
END LOOP;
END;
begin
execute immediate 'purge recyclebin';
execute immediate 'DELETE FROM user_sdo_geom_metadata';
----execute immediate 'DELETE FROM mdsys.SDO_GEOM_METADATA_TABLE;';
execute immediate 'commit';
execute immediate 'purge recyclebin';
end;
END;
/

execute PCBHUSER.DB_DROP;

DELETE FROM mdsys.SDO_GEOM_METADATA_TABLE;
commit;

select count(1) from dba_objects where owner='PCBHUSER';

+++++++++++++++++++++++++++++++
Silent DB creation commands:
dbca -silent -createTemplateFromDB -sourceDB sim1 -templateName sim1_db_from_osdsimdb1_template.dbt -sysDBAUserName sys -sysDBAPassword cowboy

[oracle@localhost u01]$ find . -type f -name 'sim1_db_from_osdsimdb1_template*'
.

dbca -silent -createDatabase -templateName sim1_db_from_osdsimdb1_template.dbt -gdbname sim3 -sid sim3 -sysPassword cowboy -systemPassword cowboy

set lines 150 pages 150
col HOST_NAME for a15
col NAME for a10
col INSTANCE_NAME for a10
col LOG_MODE for a12
col DATABASE_ROLE for a18
col OPEN_MODE for a20
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
select name,db_unique_name,open_mode,log_mode,database_role from gv$database;
select instance_name,status,host_name,startup_time,logins from gv$instance;
select distinct db_unique_name DB_Name,instance_name,open_mode,log_mode,logins,host_name,startup_time,database_role from gv$database,gv$instance;

SELECT host_name,instance_name,TO_CHAR(startup_time, 'DD-MM-YYYY HH24:MI:SS') startup_time,FLOOR(sysdate-startup_time) days FROM sys.v_$instance;


Patch details Check:
=============================
SET LINESIZE 400
COLUMN action_time FORMAT A20
COLUMN action FORMAT A10
COLUMN status FORMAT A10
COLUMN description FORMAT A40
COLUMN version FORMAT A10
COLUMN bundle_series FORMAT A10
SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
       action,
       status,
       description,
       version,
       patch_id,
       bundle_series
FROM   dba_registry_sqlpatch
ORDER by action_time;

set lines 400;
col action_time for a30;
col description for a80;
col action for a10;
select patch_id, action, description, action_time from dba_registry_sqlpatch where to_char(action_time,'DD/MM/YYYY')=to_char(sysdate,'DD/MM/YYYY') order by action_time;

select name, ISSYS_MODIFIABLE from v$parameter where name='_external_scn_rejection_delta_threshold_minutes';




Also note: use the below steps to log in as root on non-prod environments (if you have root access) during patching.

 

Step-1: sudo to oragrid

Step-2: sudo /usr/localcw/bin/eksh -l

 
==To check user equivalence==

oragrid@ipagt1d8(1334) +ASM4 $ cluvfy comp admprv -n all -o user_equiv -verbose


The user-equivalence check below also generates a trace:

mkdir /tmp/cvutrace
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=2

cluvfy comp admprv -n all -o user_equiv -verbose
The cluvfy output looked good, but in the trace logs we can see the below command gave an SSH error:

Output: '<CV_TRC>/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true >> /tmp/CVU_19_t1cnp1d1_2024-12-31_04-11-37_127593/scratch/exout36842.out


Please run the below commands from all three nodes and provide the output:
1.From node ipagt1d7:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d8 -n /bin/true


2. From node ipagt1d5:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d7 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d8 -n /bin/true




3. From node ipagt1d8:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d7 -n /bin/true

===SRVCTL command for older versions =================


FLASHBACK DATABASE

 You can flash back the database at the SQL prompt and also using RMAN.

SQL> flashback database to restore point restore_point_name;

SQL>flashback database to timestamp TO_TIMESTAMP( '2021-08-19 01:00:00','YYYY-MM-DD HH24:MI:SS');

RMAN > flashback database to time = "to_date('2021-08-19 01:00:00', 'YYYY-MM-DD HH24:MI:SS')";

Benefit of using RMAN for flashback: if the errors below occur at the SQL prompt, flashback requires archived logs that are no longer in the archive log destination but exist in a backup location (tape/external). RMAN automatically restores the required archived logs from backup and applies them to flash back the database successfully.

ERROR at line 1:

ORA-38754: FLASHBACK DATABASE not started; required redo log is not available

ORA-38762: redo logs needed for SCN 1254285658 to SCN 1254285759

ORA-38761: redo log sequence 1694 in thread 1, incarnation 3 could not be accessed

**We can create a (guaranteed) restore point and restore the database to it without turning on FLASHBACK mode.
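
A minimal sketch of that restore-point workflow (the restore point name is a placeholder):

sqlplus -s / as sysdba <<'EOF'
-- a guaranteed restore point works even when FLASHBACK DATABASE is off
create restore point BEFORE_CHANGE guarantee flashback database;
select name, guarantee_flashback_database, time from v$restore_point;
EOF

# to flash back later (database must be mounted, then opened with resetlogs):
#   flashback database to restore point BEFORE_CHANGE;
#   alter database open resetlogs;
# and finally drop the restore point:
#   drop restore point BEFORE_CHANGE;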


FIG project queries

##### Service add & LOad Baclancing on Add Service ####### srvctl add service -s wcccdmt.farmersinsurance.com -r wcccdmtx1,wcccdmtx2,wcc...