Checking the deleted (free) space in a table:
SELECT BLOCKS, BLOCKS*8192/1024 TOTAL_SIZE_KB, AVG_SPACE, round(BLOCKS*AVG_SPACE/1024,2) FREE_SPACE_KB FROM USER_TABLES WHERE TABLE_NAME='EMPLOYEE';
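Note: AVG_SPACE is only populated by the old ANALYZE command (DBMS_STATS does not fill it), so if it shows 0 or stale data, analyze the table first, for example:
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;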
Handy Shell Scripting
User creation at OS level (usercreate_os.sh)
#!/bin/bash
#This script creates an account on the local system.
#You will be prompted for the account name and password.
#Ask for the username
read -p 'Enter the username: ' USER_NAME
#Ask for the real name
read -p 'Enter the name of the person this account is for: ' COMMENT
#Ask for the password
read -p 'Enter the password to use for the account: ' PASSWORD
#Create the user account
useradd -c "${COMMENT}" -m "${USER_NAME}"
#Set the password for the account
echo "${PASSWORD}" | passwd --stdin "${USER_NAME}"
#Force a password change on first login
passwd -e "${USER_NAME}"
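To use it (a usage sketch; the filename matches the heading above, and useradd/passwd need root):
chmod +x usercreate_os.sh
sudo ./usercreate_os.sh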
######################################################################
Script2:
RANDOM: Each time this parameter is referenced, a random integer between 0 and 32767 is generated. The sequence of random numbers may be initialized by assigning a value to RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.
[oracle@kolkata02 ~]$ echo ${RANDOM}
24092
[oracle@kolkata02 ~]$ echo ${RANDOM}
1748
[oracle@kolkata02 ~]$ echo ${RANDOM}
2398
[oracle@kolkata02 ~]$ !v ----> re-runs the most recent command starting with 'v' (e.g. vi), effectively reopening the last edited file
#!/bin/bash
#This script generates a list of random passwords
#A random number as a password
PASSWORD=${RANDOM}
echo "${PASSWORD}"
#Three random numbers together
PASSWORD="${RANDOM}${RANDOM}${RANDOM}"
echo "${PASSWORD}"
#use the current date/time as the basis for the password
PASSWORD=$(date +%s) ---->'+' introduces a format string; %s is seconds since 1970-01-01 UTC
echo "${PASSWORD}"
#Use nanoseconds to add randomization
PASSWORD=$(date +%s%N) --->%N is nanoseconds
echo "${PASSWORD}"
# A better password
PASSWORD=$(date +%s%N | sha256sum | head -c32)
echo "${PASSWORD}"
# An even better password
PASSWORD=$(date +%s%N${RANDOM}${RANDOM} | sha256sum | head -c32)
echo "${PASSWORD}"
# An even better password: append a random special character
Special_character=$(echo '!@#$%^&*()_+' | fold -w1 | shuf | head -c1)
echo "${PASSWORD}${Special_character}" --->the special character is appended to the password
Step1: Cell patching in rolling mode
Step2: Downtime patching for the following:
First have IB Switches patching
then GI home patch
then ORACLE homes patching
then yum upgrade
then apply datapatch for all databases; if you face issues with Oracle's automation script for CDB databases, run datapatch manually on each database (a minimal sketch follows).
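Running datapatch manually, per database, from the patched ORACLE_HOME with the database (and all PDBs) open:
cd $ORACLE_HOME/OPatch
./datapatch -verbose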
Expdp with Query option: (parfile)
directory=DATA_PUMP
dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp
logfile=expdp_zamdbtdb_tblbkup_new.log
tables=schema1.table1
parallel=10
query=schema1.table1:"where column1 NOT IN (SELECT column2 FROM schema1.table2 where column3= 'value')"
cluster=N
Impdp: (parfile)
directory=DATA_PUMP
dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp
logfile=impdp_zamdbtdb_tblbkup_new.log
tables=schema1.table1
table_exists_action=replace
parallel=10
cluster=N
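Either parfile is then invoked the same way as the direct commands below; the parfile names here are assumed:
nohup expdp system/'password' parfile=expdp_tblbkup.par &
nohup impdp system/'password' parfile=impdp_tblbkup.par &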
Regular Commands:
select directory_name,directory_path from dba_directories where directory_name='DATA_PUMP';
select sum(bytes)/1024/1024/1024 from dba_segments;
create directory DATA_PUMP as '/opt/backups';
grant read,write on directory DATA_PUMP to system;
grant all on directory DATA_PUMP to public;
expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log tables=schema1.table1 parallel=16 exclude=statistics
expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics
nohup expdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics &
nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=impdp_test_tblbkp.log schemas=schema1 TABLE_EXISTS_ACTION=REPLACE parallel=16 exclude=statistics &
Cluster=N -->use this in a RAC environment when the export/import is done to/from a local (non-shared) mount point
nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_schema=schema_old_name:schema_new_name remap_tablespace=OLD_TABLESPACE_NAME:NEW_TABLESPACE_NAME parallel=5 Cluster=N &
impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_table=OLD_TABLE_NAME:NEW_TABLE_NAME remap_schema=schema_old_name:schema_new_name parallel=5
Connecting using database service:
nohup expdp system/'password'@kol-scan:1521/zamdbtdb.localdomain parfile=expdp_tablebkup_01.par &
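Running Data Pump jobs can be monitored from the dictionary (a listed job can also be re-attached with expdp/impdp attach=<job_name>):
select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;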
The most recent version of AutoUpgrade can be downloaded via My Oracle Support note 2485457.1.
Copy autoupgrade.jar to any convenient location.
[oracle@kolkata02 auto_upgrade]$ cp /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar /home/oracle/auto_upgrade
Create a sample config file using:
[oracle@kolkata02 auto_upgrade]$ java -jar /home/oracle/auto_upgrade/autoupgrade.jar -create_sample_file config
Created sample configuration file /home/oracle/auto_upgrade/sample_config.cfg
Now create your own config file using sample_config.cfg
[oracle@kolkata02 ~]$ vi /home/oracle/config_zamdbtdb.cfg
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/zamdbtdb1
upg1.log_dir=/home/oracle/auto_upgrade/logs/zamdbtdb1
upg1.sid=zamdbtdb1
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_cdb=CNTESTDB1
upg1.target_pdb_name=zamdbtdbx -->for prechecks remove this target_pdb_name
upg1.target_pdb_copy_option=file_name_convert=('+DATA1/ZAMDBTDB','+DATA1','+RECO1/ZAMDBTDB','+RECO1','+FRA/ZAMDBTDB','+FRA') -->If you want to upgrade with copy use this option, otherwise remove this option
upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1
upg1.start_time=now
upg1.upgrade_node=kolkata02.localdomain
upg1.run_utlrp=yes
upg1.timezone_upg=yes
Save it, then run the prechecks:
nohup java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &
Otherwise you can run it in console mode as below: (PRECHECKS)
java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze
[oracle@kolkata02 prechecks]$ pwd
/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/102/prechecks
[oracle@kolkata02 prechecks]$ firefox zamdbtdb_preupgrade.html
java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -->runs the actual upgrade (deploy) in console mode
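In console mode the job can be followed from the AutoUpgrade prompt, for example (the job number is whatever lsj reports):
upg> lsj
upg> status -job 102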
To check the OS version and details
dcli -l root -g ~/dbs_group " cat /etc/redhat-release"
dcli -l root -g ~/dbs_group imageinfo | grep -i ' image version'
dcli -l root -g ~/dbs_group "ipmitool sunoem version"
dcli -l root -g ~/dbs_group "/opt/oracle.cellos/CheckHWnFWProfile -c strict"
dcli -l root -g ~/dbs_group "ipmitool sunoem cli 'show faulty'"
dcli -l root -g ~/ibswitch_lst "version |grep -i version"
dcli -l root -g ~/all_group 'for cable in `ls /sys/class/net/ |grep ^eth`; do printf "$cable: "; cat /sys/class/net/$cable/carrier ; done'
dcli -l root -g /root/dbs_group dbmcli -e list alerthistory where endTime=null and alertShortName=Hardware and alertType=stateful and severity=critical
dcli -l root -g ~/all_group "uptime"
dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs softwareversion"
dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f"
dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releasepatch"
dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releaseversion"
dcli -l oracle -g ~/dbs_group "/u01/app/19.0.0.0/grid/OPatch/opatch lspatches"
dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.1.0.2/DbHome_1/OPatch/opatch lspatches"
dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lspatches"
dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/opatch lspatches"
ssh hostname-ilom -->then press enter
cat /proc/meminfo | grep -i huge
sh hugepages.sh --> suggests HugePages settings --> this script is provided by MOS note 401749.1 and computes the recommended HugePages values
dbmcli --> list alerthistory
Connect to the container (datapatch should be run at the container level, not at the PDB level):
set the cluster_database parameter to false,
then shut down the container,
start up the container as a single instance in upgrade mode, then
alter pluggable database all open upgrade ---> this is used when we want to run datapatch after patching (a minimal sketch follows)
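A minimal sketch of that sequence (CDB name is a placeholder; afterwards set cluster_database back to true and restart normally):
SQL> alter system set cluster_database=false scope=spfile sid='*';
$ srvctl stop database -d CDBNAME
SQL> startup upgrade
SQL> alter pluggable database all open upgrade;
$ $ORACLE_HOME/OPatch/datapatch -verbose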
Upload attachments to ORACLE SUPPORT TEAM
curl -v -T "awrrpt_2_29210_29212.html" -o output -u "zameerbasha.b@gmail.com" "https://transport.oracle.com/upload/issue/3-22054156781/"
Connecting using Database service:
sqlplus sys/'password'@kol-scan:61000/pdb_name.localdomain.com as sysdba
Deleting records from a table in chunks:
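A minimal PL/SQL sketch (table, predicate, and batch size are placeholders):
BEGIN
  LOOP
    DELETE FROM schema1.table1 WHERE column1 = 'value' AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/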
Execution of sql files in nohup mode:
nohup sqlplus "/ as sysdba" @filename.sql &
nohup sqlplus username/'password'@tnsname as sysdba @filename.sql &
Long running session check:
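For example, a generic query against v$session_longops (thresholds are arbitrary):
select sid, serial#, opname, sofar, totalwork, round(sofar/totalwork*100,2) pct_done, time_remaining from v$session_longops where totalwork > 0 and sofar <> totalwork;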
Table Index check:
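For example (owner and table name are placeholders):
select index_name, index_type, status from dba_indexes where owner='SCHEMA1' and table_name='TABLE1';
select index_name, column_name, column_position from dba_ind_columns where table_owner='SCHEMA1' and table_name='TABLE1' order by index_name, column_position;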
Also note: use the steps below to log in as root on non-prod environments (if you have root access) during patching.
Step-1: sudo to oragrid
Step-2: sudo /usr/localcw/bin/eksh -l
You can flashback the database at SQL prompt and also using RMAN.
SQL> flashback database to restore point restore_point_name;
SQL> flashback database to timestamp TO_TIMESTAMP('2021-08-19 01:00:00','YYYY-MM-DD HH24:MI:SS');
RMAN > flashback database to time = "to_date('2021-08-19 01:00:00', 'YYYY-MM-DD HH24:MI:SS')";
Benefit of using RMAN for flashback: if the errors below occur at the SQL prompt, flashback needs archived logs that are no longer at the archive log destination but are available in a backup location (tape/external). RMAN will automatically restore the required archived logs from the backup location and apply them to flash the database back successfully.
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 1254285658 to SCN 1254285759
ORA-38761: redo log sequence 1694 in thread 1, incarnation 3 could not be
accessed
**We can create a restore point and flash the database back to it without turning FLASHBACK on (use a guaranteed restore point for this; example below).
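For example, a guaranteed restore point supports FLASHBACK DATABASE even with flashback logging off (the restore point name is a placeholder):
SQL> create restore point BEFORE_PATCH guarantee flashback database;
SQL> select name, scn, time, guarantee_flashback_database from v$restore_point;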
##### Service add & Load Balancing on Add Service ####### srvctl add service -s wcccdmt.farmersinsurance.com -r wcccdmtx1,wcccdmtx2,wcc...