Saturday, September 18, 2021

Jenkins in Oracle Database Task Automation


Now we’re ready to finish setting up the node via the Jenkins UI. In Jenkins, go to Manage Jenkins, then Manage Nodes, then click New Node. Here you can give your agent node a name, then select Permanent Agent and click OK. There are a variety of options you can use here to customize your node. All we care about right now is the Launch Method.

Sunday, September 12, 2021

Important Points

 Important points to note from daily activities:

While performing a re-org or Oracle Home movement on a production database:

    i) While performing an Oracle Home movement of a production database, add the TNS entries of both the primary and standby databases to the new Oracle Homes on both the PRIMARY and STANDBY sides.

    ii) While performing a re-org (table/index/LOB/partition movement), monitor the RECO disk group closely, since flashback logs also accumulate there. If DR sync-up is not happening, the RECO disk group may fill up; a quick usage check is sketched below.
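
A minimal monitoring sketch (v$asm_diskgroup columns are standard; watch RECO during the re-org):

select name, total_mb, free_mb,
       round((total_mb - free_mb) / total_mb * 100, 2) pct_used
from v$asm_diskgroup
order by pct_used desc;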

------------------------------------------------------------------------------------------------------

If you are unable to drop a tablespace because of BIN$ (recycle bin) objects showing in dba_indexes, try dropping those indexes; if they will not drop, first drop the BIN$ constraints of the table (found in dba_constraints) and then drop the BIN$ indexes. A generator sketch follows.
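
A minimal sketch that generates the drop statements (review the output before running it; &owner is a substitution placeholder, and the double quotes matter because BIN$ names are case-sensitive):

select 'alter table ' || owner || '.' || table_name || ' drop constraint "' || constraint_name || '";'
from dba_constraints where owner = '&owner' and constraint_name like 'BIN$%';

select 'drop index ' || owner || '."' || index_name || '";'
from dba_indexes where owner = '&owner' and index_name like 'BIN$%';

-- often 'purge dba_recyclebin;' run as SYSDBA clears these in one shot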

select sum(bytes) from v$sgastat where pool = 'shared pool';

The shared pool holds many other structures that are outside the scope of the corresponding parameter. SHARED_POOL_SIZE is typically the largest contributor to the shared pool as reported by the SUM(BYTES), but it is not the only contributor.

In Oracle 10g and above, the SHARED_POOL_SIZE parameter controls the size of the shared pool, whereas in Oracle9i and earlier it was just the largest contributor to it. You should review your actual pre-10g shared pool size (based on V$SGASTAT) and use that figure to set SHARED_POOL_SIZE in Oracle 10g and above. The breakdown below shows the biggest consumers.
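
A simple breakdown of the top shared pool consumers (v$sgastat columns are standard):

select * from (
  select name, round(bytes/1024/1024, 2) mb
  from v$sgastat
  where pool = 'shared pool'
  order by bytes desc
) where rownum <= 10;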


SELECT DBTIMEZONE AS "Database Time Zone", SESSIONTIMEZONE AS "Session Time Zone"

FROM dual;



Wednesday, September 1, 2021

SQL_TUNING


Turning on 10046 tracing for the current session:

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

Or, if the SQL is already running, you can turn tracing on for that sql_id:
How to trace a sql_id:
1. ALTER SYSTEM SET EVENTS 'sql_trace [sql: sql_id=3s1yukp05bzg7] bind=true, wait=true';
2. Execute the query.
3. alter system set events 'sql_trace [sql: sql_id=3s1yukp05bzg7] off';
4. Find the trace (see the query below).
You do not need to leave the trace on; collecting for 10 minutes is enough.
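
To locate the resulting trace files (a small helper; 'Default Trace File' applies to your own session, while traces for other sessions land in the Diag Trace directory and can be found by searching for the sql_id):

SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Default Trace File');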

Explain plan for a SQL query:

SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT * from soe.employee;
select * from table(dbms_xplan.display);

----------invisible index demo------------------------------
SET VERIFY OFF
set echo off
set pagesize 400
set linesize 300
set long 4999
set longc 4999
explain plan for 
SELECT /*+ use_invisible_indexes */ * from soe.employee;
select * from table(dbms_xplan.display);

--------------Indexes presence on a table------------------
SELECT aic.index_owner, aic.table_name, aic.INDEX_NAME, listagg(aic.column_name,',') within group (order by aic.column_position) cols FROM all_ind_columns aic where  aic.table_name='TABLE_NAME' group by aic.index_owner, aic.table_name, aic.INDEX_NAME order by aic.index_owner, aic.table_name ;

----------SQL Tracing-------------
1. Set the below trace at session level to get trace :
alter session set tracefile_identifier='06502';
alter session set events '06502 trace name errorstack level 3';
2. Run the plsql block and generate the error.
3. To close the trace set the following
alter session set events '06502 trace name context off';
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

---------Query execution time check-----------
------ this query gives the avg execution time per execution --- computed from execution counts and elapsed time in gv$sql and dba_hist_sqlstat

WITH
p AS (
SELECT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL
 UNION
SELECT plan_hash_value
  FROM dba_hist_sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions > 0
 GROUP BY
       plan_hash_value ),
a AS (
SELECT plan_hash_value,
       SUM(elapsed_time_total)/SUM(executions_total) avg_et_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions_total > 0
 GROUP BY
       plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(NVL(m.avg_et_secs, a.avg_et_secs)/1e6, 3) avg_et_secs
  FROM p, m, a
 WHERE p.plan_hash_value = m.plan_hash_value(+)
   AND p.plan_hash_value = a.plan_hash_value(+)
 ORDER BY
       avg_et_secs NULLS LAST;
--------------------------------------------------------------

---------sql plan changes app --------
set lines 200
set pagesize 200
col execs for 999,999,999
col avg_etime for 999,999.999 heading avg_exec|time(s)
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
col SQL_PROFILE for a45

set verify off
col PLAN_HASH_VALUE for 9999999999 heading 'Plan Hash'
col PARSING_SCHEMA_NAME for a10 heading 'Parsing Schema'
col END_INTERVAL_TIME for a30

SELECT STAT.SNAP_ID, STAT.SQL_ID, PLAN_HASH_VALUE, PARSING_SCHEMA_NAME,
       round(ELAPSED_TIME_DELTA/1000000,2) exec_sec, SS.END_INTERVAL_TIME, SQL_PROFILE
FROM DBA_HIST_SQLSTAT STAT, DBA_HIST_SQLTEXT TXT, DBA_HIST_SNAPSHOT SS
WHERE STAT.SQL_ID = TXT.SQL_ID
  AND STAT.DBID = TXT.DBID
  AND SS.DBID = STAT.DBID
  AND SS.INSTANCE_NUMBER = STAT.INSTANCE_NUMBER
  AND STAT.SNAP_ID = SS.SNAP_ID
  AND STAT.INSTANCE_NUMBER = 1
  AND SS.BEGIN_INTERVAL_TIME >= sysdate-7
  AND UPPER(STAT.SQL_ID) = upper('&sqlid')
ORDER BY stat.snap_id desc
/

select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where upper(sql_id) like upper('&&sqlid')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1,2,begin_interval_time 
/
PROMPT +-------------------------------------------------+
PROMPT | Execution times of the various plans in history |
PROMPT +-------------------------------------------------+

set lines 200 pages 200
col execs for 999,999,999

col etime for 999,999,999.9
col avg_etime for 999,999.999
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999.9
col avg_pio for 9,999,999.9
col begin_interval_time for a30
col node for 99999
--break on plan_hash_value on startup_time skip 1
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/


=====================check plan diff sql id

set linesize 200
set pagesize 40
col sql_plan_hash_value for 9999999999 heading 'Sql|Plan Hash'
col rows_processed for 999999999 heading 'Rows|Processed'
col SORTS for 9999999

set verify off
col last_load for a19
col plan_hash_value for 9999999999 heading "Plan|Hash Value"
select plan_hash_value,to_char(LAST_LOAD_TIME,'DD-MON-YY HH24:MI:SS') last_load,SORTS,FETCHES,EXECUTIONS,PARSE_CALLS,DISK_READS,DIRECT_WRITES,BUFFER_GETS,ROWS_PROCESSED,HASH_VALUE,OBJECT_STATUS from gv$sqlarea where SQL_ID = '&&sqlid';

PROMPT +---------------------------------------+
PROMPT | &&sqlid Query Last 5 plan history    
PROMPT +---------------------------------------+

set lines 200
set pagesize 200
col snap_id for 999999
col instance_number for 9999
col execs for 999,999,999
col avg_etime for 999,999.999
col avg_lio for 999,999,999
col SQL_PROFILE for a32
col begin_interval_time for a26
col node for 99999
--define sqlid=&1
set verify off
select * from (select ss.snap_id, ss.instance_number node, to_char(begin_interval_time,'DD-MON-YY HH24:MI:SS') Begin_Interval, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id ='&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
--and begin_interval_time > sysdate-1/24
order by begin_interval_time desc,1, 2)
where rownum <= 5
/

PROMPT +---------------------------------------+
PROMPT | &&sqlid Avg Exec Plan History
PROMPT +---------------------------------------+

set lines 200 pages 200
col execs for 999,999,999
col etime for 999,999,999 heading 'Exec_Time(sec)'
col avg_etime for 999,990.999 heading 'Avg |Exec_Time(sec)'
col avg_cpu_time for 999,999.999
col avg_lio for 999,999,999 heading 'Avg | Logical IO'
col avg_pio for 9,999,999  heading 'Avg | Physical IO'
col begin_interval_time for a30
col node for 99999
select sql_id, plan_hash_value,
sum(execs) execs,
sum(etime) etime,
sum(etime)/sum(execs) avg_etime,
sum(cpu_time)/sum(execs) avg_cpu_time,
sum(lio)/sum(execs) avg_lio,
sum(pio)/sum(execs) avg_pio
from (
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
elapsed_time_delta/1000000 etime,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
buffer_gets_delta lio,
disk_reads_delta pio,
cpu_time_delta/1000000 cpu_time,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
(cpu_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_cpu_time
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id =  '&&sqlid'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
)
group by sql_id, plan_hash_value
order by 5
/
==================================


=====stale stat sql id ============

set lines 500
col table_owner for a15
col table_name for a30
col partition_name for a30
select distinct b.table_owner, b.table_name, b.partition_name, b.inserts, b.updates, b.deletes, b.TRUNCATED,c.STALE_STATS,
to_char(b.timestamp, 'mm/dd/yyyy hh24:mi') timestamp, to_char(c.last_analyzed, 'mm/dd/yyyy hh24:mi') last_analyzed,
c.num_rows
from (select distinct sql_id, object#, object_name, object_owner from gv$sql_plan where sql_id = '&&sqlid' UNION select distinct sql_id, object#, object_name, object_owner from dba_hist_sql_plan where sql_id = '&&sqlid') a
, sys.dba_tab_modifications b, dba_tab_statistics c
where a.sql_id = '&&sqlid'
and  a.OBJECT_OWNER = b.table_owner
and  a.OBJECT_NAME = b.table_name
and  b.table_owner = c.owner
and  b.table_name  = c.table_name
and  NVL(b.partition_name,'NONE') = NVL(c.partition_name,'NONE')
and b.table_name is not null
order by b.table_owner, b.table_name, b.partition_name;
========================================================

======Plan change REport ==============


with samples as 
 (select *
  from dba_hist_sqlstat st
  join dba_hist_snapshot sn
  using (snap_id, instance_number) 
  where
--  sql_id = 'sqlid'
--  and parsing_schema_name = 'schema'
--  and module != 'DBMS_SCHEDULER' -- exclude sql tuning task sessions
    begin_interval_time between sysdate - &num_days and sysdate
  and executions_delta > 0),
 

/* just statements that had at least 2 different plans during that time */
  sql_ids as
   (select sql_id,
    count(distinct plan_hash_value) plancount
    from samples
    group by sql_id
    having count(distinct plan_hash_value) > 1),

/* per combination of sql_id and plan_hash_value, elapsed times per execution */
    plan_stats as 
     (select sql_id,
      plan_hash_value,
      min(parsing_schema_name) parsing_schema_name,
      count(snap_id) snap_count,
      max(end_interval_time) last_seen,
      min(begin_interval_time) first_seen,
      sum(executions_delta) total_execs,
      sum(elapsed_time_delta) / sum(executions_delta) elapsed_per_exec_thisplan
      from sql_ids
      join samples
      using (sql_id)
      group by sql_id, plan_hash_value),

/* how much different is the elapsed time most recently encountered from other elapsed times in the measurement interval? */
      elapsed_time_diffs as 
       (select p.*,
        elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc) elapsed_per_exec_diff,
        (elapsed_per_exec_thisplan - first_value(elapsed_per_exec_thisplan)
          over(partition by sql_id order by last_seen desc)) / elapsed_per_exec_thisplan elapsed_per_exec_diff_ratio
        from plan_stats p),

/* consider just statements for which the difference is bigger than our threshold (the threshold itself is applied in the final select below) */
        impacted_sql_ids as
         (select *
          from elapsed_time_diffs ),

/* for those statements, get all required information */
          all_info as
           (select sql_id,
            plan_hash_value,
        --    parsing_schema_name,
            snap_count,
            last_seen,
first_seen,
total_execs,
            round(elapsed_per_exec_thisplan / 1e6, 2) elapsed_per_exec_thisplan,
            round(elapsed_per_exec_diff / 1e6, 2) elapsed_per_exec_diff,
            round(100 * elapsed_per_exec_diff_ratio, 2) elapsed_per_exec_diff_pct,
            round(max(abs(elapsed_per_exec_diff_ratio))
              over(partition by sql_id), 2) * 100 max_abs_diff,
            round(max(elapsed_per_exec_diff_ratio) over(partition by sql_id), 2) * 100 max_diff,
            'select * from table(dbms_xplan.display_awr(sql_id=>''' || sql_id ||
            ''', plan_hash_value=>' || plan_hash_value || '));' xplan
            from elapsed_time_diffs
            where sql_id in (select sql_id from impacted_sql_ids))

/* format the output */
            select
             a.sql_id,
             plan_hash_value,
             -- parsing_schema_name,
             a.snap_count,
             total_execs,
             to_char(a.elapsed_per_exec_thisplan, '999999.99') elapsed_per_exec_thisplan,
             to_char(a.elapsed_per_exec_diff, '999999.99') elapsed_per_exec_diff,
             to_char(a.elapsed_per_exec_diff_pct, '999999.99') elapsed_per_exec_diff_pct,
             to_char(first_seen, 'dd-mon-yy hh24:mi') first_seen,
             to_char(last_seen, 'dd-mon-yy hh24:mi') last_seen
             --xplan
            from all_info a
            where sql_id in (select distinct sql_id from all_info where elapsed_per_exec_diff_pct < -50)
            order by sql_id, elapsed_per_exec_diff_pct;
=============================

=====explain plan from sql_id==============
SELECT t.*
FROM gv$sql s, table(DBMS_XPLAN.DISPLAY_CURSOR(s.sql_id, s.child_number)) t 
WHERE s.sql_id='&&sql_id';
=============================


+++++++++++++++++++++++++++++++++++++++++++++++


To check for unindexed foreign key columns in the database:
column columns format a30 word_wrapped
column table_name format a15 word_wrapped
column constraint_name format a15 word_wrapped
select table_name, constraint_name,
cname1 || nvl2(cname2,','||cname2,null) ||
nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
columns
from ( select b.table_name,
b.constraint_name,
max(decode( position, 1, column_name, null )) cname1,
 max(decode( position, 2, column_name, null )) cname2,
 max(decode( position, 3, column_name, null )) cname3,
 max(decode( position, 4, column_name, null )) cname4,
 max(decode( position, 5, column_name, null )) cname5,
 max(decode( position, 6, column_name, null )) cname6,
 max(decode( position, 7, column_name, null )) cname7,
 max(decode( position, 8, column_name, null )) cname8,
 count(*) col_cnt
 from (select substr(table_name,1,30) table_name,
 substr(constraint_name,1,30) constraint_name,
 substr(column_name,1,30) column_name,
 position
 from user_cons_columns ) a,
 user_constraints b
 where a.constraint_name = b.constraint_name
 and b.constraint_type = 'R'
 group by b.table_name, b.constraint_name
 ) cons
 where col_cnt > ALL
 ( select count(*)
 from user_ind_columns i,
 user_indexes ui
 where i.table_name = cons.table_name
 and i.column_name in (cname1, cname2, cname3, cname4,
 cname5, cname6, cname7, cname8 )
 and i.column_position <= cons.col_cnt
 and ui.table_name = i.table_name
 and ui.index_name = i.index_name
 and ui.index_type IN ('NORMAL','NORMAL/REV')
 group by i.index_name
 );


Blocking session:
============
select
(select username from v$session where sid=a.sid) blocker,
a.sid,
' is blocking ',
(select username from v$session where sid=b.sid) blockee,
b.sid
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
 and a.id1 = b.id1
 and a.id2 = b.id2;


ALTER SESSION SET EVENTS '10730 trace name context forever, level <n>';   -- event 10730 traces row-level security (VPD) predicates; pick level 1, 2, or 3

perfstats_query:
SELECT
    SYSDATE,
    sql_id,
    sql_fulltext,
    hash_value,
    parsing_schema_name,
    module,
    first_load_time,
    last_active_time,
    parse_calls,
    executions,
    round(cpu_time / (executions * 1000000) ) AS cputime,
    round(user_io_wait_time / (executions * 1000000) ) AS iowait,
    round(elapsed_time / (executions * 1000000),2 ) AS elaptimesecs
FROM
    gv$sqlarea
WHERE
    executions != 0
    AND parsing_schema_name NOT IN (
        'SYS',
        'SYSTEM',
        'DBSNMP',
        'PERFSTATS'
    )
        AND module NOT IN (
        'SQL Developer',
        'Toad'
    )
    and round(elapsed_time/(executions*1000000)) > 3;

========
col BEGIN_INTERVAL_TIME for a30
select snap_id,BEGIN_INTERVAL_TIME,END_INTERVAL_TIME from dba_hist_snapshot where BEGIN_INTERVAL_TIME > systimestamp -1 order by BEGIN_INTERVAL_TIME;

select snap_id,instance_number inst_id,sql_id,plan_hash_value,parsing_schema_name,EXECUTIONS_TOTAL,EXECUTIONS_DELTA,ELAPSED_TIME_TOTAL,ELAPSED_TIME_DELTA from DBA_HIST_SQLSTAT where sql_id=TRIM('&&sql_id.');

 select sql_id,plan_hash_value,elapsed_time,executions from gv$sql where sql_id =TRIM('&&sql_id.');

select Inst_id,SQL_FULLTEXT,SQL_ID,EXECUTIONS,ELAPSED_TIME from gv$sqlarea where sql_id =TRIM('&&sql_id.') ;


SELECT STAT.SNAP_ID,STAT.SQL_ID, PLAN_HASH_VALUE, PARSING_SCHEMA_NAME,elapsed_time_total,executions_total,elapsed_time_delta, nvl(executions_delta,0) executions_delta ,round(ELAPSED_TIME_DELTA/1000000,2) exec_sec, SS.END_INTERVAL_TIME,SQL_PROFILE FROM DBA_HIST_SQLSTAT STAT, DBA_HIST_SQLTEXT TXT, DBA_HIST_SNAPSHOT SS 
WHERE STAT.SQL_ID = TXT.SQL_ID AND STAT.DBID = TXT.DBID AND SS.DBID = STAT.DBID AND SS.INSTANCE_NUMBER = STAT.INSTANCE_NUMBER AND STAT.SNAP_ID = SS.SNAP_ID AND  STAT.INSTANCE_NUMBER = 1 AND SS.BEGIN_INTERVAL_TIME >= sysdate-10 AND UPPER(STAT.SQL_ID) =  upper('&sqlid') ORDER BY stat.snap_id desc
/

Monday, August 30, 2021

Table Fragmentation

 Checking the deleted (free) space in a table (AVG_SPACE comes from optimizer statistics, so gather stats first):

SELECT BLOCKS, BLOCKS*8192/1024 TOTAL_SIZE_KB, AVG_SPACE, round(BLOCKS*AVG_SPACE/1024,2) FREE_SPACE_KB FROM USER_TABLES WHERE TABLE_NAME='EMPLOYEE';
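
If the free space is worth reclaiming, a minimal sketch (assuming the segment is in an ASSM tablespace and row movement is acceptable; EMPLOYEE is the example table from above):

ALTER TABLE EMPLOYEE ENABLE ROW MOVEMENT;
ALTER TABLE EMPLOYEE SHRINK SPACE CASCADE;
-- or rebuild the segment instead (indexes become UNUSABLE and must be rebuilt):
-- ALTER TABLE EMPLOYEE MOVE;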

Monday, August 23, 2021

Shell Scripting

 Shell Scripting Handy

User creation at OS level (usercreate_os.sh)

#!/bin/bash

#this script creates an account on the local system.

#you will be prompted for the account name and password

#Ask for username

read -p 'Enter the username: ' USER_NAME

#Ask for the real name

read -p 'Enter the name of the person who is this account for: ' COMMENT

#ask for the password

read -p 'Enter the password to use for the account: ' PASSWORD

#create the username

useradd -c "${COMMENT}" -m ${USER_NAME}

#set the password for the username

echo ${PASSWORD} | passwd --stdin ${USER_NAME}

#force password change on first login

passwd -e ${USER_NAME}

######################################################################

Script2:

 RANDOM: Each time this parameter is referenced, a random integer between 0 and 32767 is generated. The sequence of random numbers may be initialized by assigning a value to RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset.


[oracle@kolkata02 ~]$ echo ${RANDOM}

24092

[oracle@kolkata02 ~]$ echo ${RANDOM}

1748

[oracle@kolkata02 ~]$ echo ${RANDOM}

2398

[oracle@kolkata02 ~]$ !v   ----> re-runs the last command that started with 'v' (here, reopening the file last edited with vi)

#!/bin/bash

#This scripts generates a list of random passwords

#A random number as a password

PASSWORD=${RANDOM}

echo "${PASSWORD}"

#Three random numbers together

PASSWORD="${RANDOM}${RANDOM}${RANDOM}"

echo "${PASSWORD}"

#use the current date/time as the basis for the password

PASSWORD=$(date +%s)    ----> '+' introduces the output format; %s is seconds since 1970 UTC

echo "${PASSWORD}"

#use nanoseconds for extra randomness

PASSWORD=$(date +%s%N)   ---> %N is nanoseconds

echo "${PASSWORD}"

# A better password

PASSWORD=$(date +%s%N | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better password

PASSWORD=$(date +%s%N${RANDOM}${RANDOM} | sha256sum | head -c32)

echo "${PASSWORD}"

# An even better password: append a random special character

Special_character=$(echo '!@#$%^&*()_+' | fold -w1 | shuf | head -c1)

echo "${PASSWORD}${Special_character}"  ---> here the special character is appended

++++++++++++++++++++++++
[oracle@kolkata02 ~]$ echo "1" >> cheksumdata.txt   ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
[oracle@kolkata02 ~]$ echo "2" >> cheksumdata.txt       ---->this will append the data in the next line
[oracle@kolkata02 ~]$ vi cheksumdata.txt
asdfdsdfasdf34343434
1
2
++++++++++++++++++++++++
head -2 /etc/passwd
head -n1 /etc/passwd
head -n -1 /etc/passwd
head -c1 /etc/passwd
head -c2 /etc/passwd
echo "testing" | head -c2
date +%s%N | sha256sum | head -c32
++++++++++++++++++++++++
A parameter is a variable used inside the shell script.
An argument is the value passed to a parameter.
${0} ---> positional parameter that holds the script name itself
[oracle@kolkata02 ~]$ which head        ---->which -a head
/usr/bin/head
${#} ---> the number of arguments passed to the script
${@} ---> expands each argument as a separate word; useful in a for loop when you don't know how many arguments will be passed
${*} ---> combines all the arguments into a single word
[oracle@kolkata02 ~]$ for username in zameer naseer ayaan
> do 
> echo hi ${username}
> done
hi zameer
hi naseer
hi ayaan
[oracle@kolkata02 ~]$

#!/bin/bash
echo 'you executed this command: '${0}

true
sleep
shift
while loop
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#This script demonstrates I/O redirection
#standard input
#standard output
#standard error
#redirect STDOUT to a file
file="/tmp/data"
head -n1 /etc/passwd > ${file}

#redirect STDIN to a program
read LINE < ${file}
echo "the line contains : ${file}"

#we can change a user's password by redirecting the contents of the password file into passwd
[oracle@kolkata02 ~]$ echo "secret" > password
[oracle@kolkata02 ~]$ cat password
secret
[root@kolkata02 oracle]# sudo passwd --stdin testuser1 < password
Changing password for user testuser1.
passwd: all authentication tokens updated successfully.
# ">" overwrite the existing content in a file
head -n3 /etc/passwd > ${file}
echo contents of the file ${file} is:
cat ${file}

#redirect STDOUT to a file, appending to the file
echo "${RANDOM} ${RANDOM}" >> ${file}
echo "${RANDOM} ${RANDOM}" >> ${file}
echo
echo "contents of the file: ${file}"
cat ${file}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[oracle@kolkata02 ~]$ read x < /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$ read x 0< /etc/redhat-release
[oracle@kolkata02 ~]$ echo ${x}
Red Hat Enterprise Linux Server release 7.9 (Maipo)
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out    --->this will not redirect the error to the file head.out
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$ cat head.out
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
[oracle@kolkata02 ~]$
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>head.err  ---> this will write the error into the file head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
The 2>>head.err below appends the error:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.out 2>>head.err
[oracle@kolkata02 ~]$ cat head.err
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
To send standard output and standard error to the same file, there is an old syntax and a new syntax:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile > head.both 2>&1
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
The new syntax for the same operation:
[oracle@kolkata02 ~]$ head -n1 /etc/passwd /etc/hosts /fakefile &> head.both
[oracle@kolkata02 ~]$ cat head.both
==> /etc/passwd <==
root:x:0:0:root:/root:/bin/bash

==> /etc/hosts <==
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
head: cannot open ‘/fakefile’ for reading: No such file or directory
[oracle@kolkata02 ~]$
&>> ---> appends both streams to the file

===============Below displays the logs of ABENDED GoldenGate processes================

#!/bin/bash
BASE_DIR=/u01/app/oracle/scripts
log_dir=$BASE_DIR/log_dir
gg_processes=$BASE_DIR/log_dir/gg_processes.log
gg_process_Detail=$BASE_DIR/gg_process_Detail.txt
Abended_ggprocess=$BASE_DIR/Abended_ggprocess.txt
gg_process_log_output=$BASE_DIR/gg_process_log_output.txt
rm -f ${gg_process_log_output}   # -f avoids an error if the file does not exist yet
echo $ORACLE_HOME
echo $GG_HOME
cd $GG_HOME
pwd

$GG_HOME/ggsci <<EOF > ${gg_processes}
info all
exit
EOF

cat ${gg_processes} | grep ABENDED | awk '{ print $3 }' > ${Abended_ggprocess}

while read line
do
#echo -e "\n"
#echo -e "The below is the output of last four lines of ${line} GG Process log, please check\n" >>${gg_process_log_output}
echo -e "The below is the output of last four lines of `echo -e "\e[1;31m ${line} \e[0m"`  GG Process log, please check\n" >>${gg_process_log_output}
tail -4 /u01/app/oracle/product/ogg/dirrpt/${line}.rpt >> ${gg_process_log_output}
echo -e "\n" >> ${gg_process_log_output}
done <${BASE_DIR}/Abended_ggprocess.txt

========================================================


QFSDP Patching

Step 1: Cell patching in rolling mode

Step 2: Downtime patching for the following, in order:

            First, patch the IB switches

            then patch the GI home

            then patch the ORACLE homes

            then yum upgrade

            then apply datapatch for all databases; if the Oracle automation script fails for CDB databases, run datapatch manually on each database. A verification query is sketched below.
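
To confirm datapatch was applied on each database, a simple check (dba_registry_sqlpatch; the patch_type column exists in 12.2 and later):

select patch_id, patch_type, action, status, action_time
from dba_registry_sqlpatch
order by action_time;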

Sunday, August 22, 2021

DataPump - expdp/impdp

 Expdp with Query option: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=expdp_zamdbtdb_tblbkup_new.log

tables=schema.table1

parallel=10

query=schema1.table1:"where column1 NOT IN (SELECT column2 FROM schema1.table2 where column3= 'value')"

cluster=N

 Impdp: (parfile)

directory=DATA_PUMP

dumpfile=expdp_zamdbtdb_tblbkup_new_%U.dmp

logfile=imdp_zamdbtdb_tblbkup_new.log

tables=schema.table1

table_exists_action=replace

parallel=10

cluster=N

Regular Commands:

select directory_name,directory_path from dba_directories where directory_name='DATA_PUMP';

select sum(bytes)/1024/1024/1024 from dba_segments;

create directory DATA_PUMP as '/opt/backups';

grant read,write on directory DATA_PUMP to system;

grant all on directory DATA_PUMP to public;

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log tables=schema1.table1 parallel=16 exclude=statistics

expdp system directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics

nohup expdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log schemas=schema1 parallel=12 exclude=statistics &

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=impdp_test_tblbkp.log schemas=schema1 TABLE_EXISTS_ACTION=REPLACE parallel=16 exclude=statistics &

Cluster=N --> use this in a RAC environment when the export/import reads from or writes to a local (non-shared) mount point

nohup impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_schema=schema_old_name:schema_new_name remap_tablespace=OLD_TABLESPACE_NAME:NEW_TABLESPACE_NAME parallel=5 Cluster=N &

impdp system/'password' directory=DATA_PUMP dumpfile=expdp_test_tblbkp_%U.dmp logfile=expdp_test_tblbkp.log remap_table=OLD_TABLE_NAME:NEW_TABLE_NAME remap_schema=schema_old_name:schema_new_name parallel=5

Connecting using database service:

nohup expdp system/'password'@kol-scan:1521/zamdbtdb.localdomain parfile=expdp_tablebkup_01.par &




Database Upgrade (autoupgrade.jar)

The most recent version of AutoUpgrade can be downloaded via My Oracle Support note 2485457.1.



Copy autoupgrade.jar to a convenient location:

[oracle@kolkata02 auto_upgrade]$ cp /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar /home/oracle/auto_upgrade

Create a sample config file using :

[oracle@kolkata02 auto_upgrade]$ java -jar /home/oracle/auto_upgrade/autoupgrade.jar -create_sample_file config

Created sample configuration file /home/oracle/auto_upgrade/sample_config.cfg

Now create your own config file using sample_config.cfg

[oracle@kolkata02 ~]$ vi /home/oracle/config_zamdbtdb.cfg

global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/zamdbtdb1

upg1.log_dir=/home/oracle/auto_upgrade/logs/zamdbtdb1

upg1.sid=zamdbtdb1

upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1

upg1.target_cdb=CNTESTDB1

upg1.target_pdb_name=zamdbtdbx         -->for prechecks remove this target_pdb_name

upg1.target_pdb_copy_option=file_name_convert=('+DATA1/ZAMDBTDB','+DATA1','+RECO1/ZAMDBTDB','+RECO1','+FRA/ZAMDBTDB','+FRA') -->If you want to upgrade with copy use this option, otherwise remove this option

upg1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_1

upg1.start_time=now

upg1.upgrade_node=kolkata02.localdomain

upg1.run_utlrp=yes

upg1.timezone_upg=yes

Save it, then run the prechecks:

nohup java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &

Otherwise you can run it in console mode as below: (PRECHECKS)

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode analyze

The prechecks succeeded; you can verify in the HTML report:

 [oracle@kolkata02 prechecks]$ pwd

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/102/prechecks

[oracle@kolkata02 prechecks]$ firefox zamdbtdb_preupgrade.html

Now start the actual upgrade (noconsole or console mode):
nohup java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -noconsole >> /home/oracle/zamdbtdb_upg.log 2>&1 &   ---->This is noconsole mode

java -jar /home/oracle/auto_upgrade/autoupgrade.jar -config /home/oracle/config_zamdbtdb.cfg -mode deploy -->This is console mode


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   2
End time:        N/A
Last update:     2021-08-22T13:01:35.143
Stage:           PRECHECKS
Operation:       PREPARING
Status:          RUNNING
Pending stages:  8
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         1 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Checks

Error Details:
None


upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   136
End time:        N/A
Last update:     2021-08-22T15:14:45.923
Stage:           POSTFIXUPS
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  3
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        2 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/postfixups
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
+---------+---------------+-------+
| DATABASE|          FIXUP| STATUS|
+---------+---------------+-------+
|zamdbtdb1|POST_DICTIONARY|STARTED|
+---------+---------------+-------+

Error Details:
None

upg> status -job 103
Progress
-----------------------------------
Start time:      21/08/22 13:00
Elapsed (min):   177
End time:        N/A
Last update:     2021-08-22T15:57:39.973
Stage:           NONCDBTOPDB
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  1
Stage summary:
    SETUP             <1 min
    GRP               <1 min
    PREUPGRADE        <1 min
    PRECHECKS         2 min
    PREFIXUPS         16 min
    DRAIN             1 min
    DBUPGRADE         112 min
    POSTCHECKS        <1 min
    POSTFIXUPS        16 min
    POSTUPGRADE       <1 min
    NONCDBTOPDB       26 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1
Job logs:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103
Stage logs:   /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
TimeZone:     /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp

Additional information
-----------------------------------
Details:
Executing noncdb_to_pdb.sql

Error Details:
None

Currently the noncdb_to_pdb conversion is running; it is in the last stage, where utlrp.sql recompiles invalid objects. Progress can be watched as sketched below.
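
A simple way to watch the recompilation (run inside the PDB; the count should fall as utlrp.sql progresses):

select count(*) from dba_objects where status = 'INVALID';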

/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb
[oracle@kolkata02 noncdbtopdb]$ ls -ltr
total 444
-rwx------. 1 oracle oinstall      0 Aug 22 15:31 noncdb_to_pdb_zamdbtdb.log.lck
-rwx------. 1 oracle dba        8292 Aug 22 15:32 zamdbtdbx.xml
-rwx------. 1 oracle oinstall    702 Aug 22 15:33 createpdb_zamdbtdb.log
-rwx------. 1 oracle oinstall 376832 Aug 22 15:44 noncdbtopdb_zamdbtdb.log
-rwx------. 1 oracle oinstall  60924 Aug 22 16:54 noncdb_to_pdb_zamdbtdb.log

[oracle@kolkata02 noncdbtopdb]$ tail -100f noncdb_to_pdb_zamdbtdb.log
2021-08-22 16:54:12.033 INFO [(SQLPATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_SID=CNTESTDB1), (ORACLE_UNQNAME=zamdbtdb), (ORACLE_PATH=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb), (ORACLE_BASE=/u01/app/oracle), (TWO_TASK=N/A), (ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1), (TNS_ADMIN=N/A), (LDAP_ADMIN=N/A), (PERL5LIB=N/A), (WORKDIR=/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/noncdbtopdb)] - ExecutionEnv.addEnvToProcess
2021-08-22 16:54:12.034 INFO Starting - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.034 INFO Finished - ExecuteProcess.setLibsForSqlplus
2021-08-22 16:54:12.036 INFO End Setting Oracle Environment - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.036 INFO Begin Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO End Creating process - ExecuteProcess.startSqlPlusProcess
2021-08-22 16:54:12.109 INFO Executing SQL [SELECT COUNT(*) FROM sys.obj$ WHERE status IN (4, 5, 6);] in [CNTESTDB1, container:zamdbtdbx] - ExecuteSql$SQLClient.run
2021-08-22 16:54:12.628 INFO Progress was detected in noncdb_to_pdb.sql script execution due to fewer invalid objects[10] present in the pdb - NonCDBToPDBSQL$CheckProgress.run

Errors in database [zamdbtdb1]
Stage     [NONCDBTOPDB]
Operation [STOPPED]
Status    [ERROR]
Info    [
Error: UPG-1699
[Unexpected exception error]
Cause: Error finding error definition, contact Oracle Support
For further details, see the log file located at /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]

-------------------------------------------------
Logs: [/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/autoupgrade_20210822_user.log]
-------------------------------------------------
upg>
upg> lsj
+----+---------+-----------+---------+------+--------------+--------+--------+
|Job#|  DB_NAME|      STAGE|OPERATION|STATUS|    START_TIME| UPDATED| MESSAGE|
+----+---------+-----------+---------+------+--------------+--------+--------+
| 103|zamdbtdb1|NONCDBTOPDB|  STOPPED| ERROR|21/08/22 13:00|17:06:27|UPG-1699|
+----+---------+-----------+---------+------+--------------+--------+--------+
Total jobs 1

At this step we found that the upgrade had completed successfully and the PDB plug-in had also completed. But due to PDB plug-in violations, the PDB stayed in RESTRICTED state and would not come to OPEN state.
Finding and resolving the PDB plug-in violations allowed the PDB to open normally.

2021-08-22 15:13:58.793 INFO [Upgrading] is [100%] completed for [zamdbtdb]
+---------+--------------------------------+
|CONTAINER|                      PERCENTAGE|
+---------+--------------------------------+
| zamdbtdb|SUCCESSFULLY UPGRADED [zamdbtdb]|
+---------+--------------------------------+
2021-08-22 15:13:58.940 INFO Error opening file [/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/initzamdbtdb1.ora] for reading
2021-08-22 15:14:01.626 INFO Creating spfile completed with success
2021-08-22 15:14:01.627 INFO SUCCESSFULLY UPGRADED [zamdbtdb]
2021-08-22 15:14:01.755 INFO zamdbtdb Return status is SUCCESS
2021-08-22 15:14:24.484 INFO Analyzing zamdbtdb1, 11 checks will run using 2 threads
2021-08-22 15:14:42.616 INFO Using /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg to identify required fixups
2021-08-22 15:14:42.714 INFO Content of the checklist /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/103/prechecks/zamdbtdb_checklist.cfg is:
2021-08-22 15:31:21.106 INFO Guarantee Restore Point (GRP) successfully removed [ZAMDBTDB][AUTOUPGRADE_9212_ZAMDBTDB1122010]
2021-08-22 15:33:38.901 INFO No entry was found for [zamdbtdb1:/u01/app/oracle/product/19.0.0.0/dbhome_1] in /etc/oratab
2021-08-22 17:06:16.137 INFO /home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora
2021-08-22 17:06:26.615 ERROR Dispatcher failed: AutoUpgException [ERROR3007#Errors executing [CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora';

CREATE SPFILE='+DATAC1' FROM  PFILE='/home/oracle/auto_upgrade/logs/zamdbtdb1/zamdbtdb1/temp/after_upgrade_pfile_zamdbtdb1.ora'
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
] [zamdbtdb1]]

select name, cause, type, status,action,message,time from pdb_plug_in_violations;   ---> run this while connected to the CDB (CNTESTDB); it revealed the action plan below

Some interim patches were installed in the PDB but not in the CDB: when I applied 32904851 (Database Release Update 19.12.0.0.210720), I did not run DATAPATCH at the CDB level, so when the upgraded PDB was plugged into the CDB it brought all of its interim patches with it, producing violations of the form "Not installed in the CDB but installed in the PDB".
Running datapatch at the CDB level resolved the violations, and the PDB then came to OPEN state.
The error details are saved under "E:\zameer_workspace\AutomationScripts\DBUpgrade\database_upgrade_steps".
SQL> alter pluggable database ZAMDBTDBX close instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      MOUNTED
SQL>
SQL>
SQL> alter pluggable database ZAMDBTDBX open  instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 ZAMDBTDBX                      READ WRITE NO

SQL>  alter pluggable database ZAMDBTDBX save state instances=all;

Pluggable database altered.
[oracle@kolkata02 ~]$ srvctl status database -d cntestdb -v -f
Instance CNTESTDB1 is running on node kolkata02 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.
Instance CNTESTDB2 is running on node kolkata03 with online services TESTPDB.localdomain,zamdbtdb.localdomain. Instance status: Open.

Saturday, August 21, 2021

EXADATA Handy

To check the OS version and details

 dcli -l root -g ~/dbs_group " cat /etc/redhat-release"

dcli -l root -g ~/dbs_group imageinfo | grep -i ' image version'

dcli -l root -g ~/dbs_group "ipmitool sunoem version"

dcli -l root -g ~/dbs_group "/opt/oracle.cellos/CheckHWnFWProfile -c strict"

dcli -l root -g ~/dbs_group "ipmitool sunoem cli 'show faulty'"

dcli -l root -g ~/ibswitch_lst "version |grep -i version"

dcli -l root -g ~/all_group 'for cable in `ls /sys/class/net/ |grep ^eth`; do printf "$cable: "; cat /sys/class/net/$cable/carrier ; done'

dcli -l root -g /root/dbs_group dbmcli -e list alerthistory where endTime=null and alertShortName=Hardware and alertType=stateful and severity=critical

dcli -l root -g ~/all_group "uptime"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs softwareversion"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releasepatch"

dcli -l root -g ~/dbs_group "/u01/app/19.0.0.0/grid/bin/crsctl query crs releaseversion"

dcli -l oracle -g ~/dbs_group "/u01/app/19.0.0.0/grid/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.1.0.2/DbHome_1/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/12.2.0.1/dbhome_1/OPatch/opatch lspatches"

dcli -l oracle -g ~/dbs_group "/u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/opatch lspatches"

ssh hostname-ilom  -->then press enter

grep -i huge /proc/meminfo

sh hugepages.sh --> suggests hugepages settings; the script comes from MOS note 401749.1 and computes the recommended number of hugepages. A rough sanity check is sketched below.
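
A rough single-instance approximation of what the script computes (an assumption-laden sketch: 2 MB hugepages, one instance on the host; the MOS script inspects the actual shared memory segments instead):

select ceil(sum(bytes)/1024/1024/2) approx_hugepages
from v$sgainfo
where name = 'Maximum SGA Size';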

dbmcli --> list alerthistory



CDB database Handy

Connect to the container (datapatch should run at container level, not at PDB level):

set the cluster_database parameter to false

then shut down the container

start up one instance of the container in upgrade mode, then

 Alter pluggable database all open upgrade ---> used when we want to run datapatch after patching (a consolidated sketch follows)
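
Putting the steps together (a minimal sketch; run from the CDB root on one node, and adjust names to your environment):

alter system set cluster_database=false scope=spfile sid='*';
shutdown immediate;
startup upgrade;
alter pluggable database all open upgrade;
-- now run datapatch from the OS ($ORACLE_HOME/OPatch/datapatch -verbose), then revert:
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate;
startup;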




Friday, August 20, 2021

Daily DBA Handy

Upload attachments to ORACLE SUPPORT TEAM

curl -v -T "awrrpt_2_29210_29212.html" -o output -u "zameerbasha.b@gmail.com" "https://transport.oracle.com/upload/issue/3-22054156781/"

Connecting using Database service:

sqlplus sys/'password'@kol-scan:61000/pdb_name.localdomain.com as sysdba

Deleting records from a table in chunks

begin
LOOP
DELETE FROM schema_name.table_name
WHERE column_name= 'value'
and ROWNUM <50000;
EXIT WHEN sql%ROWCOUNT = 0;
commit;
END LOOP;
COMMIT;
END;
/

Execution of sql files in nohup mode:

nohup sqlplus "/ as sysdba" @filename.sql &

nohup sqlplus username/'password'@tnsname as sysdba @filename.sql &

Long running session check:

SELECT
    a.sql_fulltext,
    a.sql_id,
    b.last_call_et,
    b.sid,
    b.serial#,
    b.username,
    b.inst_id,
    b.machine,
    b.module,
    b.client_identifier,
    b.action,
    b.osuser,
    b.program,
    b.event,
    b.final_blocking_session,
    b.status
FROM
    gv$sql       a,
    gv$session   b
WHERE 
--b.client_identifier like '%test%' and
--b.machine='machinename.domain.com' and
    a.sql_id = b.sql_id
    AND b.osuser !='oracle'
    AND b.username != 'SYS'
    AND b.status= 'ACTIVE'
ORDER BY
    b.last_call_et DESC;

Table Index check:

SELECT
    aic.index_owner,
    aic.table_name,
    aic.index_name,
    LISTAGG(aic.column_name, ',') WITHIN GROUP(
        ORDER BY
            aic.column_position
    ) cols
FROM
    all_ind_columns aic
WHERE
    aic.table_name = 'TABLE_NAME'
GROUP BY
    aic.index_owner,
    aic.table_name,
    aic.index_name
ORDER BY
    aic.index_owner,
    aic.table_name;

Blocking Session Check:
 SELECT
    b.inst_id,
    lpad('--->', DECODE(a.request, 0, 0, 5))
    || a.sid sid,
    b.serial#,
    b.sql_id,
    b.prev_sql_id,
    a.id1,
    a.id2,
    a.lmode,
    a.block,
    a.request,
    DECODE(a.type, 'MR', 'Media Recovery', 'RT', 'Redo Thread', 'UN', 'User Name', 'TX', 'Transaction','TM', 'DML', 'UL', 'PL/SQL User Lock'
    , 'DX', 'Distributed Xaction', 'CF', 'Control File', 'IS', 'Instance State', 'FS', 'File Set', 'IR', 'Instance Recovery', 'ST', 'Disk Space Transaction'
    , 'TS', 'Temp Segment', 'IV', 'Library Cache Invalidation', 'LS', 'Log Start or Switch', 'RW', 'Row Wait', 'SQ', 'Sequence Number'
    , 'TE', 'Extend Table', 'TT', 'Temp Table', a.type) lock_type,
    b.program,
    b.osuser,
    b.username,
    b.status,
    b.module,
    b.action,
    b.logon_time,
    b.last_call_et,
    'alter system kill session '
    || ''''
    || a.sid
    || ', '
    || b.serial#
    || ''''
    || ' immediate;' kill_session,
    DECODE(object_type, NULL, NULL, 'Dbms_Rowid.rowid_create(1, '
                                    || row_wait_obj#
                                    || ', '
                                    || row_wait_file#
                                    || ', '
                                    || row_wait_block#
                                    || ', '
                                    || row_wait_row#
                                    || ')') row_id
FROM
    gv$lock       a,
    gv$session    b,
    dba_objects   o
WHERE
    ( a.id1,
      a.id2 ) IN (
        SELECT
            id1,
            id2
        FROM
            gv$lock
        WHERE
            lmode = 0
    )
    AND a.inst_id = b.inst_id
    AND a.sid = b.sid
    AND o.object_id (+) = DECODE(b.row_wait_obj#, - 1, NULL, b.row_wait_obj#)
ORDER BY
    a.id1,
    a.id2,
    a.request;

select a.inst_id, a.sid, a.serial#, a.process, a.logon_time, c.object_name
from gv$session a, gv$locked_object b, dba_objects c
where b.object_id = c.object_id
and a.sid = b.session_id
and OBJECT_NAME like '%LINK%'  order by a.logon_time;

Gather Stats:
exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'schema_owner' ,tabname => 'table_name', cascade => true, estimate_percent => dbms_stats.auto_sample_size, degree => 15);

execute dbms_stats.gather_schema_stats(ownname => 'schema_owner',ESTIMATE_PERCENT =>dbms_stats.auto_sample_size,CASCADE => TRUE,degree => 15);

 select 'alter system kill session '''|| s.sid|| ','|| s.serial#|| ''' immediate;' from gv$session S where status='INACTIVE';

Tablespace creation from source to target (generates the DDL on the source to run on the target):
select 'CREATE BIGFILE TABLESPACE ' || tablespace_name || ' DATAFILE ''+DATA1'' SIZE 200M AUTOEXTEND ON NEXT 500M MAXSIZE UNLIMITED;' from dba_tablespaces where tablespace_name not in ('SYSTEM','SYSAUX','TEMP'); 

FRA SPACE CHECK:
col name for a32
col size_m for 999,999,999
col used_m for 999,999,999
col pct_used for 999
SELECT name
, ceil( space_limit / 1024 / 1024) SIZE_M
, ceil( space_used  / 1024 / 1024) USED_M
, decode( nvl( space_used, 0),
0, 0
, ceil ( ( space_used / space_limit) * 100) ) PCT_USED
FROM v$recovery_file_dest
ORDER BY name
/

Grep Command:
grep sga_max_size diag/rdbms/*/*/trace/alert* | sort | uniq
grep pga_aggregate_limit diag/rdbms/*/*/trace/alert* | sort | uniq


Undo Usage:
----------------
1. To check the current size of the Undo tablespace:

select sum(a.bytes) as undo_size from v$datafile a, v$tablespace b, dba_tablespaces c where c.contents = 'UNDO' and c.status = 'ONLINE' and b.name = c.tablespace_name and a.ts# = b.ts#;

2. To check the free (unallocated) space within the Undo tablespace:

select sum(bytes)/1024/1024 "mb" from dba_free_space where tablespace_name ='<undo tablespace name>';
3. To check the reusable (expired) space within the allocated Undo tablespace:


select tablespace_name , sum(blocks)*8/(1024)  reusable_space from dba_undo_extents where status='EXPIRED'  group by  tablespace_name;

4. To check the space in use (active/unexpired extents) in the Undo tablespace:

select tablespace_name , sum(blocks)*8/(1024)  space_in_use from dba_undo_extents where status IN ('ACTIVE','UNEXPIRED') group by  tablespace_name;

with free_sz as (
  select tablespace_name, sum(f.bytes)/1048576/1024 free_gb
  from dba_free_space f
  group by tablespace_name
), a as (
  select tablespace_name
       , sum(case when status = 'EXPIRED' then blocks end)*8/1048576 reusable_space_gb
       , sum(case when status in ('ACTIVE', 'UNEXPIRED') then blocks end)*8/1048576 allocated_gb
  from dba_undo_extents
  where status in ('ACTIVE', 'EXPIRED', 'UNEXPIRED')
  group by tablespace_name
), undo_sz as (
  select tablespace_name, df.user_bytes/1048576/1024 user_sz_gb
  from dba_tablespaces ts
  join dba_data_files df using (tablespace_name)
  where ts.contents = 'UNDO' and ts.status = 'ONLINE'
)
select tablespace_name, user_sz_gb, free_gb, reusable_space_gb, allocated_gb
     , free_gb + reusable_space_gb + allocated_gb total
from undo_sz
join free_sz using (tablespace_name)
join a using (tablespace_name);

select name "FEATURE", first_usage_date "FROM", last_usage_date "TO"
       from DBA_FEATURE_USAGE_STATISTICS
       where name like '%OLAP%'

select value from v$diag_info where name ='Diag Trace';  --> alert log location

column usr                 format a20       Heading 'Osuser/User'
column sid                 format 9999      heading 'S|I|D'
column program             format A29       heading 'program'
column stat                format A1        heading 'S|t|a|t'
column serial              format 999999   heading 'Sr#'
column machine             format A20       heading 'Machine'
column logical             format A19       heading '      Logical|    Gets / Chgs'
column module              format A30       heading 'Module'
column sess_detail         format A14       heading 'Sess_details'

set pagesize 300
col client_identifier for a30
select s.inst_id||':('||s.sid||','||s.serial#||')' sess_detail,s.sql_id,s.machine,s.osuser||' '||s.username usr,s.logon_time,s.program,s.event,s.status,s.client_identifier,s.MODULE 
from gv$session s,gv$process p 
where p.addr=s.paddr and s.sql_id='a9gvfh5hx9u98' ;

Trace File name finder:
column trace new_val T
select c.value || '/' || d.instance_name || '_ora_' ||
a.spid || '.trc' ||
case when e.value is not null then '_'||e.value end trace
from v$process a, v$session b, v$parameter c, v$instance d, v$parameter e
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
and e.name = 'tracefile_identifier';

A query to discover the process ID (PID) associated with my dedicated server (the SPID from
V$PROCESS is the operating system PID of the process that was being used during the execution of that query).

select a.spid dedicated_server, b.process clientpid
from v$process a, v$session b
where a.addr = b.paddr
and b.sid = sys_context('userenv','sid');

Top N objects : largest object find query

with
  seg as (
     select
       owner,segment_name
      ,segment_type
      ,tablespace_name
      ,sum(blocks) blocks
      ,sum(bytes)  bytes
     from dba_segments s
     where  segment_type not in (
       'TYPE2 UNDO'
      ,'ROLLBACK'
      ,'SYSTEM STATISTICS'
     )
     and segment_name not like 'BIN$%' --not in recyclebin
     and owner in ('NIMBUS')-- you can specify schema here
     group by owner,segment_name,segment_type,tablespace_name
  )
 ,segs as (
     select
       owner,segment_name
      ,case when segment_name like 'DR$%$%' then 'CTX INDEX' else segment_type end segment_type
      ,tablespace_name
      ,case
         when segment_name like 'DR$%$%'
           then (select table_owner||'.'||table_name from dba_indexes i where i.owner=s.owner and i.index_name = substr(segment_name,4,length(segment_name)-5))
         when segment_type in ('TABLE','TABLE PARTITION','TABLE SUBPARTITION')
            then owner||'.'||segment_name
         when segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION')
            then (select i.table_owner||'.'||i.table_name from dba_indexes i where i.owner=s.owner and i.index_name=s.segment_name)
         when segment_type in ('LOBSEGMENT','LOB PARTITION','LOB SUBPARTITION')
            then (select l.owner||'.'||l.TABLE_NAME from dba_lobs l where l.segment_name = s.segment_name and l.owner = s.owner)
         when segment_type = 'LOBINDEX'
            then (select l.owner||'.'||l.TABLE_NAME from dba_lobs l where l.index_name = s.segment_name and l.owner = s.owner)
         when segment_type = 'NESTED TABLE'
            then (select nt.owner||'.'||nt.parent_table_name from dba_nested_tables nt where nt.owner=s.owner and nt.table_name=s.segment_name)
         when segment_type = 'CLUSTER'
            then (select min(owner||'.'||table_name) from dba_tables t where t.owner=s.owner and t.cluster_name=s.segment_name and rownum=1)
       end table_name
      ,blocks
      ,bytes
     from seg s
  )
 ,so as (
     select
       segs.owner
      ,substr(segs.table_name,instr(segs.table_name,'.')+1) TABLE_NAME
      ,sum(segs.bytes)/1024/1024/1024  total_Size_GB
      ,sum(segs.blocks) total_blocks
      ,sum(case when segs.segment_type in ('TABLE','TABLE PARTITION','TABLE SUBPARTITION','NESTED TABLE','CLUSTER') then segs.bytes/1024/1024/1024 end) tab_size_GB
      ,sum(case when segs.segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION','CTX INDEX') then segs.bytes/1024/1024/1024 end) ind_size_GB
      ,sum(case when segs.segment_type in ('CTX INDEX') then segs.bytes end) ctx_size
      ,sum(case when segs.segment_type in ('LOBSEGMENT','LOBINDEX','LOB PARTITION','LOB SUBPARTITION') then segs.bytes/1024/1024/1024 end) lob_size_GB
     from segs
     group by owner,table_name
  )
 ,tops as (
     select
           dense_rank()over (order by total_Size_GB desc) rnk
          ,so.*
     from so
  )
select *
from tops
where rnk <= 20;

Fragmented table finder query (assumes an 8 KB block size, hence blocks*8 to get KB; the trailing -10 builds in a ~10% allowance):

select 
 table_name,round(((blocks*8)/1024/1024),2) "size (gb)" , 
 round(((num_rows*avg_row_len/1024))/1024/1024,2) "actual_data (gb)",
 round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) "wasted_space (gb)",
 round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) "reclaimable space %",
 partitioned
from 
 dba_tables
where 
 (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
order by 4 desc;
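If a table near the top of this list is worth reorganizing, a hedged sketch of the online shrink route (requires an ASSM tablespace; owner.table_name is a placeholder):

ALTER TABLE owner.table_name ENABLE ROW MOVEMENT;
ALTER TABLE owner.table_name SHRINK SPACE CASCADE;  -- CASCADE shrinks dependent indexes too
-- alternative: ALTER TABLE owner.table_name MOVE;  -- rebuilds the segment but marks indexes UNUSABLE; rebuild them afterwards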

Parent/child table relationship finder query:
with list_of_pks as (
  select owner, table_name, constraint_name as pk_constraint_name
  from all_constraints
  where constraint_type in( 'P', 'U')
), pcc_tables( owner, table_name, parent_table, pk_constraint_name ) as (
  select owner, table_name, null, pk_constraint_name
  from list_of_pks
  where owner = 'NIMBUS' and table_name in ('SUBSCRIPTION' , 'IADINFO','SUBSCRIPTION_SERVICEINFO','SERVICEORDER','SOVERSION') -- replace these values here
  union all
  select a.owner, a.table_name, c.table_name, a.pk_constraint_name
  from list_of_pks a
    join all_constraints b on a.table_name=b.table_name and a.owner=b.owner
    join pcc_tables c on b.r_owner= c.owner and b.r_constraint_name=c.pk_constraint_name
)
select *
from pcc_tables;

++++++++++++++++++++++++++++
Schema Objects drop:
++++++++++++++++++++++++++++

select count(1),username,osuser,machine from gv$session where username='PCBHUSER' group by username,osuser,machine;

alter session set current_schema=PCBHUSER;

----------------------------------------------
create or replace procedure PCBHUSER.DB_DROP  as
--code to drop all objects in a schema
--please confirm and recheck the schema name and TNS details
BEGIN
begin
execute immediate 'ALTER SESSION FORCE PARALLEL DDL';
execute immediate 'ALTER SESSION FORCE PARALLEL DML';
end;
begin
execute immediate 'purge recyclebin';
end;
begin
FOR I IN (select object_type,object_name from user_objects where object_type='TABLE')
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP '||I.OBJECT_TYPE||' '||I.OBJECT_NAME||' CASCADE CONSTRAINTS PURGE';
EXCEPTION WHEN OTHERS THEN NULL; -- intentionally swallow drop errors (e.g. object already gone with its parent)
END;
END LOOP;
END;
BEGIN
FOR J IN (select object_type,object_name from user_objects where object_type!='TABLE' and object_name not in ('DB_DROP'))
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP '||J.OBJECT_TYPE||' '||J.OBJECT_NAME||'';
EXCEPTION WHEN OTHERS THEN NULL; -- intentionally swallow drop errors
END;
END LOOP;
END;
begin
execute immediate 'purge recyclebin';
execute immediate 'DELETE FROM user_sdo_geom_metadata';
----execute immediate 'DELETE FROM mdsys.SDO_GEOM_METADATA_TABLE;';
execute immediate 'commit';
execute immediate 'purge recyclebin';
end;
END;
/

execute PCBHUSER.DB_DROP;

DELETE FROM mdsys.SDO_GEOM_METADATA_TABLE;
commit;

select count(1) from dba_objects where owner='PCBHUSER';
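A hedged post-drop sanity check; anything the WHEN OTHERS handlers swallowed will still show up here:

select object_type, object_name from dba_objects where owner='PCBHUSER' and object_name <> 'DB_DROP';
select owner, original_name, type from dba_recyclebin where owner='PCBHUSER';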

+++++++++++++++++++++++++++++++
Silent DB creation commands:
dbca -silent -createTemplateFromDB -sourceDB sim1 -templateName sim1_db_from_osdsimdb1_template.dbt -sysDBAUserName sys -sysDBAPassword cowboy

[oracle@localhost u01]$ find . -type f -name 'sim1_db_from_osdsimdb1_template*'
.

dbca -silent -createDatabase -templateName sim1_db_from_osdsimdb1_template.dbt -gdbname sim3 -sid sim3 -sysPassword cowboy -systemPassword cowboy
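If the silent run fails, the dbca logs usually land under $ORACLE_BASE/cfgtoollogs/dbca (the standard location, but verify on your install):

ls -lrt $ORACLE_BASE/cfgtoollogs/dbca/sim3/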

set lines 150 pages 150
col HOST_NAME for a15
col NAME for a10
col INSTANCE_NAME for a10
col LOG_MODE for a12
col DATABASE_ROLE for a18
col OPEN_MODE for a20
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
select name,db_unique_name,open_mode,log_mode,database_role from gv$database;
select instance_name,status,host_name,startup_time,logins from gv$instance;
select distinct db_unique_name DB_Name,instance_name,open_mode,log_mode,logins,host_name,startup_time,database_role from gv$database,gv$instance;

SELECT host_name,instance_name,TO_CHAR(startup_time, 'DD-MM-YYYY HH24:MI:SS') startup_time,FLOOR(sysdate-startup_time) days FROM sys.v_$instance;


Patch details Check:
=============================
SET LINESIZE 400
COLUMN action_time FORMAT A20
COLUMN action FORMAT A10
COLUMN status FORMAT A10
COLUMN description FORMAT A40
COLUMN version FORMAT A10
COLUMN bundle_series FORMAT A10
SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
       action,
       status,
       description,
       version,
       patch_id,
       bundle_series
FROM   dba_registry_sqlpatch
ORDER by action_time;

set lines 400;
col action_time for a30;
col description for a80;
col action for a10;
select patch_id, action, description, action_time from dba_registry_sqlpatch where to_char(action_time,'DD/MM/YYYY')=to_char(sysdate,'DD/MM/YYYY') order by action_time;
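The OS-side counterpart, run as the Oracle home owner (standard OPatch commands):

$ORACLE_HOME/OPatch/opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinventory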

select name, ISSYS_MODIFIABLE from v$parameter where name='_external_scn_rejection_delta_threshold_minutes';




Also note: use the steps below to log in as root on non-prod environments (where you have root access) during patching.

 

Step-1: sudo to oragrid

Step-2: sudo /usr/localcw/bin/eksh -l

 
== To check user equivalence ==
oragrid@ipagt1d8(1334) +ASM4 $ cluvfy comp admprv -n all -o user_equiv -verbose


The user-equivalence check below also generates a trace:

mkdir /tmp/cvutrace
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=2

cluvfy comp admprv -n all -o user_equiv -verbose
The output below looks good, but in the logs we can see that the following command returned an SSH error:

Output: '<CV_TRC>/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true >> /tmp/CVU_19_t1cnp1d1_2024-12-31_04-11-37_127593/scratch/exout36842.out


Please run the commands below from all three nodes and provide the output:
1.From node ipagt1d7:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d8 -n /bin/true


2. From node ipagt1d5:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d7 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d8 -n /bin/true




3. From node ipagt1d8:
/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d5 -n /bin/true


/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 ipagt1d7 -n /bin/true

=== SRVCTL command for older versions =================

find / -name jenkins.war 2>/dev/null

Step 1: Check free space in the undo tablespace

Run this query to check how much space is free:

SELECT a.tablespace_name,
       ROUND(a.bytes/1024/1024) AS total_mb,
       ROUND(NVL(b.bytes, 0)/1024/1024) AS free_mb,
       ROUND((NVL(b.bytes, 0)/a.bytes)*100, 2) AS pct_free
FROM   dba_data_files a,
       (SELECT file_id, SUM(bytes) bytes FROM dba_free_space GROUP BY file_id) b
WHERE  a.file_id = b.file_id(+)
AND    a.tablespace_name = 'UNDOTBS1';   -- one row per datafile

If the undo tablespace cannot be shrunk in place, create a small replacement, switch to it, and drop the old one:
SQL> CREATE UNDO TABLESPACE UNDO_TEMP DATAFILE '+DATAC1' SIZE 512M AUTOEXTEND OFF;
CREATE UNDO TABLESPACE UNDO_TEMP DATAFILE '+DATAC1' SIZE 512M AUTOEXTEND OFF
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATAC1'
ORA-17502: ksfdcre:4 Failed to create file +DATAC1
ORA-15041: diskgroup "DATAC1" space exhausted


SQL> CREATE UNDO TABLESPACE UNDO_TEMP DATAFILE '+DATAC1' SIZE 1M AUTOEXTEND on next 500m;

Tablespace created.

SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDO_TEMP SCOPE=BOTH;

System altered.

SQL> DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

 SELECT name, total_mb, free_mb, usable_file_mb, offline_disks
FROM v$asm_diskgroup
WHERE name = 'DATAC1';

ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/product/12.2.0/db_1/dbs/DATAC1' TO '+DATAC1';
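A hedged follow-up check for any other datafiles accidentally created outside ASM (anything whose path does not start with '+'):

select file_id, file_name, tablespace_name from dba_data_files where file_name not like '+%';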

DB_FILES parameter change

COL name FORMAT A20
COL value FORMAT A10
COL ispdb_modifiable FORMAT A5
SELECT name, value, ispdb_modifiable FROM v$parameter WHERE name = 'db_files';

Take a backup of the spfile, then change the parameter. db_files is static, so it must be set with SCOPE=SPFILE and the database restarted:
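A minimal spfile backup sketch; the pfile path is an assumption, adjust to your environment:

CREATE PFILE='/tmp/pfile_before_db_files_change.ora' FROM SPFILE;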

ALTER SYSTEM SET db_files = 500 SCOPE=SPFILE sid='*';

srvctl stop database -d <DB_UNIQUE_NAME>

srvctl start database -d <DB_UNIQUE_NAME>


SELECT name, value
FROM v$parameter
WHERE name = 'db_files';
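On RAC, a hedged variant to confirm the new value on every instance after the restart:

select inst_id, name, value from gv$parameter where name = 'db_files';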



undo advisory:
==============

RDBMS version 10g and above:

SET SERVEROUTPUT ON
SET LINES 600
ALTER SESSION SET NLS_DATE_FORMAT = 'DD/MM/YYYY HH24:MI:SS';

DECLARE
    v_analyse_start_time    DATE := SYSDATE - 7;
    v_analyse_end_time      DATE := SYSDATE;
    v_cur_dt                DATE;
    v_undo_info_ret         BOOLEAN;
    v_cur_undo_mb           NUMBER;
    v_undo_tbs_name         VARCHAR2(100);
    v_undo_tbs_size         NUMBER;
    v_undo_autoext          BOOLEAN;
    v_undo_retention        NUMBER(6);
    v_undo_guarantee        BOOLEAN;
    v_instance_number       NUMBER;
    v_undo_advisor_advice   VARCHAR2(100);
    v_undo_health_ret       NUMBER;
    v_problem               VARCHAR2(1000);
    v_recommendation        VARCHAR2(1000);
    v_rationale             VARCHAR2(1000);
    v_retention             NUMBER;
    v_utbsize               NUMBER;
    v_best_retention        NUMBER;
    v_longest_query         NUMBER;
    v_required_retention    NUMBER;
BEGIN
    select sysdate into v_cur_dt from dual;
    DBMS_OUTPUT.PUT_LINE(CHR(9));
    DBMS_OUTPUT.PUT_LINE('- Undo Analysis started at : ' || v_cur_dt || ' -');
    DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');

    v_undo_info_ret := DBMS_UNDO_ADV.UNDO_INFO(v_undo_tbs_name, v_undo_tbs_size, v_undo_autoext, v_undo_retention, v_undo_guarantee);
    select sum(bytes)/1024/1024 into v_cur_undo_mb from dba_data_files where tablespace_name = v_undo_tbs_name;

    DBMS_OUTPUT.PUT_LINE('NOTE:The following analysis is based upon the database workload during the period -');
    DBMS_OUTPUT.PUT_LINE('Begin Time : ' || v_analyse_start_time);
    DBMS_OUTPUT.PUT_LINE('End Time   : ' || v_analyse_end_time);
    
    DBMS_OUTPUT.PUT_LINE(CHR(9));
    DBMS_OUTPUT.PUT_LINE('Current Undo Configuration');
    DBMS_OUTPUT.PUT_LINE('--------------------------');
    DBMS_OUTPUT.PUT_LINE(RPAD('Current undo tablespace',55) || ' : ' || v_undo_tbs_name);
    DBMS_OUTPUT.PUT_LINE(RPAD('Current undo tablespace size (datafile size now) ',55) || ' : ' || v_cur_undo_mb || 'M');
    DBMS_OUTPUT.PUT_LINE(RPAD('Current undo tablespace size (consider autoextend) ',55) || ' : ' || v_undo_tbs_size || 'M');
    IF V_UNDO_AUTOEXT THEN
        DBMS_OUTPUT.PUT_LINE(RPAD('AUTOEXTEND for undo tablespace is',55) || ' : ON');  
    ELSE
        DBMS_OUTPUT.PUT_LINE(RPAD('AUTOEXTEND for undo tablespace is',55) || ' : OFF');  
    END IF;
    DBMS_OUTPUT.PUT_LINE(RPAD('Current undo retention',55) || ' : ' || v_undo_retention);

    IF v_undo_guarantee THEN
        DBMS_OUTPUT.PUT_LINE(RPAD('UNDO GUARANTEE is set to',55) || ' : TRUE');
    ELSE
        dbms_output.put_line(RPAD('UNDO GUARANTEE is set to',55) || ' : FALSE');
    END IF;
    DBMS_OUTPUT.PUT_LINE(CHR(9));

    SELECT instance_number INTO v_instance_number FROM V$INSTANCE;

    DBMS_OUTPUT.PUT_LINE('Undo Advisor Summary');
    DBMS_OUTPUT.PUT_LINE('---------------------------');

    v_undo_advisor_advice := dbms_undo_adv.undo_advisor(v_analyse_start_time, v_analyse_end_time, v_instance_number);
    DBMS_OUTPUT.PUT_LINE(v_undo_advisor_advice);

    DBMS_OUTPUT.PUT_LINE(CHR(9));
    DBMS_OUTPUT.PUT_LINE('Undo Space Recommendation');
    DBMS_OUTPUT.PUT_LINE('-------------------------');

    v_undo_health_ret := dbms_undo_adv.undo_health(v_analyse_start_time, v_analyse_end_time, v_problem, v_recommendation, v_rationale, v_retention, v_utbsize);
    IF v_undo_health_ret > 0 THEN
        DBMS_OUTPUT.PUT_LINE('Minimum Recommendation           : ' || v_recommendation);
        DBMS_OUTPUT.PUT_LINE('Rationale                        : ' || v_rationale);
        DBMS_OUTPUT.PUT_LINE('Recommended Undo Tablespace Size : ' || v_utbsize || 'M');
    ELSE
        DBMS_OUTPUT.PUT_LINE('Allocated undo space is sufficient for the current workload.');
    END IF;
    
    SELECT dbms_undo_adv.best_possible_retention(v_analyse_start_time, v_analyse_end_time) into v_best_retention FROM dual;
    SELECT dbms_undo_adv.longest_query(v_analyse_start_time, v_analyse_end_time) into v_longest_query FROM dual;
    SELECT dbms_undo_adv.required_retention(v_analyse_start_time, v_analyse_end_time) into v_required_retention FROM dual;

    DBMS_OUTPUT.PUT_LINE(CHR(9));
    DBMS_OUTPUT.PUT_LINE('Retention Recommendation');
    DBMS_OUTPUT.PUT_LINE('------------------------');
    DBMS_OUTPUT.PUT_LINE(RPAD('The best possible retention with current configuration is ',60) || ' : ' || v_best_retention || ' Seconds');
    DBMS_OUTPUT.PUT_LINE(RPAD('The longest running query ran for ',60) || ' : ' || v_longest_query || ' Seconds');
    DBMS_OUTPUT.PUT_LINE(RPAD('The undo retention required to avoid errors is ',60) || ' : ' || v_required_retention || ' Seconds');

END;
/
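A quick companion check of how the undo extents currently break down (standard dba_undo_extents statuses: ACTIVE, UNEXPIRED, EXPIRED):

select tablespace_name, status, round(sum(bytes)/1024/1024) mb
from dba_undo_extents
group by tablespace_name, status
order by 1, 2;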



FIG project queries

##### Service add & load balancing on add service #####
srvctl add service -s wcccdmt.farmersinsurance.com -r wcccdmtx1,wcccdmtx2,wcc...