Saturday, February 15, 2025

GG doc details

 GGSUSER Dictionary Build proc and job.txt

GGSUSER Dictionary Build procedure and job


column first_change# format 999999999999999


SELECT first_change#, to_char(first_time,'yyyy-mm-dd hh24:mi:ss')  FROM v$archived_log WHERE dictionary_begin = 'YES' AND standby_dest = 'NO' AND name IS NOT NULL AND status = 'A';




CREATE OR REPLACE procedure GGSUSER.Dictionary_Build is
  cdate date;
begin
  dbms_output.enable(10000000);
  cdate := sysdate;
  dbms_output.put_line('Dictionary build started:  ' || to_char(cdate, 'yyyy-mm-dd hh24:mi:ss'));
  -- write a copy of the LogMiner dictionary into the redo stream
  DBMS_LOGMNR_D.BUILD(options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
  cdate := sysdate;
  dbms_output.put_line('Dictionary build finished: ' || to_char(cdate, 'yyyy-mm-dd hh24:mi:ss'));
end;
/
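Note: the procedure owner needs execute privilege on the LogMiner dictionary package. A minimal sketch, assuming it was not already granted via DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE:

GRANT EXECUTE ON SYS.DBMS_LOGMNR_D TO GGSUSER;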



BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB
    (
       job_name        => 'GGSUSER.DICTIONARY_BUILD_JOB'
      ,start_date      => TO_TIMESTAMP_TZ('2020/11/20 00:00:00.000000 -07:00','yyyy/mm/dd hh24:mi:ss.ff tzr')
      ,repeat_interval => 'FREQ=DAILY; BYHOUR=06;BYMINUTE=1;'
      ,end_date        => NULL
      ,job_class       => 'DEFAULT_JOB_CLASS'
      ,job_type        => 'STORED_PROCEDURE'
      ,job_action      => 'Dictionary_Build'
      ,comments        => 'OSD GGS Dictionary'
    );
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'RESTARTABLE'
     ,value     => FALSE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'LOGGING_LEVEL'
     ,value     => SYS.DBMS_SCHEDULER.LOGGING_OFF);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'MAX_FAILURES');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'MAX_RUNS');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'STOP_ON_WINDOW_CLOSE'
     ,value     => FALSE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'JOB_PRIORITY'
     ,value     => 3);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'SCHEDULE_LIMIT');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'GGSUSER.DICTIONARY_BUILD_JOB'
     ,attribute => 'AUTO_DROP'
     ,value     => TRUE);
  SYS.DBMS_SCHEDULER.ENABLE
    (name => 'GGSUSER.DICTIONARY_BUILD_JOB');
END;
/



BEGIN
  DBMS_SCHEDULER.RUN_JOB(
    JOB_NAME            => 'GGSUSER.DICTIONARY_BUILD_JOB',
    USE_CURRENT_SESSION => FALSE);
END;
/
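Since the job was created with LOGGING_LEVEL => LOGGING_OFF, DBA_SCHEDULER_JOB_RUN_DETAILS will not record its runs; the last/next execution can still be checked from DBA_SCHEDULER_JOBS (a sketch):

SELECT job_name, state, last_start_date, last_run_duration, next_run_date
FROM dba_scheduler_jobs
WHERE owner = 'GGSUSER' AND job_name = 'DICTIONARY_BUILD_JOB';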



----

Goldengate Recovery checkpoint reset.txt

GGSCI commands to reposition the extract recovery checkpoint to the start of archive log sequence 469690 on thread 2:
alter extract evenus, ioextseqno 469690, ioextrba 0, thread 2
alter extract EP2BBN2D, ioextseqno 469690, ioextrba 0, thread 2
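The recovery checkpoint detail below is displayed with SHOWCH (a sketch, using the first extract name above):

info extract evenus, showch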


  Current Checkpoint (position of last record read in the data source):
    Thread #: 2
    Sequence #: 469690
    RBA: 1186048512
    Timestamp: 2023-05-06 04:30:38.000000
    SCN: 5925.2439372008 (25450120600808)
    Redo File: +P1BBN2D6_REDO02/p1bbn2d6/onlinelog/group_6.304.889365237

=====

Goldengate_Xid_transaction_showtrans_skiptrans.txt
You cannot recreate an extract and start it at a point in time in the past unless a dictionary build is available in the logs from that time.

If you can downgrade to classic extract, perform this step, and then upgrade back to integrated once you have recovered the data, that is the best option. You can also apply the patch in parallel before upgrading to integrated.


Downgrading INTEGRATED Extract to Classic Extract fails on RAC environment (Doc ID 1476058.1) ==> no thread options for non-RAC

How To Upgrade From Goldengate Classic Extract To Integrated Extract ( Doc ID 1484313.1 )

"One point worth noting, unlike classic extract you cannot simply point at any historical point in time and expects a new IE to work. 
So if you have a need to create a new IE now to process yesterday's data and all your current IEs was created last year meaning the dictionaries 
are in year old archived logs you are basically stuck. Therefore one way (if you have a need) is to build a dictionary every night if you generally have
 archived logs on disk for > 1 day you can go backwards to find a dictionary for your new IE."

If you have dictionaries built regularly, say every day, you can configure a new extract (with the same parameters) and register it at the SCN of yesterday's dictionary build. Alternatively, you can delete/unregister the current extract and create the new extract with the same name.

You cannot skip archived logs for an existing IE even if you are not interested in the data; it has to read through the logs for metadata consistency.

You can build dictionaries at any time using

execute DBMS_LOGMNR_D.BUILD( options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

And subsequently search for the SCN of a data dictionary build using

column first_change# format 999999999999999

SELECT first_change#, to_char(first_time, 'yyyy-mm-dd hh24:mi:ss') FROM v$archived_log WHERE dictionary_begin = 'YES' AND standby_dest = 'NO' AND 
name IS NOT NULL AND status = 'A';

Note that GGSCI REGISTER EXTRACT has an SCN option; registering at an SCN where a dictionary already exists saves another dictionary build.
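For example (extract name hypothetical; use an SCN returned by the dictionary query above):

register extract ext1, database scn <scn_from_query>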

Note that REGISTER with no options does two things: it builds a dictionary and instructs RMAN not to delete any archived logs that the extract has not yet processed. See REGISTER EXTRACT in the reference manual; it also explains the SQL above.

send extract <extname>,showtrans tabular
send extract <extname>,skiptrans <xid>
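For example, using one of the XIDs from the notes below (extract name hypothetical; FORCE bypasses the confirmation prompt):

send extract ext1, showtrans tabular
send extract ext1, skiptrans 268.31.3274012 force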

Troubleshooting OGG with XID Transaction

XID:221.26

select * from gv$transaction where xidusn=221 and xidslot=26;
select * from gv$transaction where xidusn=8 and xidslot=11;

Join key: gv$transaction.addr = gv$session.taddr
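The two-step lookup below can also be written as a single join (a sketch, same views and sample XID):

select s.inst_id, s.sid, s.serial#, s.username, s.sql_id
from gv$transaction t, gv$session s
where t.addr = s.taddr
  and t.inst_id = s.inst_id
  and t.xidusn = 221 and t.xidslot = 26;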

select status,addr from gv$transaction where xidusn=221 and xidslot=26;
select status,addr from gv$transaction where xidusn=8 and xidslot=11;
Status ADDR
_____________________________
Active 0000000C89CBAEF0


SQL>select sid,serial# from gv$session where taddr='0000000C89CBAEF0';

SID Serial#
________________________
152 44251

select a.sid,a.serial#,b.sql_text from gv$session a,gv$sqlarea b where a.sql_address=b.address and sid='152';

select sid,inst_id,serial#,username,osuser,status,machine,sql_id,sql_hash_value,state from gv$session where sid=152;
select sql_text from gv$sqlarea where sql_id = '<sql_id from above>';



---

OGG  Upgrade steps on system hosting Goldengate during DB upgrade.docx

OGG Upgrade steps on system hosting Goldengate during DB upgrade

 

1. Stop the Extract process on the source; wait until the pump has processed all the trail files, then stop the pump.

2. Wait until there is no lag on the Replicat, then stop the Replicat process.

3. Once the application is down on the target, shut down the DB.

4. Install OGG 19c under a different home (choose the appropriate DB version during installation; for a 19c database, choose 19c).

5. Upgrade the database to 19c.

6. Copy the GoldenGate process files, such as the .prm parameter files and .cpe checkpoint files, from the old GoldenGate home into the corresponding directories of the new 19c GoldenGate home.

NOTE! If the GoldenGate version on the target site is lower than on the source, add "FORMAT RELEASE <version>" to the extract and pump trail-file definitions; otherwise, the lower OGG version cannot read the 19c trail-file format. See the parameter sketch below.
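A minimal pump parameter sketch showing the clause (process, host, and trail names are hypothetical; the release number must match the lower OGG version):

EXTRACT pmp1
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/ab, FORMAT RELEASE 12.3
TABLE app_schema.*;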

 

7. Start the Extract and issue ETROLLOVER so it starts writing to a new trail file.

8. Alter the pump to read from the newly generated trail file. Example:
alter extract <pump_name>, extseqno <new trail file seq>, extrba 0
alter extract <pump_name>, etrollover   <<<< generates a new trail file on site 3

9. On the target, alter the Replicat to read from the new trail file.

10. Start the Replicat.

11. After ensuring all GoldenGate processes are running, start the applications accessing the database.



=========

Troubleshooting_OGG_XID_Transaction.txt

SDBA, can you confirm whether the archived logs for the threads below are currently available for server flpd670:
Redo Thread 1, Redo Log Sequence Number 196837
Redo Thread 2, Redo Log Sequence Number 206654
Redo Thread 4, Redo Log Sequence Number 204942
Redo Thread 3, Redo Log Sequence Number 207227
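These can be confirmed from the database side as well (a sketch for one thread/sequence pair; repeat per thread):

select thread#, sequence#, name, status, deleted
from v$archived_log
where thread# = 1 and sequence# = 196837;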



Thread 1:
Recovery SCN
25401121862297
25399292664984
25399692782630
25401118440536
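To map a recovery SCN back to its archived log (a sketch using the first SCN above):

select thread#, sequence#, name
from v$archived_log
where 25401121862297 between first_change# and next_change#;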




Select * from gv$transaction where xidusn=221 and xidslot=26;


select status,addr from gv$transaction where xidusn=221 and xidslot=26;


SQL> select status,addr from gv$transaction where xidusn=221 and xidslot=26;

STATUS           ADDR
---------------- ----------------
ACTIVE           0000000C89CBAEF0

SQL> select sid,serial# from v$session where taddr='0000000C89CBAEF0';

       SID    SERIAL#
---------- ----------
       152      44251


select sid,inst_id,serial#,username,osuser,status,machine,sql_id,sql_hash_value,state from gv$session where sid ='152';
select sql_text from gv$sqlarea where sql_id = '<sql_id from above>';

select a.sid,a.serial#,b.sql_text from gv$session a,gv$sqlarea b where a.sql_address=b.address and sid='152';


268.31.3274012

Select * from gv$transaction where xidusn=268 and xidslot=31;


SQL> select status,addr from gv$transaction where xidusn=268 and xidslot=31;

STATUS           ADDR
---------------- ----------------
ACTIVE           0000000C92285918


SQL> select sid,serial# from gv$session where taddr='0000000C92285918';

       SID    SERIAL#
---------- ----------
      1614      20213


select sid,inst_id,serial#,username,osuser,status,machine,sql_id,sql_hash_value,state,SQL_EXEC_START from gv$session where sid ='1614';

select a.sid,a.serial#,b.sql_text from gv$session a,gv$sqlarea b where a.sql_address=b.address and sid='1614';


195.17.1096475

Select * from gv$transaction where xidusn=195 and xidslot=17;


SQL> select status,addr from gv$transaction where xidusn=195 and xidslot=17;


STATUS           ADDR
---------------- ----------------
ACTIVE           0000000619940E90


SQL> select sid,serial# from gv$session where taddr='0000000619940E90';

       SID    SERIAL#
---------- ----------
       908      21249


select sid,inst_id,serial#,username,osuser,status,machine,sql_id,sql_hash_value,state,SQL_EXEC_START from gv$session where sid ='908';

select a.sid,a.serial#,b.sql_text from gv$session a,gv$sqlarea b where a.sql_address=b.address and sid='908';


=====
Upgrading_Classic_Extract_to_Integrated_Extract.txt
dblogin userid gguser, password <password>
1. Stop Extract ext1
2. register extract ext1, database
3. info ext1, upgrade
If you see the message that ext1 is ready to be upgraded to integrated capture, proceed with the steps below.

If you do not see that message, start the extract so that it passes the mentioned SCN, then stop it once the message appears. You can use INFO to check the current status of the extract.

4. alter extract ext1, upgrade integrated tranlog
"Extract ext1 successfully upgraded to integrated capture"
Remove the parameters that are specific to classic capture only (for example, TRANLOGOPTIONS ASMUSER/ASMPASSWORD).

5. Start extract ext1
6. info ext1

7. SQL> select capture_name, queue_name, status from dba_capture;


===

LOB Migration.txt


--Verify table and LOB size
SELECT
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the table segment size
    FROM user_segments S
   WHERE S.SEGMENT_NAME = UPPER('WSMESSAGE')) +
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the LOB segment size
    FROM user_segments S, user_lobs L
   WHERE L.SEGMENT_NAME = S.SEGMENT_NAME
     AND L.TABLE_NAME   = UPPER('WSMESSAGE')) +
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the LOB index size
    FROM user_segments S, user_indexes I
   WHERE I.INDEX_NAME = S.SEGMENT_NAME
     AND I.TABLE_NAME = UPPER('WSMESSAGE')
     AND I.INDEX_TYPE = 'LOB')
  "TOTAL TABLE SIZE (GB)"
FROM DUAL;
--Before 56 GB



select  table_name, column_name, securefile,compression from user_lobs where TABLE_NAME= 'WSMESSAGE';



ALTER TABLE WSMESSAGE MOVE LOB(REQUEST) STORE AS SECUREFILE (TABLESPACE USERS);
ALTER TABLE WSMESSAGE MOVE LOB(RESPONSE) STORE AS SECUREFILE (TABLESPACE USERS);
ALTER TABLE WSMESSAGE MOVE LOB(DEBUG) STORE AS SECUREFILE (TABLESPACE USERS);
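ALTER TABLE ... MOVE leaves the table's regular indexes UNUSABLE, so check and rebuild them afterwards (index name hypothetical):

select index_name, status from user_indexes where table_name = 'WSMESSAGE';
alter index WSMESSAGE_PK rebuild;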




select  table_name, column_name, securefile,compression from user_lobs where TABLE_NAME= 'WSMESSAGE';


ALTER TABLE WSMESSAGE MODIFY LOB(REQUEST) (COMPRESS HIGH);
ALTER TABLE WSMESSAGE MODIFY LOB(RESPONSE) (COMPRESS HIGH);
ALTER TABLE WSMESSAGE MODIFY LOB(DEBUG) (COMPRESS HIGH);

select  table_name, column_name, securefile,compression from user_lobs where TABLE_NAME= 'WSMESSAGE';


select segment_type,round(sum(bytes)/1024/1024/1024) "GB"
from user_segments
where segment_name='WSMESSAGE'
group by segment_type
union
-- total size of LOBs for table WSMESSAGE
SELECT segment_TYPE,round(SUM(BYTES)/1024/1024/1024) "GB LOB"
FROM user_SEGMENTS 
WHERE SEGMENT_NAME IN(
    select distinct lob_name 
    from user_lob_partitions
    where table_name='WSMESSAGE'
)
group by segment_type;



BEGIN
  SYS.DBMS_STATS.GATHER_TABLE_STATS (
     OwnName           => 'IPAM'
    ,TabName           => 'WSMESSAGE_2020_BKUP'
    ,Estimate_Percent  => 10
    ,Method_Opt        => 'FOR ALL COLUMNS SIZE 1'
    ,Degree            => 4
    ,Cascade           => FALSE
    ,No_Invalidate  => FALSE);
END;
/

___________________________________________________________

2nd method
create table IPAM.WSMESSAGE_2020_BKUP_1
(
  TRANSACTION_ID  NUMBER                        NOT NULL,
  INTERFACE_NAME  VARCHAR2(30 BYTE),
  OPERATION       VARCHAR2(100 BYTE)            NOT NULL,
  START_TIME      DATE                          NOT NULL,
  ELAPSED_TIME    NUMBER,
  REQUEST         CLOB                          DEFAULT EMPTY_CLOB(),
  RESPONSE        CLOB                          DEFAULT EMPTY_CLOB(),
  SOURCE          VARCHAR2(128 BYTE),
  STATUS          NUMBER,
  REQUEST_TYPE    VARCHAR2(50 BYTE),
  DEBUG           CLOB                          DEFAULT EMPTY_CLOB()
)
LOB (REQUEST) STORE AS SECUREFILE (
  TABLESPACE  USERS
  enable      STORAGE IN ROW
  CHUNK       8192
  COMPRESS    HIGH
  NOCACHE
  LOGGING
      STORAGE    (
                  INITIAL          104K
                  NEXT             1M
                  MINEXTENTS       1
                  MAXEXTENTS       UNLIMITED
                  PCTINCREASE      0
                  BUFFER_POOL      DEFAULT
                 ))
LOB (RESPONSE) STORE AS SECUREFILE (
  TABLESPACE  USERS
  enable      STORAGE IN ROW
  CHUNK       8192
  COMPRESS    HIGH
  NOCACHE
  LOGGING
      STORAGE    (
                  INITIAL          104K
                  NEXT             1M
                  MINEXTENTS       1
                  MAXEXTENTS       UNLIMITED
                  PCTINCREASE      0
                  BUFFER_POOL      DEFAULT
                 ))
LOB (DEBUG) STORE AS SECUREFILE (
  TABLESPACE  USERS
  enable      STORAGE IN ROW
  CHUNK       8192
  COMPRESS    HIGH
  NOCACHE
  LOGGING
      STORAGE    (
                  INITIAL          104K
                  NEXT             1M
                  MINEXTENTS       1
                  MAXEXTENTS       UNLIMITED
                  PCTINCREASE      0
                  BUFFER_POOL      DEFAULT
                 ))
TABLESPACE USERS
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MAXSIZE          UNLIMITED
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCACHE;

insert into IPAM.WSMESSAGE_2020_BKUP_1 select * from IPAM.WSMESSAGE;
commit;

-- verify the copy
select count(*) from IPAM.WSMESSAGE_2020_BKUP_1;
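For a table this size, a direct-path load may be considerably faster (same objects as above; APPEND bypasses the buffer cache and reduces undo):

insert /*+ APPEND */ into IPAM.WSMESSAGE_2020_BKUP_1 select * from IPAM.WSMESSAGE;
commit;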


SELECT
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the table segment size
    FROM user_segments S
   WHERE S.SEGMENT_NAME = UPPER('WSMESSAGE')) +
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the LOB segment size
    FROM user_segments S, user_lobs L
   WHERE L.SEGMENT_NAME = S.SEGMENT_NAME
     AND L.TABLE_NAME   = UPPER('WSMESSAGE')) +
 (SELECT SUM(S.BYTES)/1024/1024/1024        -- the LOB index size
    FROM user_segments S, user_indexes I
   WHERE I.INDEX_NAME = S.SEGMENT_NAME
     AND I.TABLE_NAME = UPPER('WSMESSAGE')
     AND I.INDEX_TYPE = 'LOB')
  "TOTAL TABLE SIZE (GB)"
FROM DUAL;

===

GG Upgrade benefits.txt

CIM GG upgrade:


Source 1:

12c DB and OGG 12.3.0.1.4
flpd670.ffdc.sbc.com

1. OGG version is currently 12.3.
2. The extract processes were upgraded to integrated extract. Below are the advantages of IE:

a. Because integrated capture is fully integrated with the database, no additional setup is required to work with Oracle RAC, ASM, and TDE. The databases are on RAC and ASM, so this is a very useful feature for the environment.
b. Performance of IE is faster than classic; we see around a 10% improvement in GoldenGate performance.
c. Integrated capture enables faster filtering of tables.
d. Integrated capture handles point-in-time recovery and RAC integration more efficiently.
e. Integrated capture features integrated log management: Oracle Recovery Manager (RMAN) automatically retains the archived logs that are needed by Extract.
f. For an 11.2.0.4 or later source database (with source compatibility set to 11.2.0.4 or higher), the capture of DDL is performed by the logmining server asynchronously and requires no special triggers, tables, or other database objects to be installed. Oracle GoldenGate upgrades can be performed without stopping user applications. The DDL trigger and supporting objects are required only when Extract is in integrated mode with an Oracle 11g source database earlier than 11.2.0.4. In our environment we were facing performance issues due to these DDL triggers; this has now been eliminated with the introduction of IE.

Target (source for one of our applications):
11g DB and OGG 11.2.1 --- being upgraded

1. The DB has been upgraded to 12c and GG to 12.3.0.1.4.
2. The extract processes were upgraded to integrated extract. We are already noticing performance improvement.

OL GG Upgrade:
Source:
12c DB and OGG 12.3.0.1.4
flph460.ffdc.sbc.com

1. The extract processes were upgraded from classic to integrated extract.
2. Added a credential store, which improves security; the password is no longer stored in plain text.
3. Dictionary build runs every day, ensuring point-in-time recovery in case of any issues.

Target (source for one of our applications):
flph1167.ffdc.sbc.com

1. The extract processes were upgraded from classic to integrated extract.
2. Added a credential store, which improves security; the password is no longer stored in plain text.
3. Dictionary build runs every day, ensuring point-in-time recovery in case of any issues.


______________________________________________________________


Improvement in GG and DB performance of up to 10%.
Easier maintenance; fixed a lot of bugs and internal issues that classic extract could not handle.
Some of the DB performance burdens were reduced by upgrading the GG extract process to integrated capture.
Enhanced DB/GG credential security with the introduction of the credential store feature, which stores credentials more securely (see the GGSCI sketch below).
Implemented the dictionary build concept to ensure easier data recovery.
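A minimal GGSCI sketch of setting up the credential store (user and alias names hypothetical):

ADD CREDENTIALSTORE
ALTER CREDENTIALSTORE ADD USER gguser@dbsrc PASSWORD <password> ALIAS gg_src
DBLOGIN USERIDALIAS gg_src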

====
