Tuesday, December 20, 2011

SCN information from Goldengate trail files

GoldenGate trails by default do not contain SCN information. To include it in the trail files we need to add the TOKENS option to the table mapping. TOKENS stores the SCN for each transaction as an extra 'token' field in the trail during capture.

Here is a sample Extract parameter file to accomplish this:

EXTRACT E_SITEA
SETENV (ORACLE_SID=ORCL)
GETENV (ORACLE_SID)
USERID orclgg, PASSWORD oracle
EXTTRAIL D:/GG11/dirdat/ea
TRANLOGOPTIONS EXCLUDEUSER ORCLGG
STATOPTIONS RESETREPORTSTATS
REPORT AT 00:01
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 60 SECONDS, RATE
GETUPDATEBEFORES
TABLE sitea.products, TOKENS (tk-scn = @GETENV("ORATRANSACTION","SCN"));

Now start the Extract and examine the generated trail file using logdump.

OGG_HOME> logdump
Logdump 275 >open D:/GG11/dirdat/ea000004
Current LogTrail is D:\GG11\dirdat\ea000004
Logdump 276 >ghdr on
Logdump 277 >detail data
Logdump 278 >usertoken detail
Logdump 279 >n

2011/12/20 20:24:26.633.000 FileHeader Len 952 RBA 0
Name: *FileHeader*
3000 01b0 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0...0...GG..TL..1...
0002 3200 0004 2000 0000 3300 0008 02f1 daba b318 | ..2... ...3.........
a528 3400 002e 002c 7572 693a 564d 5241 4748 4156 | .(4....,uri:xxxxx
2d4c 4150 3a69 6463 3a6f 7261 636c 653a 636f 6d3a | -LAP:idc:oracle:com:
6472 6976 652d 443a 4747 3131 3600 0019 0017 443a | drive-D:GG116.....D:
2f47 4731 312f 6469 7264 6174 2f65 6130 3030 3030 | /GG11/dirdat/ea00000
3437 0000 0101 3800 0004 0000 0004 39ff 0008 0000 | 47....8.......9.....

Logdump 280 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x00)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 0 (x0000) IO Time : 2011/12/20 20:24:26.592.000
IOType : 150 (x96) OrigNode : 0 (x00)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 0 AuditPos : 0
Continued : N (x00) RecCount : 0 (x00)

2011/12/20 20:17:15.000.000 Insert Len 39 RBA 1022
Name: SITEB.PRODUCTS
After Image: Partition 4 GU s
0000 000a 0000 0000 0000 0000 0002 0001 0007 0000 | ....................
0003 4a4a 4a00 0200 0a00 0000 0000 0000 0000 14 | ..JJJ..............
Column 0 (x0000), Len 10 (x000a)
0000 0000 0000 0000 0002 | ..........
Column 1 (x0001), Len 7 (x0007)
0000 0003 4a4a 4a | ....JJJ
Column 2 (x0002), Len 10 (x000a)
0000 0000 0000 0000 0014 | ..........

User tokens: 15 bytes
tk-scn : 1109315 -->> SCN number
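Besides viewing the token in logdump, the stored SCN can also be mapped into a target column on the apply side with the @TOKEN function. A minimal sketch, assuming a hypothetical audit column SRC_SCN exists on the target table:

MAP sitea.products, TARGET sitea.products,
COLMAP (USEDEFAULTS, src_scn = @TOKEN("tk-scn"));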

Monday, December 19, 2011

Conflict detection in an Active-Active environment

GoldenGate operates independently of other processes. In an active-active environment, conflicts can occur when changes are made to identical data on different databases at the same time.

The following scenarios can occur when the timing of simultaneous changes results in one of these out-of-sync conditions:

# A replicated insert attempts to add a row that already exists in the target.

# The before image of a replicated update does not match the current row in the target.

# A replicated delete attempts to remove a row that does not exist on the target.

For example, UserA on SiteA updates a row, and UserB on SiteB updates the same row. If UserB's transaction occurs before UserA's transaction is synchronized to the SiteB database, there will be a conflict on the replicated transaction.

In these scenarios, GoldenGate does not provide default procedures for handling conflicts; they must be handled with custom conflict resolution logic included in the parameter files.

Here is a small demonstration in an online retail environment:

* An online shopping store is stocked with 20 limited-edition iPods. If the quantity goes to 0, the item is tagged as out of stock on the online system.

* Customer A places an order for 15 iPods, which is executed against the SiteA database. The SQL would be:
"update products set quantity_in_stock = quantity_in_stock - 15 where products_name = 'ipod' and quantity_in_stock = 20;"

* Customer B simultaneously places an order for 4 iPods, which is executed against the SiteB database. The SQL would be:
"update products set quantity_in_stock = quantity_in_stock - 4 where products_name = 'ipod' and quantity_in_stock = 20;"


In the above example, the number of iPods in stock according to the SiteA database is 5, and the number according to the SiteB database is 16. When GoldenGate attempts to update the SiteB database with the transaction from customer A, the transaction will fail because the before image of quantity_in_stock (20) does not match the current value of 16. Conversely, the transaction from customer B replicated to the SiteA database will also fail, because the before image of 20 does not match the current value of 5.

So, unless a conflict resolution routine is written, GoldenGate does not verify target row data before applying the changes. This conflict resolution can be handled by writing a SQL stored procedure or a query used through the SQLEXEC parameter in the Replicat parameter file on both databases.

Demo (an Extract pump is not used, as both databases are on the same server):

Site A
_______

e_sitea --> Extract (change data capture that reads redo from the SiteA database)

EXTRACT E_SITEA
SETENV (ORACLE_SID=SITEA)
GETENV (ORACLE_SID)
USERID orclgg, PASSWORD oracle
EXTTRAIL D:/GG11/dirdat/ea
-- This parameter prevents transaction looping: changes applied by the GoldenGate user (already replicated from the other site) are not captured again.
TRANLOGOPTIONS EXCLUDEUSER ORCLGG
-- This parameter causes Extract to capture the before image of the record and write it into the trail file.
GETUPDATEBEFORES
TABLE SITEA.PRODUCTS;

r_siteb --> Replicat (applies data captured from the SiteB database to the SiteA database)

REPLICAT R_SITEB
SETENV (ORACLE_SID=SITEA)
GETENV (ORACLE_SID)
USERID orclgg, PASSWORD oracle
ASSUMETARGETDEFS
ALLOWDUPTARGETMAP
IGNOREUPDATES
MAP SITEB.*, TARGET SITEA.*;
GETUPDATES
IGNOREINSERTS
IGNOREDELETES
MAP SITEB.PRODUCTS, TARGET SITEA.PRODUCTS,
SQLEXEC (id conflict,
query "select qty_in_stock from sitea.products where products_number = :vpnum",
params (vpnum = products_number)),
colmap (products_number = products_number,
products_name = products_name,
qty_in_stock = @if(conflict.qty_in_stock <> BEFORE.qty_in_stock,
@compute(conflict.qty_in_stock - (BEFORE.qty_in_stock - qty_in_stock)),
qty_in_stock)
);

Site B
______

e_siteB --> Extract (Change data capture that reads redo from the SiteB database)

EXTRACT E_SITEB
SETENV (ORACLE_SID=SITEB)
GETENV (ORACLE_SID)
USERID orclgg1, PASSWORD oracle
EXTTRAIL D:/GG1112/dirdat/eb
-- This parameter prevents transaction looping: changes applied by the GoldenGate user (already replicated from the other site) are not captured again.
TRANLOGOPTIONS EXCLUDEUSER ORCLGG1
-- This parameter causes Extract to capture the before image of the record and write it into the trail file.
GETUPDATEBEFORES
TABLE SITEB.PRODUCTS;

r_sitea --> Replicat (applies data captured from the SiteA database to the SiteB database)

REPLICAT R_SITEA
SETENV (ORACLE_SID=SITEB)
GETENV (ORACLE_SID)
USERID orclgg1, PASSWORD oracle
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rwconf.dsc, purge
ALLOWDUPTARGETMAP
IGNOREUPDATES
MAP SITEA.*, TARGET SITEB.*;
GETUPDATES
IGNOREINSERTS
IGNOREDELETES
MAP SITEA.PRODUCTS, TARGET SITEB.PRODUCTS,
SQLEXEC (id conflict,
query "select qty_in_stock from siteb.products where products_number = :vpnum",
params (vpnum = products_number)),
colmap (products_number = products_number,
products_name = products_name,
qty_in_stock = @if(conflict.qty_in_stock <> BEFORE.qty_in_stock,
@compute(conflict.qty_in_stock - (BEFORE.qty_in_stock - qty_in_stock)),
qty_in_stock)
);


Here, the Replicats on both sites have been given "ALLOWDUPTARGETMAP" so that the same target table can appear in more than one MAP statement; this is what lets us perform conflict detection on update operations. The first, wildcard mapping of all SiteA and SiteB tables turns off processing of update operations, so only the inserts and deletes are processed by the wildcard mapping.

Next, in the second MAP statement, we turn processing back on for updates and turn it off for insert and delete operations. The update is applied using the lookup query executed by SQLEXEC.

For all update operations performed on the PRODUCTS table, a query is executed against the target table and the data from the qty_in_stock column is returned. The GoldenGate column mapping function @IF is used to compare the returned value with the before column image read from the GoldenGate trail. If the values are not equal, the first expression is evaluated: the source table value for qty_in_stock is subtracted from the before image qty_in_stock value, the result is then subtracted from the qty_in_stock value returned from the query, and that result is applied to the target table.
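Tracing the expression with the iPod example: customer A's update arrives at SiteB with before image 20 and after image 5, and the lookup query returns the current SiteB value of 16; customer B's update arrives at SiteA with before image 20 and after image 16, and the lookup returns 5. Both sides converge:

-- On SiteB: 16 <> 20, so qty_in_stock = 16 - (20 - 5) = 1
-- On SiteA: 5 <> 20, so qty_in_stock = 5 - (20 - 16) = 1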

Test:

1. Create the below table on both the databases SiteA and SiteB:

create table products (
products_number number(10),
products_name varchar2(10),
qty_in_stock number(5),
primary key (products_number)
using index
);

insert into products values (1,'IPOD', 20);
commit;

2. Get the database to log the key columns for the products table using "add trandata" at the ggsci prompt. Do this on both GoldenGate instances.

Site A:
ggsci> dblogin userid orclgg, password oracle
ggsci> add trandata sitea.products

Site B:
ggsci> dblogin userid orclgg1, password oracle
ggsci> add trandata siteb.products

3. Create the GoldenGate components on both sides.

Site A:

ggsci> add extract e_sitea, tranlog, begin now
ggsci> add exttrail D:/GG11/dirdat/ea, extract e_sitea
ggsci> dblogin userid orclgg, password oracle
ggsci> add replicat r_siteb, exttrail D:/GG1112/dirdat/eb, checkpointtable


Site B:

ggsci> add extract e_siteb, tranlog, begin now
ggsci> add exttrail D:/GG1112/dirdat/eb, extract e_siteb
ggsci> dblogin userid orclgg1, password oracle
ggsci> add replicat r_sitea, exttrail D:/GG11/dirdat/ea, checkpointtable


4. Update the product table on both the databases and commit the transaction.

update sitea.products set qty_in_stock = qty_in_stock-15 where products_name = 'IPOD' and qty_in_stock = 20;

commit;

update siteb.products set qty_in_stock = qty_in_stock-4 where products_name = 'IPOD' and qty_in_stock = 20;

commit;

Now, validate the results on both databases and confirm that qty_in_stock ends up as 1 iPod on each, as checked below.
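A quick check on each database (schema names as used in the demo):

SQL> select products_name, qty_in_stock from sitea.products where products_name = 'IPOD';

Run the same against siteb.products on SiteB; both should show qty_in_stock = 1.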

Friday, December 16, 2011

How to gauge the approximate time for the Replicat to catch up on long-running transactions

An instance of this is when you run a batch job with some long-running transactions and want an estimate of how long it will take to propagate them to the target database.

Basically, we need to check the Replicat report to see how many minutes it takes to finish processing one trail file, then multiply that time by the number of remaining trail files the Replicat has to process.

For example, in the report file:

Opened trail file ./dirdat/vv001401 at 2011-12-16 16:13:16

---
---
---

Switching to next trail file ./dirdat/vv001402 at 2011-12-16 16:43:18 due to EOF, with current RBA 499999734
Opened trail file ./dirdat/vv001402 at 2011-12-16 16:43:18

This took almost 30 minutes of processing. Now multiply 30 by the number of remaining trail files, and this should give a rough estimate of the time the Replicat needs. We should also consider the time it takes for the trail files to be shipped from the source to the target machine and anticipate that as well.
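As a rough illustration with hypothetical numbers: if the Replicat is currently reading trail seqno 1402 and the newest trail delivered on the target is seqno 1414, then:

remaining trail files = 1414 - 1402 = 12
estimated catch-up time = 12 files x 30 minutes per file = ~6 hours (plus trail shipping time)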


Thursday, December 15, 2011

Steps for performing various filtering methods between two Oracle databases through the initial data load method

We will filter the records that are extracted on the source and applied on the target – only rows where the country column value equals “UAE” in the “University” table will be considered for extraction.

1. Make sure to create the same table structure on the source and target machines, and that the Manager process is running.

2. Enable supplemental redo logging on the tables to be replicated

3. Sample Table structure:

Create table sender.University (studcode number(10),studname varchar2(10),country varchar2(10) , constraint uty_key unique(studcode));
Create table receiver.University (studcode number(10),studname varchar2(10),country varchar2(10) , constraint uty_key unique(studcode));


On Source Machine

Step 1 :GGSCI> add extract fillter, sourceistable

Step 2 : GGSCI> edit params fillter

Extract fillter
Userid ogg@goldengate:1521/source,password ****
Rmthost goldengate, mgrport 7810
Rmttask replicat,Group repl2
Table sender.university, FILTER (@STRFIND (country, "UAE") > 0);

Step 3 :GGSCI > start extract fillter

On Target Machine

Step 1 :GGSCI> add replicat repl2, specialrun

Step 2: GGSCI> edit params repl2

Replicat repl2
Userid ogg@goldengate:1521/target,password ****
assumetargetdefs
MAP sender.university, TARGET receiver.university;

Step 3: GGSCI> start replicat repl2


The output of the filter condition:

On Source Machine :

Sample insert statements for checking the replication:

Insert into sender.university values (100,'SAM','UAE');
Insert into sender.university values (101,'SAM','UK');
commit;


On Target Machine :

As per the filter condition, only rows where the country value is "UAE" should replicate from the Source Machine to the Target Machine.

SQL> select * from receiver.university;

Studcode Studname Country
100 SAM UAE

What is FILTER Keyword?

The FILTER clause offers us more functionality than the WHERE clause because you can employ any of GoldenGate’s column conversion functions to filter data, whereas the WHERE clause accepts basic WHERE operators.

For example, we can use standard arithmetic operators like '+', '-', '/', '*' or comparison operators like '>', '<', '=', as well as GoldenGate functions like @COMPUTE, @DATE, @STRFIND, @STRNUM, etc.

For example we can use the STRFIND function as part of the Extract parameter file to only extract records from the table that match a particular string value as shown below.


TABLE sender.university, FILTER (@STRFIND (country, "LONDON") > 0);


Another few examples for filtering

Filtering Columns

Suppose that instead of extracting all the columns in the Employees table, you're only interested in the EMPLOYEE_ID, FIRST_NAME and LAST_NAME columns. You can do this by adding the following to your Extract TABLE parameter.

TABLE HR.EMPLOYEES, COLS (employee_id, first_name, last_name);

In this example, all columns are ignored except the three columns listed in the COLS clause. Keep in mind that if you use column filtering in your data pump Extract, you can no longer use PASSTHRU mode, as shown in the sketch below.
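As a hedged sketch of that restriction (reusing the HR.EMPLOYEES example; the group name dpump and the connection details are illustrative), a data pump that filters columns would run with NOPASSTHRU:

Extract dpump
Userid ogg@goldengate:1521/source, password ****
Rmthost goldengate, mgrport 7810
Rmttrail /u01/app/oracle/product/GG2/dirdat/rt
NOPASSTHRU
Table HR.EMPLOYEES, COLS (employee_id, first_name, last_name);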



Another example of column filtering is when you want all the columns in a table to be extracted except for a few. Let's assume you want to extract the entire EMPLOYEES table except for the EMAIL column. You can do that in the Extract with the COLSEXCEPT parameter, as shown in the following example:

TABLE HR.EMPLOYEES, COLSEXCEPT (email);

Keep in mind that you can’t exclude key columns with COLSEXCEPT.



Filtering Rows
Let’s look at another example using filtering based on the DML operation, such as insert, update, or delete. Suppose you only want to replicate deletes if the employee’s monthly salary is less than 1000.

You can set up a filter, as shown next, that is executed only for delete statements. This example filters using the Extract TABLE parameter:


Table HR.EMPLOYEES,
FILTER (ON DELETE, SALARY / 12 < 1000);


You've seen a few examples of the extensive capabilities of GoldenGate for filtering tables, rows and columns.

Implementing Parallel Extracts and Replicats Using Key Ranges – Performance Tuning

One way to create parallel Extracts and Replicats is to use the @RANGE function to split the incoming table rows into equal buckets using a GoldenGate hash algorithm on the key values. You do this to divide the replication workload into parallel processes by key ranges, so that replication processes more data in parallel and Replicat lag is reduced.





Sample Table structure:
Create table sender.empl (empid number(10),empname varchar2(10),constraint emp_key unique(empid));
Create table receiver.empl (empid number(10),empname varchar2(10),constraint emp_key unique(empid));


On Source database

1. GGSCI> add extract ext_ran, tranlog, Begin now

2. GGSCI> add exttrail /u01/app/oracle/product/GG1/dirdat/lt,extract ext_ran

3. GGSCI> add extract dpump, exttrailsource /u01/app/oracle/product/GG1/dirdat/lt

4. GGSCI> edit params ext_ran

Extract ext_ran
Userid ogg@goldengate:1521/source,password ****
Exttrail /u01/app/oracle/product/GG1/dirdat/lt
Table sender.empl;

5. GGSCI> add rmttrail /u01/app/oracle/product/GG2/dirdat/rt,extract dpump, megabytes 500

6. GGSCI> edit params dpump
Extract dpump
Userid ogg@goldengate:1521/source,password ****
Rmthost goldengate,mgrport 7810
Rmttrail /u01/app/oracle/product/GG2/dirdat/rt
Passthru
Table sender.empl;

On Target database

1. GGSCI> add replicat rep_ran1, exttrail /u01/app/oracle/product/GG2/dirdat/rt, checkpointtable ogg.chkptab

2. GGSCI> edit params rep_ran1

Replicat rep_ran1
Userid OGG@goldengate:1521/target,password ****
MAP sender.empl, TARGET receiver.empl, COLMAP (USEDEFAULTS), FILTER (@RANGE (1,3));

3. GGSCI> add replicat rep_ran2, exttrail /u01/app/oracle/product/GG2/dirdat/rt, checkpointtable ogg.chkptab

4. GGSCI> edit params rep_ran2

Replicat rep_ran2
Userid OGG@goldengate:1521/target,password ****
MAP sender.empl, TARGET receiver.empl, COLMAP (USEDEFAULTS), FILTER (@RANGE (2,3));

5. GGSCI> add replicat rep_ran3, exttrail /u01/app/oracle/product/GG2/dirdat/rt, checkpointtable ogg.chkptab

6. GGSCI> edit params rep_ran3

Replicat rep_ran3
Userid OGG@goldengate:1521/target,password ****
MAP sender.empl, TARGET receiver.empl, COLMAP (USEDEFAULTS),FILTER (@RANGE (3,3));


On Source & Target database

1. GGSCI> start extract ext_ran

2. GGSCI> start extract dpump

3. GGSCI> start replicat rep_ran1

4. GGSCI> start replicat rep_ran2

5. GGSCI> start replicat rep_ran3

Or

GGSCI> start replicat rep_ran*

After some changes are applied, you can check the status of the Replicats to make sure they're processing properly.

1. GGSCI> info replicat rep_ran*
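The output for each group will look roughly like the following (values here are illustrative):

REPLICAT   REP_RAN1  Last Started 2011-12-15 12:00   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
Log Read Checkpoint  File /u01/app/oracle/product/GG2/dirdat/rt000000
                     2011-12-15 12:05:01.000000  RBA 5845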

Replicat abending with database error 1403

Replicat abends with database error 1403 (ORA-01403: no data found) during the initial startup, on a compressed update.

This error is seen very frequently in production environments when implementing GoldenGate for the first time.

During the instantiation of the target database, the Extract is started with "begin now" and then the export/RMAN backup is started. If there is an open transaction that is yet to be committed, it can be missed by both processes: it is not captured by the Extract and not included in the backup. When this open transaction is committed at a later point, and an update/delete is then fired against those rows and captured by the Extract, guess what happens?

The Replicat will not be able to find the row to apply the later transaction against.

There is a way to avoid this situation: alter the Extract to begin at the start time of the oldest open transaction from gv$transaction/v$transaction. Use the query below to find it:

SQL> select min(START_TIME) from gv$transaction;

MIN(START_TIME)
--------------------
12/15/11 20:53:54

Alter the Extract with the begin timestamp to process the oldest open transaction:
# delete the existing Extract and pump trails
# create a new trail sequence for the Extract and point the pump to read the new trail sequence

# ggsci> alter extract <group>, begin 2011-12-15 20:53:54
# ggsci> alter extract <group>, begin 2011-12-15 20:53:54, threads <n> (for RAC)

# alter the Replicat to the new trail sequence and start the processes.

Of course, remember to use HANDLECOLLISIONS until you pass the point in time up to which your original trail had been processed.

And for next time, make sure you run the GV$TRANSACTION SQL above, and create the Extract process to start capturing from the time you get from GV$TRANSACTION.
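For example, at instantiation time (the Extract group name e_src here is hypothetical):

SQL> select min(start_time) from gv$transaction;
-- suppose this returns 12/15/11 20:53:54

ggsci> add extract e_src, tranlog, begin 2011-12-15 20:53:54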

Wednesday, December 7, 2011

Incompatible record in the trail file

Extract or Replicat abends with error message "Incompatible Record" or ERROR 514 or ERROR 509.


An "Incompatible record" is when an incomplete record is encountered in a trail of regular records. An Incomplete record would miss out the header (where it is actually suppose to start) and would start in the middle of the record. The record is considered corrupted.

Such corruption can result from bad transmission, bad data recording, TCP/IP errors, or other hardware-related errors.

A few cases when it can happen:

1) Extract is writing to the trail, and a portion around the beginning or end of a record becomes corrupt. In this case, GoldenGate is missing a portion of the data.

2) Two Extracts overwriting the same set of trail files by mistake.

3) An Extract pump is writing to a trail file and crashes after the data has been flushed to the trail but before it could update its checkpoint file. When it is restarted, it repositions to its last checkpoint and writes over data that is already in the trail.

Some ways to avoid this:
Oracle recommends disabling the cache if AsyncIO is enabled.
Test your hardware and keep a constant eye on any disk or fibre-related errors.


Fixing and getting past this error:

There is no easy and right way to recover from such errors. In the end, there is a chance you might end up losing a transaction, which you will have to reconstruct from the source to make your data consistent.

If the trail is written by the pump, you can effectively stop the Replicat on the target and the pump on the source, and alter the pump to resend the files from the point at which it failed. It is not easy to compare and map out the transactions between the source and target, because the file sizes will rarely match, and thus the seqno and RBA differ between source and target.
Here are the steps:

- Stop the extract pump
- Create a new trail sequence (to avoid confusion and keep the old one intact)
- Change the extract checkpoint after you locate the correct RBA (make sure you back up the checkpoint file first)
- alter the replicat to this new trail sequence
- start the extract data pump and replicat

The above, if done correctly, avoids having to skip ANY transactions/records, keeping the data consistent.
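A minimal GGSCI sketch of those steps, with hypothetical group names p_src (the pump) and r_tgt (the Replicat), and placeholder positions you must locate yourself:

ggsci> stop replicat r_tgt
ggsci> stop extract p_src
ggsci> alter extract p_src, etrollover                       -- rolls the output over to a new trail sequence, keeping the old files intact
ggsci> alter extract p_src, extseqno <seqno>, extrba <rba>   -- repositions the pump's read point in the source trail
ggsci> alter replicat r_tgt, extseqno <new seqno>, extrba 0  -- points the Replicat at the start of the new sequence
ggsci> start extract p_src
ggsci> start replicat r_tgt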


If for some reason the record is bad on the source as well, then you can just alter the Replicat to skip this error and move to the next record (this might be a legitimate record, and you will end up missing it).

On the abended Replicat, change the following in its parameter file and restart:
MAXTRANSOPS 1
GROUPTRANSOPS 1

After the Replicat abends again, use logdump to look at the last seqno and RBA the Replicat is at:

./logdump
open <trail file>
ghdr on
detail on
pos <rba>
scanforheader



At this point, note down the seqno and RBA, and alter the Replicat to this new position:

alter replicat <group>, extseqno <seqno>, extrba <rba>

start replicat <group>

You will likely have lost data, though probably just a single record.

In most cases it is best to reconfigure the pump to send data through a different trail and re-apply the data (with HANDLECOLLISIONS if you couldn't find the exact RBA).

Data integrity issues may exist when using any of these recovery techniques above.

Friday, December 2, 2011

FILTER PARAMETER

Steps for filtering data between two Oracle databases through an initial load


We will filter the records that are extracted on the source and applied on the target – only rows where the country column value equals “UAE” in the “University” table will be considered for extraction.

1. Make sure to create the same table structure on the source and target machines (or use a data definitions file), and that the Manager process is running.

2. Enable supplemental redo logging on the tables to be replicated

3. Sample Table structure:

Create table sender.University
(studcode number(10),
studname varchar2(10),
country varchar2(10),
constraint uty_key unique(studcode));

Create table receiver.University
(studcode number(10),
studname varchar2(10),
country varchar2(10),
constraint uty_key unique(studcode));

On Source Machine

Step 1 :GGSCI> add extract fillter, sourceistable


Step 2 : GGSCI> edit params fillter

extract fillter

Userid ogg@goldengate:1521/source,password ****

Rmthost goldengate, mgrport 7810

Rmttask replicat,Group repl2

Table sender.university, FILTER (@STRFIND (country, "UAE") > 0);


Step 3 :GGSCI > start extract fillter

On Target Machine

Step 1 :GGSCI> add replicat repl2, specialrun


Step 2: GGSCI> edit params repl2

replicat repl2

userid ogg@goldengate:1521/target,password ****

assumetargetdefs

MAP sender.university, TARGET receiver.university;


Step 3: GGSCI> start replicat repl2


Test/Verification:

The output of the filter condition:

On Source Machine :

Sample insert statements for checking the replication

Insert into sender.university values (100,'SAM','UAE');

Insert into sender.university values (101,'SAM','UK');

commit;


On Target Machine :

As per the filter condition, only records where the country is "UAE" are replicated to the Target Machine.


SQL> select * from receiver.university;

Studcode Studname Country

100 SAM UAE


What is FILTER Keyword?

The FILTER clause offers us more functionality than the WHERE clause because you can employ any of GoldenGate’s column conversion functions to filter data, whereas the WHERE clause accepts basic WHERE operators.

For example, we can use standard arithmetic operators like '+', '-', '/', '*' or comparison operators like '>', '<', '=', as well as GoldenGate functions like @COMPUTE, @DATE, @STRFIND, @STRNUM, etc.

For example we can use the STRFIND function as part of the Extract parameter file to only extract records from the table that match a particular string value as shown below.

TABLE sender.university, FILTER (@STRFIND (country, "LONDON") > 0);

----------------------------All the best------------------------------------------------

Thursday, December 1, 2011

Steps for enabling DDL Replication with two Oracle databases

Steps for performing/testing Online DDL Replication between two Oracle Databases


Here I am performing GoldenGate unidirectional replication between two Oracle 11g instances.

Assuming that the user ogg is created on both the source and target databases.

Run the following scripts on the source from the directory where the GoldenGate software was installed.

SQL> alter session set recyclebin=OFF;

SQL> @marker_setup

SQL> @ddl_setup

SQL> @role_setup

Enter GoldenGate schema name: OGG

SQL> grant ggs_ggsuser_role to ogg;

SQL> @ddl_enable

SQL> @ddl_pin OGG -- optional
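Before proceeding, it is worth checking that the DDL trigger created by these scripts exists and is enabled (GGS_DDL_TRIGGER_BEFORE is the trigger the setup scripts install; a quick sanity-check query):

SQL> select owner, trigger_name, status from dba_triggers where trigger_name like 'GGS_DDL%';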


After installing, please follow the steps below:

1. Make sure to create the same table structure on the source and target machines, and that the Manager process is running.

2. Enable supplemental redo logging on the tables to be replicated.

3. Sample Table structure:

Create table sender.globe
(phonecode number(10),
country varchar2(10),
constraint glb_key unique(phonecode));

Create table receiver.globe
(phonecode number(10),
country varchar2(10),
constraint glb_key unique(phonecode));


On Source machine

Step 1 : GGSCI> add extract extp_ddl, tranlog, Begin now

Step 2 : GGSCI> edit params extp_ddl

Extract extp_ddl

Userid ogg@goldengate:1521/source,password XXXXXX

Rmthost goldengate, mgrport 7810

Rmttrail /u01/app/oracle/product/OGG/dirdat/rt

DDL INCLUDE MAPPED

Table sender.globe;



On Target machine

Step 1: GGSCI> edit params ./GLOBALS

checkpointtable ogg.chkptab

Step 2: GGSCI> exit

$ ./ggsci

GGSCI> dblogin userid ogg password xxx

GGSCI> add checkpointtable

Step 3: GGSCI> add replicat repl_ddl, exttrail /u01/app/oracle/product/OGG/dirdat/rt, checkpointtable ogg.chkptab

Step 4: GGSCI> edit params repl_ddl

Replicat repl_ddl

Userid ogg@goldengate:1521/target,password XXX

assumetargetdefs

MAP sender.globe, TARGET receiver.globe;


Start Extract and Replicat processes on source & target machines

1. GGSCI> start extract extp_ddl

2. GGSCI> start replicat repl_ddl



Once the Lag is 0, perform the following:

On the Source Database

SQL> ALTER TABLE SENDER.GLOBE ADD STATE VARCHAR2(10);

On Target Machine

SQL> DESC RECEIVER.GLOBE

phonecode number(10)

country varchar2(10)

state varchar2(10)


Look at "DDL INCLUDE MAPPED" in the source Extract parameter file. This is the parameter that causes DDL replication between the databases.

Short explanation of DDL INCLUDE MAPPED: this parameter enables DDL support for all tables that have been mapped.
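INCLUDE MAPPED is only one of the available scoping options. As a hedged illustration (the filters shown are examples, not a complete list), the DDL scope could instead be widened to everything or narrowed by object name:

DDL INCLUDE ALL
DDL INCLUDE OBJNAME sender.*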



Monday, October 31, 2011

Upgrade Oracle Database using Oracle GoldenGate

I have used many upgrade options in the past to upgrade an Oracle database: an in-place upgrade, an out-of-place upgrade, an upgrade on a logical standby, using materialized view logs and views, or using GoldenGate. It turns out Oracle GoldenGate is the best amongst all of them.

We get the most out of this replication method in terms of reliability, speed, freedom from errors, ease of use, and the least downtime. Here are a few more advantages compared to the others:

Advantages:
1. Least amount of work upfront.
2. Least amount of errors.
3. Low overhead of troubleshooting if something fails.
4. Least amount of programming effort.
5. Least amount of DOWNTIME.
6. Ongoing progress can be maintained. You can test your upgraded version while staying in sync with your production. You can also compare data between your old and new production at any time.

The only disadvantage is that there is a price tag associated with OGG, as with any other product. Contact us and we may be able to help you succeed with your upgrade plans.

Monday, October 24, 2011

Replicat Abended Due To Oracle Error ORA-12899

PROBLEM STATEMENT:

The report file contains:
2011-10-24 05:48:25 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 12899 (ORA-12899: value too large for column "SECS5"."ARDELETEDACCTHOLDERAUTH"."HINTQ" (actual: 137, maximum: 128)).
2011-10-24 05:48:25 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
2011-10-24 05:48:25 WARNING OGG-01003 Repositioning to rba 388304405 in seqno 362.
2011-10-24 05:48:26 WARNING OGG-00869 OCI Error ORA-12899: value too large for column "SECS5"."ARDELETEDACCTHOLDERAUTH"."HINTQ" (actual: 137, maximum: 128) (status = 12899), SQL ARDELETEDACCTHOLDERAUTH" ("BANKID","RANGEID","ACTIVE","CARDNUMBER","CARDHOLDERNAME","DATEEXPIRED","SHOPPERID","SECRETCODE","CHIPSECRET","HINTQ","HINTA>.


PROBLEM AREAS:

1. You have a different NLS_LANG on the source and target system.
or
2. You are using replicat parameter "REPLACEBADCHAR SPACE"


SOLUTION:

Change/add the NLS_LANG parameter on the Replicat to match what is specified on the Extract. Make sure the NLS_LANG parameter is added before the USERID and PASSWORD parameters used to connect to the database.

In the extract parameter file the NLS_LANG parameter has been set as follows

setenv (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")

Example (Replicat parameter file):

setenv (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")

userid xxxx password yyyy


2. If you are using the Replicat parameter "REPLACEBADCHAR SPACE"

Another cause of ORA-12899 is the use of the Replicat parameter "REPLACEBADCHAR SPACE" when processing multi-byte data; the parameter should not be used for multi-byte data.

For the second cause, simply do not use "REPLACEBADCHAR SPACE" when processing multi-byte data.


Sunday, October 23, 2011

Database Replication: Streams vs. GoldenGate

Replication in any database provides functionality to facilitate replication (one-way or multi-master), upgrades, reporting, data warehousing, query offloading, and many other uses without putting pressure on production. The main benefit of replicating data is to reduce and, if possible, eliminate the load on the source system. It also allows you to offload some percentage of the data and thus move load to another server. Replicating part of the source may be more efficient than cloning it completely with a standby system, when remote sites do not need exact copies of the source databases.

From the point of view of replication, Streams and GoldenGate might seem similar but are very different. Streams, for starters, sits inside the database, which in my experience makes it slower. GoldenGate, on the other hand, sits outside and uses memory from the OS and NOT from the database. Its basic design is to read redo logs/archive logs like a journal and record all new transactions that it encounters (based, of course, on your parameter settings).

Streams architecture, on the other hand, uses AQs within the database, which has proven to be very inefficient. Moreover, Streams for heterogeneous environments is very convoluted. Personally, I have yet to see a successful, efficient heterogeneous environment where people would prefer Streams over OGG.

Some of the places where OGG is a hit:
1. One way replication - Reporting
2. One way replication - DR (esp when the network pipe is small)
3. One way replication - DW/ODS/BI
4. Query off-Loading
5. Heterogeneous replication (from Oracle -> MS SQL, etc.)
6. Multi master between two database (Homogenous or Heterogeneous).






Wednesday, October 19, 2011

Data Type Support for GoldenGate 11.1.1.1

This blog post shows what is supported and unsupported in GoldenGate 11.1.1 replication.

Below are the supported and unsupported data types in GoldenGate 11.1.1 for Oracle databases. These are the basic limitations that must be taken into consideration when configuring replication with GoldenGate.

DATA TYPES
Supported GoldenGate Oracle Data Types

NUMBER
BINARY FLOAT
BINARY DOUBLE
CHAR
VARCHAR2
LONG
NCHAR
NVARCHAR2
RAW
LONG RAW
CLOB
NCLOB
BLOB
XML data types
User defined types (UDT)
DATE
TIMESTAMP
TIMESTAMP with TIMEZONE


Non-Supported Goldengate Oracle Data Types
ORDDICOM
ANYDATA
ANYDATASET
ANYTYPE
BFILE
MLSLABEL
TIMEZONE_ABBR
TIMEZONE_REGION
URITYPE
UROWID
DDL REPLICATION SUPPORT

Support for DDL on objects:
TABLES
INDEX
TRIGGERS
SEQUENCES
MATERIALIZED VIEWS
VIEWS
FUNCTION
PACKAGES
PROCEDURE
SYNONYMS
PUBLIC SYNONYMS

DDL On ALL Oracle Reserved Schemas is unsupported.
These include:
ANONYMOUS
AURORA$JIS$UTILITY$
AURORA$ORB$UNAUTHENTICATED
CTXSYS
DBSNMP
DMSYS
DSSYS
EXFSYS
MDSYS
ODM
ODM_MTR
OLAPSYS
ORDPLUGINS
ORDSYS
OSE$HTTP$ADMIN
OUTLN
PERFSTAT
PUBLIC
REPADMIN
SYS
SYSMAN
SYSTEM
TRACESVR
WKPROXY
WKSYS
WMSYS
XDB

ADDITIONAL UNSUPPORTED OBJECTS TO WATCH OUT FOR:

External tables are not supported.
Compressed tables are not supported (a quick screening query is shown after this list).
TDE at the column and tablespace level is supported; check the release guide for more information.
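A sketch of how candidate tables could be screened for the compression restriction (the owner list is illustrative; partitioned tables would also need dba_tab_partitions checked):

SQL> select owner, table_name from dba_tables where compression = 'ENABLED' and owner in ('SENDER');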


Restrictions on using DDL Replication


Oracle GoldenGate supports DDL synchronization only in a like-to-like configuration.

Oracle GoldenGate DDL replication requires the following:
Source and target object definitions must be identical.

The ASSUMETARGETDEFS parameter must be used in the Replicat parameter file. Delivery process (Replicat) will fail if objects are configured for DDL support and the SOURCEDEFS parameter is being used.
