
Monday, 19 May 2014

Differences between EXP/IMP and EXPDP/IMPDP

1) Expdp/Impdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3) Expdp/Impdp use parallel execution rather than a single stream of execution, for improved performance.

4) In Data Pump, expdp full=y followed by impdp schemas=prod gives the same result as expdp schemas=prod followed by impdp full=y; original Export/Import does not always exhibit this behavior.

5) Expdp/Impdp access files on the server rather than on the client.

6) Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7) Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original export/import we could compress the dump on the fly by writing it through a pipe.

8) The Data Pump method for moving data between different database versions is different from the method used by original Export/Import.

9) When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10) Expdp/Impdp consume more undo tablespace than original Export and Import.

11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12) Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under Data Pump is when using transportable tablespaces.


13) There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.
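To make the command-line differences concrete, here is a minimal sketch of equivalent schema-level exports and imports with both tool sets (the directory object, file names, and PROD schema are assumptions for illustration):

Original export/import (dump file written on the client):
$ exp system/manager file=prod.dmp log=prod_exp.log owner=PROD
$ imp system/manager file=prod.dmp log=prod_imp.log fromuser=PROD touser=PROD

Data Pump (dump file set written on the server, via a directory object):
SQL> CREATE DIRECTORY dp_dir AS '/u01/dumps';
$ expdp system/manager directory=dp_dir dumpfile=prod%U.dmp logfile=prod_exp.log schemas=PROD parallel=2
$ impdp system/manager directory=dp_dir dumpfile=prod%U.dmp logfile=prod_imp.log schemas=PROD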

Tuesday, 23 July 2013

How to recover and open the database if the archive log required for recovery is missing.


As part of the recovery process, our restore went fine and we were also able to re-create the controlfile. During recovery, it asked for archive logs. We checked with our Unix team for the required archive logs and found out they didn't have them.
It was critical for us to recover the database because of a project deadline.
Error:
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 9867098396261 generated at 03/21/2008 13:37:44 needed for
thread 1
ORA-00289: suggestion : /arcredo/XSCLFY/log1_648355446_2093.arc
ORA-00280: change 9867098396261 for thread 1 is in sequence #2093
Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
After doing some research, I found that a hidden parameter (_ALLOW_RESETLOGS_CORRUPTION=TRUE) would allow us to open the database even though it was not properly recovered.
We forced the database open by setting _ALLOW_RESETLOGS_CORRUPTION=TRUE. It allowed us to open the database, but the instance crashed immediately after open. I checked the alert.log file and found undo tablespace corruption.
The alert log showed the error below:
Errors in file /u01/XSCLFYDB/admin/XSCLFY/udump/xsclfy_ora_9225.trc:
ORA-00600: internal error code, arguments: [4194], [17], [9], [], [], [], [], []
Tue Mar 25 12:45:55 2008
Errors in file /u01/XSCLFYDB/admin/XSCLFY/bdump/xsclfy_smon_24975.trc:
ORA-00600: internal error code, arguments: [4193], [53085], [50433], [], [], [], [], []
Doing block recovery for file 433 block 13525
Block recovery from logseq 2, block 31 to scn 9867098416340
To resolve the undo corruption issue, I changed undo_management to "Manual" in the init.ora. The database then opened successfully. Once it was up and running, I created a new undo tablespace and dropped the old corrupted one. I then changed undo_management back to "Auto" and pointed undo_tablespace to the new undo tablespace.
That resolved our issue and the database was up and running without any problems.
_ALLOW_RESETLOGS_CORRUPTION=TRUE allows the database to open without consistency checks. This may result in a corrupted database, so the database should be recreated afterwards.
As per Oracle Metalink, there is no 100% guarantee that setting _ALLOW_RESETLOGS_CORRUPTION=TRUE will open the database. However, once the database is opened, we must immediately rebuild it. A database rebuild means doing the following: (1) perform a full-database export, (2) create a brand new and separate database, and (3) import the recent export dump. This option can be tedious and time consuming, but once we successfully open the new database we expect minimal or perhaps no data loss at all. Before you try this option, ensure that you have a good and valid backup of the current database.

Solution:
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount.
3) Recover the database.
4) Alter database open resetlogs.
5) Set undo_management to "Manual" in the init.ora file.
6) Start up the database.
7) Create a new undo tablespace.
8) Change undo_management to "Auto" and undo_tablespace to the new undo tablespace.
9) Bounce the database.
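For reference, a minimal command-level sketch of these steps (the datafile path and tablespace names are assumptions for illustration):

-- init.ora: step 1 (and step 5 after the first open)
_allow_resetlogs_corruption=TRUE
undo_management=MANUAL

SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
SQL> ALTER DATABASE OPEN RESETLOGS;

-- steps 7 and 8: once the database is open with manual undo
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE '/u100/oradata/XSCLFY/undotbs02.dbf' SIZE 500M;
SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
-- then set undo_management=AUTO and undo_tablespace=UNDOTBS2 in init.ora and bounce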

Wednesday, 17 July 2013

Transportable tablespace for a different data block size (db_block_size)

Source: On this machine the database block size is 8K.
1) On the source machine, create a tablespace TTS and a user TTS, and assign TTS as the user's default tablespace (see the sketch after step 6).
2) exec dbms_tts.transport_set_check('TTS');
3) Check for violations using:
Select * from transport_set_violations;
4) Put tablespace TTS in read-only mode:
Alter tablespace TTS read only;
5) $ exp file=tts.dmp TABLESPACES=TTS TRANSPORT_TABLESPACE=Y
Username/password: sys as sysdba
6) Copy the dump file and datafile to the destination machine.
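For reference, step 1 on the source might look like this (a minimal sketch; the datafile path, size, and password are assumptions):

SQL> CREATE TABLESPACE tts DATAFILE '/u02/oradata/tts/tts01.dbf' SIZE 100M;
SQL> CREATE USER tts IDENTIFIED BY tts DEFAULT TABLESPACE tts;
SQL> GRANT CONNECT, RESOURCE TO tts;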

Destination: On this machine the database block size is 4K.
Add the following parameter to the pfile:
db_8k_cache_size=200M
Perform the import using the command below:
[oracle5@fed01 ~]$ imp file=tts.dmp TABLESPACES=TTS TRANSPORT_TABLESPACE=Y DATAFILES='/u02/oradata/tts/tts01.dbf'
Import: Release 11.2.0.1.0 - Production on Thu Jul 18 04:25:22 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing TTS's objects into TTS
. . importing table   "TTS1"
. importing SYS's objects into SYS
Import terminated successfully without warnings.

Note: If we don't mention db_8k_cache_size in the pfile, the import fails with the following error:
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 29339:
 "BEGIN   sys.dbms_plugts.beginImpTablespace('TTS',5,'SYS',1,0,8192,1,322450,"
 "1,2147483645,8,128,8,0,1,2147483645,8,1197301176,1,4129,321958,NULL,0,0,NUL"
 "L,NULL); END;"
IMP-00003: ORACLE error 29339 encountered
ORA-29339: tablespace block size 8192 does not match configured block sizes
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_PLUGTS", line 1682
ORA-06512: at "SYS.DBMS_PLUGTS", line 1813
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully
[oracle5@fed01 ~]$ oerr ora 29339
29339, 00000, "tablespace block size %s does not match configured block sizes"
// *Cause:  The block size of the tablespace to be plugged in or
//          created does not match the block sizes configured in the
//          database.
// *Action:Configure the appropriate cache for the block size of this
//         tablespace using one of the various (db_2k_cache_size,
//         db_4k_cache_size, db_8k_cache_size, db_16k_cache_size,
//         db_32K_cache_size) parameters.


Saturday, 13 July 2013

How to speed up the utlrp.sql script after catupgrd.sql

How to compile invalid objects faster:-


To see how many objects are invalid, issue:

select count(*) from dba_objects where status like 'INVALID';

To compile these invalid objects faster than the default, run the recompile script with an explicit degree of parallelism at the SQL prompt:

SQL> @?/rdbms/admin/utlprp.sql n

where n = number of CPUs + 2. (utlrp.sql is a wrapper that calls utlprp.sql with a degree of 0, which lets Oracle choose the parallelism; utlprp.sql accepts the degree directly.)

This will compile your invalid objects faster than the default serial recompilation.
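Alternatively, the same parallel recompilation can be driven from PL/SQL with the UTL_RECOMP package, which is what utlprp.sql uses underneath (the degree of 4 here is just an example):

SQL> EXEC UTL_RECOMP.RECOMP_PARALLEL(4);
SQL> select count(*) from dba_objects where status = 'INVALID';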

Friday, 12 July 2013

Block Change Tracking file



RMAN's change tracking feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.

Change tracking is disabled by default, because it introduces some minimal performance overhead on the database during normal operations. However, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks changes between backups. If your backup strategy involves incremental backups, then you should enable change tracking.

One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST. We can also specify the name of the block change tracking file, placing it in any desired location.

Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking files themselves generally require little maintenance after initial configuration.

From Oracle 10g onwards, the background process Block Change Tracking Writer (CTWR) does the job of writing modified block details to the block change tracking file.

In a Real Applications Clusters (RAC) environment, the change tracking file must be located on shared storage accessible from all nodes in the cluster.

Oracle saves enough change-tracking information to enable incremental backups to be taken using any of the 8 most recent incremental backups as its parent.

Although RMAN does not support backup and recovery of the change-tracking file itself, if the whole database or a subset needs to be restored and recovered, then recovery has no user-visible effect on change tracking. After the restore and recovery, the change tracking file is cleared, and starts recording block changes again. The next incremental backup after any recovery is able to use change-tracking data.

After enabling change tracking, the first level 0 incremental backup still has to scan the entire datafile, as the change tracking file does not yet reflect the status of the blocks. Subsequent incremental backups that use this level 0 as parent will take advantage of the change tracking file.
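For context, a typical incremental cycle after enabling change tracking looks like this (a minimal RMAN sketch):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;  # full scan; establishes the parent
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;  # reads only blocks flagged in the change tracking file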



Enabling and Disabling Change Tracking

We can enable or disable change tracking when the database is either open or mounted. To alter the change tracking setting, we must use SQL*Plus to connect to the target database with administrator privileges.

To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database. Then issue the following SQL statement to enable change tracking:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;  

We can also create the change tracking file in a desired location, using the following SQL statement:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman/rman_change_track.f';

The REUSE option tells Oracle to overwrite any existing file with the specified name.
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman/rman_change_track.f' REUSE;

To disable change tracking, use this SQL statement:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;  

If the change tracking file was stored in the database area, then it will be deleted when we disable change tracking.

Checking Whether Change Tracking is enabled
From SQL*Plus, we can query V$BLOCK_CHANGE_TRACKING to determine whether change tracking is enabled or not. 

SQL> select status from V$BLOCK_CHANGE_TRACKING;
        ENABLED   => block change tracking is enabled.
        DISABLED  => block change tracking is disabled.

Query V$BLOCK_CHANGE_TRACKING to display the filename.
SQL> select filename from V$BLOCK_CHANGE_TRACKING;

Moving the Change Tracking File
If you need to move the change tracking file, the ALTER DATABASE RENAME FILE command updates the control file to refer to the new location.

1. If necessary, determine the current name of the change tracking file:

SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
        /u02/rman/rman_change_track.f

2. Shut down the database.
SQL> SHUTDOWN IMMEDIATE

3. Using host operating system commands, move the change tracking file to its new location.
$ mv /u02/rman/rman_change_track.f /u02/rman_new/rman_change_track.f

4. Mount the database and rename the change tracking file in the control file to point to the new location. For example:

SQL> ALTER DATABASE RENAME FILE '/u02/rman/rman_change_track.f' TO '/u02/rman_new/rman_change_track.f';

5. Open the database.
SQL> ALTER DATABASE OPEN;

SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;
        /u02/rman_new/rman_change_track.f
If you cannot shut down the database, then you must disable change tracking and re-enable it at the new location:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/rman_new/rman_change_track.f';

If you choose this method, you will lose the contents of the change tracking file. Until the next time you complete a level 0 incremental backup, RMAN will have to scan the entire file.

Estimating the Size of the Change Tracking File on Disk
The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database.

Typically, the space required for block change tracking is approximately 1/30,000 of the size of the data blocks to be tracked. Two factors may cause the file to be larger than this estimate suggests:
· To avoid the overhead of allocating space as the database grows, the change tracking file size starts at 10MB, and new space is allocated in 10MB increments. Thus, for any database up to approximately 300GB the file size is no smaller than 10MB, for up to approximately 600GB no smaller than 20MB, and so on.

· For each datafile, a minimum of 320K of space is allocated in the change tracking file, regardless of the size of the file. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for a database with a smaller number of larger datafiles containing the same data.
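The actual size on disk can be checked directly from V$BLOCK_CHANGE_TRACKING:

SQL> SELECT filename, bytes/1024/1024 AS size_mb FROM V$BLOCK_CHANGE_TRACKING;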

Sunday, 7 July 2013

What happens during RMAN active duplicate cloning in oracle


In an active duplication process, online image copies of the target database and its archived redo log files are copied over the network through the auxiliary instance's service name, so we do not need a backup of the target database.
The target database must be in archivelog mode.
During the duplication process, RMAN does the following:
1. Generates a unique DBID for the auxiliary database.
2. Copies the datafiles and archived log files from the target database to the auxiliary database.
3. Re-creates the controlfiles for the auxiliary database.
4. Re-creates the online redo log files.
5. Restarts the auxiliary instance.
6. Opens the database with RESETLOGS.
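A minimal sketch of how an active duplication is invoked (the service names, password, and NOFILENAMECHECK option are assumptions for illustration):

$ rman TARGET sys/oracle@PROD AUXILIARY sys/oracle@AUXDB

RMAN> DUPLICATE TARGET DATABASE TO AUXDB
2> FROM ACTIVE DATABASE
3> NOFILENAMECHECK;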

Wednesday, 15 May 2013

Overview of Inventory





The inventory is a very important part of the Oracle Universal Installer. This is where OUI keeps all information regarding the products installed on a specific machine.

There are two types of inventories:

1) Global or Central Inventory: The Global Inventory records the physical location of Oracle products installed on the machine, such as ORACLE_HOMEs (RDBMS and IAS) or JRE. It does not have any information about the detail of patches applied to each ORACLE_HOME.

The Global Inventory gets updated every time you install or de-install an ORACLE_HOME on the machine, be it through OUI Installer, Rapid Install, or Rapid Clone.

Note: If you need to delete an ORACLE_HOME, you should always do it through the OUI de-installer in order to keep the Global Inventory synchronized.

If the Global Inventory is lost or corrupted, we can regenerate it using the runInstaller tool.
There is only one Global Inventory per machine. Its location is defined by the pointer file.
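For example, an existing ORACLE_HOME can be re-registered into a fresh inventory with OUI's attachHome option (the paths and home name below are assumptions):

$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -attachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" ORACLE_HOME_NAME="OraDb11g_home1"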

Pointer file: The pointer file is used to locate the Global Inventory. On Linux it lives at /etc/oraInst.loc; by looking at its contents we can find the location of the Global Inventory.
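A typical oraInst.loc looks like this (the inventory path is an example):

$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall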

2) Local Inventory: The Local Inventory contains information about the installed software specific to a single ORACLE_HOME. There is one Local Inventory per ORACLE_HOME. It is physically located inside the ORACLE_HOME at $ORACLE_HOME/inventory and contains the detail of the patch level for that ORACLE_HOME.

The Local Inventory gets updated whenever a patch is applied to the ORACLE_HOME, using OUI.

If the Local Inventory becomes corrupt or is lost, it is very difficult to recover, and this may result in having to reinstall the ORACLE_HOME and re-apply all patchsets and patches.
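The patch detail recorded in the Local Inventory is what OPatch reads; a quick way to inspect it is:

$ $ORACLE_HOME/OPatch/opatch lsinventory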

Tuesday, 2 April 2013

11G RMAN BACKUP BASED CLONING


Source (TARGET) instance information:
Instance name=DLINK
hostname=mydbt4db2
datafile location=/u03/oradata/DLINK, /u02/oradata/DLINK
Redolog file location=/u03/oradata/DLINK
Archive log file location=/u02/oradata/arch

Auxiliary instance (RLINK) information:

Instance name=RLINK
hostname=mydbt4db4
datafile location=/u03/oradata/RLINK, /u02/oradata/RLINK
Redolog file location=/u03/oradata/RLINK
Archive log file location=/u02/oradata/arch_RLINK

1) Create required directory structure at auxiliary side

[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/RLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u03/oradata/RLINK
[oracle@mydbt4db4 admin]$ mkdir -p /u02/oradata/arch_RLINK

2) Define ORACLE_SID and other required settings for auxiliary instance

[oracle@mydbt4db4 ~]$ vi .bash_profile
[oracle@mydbt4db4 ~]$ pwd
/home/oracle
[oracle@mydbt4db4 ~]$ . .bash_profile
[oracle@mydbt4db4 ~]$ echo $ORACLE_SID
RLINK
[oracle@mydbt4db4 ~]$
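The relevant additions to .bash_profile might look like this (the ORACLE_HOME path is an assumption):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=RLINK
export PATH=$ORACLE_HOME/bin:$PATH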

3) Create the initialization parameter file for the auxiliary instance (RLINK)


[oracle@mydbt4db4 dbs]$ vi initRLINK.ora
[oracle@mydbt4db4 dbs]$ cat initRLINK.ora
db_name='RLINK'
memory_target=200M
db_block_size=8192
control_files ='/u02/oradata/RLINK/control1.ctl','/u03/oradata/RLINK/control2.ctl'
compatible ='11.2.0'
log_archive_dest=/u02/oradata/arch_RLINK
log_file_name_convert='/u03/oradata/DLINK','/u03/oradata/RLINK'
db_file_name_convert='/u03/oradata/DLINK','/u03/oradata/RLINK','/u02/oradata/DLINK','/u02/oradata/RLINK'

4) Create the password file at the auxiliary side (RLINK)

[oracle@mydbt4db4 ~]$ cd $ORACLE_HOME/dbs
[oracle@mydbt4db4 dbs]$ orapwd file=orapw$ORACLE_SID password=sys force=y
[oracle@mydbt4db4 dbs]$ ls -ltr orapwR*
-rw-r----- 1 oracle oinstall 1536 Nov 26 16:55 orapwRLINK
[oracle@mydbt4db4 dbs]$


5) Take a backup of the source database

RMAN> backup database plus archivelog;


Starting backup at 26-NOV-12
current log archived
configuration for DISK channel 2 is ignored
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
.
.
.
.
Starting Control File and SPFILE Autobackup at 26-NOV-12
piece handle=/u02/oradata/rman_bkp/c-2903807241-20121126-00 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-12


6) Prepare for the duplicate by starting the auxiliary instance

[oracle@mydbt4db4 rman_bkp]$  sqlplus '/as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 26 08:41:59 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
.
7) Copy the backupsets from source to your auxiliary server

If the duplicate is going to happen on a different server, move the backup pieces to the new server using commands like ftp, scp, etc.

[oracle@mydbt4db2 rman_bkp]$ scp *  oracle@192.168.3.19:/u02/oradata/RMAN_BKP


8) Connect to the auxiliary instance from RMAN and perform the rman duplicate as follows

rman auxiliary /

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Nov 26 20:41:32 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: RLINK (not mounted)

RMAN>  DUPLICATE DATABASE TO 'RLINK'
2> BACKUP LOCATION '/u02/oradata/RMAN_BKP';

Starting Duplicate Db at 26-NOV-12

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
.
.
.
.
.
executing Memory Script
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 26-NOV-12