
Monday, 19 May 2014

Differences between EXP/IMP and EXPDP/IMPDP


1) Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in the original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3) Impdp/Expdp use parallel execution rather than a single stream of execution, which improves performance.

4) With Data Pump, running expdp full=y and then impdp schemas=prod gives the same result as running expdp schemas=prod and then impdp full=y; the original Export/Import utilities do not always exhibit this behaviour (see the command sketch after this list).

5)Expdp/Impdp access files on the server rather than on the client.

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7) Sequential media such as tapes and pipes are not supported by Oracle Data Pump, whereas with the original Export/Import we could compress the dump on the fly by writing it through a pipe (see the pipe example after this list).

8) The Data Pump method for moving data between different database versions is different from the method used by the original Export/Import (see the VERSION example after this list).

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.


13) There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.
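As a rough illustration of points 3, 4 and 8 above, the commands below sketch how Data Pump might be invoked. The directory object DATA_PUMP_DIR, the schema name PROD, the credentials and the file names are placeholders for this example, not values taken from this post.

    # Point 4: a full export can be imported at schema level (and vice versa)
    expdp system/****** full=y directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=full_exp.log
    impdp system/****** schemas=prod directory=DATA_PUMP_DIR dumpfile=full.dmp logfile=prod_imp.log

    # Point 3: parallel workers writing a dump file SET (%U generates one file per worker)
    expdp system/****** full=y parallel=4 directory=DATA_PUMP_DIR dumpfile=full_%U.dmp logfile=full_par.log

    # Point 8: the VERSION parameter creates a dump readable by an older release
    expdp system/****** schemas=prod version=10.2 directory=DATA_PUMP_DIR dumpfile=prod_102.dmp logfile=prod_102.log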
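And, for point 7, a minimal sketch of the old trick of compressing an original exp dump through a named pipe; the paths and file names are again only examples.

    # Create a named pipe and compress whatever exp writes into it
    mkfifo /tmp/exp_pipe
    gzip < /tmp/exp_pipe > /backup/full_exp.dmp.gz &
    exp system/****** full=y file=/tmp/exp_pipe log=/backup/full_exp.log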

Monday, 5 August 2013

GoldenGate Architecture


GoldenGate enables us to extract and replicate data across a variety of topologies, as shown in the diagram below, as well as to exchange and manipulate data at the transactional level between a variety of database platforms such as Oracle, DB2, SQL Server, Ingres and MySQL.
It can support a number of different business requirements, such as:

  • Business Continuity and High Availability
  • Data migrations and upgrades
  • Decision Support Systems and Data Warehousing
  • Data integration and consolidation

Let us now look at the different components and processes that make up a typical GoldenGate configuration on Oracle.

(Architecture diagram source: Oracle GoldenGate Administration Guide)
Manager
The Manager process must be running on both the source and target systems before the Extract or Replicat processes can be started. It performs a number of functions, including monitoring and starting other GoldenGate processes, managing the trail files, and reporting.
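As a minimal sketch of how the Manager is typically set up from GGSCI; the port number and parameters shown here are only examples, not part of this post.

    GGSCI> EDIT PARAMS MGR
    -- mgr.prm
    PORT 7809
    PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS

    GGSCI> START MANAGER
    GGSCI> INFO MANAGER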
Extract
The Extract process runs on the source system and is the data capture mechanism of GoldenGate. It can be configured both for the initial loading of the source data and for synchronizing changed data on the source with the target. It can also be configured to propagate DDL changes on those databases where DDL change support is available.
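For illustration only, a sketch of creating a change-capture Extract group from GGSCI; the group name ext1, the trail ./dirdat/lt, the GoldenGate user and the schema name are placeholder assumptions.

    GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
    GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

    -- ext1.prm (EDIT PARAMS ext1)
    EXTRACT ext1
    USERID ggs_admin, PASSWORD ******
    EXTTRAIL ./dirdat/lt
    TABLE prod.*;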
Replicat
The Replicat process runs on the target system, reads the transactional data changes as well as DDL changes, and replicates them to the target database. Like the Extract process, the Replicat process can also be configured for Initial Load as well as Change Synchronization.
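Similarly, a sketch of a Replicat group on the target; the names and paths are assumptions, and a checkpoint table is normally registered first with ADD CHECKPOINTTABLE.

    GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt

    -- rep1.prm (EDIT PARAMS rep1)
    REPLICAT rep1
    USERID ggs_admin, PASSWORD ******
    ASSUMETARGETDEFS
    MAP prod.*, TARGET prod.*;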
Collector
The Collector is a background process which runs on the target system. It is started automatically by the Manager (Dynamic Collector), or it can be configured to start manually (Static Collector). It receives extracted data changes sent via TCP/IP and writes them to the trail files, from where they are processed by the Replicat process.
Trails
Trails are series of files that GoldenGate temporarily stores on disk; these files are written to and read from by the Extract and Replicat processes as the case may be. Depending on the configuration chosen, trail files can exist on the source as well as on the target system. A trail on the local system is known as an Extract Trail (or Local Trail), while one on the target system is known as a Remote Trail.
Data Pumps
Data Pumps are secondary Extract mechanisms which exist in the source configuration. This is an optional component: if a Data Pump is not used, the primary Extract sends data via TCP/IP directly to the remote trail on the target. When a Data Pump is configured, the primary Extract process writes to the Local Trail, the Data Pump reads that trail, and the data is sent over the network to the Remote Trail on the target system.
In the absence of a Data Pump, the data that the Extract process extracts resides in memory alone and is not stored anywhere on the source system, so a network or target failure can cause the primary Extract process to abend. A Data Pump is also useful when we are doing complex filtering and transformation of data, or when we are consolidating data from many sources into a central target.
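A sketch of how a Data Pump group might be added on top of the primary Extract shown earlier; all group names, hosts, ports and trail paths here are placeholders.

    GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/lt
    GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1

    -- pump1.prm (EDIT PARAMS pump1)
    EXTRACT pump1
    PASSTHRU
    RMTHOST target-host, MGRPORT 7809
    RMTTRAIL ./dirdat/rt
    TABLE prod.*;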
Data source
When processing transactional data changes, the Extract process can obtain data directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL etc.) or from a GoldenGate Vendor Access Module (VAM), where the database vendor (for example Teradata) provides the components that Extract uses to capture the data changes.
Groups
To differentiate between the different Extract and Replicat processes which can potentially co-exist on a system, we can define processing groups. For instance, if we want to replicate different sets of data in parallel, we can create two Replicat groups.
A processing group consists of the process itself (either an Extract or a Replicat process), a corresponding parameter file, a checkpoint file or checkpoint table (for Replicat), and any other files associated with that process.

Monday, 15 July 2013

How to speed up copying files using the scp command in Linux
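The -c option tells scp which SSH cipher to use. On a fast network an scp transfer is often CPU-bound by encryption, so choosing a cheaper cipher such as blowfish or arcfour can noticeably raise throughput, as the sample runs below show. (Note that recent OpenSSH releases may have removed arcfour and blowfish, in which case a fast cipher such as aes128-ctr is the usual substitute.)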

    scp output.dat 192.168.111.11:/tmp/               2%   27MB   5.2MB/s   03:01 ETA
    scp -c blowfish output.dat 192.168.111.11:/tmp/   6%  100MB   6.8MB/s   02:52 ETA
    scp -c arcfour output.dat 192.168.111.11:/tmp/   24%  235MB   7.2MB/s   01:42 ETA
    
To clear the command history in Linux: history -c
    
    
Labels: arcfour, blowfish, how to copy files faster in linux, imp command for dba, LINUX, Oracle, oracle dba, PERFORMANCE TUNING, scp, scp command in linux, scp make fast, speedup scp command