Monday 14 October 2013

Important Concurrent Manager Scripts

Non Destructive Script to Clean Concurrent Manager Tables
CMCLEAN.sql
(<– Click here for details & download)
This script resets the status flags of requests to Completed so that the Managers can come back up
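For a rough idea of what such a cleanup does, here is a simplified sketch that marks requests stuck in the Running phase as Completed (illustrative only - the real CMCLEAN.sql from Oracle Support also resets manager queue and process flags, and the exact status codes it sets may differ):

update fnd_concurrent_requests
   set phase_code  = 'C',    -- phase: Completed
       status_code = 'E'     -- status: Error (assumed value for illustration)
 where phase_code  = 'R';    -- requests left in the Running phase
commit;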

How To Re-create Concurrent Manager Views
(<– Click here for details & download)
This script details how to Re-create the Concurrent Manager Views

Concurrent Manager Check Script
CCM.sql
(<– Click here for details & download)
This script will diagnose Concurrent Managers and Concurrent Requests

Concurrent Manager Check Script
FNDCCMDiagnostic115.sh
(<– Click here for details & download)
This script will diagnose Concurrent Managers and Concurrent Requests

Internal Manager Log Script
ICMLOG.sql
(<– Click here for details & download)
This script will provide the location for Retrieving the Internal Manager Log file
Concurrent Request Log Script
CMLOGS.txt
(<– Click here for details & download)
Script for Retrieving Log Files Associated with a Concurrent Request

Concurrent Request Check Script
REQCHECK.sql
(<– Click here for details & download)
Script for diagnosing problematic Concurrent Requests

Analyze Pending Requests Script
ANALYZEPENDING.sql
(<– Click here for details & download)
Script for diagnosing Pending Concurrent Requests

Analyze Request Script
ANALYZEREQ.sql
(<– Click here for details & download)
Script for Providing a Detailed Analysis of One Concurrent Request

Who Can Run which Requests
WHOCANRUN.sql
(<– Click here for details & download)
Script that lists Responsibilities that can Run a Given Concurrent Program


Concurrent Request Diagnostic Script
bde_request.sql
(<– Click here for details & download)
Process and Session info for one Concurrent Request
Printing Script
FNDPrinterValidation115.sh
(<– Click here for details & download)
This script checks a customer's Printer configuration/setup within Oracle Applications.
Report Review Agent Script
FNDValidateFNDFS115.sh
(<– Click here for details & download)
This script checks basic setup of the concurrent processing tier report review agent.

Monday 5 August 2013

Oracle GoldenGate ---performing initial data load


This example illustrates using the GoldenGate direct load method to extract records from an Oracle 11g database on a Red Hat Linux platform and load them into an Oracle 11g target database on an AIX platform.
The table PRODUCTS in the SH schema on the source has 72 rows and on the target database the same table is present only in structure without any data. We will be loading the 72 rows in this example from the source database to the target database using GoldenGate Direct Load method.
On Source
1) Create the Initial data extract process 'load1'
GGSCI (redhat346.localdomain) 5> ADD EXTRACT load1, SOURCEISTABLE
EXTRACT added.
Since this is a one-time data extract task, the source of the data is not the transaction log files of the RDBMS (in this case the online and archive redo log files) but the table data itself, which is why the keyword SOURCEISTABLE is used.
2) Create the parameter file for the extract group load1
EXTRACT: name of the extract group
USERID/PASSWORD: the database user which has been configured earlier for Extract ( this user is created in the source database)
RMTHOST: This will be the IP address or hostname of the target system
MGRPORT: the port where the Manager process is running
TABLE: specify the table which is being extracted and replicated. This can be specified in a number of ways using wildcard characters to include or exclude tables as well as entire schemas.
GGSCI (redhat346.localdomain) 6> EDIT PARAMS load1
EXTRACT load1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTASK replicat, GROUP load2
TABLE sh.products;
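
As noted above, the TABLE parameter also accepts wildcards; for example, to capture every table in the SH schema the parameter file could instead contain:
TABLE sh.*;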
On Target
3) Create the initial data load task 'load2'
Since this is a one time data load task, we are using the keyword SPECIALRUN
GGSCI (devu007) 1> ADD REPLICAT load2, SPECIALRUN
REPLICAT added.
4) Create the parameter file for the Replicat group, load2
REPLICAT: name of the Replicat group created for the initial data load
USERID/PASSWORD: database credentials for the Replicat user (this user is created in the target database)
ASSUMETARGETDEFS: this means that the source table structure exactly matches the target database table structure
MAP: with GoldenGate we can have the target database structure entirely differ from that of the source in terms of table names as well as the column definitions of the tables. This parameter provides us the mapping of the source and target tables which is same in this case
GGSCI (devu007) 2> EDIT PARAMS load2
“/u01/oracle/software/goldengate/dirprm/rep4.prm” [New file]
REPLICAT load2
USERID ggs_owner, PASSWORD ggs_owner
ASSUMETARGETDEFS
MAP sh.products, TARGET sh.products;
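
To illustrate the point about differing source and target structures, a mapping to a differently named target (hypothetical schema and table names) would instead look like:
MAP sh.products, TARGET sales_hist.products_archive;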
On Source
SQL> select count(*) from products;
COUNT(*)
———-
72
On Target
SQL> select count(*) from products;
COUNT(*)
———-
0
On Source
5) Start the initial load data extract task on the source system
We now start the initial data load task load1 on the source. Since this is a one-time task, we will initially see that the extract process is running, and after the data load is complete it will be stopped. We do not have to manually start the Replicat process on the target as that is done when the Extract task is started on the source system.
On Source
GGSCI (redhat346.localdomain) 16> START EXTRACT load1
Sending START request to MANAGER …
EXTRACT LOAD1 starting
GGSCI (redhat346.localdomain) 28> info extract load1
EXTRACT LOAD1 Last Started 2010-02-11 11:33 Status RUNNING
Checkpoint Lag Not Available
Log Read Checkpoint Table SH.PRODUCTS
2010-02-11 11:33:16 Record 72
Task SOURCEISTABLE
GGSCI (redhat346.localdomain) 29> info extract load1
EXTRACT LOAD1 Last Started 2010-02-11 11:33 Status STOPPED
Checkpoint Lag Not Available
Log Read Checkpoint Table SH.PRODUCTS
2010-02-11 11:33:16 Record 72
Task SOURCEISTABLE
On Target
SQL> select count(*) from products;
COUNT(*)
———-
72
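
To double-check the completed task, its run report can also be reviewed from GGSCI:
GGSCI> VIEW REPORT LOAD1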

GoldenGate---- Installation (Oracle 11g on Linux)


This example will illustrate the installation of Oracle GoldenGate on an RHEL 5 platform. We had in an earlier post discussed the architecture and various components of a GoldenGate environment.
GoldenGate software is also available on OTN, but for our platform we need to download the required software from the Oracle E-Delivery web site.
Select the Product Pack “Oracle Fusion Middleware” and the platform Linux X86-64.
Then select "Oracle GoldenGate on Oracle Media Pack for Linux x86-64" and, since we are installing this for an Oracle 11g database, download "Oracle GoldenGate V10.4.0.x for Oracle 11g 64bit on RedHat 5.0".
unzip V18159-01.zip
Archive: V18159-01.zip
inflating: ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
$tar -xvof ggs_redhatAS50_x64_ora11g_64bit_v10.4.0.19_002.tar
export PATH=$PATH:/u01/oracle/ggs
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/oracle/ggs
ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 17 2009 23:51:28
Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.
GGSCI (redhat346.localdomain) 1>
GGSCI (redhat346.localdomain) 1> CREATE SUBDIRS
Creating subdirectories under current directory /u01/oracle/ggs
Parameter files /u01/oracle/ggs/dirprm: created
Report files /u01/oracle/ggs/dirrpt: created
Checkpoint files /u01/oracle/ggs/dirchk: created
Process status files /u01/oracle/ggs/dirpcs: created
SQL script files /u01/oracle/ggs/dirsql: created
Database definitions files /u01/oracle/ggs/dirdef: created
Extract data files /u01/oracle/ggs/dirdat: created
Temporary files /u01/oracle/ggs/dirtmp: created
Veridata files /u01/oracle/ggs/dirver: created
Veridata Lock files /u01/oracle/ggs/dirver/lock: created
Veridata Out-Of-Sync files /u01/oracle/ggs/dirver/oos: created
Veridata Out-Of-Sync XML files /u01/oracle/ggs/dirver/oosxml: created
Veridata Parameter files /u01/oracle/ggs/dirver/params: created
Veridata Report files /u01/oracle/ggs/dirver/report: created
Veridata Status files /u01/oracle/ggs/dirver/status: created
Veridata Trace files /u01/oracle/ggs/dirver/trace: created
Stdout files /u01/oracle/ggs/dirout: created
We then need to create a database user which will be used by the GoldenGate Manager, Extract and Replicat processes. We can create individual users for each process or configure just a common user – in our case we will create the one user GGS_OWNER and grant it the required privileges.
SQL> create tablespace ggs_data
datafile '/u02/oradata/gavin/ggs_data01.dbf' size 200m;
SQL> create user ggs_owner identified by ggs_owner
default tablespace ggs_data
temporary tablespace temp;
User created.
SQL> grant connect,resource to ggs_owner;
Grant succeeded.
SQL> grant select any dictionary, select any table to ggs_owner;
Grant succeeded.
SQL> grant create table to ggs_owner;
Grant succeeded.
SQL> grant flashback any table to ggs_owner;
Grant succeeded.
SQL> grant execute on dbms_flashback to ggs_owner;
Grant succeeded.
SQL> grant execute on utl_file to ggs_owner;
Grant succeeded.
We can then confirm that the GoldenGate user we have just created is able to connect to the Oracle database
ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
AIX 5L, ppc, 64bit (optimized), Oracle 11 on Sep 17 2009 23:54:16
Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.
GGSCI (devu007) 1> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
Successfully logged into database.
We also need to enable supplemental logging at the database level, otherwise we will get the following error when we try to start the Extract process:
2010-02-08 13:51:21 GGS ERROR 190 No minimum supplemental logging is enabled. This may cause extract process to handle key update incorrectly if key
column is not in first row piece.
2010-02-08 13:51:21 GGS ERROR 190 PROCESS ABENDING.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Database altered
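
We can verify the setting with this query:
SQL> SELECT supplemental_log_data_min FROM v$database;
SUPPLEME
--------
YES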

GoldenGate -- Configuring the Manager process


The Oracle GoldenGate Manager performs a number of functions like starting the other GoldenGate processes, trail log file management and reporting.
The Manager process needs to be configured on both the source as well as target systems and configuration is carried out via a parameter file just as in the case of the other GoldenGate processes like Extract and Replicat.
After installation of the software, we launch the GoldenGate Software Command Interface (GGSCI) and issue the following command to edit the Manager parameter file
EDIT PARAMS MGR
The only mandatory parameter that we need to specify is the PORT which defines the port on the local system where the manager process is running. The default port is 7809 and we can either specify the default port or some other port provided the port is available and not restricted in any way.
Another recommended optional parameter is AUTOSTART, which automatically starts the Extract and Replicat processes when the Manager starts.
The USERID and PASSWORD parameters are required if you enable GoldenGate DDL support; this is the Oracle user account that we created for the Manager (and Extract/Replicat) as described in the earlier tutorial.
The Manager process can also clean up trail files from disk once GoldenGate has finished processing them, via the PURGEOLDEXTRACTS parameter. Used with the USECHECKPOINTS clause, it ensures that trail files are not deleted until all processes have finished using the data they contain.
The following is an example of a manager parameter file
[oracle@redhat346 ggs]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 10.4.0.19 Build 002
Linux, x64, 64bit (optimized), Oracle 11 on Sep 17 2009 23:51:28
Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.
GGSCI 2> EDIT PARAMS MGR
PORT 7809
USERID ggs_owner, PASSWORD ggs_owner
PURGEOLDEXTRACTS /u01/oracle/ggs/dirdat/ex, USECHECKPOINTS
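
If we also want the Manager to automatically start the Extract and Replicat processes, an AUTOSTART entry can be added to the same parameter file, for example:
AUTOSTART ER *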
The Manager can be stopped and started via the GGSCI commands START MANAGER and STOP MANAGER.
Information on the status of the Manager can be obtained via the INFO MANAGER command
GGSCI (devu007) 4> info manager
Manager is running (IP port devu007.7809).


source: gavinsoorma.com

GoldenGate Architecture


GoldenGate enables us to extract and replicate data across a variety of topologies, as well as to exchange and manipulate data at the transactional level between a variety of database platforms like Oracle, DB2, SQL Server, Ingres, MySQL etc.
It can support a number of different business requirements like:

  • Business Continuity and High Availability
  • Data migrations and upgrades
  • Decision Support Systems and Data Warehousing
  • Data integration and consolidation

    Let us now look at the different components and processes that make up a typical GoldenGate configuration on Oracle.

    (architecture diagram omitted; source: Oracle GoldenGate Administration Guide)
    Manager
    The Manager process must be running on both the source as well as target systems before the Extract or Replicat process can be started and performs a number of functions including monitoring and starting other GoldenGate processes, managing the trail files and also reporting.
    Extract
    The Extract process runs on the source system and is the data capture mechanism of GoldenGate. It can be configured both for initial loading of the source data as well as to synchronize the changed data on the source with the target. It can also be configured to propagate any DDL changes on those databases where DDL change support is available.
    Replicat
    The Replicat process runs on the target system, reads transactional data changes as well as DDL changes, and replicates them to the target database. Like the Extract process, the Replicat process can also be configured for Initial Load as well as Change Synchronization.
    Collector
    The Collector is a background process which runs on the target system and is started automatically by the Manager (Dynamic Collector), or it can be configured to start manually (Static Collector). It receives extracted data changes that are sent via TCP/IP and writes them to the trail files, from where they are processed by the Replicat process.
    Trails
    Trails are a series of files that GoldenGate temporarily stores on disk; these files are written to and read from by the Extract and Replicat processes as the case may be. Depending on the configuration chosen, trail files can exist on the source as well as on the target system. A trail on the local system is known as an Extract Trail, while one on the target system is known as a Remote Trail.
    Data Pumps
    Data Pumps are secondary extract mechanisms which exist in the source configuration. This is an optional component; if a Data Pump is not used then Extract sends data via TCP/IP to the remote trail on the target. When a Data Pump is configured, the Primary Extract process writes to the Local Trail, this trail is read by the Data Pump, and the data is sent over the network to Remote Trails on the target system.
    In the absence of a Data Pump, the data that the Extract process extracts resides in memory alone and is not stored anywhere on the source system, so in case of network or target failures the primary extract process can abort or abend. A Data Pump can also be useful in those cases where we are doing complex filtering and transformation of data, as well as when we are consolidating data from many sources to a central target.
    Data source
    When processing transactional data changes, the Extract process can obtain data directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL etc) or from a GoldenGate Vendor Access Module (VAM) where the database vendor (for example Teradata) will provide the required components that will be used by Extract to extract the data changes.
    Groups
    To differentiate between the number of different Extract and Replicat groups which can potentially co-exist on a system, we can define processing groups. For instance, if we want to replicate different sets of data in parallel, we can create two Replicat groups.
    A processing group consists of a process (either an Extract or Replicat process), a corresponding parameter file, a checkpoint file or checkpoint table (for Replicat), and any other files associated with the process.
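
    As a simple illustration (group and trail names here are made up), two Replicat groups reading different trails could be created as follows:

    GGSCI> ADD REPLICAT rep1, EXTTRAIL /u01/oracle/ggs/dirdat/aa
    GGSCI> ADD REPLICAT rep2, EXTTRAIL /u01/oracle/ggs/dirdat/bb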

    Monday 29 July 2013

    Partitioning In oracle



    This Document Covers
    1)What is Partitioning; What Partitioning Options Oracle Does and Does Not Support
    2)Advantages Of Partitions
    3)Types Of Partitions with Examples
       3.1)Range Partitions
       3.2)List Partitions         
       3.3)Hash Partitions
       3.4)Composite Partitions
                3.4.1)Range-Hash
                3.4.2)Range-List
                3.4.3)Range-Range
                3.4.4)List-Range
                3.4.5)List-Hash
                3.4.6)List-List
                3.4.7)Interval-Range
                3.4.8)Interval-List
                3.4.9)Interval-Hash
    4)Related Base Tables and Views for Partitions
    5)Converting Existing Non Partitioned Table to Partitioned Table
       5.1)Using Export/Import Method
       5.2)Using Insert With SubQuery Method
       5.3)Using Partition Exchange Method
       5.4)Using DBMS_REDEFINITION Method
    6)Converting Existing Partitioned Table to Non  Partitioned Table
       6.1)Using Export/Import Method
       6.2)Using Insert With SubQuery Method
       6.3)Using Partition Exchange Method
       6.4)Using DBMS_REDEFINITION Method
    7)Global and Local Indexes
      7.1)Local Indexes
      7.2)Global Indexes
                7.2.1)Non Partitioned Global Indexes
                 7.2.2)Partitioned Global Indexes
    8)Partition Pruning
    9)References
    PARTITIONING IN ORACLE:

    Partitioning allows a table, index or index-organized table to be subdivided into smaller pieces. Each piece of the database object is called a partition. Each partition has its own name, and may optionally have its own storage characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. A partitioned table is identical to a non-partitioned table as far as SQL is concerned; no modifications are necessary when accessing a partitioned table using SQL DML commands.

    Database objects - tables, indexes, and index-organized tables - are partitioned using a 'partitioning key', a set of columns which determines in which partition a given row will reside (simply put, the partitioning key is the column on which you want to partition the table).

    Once partitions are defined, SQL statements can access and manipulate individual partitions rather than entire tables, indexes or materialized views.

    Oracle supports:
    (i) partitioning only for tables, indexes on tables/materialized views, and materialized views.

    Oracle does not support:
    (i) Partitioning of clustered tables or indexes on clustered tables.
    (ii) Partitioning of LONG and LONG RAW datatypes.
    As of Oracle8i, up to 64,000 partitions per table or index are supported.

    Advantages of Partitioning:

    1.       Partitions can be independently managed.
    2.       Backup & recovery can be done on individual partitions. Hence maintenance windows will be reduced.
    3.       A partition can be unavailable (say one disk goes down) without stopping the show for the rest of the table.
    4.       Partitions can be moved from one tablespace to another.
    5.       Can be added, dropped and truncated at the partition level.
    6.       Select, insert, update and delete can be done at the partition level, instead of table level and can be done in parallel.
    7.       Oracle optimizer eliminates partitions that need not be scanned. This is  called partition pruning.
    8.       Partitions can be scanned in parallel (OPQO - Oracle Parallel Query Option).
    9.       The size of the temporary segments used for sorting can be reduced.
    10.   We can load-balance partitions across physical devices.
    11.   We can do Export/Import data from/to at partition level, by using normal export/import or datapump export/import.
    $ exp ... tables=emp:partition2
    $ imp ... tables=emp:partition2
    $ expdp ... tables=test.test_tab:part_2007
    $ impdp ... partition_options=departition dumpfile=part_2007.dmp
    12.   SQL*Loader can load data into a specified partition.



    Types of Partitions:-
    The following table lists the partitioning features (high level) introduced in each version/release:

    Oracle Database Version    Partitioning Features
    8.0.5                      Introduced Range partitioning
    8i                         Introduced Hash and composite Range-Hash partitioning
    9i                         Introduced List partitioning and composite Range-List partitioning
    10g                        Introduced Range, List and Hash partitioning of Index Organized tables, plus further composite partitioning options
    11g                        Introduced partition extensions: Interval partitioning, REF partitioning, Virtual Column-based partitioning, and the Partition Advisor

    Range Partitioning:-
    The data is distributed based on a range of values of the partitioning key. To create a range-partitioned table we must specify the "PARTITION BY RANGE" clause while creating the table.
    Range partitioning is useful when you have distinct ranges of data you want to store together.
    The classic example of this is the use of dates. Partitioning a table using date ranges allows all data of a similar age to be stored in same partition. Once historical data is no longer needed the whole partition can be removed.
    Example for Range partition table.
    CREATE TABLE  invoices
    (invoice_no    NUMBER  NOT  NULL,
     invoice_date  DATE   NOT  NULL,
     comments      VARCHAR2(500))
    PARTITION BY RANGE (invoice_date)
    (PARTITION invoices_q1 VALUES LESS THAN (TO_DATE('01/04/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q2 VALUES LESS THAN (TO_DATE('01/07/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q3 VALUES LESS THAN (TO_DATE('01/10/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q4 VALUES LESS THAN (TO_DATE('01/01/2002', 'DD/MM/YYYY')) TABLESPACE users);

    In the above example invoice_date is the partition key, so the data is split into partitions based on the invoice_date column.

    Check Whether partition is created or not:

    select TABLE_NAME,TABLE_OWNER,PARTITION_NAME,PARTITION_POSITION,TABLESPACE_NAME from DBA_TAB_PARTITIONS where table_name like '%INVOICES%';

    To select data from single partition:

    select * from  invoices partition(invoices_q1);
    select * from  invoices partition(invoices_q2);

    LIST Partitioning:
     The data distribution is defined by a list of values of the partitioning key. This is useful for discrete lists. e.g: Regions, States etc. For list partitioning, the partitioning key can only be a single column name from the table.

    we specify a PARTITION BY LIST clause in the CREATE TABLE statement,

    The DEFAULT partition enables you to avoid specifying all possible values for a list partitioned table by using a default partition, so that all rows that do not map to any other partition do not generate an error.(It means if we insert data which do not point towards any partition then data will reside in DEFAULT partition)

    Example for List Partitioning:

    create table dept_part8(deptno number(2), dname varchar2(14), loc varchar2(13))
    partition by list (dname)
    (partition d1_east values ('BOSTON', 'NEW YORK') tablespace users,
     partition d2_west values ('SAN FRANCISCO', 'LOS ANGELES') tablespace sysaux,
     partition d3_south values ('ATLANTA', 'DALLAS') tablespace users,
     partition d4_north values (DEFAULT) tablespace users);

    Check Whether partition is created or not:

    select TABLE_NAME,TABLE_OWNER,PARTITION_NAME,PARTITION_POSITION,TABLESPACE_NAME from DBA_TAB_PARTITIONS where table_name like '%DEPT_PART%';

    To select data from single partition:

    select * from  dept_part8 partition(D4_NORTH);
    select * from  dept_part8 partition(D1_EAST);



    HASH Partitioning:-
    Hash partitioning enables partitioning of data that does not lend itself to range or list partitioning. A hash algorithm is applied to the partitioning key to determine the partition for a given row. This provides I/O balancing, but cannot be used for range or inequality queries.
    Hash partitioning does not give the partitions any logical meaning, as range partitioning does.

    Example to Create Hash Partitioning:

    CREATE TABLE invoices2
    (invoice_no    NUMBER NOT NULL,
     invoice_date  DATE   NOT NULL,
     comments      VARCHAR2(500))
    PARTITION BY HASH (invoice_no)
    (PARTITION invoices_q1 TABLESPACE users,
     PARTITION invoices_q2 TABLESPACE users,
     PARTITION invoices_q3 TABLESPACE users,
     PARTITION invoices_q4 TABLESPACE users);

    Check Whether partition is created or not:

    select TABLE_NAME,TABLE_OWNER,PARTITION_NAME,PARTITION_POSITION,TABLESPACE_NAME from DBA_TAB_PARTITIONS where table_name like 'INVOICES%';

    To select data from single partition:

    select * from  invoices2 partition(invoices_q1);
    select * from  invoices2 partition(invoices_q2);


    INTERVAL Partitioning:-
    Interval partitioning is an extension to range partitioning in which, beyond a point in time, partitions are defined by an interval. Interval partitions are automatically created by the database when data is inserted into the partition.

    Interval partitioning is defined by an interval, providing equi-width ranges. With the exception of the first partition, all partitions are automatically created as needed when matching data arrives.


    CREATE TABLE SALES_PART1
    (TIME_ID    NUMBER,
    REGION_ID NUMBER,
    ORDER_ID NUMBER,
    ORDER_DATE DATE,
    SALES_QTY NUMBER(10,2),
    SALES_AMOUNT NUMBER(12,2))
    PARTITION BY RANGE (ORDER_DATE)
    INTERVAL (NUMTOYMINTERVAL(1,'month'))
    (PARTITION p_first VALUES LESS THAN (TO_DATE('01-01-2006','DD-MM-YYYY')));

    In Interval partitioning,

    - You can only specify one partitioning key column, and it must be of NUMBER or DATE type.
    - Interval partitioning is not supported for index-organized tables.
    - You cannot create a domain index on an interval-partitioned table.
    - At least one partition must be defined when the table is created.
    - A MAXVALUE partition cannot be defined for an interval-partitioned table.
    - NULL values are not allowed in the partition column.
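
    To see automatic partition creation in action, insert a row beyond the first partition's bound and check the dictionary (the system-generated partition name will vary):

    SQL> INSERT INTO sales_part1 (time_id, order_date) VALUES (1, TO_DATE('15-03-2006','DD-MM-YYYY'));
    SQL> SELECT partition_name, high_value FROM user_tab_partitions WHERE table_name = 'SALES_PART1';
    -- shows P_FIRST plus a new system-named partition such as SYS_P41 covering March 2006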

    Composite Partitioning
    Composite partitioning is a combination of the basic partitioning techniques of Range, List, Hash, and Interval partitioning. The table is initially partitioned by the first data distribution method and then each partition is sub-partitioned by the second data distribution method.
    The different types of composite partitioning are listed below.

    1.    Range-Hash (from 8i)
    2.    Range-List (from 9i)
    3.    Range-Range (from 11g)
    4.    List-Hash (from 11g)
    5.    List-List (from 11g)
    6.    List-Range (from 11g)
    7.    Interval-Range (from 11g)
    8.    Interval-List (from 11g)
    9.    Interval-Hash (from 11g)

    1)Range-Hash Partitioning
    This is basically a combination of range and hash partitions. The data is divided using the range partition and then each range partitioned data is further subdivided into a hash partition using hash key values. All sub partitions, together, represent a logical subset of the data.


    CREATE TABLE composite_rng_hash (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    amount_sold NUMBER(10,2),
    time_id     DATE)
    PARTITION BY RANGE(time_id)
    SUBPARTITION BY HASH(cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION sp1 TABLESPACE users,
    SUBPARTITION sp2 TABLESPACE users,
    SUBPARTITION sp3 TABLESPACE users,
    SUBPARTITION sp4 TABLESPACE users) (
    PARTITION sales_pre05
    VALUES LESS THAN (TO_DATE('01/01/2005','DD/MM/YYYY')),
    PARTITION sales_2005
    VALUES LESS THAN(TO_DATE('01/01/2006','DD/MM/YYYY')),
    PARTITION sales_2006
    VALUES LESS THAN(TO_DATE('01/01/2007','DD/MM/YYYY')),
    PARTITION sales_2007
    VALUES LESS THAN(TO_DATE('01/01/2008','DD/MM/YYYY')),
    PARTITION sales_2008
    VALUES LESS THAN(TO_DATE('01/01/2009','DD/MM/YYYY')),
    PARTITION sales_future
    VALUES LESS THAN(MAXVALUE));

    2)Range-List Partitioning
    This is a combination of Range and List partitions: first the data is divided using range partitioning and then each range partition is further subdivided into list subpartitions using list key values. Unlike composite Range-Hash partitioning, each subpartition individually represents a logical subset of the data.


    create table rang_list (
    sales_dt date,
    state char(2),
    amount number)
    partition by range (sales_dt)
    subpartition by list (state)
    (
    partition s2004q1
    values less than (to_date('04-2004','MM-YYYY'))
    (subpartition s2004q1_south values  ('TX','LA','OK'),
    subpartition s2004q1_north values ('NY','DE','MA'),
    subpartition s2004q1_others values (DEFAULT)),
    partition s2004q2
    values less than (to_date('07-2004','MM-YYYY'))
    (subpartition s2004q2_south values  ('TX','LA','OK'),
    subpartition s2004q2_north values ('NY','DE','MA'),
    subpartition s2004q2_others values (DEFAULT))
    );

    3)Range-Range Partitioning
    Composite range-range partitioning enables logical range partitioning along two dimensions.
    e.g. partition by order_date and range subpartition by shipping_date.


    CREATE TABLE composite_rng_rng (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    cust_state  VARCHAR2(2),
    time_id     DATE)
    PARTITION BY RANGE(time_id)
    SUBPARTITION BY RANGE (cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION original VALUES LESS THAN (1001) TABLESPACE users,
    SUBPARTITION acquired VALUES LESS THAN (8001) TABLESPACE users,
    SUBPARTITION recent VALUES LESS THAN (MAXVALUE) TABLESPACE users) (
    PARTITION per1 VALUES LESS THAN (TO_DATE('01/01/2000','DD/MM/YYYY')),
    PARTITION per2 VALUES LESS THAN (TO_DATE('01/01/2005','DD/MM/YYYY')),
    PARTITION per3 VALUES LESS THAN (TO_DATE('01/01/2010','DD/MM/YYYY')),
    PARTITION future VALUES LESS THAN (MAXVALUE));

    Check for partitions
    SELECT table_name, partition_name, composite, high_value, subpartition_name, num_rows  FROM user_tab_partitions where table_name like '%COMPOSITE%';

    4)List-Range Partitioning
    Composite list-range partitioning enables logical range subpartitioning within a given list partitioning strategy.

    CREATE TABLE composite_list_rng (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    cust_state  VARCHAR2(2),
    time_id     DATE)
    PARTITION BY LIST(cust_state)
    SUBPARTITION BY RANGE (cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION original VALUES LESS THAN (1001) TABLESPACE users,
    SUBPARTITION acquired VALUES LESS THAN (8001) TABLESPACE users,
    SUBPARTITION recent VALUES LESS THAN (MAXVALUE) TABLESPACE users) (
    PARTITION west VALUES ('OR', 'WA'),
    PARTITION east VALUES ('NY', 'CT'),
    PARTITION cent VALUES ('IL', 'MN'));


    5)List-Hash Partitioning
    Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object.
    e.g. to enable partition-wise joins.


    CREATE TABLE composite_lis_has (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    cust_state  VARCHAR2(2),
    time_id     DATE)
    PARTITION BY LIST(cust_state)
    SUBPARTITION BY HASH (cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION sp1 TABLESPACE users,
    SUBPARTITION sp2 TABLESPACE users,
    SUBPARTITION sp3 TABLESPACE users,
    SUBPARTITION sp4 TABLESPACE users) (
    PARTITION west VALUES ('OR', 'WA'),
    PARTITION east VALUES ('NY', 'CT'),
    PARTITION cent VALUES ('IL', 'MN'));



    6)List-List Partitioning
    Composite list-list partitioning enables logical list partitioning along two dimensions.
    e.g. list partition by country_id and list subpartition by sales_channel.

    CREATE TABLE composite_list_list (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    cust_state  VARCHAR2(2),
    time_id     DATE)
    PARTITION BY LIST(cust_state)
    SUBPARTITION BY LIST (cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION beg VALUES (1,3,5) TABLESPACE users,
    SUBPARTITION mid VALUES (2,4,6) TABLESPACE users,
    SUBPARTITION end VALUES (7,8,9,0) TABLESPACE users) (
    PARTITION west VALUES ('OR', 'WA'),
    PARTITION east VALUES ('NY', 'CT'),
    PARTITION cent VALUES ('IL', 'MN'));

    7)Interval-Range Partitioning
    This is a combination of Interval and Range partitions: first the data is divided using interval partitioning and then each interval partition is further subdivided into range subpartitions.

    8)Interval-List Partitioning
    This is a combination of Interval and List partitions: first the data is divided using interval partitioning and then each interval partition is further subdivided into list subpartitions.

    9)Interval-Hash Partitioning
    This is a combination of Interval and Hash partitions: first the data is divided using interval partitioning and then each interval partition is further subdivided into hash subpartitions. (Interval is only available as the top-level strategy; subpartitioning by interval is not supported.)
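
    A minimal interval-range sketch, written in the same style as the earlier composite examples (table and subpartition names are illustrative):

    CREATE TABLE composite_int_rng (
    cust_id     NUMBER(10),
    cust_name   VARCHAR2(25),
    time_id     DATE)
    PARTITION BY RANGE(time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
    SUBPARTITION BY RANGE (cust_id)
    SUBPARTITION TEMPLATE(
    SUBPARTITION original VALUES LESS THAN (1001) TABLESPACE users,
    SUBPARTITION recent VALUES LESS THAN (MAXVALUE) TABLESPACE users)
    (PARTITION p_first VALUES LESS THAN (TO_DATE('01/01/2010','DD/MM/YYYY')));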

    Related Base Tables
    tabpart$
    tabsubpart$
    indpart$
    indsubpart$
    partobj$
    indpart_param$
    partlob$
    subpartcol$

    Related Views
    dba_part_tables
    dba_tab_partitions
    dba_tab_subpartitions
    dba_tab_cols
    dba_part_indexes
    dba_ind_partitions
    dba_ind_subpartitions
    dba_part_lobs
    dba_lob_partitions
    dba_lob_subpartitions
    dba_subpartition_templates
    dba_subpart_key_columns



    ##################################################################################################################
    ##################################################################################################################

    1)Converting Existing Non Partitioned table to Partitioned table

    You can convert a non-partitioned table to a partitioned table in one of four ways:

    A) Export/import method
    B) Insert with a subquery method
    C) Partition exchange method
    D) DBMS_REDEFINITION

    Any of these four methods will create a partitioned table from an existing non-partitioned table.

    A. Export/import method

    1) Export table:

    exp file=emp.dmp log=emp.log  tables=emp

    2) Drop the table:

    drop table emp;

    3) Recreate the table with partitions:

    create table emp (sno number(3), sname varchar2(15)) partition by range (sno) (partition p1 values less than (501),partition p2 values less than (maxvalue));

    4) Import the table with ignore=y:

    imp  file=emp.dmp ignore=y

    The ignore=y causes the import to skip the table creation step and continue loading all rows.

    Check for partitioned table
    select * from emp partition(P1);
    select * from emp partition(P2);

    B) Insert with a subquery method 

    1) Create a partitioned table:

    create table partbl (qty number(3),
    name varchar2(15))
    partition by range (qty)
    (partition p1 values less than (501),
    partition p2 values less than (maxvalue));

    2) Insert into the partitioned table with a subquery from the existing  non-partitioned table:

    insert into partbl (qty, name)  select * from origtbl;

    3) If you want the partitioned table to have the same name as the original table, then drop the original table and rename the new table:

    drop table origtbl;
    alter table partbl rename to origtbl;

    C) Partition exchange method

    ALTER TABLE EXCHANGE PARTITION can be used to convert a partition (or subpartition) into a non-partitioned table and a non-partitioned table into a partition (or subpartition) of a partitioned table by exchanging their data and index segments.

    1) Create a dummy table with CREATE TABLE ... AS SELECT, containing the rows for the required partition

    2) Alter table EXCHANGE partition partition_name with table non-partition_table;

    Example:-
    SQL> CREATE TABLE p_emp (sal NUMBER(7,2))
     PARTITION BY RANGE(sal)
    (partition emp_p1 VALUES LESS THAN (2000),
     partition emp_p2 VALUES LESS THAN (4000));
    Table created.

    SQL> SELECT * FROM emp;
    EMPNO ENAME JOB MGR HIREDATE SAL
    --------- ---------- --------- --------- --------- ---------
    7369 SMITH CLERK 7902 17-DEC-80 800
    7499 ALLEN SALESMAN 7698 20-FEB-81 1600
    7521 WARD SALESMAN 7698 22-FEB-81 1250
    7566 JONES MANAGER 7839 02-APR-81 2975
    7654 MARTIN SALESMAN 7698 28-SEP-81 1250
    7698 BLAKE MANAGER 7839 01-MAY-81 2850
    7782 CLARK MANAGER 7839 09-JUN-81 2450
    7788 SCOTT ANALYST 7566 19-APR-87 3000
    7839 KING PRESIDENT 17-NOV-81 5000
    7844 TURNER SALESMAN 7698 08-SEP-81 1500
    7876 ADAMS CLERK 7788 23-MAY-87 1100
    7900 JAMES CLERK 7698 03-DEC-81 950
    7902 FORD ANALYST 7566 03-DEC-81 3000
    7934 MILLER CLERK 7782 23-JAN-82 1300
    14 rows selected.

    SQL> CREATE TABLE dummy_y as SELECT sal FROM emp WHERE sal<2000;
    Table created.

    SQL> CREATE TABLE dummy_z as SELECT sal FROM emp WHERE sal BETWEEN 2000 AND 3999;
    Table created.

    SQL> alter table p_emp exchange partition emp_p1 with table dummy_y;
    Table altered.

    SQL> alter table p_emp exchange partition emp_p2 with table dummy_z;
    Table altered.

    Check for partitions:

    Select * from p_emp partition(emp_p2);
    select * from p_emp partition(emp_p1);

    Here, first we create a partitioned table. Then we create dummy tables containing the selected range values. Then we exchange the empty partitions of the partitioned table with the populated dummy tables.

    D. Using DBMS_REDEFINITION Method

    Some restrictions of Online Redefinition:
    - The table cannot belong to the SYS or SYSTEM schema.
    - The table to be redefined online should not be using user-defined data types.
    - It should not be a clustered table.
    - It should not have MV logs or MVs defined on it.
    - You cannot move a table from one schema to another using the online redefinition feature.


    Step-by-step instructions on how to convert an unpartitioned table to a partitioned one using the dbms_redefinition package:


    1) Create unpartitioned table with the name unpar_table

    SQL> CREATE TABLE unpar_table (
    id NUMBER(10),
    create_date DATE,
    name VARCHAR2(100)
    );

    2) Apply some constraints to the table:

    SQL> ALTER TABLE unpar_table ADD (
    CONSTRAINT unpar_table_pk PRIMARY KEY (id)
    );

    SQL> CREATE INDEX create_date_ind ON unpar_table(create_date);

    3) Gather statistics on the table:

    SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'unpar_table', cascade => TRUE);

    4) Create a Partitioned Interim Table:

    SQL> CREATE TABLE par_table (
    id NUMBER(10),
    create_date DATE,
    name VARCHAR2(100)
    )
    PARTITION BY RANGE (create_date)
    (PARTITION unpar_table_2005 VALUES LESS THAN (TO_DATE('01/01/2005', 'DD/MM/YYYY')),
    PARTITION unpar_table_2006 VALUES LESS THAN (TO_DATE('01/01/2006', 'DD/MM/YYYY')),
    PARTITION unpar_table_2007 VALUES LESS THAN (MAXVALUE));

    5) Start the Redefinition Process:

    a) Check the redefinition is possible using the following command:

    SQL> EXEC Dbms_Redefinition.can_redef_table(USER, 'unpar_table'); 

    b)If no errors are reported, start the redefintion using the following command:

    SQL> BEGIN
    DBMS_REDEFINITION.start_redef_table(
    uname => USER,
    orig_table => 'unpar_table',
    int_table => 'par_table');
    END;
    /

    Note: This operation can take quite some time to complete.

    c) Optionally synchronize the interim table with the original table before index creation:

    SQL> BEGIN
    dbms_redefinition.sync_interim_table(
    uname => USER,
    orig_table => 'unpar_table',
    int_table => 'par_table');
    END;
    /

    d) Create Constraints and Indexes:

    ALTER TABLE par_table ADD (CONSTRAINT unpar_table_pk2 PRIMARY KEY (id) );
    CREATE INDEX create_date_ind2 ON par_table(create_date); 

    e) Gather statistics on the new table:

    SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'par_table', cascade => TRUE); 

    f) Complete the Redefintion Process:

    SQL> BEGIN
    dbms_redefinition.finish_redef_table(
    uname => USER,
    orig_table => 'unpar_table',
    int_table => 'par_table');
    END;
    /
    At this point the interim table has become the "real" table and their names have been switched in the data dictionary.

    g) Remove original table which now has the name of the interim table:

    SQL> DROP TABLE par_table; 

    h)Rename all the constraints and indexes to match the original names.

    ALTER TABLE unpar_table RENAME CONSTRAINT unpar_table_pk2 TO unpar_table_pk;
    ALTER INDEX create_date_ind2 RENAME TO create_date_ind; 

    i) Check whether partitioning is successful or not:

    SQL> SELECT partitioned  FROM user_tables  WHERE table_name = 'UNPAR_TABLE';

    PAR
    ---
    YES

    1 row selected.

    SQL> SELECT partition_name  FROM user_tab_partitions  WHERE table_name = 'UNPAR_TABLE';

    PARTITION_NAME
    ------------------------------
    UNPAR_TABLE_2005
    UNPAR_TABLE_2006
    UNPAR_TABLE_2007

    ###########################################################################

    Second method using DBMS_REDEFINITION (verified working)

    In this section we need to execute all the DBMS package calls as the SYS user.

    Step 1
    Check to make sure that the table can use the online redefinition feature.

    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT','EMP');
    PL/SQL procedure successfully completed.

    Step 2
    Create the interim partitioned table with the same structure (columns) as the original table.

    SQL> Create table EMP_part
    2 (EMPNO NUMBER(4),
    3 ENAME VARCHAR2(10),
    4 JOB VARCHAR2(9),
    5 MGR NUMBER(4),
    6 HIREDATE DATE,
    7 SAL NUMBER(7,2),
    8 COMM NUMBER(7,2),
    9 DEPTNO NUMBER(2))
    10 partition by range (SAL)
    11 (Partition p1 values less than (1000),
    12 Partition p2 values less than (2000),
    13 Partition p3 values less than (3000),
    14 Partition p4 values less than (4000),
    15 Partition max values less than (maxvalue))
    16 tablespace users;

    Table created.

    Step 3
    Start the online redefinition process.

    SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT','EMP','EMP_PART');

    PL/SQL procedure successfully completed.

    Step 4
    Here is where the Oracle 10g feature comes into play. We do not need to manually copy any dependent objects to the emp_part table; dependent objects include grants, synonyms, triggers etc.

    SQL> VARIABLE NUM_ERRORS NUMBER;
    SQL> EXEC DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT','EMP','EMP_PART', 1,TRUE,TRUE,TRUE,FALSE,
    :NUM_ERRORS,FALSE);

    PL/SQL procedure successfully completed.

    SQL> PRINT NUM_ERRORS

    NUM_ERRORS
    ----------
    0

    Step 5
    Resync the table

    SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT','EMP','EMP_PART');

    PL/SQL procedure successfully completed.

    Step 6
    Complete the online redefinition

    SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT','EMP','EMP_PART');

    PL/SQL procedure successfully completed.



    Step 7
    Make sure the emp table now has all the partitions we created in the emp_part table.

    SQL> Select partition_name, high_value from user_tab_partitions where table_name='EMP';

    PARTITION_NAME HIGH_VALUE
    ------------------------------ ---------------------------------------------------------------------
    MAX MAXVALUE
    P1 1000
    P2 2000
    P3 3000
    P4 4000

    SQL> select partition_name, high_value from user_tab_partitions where table_name='EMP_PART';

    no rows selected

    Step 8
    Make sure all the dependent objects are copied .

    SQL> SELECT TRIGGER_NAME FROM USER_TRIGGERS WHERE TABLE_NAME='EMP';

    TRIGGER_NAME
    ------------------------------
    EMPTRIG

    SQL> select constraint_name from user_constraints where table_name='EMP';

    CONSTRAINT_NAME
    ------------------------------
    PK_EMP
    FK_DEPTNO


    Note : The only problem I see here is that if we have any index on the original table, it will be converted to a global index on the partitioned table. If we need the index to be a local index, we have to drop and recreate it.


    ############################################################################################################################################################


    2)Converting Existing Partitioned table to Non Partitioned table

    A)Export Import Method

    1)create a partitioned table and insert some rows.
    2)check whether table is partitioned or not,
    3)Export table using exp or expdp
    4)Drop the table
    5)Create table with same columns as in partitioned table
    6)Now import table using ignore=y option.
    7) check whether table is unpartitioned or not

    During the import, there are three options: NONE, DEPARTITION and MERGE.
    NONE means to import the structure exactly as it was in the source database - therefore partitioned, if it was so.
    DEPARTITION means to create a separate table for every partition and sub-partition.
    MERGE means to combine all partitions and subpartitions into one non-partitioned table.
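
    For example, a hypothetical Data Pump import that collapses a partitioned table into a single non-partitioned table would look like:

    $ impdp scott/tiger dumpfile=emp_part.dmp tables=emp_part partition_options=merge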


    B) INSERT WITH A SUBQUERY METHOD

    1)create a partitioned table
    2)insert  some rows into partitioned table
    3)create non partitioned table from partitioned table(Using CTAS )
    4)drop partitioned table
    5)rename the newly created non-partitioned table
    6)check that the table is no longer partitioned.
    SQL> create table honcho_partition (id number, name varchar2(20),time date)
      2  partition by range (time)
      3  (partition t1 values less than (to_date('01-01-2012','dd-mm-yyyy')),
      4   partition t2 values less than (to_date('01-02-2012','dd-mm-yyyy')),
      5   partition t3 values less than (to_date('01-03-2012','dd-mm-yyyy')),
      6   partition t4 values less than (maxvalue));
     
    SQL> insert into honcho_partition values (1,'honcho',to_date('01-12-2011','dd-mm-yyyy'));
    Insert some more rows 

    select * from honcho_partition;

     1 honcho               2012-03-15 00:00:00

    SQL> create table honcho_test as select * from honcho_partition;

    Table created.

    SQL> select table_name,partition_name from user_tab_partitions;

    TABLE_NAME                     PARTITION_NAME
    ------------------------------ ------------------------------
    HONCHO_PARTITION               T1
    HONCHO_PARTITION               T2
    HONCHO_PARTITION               T3
    HONCHO_PARTITION               T4

    SQL> drop table honcho_partition purge;

    Table dropped.

    SQL> alter table honcho_test rename to honcho_partition;

    Table altered.


    SQL> select * from honcho_partition;

            ID NAME                 TIME
    ---------- -------------------- -------------------
             1 honcho               2011-12-01 00:00:00
             1 honcho               2011-12-15 00:00:00
             1 honcho               2012-01-01 00:00:00
             1 honcho               2012-01-15 00:00:00
             1 honcho               2012-02-01 00:00:00
             1 honcho               2012-02-15 00:00:00
             1 honcho               2012-03-01 00:00:00
             1 honcho               2012-03-15 00:00:00
             1 honcho               2012-04-01 00:00:00

    9 rows selected.

    SQL> select table_name,partition_name from user_tab_partitions;

    no rows selected


    C. USING REDEFINITION METHOD

    With DBMS_REDEFINITION, you can perform an online redefinition of tables. To achieve this online redefinition, incrementally maintainable local materialized views are used. Snapshot logs need to be defined on the master tables to support incrementally maintainable materialized views. These logs keep track of the changes to the master tables and are used by the materialized views during refresh synchronization. 

    1. Create an interim table with the same structure as the source table, WITHOUT partitions.

    2. Check the redefinition is possible using the following command on source table

    EXEC Dbms_Redefinition.Can_Redef_Table(USER, 'SOURCE_TABLE');

    3. If no errors are reported, proceed with the redefinition using the following command.

    BEGIN
    DBMS_REDEFINITION.start_redef_table(
    uname => USER,
    orig_table => 'SOURCE_TABLE',
    int_table => 'INTERIM_TABLE');
    END;
    /

    4. Synchronize the interim table with the source data before index creation

    BEGIN
    dbms_redefinition.sync_interim_table(
    uname => USER,
    orig_table => 'SOURCE_TABLE',
    int_table => 'INTERIM_TABLE');
    END;
    /

    5. Create constraints and indexes. The constraints and indexes from the original table must be applied to the interim table using alternate names to prevent errors.

    6. Complete the redefinition process

    BEGIN
    dbms_redefinition.finish_redef_table(
    uname => USER,
    orig_table => 'SOURCE_TABLE',
    int_table => 'INTERIM_TABLE');
    END;
    /

    Now the interim table has become the real table and their names have been switched in the data dictionary. Now perform some cleanup operations.

    7. Remove original table which now has the name of the interim table

    DROP TABLE INTERIM_TABLE;

    - Rename all the constraints and indexes to their original names
    - Re-create the grants that existed on the original table


    ##########################################################################################################################################################################

    Global and Local Indexes:

    Index:- Oracle indexes provide faster access to table rows by storing sorted values of specific columns and using those sorted values to easily look up the associated table rows. This means that you can look up data without having to inspect more than a small fraction of the total rows within the table.

    When using Oracle partitioning, you can specify the GLOBAL or LOCAL keyword in the CREATE INDEX syntax.

    Local Index: Local partitioned indexes are easier to manage than other types of partitioned indexes. The reason for this is equipartitioning: each partition of a local index is associated with exactly one partition of the table. This enables Oracle to automatically keep the index partitions in sync with the table partitions, and makes each table-index pair independent. Any actions that make one partition's data invalid or unavailable only affect a single partition.

    Local partitioned indexes support more availability when there are partition or subpartition maintenance operations on the table.

    We cannot explicitly add a partition to a local index. New partitions are added to local indexes only when you add a partition to the underlying table. Likewise, you cannot explicitly drop a partition from a local index. Instead, local index partitions are dropped only when you drop a partition from the underlying table.

    A local index can be unique. However, in order for a local index to be unique, the partitioning key of the table must be part of the index's key columns.

    Main advantage of local indexes is we can take individual partitions of a table and indexes offline for maintenance (or reorganization) without affecting the other partitions and indexes in the table.

    Example to create Local Index:

    1)Create a partitioned table
    CREATE TABLE tabind1
    (invoice_no    NUMBER NOT NULL,
     invoice_date  DATE   NOT NULL,
     comments      VARCHAR2(500))
    PARTITION BY HASH (invoice_no)
    (PARTITION invoices_q1 TABLESPACE users,
     PARTITION invoices_q2 TABLESPACE users,
     PARTITION invoices_q3 TABLESPACE users,
     PARTITION invoices_q4 TABLESPACE users);


    2)Create a local index for above table.
    Because the index is local, it gets one partition for each partition of the table; we can optionally name each index partition, as below.

    CREATE INDEX tabind_ind1
    on tabind1 (invoice_no)
    LOCAL
    (PARTITION invoices1 tablespace users,
    PARTITION invoices2 tablespace users,
    PARTITION invoices3 tablespace users,
    partition invoices4 tablespace users);

    Global Index:-

    Global indexes are of two types

    i)Non partitioned Global index and    ii)Partitioned Global Index

    i)Non partitioned Global index

    Global nonpartitioned indexes behave just like a nonpartitioned index on a nonpartitioned table. They are commonly used in OLTP environments and offer efficient access to any individual record.
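
    A minimal example (index name is illustrative): a plain CREATE INDEX with no LOCAL clause or partition definitions on the partitioned tabind1 table created earlier produces a global non-partitioned index:

    CREATE INDEX tabind_gbl1 ON tabind1 (invoice_date);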




    ii)Partitioned Global Index

    There are two types of global partitioned index: range partitioned and hash partitioned.

    The partition key of a global partitioned index is independent of the table's partition key. The highest partition of a global index must have a partition bound all of whose values are MAXVALUE; if you want to add a new partition, you always need to split the MAX partition. If a global index partition is empty, you can explicitly drop it by issuing the ALTER INDEX DROP PARTITION statement. If a global index partition contains data, dropping the partition causes the next highest partition to be marked unusable. You cannot drop the highest partition in a global index.


    Note:- A global partitioned index is typically used for index columns other than the table's partitioning key.

    Example for Global partitioned indexes:-
    CREATE TABLE  invoices55
    (invoice_no    NUMBER  NOT  NULL,
     invoice_date  DATE   NOT  NULL,
     comments      VARCHAR2(500),
    invoice_id NUMBER NOT NULL)
    PARTITION BY RANGE (invoice_date)
    (PARTITION invoices_q1 VALUES LESS THAN (TO_DATE('01/04/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q2 VALUES LESS THAN (TO_DATE('01/07/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q3 VALUES LESS THAN (TO_DATE('01/09/2001', 'DD/MM/YYYY')) TABLESPACE users,
     PARTITION invoices_q4 VALUES LESS THAN (TO_DATE('01/01/2002', 'DD/MM/YYYY')) TABLESPACE users);
    Create Partitioned Index for Partitioned table
    CREATE INDEX invoices_idx ON invoices55(invoice_id)
    GLOBAL PARTITION BY RANGE(invoice_id)
    (PARTITION p1 VALUES LESS THAN(3),
    PARTITION p2 VALUES LESS THAN(6),
    PARTITION p3 VALUES LESS THAN(11),
    PARTITION p4 VALUES LESS THAN(20),
    PARTITION p5 VALUES LESS THAN(MAXVALUE));


    Here we see that the invoices_idx index has been defined with five partitions, each containing a subset of the index range values. Note that it is irrelevant that the base table has four partitions. In fact, it is acceptable to create a global partitioned index on a table that is not partitioned at all.

    To get information about INDEXES:

    select INDEX_NAME,PARTITION_NAME,STATUS,GLOBAL_STATS,tablespace_name from DBA_IND_PARTITIONS where index_name like 'TABIND%';


    Partition Pruning:-
    Partition pruning is the skipping of unnecessary index and data partitions or subpartitions by a query. The Oracle database server explicitly recognizes partitions and subpartitions. It then optimizes SQL statements to mark the partitions or subpartitions that need to be accessed and eliminates (prunes) unnecessary partitions or subpartitions from access by those SQL statements.

    For each SQL statement, depending on the selection criteria specified, unneeded partitions or subpartitions can be eliminated. For example, if a query only involves March sales data, then there is no need to retrieve data for the remaining eleven months. Such intelligent pruning can dramatically reduce the data volume, resulting in substantial improvements in query performance.

    If the optimizer determines that the selection criteria used for pruning are satisfied by all the rows in the accessed partition or subpartition, it removes those criteria from the predicate list (WHERE clause) during evaluation in order to improve performance. However, the optimizer cannot prune partitions if the SQL statement applies a function to the partitioning column (with the exception of the TO_DATE function). Similarly, the optimizer cannot use an index if the SQL statement applies a function to the indexed column, unless it is a function-based index.

    Pruning can eliminate index partitions even when the underlying table's partitions cannot be eliminated, but only when the index and table are partitioned on different columns. You can often improve the performance of operations on large tables by creating partitioned indexes that reduce the amount of data that your SQL statements need to access or modify.

    Partition Pruning Example

    We have a partitioned table called cust_orders. The partition key for cust_orders is order_date. Let us assume that cust_orders has six months of data, January to June, with a partition for each month of data. If the following query is run:

    SELECT SUM(value)
    FROM cust_orders
    WHERE order_date BETWEEN '28-MAR-98' AND '23-APR-98';

    Partition pruning is achieved by:
    - First, partition elimination of the January, February, May, and June data partitions.
    - Then, either an index scan of the March and April data partitions (if index selectivity is high) or a full scan of those partitions (if index selectivity is low).
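
    Whether pruning occurred can be confirmed from the execution plan - the Pstart/Pstop columns show which partitions are touched. A sketch against the cust_orders table described above:

    EXPLAIN PLAN FOR
    SELECT SUM(value) FROM cust_orders
    WHERE order_date BETWEEN TO_DATE('28-MAR-1998','DD-MON-YYYY')
                         AND TO_DATE('23-APR-1998','DD-MON-YYYY');

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- look for a PARTITION RANGE ITERATOR step whose Pstart/Pstop columns cover only the March and April partitions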


    References:-

    http://learnappsdbaskills.blogspot.com/2013/07/blog-post.html