An index on a partitioned table can be one of three types:
1)Global non-partitioned index.
2)Global partitioned index.
3)Local partitioned index.
The examples below illustrate each of these three index types on a partitioned table.
1)Create a partitioned table.
CREATE TABLE test_partition
(
id number,
created_date date,
col3 varchar2(20)
)
PARTITION BY RANGE (created_date)
(
PARTITION part1 VALUES LESS THAN (to_date('11-oct-2008','dd-mon-yyyy')) ,
PARTITION part2 VALUES LESS THAN (to_date('12-oct-2008','dd-mon-yyyy')) ,
PARTITION part3 VALUES LESS THAN (to_date('13-oct-2008','dd-mon-yyyy')) ,
PARTITION part4 VALUES LESS THAN (to_date('14-oct-2008','dd-mon-yyyy')) ,
PARTITION junk VALUES LESS THAN (MAXVALUE)
)
/
Table created.
Global Non-Partitioned Index
This index spans the entire table: it is global, and the index itself is not partitioned.
SQL> create index tpid_i on test_partition(id);
Index created.
SQL> select index_name,partitioned, index_type from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME PAR INDEX_TYPE
------------------------------ --- ---------------------------
TPID_I NO NORMAL
Local Partitioned Index
A local index is equipartitioned with the table: one index partition per table partition. Drop the earlier index first, then recreate it with the LOCAL keyword.
SQL> drop index tpid_i;
Index dropped.
SQL> create index tpid_i on test_partition(id) local;
Index created.
SQL> select index_name,partitioned, index_type from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME PAR INDEX_TYPE
------------------------------ --- ---------------------------
TPID_I YES NORMAL
SQL> select partition_name, status from user_ind_partitions where
2 index_name = 'TPID_I';
PARTITION_NAME STATUS
------------------------------ --------
JUNK USABLE
PART1 USABLE
PART2 USABLE
PART3 USABLE
PART4 USABLE
Global Partitioned Index
A global partitioned index is partitioned independently of the table's partitioning scheme (Oracle requires it to be prefixed, i.e. partitioned on the leading column of the index key). Drop the previous index first, then:
SQL> drop index tpid_i;
Index dropped.
CREATE INDEX tpid_g ON test_partition(id)
GLOBAL PARTITION BY RANGE (id)
(PARTITION p1 VALUES LESS THAN (10),
PARTITION p2 VALUES LESS THAN (20),
PARTITION p3 VALUES LESS THAN (MAXVALUE));
Index created.
SQL> select index_name,partitioned, index_type from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME PAR INDEX_TYPE
------------------------------ --- ---------------------------
TPID_G YES NORMAL
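A convenient way to distinguish the types in one query is the user_part_indexes view, which lists only partitioned indexes and reports their LOCALITY (GLOBAL or LOCAL); a sketch against the demo table:

```sql
-- Partitioned indexes only: a global non-partitioned index will not appear here.
SELECT index_name, partitioning_type, locality
FROM   user_part_indexes
WHERE  table_name = 'TEST_PARTITION';
```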
February 28, 2009
How to avoid rebuilding indexes when DDL is performed on a partitioned table
If we create an ordinary index on a partitioned table, it is a global non-partitioned index. Whenever we perform partition-level DDL on the table, such as dropping, truncating, adding, merging, or splitting a partition, the associated global indexes become unusable, and after the DDL we need to rebuild them. This can be very cumbersome and may take several hours on a large table.
The following example demonstrates how a global index on a table becomes unusable when DDL is performed against the table.
1)Create a partitioned table.
CREATE TABLE test_partition
(
id number,
created_date date,
col3 varchar2(20)
)
PARTITION BY RANGE (created_date)
(
PARTITION part1 VALUES LESS THAN (to_date('11-oct-2008','dd-mon-yyyy')) ,
PARTITION part2 VALUES LESS THAN (to_date('12-oct-2008','dd-mon-yyyy')) ,
PARTITION part3 VALUES LESS THAN (to_date('13-oct-2008','dd-mon-yyyy')) ,
PARTITION part4 VALUES LESS THAN (to_date('14-oct-2008','dd-mon-yyyy')) ,
PARTITION junk VALUES LESS THAN (MAXVALUE)
)
/
Table created.
2)Insert data into the partitioned table.
insert into test_partition
select rownum, to_date('10-oct-2008','dd-mon-yyyy')+mod(rownum,7),
rownum
from dba_users;
41 rows created.
3)Create an index on the partitioned table.
SQL> create index tpid_i on test_partition(id);
Index created.
4)Verify that the data was partitioned correctly.
SQL> select count(*) from test_partition partition (part2);
COUNT(*)
----------
6
Let's see some sample data.
SQL> select * from test_partition partition (part2);
ID CREATED_D COL3
---------- --------- --------------------
1 11-OCT-08 1
8 11-OCT-08 8
15 11-OCT-08 15
22 11-OCT-08 22
29 11-OCT-08 29
36 11-OCT-08 36
6 rows selected.
5)Check the status of the index on the partitioned table.
Note that because this is a global non-partitioned index, it has no entries in the user_ind_partitions view; its status must be queried from user_indexes.
SQL> select partition_name, status from user_ind_partitions where
index_name = 'TPID_I';
no rows selected
SQL> select index_name, status from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME STATUS
------------------------------ --------
TPID_I VALID
6)Perform a DDL operation on the partitioned table.
SQL> alter table test_partition truncate partition part2;
Table truncated.
Check by,
SQL> select count(*) from test_partition partition (part2);
COUNT(*)
----------
0
7)Check the index status now.
SQL> select index_name, status from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME STATUS
------------------------------ --------
TPID_I UNUSABLE
Because this is a global index, partition-level DDL on the table marks it UNUSABLE. To make the index valid again, it must be rebuilt.
SQL> alter index tpid_i rebuild;
Index altered.
SQL> select index_name, status from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME STATUS
------------------------------ --------
TPID_I VALID
This rebuild can be very time consuming, which may be unacceptable on a production server.
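Alternatively, partition-level DDL can maintain global indexes in the same statement using the UPDATE GLOBAL INDEXES clause (available since Oracle 9i). The DDL itself runs longer, but the index stays USABLE and no separate rebuild is needed; a sketch against the demo table:

```sql
-- Truncate the partition while maintaining the global index tpid_i.
-- Slower than a plain TRUNCATE PARTITION, but the index remains
-- USABLE, so no ALTER INDEX ... REBUILD is required afterwards.
ALTER TABLE test_partition TRUNCATE PARTITION part2 UPDATE GLOBAL INDEXES;
```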
8)A better solution is to recreate the index as a local partitioned index.
SQL> drop index tpid_i;
Index dropped.
SQL> create index tpid_i on test_partition(id) local;
Index created.
Check status by,
SQL> select index_name, status from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME STATUS
------------------------------ --------
TPID_I N/A
Because the index is now locally partitioned, per-partition status must be queried from user_ind_partitions.
SQL> select partition_name, status from user_ind_partitions where
2 index_name = 'TPID_I';
PARTITION_NAME STATUS
------------------------------ --------
JUNK USABLE
PART1 USABLE
PART2 USABLE
PART3 USABLE
PART4 USABLE
SQL> select count(*) from test_partition partition (part3);
COUNT(*)
----------
6
9)Perform another DDL operation and test again.
SQL> alter table test_partition truncate partition part3;
Table truncated.
SQL> select index_name, status from user_indexes where table_name='TEST_PARTITION';
INDEX_NAME STATUS
------------------------------ --------
TPID_I N/A
SQL> select partition_name, status from user_ind_partitions where
2 index_name = 'TPID_I';
PARTITION_NAME STATUS
------------------------------ --------
JUNK USABLE
PART1 USABLE
PART2 USABLE
PART3 USABLE
PART4 USABLE
All index partitions remain in USABLE status after the truncate, so no rebuild is needed. This can save a great deal of time and effort and avoids interrupting service.
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Error Description:
I created a materialized view over a database link on a table residing in a remote database.
I used the ON COMMIT refresh option, and it failed with ORA-12054 as shown below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on commit
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
from PHONES@lnxdb where upper(IS_SOLD)='Y'
*
ERROR at line 7:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Cause of the Problem
The ON COMMIT clause of CREATE MATERIALIZED VIEW has several restrictions, including:
•The ON COMMIT clause is not supported for materialized views containing object types.
•The ON COMMIT clause is not supported for materialized views that select rows from remote tables.
Our materialized view falls under the second restriction, hence the error.
Solution of the Problem
The solution is to use the ON DEMAND refresh clause instead, creating the view as below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on demand
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
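Note that REFRESH FAST additionally requires a materialized view log on the master table. Since this view is defined WITH ROWID, a rowid-based log must exist on the remote master; a sketch, assuming the PHONES table as above:

```sql
-- On the remote database (lnxdb), as the owner of PHONES:
CREATE MATERIALIZED VIEW LOG ON phones WITH ROWID;

-- Then, on the local database, fast-refresh the view on demand:
EXEC DBMS_MVIEW.REFRESH('SCOTT.MV_PHONES', 'F');  -- 'F' requests a fast refresh
```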
Block or Accept Oracle access by IP Address
You may sometimes wish to filter database logons by IP address: allow connections only from a list of IP addresses, or ban a list of IP addresses from logging on as database users.
Oracle can achieve this quite easily.
The secret lies in the SQLNET.ORA file. On UNIX system this file resides in $ORACLE_HOME/network/admin directory along with tnsnames.ora and listener.ora.
To enable filtering by IP address, open the sqlnet.ora file in any editor and insert the following line:
tcp.validnode_checking = yes
This turns on hostname/IP checking for your listeners. After this, you can supply lists of nodes to allow or deny with tcp.invited_nodes and tcp.excluded_nodes, as follows:
tcp.invited_nodes = (hostname1, hostname2)
tcp.excluded_nodes = (192.168.100.101,192.168.100.160)
Note that if you only specify invited nodes with tcp.invited_nodes, all other nodes are excluded, so there is really no reason to specify both. The same is true for excluded nodes: with tcp.excluded_nodes = (192.168.100.101,192.168.100.160), the hosts 192.168.100.101 and 192.168.100.160 are denied database connections while all others are allowed.
Some rules for entering invited/excluded nodes:
1. You cannot use wild cards in your specifications.
2. You must put all invited nodes in one line; likewise for excluded nodes.
3. You should always enter localhost as an invited node.
Once you have set up your rules and enabled valid node checking, you must restart your listeners to reap the benefits.
To do so,
$lsnrctl stop
$lsnrctl start
A simple example: suppose your database server should only allow the hosts with IP addresses 192.168.100.2 and 192.168.100.3. Then your sqlnet.ora file looks like:
tcp.validnode_checking = yes
tcp.invited_nodes = (localhost,192.168.100.2,192.168.100.3)
ORA-01033: ORACLE initialization or shutdown in progress
Error Description
While connecting to database user get the error,
ORA-01033: ORACLE initialization or shutdown in progress
Cause of The Problem
A SYSDBA user issued the STARTUP command; while the database is starting up, normal users connecting to it receive ORA-01033.
The same error can appear while a SHUTDOWN command is taking a long time to complete.
Solution of The Problem
Scenario 01:
Depending on the situation, STARTUP may take a few minutes. As a normal user, all you can do is wait a few minutes and try again; you will succeed once the database reaches the open state.
Here is the result of waiting and retrying:
C:\Documents and Settings\Queen>sqlplus arju/a
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Nov 7 17:34:23 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Enter user-name: arju
Enter password:
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Enter user-name: arju
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
The third attempt succeeded.
Scenario 02:
If you have the SYSDBA privilege, connect as SYSDBA and check the status of the database.
SQL> conn / as sysdba
Connected.
SQL> select open_mode from v$database;
select open_mode from v$database
*
ERROR at line 1:
ORA-01507: database not mounted
SQL> select open_mode from v$database;
select open_mode from v$database
*
ERROR at line 1:
ORA-01507: database not mounted
SQL> select open_mode from v$database;
OPEN_MODE
----------
MOUNTED
SQL> select open_mode from v$database;
OPEN_MODE
----------
MOUNTED
SQL> select open_mode from v$database;
OPEN_MODE
----------
READ WRITE
As we can see, the database gradually became usable. Someone issued STARTUP, and it took some time to reach the READ WRITE state.
Scenario 03:
If a shutdown (for example SHUTDOWN NORMAL) is taking too long, issue,
SHUTDOWN ABORT;
and later normal startup,
STARTUP;
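The wait-and-retry of Scenario 01 can also be scripted. A rough sketch, assuming sqlplus is on the PATH and OS authentication ("/ as sysdba") is available on the server:

```shell
# Poll every 10 seconds until the database reports READ WRITE.
while :; do
  mode=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0
select open_mode from v$database;
exit
EOF
)
  case "$mode" in
    *"READ WRITE"*) echo "Database is open."; break ;;
    *)              echo "Not open yet; retrying..."; sleep 10 ;;
  esac
done
```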
How to convert a non-partitioned table to a partitioned table using online redefinition
We will partition the table OUT_CDR, which resides in the CR_2 schema, on the column CDATE using the RANGE partitioning technique. Nothing about the table other than the partitioning will change: all indexes, constraints, triggers, and privileges defined on the table will remain the same.
Step 01: Let's have a look at the table on which we will do partitioning.
SQL> set pagesize 200
SQL> set long 999999
SQL> set linesize 150
SQL> select dbms_metadata.get_ddl('TABLE','OUT_CDR','CR_2') from dual;
DBMS_METADATA.GET_DDL('TABLE','OUT_CDR','CR_2')
--------------------------------------------------------------------------------
CREATE TABLE "CR_2"."OUT_CDR"
( "ID" NUMBER(32,0) NOT NULL ENABLE,
"CDATE" DATE NOT NULL ENABLE,
"DDATE" DATE NOT NULL ENABLE,
"ACCTSESSIONID" VARCHAR2(100),
"CALLINGNO" VARCHAR2(100),
"CALLEDNO" VARCHAR2(100) NOT NULL ENABLE,
"AREACODE" VARCHAR2(100),
"PREFIX" VARCHAR2(100),
"SESSIONTIME" NUMBER(32,0),
"BILLABLETIME" NUMBER(32,0),
"RATE" NUMBER(32,4),
"CALL_COST" NUMBER(32,4),
"CURRENTBILL" NUMBER(32,4),
"DISCONNECTCAUSE" VARCHAR2(50),
"SOURCEIP" VARCHAR2(100),
"DESTIP" VARCHAR2(100),
"BILLABLE" NUMBER(32,0) NOT NULL ENABLE,
"LESS" NUMBER(32,0) NOT NULL ENABLE,
"ACCID" NUMBER(32,0),
"IN_DDATE" DATE,
"IN_PREFIX" VARCHAR2(100),
"IN_SESSIONTIME" NUMBER(32,0),
"IN_BILLABLETIME" NUMBER(32,0),
"IN_RATE" NUMBER(32,4),
"IN_CALL_COST" NUMBER(32,4),
"IN_MONEYLEFT" NUMBER(32,4),
"IN_DISCONNECTCAUSE" VARCHAR2(50),
"IN_BILLABLE" NUMBER(32,0),
"IN_LESS" NUMBER(32,0),
"SWITCH_ID" NUMBER(32,0) NOT NULL ENABLE,
"USER_ID" NUMBER(32,0) NOT NULL ENABLE,
"IN_USER_ID" NUMBER(32,0),
"PROCESSED" NUMBER(1,0),
CONSTRAINT "OUT_CDR_PK" PRIMARY KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 168820736 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE,
CONSTRAINT "OUT_CDR_UQ" UNIQUE ("CDATE", "CALLEDNO", "USER_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 522190848 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE,
CONSTRAINT "OUT_CDR_UQ_2" UNIQUE ("DDATE", "CALLEDNO", "USER_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 521142272 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 2013265920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "OUT_CDR_NEW_SPC"
Step 02: Let's determine whether the table OUT_CDR can be redefined online (CAN_REDEF_TABLE raises an error if it cannot).
SQL> exec dbms_redefinition.can_redef_table('CR_2', 'OUT_CDR');
PL/SQL procedure successfully completed.
Step 03: Create an interim table with the same structure as the original table, but without the constraints, indexes, and triggers, and with the partitioning clause added.
I named the interim table OUT_CDR_; we will drop it later.
SQL> CREATE TABLE "CR_2"."OUT_CDR_"
2 ( "ID" NUMBER(32,0),
3 "CDATE" DATE ,
4 "DDATE" DATE ,
5 "ACCTSESSIONID" VARCHAR2(100),
6 "CALLINGNO" VARCHAR2(100),
7 "CALLEDNO" VARCHAR2(100) ,
8 "AREACODE" VARCHAR2(100),
9 "PREFIX" VARCHAR2(100),
10 "SESSIONTIME" NUMBER(32,0),
11 "BILLABLETIME" NUMBER(32,0),
12 "RATE" NUMBER(32,4),
13 "CALL_COST" NUMBER(32,4),
14 "CURRENTBILL" NUMBER(32,4),
15 "DISCONNECTCAUSE" VARCHAR2(50),
16 "SOURCEIP" VARCHAR2(100),
17 "DESTIP" VARCHAR2(100),
18 "BILLABLE" NUMBER(32,0) ,
19 "LESS" NUMBER(32,0) ,
20 "ACCID" NUMBER(32,0),
21 "IN_DDATE" DATE,
22 "IN_PREFIX" VARCHAR2(100),
23 "IN_SESSIONTIME" NUMBER(32,0),
24 "IN_BILLABLETIME" NUMBER(32,0),
25 "IN_RATE" NUMBER(32,4),
26 "IN_CALL_COST" NUMBER(32,4),
27 "IN_MONEYLEFT" NUMBER(32,4),
28 "IN_DISCONNECTCAUSE" VARCHAR2(50),
29 "IN_BILLABLE" NUMBER(32,0),
30 "IN_LESS" NUMBER(32,0),
31 "SWITCH_ID" NUMBER(32,0) ,
32 "USER_ID" NUMBER(32,0) ,
33 "IN_USER_ID" NUMBER(32,0),
34 "PROCESSED" NUMBER(1,0)
35 ) TABLESPACE "OUT_CDR_NEW_SPC"
36 Partition by range(cdate)
37 (
38 partition P08152008 values less than (to_date('15-AUG-2008','DD-MON-YYYY')),
39 partition P09012008 values less than (to_date('01-SEP-2008','DD-MON-YYYY')),
40 partition P09152008 values less than (to_date('15-SEP-2008','DD-MON-YYYY')),
41 partition P10012008 values less than (to_date('01-OCT-2008','DD-MON-YYYY')),
42 partition P10152008 values less than (to_date('15-OCT-2008','DD-MON-YYYY')),
43 partition P11012008 values less than (to_date('01-NOV-2008','DD-MON-YYYY')),
44 partition P11152008 values less than (to_date('15-NOV-2008','DD-MON-YYYY')),
45 partition P12012008 values less than (to_date('01-DEC-2008','DD-MON-YYYY')),
46 partition P12152008 values less than (to_date('15-DEC-2008','DD-MON-YYYY')),
47 partition P01012009 values less than (to_date('01-JAN-2009','DD-MON-YYYY')),
48 partition P01152009 values less than (to_date('15-JAN-2009','DD-MON-YYYY')),
49 partition P02012009 values less than (to_date('01-FEB-2009','DD-MON-YYYY')),
50 partition PMAX values less than (maxvalue));
Table created.
Step 04: Initiate the redefinition process by calling the dbms_redefinition.start_redef_table procedure.
SQL> exec dbms_redefinition.start_redef_table('CR_2', 'OUT_CDR', 'OUT_CDR_');
PL/SQL procedure successfully completed.
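Optionally, for a large and busy table, the interim table can be synchronized periodically between START_REDEF_TABLE and FINISH_REDEF_TABLE; this reduces the amount of change FINISH_REDEF_TABLE must apply while it briefly locks the table. A sketch:

```sql
-- Optional: fold changes made to OUT_CDR since START_REDEF_TABLE
-- into the interim table, so the final switch-over stays short.
EXEC dbms_redefinition.sync_interim_table('CR_2', 'OUT_CDR', 'OUT_CDR_');
```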
Step 05: Copy the dependent objects of the original table onto the interim table. The COPY_TABLE_DEPENDENTS procedure clones the dependent objects of the table being redefined (grants, triggers, constraints, and privileges) onto the interim table and registers them; dependent objects that are already registered are not cloned again. The interim table in fact represents the post-redefinition table.
SQL> declare
2 error_count pls_integer := 0;
3 BEGIN
4 dbms_redefinition.copy_table_dependents('CR_2', 'OUT_CDR', 'OUT_CDR_',1, true, true, true, false,error_count);
5 dbms_output.put_line('errors := ' || to_char(error_count));
6 END;
7 /
PL/SQL procedure successfully completed.
Step 06: Complete the redefinition process by calling the FINISH_REDEF_TABLE procedure.
SQL> exec dbms_redefinition.finish_redef_table('CR_2', 'OUT_CDR', 'OUT_CDR_');
PL/SQL procedure successfully completed.
Step 07: Verify the partitioning by querying user_tab_partitions:
SQL> Select partition_name, high_value from user_tab_partitions where table_name='OUT_CDR';
PARTITION_NAME HIGH_VALUE
------------------------------ ---------------------------------------------------------------------------------------------------
P01012009 TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P01152009 TO_DATE(' 2009-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P02012009 TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P08152008 TO_DATE(' 2008-08-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P09012008 TO_DATE(' 2008-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P09152008 TO_DATE(' 2008-09-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P10012008 TO_DATE(' 2008-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P10152008 TO_DATE(' 2008-10-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P11012008 TO_DATE(' 2008-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P11152008 TO_DATE(' 2008-11-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P12012008 TO_DATE(' 2008-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P12152008 TO_DATE(' 2008-12-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
PMAX MAXVALUE
13 rows selected.
Check index status by,
SQL> select index_name , status from user_indexes where table_name='OUT_CDR';
INDEX_NAME STATUS
------------------------------ --------
OUT_CDR_PK VALID
OUT_CDR_UQ VALID
OUT_CDR_UQ_2 VALID
Step 08: Drop the interim table OUT_CDR_.
SQL> DROP TABLE OUT_CDR_;
Table dropped.
Step 01: Let's have a look at the table on which we will do partitioning.
SQL> set pagesize 200
SQL> set long 999999
SQL> set linesize 150
SQL> select dbms_metadata.get_ddl('TABLE','OUT_CDR','CR_2') from dual;
DBMS_METADATA.GET_DDL('TABLE','OUT_CDR','CR_2')
--------------------------------------------------------------------------------
CREATE TABLE "CR_2"."OUT_CDR"
( "ID" NUMBER(32,0) NOT NULL ENABLE,
"CDATE" DATE NOT NULL ENABLE,
"DDATE" DATE NOT NULL ENABLE,
"ACCTSESSIONID" VARCHAR2(100),
"CALLINGNO" VARCHAR2(100),
"CALLEDNO" VARCHAR2(100) NOT NULL ENABLE,
"AREACODE" VARCHAR2(100),
"PREFIX" VARCHAR2(100),
"SESSIONTIME" NUMBER(32,0),
"BILLABLETIME" NUMBER(32,0),
"RATE" NUMBER(32,4),
"CALL_COST" NUMBER(32,4),
"CURRENTBILL" NUMBER(32,4),
"DISCONNECTCAUSE" VARCHAR2(50),
"SOURCEIP" VARCHAR2(100),
"DESTIP" VARCHAR2(100),
"BILLABLE" NUMBER(32,0) NOT NULL ENABLE,
"LESS" NUMBER(32,0) NOT NULL ENABLE,
"ACCID" NUMBER(32,0),
"IN_DDATE" DATE,
"IN_PREFIX" VARCHAR2(100),
"IN_SESSIONTIME" NUMBER(32,0),
"IN_BILLABLETIME" NUMBER(32,0),
"IN_RATE" NUMBER(32,4),
"IN_CALL_COST" NUMBER(32,4),
"IN_MONEYLEFT" NUMBER(32,4),
"IN_DISCONNECTCAUSE" VARCHAR2(50),
"IN_BILLABLE" NUMBER(32,0),
"IN_LESS" NUMBER(32,0),
"SWITCH_ID" NUMBER(32,0) NOT NULL ENABLE,
"USER_ID" NUMBER(32,0) NOT NULL ENABLE,
"IN_USER_ID" NUMBER(32,0),
"PROCESSED" NUMBER(1,0),
CONSTRAINT "OUT_CDR_PK" PRIMARY KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 168820736 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE,
CONSTRAINT "OUT_CDR_UQ" UNIQUE ("CDATE", "CALLEDNO", "USER_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 522190848 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE,
CONSTRAINT "OUT_CDR_UQ_2" UNIQUE ("DDATE", "CALLEDNO", "USER_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 521142272 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "CDR_INDX_SPC" ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 2013265920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "OUT_CDR_NEW_SPC"
Step 02: Let's determine if the table OUT_CDR can be redefined online.
SQL> exec dbms_redefinition.can_redef_table('CR_2', 'OUT_CDR');
PL/SQL procedure successfully completed.
Step 03: Create an interim table that has the same structure as the original table (minus constraints, indexes, and triggers) but with the partitioning attributes added.
I named the interim table OUT_CDR_; we can drop it after the redefinition completes.
SQL> CREATE TABLE "CR_2"."OUT_CDR_"
2 ( "ID" NUMBER(32,0),
3 "CDATE" DATE ,
4 "DDATE" DATE ,
5 "ACCTSESSIONID" VARCHAR2(100),
6 "CALLINGNO" VARCHAR2(100),
7 "CALLEDNO" VARCHAR2(100) ,
8 "AREACODE" VARCHAR2(100),
9 "PREFIX" VARCHAR2(100),
10 "SESSIONTIME" NUMBER(32,0),
11 "BILLABLETIME" NUMBER(32,0),
12 "RATE" NUMBER(32,4),
13 "CALL_COST" NUMBER(32,4),
14 "CURRENTBILL" NUMBER(32,4),
15 "DISCONNECTCAUSE" VARCHAR2(50),
16 "SOURCEIP" VARCHAR2(100),
17 "DESTIP" VARCHAR2(100),
18 "BILLABLE" NUMBER(32,0) ,
19 "LESS" NUMBER(32,0) ,
20 "ACCID" NUMBER(32,0),
21 "IN_DDATE" DATE,
22 "IN_PREFIX" VARCHAR2(100),
23 "IN_SESSIONTIME" NUMBER(32,0),
24 "IN_BILLABLETIME" NUMBER(32,0),
25 "IN_RATE" NUMBER(32,4),
26 "IN_CALL_COST" NUMBER(32,4),
27 "IN_MONEYLEFT" NUMBER(32,4),
28 "IN_DISCONNECTCAUSE" VARCHAR2(50),
29 "IN_BILLABLE" NUMBER(32,0),
30 "IN_LESS" NUMBER(32,0),
31 "SWITCH_ID" NUMBER(32,0) ,
32 "USER_ID" NUMBER(32,0) ,
33 "IN_USER_ID" NUMBER(32,0),
34 "PROCESSED" NUMBER(1,0)
35 ) TABLESPACE "OUT_CDR_NEW_SPC"
36 Partition by range(cdate)
37 (
38 partition P08152008 values less than (to_date('15-AUG-2008','DD-MON-YYYY')),
39 partition P09012008 values less than (to_date('01-SEP-2008','DD-MON-YYYY')),
40 partition P09152008 values less than (to_date('15-SEP-2008','DD-MON-YYYY')),
41 partition P10012008 values less than (to_date('01-OCT-2008','DD-MON-YYYY')),
42 partition P10152008 values less than (to_date('15-OCT-2008','DD-MON-YYYY')),
43 partition P11012008 values less than (to_date('01-NOV-2008','DD-MON-YYYY')),
44 partition P11152008 values less than (to_date('15-NOV-2008','DD-MON-YYYY')),
45 partition P12012008 values less than (to_date('01-DEC-2008','DD-MON-YYYY')),
46 partition P12152008 values less than (to_date('15-DEC-2008','DD-MON-YYYY')),
47 partition P01012009 values less than (to_date('01-JAN-2009','DD-MON-YYYY')),
48 partition P01152009 values less than (to_date('15-JAN-2009','DD-MON-YYYY')),
49 partition P02012009 values less than (to_date('01-FEB-2009','DD-MON-YYYY')),
50 partition PMAX values less than (maxvalue));
Table created.
Step 04: Initiate the redefinition process by calling the dbms_redefinition.start_redef_table procedure.
SQL> exec dbms_redefinition.start_redef_table('CR_2', 'OUT_CDR', 'OUT_CDR_');
PL/SQL procedure successfully completed.
Step 05: Copy the dependent objects of the original table onto the interim table. The COPY_TABLE_DEPENDENTS procedure clones the dependent objects of the table being redefined onto the interim table and registers them; it does not clone dependent objects that are already registered.
In effect, COPY_TABLE_DEPENDENTS clones dependent objects such as grants, triggers, constraints, and privileges from the table being redefined onto the interim table, which in fact represents the post-redefinition table.
SQL> declare
2 error_count pls_integer := 0;
3 BEGIN
4 dbms_redefinition.copy_table_dependents('CR_2', 'OUT_CDR', 'OUT_CDR_',1, true, true, true, false,error_count);
5 dbms_output.put_line('errors := ' || to_char(error_count));
6 END;
7 /
PL/SQL procedure successfully completed.
Step 06: Complete the redefinition process by calling the FINISH_REDEF_TABLE procedure.
SQL> exec dbms_redefinition.finish_redef_table('CR_2', 'OUT_CDR', 'OUT_CDR_');
PL/SQL procedure successfully completed.
Step 07: Verify the partitioning by,
SQL> Select partition_name, high_value from user_tab_partitions where table_name='OUT_CDR';
PARTITION_NAME HIGH_VALUE
------------------------------ ---------------------------------------------------------------------------------------------------
P01012009 TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P01152009 TO_DATE(' 2009-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P02012009 TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P08152008 TO_DATE(' 2008-08-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P09012008 TO_DATE(' 2008-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P09152008 TO_DATE(' 2008-09-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P10012008 TO_DATE(' 2008-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P10152008 TO_DATE(' 2008-10-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P11012008 TO_DATE(' 2008-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P11152008 TO_DATE(' 2008-11-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P12012008 TO_DATE(' 2008-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
P12152008 TO_DATE(' 2008-12-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
PMAX MAXVALUE
13 rows selected.
Check index status by,
SQL> select index_name , status from user_indexes where table_name='OUT_CDR';
INDEX_NAME STATUS
------------------------------ --------
OUT_CDR_PK VALID
OUT_CDR_UQ VALID
OUT_CDR_UQ_2 VALID
Step 08: Drop the interim table OUT_CDR_.
SQL> DROP TABLE OUT_CDR_;
Table dropped.
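As a final sanity check, you can confirm that the redefined OUT_CDR is now a partitioned table (a sketch; run it as the table owner):

```sql
-- PARTITIONED should now show YES for the redefined table
SELECT table_name, partitioned
FROM   user_tables
WHERE  table_name = 'OUT_CDR';
```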
ORA-12091: cannot online redefine table with materialized views
Problem Description
While attempting to reorganize a table online, whenever I run the DBMS_REDEFINITION.CAN_REDEF_TABLE or DBMS_REDEFINITION.START_REDEF_TABLE procedure on the table, it fails with error ORA-12091 as below.
SQL> exec dbms_redefinition.can_redef_table('CR_2', 'OUT_CDR');
BEGIN dbms_redefinition.can_redef_table('CR_2', 'OUT_CDR'); END;
*
ERROR at line 1:
ORA-12091: cannot online redefine table "CR_2"."OUT_CDR" with materialized views
ORA-06512: at "SYS.DBMS_REDEFINITION", line 137
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1478
ORA-06512: at line 1
Cause of the Problem
If a run of the DBMS_REDEFINITION.START_REDEF_TABLE procedure fails, the materialized view it created on the interim table remains behind. Subsequent runs of DBMS_REDEFINITION.CAN_REDEF_TABLE or DBMS_REDEFINITION.START_REDEF_TABLE will then return this error until you remove the materialized view against the table.
Solution of the Problem
Solution 01:
Remove the snapshot log and materialized view created during the unsuccessful
first execution of the DBMS_REDEFINITION.START_REDEF_TABLE procedure.
You can identify the snapshot log and materialized view log of the current table by,
SQL> select log_table from user_snapshot_logs;
LOG_TABLE
------------------------------
MLOG$_IN_CDR
MLOG$_OUT_CDR
SQL> select master,log_table from user_mview_logs;
MASTER LOG_TABLE
------------------------------ ------------------------------
IN_CDR MLOG$_IN_CDR
OUT_CDR MLOG$_OUT_CDR
Remove them by,
SQL> DROP MATERIALIZED VIEW LOG ON IN_CDR;
Materialized view log dropped.
SQL> DROP MATERIALIZED VIEW LOG ON OUT_CDR;
Materialized view log dropped.
You can use the keyword SNAPSHOT in place of MATERIALIZED VIEW; SNAPSHOT is supported for backward compatibility.
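For example, the following statements are equivalent; each drops the same log, so only one of them is actually needed (a sketch against the OUT_CDR table):

```sql
-- Backward-compatible syntax
DROP SNAPSHOT LOG ON OUT_CDR;

-- Modern equivalent
DROP MATERIALIZED VIEW LOG ON OUT_CDR;
```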
Now execute your statement like,
SQL> exec dbms_redefinition.can_redef_table('CR_2', 'OUT_CDR');
PL/SQL procedure successfully completed.
And it works.
Solution 02:
Run the dbms_redefinition.abort_redef_table procedure, which automatically performs the cleanup. This procedure removes the temporary objects created by the redefinition process, such as materialized view logs.
SQL> exec dbms_redefinition.abort_redef_table('CR_2', 'IN_CDR', 'IN_CDR_');
PL/SQL procedure successfully completed.
Online Redefinition fails with ORA-23540: Redefinition not defined or initiated
Problem Description
While doing online redefinition on a table, dbms_redefinition.copy_table_dependents fails with error ORA-23540 as below.
SQL> set serverout on
SQL> declare
2 error_count pls_integer := 0;
3 BEGIN
4 dbms_redefinition.copy_table_dependents('MUCI', 'HISTORY', 'HISTORY_',1, true, true, true, false,error_count);
5 dbms_output.put_line('errors := ' || to_char(error_count));
6 END;
7 /
declare
*
ERROR at line 1:
ORA-23540: Redefinition not defined or initiated
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1662
ORA-06512: at line 4
Cause of the Problem
There may be different causes. If the schema specified in the dbms_redefinition.copy_table_dependents procedure does not exist in the database, the above error can appear. It may be the case that you specified the schema name in lowercase, while by default usernames are created in uppercase.
Another reason may be that the redefinition process has not yet been started on the table with dbms_redefinition.start_redef_table. In fact, dbms_redefinition.start_redef_table creates a materialized view on the table; if that materialized view does not exist yet, ORA-23540 is raised.
Solution of the Problem
Let's see whether a materialized view log was created on the underlying table.
SQL> select master,log_table from dba_mview_logs where log_owner='MUCI' and master='HISTORY';
no rows selected
Check whether the underlying table exists under the user.
SQL> desc muci.history;
ERROR:
ORA-04043: object muci.history does not exist
In fact, the user MUCI does not exist on the system, and hence the error comes.
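Before suspecting the redefinition itself, it is worth confirming that the schema exists at all; a quick check (assuming DBA privileges, and remembering that usernames are stored in uppercase by default):

```sql
-- No rows here means the schema passed to copy_table_dependents does not exist
SELECT username FROM dba_users WHERE username = 'MUCI';
```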
The same error also appears if redefinition is not defined on the table. Suppose the schema MUCI_FINAL exists, along with the table HISTORY_DETAIL and the interim table HISTORY_, but redefinition has not been defined.
SQL> declare
2 error_count pls_integer := 0;
3 BEGIN
4 dbms_redefinition.copy_table_dependents('MUCI_FINAL', 'HISTORY_DETAIL', 'HISTORY_',1, true, true, true, false,error_count);
5 dbms_output.put_line('errors := ' || to_char(error_count));
6 END;
7 /
declare
*
ERROR at line 1:
ORA-23540: Redefinition not defined or initiated
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1662
ORA-06512: at line 4
Check for tables by,
SQL> select table_name from dba_tables where owner='MUCI_FINAL' and table_name like 'HISTORY%';
TABLE_NAME
------------------------------
HISTORY_DETAIL_
HISTORY_DETAIL
HISTORY_
HISTORY
Note that a materialized view log exists on HISTORY (dbms_redefinition.start_redef_table was run on it earlier), and hence the dbms_redefinition.copy_table_dependents procedure can be run over the HISTORY table.
SQL> select master,log_table from dba_mview_logs where log_owner='MUCI_FINAL' and master='HISTORY';
MASTER LOG_TABLE
------------------------------ ------------------------------
HISTORY MLOG$_HISTORY
SQL> declare
2 error_count pls_integer := 0;
3 BEGIN
4 dbms_redefinition.copy_table_dependents('MUCI_FINAL', 'HISTORY', 'HISTORY_',1, true, true, true, false,error_count);
5 dbms_output.put_line('errors := ' || to_char(error_count));
6 END;
7 /
errors := 0
PL/SQL procedure successfully completed.
IMPDP fails with ORA-39002, ORA-29283: invalid file operation
Problem Description
My impdp operation fails with errors ORA-39002, ORA-39070, and ORA-29283: invalid file operation as below.
SQL> host impdp cr_3/cr_3 remap_schema=cr_2:cr_3 dumpfile=skydbcr_2-04-10-2008.dmp directory=e
Import: Release 10.2.0.1.0 - Production on Wednesday, 12 November, 2008 16:42:27
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Cause and Solution of the Problem
1. Check whether the user who runs the impdp operation has READ and WRITE permission on the database directory. If not, then as a DBA user grant the permissions to CR_3,
GRANT READ,WRITE ON DIRECTORY E TO CR_3;
2. Check whether the path used in the database directory physically exists on the OS.
When you create a directory, Oracle does not check that the physical directory exists on the OS; the check is done at runtime.
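You can map the directory object named in the impdp command to its OS path and then verify that the path really exists on the server (a sketch, assuming DBA privileges):

```sql
-- Shows which OS path the directory object E points to
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'E';
```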
3. Check whether the owner of the Oracle process has enough permission on the OS directory where the file is accessed. On Unix, issue
ls -l name_of_the_file and inspect the permissions. If they are insufficient, use chmod to grant read and write access on the file to the oracle user.
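The Unix check-and-fix can be sketched as below; the scratch file created here merely stands in for the real dump file, whose path will differ on your system:

```shell
# Scratch file standing in for the dump file (illustrative only)
DUMP=$(mktemp)

# Inspect current owner and permissions, as you would with the real file
ls -l "$DUMP"

# Grant read/write to owner and read to group/others so the oracle user can open it
chmod 644 "$DUMP"

# Confirm the new mode
stat -c '%a' "$DUMP"
```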
On Windows with an NTFS file system, you can change this by right-clicking the file or folder and then selecting the sharing and security options.
4. Even though the oracle user has enough permissions on the directory, ORA-29283 may still be raised if the owner of the file is not oracle. In that case, change the ownership of the file with chown on Unix. On Windows with an NTFS file system, right-click the file or folder and then select the sharing and security options.
What is an ORA-00600? : internal error code
Generally, whenever you get an ORA-00600 error on your screen, in the alert log, or in trace files, it indicates that some inconsistency has occurred within your Oracle database software. The error is raised from the kernel code of the Oracle RDBMS when an unexpected condition is met; that is, an exception was encountered in the RDBMS kernel code.
A typical form of ORA-00600 error in the alert log is like below,
Errors in file /var/opt/dumpfile/udump/orastdby_ora_15862.trc:
ORA-00600: internal error code, arguments: [15160], [], [], [], [], [], [], []
Sat Nov 15 17:49:27 2008
•Whenever an ORA-600 error is raised a trace file is generated and an entry written to the alert.log with details of the trace file location.
•The trace file contains vital information about what led to the error condition.
•The trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). In the above example we see trace file is generated in udump directory which indicates the error was caught in a user process.
•Besides with ORA-00600 we see a list of arguments in square brackets. The first argument tells us from where in the code the error was caught and thus it is the key information in identifying the problem. This argument is either a number or a character string.
•Subsequent arguments supply further information, e.g. values of internal variables.
Resolve of You may be a victim of software counterfeiting Windows is not genuine
Microsoft has released the WGA (Windows Genuine Advantage) Notifications application, which generates the warning message "This copy of Windows is not genuine". The application checks the validity of your Microsoft Windows XP installation; if it finds that the copy of Windows XP is not validated, not genuine, counterfeit, unlicensed, pirated, illegal, unauthorized, or has simply failed the Windows Genuine Advantage validation process, notification messages appear at various places and times.
While logging in to your machine, a pop-up window appears asking you to 'Resolve now' or 'Get Genuine' and makes you wait for 5 seconds.
After logging on to your system, you get the "You may be a victim of software counterfeiting" message in the bottom right corner of the log-in screen.
Along with the balloon notification message that appears randomly with an icon in the notification area (system tray), you get other changes, such as your desktop background turning black.
In this post I will provide several methods to get rid of the WGA notifications.
Method 01:
1.Open the Task Manager window. You can do this by pressing Ctrl+Alt+Del, or by right-clicking the taskbar and selecting Task Manager.
2.In the Task Manager window, under the Processes tab, find the wgatray.exe process, right-click it, select the End Process Tree option, and then select Yes.
3.Restart Windows XP in Safe Mode.
4.Delete WgaTray.exe file from c:\Windows\System32.
5.Delete WgaTray.exe file from c:\Windows\System32\dllcache.
6.Click the Start menu, select Run, and type regedit.
7.Browse to the location,
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\
Windows NT\CurrentVersion\Winlogon\Notify
and then delete the folder 'WgaLogon' and all its contents
8.Restart Windows XP.
Method 02:
This method is very easy and works perfectly.
1.Rename the files 'WgaLogon.dll' and 'WgaTray.exe' in the C:\WINDOWS\system32 folder.
2.Open Task Manager and then kill WgaTray.exe process.
3.Reboot your computer.
Method 03:
If you have the System Restore feature turned on, restore the PC to a previous restore point, i.e. to a time before the WGA Notifications kicked in. To use System Restore, go to Start -> All Programs -> Accessories -> System Tools -> System Restore.
Communicate with the user through Accept and Prompt in Sql*plus
With the Sql*plus ACCEPT and PROMPT commands we can take input from users and act on it. The PROMPT command displays output, and the ACCEPT command takes input from the user into a substitution variable.
Example of Receiving a Substitution Variable Value
1)Create a script named b.sql. This script will take a title from the user and display it along with a salary report.
SQL> ed b
PROMPT Enter a title up to 30 characters
ACCEPT TITLE PROMPT 'Title: '
TTITLE LEFT TITLE SKIP 2
SELECT ID, Designation, max_salary
FROM DESIGNATION
WHERE ID IN (1,2,3);
2)Run the b.sql
SQL> start b
Enter a title up to 30 characters
Title: This is Salary Testing Report
This is Salary Testing Report
ID DESIGNATION MAX_SALARY
---------- -------------------------------------------------- ----------
1 General Manager 100000
2 Asst General Manager 20000
3 Manager 50000
Customizing Prompts for Substitution Variable
This script will take input into a substitution variable and display report based on the input.
SQL> ed c
PROMPT Enter a valid employee ID
ACCEPT ENUMBER NUMBER PROMPT 'Employee ID. :'
SELECT ID,DESIGNATION, MAX_SALARY
FROM DESIGNATION
WHERE ID=&ENUMBER;
SQL> start c
Enter a valid employee ID
Employee ID. :3
old 3: WHERE ID=&ENUMBER
new 3: WHERE ID= 3
ID DESIGNATION MAX_SALARY
---------- -------------------------------------------------- ----------
3 Manager 50000
Working with Sql*plus Scripts
Running Sql Scripts
To run a SQL script in Sql*plus, use the @, @@, or START command. To run a script named C:\test.sql use,
@c:\test
or,
@@c:\test
or,
start c:\test
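The three forms are not quite identical: @@ resolves a relative script name against the directory of the calling script, while @ and START resolve it against the current working directory. A sketch with hypothetical script names:

```sql
-- master.sql, located in C:\scripts\
-- @@ finds helper.sql in C:\scripts\ even if SQL*Plus was started elsewhere;
-- a plain @helper would look in the current working directory instead
@@helper
```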
Define editor and editing sql scripts on command line
Have a look at
Say yes to Sql*Plus Sql*Plus Tips
In addition,
CLEAR BUFFER (CL BUFF): Clears the buffer, i.e. deletes all lines from the Sql buffer.
To insert a line before line 1, enter a zero ("0") and follow the zero with text.
0 SELECT Username
Placing comments in Sql Scripts
1)Using the Sql*plus REMARK command: If a line starts with REMARK, the rest of the words on that line are treated as a comment and hence ignored. This applies to a single line only.
Example:
REMARK This script will generate the salary history
REM This script is generated on 11/01/2008
2)Using /*...*/: To comment multiple lines, start with /*; everything until the closing */ is treated as a comment. With /*...*/ you can also comment a single line.
You must enter a space after the slash-asterisk (/*).
Example:
/* Column Salary for 9999
Column Name for a30
*/
3)Using --: You can use ANSI/ISO "--" style comments within SQL statements, PL/SQL blocks, or SQL*Plus commands. This style of comment cannot span multiple lines;
to comment several lines, start each line with --.
Example:
--This is a comment.
Issues while placing comments in Sql Scripts
SQL*Plus does not have a SQL or PL/SQL command parser.
It scans the first few keywords of each new statement to determine the command type, whether it is SQL, PL/SQL or SQL*Plus.
Comments in some locations can prevent SQL*Plus from correctly identifying the command type, and thus give unexpected results.
Rules
1)Do not put comments within the first few keywords of a statement. Below is an example. In the first case I placed the comment before the PROCEDURE keyword, so the PL/SQL block terminates earlier than expected; in the second case the procedure is created successfully because the comment is placed correctly.
SQL> create or replace
2 /* This is a comment */
3 procedure test_proc as
4 begin
5 dbms_output.put_line('Test Comment');
Warning: Procedure created with compilation errors.
SQL> end;
SP2-0042: unknown command "end" - rest of line ignored.
SQL> /
Warning: Procedure created with compilation errors.
SQL> create or replace procedure
2 /* This is a comment */
3 test_proc as
4 begin
5 dbms_output.put_line('Test Comment');
6 end;
7 /
Procedure created.
2)Do not put comments after statement terminators (period, semicolon or slash).
Example:
SQL> select sysdate from dual; --This is a comment
2
SQL> /
select sysdate from dual; --This is a comment
*
ERROR at line 1:
ORA-00911: invalid character
3)Do not put a statement termination character at the end of a comment. For example,
select sysdate
--This is comment;
from dual;
statement will fail because statement termination occurs in the comments.
SQL> select sysdate from dual; --This is a comment
2
SQL> /
select sysdate from dual; --This is a comment
*
ERROR at line 1:
ORA-00911: invalid character
SQL> select sysdate
2 --This is comment;
--This is comment
*
ERROR at line 2:
ORA-00923: FROM keyword not found where expected
SQL> from dual;
SP2-0042: unknown command "from dual" - rest of line ignored.
However, if we omit semicolon from comment then statement successfully executed.
SQL> select sysdate
2 --This is comment
3 from dual;
SYSDATE--
---------
29-NOV-08
4)Do not use ampersand characters '&' in comments in a SQL statement or PL/SQL
block. Because & is identified as substitution variable unless if you set SET DEFINE OFF to prevent scanning for the substitution character.
Example:
SQL> select sysdate
2 --Comment &
3 from dual;
SP2-0317: expected symbol name is missing
SYSDATE--
---------
29-NOV-08
SQL> set define off
SQL> select sysdate
2 --Comment &
3 from dual;
SYSDATE--
---------
29-NOV-08
To run a SQL script in SQL*Plus, use the @, @@ or START command. To run a script named C:\test.sql use,
@c:\test
or,
@@c:\test
or,
start c:\test
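Note that while @ and @@ behave the same when typed interactively, they differ inside a script: @@ looks for a nested script in the same directory as the calling script, whereas @ resolves the name against the current working directory. A small sketch (the file names here are made up for illustration):
REM c:\scripts\outer.sql
prompt calling the nested script
@@inner
REM @@ runs c:\scripts\inner.sql no matter where SQL*Plus was started from;
REM @inner would instead look in the current working directory.
Running @c:\scripts\outer executes both scripts.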
Defining an editor and editing SQL scripts on the command line
Have a look at the earlier posts Say yes to Sql*Plus and Sql*Plus Tips.
In addition,
CLEAR BUFFER (CL BUFF): clears the buffer, i.e. deletes all lines from the SQL buffer.
To insert a line before line 1, enter a zero ("0") and follow the zero with the text:
0 SELECT Username
Placing comments in Sql Scripts
1)Using the SQL*Plus REMARK command: If a line starts with REMARK (or its abbreviation REM), the rest of the line is treated as a comment and is ignored. This applies to a single line only.
Example:
REMARK This script will generate the salary history
REM This script is generated on 11/01/2008
2)Using /*...*/: To comment out multiple lines, start with /*; everything up to the closing */ is treated as a comment. With /*...*/ you can also comment a single line.
You must enter a space after the slash-asterisk (/*).
Example:
/* Column Salary for 9999
Column Name for a30
*/
3)Using --: You can use ANSI/ISO "--" style comments within SQL statements, PL/SQL blocks, or SQL*Plus commands. This style of comment cannot span multiple lines.
To comment several lines, start each line with --.
Example:
--This is a comment.
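Putting the three comment styles together, a short sketch of a script (the table and column names are only illustrative) might look like:
REM Salary report script
/* Format the columns:
this block comment spans two lines */
COLUMN sal FORMAT 9999
SELECT ename, sal -- this comment ends with the line
FROM emp;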
Issues while placing comments in Sql Scripts
SQL*Plus does not have a SQL or PL/SQL command parser.
It scans the first few keywords of each new statement to determine the command type, whether it is SQL, PL/SQL or SQL*Plus.
Comments in some locations can prevent SQL*Plus from correctly identifying the command type, and thus give unexpected results.
Rules
1)Do not put comments within the first few keywords of a statement. Below is an example. In the first case, the comment appears before the PROCEDURE keyword, so SQL*Plus ends the PL/SQL block earlier than expected; in the second case, the procedure is created successfully because the comment is placed after the command type has been established.
SQL> create or replace
2 /* This is a comment */
3 procedure test_proc as
4 begin
5 dbms_output.put_line('Test Comment');
Warning: Procedure created with compilation errors.
SQL> end;
SP2-0042: unknown command "end" - rest of line ignored.
SQL> /
Warning: Procedure created with compilation errors.
SQL> create or replace procedure
2 /* This is a comment */
3 test_proc as
4 begin
5 dbms_output.put_line('Test Comment');
6 end;
7 /
Procedure created.
2)Do not put comments after statement terminators (period, semicolon or slash).
Example:
SQL> select sysdate from dual; --This is a comment
2
SQL> /
select sysdate from dual; --This is a comment
*
ERROR at line 1:
ORA-00911: invalid character
3)Do not put the statement termination character at the end of a comment. For example,
select sysdate
--This is comment;
from dual;
The statement will fail because the terminator occurs inside the comment.
SQL> select sysdate
2 --This is comment;
--This is comment
*
ERROR at line 2:
ORA-00923: FROM keyword not found where expected
SQL> from dual;
SP2-0042: unknown command "from dual" - rest of line ignored.
However, if we omit the semicolon from the comment, the statement executes successfully.
SQL> select sysdate
2 --This is comment
3 from dual;
SYSDATE--
---------
29-NOV-08
4)Do not use the ampersand character '&' in comments in a SQL statement or PL/SQL block, because & is treated as a substitution variable prefix unless you issue SET DEFINE OFF to disable scanning for the substitution character.
Example:
SQL> select sysdate
2 --Comment &
3 from dual;
SP2-0317: expected symbol name is missing
SYSDATE--
---------
29-NOV-08
SQL> set define off
SQL> select sysdate
2 --Comment &
3 from dual;
SYSDATE--
---------
29-NOV-08
Sql*plus Basics and Tips
The Sql Buffer
The SQL buffer stores the most recently entered SQL command or PL/SQL block.
SQL*Plus does not store SQL*Plus commands, or the semicolon or slash characters
you type to execute a command, in the SQL buffer.
You can execute the command or block in the SQL buffer using the RUN or /(slash)
commands.
The RUN command displays the command or block in the buffer and then executes it.
/ (slash) executes the command or block in the buffer but does not display it.
Below is an example.
A simple SQL command.
SQL> select sysdate from dual;
SYSDATE
---------
29-NOV-08
This one (setting the linesize) is a SQL*Plus command and hence is not stored in the SQL buffer.
SQL> set linesize 100
Invoking LIST shows the SQL buffer, not the SQL*Plus command.
SQL> list
1* select sysdate from dual
Run will execute the command after displaying it.
SQL> run
1* select sysdate from dual
SYSDATE
---------
29-NOV-08
/(Slash) will not display the command. It will only execute it.
SQL> /
SYSDATE
---------
29-NOV-08
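The buffer can also be edited in place with SQL*Plus line-editing commands such as CHANGE (C), APPEND (A) and DEL. For example, to rewrite the buffered statement without retyping it:
SQL> list
  1* select sysdate from dual
SQL> change /sysdate/user/
  1* select user from dual
SQL> /
CHANGE replaces the first occurrence of the old text in the current buffer line; DEL deletes the current line.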
Continuing a Long SQL*Plus Command on Additional Lines
Suppose you are typing a long SQL*Plus command, SQL command, identifier or word that does not fit on one line. You can continue it on the next line by entering a hyphen (-) at the end of the line and then pressing Return. SQL*Plus displays a right angle bracket (>) as a prompt for each additional line.
So the hyphen (-) in SQL*Plus acts as a continuation character. For example, to format a long column definition you can use a hyphen:
SQL> column salary format 99,999 -
> heading 'Monthly Salary'
Similarly,
SQL> select 200--
> 100 from dual;
200-100
----------
100
SQL> select 200-
> -100 from dual;
200-100
----------
100
A hyphen at the end of a line is treated as the continuation character and is therefore discarded. Hence the following returns an error.
SQL> select 200-
> 100 from dual;
select 200 100 from dual
*
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
Stopping a Command while it is Running
To stop a long-running command or its output in SQL*Plus, press CTRL+C.
Displaying output page by page in Sql*plus
Normally, when you run a SQL command, its entire output scrolls past at once. If the command returns 1000 rows, all of them are dumped to the console, and you may not be able to scroll back to earlier output. This can be annoying. To display output page by page, you have to set two variables.
SET PAGESIZE 30 --which will say in a page how many lines will be displayed.
SET PAUSE ON --which will pause the output to display unless you press enter.
Now run a SQL command such as SELECT OBJECT_NAME FROM DBA_OBJECTS; and press ENTER to proceed. Each press of ENTER displays a new page.
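You can also replace the empty default pause prompt with your own text, since SET PAUSE accepts a prompt string in addition to ON and OFF:
SET PAGESIZE 30
SET PAUSE 'More...'
SET PAUSE ON
Now each page ends by displaying More... and waiting for ENTER.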
Saving Changes to database automatically
With the SQL*Plus AUTOCOMMIT variable you can decide when commits happen. By default it is set to OFF, which means a transaction is committed only when you issue COMMIT explicitly. If you want every DML statement committed immediately, invoke,
SET AUTOCOMMIT ON
To commit automatically after every 10 DML statements, do,
SET AUTOCOMMIT 10
To turn the autocommit feature off again, enter
SET AUTOCOMMIT OFF
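You can verify the current setting at any time with SHOW AUTOCOMMIT; when a statement count is in effect, the count is reported as well.
SQL> SET AUTOCOMMIT 10
SQL> SHOW AUTOCOMMIT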
Sql*plus Error messages
SQL*Plus error messages begin with the prefix SP2.
For example,
SQL> @testing.sql
SP2-0310: unable to open file "testing.sql"
Sql*plus copy command error messages start with CPY.
List of country calling codes in the world
Country Code
Afghanistan 93
Albania 355
Algeria 213
American Samoa 1-684
Andorra 376
Angola 244
Anguilla 1-264
Antigua 1-268
Argentina 54
Armenia 374
Aruba 297
Ascension 247
Australia 61
Australian External Territories 672
Austria 43
Azerbaijan 994
Bahamas 1-242
Bahrain 973
Bangladesh 880
Barbados 1-246
Barbuda 1-268
Belarus 375
Belgium 32
Belize 501
Benin 229
Bermuda 1-441
Bhutan 975
Bolivia 591
Bosnia and Herzegovina 387
Botswana 267
Brazil 55
British Virgin Islands 1-284
Brunei Darussalam 673
Bulgaria 359
Burkina Faso 226
Burundi 257
Cambodia 855
Cameroon 237
Canada 1
Cape Verde Islands 238
Cayman Islands 1-345
Central African Republic 236
Chad 235
Chatham Island (New Zealand) 64
Chile 56
China (PRC) 86
Christmas Island 61-8
Cocos-Keeling Islands 61
Colombia 57
Comoros 269
Congo 242
Congo, Dem. Rep. of (Zaire) 243
Cook Islands 682
Costa Rica 506
Côte d'Ivoire (Ivory Coast) 225
Croatia 385
Cuba 53
Cuba (Guantanamo Bay) 5399
Curaçao 599
Cyprus 357
Czech Republic 420
Denmark 45
Diego Garcia 246
Djibouti 253
Dominica 1-767
Dominican Republic 1-809 and 1-829
East Timor 670
Easter Island 56
Ecuador 593
Egypt 20
El Salvador 503
Ellipso (Mobile Satellite service) 8812, 8813
EMSAT (Mobile Satellite service) 88213
Equatorial Guinea 240
Eritrea 291
Estonia 372
Ethiopia 251
Falkland Islands 500
Faroe Islands 298
Fiji Islands 679
Finland 358
France 33
French Antilles 596
French Guiana 594
French Polynesia 689
Gabonese Republic 241
Gambia 220
Georgia 995
Germany 49
Ghana 233
Gibraltar 350
Global Mobile Satellite System (GMSS) 881
Globalstar (Mobile Satellite Service) 8818, 8819
Greece 30
Greenland 299
Grenada 1-473
Guadeloupe 590
Guam 1-671
Guantanamo Bay 5399
Guatemala 502
Guinea 224
Guinea-Bissau 245
Guyana 592
Haiti 509
Honduras 504
Hong Kong 852
Hungary 36
Iceland 354
ICO Global (Mobile Satellite Service) 8810, 8811
India 91
Indonesia 62
Inmarsat (Atlantic Ocean - East) 871
Inmarsat (Atlantic Ocean - West) 874
Inmarsat (Indian Ocean) 873
Inmarsat (Pacific Ocean) 872
Inmarsat SNAC 870
International Freephone Service 800
International Shared Cost Service (ISCS) 808
Iran 98
Iraq 964
Ireland 353
Iridium (Mobile Satellite service) 8816, 8817
Israel 972
Italy 39
Jamaica 1-876
Japan 81
Jordan 962
Kazakhstan 7-6, 7-7
Kenya 254
Kiribati 686
Korea (North) 850
Korea (South) 82
Kuwait 965
Kyrgyz Republic 996
Laos 856
Latvia 371
Lebanon 961
Lesotho 266
Liberia 231
Libya 218
Liechtenstein 423
Lithuania 370
Luxembourg 352
Macao 853
Macedonia (Former Yugoslav Rep of.) 389
Madagascar 261
Malawi 265
Malaysia 60
Maldives 960
Mali Republic 223
Malta 356
Marshall Islands 692
Martinique 596
Mauritania 222
Mauritius 230
Mayotte Island 262
Mexico 52
Micronesia, (Federal States of) 691
Midway Island 1-808
Moldova 373
Monaco 377
Mongolia 976
Montenegro 382
Montserrat 1-664
Morocco 212
Mozambique 258
Myanmar 95
Namibia 264
Nauru 674
Nepal 977
Netherlands 31
Netherlands Antilles 599
Nevis 1-869
New Caledonia 687
New Zealand 64
Nicaragua 505
Niger 227
Nigeria 234
Niue 683
Norfolk Island 672
Northern Marianas Islands (Saipan, Rota, and Tinian) 1-670
Norway 47
Oman 968
Pakistan 92
Palau 680
Palestinian Settlements 970
Panama 507
Papua New Guinea 675
Paraguay 595
Peru 51
Philippines 63
Poland 48
Portugal 351
Puerto Rico 1-787 or 1-939
Qatar 974
Réunion Island 262
Romania 40
Russia 7
Rwandese Republic 250
St. Helena 290
St. Kitts/Nevis 1-869
St. Lucia 1-758
St. Pierre and Miquelon 508
St. Vincent and Grenadines 1-784
Samoa 685
San Marino 378
São Tomé and Principe 239
Saudi Arabia 966
Senegal 221
Serbia 381
Seychelles Republic 248
Sierra Leone 232
Singapore 65
Slovak Republic 421
Slovenia 386
Solomon Islands 677
Somali Democratic Republic 252
South Africa 27
Spain 34
Sri Lanka 94
Sudan 249
Suriname 597
Swaziland 268
Sweden 46
Switzerland 41
Syria 963
Taiwan 886
Tajikistan 992
Tanzania 255
Thailand 66
Thuraya (Mobile Satellite service) 88216
Timor Leste 670
Togolese Republic 228
Tokelau 690
Tonga Islands 676
Trinidad and Tobago 1-868
Tunisia 216
Turkey 90
Turkmenistan 993
Turks and Caicos Islands 1-649
Tuvalu 688
Uganda 256
Ukraine 380
United Arab Emirates 971
United Kingdom 44
United States of America 1
Universal Personal Telecommunications (UPT) 878
Uruguay 598
US Virgin Islands 1-340
Uzbekistan 998
Vanuatu 678
Vatican City 39-06-698; 379
Venezuela 58
Vietnam 84
Wake Island 808
Wallis and Futuna Islands 681
Yemen 967
Zambia 260
Zanzibar 255
Zimbabwe 263
Use of bind variables in Sql*plus
Bind variables are variables declared in a host environment and referenced from SQL statements or PL/SQL blocks, where they are replaced with actual values at run time.
You can also create bind variables in SQL*Plus. If you create a bind variable in SQL*Plus, you can use it as you would a declared variable in a PL/SQL subprogram, and then access its value from SQL*Plus.
Creating Bind Variables
To create a bind variable in SQL*Plus, use the VARIABLE command. To create a bind variable named bind_var of the NUMBER datatype, use,
variable bind_var NUMBER
Displaying Bind Variables
To display all bind variables in the session, use the VARIABLE command with no arguments.
SQL> variable
variable bind_var
datatype NUMBER
To display the value of a particular bind variable, use,
SQL> print bind_var
BIND_VAR
----------
Using Bind Variables
To reference a bind variable, prefix its name with a colon (:). To change the value of a bind variable, you must do so inside a PL/SQL block. To set bind_var to 10, use,
begin
:bind_var:=10;
end;
/
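A complete round trip: create the variable, assign it inside a PL/SQL block, reference it from SQL with the colon prefix, and print it.
SQL> variable bind_var NUMBER
SQL> begin
  2  :bind_var:=10;
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> select :bind_var+5 from dual;
SQL> print bind_var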
Formatting SQL*Plus Reports
Sometimes SQL*Plus output is hard to read, for example when a column is displayed far wider than its data and spoils the layout. In this post I will show how to produce nicely formatted output in SQL*Plus.
With the COLUMN command a column can be formatted. For character data the format is specified by A followed by a length; A7 means the column will span 7 characters. For numeric data the format is written with 9s, e.g. 99999, where the number of 9s determines the number of digits displayed.
Changing Column Headings
To change the headings of columns a, b and c, use commands like those below. The vertical bar (|) splits a heading onto a new line.
SQL> COLUMN A HEADING 'FIRST|COLUMN'
SQL> COLUMN C HEADING 'THIRD|COLUMN'
SQL> COLUMN B HEADING 'SECOND|COLUMN' FORMAT A7
SQL> select a,b,c from test1;
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
---------- ------- ------
1 Hi Hi2
Setting the Underline Character
In the output above, the underline character under the headings is -. If you want to change it to =, use,
SQL> SET UNDERLINE =
SQL> /
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
========== ======= ======
1 Hi Hi2
Default Display of columns
1)A NUMBER column's width equals the width of the heading or the width of the FORMAT plus one space for the sign, whichever is greater. If you do not explicitly use FORMAT, then the column's width will always be at least the value of SET NUMWIDTH.
By default, NUMWIDTH is set to 10.
SQL> select 22/7 "This is the pi value" from dual;
This is the pi value
--------------------
3.14285714
SQL> show numwidth
numwidth 10
2)The default width of datatype columns is the width of the column in the database. The column width of a LONG, CLOB, NCLOB or XMLType column defaults to the value of SET LONGCHUNKSIZE or SET LONG, whichever is smaller.
If LONG is set to 80 and LONGCHUNKSIZE is set to 90, then for a CLOB column only the first 80 characters are shown in SQL*Plus by default.
3)The default width and format of unformatted DATE columns in SQL*Plus is determined by the database NLS_DATE_FORMAT parameter. Otherwise, the default format width is A9.
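For instance, the 22/7 query shown above can be given an explicit numeric format that overrides the NUMWIDTH default; with FORMAT 9.99 the value is rounded and displayed as 3.14.
SQL> COLUMN pi FORMAT 9.99
SQL> SELECT 22/7 pi FROM dual;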
Listing and Resetting Column Display Attributes
1)To list the current display attributes for a given column, use the COLUMN command followed by the column name.
COLUMN column_name
2)To list the current display attributes for all columns, enter just the COLUMN command.
COLUMN
3)To reset the display attributes for a column to their default values, use the CLEAR clause of the COLUMN command as below.
COLUMN column_name CLEAR
4)To reset all column display attributes to their default values enter,
CLEAR COLUMNS
Suppressing and Restoring Column Display Attributes
Suppose you have formatted a column with the COLUMN command, and now you want to use the default display attributes for that column, but you don't want to remove the attributes you defined. You can achieve that by,
COLUMN column_name OFF
To restore the attributes you defined through COLUMN, use the ON clause:
COLUMN column_name ON
Printing a Line of Characters after Wrapped Column Values
RECSEP determines when the line of characters is printed:
-set RECSEP to EACH to print it after every row,
-set RECSEP to WRAPPED to print it only after wrapped column values,
-the default setting of RECSEP is WRAPPED.
RECSEPCHAR sets the character the line is made of.
To print a line of dashes after each wrapped column value, enter:
SET RECSEP WRAPPED
SET RECSEPCHAR "-"
Clarifying Your Report with Spacing and Summary Lines
With the BREAK command you can suppress duplicate values in a column specified in an ORDER BY clause. If the same value appears in consecutive rows and you BREAK on that column, it is printed only once, as shown below.
Note that here I did not use an ORDER BY clause. You should ORDER BY the column that you break on; if you do not, a break occurs every time the column value changes.
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
14 reza
14 voyeger
40 support
71 accounts5
71 bill5
71 chinmoy
71 crash5
71 mubeen5
10 rows selected.
SQL> break on station_id skip 2
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
reza
voyeger
40 support
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
SQL> break on station_id skip page
SQL> /
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
STATION_ID UNAME
---------- --------------------------------------------------
14 parvez
reza
voyeger
STATION_ID UNAME
---------- --------------------------------------------------
40 support
STATION_ID UNAME
---------- --------------------------------------------------
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
To insert n blank lines, use the BREAK command in the following form:
BREAK ON break_column SKIP n
where n can be 1 or 2 or ....
To skip a page, use the command in this form:
BREAK ON break_column SKIP PAGE
You may wish to insert blank lines or a blank page after every row.
To skip n lines after every row, use BREAK in the following form:
BREAK ON ROW SKIP n
To skip a page after every row, use
BREAK ON ROW SKIP PAGE
You can list your current break definition by entering the BREAK command with no clauses:
BREAK
You can remove the current break definition by entering the CLEAR command with the BREAKS clause:
CLEAR BREAKS
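The summary lines mentioned in the heading above come from the companion COMPUTE command, which prints an aggregate at each break. A sketch against the same users table used earlier:
BREAK ON station_id SKIP 1
COMPUTE COUNT LABEL 'users' OF uname ON station_id
SELECT station_id, uname FROM users ORDER BY station_id;
CLEAR COMPUTES removes the computed summaries, just as CLEAR BREAKS removes the break definitions.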
With COLUMN a column can be formatted. For string type data format is specified by A and then length. Like A7 means total column will span to 7 word length. For number data type the format is 99999 where number of 9s decide the length of digits of number data type.
Changing Column Headings
To change the heading of column a,b and c use commands like below. The vertical bar is used if you want to display the columns in a new line.
SQL> COLUMN A HEADING 'FIRST|COLUMN'
SQL> COLUMN C HEADING 'THIRD|COLUMN'
SQL> COLUMN B HEADING 'SECOND|COLUMN' FORMAT A7
SQL> select a,b,c from test1;
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
---------- ------- ------
1 Hi Hi2
Setting the Underline Character
You see the underline character in the above output under heading is set to -. If you want to change it to = then use,
SQL> SET UNDERLINE =
SQL> /
FIRST SECOND THIRD
COLUMN COLUMN COLUMN
========== ======= ======
1 Hi Hi2
Default Display of columns
1)A NUMBER column's width equals the width of the heading or the width of the FORMAT plus one space for the sign, whichever is greater. If you do not explicitly use FORMAT, then the column's width will always be at least the value of SET NUMWIDTH.
BY default the NUMWIDTH is set to 10.
SQL> select 22/7 "This is the pi value" from dual;
This is the pi value
--------------------
3.14285714
SQL> show numwidth
numwidth 10
2)The default width of datatype columns is the width of the column in the database. The column width of a LONG, CLOB, NCLOB or XMLType defaults to the value of SET LONGCHUNKSIZE or SET LONG, whichever is the smaller.
If LONG is set to 80 and LONGCHUNKSIZE is set to 90 then for CLOB database only first 80 character will be shown on Sql*plus by default.
3)The default width and format of unformatted DATE columns in SQL*Plus is determined by the database NLS_DATE_FORMAT parameter. Otherwise, the default format width is A9.
Listing and Resetting Column Display Attributes
1)To list the current display attributes for a given column, use the COLUMN command followed by the column name.
COLUMN column_name
2)To list the current display attributes for all columns, enter just the COLUMN command.
COLUMN
3)To reset the display attributes for a column to their default values, use the CLEAR clause of the COLUMN command as below.
COLUMN column_name CLEAR
4)To reset all column display attributes to their default values enter,
CLEAR COLUMNS
Suppressing and Restoring Column Display Attributes
Though COLUMN command you once format a column and now you want to use the default display attributes for the column, but you don't want to remove the attributes you have defined through the COLUMN command. You can achieve that by,
COLUMN column_name OFF
To restore the attributes you defined through COLUMN, use the ON clause:
COLUMN column_name ON
Printing a Line of Characters after Wrapped Column Values
RECSEP determines when the line of characters is printed;
-you set RECSEP to EACH to print after every line,
-to WRAPPED to print after wrapped lines,
-The default setting of RECSEP is WRAPPED.
RECSEPCHAR sets the character printed in each line.
To print a line of dashes after each wrapped column value, enter:
SET RECSEP WRAPPED
SET RECSEPCHAR "-"
Clarifying Your Report with Spacing and Summary Lines
With the BREAK command you can suppress the duplicate values in a column specified in an ORDER BY clause. If two columns values come in the subsequent rows and you use BREAK on the column then only one will be shown as below.
Note that here I did not use order by clause. You should use order by on the column that you break. If you do not do this, breaks occur every time the column value changes.
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
14 reza
14 voyeger
40 support
71 accounts5
71 bill5
71 chinmoy
71 crash5
71 mubeen5
10 rows selected.
SQL> break on station_id skip 2
SQL> select station_id,uname from users where station_id!=1 and rownum<=10;
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
14 parvez
reza
voyeger
40 support
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
SQL> break on station_id skip page
SQL> /
STATION_ID UNAME
---------- --------------------------------------------------
4 a.sadat
STATION_ID UNAME
---------- --------------------------------------------------
14 parvez
reza
voyeger
STATION_ID UNAME
---------- --------------------------------------------------
40 support
STATION_ID UNAME
---------- --------------------------------------------------
71 accounts5
bill5
chinmoy
crash5
mubeen5
10 rows selected.
To insert n blank lines, use the BREAK command in the following form:
BREAK ON break_column SKIP n
where n can be 1 or 2 or ....
To skip a page, use the command in this form:
BREAK ON break_column SKIP PAGE
You may wish to insert blank lines or a blank page after every row.
To skip n lines after every row, use BREAK in the following form:
BREAK ON ROW SKIP n
To skip a page after every row, use
BREAK ON ROW SKIP PAGE
You can list your current break definition by entering the BREAK command with no clauses:
BREAK
You can remove the current break definition by entering the CLEAR command with the BREAKS clause:
CLEAR BREAKS
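The suppression BREAK performs is easy to picture: the break column is printed only when its value changes. Below is a rough Python emulation, for illustration only; `apply_break` is a hypothetical helper, not part of SQL*Plus.

```python
# Rough emulation of SQL*Plus "BREAK ON column" duplicate suppression.
# As the text advises, rows should already be ordered by the break column.
def apply_break(rows):
    out, last = [], object()              # sentinel that equals nothing
    for station_id, uname in rows:
        shown = station_id if station_id != last else ""
        out.append((shown, uname))
        last = station_id
    return out

rows = [(4, "a.sadat"), (14, "parvez"), (14, "reza"), (40, "support")]
for sid, name in apply_break(rows):
    print(f"{str(sid):>10} {name}")       # repeated station_ids print blank
```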
Import fails with ORA-39005, ORA-31600: invalid input value NULL for parameter
Problem Description
I used both remap_schema (the import is performed as a different user than the one whose data was exported) and remap_tablespace (the tablespace is changed while importing), and the import operation failed with ORA-39005 and ORA-31600.
$host impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC
Import: Release 10.2.0.1.0 - 64bit Production on Thursday, 04 December, 2008 19:17:08
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: system
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39005: inconsistent arguments
ORA-31600: invalid input value NULL for parameter VALUE in function DBMS_DATAPUMP.METADATA_REMAP
Cause of the Problem
A NULL or invalid value was supplied for the parameter. Here the new value of a remap_tablespace parameter was treated as NULL because of the space after the colon in the line user1_HISTORY_DETAIL_SPC: user2_HISTORY_DETAIL_SPC: the shell splits the argument at that space, so the new tablespace name never reaches impdp.
Solution of the Problem
Ensure that no parameter is supplied with a NULL value. Rewriting the script as below, without the space, solves the problem.
impdp directory=d dumpfile=user1_metadata04-12-08.dmp remap_schema=user1:user2 table_exists_action=skip logfile=user1_metadata04-12-08.log \
remap_tablespace=user1_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_SPC:user2_HISTORY_DETAIL_SPC \
remap_tablespace=user1_HISTORY_SPC:user2_HISTORY_SPC \
remap_tablespace=user1_ACC_SPC:user2_SPC \
remap_tablespace=user1_HISTORY_DETAIL_INDX_SPC:user2_HIS_DETAIL_INDX_SPC \
remap_tablespace=user1_HISTORY_INDX_SPC:user2_HISTORY_INDX_SPC
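The root cause is worth seeing concretely: the shell splits the argument at the stray space, so impdp receives an empty new tablespace name. A short Python sketch of that tokenization follows; the `new_value` helper is hypothetical, written only for this illustration.

```python
import shlex

arg_bad  = "remap_tablespace=user1_SPC: user2_SPC"   # space after the colon
arg_good = "remap_tablespace=user1_SPC:user2_SPC"

def new_value(word):
    # The new tablespace name impdp would see in this word.
    return word.partition("=")[2].partition(":")[2]

print(shlex.split(arg_bad))    # splits into two words at the space
print(shlex.split(arg_good))   # stays one word, as impdp expects
print(repr(new_value(shlex.split(arg_bad)[0])))   # '' -> treated as NULL
```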
ORA-07445: exception encountered: core dump kghalp SIGSEGV ORA-10980
Problem Description
I wanted to do online redefinition of a table. Whenever I ran the start_redef_table procedure, it failed with ORA-03114 and ORA-03113 while creating the materialized view.
Below is the scenario.
SQL> exec dbms_redefinition.start_redef_table('M', 'HISTORY', 'HISTORY_INT');
ERROR:
ORA-03114: not connected to ORACLE
BEGIN dbms_redefinition.start_redef_table('M', 'HISTORY', 'HISTORY_INT'); END;
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
In the Alert log file it displays,
Sat Dec 6 16:34:40 2008
Errors in file /var/opt/dumpfile/udump/arjudb_ora_12860.trc:
ORA-07445: exception encountered: core dump [kghalp()+58] [SIGSEGV] [Address not mapped to object] [0x000000068] [] []
ORA-10980: prevent sharing of parsed query during Materialized View query generation
Sat Dec 6 16:36:39 2008
Cause of the Problem
This is an Oracle bug. When auditing is enabled and a materialized view is created (or start_redef_table is executed) while a commit/rollback/abort of a transaction occurs, memory that is being cleaned up is accessed, leading to a core dump.
Solution of the Problem
As this problem is an Oracle bug that fires only when auditing is enabled, there are two solutions.
1)Disable Audit and Restart database:
SQL> alter system set audit_trail=NONE scope=spfile;
System altered.
Since audit_trail is a static parameter, the database must be bounced for the change to take effect.
SQL> shutdown immediate;
SQL> startup
2)Apply the 10.2.0.3 patch set:
This bug is fixed in 10.2.0.3, so apply patch set 10.2.0.3.
ORA-02297: cannot disable constraint -dependencies exist
Problem Description
Whenever you try to disable a constraint of a table, it may fail with the error message ORA-02297: cannot disable constraint - dependencies exist, as below.
SQL> alter table transaction disable constraint TRANSACTION_PK;
alter table transaction disable constraint TRANSACTION_PK
*
ERROR at line 1:
ORA-02297: cannot disable constraint (OMS.TRANSACTION_PK) - dependencies exist
Cause of the Problem
The disable constraint command fails because the table is a parent table: foreign keys in other tables depend on this constraint.
Solution of the Problem
Two solutions exist for this problem.
1)Find the foreign key constraints that reference the table, disable those first, and then disable the table's own constraint.
The following query lists each dependent table and its constraint name. Then disable the child constraints first and the parent constraint last.
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
2 p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
3 FROM user_constraints p
4 JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
5 WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
6 AND c.constraint_type = 'R'
7 AND p.table_name = UPPER('&table_name');
Enter value for table_name: transaction
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('transaction')
Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TRANSACTION USER_SALARY_RECORD TRANSACTION_PK SYS_C005564
TRANSACTION TRANSACTION_DETAIL TRANSACTION_PK TRNSCTN_DTL_TRNSCTN_FK
SQL> alter table USER_SALARY_RECORD disable constraint SYS_C005564;
Table altered.
SQL> alter table TRANSACTION_DETAIL disable constraint TRNSCTN_DTL_TRNSCTN_FK;
Table altered.
SQL> alter table transaction disable constraint TRANSACTION_PK;
Table altered.
2)Disable the constraint with cascade option.
SQL> alter table transaction disable constraint TRANSACTION_PK cascade;
Table altered.
Expdp fails with ORA-39125, ORA-04031
Problem Description
I was performing a data pump export operation and it failed with ORA-39125 and ORA-04031 as below.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a
Export: Release 10.2.0.1.0 - Production on Tuesday, 16 December, 2008 17:07:44
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_FULL_02": full=y dumpfile=fulldb.dmp directory=d userid=system/********
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-04031: unable to allocate 28 bytes of shared memory ("shared pool","SELECT /*+rule*/ SYS_XMLGEN(...","sql area","ub1[]: qkexrXformVal")
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235
----- PL/SQL Call Stack -----
object line object
handle number name
64E077B4 14916 package body SYS.KUPW$WORKER
64E077B4 6300 package body SYS.KUPW$WORKER
64E077B4 9120 package body SYS.KUPW$WORKER
64E077B4 1880 package body SYS.KUPW$WORKER
64E077B4 6861 package body SYS.KUPW$WORKER
64E077B4 1262 package body SYS.KUPW$WORKER
64CB4398 2 anonymous block
Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped due to fatal error at 17:08:03
Cause of the Error
The ORA-04031 error is returned whenever Oracle attempts to allocate a large piece of contiguous memory in the shared pool and fails. Oracle first flushes all objects that are not currently in use from the pool, and the resulting free memory chunks are merged. If there is still no single chunk large enough to satisfy the request, ORA-04031 is returned.
In 10g, automatic shared memory management lets the DBA reserve one pool of shared memory, sized by the single parameter SGA_TARGET, from which the shared pool, the buffer cache, the java pool and the large pool are allocated. So simply increasing SGA_TARGET will likely solve the problem.
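The fragmentation aspect can be shown with a toy model (an illustration of contiguity only, not Oracle's real allocator): total free memory can exceed the request while no single contiguous chunk satisfies it.

```python
def can_allocate(free_chunks, request):
    # A request succeeds only if one contiguous chunk is big enough,
    # no matter how much memory is free in total.
    return any(chunk >= request for chunk in free_chunks)

free_chunks = [20, 15, 10]            # 45 bytes free, but fragmented
print(sum(free_chunks))               # prints 45
print(can_allocate(free_chunks, 28))  # prints False -> analogous to ORA-04031
```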
Solution
Let's see the sga_target value.
SQL> show parameter sga_t
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 100M
And SGA_MAX_SIZE value,
SQL> show parameter sga_max
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 100M
We see these two parameter settings are low, so we increase them and restart the database. SGA_TARGET cannot exceed SGA_MAX_SIZE, and since SGA_MAX_SIZE is a static parameter, the database must be bounced for the change to take effect.
SQL> alter system set sga_max_size=300M scope=spfile;
System altered.
SQL> alter system set sga_target=300M scope=spfile;
System altered.
SQL> startup force
ORACLE instance started.
Total System Global Area 314572800 bytes
Fixed Size 1248768 bytes
Variable Size 96469504 bytes
Database Buffers 209715200 bytes
Redo Buffers 7139328 bytes
Database mounted.
Database opened.
SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 300M
Now the export operation runs smoothly, as the shared pool finds enough memory for the operation.
SQL> $expdp full=y dumpfile=fulldb.dmp directory=d userid=system/a
How to put dumps in multiple locations
You have a very big database and you want to take a data pump export of it. Suppose your dump size would be 100G, but no single partition of your disk has 100G free. You may, however, have more than 100G free in total spread over several partitions: for example, 40G free in one partition, 50G in another and 30G in a third.
You can then span this 100G dump file over those partitions and take the dump. The combination of the two data pump export parameters DUMPFILE and FILESIZE makes this possible.
Suppose in /oradata1 40G is available.
in /oradata2 50G is available.
in /oradata3 30G is available.
And you have created directories dir1, dir2 and dir3 that represent /oradata1, /oradata2 and /oradata3 respectively.
If your approximate dump size is 100G then you would write your datapump export command as,
expdp user_name/pass dumpfile=dir1:part1.dmp,dir2:part2.dmp,dir3:part3.dmp,dir2:rest%U.dmp filesize=30G
In this case the dump files will be created sequentially.
First, a 30G dump file named part1.dmp is created in /oradata1.
Then a 30G dump file named part2.dmp is created in /oradata2.
Then a 30G dump file named part3.dmp is created in /oradata3.
Finally, the remaining 10G is written to /oradata2 as rest01.dmp.
The %U causes file names to be generated by Oracle sequentially: rest01.dmp, rest02.dmp, and so on.
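The arithmetic behind this split can be sketched as follows (a toy model with a hypothetical `split_dump` helper; actual file sizes depend on Data Pump internals):

```python
def split_dump(total_gb, filesize_gb, named_files, overflow_template):
    # Fill the named files up to FILESIZE each, then generate
    # overflow files (rest01.dmp, rest02.dmp, ...) for the remainder.
    pieces, remaining = [], total_gb
    for name in named_files:
        size = min(filesize_gb, remaining)
        pieces.append((name, size))
        remaining -= size
    n = 1
    while remaining > 0:
        size = min(filesize_gb, remaining)
        pieces.append((overflow_template % n, size))
        remaining -= size
        n += 1
    return pieces

plan = split_dump(100, 30, ["part1.dmp", "part2.dmp", "part3.dmp"], "rest%02d.dmp")
for name, size in plan:
    print(name, f"{size}G")   # part1..part3 at 30G each, then rest01.dmp at 10G
```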
Below is an example which takes a dump of the users tablespace, with each dump file limited to 300K and the files spanning the C:\, D:\ and E:\ drives.
SQL> create or replace directory d as 'd:';
Directory created.
SQL> create or replace directory e as 'e:';
Directory created.
SQL> create directory c as 'c:';
Directory created.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:16:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_01": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=300K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."O_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."O_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."O_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_01 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_01" successfully completed at 12:16:33
For dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp,
first part1.dmp is created in the D: drive, sized 300K.
Then part2.dmp is created in the E: drive, sized 300K.
Then all remaining parts of the dump are created in the C: drive, each sized 300K (except possibly the last one, since the dump size need not be a multiple of 300K), and named rest01.dmp, rest02.dmp and so on.
If you use original export, you control the size with the FILE and FILESIZE parameters. FILE takes a list of file names to be created and FILESIZE sets the maximum size of each dump file.
Troubleshoot ORA-02292, ORA-02449 and ORA-02266
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKUSER_TASK) violated - child record
found
SQL> drop table task;
drop table task
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
SQL> truncate table task;
truncate table task
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
To get the list of child tables, along with the constraint names that reference the TASK table, issue:
SQL> set lines 140
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p
JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('task')
Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK_REF TASK_PK TASKREF_TASK
TASK TASK TASK_PK TASK_TASK_FK
TASK TASK_SCHEDULE TASK_PK TASKSCHEDULE_TASK
TASK TASK_USER TASK_PK TASKUSER_TASK
TASK TASK_ATTACHED_FILEINFO TASK_PK TASK_ATTACH_FILE_FK
Let's start disabling the constraints on the child tables one by one, trying to delete rows from the TASK table after each.
SQL> ALTER TABLE TASK_USER DISABLE CONSTRAINT TASKUSER_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKSCHEDULE_TASK) violated - child record found
SQL> ALTER TABLE task_schedule DISABLE CONSTRAINT TASKSCHEDULE_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASK_ATTACH_FILE_FK) violated - child record found
SQL> ALTER TABLE TASK_ATTACHED_FILEINFO DISABLE CONSTRAINT TASK_ATTACH_FILE_FK;
Table altered.
SQL> delete from task;
1289 rows deleted.
And now the delete succeeds. To get a list of the parent tables that a given child table references, issue:
SQL> SELECT c.table_name "Child Table", p.table_name "Parent Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p JOIN user_constraints c
ON (p.constraint_name = c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND c.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND c.table_name = UPPER('&table_name')
new 7: AND c.table_name = UPPER('task')
Child Table Parent Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK TASK_PK TASK_TASK_FK
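The trial-and-error order above can also be computed up front from the dependency query's output: child tables must be handled before their parents, which is a topological sort of the foreign key graph. A hypothetical Python sketch follows (`delete_order` is not an Oracle utility; the edges are the ones the query reported):

```python
from collections import defaultdict

def delete_order(fk_edges):
    # fk_edges: (child_table, parent_table) pairs. Returns an order in
    # which constraints can be disabled / rows deleted: children first.
    parents = defaultdict(set)   # table -> tables it references
    children = defaultdict(set)  # table -> tables referencing it
    tables = set()
    for child, parent in fk_edges:
        tables.update((child, parent))
        if child != parent:      # ignore self-references like TASK_TASK_FK
            parents[child].add(parent)
            children[parent].add(child)
    order = []
    ready = [t for t in tables if not children[t]]
    while ready:
        t = ready.pop()
        order.append(t)
        for p in parents[t]:
            children[p].discard(t)
            if not children[p]:
                ready.append(p)
    return order

edges = [("TASK_USER", "TASK"), ("TASK_SCHEDULE", "TASK"),
         ("TASK_ATTACHED_FILEINFO", "TASK"), ("TASK", "TASK")]
print(delete_order(edges))   # child tables first, TASK last
```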
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKUSER_TASK) violated - child record
found
SQL> drop table task;
drop table task
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
SQL> truncate table task;
truncate table task
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
To get list of child table with the constraint name that is referencing the TASK table issue,
SQL> set lines 140
SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p
JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND p.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND p.table_name = UPPER('&table_name')
new 7: AND p.table_name = UPPER('task')
Parent Table Child Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK_REF TASK_PK TASKREF_TASK
TASK TASK TASK_PK TASK_TASK_FK
TASK TASK_SCHEDULE TASK_PK TASKSCHEDULE_TASK
TASK TASK_USER TASK_PK TASKUSER_TASK
TASK TASK_ATTACHED_FILEINFO TASK_PK TASK_ATTACH_FILE_FK
Let's disable the constraints on the child tables one by one and try to delete rows from the TASK table.
SQL> ALTER TABLE TASK_USER DISABLE CONSTRAINT TASKUSER_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASKSCHEDULE_TASK) violated - child record found
SQL> ALTER TABLE task_schedule DISABLE CONSTRAINT TASKSCHEDULE_TASK;
Table altered.
SQL> delete from task;
delete from task
*
ERROR at line 1:
ORA-02292: integrity constraint (OMS.TASK_ATTACH_FILE_FK) violated - child record found
SQL> ALTER TABLE TASK_ATTACHED_FILEINFO DISABLE CONSTRAINT TASK_ATTACH_FILE_FK;
Table altered.
SQL> delete from task;
1289 rows deleted.
And now the delete succeeds. To get the list of parent tables that a given child table references, issue:
SQL> SELECT c.table_name "Child Table", p.table_name "Parent Table",
p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
FROM user_constraints p JOIN user_constraints c
ON (p.constraint_name = c.r_constraint_name)
WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
AND c.constraint_type = 'R'
AND c.table_name = UPPER('&table_name');
Enter value for table_name: task
old 7: AND c.table_name = UPPER('&table_name')
new 7: AND c.table_name = UPPER('task')
Child Table Parent Table Parent Constraint Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TASK TASK TASK_PK TASK_TASK_FK
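The disable-then-delete steps above can be scripted. Below is a minimal Python sketch (illustrative only, not from the original article) that takes the child-constraint rows returned by the user_constraints query and generates the ALTER TABLE ... DISABLE CONSTRAINT statements to run before the delete, plus the matching ENABLE statements to run afterwards.

```python
def constraint_ddl(child_constraints):
    """Build DISABLE and ENABLE statements for a list of
    (child_table, constraint_name) pairs, as returned by the
    user_constraints query against the parent table."""
    disable = [f"ALTER TABLE {t} DISABLE CONSTRAINT {c}"
               for t, c in child_constraints]
    # Re-enable in reverse order once the parent rows are deleted
    enable = [f"ALTER TABLE {t} ENABLE CONSTRAINT {c}"
              for t, c in reversed(child_constraints)]
    return disable, enable

# Rows taken from the query output above
rows = [("TASK_USER", "TASKUSER_TASK"),
        ("TASK_SCHEDULE", "TASKSCHEDULE_TASK"),
        ("TASK_ATTACHED_FILEINFO", "TASK_ATTACH_FILE_FK")]
disable, enable = constraint_ddl(rows)
```

Remember to re-enable the constraints after the delete, or they stay disabled and integrity is no longer enforced.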
ORA-02070: database does not support in this context
Problem Description
I wanted to find the differences between the columns of the table HISTORY in two databases using a database link as below, and it resulted in error ORA-02070.
SQL> select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY';
select * from cols where table_name='HISTORY' minus select * from cols@tiger1.net where table_name='HISTORY'
*
ERROR at line 1:
ORA-02070: database TIGER1.NET does not support operator NLS_CHARSET_DECL_LEN
in this context
I found that cols contains a LONG column, which causes the error above. If I query only the LONG column data_default, the same error returns with a different message.
SQL> select data_default from cols minus select data_default from cols@tiger1.net;
select data_default from cols minus select data_default from cols@tiger1.net
*
ERROR at line 1:
ORA-02070: database does not support in this context
Experiment and cause of the Error
The reason behind the error is the LONG datatype of the data_default column. Whenever you access a LONG column of cols over a database link, the error above returns. However, when I created a table with a LONG column for an experiment, I did not get the error above; instead I got ORA-00997. I conclude that only system-defined tables/views/synonyms throw ORA-02070 here, while user-defined tables/views/synonyms throw ORA-00997.
For experiment,
1)Create a table t with only a LONG column in both databases and then try a MINUS operation over the database link.
SQL> create table t(a long);
Table created.
SQL> select * from t minus select * from t@tiger1.net;
select * from t minus select * from t@tiger1.net
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
ORA-02063: preceding line from TIGER1.NET
So, to avoid these errors, simply do not perform operations over a LONG column through a database link.
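Since MINUS cannot be pushed across the link for LONG columns, one workaround is to fetch the rows from each database separately and diff them on the client. Below is a minimal Python sketch of that client-side MINUS (illustrative only; the two row lists stand in for the result sets fetched from each database).

```python
def client_side_minus(local_rows, remote_rows):
    """Emulate SQL MINUS on the client: rows present in the local
    result set but absent from the remote one, duplicates removed,
    original order preserved."""
    remote = set(remote_rows)
    seen = set()
    out = []
    for row in local_rows:
        if row not in remote and row not in seen:
            out.append(row)
            seen.add(row)
    return out

# Hypothetical column metadata fetched from each database separately
local = [("HISTORY", "ID", "NUMBER"), ("HISTORY", "NOTE", "LONG")]
remote = [("HISTORY", "ID", "NUMBER")]
diff = client_side_minus(local, remote)
```

Because each result set is fetched with an ordinary local query, the LONG restriction on database links never comes into play.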
Interact with executing Data Pump jobs in Oracle
Oracle Data Pump export jobs run on the server machine, whether you start them there or from a remote client. Even if you close your window, the export keeps running in the background on the server. You may wish to control a job that is running in the background: check its status, see which object it is currently processing, or kill it.
To do that, first determine which jobs are running and check their state by querying the dba_datapump_jobs view.
SQL> select job_name,state from dba_datapump_jobs;
JOB_NAME STATE
------------------------------ ------------------------------
SYS_EXPORT_SCHEMA_01 EXECUTING
You may also wish to see other fields of the view like owner_name, operation etc.
Now, to enter the interactive mode of an executing job, just ATTACH to it as below,
SQL> $expdp maximsg/a attach=SYS_EXPORT_SCHEMA_01
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 11:43:49
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Job: SYS_EXPORT_SCHEMA_01
Owner: MAXIMSG
Operation: EXPORT
Creator Privs: FALSE
GUID: BF0614AD15254C7B964B78333B7D9E1A
Start Time: Thursday, 01 January, 2009 11:38:30
Mode: SCHEMA
Instance: tiger
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,dat
a_pump_dir%U.dmp
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: E:\oracle\product\10.2.0\admin\tiger\dpdump\data_pump_dir%u.dmp
Dump File: D:\PART1.DMP
bytes written: 704,512
Dump File: E:\PART2.DMP
bytes written: 4,096
Dump File: E:\ORACLE\PRODUCT\10.2.0\ADMIN\TIGER\DPDUMP\DATA_PUMP_DIR01.DMP
bytes written: 4,096
Worker 1 Status:
State: EXECUTING
Object Schema: MAXIMSG
Object Name: ACCOUNTING
Object Type: SCHEMA_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 171
Completed Rows: 8,169,569
Worker Parallelism: 1
Now you are in interactive mode. To return to logging (command-line) mode, just type continue_client.
Export> continue_client
To kill the executing job,
Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
In this way, through the ATTACH parameter, you can interact with a running job. You can interact with a Data Pump import operation in the same way.
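The query-and-attach step can also be scripted: given rows from dba_datapump_jobs, pick the executing jobs and build the corresponding expdp ATTACH command lines. A small Python sketch (illustrative only; the connect string maximsg/a is the one used in this article and stands in for your own):

```python
def attach_commands(job_rows, connect="maximsg/a"):
    """Return an 'expdp ... attach=JOB_NAME' command line for every
    job currently in the EXECUTING state.
    job_rows: iterable of (job_name, state) tuples."""
    return [f"expdp {connect} attach={name}"
            for name, state in job_rows if state == "EXECUTING"]

# Hypothetical rows as returned by: select job_name, state from dba_datapump_jobs
rows = [("SYS_EXPORT_SCHEMA_01", "EXECUTING"),
        ("SYS_IMPORT_FULL_01", "NOT RUNNING")]
cmds = attach_commands(rows)
```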
Oracle - Formatting SQL*Plus Reports Part 2
Computing Summary of a column
With a combination of the BREAK and COMPUTE commands you can perform several calculations on SQL reports in SQL*Plus.
Remember that the COMPUTE command has no effect without a corresponding BREAK command.
Below is the list of SQL*Plus COMPUTE functions and their effects.
1)SUM: Sum of the values in the column.
2)MINIMUM: Minimum value in the column.
3)MAXIMUM: Maximum value in the column.
4)AVG: Average of the values in the column.
5)STD: Standard deviation of the values in the column.
6)VARIANCE: Variance of the values in the column.
7)COUNT: Number of non-null values in the column.
8)NUMBER: Number of rows in the column.
Let's look at our data,
SQL> select dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
6 rows selected.
Now we wish to compute the total of SALARY by department. To do that use,
SQL> break on dept_id
SQL> compute sum of salary on dept_id
SQL> /
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
Ershad 22000
Dulal 40000
********** ----------
sum 122000
20 Riaz 35000
Hasib 60000
********** ----------
sum 95000
30 Raihan 25000
********** ----------
sum 25000
6 rows selected.
Note that the word sum appears at every break. If you don't want the word sum printed, do as follows:
SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY SKIP 1;
SQL> select dept_id DUMMY,dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
----------
122000
20 Riaz 35000
20 Hasib 60000
----------
95000
30 Raihan 25000
----------
25000
6 rows selected.
To compute the sum of salaries only at the end of the report,
SQL> COLUMN DUMMY NOPRINT;
SQL> COMPUTE SUM OF SALARY ON DUMMY;
SQL> BREAK ON DUMMY;
SQL> SELECT NULL DUMMY,DEPT_ID,NAME,SALARY FROM EMP;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
----------
242000
6 rows selected.
To calculate the grand total of salary and give it a label, do:
SQL> BREAK ON REPORT
SQL> COMPUTE SUM LABEL TOTAL OF SALARY ON REPORT
SQL> select dept_id,name,salary from emp;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
10 Ershad 22000
10 Dulal 40000
20 Riaz 35000
20 Hasib 60000
30 Raihan 25000
----------
TOTAL 242000
6 rows selected.
To compute both the average and the sum of salaries of a department, do:
SQL> BREAK ON DEPT_ID
SQL> COMPUTE AVG SUM OF SALARY ON DEPT_ID
SQL> select dept_id,name,salary from emp where dept_id=10;
DEPT_ID NAME SALARY
---------- ------------ ----------
10 Arju 60000
Ershad 22000
Dulal 40000
********** ----------
avg 40666.6667
sum 122000
To see current compute settings,
SQL> compute
COMPUTE sum LABEL 'sum' OF SALARY ON DUMMY
COMPUTE sum LABEL 'TOTAL' OF SALARY ON REPORT
COMPUTE avg LABEL 'avg' sum LABEL 'sum' OF SALARY ON DEPT_ID
To remove all COMPUTE definitions,
SQL> clear compute
computes cleared
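The BREAK/COMPUTE subtotal logic above is easy to mirror in a client program. A minimal Python sketch (illustrative only, using the same sample data) that computes the per-department subtotals and the grand total, as BREAK ON dept_id with COMPUTE SUM OF salary would:

```python
def salary_report(rows):
    """Group (dept_id, name, salary) rows by department and
    compute a salary subtotal per department plus a grand total.
    Insertion order of departments is preserved."""
    subtotals = {}
    for dept, _name, salary in rows:
        subtotals[dept] = subtotals.get(dept, 0) + salary
    return subtotals, sum(subtotals.values())

# The EMP rows from the report above
emp = [(10, "Arju", 60000), (10, "Ershad", 22000), (10, "Dulal", 40000),
       (20, "Riaz", 35000), (20, "Hasib", 60000), (30, "Raihan", 25000)]
subtotals, total = salary_report(emp)
```

The subtotals match the sums SQL*Plus printed at each break (122000, 95000, 25000) and the total matches the report grand total (242000).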
Oracle- Controlling the Autotrace Report in sql*plus
In SQL*Plus, the AUTOTRACE command shows the execution plan of successful SQL DML statements (that is, SELECT, DELETE, UPDATE and INSERT), which is useful for monitoring and tuning the performance of these statements.
You can control the report by setting the AUTOTRACE system variable.
Following is the available AUTOTRACE settings.
1)SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default. Result of a query will be displayed on the console as usual.
2)SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows the optimizer execution path along with query output and note but does not show statistics.
3)SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows the SQL statement execution statistics along with query output but does not display execution plan.
4)SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics along with query output.
5)SET AUTOTRACE TRACEONLY: Report displays both optimizer execution path as well as execution statistics but does not display query output. If STATISTICS is enabled, query data is still fetched, but not printed.
Oracle Understanding Execution Plan Statistics
Below is a sample execution statistics report.
SQL> set autot trace statistics
SQL> select * from tab;
107 rows selected.
Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
898 consistent gets
0 physical reads
0 redo size
3459 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
You may want to know what these fields indicate. Below are the details.
1)recursive calls: To process SQL statements, Oracle Database maintains internal tables. Processing a statement sometimes requires changes to these internal tables; when Oracle Database needs to make such a change, it generates an internal SQL statement, which in turn generates a recursive call.
2)db block gets: Number of times a CURRENT block was requested.
3)consistent gets: Number of times a consistent read was requested for a block. These are the logical reads, indicating how many blocks had to be accessed to process the query.
4)physical reads: Total number of data blocks read from disk. This number equals the value of "physical reads direct" + all reads into buffer cache.
5)redo size: Total amount of redo generated, in bytes, while processing the statement.
6)bytes sent through SQL*Net to client: Total number of bytes sent to the client from the foreground processes.
7)bytes received through SQL*Net from client: Total number of bytes received from the client over Oracle Net.
8)SQL*Net round-trips to/from client: Total number of Oracle Net messages sent to and received from the client.
9)sorts (memory):Number of sort operations that were performed completely in memory and did not require any disk writes.
10)sorts (disk):Number of sort operations that required at least one disk write.
11)rows processed: Number of rows processed during the operation.
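When collecting these numbers across many runs, it helps to parse the statistics block programmatically. A minimal Python sketch (illustrative only) that turns the report lines into a name-to-value dictionary:

```python
def parse_autotrace_stats(text):
    """Parse AUTOTRACE statistics lines like '8  recursive calls'
    into {'recursive calls': 8}. Separator and blank lines are
    skipped because they don't start with a number."""
    stats = {}
    for line in text.splitlines():
        parts = line.strip().split(None, 1)  # value, then the stat name
        if len(parts) == 2 and parts[0].isdigit():
            stats[parts[1]] = int(parts[0])
    return stats

# Lines taken from the report above
report = """\
          8  recursive calls
          0  db block gets
        898  consistent gets
        107  rows processed"""
stats = parse_autotrace_stats(report)
```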
February 27, 2009
ORA-39095: Dump file space has been exhausted
Problem Description
I ran my Data Pump export with multiple dumpfile arguments combined with the filesize parameter, but it failed with ORA-39095 as below.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:23:30
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_05": maximsg/******** dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespace
s=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 188416 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_05" stopped due to fatal error at 12:23:47
A variant of the above is shown below.
>expdp maximsg/a@tiger tablespaces=users filesize=200K directory=d dumpfile=b.dmp
Export: Release 10.2.0.1.0 - Production on Saturday, 03 January, 2009 22:29:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_08": maximsg/********@tiger tablespaces=users filesize=200K directory=d dumpfile=b.d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 208896 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_08" stopped due to fatal error at 10:30:17
Note that both cases return the same error but report different byte counts: in the first case "Unable to allocate 188416 bytes" and in the second "Unable to allocate 208896 bytes".
Cause of the Error
There are several causes for ORA-39095.
1)If the total size of the dump files specified in the Data Pump parameters is less than the total size of the dump the export produces, ORA-39095 returns. In the second case, filesize is specified as 200K and only one dump file is given, so the dump file set can hold at most 200K; if the dump to be produced is larger than 200K, the error above returns.
Note that the %U specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export has completed, ORA-39095 returns again. If %U were specified in the second case, the maximum dump file set size would be 99*200K=19800K; a dump larger than that would still raise the error.
2)As in the first case, the filesize value may simply be too small. Here filesize was set to 188417 bytes (about 184K), and the error reports "unable to allocate 188416 bytes", which is 188417 rounded down to a multiple of the 4K allocation granule. The solution is to allow one extra 4K block: raising filesize to 188K (192512 bytes, the next 4K multiple) solves the problem.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:25:09
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_06": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=use
rs
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."OUT_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."OUT_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."OUT_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_06 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
C:\REST04.DMP
C:\REST05.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully completed at 12:25:32
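The granule arithmetic above can be checked with a quick sketch. Assuming the dump file is allocated in 4 KB granules (an assumption based on the behavior observed here), the requested filesize of 188417 bytes is effectively capped at the 4 KB multiple below it, and the next usable value is the 4 KB multiple above it, which is exactly 188K:

```python
GRANULE = 4096  # assumed 4 KB file allocation granule

def effective_filesize(requested_bytes):
    """Requested filesize rounded down to the allocation granule."""
    return (requested_bytes // GRANULE) * GRANULE

def next_usable_filesize(requested_bytes):
    """Smallest granule multiple that is not below the request."""
    return -(-requested_bytes // GRANULE) * GRANULE  # ceiling division

eff = effective_filesize(188417)    # matches the 188416 in the error message
fix = next_usable_filesize(188417)  # 192512 bytes, i.e. exactly 188K
```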
Solution of the Error
1)Increase the number of dump files so that the total size of the dump files specified in expdp exceeds the size of the dump the export produces.
With the 'wildcard' specification (%U) the dump file can expand up to 99 files.
2)Use a bigger filesize value.
3)Another option is to spread the dump files across multiple directories, e.g.,
dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp
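To size filesize in advance, the capacity check from cause 1) and the 99-file %U limit can be combined. A Python sketch (illustrative only) that computes how many files of a given filesize the estimated dump needs and whether a single %U template can hold it:

```python
MAX_WILDCARD_FILES = 99  # a %U template expands to at most 99 files

def files_needed(dump_bytes, filesize_bytes):
    """Number of dump files required, rounded up."""
    return -(-dump_bytes // filesize_bytes)  # ceiling division

def fits_one_wildcard(dump_bytes, filesize_bytes):
    """True if a single %U dump file template can hold the dump."""
    return files_needed(dump_bytes, filesize_bytes) <= MAX_WILDCARD_FILES

# The 20.12 MB estimate from the job above, at filesize=200K:
estimate = int(20.12 * 1024 * 1024)
n = files_needed(estimate, 200 * 1024)
ok = fits_one_wildcard(estimate, 200 * 1024)  # False: more than 99 files
```

With a 20.12 MB dump and filesize=200K, more than 99 files would be needed, so filesize must be raised or extra dump file templates supplied.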
I ran my datapump export operation with the combination of multiple dumpfile arguments and filesize parameter but it fails with ORA-39095 as below.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:23:30
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_05": maximsg/******** dumpfile=d:parti1.dmp,e:parti2.dmp,c:resit%U.dmp filesize=188417 tablespace
s=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 188416 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_05" stopped due to fatal error at 12:23:47
Just a variant of the above is below.
>expdp maximsg/a@tiger tablespaces=users filesize=200K directory=d dumpfile=b.dmp
Export: Release 10.2.0.1.0 - Production on Saturday, 03 January, 2009 22:29:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_08": maximsg/********@tiger tablespaces=users filesize=200K directory=d dumpfile=b.d
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39095: Dump file space has been exhausted: Unable to allocate 208896 bytes
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_08" stopped due to fatal error at 10:30:17
Note that both in first and second case same errors are return but they show various bytes in order of unable to allocate. In first case Unable to allocate 188416 bytes and in second case Unable to allocate 208896 bytes.
Cause of the Error
They are many cause for which ORA-39095 is fired.
1)If the summation of dumpfile sizes that is specified in the datapump parameters is less than total dumpfile size that produces then ORA-39095 returns. Like in the second case the filesize is specified as 200K and only one dumpfile is specified. So maximum dumpfile size that can be produced is no more than 200K. If dumpfile to be produced is greater than 100K then above error will return.
Note that %U specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export has completed, it will again return the ORA-39095 error. If in the second case %U is specified then maximum dumpfile size can be 99*200K=19800K. If dumpfile to be produced is greater than 19800K then error will return.
2)Like in the first case the size of filesize is too small and hence the error returns. filesize is defined about 184K(188407 bytes). And in the error message it shows unable to allocate 184K(188407 bytes). The solutions is and extra block is needed. As with 4k (with standard block size 8k) a block is generated so 188K filesize will solve the problem.
maximsg@TIGER> $expdp maximsg/a dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users
Export: Release 10.2.0.1.0 - Production on Thursday, 01 January, 2009 12:25:09
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MAXIMSG"."SYS_EXPORT_TABLESPACE_06": maximsg/******** dumpfile=d:part1.dmp,e:part2.dmp,c:rest%U.dmp filesize=188K tablespaces=users
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 20.12 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "CR_2"."OUT_CDR_TEST" 310.9 KB 2166 rows
. . exported "CR_2"."OUT_CDR" 484.4 KB 2606 rows
. . exported "CR_2"."USER_IP" 126 KB 3403 rows
. . exported "CR_2"."OUT_CDR_TEMP" 33.72 KB 361 rows
. . exported "CR_2"."USERS" 42.57 KB 230 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.851 KB 15 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for MAXIMSG.SYS_EXPORT_TABLESPACE_06 is:
D:\PART1.DMP
E:\PART2.DMP
C:\REST01.DMP
C:\REST02.DMP
C:\REST03.DMP
C:\REST04.DMP
C:\REST05.DMP
Job "MAXIMSG"."SYS_EXPORT_TABLESPACE_06" successfully completed at 12:25:32
Solutions to the Error
1)Increase the number of dump files so that the total dump produced by the export operation is smaller than the combined size of the dump files specified to expdp.
With the 'wildcard' specification (%U), the dump file set can expand up to 99 files.
2)Use a bigger filesize value.
3)Another option is to spread the dump files across multiple directory objects, for example,
dumpfile=dmpdir1:part1_%U.dmp, dmpdir2:part2_%U.dmp, dmpdir3:part3_%U.dmp
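For example, assuming directory objects dmpdir1 through dmpdir3 have already been created and granted (the directory names, credentials, and filesize below are placeholders, not values from the runs above), the full command might look like:

```shell
expdp maximsg/password \
  dumpfile=dmpdir1:part1_%U.dmp,dmpdir2:part2_%U.dmp,dmpdir3:part3_%U.dmp \
  filesize=500M tablespaces=users
```

Spreading the %U sets over several directory objects lets each set grow to its 99-file limit on a different disk, so the export is bounded by the sum of all three capacities rather than one.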