Saturday, August 23, 2008

IMP-00020: long column too large for column buffer size (7)

Encountered this problem while importing data from a 9i database into a 10g database. The table structure had already been created in the target.

. . importing table "TEMP_PRE_ACC_INFO"
IMP-00020: long column too large for column buffer size (7)


The table definition in the source database (9i):
SQL> desc cmsopr.TEMP_PRE_ACC_INFO;
Name Null? Type
----------------------------------------- -------- ----------------------------
MSISDN VARCHAR2(8)
SUB_ID NUMBER(30)
SUB_TYPE VARCHAR2(10)
PERSONNAME VARCHAR2(50)
PERSONADDRESS VARCHAR2(100)
EMAIL VARCHAR2(50)
CONTACTNO VARCHAR2(10)
COMPANYNAME VARCHAR2(50)
COMPANYADDRESS VARCHAR2(100)
LASTUPDATEDATE TIMESTAMP(3)
DATEOFCREATION TIMESTAMP(3)
LAST_UPD_ID VARCHAR2(8)


And the definition in the target database (10g):

SQL> desc cmsopr.TEMP_PRE_ACC_INFO;
Name Null? Type
----------------------------------------- -------- ----------------------------
MSISDN VARCHAR2(8)
SUB_ID NUMBER(30)
SUB_TYPE VARCHAR2(10)
PERSONNAME VARCHAR2(50)
PERSONADDRESS VARCHAR2(100)
EMAIL VARCHAR2(50)
CONTACTNO VARCHAR2(10)
COMPANYNAME VARCHAR2(50)
COMPANYADDRESS VARCHAR2(100)
LASTUPDATEDATE TIMESTAMP(0)
DATEOFCREATION TIMESTAMP(0)
LAST_UPD_ID VARCHAR2(8)


According to Metalink Note 286597.1, this is a bug in 8i and 9i that occurs when a table is created over a database link.
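
Based on the note, the fix for my case is to correct the fractional-second precision recorded for the two TIMESTAMP columns in the target (10g) table so that it matches the source definition, then re-run the import. A minimal sketch, assuming the source precision of 3 shown above is the correct one:

SQL> -- DATA_SCALE holds the fractional-second precision of a TIMESTAMP column
SQL> select column_name, data_type, data_scale
from dba_tab_cols
where owner = 'CMSOPR' and table_name = 'TEMP_PRE_ACC_INFO'
and data_type like 'TIMESTAMP%';

SQL> -- align the target columns with the 9i definition before importing again
SQL> alter table cmsopr.TEMP_PRE_ACC_INFO modify (LASTUPDATEDATE TIMESTAMP(3));
SQL> alter table cmsopr.TEMP_PRE_ACC_INFO modify (DATEOFCREATION TIMESTAMP(3));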

Below is the extract from Metalink.


Subject: Oracle9i: IMP-20 on Import of Table with TIMESTAMP Column that was Created via Database Link

Doc ID: Note:286597.1 Type: PROBLEM

Last Revision Date: 04-JUL-2008 Status: PUBLISHED

The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 9.2.0.5
- Oracle Server - Personal Edition - Version: 8.1.7.0 to 9.2.0.5
- Oracle Server - Standard Edition - Version: 8.1.7.0 to 9.2.0.5
This problem can occur on any platform.


ERRORS
------

IMP-00020: long column too large for column buffer size (7)


SYMPTOMS
--------

In a remote Oracle8i or higher release database, you have a table with
a column of the TIMESTAMP data type:

SQL> connect scott/tiger
SQL> alter session set nls_timestamp_format='YYYY-MM-DD HH24:MI:SSXFF';
SQL> create table my_original_tab (nr number, stamp_col timestamp(3));
SQL> insert into my_original_tab values (1, '2004-10-20 18:34:31.123456');
SQL> commit;
SQL> select * from my_original_tab;

NR STAMP_COL
---------- --------------------------------
1 2004-10-20 18:34:31.123


In your local Oracle8i or Oracle9i database you have created a database link:

SQL> connect system/manager
SQL> grant create database link to scott;

SQL> connect scott/tiger
SQL> alter session set nls_timestamp_format='MON DD, YYYY HH:MI:SSXFF AM';
SQL> create database link orcl.oracle.com
connect to scott identified by tiger using 'orcl';


Now you create a new table in this database with the CREATE TABLE AS SELECT
statement (CTAS), accessing the original table via the database link:

SQL> create table my_tab
as select * from my_original_tab@orcl.oracle.com;
SQL> select * from my_tab;

NR STAMP_COL
---------- --------------------------------
1 OCT 20, 2004 06:34:31.123 PM


You export this table:

% exp scott/tiger file=exp_tab.dmp log=exp_t.log tables=my_tab

About to export specified tables via Conventional Path ...
. . exporting table MY_TAB 1 rows exported
Export terminated successfully without warnings.


Then you try to import from this export dump file, e.g. into a different user:

% imp system/manager file=exp_tab.dmp log=imp_t.log fromuser=scott touser=hugo

. importing SCOTT's objects into HUGO
. . importing table "MY_TAB"
IMP-00020: long column too large for column buffer size (7)
Import terminated successfully with warnings.


- Increasing the value for the Import BUFFER parameter does not solve the
IMP-20 error.
- Pre-creating the table in the target schema, and running import with
IGNORE=Y does not solve the IMP-20 error.
- Specifying Import parameter COMMIT=Y does not import any row for this table.
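
(An aside, not from the note: before fixing tables one by one, it may help to see how many columns are affected. Any TIMESTAMP column recorded with precision 0 is a candidate if its table was built via CTAS over a database link; since TIMESTAMP(0) can also be a deliberate choice, compare against the source definition before altering anything.)

SQL> -- candidate columns whose dictionary precision may have been zeroed
SQL> select owner, table_name, column_name, data_type
from dba_tab_cols
where data_type = 'TIMESTAMP(0)';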


CAUSE
-----

If you create a table with the CREATE TABLE ... AS SELECT (CTAS) syntax,
and the original table is located on a remote database that you access
via a database link, and the table has a column of the TIMESTAMP data type,
then the newly created table has an incorrect precision in the data
dictionary. Example:

If you query the view dba_tab_cols for the original table in the
remote database, you see:

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_ORIGINAL_TAB'
order by column_id;

DATA_TYPE       DATA_LENGTH DATA_PRECISION DATA_SCALE  COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER                   22                                     1 NR
TIMESTAMP(3)             11                         3          2 STAMP_COL


However, a similar query in the local database (where you ran the CTAS)
gives a different result:

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_TAB'
order by column_id;

DATA_TYPE       DATA_LENGTH DATA_PRECISION DATA_SCALE  COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER                   22                                     1 NR
TIMESTAMP(0)             11                         0          2 STAMP_COL


If you export this table, there is a mismatch in the table's metadata in
the export dump file. As a result, import will fail: import expects a field
of 7 bytes (used to store the value of a timestamp without fractional
seconds), but actually encounters values of 11 bytes (used to store
timestamps with fractional-second precision), and an error is produced:

IMP-00020: long column too large for column buffer size (7)
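
(Illustrative query, not part of the note: the two storage sizes can be seen directly with the DUMP function. Typ=180 is the internal TIMESTAMP type code; Len=7 means only the date/time bytes are stored, while Len=11 means four extra bytes of fractional seconds follow.)

SQL> select nr, dump(stamp_col) stamp_dump from my_tab;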


FIX
---

This defect has been fixed in Oracle10g. This means that if the table is
created in Oracle10g via a database link and exported afterwards, the
import of that table with a TIMESTAMP column will not result in an
IMP-20 error.

In Oracle8i and Oracle9i, work around the defect by correcting the
precision of the TIMESTAMP column in the data dictionary. Example:

1. In the local database, determine the correct precision of the column
with the TIMESTAMP data type. You can do this with a simple DESCRIBE
command:

SQL> connect scott/tiger
SQL> desc my_tab

Name Null? Type
----------------------------------------- -------- -------------
NR NUMBER
STAMP_COL TIMESTAMP(3)

2. Modify the column:

SQL> connect scott/tiger
SQL> alter table my_tab modify (stamp_col timestamp(3));

If you receive the error:
ORA-01866: the datetime class is invalid
Please refer to:
Note 372638.1 "ORA-1866 on ALTER TABLE MODIFY when trying to change
precision of timestamp column"

3. Check for the correct output (scale = 3 in this case):

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_TAB'
order by column_id;

DATA_TYPE       DATA_LENGTH DATA_PRECISION DATA_SCALE  COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER                   22                                     1 NR
TIMESTAMP(3)             11                         3          2 STAMP_COL

4. Afterwards, re-create the export dumpfile:

% exp scott/tiger file=exp_tab.dmp log=exp_t.log tables=my_tab

About to export specified tables via Conventional Path ...
. . exporting table MY_TAB 1 rows exported
Export terminated successfully without warnings.

5. And re-run the import from the new export dump file:

% imp system/manager file=exp_tab.dmp log=imp_t.log fromuser=scott touser=hugo

. importing SCOTT's objects into HUGO
. . importing table "MY_TAB" 1 rows imported
Import terminated successfully without warnings.

REFERENCE
---------
Bug 2417643 "CTAS VIA DBLINK WITH TIMESTAMP COLUMN CHANGES PRECISION TO 0
IN DATA DICTIONARY".
Note 372638.1 "ORA-1866 on ALTER TABLE MODIFY when trying to change
precision of timestamp column"