Friday, August 29, 2008

Very slow import for tables having timestamp columns

FYI, learned during a test CMS data migration with 5 tables having timestamp columns.
The symptom is the top wait event "log file sync".
Top 5 Timed Events                                             Avg %Total
~~~~~~~~~~~~~~~~~~                                            wait   Call
Event                             Waits      Time (s)   (ms)   Time  Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                       3,164            95.2
log file sync                     1,409,922    3,105        2   93.4  Commit
log file parallel write           1,409,992    3,036        2   91.3  System I/O
db file parallel write                1,463       17       11    0.5  System I/O
control file parallel write           1,254        9        7    0.3  System I/O

Cause

For a number of data types, imp inserts one row at a time.

This is expected behavior. It applies when a table has any of the following datatypes:

LOBs, LONGs, ADTs (user-defined types), ROWID/UROWID, and TIMESTAMP.

From the documentation: "For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row."

Solution

This is the same issue as Bug 2797580, which was closed as "not a bug"; enhancement request Bug 3452820 was subsequently logged.

The workaround is to use the COMMIT=N parameter during import.

After I commented out "commit=y", the import completed in 6 minutes versus 161 minutes with "commit=y".
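A minimal sketch of the import parameter file with the workaround applied (the dump file, log file, and schema names below are illustrative, not from the actual migration):

```
# imp_cms.par -- illustrative parameter file
file=cms_tables.dmp
log=imp_cms.log
fromuser=cmsopr
touser=cmsopr
# commit=y       <- removed: forces a commit per row for timestamp/LOB tables
commit=n         # the default; imp commits once per table instead

# run as:
# imp system/manager parfile=imp_cms.par
```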

Wednesday, August 27, 2008

Use imported statistics to save database migration downtime

drop table oracms.stat_table;
exec dbms_stats.create_stat_table('ORACMS','STAT_TABLE');

-- backup existing stats
exec dbms_stats.export_schema_stats(ownname=>'DBACMS',stattab=>'STAT_TABLE',statid=>'run_init',statown=>'ORACMS');
-- gather new stats
exec dbms_stats.gather_schema_stats(ownname=>'DBACMS');

-- export newly gathered stats
exec dbms_stats.export_schema_stats(ownname=>'DBACMS',stattab=>'STAT_TABLE',statid=>'run_new',statown=>'ORACMS');

select statid, d1, count(*) from oracms.STAT_TABLE where c5 ='DBACMS' group by statid, d1;

SQL> select last_analyzed, avg_row_len, num_rows from dba_tables where table_name='VAS_INFO';

LAST_ANAL AVG_ROW_LEN NUM_ROWS
--------- ----------- ----------
26-AUG-08 57 12763481

SQL> select last_analyzed, avg_row_len, num_rows,owner from dba_tables where table_name='VAS_INFO';

LAST_ANAL AVG_ROW_LEN NUM_ROWS OWNER
--------- ----------- ---------- ------------------------------
26-AUG-08 57 12763481 DBACMS

SQL> set timing on echo on time on
09:02:36 SQL> EXEC DBMS_STATS.DELETE_SCHEMA_STATS('DBACMS');

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.57
09:02:51 SQL> select last_analyzed, avg_row_len, num_rows from dba_tables where table_name='VAS_INFO';

LAST_ANAL AVG_ROW_LEN NUM_ROWS
--------- ----------- ----------


Elapsed: 00:00:00.01
09:03:08 SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('DBACMS','STAT_TABLE',statid=>'run_new',statown=>'ORACMS');

PL/SQL procedure successfully completed.

Elapsed: 00:00:07.53
09:03:25 SQL> select last_analyzed, avg_row_len, num_rows from dba_tables where table_name='VAS_INFO';

LAST_ANAL AVG_ROW_LEN NUM_ROWS
--------- ----------- ----------
26-AUG-08 57 12740519

Elapsed: 00:00:00.01

Typical features not in Standard Edition 10g

FAST_START_MTTR_TARGET
PARALLEL SERVERS
PARTITIONING

v$option shows more info
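A quick check along these lines works (the option names in the filter are as they appear in my 10g instances; adjust as needed):

```sql
select parameter, value
  from v$option
 where parameter in ('Partitioning', 'Parallel execution');
```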

Tuesday, August 26, 2008

If a table is found to contain corrupt blocks, how do you retrieve the remaining good data?

Answer: first find the ID of the corrupt block (this can be done by running dbverify); assume it is 1234. Run the query below to find the segment name:

select segment_name, segment_type, block_id, blocks from dba_extents where (1234 between block_id and (block_id + blocks - 1));

Once the corrupt segment is identified: if the segment is a table, it is best to create a temporary table to hold the good data; if it is an index, drop and rebuild it.

create table good_table as select * from bad_table where rowid not in ( select /*+ index(bad_table, any_index) */ rowid from bad_table where substr( rowid, 1, 8 ) = 1234);
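To obtain the corrupt block ID in the first place, dbverify can be run against the datafile. A sketch (the datafile path and block size are illustrative):

```
dbv file=/u01/oradata/CMS/users01.dbf blocksize=8192

# corrupt blocks are reported in the output, e.g.
# "Page 1234 is marked corrupt"
```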

Listener: Static and Dynamic Registration

Prior to Oracle 8i, a listener was statically configured (listener.ora) to
service a given set of SIDs. From 8i, PMON dynamically registers a database
service with the listener.

Further, if the listener is running on the default TCP port of 1521, then
there is no need to configure a listener.ora at all.

From Oracle 8i onwards, database instances can register themselves with
the Listener when started. Prior to this release, information about the
instance had to be manually configured in the "listener.ora" file
in the form of a static SID_LIST.

Database instance registration is comprised of two elements:

- Service registration provides the Listener with instance
information, such as database service names and instance
names.

- MTS (Shared Server) dispatcher registration provides dispatcher
information to the Listener.

By default, an instance background process registers instance information
to a Listener on the local machine. If a Listener is started after the
instance, there may be a delay before the instance and dispatchers are
registered (normally 30 to 35 seconds). As such, if a Listener receives an
incoming connection request before an instance is registered, the Listener
may reject the request with an ORA-12514 error. The instance will attempt to
register its services to the Listener periodically.

To disable auto registration, set the LOCAL_LISTENER parameter in the init<SID>.ora file or in the spfile to a dummy address.

PMON will reject the invalid address and will not attempt to register with port
1521.

It will be necessary to statically configure the SID_LIST_
for your instance if you disable dynamic registration.
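A sketch of what this configuration looks like (the listener name, port, SID, and ORACLE_HOME path below are illustrative):

```
# init<SID>.ora -- dummy address so PMON stops registering on port 1521
local_listener = "(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1526))"

# listener.ora -- with dynamic registration disabled,
# a static SID_LIST becomes mandatory
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CMSPT42g)
      (ORACLE_HOME = /software/oracle/product/10.2.0)
    )
  )
```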

lock statistics

[symptom]

Gathering statistics for a schema or the whole database completes within a few minutes, which should not be possible, as the data size is more than 100GB. No error message is prompted.

Checked last_analyzed in dba_tables; it is null.

[Troubleshooting]

SQL> ANALYZE TABLE SYS.T1 COMPUTE statistics;
ANALYZE TABLE SYS.T1 COMPUTE statistics
*
ERROR at line 1:
ORA-38029: object statistics are locked


Now I realized: the statistics are locked.


SQL> exec dbms_stats.unlock_table_stats('SYS','T1');

PL/SQL procedure successfully completed.

SQL> ANALYZE TABLE SYS.T1 COMPUTE statistics;

Table analyzed.



[More Info]
You may also lock statistics at the schema level using the LOCK_SCHEMA_STATS procedure.

You may determine whether a table's statistics are locked by querying the STATTYPE_LOCKED column in the
{USER | ALL | DBA}_TAB_STATISTICS view.

To unlock the statistics for a specified table you may use the UNLOCK_TABLE_STATS procedure.

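The checks above can be sketched as follows (the view and procedure names are as documented; using SYS here only mirrors the example above):

```sql
-- find tables whose statistics are locked
select owner, table_name, stattype_locked
  from dba_tab_statistics
 where stattype_locked is not null;

-- lock / unlock at schema level
exec dbms_stats.lock_schema_stats('SYS');
exec dbms_stats.unlock_schema_stats('SYS');
```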
Error Message

sqlplus user/user@cmspt42g
Error 6 initializing SQL*Plus
Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory


Description
Wrong permissions may be set on some files following installation
of the 10gR2 base release (10.2.0.1).

eg: Some directories under $ORACLE_HOME have no world read nor
execute permission following the install (eg: rwxr-x---)

This can stop local users not in the oracle code owner group
from connecting using the bequeath driver.

Workaround:
Manually set the required permissions.
chmod -R 755 $ORACLE_HOME/*
or
Apply 10.2.0.2 or higher and run changeperm.sh
or
Download the one off patch for this bug and run changeperm.sh
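The effect of the workaround can be demonstrated on a scratch directory standing in for $ORACLE_HOME (everything below is illustrative; the real fix targets the actual Oracle home):

```shell
# Scratch-directory demo of the permissions problem and fix.
demo=$(mktemp -d)                  # note: mktemp -d itself creates mode 700
mkdir -p "$demo/sqlplus/mesg"
touch "$demo/sqlplus/mesg/sp1us.msb"
chmod 750 "$demo/sqlplus/mesg"     # no world read/execute, as after the bad install

# list directories that world users cannot enter (the symptom)
find "$demo" -type d ! -perm -o+rx

# the workaround from the note
chmod -R 755 "$demo"

# verify: no directory is left without world read/execute
find "$demo" -type d ! -perm -o+rx | wc -l   # expect 0
rm -rf "$demo"
```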

Note:
A similar problem also occurs after applying the 9.2.0.7
Patch Set - that issue is covered under Bug 4533592.

Saturday, August 23, 2008

EXP-00008: ORACLE error 1403 encountered

. . exporting table BS221
EXP-00008: ORACLE error 1403 encountered
ORA-01403: no data found


Encountered when I exported a dynamic table in production while the application was active.

Did some research, but nothing matched my case. The relevant URL is listed below.

http://space.itpub.net/?uid-33600-action-viewspace-itemid-266420

On second thought, since it is dynamic data, I gave it a second try. (CONSISTENT=Y is used in the PAR file.)

I succeeded when I exported the table again.

IMP-00020: long column too large for column buffer size (7)

Encountered this problem while importing data from a 9i to a 10g database. The table structure was already laid out.

. . importing table "TEMP_PRE_ACC_INFO"
IMP-00020: long column too large for column buffer size (7)


The definition in source database (9i)
SQL> desc cmsopr.TEMP_PRE_ACC_INFO;
Name Null? Type
----------------------------------------- -------- ----------------------------
MSISDN VARCHAR2(8)
SUB_ID NUMBER(30)
SUB_TYPE VARCHAR2(10)
PERSONNAME VARCHAR2(50)
PERSONADDRESS VARCHAR2(100)
EMAIL VARCHAR2(50)
CONTACTNO VARCHAR2(10)
COMPANYNAME VARCHAR2(50)
COMPANYADDRESS VARCHAR2(100)
LASTUPDATEDATE TIMESTAMP(3)
DATEOFCREATION TIMESTAMP(3)
LAST_UPD_ID VARCHAR2(8)


And definition in target database (10g)

SQL> desc cmsopr.TEMP_PRE_ACC_INFO;
Name Null? Type
----------------------------------------- -------- ----------------------------
MSISDN VARCHAR2(8)
SUB_ID NUMBER(30)
SUB_TYPE VARCHAR2(10)
PERSONNAME VARCHAR2(50)
PERSONADDRESS VARCHAR2(100)
EMAIL VARCHAR2(50)
CONTACTNO VARCHAR2(10)
COMPANYNAME VARCHAR2(50)
COMPANYADDRESS VARCHAR2(100)
LASTUPDATEDATE TIMESTAMP(0)
DATEOFCREATION TIMESTAMP(0)
LAST_UPD_ID VARCHAR2(8)


According to Metalink note 286597.1, this is a bug in 8i and 9i when a table is created over a DB link.

Below is the extract from Metalink.


Subject: Oracle9i: IMP-20 on Import of Table with TIMESTAMP Column that was Created via Database Link

Doc ID: Note:286597.1 Type: PROBLEM

Last Revision Date: 04-JUL-2008 Status: PUBLISHED

The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 9.2.0.5
- Oracle Server - Personal Edition - Version: 8.1.7.0 to 9.2.0.5
- Oracle Server - Standard Edition - Version: 8.1.7.0 to 9.2.0.5
This problem can occur on any platform.


ERRORS
------

IMP-00020: long column too large for column buffer size (7)


SYMPTOMS
--------

In a remote Oracle8i or higher release database, you have a table with
a column of the TIMESTAMP data type:

SQL> connect scott/tiger
SQL> alter session set nls_timestamp_format='YYYY-MM-DD HH24:MI:SSXFF';
SQL> create table my_original_tab (nr number, stamp_col timestamp(3));
SQL> insert into my_original_tab values (1, '2004-10-20 18:34:31.123456');
SQL> commit;
SQL> select * from my_original_tab;

NR STAMP_COL
---------- --------------------------------
1 2004-10-20 18:34:31.123


In your local Oracle8i or Oracle9i database you have created a database link:

SQL> connect system/manager
SQL> grant create database link to scott;

SQL> connect scott/tiger
SQL> alter session set nls_timestamp_format='MON DD, YYYY HH:MI:SSXFF AM';
SQL> create database link orcl.oracle.com
connect to scott identified by tiger using 'orcl';


Now you create a new table in this database with the CREATE TABLE AS SELECT
statement (CTAS), accessing the original table via the database link:

SQL> create table my_tab
as select * from my_original_tab@orcl.oracle.com;
SQL> select * from my_tab;

NR STAMP_COL
---------- --------------------------------
1 OCT 20, 2004 06:34:31.123 PM


You export this table:

% exp scott/tiger file=exp_tab.dmp log=exp_t.log tables=my_tab

About to export specified tables via Conventional Path ...
. . exporting table MY_TAB 1 rows exported
Export terminated successfully without warnings.


And you try to import from this export dump file, e.g. into a different user:

% imp system/manager file=exp_tab.dmp log=imp_t.log fromuser=scott touser=hugo

. importing SCOTT's objects into HUGO
. . importing table "MY_TAB"
IMP-00020: long column too large for column buffer size (7)
Import terminated successfully with warnings.


- Increasing the value for the Import BUFFER parameter does not solve the
IMP-20 error.
- Pre-creating the table in the target schema, and running import with
IGNORE=Y does not solve the IMP-20 error.
- Specifying Import parameter COMMIT=Y does not import any row for this table.


CAUSE
-----

If you create a table with the CREATE TABLE ... AS SELECT (CTAS) syntax,
and the original table is located on a remote database that you access
via database link, and the table has a column with the TIMESTAMP data type,
then the new created table has an incorrect precision in the data dictionary.
Example:

If you would query the view dba_tab_columns for the original table in the
remote database you would see:

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_ORIGINAL_TAB'
order by column_id;

DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER 22 1 NR
TIMESTAMP(3) 11 3 2 STAMP_COL


However, a similar query in the local database (where you ran the CTAS),
gives a different result:

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_TAB'
order by column_id;

DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER 22 1 NR
TIMESTAMP(0) 11 0 2 STAMP_COL


If you export this table, there is a mismatch in the metadata syntax of
the table in the export dumpfile. As a result, import will fail because
import expects a field of 7 bytes (used to store the value of a timestamp
without any precision), but actually encounters values of 11 bytes (used
to store timestamps with a precision). As a result, an error is produced:

IMP-00020: long column too large for column buffer size (7)


FIX
---

This defect has been fixed in Oracle10g. This means that if the table is
created in Oracle10g via a database link and exported afterwards, then
the import of that table with a TIMESTAMP column, will not result
in an IMP-20 error.

In Oracle8i and Oracle9i, use the workaround by correcting the precision
of the TIMESTAMP column in the data dictionary. Example:

1. In the local database, determine the correct value for the precision
of the column with the timestamp data type. You can determine the correct
value with a simple DESCRIBE command:

SQL> connect scott/tiger
SQL> desc my_tab

Name Null? Type
----------------------------------------- -------- -------------
NR NUMBER
STAMP_COL TIMESTAMP(3)

2. Modify the column:

SQL> connect scott/tiger
SQL> alter table my_tab modify (stamp_col timestamp(3));

If you receive the error:
ORA-01866: the datetime class is invalid
Please refer to:
Note 372638.1 "ORA-1866 on ALTER TABLE MODIFY when trying to change
precision of timestamp column"

3. Check for the correct output (scale = 3 in this case):

SQL> connect system/manager
SQL> select substr(data_type,1,15) data_type, data_length, data_precision,
data_scale, column_id, substr(column_name,1,15) column_name
from dba_tab_cols where owner='SCOTT' and table_name='MY_TAB'
order by column_id;

DATA_TYPE DATA_LENGTH DATA_PRECISION DATA_SCALE COLUMN_ID COLUMN_NAME
--------------- ----------- -------------- ---------- ---------- ------------
NUMBER 22 1 NR
TIMESTAMP(3) 11 3 2 STAMP_COL

4. Afterwards, re-create the export dumpfile:

% exp scott/tiger file=exp_tab.dmp log=exp_t.log tables=my_tab

About to export specified tables via Conventional Path ...
. . exporting table MY_TAB 1 rows exported
Export terminated successfully without warnings.

5. And re-run the import from the new export dump file:

% imp system/manager file=exp_tab.dmp log=imp_t.log fromuser=scott touser=hugo

. importing SCOTT's objects into HUGO
. . importing table "MY_TAB" 1 rows imported
Import terminated successfully without warnings.

REFERENCE
---------
Bug 2417643 "CTAS VIA DBLINK WITH TIMESTAMP COLUMN CHANGES PRECISION TO 0
IN DATA DICTIONARY".
Note 372638.1 "ORA-1866 on ALTER TABLE MODIFY when trying to change
precision of timestamp column"
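To find other tables that may have been hit by the same bug, one approach is to query the data dictionary for zero-precision timestamp columns (the filter is an assumption on my part; a TIMESTAMP(0) column can also be perfectly legitimate, so each hit needs checking against the source table):

```sql
select owner, table_name, column_name, data_type
  from dba_tab_cols
 where data_type = 'TIMESTAMP(0)'
   and owner not in ('SYS', 'SYSTEM');
```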



PLS-00371

157/6 PL/SQL: SQL Statement ignored
157/93 PLS-00371: at most one declaration for 'RP_CMS_OFFER_RETRIEVAL.LCAMPAIGNCODE' is permitted


[ROOT CAUSE]
lcampaigncode VARCHAR2(10) := '';
lsegtable VARCHAR2(30) := '';
lidentification_no VARCHAR2(30) := '';
idnocounter INTEGER := 1;
idnostart INTEGER := 0;
idnoend INTEGER := -1;
lcampaigncode VARCHAR2(10) := '';
...


As you can see above, there are two declarations of lcampaigncode.

After I removed one, the procedure recompiled successfully.


This problem occurred when I imported objects from a 9i database into 10g. The object was valid in 9i but invalid in 10g. I think 10g has stricter checking.
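A minimal reproduction of the error (a hypothetical anonymous block; note the error is reported at the line that references the variable, not at the duplicate declaration, which matches the line numbers above):

```sql
declare
  lcampaigncode varchar2(10);
  lcampaigncode varchar2(10);
begin
  lcampaigncode := 'X';  -- referencing it raises PLS-00371 here
end;
/
```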


Thursday, August 14, 2008

How to rename an instance?

Method to rename an instance:
1. Shut down the DB.
2. Prepare the following settings:
prx06:DASP2:/software/oradas/product/9.2.0/dbs> ls -l
lrwxrwxr-x 1 oradas dba 46 Sep 22 17:21 initDASP.ora -> /software/oradas/admin/DASP/pfile/initDASP.ora
lrwxr-xr-x 1 oradas dba 46 Oct 13 11:51 initDASP2.ora -> /software/oradas/admin/DASP/pfile/initDASP.ora
-rwSr----- 1 oradas dba 3072 Oct 11 15:53 orapwDASP
-rwSr----- 1 oradas dba 3072 Oct 11 15:53 orapwDASP2
3. export ORACLE_SID=DASP2
4. Change listener.ora and tnsnames.ora accordingly.
5. Change service_names inside init.ora.

6. Bring the instance up and verify v$instance and v$database.
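The steps above can be sketched as a shell session (paths reuse the DASP example from the listing; the heredoc is illustrative, not a copy of my actual session):

```
cd $ORACLE_HOME/dbs
ln -s /software/oradas/admin/DASP/pfile/initDASP.ora initDASP2.ora
cp orapwDASP orapwDASP2

export ORACLE_SID=DASP2
# edit listener.ora / tnsnames.ora and service_names in init.ora, then:

sqlplus / as sysdba <<EOF
startup
select instance_name from v\$instance;
select name from v\$database;
EOF
```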



db_name
DB_NAME specifies a database identifier of up to 8 characters. This parameter must be specified and must correspond to the name specified in the CREATE DATABASE statement.

db_unique_name
DB_UNIQUE_NAME specifies a globally unique name for the database. Databases with the same DB_NAME within the same DB_DOMAIN (for example, copies of a database created for reporting or a physical standby) must have a unique DB_UNIQUE_NAME. Every database's DB_UNIQUE_NAME must be unique within the enterprise.

instance_name
In a Real Application Clusters environment, multiple instances can be associated with a single database service. Clients can override Oracle's connection load balancing by specifying a particular instance by which to connect to the database. INSTANCE_NAME specifies the unique name of this instance.

In a single-instance database system, the instance name is usually the same as the database name.


service_names
SERVICE_NAMES specifies one or more names by which clients can connect to the instance. The instance registers its service names with the listener. When a client requests a service, the listener determines which instances offer the requested service and routes the client to the appropriate instance.

--before

SQL> show parameter name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string
db_name string CMSPT42g
db_unique_name string CMSPT42g
global_names boolean FALSE
instance_name string CMSPT42g
lock_name_space string
log_file_name_convert string
service_names string CMSPT42g

--Follow above steps to rename current instance CMSPT42g to CMSPT42G2
--after

SQL> show parameter _name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string
db_name string CMSPT42g
db_unique_name string CMSPT42g
global_names boolean FALSE
instance_name string CMSPT42G2
lock_name_space string
log_file_name_convert string
service_names string CMSPT42G2


psdev03:CMSPT42G2:/software/oracle/admin/CMSPT42g/pfile> ps -ef |grep CMSPT42G
oracle 25062 1 0 14:24:58 ? 0:00 ora_lgwr_CMSPT42G2
oracle 25058 1 0 14:24:58 ? 0:00 ora_mman_CMSPT42G2
oracle 25068 1 0 14:24:58 ? 0:00 ora_reco_CMSPT42G2
oracle 25094 1 0 14:25:19 ? 0:00 ora_q000_CMSPT42G2
oracle 25070 1 0 14:24:59 ? 0:00 ora_cjq0_CMSPT42G2
oracle 25064 1 0 14:24:58 ? 0:00 ora_ckpt_CMSPT42G2
oracle 25054 1 0 14:24:58 ? 0:00 ora_pmon_CMSPT42G2
oracle 25074 1 0 14:24:59 ? 0:00 ora_mmnl_CMSPT42G2
oracle 25141 24354 0 14:27:02 pts/2 0:00 grep CMSPT42G
oracle 25072 1 0 14:24:59 ? 0:00 ora_mmon_CMSPT42G2
oracle 25078 1 0 14:24:59 ? 0:00 ora_s000_CMSPT42G2
oracle 25120 1 0 14:25:59 ? 0:00 ora_q001_CMSPT42G2
oracle 25086 1 0 14:25:09 ? 0:00 ora_qmnc_CMSPT42G2
oracle 25056 1 0 14:24:58 ? 0:00 ora_psp0_CMSPT42G2
oracle 25066 1 0 14:24:58 ? 0:00 ora_smon_CMSPT42G2
oracle 25060 1 0 14:24:58 ? 0:00 ora_dbw0_CMSPT42G2
oracle 25076 1 0 14:24:59 ? 0:00 ora_d000_CMSPT42G2

Thursday, August 07, 2008

two good AWR articles

--to use AWR
http://www.dbapool.com/articles/061906.html

--to check AWR retention
select * from DBA_HIST_WR_CONTROL;


--to estimate AWR size
http://www.dba-oracle.com/t_awr_retention_period.htm

ORA-12537: TNS:connection closed

Restarted the listener; it looked fine and users could connect.

But after a while, I was told it had happened again.

Strange. Googled it and followed the article below.

http://www.dba-oracle.com/t_ora_12537_tns_error.htm

Nothing matched my case.

No choice but to try my luck and restart the database. While shutting down, it hung on dispatcher d000.

shutdown abort
startup

It has been working for more than 2 hours now.

Will keep monitoring it.