Wednesday, December 26, 2007

ORA 1410 "invalid ROWID"

Explanation:
This error is raised when an operation refers to a ROWID in a table
for which there is no such row.

The reference to a ROWID may be implicit from a WHERE CURRENT OF
clause or directly from a WHERE ROWID=... clause.

ORA 1410 indicates the ROWID is for a BLOCK that is not part of this
table. If you update a rowid where the BLOCK is valid but the slot
within the block is invalid you just get ZERO rows processed/returned.
This can be a source of confusion as it is inconsistent.

In my case, the cause was an index rebuild at the end of a direct-path data load while another SELECT statement was still running against the table.

Solution:

Add a dependency between the two jobs in the operational schedule so the load and the query no longer run concurrently.

Saturday, December 22, 2007

VISTA on VMWARE network & sound problem

Install VISTA on VMWARE

The main problems are network and sound, as the drivers are not installed automatically.

To fix the network problem:
1. Ensure the network device type is vmxnet.
2. Enable VMware Tools from the menu (the image file windows.iso should be under /usr/lib/vmware/isoimages/windows.iso). This should automatically mount windows.iso as the D: drive.
3. Run setup.exe from the D: drive.
4. If needed, update the driver manually. The location is the D:\Program Files\VMWare tools\Drivers\vmxnet\win2k folder.

To fix sound problem

Easy. After adding a sound adapter in the VMware console and logging on to Vista, I let Windows search for the driver on the Internet automatically. Done!


References:
http://www.brandonhutchinson.com/Installing_VMware_Tools_with_VMware_Player.html
http://h0bbel.p0ggel.org/installing-microsoft-vista-in-vmware
http://www.msfn.org/board/lofiversion/index.php/t52054.html

Friday, December 14, 2007

ORA-00600 [qmxtrScalarIsRewritable:dft] during expdp

Hit the two errors below while running exp and expdp.

After increasing the shared pool size, the problem was gone.

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user DBAM2
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user DBAM2
About to export DBAM2's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
EXP-00056: ORACLE error 600 encountered
ORA-00600: internal error code, arguments: [qmxtrScalarIsRewritable:dft], [], [], [], [], [], [], []
EXP-00000: Export terminated unsuccessfully



> cat expdp_DBAM2.log
;;;
Export: Release 10.2.0.2.0 - 64bit Production on Friday, 14 December, 2007 9:05: 54

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "LIQY"."EXPDP_DBAM2": parfile=exp_DBAM2.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 3.861 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-31693: Table data object "DBAM2"."M1_TM_COLLECTION_HISTORY" failed to load/unload and is being skipped due to error:
ORA-04031: unable to allocate 48 bytes of shared memory ("shared pool","BEGIN SYS.KUPF$FILE.UPD_TDX...","parameters","kglpda")
. . exported "DBAM2"."M1_TM_FIN_TRANS_TRACKING" 888.1 MB 11968728 rows
. . exported "DBAM2"."M1_TM_LDL_SCT_DC_PAYMENT_UPD" 258.3 MB 2119285 rows
. . exported "DBAM2"."M1_TM_CONTACT_HISTORY_DONE" 715.0 MB 8097902 rows
. . exported "DBAM2"."M1_TM_SMS_ACTION" 100.0 MB 488877 rows
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.
Job "LIQY"."EXPDP_DBAM2" stopped due to fatal error at 09:08:59
ORA-39014: One or more workers have prematurely exited.
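In retrospect the fix can be sketched in two statements. This is only a sketch: the 300M figure is illustrative (the post does not record the actual sizes), and on 10g letting SGA_TARGET manage the pool avoids the manual step entirely.

```sql
-- Check how much of the shared pool is free before resizing
SELECT pool, name, ROUND(bytes/1024/1024, 1) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool' AND name = 'free memory';

-- Raise the lower bound of the shared pool (value illustrative)
ALTER SYSTEM SET shared_pool_size = 300M SCOPE = BOTH;
```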

Friday, November 09, 2007

libjox10.sl error While applying the 10.2.0.2 patch

Issue 2: While applying the 10.2.0.2 patch, you receive the following error message:


The following actions are not applicable:

Copy Action: Desctination File

"/bugmnt4/ap/rmtdchp6/BOD/10.2.0.2/A/64bit/app/oracle/product/10.2.0.2/lib/libjox10.sl" is not writeable.

'oracle.javavm.server, 10.2.0.2.0': Cannot copy file from 'libjox10.sl' to

'/bugmnt4/ap/rmtdchp6/BOD/10.2.0.2/A/64bit/app/oracle/product/10.2.0.2/lib/libjox10.sl'



Do you want to proceed? [y|n]


This error is likely to occur for any library that ships without write permission, such as libons.sl, libjox10.sl, etc.

Workaround (perform all of the following steps):

Stop the patch installation by typing n as the response.

Make note of the file permissions on the libraries that are not writeable.

Provide write permission (preferably 755) to the libraries.

Reapply the patch.

Revert back to the original file permissions for the libraries. Failure to do so may result in the following error:


usr/lib/pa20_64/dld.sl: Mmap failed for the

library<....10.2.0.2/@ lib/libjox10.sl> : Permission denied.

ERROR:

ORA-12547: TNS:lost contact

Thursday, November 08, 2007

xchm installation

After installation, you might hit the problem below after typing "xchm &".
[root@localhost Desktop]# xchm
xchm: error while loading shared libraries: libchm.so.0: cannot open shared object file: No such file or directory

Solution:
  #ln -s /usr/local/lib/libchm.so.0 /usr/lib/libchm.so.0

  #ln -s /usr/lib/libwx_gtk-2.4.so.0 /usr/lib/libwx_gtk-2.4.so
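The same two links can be created defensively, only when the source library actually exists and nothing occupies the destination yet. A small sketch (the paths are the ones from this post):

```shell
# Create a compatibility symlink only if the real library exists
# and the destination is not already taken.
link_lib() {
    src="$1"; dst="$2"
    if [ -e "$src" ] && [ ! -e "$dst" ]; then
        ln -s "$src" "$dst" && echo "linked $dst -> $src"
    fi
}
link_lib /usr/local/lib/libchm.so.0 /usr/lib/libchm.so.0
link_lib /usr/lib/libwx_gtk-2.4.so.0 /usr/lib/libwx_gtk-2.4.so
```

Run ldconfig afterwards if the loader still cannot find the library.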

Thursday, November 01, 2007

ORA-00600: internal error code, arguments: [opixrb-4], [1036]

ORA-00600: internal error code, arguments: [opixrb-4], [1036], [ORA-01036: illegal variable name/number
], [], [], [], [], []

According to metalink , this is a known bug.

Bug 4964703 OERI[opixrb-4] [1036] from SELECT over DBLINK with trailing :N or :Q bind names
Fixed-Releases: 9208 A203

PROPOSED SOLUTION(S)
======================
1. Apply patch 4964703.
2. Replace bind variable 'n' with another character. (If you use TOAD, this workaround is not possible because the variable is defined in TOAD's code.)


In my case, it was caused by TOAD 8.6.0.38.

Sunday, October 21, 2007

password value in dba_users

[CONCLUSION]
The same encrypted password string (PASSWORD column) in dba_users doesn't mean the actual passwords are the same. USERNAME appears to be part of the key used to encrypt the password.
Below is the experiment.

SQL> select password, username from dba_users where username in ('LIQY');

PASSWORD USERNAME
------------------------------ ------------------------------
B68901BE8B75A6BA LIQY

SQL> create user liqy1 identified by values 'B68901BE8B75A6BA';

User created.

SQL> grant create session to liqy1;

Grant succeeded.

SQL> connect liqy1/liqyliqy;
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.
SQL> connect liqy/liqyliqy;
Connected.


SQL> connect /
Connected.
SQL> alter user liqy identified by liqyliqy;

User altered.

SQL> connect /
Connected.
SQL> alter user liqy1 identified by liqyliqy;

User altered.

SQL> select password, username from dba_users where username in ('LIQY','LIQY1');

PASSWORD USERNAME
------------------------------ ------------------------------
B68901BE8B75A6BA LIQY
56775B8ACC3F0E97 LIQY1

Internal error:ulnail: bad row out of bounds [21]

Problem:

8i SQL*Loader failed with error message SQL*Loader-704:Internal error:ulnail: bad row out of bounds [21]. Target database is 10g.

Troubleshooting:

Tried the option DIRECT=TRUE but it failed, as a 10g database doesn't support direct path from an 8i client.

Solution:

Added the parameter bindsize=500000 and the load succeeded.
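The resulting invocation might look like the sketch below; the connect string, control file, and log names are hypothetical placeholders, and only bindsize=500000 comes from this post.

```shell
# Assemble the 8i sqlldr command with the enlarged bind array.
# userid/control/log values are placeholders, not from the post.
CMD="sqlldr userid=scott/tiger control=load.ctl log=load.log bindsize=500000"
if command -v sqlldr >/dev/null 2>&1; then
    $CMD
else
    echo "sqlldr not on PATH; would run: $CMD"
fi
```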

Sunday, October 07, 2007

No trigger body in dba_source

[The problem]

The trigger body is visible via dbms_metadata.get_ddl but not in dba_source or dba_triggers. Moreover, the trigger functions correctly.

[Root Cause]
This is due to a bug in Oracle version 8. In my case, this database was upgraded from 8.0 to 8i and eventually to 9i.

To rectify it, just recompile the trigger.


select TRIGGER_BODY from dba_triggers where trigger_name='IVOC_SP_CONTACT3';

Tuesday, September 25, 2007

ora-07445 $cold_qerfxArrayMaxSize()

Tue Sep 18 22:12:34 2007
Errors in file /software/oracle/admin/OOOS/bdump/odsp_j001_25476.trc:
ORA-00081: address range [0x60000000000F5A40, 0x60000000000F5A44) is not readable
ORA-07445: exception encountered: core dump [$cold_qerfxArrayMaxSize()+7456] [SIGBUS] [Invalid address alignment] [0x00000002
B] [] []

Associated Bug: 5442780 on HP-UX Itanium platform

'Invalid Block Type' Reported During Default Database Install

Symptoms

Downloaded the 10.2.0.3 patch from MetaLink and Oracle from OTN.

Started applying the patch, and DBCA fails with

ERROR IN WRITING TO FILE LIBJMISC.SL [INVALID BLOCK TYPE] at 11% while copying the datafiles.


Checked with MetaLink; this could be due to the file being corrupted during download.

To verify this, the download page for the patch on MetaLink has a "View Digest" button which shows the checksums of the zip file.

Patch: p5337014_10203_HP64.zip
1424057924 bytes
MD5 : AC57E417EA46F6A281B54ACC338FD86B

To check the checksum of the download, use an MD5 checker. Public-domain checkers are available, e.g. at
http://www.fourmilab.ch/md5/

In my case, the checksums are shown below.

F2310739DACE8F9EB7C6067A8A7A1434 (wrong md5 chksum)

AC57E417EA46F6A281B54ACC338FD86B (correct md5 chksum)



From this I learned that for big files, verifying the digest is really important.
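A command-line version of that lesson, as a sketch (the helper name is mine; the file name and expected checksum are the ones quoted above):

```shell
# Verify a download against the MD5 shown by MetaLink's "View Digest" page.
verify_md5() {
    file="$1"; expected="$2"
    # md5sum prints lowercase; normalize to uppercase to match the digest page
    actual=$(md5sum "$file" | awk '{print toupper($1)}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH - re-download $file"
    fi
}
# verify_md5 p5337014_10203_HP64.zip AC57E417EA46F6A281B54ACC338FD86B
```

Uncomment the last line next to a real download to use it.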

Wednesday, September 19, 2007

modify inline constraint

ods01@/software/oraods/product/10.2.0/bin> ./oerr ora 1451
01451, 00000, "column to be modified to NULL cannot be modified to NULL"
// *Cause: the column may already allow NULL values, the NOT NULL constraint
// is part of a primary key or check constraint.
// *Action: if a primary key or check constraint is enforcing the NOT NULL
// constraint, then drop that constraint.

This problem occurred during a database migration.
The failed statement is "alter table test_table modify colname NULL;". The developer told me it used to work.

Did a "desc test_table"
The column of "NULL?" is blank, which usually means NULLable.
However, check dba_constraints, it shown column not null and enabled.


This discrepancy was caused by my migration step "alter table modify constraint enable NOVALIDATE;".


After I issued "alter table modify constraint enable;" again, DESC tallies with dba_constraints, i.e. both show NOT NULL.

Subsequently, the failed job went through.
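In SQL terms, the two states from this migration look like the sketch below (the table and constraint names are hypothetical stand-ins for the real ones):

```sql
-- ENABLE NOVALIDATE: dba_constraints shows the NOT NULL constraint as ENABLED,
-- but DESC still reports the column as NULLable
ALTER TABLE test_table MODIFY CONSTRAINT tt_col_nn ENABLE NOVALIDATE;

-- ENABLE (i.e. ENABLE VALIDATE): DESC now tallies with dba_constraints
ALTER TABLE test_table MODIFY CONSTRAINT tt_col_nn ENABLE;
```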



location of opatch lsinventory output

Notice that the output locations of "opatch lsinventory" changed in 10.2.0.2.

Log file location : /software/oracle/product/10.2.0/cfgtoollogs/opatch/opatch-2007_Sep_14_09-54-43-SGT_Fri.log

Lsinventory Output file location : /software/oracle/product/10.2.0/cfgtoollogs/opatch/lsinv/lsinventory-2007_Sep_14_09-54-43-SGT_Fri.txt

Sunday, September 02, 2007

Pro*c compilation error

Problem: Pro*c compilation error on HP-UX Itanium Platform

cc -o pass10g pass.c -I$ORACLE_HOME/precomp/public -L$ORACLE_HOME/lib -lclntsh

proc itanium ld: Can't find library or mismatched ABI for -lclntsh

Solution: change to the 32-bit lib, i.e.
cc -o pass10g pass.c -I$ORACLE_HOME/precomp/public -L$ORACLE_HOME/lib32 -lclntsh

Friday, August 17, 2007

find: cannot stat odbc while running changePerm.sh

> ./changePerm.sh

-------------------------------------------------------------------------------
Disclaimer: The purpose of this script is to relax permissions on some of the
files in the database Oracle Home so that all clients can access them.
Please note that Oracle Corporation recommends using the most restrictive file
permissions as possible for your given implementation. Running this script
should be done only after considering all security ramifications.
-------------------------------------------------------------------------------

-n Do you wish to continue (y/n) [n]:
y
find: cannot stat odbc

Finished running the script successfully
Please see /tmp/changePerm_err.log for errors and /tmp/changePerm.log for the log of events


--excerpt from Note:453603.1

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.3
This problem can occur on any platform.

Symptoms

-- Problem Statement:
While running changePerm.sh on 10.2.0.3, getting following error
find: cannot stat odbc
Finished running the script successfully
Please see /tmp/changePerm_err.log for errors and /tmp/changePerm.log


Cause

Most likely cause of this problem is there is no odbc directory inside $ORACLE_HOME.
The changePerm.sh script cannot find the folder odbc and the log files /tmp/changePerm_err.log for errors and /tmp/changePerm.log are 0 bytes


Solution

-- To implement the solution, please execute the following steps::
Ignore the error message
The script has been run successfully. The permissions inside $ORACLE_HOME should be relaxed.

Tuesday, August 07, 2007

Determining Space used by Filegroups

http://forums.databasejournal.com/archive/index.php/t-30789.html

dbcc molestations( n )

n is the fileid which you can get from
select fileid,name from sysfiles

Here is a query that simulates sp_spaceused, modified to group by groupid (you can look up the filegroup names in sysfilegroups):

SELECT
    groupid,
    SUM(CASE WHEN indid <> 100
              AND object_name(sysindexes.id) <> 'dtproperties'
             THEN reserved ELSE 0 END) * 8 AS reserved_kb
FROM sysindexes
GROUP BY groupid
ORDER BY groupid;

http://www.databasejournal.com/features/mssql/article.php/10894_3414111_2

Thursday, August 02, 2007

"no TTY available" with secure remote excution shell(SSH)

[Problem]
ssh radev01 ls

Unauthorized Use Prohibited and Will be Prosecuted

Received disconnect from 10.132.90.21: 2: Password change required but no TTY available

[Root Cause]
The password on the target server has expired.

[Solution]
Explicitly log on to the server and change the password.

Sunday, July 29, 2007

ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier

[Problem]

Export: Release 9.2.0.8.0 - Production on Wed Jul 25 10:47:29 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses UTF8 character set (possible charset conversion)
About to export specified tables via Direct Path ...
Current user changed to DBAPL
. . exporting table RPT_LOCAL__01 1012208 rows exported
EXP-00056: ORACLE error 904 encountered
ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
Export terminated successfully with warnings.

[Root Cause]

It was caused by different DB versions on the server, as two versions are installed:
DB1 is 9.2.0.8, while DB2 is 9.2.0.5.
In the operator ID's environment, ORACLE_HOME points to 9.2.0.8, which caused the higher-version exp to export data from the lower-version database.
[Solution]
In the script, explicitly specify the path of exp, i.e.:
/software/oradb1/product/9.2.0/bin/exp $RPT_CONNECT@$RPT_DB file=$expfile log=$LOGFILE direct=y recordlength=65535 buffer=10240000 compress=N tables=${tab_name} statistics=none

Today, 21-Jan-2009, hit another similar case and got new findings from Metalink Doc ID 358508.1.

Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 10.2.0.1
This problem can occur on any platform.

Symptoms

Export fails with:

EXP-00056: ORACLE error 904 encountered
ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier


Changes

A patchset was recently installed against the database.

Cause

After the patchset installation, the data dictionary components have not been upgraded to the patchset release (e.g. 10.1.0.4)

A patchset (e.g. 10.1.0.4) was applied to an existing ORACLE_HOME install but one or more of the patchset steps were not performed per the patchset readme to bring the database components up to the same version as the patched software as shown in DBA_REGISTRY.

If the database components versions do not match the software and the running instance, then the exp utility fails as it finds an earlier version of the exp packages in the database.

Solution

Follow the Post-Installation Tasks section of the Oracle Database Server Patch Set Notes to run the catpatch.sql script with the database started in migrate mode and then retry the export.

Connect to the database as SYSDBA and run:

SQL> STARTUP MIGRATE
SQL> SPOOL patch.log
SQL> @?/rdbms/admin/catpatch.sql
SQL> SPOOL OFF
SQL> SHUTDOWN
SQL> STARTUP


Friday, July 27, 2007

Flashback drop a table

Today I had a chance to use flashback drop. A developer came to me and told me a table had been dropped in a 10g database.

With the help of "flashback table abc to before drop", I recovered it within minutes.
Compare that to doing the same thing in 9i:
1. Use LogMiner to find the time when the table was dropped.
2. Restore the database in another place.
3. Perform incomplete recovery to the timestamp using a backup controlfile.
4. Export the table and import it into production.

Guess how much time that needs.

More to add: flashback drop doesn't use the undo tablespace.
You can recover a dropped table using Oracle’s recycle bin functionality: a dropped table is
stored in a special structure in the tablespace and is available for retrieval as long as the space
occupied by the table in the tablespace is not needed for new objects. Even if the table is no
longer in the tablespace’s recycle bin, depending on the criticality of the dropped table, you can
use either tablespace point in time recovery (TSPITR) or Flashback Database Recovery to
recover the table, taking into consideration the potential data loss for other objects stored in the
same tablespace for TSPITR or in the database if you use Flashback Database Recovery.

Thursday, July 26, 2007

ora-2393 cpu_call exceed profile limit

Wed Jul 25 10:24:30 2007
Errors in file
ORA-00604: error
occurred at recursive SQL level 1
ORA-02393: exceeded call limit on CPU
usage
ORA-00604: error occurred at recursive SQL level 1
ORA-02393:
exceeded call limit on CPU usage
ORA-00604: error occurred at recursive SQL
level 1
ORA-02393: exceeded call limit on CPU usage
ORA-02393: exceeded
call limit on CPU usage


Error: ORA 2393
Text: exceeded call limit on CPU usage
-------------------------------------------------------------------------------
Cause: An attempt was made to exceed the maximum CPU time for a call, a parse,
execute, or fetch, allowed by the CPU_PER_CALL clause of the user
profile.
Action: If this happens often, ask the database administrator to increase the
CPU_PER_CALL limit of the user profile.


Wednesday, July 25, 2007

CentOS 4.4 upgrade to CentOS 5

During the "start transaction" phase, hit the error message: installation failed due to the following reason (no reason is shown; it is blank).

What I did:
1. Removed unnecessary repositories from /etc/yum.repos.d.
2. Freed up disk space (/) to 11 GB. (I think it was due to this, as it failed twice when free space was < 6 GB.)
3. Tried to upgrade in text mode.

I don't know which of the above is the exact cause/solution; anyway, it succeeded after a nearly four-hour upgrade.


After that, the following is needed to make live update available.

cd /etc
cp yum.conf.rpmnew yum.conf
cd /etc/yum.repos.d
cp CentOS-Base.repo.rpmnew CentOS-Base.repo
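Those two .rpmnew files are the ones I noticed; a sketch for finding any others the upgrade left behind (the helper name is mine):

```shell
# List every *.rpmnew config the RPM transaction left under a directory
list_rpmnew() {
    find "${1:-/etc}" -name '*.rpmnew' 2>/dev/null
}
list_rpmnew /etc
```

Review each one against its original before copying it over.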

Cheers.

References:
http://www.linuxfans.org/bbs/redirect.php?tid=178447&goto=lastpost&highlight=

Thursday, July 19, 2007

Purging the Audit Trail

Database audit records for statement, privilege, and object auditing are stored in the table
SYS.AUD$. Depending on how extensive your auditing and retention policies are, you will need
to periodically delete old audit records from this table. The database does not provide an
interface to assist in deleting rows from the audit table, so you will need to do so yourself. To
purge audit records older than 90 days, execute the following as user SYS:
DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE -90;

Tuesday, July 17, 2007

Default database tablespace

If the default tablespace is not specified during the database creation, it defaults to SYSTEM. But how do you know which tablespace is default for the existing database? Issue the following query:

SELECT PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_PERMANENT_TABLESPACE';

The DATABASE_PROPERTIES view shows some very important information, in addition to the default tablespace—such as the default temporary tablespace, global database name, time zone, and much more.


To change the database default permanent or temporary tablespace, use the ALTER DATABASE statement,
like this:
ALTER DATABASE DEFAULT TABLESPACE users;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Eclipse Intro error

Problem: An "Intro error" message window is prompted during initialisation of "Eclipse IDE for C/C++ Developers".

Could not find a valid configuration for intro Part: intro element" in
extension: org.eclipse.ui.intro in Bundle: org.eclipse.ui.intro.universal

Check Error Log view for details

CAUSE:
No JAVA_HOME is exported.

Solution:

1. Download the J2SE from java.com; choose the tar.gz file instead of the self-extracting file.
2. Export JAVA_HOME.

Sunday, July 15, 2007

rlwrap works on centos5

Yesterday I managed to compile rlwrap on CentOS 5. The root cause was missing development packages (libglib, gtk+-devel, ...) after the fresh installation of CentOS 5.

To rectify:
1. Note down the missing file name, NOT the package name (as we are not aware of the package name).
2. For lookup purposes, go to rpm.pbone.net, key in the missing file, click search, and from the listed packages you can guess which to install.
3. Try to install the package from your Linux distributor; for CentOS, use "yum install <package-name>".

By using this method, I succeeded in installing my favorite tools: mplayer, freemind, stardict ...

Cheers

Saturday, July 14, 2007

use EXCEPTIONS INTO to find duplicated records

SQL> create table stu (f1 number);
Table created.

SQL> insert into stu values (100);

1 row created.

SQL> insert into stu values (101);

1 row created.

SQL> insert into stu values (101);

1 row created.

SQL> insert into stu values (102);

1 row created.

SQL> insert into stu values (103);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from stu;

F1
----------
100
101
101
102
103


SQL> alter table stu add constraint f1_uk unique(f1) using index tablespace users;
alter table stu add constraint f1_uk unique(f1) using index tablespace users
*
ERROR at line 1:
ORA-02299: cannot validate (SYS.F1_UK) - duplicate keys found


SQL> desc exceptions
ERROR:
ORA-04043: object exceptions does not exist


SQL> @?/rdbms/admin/utlexcpt

Table created.

SQL> desc exceptions
Name Null? Type
----------------------------------------- -------- ----------------------------
ROW_ID ROWID
OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
CONSTRAINT VARCHAR2(30)


SQL> alter table stu add constraint f1_uk unique(f1) using index tablespace users exceptions into exceptions;
alter table stu add constraint f1_uk unique(f1) using index tablespace users exceptions into exceptions
*
ERROR at line 1:
ORA-02299: cannot validate (SYS.F1_UK) - duplicate keys found


SQL> desc exceptions
Name Null? Type
----------------------------------------- -------- ----------------------------
ROW_ID ROWID
OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
CONSTRAINT VARCHAR2(30)

SQL> select * from exceptions
2 /

ROW_ID OWNER TABLE_NAME
------------------ ------------------------------ ------------------------------
CONSTRAINT
------------------------------
AAAMfdAABAAANsSAAC SYS STU
F1_UK

AAAMfdAABAAANsSAAB SYS STU
F1_UK


SQL> select * from user_constraints where table_name='STU';

no rows selected

SQL> SELECT * FROM STU where rowid in ('AAAMfdAABAAANsSAAC','AAAMfdAABAAANsSAAB');

F1
----------
101
101

SQL> DELETE FROM STU where rowid in ('AAAMfdAABAAANsSAAB');
1 row deleted.

SQL> COMMIT;

Commit complete.

SQL> alter table stu add constraint f1_uk unique(f1) using index tablespace users exceptions into exceptions;

Table altered.


SQL> SET PAGES 1000
SQL> select * from user_constraints where table_name='STU';

OWNER CONSTRAINT_NAME C
------------------------------ ------------------------------ -
TABLE_NAME
------------------------------
SEARCH_CONDITION
--------------------------------------------------------------------------------
R_OWNER R_CONSTRAINT_NAME DELETE_RU STATUS
------------------------------ ------------------------------ --------- --------
DEFERRABLE DEFERRED VALIDATED GENERATED BAD RELY LAST_CHAN
-------------- --------- ------------- -------------- --- ---- ---------
INDEX_OWNER INDEX_NAME INVALID
------------------------------ ------------------------------ -------
VIEW_RELATED
--------------
SYS F1_UK U
STU

ENABLED
NOT DEFERRABLE IMMEDIATE VALIDATED USER NAME 14-JUL-07
SYS F1_UK

Friday, July 13, 2007

sqlplus command history wrapper -- gqlplus

Before I upgraded my Linux to CentOS 5 (equivalent to RHEL 5), I was happy with the SQL command wrapper rlwrap, except for a problem with automatically invoking "alias sqlplus='rlwrap sqlplus'" in .bash_profile: each time I had to type it again manually after launching gnome-terminal.

Now my favorite rlwrap can't be successfully compiled under CentOS 5, because I can't manage to install the prerequisite library, readline. No choice; I have to find an alternative solution.

Luckily, I found gqlplus. I downloaded it from sourceforge.net and, to my surprise, ran the pre-built binary without any compilation.

Moved it to $ORACLE_HOME/bin without hesitation.

The minor issue is that it looks slow to invoke; it takes tens of seconds on my P3-600 with 384MB of memory.

Never mind. It is my most stable machine.


BTW, I also found another open-source tool, yasql, but faced the same compilation problem. I think I need to improve my Linux knowledge rapidly.

MetaLink search engine plugin


There's a new MetaLink search engine plugin available for Microsoft Internet Explorer 7.0 and Mozilla Firefox 2.0.

Want to have something like this?

1. Go to http://mycroft.mozdev.org/ and type "metalink" to search for the plugin.
2. Install it.
3. That is it. Try!

9.2.0.8 critical bug 584523

A critical bug of 10.2.0.2. I learned about it the hard way ...

possible error message could be

ORA-01595: error freeing extent (2) of rollback segment (10))
ORA-01594: attempt to wrap into rollback segment (10) extent (2) which is being

ORA-00600: internal error code, arguments: [2032], [12598820], [2131047733], [81
92], [0], [1], [4155315611], [362479617]


ORA-00600: internal error code, arguments: [kcbzpb_1], [2206308750], [4], [1], [], [], [], []


bug 584523

10g init parameters (specific)

--Common Settings for 10g Database
commit_write='batch' #(or not to set it )
recyclebin=on
disk_asynch_io=false #(For platforms that don't support asynchronous I/O, like HP-UX)
-- Use hidden parameters as workaround for 10g bugs, which are fixed in future version.
_gby_hash_aggregation_enabled=FALSE # bug 4604970
_kghdsidx_count=1 #bug 5508574
_table_lookup_prefetch_size=0 #bug 5509707, 4148420

ORA-1031 while connect as sysdba on windows platform

Two possible causes are:

1. SQLNET.AUTHENTICATION_SERVICES = (NTS)

The value is not NTS . By default in 8.1.x and onwards, a new database uses Windows Native Authentication by having the above setting in the "sqlnet.ora" file.

2. No appropriate DBA group

Create the group ORA_DBA or ORA_<SID>_DBA, and add your Windows ID to it.

About _log_io_size

_LOG_IO_SIZE specifies the maximum number of redo blocks to write out per I/O request.
By default the value is 1/3 of the log buffer size, expressed in redo log blocks. Its range is between 1/3 (default) and 1/2 (max) of the log buffer size. Note that the redo log block size is not the same as the DB block size; it is platform-dependent, e.g. on HP-UX it is 1K.


To check its value, the two ways below work.
1. check BLOCK_SIZE from v$archived_log.
-- This is the logical block size of the archived log, which is the same as the logical block size of the online log from which the archived log was copied. The online log logical block size is a platform-specific value that is not adjustable by the user.
-- For my x86 Linux 32bit , it is 512 bytes.

2. select distinct lebsz "LGWR block size (Bytes)" from X$KCCLE;


Meanwhile, the max I/O size on HP-UX 11 is 256K; on HP-UX 10.20 it is 64K. Async I/O on HP-UX 11 has a limit of 1MB.

If the Oracle redo log block size on the system is 1K, and we are doing large batch commits and using a large log buffer to reduce LGWR I/Os, then it is possible we will be doing I/O writes larger than the OS I/O size. Hence we are not doing efficient I/O writes.

e.g. with a 1MB log buffer, 1/3 of that is 341K, which will be 341 redo blocks written to the redo log. This incurs two I/O requests, right?

So, if we are doing large commits and we want more efficient I/Os:

If HPUX 11 LVM has a max IO size of 256K, and
my redo log block size is 1K, then should I set this parameter to 256?


As LGWR writes whatever is to be written out in one I/O (i.e. it uses one system call to write all the buffered data), LGWR may flush up to 1/2 of the log buffer if _log_io_size is changed. So in theory, setting it to the max I/O size of the OS LVM should help.
(I never had a chance to tweak this parameter, so I can't say for sure that it will improve performance, but looking at the facts it should.)
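Combining the pieces above, a sketch for estimating the default value on a given instance (x$ views require a SYSDBA connection; column names are the ones quoted above):

```sql
-- Default _log_io_size is roughly log_buffer / 3, expressed in redo log blocks
SELECT p.value / 3 / l.lebsz AS default_log_io_blocks
FROM   v$parameter p,
       (SELECT DISTINCT lebsz FROM x$kccle) l
WHERE  p.name = 'log_buffer';
```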

Wednesday, July 11, 2007

About Long datatype

LONG is a legacy datatype that exists for backward compatibility. It stores variable-length alphanumeric
data up to 2GB in size. There are many restrictions on the usage of the columns of type
LONG: there is a limit of one column of type LONG per table, tables containing a LONG cannot be
partitioned, LONG datatypes cannot be used in subqueries, and few functions will work with
LONG data. The CLOB datatype is the preferred datatype for character data larger than VARCHAR2.

Can't start DBConsole

C:\Documents and Settings\oracle>emctl start dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.2.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
http://mdw01:1159/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control ...The OracleDBConsoleALPHAservice is starting.............


A service specific error occurred: 2.

More help is available by typing NET HELPMSG 3547.


Below is an excerpt from Metalink Note:344150.1.
Applies to:
Enterprise Manager for RDBMS - Version: 10.1.0.4
Microsoft Windows 2000
Symptoms
The dbconsole service does not start.
'emctl start dbconsole' returns the following error:
' A service specific error occurred: 2'
Cause
The Windows service has not enough privileges to be started.
Solution
Configure the DbConsole service to use the "Log on As" feature. Log on as the 'oracle' user, which is a
member of local Administrators.

When Does Database Writer Write?

The DBWn background process writes to the datafiles whenever one of the following events occurs:
- A user's server process has searched too long for a free buffer when reading a buffer into the Buffer Cache.
- The number of modified and committed, but unwritten, buffers in the Database Buffer Cache is too large.
- At a database checkpoint event.
- The instance is shut down using any method other than a shutdown abort.
- A tablespace is placed into backup mode.
- A tablespace is taken offline to make it unavailable or changed to READ ONLY.
- A segment is dropped.

When Does Log Writer Write?

The LGWR background process writes to the current redo log group whenever one of the
following database events occurs:
- Every three seconds.
- A user commits a transaction.
- The Redo Log Buffer is one-third full.
- The Redo Log Buffer contains 1MB worth of redo information.
- Before the DBWn process writes, whenever a database checkpoint occurs.

Tuesday, July 10, 2007

Optimal Flexible Architecture (OFA)

http://members.ozemail.com.au/~gpiper/oracle/ora_8.html
http://download-uk.oracle.com/docs/html/A97297_01/appg_ofa.htm
http://www.remote-dba.cc/teaser_aegis_ault_dba_ofa.htm

One way to simplify installation planning is to adopt the Optimal Flexible Architecture
(OFA) model that Oracle recommends as a best-practice methodology for managing Oracle
installations in Unix environments (and to a lesser extent, Windows environments).

The OFA model prescribes that the directory structures under the mount points use a consistent
and meaningful naming convention.
In addition to this naming convention, the OFA model also
assigns standard operating system environment variable names to some of these directory paths
as “nicknames” to aid in navigation and ensure portability of the directory structures in the
event that they need to be moved to new file systems.


The OFA model is useful for establishing a manageable directory structure for a new Oracle
server. The OFA model recommends mount point, directory, and file-naming conventions.
Once the OFA structure is established, you can use the OUI to install the Oracle 10g software
into the location you’ve selected.

Saturday, July 07, 2007

Shrink table space

SQL> create table waste_table as select * from dba_objects;

Table created.

SQL> set lines 2000
SQL> select bytes, segment_name from user_segments;

BYTES SEGMENT_NAME
---------- ---------------------------------------------------------------------------------
6291456 WASTE_TABLE

SQL> delete from waste_table;

50553 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes, segment_name from user_segments;

BYTES SEGMENT_NAME
---------- ---------------------------------------------------------------------------------
6291456 WASTE_TABLE

--use SHRINK SPACE COMPACT first, to avoid locking the table while end users are accessing it

SQL> alter table waste_table shrink space compact;
alter table waste_table shrink space compact
*
ERROR at line 1:
ORA-10636: ROW MOVEMENT is not enabled


SQL> alter table waste_table enable row movement;

Table altered.

SQL> alter table waste_table shrink space compact;

Table altered.

SQL> select bytes, segment_name from user_segments;

BYTES SEGMENT_NAME
---------- ---------------------------------------------------------------------------------
6291456 WASTE_TABLE

--issue the command a second time (without COMPACT) to lower the High Water Mark
SQL> alter table waste_table shrink space ;

Table altered.

SQL> select bytes, segment_name from user_segments;

BYTES SEGMENT_NAME
---------- ---------------------------------------------------------------------------------
65536 WASTE_TABLE

Tuesday, June 26, 2007

Understanding roles in Oracle stored procedures

By Bob Watkins, Special to ZDNet Asia
24 May 2007

One of the trickiest parts of Oracle's security model is the way that roles (collections of database privileges) interact with stored procedures, functions, and packages. Object privileges in Oracle can be granted directly to the user or indirectly via a role.

Suppose an HR user grants some permissions on the EMPLOYEES table to user ABEL:

GRANT select, insert, update, delete ON employees TO abel;

This directly grants the four privileges mentioned to the user named ABEL. On the other hand, suppose an HR user did this:

GRANT select, insert, update, delete ON employees TO hr_role;

If ABEL has been granted the role HR_ROLE, he now has these privileges indirectly via that role.

Either way, ABEL now has the SELECT privilege on the table HR.EMPLOYEES. If ABEL selects data from the table directly via the SELECT statement, it doesn't matter how he obtained permission. However, if ABEL tries to create stored procedures, functions, or packages that SELECT from this table, it makes a big difference whether he was granted permission directly or via a role.

Oracle requires that permissions to non-owned objects in a stored procedure be granted directly to the user. Roles are temporarily turned off during compilation, and the user has no access to anything granted through them. This is done for performance and security reasons. Roles can be dynamically activated and deactivated via the SET ROLE command, and it would be a large overhead for Oracle to constantly check which roles and permissions are currently active.

The following code shows a short stored procedure that updates the HR copy of employees (the code assumes that a synonym, EMPLOYEES, is used to stand for HR.EMPLOYEES). When Abel tries to compile this under the first case above with direct rights, the compilation succeeds. When he tries to compile it under the second case above with only indirect rights, the compilation fails.

CREATE OR REPLACE PROCEDURE update_emp (
p_employee_id IN NUMBER
,p_salary IN NUMBER
)

AS
v_department_id employees.department_id%TYPE;

BEGIN
SELECT department_id INTO v_department_id
FROM employees
WHERE employee_id = p_employee_id;


UPDATE employees
SET salary = p_salary
WHERE employee_id = p_employee_id;


IF v_department_id = 100 THEN
UPDATE local_employees
SET salary = p_salary
WHERE employee_id = p_employee_id;
END IF;



END;

/

One interesting fact is that granting to PUBLIC is the same as granting to all users directly. PUBLIC is often thought of as a role, but it isn't. It's a collection of users and not a collection of permissions. If the permissions on HR.EMPLOYEES had been granted to PUBLIC, ABEL would have been able to create his stored procedure. While it's not recommended in the case of an EMPLOYEES table, any table that is granted to PUBLIC can be freely used in stored procedures.

Render query tool output in HTML

By Bob Watkins, TechRepublic
SQL*Plus has traditionally been thought of as a plain text SQL query tool. But since Oracle 8i, it has also had the capability to render its output using HTML.


One of SQL*Plus's environment settings, MARKUP, controls what kind of markup language (if any) to use for its output. By default, MARKUP defines HTML as the markup language, but markup itself is turned off. A set of HTML tags is predefined; all you have to do is turn markup on by typing:

SET MARKUP HTML ON
and the tags will be added to each output produced by SQL*Plus. For example, after activating the feature as above, you could type the following:

SPOOL deptlist.html
SELECT * FROM departments;
SPOOL OFF
and the result would be formatted as an HTML table ready to add to an intranet or other Web page. To create a complete HTML document, including the <html> and </html> tags and a CSS style sheet, type:

SET MARKUP HTML ON SPOOL ON
To turn the feature off again or exit the session, type:

SET MARKUP HTML OFF
or

SET MARKUP HTML OFF SPOOL OFF
If you don't like the way that SQL*Plus formats the output, no problem. You can also use the SET MARKUP command to replace the built-in formatting codes with your own. The HEAD, BODY, TABLE, and other options let you specify the HTML to generate.
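As a sketch of those options (the page title and table attributes below are made-up values, not from the original article), a customized report could look like this:

```sql
-- Continuation character "-" lets a SET command span lines in SQL*Plus
SET MARKUP HTML ON SPOOL ON -
  HEAD "<title>Department List</title>" -
  TABLE "border='1' width='90%'"
SPOOL deptlist.html
SELECT * FROM departments;
SPOOL OFF
SET MARKUP HTML OFF SPOOL OFF
```

The HEAD text is inserted verbatim into the generated page's <head> section, and the TABLE text becomes attributes of the generated <table> tag.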

For more information, consult the SQL*Plus User's Guide and Reference, Chapter 7, Generating HTML Reports from SQL*Plus.

Monday, June 25, 2007

Compressing Data for Space and Speed

Just happened to read this article "Compressing Data for Space and Speed "

http://www.oracle.com/technology/oramag/oracle/04-mar/o24tech_data.html


It looks suitable for a reporting database / ODS.


According to the article,

The benefits are:

1. About half the space is saved.
2. Faster queries, as fewer blocks are accessed.

The overhead is that data loading takes roughly twice as long.


Details of testing list below
Listing 3: Comparing blocks in uncompressed and compressed tables

ANALYZE TABLE SALES_HISTORY COMPUTE STATISTICS;
ANALYZE TABLE SALES_HISTORY_COMP COMPUTE STATISTICS;

SELECT TABLE_NAME, BLOCKS, NUM_ROWS, COMPRESSION
FROM USER_TABLES
WHERE TABLE_NAME LIKE 'SALES_HIST%';

TABLE_NAME BLOCKS NUM_ROWS COMPRESSION
------------------ ------ -------- -----------
SALES_HISTORY 12137 1000000 DISABLED
SALES_HISTORY_COMP 6188 1000000 ENABLED

Listing 4: Comparing queries on uncompressed and compressed tables

TKPROF results of the query on the uncompressed table:

SELECT SALE_DATE, COUNT(*) FROM SALES_HISTORY GROUP BY SALE_DATE;

call     count  cpu   elapsed  disk   query  current  rows
-------  -----  ----  -------  -----  -----  -------  ----
Parse        1  0.00     0.01      0      0        0     0
Execute      1  0.00     0.00      0      0        0     0
Fetch        2  5.22    13.76  10560  12148        0     1
-------  -----  ----  -------  -----  -----  -------  ----
total        4  5.22    13.78  10560  12148        0     1


TKPROF results of the query on the compressed table:

SELECT SALE_DATE, COUNT(*) FROM SALES_HISTORY_COMP GROUP BY SALE_DATE;

call     count  cpu   elapsed  disk   query  current  rows
-------  -----  ----  -------  -----  -----  -------  ----
Parse        1  0.00     0.00      0      0        0     0
Execute      1  0.00     0.00      0      0        0     0
Fetch        2  5.27     7.20   6082   6091        0     1
-------  -----  ----  -------  -----  -----  -------  ----
total        4  5.27     7.20   6082   6091        0     1

customized oem notification

ora-01594

This is related to 8i database, which uses rollback segments.

01594, 00000, "attempt to wrap into rollback segment (%s) extent (%s) which is being freed"
// *Cause: Undo generated to free a rollback segment extent is attempting
// to write into the same extent due to small extents and/or too
// many extents to free
// *Action: The rollback segment shrinking will be rollbacked by the system;
// increase the optimal size of the rollback segment.

select * from dba_rollback_segs;
select * from v$rollstat;
--get the optimal size there

Use the command below to increase the optimal size, if needed:
alter rollback segment .. storage ( .. optimal );
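A concrete sketch of that command (the segment name and size here are hypothetical placeholders, not values from the affected database):

```sql
-- rbs01 and 20M are placeholders; substitute the segment name from
-- dba_rollback_segs and a size larger than the current OPTIMAL value
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL 20M);
```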

A few thoughts on troubleshooting

Friday, June 22, 2007

WARNING: Subscription for node down event still pending

After I added "SUBSCRIBE_FOR_NODE_DOWN_EVENT_=OFF" to listener.ora, I noticed the message below in listener.log:


20-JUN-2007 10:50:57 * ping * 0
WARNING: Subscription for node down event still pending
According to the Net Services Administrator's Guide (Chapter 16, Troubleshooting Oracle Net Services), I think it is safe to ignore this, as I don't have ONS.


Listener Subscription for ONS Node Down Event Information

Listener will subscribe to the Oracle Notification Service (ONS) node down event on startup if ONS configuration file is available. This subscription enables the listener to remove the affected service when it receives node down event notification from ONS. The listener uses asynchronous subscription for the event notification. The following warning message will be recorded to listener log file on each STATUS command if the subscription has not completed; for example if the ONS daemon is not running on the host.

WARNING: Subscription for node down event still pending

Listener will not be able to receive the ONS event while subscription is pending. Other than that, no other listener functionality is affected.

Wednesday, June 20, 2007

About function based index

SQL> create table stu (f1 number, f2 varchar2(10));

Table created.

SQL> insert into stu values(100,'want');

1 row created.

SQL> insert into stu values(101,'ye');

1 row created.

SQL> insert into stu values(103,'li');

1 row created.

SQL> set autotrace on
SQL> delete from stu where f2='ye';

1 row deleted.


Execution Plan
----------------------------------------------------------
Plan hash value: 1645979371

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 1 | 7 | 3 (0)| 00:00:01 |
| 1 | DELETE | STU | | | | |
|* 2 | TABLE ACCESS FULL| STU | 1 | 7 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("F2"='ye')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
1 db block gets
18 consistent gets
0 physical reads
320 redo size
830 bytes sent via SQL*Net to client
725 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> insert into stu values(102,'ye');

1 row created.


Execution Plan
----------------------------------------------------------

-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
1 consistent gets
0 physical reads
280 redo size
834 bytes sent via SQL*Net to client
728 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL> select * from stu;

F1 F2
---------- ----------
100 want
103 li
102 ye

SQL> create index f2_idx on stu(upper(f2));

Index created.

SQL> set autotrace on
SQL> select * from stu where f2='ye';

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2614136206

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| STU | 1 | 20 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("F2"='ye')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select * from stu where upper(f2)='YE';

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2667645883

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| STU    |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | F2_IDX |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(UPPER("F2")='YE')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
0 db block gets
13 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select * from stu where upper(f2)=upper('ye');

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2667645883

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| STU    |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | F2_IDX |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(UPPER("F2")='YE')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
10 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off


SQL> set pages 1000
SQL> select * from user_indexes where index_name='F2_IDX';

INDEX_NAME INDEX_TYPE
------------------------------ ---------------------------
TABLE_OWNER TABLE_NAME TABLE_TYPE
------------------------------ ------------------------------ -----------
UNIQUENES COMPRESS PREFIX_LENGTH TABLESPACE_NAME INI_TRANS
--------- -------- ------------- ------------------------------ ----------
MAX_TRANS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE
---------- -------------- ----------- ----------- ----------- ------------
PCT_THRESHOLD INCLUDE_COLUMN FREELISTS FREELIST_GROUPS PCT_FREE LOG
------------- -------------- ---------- --------------- ---------- ---
BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY
---------- ----------- ------------- -----------------------
AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS NUM_ROWS SAMPLE_SIZE
----------------------- ----------------- -------- ---------- -----------
LAST_ANAL DEGREE
--------- ----------------------------------------
INSTANCES PAR T G S BUFFER_ USE DURATION
---------------------------------------- --- - - - ------- --- ---------------
PCT_DIRECT_ACCESS ITYP_OWNER ITYP_NAME
----------------- ------------------------------ ------------------------------
PARAMETERS
--------------------------------------------------------------------------------
GLO DOMIDX_STATU DOMIDX FUNCIDX_ JOI IOT DRO
--- ------------ ------ -------- --- --- ---
F2_IDX FUNCTION-BASED NORMAL
ORARA STU TABLE
NONUNIQUE DISABLED USERS 2
255 65536 1 2147483645
10 YES
0 1 3 1
1 1 VALID 3 3
20-JUN-07 1
1 NO N N N DEFAULT NO


NO ENABLED NO NO NO


SQL> select * from user_ind_columns where index_name='F2_IDX';

INDEX_NAME TABLE_NAME
------------------------------ ------------------------------
COLUMN_NAME
--------------------------------------------------------------------------------
COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESC
--------------- ------------- ----------- ----
F2_IDX STU
SYS_NC00003$
1 10 10 ASC


SQL> select column_name from user_ind_columns where index_name='F2_IDX';

COLUMN_NAME
--------------------------------------------------------------------------------
SYS_NC00003$

SQL> desc user_ind_expressions
Name Null? Type
----------------------------------------- -------- ----------------------------
INDEX_NAME VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
COLUMN_EXPRESSION LONG
COLUMN_POSITION NUMBER

SQL> select * from user_ind_expressions;

INDEX_NAME TABLE_NAME
------------------------------ ------------------------------
COLUMN_EXPRESSION
--------------------------------------------------------------------------------
COLUMN_POSITION
---------------
F2_IDX STU
UPPER("F2")
1

SQL> spool off

Tuesday, June 19, 2007

The 12 biggest killers of development productivity



Windows system error code

http://msdn2.microsoft.com/en-us/library/ms681381.aspx

Saturday, June 16, 2007

Schedule cygwin shell script with windows' scheduler

The default Cygwin shortcut actually runs a Windows batch script. Below is its source code.

@echo off

C:
chdir C:\cygwin\bin

bash --login -i

To use the Windows scheduler, you need to pass the bash shell script to cygwin.bat automatically.
How? Here is the solution: use a redirect.

@echo off

REM C:
C:\cygwin\bin\bash --login -i <"C:\Documents and Settings\liqy\avupd.sh"

Friday, June 15, 2007

pga_aggregate_target tuning

"Recently I have been digging deeper into tuning and ran an experiment:
building an index on a table with 30 GB of data.
At first pga_aggregate_target was set to 200M, and I noticed one-pass values in v$sql_workarea_histogram but didn't pay much attention. Later I set it to 1000M and the build was faster. Right now I am testing 2G to see whether it gets faster still.

What I want to see is: for memory-sensitive SQL statement operations, how much does the size of the work area actually affect speed?"

The relevant view is v$sql_workarea_histogram.
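A sketch of the kind of query used to watch that view (column names per the 10g database reference):

```sql
-- Executions per work-area size bucket, split into optimal (all in memory),
-- one-pass, and multi-pass (spilled to temp more than once)
SELECT low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea_histogram
WHERE  total_executions <> 0;
```

Non-zero onepass or multipass counts suggest pga_aggregate_target is too small for the workload.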

alter tablespace compress

relevant commands are:

alter tablespace SUBSCRIBER02 default compress;

alter tablespace SUBSCRIBER02 default nocompress;

select tablespace_name, DEF_TAB_COMPRESSION from dba_tablespaces;

Thursday, June 14, 2007

Advanced usage of ci

> co -p1.1 initOCP.ora >1.txt
initOCP.ora,v --> stdout
revision 1.1
bill07:B.11:ADMP:/software/oracle1/admin/CATP/pfile> co -p1.2 initCATP.ora >2.txt
initOCP.ora,v --> stdout
revision 1.2
> ls -lrt
total 36
drwxrwxr-x 2 oracle1 dba1 2048 Jun 12 10:18 archive
-r--r--r-- 1 oracle1 dba1 2640 Jun 14 14:29 initOCP.ora,v
-rw-rw-r-- 1 oracle1 dba1 2214 Jun 14 14:30 1.txt
-rw-rw-r-- 1 oracle1 dba1 2241 Jun 14 14:30 2.txt
> diff 1.txt 2.txt
64c64
<
---
> fast_start_mttr_target=1200

about fast_start_mttr_target

Besides reducing the time needed for crash recovery, it also affects performance by shortening checkpoint time, especially when the DB has batch jobs running during certain periods.
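To see how the instance is tracking the target, you can query v$instance_recovery (a sketch; exact columns vary slightly by version):

```sql
-- Target vs. currently estimated mean time to recover, in seconds
SELECT target_mttr, estimated_mttr
FROM   v$instance_recovery;
```

If estimated_mttr stays well above target_mttr, the target is too aggressive for the current redo/checkpointing configuration.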

Sunday, June 10, 2007

申请OCP证书的流程

http://www.itpub.net/778013.html

Saturday, June 09, 2007

commit_write Performance

This new 10g parameter behaves quite differently depending on the combination of values, and it is not clearly documented.

My test results are listed below:

commit_write value  SQL perf  Remarks
------------------  --------  -------------------------
immediate,wait      9:28      "log file sync" observed
null,null           1:21
immediate,nowait    1:54
batch,nowait        1:10
immediate           1:17
batch               1:07
batch,wait          10:20     "log file sync" observed

Oracle Price Model

These EE options are not cheap.

Oracle RAC
Oracle Partitioning
Oracle OLAP
Oracle Data Mining
Oracle Spatial
Oracle Advanced Security
Oracle Label Security

Waiting for smon to disable tx recovery

Yesterday, while I was dropping a stalled, big mview log (7 GB; the command had been running for about 30 minutes), the system was rebooted by the system administrator.

I didn't know what would happen to this database.

This morning I tried to bring it up. Luckily, it opened successfully as usual.
Verified that :
--the mview log is not shown in dba_mview_logs
--the segment type is shown as TEMPORARY in dba_segments

I had no idea about this TEMPORARY segment, but thought it was okay since this is not a production environment; we could solve it easily by syncing again.


However, when I tried "shutdown immediate", the DB appeared to hang at that step; "database closed" was not shown immediately as usual.
After 15 minutes I checked alert.log; the last line was:
"Waiting for smon to disable tx recovery." and the CPU utilization of smon was almost 100%. I realized it was doing some recovery. This could be related to the TEMPORARY segment, and I hoped the recovery would help clean it up.

So I just let it run; no rush to open another session to issue "shutdown abort".

At the same time, I did some research on the internet, which supported my suspicion: the DB was doing cleanup.

The relevant Metalink note is 1076161.6:
"Verify that temporary segments are decreasing
---------------------------------------------
To verify that the temporary segments are decreasing have an active session
available in Server Manager during the SHUTDOWN IMMEDIATE. Issue the following
query to ensure the database is not hanging, but is actually performing extent
cleanup:

SVRMGR> select count(block#) from fet$;
COUNT(BLOC
----------
7

SVRMGR> select count(block#) from uet$;
COUNT(BLOC
----------
402

After some time has elapsed, reissue the query and see that the values for fet$
have increased while the values for uet$ have decreased:

SVRMGR> select count(block#) from fet$;
COUNT(BLOC
----------
10

SVRMGR> select count(block#) from uet$;
COUNT(BLOC
----------
399

During shutdown the SMON process is cleaning up extents and updating the data
dictionary tables with the marked free extents. As the extents are marked as
freed, they are removed from the table for used extents, UET$ and placed on the
table for free extents, FET$."


Finally, after 1.5 hours, the DB was shutdown gracefully.

Opened it again and checked that the segment was no longer there; 7 GB of tablespace reclaimed!
Shut it down within a few seconds.

Attached the alert.log for your reference.
Fri Jun 8 09:16:03 2007
ARC1: Completed archiving log 4 thread 1 sequence 28
Fri Jun 8 09:26:18 2007
Shutting down instance: further logons disabled
Shutting down instance (immediate)
License high water mark = 4
Fri Jun 8 09:26:18 2007
ALTER DATABASE CLOSE NORMAL
Fri Jun 8 09:31:21 2007
Waiting for smon to disable tx recovery.
Fri Jun 8 11:05:07 2007
SMON: disabling tx recovery
SMON: disabling cache recovery
Fri Jun 8 11:05:08 2007
Shutting down archive processes
Archiving is disabled
Fri Jun 8 11:05:08 2007
ARCH shutting down
ARC1: Archival stopped
Fri Jun 8 11:05:08 2007
ARCH shutting down
Fri Jun 8 11:05:08 2007
ARC0: Archival stopped
Fri Jun 8 11:05:08 2007
Thread 1 closed at log sequence 29
Successful close of redo thread 1
Fri Jun 8 11:05:08 2007
Completed: ALTER DATABASE CLOSE NORMAL
Fri Jun 8 11:05:08 2007
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archiving is disabled
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archiving is disabled
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active

Wednesday, June 06, 2007

set prompt for sql*plus

I like to have a clear command-line prompt to remind me where / who I am.

I just learned that this can be done in SQL*Plus as well. It is useful when multiple instances run in one ORACLE_HOME.

The command can be added to $ORACLE_HOME/sqlplus/admin/glogin.sql.

With the command below, you will have something like SYS@OCP> as your SQL*Plus prompt:
set sqlprompt "_user'@'_user_identifier> "

ORA-01720 when view on other schema's objects

ORA-01720 grant option does not exist for 'string.string'

Cause: A grant was being performed on a view and the grant option was not present for an underlying object.

Action: Obtain the grant option on all underlying objects of the view.

Tuesday, June 05, 2007

What are Incarnation and Thread Number?

The 10g LOG_ARCHIVE_FORMAT documentation mentions:
%t Thread number
%r Resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database

What exactly do they mean and what are they used for?

Thread generally applies to RAC: each instance in a RAC is one thread, while a single-instance database has only thread 1.

Incarnation literally means "embodiment". Personally I think it mainly matters after a RESETLOGS: after resetlogs, the log sequence is reset and the previous controlfile can no longer be used for recovery; the current controlfile is an incarnation of the previously backed-up controlfile.

In the RMAN catalog, a database has one incarnation number.

Asynchronous Commit :commit_write

The apps team reported that a few heavy jobs ran much slower after the upgrade to 10g.

After two hours of investigation, I noticed a lot of "log file sync" wait events; the root cause was definitely too many COMMITs. However, that alone cannot explain why the same jobs were fine in 9i.

There must be something different.

After comparing all parameters, I noticed that commit_write does not exist in 9i, while in 10g its value was set to 'BATCH,WAIT'.

That must be the thing I want.

check metalink, oracle docs ...

starting testing ...

I found the significant difference: with NOWAIT, or with commit_write not set at all, no more "log file sync" wait events were observed. The test SQL finished within 1.5 minutes versus the problematic 1 hour.
Wow, what a nice day. There should be many happy faces tomorrow ...

Table 2-1 Initialization Parameter and COMMIT Options for Managing Commit Redo

Option     Specifies that . . .
WAIT       The commit does not return as successful until the redo corresponding
           to the commit is persisted in the online redo logs (default).
NOWAIT     The commit should return to the application without waiting for the
           redo to be written to the online redo logs.
IMMEDIATE  The log writer process should write the redo for the commit
           immediately (default). In other words, this option forces a disk I/O.
BATCH      Oracle Database should buffer the redo. The log writer process is
           permitted to write the redo to disk in its own time.
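The same options can also be requested at session or statement scope with 10g syntax; a quick sketch:

```sql
-- Override the init parameter for this session only
ALTER SESSION SET commit_write = 'BATCH,NOWAIT';

-- Or choose the behavior for a single transaction
COMMIT WRITE BATCH NOWAIT;
```

This makes it easy to test the combinations above against one job without touching the instance-wide setting.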

To remember this moment in the China stock market

Important notes for next week:
1. Watch the pesticide and timber sectors: large numbers of retail investors will need to drink pesticide, and shortly afterwards will need coffins and urns made of timber, so both sectors will benefit.
2. Major bad news for the pork sector: since investors "cut meat" (sold at a loss) heavily today, the meat supply will increase greatly and pork prices will fall sharply, which is bad for listed pork companies!
3. Urgent watch on the steel sector: crowds of investors are out buying kitchen knives to go on a rampage; knives are sold out, steel is in short supply, and several knife makers are preparing to go public.

open_cursors differs in 9i and10g

In 9i it is a hard limit: if open cursors exceed it, you hit an Oracle error and the application misbehaves.
In 10g it appears to be a soft limit. Luckily, according to the Oracle docs below, there is no overhead in setting this value higher than actually needed.

"It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed."
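To see how close sessions actually get to the limit, a sketch using the session statistics views (statistic name per 10g):

```sql
-- Current open cursor count per session, highest first
SELECT s.sid, s.value AS open_cursors
FROM   v$sesstat s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'opened cursors current'
ORDER  BY s.value DESC;
```

Comparing the top value against OPEN_CURSORS shows whether the setting has comfortable headroom.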

Monday, June 04, 2007

How to use GET_THRESHOLD

The day before yesterday, under time pressure while solving a production problem (and because I am not familiar with PL/SQL), I didn't know how to use DBMS_SERVER_ALERT.GET_THRESHOLD. Now I have the answer.

Thanks to the article from http://turner.itpub.net/post/2343/66558

Below is a sample that gets the threshold for tablespace usage.

-- using DBMS_SERVER_ALERT.GET_THRESHOLD
variable warn_oper varchar2;
variable warn_value number;
variable crit_oper varchar2;
variable crit_value number;
variable obs_per number;
variable cons_oc number;

BEGIN
DBMS_SERVER_ALERT.GET_THRESHOLD(
DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
:warn_oper, :warn_value, :crit_oper, :crit_value,
:obs_per, :cons_oc, 'OCP',
DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE ,NULL
);
END;
/
print warn_value
print crit_value


--to check a specific tablespace, replace NULL with the actual name, like 'SYSTEM'

Things to note:

1. In the arguments, once you use named notation ("=>") you can't go back to positional notation; otherwise you'll see PLS-00312: a positional parameter association may not follow a named association.
2. You must GUESS the OBJECT_TYPE linked to the metric name. Currently I don't think this is well documented in the Oracle documentation set.

How to trace

--Instance-level trace

alter system|session set sql_trace=true;

--session-level SQL tracing
select sid,serial# from v$session where username='ABC' ;
execute dbms_monitor.session_trace_enable(session_id=>123, serial_num=>123);
execute dbms_monitor.session_trace_disable(session_id=>123, serial_num=>123);

Tracing with database control
--The DBMS_MONITOR package has procedures that let you enable tracing at these levels:
*session level
*Module level
*Client ID level
*Service Level
*Action
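For example, tracing at the service/module/action level looks like this (the service, module, and action names below are hypothetical):

```sql
-- Trace every session that runs under this service/module/action combination
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'OCP',
    module_name  => 'BATCH',
    action_name  => 'NIGHTLY_LOAD');
END;
/
```

The matching SERV_MOD_ACT_TRACE_DISABLE call with the same arguments turns it off again.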

ORA-3136 in 10.2


Mon Jun 4 01:59:08 2007
WARNING: inbound connection timed out (ORA-3136)

> oerr ora 3136
03136, 00000, "inbound connection timed out"
// *Cause: Inbound connection was timed out by the server because
// user authentication was not completed within the given time
// specified by SQLNET.INBOUND_CONNECT_TIMEOUT or its default value
// *Action: 1) Check SQL*NET and RDBMS log for trace of suspicious connections.
// 2) Configure SQL*NET with a proper inbound connect timeout value
// if necessary.
The log in sqlnet.log is
***********************************************************************
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for HPUX: Version 10.2.0.2.0 - Production
Oracle Bequeath NT Protocol Adapter for HPUX: Version 10.2.0.2.0 - Production
TCP/IP NT Protocol Adapter for HPUX: Version 10.2.0.2.0 - Production
Time: 04-JUN-2007 01:59:08
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12606
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.22.100.155)(PORT=1341))
According to MetaLink notes 345197.1 and 316901.1, the default value in 10.2 is 60 seconds (0 seconds in 10.1).
Hence we might need to monitor how often this warning occurs to decide whether to set INBOUND_CONNECT_TIMEOUT = 0.


Oracle also recommends setting the timeout for both the listener and the database.
Hence the following addition to listener.ora is advised (the suffix is the listener name):

INBOUND_CONNECT_TIMEOUT_<listener_name> = 0
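
A sketch of the pair of settings, assuming the listener is named LISTENER (adjust the suffix to your listener's actual name):

```
# sqlnet.ora on the database server -- seconds the server waits
# for client authentication to complete
SQLNET.INBOUND_CONNECT_TIMEOUT = 0

# listener.ora -- seconds the listener waits for a valid connect packet
INBOUND_CONNECT_TIMEOUT_LISTENER = 0
```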

Saturday, June 02, 2007

DB Monitoring scripts

When does a checkpoint occur?


We know that a checkpoint flushes dirty buffers to disk, but when does one happen? The following situations trigger a checkpoint:
1. When a log group switch occurs;
2. When the limits set by LOG_CHECKPOINT_TIMEOUT (the maximum number of seconds between the previous checkpoint and the most recent redo record),
LOG_CHECKPOINT_INTERVAL (the number of redo records to be read during recovery; optimal = redo log size / OS block size (512)),
FAST_START_IO_TARGET (the number of data blocks needed for recovery), or
FAST_START_MTTR_TARGET (lets the DBA specify the number of seconds the database needs for crash recovery) are reached;
3. When ALTER SYSTEM SWITCH LOGFILE is run;
4. When ALTER SYSTEM CHECKPOINT is run;
5. When ALTER TABLESPACE xxx BEGIN BACKUP / END BACKUP is run;
6. When a tablespace or datafile is taken offline.
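
The manual triggers in points 3-6 correspond to SQL*Plus commands like the following (a sketch; the tablespace and datafile names are illustrative, and DBA privileges are required):

```sql
-- Force a log switch (trigger 3); this also drives a checkpoint
ALTER SYSTEM SWITCH LOGFILE;

-- Request an immediate full checkpoint (trigger 4)
ALTER SYSTEM CHECKPOINT;

-- Begin/end hot backup of a tablespace (trigger 5)
ALTER TABLESPACE users BEGIN BACKUP;
ALTER TABLESPACE users END BACKUP;

-- Take a datafile offline (trigger 6)
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' OFFLINE;
```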

Have a Rest

Testing ...

DBMS_SERVER_ALERT

Yesterday (Friday; I don't work Saturdays) I configured four server-generated alerts on four databases, which send critical alerts via SMS. Unfortunately I didn't expect one of the databases to be so busy during the night (the DB Wait Time critical threshold was exceeded), so the SMS beeps kept coming. What a night.

Only a limited number of ports are opened on the firewall, so I couldn't reach the DB Control web page. All I had in the end was the SYSMAN account and its password, so I had no choice but to explore DBMS_SERVER_ALERT to temporarily raise the threshold.

Google ... 10g OEM documentation ...
Here is what I learned quickly.

select * from dba_thresholds where metrics_name='Database Wait Time Ratio';
--easier than using DBMS_SERVER_ALERT.GET_THRESHOLD
BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.DATABASE_WAIT_TIME,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
warning_value => '95',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
critical_value => '99',
observation_period => 1,
consecutive_occurrences => 3,
instance_name => 'orcl10g2',
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SYSTEM,
object_name => NULL
);
END;
/

--for blocking session count
--session blocking for CUSPA
select * from dba_thresholds where metrics_name ='Blocked User Session Count';

BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.BLOCKED_USERS,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
warning_value => '50',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
critical_value => '100',
observation_period => 1,
consecutive_occurrences => 15,
instance_name => 'orcl10g2',
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SESSION,
object_name => NULL
);
END;
/
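
After changing thresholds, the 10g alert dictionary views can be queried to verify what is currently raised (a sketch, assuming the standard DBA_OUTSTANDING_ALERTS and DBA_ALERT_HISTORY views):

```sql
-- Alerts currently raised by the server
SELECT reason, metric_value, creation_time
  FROM dba_outstanding_alerts;

-- Alerts that were raised and have since been cleared
SELECT reason, resolution
  FROM dba_alert_history;
```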

Hope I can sleep well tonight.

Ref Oracle 10g doc:
Enterprise Manager Oracle Database and Database-Related Metric Reference Manual
Database PL/SQL Packages and Types Reference

Log Miner

We can use LogMiner to analyze the redo log files of another instance (the versions need not match), but the following requirements must be met:

1. LogMiner can only analyze logs produced by Oracle 8 and later.

2. LogMiner must use a dictionary file generated by the database instance being analyzed, and the character set of the database where LogMiner is installed must match the character set of the analyzed database.

3. The platform of the analyzed database must be the same as the platform of the database running LogMiner, and the block sizes must be the same.



Using LogMiner

1. Install LogMiner:

To install the LogMiner tool, first run the following two scripts:
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
Both scripts must be run as SYS.



2. Create the data dictionary file

First add the UTL_FILE_DIR parameter to the init.ora initialization parameter file; its value is the server directory where the dictionary file will be placed, e.g.:
UTL_FILE_DIR = (D:\Oracle\logs)



Restart the database so the new parameter takes effect, then create the dictionary file:
SQL> EXECUTE dbms_logmnr_d.build(
dictionary_filename => 'logmn_ora817.dat',
dictionary_location => 'D:\Oracle\logs');

Creating the data dictionary lets LogMiner use object names, rather than internal hexadecimal IDs, when referring to parts of the internal data dictionary. If the tables in the analyzed database change in a way that affects its data dictionary, the dictionary file must be recreated.



3. Add the log files to be analyzed

LogMiner can analyze both online redo log files and archived log files, but using archived log files is generally recommended.

a. Add a new log file:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.001', Options=>dbms_logmnr.new);

b. Add another log file to the list:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.002', Options=>dbms_logmnr.addfile);



c. Remove a log file:

SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.002', Options=>dbms_logmnr.removefile);



Once the log files to be analyzed have been added, the analysis can begin.



4. Run the analysis

SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'D:\Oracle\logs\logmn_ora817.dat');

Restrictions can be applied:

Time range: use the StartTime and EndTime parameters of dbms_logmnr.start_logmnr

SCN range: use the StartScn and EndScn parameters of dbms_logmnr.start_logmnr
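
The time-range restriction can be sketched like this (the dates are illustrative, and the dictionary file path is the one from step 2):

```sql
-- Analyze only redo generated inside a time window
BEGIN
  dbms_logmnr.start_logmnr(
    DictFileName => 'D:\Oracle\logs\logmn_ora817.dat',
    StartTime => TO_DATE('2007-06-01 00:00:00','YYYY-MM-DD HH24:MI:SS'),
    EndTime   => TO_DATE('2007-06-02 00:00:00','YYYY-MM-DD HH24:MI:SS'));
END;
/
```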



5. View the results:

Mainly by querying v$logmnr_contents:

SQL> desc v$logmnr_contents;


The sql_redo column gives the SQL operations performed in the log file, and sql_undo gives the SQL statements that would undo them.

You can also classify and count all the operations in the log file with a query like:

select operation,count(*) from v$logmnr_contents group by operation;


The analysis results in v$logmnr_contents exist only for the lifetime of the session that ran the procedure dbms_logmnr.start_logmnr. This is because all LogMiner storage lives in PGA memory: no other session can see it, and the results are cleared when the session ends.



Finally, use the DBMS_LOGMNR.END_LOGMNR procedure to end the log analysis session; the PGA memory area is then cleared.


To view the analysis results, query v$logmnr_contents:
a. DML operations, for example:
SELECT operation,sql_redo,sql_undo FROM v$logmnr_contents
WHERE seg_name = 'QIUYB';

OPERATION SQL_REDO SQL_UNDO
---------- -------------------------- --------------------------
INSERT insert into qiuyb.qiuyb ... delete from qiuyb.qiuyb...

Here operation is the operation performed, sql_redo is the actual statement, and sql_undo is the inverse statement used to cancel it.

b. DDL operations, for example:
SELECT timestamp,sql_redo FROM v$logmnr_contents
WHERE upper(sql_redo) like '%TRUNCATE%';

OCM list

Admiring ...
http://www.oracle.com/technology/ocm/index.html

Tuesday, May 29, 2007

Bad magic number

Hope this title didn't shock you.

Today was a nightmare! While I was on the way to the office, someone SMSed me that tens of jobs had failed.
Below is the funny message:

ORA-28575

/usr/lib/dld.sl: Bad magic number for shared library:
/home/app/oracle/product/9.0.1/lib/libclntsh.sl.9.0
/usr/lib/dld.sl: Exec format error

When I mentioned this to the sysadmins, three of them jumped up and three others laughed.

While I was investigating, quite a few managers checked with me about other areas and kept asking me WHY WHY WHY.
I was not in the mood. Are their things more important than a production problem? I would appreciate it if they could "PLAN" better in future, with the whole picture in mind.

Back to the subject: this was caused by a change to ORACLE_HOME in the profile, after which the shared library search path could no longer find the 32-bit library file.

i.e. SHLIB_PATH includes $ORACLE_HOME/lib (64-bit) and does not include lib32 (32-bit).
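
A sketch of the fix, assuming the usual HP-UX Oracle layout where $ORACLE_HOME/lib holds 64-bit libraries and $ORACLE_HOME/lib32 holds the 32-bit ones (the ORACLE_HOME value is the one from the error message):

```shell
# Prepend lib32 so 32-bit executables resolve libclntsh.sl.9.0
# from the 32-bit directory instead of the 64-bit one
ORACLE_HOME=/home/app/oracle/product/9.0.1
export ORACLE_HOME
SHLIB_PATH=$ORACLE_HOME/lib32:$SHLIB_PATH
export SHLIB_PATH
echo "$SHLIB_PATH"
```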