Tuesday, June 26, 2007

Understanding roles in Oracle stored procedures

By Bob Watkins, Special to ZDNet Asia
24 May 2007

One of the trickiest parts of Oracle's security model is the way that roles (collections of database privileges) interact with stored procedures, functions, and packages. Object privileges in Oracle can be granted directly to the user or indirectly via a role.

Suppose an HR user grants some permissions on the EMPLOYEES table to user ABEL:

GRANT select, insert, update, delete ON employees TO abel;

This directly grants the four privileges mentioned to the user named ABEL. On the other hand, suppose an HR user did this:

GRANT select, insert, update, delete ON employees TO hr_role;

If ABEL has been granted the role HR_ROLE, he now has these privileges indirectly via that role.

Either way, ABEL now has the SELECT privilege on the table HR.EMPLOYEES. If ABEL selects data from the table directly via the SELECT statement, it doesn't matter how he obtained permission. However, if ABEL tries to create stored procedures, functions, or packages that SELECT from this table, it makes a big difference whether he was granted permission directly or via a role.

Oracle requires that privileges on objects you don't own, when referenced in a stored procedure, be granted directly to the user. Roles are disabled during compilation, so the user has no access to anything granted through them. This is done for performance and security reasons: roles can be dynamically enabled and disabled via the SET ROLE command, and it would be a large overhead for Oracle to constantly check which roles and privileges are currently active.
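One way to see this effect outside a procedure is to disable roles in the current session; the session then has only directly granted privileges, just as the compiler does. A minimal sketch (schema names as in the example below):

```sql
-- Disable all roles for this session; only direct grants now apply.
SET ROLE NONE;

-- Succeeds if SELECT was granted directly to ABEL;
-- fails with ORA-00942 if access came only through HR_ROLE.
SELECT COUNT(*) FROM hr.employees;
```

This mirrors the privilege environment in which a definer's-rights procedure is compiled and run.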

The following code shows a short stored procedure that updates the HR copy of employees (the code assumes that a synonym, EMPLOYEES, is used to stand for HR.EMPLOYEES). When Abel tries to compile this under the first case above with direct rights, the compilation succeeds. When he tries to compile it under the second case above with only indirect rights, the compilation fails.

CREATE OR REPLACE PROCEDURE update_emp (
   p_employee_id IN NUMBER
  ,p_salary      IN NUMBER
)
AS
   v_department_id employees.department_id%TYPE;
BEGIN
   SELECT department_id INTO v_department_id
     FROM employees
    WHERE employee_id = p_employee_id;

   UPDATE employees
      SET salary = p_salary
    WHERE employee_id = p_employee_id;

   IF v_department_id = 100 THEN
      UPDATE local_employees
         SET salary = p_salary
       WHERE employee_id = p_employee_id;
   END IF;
END;
/

One interesting fact is that granting to PUBLIC is the same as granting to all users directly. PUBLIC is often thought of as a role, but it isn't. It's a collection of users and not a collection of permissions. If the permissions on HR.EMPLOYEES had been granted to PUBLIC, ABEL would have been able to create his stored procedure. While it's not recommended in the case of an EMPLOYEES table, any table that is granted to PUBLIC can be freely used in stored procedures.

Render query tool output in HTML

By Bob Watkins, TechRepublic
SQL*Plus has traditionally been thought of as a plain text SQL query tool. But since Oracle 8i, it has also had the capability to render its output using HTML.


One of SQL*Plus's environment settings, MARKUP, controls what kind of markup language (if any) to use for its output. By default, MARKUP defines HTML as the markup language, but markup itself is turned off. A set of HTML tags is predefined; all you have to do is turn markup on by typing:

SET MARKUP HTML ON
and the tags will be added to each output produced by SQL*Plus. For example, after activating the feature as above, you could type the following:

SPOOL deptlist.html
SELECT * FROM departments;
SPOOL OFF
and the result would be formatted as an HTML table ready to add to an intranet or other Web page. To create a complete HTML document, including the <HTML> and </HTML> tags and a CSS style sheet, type:

SET MARKUP HTML ON SPOOL ON
To turn the feature off again or exit the session, type:

SET MARKUP HTML OFF
or

SET MARKUP HTML OFF SPOOL OFF
If you don't like the way that SQL*Plus formats the output, no problem. You can also use the SET MARKUP command to replace the built-in formatting codes with your own. The HEAD, BODY, TABLE, and other options let you specify the HTML to generate.
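The replacement tags are supplied as quoted strings on the same command. A sketch of a customized setting (the attribute values here are purely illustrative):

```sql
-- The trailing "-" continues a SQL*Plus command onto the next line.
SET MARKUP HTML ON -
  HEAD "<title>Department report</title>" -
  BODY "TEXT='#000000' BGCOLOR='#ffffff'" -
  TABLE "WIDTH='90%' BORDER='1'"
```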

For more information, consult the SQL*Plus User's Guide and Reference, Chapter 7, Generating HTML Reports from SQL*Plus.

Monday, June 25, 2007

Compressing Data for Space and Speed

I just happened to read the article "Compressing Data for Space and Speed":

http://www.oracle.com/technology/oramag/oracle/04-mar/o24tech_data.html

It looks suitable for a reporting database / ODS.


According to the article, the benefits are:

1. About half the space is saved
2. Faster queries, as fewer blocks are accessed

The overhead is that data loading takes about twice as long.
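As a sketch of how such a comparison could be set up (the table names follow the article's example; the exact DDL is my assumption):

```sql
-- Build a compressed copy of an existing table with a direct-path CTAS;
-- block-level compression applies to the bulk-loaded rows.
CREATE TABLE sales_history_comp COMPRESS
AS SELECT * FROM sales_history;

-- An existing table can also be rebuilt compressed in place:
ALTER TABLE sales_history MOVE COMPRESS;
```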


Details of the testing are listed below.
Code Listing 3: Comparing blocks in uncompressed and compressed tables

ANALYZE TABLE SALES_HISTORY COMPUTE STATISTICS;
ANALYZE TABLE SALES_HISTORY_COMP COMPUTE STATISTICS;

SELECT TABLE_NAME, BLOCKS, NUM_ROWS, COMPRESSION
FROM USER_TABLES
WHERE TABLE_NAME LIKE 'SALES_HIST%';

TABLE_NAME BLOCKS NUM_ROWS COMPRESSION
------------------ ------ -------- -----------
SALES_HISTORY 12137 1000000 DISABLED
SALES_HISTORY_COMP 6188 1000000 ENABLED

Code Listing 4: Comparing queries on uncompressed and compressed tables

TKPROF results of the query on the uncompressed table:

SELECT SALE_DATE, COUNT(*) FROM SALES_HISTORY GROUP BY SALE_DATE;

call count cpu elapsed disk query current rows
------- ------ ---- ------- ----- ---------- ---------- -----
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 5.22 13.76 10560 12148 0 1
------- ------ ---- ------- ----- ---------- ---------- -----
total 4 5.22 13.78 10560 12148 0 1


TKPROF results of the query on the compressed table:

SELECT SALE_DATE, COUNT(*) FROM SALES_HISTORY_COMP GROUP BY SALE_DATE;

call count cpu elapsed disk query current rows
------- ------ ---- ------- ----- ---------- ---------- -----
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 5.27 7.20 6082 6091 0 1
------- ------ ---- ------- ----- ---------- ---------- -----
total 4 5.27 7.20 6082 6091 0 1

customized oem notification

ORA-01594

This is related to 8i database, which uses rollback segments.

01594, 00000, "attempt to wrap into rollback segment (%s) extent (%s) which is being freed"
// *Cause: Undo generated to free a rollback segment extent is attempting
// to write into the same extent due to small extents and/or too
// many extents to free
// *Action: The rollback segment shrinking will be rollbacked by the system;
// increase the optimal size of the rollback segment.

select * from dba_rollback_segs;
select * from v$rollstat;
--get the optimal size there

Use the command below to increase the optimal size, if needed:
alter rollback segment .. storage ( .. optimal );
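A concrete form of that command, with a hypothetical segment name and size:

```sql
-- Segment name and OPTIMAL value are illustrative only.
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL 20M);
```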

Some thoughts on troubleshooting

Friday, June 22, 2007

WARNING: Subscription for node down event still pending

After I added
SUBSCRIBE_FOR_NODE_DOWN_EVENT_=OFF
to listener.ora, I noticed that the message below appears in listener.log:


20-JUN-2007 10:50:57 * ping * 0
WARNING: Subscription for node down event still pending
According to the Net Services Administrator's Guide, Chapter 16, "Troubleshooting Oracle Net Services" (excerpted below), I think it is safe to ignore, as I don't have ONS.


Listener Subscription for ONS Node Down Event Information

Listener will subscribe to the Oracle Notification Service (ONS) node down event on startup if ONS configuration file is available. This subscription enables the listener to remove the affected service when it receives node down event notification from ONS. The listener uses asynchronous subscription for the event notification. The following warning message will be recorded to listener log file on each STATUS command if the subscription has not completed; for example if the ONS daemon is not running on the host.

WARNING: Subscription for node down event still pending

Listener will not be able to receive the ONS event while subscription is pending. Other than that, no other listener functionality is affected.

Wednesday, June 20, 2007

About function-based indexes

The SQL*Plus session below shows that after creating an index on UPPER(f2), a query filtering on f2 alone still does a full table scan; only a predicate on UPPER(f2) can use the function-based index. The index expression is recorded in USER_IND_EXPRESSIONS, and the indexed expression appears as a hidden system column (SYS_NC00003$) in USER_IND_COLUMNS.

SQL> create table stu (f1 number, f2 varchar2(10));

Table created.

SQL> insert into stu values(100,'want');

1 row created.

SQL> insert into stu values(101,'ye');

1 row created.

SQL> insert into stu values(103,'li');

1 row created.

SQL> set autotrace on
SQL> delete from stu where f2='ye';

1 row deleted.


Execution Plan
----------------------------------------------------------
Plan hash value: 1645979371

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 1 | 7 | 3 (0)| 00:00:01 |
| 1 | DELETE | STU | | | | |
|* 2 | TABLE ACCESS FULL| STU | 1 | 7 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("F2"='ye')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
1 db block gets
18 consistent gets
0 physical reads
320 redo size
830 bytes sent via SQL*Net to client
725 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> insert into stu values(102,'ye');

1 row created.


Execution Plan
----------------------------------------------------------

-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
1 consistent gets
0 physical reads
280 redo size
834 bytes sent via SQL*Net to client
728 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL> select * from stu;

F1 F2
---------- ----------
100 want
103 li
102 ye

SQL> create index f2_idx on stu(upper(f2));

Index created.

SQL> set autotrace on
SQL> select * from stu where f2='ye';

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2614136206

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| STU | 1 | 20 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("F2"='ye')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select * from stu where upper(f2)='YE';

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2667645883

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| STU    |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | F2_IDX |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(UPPER("F2")='YE')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
0 db block gets
13 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> select * from stu where upper(f2)=upper('ye');

F1 F2
---------- ----------
102 ye


Execution Plan
----------------------------------------------------------
Plan hash value: 2667645883

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| STU    |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | F2_IDX |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(UPPER("F2")='YE')

Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
10 consistent gets
0 physical reads
0 redo size
573 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off


SQL> set pages 1000
SQL> select * from user_indexes where index_name='F2_IDX';

INDEX_NAME INDEX_TYPE
------------------------------ ---------------------------
TABLE_OWNER TABLE_NAME TABLE_TYPE
------------------------------ ------------------------------ -----------
UNIQUENES COMPRESS PREFIX_LENGTH TABLESPACE_NAME INI_TRANS
--------- -------- ------------- ------------------------------ ----------
MAX_TRANS INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE
---------- -------------- ----------- ----------- ----------- ------------
PCT_THRESHOLD INCLUDE_COLUMN FREELISTS FREELIST_GROUPS PCT_FREE LOG
------------- -------------- ---------- --------------- ---------- ---
BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY
---------- ----------- ------------- -----------------------
AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR STATUS NUM_ROWS SAMPLE_SIZE
----------------------- ----------------- -------- ---------- -----------
LAST_ANAL DEGREE
--------- ----------------------------------------
INSTANCES PAR T G S BUFFER_ USE DURATION
---------------------------------------- --- - - - ------- --- ---------------
PCT_DIRECT_ACCESS ITYP_OWNER ITYP_NAME
----------------- ------------------------------ ------------------------------
PARAMETERS
--------------------------------------------------------------------------------
GLO DOMIDX_STATU DOMIDX FUNCIDX_ JOI IOT DRO
--- ------------ ------ -------- --- --- ---
F2_IDX FUNCTION-BASED NORMAL
ORARA STU TABLE
NONUNIQUE DISABLED USERS 2
255 65536 1 2147483645
10 YES
0 1 3 1
1 1 VALID 3 3
20-JUN-07 1
1 NO N N N DEFAULT NO


NO ENABLED NO NO NO


SQL> select * from user_ind_columns where index_name='F2_IDX';

INDEX_NAME TABLE_NAME
------------------------------ ------------------------------
COLUMN_NAME
--------------------------------------------------------------------------------
COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESC
--------------- ------------- ----------- ----
F2_IDX STU
SYS_NC00003$
1 10 10 ASC


SQL> select column_name from user_ind_columns where index_name='F2_IDX';

COLUMN_NAME
--------------------------------------------------------------------------------
SYS_NC00003$

SQL> desc user_ind_expressions
Name Null? Type
----------------------------------------- -------- ----------------------------
INDEX_NAME VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
COLUMN_EXPRESSION LONG
COLUMN_POSITION NUMBER

SQL> select * from user_ind_expressions;

INDEX_NAME TABLE_NAME
------------------------------ ------------------------------
COLUMN_EXPRESSION
--------------------------------------------------------------------------------
COLUMN_POSITION
---------------
F2_IDX STU
UPPER("F2")
1

SQL> spool off

Tuesday, June 19, 2007

The 12 big killers of development efficiency



Windows system error code

http://msdn2.microsoft.com/en-us/library/ms681381.aspx

Saturday, June 16, 2007

Schedule a Cygwin shell script with the Windows scheduler

The default Cygwin launcher is actually a Windows batch script (cygwin.bat). Below is its source code.

@echo off

C:
chdir C:\cygwin\bin

bash --login -i

To use the Windows scheduler, the bash shell script needs to be passed to cygwin.bat automatically.
How? Here is the solution: by using input redirection.

@echo off

REM C:
C:\cygwin\bin\bash --login -i <"C:\Documents and Settings\liqy\avupd.sh"
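With that wrapper saved as its own .bat file, a scheduled task can also be created from the command line; a sketch (the task name, path, and schedule are illustrative):

```bat
schtasks /create /tn "avupd" /tr "C:\cygwin_avupd.bat" /sc daily /st 02:00
```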

Friday, June 15, 2007

pga_aggregate_target tuning

"Recently I have been digging deeper into tuning, and I ran an experiment:
a table with 30 GB of data, on which I am building an index.
At first, pga_aggregate_target was set to 200m; I noticed one-pass values in v$sql_workarea_histogram but did not pay much attention. Later I set it to 1000m and found the build faster. I am now running a test with it set to 2g, to see whether it gets faster still.

What I want to see is: for memory-sensitive SQL statement operations, how much does the work-area size really affect speed?"

The relevant table is v$sql_workarea_histogram
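A quick way to check the effect is to look at the work-area histogram directly; a sketch (column names per the 10g reference):

```sql
-- Executions by work-area size bucket: optimal (all in memory),
-- one-pass, and multi-pass spills to temp.
SELECT low_optimal_size/1024        AS low_kb,
       (high_optimal_size + 1)/1024 AS high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
  FROM v$sql_workarea_histogram
 WHERE total_executions <> 0
 ORDER BY low_optimal_size;
```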

alter tablespace compress

The relevant commands are:

alter tablespace SUBSCRIBER02 default compress;

alter tablespace SUBSCRIBER02 default nocompress;

select tablespace_name, DEF_TAB_COMPRESSION from dba_tablespaces;

Thursday, June 14, 2007

Advanced usage of ci

> co -p1.1 initOCP.ora >1.txt
initOCP.ora,v --> stdout
revision 1.1
bill07:B.11:ADMP:/software/oracle1/admin/CATP/pfile> co -p1.2 initCATP.ora >2.txt
initOCP.ora,v --> stdout
revision 1.2
> ls -lrt
total 36
drwxrwxr-x 2 oracle1 dba1 2048 Jun 12 10:18 archive
-r--r--r-- 1 oracle1 dba1 2640 Jun 14 14:29 initOCP.ora,v
-rw-rw-r-- 1 oracle1 dba1 2214 Jun 14 14:30 1.txt
-rw-rw-r-- 1 oracle1 dba1 2241 Jun 14 14:30 2.txt
> diff 1.txt 2.txt
64c64
<
---
> fast_start_mttr_target=1200

about fast_start_mttr_target

Besides reducing the time needed for crash recovery, it also affects performance by shortening checkpoints. This matters especially when the DB has batch jobs running during certain periods.
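A sketch of setting it and checking the effect (the value 1200 matches the diff above; v$instance_recovery reports the target versus the current estimate):

```sql
-- Target roughly 20 minutes of crash recovery time.
ALTER SYSTEM SET fast_start_mttr_target = 1200;

-- Compare the target with Oracle's current estimate.
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```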

Sunday, June 10, 2007

The process for applying for the OCP certificate

http://www.itpub.net/778013.html

Saturday, June 09, 2007

commit_write Performance

This new 10g parameter behaves quite differently depending on the combination of values, and it is not very clearly documented.

My testing results are listed below:

commit_write value   SQL elapsed   Remarks
immediate,wait       9:28          "log file sync" observed
null,null            1:21
immediate,nowait     1:54
batch,nowait         1:10
immediate            1:17
batch                1:07
batch,wait           10:20         "log file sync" observed

Oracle Price Model

These EE options do not look cheap:

Oracle RAC
Oracle Partitioning
Oracle OLAP
Oracle Data Mining
Oracle Spatial
Oracle Advanced Security
Oracle Label Security

Waiting for smon to disable tx recovery

Yesterday, while I was dropping a stalled big mview log (7 GB; the command had been running for about 30 minutes), the system was rebooted by the system administrator.

I didn't know what would happen to this database.

This morning, I tried to bring it up. Luckily, it opened successfully as usual.
I verified that:
--the mview log is no longer shown in dba_mview_logs
--the segment type is shown as TEMPORARY in dba_segments

I had no idea about this TEMPORARY segment, but thought it was okay since this is not a production environment; we could solve it easily by syncing again.


However, when I tried to "shutdown immediate", the DB appeared to hang at that step; "Database closed" was not shown immediately as usual.
After 15 minutes, I checked alert.log; the last line was
"Waiting for smon to disable tx recovery.", and the CPU utilization of smon was almost 100%. I realized it was doing some recovery. This could be related to the TEMPORARY segment, and I hoped the recovery would help clean it up.

So I just let it run; there was no rush to open another session and issue "shutdown abort".

At the same time, I did some research on the Internet, which supported my suspicion -- the DB was doing cleanup.

The relevant Metalink note is 1076161.6:
"Verify that temporary segments are decreasing
---------------------------------------------
To verify that the temporary segments are decreasing have an active session
available in Server Manager during the SHUTDOWN IMMEDIATE. Issue the following
query to ensure the database is not hanging, but is actually perform extent
cleanup:

SVRMGR> select count(block#) from fet$;
COUNT(BLOC
----------
7

SVRMGR> select count(block#) from uet$;
COUNT(BLOC
----------
402

After some time has elapsed, reissue the query and see that the values for fet$
have increased while the values or uet$ have decreased:

SVRMGR> select count(block#) from fet$;
COUNT(BLOC
----------
10

SVRMGR> select count(block#) from uet$;
COUNT(BLOC
----------
399

During shutdown the SMON process is cleaning up extents and updating the data
dictionary tables with the marked free extents. As the extents are marked as
freed, they are removed from the table for used extents, UET$ and placed on the
table for free extents, FET$."


Finally, after 1.5 hours, the DB was shut down gracefully.

Opening it again, I checked that the segment was no longer there -- 7 GB of tablespace reclaimed!
The next shutdown completed within a few seconds.

The relevant part of alert.log is attached for reference.
Fri Jun 8 09:16:03 2007
ARC1: Completed archiving log 4 thread 1 sequence 28
Fri Jun 8 09:26:18 2007
Shutting down instance: further logons disabled
Shutting down instance (immediate)
License high water mark = 4
Fri Jun 8 09:26:18 2007
ALTER DATABASE CLOSE NORMAL
Fri Jun 8 09:31:21 2007
Waiting for smon to disable tx recovery.
Fri Jun 8 11:05:07 2007
SMON: disabling tx recovery
SMON: disabling cache recovery
Fri Jun 8 11:05:08 2007
Shutting down archive processes
Archiving is disabled
Fri Jun 8 11:05:08 2007
ARCH shutting down
ARC1: Archival stopped
Fri Jun 8 11:05:08 2007
ARCH shutting down
Fri Jun 8 11:05:08 2007
ARC0: Archival stopped
Fri Jun 8 11:05:08 2007
Thread 1 closed at log sequence 29
Successful close of redo thread 1
Fri Jun 8 11:05:08 2007
Completed: ALTER DATABASE CLOSE NORMAL
Fri Jun 8 11:05:08 2007
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archiving is disabled
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archiving is disabled
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active

Wednesday, June 06, 2007

set prompt for sql*plus

I like to have a clear command-line prompt to remind me where / who I am.

I just learned that this can be done in SQL*Plus as well. It is useful when multiple instances run out of one ORACLE_HOME.

The command can be added to $ORACLE_HOME/sqlplus/admin/glogin.sql.

With the command below, you will get something like SYS@OCP> as your SQL*Plus prompt:
set sqlprompt "_user'@'_user_identifier> "

ORA-01720 with a view on another schema's objects

ORA-01720 grant option does not exist for 'string.string'

Cause: A grant was being performed on a view and the grant option was not present for an underlying object.

Action: Obtain the grant option on all underlying objects of the view.

Tuesday, June 05, 2007

What are Incarnation and Thread Number?

They are mentioned in the description of LOG_ARCHIVE_FORMAT in 10g:
%t Thread Number
%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database

Q: What exactly do they mean, and what are they used for?

Thread mainly applies to RAC: each instance in a RAC is one thread, and a single-instance database has only thread 1.

Incarnation literally means "embodiment". Personally I think it mainly applies to the situation after a RESETLOGS: after RESETLOGS the log sequence is reset, and the old controlfile can no longer be used for recovery; the current controlfile is an incarnation of the previously backed-up controlfile.

In the RMAN catalog, each database has an incarnation number.

Asynchronous Commit :commit_write

The apps team reported that a few heavy jobs ran much slower after the upgrade to 10g.

After two hours of investigation, I noticed a large number of "log file sync" wait events; the root cause was definitely too many COMMITs. However, that alone cannot explain why the behavior differed from 9i.

There must be something different.

After comparing all the parameters, I noticed that commit_write does not exist in 9i, while in 10g its value was set to 'BATCH,WAIT'.

That must be the thing I wanted.

Check Metalink, Oracle docs ...

Start testing ...

I found a significant difference: with NOWAIT, or with commit_write not set at all, no more "log file sync" wait events were observed. The testing SQL could finish within 1.5 minutes, versus the problematic 1 hour.
Wow, what a nice day. There should be many happy faces tomorrow ...

Table 2-1 Initialization Parameter and COMMIT Options for Managing Commit Redo

Option     Specifies that . . .
WAIT       The commit does not return as successful until the redo corresponding to the commit is persisted in the online redo logs (default).
NOWAIT     The commit should return to the application without waiting for the redo to be written to the online redo logs.
IMMEDIATE  The log writer process should write the redo for the commit immediately (default). In other words, this option forces a disk I/O.
BATCH      Oracle Database should buffer the redo. The log writer process is permitted to write the redo to disk in its own time.
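For reference, the parameter can be changed at system or session level, and 10g also allows the options directly on the COMMIT statement; a sketch:

```sql
ALTER SYSTEM  SET commit_write = 'IMMEDIATE,NOWAIT';
ALTER SESSION SET commit_write = 'BATCH,NOWAIT';

-- Per-statement override:
COMMIT WRITE BATCH NOWAIT;
```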

To remember this moment of the China stock market

Important notes for next week:
1. Watch the pesticide and timber sectors: large numbers of retail investors will need to drink pesticide, and shortly afterwards will need coffins and urns made from timber, so both sectors stand to benefit.
2. Major bad news for the pork sector: with investors "cutting meat" (selling at a loss) in large volume today, the meat supply will increase greatly and pork prices will drop sharply, which is bad for listed pork companies!
3. Urgent watch on the steel sector: crowds of investors are out buying kitchen knives for a bloody showdown; kitchen knives are sold out, steel is in short supply, and several knife makers are preparing IPOs.

open_cursors differs in 9i and10g

In 9i, it is a hard limit: if open cursors exceed it, you hit an Oracle error and the application misbehaves.
In 10g, it looks like a soft limit. Luckily, according to the Oracle docs quoted below, there is no overhead in setting this value higher than actually needed.

"It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed."
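To see how close sessions actually get to the limit, the session statistics can be checked; a common sketch:

```sql
-- Current open-cursor counts per session, highest first,
-- to compare against the OPEN_CURSORS setting.
SELECT s.sid, ss.value AS open_cursors
  FROM v$statname sn
  JOIN v$sesstat  ss ON ss.statistic# = sn.statistic#
  JOIN v$session  s  ON s.sid = ss.sid
 WHERE sn.name = 'opened cursors current'
 ORDER BY ss.value DESC;
```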

Monday, June 04, 2007

How to use GET_THRESHOLD

The day before yesterday, due to time pressure while solving a production problem (and my unfamiliarity with PL/SQL), I didn't know how to use DBMS_SERVER_ALERT.GET_THRESHOLD. Now I have the answer.

Thanks to the article at http://turner.itpub.net/post/2343/66558

Below is a sample that gets the threshold of tablespace percent-full.

-- using DBMS_SERVER_ALERT.GET_THRESHOLD
-- (the operators are returned as numbers, the threshold values as strings)
variable warn_oper number
variable warn_value varchar2(100)
variable crit_oper number
variable crit_value varchar2(100)
variable obs_per number
variable cons_oc number

BEGIN
DBMS_SERVER_ALERT.GET_THRESHOLD(
DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
:warn_oper, :warn_value, :crit_oper, :crit_value,
:obs_per, :cons_oc, 'OCP',
DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE ,NULL
);
END;
/
print warn_value
print crit_value


--to check specific tablespace is to replace NULL with actual name , like 'SYSTEM'

Things to note:

1. In the argument list, named notation ("=>") can't be mixed with positional notation; otherwise you'll see PLS-00312: a positional parameter association may not follow a named association.
2. You must GUESS the OBJECT_TYPE that goes with each metric name. I don't think this is currently well documented in the Oracle documentation set.

How to trace

--Instance-level or session-level trace

alter system set sql_trace=true;
alter session set sql_trace=true;

--session-level SQL tracing
select sid,serial# from v$session where username='ABC' ;
execute dbms_monitor.session_trace_enable(session_id=>123, serial_num=>123);
execute dbms_monitor.session_trace_disable(session_id=>123, serial_num=>123);

Tracing with Database Control
--The DBMS_MONITOR package has procedures that let you enable tracing at these levels:
*Session level
*Module level
*Client ID level
*Service level
*Action level
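A sketch of a few of those levels using DBMS_MONITOR (the service, module, and client identifier names here are illustrative):

```sql
-- Service/module level (module is optional):
EXECUTE dbms_monitor.serv_mod_act_trace_enable(service_name=>'OCP', module_name=>'BATCH');
EXECUTE dbms_monitor.serv_mod_act_trace_disable(service_name=>'OCP', module_name=>'BATCH');

-- Client identifier level (as set via DBMS_SESSION.SET_IDENTIFIER):
EXECUTE dbms_monitor.client_id_trace_enable(client_id=>'abel');
EXECUTE dbms_monitor.client_id_trace_disable(client_id=>'abel');
```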

ORA-3136 in 10.2


Mon Jun 4 01:59:08 2007
WARNING: inbound connection timed out (ORA-3136)

> oerr ora 3136
03136, 00000, "inbound connection timed out"
// *Cause: Inbound connection was timed out by the server because
// user authentication was not completed within the given time
// specified by SQLNET.INBOUND_CONNECT_TIMEOUT or its default value
// *Action: 1) Check SQL*NET and RDBMS log for trace of suspicious connections.
// 2) Configure SQL*NET with a proper inbound connect timeout value
// if necessary.
The log in sqlnet.log is
***********************************************************************
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for HPUX: Version 10.2.0.2.0 - Production
Oracle Bequeath NT Protocol Adapter for HPUX: Version 10.2.0.2.0 - Production
TCP/IP NT Protocol Adapter for HPUX: Version 10.2.0.2.0 - Production
Time: 04-JUN-2007 01:59:08
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12606
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.22.100.155)(PORT=1341))
According to Metalink notes 345197.1 and 316901.1, the default value in 10.2 is 60 seconds (0 seconds in 10.1).
Hence, we might need to monitor the frequency of these warnings to decide whether to set INBOUND_CONNECT_TIMEOUT=0.


Oracle also recommends setting it for both the listener and the database.
Hence, adding the line below to listener.ora is advised:

INBOUND_CONNECT_TIMEOUT_ = 0
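The database-side equivalent goes in sqlnet.ora on the server; a sketch (value in seconds, 0 disables the timeout):

```
SQLNET.INBOUND_CONNECT_TIMEOUT = 0
```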

Saturday, June 02, 2007

DB Monitoring scripts

When does a checkpoint occur?


We know that a checkpoint flushes dirty data, but when does a checkpoint actually occur? The following situations trigger one:
1. When a log group switch happens;
2. When driven by the settings of LOG_CHECKPOINT_TIMEOUT (limits the number of seconds between the previous checkpoint and the most recent redo record), LOG_CHECKPOINT_INTERVAL (the number of redo records to be read during recovery; optimal = redo log size / OS block size (512)), fast_start_io_target (the number of data blocks needed for recovery), or fast_start_mttr_target (lets the DBA specify the number of seconds the database needs for crash recovery);
3. When ALTER SYSTEM SWITCH LOGFILE is run;
4. When ALTER SYSTEM CHECKPOINT is run;
5. When alter tablespace XXX begin backup / end backup is run;
6. When alter tablespace or datafile offline is run.

Have a Rest

Testing ...

DBMS_SERVER_ALERT

Yesterday (Friday; I don't work on Saturdays), I configured four server-generated alerts on four databases, which send critical alerts via SMS. Unfortunately, I didn't expect one of the databases to be so busy at night (DB Wait Time critical threshold exceeded); the SMS beeps kept coming. What a night.

The firewall port had not been opened, so I couldn't access DB Control's web page. All I had in the end was the SYSMAN account and password, so there was no choice but to explore DBMS_SERVER_ALERT to temporarily raise the threshold.

Google ... the 10g OEM documents ...
Here is what I learned quickly.

select * from dba_thresholds where metrics_name='Database Wait Time Ratio';
--easier than using DBMS_SERVER_ALERT.GET_THRESHOLD
BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.DATABASE_WAIT_TIME,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
warning_value => '95',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
critical_value => '99',
observation_period => 1,
consecutive_occurrences => 3,
instance_name => 'orcl10g2',
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SYSTEM,
object_name => NULL
);
END;
/

--for blocking session count
--session blocking for CUSPA
select * from dba_thresholds where metrics_name ='Blocked User Session Count';

BEGIN
DBMS_SERVER_ALERT.SET_THRESHOLD(
metrics_id => DBMS_SERVER_ALERT.BLOCKED_USERS,
warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
warning_value => '50',
critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE,
critical_value => '100',
observation_period => 1,
consecutive_occurrences => 15,
instance_name => 'orcl10g2',
object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SESSION,
object_name => NULL
);
END;
/

Hope I can sleep well tonight.

Ref Oracle 10g doc:
Enterprise Manager Oracle Database and Database-Related Metric Reference Manual
Database PL/SQL Packages and Types Reference

Log Miner

We can use LogMiner to analyze the redo log files of another instance (the versions may differ), but the following requirements must be met:

1. LogMiner can only analyze products from Oracle 8 onward

2. LogMiner must use a dictionary file generated by the analyzed database instance, and the character set of the database running LogMiner must be the same as that of the analyzed database

3. The platform of the analyzed database must be the same as the platform of the database where LogMiner runs, and the block size must be the same.



Using LogMiner

1. Install LogMiner:

To install the LogMiner tool, first run the following two scripts:
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
Both scripts must be run as the SYS user.


2. Create the data dictionary file

First, add the UTL_FILE_DIR parameter to the init.ora initialization file; its value is the server directory where the dictionary file will be placed. For example:
UTL_FILE_DIR = (D:\Oracle\logs)


Restart the database so the new parameter takes effect, then create the dictionary file:
SQL> EXECUTE dbms_logmnr_d.build(
dictionary_filename => 'logmn_ora817.dat',
dictionary_location => 'D:\Oracle\logs');

Creating a data dictionary lets LogMiner use object names, rather than internal hexadecimal IDs, when it refers to parts of the internal data dictionary. If tables in the analyzed database change and the data dictionary changes with them, the dictionary file must be recreated.



3. Add the log files to analyze

LogMiner can analyze online redo log files as well as archived log files, but archived log files are generally recommended.

a. Add a new log file:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.001', Options=>dbms_logmnr.new);

b. Add another log file to the list:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.002', Options=>dbms_logmnr.addfile);

c. Remove a log file:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'D:\database\oracle\oradata\ora817\archive\ARC01491.002', Options=>dbms_logmnr.removefile);


Once the log files to analyze have been added, the analysis can begin.



4. Run the log analysis

SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'D:\Oracle\logs\logmn_ora817.dat');

The following restrictions can be applied:

Time range: use the StartTime and EndTime parameters of dbms_logmnr.start_logmnr

SCN range: use the StartScn and EndScn parameters of dbms_logmnr.start_logmnr



5. View the results:

Mainly by querying v$logmnr_contents:

SQL> desc v$logmnr_contents;


The sql_redo column gives the SQL operations performed in the log file, and sql_undo gives the SQL statements that would undo them.

You can also categorize and count all the operations in the log files with SQL like this:

select operation,count(*) from v$logmnr_contents group by operation;


The analysis results in the view v$logmnr_contents exist only during the lifetime of the session that ran the procedure 'dbms_logmnr.start_logmnr'. This is because all LogMiner storage is in PGA memory: no other session can see it, and the results are cleared when the session ends.


Finally, use the procedure DBMS_LOGMNR.END_LOGMNR to end the log-analysis transaction; the PGA memory area is then cleared.


To view the results of the log analysis, query v$logmnr_contents:
a. DML operations, for example:
SELECT operation,sql_redo,sql_undo FROM v$logmnr_contents
WHERE seg_name = 'QIUYB';

OPERATION SQL_REDO SQL_UNDO
---------- -------------------------- --------------------------
INSERT insert into qiuyb.qiuyb ... delete from qiuyb.qiuyb...

Here operation is the kind of operation, sql_redo the actual statement performed, and sql_undo the inverse statement used to cancel it.

b. DDL operations, for example:
SELECT timestamp,sql_redo FROM v$logmnr_contents
WHERE upper(sql_redo) like '%TRUNCATE%';

OCM list

Admiring ...
http://www.oracle.com/technology/ocm/index.html