Saturday, April 24, 2010

Learning PL/SQL

This was driven by troubleshooting an application's PL/SQL: the Pro*C program doesn't work as expected, yet it succeeds in opening a cursor for a very complex SQL statement in less than 1 second (running the same statement manually takes a few minutes). The row processing part is handled by Pro*C.

From http://docstore.mik.ua/orelly/oracle/langpkt/ch01_09.htm, it says:
"You must open an explicit cursor before you can fetch rows from that cursor. When the cursor is opened, the processing includes the PARSE, BIND, OPEN, and EXECUTE statements. This OPEN processing includes: determining an execution plan, associating host variables and cursor parameters with the placeholders in the SQL statement, determining the result set, and, finally, setting the current row pointer to the first row in the result set. "



However, I doubt that "the processing" includes EXECUTE. The reason: comparing 10046 traces of the same OPEN with and without a subsequent FETCH, no execution plan (row source operation) is shown in the without-FETCH case.

alter session set events '10046 trace name context forever, level 12' ;

     OPEN emp_cv FOR SELECT first_name, salary FROM employees where employee_id < emp_id;
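For completeness, below is a self-contained sketch of the OPEN-only test as an anonymous block. It is only a sketch: the real code lives in a package, and the SYS_REFCURSOR declaration, the variable names and the bind value 106 are assumptions based on the trace that follows.

DECLARE
  emp_cv  SYS_REFCURSOR;                      -- weak ref cursor variable
  emp_id  employees.employee_id%TYPE := 106;  -- same bind value as seen in the trace
BEGIN
  -- OPEN the cursor only; deliberately no FETCH before closing
  OPEN emp_cv FOR
    SELECT first_name, salary
    FROM   employees
    WHERE  employee_id < emp_id;
  CLOSE emp_cv;
END;
/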


--WITHOUT fetch

SELECT FIRST_NAME, SALARY
FROM
 EMPLOYEES WHERE EMPLOYEE_ID < :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 33     (recursive depth: 1)
********************************************************************************

alter session set events '10046 trace name context off'

--corresponding 10046 trace file of WITHOUT FETCH processing

=====================
PARSING IN CURSOR #1 len=65 dep=1 uid=33 oct=3 lid=33 tim=1242222958636520 hv=3716011877 ad='4cd98690'
SELECT FIRST_NAME, SALARY FROM EMPLOYEES WHERE EMPLOYEE_ID < :B1
END OF STMT
PARSE #1:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1242222958636514
BINDS #1:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=f6fe60ac  bln=22  avl=03  flg=05
  value=106
EXEC #1:c=0,e=144,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=1242222958636745
WAIT #3: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1242222958636793
EXEC #3:c=10000,e=3030,p=0,cr=0,cu=0,mis=1,r=1,dep=0,og=1,tim=1242222958636870
WAIT #3: nam='SQL*Net message from client' ela= 236 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1242222958637166



-- with  FETCH processing

    OPEN emp_cv FOR SELECT first_name, salary FROM employees WHERE employee_id < emp_id;
    LOOP
      FETCH emp_cv INTO emp_dataPkg.er;
      EXIT WHEN emp_cv%NOTFOUND;
      dbms_output.put_line(emp_dataPkg.er.name || ' - ' || emp_dataPkg.er.sal);
    END LOOP;
    CLOSE emp_cv;








********************************************************************************

SELECT FIRST_NAME, SALARY
FROM
 EMPLOYEES WHERE EMPLOYEE_ID < :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        7      0.00       0.00          0         13          0           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        9      0.00       0.00          0         13          0           6

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 33     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      6  TABLE ACCESS BY INDEX ROWID EMPLOYEES (cr=13 pr=0 pw=0 time=104 us)
      6   INDEX RANGE SCAN EMP_EMP_ID_PK (cr=7 pr=0 pw=0 time=120 us)(object id 12082)

********************************************************************************

select condition
from
 cdef$ where rowid=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 2)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY USER ROWID CDEF$ (cr=1 pr=0 pw=0 time=67 us)

********************************************************************************

alter session set events '10046 trace name context off';

--corresponding 10046 trace file of WITH FETCH processing

=====================
PARSING IN CURSOR #1 len=65 dep=1 uid=33 oct=3 lid=33 tim=1242222802758295 hv=3716011877 ad='4cd98690'
SELECT FIRST_NAME, SALARY FROM EMPLOYEES WHERE EMPLOYEE_ID < :B1
END OF STMT
PARSE #1:c=0,e=548,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,tim=1242222802758286
=====================
PARSING IN CURSOR #4 len=42 dep=2 uid=0 oct=3 lid=0 tim=1242222802759562 hv=844002283 ad='4cf6f080'
select condition from cdef$ where rowid=:1
END OF STMT
PARSE #4:c=0,e=123,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1242222802759554
BINDS #4:
kkscoacd
 Bind#0
  oacdty=11 mxl=16(16) mxlc=00 mal=00 scl=00 pre=00
  oacflg=18 fl2=0001 frm=00 csi=00 siz=16 off=0
  kxsbbbfp=f6f62148  bln=16  avl=16  flg=05
  value=00006F44.001D.0001
EXEC #4:c=0,e=362,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,tim=1242222802760102
FETCH #4:c=0,e=99,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=4,tim=1242222802760255
STAT #4 id=1 cnt=1 pid=0 pos=1 obj=31 op='TABLE ACCESS BY USER ROWID CDEF$ (cr=1 pr=0 pw=0 time=67 us)'
BINDS #1:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=f6f6254c  bln=22  avl=03  flg=05
  value=106
EXEC #1:c=0,e=3997,p=0,cr=2,cu=0,mis=1,r=0,dep=1,og=1,tim=1242222802762431
FETCH #1:c=0,e=108,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802762631
FETCH #1:c=0,e=24,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802762862
FETCH #1:c=0,e=18,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802762950
FETCH #1:c=0,e=15,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802763030
FETCH #1:c=0,e=14,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802763108
FETCH #1:c=0,e=15,p=0,cr=2,cu=0,mis=0,r=1,dep=1,og=1,tim=1242222802763185
FETCH #1:c=0,e=9,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=1,tim=1242222802763257
STAT #1 id=1 cnt=6 pid=0 pos=1 obj=12080 op='TABLE ACCESS BY INDEX ROWID EMPLOYEES (cr=13 pr=0 pw=0 time=104 us)'
STAT #1 id=2 cnt=6 pid=1 pos=1 obj=12082 op='INDEX RANGE SCAN EMP_EMP_ID_PK (cr=7 pr=0 pw=0 time=120 us)'
WAIT #3: nam='SQL*Net message to client' ela= 10 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1242222802763502
EXEC #3:c=10000,e=6638,p=0,cr=15,cu=0,mis=0,r=1,dep=0,og=1,tim=1242222802763589
WAIT #3: nam='SQL*Net message from client' ela= 359 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1242222802764056
=====================

Tuesday, April 20, 2010

SSH public key authentication

Finally I have configured SSH public key authentication in my Cygwin environment, so it is now very convenient to log on to the various Unix servers.  :-)

-- on my local PC
$ pwd
/home/liqy/.ssh
$ ls -l
total 8
-rw------- 1 liqy Domain Users  898 2010-04-19 11:09 id_rsa
-rw-r--r-- 1 liqy Domain Users 1196 2010-04-19 10:56 known_hosts
$ ssh rpt02
Last   successful login for liqy: Mon Apr 19 11:10:48 SST-8 2010
Last unsuccessful login for liqy: Tue Mar 16 01:59:06 SST-8 2010    
Last login: Mon Apr 19 11:10:48 2010 from 146-105-22.int.m1.com.sg
rpt02@/home/liqy> cd .ssh    
-- on remote server                                          
rpt02@/home/liqy/.ssh> ls -lrt
total 64
-rw-------   1 liqy       users         1024 Nov 27  2007 prng_seed
-rw-r--r--   1 liqy       users         9061 Dec 14 09:57 known_hosts
-rw-r--r--   1 liqy       users          220 Apr  1 13:57 authorized_keys
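For reference, the setup itself boils down to generating a key pair on the local PC and appending the public key to the remote authorized_keys file. Roughly (a sketch; I did not capture the exact commands):

-- on my local PC (Cygwin)
$ ssh-keygen -t rsa                  # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub | ssh rpt02 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
-- ~/.ssh and authorized_keys on the server must not be group/world writable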

prerequisite of using SQL*Loader direct path loading

Be careful with SQL*Loader direct path loading if the target table:

is big and has indexes, and the load runs with LOGGING = YES.

Reason: during the data load the indexes become UNUSABLE in DBA_INDEXES (they remain VALID in DBA_OBJECTS), and they have to be rebuilt after the load. With LOGGING = YES, picture a full scan of a huge table for each rebuild: how much archived redo will that generate, multiplied by the number of indexes? It is a big shock, and so is the extra time it takes, given that the job loads multiple files and each invocation of SQL*Loader loads only one file.

The common understanding of SQL*Loader direct path loading:

Use Direct Path Loads - The conventional path loader essentially loads the data by using standard insert statements.  The direct path loader (direct=true) loads directly into the Oracle data files and creates blocks in Oracle database block format.  The fact that SQL is not being issued makes the entire process much less taxing on the database.  There are certain cases, however, in which direct path loads cannot be used (clustered tables).  To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed.



In my case, the apps team was not happy with the 5-minute load time and added "direct=true". In the end this filled up the archived log destination: the job could not finish after running for 3 hours and had generated 40+ GB of archived logs by the time the archiver hung.

During the index rebuilds that follow the "direct=true" load, tremendous I/O is incurred.

Report of conventional loading
Top 5 Timed Events

Event    Waits    Time(s)    Avg Wait(ms)    % Total Call Time    Wait Class
db file parallel write    6,358    240    38    97.5    System I/O
db file sequential read    44,238    128    3    52.0    User I/O
CPU time         115         46.7   
log file parallel write    11,538    79    7    32.1    System I/O
log file sync    11,094    76    7    30.9    Commit

Tablespace IO Stats

Tablespace    Reads    Av Reads/s    Av Rd(ms)    Av Blks/Rd    Writes    Av Writes/s    Buffer Waits    Av Buf Wt(ms)
IDX     40,216     11     2.84     1.00     99,853     28     0     0.00

Report of direct path loading
Top 5 Timed Events

Event    Waits    Time(s)    Avg Wait(ms)    % Total Call Time    Wait Class
db file sequential read    2,901,803    1,610    1    42.4    User I/O
log file parallel write    22,845    1,410    62    37.1    System I/O
CPU time         729         19.2    
Log archive I/O    46,028    455    10    12.0    System I/O
log file sequential read    22,795    87    4    2.3    System I/O


Tablespace IO Stats

    * ordered by IOs (Reads + Writes) desc

Tablespace    Reads    Av Reads/s    Av Rd(ms)    Av Blks/Rd    Writes    Av Writes/s    Buffer Waits    Av Buf Wt(ms)
IDX     2,866,984     794     0.52     1.00     427,225     118     0     0.00  
  

Rebuilding the indexes takes more time than is gained on the data loading part. Below are the figures from rebuilding the unusable indexes of the surepay04 tables: close to 10 minutes for the nonvoice04 indexes per invocation of SQL*Loader, even with parallel rebuild.


    As the remedy, after removing the archived logs that were already backed up, I set the relevant tables and indexes to NOLOGGING mode and asked the application team to remove "direct=true".
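A sketch of that change (the table name here is an assumption based on the index naming pattern below; adjust to the real object names):

alter table m1_surepay_nonvoice04 nologging ;
alter index NONVOICE04_IDX1 nologging ;
alter index NONVOICE04_IDX2 nologging ;
alter index NONVOICE04_IDX3 nologging ;
alter index NONVOICE04_IDX4 nologging ;
-- note: segments loaded in NOLOGGING mode are not recoverable from redo,
-- so take a backup after the load completes.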

> select index_name, status from dba_indexes where index_name like 'M1_SUREPAY_%04%_IDX%' ;

INDEX_NAME                     STATUS
------------------------------ --------
NONVOICE04_IDX1     UNUSABLE
NONVOICE04_IDX2     UNUSABLE
NONVOICE04_IDX4     UNUSABLE
VOICE04_IDX1        VALID
VOICE04_IDX2        VALID
VOICE04_IDX3        VALID
VOICE04_IDX4        VALID
OTHER04_IDX2        VALID
OTHER04_IDX3        VALID
OTHER04_IDX1        VALID
NONVOICE04_IDX3     UNUSABLE

11 rows selected.

> alter index NONVOICE04_IDX1 rebuild parallel 3 ;

Index altered.

Elapsed: 00:02:43.39
> alter index NONVOICE04_IDX2 rebuild parallel 3 ;

Index altered.

Elapsed: 00:02:40.96
> alter index NONVOICE04_IDX3 rebuild parallel 3 ;

Index altered.

Elapsed: 00:02:23.70
> alter index NONVOICE04_IDX4 rebuild parallel 3 ;

Index altered.

alter index NONVOICE04_IDX1 noparallel  ;
alter index NONVOICE04_IDX2 noparallel  ;
alter index NONVOICE04_IDX3 noparallel  ;
alter index NONVOICE04_IDX4 noparallel  ;
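Rather than typing each statement by hand, the same rebuild/noparallel pairs can be generated from the dictionary. A rough sketch (covers non-partitioned indexes only); spool the output and run it:

select 'alter index ' || owner || '.' || index_name || ' rebuild parallel 3 ;' ddl_text
  from dba_indexes
 where status = 'UNUSABLE'
union all
select 'alter index ' || owner || '.' || index_name || ' noparallel ;'
  from dba_indexes
 where status = 'UNUSABLE' ;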

some info about datapump import

Data Pump is much more efficient than imp/exp; it helped me import 19 GB within one hour without generating significant redo.


LIQY@ADMP> @chk_session    

SUBSTR(USERNAME,1,15)                                SID    SERIAL# SQL_HASH_VALUE SERVER    STATUS   OSUSER
--------------------------------------------- ---------- ---------- -------------- --------- -------- ------------------------------
SUBSTR(MACHINE,1,20)
------------------------------------------------------------
SUBSTR(PROGRAM,1,25)                                                        TO_CHAR(LOGON_TIME,'DD-MON CPU_MINUTES
--------------------------------------------------------------------------- -------------------------- -----------
SYSMAN                                              1079          9     4281219134 DEDICATED ACTIVE
bill07
OMS                                                                         08-apr-2010 02:09:47        2.56233333

DBSNMP                                              1101          3              0 DEDICATED ACTIVE   oracle1
bill07
emagent@bill07 (TNS V1-V3                                                   08-apr-2010 02:11:20        2.78166667

SYS                                                  187      64288     3732391352 DEDICATED ACTIVE   oracle1
bill07
udi@bill07 (TNS V1-V3)                                                      19-apr-2010 15:54:17              .011

SYS                                                  269      16294              0 DEDICATED ACTIVE   oracle1
bill07
oracle@bill07 (DM00)                                                        19-apr-2010 15:54:17        .011833333

SYS                                                 1069       3981              0 DEDICATED ACTIVE   oracle1
bill07
oracle@bill07 (DW01)                                                        19-apr-2010 15:54:18        1.69966667

LIQY                                                 359         32     2579827540 DEDICATED ACTIVE   liqy
bill07
sqlplus@bill07 (TNS V1-V3                                                   19-apr-2010 15:55:18              .002


6 rows selected.


-- SIDs 187, 269 and 1069 belong to the Data Pump import; the data is written directly to the data files by the DB writer.

LIQY@ADMP> @chk_session_wait

       SID SUBSTR(EVENT,1,25)                                                                  P1 SUBSTR(P1TEXT,1,10)
---------- --------------------------------------------------------------------------- ---------- ------------------------------
        P2 SUBSTR(P2TEXT,1,10)                    P3 SUBSTR(P3TEXT,1,10)
---------- ------------------------------ ---------- ------------------------------
       187 wait for unread message o                                                   1.3835E+19 channel co
1.3835E+19 channel ha                              0

       269 wait for unread message o                                                   1.3835E+19 channel co
1.3835E+19 channel ha                              0

       283 Streams AQ: qmn slave idl                                                            0
         0                                         0

       534 Streams AQ: waiting for t                                                            0
         0                                         0

       536 Streams AQ: qmn coordinat                                                            0
         0                                         0

      1017 Streams AQ: qmn slave idl                                                            0
         0                                         0

      1069 Datapump dump file I/O                                                               1 count
       256 intr                           4294967295 timeout

      1079 wait for unread message o                                                   1.3835E+19 channel co
1.3835E+19 channel ha                              0

      1101 Streams AQ: waiting for m                                                       602323 queue id
1.3835E+19 process#                                5 wait time

      1102 Streams AQ: waiting for t                                                            0
         0                                         0


10 rows selected.

-- SIDs 187, 269 and 1069 belong to the Data Pump import.
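The two scripts above are just thin wrappers around V$SESSION and V$SESSION_WAIT. Roughly (a sketch, not the exact scripts; the CPU_MINUTES column in the output comes from a join to V$SESSTAT that is omitted here):

-- chk_session (approximate)
select substr(username,1,15), sid, serial#, sql_hash_value, server, status, osuser,
       substr(machine,1,20), substr(program,1,25),
       to_char(logon_time,'dd-mon-yyyy hh24:mi:ss')
  from v$session
 where username is not null ;

-- chk_session_wait (approximate)
select sid, substr(event,1,25), p1, substr(p1text,1,10),
       p2, substr(p2text,1,10), p3, substr(p3text,1,10)
  from v$session_wait
 order by sid ;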


2. Data Pump inserts the data first, then rebuilds the indexes. However, while the data is being written to the data files, the indexes remain VALID and usable (unlike SQL*Loader direct path loading).

LIQY@ADMP> select segment_name, bytes from dba_segments where segment_name like '%NONVOICE03%';

SEGMENT_NAME                                                                           BYTES
--------------------------------------------------------------------------------- ----------
IMPDP_M1_SUREPAY_NONVOICE03                                                           524288
M1_SUREPAY_NONVOICE03                                                             1.5781E+10
M1_SUREPAY_NONVOICE03_IDX1                                                        1090519040
M1_SUREPAY_NONVOICE03_IDX2                                                        1237319680
M1_SUREPAY_NONVOICE03_IDX3                                                          20971520
M1_SUREPAY_NONVOICE03_IDX4                                                          20971520

6 rows selected.

-- notice the indexes' sizes keep growing, one index at a time.
LIQY@ADMP> /

SEGMENT_NAME                                                                           BYTES
--------------------------------------------------------------------------------- ----------
M1_SUREPAY_NONVOICE03                                                             1.5781E+10
M1_SUREPAY_NONVOICE03_IDX1                                                        1090519040
M1_SUREPAY_NONVOICE03_IDX2                                                        1384120320
M1_SUREPAY_NONVOICE03_IDX3                                                        1048576000
M1_SUREPAY_NONVOICE03_IDX4                                                         901775360


-rw-rw----   1 oracle1    dba1       26440704 Apr 19 13:31 ADMP_0001_525208771_80943.arc
-rw-rw----   1 oracle1    dba1         55296 Apr 19 13:31 ADMP_0001_525208771_80944.arc
-rw-rw----   1 oracle1    dba1       24170496 Apr 19 13:59 ADMP_0001_525208771_80945.arc
drwxrwxr-x   2 oracle1    dba1         16384 Apr 19 14:22 oldarch
-rw-rw----   1 oracle1    dba1       195064832 Apr 19 16:25 ADMP_0001_525208771_80946.arc

-- hardly any archived log generated during the import


-rw-rw-r--   1 oracle1    dba1       13533204480 Apr 19 15:32 expdp_M1_SUREPAY_NONVOICE03.dmp
-rw-rw-r--   1 oracle1    dba1       698085376 Apr 19 15:32 expdp_M1_SUREPAY_OTHER03.dmp
-rw-rw-r--   1 oracle1    dba1       19535446016 Apr 19 15:38 expdp_M1_SUREPAY_VOICE03.dmp

bill07:ADMP:/csa074/oradata/surepayimp> cat impdp_M1_SUREPAY_VOICE03.par
userid='/ as sysdba'
directory=surepay_dump_dir
dumpfile=expdp_M1_SUREPAY_VOICE03.dmp
logfile=impdp_M1_SUREPAY_VOICE03.log
tables=dbam1.M1_SUREPAY_VOICE03
CONTENT=data_only    # with table & indexes already existing (table truncated before import)
#CONTENT=metadata_only
#REMAP_SCHEMA=dbam1:liqy
REMAP_TABLESPACE=surepay_tbl:m1_surepay_tbl
REMAP_TABLESPACE=surepay_idx:m1_surepay_idx
job_name=impdp_M1_SUREPAY_VOICE03

bill07:ADMP:/csa074/oradata/surepayimp> ll expdp_M1_SUREPAY_VOICE03.dmp
-rw-rw-r--   1 oracle1    dba1       19535446016 Apr 19 15:38 expdp_M1_SUREPAY_VOICE03.dmp
bill07:ADMP:/csa074/oradata/surepayimp> impdp parfile=impdp_M1_SUREPAY_VOICE03.par^C
bill07:ADMP:/csa074/oradata/surepayimp> cat impdp_M1_SUREPAY_NONVOICE03.log
;;;
Import: Release 10.2.0.2.0 - 64bit Production on Monday, 19 April, 2010 15:54:17

Copyright (c) 2003, 2005, Oracle.  All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning and Data Mining options
Master table "SYS"."IMPDP_M1_SUREPAY_NONVOICE03" successfully loaded/unloaded
Starting "SYS"."IMPDP_M1_SUREPAY_NONVOICE03":  parfile=impdp_M1_SUREPAY_NONVOICE03.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "DBAM1"."M1_SUREPAY_NONVOICE03"             12.60 GB 41019272 rows
Job "SYS"."IMPDP_M1_SUREPAY_NONVOICE03" successfully completed at 16:25:32


bill07:ADMP:/csa074/oradata/surepayimp> impdp parfile=impdp_M1_SUREPAY_VOICE03.par

Import: Release 10.2.0.2.0 - 64bit Production on Monday, 19 April, 2010 16:53:35

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning and Data Mining options
Master table "SYS"."IMPDP_M1_SUREPAY_VOICE03" successfully loaded/unloaded
Starting "SYS"."IMPDP_M1_SUREPAY_VOICE03":  parfile=impdp_M1_SUREPAY_VOICE03.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "DBAM1"."M1_SUREPAY_VOICE03"                18.19 GB 57052084 rows
Job "SYS"."IMPDP_M1_SUREPAY_VOICE03" successfully completed at 17:41:12

Sunday, April 18, 2010

11g sqlplus doesn't hide password

Prior to Oracle 10g, anyone could see a database account's password with 'ps -ef | grep sqlplus' if the password was hard-coded on the same command line as sqlplus, e.g. sqlplus username/password@DBNAME.

In 10g, Oracle "solved" this problem.

However, it comes back again in 11g. I searched Metalink; there is no longer any document saying the password will be removed from the process listing. :(

Too bad. Luckily, we have an alternative way (see the note after the ps output below).

> ps -ef |grep sqlplus
oravas11  4997  4748  0 13:41:31 pts/0     0:00 ied sqlplus perfstat/xxxxxxxx@VAS11GP
oravas11  4998  4997  0 13:41:31 ttyp1     0:00 sqlplus perfstat/xxxxxxxx@VAS11GP
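One such alternative is simply to keep the password off the command line, for example:

$ sqlplus /nolog
SQL> connect perfstat@VAS11GP
Enter password:

-- or let sqlplus prompt for the password itself
$ sqlplus perfstat@VAS11GP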

two cases of missing indexes

Case 1:

SQL> select status, count(*) from dbaantispam.tb_sms_out group by status;

    STATUS   COUNT(*)
---------- ----------
         1     112459


SQL> alter session set current_schema=dbaantispam;

Session altered.

SQL> set autotrace on
SP2-0613: Unable to verify PLAN_TABLE format or existence
SP2-0611: Error enabling EXPLAIN report

cd  $ORACLE_HOME/rdbms/admin/

SQL> @utlxplan

Table created.

SQL> set autotrace on

SQL> conn /
Connected.
SQL> set autotrace on


SQL>  SELECT sms_out_id, text, msisdn, messagetype FROM dbaantispam.tb_sms_out WHERE status = 0 ORDER BY sms_out_id ASC;

no rows selected


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'TB_SMS_OUT'
   2    1     INDEX (FULL SCAN) OF 'SMS_OUT_ID_PK' (UNIQUE)




Statistics
----------------------------------------------------------
        248  recursive calls
          0  db block gets
      31466  consistent gets
       3072  physical reads
          0  redo size
        467  bytes sent via SQL*Net to client
        461  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          0  rows processed

As you can see, there are a lot of consistent gets and physical reads.

SQL> @chk_indcol
Enter value for tbl: TB_SMS_OUT
old   1: select index_name,column_name,column_position from dba_ind_columns where table_name=upper('&tbl')
new   1: select index_name,column_name,column_position from dba_ind_columns where table_name=upper('TB_SMS_OUT')

INDEX_NAME                     COLUMN_NAME          COLUMN_POSITION
------------------------------ -------------------- ---------------
SMS_OUT_ID_PK                  SMS_OUT_ID                         1

-- No index found on STATUS column
SQL> create index dbaantispam.tb_sms_out_statusidx on tb_sms_out(status) tablespace antispam_idx;

Index created.



SQL>  SELECT sms_out_id, text, msisdn, messagetype FROM dbaantispam.tb_sms_out WHERE status = 0 ORDER BY sms_out_id ASC;

no rows selected


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (ORDER BY)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TB_SMS_OUT'
   3    2       INDEX (RANGE SCAN) OF 'TB_SMS_OUT_STATUSIDX' (NON-UNIQ
          UE)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          1  physical reads
          0  redo size
        467  bytes sent via SQL*Net to client
        461  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          0  rows processed

Only 2 consistent gets and 1 physical read.



Case 2 :


alter session set current_schema=dbamsgctr;


SQL> set timing on
SQL> set pages 1000
SQL> SELECT '11:00-11:59', COUNT(destination_type) FROM dbamsgctr.tb_msg_transaction
  2   WHERE destination_type = 0 AND msg_type LIKE '%SMS%'
  3   AND transaction_timestamp >= TO_DATE('2010-03-16 11:00:00', 'yyyy-mm-dd hh24:mi:ss')
  4   AND transaction_timestamp <= TO_DATE('2010-03-16 11:59:59', 'yyyy-mm-dd hh24:mi:ss');


'11:00-11:5 COUNT(DESTINATION_TYPE)
----------- -----------------------
11:00-11:59                       0

Elapsed: 00:07:21.94

Execution Plan
----------------------------------------------------------




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     380503  consistent gets
     380195  physical reads

          0  redo size
        586  bytes sent via SQL*Net to client
        656  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

A huge amount of logical reads: because of this statement, "Buffer Cache Reads" reached 278.4 GB within 3 hours.

SQL> set timing on pages 1000
SQL> create index msg_idx2  on tb_msg_transaction(transaction_timestamp) tablespace msgctr_idx nologging parallel (degree 3 );


Index created.

Elapsed: 00:14:55.86

alter index msg_idx2 logging noparallel;



SQL> set autotrace on
SQL> SELECT '11:00-11:59', COUNT(destination_type) FROM dbamsgctr.tb_msg_transaction
  2   WHERE destination_type = 0 AND msg_type LIKE '%SMS%'
  3   AND transaction_timestamp >= TO_DATE('2010-03-16 11:00:00', 'yyyy-mm-dd hh24:mi:ss')
  4   AND transaction_timestamp <= TO_DATE('2010-03-16 11:59:59', 'yyyy-mm-dd hh24:mi:ss');

'11:00-11:5 COUNT(DESTINATION_TYPE)
----------- -----------------------
11:00-11:59                       0

Elapsed: 00:00:02.81

Execution Plan
----------------------------------------------------------




Statistics
----------------------------------------------------------
        164  recursive calls
          0  db block gets
         28  consistent gets
          4  physical reads

          0  redo size
        586  bytes sent via SQL*Net to client
        656  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed

Tremendous improvement! A run of more than 7 minutes becomes 3 seconds!!!
Again, housekeeping matters, and it is important that Oracle developers understand how the Oracle database works.

blog template widened

Finally I widened this blog's layout, which uses the Rounders 4 template, leaving the .gif pictures unchanged. (Too lazy to download them from blogblog.com, customize them, and upload them to a third-party web host.)

Most of the time was spent on removing the post's right sidebar, which looks like a vertical line. I searched for the keyword "width" throughout the template XML, and finally identified the culprit as a GIF picture, http://www.blogblog.com/rounders4/rails_main.gif, whose width is fixed inside the image itself.

My solution is to comment it out:

/*  background:url("http://www.blogblog.com/rounders4/rails_main.gif") repeat-y $startSide; */



References:

Rounders with a wider post area - http://www.bloggerguide.lk/2008/04/rounders-with-wider-post-area.html

How To Change The Width Of The Blogger Templates - http://blogger-and-money.blogspot.com/2007/06/how-to-change-width-of-blogger.html

Rounders 4 with a wider post area - http://www.bloggerguide.lk/2008/11/rounders-4-with-wider-post-area.html


Key Words: widen blogger.com template rounder4

Friday, April 16, 2010

Private db link

A private database link can be created or dropped only by its owner; this cannot be done on the owner's behalf using the command below.



alter session set current_schema= ;
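Setting CURRENT_SCHEMA does not make the link land in the other schema. A commonly used workaround (a sketch only, not from this post; all names are placeholders, and the owning schema needs the CREATE DATABASE LINK privilege granted directly) is a throw-away definer's-rights procedure in the owning schema:

create or replace procedure scott.tmp_create_dblink as
begin
  execute immediate 'create database link mylink
                     connect to remote_user identified by remote_pwd
                     using ''REMOTE_TNS''';
end;
/
exec scott.tmp_create_dblink
drop procedure scott.tmp_create_dblink ;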

cluttered gvim backup files

The info below is from http://my.opera.com/peterchenadded/blog/2008/12/27/gvim-7-1-swap-and-backup-files; it explains something that had puzzled me for a few years.

A swap file contains the changes made to the original file; it matches *.swp.
A backup file is created right after saving changes to a newly opened file; it is a copy of the file before the change, and it matches *~, where * is the original filename.
So, logically, by definition you can put them together to reconstruct the original, or the original plus some or all of the changes made.
source: http://www.ph.unimelb.edu.au/~ssk/vim/recover.html

Normally, both files appear in the directory of the file being edited. This can make the directory very cluttered, so it is better to move them somewhere else. To do this, use:

set directory=c:/vim_swap_files
set backupdir=c:/vim_swap_files


You will need to create the vim_swap_files folder before this will work.
Add these lines to c:\program files\vim\_vimrc (or, in gvim, Edit -> Startup Settings), and you will never have to worry about it again.

Human Error Preventives

As a DBA or system administrator, a single careless ENTER may bring us into a nightmare.

It is unavoidable; what we can do is keep the probability of it happening as low as possible.

Good habits should help us.
*** This post will be updated on an ongoing basis; any comments are welcome.

1. shell environment settings.
Today I noticed that the following is included in Cygwin's default .bashrc:

 # Some example alias instructions
# If these are enabled they will be used instead of any instructions
# they may mask.  For example, alias rm='rm -i' will mask the rm
# application.  To override the alias instruction use a \ before, ie
# \rm will call the real rm not the alias.

# Interactive operation...
 alias rm='rm -i'
 alias cp='cp -i'
 alias mv='mv -i'

Good! In real-world practice, I'd like these aliases to be enabled.


To be continued  ....


Interesting to see a book on Amazon about Human Error in Medicine; however, there does not seem to be anything similar for the IT industry yet.

Using rxvt in Cygwin

What is  rxvt ?

Here it is: http://infrablue.tripod.com/cygwin.html. Note the installation part: in the Cygwin setup program it is under Shells: rxvt.

And I got this command:

C:\cygwin\bin\rxvt.exe -sl 1500 -fn "Lucida Console-12" -bg black -fg grey -sr -e bash --login -i


However, when I followed it, the terminal crashed on launch (the window opened and immediately disappeared). The same is reported here: http://cygwin.com/ml/cygwin-xfree/2006-07/msg00146.html, blaming the "-e" option (not true in my case; see the answer below).

Then I got the command below from http://www.khngai.com/emacs/bash.php:

D:\Programs\Cygwin\bin\rxvt.exe -sl 500 -fn courier -bg wheat
-sr -title "Bash Sailor" -e D:\Programs\Cygwin\bin\bash.exe 
--rcfile %HOME%\.bashrc


But it still did not work for me, although it was slightly better: the window stays open, with an error message complaining about .bashrc.

Comparing the two commands above carefully, I finally got my working version; the key is to specify the full path to bash.exe:


c:\cygwin\bin\rxvt.exe -sl 25000 -fn 'Lucida Console-16' -bg wheat -sr -title "Bash Sailor" -e c:\cygwin\bin\bash.exe --login -i

And my final cygwin.bat is like this.


-- content of cygwin.bat

@echo off
C:
chdir C:\cygwin\bin

 set EDITOR=vi
 set VISUAL=vi
 set CYGWIN=codepage:oem tty binmode title

rem change font size below
rem c:\cygwin\bin\rxvt.exe -sl 5000 -fn courier -bg wheat -sr -title "Bash Sailor" -e c:\cygwin\bin\bash.exe --login -i

rem c:\cygwin\bin\rxvt.exe -sl 25000 -fn 'Courier New-18' -bg wheat -sr -title "Bash Sailor" -e c:\cygwin\bin\bash.exe --login -i
c:\cygwin\bin\rxvt.exe -sl 25000 -fn 'Lucida Console-16' -bg wheat -sr -title "Bash Sailor" -e c:\cygwin\bin\bash.exe --login -i

-- end of content of cygwin.bat

*** -sl 25000 : number of scrollback lines, 25000 here
*** -fn 'Lucida Console-16' : font Lucida Console, size 16; change it to whatever suits you best

References:

http://www.khngai.com/emacs/bash.php
http://www.petersblog.org/node/1583
http://cygwin.com/ml/cygwin/2005-09/msg00741.html
http://c2.com/cgi/wiki?BetterCygwinTerminal
Configuring the Command Prompt Window  http://commandwindows.com/configure.htm

sqlplus command wrapper for cygwin rxvt (terminal)

Starting from the limited scrollback of the ugly MS-DOS command prompt, I moved to the Cygwin bash shell, later found rxvt (ouR eXtended Virtual Terminal), then started thinking about using a local sqlplus without an SSH logon to the remote database server, and finally realized that I needed the command wrapper rlwrap, exactly the same as what I use on Linux.


Without rlwrap, you get the funny problem with the UP and DOWN arrow keys described in http://www.cygwin.com/ml/cygwin/2007-10/msg00465.html.


To install rlwrap in Cygwin, choose rlwrap under the Utils packages of the Cygwin setup program. Now try:


$ sqlplus perfstat/XXXX@ICTD
/usr/bin/rlwrap.exe: error while loading shared libraries: cygreadline6.dll: cannot open shared object file: No such file or directory

oops !


search "cygreadline*.*"  under c:\cygwin,  found cygreadline7.dll under c:\cygwin\bin


Search "readline" again in cgywin setup program, found there are version5, 6,7 dll of cyglibreadline under Lib packages,  so I install  version 6.

This time,  it works !!!

Searching the c:\cygwin folder again, I now see both cygreadline6.dll and cygreadline7.dll under c:\cygwin\bin.

 
Finally, I define the alias below in .bashrc:
alias sqlplus='rlwrap sqlplus'




It is exactly what I have in my Linux environment.  :)

Next step: configure SSH public key authentication on my local PC, so as not to be bothered with passwords for the tens of logons every day.

Reference: 

http://www.cygwin.com/ml/cygwin/2007-10/msg00465.html
http://infrablue.tripod.com/cygwin.html