Summary of High Versions Count

Consequences of a High Versions Count
SQL statement issued --> hash comparison to determine whether it already exists in the shared pool (the not-found case is not discussed here) --> pick the appropriate child cursor
This can lead to library cache latch contention.

When you have unnecessary versions of a cursor, each time that cursor is executed, 
the parse engine has to search through the list of versions to see which is the cursor that you want. 
This wastes CPU cycles that you could be using on something else.

High version counts can easily cause high contention for library cache latches.
A process parsing a SQL statement with many versions (children cursors) will need to scan
through all these children while holding on to a library cache latch.
This means that other processes needing the same latch will have to wait and
can lead to significant database-wide performance degradation.
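
A quick sanity check (a sketch, not from the quoted note) is to see whether the library cache latches are actually accumulating sleeps; note that from 11g onwards most library cache latches are replaced by mutexes, so the 'library cache: mutex X' wait event is the equivalent symptom there.

-- Library cache latch activity; steadily growing SLEEPS indicates contention.
select name, gets, misses, sleeps
  from v$latch
 where name like 'library cache%'
 order by sleeps desc;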

Causes of a High Versions Count

•UNBOUND_CURSOR - The existing child cursor was not fully built (in other words, it was not optimized) 

•SQL_TYPE_MISMATCH - The SQL type does not match the existing child cursor 

•OPTIMIZER_MISMATCH - The optimizer environment does not match the existing child cursor. 
For example:
select count(*) from emp; ->> 1 PARENT, 1 CHILD 
alter session set optimizer_mode=ALL_ROWS 
select count(*) from emp; ->> 1 PARENT, 2 CHILDREN 
    (The optimizer mode has changed and therefore 
       the existing child cannot be reused)
(The same applies with events - if I turned on tracing with 10046 then 
I would get the OPTIMIZER_MISMATCH again and a 3rd child) 

•OUTLINE_MISMATCH - The outlines do not match the existing child cursor 
If my user had created stored outlines previously for this command and they were 
stored in separate categories (say "OUTLINES1" and "OUTLINES2") running:

alter session set use_stored_outlines = OUTLINES1; 
select count(*) from emp; 
alter session set use_stored_outlines = OUTLINES2; 
select count(*) from emp;
--> Would create a 2nd child as the outline used is different than the first run. 

•STATS_ROW_MISMATCH - The existing statistics do not match the existing child cursor. 
Check that 10046/sql_trace is not set on all sessions as this can cause this. 

•LITERAL_MISMATCH - Non-data literal values do not match the existing child cursor 

•SEC_DEPTH_MISMATCH - Security level does not match the existing child cursor 

•EXPLAIN_PLAN_CURSOR - The child cursor is an explain plan cursor and should not be shared. 
Explain plan statements will generate a new child by default - the mismatch will be this. 

•BUFFERED_DML_MISMATCH - Buffered DML does not match the existing child cursor 

•PDML_ENV_MISMATCH - PDML environment does not match the existing child cursor 

•INST_DRTLD_MISMATCH - Insert direct load does not match the existing child cursor 

•SLAVE_QC_MISMATCH -The existing child cursor is a slave cursor and the new one was issued by the coordinator 
(or, the existing child cursor was issued by the coordinator and the new one is a slave cursor). 

•TYPECHECK_MISMATCH - The existing child cursor is not fully optimized 

•AUTH_CHECK_MISMATCH - Authorization/translation check failed for the existing child cursor 

The user does not have permission to access the object in any previous version of the cursor. 
A typical example would be where each user has its own copy of a table 

•BIND_MISMATCH - The bind metadata does not match the existing child cursor. For example:
variable a varchar2(100); 
select count(*) from emp where ename = :a ->> 1 PARENT, 1 CHILD 
variable a varchar2(400); 
select count(*) from emp where ename = :a ->> 1 PARENT, 2 CHILDREN 
   (The bind 'a' has now changed in definition)

•DESCRIBE_MISMATCH - The typecheck heap is not present during the describe for the child cursor 

•LANGUAGE_MISMATCH - The language handle does not match the existing child cursor 

•TRANSLATION_MISMATCH - The base objects of the existing child cursor do not match. 
The definition of the object does not match any current version. 
Usually this is indicative of the same issue as "AUTH_CHECK_MISMATCH" where the object is different.

•ROW_LEVEL_SEC_MISMATCH - The row level security policies do not match 

•INSUFF_PRIVS - Insufficient privileges on objects referenced by the existing child cursor 

•INSUFF_PRIVS_REM - Insufficient privileges on remote objects referenced by the existing child cursor 

•REMOTE_TRANS_MISMATCH - The remote base objects of the existing child cursor do not match 
USER1: select count(*) from table@remote_db 
USER2: select count(*) from table@remote_db 

  (Although the SQL is identical, the dblink pointed to 
   by remote_db may be a private dblink which resolves 
   to a different object altogether)

•LOGMINER_SESSION_MISMATCH 

•INCOMP_LTRL_MISMATCH 

•OVERLAP_TIME_MISMATCH - error_on_overlap_time mismatch 

•SQL_REDIRECT_MISMATCH - sql redirection mismatch 

•MV_QUERY_GEN_MISMATCH - materialized view query generation 

•USER_BIND_PEEK_MISMATCH - user bind peek mismatch 

•TYPCHK_DEP_MISMATCH - cursor has typecheck dependencies 

•NO_TRIGGER_MISMATCH - no trigger mismatch 

•FLASHBACK_CURSOR - No cursor sharing for flashback 

•ANYDATA_TRANSFORMATION - anydata transformation change 

•INCOMPLETE_CURSOR - incomplete cursor.
When bind length is upgradeable (i.e. we found a child cursor that matches everything
else except that the bind length is not long enough), we mark the old cursor as not usable
and build a new one.  This means the version can be ignored.

•TOP_LEVEL_RPI_CURSOR - top level/rpi cursor 
In a Parallel Query invocation this is expected behaviour (we purposely do not share)

•DIFFERENT_LONG_LENGTH - different long length 

•LOGICAL_STANDBY_APPLY - logical standby apply mismatch 

•DIFF_CALL_DURN - different call duration 

•BIND_UACS_DIFF - bind uacs mismatch 

•PLSQL_CMP_SWITCHS_DIFF - plsql compiler switches mismatch 

•CURSOR_PARTS_MISMATCH - cursor-parts executed mismatch 

•STB_OBJECT_MISMATCH - STB object different (now exists) 

•ROW_SHIP_MISMATCH - row shipping capability mismatch 

•PQ_SLAVE_MISMATCH - PQ slave mismatch 
Check you want to be using PX with this reason code, as the problem could be caused by running 
lots of small SQL statements which do not really need PX. If you are on < 11i you may be hitting Bug:4367986

•TOP_LEVEL_DDL_MISMATCH - top-level DDL cursor 

•MULTI_PX_MISMATCH - multi-px and slave-compiled cursor 

•BIND_PEEKED_PQ_MISMATCH - bind-peeked PQ cursor 

•MV_REWRITE_MISMATCH - MV rewrite cursor 

•ROLL_INVALID_MISMATCH - rolling invalidation window exceeded 
This is caused by the rolling invalidation capability in DBMS_STATS. 
The child cannot be shared as its invalidation window is exceeded. See:
Note:557661.1  Rolling Cursor Invalidations with DBMS_STATS in Oracle10g (Doc ID 557661.1)

•OPTIMIZER_MODE_MISMATCH - optimizer mode mismatch 

•PX_MISMATCH - parallel query mismatch 
If running 11.1.0.6 and RAC see Bug:7352775. 
Check that if (on each instance) parallel_instance_groups is set then instance_groups is set to the same.

•MV_STALEOBJ_MISMATCH - mv stale object mismatch 

•FLASHBACK_TABLE_MISMATCH - flashback table mismatch 

•LITREP_COMP_MISMATCH - literal replacement compilation mismatch 

New in 11g :
•PLSQL_DEBUG - debug mismatch Session has debugging parameter plsql_debug set to true

•LOAD_OPTIMIZER_STATS  - Load optimizer stats for cursor sharing

•ACL_MISMATCH   -  Check ACL mismatch

•FLASHBACK_ARCHIVE_MISMATCH  - Flashback archive mismatch

•LOCK_USER_SCHEMA_FAILED  - Failed to lock user and schema

•REMOTE_MAPPING_MISMATCH  - Remote mapping mismatch

•LOAD_RUNTIME_HEAP_FAILED  - Runtime heap mismatch

•HASH_MATCH_FAILED  - Hash mismatch
Set to "Y" if sharing fails due to a hash mismatch, such as the case with mismatched histogram data 
or a range predicate marked as unsafe by literal replacement (See Bug:3461251)

New in 11.2  :
•PURGED_CURSOR - cursor marked for purging
The cursor has been marked for purging with dbms_shared_pool.purge

•BIND_LENGTH_UPGRADEABLE - bind length upgradeable
 Could not be shared because a bind variable size was smaller than the new value being inserted 
(marked as BIND_MISMATCH in earlier versions). 

•USE_FEEDBACK_STATS - cardinality feedback
Cardinality feedback is being used and therefore a new plan could be formed for the current execution.


•BIND_EQUIV_FAILURE - The bind value's selectivity does not match that used to optimize the existing child cursor

ROW_LEVEL_SEC_MISMATCH no longer exists in 11.2.
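
To make the mismatch flags above concrete, here is a small reproduction sketch of the OPTIMIZER_MISMATCH case. It assumes the standard SCOTT.EMP demo table (substitute any small table) and a 10g-or-later v$sql_shared_cursor, which exposes SQL_ID and one column per reason listed above.

-- Run the same statement under two optimizer modes, then inspect the children.
alter session set optimizer_mode = FIRST_ROWS;
select count(*) from emp;

alter session set optimizer_mode = ALL_ROWS;
select count(*) from emp;

-- Expect two children, with the optimizer mismatch flag set to 'Y' on one of them.
select sql_id, child_number, optimizer_mismatch, optimizer_mode_mismatch
  from v$sql_shared_cursor
 where sql_id in (select sql_id from v$sqlarea
                   where sql_text like 'select count(*) from emp%');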

Querying High Versions Counts
1. Query using the version_rpt function

--install version_rpt3_21
connect / as sysdba
start version_rpt3_21.sql

-- Generate reports for all cursors with more than 100 versions using SQL_ID (10g and up)
select b.* from v$sqlarea a ,table(version_rpt(a.sql_id)) b where loaded_versions >=100;

-- Generate reports for all cursors with more than 100 versions using HASH_VALUE
select b.* from v$sqlarea a ,table(version_rpt(null,a.hash_value)) b where loaded_versions>=100;

-- Generate the report for cursor with sql_id cyzznbykb509s
select * from table(version_rpt('cyzznbykb509s'));

2. Direct queries

select sql_id,version_count, ADDRESS,sql_text
from   v$sqlarea 
where version_count > 10 
order by version_count, hash_value;

•Version 9.2.X.X and below : 
select * from v$sql_shared_cursor where kglhdpar = '0000000386BC2E58'
•Version 10.0.X.X and above:
select * from v$sql_shared_cursor where address = '0000000386BC2E58'

NOTE:The 'Y's denote a mismatch

Tracing High Versions Counts

CURSORTRACE (10g and later)

To turn tracing on use:
alter system set events 
'immediate trace name cursortrace level 577, address hash_value';
(levels 578/580 can be used for more detailed tracing: 577 = level 1, 578 = level 2, 580 = level 3) 
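
For example, to enable level-1 cursor tracing for a single cursor, first look up its hash value (the sql_id below is the one reused from the version_rpt example above; the hash value shown is only a placeholder):

select hash_value, address from v$sqlarea where sql_id = 'cyzznbykb509s';

-- assuming the lookup returned hash_value 2708637417 (placeholder value):
alter system set events 
'immediate trace name cursortrace level 577, address 2708637417';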

To turn tracing off use: 
alter system set events 
'immediate trace name cursortrace level 2147483648, address 1';

Please note: BUG:5555371 exists in 10.2 (fixed in 10.2.0.4) where cursor trace cannot be fully turned off 
and single-line entries will still be made to the trace file as a result. 
The workaround is to restart the instance. How invasive this bug is depends on the executions of the cursor 
(and the size of the resultant trace file additions). 

cursordump (11gR2)
alter system set events 'immediate trace name cursordump level 16'

version_rpt3_21 script: download
Reference: Troubleshooting: High Version Count Issues [ID 296377.1]

shared pool latch wait event

Description of the shared pool latch

The shared pool latch is used to protect critical operations when allocating 
and freeing memory in the shared pool. 
If an application makes use of literal (unshared) SQL then this can severely 
limit scalability and throughput. The cost of parsing a new SQL statement is 
expensive both in terms of CPU requirements and the number of times the library 
cache and shared pool latches may need to be acquired and released. Before Oracle9, 
there was just one such latch for the entire database to protect the allocation of 
memory in the library cache. In Oracle9, multiple children were introduced to relieve
contention on this resource.
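
A quick way to see this on your own instance (a sketch, not part of the quoted text) is to list the shared pool child latches and their sleep counts; each child roughly corresponds to a shared pool sub-pool in 9i and later:

select child#, gets, misses, sleeps
  from v$latch_children
 where name = 'shared pool'
 order by sleeps desc;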

Ways to reduce shared pool latch contention

Avoid hard parses when possible: parse once, execute many. 
Eliminate literal SQL so that the same SQL is shared by many sessions (see the sketch after this list). 
Size the shared pool adequately to avoid reloads. 
Use of MTS (shared server option) also greatly influences the shared pool latch. 
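
To illustrate the "eliminate literal SQL" point, the sketch below uses hypothetical table and column names (orders, customer_id): the literal form produces a new parent cursor for every value, while the bind form is shared across executions.

-- Literal (unshared) SQL: one shared pool entry per distinct value.
select order_total from orders where customer_id = 1001;
select order_total from orders where customer_id = 1002;

-- Bind variable form: a single shared cursor (SQL*Plus syntax).
variable cust number
exec :cust := 1001
select order_total from orders where customer_id = :cust;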

Query for unshared (literal) SQL

--9i
SELECT substr(sql_text,1,40) "SQL", 
         count(*) , 
         sum(executions) "TotExecs"
    FROM v$sqlarea
   WHERE executions < 5
   GROUP BY substr(sql_text,1,40)
  HAVING count(*) > 30
   ORDER BY 2
  ;

--10g and later
SET pages 10000
SET linesize 250
column FORCE_MATCHING_SIGNATURE format 99999999999999999999999
WITH c AS
     (SELECT  FORCE_MATCHING_SIGNATURE,
              COUNT(*) cnt
     FROM     v$sqlarea
     WHERE    FORCE_MATCHING_SIGNATURE!=0
     GROUP BY FORCE_MATCHING_SIGNATURE
     HAVING   COUNT(*) > 20
     )
     ,
     sq AS
     (SELECT  sql_text                ,
              FORCE_MATCHING_SIGNATURE,
              row_number() over (partition BY FORCE_MATCHING_SIGNATURE ORDER BY sql_id DESC) p
     FROM     v$sqlarea s
     WHERE    FORCE_MATCHING_SIGNATURE IN
              (SELECT FORCE_MATCHING_SIGNATURE
              FROM    c
              )
     )
SELECT   sq.sql_text                ,
         sq.FORCE_MATCHING_SIGNATURE,
         c.cnt "unshared count"
FROM     c,
         sq
WHERE    sq.FORCE_MATCHING_SIGNATURE=c.FORCE_MATCHING_SIGNATURE
AND      sq.p                       =1
ORDER BY c.cnt DESC

Query overall parse activity for the database

select to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits, 
to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses, 
to_char(100 * hard / calls, '999990.00') || '%' hard_parses 
from ( select value calls from v$sysstat where name = 'parse count (total)' ), 
( select value hard from v$sysstat where name = 'parse count (hard)' ), 
( select value sess from v$sysstat where name = 'session cursor cache hits' );

Reference: Note 62143.1 Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch

cache buffers lru chain latch wait event

Official description of the cache buffers lru chain latch

The cache buffer lru chain latch is acquired in order to introduce a new block into the buffer cache 
and when writing a buffer back to disk, specifically when trying to scan the LRU (least recently used) chain 
containing all the dirty blocks in the buffer cache.

Possible causes of cache buffers lru chain latch contention

Any process that wants to read or modify the LRU+LRUW lists must hold the cache buffers lru chain latch.
If contention occurs while doing so, the process waits on the latch: cache buffers lru chain event.
In summary, the following two situations lead to cache buffers lru chain latch contention:
1. When a process wants to read a block that is not yet loaded into memory, it scans the LRU list to allocate the free buffer it needs, and it must hold the cache buffers lru chain latch while doing so.
2. When DBWR writes dirty buffers out to disk, it scans the LRUW list and moves the corresponding buffers to the LRU list, which also requires the cache buffers lru chain latch.
2.1) DBWR writes dirty buffers to disk in the following cases:
2.2) when an Oracle process, in order to obtain a free buffer, asks DBWR to write out dirty buffers;
2.3) when an Oracle process requests that the dirty buffers of the objects involved be written out for operations such as Parallel Query, Tablespace Backup, or Truncate/Drop;
2.4) when a checkpoint is performed, periodically or for administrative reasons.
2.5) Oracle performs periodic checkpoints to guarantee recovery within the time specified by FAST_START_MTTR_TARGET (or LOG_CHECKPOINT_TIMEOUT).
2.6) Checkpoints also occur when an administrator issues a checkpoint command or when a log file switch happens.

The most important cause of cache buffers lru chain latch contention is an excessive demand for free buffers. Inefficient SQL is the most typical source of such demand: when multiple sessions execute inefficient SQL concurrently, they contend for the cache buffers lru chain latch both while searching for free buffers and while dirty buffers are being written out. The probability of contention is high when multiple sessions scan different tables or indexes at the same time: each session loads different blocks into memory, the number of requests to secure free buffers grows, and so does contention on the working sets. Contention becomes especially severe when data is modified heavily, so that there are many dirty buffers and DBWR has to scan the LRUW list frequently because of checkpoints.

Another important characteristic of cache buffers lru chain latch contention is that it is accompanied by physical I/O. If the problem is caused by inefficient index scans, db file sequential read waits appear together with the lru chain latch contention; if it is caused by unnecessary full table scans, db file scattered read waits appear together with it. In practice, cache buffers chains latch contention and cache buffers lru chain latch contention often occur together, because complex applications combine the patterns above. An undersized buffer cache or an overly short checkpoint interval also increases cache buffers lru chain latch contention, but buffer caches are rarely too small these days and the checkpoint interval usually stays at its default, so the cause is normally still inefficient SQL.
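
To confirm which of these patterns you are hitting, a diagnostic sketch (not part of the original text) is to check the lru chain child latches and the sessions currently waiting, alongside the physical-read wait events mentioned above:

-- Which 'cache buffers lru chain' children are sleeping the most?
select child#, gets, misses, sleeps
  from v$latch_children
 where name = 'cache buffers lru chain'
 order by sleeps desc;

-- Who is waiting right now, and on what?
select sid, event, p1raw latch_addr, seconds_in_wait
  from v$session_wait
 where event in ('latch: cache buffers lru chain',
                 'db file sequential read',
                 'db file scattered read');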

CACHE BUFFERS CHAINS wait event

Description of CACHE BUFFERS CHAINS

The CACHE BUFFERS CHAINS latch is acquired when searching for data blocks cached in the buffer cache. 
Since the buffer cache is implemented as a set of chains of blocks, each of those chains is protected 
by a child of this latch when it needs to be scanned. Contention on this latch can be caused by very heavy 
access to a single block. This can require the application to be reviewed.

Causes of CACHE BUFFERS CHAINS contention

The main cause of the cache buffers chains latch contention is usually a hot block issue. 
This happens when multiple sessions repeatedly access one or more blocks that are protected 
by the same child cache buffers chains latch.
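
A quick way to confirm hot-block contention in real time (a sketch, complementary to the AWR-based approach below) is to look at the sessions currently waiting on the latch; P1 is the child latch address and can be fed into the hot-block query later in this section:

-- Sessions currently waiting on cache buffers chains; P1RAW is the child latch
-- address, which maps to x$bh.hladdr in the hot-block query further down.
select sid, p1raw latch_addr, p2 latch#, seconds_in_wait
  from v$session_wait
 where event = 'latch: cache buffers chains';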

How to address CACHE BUFFERS CHAINS contention
1) Examine the application to see if the execution of certain DML and SELECT statements can be reorganized to eliminate contention on the object.

The approach is as follows:
--Confirm the latch: cache buffers chains waits from the AWR report (Top 5 Timed Events)

Top 5 Timed Events                                      Avg    %Total
~~~~~~~~~~~~~~~~~~                                      wait   Call
Event                          Waits        Time (s)    (ms)   Time   Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
latch: cache buffers chains          74,642      35,421    475    6.1 Concurrenc
CPU time                                         11,422           2.0
log file sync                        34,890       1,748     50    0.3 Commit
latch free                            2,279         774    340    0.1 Other
db file parallel write               18,818         768     41    0.1 System I/O
-------------------------------------------------------------

--Find the SQL with high logical reads
SQL ordered by Gets         DB/Inst:  Snaps: 1-2
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> Total Buffer Gets:   265,126,882
-> Captured SQL account for   99.8% of Total


                            Gets                CPU      Elapsed
Buffer Gets    Executions   per Exec     %Total Time (s) Time (s)  SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
   256,763,367       19,052     13,477.0   96.8 ######## ######### a9nchgksux6x2
Module: JDBC Thin Client
SELECT * FROM SALES ....

     1,974,516      987,056          2.0    0.7    80.31    110.94 ct6xwvwg3w0bv
SELECT COUNT(*) FROM ORDERS ....

--Segments with the most logical reads
Segments by Logical Reads           
-> Total Logical Reads:     265,126,882
-> Captured Segments account for   98.5% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
DMSUSER    USERS      SALES                           TABLE  212,206,208   80.04
DMSUSER    USERS      SALES_PK                        INDEX   44,369,264   16.74
DMSUSER    USERS      SYS_C0012345                    INDEX    1,982,592     .75
DMSUSER    USERS      ORDERS_PK                       INDEX      842,304     .32
DMSUSER    USERS      INVOICES                        TABLE      147,488     .06
          -------------------------------------------------------------
Approach:
1.Look for SQL that accesses the blocks in question and determine if the repeated reads are necessary. 
  This may be within a single session or across multiple sessions.

2.Check for suboptimal SQL (this is the most common cause of the events)  
 look at the execution plan for the SQL being run and try to reduce the 
 gets per executions which will minimize the number of blocks being accessed 
 and therefore reduce the chances of multiple sessions contending for the same block.

Note:1342917.1 Troubleshooting ‘latch: cache buffers chains’ Wait Contention

2) Decrease the buffer cache - although this may only help in a small number of cases.

3) DBWR throughput may be a factor in this as well. If using multiple DBWRs, then increase the number of DBWRs.

4) Increase PCTFREE for the table storage parameters via ALTER TABLE or a rebuild. This will result in fewer rows per block (see the DDL sketch after item 5 below).

Finding hot blocks/objects
First determine which latch ids (ADDR) are interesting by examining the number of 
sleeps for this latch. The higher the sleep count, the more interesting the 
latch id(ADDR) is:

SQL> select CHILD#  "cCHILD"
     ,      ADDR    "sADDR"
     ,      GETS    "sGETS"
     ,      MISSES  "sMISSES"
     ,      SLEEPS  "sSLEEPS" 
     from v$latch_children 
     where name = 'cache buffers chains'
     order by 5, 1, 2, 3;

Run the above query a few times to establish the id(ADDR) that has the most 
consistent amount of sleeps. Once the id(ADDR) with the highest sleep count is found
then this latch address can be used to get more details about the blocks
currently in the buffer cache protected by this latch. 
The query below should be run just after determining the ADDR with 
the highest sleep count.

SQL> column segment_name format a35
     select /*+ RULE */
       e.owner ||'.'|| e.segment_name  segment_name,
       e.extent_id  extent#,
       x.dbablk - e.block_id + 1  block#,
       x.tch,
       l.child#
     from
       sys.v$latch_children  l,
       sys.x$bh  x,
       sys.dba_extents  e
     where
       x.hladdr  = '&ADDR' and
       e.file_id = x.file# and
       x.hladdr = l.addr and
       x.dbablk between e.block_id and e.block_id + e.blocks -1
     order by x.tch desc ;

Example of the output :
SEGMENT_NAME                     EXTENT#      BLOCK#       TCH    CHILD#
-------------------------------- ------------ ------------ ------ ----------
SCOTT.EMP_PK                       5            474          17     7,668
SCOTT.EMP                          1            449           2     7,668

Depending on the TCH column (The number of times the block is hit by a SQL 
statement), you can identify a hot block. The higher the value of the TCH column,
the more frequent the block is accessed by SQL statements.

5) Consider implementing reverse key indexes (if range scans aren’t commonly used against the segment)
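
As an illustration of points 4) and 5), the DDL sketch below reuses the SALES table and SALES_PK index names from the AWR extract above; adjust them to your own objects and test first, since both changes have side effects (the table occupies more blocks, and a reverse key index cannot be used for range scans).

-- Point 4): leave more free space per block so fewer (hot) rows share a block.
alter table sales pctfree 40;
alter table sales move;            -- rebuild existing blocks with the new PCTFREE

-- Point 5): rebuild the index (it is UNUSABLE after the MOVE anyway) as a reverse
-- key index, spreading right-hand-side key inserts across many leaf blocks.
alter index sales_pk rebuild reverse;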

Bind Variable Peeking Test

Relevant parameters

SQL> select * from v$version;

BANNER
---------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> show parameter optimizer_mode;

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ----------------
optimizer_mode                       string                 ALL_ROWS

SQL> show parameter cursor_sharing;

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ----------------
cursor_sharing                       string                 EXACT

SQL> select a.ksppinm name,b.ksppstvl value,a.ksppdesc description
  2    from x$ksppi a,x$ksppcv b
  3   where a.inst_id = USERENV ('Instance')
  4     and b.inst_id = USERENV ('Instance')
  5     and a.indx = b.indx
  6     and upper(a.ksppinm) LIKE upper('%&param%')
  7  order by name
  8  /
Enter value for param: _optim_peek_user_binds
old   6:    and upper(a.ksppinm) LIKE upper('%&param%')
new   6:    and upper(a.ksppinm) LIKE upper('%_optim_peek_user_binds%')

NAME                             VALUE                    DESCRIPTION
-------------------------------- ------------------------ ----------------------------------
_optim_peek_user_binds           TRUE                     enable peeking of user binds

Create a test table

SQL> create table t_xifenfei(id number,name varchar2(30));

Table created.

SQL>  begin
  2     for i in 1..100000 loop
  3          insert into t_xifenfei values(i,'xifenfei');
  4      end loop;
  5      commit;
  6    end;
  7    /

PL/SQL procedure successfully completed.

SQL> update t_xifenfei SET name='www.orasos.com' where mod(id,20000)=0;

5 rows updated.

SQL> commit;

Commit complete.

SQL> create index i_xifenfei on t_xifenfei(name);

Index created.

Gather statistics with defaults and check the execution plans

SQL> exec  DBMS_STATS.gather_table_stats(user,'T_XIFENFEI',CASCADE=>TRUE);

PL/SQL procedure successfully completed.

SQL> set autot trace exp 
SQL> select id from t_xifenfei where name='xifenfei';

Execution Plan
----------------------------------------------------------
Plan hash value: 548923532

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 50000 |   683K|   103   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI | 50000 |   683K|   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='xifenfei')

SQL> select id from t_xifenfei where name='www.orasos.com';

Execution Plan
----------------------------------------------------------
Plan hash value: 548923532

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 50000 |   683K|   103   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI | 50000 |   683K|   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='www.orasos.com')
--Here we can see that even though this value matches only a few rows, an appropriate execution plan was not chosen
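
One way to see why (a check that is not part of the original transcript) is to look at whether the default gather actually created a histogram on the NAME column; with the 11g default METHOD_OPT of 'FOR ALL COLUMNS SIZE AUTO' and no recorded column usage, NAME may end up without one, so the optimizer assumes an even split between the two values.

-- Check column statistics and histogram for T_XIFENFEI after each gather.
select column_name, num_distinct, histogram, num_buckets
  from user_tab_col_statistics
 where table_name = 'T_XIFENFEI';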

Gather accurate statistics

SQL> exec  DBMS_STATS.gather_table_stats(user,'T_XIFENFEI',CASCADE=>TRUE,
    2 method_opt => 'FOR ALL COLUMNS SIZE 254',estimate_percent => 100);

PL/SQL procedure successfully completed.

Check the execution plans again

SQL> select id from t_xifenfei where name='www.orasos.com';


Execution Plan
----------------------------------------------------------
Plan hash value: 1926396081

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |     1 |    14 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T_XIFENFEI |     1 |    14 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_XIFENFEI |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("NAME"='www.orasos.com')


Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
        320  consistent gets
          0  physical reads
          0  redo size
        418  bytes sent via SQL*Net to client
        419  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

SQL> select id from t_xifenfei where name='xifenfei';

99995 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 548923532

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 99999 |  1367K|   103   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI | 99999 |  1367K|   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='xifenfei')


Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
       6970  consistent gets
          0  physical reads
          0  redo size
    1455968  bytes sent via SQL*Net to client
      73745  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99995  rows processed
--This shows that after fully gathering statistics on the table and index, including histograms, the execution plans are correct.
--It also shows that when column data is very unevenly distributed, relying on automatic histogram collection is not very reliable.

Test using AUTOTRACE

SQL> set autot trace exp
SQL> var a varchar2(30);
SQL> exec :a := 'www.orasos.com';

PL/SQL procedure successfully completed.

SQL> select id from t_xifenfei where name=:a;

Execution Plan
----------------------------------------------------------
Plan hash value: 548923532

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 50000 |   683K|   103   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI | 50000 |   683K|   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"=:A)
--Here it looks as though 11g Bind Variable Peeking did not use the correct execution plan; in fact this is a limitation of AUTOTRACE itself, which does not show the plan actually produced by bind peeking

Collect the execution plans of the following SQL with the get_plan.sql script (needed for the peeking test)
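
The get_plan.sql script itself is not included in this post; a minimal sketch of an equivalent SQL*Plus script, which prompts for a hash value and prints the cached plan with DBMS_XPLAN, could look like this (the script actually used below may differ):

-- get_plan.sql (sketch): show the cached execution plan for a given hash_value.
set verify off
rollback;
column sql_id new_value v_sql_id noprint
select sql_id, sql_text
  from v$sql
 where hash_value = &&hash_value
   and rownum = 1;
select * from table(dbms_xplan.display_cursor('&v_sql_id', null, 'TYPICAL'));
undefine hash_value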

SQL> select * from t_xifenfei where name='wwww.orasos.com' and id=100;

no rows selected

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 2708637417
select * from t_xifenfei where name='wwww.orasos.com' and id=100

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T_XIFENFEI |     1 |    14 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_XIFENFEI |     3 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=100)
   2 - access("NAME"='wwww.orasos.com')
SQL>  select * from t_xifenfei where name='xifenfei' and id=100;

        ID NAME
---------- ------------------------------------------------------------
       100 xifenfei

1 row selected.

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 1355242984
 select * from t_xifenfei where name='xifenfei' and id=100

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |       |       |   103 (100)|          |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI |     1 |    14 |   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("ID"=100 AND "NAME"='xifenfei'))
--Here we can see that both execution plans are what we expect

Test the peeking behavior

SQL> alter system flush shared_pool;

System altered.

SQL> select * from t_xifenfei where name='xifenfei' and id=100;

        ID NAME
---------- ------------------------------------------------------------
       100 xifenfei

1 row selected.

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 2860562673
select * from t_xifenfei where name='xifenfei' and id=100

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |       |       |   103 (100)|          |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI |     1 |    14 |   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("ID"=100 AND "NAME"='xifenfei'))
SQL> var b varchar2(30);
SQL> exec :b := 'www.orasos.com';

PL/SQL procedure successfully completed.

SQL> select * from t_xifenfei where name=:b and id=100;

no rows selected

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 4157424768
select * from t_xifenfei where name=:b and id=100

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |       |       |   103 (100)|          |
|*  1 |  TABLE ACCESS FULL| T_XIFENFEI |     1 |    14 |   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("ID"=100 AND "NAME"=:B))

--Force a new hard parse
SQL> alter system flush shared_pool;

System altered.

SQL> var b varchar2(30);
SQL> exec :b := 'www.orasos.com';

PL/SQL procedure successfully completed.

SQL> select * from t_xifenfei where name=:b and id=100;

no rows selected

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 4157424768
select * from t_xifenfei where name=:b and id=100

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T_XIFENFEI |     1 |    14 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_XIFENFEI |     6 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=100)
   2 - access("NAME"=:B)
SQL> var b varchar2(30);
SQL> exec :b := 'xifenfei';

PL/SQL procedure successfully completed.

SQL> select * from t_xifenfei where name=:b and id=100;

        ID NAME
---------- ------------------------------------------------------------
       100 xifenfei

1 row selected.

SQL> @get_plan.sql

Rollback complete.

Enter value for hash_value: 4157424768
select * from t_xifenfei where name=:b and id=100

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T_XIFENFEI |     1 |    14 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_XIFENFEI |     6 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=100)
   2 - access("NAME"=:B)
--Although Oracle 11g claims many improvements to Bind Variable Peeking,
--this experiment still shows that problems remain and can lead to incorrect execution plans

The overall test demonstrates a few points:
1. Gathering statistics with the DBMS_STATS defaults does not guarantee that every execution plan will be correct, especially when the data is very unevenly distributed.
2. AUTOTRACE cannot show the effect of Bind Variable Peeking.
3. Bind Variable Peeking takes effect at hard parse time. Although 11g has improved it, the effect is sometimes still not obvious. If the data is very skewed and a SQL statement frequently runs with an unsuitable plan, consider first purging that statement's cursor so it hard parses again and gets another chance (as sketched below); if that never helps, consider not using bind variables for that statement (correct execution plans at the cost of more hard parsing).
4. The _optim_peek_user_binds parameter can disable Bind Variable Peeking altogether, but doing so is strongly discouraged.
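
For point 3, here is a sketch of purging a single cursor from the shared pool so that its next execution hard parses again; the address and hash value passed to DBMS_SHARED_POOL.PURGE are placeholders and must be taken from the lookup query (DBMS_SHARED_POOL must be installed, and on some 10.2 patch levels PURGE needs an additional fix/event to take effect).

-- Find the parent cursor's address and hash value for the statement to purge.
select address, hash_value
  from v$sqlarea
 where sql_id = '&sql_id';

-- Purge it: the name is 'address, hash_value' and flag 'C' means cursor.
-- (The values below are placeholders; use the ones returned above.)
exec dbms_shared_pool.purge('000000038D5A6D50, 2708637417', 'C');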