Monday 7 December 2009

What is the purpose of the SYS_OP_C2C internal function

Recently I was involved in a problem with the CBO's selectivity estimations on an NVARCHAR2 column. What I spotted in the predicate information was the usage of the internal Oracle function SYS_OP_C2C.

Here is an example of the run-time execution plan using bind variables:


SQL_ID  1fj17ram77n5w, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ *
from x1 where cn >= to_nchar(:a1) and cn <= to_nchar(:a2)

Plan hash value: 2189453339

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |   9999 |00:00:00.08 |     102 |
|*  2 |   TABLE ACCESS FULL| X1   |      1 |      1 |   9999 |00:00:00.03 |     102 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(SYS_OP_C2C(:A1)<=SYS_OP_C2C(:A2))
   2 - filter(("CN"<=SYS_OP_C2C(:A2) AND "CN">=SYS_OP_C2C(:A1)))

As you can see, the bind variables are converted to the national character set using the TO_NCHAR function. Column X1.CN is of type NVARCHAR2. It is easy to spot the difference between the SQL statement and the predicate information in the execution plan: to_nchar(:a2) from the SQL statement is transformed to SYS_OP_C2C(:A2) in the predicate information. The internal Oracle function SYS_OP_C2C performs conversion from one character set to another character set - C(haracterSet)2C(haracterSet). There are situations when you will see this conversion going on without an explicit command, as in this case. That is a sign that the data types are not the same and an implicit conversion is taking place, which can also be a problem from a performance perspective, as it may disable index usage.
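With the explicit TO_NCHAR calls above, the conversion is at least visible in the SQL text. Here is a minimal sketch of the implicit case (table, column and index names are made up for illustration): when a VARCHAR2 column is compared to an NVARCHAR2 value, the national character set wins the data type precedence, so the conversion lands on the column side and a regular index on that column cannot be used:

create table t1 (vc varchar2(30));
create index t1_vc_idx on t1 (vc);

-- The NVARCHAR2 literal forces the implicit conversion onto the column
-- side, so the plan shows a filter on SYS_OP_C2C("VC") and T1_VC_IDX
-- is no longer a candidate for this access path.
select * from t1 where vc = N'SMITH';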

Jonathan Lewis and Charles Hooper, my fellow OakTable members, had a discussion about this internal function back in 2007.

5 comments:

  1. Hi, Joze,
    We recently had a performance problem caused by this filter function, simply because our code connects to the Oracle database through ODBC using widestrings while the data stored in the database is standard strings. (The ODBC queries are shared and some of our table columns use double bytes, so we can't simply change the queries to use standard strings.)
    The filter function made it impossible to use the index, which triggered a full table scan. My question is: why does the Oracle CBO behave this way? Wouldn't it be simpler and more efficient to just convert the input parameters to standard strings and use the index to get the result?
    Regards.

    Lucy

  2. Hi, Joze,
    After I read the link you posted ("discussion about this internal function..."), I have one more question: adding a new index (a function-based index) will also affect our software performance, since we have a large amount of inserts. Is there any other way to solve this problem? Do we have to change our database schema to use UTF8 instead? Thanks.

    Regards

    Lucy

  3. Lucy,
    I think you already found the right answer - "to change our database schema to use utf8". This would be the best solution and would probably require the smallest amount of effort, although right now it looks like a big project. In my opinion, fixing the application code and adding some "extra indexes" would help, but you would probably keep experiencing performance problems. So eliminating the root cause is the best possible option.
    But you can add a function-based index (a minimal sketch follows below the comments) and see if it really hurts your performance that much during data load. Your users would probably be happier if your queries performed better and used fewer resources. Just an idea, I really don't know the details of your case.

  4. Hi, Joze,
    Thanks for your quick response! I did try to add a function-based index and checked the execution plan of a simple select query (though I haven't tried it in the software itself yet); the CPU cost is much lower than without the function-based index, but I estimate it might cost 10~15% more on inserts.
    We will soon set up an environment to test an import of over 2M records, so that will be a challenge for performance. Will let you know how it went. Thanks again.
    Lucy

  5. Hi, Joze,
    I am still curious why Oracle doesn't convert the input parameters but instead converts the data records implicitly (adding the filter function). Converting the parameters only needs to run the conversion function once, while converting the data costs much more, depending on the table size.
    Regards,
    Lucy

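For completeness, here is a minimal sketch of the function-based index discussed in the comments, using the hypothetical T1 table from the sketch above. The index stores the already-converted values, so a filter on SYS_OP_C2C("VC") can be answered with an index range scan instead of a full table scan (whether the optimizer actually picks it up depends on your version and character-set settings):

-- Hypothetical function-based index on the converted column value
create index t1_vc_c2c_idx on t1 (sys_op_c2c(vc));

The trade-off, as Lucy estimates above, is the extra index maintenance cost on every insert.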