Oracle Performance and Engineering
Random notes on database features, code, practice, optimization and techniques.
Friday, April 29, 2005
Wanna make your PL/SQL faster - UPGRADE today!!
Earlier I've written a couple of entries on how the
Oracle 10g PL/SQL compiler has been re-written to make it around 200% faster than 9i on average. Based on some crude tests on an Enterprise Application, I suggested the same to a customer. The results - in his words - were *amazing*.
He got more than a 3x speed improvement for his critical PL/SQL programs.
Here's what I asked him to do - he was running on 9i --
(1) Upgrade to 10g. Make sure things work.
(2) Make sure 'PLSQL_OPTIMIZE_LEVEL' is left at its default value of 2.
(3) Assign a directory to 'PLSQL_NATIVE_LIBRARY_DIR' where compiled PL/SQL program units will be stored. Make sure the regular backup schedule backs this directory up.
(4) Change PLSQL_CODE_TYPE to 'NATIVE'.
(5) Recompile ALL database PL/SQL objects.
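A minimal sketch of the parameter changes in steps (2) through (4) - the directory path here is illustrative, adjust it for your install:

```sql
-- (2) verify the optimizer level is still at the 10g default of 2
show parameter plsql_optimize_level

-- (3) point the native-compilation library directory at a backed-up location
alter system set plsql_native_library_dir = '/u01/app/oracle/plsql' scope=both;

-- (4) switch subsequent compilations to native code
alter system set plsql_code_type = 'NATIVE' scope=both;

-- (5) then recompile all PL/SQL, e.g. by running utlrp.sql as SYSDBA
```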
There's no 'win all' rule - but I would definitely ask you to test this route.
PL/SQL itself is about twice as fast in 10g compared to 9i. Running the NATIVE compiler (as opposed to INTERPRETED), you can expect another 35% or so on top of that!
I've done a lot of studies and experiments on 10g and found that the 10g PL/SQL compiler rocks because of the following
(in descending order of 'rockstar factor' ;) ) -
1. Automatic bulkification. Remember code like -
for c1 in (select ename from emp) loop
  null; -- process c1.ename here
end loop;
This type of code is abundant in 'old' installs. But since Oracle 8i you've known that it's not the most efficient way of doing things: you can bulk-process the whole dataset using BULK COLLECT (and FORALL for DML) - one call per batch instead of one call per record.
With 10g, you can relax. 10g automatically 'bulkifies' static SQL like the above.
Thus the compiler itself 'rewrites' the inefficient code for 'array processing'.
i.e., the rewritten code will be like -
open c1; -- for select ename from emp
loop
  fetch c1 bulk collect into m_array limit 100;
  for i in 1 .. m_array.count loop
    null; -- process m_array(i)
  end loop;
  exit when c1%notfound;
end loop;
close c1;
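If you want to bulkify by hand anyway - still worthwhile for DML, which the compiler does not rewrite - a sketch using BULK COLLECT plus FORALL on the classic EMP table might look like this:

```sql
declare
  type t_empnos is table of emp.empno%type;
  l_empnos t_empnos;
begin
  -- one call fetches the whole set
  select empno bulk collect into l_empnos from emp where deptno = 10;

  -- one call performs all the updates
  forall i in 1 .. l_empnos.count
    update emp set sal = sal * 1.1 where empno = l_empnos(i);
end;
/
```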
2. No more need to use DBMS_SQL for dynamic SQL. Native Dynamic SQL (EXECUTE IMMEDIATE) is faster with 10g.
Here's what I've found on 10g -
EXECUTE IMMEDIATE and DBMS_SQL are both about equally fast for single-row operations. They're, however, slightly slower than Static SQL doing the same single-row operation.
On 9i, EXECUTE IMMEDIATE was considerably slower since every time it was fired it had to hard parse again. From 10g, EXECUTE IMMEDIATE can 'reuse the cursor'; i.e., starting with 10g, an EXECUTE IMMEDIATE statement can be found in the PL/SQL cursor cache.
EXECUTE IMMEDIATE is significantly slower than static SQL for multi-row, non-bulked code, since the PL/SQL optimizer bulkifies the static SQL in such cases. *Thus*, bulking up - array processing - still is and will remain a good practice. 10g bulkifies only static SQL; nothing similar happens with dynamic SQL!
EXECUTE IMMEDIATE and static SQL are very comparable when both are bulked.
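Dynamic SQL can be bulked by hand too; a minimal sketch (the EMP table is assumed here):

```sql
declare
  type t_names is table of emp.ename%type;
  l_names t_names;
begin
  -- bulked dynamic SQL: one round trip for the whole result set
  execute immediate 'select ename from emp' bulk collect into l_names;
  dbms_output.put_line(l_names.count || ' names fetched');
end;
/
```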
(Contd. with examples..)
Tuesday, April 26, 2005
A nice (and free!) resource profiler
I stumbled across
this today. Tested it with a few large trace files and guess what - it works very well. That it's free only adds to my excitement.
The Hotsos profiler, designed by Cary Millsap, can however also provide bind variable histograms.
Read a very interesting, and deviant, comment regarding Cary's "
Optimizing Oracle Performance" book
here. I've always rated Cary's book pretty highly - but not for the specific things it teaches you. It gives you a great 30,000 ft view of performance analysis and tuning, especially in the first part. If I were a beginner, I would most definitely not read this book until I'd finished the Concepts manual and the administrator's guide, and hung around in the 'field' for a couple of years. If I were a 'mid-level' guy, I would still not carry this book everywhere I go; I would read it, however, as soon as I'd finished 'Practical Oracle 8i', 'Expert One-on-One Oracle', 'Effective Oracle by Design' and 'Mastering Oracle PL/SQL'. If I were an instructor or performance 'evangelist', or led a team of 15 people doing performance work, this book would be much higher on my reading list. But then, that role would not be restricted to analyzing Oracle-specific issues *only*.
Thursday, April 21, 2005
Capacity Planning - what CPU to choose for Oracle?
I did capacity planning for a client a few days back. I suggested - based on their budget and application usage - 4 Intel Xeon 2.8GHz(+) CPUs with hyperthreading enabled. Today I got a very interesting question -
"For processors on your hardware suggestions, you specified you suggested Intel Xeon 2.8GHz (+), but nothing was specified about cache or Front Side Bus, which both make a huge difference in cost. Assuming these are the options, what would you suggest?
* 3GHz /4MB/ 400 MHz FSB
* 3.2GHz/1MB/800Mhz FSB"
Hmmm.. So what are these things?
3GHz is the clock-speed of the CPU - the maximum frequency at which the CPU can operate. 4 (or 1) MB is the CPU cache, and 800 (or 400) MHz is the front-side bus speed.
What's "front-side bus"?
It does what a muni bus does, i.e., transfer stuff. Rather than people, it transfers data between the CPU and other components like RAM, hard disks, etc.
A hash join, for example, is CPU intensive; so is sorting. All that data reaches the CPU over the "bus".
What's "CPU Cache"?
You use RAM to cache data from hard disk because hard disk access is slow. Similarly, CPU uses CPU cache to cache data from RAM since CPU Cache access is faster than RAM access.
Guess what? There're terms like 'Cache Hit' and 'Cache Miss' too.
What's hyper-threading?
It's the ability of one single CPU to process two threads (say, Oracle processes) at the same time. Intel introduced this with their Xeon processors.
Here's what I wrote back to the client -
"My answer -
BTW, we suggested Xeon (i.e., a hyper-threaded CPU) because database benchmarks typically run about 25% faster on a hyper-threaded box compared to a box with similar clock-speed CPUs but no hyper-threading.
A hyper-threading enabled Xeon CPU shares two threads (or processes) on the same physical CPU. Depending on the type and scale of the application, these two sharing processes could actually work against each other (say, reading huge amounts of data from different parts of memory). That typically raises "bus contention". I quickly checked some benchmarks, and it seems hyper-threading performance scales better with a faster bus than with a bigger cache.
In other words, merely increasing CPU clock-speed for such an app running hyper-threaded could be futile without an associated increase in bus speed.
Thus, logically and factually, my vote would be for having the bigger bus your money could buy, i.e., the second option here.
* 3.2GHz/1MB/800Mhz FSB
Another off-topic thing, just in case it's raised later - to take advantage of hyper-threading, Oracle does *not* need to run in a multi-threaded mode.
BTW, if you want to do more research on this -
For hyper-threading basics
Some Crude Benchmarks
Excellent Hardware Site - Reviews, How-tos
Even more hyper-threading
Wednesday, April 20, 2005
Wasted Work by your app. and licensing model for Oracle
Today was a really busy day.
This is how it went -
- Call with a European client (pretty big ;)) to assist him in budgeting for Oracle databases
- Webex-ed with a Texas customer to trace a severe performance issue. There was a nasty query on two views - and as it happens, we figured out both views had issues! One was written badly. The other? Was fine - except the data model had changed between releases and no one had looked at the view to 'update' it.
Thus, where they wanted to "select the project and whether the project has a 'notes' field" (so they could highlight the "notes" icon in the UI), they were actually executing "select the project AND the LATEST note that project has"!
Wow! Talk to me about "wasted" work. I would be happy to see a survey on the % of cases where an application is slow because it was doing "throwaway work" - reading data or computing stuff that it NEVER used!
You find out the "LATEST" movie you've seen just to figure out if you've seen a movie at all in your life!!! And guess what? Some people actually have seen over 10,000 movies. You sort all of them on time order and find out the latest. "Conspiracy Theory". Hey! That's pretty old. So what? You just go back and say "Guess what. I actually have seen a movie". Folks, use EXISTS. Use ANY. Use stuff like that. Sometimes JOINs are NOT necessary.
The above example with movies was a simplified one. In reality, they were computing "the last movie you watched wearing yellow clothes when it was not raining"!
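For the "notes icon" case above, a sketch of the EXISTS rewrite - the table and column names here are made up for illustration:

```sql
-- instead of joining/sorting to find the LATEST note, just ask: is there any note?
select p.project_id,
       case when exists (select null
                         from   notes n
                         where  n.project_id = p.project_id)
            then 'Y' else 'N'
       end as has_notes
from   projects p;
```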
I have one easy piece of advice: if you're lactose intolerant - don't buy milk from Costco!
- Then had to do capacity planning for another client who's going to license a certain software package. Moore's law is about to celebrate its 40th anniversary. Thankfully, things are cheap - at least hardware is. A decade back I would have cross-checked 5 times if I'd calculated the disk requirement to be 125GB for the next 2 years (including archive logs and flash recovery area). Today, I just shoot off the email!
Here's the extract from my mail to my European client on what Oracle licensing he should use. His initial worries, in his language -
"** is having problems because his cost for Oracle licenses is through the roof. He could not disclose the cost, but said it was over €100 per user. If he can free up that cost I think that € will become part of his new budget. We have decommissioned Planview which required a huge amount of Oracle licencees. So we freed up licencees in this area. We need a description latest Thursday this week due to upcoming meetings with Oracle."
So, basically, the client had so far been paying a "per user" license, wanted to deploy a huge Enterprise App running on Oracle, and was now concerned that his Oracle cost would shoot "through the roof".
My reply -
"Oracle *database* has two licensing models, namely -
(1) ‘Named User Plus’ that counts, according to Oracle, “human users and non-human operated devices” that access a database.
Listed price for Oracle database Standard Edition is $300 per user.
For Oracle database Enterprise Edition - $800 per user.
(2) Processor based – self-explanatory.
Standard Edition: $15,000 per CPU
Enterprise Edition: $40,000 per CPU.
The only catch: Standard Edition is limited to a maximum of FOUR physical CPUs.
Thus the ‘which type of license to buy’ should be based on –
-- Is the user base countable?
-- From the list prices above, if the number of users per CPU exceeds 50, the "processor" license becomes cheaper.
It’s hard for me to say without knowing the scale of **** implementation there, but our biggest implementations do usually have more than 50 users per processor. For example, XXXXX has a pretty big module licensed for 6000+ named **** users implemented over a four-CPU database server. I don’t know if any **** site purchased ‘named user plus’ licensing from Oracle (doesn’t mean there’s none!) since ‘processor’ based license makes more sense for ****.
**** should also run fine, up to a certain scale, on Standard Edition. The decision to upgrade to Enterprise Edition should be driven by scalability concerns (say, a need to implement RAC) and not by **** usability issues. **** is fully functional on Oracle Standard Edition.
Does XXX have Oracle E-business suite deployed there? E-business suite licenses, I checked out, are per-user based (median Euro 99 per user per month).
In their case, they don't have to pay Oracle for annual support and upgrades ($3000 to $8000) separately for ****, since it must be covered by the existing installs. I also found Oracle has a published discount of 25% for million-dollar-plus customers."
Monday, April 18, 2005
Propagation of HINT from a view to the query - workflow for hint
Good news -
Tom Kyte has started a blog.
There've been many situations where a view is hinted for performance. That view, then, is reused by many a query. When the said view contains a "global" hint like RULE, the whole query - unless otherwise hinted - assumes the hint!
There're some other subtleties in the hint 'workflow' (i.e., how to find the scope of a hint) -
(1) QUERY A (hinted)
UNION
QUERY B (not hinted)
The hint will apply only to the first part (if at all - Oracle could merge the two parts and execute a single statement).
(2) A hint can also be applied to a "query block", after finding the system-generated query block name from the explain plan. A good example of a query block -
select ename, mgr_name from emp
where (emp_type, salary) in
(select emp_type, max(salary) from emp group by emp_type)
The subquery in the last line is a query block and will be system-named.
Here's an example of applying hint to a query block.
Query blocks can also be named with the hint QB_NAME.
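A sketch of naming a query block yourself with QB_NAME and then hinting it from the outer query - the EMP table is assumed, and whether the FULL hint actually helps depends on your plan:

```sql
select /*+ full(@maxsal emp) */ ename, sal
from   emp
where  sal = (select /*+ qb_name(maxsal) */ max(sal)
              from   emp);
```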
(3) Hints do not propagate inside a complex view.
(4) For "optimizer goal" hints - the query level hint is used. The hint in the view is ignored.
i.e., if a query is hinted FIRST_ROWS and the view ALL_ROWS, the optimizer will try to execute the query with a FIRST_ROWS plan.
(5) If top-level query does not have a goal hint - the goal hint from the view is applied against the whole query.
(6) With non-mergeable views optimization goal hints inside the view are totally ignored.
For such views the access path and join hints are preserved.
Nonmergeable views are basically like stand-alone tables. Oracle will have to materialize the results of the view before joining it to other data sources.
(7) The FIRST_ROWS hint is ignored in queries with aggregate or set operations like UNION, DISTINCT, GROUP BY or ORDER BY. This is easy to understand: for operations such as GROUP BY, the full dataset has to be materialized before the requested data can be returned.
(8) RULE hint is unsupported!
(9) INDEX_SS is a hint to use "index skip scan"
(10) Sometimes for OLTP operations a HASH join needs to be avoided. The NO_USE_HASH hint coupled with USE_NL_WITH_INDEX works wonders in those cases.
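A sketch of steering the optimizer away from a hash join for an OLTP-style lookup - EMP/DEPT tables assumed, and the index name pk_dept is an assumption:

```sql
select /*+ no_use_hash(d) use_nl_with_index(d pk_dept) */
       e.ename, d.dname
from   emp e, dept d
where  e.deptno = d.deptno
and    e.empno  = 7788;
```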
Tomorrow I will show a real-life example where a RULE hint inside a (non-mergeable) view was killing the performance of a query that called the view.
Thursday, April 07, 2005
Proof, Concept, Correlation and Truth
I am donating my free time today to reading '
In Search of Truth', or 'Correlation is not Causation'.
Boy! The Oracle community is steaming hot these days. That said, it's a fantastic article and very thought-provoking.
Wednesday, April 06, 2005
Random Rant - Ongoing Debates in community
I was going through
this post - where Don tries to launch a subtle attack on
Tom Kyte.
I was most particularly offended by Don's comment -
"
Hi Mike,
Is it just me, or does it seem like the vast majority of these "script kiddies" come from outside the USA? Why is it that with the overwhelming majority of Oracle users in the United States, there is such a disproportional amount of this nonsense from foreign lands?"
This is very offensive. It's not only offensive - it's ultra-racist. I was born outside the US and it's annoying. And I thought the Nazis were gone!
It was subsequently deleted, but can be found in
Jonathan Lewis' well-worded article in reply.
Happened to come across a (bit dated)
list of Oracle Certified Masters with their email and phone numbers!! Recruiters, attack :)
Well, as for the debate - what works and what doesn't - I have a very "open" policy. I keep my mind open. When I face a technical challenge, I try to attack it with a 'free' mind armed with two things. Namely, (a) basic truths (as in: tables contain rows; databases work and should not, by design, keep throwing "ORA-600" errors; etc.) and (b) the set of options I have (say, a hash-partitioned index, clustering, using DECODE to replace a bitmap index, etc.). I try to totally forget the precepts - 'index rebuild is good', 'index rebuild is bad' - preached by "gurus".
To me, the world has become smaller, but the number of dimensions of every problem has multiplied. Earlier assumptions don't work now; new and more complicated facets make the problem description totally different. Yes, attacking each new problem on its merits may sometimes be more time-consuming than it would be had I tried to make each fall within a bucket. But in the long run - for really challenging issues - I've found the 'open approach' works best and leads me to a solution. I will continue with that.
Many, many years back, when I started reading detective novels, the first thing I would do after the murder was to suspect just everyone and work with the author from there to digest the truth. It was, and is, such a cool method!
Tuesday, April 05, 2005
New PL/SQL compiler - twice as fast?
If you're already on Oracle 10g you must have noticed the very *significant* baseline performance gain your application shows (if it's PL/SQL intensive) after the upgrade. If you're closely monitoring the database, you may also have found that the PLSQL_OPTIMIZE_LEVEL init parameter's default value is 2 (the parameter itself is new in 10g). This means the PL/SQL compiler now works hard to tune your PL/SQL program for you.
They say and we've verified -
"Oracle Database 10g provides a new optimizing PL/SQL compiler which speeds up computationally intensive programs ported from older versions by two to three times on average."
An example, so far --
A := B + C + D;
or
X or Y or Z
or
if <condition> then
  SQL that's slow but executes LESS often;
else
  SQL that's fast but executes more often (because the earlier branch evaluates false most of the time);
end if;
(or any associative expression, for that matter) was always evaluated from left to right.
No more - if you've upgraded to 10g *and* have not tweaked PLSQL_OPTIMIZE_LEVEL (leave it at the default). This is the best feature of Oracle 10g according to my needs and tests.
A
classic paper by Charles Wetherell discusses this in more technical detail.
Some other examples from the paper -
B := A * NULL - need never be evaluated, since the result is always NULL.
The operations
c := a + b + e;
…
d := f + a + b;
can be 'optimized' as
T := a + b;
c := T + e;
…
d := f + T;
So you are doing LESS calculation, since (a + b) is evaluated ONCE!
Or the expressions
… A + B …
… A + B …
can be changed to
T := A + B;
… T …
… T …
Or
for i in 1 .. 10 loop
  A := B + C;
  …
end loop;
can be changed to
A := B + C;
for i in 1 .. 10 loop
  …
end loop;
"To achieve these improvements, the compiler must be free to reorder and to eliminate computations so long as the computational intention of the original program is preserved."
- That's where PLSQL_OPTIMIZE_LEVEL comes in handy for Oracle!
I gathered from the paper that the "for" loop has become much faster since the iterative incrementing of the loop indicator has been rewritten. In my tests, simple FOR loops are indeed over 4x faster between comparable 9i and 10g versions.
However, I was checking whether the "
PL/SQL function used in WHERE clause" issue has been fixed. Earlier, even if the function returned a constant, it was evaluated ONCE per ROW returned by the query. The most optimized form would, of course, execute the function just once.
I see something interesting in the white paper in this regard. To quote -
"There is a DETERMINISTIC keyword which is used by SQL on PL/SQL functions to indicate that the function’s value depends only on the values of the actual arguments and that the function does not have any side effects which are of consequence. For the moment, the PL/SQL compiler ignores this keyword. It is possible in the future and with explicit permission from the programmer that the compiler will take advantage of the DETERMINISTIC keyword to eliminate some function calls. If this change is ever made to the behavior of PL/SQL, there will be plenty of advanced warning to programmers."
It seems to me that in some future release Oracle will check whether the function is DETERMINISTIC and whether the parameters passed are the same between two invocations. If the parameters are the same and the function is deterministic (meaning it will ALWAYS produce the same result for the same parameters), Oracle will not compute the function twice. But we're not there yet! Sigh...
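Until the compiler exploits DETERMINISTIC, a common workaround is scalar-subquery caching. A sketch - my_threshold here is a hypothetical parameterless function:

```sql
-- my_threshold() may be called once per candidate row:
select ename from emp where sal > my_threshold();

-- wrapped in a scalar subquery, it is typically evaluated only once:
select ename from emp where sal > (select my_threshold() from dual);
```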
I will put up the test case for it tomorrow.
Meanwhile, I really liked the definition of optimizer in the above paper. It says -
"An optimizing compiler makes programs run more quickly. Optimizers employ three
basic techniques:
1. Replacement of a slow operation with a faster one.
2. Elimination of an unnecessary operation. This might be regarded as a special case of
the first technique where the faster operation does nothing at all!
3. Movement of an operation from a place where it will be executed many times to a
place where it will be executed less often."
It seems Charles Wetherell
was working on this compiler re-design way back in 1999 ;)
BTW, can anyone lend me the "
Etudes for Programmers" book by him? Now I think he's just too smart a guy to leave his book out!
Monday, April 04, 2005
A great paper on new PL/SQL compiler on 10g
I remember the upgrade from 8i to 9i we did in the summer of 2002. That - for our application - was a straightforward 30% to 35% performance improvement over the baseline PL/SQL. Quite amazing.
Oracle 9i to 10g offers PL/SQL performance improvements on two counts.
(a) General PL/SQL compiler enhancements - I (on my test app) found this to be around 12% to 15% faster on most program units compared to PL/SQL 9i.
(b) Using the "NATIVE" compiler for PL/SQL. This gives about a 200% to 300% performance improvement for pure PL/SQL. NATIVE PL/SQL is compiled (via C) down to the hardware's instruction set and executed directly; conventional PL/SQL, like many other languages, is interpreted by a piece of software called a "virtual machine". In other words, NATIVE means faster :)
Basically, PL/SQL program units will be stored in C-compiled form in a directory you specify on the Oracle database server. Two initialization parameters are to be changed.
-- Change PLSQL_NATIVE_LIBRARY_DIR to the directory where the PL/SQL program units will live in C-compiled form. Remember, since it's stored externally, your backup procedures should take this directory into account. I just created a sub-directory "plsql" in my data file folder and used that as the library directory. That whole directory is backed up anyway, so I did not have to change anything.
-- PLSQL_CODE_TYPE should be changed to NATIVE.
That's it! Now your PL/SQL programs are ready to glide. SQL embedded within PL/SQL won't be impacted much, but pure PL/SQL - LOOP iterations, variable assignments, multiset or collection operations, etc. - is much faster (Oracle claims 300% or more, and my tests show they are not exaggerating at all!).
Remember, after you turn "NATIVE" you have to recompile your PL/SQL program units; otherwise they will still live in "INTERPRETED" form in the database.
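A sketch of the recompilation step - my_proc is a placeholder name, and utlrp.sql recompiles invalid objects:

```sql
-- per session: new compilations go native
alter session set plsql_code_type = 'NATIVE';

-- recompile a single unit...
alter procedure my_proc compile;

-- ...or, as SYSDBA, recompile en masse:
-- SQL> @?/rdbms/admin/utlrp.sql
```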
There's an
excellent white paper from Oracle talking about these changes. This paper was first presented at Oracle OpenWorld 2003 (it may be noted that the production version of 10g shipped much *after* OOW 2003). I really liked this paper. Especially -
"When PL/SQL is a carrier for SQL statements, the performance of the PL/SQL portion of the combination is generally not very important; execution of even the simplest interesting SQL operation takes much more time than does the typical PL/SQL statement. When PL/SQL is used as a conventional programming language, then its performance is important, just as is that of any other programming language."
"The PL/SQL compiler uses Diana (for Descriptive Intermediate Attributed Notation for Ada) for its internal representation, reflecting PL/SQL's historical debt to the Ada language." [So far, I'd always wondered what DIANA really was!]
--
"Obviously the performance characteristics of a PL/SQL unit are influenced by how the author chooses and expresses the algorithm in PL/SQL. Without a psychic subsystem in the front-end, the compiler can do nothing to determine the programmer's ultimate high-level intent! ... the best way to improve PL/SQL program performance is to choose the right algorithms and data structures in the first place!" [I just cannot say how much I *LOVED* this sentence. This, in essence, describes the whole performance improvement process.]
"PL/SQL programmers who use such new features appropriately will be rewarded with faster programs." [Cannot agree more. Things like bulk binding, BULK COLLECT, FORALL, associative arrays indexed by varchar2, etc. can literally have a dramatic effect on performance.]
"The relative potential for improvement in program performance due to changes in the code generator or in the PVM can be understood by analogy with what determines the speed of a car journey. The choice of route (as the crow flies or hugely circuitous) is analogous to the code generation phase. And the speed at which you drive is analogous to the expressiveness and the efficiency of the PVM. Famously, choosing the optimal route, even when you drive conservatively in a modest car, gets you there much quicker than choosing a poor route and driving flat out in a sports car. Of course, driving a sports car flat out on the optimal route is quickest of all." [This paragraph should be read and re-read and re-read, and then remembered whenever you're coding! The "expert" - the PL/SQL performance hero - has severe limitations when he/she comes in to tune. You as the programmer have no such limitation.]
"Oracle10g removes this confusion by redefining binary_integer to be identical to pls_integer. This has both a usability benefit and an immediate performance benefit for extant programs that accidentally or unavoidably use binary_integer. The effect is to speed up integer arithmetic in PL/SQL by a factor of 2.5 to 3." [In my 'best practice' lecture, using PLS_INTEGER in place of the NUMBER datatype has always ranked very high in importance. With Oracle's commitment to maintaining PLS_INTEGER, I feel even safer using it just about everywhere!]
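A tiny sketch of the PLS_INTEGER swap - hardware integer arithmetic instead of NUMBER's software arithmetic:

```sql
declare
  i pls_integer := 0;   -- instead of: i number := 0;
begin
  for j in 1 .. 1000000 loop
    i := i + 1;         -- integer arithmetic, 2.5x-3x faster per the paper
  end loop;
end;
/
```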
Trying the given FIB_N function to compute Fibonacci numbers (the code there is wrong - ELSEIF does not exist in PL/SQL; the keyword is ELSIF) -
create or replace function fib_n(n in pls_integer) return pls_integer
is
begin
  if n = 1 then return 1;
  elsif n = 2 then return 1;
  else return fib_n(n-1) + fib_n(n-2);
  end if;
end;
/
fib_n(50) took -
- on a 10g database with native compilation: 0.05 sec
- on a 9i database with interpreted compilation: 0.35 sec
Both ran on the same physical server with similar configurations. Oracle 10g is about 7x faster in this case!
10g has another parameter to "pre-tune" PL/SQL, PLSQL_OPTIMIZE_LEVEL, which is set to 2 by default (the parameter is new in 10g).
Next to review is an OTN paper (a companion to this one) called "Freedom, Order, and PL/SQL Optimization".
Friday, April 01, 2005
"My Korean Application Doesn't work even though DB characterset is UTF-8"
Internationalization is a big thing going on these days.
I received such a support call from Korea yesterday. Apparently, the client believed that setting UTF-8 as the database character set would fix everything once and for all. The following was my response. Last I heard, the issue was fixed.
"However, there are two sides of the character set as far as a database application is concerned.
A) Database side character set: UTF-8 supports Unicode 3.0 standard and should be able to hold most of the character sets.
(Oracle's newer "AL32UTF8" character set is a superset of UTF-8 and follows newer Unicode 3.1 standard. Though Oracle recommends this to be set from Oracle 9i onwards, we have not done enough testing to officially support it with ITG yet. )
B) Client side character set: A client-side parameter called NLS_LANG is used to perform character set conversion between server (database) and client.
NLS_LANG should be explicitly set in the following format if you're using UTF-8 as the database side character set -
NLS_LANG=&lt;language&gt;_&lt;territory&gt;.&lt;client character set&gt;
e.g., NLS_LANG could look like "AMERICAN_AMERICA.UTF8"
* NLS_LANG is set in the environment for linux/unix and in the registry for windows.
* NLS_LANG does NOT change the client machine's default character set.
It lets Oracle know what character set to expect from client, so that it can do necessary conversion.
So if NLS_LANG is AMERICAN_AMERICA.UTF8 and the database character set is also UTF-8 -- Oracle does not do *any* conversion. It then just expects UTF8 characters from client.
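To check what the database side actually is, a sketch against the data dictionary:

```sql
select parameter, value
from   nls_database_parameters
where  parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```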
The "Language", "Territory" and "character set" supported by Oracle on Client side, to set up proper NLS_LANG, can be obtained from "Appendix A" of "Globalization Support Guide" available at -
http://otn.oracle.com/pls/db92/db92.docindex?remark=homepage (for 9i -- needs free login)
http://otn.oracle.com/pls/db10g/portal.portal_demo3?selected=3 (for 10g -- needs free login)
I see Korean (Language), Korea (Territory) and different Korean Character Sets are supported by Oracle.
NLS_LANG should be set to the character set the client is actually using, so Oracle can do proper conversion of that data into UTF-8.
I think it will then solve the "broken" data issue.
There are some docs at metalink.oracle.com (needs support login) on determining the actual client character set. If you do not have the login, let me know - I will send the documents as attachments."
