Oracle Performance and Engineering
Random notes on database features, code, practice, optimization and techniques.
Wednesday, March 30, 2005
Storage of (fixed or variable) Multi-byte data in varchar2
Took the 10g upgrade exam this Sunday. It was *long*. It took me about 2 hours 15 mins to crack all 161 questions. Was a good jump-start on 10g.
One of the applications I advise had client issues yesterday. Apparently, they are "UTF8 enabled", but when customers tried to store 250 multi-byte characters in a VARCHAR2(250) it failed with "ORA-12899: value too large for column".
I was asked if there's something to be done on the Oracle side. Of course there is. In fact, this is an issue that's supposed to be handled entirely on the Oracle side.
Here's what I wrote in different emails back and forth on the issue -
"Yes, there's an Oracle cure :)
Oracle lets you declare the data type according to "characters" rather than "bytes" (the default). i.e., you can declare a field "name" to be a varchar2 of 40 bytes or of 40 characters.
In a single-byte character set the name will hold 40 characters (== 40 bytes). In a double-byte character set the name will still hold 40 characters (80 bytes). Oracle auto-adjusts the byte allocation when you declare the length semantics as 'characters'.
The syntax looks like -
varchar2(20 byte) -- BYTE is the default if not specified
varchar2(20 char)
This is doable from 9i onwards.
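A minimal sketch of the two declarations side by side (table and column names are made up):

```sql
-- name_b can overflow with multi-byte data; name_c cannot (up to 40 characters)
CREATE TABLE customers (
  name_b  VARCHAR2(40 BYTE),  -- 40 bytes: may hold fewer than 40 multi-byte characters
  name_c  VARCHAR2(40 CHAR)   -- 40 characters, whatever their byte length
);
```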
We can change the "default way" Oracle stores strings by adjusting NLS_LENGTH_SEMANTICS (i.e., whether Oracle should follow "byte" or "character" semantics). The default length semantics, naturally, is byte. However, this parameter should not be changed back and forth on a database, since doing so gives way to a mixed-semantics database.
Any table created (or altered) *after* the NLS_LENGTH_SEMANTICS change will inherit the "char" semantics. However, old tables will continue to be on the "byte" semantics.
I was trying with different use cases against a 9i database. Here're the notes on findings -
(1) On changing "byte" semantic to "char" -
If we change NLS_LENGTH_SEMANTICS to CHAR -- all the old tables still are maintained in "Byte Semantic" and new tables are maintained in "Char Semantic".
(2) Effect on "char" semantic on named PL/SQL -
If we change the NLS_LENGTH_SEMANTICS to CHAR - named PL/SQL units pick up character semantics when they are compiled. i.e., a VARCHAR2(20) inside a *named* PL/SQL unit (as opposed to an anonymous PL/SQL block) becomes VARCHAR2(20 CHAR).
Existing named units, however, have to be recompiled to pick up the new semantics; anonymous blocks are compiled at execution time and use the session's current setting.
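A sketch of the session-level switch, and how to verify which semantics a column ended up with (CHAR_USED shows 'B' for byte, 'C' for char):

```sql
ALTER SESSION SET nls_length_semantics = CHAR;

CREATE TABLE t (name VARCHAR2(40));  -- stored as VARCHAR2(40 CHAR)

SELECT column_name, char_used
FROM   user_tab_columns
WHERE  table_name = 'T';
```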
(3) If VARCHAR2 can store more than 4000 bytes with "char" semantic -
If we change NLS_LENGTH_SEMANTICS we are still bound by the maximum bytes (4000) of VARCHAR2.
Let's do some math here - the maximum number of bytes a character can take is 3 for UTF8 (Oracle 8i); 4 bytes for AL32UTF8 (Oracle 9i - we're going to use this for the database, i.e., for VARCHAR2s); and 2 bytes for AL16UTF16 (for the NVARCHAR2 data type).
Thus the maximum number of characters a VARCHAR2(4000) is guaranteed to hold is 1333 in UTF8 (8i) and 1000 in AL32UTF8 (9i).
Here's a nice table I found on Oracle metalink -
            UTF8 (1-3 bytes)    AL32UTF8 (1-4 bytes)   AL16UTF16 (2 bytes)
            MAX      MIN        MAX      MIN           MAX      MIN
CHAR        2000     666        2000     500           N/A      N/A
VARCHAR2    4000     1333       4000     1000          N/A      N/A
NCHAR       2000     666        N/A      N/A           1000     1000
NVARCHAR2   4000     1333       N/A      N/A           2000     2000
(MAX = characters when every character is single-byte; MIN = guaranteed characters in the worst case. N/A means not possible!)
However, if the declared width of the data type is well below the 4000-byte max, the full number of characters will be stored.
e.g., a VARCHAR2(100 char) will store 100 characters, using at most 400 bytes with AL32UTF8 (Unicode 3.0 encoding).
Note - PL/SQL code can handle up to 32K bytes in a VARCHAR2 variable!
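LENGTH counts characters while LENGTHB counts bytes, so comparing the two on sample data shows how much headroom a column really has (table/column names are hypothetical):

```sql
SELECT name_c,
       LENGTH(name_c)  AS characters,
       LENGTHB(name_c) AS bytes
FROM   customers;
```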
(4) What if we need to store more than 4000 bytes in a varchar2 field for multi-byte characters without changing the length semantics -
We need to resort to using NVARCHAR2.
NVARCHAR2 falls under "national character set" and *ALWAYS* stores data in "character" semantic, even if the NLS_LENGTH_SEMANTICS is set to be "byte".
(5) What's the easiest way to solve the problem "varchar2(250) field cannot take 250 characters because it was declared in 'byte' semantics" -
Case 1 -- Easiest for an already up and running database --
ALTER TABLE <table> MODIFY (problem_column VARCHAR2(250 CHAR));
OR
ALTER TABLE <table> MODIFY (problem_column NVARCHAR2(250));
Change the database nls_length_semantics to CHAR to let PL/SQL handle these extra bytes for multi-byte characters.
Case 2 -- Easiest for a "fresh" Install --
ALTER SYSTEM SET nls_length_semantics = CHAR;
-- create all the tables or Import them fresh.
If the problem happens only with a few tables - I would say case 1 is a better way to go. For ideal cases or instance-wide multi-byte data, "Case 2" is the one.
(6) What's the performance implication of "char" semantic?
Well, when Oracle stores string as "characters" Oracle
(a) cannot do the cheaper "byte" arithmetic for string computations. It then has to count characters, not bytes. So string-intensive operations - particularly SUBSTR(:string,1,40) types - are slower. I have to benchmark to find out just how much slower. I would guess the "LIKE" operator would also be slower.
But then I received an interesting feedback from a co-worker -
"In regard to performance implications resulting from not being able to do byte-level arithmetic for "short-cutting" string manipulation and comparison calculations, the detrimental effect of this is probably marginal.
For example, all string manipulation done internally by win32 (MS Windows NT and later) is multi-byte. This doesn't cause a major slow-down (not even in heavily multi-byte versions, such as Chinese, Japanese, Korean). Unless Oracle does something radically different, and wildly inefficient, this is probably not a concern for us."
(b) (obviously), increases the space requirements.
Saturday, March 26, 2005
More 10g New Feature Notes (Cont).
11) Oracle Database 10g: Maximize Availability - Backup and Recovery Enhancements
A) Simplified Recovery -
No backup needed after RESETLOGS operation. Old backups still work. Earlier, old backup files could not be used again in the new incarnation of the database after resetting logs.
Earlier, following a resetlogs, archived log files could be overwritten - not anymore. LOG_ARCHIVE_FORMAT now accommodates the resetlogs operation to avoid overwrites: the new %r element makes the format look like %t_%s_%r.arc.
V$LOG_HISTORY and V$OFFLINE_RANGE are not cleared after a resetlogs.
V$DATABASE_INCARNATION
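The new incarnation view can be queried directly; a quick sketch:

```sql
SELECT incarnation#, resetlogs_change#, resetlogs_time, status
FROM   v$database_incarnation
ORDER  BY incarnation#;
```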
B) Full Database BEGIN BACKUP -
ALTER DATABASE (BEGIN / END) BACKUP; -- earlier was done tablespace wise.
A very nice operational change.
Is not needed for RMAN backups, since RMAN backups are self-consistent (no need to put files in backup 'mode' with RMAN).
C) FLASH BACKUP -
Oracle is banking on disk being widely accepted as the default backup medium because of (a) the ease of disk backup and, even more, (b) the decreasing price of disk. Thus the 'flash recovery area' has been designated as the default backup area for control files, archived logs, data files and everything else - particularly when using RMAN.
DB_RECOVERY_FILE_DEST = '' disables the flash recovery area.
Fast recovery - SWITCH DATABASE TO COPY;
No more copying from backup area - but then database runs against backup area.
New DROP DATABASE command - how useful!
Incremental Backup -
Fast Incremental Backup -
If 100M of a 10G file has changed - Oracle will read 100M and copy 100M only. Oracle keeps track of all data block changes in a new 'change tracking file' using the new background process CTWR (aptly called, Change Tracking Writer).
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '...';
V$BLOCK_CHANGE_TRACKING
V$BACKUP_DATAFILE shows the effectiveness of block change tracking (% of blocks read / changed).
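A sketch of reading that effectiveness out of V$BACKUP_DATAFILE (my column choice, not an official script):

```sql
SELECT file#,
       blocks_read,
       datafile_blocks,
       ROUND(blocks_read / datafile_blocks * 100, 1) AS pct_of_file_read
FROM   v$backup_datafile
WHERE  used_change_tracking = 'YES'
AND    incremental_level > 0;
```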
21) Oracle Database 10g: Maximize Availability - General Enhancements
Tables with columns LONG and LONG RAW can be redefined online.
CREATE OR REPLACE SYNONYM doesn't invalidate PL/SQL program units if they were already valid.
10g New Feature Notes (Cont).
Contd. from the earlier post -
10) Oracle Database 10g: Maximize Availability - Human Error Correction
Flashback Database - Flashback logs == before images of data blocks. Created in the flash recovery area (DB_RECOVERY_FILE_DEST) by the (new) background process RVWR (Recovery Writer).
V$FLASHBACK_DATABASE_LOG
V$FLASHBACK_DATABASE_STAT
Following a FLASHBACK DATABASE (TO TIMESTAMP | TO SCN), db must be opened in RESETLOGS mode.
To exclude a tablespace - ALTER TABLESPACE .. FLASHBACK OFF;
FLASHBACK TABLE <table> TO BEFORE DROP;
DROP TABLE <table> PURGE; -- cannot flash back; not stored in the recycle bin
DBA_RECYCLEBIN
PURGE RECYCLEBIN;
Flashback Versions -
VERSIONS BETWEEN (SCN | TIMESTAMP)
SCN_TO_TIMESTAMP
TIMESTAMP_TO_SCN
RETENTION GUARANTEE for UNDO tablespace.
New pseudo-columns - VERSIONS_STARTTIME, VERSIONS_ENDTIME
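Putting the versions clause and the new pseudo-columns together - a sketch against a hypothetical emp table:

```sql
SELECT versions_starttime,
       versions_endtime,
       versions_operation,
       versions_xid,
       sal
FROM   emp
       VERSIONS BETWEEN TIMESTAMP
         (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
WHERE  empno = 7788;
```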
Flashback Transaction Query -
Must enable "Supplemental Logging" to use this feature. Additional columns are logged into undo segments and redo log files when this is enabled. SELECT ANY TRANSACTION privilege must be granted as well.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
V$DATABASE - shows if this is on.
Query FLASHBACK_TRANSACTION_QUERY (operation, undo_sql, table_name). Find the XID of the transaction using Flashback Versions query before.
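A sketch of that lookup, with a made-up XID (in practice, take VERSIONS_XID from a versions query first):

```sql
SELECT operation, table_name, undo_sql
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('0200030041000000');
```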
Flashback Table (TO SCN | TO TIMESTAMP) - doesn't work for AQ, data dictionary tables. Much faster than traditional media recovery.
Must 'ENABLE ROW MOVEMENT' before using it.
Triggers are by default disabled before flashback table operation.
Flashing back before a DDL operation is not possible.
Flashing back during a transaction is not allowed.
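The whole flashback-table sequence, sketched against a hypothetical emp table:

```sql
ALTER TABLE emp ENABLE ROW MOVEMENT;

FLASHBACK TABLE emp TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;

-- or, after an accidental drop (table still in the recycle bin):
FLASHBACK TABLE emp TO BEFORE DROP;
```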
Friday, March 25, 2005
My Learning Path for Oracle 10g - Notes on New Features
As said before, I decided to take the (beta) exam "Oracle 10g upgrade for OCP 8i candidates". Though I am both 8i and 9i certified, I chose to upgrade from 8i because -
(a) the exam was cheaper ($50 - compared to a $125 for upgrade from 9i)
(b) it's a beta exam (still!) and therefore many more questions are expected. Since the "cram guides" are not out on the market - it will be a bit of a challenge.
This is a nice, formal way of learning. I am in a regular job - so could not really assign more than 3-4 days to learn the topics I am expected to cover.
The exam is on Sunday and on this nice, sunny Friday afternoon I feel well-prepared. My preparation strategy was based on the fact that I actually wanted to learn the new product rather than just 'mug' some cramsheets and get a paper certificate.
So, I chose to follow the *excellent* hands-on courses by Oracle University. My present employer is an Oracle partner - so all these courses were free to me. They not only cover the real meat, but I could see some excellent 'demo' of the new features. I don't know what software they were using to create those excellent self-contained demos - but it rocked.
The courses I covered were - you can also search for them on http://ilearning.oracle.com -
(I've added my study notes along with) ..
1) Oracle 10g - The Self-managing Database
A great introduction to Oracle 10g by Anuj Goel. Talks about the design decisions behind 10g (minimize the maintenance, administration, installation and troubleshooting time) and how they have actually been tackled.
Without going through the live presentation, you can also download the powerpoint presentation and go through it.
2) Oracle Database 10g Administration in 2 Days
Very basic material. Nothing really 'conceptual', but on 'how to do it using different Oracle-given tools'. Don't spend more than 15 mins here.
This is a totally SQL-free zone, as all administration and management tasks have been illustrated with Enterprise Manager.
The 'Backup and Recovery' section talks about 'Oracle suggested Backup Strategy' - which is
"The Oracle-suggested backup strategy is based on creating an image copy of your database which is rolled forward using incrementally updated backups. Oracle Enterprise Manager schedules RMAN backups jobs for you to run during the overnight hours.
For each datafile, the strategy calls for backups to be made as follows:
* At the beginning of day 1 of the strategy (the time the first scheduled job actually runs), an incremental level 0 datafile copy backup. It contains the datafile contents at the beginning of day 1. In a restore-and-recovery scenario, the redo logs from day 1 can be used to recover to any point during day 1.
* At the beginning of day 2, an incremental level 1 backup is created, containing the blocks changed during day 1. In a restore-and-recovery scenario, this incremental level 1 can be applied to quickly recover the rolled-forward level 0 backup to the beginning of day 2, and redo logs can be used to recover to any point during day 2.
* At the beginning of each day n for days 3 and later, the level 1 backup from the beginning of day n-1 is applied to the level 0 backup. This brings the datafile copy to its state at the beginning of day n-1. Then, a new level 1 is created, containing the blocks changed during day n-1. In a restore-and-recovery scenario, this incremental level 1 can be applied to quickly recover a restored backup to the beginning of day n, and redo logs can be used to recover the database to any point during day n.
The datafile copies used in the Oracle-suggested backup strategy are tagged with the tag ORA$OEM_LEVEL_0. The level 1 incremental backups for use in this strategy are created for use with datafile copies that are so labeled. You can safely implement other backup strategies without concern for interference from the backups for the Oracle suggested strategy."
3) Oracle Database 10g: Manageability Overview
Mostly the framework for this 'self-managing' database. Same presentation as #1 - but much more detailed and has an informative question-answer session.
Download the presentation (click on "Reference") from link #1 above. This one doesn't work.
4) Oracle Database 10g: Self-Managing Database - Space Management
Deals with
(a) Proactive Tablespace Monitoring: every 10 mins, done by the new background process MMON - supposed to be an in-memory check and therefore fast (Critical Alert: 97%, Warning Alert: 85%). Threshold alerts can be emailed or paged. Works only on locally managed tablespaces though. DBMS_SERVER_ALERT is the PL/SQL API.
(b) Segment Management : Has a very cool demo of new 'online Shrink' operation. Remember,
even though it's 'online' - there're both shared (when compacting unused free space) as well as exclusive locks (when reclaiming the freed space) obtained on the segment. It will block a DML on the segment that's going on in parallel while shrinking is on.
Takes place through a delete-insert mechanism by Oracle; therefore chained rows may or may not be repaired. Though it's a DML operation, triggers are NOT fired by Oracle.
Earlier, the high water mark could only be brought down by rebuilding (or moving) the table. The tables need to have 'enable row movement' on. Indexes remain usable after a shrink (unlike rebuild / move).
Very useful feature if you remember 'online' doesn't mean 'can do anytime'.
However, the following structures cannot be shrunk -
clustered tables, tables with LONG or LOB columns, tables with function-based indexes.
Temporary and UNDO segments, being handled a different way, cannot of course be shrunk.
Syntax -
ALTER TABLE <table> SHRINK SPACE [COMPACT] [CASCADE];
If the COMPACT clause is not specified, Oracle resets the HWM after reclaiming the freed-up space. If COMPACT is specified, the segment is only compacted; the HWM is not set back (and thus no exclusive lock on the segment is taken).
EM Advisor Central -> Segment Advisor can give an idea of which segments are ripe for SHRINK operation.
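A sketch of the two-phase approach to keep the exclusive-lock window short (table name hypothetical):

```sql
ALTER TABLE emp ENABLE ROW MOVEMENT;

ALTER TABLE emp SHRINK SPACE COMPACT;  -- compacts rows; no HWM reset, no exclusive lock
ALTER TABLE emp SHRINK SPACE;          -- resets the HWM; brief exclusive lock
```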
(c) Undo Management : now even undo_retention is automatic and the default - provided you set statistics_level to TYPICAL (collects ~30 statistics on a frequent basis and dumps them into the AWR).
AUM = Automatic Undo Management.
Init params like MAX_ROLLBACK_SEGMENTS, UNDO_SUPPRESS_ERRORS (always TRUE), ROW_LOCKING, SERIALIZABLE, and TRANSACTION_AUDITING (always TRUE) no longer work with 10g.
Resumable Space Allocation can be altered in session and system level.
(d) Redo Logfile Advisor : another advisor in the 10g advisor network. Mildly useful, but not a killer feature.
If FAST_START_MTTR_TARGET is set the smallest optimal size of REDO file is calculated.
OPTIMAL_LOGFILE_SIZE (in MB) is a new column in V$INSTANCE_RECOVERY.
5) Oracle Database 10g: Self-Managing Database - Integrated Storage Services
Basically 'Space Management - 2'. Talks about little but important things - like the obsolescence of the MAXTRANS parameter, and BIGFILE tablespaces. I liked the easy description of big-endian and little-endian architectures.
SYSAUX - Now the SYSAUX tablespace stores everything that should be installed with the database ('auxiliary' database metadata) but does not belong to the CORE kernel (i.e., SYS objects). It has to be locally managed with ASSM (Extent Management Local). The data it contains - Text, UltraSearch, interMedia, OLAP, Scheduler, AWR, XML DB, Workspace Manager, LogMiner, Statspack etc. Query V$SYSAUX_OCCUPANTS to know about it.
Just like SYSTEM tablespace, SYSAUX tablespace cannot be dropped or renamed.
TABLESPACE -
(a) with 10g, a 'temporary tablespace group' - basically a bunch of TEMP tablespaces - can be configured and be defaulted to a user's temporary tablespace. The benefit - user can use different temp tablespaces in different sessions or, a single SQL can use multiple temp tablespaces for sorting / hashing.
A temporary tablespace group - just like a single temporary tablespace - can be made the 'default temporary tablespace' of the database. Then none of the tablespaces in that temp. tablespace group can be dropped.
New DD view - DBA_TABLESPACE_GROUPS.
(b) Now tablespaces can be renamed. Could be useful in transporting tablespaces between two databases. Remember, RENAME does not rename the data files.
(c) A simple but very useful feature - Like 'default temp tablespace' -a 'default permanent tablespace' can be made in database level. Thus, any unqualified object does not end up being in SYSTEM anymore. Can be suggested during 'create database' or later.
BIGFILE Tablespace -
A BFT has one really big datafile. For all practical purposes, the concept of a datafile can now be forgotten and the whole architecture can be looked at on a logical level (i.e., the tablespace stores the table). Only bitmaps can hold the huge extent information for a BFT - thus BFT is possible only with locally managed tablespaces. Oracle can now store up to 8 exabytes (8 million terabytes) of data thanks to BFT.
Max data in database = D * F * B (D -> max data files, F -> max blocks per file, B -> max block size).
1 PB (petabyte) = 1024 TB (terabyte); 1EB (exabyte) = 1024 PB = 1,048,576 TB = 2**60 bytes!
"You should avoid creating a BFT on a system that does not support striping, because of negative implications for parallel query execution and RMAN backup parallelization."
There's something interesting on the 'it's useless to worry about how many extents a segment has' theme -
"You should try to avoid having more than 100,000 extents (based on a 2 K block size). When more extents are created, there may be a performance penalty under concurrency and during DDL operations that involve high space management activity. However, the above number of extents is not an absolute limit and this number varies proportionally higher to the block size used. For example, with an 8 K block size, you can have 400,000 extents; with a 16 K block size, you can have 800,000 extents. This calculation is based on the number of bitmap blocks (and their size) that will be required to represent the file."
Exotic Info - ROWID for normal (small file) tablespaces is of following format -
OOOOOOFFFBBBBBBRRR
- where OOOOOO -> data object number
FFF -> relative file number
BBBBBB-> data block address (relative to the tablespace, not datafile. Thus two rows of the same table can have same data block address, and reside on two separate data files).
RRR -> slot # inside a block
Since FFF becomes irrelevant with a BIGFILE tablespace (the relative file number is always 1024!), the FFF part of the rowid is used for the data block address as well.
Thus, DBMS_ROWID now takes a new input parameter called TS_TYPE_IN (BIGFILE or SMALLFILE).
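e.g., a sketch of extracting the block number from a row in a bigfile tablespace (table name hypothetical):

```sql
SELECT dbms_rowid.rowid_block_number(rowid, 'BIGFILE') AS block_no
FROM   emp
WHERE  ROWNUM = 1;
```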
Transportable Tablespace - 10g allows tablespaces to be transported across different platforms (not doable in 9i).
Steps ->
1. Make tablespace READ ONLY
2. Extract METADATA using data pump (expdp)
3. If the target has different 'endianness' - convert byte ordering in files using RMAN
4. Copy the datafiles in target, Load metadata using data pump (impdp)
5. Make tablespace READ WRITE
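The steps above can be sketched as follows (directory, tablespace, file and platform names are all made up):

```sql
-- 1. source: make the tablespace read only
ALTER TABLESPACE users_ts READ ONLY;

-- 2. source (OS shell): extract the metadata with data pump
--    expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users_ts

-- 3. source (RMAN), only if endianness differs:
--    CONVERT TABLESPACE users_ts TO PLATFORM 'Linux IA (32-bit)' FORMAT '/stage/%U';

-- 4. target (OS shell): copy the files over, then plug in the metadata
--    impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/stage/users01.dbf'

-- 5. target: open it up again
ALTER TABLESPACE users_ts READ WRITE;
```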
To determine endianness -
select a.endian_format
from v$transportable_platform a, v$database b
where a.platform_name = b.platform_name;
ASM -
Where Oracle treats the entire disk subsystem as a single RAW device. No more partitioning, I/O tuning, or data movement - Oracle does it for you. An extension of the earlier-coined SAME (Stripe And Mirror Everything) concept. No more need to buy "volume manager" or "cluster manager" software.
A separate Oracle ASM instance (database) is started, when used. Both this and the working database have access to a common set of disks (DISK GROUPS). ASM instance has two new background processes - (a) RBAL (rebalancing DISK GROUPS) and (b) ARBn (does data extent movements).
Databases using ASM also have two new background processes, namely (a) RBAL (global opens of the disks in DISK GROUPS) and (b) ASMB (talks to the ASM instance). ASM instances can be clustered as well.
6) Oracle Database 10g: Self-Managing Database - General Enhancements
(a) SYS statistics should be collected (Oracle recommends so). GATHER_DATABASE_STATS can collect stats on dictionary tables, as can the new GATHER_DICTIONARY_STATS.
(b) FIXED tables should also be analyzed once in the instance lifetime, using GATHER_FIXED_OBJECTS_STATS. However, fixed tables have no I/O cost, since they reside in memory; thus only CPU costing is used, if at all, by the CBO.
(c) Table MONITORING no longer relevant since the monitoring is done by STATISTICS_LEVEL anyways.
(d) V$FAST_START_TRANSACTIONS keep record of recovering and recovered transactions.
(e) MAXTRANS is gone. No longer works now. Oracle does MAXTRANS management itself.
(f) MTS_ parameters are now SHARED_ parameters. Just setting SHARED_SERVERS parameter is sufficient for multi-threading.
(g) TRCSESS command line tool collates all trace files from a session. You still have to perform TKPROF on it though.
(h) ALTER SYSTEM FLUSH BUFFER_CACHE;
7) Oracle Database 10g: Self-Managing Database - Common Infrastructure
Talks about the whole ADDM-AWR-Server Alert paradigm. Good to know the framework of the new features. I am not yet done testing the claims.
AWR -
Manageability infrastructure -
- Application and SQL
- Storage
- Space
- System Resource
- Backup & Recovery
These tasks have been automated with new 10g infrastructure where automated tasks, server alerts and advisory framework work upon AWR (Automatic Workload Repository).
Not all 'base statistics' (those in memory) are persisted into the AWR (on disk). The ones that go to the AWR are -
- Access and usage stats for objects
- time model stats (V$SYS_TIME_MODEL)
- V$SESSTAT and V$SYSSTAT
- Optimizer Stats (V$SQL)
- ASH (active session history)
ASH - per-second data sampled from v$session. MMON flushes this data to the AWR every hour, and MMNL whenever the buffer is full. The ASH buffer == 2MB per CPU, up to a max of 5% of shared_pool_size.
Base Statistics = Raw Data Collected (total physical reads since startup)
Metrics = secondary stats derived from above (avg. physical reads in an hour) and thus, used by management framework.
10g supports metrics for system, session, files and wait-events stats.
AWR == Base Stats, SQL Stats, Metrics, ASH, Advisor Results in DBA_HIST_* tables. DBMS_WORKLOAD_REPOSITORY is the PL/SQL API to control AWR data and baseline.
$ORACLE_HOME/rdbms/admin/awrrpt.sql creates a Statspack-type (much more detailed) report.
Database Feature Usage Stats - DBA_FEATURE_USAGE_STATISTICS & DBA_HIGH_WATER_MARK_STATISTICS
Server Generated Alerts --
V$ALERT_TYPES - DBA_THRESHOLDS
Threshold Alerts = Stateful Alerts (e.g., tablespace 97% full. Appear in DBA_OUTSTANDING_ALERTS, when cleared DBA_ALERT_HISTORY)
Stateless (Binary. Like, snapshot_too_old - goes directly to the history table)
V$SYSMETRIC & V$SYSMETRIC_HISTORY keep metrics for 1 hr.
PL/SQL API - DBMS_SERVER_ALERT
Automated Jobs -
By default, GATHER_STATS_JOB.
DBMS_STATS.LOCK_TABLE_STATS / LOCK_SCHEMA_STATS locks a table/schema against getting analyzed.
DBA_OPTSTAT_OPERATIONS - history of DBMS_STATS operations.
A very, very useful feature - when an index is created, its stats are auto-generated.
Advisory Framework -
Advisors -
- ADDM
- SQL tuning
- SQL access
- Memory (PGA, SGA, Buffer Cache, Library Cache)
- Segment
- Undo
Advisor attributes can be LIMITED or COMPREHENSIVE. Can be specified a max. TIME LIMIT. Can be interruptible. Can take user directives.
PL/SQL api - DBMS_ADVISOR.
DBA_ADVISOR_ tables.
8) Oracle Database 10g: Self-Managing Database - Performance Management
The new SGA advisor, Automated SGA, Redo advisor.
Optimizer can now analyze a statement for, say 30 mins and come out with the best possible plan and store it internally for all future executions (only if you want it!). Thus tuning SQL in packaged application becomes easier (no need to touch code anymore).
Some issues not captured by Statspack, but by ADDM -
- excessive logon / logoff
- memory undersizing
- hot blocks and objects
- RAC service issues
- Locks and ITL contention
- Checkpointing issues
- PL/SQL, Java time
Has an *excellent* demo viewlet on Proactive Monitoring -> ADDM recommendations. Really helpful to understand the new philosophy of holistic performance management.
For Reactive Tuning, EM provides
- CPU & Wait classes (reports holistic CPU usage including that by extra-Oracle stuff)
- Top SQL Statements
- Top sessions hurting
Just set SGA_TARGET for automated SGA sizing. Setting it to 0 disables automatic SGA sizing.
It can be increased up to SGA_MAX_SIZE. New background process MMAN serves as SGA memory broker.
SQL Tuning Advisor -
Calls the CBO in a 'special internal mode' to make it do four analyses -
- stale / missing stats analysis
- Plan Tuning Analysis (SQL Profile) - stored in data dictionary and thus does not need a hard code change in packaged apps
- Access Path Analysis
- Restructure SQL
Things done in SQL Profile which is not doable when CBO 'normally' analyzes a SQL (usually under fraction of a second) to find the access plan -
- runs partial execution
- dynamic sampling of statistics
- advanced predicate selectivity - correlation between columns, join skews and complex predicates (like function)
PL/SQL API for SQL Advisor: DBMS_SQLTUNE
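A minimal end-to-end sketch of that API (task name and SQL are made up):

```sql
DECLARE
  l_task VARCHAR2(30);
BEGIN
  l_task := dbms_sqltune.create_tuning_task(
              sql_text   => 'SELECT * FROM emp WHERE empno = :b1',
              time_limit => 60,                 -- seconds the advisor may spend
              task_name  => 'demo_tuning_task');
  dbms_sqltune.execute_tuning_task(task_name => l_task);
END;
/
SELECT dbms_sqltune.report_tuning_task('demo_tuning_task') FROM dual;
```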
9) Oracle Database 10g: Maximize Performance - General Enhancements
Talks of the general performance enhancements (as opposed to the new performance management and infrastructure in the previous two lessons).
A) Partitioning Enhancements -
- List-partitioned table operations are permissible on list-partitioned IOTs
- Index status = USABLE throughout partition management (DROP, TRUNCATE..)
- Bitmap index on list-partitioned IOT
- LOB columns supported on partitioned IOT
- Partitioned tables can be DDLd with 'UPDATE INDEXES' so Oracle rebuilds all index segments
- SKIP_UNUSABLE_INDEXES (true) can be set at session or system level
- NEW hash-partitioned global indexes are introduced (hash function = f(partitioning key, no. of partitions))
- A hash-partitioned index saves you from right-hand contention (during inserts of incrementally sequenced columns). The contention is spread across different partitions instead of piling on the 'right hand' side (as happens in non-partitioned indexes)
- According to Oracle - for queries involving equality or in-list predicates, performance of hash-partitioned indexes and non-partitioned indexes is comparable
- ALTER INDEX ADD PARTITION..COALESCE PARTITION
- Hash-partitioned indexes must be built partition wise (RENAME, SPLIT, MERGE are unsupported)
- Hash-partitioned indexes work better with parallel queries