Recently I was communicating with a developer, who wrote:
"I wanted to the fetch a batch of identifiers from a sequence, but that sequence was defined with “increment by 1”. And I could not change that, since other stuff was using that sequence. So I wrote a function that hit the sequence multiple times and returned a comma-separated list of identifies. But that turned out to be somewhat slow…
I wish there was a Sequence.next_val() that takes a parameter indicating by how much to increment the sequence after getting the value. For example, Sequence.next_val(20) returns 300, but increments the counter to 320. So now you can use 300 through 319. That comes in handy when you import a 1000 line project and have to fetch 1000 new identifiers in advance. Changing the increment value of a seqence has other side effects, so that’s not valueable… unless I’m missing something. "
So my solution was plain and simple --
If you really need sequential, incremental, guaranteed-unique values from a sequence in ONE call, do this --
SQL> select tseq.nextval from request_tab where rownum < 101;
(say you want 100 numbers… use any big table that will always have more records than you will be fetching)
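A side note: if no suitably large table is handy, a row generator works just as well, and you can pull the whole batch into a PL/SQL collection in a single call. A minimal sketch, assuming the usual connect-by-level row-generation idiom and the same tseq sequence:
SQL> set serveroutput on
SQL> declare
       type t_id_list is table of number;
       l_ids t_id_list;
     begin
       -- one call, one round trip: 100 sequence values into a collection
       select tseq.nextval
       bulk collect into l_ids
       from (select level from dual connect by level <= 100);
       dbms_output.put_line('fetched ' || l_ids.count || ' ids: '
                            || l_ids(1) || ' .. ' || l_ids(l_ids.count));
     end;
     /
The inline view keeps nextval out of the CONNECT BY query itself, which some Oracle versions object to.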
He then wrote back, expressing his concerns about the performance --
"Yes, I know. I did something similar, … but every call to nextval is a transaction with a commit. So fetching 1000 ids from a sequence that way uses 1000 autonomous transactions, committing each… right? Then there’s the logic to generate a string in PL/SQL to store this sequence and pass it back… it would all be so much faster if nextval took a parameter containing the increment for that call."
In other words, he wanted to reinvent the wheel, trying to do something the database already does. He assumed that since "select seq.nextval from table" fetches 100 records, it must be making 100 separate calls -- which, being the procedural-language developer he is, he thought he could prevent by writing his own routine!
This is a major mistake people make all the time. Without understanding the database, they ignore something it is perfectly capable of doing and write their own code instead. That home-grown code is surely more susceptible to failure than the database, which is a proven solution anyway.
So I decided to prove that fetching 100 numbers from a sequence does NOT mean 100 "autonomous transactions" by Oracle. Oracle is smarter than that!
Here's a little test case proving that -
SQL> create table test (n1 number);
SQL> create index i1 on test(n1);
SQL> create sequence s1;
SQL> begin
       -- load 100,000 random values so the table is comfortably large
       for i in 1 .. 100000 loop
         insert into test values (dbms_random.random);
       end loop;
       commit;
     end;
     /
SQL> show arraysize
arraysize 15
SQL> analyze table test compute statistics for table for all indexes for all columns;
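By the way, analyze ... compute is the old-school way of gathering statistics; on recent releases dbms_stats is the documented replacement and covers the table, its indexes and columns in one call:
SQL> exec dbms_stats.gather_table_stats(user, 'TEST', cascade => true)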
SQL> select s1.nextval from test where rownum < 100;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=82)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=82)
Statistics
----------------------------------------------------------
         70  recursive calls
         15  db block gets
         21  consistent gets
          0  physical reads
       3304  redo size
The sequence was created with the default cache of 20. Repeated executions of the query above consistently showed 70 recursive calls -- the internal SQL Oracle runs on your behalf, here mostly the data dictionary updates that replenish the sequence cache.
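You can verify the cache setting from the data dictionary; USER_SEQUENCES shows it:
SQL> select sequence_name, cache_size, last_number
     from user_sequences
     where sequence_name = 'S1';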
Now, let's increase the cache to 100. Oracle will then hold the next 100 values of the sequence in memory. If the database is shut down at that point, any unused cached values are simply lost, and nextval resumes from the first number after the cached range.
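Incidentally, you do not even need a shutdown to watch cached values disappear: flushing the shared pool discards them too, leaving a gap in the sequence. A sketch with made-up numbers (and not something to try on a busy system):
SQL> select s1.nextval from dual;     -- say this returns 205
SQL> alter system flush shared_pool;  -- cached, unused values are discarded
SQL> select s1.nextval from dual;     -- resumes past the lost cache, e.g. 301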
SQL> alter sequence s1 cache 100;
SQL> select s1.nextval from test where rownum < 100;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=99)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=99)
Statistics
----------------------------------------------------------
         15  recursive calls
          3  db block gets
         13  consistent gets
          0  physical reads
        696  redo size
Notice that with the increase in CACHE, the number of recursive calls went down from 70 to 15! That makes sense: with a cache of 20, handing out 99 values forces Oracle to update the data dictionary about five times; with a cache of 100, just once.
Increasing CACHE to 1000 now.
SQL> alter sequence s1 cache 1000;
-- First execution with new cache
SQL> select s1.nextval from test where rownum < 100;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=99)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=99)
Statistics
----------------------------------------------------------
         30  recursive calls
          3  db block gets
         16  consistent gets
          0  physical reads
        696  redo size
-- second execution with new cache
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=99)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=99)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         11  consistent gets
          0  physical reads
          0  redo size
Since each execution reads 99 values, and 1000 cached values cover about ten such executions, the recursive calls show up as ZERO for roughly nine subsequent runs! The pre-computed values are now served straight from memory, and the underlying dictionary entry does not have to be updated until the cache runs dry.
-- 11th run
Statistics
----------------------------------------------------------
         14  recursive calls
          3  db block gets
         13  consistent gets
          0  physical reads
        652  redo size
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=99)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=99)
Now let's try to reduce the "consistent gets" by increasing the fetch array size --
SQL> set array 200
SQL> select s1.nextval from test where rownum < 100;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=99)
1 0 SEQUENCE OF 'S1' (SEQUENCE)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=2 Card=99)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
Notice the reduction in "consistent gets". At the default arraysize of 15, returning 99 rows takes seven fetch calls, and each fetch revisits a block; at arraysize 200, all 99 rows come back in a single fetch.
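The same idea applies inside PL/SQL, where BULK COLLECT with a LIMIT clause is the array fetch. A minimal sketch against our test table:
SQL> declare
       cursor c is select n1 from test;
       type t_num_list is table of number;
       l_nums t_num_list;
     begin
       open c;
       loop
         fetch c bulk collect into l_nums limit 200;  -- 200 rows per fetch call
         exit when l_nums.count = 0;
         null;  -- process the batch here
       end loop;
       close c;
     end;
     /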
So I wrote back to him with this full case study --
"As I say, I’m lazy…It’s hard to beat a popular commercial product by doing yourself something that it’s supposed to do anyways..
BTW, fetching 1000 records from a sequence doesn’t always mean 1000 transactions. As you see we increased CACHE size, and Oracle became smarter to not “commit” after every row."
So, what exactly are these "recursive calls"?
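One way to see them with your own eyes, by the way, is SQL trace. A sketch, assuming you can get at the trace file and run tkprof on it:
SQL> alter session set sql_trace = true;
SQL> select s1.nextval from test where rownum < 100;
SQL> alter session set sql_trace = false;
In the tkprof report you will find internal statements like "update seq$ set ... where obj# = ..." -- the dictionary maintenance Oracle performs on your behalf, counted under "recursive calls".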