All posts by Tim Gorman

Tim Gorman has worked in IT with relational databases since 1984, as an Oracle PRO*C and PL/SQL application developer since 1990, as an Oracle DBA since 1993, and managing/designing very large data warehouses on Oracle since 1994. He is an independent consultant (www.EvDBT.com) specializing in data warehousing and database administration (particularly performance and availability). Mr Gorman is currently president of RMOUG (Rocky Mountain Oracle Users Group - www.RMOUG.org) where he has been a member since 1992 and a board member since 1995. He is currently a member of the board of directors of ODTUG (Oracle Developer Tools Users Group - www.ODTUG.com) and of Project SafeGuard (www.PSGHelps.org), which provides legal advice to victims of domestic violence. He has co-authored five books (three with the Oak Table Press), performed technical review on eight more books, has been an Oracle ACE since 2007 and an Oracle ACE Director since 2012, a member of the Oak Table Network (www.OakTable.net) since 2002, and has presented at Oracle Open World, Collaborate, KScope, Hotsos, RMOUG, UKOUG, and Oracle users groups in lots of wonderful places around the world.

Hello Delphix!

After almost 16 years as an independent consultant, with a couple of side-steps into the world of small consulting-services startups, I’ve accepted an offer from Delphix, a startup building the future of information technology, enabling agile data management and storage virtualization.

I’m closing EvDBT as a business (the employee count will drop from one to zero) and finishing up my current consulting engagements; I start with my new employer on 01-May 2014.

Thank you, EvDBT.  You were my lifeboat and my vehicle to a better career and a better life!

The DBA is dead. Again.

Mark Twain never said, “Reports of my death are greatly exaggerated.”  Instead, his comment in 1897 was less tongue-in-cheek than matter-of-fact.  Confronted with news reports that he was gravely ill, he responded, “James Ross Clemens, a cousin of mine, was seriously ill two or three weeks ago in London, but is well now.  The report of my illness grew out of his illness; the report of my death was an exaggeration.”  I can only hope that, while remaining equally matter-of-fact, my comments will also grow wittier in the retelling than they were when written.  That is a lot to hope for, as past experience shows that my comments generally provoke unintended offense.

Every few years, when wondrous new automation appears imminent, reports surface about the long-anticipated death of the role of the database administrator.  Sometimes it seems these reports arise out of sheer frustration that DBAs and databases still exist, as seemed to have happened in 2008 during a conversation on the Oak Table email list, which closely followed a similar discussion on the ORACLE-L list.  To wit:  the war is over, and we lost.

Alex Gorbachev commented succinctly at the time:

We have already “lost” the war many times, haven’t we?  We lost it to object-oriented databases (8i?)  We lost to XML databases (9i?)  We lost to grid databases (10g?)  And we are losing to what now with 11g?  The “fusion” will save us all with or *without* databases in the first place?  Yeah right … the end is close.

The focus of discussion on both email lists was a thought-provoking blog post in March 2008 by Dom Brooks entitled “The dea(r)th of Oracle RDBMS and contracting?”  He commented that the tide of history had finally turned against the Oracle database and the highly-visible role of database administrator.  Stiff competition from open-source competitors, emerging scalable technologies, absurd license fees, and belt-tightening by many IT shops were the overwhelming trend.  Poor database design, exacerbated by immature implementation, only made things worse; if you’re going to produce a disaster, it probably shouldn’t cost as much as Oracle.

My response on both email threads on ORACLE-L and the Oak Table was this…

Back in the 1980s, I worked for a company that had built some really cool applications in the area of travel reservations.  Eventually, the travel providers (i.e. airlines, hotels, car rental agencies, etc) caught on to what we were doing and did it themselves, effectively putting us out of business overnight.  So, it came time to sell the company off in pieces.  We tried to sell the applications, but nobody wanted them — they had their own, or could buy or build better.  We sold the hardware and facilities, but for pennies on the dollar.  Then, when we tried to sell the data, we hit the jackpot — everybody wanted the data, and we were able to sell it over and over again, to multiple buyers.

I never forgot that lesson, and several years later traded being a programmer for being a DBA because (as Michael just said, below) I like working with data.  Data, not programs, is the only thing that matters — applications are transient and have no value except to acquire, manipulate, and display data.  Data is the only thing with value.  The long-term value of data is the reason I’ve moved toward data warehousing and business intelligence, too.

Data is important.  Databases manage data.  DBAs architect, configure, and manage databases.  So, being a skilled database administrator will always be necessary as long as data exists.  If the state of the art ceases advancing, then automation will finally catch up to extinguish the DBA role/job.  But until then, being a DBA is a career.

That’s my story.  And I’m stickin’ to it.

Doug Burns was following both threads and was kind enough to lend his support in a post to his blog entitled “There’s Hope For Us All”, in which he stated “although it doesn’t reflect my personal experience in the slightest, there was something about what he had to say and the way he said it that rung very true to me.”  Kinder words are rarely spoken, and thank you, Doug.  And thank you too, Dom, for your follow-up comment to Doug’s post, “Solidarity Brother!  I’m sure Tim’s right and will continue to be right.  I was having an emotional moment… the flat earth society are everywhere!”

We all have those moments.

And here we are again, having another moment.

Once again, the topic of discussion on the Oak Table list was a blog post from Kenny Gorman (no relation) entitled “The Database Administrator Is Dead.”  My father, who was a police officer for 25 years, worked in a far more dangerous profession, and certainly several people wished him harm over his career and even acted on it, but in a general way my chosen profession seems to have received more death threats.

Now, the forces opposing the DBA are not necessarily cheaper, different, or disruptive technology, but better automation and provisioning.  The argument goes that the role of the DBA will be smothered out of existence as highly-automated management consoles take over each of its remaining tasks.  “Database As A Service” or “DBaaS”, cloud provisioning for databases, is the next development to obsolesce the database administrator.

The synchronicity of these discussions is spooky.  During the week previous to the discussion of Mr. [Kenny] Gorman’s blog post, I had related another particular story 4-5 separate times to 4-5 separate people, and now I found that I was relating it yet again, this time to the Oak Table email list.  It was something of a continuation from my earlier story…

In the 1990s, when I chose to move from being a developer to a DBA, the trend of out-sourcing was already abundantly evident, though not yet augmented by the trend of offshoring.  In 1999 I delivered my first-ever keynote address, at a conference in Portland, Maine, to Maine’s Oracle Users Group (MSOUG), on the topic of being a DBA in a world of out-sourcing.  I described a visualization of one of those water-holes in the Sahara:  a water-hole that is brimming and supporting a lush oasis during the rainy season, but that dries up and shrinks to a small muddy puddle during the dry season, surrounded by dead vegetation and dead animals that didn’t quite make it to that water-hole or to another.

Repeating the comments in Doug’s blog, code comes and goes but data is the only thing with lasting value.  I visualized that the dead vegetation and dead animals surrounding the muddy remainder of the water-hole were the developers and DBAs whose jobs were outsourced.  Right in the middle of the muddy water were two eyes above the surface, and this was the skilled DBA, guarding the remainder of the water-hole, containing only the most important stuff that couldn’t be outsourced or offshored.  I had long decided that I would be that DBA, and stay as close to the data as I could, especially the most strategic data (i.e. data warehousing).

I figure y’all might have as much fun as the good folks at MSOUG did with that visualization, especially when subjected to Freudian and Jungian dream analysis.

Though it has nothing to do with why I had related this story 4-5 times earlier that week, in this context it is worth noting that the author of the article (we’re not related) mentions having been an Oracle DBA 15 years ago, which is about the time I gave my keynote for MSOUG.

Perhaps he left the field too early?  :-)

I completely agree with his “automate or die” comment, and I might add “keep learning or die”.  Of course the job’s roles are changing, but besides DBaaS being a long way from the pointy-and-clicky utopia that his post implies, the question remains:  who sets up the DBaaS environments?  DBaaS isn’t the end of the DBA role;  it is more automation.

Who will set up DBaaS environments, if not DBAs?  Don’t get me wrong:  I agree that DBaaS is here.  And I think DBAs will set it up, use it, and improve on it.

That’s my story.  And I’m stickin’ to it.

15 years of EvDBT

I worked at Oracle Consulting for eight and a half years, from January 1990 until July 1998, starting as a senior consultant and finishing as a technical manager.  In the summer of 1998, I was experiencing a dual crisis in my career, directionally and ethically.

From the directional perspective, Oracle Consulting was sending very clear signals that the way Gary Dodge and I were doing business in the Denver consulting practice was not aligned with corporate goals.  The corporation wanted vertical “centers of expertise” with global and national scope.  In Denver, Gary and I managed about a dozen generalists, with experience ranging from very junior to very senior, who effectively covered all types of technology.  Our goal was to let each person work locally on the type of work they enjoyed, occasionally coercing some to try something different.  Many of us had families, and all of us lived in Colorado for a reason.

Attempting to adhere to corporate direction, when we received a request from a local customer, we began by first contacting the relevant national or global “center of expertise”.  Most often, we would be told that nobody was available within the next few weeks (or months) and that, when someone did become available, the rates charged would reflect a very senior person coupled with travel expenses.  We would feed that response back to the customer, who understandably became concerned or irate and asked for one of our local generalists, whom they had probably used previously, which would have been our first response anyway.  In almost every case, we would end up staffing one of our local folks on the engagement, who often completed the engagement before the national or global group’s person even became available.  As this continued, the pressure from corporate became more direct, complaining about a “black hole in the Rockies”.  So, looking ahead into the future at Oracle, I saw a model of business with which I wasn’t comfortable:  our local people getting on planes to work elsewhere, while out-of-town personnel were flying into Colorado to work here.  Perhaps it looked good from a higher level, but from our street-level view, it was absurd.

However, I also had a more serious ethical problem.  I had been sent to Los Angeles to work an engagement involving my primary expertise at the time:  Oracle Parallel Server on IBM RS6000/SP clusters.  The customer was a start-up website job board.  Both IBM and Oracle were determined to sell some massive hardware and software in there, and were working together toward common purpose with rare cooperation.

Except the customer wasn’t cooperating.

Instead, they had come up with a far less-expensive scheme involving dozens of commodity servers:  one server contained a master database to which new job postings were added and changes were made, and that database was then replicated to dozens of read-only database servers using backup/restore, with a connection load-balancer directing traffic.  This allowed their read-mostly website to scale as needed by off-loading reads from the master database and keeping writes away from the read-only databases.  It was fast, cheap, and easy — a rare occasion when it wasn’t necessary to choose only two.  It was novel for the time, I was impressed, and I said so.  Nowadays, such a thing is called a reader farm and can easily be implemented using Active Data Guard.

However, the IBM and Oracle teams were adamantly opposed – fast, cheap, and easy would ruin the lucrative deal they had planned for themselves.  So I was directly ordered by the regional vice-president in charge of the deal to reject the customer’s plans as unworkable, to extol the virtues of Oracle Parallel Server and IBM RS6000/SP clustered servers, and to recommend them strongly in conclusion.

What to do?

I certainly did not enjoy being ordered to lie.  Not asked, but ordered.  On the other hand, I worked for Oracle and I had a boss and that boss stated very clearly what to do, as he had every right to do.  After all, no blood would be spilled, no babies would be killed.

So my solution to the ethical dilemma was:

  1. Complete the engagement as directed
  2. Prevent it from happening again

I am not smart enough to avoid making mistakes, but I believe in making mistakes only once.  I did what I was told to do, enduring the astonished looks from the good folks who couldn’t believe I was spouting such nonsense.  I subsequently resigned from Oracle, to avoid ever having to make that mistake again.  But having resigned from one well-regarded corporation, the question became:  are there any corporations, anywhere in the world, where I would not be asked to do something like that again?

The answer was simple and, in August 1998, Evergreen Database Technologies, Inc opened for business.

The first person I told of my decision to resign was Gary Dodge.  He wasn’t my supervisor, but we were peers.  I entered his office and closed the door, and he looked up and commented, “Oh, that’s not a good sign.”  I sat down and told him, and he nodded and said, “Well, good thing you closed the door, because I’m leaving also.”  He didn’t leave Oracle, but he left consulting, for the same directional reasons as I.  So, we didn’t inform our management together, but we informed them at the same time.

EvDBT hasn’t been open continuously over the past 15 years;  I have far too much to learn.  I spent a few years attempting to start another consulting-services company with some colleagues, and that ended unsuccessfully.  Any deal that starts with handshakes inevitably ends with lawyers, so my lesson is to always start with lawyers so that it ends with handshakes.

At one point, I hired in with Compaq Professional Services because they offered an intriguing opportunity.  However, my timing was bad, as Compaq was absorbed by HP a few months after I started, and knowing that I would not enjoy the noise and mess of the mating of the elephants, I moved on.

Thank you all for the past 15 years, and I look forward to the next 15 years.

Update on Friday 18-Oct 2013:  I’ve received some criticism and questions for my perceived criticism of Oracle in this article, particularly regarding the ethical dilemma described above.  I didn’t write this to criticize Oracle as a company;  the situation simply happened while I was working there.  It is a large company like many others.  Corporations are made up of people who respond in varying ways to the incentives given them.  I’m personally aware of many people with similar roles at Oracle who have not and never will react to their incentives in that particular way.  Likewise, I know of a few who would have reacted far worse.  It’s all part of the grand pageant of human behavior.

The person who ordered me to do my job was not himself facing an ethical dilemma.  He had brought me onto the engagement to expedite the deal, and he never imagined that I would balk;  it just wasn’t professional.

He had a task to do, and I began to jeopardize the success of that task.  I would hope to be as decisive and effective as he.

Keyword DETERMINISTIC is anything but…

According to TheFreeDictionary.com, the word “deterministic” means…

deterministic
de·termin·istic adj. an inevitable consequence of antecedent sufficient causes

According to Wikipedia, the explanation of deterministic algorithm is…

In computer science, a deterministic algorithm is an algorithm which, given a particular
input, will always produce the same output, with the underlying machine always passing
through the same sequence of states.

In the Oracle PL/SQL Language documentation, it is used as a keyword, as follows…

DETERMINISTIC

Indicates that the function returns the same result value whenever it is called with the same values for its parameters.

You must specify this keyword if you intend to invoke the function in the expression of a function-based index or from the query of a materialized view that is marked REFRESH FAST or ENABLE QUERY REWRITE. When the database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than re-executing the function. If you subsequently change the semantics of the function, then you must manually rebuild all dependent function-based indexes and materialized views.

Do not specify this clause to define a function that uses package variables or that accesses the database in any way that might affect the return result of the function. The results of doing so are not captured if the database chooses not to re-execute the function.

These semantic rules govern the use of the DETERMINISTIC clause:

  • You can declare a schema-level subprogram DETERMINISTIC.

  • You can declare a package-level subprogram DETERMINISTIC in the package specification but not in the package body.

  • You cannot declare DETERMINISTIC a private subprogram (declared inside another subprogram or inside a package body).

  • A DETERMINISTIC subprogram can invoke another subprogram whether the called program is declared DETERMINISTIC or not.

There is a subtle twist to this explanation.  It states that the keyword “indicates that the function returns the same result value whenever it is called with the same values for its parameters”, but if you think about the use of the verb indicates, you realize that they are conceding that the keyword itself doesn’t enforce the behavior.  Instead, it is carefully-chosen language to sidestep the important fact that the PL/SQL compiler does not actually enforce the necessary behavior.
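
For the examples that follow, assume a pair of simple test tables along these lines (a hypothetical setup, since the original DDL isn't shown, and the row counts are arbitrary):

-- hypothetical setup: two small tables with numeric primary keys and matching data
create table test_tbl1 (col1 number constraint test_tbl1_pk primary key);
create table test_tbl2 (col1 number constraint test_tbl2_pk primary key);
insert into test_tbl1 select rownum*10 from dual connect by level <= 1000;
insert into test_tbl2 select rownum*10 from dual connect by level <= 1000;
commit;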

So as a result, it is possible to write the following function…

SQL> create or replace function test_func(in_col1 in number)
  2           return number deterministic
  3  is
  4           v_col1  number;
  5  begin
  6           select  col1
  7           into    v_col1
  8           from    test_tbl2
  9           where   col1 = in_col1;
 10           return(v_col1);
 11  end test_func;
 12  /
SQL> show errors
No errors.

Is this function really deterministic?  No, of course not.  Anyone else changing data in the TEST_TBL2 table can change the outcome of this function.

Yet, the DETERMINISTIC keyword did not cause compilation of the function to fail, as it should have.  Only the use of the pragma restrict_references using the qualifiers RNDS (i.e. read no database state), RNPS (i.e. read no package state), WNDS (i.e. write no database state), and WNPS (i.e. write no package state) would do that…

SQL> create or replace package test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number;
  5          pragma  restrict_references(test_func,RNPS,WNPS,RNDS,WNDS);
  6  end test_pkg;
  7  /

SQL> show errors
No errors.

SQL> create or replace package body test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number
  5          is
  6                  v_col1  number;
  7          begin
  8                  select  col1
  9                  into    v_col1
 10                  from    test_tbl2
 11                  where   col1 = in_col1;
 12                  return(v_col1);
 13          end test_func;
 14  end test_pkg;
 15  /

Warning: Package Body created with compilation errors.

SQL> show errors
Errors for PACKAGE BODY TEST_PKG:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/2      PLS-00452: Subprogram 'TEST_FUNC' violates its associated pragma

Notice that this pragma can only be used within a function declared within a PL/SQL package;  this pragma cannot be used within a standalone function.  But it proves that the PL/SQL compiler is capable of detecting the problem, and failing the compilation.  They have the technology.

Further, it is now possible to create a function-based index using this function…

SQL> create index test_tbl1_fbi on test_tbl1(test_func(col1))
  2  tablespace users compute statistics;

Index created.

…and that function-based index will be used by the Oracle optimizer for queries;  after all, why shouldn’t it?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 357717947
-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 1370928414
-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |
-------------------------------------------------------------------------------

SQL> select  t1.col1 t1_col1, t2.col1 t2_ool1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 2884964714
-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

So, whether the query uses the function-based index, or whether it performs a simple FULL table scan, or whether the function-based index isn’t used at all, the results are the same.

But, now suppose another session changes that row in the TEST_TBL2?

SQL> update  test_tbl2
  2  set     col1 = 1700
  3  where   col1 = 170;

1 row updated.

SQL> commit;

Commit complete.

…and now someone performs a query using the function-based index?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 357717947
-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

How can that be?  We know that the UPDATE changed this data.  And here is proof, obtained by bypassing the function-based index in the WHERE clause by forcing a FULL table scan…

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1370928414
-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |
-------------------------------------------------------------------------------

And here is further proof obtained by completely eliminating the function from the SELECT list and instead performing a simple inner-join…

SQL> select  t1.col1 t1_col1, t2.col1 t2_ool1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 2884964714
-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

So, what PL/SQL has permitted us to do is create a situation where it would be reasonable for end-users to conclude that the database has corrupted data.  In a sense, it does — the corrupted data is within the function-based index, where deterministic data is expected.

I found this very situation on the loose in the wild;  that is, within a production application in use at a healthcare company.  Think about that.

I didn’t find it because someone complained about possible data corruption.  I found it because an AWR report pointed me at the SQL statement within the function, which was being executed 1.3 billion times over a 5 day period.  Each execution was quite fast, but if you do 1.3 billion of anything over less than a lifetime, someone will eventually notice.

If you consider that 1.3 billion executions over a 5-day period implies an average rate of about 3,000 executions per second (1,300,000,000 ÷ (5 × 86,400 seconds) ≈ 3,009), sustained for every second of those 5 days, then you start to get an idea of the problem.  Especially when you consider that there were peaks and valleys in that activity.

So, I have raised the issue with the affected healthcare organization, and the problem is worming its way through change management.  In the meantime, this application continues to return incorrect information, over and over and over again.

Are you sure that none of your function-based indexes were built this way?
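
If you want to check, a rough two-step starting point (a sketch, assuming DBA-level access) is to list the function-based indexes and their expressions, and then check whether the functions named in those expressions depend on any tables:

-- step 1: list function-based indexes and the expressions they evaluate
select  index_owner, index_name, table_name, column_expression
from    dba_ind_expressions
order by index_owner, index_name;

-- step 2: for each function named in an expression above, check whether it
-- reads from any tables (a truly deterministic function should not)
select  name, type, referenced_owner, referenced_name, referenced_type
from    dba_dependencies
where   name = upper('&function_name')
and     referenced_type = 'TABLE';

Any function that turns up in step 2 deserves a closer look at whether its DETERMINISTIC declaration is really justified.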


OOW13 and OTW13

Abstract selection for Oracle Open World 2013 (OOW13) completed recently, and two of the eleven abstracts that I had submitted have been accepted:  “Scaling To Infinity: Making Star Transformations Sing” on Monday 23-Sep at 1:45pm and “RDBMS Forensics:  Troubleshooting using ASH” on Sunday 22-Sep at 10:30am.

Lesser known, but more fun than a barrel of drunken DBAs, is the un-conference formerly known as “Oracle Closed World”, now known as “Oak Table World”.  Oak Table World 2013 (OTW13) is in the planning stages for a second year following the wild success of Oak Table World 2012.  OTW12 was cooked up at the last minute, less than 4 weeks before OOW12, and it had the impromptu energy of a Friday night kegger after a long week at work.  I had the honor of being first up at that event with a presentation on “The Fastest UPDATE Is An INSERT”, and it was a blast.  The event is going to be much bigger and more popular this year, particularly since OOW13 turned down so many excellent abstracts and presenters.

Deep technical content not suitable for OOW13?  Fine, we’ll give it an airing at OTW13!

OTW13 will once again be held at the Children’s Creativity Museum in Yerba Buena Gardens in San Francisco, CA on Monday-Tuesday, 23-24 September 2013.  The driving forces behind this wonderful event are Mogens Norgaard and Kyle Hailey, who have recruited more corporate sponsors (to be announced soon) and yes, EvDBT will once again be a sponsor.

I plan to present on “Three Types of Table Compression” on Monday 23-Sep at 9:00am.  The talk has to do with an experience I had last summer with a customer running the Oracle Demantra application, which had several quirks that prevented the use of either of the two types of table compression offered by Oracle (i.e. BASIC/OLTP compression and HCC compression).  But there is a third, little-known form of table compression, rooted in ancient Oracle versions, which served the purpose, so most of the presentation is a detailed description of BASIC/OLTP compression and HCC compression, both of which are just a prelude to a discussion about trailing NULL columns.

So, if you make it to the Moscone Center this September, then be sure to take a brief detour on the trampled path from Moscone West to Moscone South and back, and take a peek behind the carousel at Yerba Buena Gardens behind which lies Oak Table World 2013 (OTW13), and see the top-notch technical presentations that were declined for Open World.

Below is an extremely cool panoramic shot taken during my presentation at OTW12…

[Panoramic photo from OTW12, 01-Oct-2012]

View V$SQL_OPTIMIZER_ENV

This was fun, and I’ve been trying to remember to post about the things that are fun.

I was working with a customer using a purchased application from a relatively over-bearing vendor.  You know the type:  they have very specific (and somewhat out-of-date) demands for the Oracle database environment on which their application is hosted.

My customer complained that some reports were taking 15-20 minutes to complete, when everyone who knew what the reports were doing was certain that they shouldn’t take more than a few seconds at most.  The search parameters to the report were very specific, and the report didn’t do a whole lot;  it was only supposed to bring back a few bits ’n’ pieces, not the whole kit ’n’ kaboodle.

So they extracted the text of the SQL statement from the application and emailed it to me.  Using that text, I searched the V$SQL view and found the SQL_ID.  Now we can have some fun…
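
The V$SQL search itself was nothing fancy;  something along these lines, where the LIKE pattern is just a stand-in for a recognizable fragment of the real statement:

select  sql_id, child_number, plan_hash_value, executions, elapsed_time
from    v$sql
where   sql_text like 'SELECT  *%FROM    (SELECT%'
order by last_active_time desc;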

The first step is to get some context for the problem, and that means looking at the recent “history” of executions (as displayed by my “sqlhistory.sql” script), which looked like…

 +--------------------------------------------------------------------------------------------------+
 |Plan HV     Min Snap  Max Snap  Execs       LIO/Exec       PIO/Exec       CPU/Exec    Elapsd/Exec |
 +--------------------------------------------------------------------------------------------------+
 |1834398219  30594     30595     1           15,244,622     0              1,277.87    1,633.08    |
 +--------------------------------------------------------------------------------------------------+
 .
 ========== PHV = 1834398219==========
 First seen from "06/13/13 10:30:22" (snap #30594)
 Last seen from  "06/13/13 11:00:23" (snap #30595)
 .
 Execs          LIO            PIO            CPU            Elapsed
 =====          ===            ===            ===            =======
 1              15,244,622     0              1,277.87       1,633.08
 .
 Plan hash value: 1834398219

 ------------------------------------------------------------------------------------------------------
 | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
 ------------------------------------------------------------------------------------------------------
 |   0 | SELECT STATEMENT                  |                  |       |       |   124K(100)|          |
 |   1 |  COUNT STOPKEY                    |                  |       |       |            |          |
 |   2 |   VIEW                            |                  |     1 |    95 |   124K  (2)| 00:10:15 |
 |   3 |    SORT ORDER BY STOPKEY          |                  |     1 |    94 |   124K  (2)| 00:10:15 |
 |   4 |     NESTED LOOPS SEMI             |                  |     1 |    94 |   124K  (2)| 00:10:15 |
 |   5 |      NESTED LOOPS SEMI            |                  |     1 |    88 |    35   (0)| 00:00:01 |
 |   6 |       NESTED LOOPS                |                  |     1 |    64 |    33   (0)| 00:00:01 |
 |   7 |        TABLE ACCESS BY INDEX ROWID| CC_CLAIM         |     1 |    55 |    32   (0)| 00:00:01 |
 |   8 |         INDEX RANGE SCAN          | CC000000D0U3     |    11 |       |    22   (0)| 00:00:01 |
 |   9 |        INDEX UNIQUE SCAN          | CC000000DKU3     |     1 |     9 |     1   (0)| 00:00:01 |
 |  10 |       INDEX RANGE SCAN            | CC000000D2U6     |   346K|  8110K|     2   (0)| 00:00:01 |
 |  11 |      VIEW                         | VW_NSO_1         |    20 |   120 |   124K  (2)| 00:10:15 |
 |  12 |       NESTED LOOPS                |                  |    20 |   680 |   124K  (2)| 00:10:15 |
 |  13 |        INDEX FULL SCAN            | PERSONLASTFIRSTN |    20 |   380 |   124K  (2)| 00:10:14 |
 |  14 |        INDEX RANGE SCAN           | CC000000D3U3     |     1 |    15 |     2   (0)| 00:00:01 |
 ------------------------------------------------------------------------------------------------------

                                     Summary Execution Statistics Over Time
                                                                               Avg                 Avg
 Snapshot                          Avg LIO             Avg PIO          CPU (secs)      Elapsed (secs)
 Time            Execs            Per Exec            Per Exec            Per Exec            Per Exec
 ------------ -------- ------------------- ------------------- ------------------- -------------------
 13-JUN 10:30        1        3,362,907.00                0.00              284.33              358.08
 13-JUN 11:00        0       11,881,715.00                0.00              993.54            1,275.00
              -------- ------------------- ------------------- ------------------- -------------------
 avg                          7,622,311.00                0.00              638.94              816.54
 sum                 1

So only one execution which started sometime between 10:30am and 11:00am, and continued into the 11:00 – 11:30am period, racking up a little over 27 minutes (1633.08 seconds) of run-time.

Please note the plan_hash_value of 1834398219;  my next task is to create a SQL*Plus script providing a reproducible test case that generates the exact same execution plan.  Because I’m a command-line kind of guy…

I copied the text of the SQL statement and prettied it up manually, a process which is time-consuming but also lets me learn intimately how the query is written.  In order to find the data values to plug into the bind-variables used by the SQL statement, I queried the V$SQL_BIND_CAPTURE view as follows…

SQL> select  child_number, name, position, datatype_string, value_string
   2  from    v$sql_bind_capture
   3  where   sql_id = '&&V_SQL_ID'
   4  order by 1, 3;
 old   3: where    sql_id = '&&V_SQL_ID'
 new   3: where    sql_id = '4xmb4k9uhqn90'

 CHILD_NUMBER NAME         POSITION DATATYPE_STRING                VALUE_STRING
 ------------ ---------- ---------- ------------------------------ ------------------------------
            0 :1                  1 NUMBER                         5220
              :2                  2 NUMBER                         2
              :3                  3 NUMBER                         1
              :4                  4 NUMBER                         4
              :5                  5 NUMBER                         6
              :6                  6 NUMBER                         8
              :7                  7 NUMBER                         15
              :8                  8 NUMBER                         20
              :9                  9 NUMBER                         32
              :10                10 VARCHAR2(32)                   lena
              :11                11 NUMBER                         10002
              :12                12 NUMBER                         92
              :13                13 NUMBER                         124
              :14                14 NUMBER                         4
              :15                15 NUMBER                         5221
              :16                16 NUMBER                         15

So, now I have a SQL*Plus script ready to try to replicate the plan hash value from real-life…

set echo on feedback on timing on linesize 300 tab off
set autotrace traceonly explain  /* AUTOTRACE TRACEONLY EXPLAIN just parses, doesn't execute the query */
col plan_plus_exp format a295    /* permits output from AUTOTRACE to display wide, and not wrap */
variable b1 number 
variable b2 number
variable b3 number
variable b4 number
variable b5 number
variable b6 number
variable b7 number
variable b8 number
variable b9 number
variable b10 varchar2(32)
variable b11 number
variable b12 number
variable b13 number
variable b14 number
variable b15 number
variable b16 number
exec :b1 := 5220;        /* plugging in the bind-variable values captured from V$SQL_BIND_CAPTURE */
exec :b2 := 2;
exec :b3 := 1;
exec :b4 := 4;
exec :b5 := 6;
exec :b6 := 8;
exec :b7 := 15;
exec :b8 := 20;
exec :b9 := 32;
exec :b10 := 'lena';
exec :b11 := 10002;
exec :b12 := 92;
exec :b13 := 124;
exec :b14 := 4;
exec :b15 := 5221;
exec :b16 := 15;
spool x10

(...SQL statement itself edited out, because it might be recognizable...)

spool off
set echo off feedback 6 timing off linesize 130 tab on autotrace off

So now, I can use the SQL*Plus AUTOTRACE TRACEONLY EXPLAIN command to determine if I have created a reproducible test case…

SQL> SELECT  *
   2  FROM    (SELECT         qRoot.ID col0,

(...SQL statement body edited out, because it might be recognizable...)

  38  WHERE   rownum <= :b16;
 Elapsed: 00:00:00.00

 Execution Plan
 ----------------------------------------------------------
 Plan hash value: 605492508      <== not the same Plan Hash Value

 --------------------------------------------------------------------------------------------------------
 | Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
 --------------------------------------------------------------------------------------------------------
 |   0 | SELECT STATEMENT                     |                 |     1 |    95 |    44   (5)| 00:00:01 |
 |*  1 |  COUNT STOPKEY                       |                 |       |       |            |          |
 |   2 |   VIEW                               |                 |     1 |    95 |    44   (5)| 00:00:01 |
 |*  3 |    SORT ORDER BY STOPKEY             |                 |     1 |    88 |    44   (5)| 00:00:01 |
 |   4 |     NESTED LOOPS SEMI                |                 |     1 |    88 |    30   (0)| 00:00:01 |
 |   5 |      NESTED LOOPS                    |                 |     1 |    64 |    28   (0)| 00:00:01 |
 |*  6 |       TABLE ACCESS BY INDEX ROWID    | TABLE_A1        |     1 |    55 |    27   (0)| 00:00:01 |
 |*  7 |        INDEX RANGE SCAN              | XX000000D0U3    |     1 |       |    26   (0)| 00:00:01 |
 |   8 |         NESTED LOOPS                 |                 |       |       |            |          |
 |   9 |          NESTED LOOPS                |                 |     2 |    68 |    12   (0)| 00:00:01 |
 |  10 |           TABLE ACCESS BY INDEX ROWID| TABLE_B2        |     7 |   105 |     7   (0)| 00:00:01 |
 |* 11 |            INDEX RANGE SCAN          | XX000000D3N2    |     7 |       |     4   (0)| 00:00:01 |
 |* 12 |           INDEX UNIQUE SCAN          | SYS_C002390384  |     1 |       |     1   (0)| 00:00:01 |
 |* 13 |          TABLE ACCESS BY INDEX ROWID | TABLE_B3        |     1 |    19 |     2   (0)| 00:00:01 |
 |* 14 |       INDEX UNIQUE SCAN              | XX000000DKU3    |     1 |     9 |     1   (0)| 00:00:01 |
 |* 15 |      INDEX RANGE SCAN                | XX000000D2U6    |   508K|    11M|     2   (0)| 00:00:01 |
 --------------------------------------------------------------------------------------------------------

Sadly, it appeared that my SQL*Plus test script could not reproduce the exact same execution plan.  I looked at the CBO statistics on the underlying tables (and their associated indexes and columns) to see if a stats-regather since the most recent executions had changed the input values provided to the Oracle cost-based optimizer (CBO), but that was not the case.
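
For the record, the stats checks were nothing more exotic than the LAST_ANALYZED timestamps and the stats-gathering history, something along these lines (with the table name plugged in as a substitution variable):

select  owner, table_name, num_rows, last_analyzed
from    dba_tab_statistics
where   table_name = upper('&&table_name');

select  table_name, stats_update_time
from    dba_tab_stats_history
where   table_name = upper('&&table_name')
order by stats_update_time desc;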

Another area to check is the set of session-level parameter values used when the SQL statement was parsed, which are stored in the V$SQL_OPTIMIZER_ENV view, and here I hit the jackpot…

SQL> select name, value, isdefault from v$sql_optimizer_env where sql_id = '4xmb4k9uhqn90' order by name;

 NAME                                     VALUE                     ISD
 ---------------------------------------- ------------------------- ---
 _always_semi_join                        nested_loops              NO
 _enable_row_shipping                     false                     NO
 _fast_full_scan_enabled                  false                     NO
 _hash_join_enabled                       false                     NO
 _optimizer_cost_based_transformation     off                       NO
 _optimizer_sortmerge_join_enabled        false                     NO
 _pga_max_size                            1677720 KB                NO
 active_instance_count                    1                         YES
 bitmap_merge_area_size                   1048576                   YES
 cell_offload_compaction                  ADAPTIVE                  YES
 cell_offload_plan_display                AUTO                      YES
 cell_offload_processing                  true                      YES
 cpu_count                                12                        YES
 cursor_sharing                           exact                     YES
 db_file_multiblock_read_count            128                       YES
 hash_area_size                           131072                    YES
 is_recur_flags                           0                         YES
 optimizer_capture_sql_plan_baselines     false                     YES
 optimizer_dynamic_sampling               2                         YES
 optimizer_features_enable                11.1.0.7                  YES
 optimizer_index_caching                  0                         YES
 optimizer_index_cost_adj                 100                       YES
 optimizer_mode                           all_rows                  YES
 optimizer_secure_view_merging            true                      YES
 optimizer_use_invisible_indexes          false                     YES
 optimizer_use_pending_statistics         false                     YES
 optimizer_use_sql_plan_baselines         false                     NO
 parallel_ddl_mode                        enabled                   YES
 parallel_degree                          0                         YES
 parallel_dml_mode                        disabled                  YES
 parallel_execution_enabled               true                      YES
 parallel_query_default_dop               0                         YES
 parallel_query_mode                      enabled                   YES
 parallel_threads_per_cpu                 2                         YES
 pga_aggregate_target                     8388608 KB                YES
 query_rewrite_enabled                    true                      YES
 query_rewrite_integrity                  enforced                  YES
 result_cache_mode                        MANUAL                    YES
 skip_unusable_indexes                    true                      YES
 sort_area_retained_size                  0                         YES
 sort_area_size                           65536                     YES
 star_transformation_enabled              false                     YES
 statistics_level                         typical                   YES
 transaction_isolation_level              read_commited             YES
 workarea_size_policy                     auto                      YES

 45 rows selected.

The session running the SQL statement had eight (8) database parameters set by ALTER SESSION SET commands within the session;  seven (7) of these parameters are undocumented “underscore” parameters.  Two (2) of the eight changed parameters have the same value as the global database instance (i.e. “_pga_max_size” and “optimizer_use_sql_plan_baselines”), but six (6) of the underscore parameters do not.  So, to understand whether these changes were the determining factor, I re-ran my test case with those six parameters changed to the values shown above at the session level…
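
The additions to the top of the test script were simply those six underscore parameters from the listing above, set back to the captured values (reproduced here as a sketch;  please don’t set undocumented parameters without a compelling reason):

alter session set "_always_semi_join" = nested_loops;
alter session set "_enable_row_shipping" = false;
alter session set "_fast_full_scan_enabled" = false;
alter session set "_hash_join_enabled" = false;
alter session set "_optimizer_cost_based_transformation" = off;
alter session set "_optimizer_sortmerge_join_enabled" = false;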

Plan hash value: 1834398219

 ------------------------------------------------------------------------------------------------------
 | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
 ------------------------------------------------------------------------------------------------------
 |   0 | SELECT STATEMENT                  |                  |     1 |    95 |   804K  (1)| 01:06:08 |
 |*  1 |  COUNT STOPKEY                    |                  |       |       |            |          |
 |   2 |   VIEW                            |                  |     1 |    95 |   804K  (1)| 01:06:08 |
 |*  3 |    SORT ORDER BY STOPKEY          |                  |     1 |    94 |   804K  (1)| 01:06:08 |
 |   4 |     NESTED LOOPS SEMI             |                  |     1 |    94 |   804K  (1)| 01:06:08 |
 |   5 |      NESTED LOOPS SEMI            |                  |     1 |    88 |    41   (0)| 00:00:01 |
 |   6 |       NESTED LOOPS                |                  |     1 |    64 |    39   (0)| 00:00:01 |
 |*  7 |        TABLE ACCESS BY INDEX ROWID| CC_CLAIM         |     1 |    55 |    38   (0)| 00:00:01 |
 |*  8 |         INDEX RANGE SCAN          | CC000000D0U3     |    14 |       |    26   (0)| 00:00:01 |
 |*  9 |        INDEX UNIQUE SCAN          | CC000000DKU3     |     1 |     9 |     1   (0)| 00:00:01 |
 |* 10 |       INDEX RANGE SCAN            | CC000000D2U6     |   508K|    11M|     2   (0)| 00:00:01 |
 |* 11 |      VIEW                         | VW_NSO_1         |   338K|  1983K|   804K  (1)| 01:06:07 |
 |  12 |       NESTED LOOPS                |                  |   338K|    10M|   804K  (1)| 01:06:07 |
 |* 13 |        INDEX FULL SCAN            | PERSONLASTFIRSTN |   338K|  6274K|   126K  (4)| 00:10:27 |
 |* 14 |        INDEX RANGE SCAN           | CC000000D3U3     |     1 |    15 |     2   (0)| 00:00:01 |
 ------------------------------------------------------------------------------------------------------

…and that, my friends, is a direct hit.  I hadn’t expected the solution to be this easy.  Usually, a problem fights tooth-and-nail before yielding a solution.

So now the big question becomes:  which is truly the better plan?  The Oracle optimizer clearly states that plan hash value = 1834398219 (i.e. the existing execution plan) has an estimated cost of 804K and an estimated elapsed time of 1 hour, 6 minutes, 8 seconds, whereas plan hash value = 605492508 (i.e. the execution plan using the default parameter values) has an estimated cost of 44 and an estimated elapsed time of 1 second.

And those estimates seem to be largely correct:  when I ran the query with the same data value inputs under the new execution plan, it returned no rows in 0.3 seconds.  Subsequently changing the session parameters and re-running the same query ran for at least 5 minutes before I killed it.

So the better execution plan for this query uses the default database-level parameter settings, not the session-level settings.

Now the question becomes:  where are the session parameters being changed?

For that answer, I ran another query, scanning the Oracle Active Session History (ASH) data for the module, action, and client information of all the recently-executed SQL statements whose session-level parameters had been changed…

SQL> select   distinct module, action, client_id, sql_id
   2  from     v$active_session_history
   3  where    (sql_id,sql_child_number) in
   4           (select sql_id, child_number
   5            from   v$sql_optimizer_env
   6            where  name = '_fast_full_scan_enabled' and value = 'false');

 MODULE          ACTION                           CLIENT_ID       SQL_ID
 --------------- -------------------------------- --------------- -------------
 AppCenterXX     WebReq:SimpleXYZSearch           aconnell        1rbd5kmgy25fz
 AppCenterXX     WebReq:XYZProcess                scalderon       6w7zt7rgjrsnv
 sqlplus@prod01                                                   1wadmdpy49qg9

(...167 rows edited out for brevity...)

 AppCenterXX     WebReq:SimpleXYZSearch           jwilson         5239t67sg662v
 AppCenterXX     WebReq:SimpleXYZSearch           jlacaille       d9qtthk5000sd

 171 rows selected.

I highlighted my own SQL*Plus session in red typeface, which goes toward corroborating that it is ALTER SESSION SET commands that are changing the environment.  One other thing to note is that all of the application’s sessions are connecting to the same Oracle account, so I checked whether there were any AFTER LOGON database triggers on that (or any other) account in the database;  while there were AFTER LOGON triggers, they did not cause these parameters to be re-set.
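
Checking for those AFTER LOGON triggers is straightforward;  a query something like this does it:

select  owner, trigger_name, status
from    dba_triggers
where   triggering_event like 'LOGON%';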

So, I can only conclude that the application code itself is changing these eight (8) database parameters, and we need to find out why.  It is quite possible that they provide some form of AFTER LOGON functionality within their application, permitting administrators to specify parameter changes if desired, but “seeded” with pre-configured values.  In that case, we just have to locate that capability and change the settings.  The worst-case scenario would be having these ALTER SESSION SET commands hard-coded right into the application code somewhere;  let’s keep our fingers crossed that it is not that!  :-)

At any rate, it was fun finding such a clear example of the pitfalls of fiddling with undocumented “underscore” parameters.  I’ll update this post if we find anything interesting in the application code.

Gathering statistics in 10.2.0.5+ and 11.2.0.2+ encounters strange waits

It appears that, somewhere in the 10.2.0.5 and 11.2.0.2 patchsets, Oracle introduced some additional unwanted functionality to the “GATHER_*_STATS” procedures in the DBMS_STATS package.

I have been working on a customer’s 11.2.0.3.0 database supporting a data-mart application where the data loading programs call DBMS_STATS.GATHER_TABLE_STATS as a concluding part of load processing, which means that the procedure gets called a *lot*.

We noticed that some calls to the same procedure were waiting excessively on the event “enq: TX – allocate ITL entry” and being blocked by sessions calling similar DBMS_STATS procedures, themselves in turn waiting excessively on the event “row cache lock” on data dictionary tables like SYS.CON$ (i.e. underlying the DBA_CONSTRAINTS view) and SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$.

OK, it is confusing and worrisome to have data dictionary objects be the source of contention within standard packages such as DBMS_STATS, but I can at least understand the involvement of a table like WRI$_OPTSTAT_SYNOPSIS_HEAD$, since this *is* the DBMS_STATS package and that table holds the “synopses” used for incremental stats gathering in 11g.

But, what do constraints have to do with anything?

So, it appears that changes introduced to the DBMS_STATS package (or its supporting underlying packages) somewhere in the 10.2.0.5.x and 11.2.0.2.x patchsets now cause the package to operate in a slightly anal-retentive way.  According to MOS note ID#13784384.8 (entitled “Bug 13784384 – DBMS_STATS produces unnecessary constraints during gather of statistics”), the cause is…

Superfluous NOT NULL constraints could be generated for internal TEMP 
tables by DBMS_STATS when gathering table statistics. This can lead to poor 
performance of statistics gathering in parallel / concurrent sessions.
This can also cause the system to hit ORA-600 [12807] and prevent 
creation of any new constraints if left to occur for a long period of 
time.

So in creating a global temporary table from the data in the table on which we are gathering statistics, someone decided that there was a justification to create constraints (i.e. NOT NULL) on data which did not require validation.  Superfluous indeed!  The note goes on to comment…

Rediscovery Notes:
 Evidence of excessive constraint creation.

 In extreme cases this issue can also cause ORA-600 [12807] due to 
 exceeding the maximum number of allowed constraints over time.
 In this case one would see the _NEXT_CONSTRAINT value in CON$
 as very high.
  eg: select con# from con$ where name='_NEXT_CONSTRAINT' and owner#=0;
      ^
      If this value is anywhere near 2147483647 then the system is 
	close to the maximum constraint number.

In our situation, we were seeing those symptoms (i.e. poor performance for concurrent DBMS_STATS calls), in addition to the “allocate ITL entry” contention on the data dictionary tables.

The contention in the data dictionary slowly expanded to affect just about all operations, so I had time to construct a fun tree-structured query to illustrate the stacked pattern of database sessions blocking each other…

SQL> select  rpad('.', s.lvl-1, '.')||s.inst_id||'@'||s.sid||'.'||s.serial#||' - '||
  2          decode(o.owner,'',' ',o.owner||'.'||o.object_name||'('||o.object_type||')')||' - '||
  3          event||' ['||seconds_in_wait||'] {'||sql_id||'}' txt
  4  from    (select  r, lvl, inst_id, sid, serial#,
  5                   blocking_instance, blocking_session,
  6                   row_wait_obj#, sql_id, sql_child_number,
  7                   event, seconds_in_wait,
  8                   max(lvl) over (partition by cbr_sid) max_lvl
  9           from    (select      rownum r, level lvl, inst_id, sid, serial#,
 10                                connect_by_root((inst_id*1000000)+sid) cbr_sid,
 11                                blocking_instance, blocking_session,
 12                                row_wait_obj#, sql_id, sql_child_number,
 13                                event, seconds_in_wait
 14                    from        gv$session
 15                    where       status = 'ACTIVE'
 16                    and         state = 'WAITING'
 17                    start with  blocking_session is null
 18                    connect by  blocking_instance = prior inst_id
 19                           and  blocking_session = prior sid)) s,
 20          dba_objects     o
 21  where   s.max_lvl > 1
 22  and     o.object_id (+) = s.row_wait_obj#
 23  order by s.r;

2@219.1449 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {bsnkd25z5bv1u}
.1@1076.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63335] {4bjwv5sp99589}
.2@741.301 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {7pay8fq94tkrh}
..2@3522.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {awuxsc7bwcs3v}
...2@439.5 -   - row cache lock [3] {bmp0ynrs81mf1}
....2@1633.12341 -   - row cache lock [1] {ajbqjx5px1dsz}
.....1@240.877 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63308] {4bjwv5sp99589}
.....1@3383.1 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63301] {4bjwv5sp99589}
.....2@2454.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {3b2st0yh73gdc}
......2@1145.15 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {650n0can8wufy}
.......1@62.1095 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63337] {4bjwv5sp99589}
.......2@2792.13 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {8vu6d1tzrwm4w}
........2@599.1931 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {b7a4r7wq8z6bc}
.........2@3150.407 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {bmss7cw3ts40r}
..........1@1715.431 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63338] {4bjwv5sp99589}
..........2@3188.2189 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {6w5j91fdpgnv6}
...........1@1.6249 - SYS.AUD$(TABLE) - enq: TX - allocate ITL entry [25034] {9xp91bkh5na4k}
...........2@2308.87 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {a95xtgf967t5x}
............2@1968.671 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {bu5pu9r77rj7t}
.............2@1379.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {fhhg93axnx3a3}
..............2@2504.669 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {9w0wb60n8tvuq}
...............2@125.1281 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63330] {4bjwv5sp99589}
...............2@308.717 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {0n56a6hk42553}
................2@474.7 - SYS.AUD$(TABLE) - enq: TX - allocate ITL entry [24280] {8ydgkfhxf3mdm}
.................2@2779.6395 -   - row cache lock [3] {8ydgkfhxf3mdm}
.................2@3600.27 -   - row cache lock [2] {8ydgkfhxf3mdm}
.................2@3680.43037 - SYS.AUD$(TABLE) - row cache lock [1] {8ydgkfhxf3mdm}
................2@607.2029 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {4xvk29rqdkkba}
.................2@1729.2931 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {3d5pyqj9udu51}
..................2@2785.21 -   - row cache lock [2] {70xtajvb8vg80}
...................2@2788.7341 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {gbc0xfpnjmrak}
....................2@1007.271 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {6h42v2v1j2q2f}
.....................2@1606.25 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {c1sh35uq76vxh}
......................1@3606.1375 -   - enq: TX - allocate ITL entry [63258] {59vjj34vugaav}
......................2@2072.1703 -   - enq: TX - allocate ITL entry [63203] {bwt0pmxhv7qk7}
......................2@3250.75 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {d54nq2r6zj4ax}
.......................1@3249.6625 - SYS.SCHEDULER$_JOB(TABLE) - enq: TX - allocate ITL entry [62979] {4bjwv5sp99589}
........................1@3364.3 - SYS.SCHEDULER$_GLOBAL_ATTRIBUTE(TABLE) - enq: JS - queue lock [58125] {}
.......................2@711.1977 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {b1wuayvajfa64}
........................2@197.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {fn3akj947hpty}
.........................2@2544.35 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63342] {4bjwv5sp99589}
.........................2@3392.13 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63339] {4bjwv5sp99589}
.........................2@3462.16147 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {ak6t8bjy44f66}
..........................1@3198.585 -   - enq: TX - allocate ITL entry [63262] {59vjj34vugaav}
..........................2@1257.21 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {dvn1xr77da6ra}
...........................2@3402.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {6x5am3j3u3n1z}
............................2@2421.487 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {8ymnjz5uvrr25}
.............................2@1857.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {ayadjnh49j01z}
..............................2@2697.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {80y25g63db9rm}
...............................2@1360.143 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {8zc8xmm2zxj2r}
................................1@534.1737 -   - enq: TX - allocate ITL entry [63271] {59vjj34vugaav}
................................2@2931.19 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {7yau3vaq99r7a}
.................................1@2480.1111 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63341] {4bjwv5sp99589}
.................................2@2930.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {5udwzbjrwha9v}
..................................2@2077.35 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {g0u3fx9rk023z}
...................................2@850.2303 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {3rb4zwsmgkuaz}
....................................2@1965.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {6qcgpq814jtr0}
.....................................1@3676.1 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63341] {4bjwv5sp99589}
.....................................2@419.1641 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {7y8p7g5z9zwfc}
......................................1@2315.21449 - SYS.I_OBJ1(INDEX) - enq: TX - allocate ITL entry [63341] {59vjj34vugaav}
.......................................2@3014.16071 -   - row cache lock [0] {}
......................................2@1507.11 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {dhxvmvpdhfv1s}
.......................................2@2573.37 -   - row cache lock [0] {0v2nusrufxjfm}
........................................2@2391.1429 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {9kwa78pr71b6w}
.........................................2@3079.16241 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {4dq16jamcyw39}
..........................................2@3032.823 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {2g0f7m56yxfqq}
...........................................2@617.1471 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {7c74qc3mnr3gz}
............................................2@36.1071 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {a2qscspby48xz}
.............................................2@366.1099 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {7j95tn9j7dyy1}
..............................................2@632.2797 -   - enq: TX - allocate ITL entry [63198] {bwt0pmxhv7qk7}
..............................................2@1500.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {cr15w70sdbx4k}
...............................................1@299.1461 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63335] {4bjwv5sp99589}
...............................................1@1959.1327 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63306] {4bjwv5sp99589}
...............................................1@2370.3509 -   - enq: TX - allocate ITL entry [63305] {4bjwv5sp99589}
...............................................2@3602.5729 -   - row cache lock [3] {2vckpdzuzthjh}
................................................2@1841.51 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {3bpm8xm0g5vp8}
.................................................2@717.317 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {11gzrjaag1pjx}
..................................................2@72.25 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {7qfczckxgdurc}
...................................................1@1894.18487 -   - enq: TX - allocate ITL entry [63256] {59vjj34vugaav}
...................................................2@75.1265 -   - enq: TX - allocate ITL entry [63331] {4bjwv5sp99589}
...................................................2@1276.15 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {1zqhynv3axs0m}
....................................................2@5.1415 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {44kxhsxbqg0hw}
.....................................................2@981.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {d0a7z7qgj8z62}
......................................................2@1986.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {4xrg2kf0d570j}
.......................................................2@2601.4649 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {4nc5b2xsb3f82}
........................................................2@3745.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {3p3c6gtg54vsv}
.........................................................2@272.1851 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {34yp7a3fcvkuv}
..........................................................2@2212.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {1s99z5rw25kux}
...........................................................2@249.1715 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {3vd70d6n0hc0h}
............................................................1@178.469 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63324] {4bjwv5sp99589}
............................................................2@3366.2951 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {9w3fjfncud71b}
.............................................................2@1960.3113 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {adn4gk433muxt}
..............................................................2@2574.11 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {1p6k5k3n2z9yx}
...............................................................2@3438.2105 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {gz8s1adarn60q}
................................................................1@3430.53519 -   - enq: TX - allocate ITL entry [63292] {59vjj34vugaav}
................................................................2@1025.23 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {f442682t8r16f}
.................................................................2@3560.17443 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {4570cty0ggvx2}
..................................................................2@3030.27 -   - row cache lock [3] {bujrahka3pb6m}
...................................................................2@1743.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {87673srwpxrh7}
....................................................................2@2456.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {apydhymp3aynf}
.....................................................................2@2906.333 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {22yrd1fvrsdv6}
......................................................................2@1871.15 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {7dysg327qcq05}
.......................................................................2@3623.889 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {gkzhhytjjx1u3}
........................................................................2@3732.337 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {9d3mhv6sqmyd7}
.........................................................................2@70.721 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {dzr08d4wk8jcd}
..........................................................................1@1192.1639 -   - enq: TX - allocate ITL entry [63260] {59vjj34vugaav}
..........................................................................2@1249.123 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63311] {4bjwv5sp99589}
..........................................................................2@2335.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63325] {4bjwv5sp99589}
..........................................................................2@3631.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {cng9azjaxhgb5}
...........................................................................1@3024.1 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63300] {4bjwv5sp99589}
...........................................................................2@614.405 -   - row cache lock [1] {2nf27axmrzvuf}
............................................................................2@748.2961 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {f0wxsdh5pdqzd}
.............................................................................1@1777.953 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63311] {4bjwv5sp99589}
.............................................................................2@864.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {00swnpvas1df7}
..............................................................................2@3048.5 -   - row cache lock [2] {2msq18uq154bc}
...............................................................................2@1721.45 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {6wfk2bf2udbqn}
................................................................................2@1860.21 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {g1u3w0upxyr10}
.................................................................................2@2068.59 -   - enq: TX - allocate ITL entry [63341] {8nhg2pdrzs3ww}
..................................................................................2@957.1423 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63344] {4bjwv5sp99589}
.................................................................................2@3659.32559 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [0] {0tgrjcckf2akt}
..................................................................................1@2190.37713 -   - enq: TX - allocate ITL entry [63258] {59vjj34vugaav}
..................................................................................2@484.433 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {bf2jpw6816sxn}
...................................................................................2@121.1553 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {48fnk2k3yd91c}
....................................................................................2@3257.551 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {1askjs3715rx1}
.....................................................................................2@3699.1787 -   - row cache lock [0] {fh70a2au1tp8j}
......................................................................................2@3283.21 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {g7suptjvx1s33}
.......................................................................................1@3434.16369 -   - enq: TX - allocate ITL entry [63287] {59vjj34vugaav}
.......................................................................................2@2679.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {4xug3acq6k0j1}
........................................................................................2@1869.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [1] {bbwyhcta8a9rv}
.........................................................................................2@1146.7 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [3] {3guv8dr2z6mmk}
..........................................................................................2@479.1161 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {7yk436u8nyrxu}
...........................................................................................2@130.1853 -   - enq: TX - allocate ITL entry [63321] {bwt0pmxhv7qk7}
...........................................................................................2@1623.27 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - row cache lock [2] {9znsq1kq2x9hy}
............................................................................................1@3130.359 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63328] {4bjwv5sp99589}
............................................................................................2@380.475 -   - row cache lock [2] {dz7z5spuv0f4v}
..........................................................................................2@863.1925 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63314] {4bjwv5sp99589}
.........................................................................2@1497.25 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63341] {4bjwv5sp99589}
....................................................................2@3137.59 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63324] {4bjwv5sp99589}
....................................................2@1130.141 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63340] {8nhg2pdrzs3ww}
.....................................................1@832.705 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63313] {4bjwv5sp99589}
.....................................................2@3521.19 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63311] {4bjwv5sp99589}
....................................................2@1325.25639 -   - enq: TX - allocate ITL entry [62268] {4bjwv5sp99589}
...................................................2@2656.2773 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63306] {4bjwv5sp99589}
..............................................2@2192.1119 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63315] {4bjwv5sp99589}
.............................................2@671.1621 -   - enq: TX - allocate ITL entry [63320] {bwt0pmxhv7qk7}
..............................................2@544.15201 -   - enq: TX - allocate ITL entry [63842] {a44jdqsy96zpc}
..............................................2@1138.40947 -   - enq: TX - allocate ITL entry [63719] {a44jdqsy96zpc}
..............................................2@1957.5145 -   - enq: TX - allocate ITL entry [63841] {a44jdqsy96zpc}
..............................................2@3152.879 -   - enq: TX - allocate ITL entry [63841] {a44jdqsy96zpc}
..............................................2@3256.36199 -   - enq: TX - allocate ITL entry [63720] {a44jdqsy96zpc}
..............................................2@3624.2265 -   - enq: TX - allocate ITL entry [63720] {a44jdqsy96zpc}
......................................2@3637.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63344] {4bjwv5sp99589}
.....................................2@2804.21 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63337] {4bjwv5sp99589}
...................................2@2088.5 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63314] {4bjwv5sp99589}
...................................2@2304.1377 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63317] {4bjwv5sp99589}
...........2@2373.5887 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63343] {4bjwv5sp99589}
...2@1389.35 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63345] {4bjwv5sp99589}
.2@3405.9 - SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$(TABLE) - enq: TX - allocate ITL entry [63343] {4bjwv5sp99589}

158 rows selected

I confess:  once I saw this output, I made little effort to understand the details, and simply admired the aesthetics of it.  The output from this query grew to almost 100 tree levels, so we would have quite a job on our hands if we were to begin killing sessions.

As it was Friday evening by the time we had all of the bug information and an action plan in hand, we bounced the production RAC instances to clear the problem.  One-off patches for bug #13784384 exist for specific versions, including 11.2.0.3.0, 11.2.0.3.5, and 11.2.0.3.6, because the fix is not included until 11.2.0.4, which has not yet been released.  Fortunately, this is a software-only patch (i.e. no modifications made to the data dictionary), so it is “RAC rolling-upgradeable”, allowing a minimum of downtime.

It all just goes to show, a party can erupt even under the most banal of circumstances.

What makes a DBA?

I wrote this article as a foreword for the 2007 Apress book “RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach” by Darl Kuhn, Sam Alapati, and Arup Nanda (ISBN 1590598512), and I’m pleased to learn it will be included in the exciting new Apress update “RMAN Recipes for Oracle Database 12c: A Problem-Solution Approach” (ISBN 143024836X), scheduled for 14-Aug 2013 publication, assuming that Oracle Database 12c^H^H^HNextGeneration is released prior to then…

RMAN Recipes 12c cover
What skills set the database administrator (DBA) apart from other technologists? Of the many responsibilities laid upon a DBA, which cannot be performed by someone else?
Adding database accounts? Creating tables and indexes? Installing and configuring databases? Optimizing the database and the applications that access and manipulate it?
All of these things are regularly performed by people who do not consider themselves database administrators. They consider themselves to be programmer/analysts, to be application developers, to be managers and directors, and they do all these things just to be able to move forward with their own job. Most application developers know how to run the Oracle Universal Installer – it’s just another graphical application, and accepting all the default choices is a perfectly valid way to get the job done, these days. Adding database accounts? That’s easy! Granting database privileges? Just give ‘em “DBA” or “SYSDBA” and no more problems! Creating tables and indexes? C’mon, that’s more of a developer’s job than the DBA’s job, isn’t it? Tuning Oracle databases is mostly about crafting efficient SQL statements, and while this job often falls to DBAs, it is best handled by the developers and programmers who write the SQL in the first place.
While many of these duties are correctly assigned to a DBA, they are not a hallmark of the job.
Think about the people flying airliners. With the degree of automation in aircraft cockpits now, it can be argued (with a lot of merit) that the planes can fly themselves, from take-off, through navigated flight, to touch-down. So, what are the pilots for?
If something goes wrong with the plane, you want the best pilots at the controls of that plane. Because when things go wrong, they go wrong in a hurry, and it takes somebody who knows exactly what all that PlayStation gadgetry is really controlling in that cockpit, and it takes somebody who can intelligently take control and land the thing safely when dozens of lights are flashing and dozens of alarms are buzzing. It’s not too hard to justify the presence of pilots on airplanes, in the end.
Likewise, fifty years ago, at the dawn of the American space program, a debate was underway, just as there is now – should space flights be manned or unmanned? There were good arguments in favor of the latter. The first astronauts weren’t human – they were dogs and chimps. When humans were finally included, the spacecraft engineers assured them that they were redundant, superfluous, just along for the ride; they were merely “spam in a can”, as the gallows humor went.
But it didn’t take long to prove those people wrong. The presence of a well-trained and comprehensively knowledgeable pilot in the spacecraft has proven its worth, time and time again. A classic example is the final 2 minutes of the historic Apollo 11 moon landing, when Neil Armstrong looked out the window of the Eagle lunar module and realized that their automated descent, controlled from Houston via computer, was dropping them into a boulder field. Only a few hundred meters from the lunar surface, Armstrong flipped the controls to manual and pushed the lunar module higher, seeking a more viable landing site. While Houston nervously and repeatedly queried for status, Armstrong calmly replied, “Hold, Houston” until, with only 30 seconds of fuel remaining, he set the lunar module down and declared that the Eagle had landed.
That is why we have human astronauts. This is what sets “spam in a can” apart from a pilot. This is why airliners, while heavily automated, have highly-trained pilots at the controls.
Which brings us back to database administrators … I hope!
What sets a DBA apart from an ambitious programmer or a developer doing what needs to be done to move forward?
It is the ability to prepare for trouble and recover from it. Database recovery in the event of failure or mishap is the most vital skill in a DBA’s toolkit.
The Oracle RDBMS has been around now for about 30 years. The internal mechanisms for backup and recovery have changed very little in the past 20 years. Of course there have been enhancements, but the basic mechanism for “hot” or online backups has remained essentially the same.
However, it is the mechanism for restore and recovery that took a great leap forward 10 years ago, when Oracle Recovery Manager (RMAN) was introduced with Oracle8. In a world where misnomers abound, Recovery Manager is quite aptly named. The focus of the product is not on automating backups, but rather on automating the steps of restore and recovery as much as possible.  Much of the early reluctance to adopt RMAN came about not from any failings in the product, but rather from disappointment that the product did not make the job of performing backups any easier.  Since backups are the operation that DBAs see most often, what RMAN does for recovery operations was not fully appreciated.
As I teach people how to use RMAN, I attempt to stress the mindset that RMAN is not just about performing backups. Rather, it is about “feeding” the RMAN “recovery catalog”.  Backups are not ends in themselves, but simply entries in the recovery catalog used by RMAN during restore and recovery operations. If a DBA considers it their duty to feed the recovery catalog with backup operations and other maintenance such as crosschecks, then we have someone who is truly preparing for the eventuality, not just the remote possibility, of restore and recovery. Someone who understands the tool, and is not just applying a different tool to bang in nails the same old way.
The knowledge and capability to recover a database from catastrophic failure is what sets a real DBA apart from someone who found the installer or who knows how to do the clickety-clickety thing in Oracle Enterprise Manager. And not just once, by luck, but knowing how to use RMAN to its full advantage, to work around those confusing and misleading error messages, and to verify backups and maintain and protect the recovery catalog(s) so as to virtually guarantee recoverability, each and every time.
It is this protective mindset, liberally seasoned with caution and pessimism, that separates DBAs from other technologists. Systems administrators and network administrators have much the same tendencies, but only database administrators are made responsible for never losing data.  Systems and networks can be made redundant, and if they fail it is only a matter of bringing them back to service, but data loss is forever and is never forgiven.
Years ago, I worked with a very no-nonsense vice-president. She didn’t want to know the details of my job, and rightly so. She simply stated, very clearly, “Failures happen, but don’t EVER tell me that you could not recover my data”. Message received.
This book is written by seasoned professionals who have been using RMAN since its inception. They have recognized that RMAN can be confusing, and feel that everyone should not have to go through the same learning curve in order to arrive at the same conclusions. So, they have gathered together their best practices and tried-and-true procedures and compiled them into this wonderful book.
If you are an Oracle database administrator, this could very well be the most important book you read. Technology books are famous for becoming “shelf-ware”, pristine and unopened books adorning shelves everywhere. This book will be the exception – the book that is dog-eared and worn, the cover falling off and pages smudged, found more often opened face down on a desk than perched serenely on a shelf. The information within this book is the very essence of the job of the Oracle DBA, the most important facet of the job, and I am grateful to Sam, Arup, and Darl for sharing.

Remembering Gary Dodge…

The world lost a remarkable person this week, my friend and mentor Gary Dodge.

He is survived by his wife Luann, to whom he was married 33 years, by his daughter Brigid and by his son Ryan, and by a tight-knit and equally talented and accomplished family.  And by friends and admirers too numerous to count, worldwide.

As long as I knew him, his email signature stated, “Building tomorrow’s legacy systems today, one crisis at a time“, succinctly expressing his dry, lightly-warped sense of humor, suitable even in an uptight business environment.

He is deeply missed.  Thank you, Gary.