Data Virtualization and Greener Data Centers

On the Saturday before the Oracle OpenWorld 2014 conference started, I had the bonus of learning that the Data Center Journal had published my article on how data virtualization leads to greener data centers.

So, rather than reprise the article here (which I’m tempted to do), please instead click here and give it a read!


Lovin’ la vida Oracle

As we prepare for the week of Oracle OpenWorld 2014, I look back on the 25 years I have spent within the orbit of Oracle Corporation.

I joined Oracle Consulting Services (OCS) as an employee on 15-January 1990 and had worked my way up to Technical Manager by the time I resigned to start my own consultancy on 31-July 1998.  I worked as an independent Oracle consultant from then (with a side trip into company-building with friends) until 30-April of this year.  On 01-May 2014, I joined the startup Delphix.

Throughout this quarter-century of La Vida Oracle, I’ve made a great living, but it has also been a great way of life.  I started presenting at the Rocky Mountain Oracle Users Group in 1993, and joined the board of directors in 1995.  I’ve since worked with many other Oracle users groups as a volunteer and I’ve found the experiences to be incredibly educational, in so many ways.  I’ve also met a lot of amazing people through volunteering at Oracle users groups.  I met the junta of the Oak Table Network, and joined that group in 2002.  I was elected as an Oracle ACE in 2007, before I even knew the program existed, then I was made an ACE Director in 2012, which is an elevation I appreciate but still never sought.

But over it all, all throughout, is Oracle.  The Big Red O.  Some people have had bad experiences at Oracle Corporation, and some have had REALLY bad experiences, just as people have good and bad experiences at any huge corporation.  It is in the spirit of the comment made famous by Winston Churchill:  “Democracy is the absolute worst form of government.  Except for all the others.”  Oracle is populated by, and led by, some very human … beings.  I love them all, some more than others.

So for 25 years now, out of the 37 years Oracle has been in existence, I have had a really great life.  La vida Oracle.  I am so GLAD I met ya!  And I love this life!

And so it continues today.  For the first time in a quarter century, I’m out of the direct orbit of Oracle, now that I’m working at Delphix.  I’m still heavily involved with Oracle as an Oracle ACE Director and adviser to the boards of three local Oracle users groups (RMOUG, NoCOUG, and NEOOUG) and a board member at ODTUG.

Delphix builds data virtualization software for Oracle, PostgreSQL, SQL Server, and Sybase ASE, as well as for file-system directories on Unix/Linux and Windows.  Virtualizing Oracle databases is a big part of Delphix’s business, but it is not the only part, and the non-Oracle parts are growing rapidly.  It’s refreshing to work with other database technologies, but I still love working with Oracle Database, and I’m continually impressed by Oracle’s technology prowess; the In-Memory option of Oracle Database 12c is a brilliant example.

Some say that Delphix competes with Oracle.  Be serious – please name a technology company that doesn’t compete with Oracle in one way or another, as the breadth of Oracle products and services is so expansive.

As an independent contractor at EvDBT for 16 years, I myself competed with Oracle Consulting in my own very small way.  But at the same time, I cooperated with Oracle by optimizing the implementation of Oracle technology.  I sure as heck understand who holds the tent up.

The same is true of Delphix.  As a company, Delphix can be said to compete with Oracle Enterprise Manager 12c Cloud Control in the niche known as Database-as-a-Service (DBaaS), specifically its SnapClone functionality.  The Delphix software appliance is very similar to this SnapClone piece, but SnapClone is just a small part of the scope of the vast EM12c Cloud Control product suite.

In the same way, I as an independent consultant could have been said to have competed with the EM12c diagnostics pack and performance tuning pack, because the techniques I used and taught tended to make people independent of those tools.

That’s not to say I steered people away from EM12c; it’s just that I myself didn’t use it for performance tuning, though gradually I learned to appreciate many of its features, not least through paying attention to my wife Kellyn Pot’vin.

In fact, Oracle Enterprise Manager 12c Cloud Control, using the Cloud API, can fully administer virtual databases created by Delphix.  After all, Delphix is just an alternate mechanism for implementing data virtualization.  Instead of using the Oracle DBaaS SnapClone mechanism, customers can also use Delphix.  So Delphix can become a part of EM12c.

So there is no competition between Delphix and Oracle.  Delphix is an alternative to the SnapClone mechanism underlying DBaaS, but Delphix virtual databases can still be orchestrated through the EM12c console.  It need not be an either-or choice.

Of course, I still have to write that extension through the EM12c cloud API, and I’m getting right on that.  Unless someone else gets to it first.

Keep your eye on the Oracle EM12c Extension Exchange webpage for more progress on integrating Delphix within EM12c…


#OakTable World at Oracle OpenWorld 2014

Where:  Children’s Creativity Museum, 221 4th St, San Francisco

When:  Mon-Tue, 29-30 September, 08:30 – 17:00 PDT

For the third year in a row at the same fantastic location right in the heart of the bustling Oracle OpenWorld 2014 extravaganza, OakTable World 2014 is bringing together the top geeks of the worldwide Oracle community to present on the topics not approved for the OpenWorld conference.  At the OpenWorld conference.  For free.

The beauty of this unconference is its ad-hoc nature.  In 2010, weary of flying from Europe to endure marketing-rich content, Mogens Norgaard conceived Oracle ClosedWorld as an informal venue for those who wanted to talk about cool deep-technical topics.  Oracle ClosedWorld was first held in the back dining room at Chevy’s Fresh Mex on 3rd and Howard, fueled by Mogens’ credit card holding an open tab.  The following year in 2011, ClosedWorld was moved a little ways down Howard Street to the upstairs room at the Thirsty Bear, once again fueled by Mogens’ (and other) credit cards keeping a tab open at the bar.

In 2012, Kyle Hailey took the lead, found a fantastic venue, herded all the cats to make a 2-day agenda, and arranged for corporate sponsorship from Delphix, Pythian, and Enkitec, who have continued to sponsor OakTable World each year since.

If you’re coming to Oracle OpenWorld 2014 and are hungry for good deep technical content, stop by at OakTable World 2014, located right between Moscone South and Moscone West, and get your mojo recharged.

If you’re local to the Bay Area but can’t afford Oracle OpenWorld, and you like deep technical stuff about Oracle database, stop by and enjoy the electricity of the largest Oracle conference in the world, and the best Oracle unconference right in the heart of it all.

OakTable World 2014 – driven by the OakTable Network, an informal society of drinkers with an Oracle problem.


#CloneAttack at Oracle OpenWorld 2014

Delphix and Dbvisit will be at the OTN Lounge in the lobby of Moscone South from 3:30 – 5:00pm on Monday 29-Sept.  Come join us to hear about #CloneAttack and #RepAttack, two great hands-on learning opportunities.


#CloneAttack is your chance to install a complete Delphix lab environment on your Windows or Mac laptop for you to play with and experiment at any time.  Experts Kyle Hailey, Steve Karam, Adam Bowen, Ben Prusinski, and I will be sharing USB “thumb” drives with the virtual machine OVA files for the lab environment, and we will be working one-on-one with you to help you get everything up and running, then to show you basic use-cases for cloning with Delphix.

Bring your laptop, bring your VMware, and get some data virtualization into your virtual life!

At the same time, #CloneAttack will be joined by #RepAttack by Dbvisit, where Arjen Visser, Jan Karremans, and the team will be helping you replicate Oracle to Oracle for zero downtime upgrades.

This just in!  #MonitorAttack from Confio SolarWinds will also be joining the party at the CCM on Tuesday to show you how to quickly and easily install Confio Ignite and enjoy the great features there.


Children’s Creativity Museum, 221 4th St, San Francisco


Tuesday, Sept 30 from 10am – 5pm PDT

Before you arrive:

Hardware requirements (either Mac or Windows):

  • at least 8 GB RAM
  • at least 50 GB free disk space, but preferably 100 GB free
  • at least a 2 GHz CPU, preferably dual-core or better

Hello Delphix!

After almost 16 years as an independent consultant, with a couple side-steps into the world of small consulting-services startups, I’ve accepted an offer from Delphix, a startup building the future of information technology, enabling agile data management and storage virtualization.

I’m closing EvDBT as a business, since the employee count will drop from one to zero, and finishing up my consulting engagements before starting with my new employer on 01-May 2014.

Thank you, EvDBT.  You were my lifeboat and my vehicle to a better career and a better life!

The DBA is dead. Again.

Mark Twain never said, “Reports of my death are greatly exaggerated.”  Instead, his comment in 1897 was less tongue-in-cheek than matter-of-fact.  Confronted with news reports that he was gravely ill, he responded, “James Ross Clemens, a cousin of mine, was seriously ill two or three weeks ago in London, but is well now.  The report of my illness grew out of his illness; the report of my death was an exaggeration.”  I can only hope that, while remaining equally matter-of-fact, my comments will also grow wittier in the retelling than they were when written.  It is a lot for which to hope, as past experience suggests my comments generally provoke unintended offense.

Every few years, when wondrous new automation appears imminent, reports surface about the long-anticipated death of the role of the database administrator.  Sometimes it seems these reports arise out of sheer frustration that DBAs and databases still exist, as seemed to happen in 2008 during a conversation on the Oak Table email list, which closely followed a similar discussion on the ORACLE-L list.  To wit:  the war is over, and we lost.

Alex Gorbachev commented succinctly at the time:

We have already “lost” the war many times, haven’t we?  We lost it to object-oriented databases (8i?)  We lost to XML databases (9i?)  We lost to grid databases (10g?)  And we are losing to what now with 11g?  The “fusion” will save us all with or *without* databases in the first place?  Yeah right … the end is close.

The focus of discussion on both email lists was a thought-provoking blog post in March 2008 by Dom Brooks entitled “The dea(r)th of Oracle RDBMS and contracting?”  He commented that the tide of history had finally turned against the Oracle database and the highly-visible role of database administrator.  Stiff competition from open-source competitors, emerging scalable technologies, absurd license fees, and belt-tightening by many IT shops were the overwhelming trend.  Add poor database design exacerbated by immature implementation:  if you’re going to produce a disaster, probably best that it not cost as much as Oracle.

My response on both email threads on ORACLE-L and the Oak Table was this…

Back in the 1980s, I worked for a company that had built some really cool applications in the area of travel reservations.  Eventually, the travel providers (i.e. airlines, hotels, car rental agencies, etc) caught on to what we were doing and did it themselves, effectively putting us out of business overnight.  So, it came time to sell the company off in pieces.  We tried to sell the applications, but nobody wanted them — they had their own, or could buy or build better.  We sold the hardware and facilities, but for pennies on the dollar.  Then, when we tried to sell the data, we hit the jackpot — everybody wanted the data, and we were able to sell it over and over again, to multiple buyers.

I never forgot that lesson, and several years later traded being a programmer for being a DBA because (as Michael just said, below) I like working with data.  Data, not programs, is the only thing that matters — applications are transient and have no value except to acquire, manipulate, and display data.  Data is the only thing with value.  The long-term value of data is the reason I’ve moved toward data warehousing and business intelligence, too.

Data is important.  Databases manage data.  DBAs architect, configure, and manage databases.  So, being a skilled database administrator will always be necessary as long as data exists.  If the state of the art ceases advancing, then automation will finally catch up to extinguish the DBA role/job.  But until then, being a DBA is a career.

That’s my story.  And I’m stickin’ to it.

Doug Burns was following both threads and was kind enough to lend his support in a post to his blog entitled “There’s Hope For Us All”, in which he stated “although it doesn’t reflect my personal experience in the slightest, there was something about what he had to say and the way he said it that rung very true to me.”  Kinder words are rarely spoken, and thank you, Doug.  And thank you too, Dom, for your follow-up comment to Doug’s post:  “Solidarity Brother!  I’m sure Tim’s right and will continue to be right.  I was having an emotional moment… the flat earth society are everywhere!”

We all have those moments.

And here we are again, having another moment.

Once again, the topic of discussion on the Oak Table list was a blog post from Kenny Gorman (no relation) entitled “The Database Administrator Is Dead.”  My father, who was a police officer for 25 years, worked in a far more dangerous profession, and over his career several people certainly wished him harmed or dead, and some even acted in that direction; yet in a general way, it seems my chosen profession has received more death threats.

Now, the forces opposing the DBA are not necessarily cheaper, different, or disruptive technology, but better automation and provisioning.  The role of the DBA will be smothered out of existence as highly-automated management consoles extend to the ultimate capability.  “Database As A Service”, or “DBaaS”, cloud provisioning for databases, is the next development predicted to obsolesce the database administrator.

The synchronicity of these discussions is spooky.  During the week previous to the discussion of Mr. [Kenny] Gorman’s blog post, I had related another particular story 4-5 separate times to 4-5 separate people, and now I found that I was relating it yet again, this time to the Oak Table email list.  It was something of a continuation from my earlier story…

In the 1990s, when I chose to move from being a developer to a DBA, the trend of out-sourcing was abundantly evident, though not yet augmented by the trend of offshoring.  In 1999 I delivered my first-ever keynote address, at a conference in Portland, Maine to the Maine’S Oracle Users Group (MSOUG), on the topic of being a DBA in a world of out-sourcing.  I described a visualization of one of those water-holes in the Sahara:  brimming and supporting a lush oasis during the rainy season, but drying up and shrinking to a small muddy puddle during the dry season, surrounded by dead vegetation and dead animals that didn’t quite make it to this water-hole or another.

Repeating the comments in Doug’s blog, code comes and goes but data is the only thing with lasting value.  I visualized that the dead vegetation and dead animals surrounding the muddy remainder of the water-hole were developers and DBAs whose jobs were outsourced.  Right in the middle of the muddy water were two eyes above the surface:  the skilled DBA, guarding the remainder of the water-hole, containing only the most important stuff that couldn’t be outsourced or offshored.  I had long decided that I would be that DBA, and stay as close to the data as I could, especially the most strategic data (i.e. data warehousing).

I figure y’all might have as much fun as the good folks at MSOUG did with that visualization, especially when subjected to Freudian and Jungian dream analysis.

Though it has nothing to do with why I’ve related this story 4-5 times previously this week, in this context, the author of the article (we’re not related) talks about having been an Oracle DBA 15 years ago, which is about the time I did my keynote for MSOUG.

Perhaps he left the field too early?  :-)

I completely agree with his “automate or die” comment, and I might add “keep learning or die”.  Of course the job’s roles are changing, but besides DBaaS being a long way from the pointy-and-clicky utopia that the post implies, the question remains:  who sets up the DBaaS environments?  DBaaS isn’t the end of the DBA role; it is more automation.

Who will set up DBaaS environments, if not DBAs?  Don’t get me wrong:  I agree that DBaaS is here.  And I think DBAs will set it up, use it, and improve on it.

That’s my story.  And I’m stickin’ to it.

15 years of EvDBT

I worked at Oracle Consulting for eight and a half years, from January 1990 until July 1998, starting as a senior consultant and finishing as a technical manager.  In the summer of 1998, I was experiencing a dual crisis in my career, directionally and ethically.

From the directional perspective, Oracle Consulting was sending very clear signals that the way Gary Dodge and I were doing business in the Denver consulting practice was not aligned with corporate goals.  The corporation wanted vertical “centers of expertise” with global and national scope.  In Denver, Gary and I managed about a dozen generalists, with experience ranging from very junior to very senior, who effectively covered all types of technology.  Our goal was to let each person work locally on the type of work they enjoyed, occasionally coercing some to try something different.  Many of us had families, and all of us lived in Colorado for a reason.

Attempting to adhere to corporate direction, when we received a request from a local customer, we would first contact the relevant national or global “center of expertise”.  Most often, we would be told that nobody was available within the next few weeks (or months) and that, when somebody did become available, the rates charged would reflect a very senior person plus travel expenses.  We would feed that response back to the customer, who understandably became concerned or irate and asked for one of our local generalists, whom they had probably used previously, and which would have been our first response anyway.  In almost every case, we would end up staffing one of our local folks on the engagement, who often completed the engagement before the national or global group’s person became available.  As this continued, the pressure from corporate became more direct, complaining about a “black hole in the Rockies”.  So, looking ahead into the future at Oracle, I saw a model of business with which I wasn’t comfortable:  our local people getting on planes to work elsewhere, while out-of-town personnel were flying into Colorado to work here.  Perhaps it looked good from a higher level, but from our street-level view, it was absurd.

However, I also had a more serious ethical problem.  I had been sent to Los Angeles to work an engagement involving my primary expertise at the time:  Oracle Parallel Server on IBM RS6000/SP clusters.  The customer was a start-up website job board.  Both IBM and Oracle were determined to sell some massive hardware and software in there, and were working together toward common purpose with rare cooperation.

Except the customer wasn’t cooperating.

Instead, they had come up with a far less-expensive scheme involving dozens of commodity servers:  one server contained a master database, to which new job postings were added and changes were made, which was then replicated to dozens of read-only database servers using backup/restore, with a connection load-balancer directing traffic.  This allowed their read-mostly website to scale as needed by off-loading the reads from the master database and segregating the writes from the read-only databases.  It was fast, cheap, and easy — a rare occasion when it wasn’t necessary to choose only two.  It was novel for the time, I was impressed, and I said so.  Nowadays, such a thing is called a reader farm and can easily be implemented using Active Data Guard.
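The moral equivalent today can be sketched in just a couple of commands (a rough sketch, not a full walkthrough; it assumes a physical standby database has already been instantiated, and that the Active Data Guard option is licensed):

```sql
-- On the physical standby:  open it read-only, then resume redo apply.
-- With Active Data Guard, the standby serves queries while it stays
-- current with the primary -- the essence of a "reader farm".
SQL> alter database open read only;

SQL> alter database recover managed standby database
  2  using current logfile disconnect from session;
```

Repeat for as many standbys as the read workload requires, with a connection load-balancer in front, much as that customer did by hand.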

However, the IBM and Oracle teams were adamantly opposed – fast, cheap, and easy would ruin the lucrative deal they had planned for themselves.  So I was directly ordered by the regional vice-president in charge of the deal to reject as unworkable the customer’s plans and instead extol the virtues of Oracle Parallel Server and IBM RS6000/SP clustered servers one way or the other, and recommend it strongly in conclusion.

What to do?

I certainly did not enjoy being ordered to lie.  Not asked, but ordered.  On the other hand, I worked for Oracle and I had a boss and that boss stated very clearly what to do, as he had every right to do.  After all, no blood would be spilled, no babies would be killed.

So my solution to the ethical dilemma was:

  1. Complete the engagement as directed
  2. Prevent it from happening again

I am not smart enough to avoid making mistakes, but I believe in making mistakes only once.  I did what I was told to do, enduring the astonished looks from the good folks who couldn’t believe I was spouting such nonsense.  I subsequently resigned from Oracle, to avoid ever having to make that mistake again.  But having resigned from one well-regarded corporation, the question became:  are there any corporations, anywhere in the world, where I would not be asked to do something like that again?

The answer was simple and, in August 1998, Evergreen Database Technologies, Inc opened for business.

The first person I told of my decision to resign was Gary Dodge.  He wasn’t my supervisor, but we were peers.  I entered his office and closed the door, and he looked up and commented, “Oh, that’s not a good sign.”  I sat down and told him, and he nodded and said, “Well, good thing you closed the door, because I’m leaving also.”  He didn’t leave Oracle, but he left consulting, for the same directional reasons as I.  So, we didn’t inform our management together, but we informed them at the same time.

EvDBT hasn’t been open continuously over the past 15 years;  I have far too much to learn.  I spent a few years attempting to start another consulting-services company with some colleagues, and that ended unsuccessfully.  Any deal that starts with handshakes inevitably ends with lawyers, so my lesson is to always start with lawyers so that it ends with handshakes.

At one point, I hired in with Compaq Professional Services because they offered an intriguing opportunity.  However, my timing was bad, as Compaq was absorbed by HP a few months after I started, and knowing that I would not enjoy the noise and mess of the mating of the elephants, I moved on.

Thank you all for the past 15 years, and I look forward to the next 15 years.

Update on Friday 18-Oct 2013:  I’ve received some criticism and questions for my perceived criticism of Oracle in this article, particularly with the ethical dilemma described above.  I didn’t write this to criticize Oracle as a company, the situation simply happened while I was working there.  It is a large company like many others.  Corporations are comprised of people who respond in varying ways to the incentives given them.  I’m personally aware of many people with similar roles at Oracle who have not and never will react to their incentives in that particular way.  Likewise, I know of a few who would have reacted far worse.  It’s all part of the grand pageant of human behavior.

The person who ordered me to do my job was not himself facing an ethical dilemma.  He had brought me onto the engagement to expedite the deal, and he never imagined that I would balk;  it just wasn’t professional.

He had a task to do, and I began to jeopardize the success of that task.  I would hope to be as decisive and effective as he.

Keyword DETERMINISTIC is anything but…

According to the dictionary, the word “deterministic” means…

de·ter·min·is·tic adj. an inevitable consequence of antecedent sufficient causes

According to Wikipedia, the explanation of deterministic algorithm is…

In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.

In the Oracle PL/SQL Language documentation, it is used as a keyword, as follows…


Indicates that the function returns the same result value whenever it is called with the same values for its parameters.

You must specify this keyword if you intend to invoke the function in the expression of a function-based index or from the query of a materialized view that is marked REFRESH FAST or ENABLE QUERY REWRITE. When the database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than re-executing the function. If you subsequently change the semantics of the function, then you must manually rebuild all dependent function-based indexes and materialized views.

Do not specify this clause to define a function that uses package variables or that accesses the database in any way that might affect the return result of the function. The results of doing so are not captured if the database chooses not to re-execute the function.

These semantic rules govern the use of the DETERMINISTIC clause:

  • You can declare a schema-level subprogram DETERMINISTIC.

  • You can declare a package-level subprogram DETERMINISTIC in the package specification but not in the package body.

  • You cannot declare DETERMINISTIC a private subprogram (declared inside another subprogram or inside a package body).

  • A DETERMINISTIC subprogram can invoke another subprogram whether the called program is declared DETERMINISTIC or not.

There is a subtle twist in this explanation.  It states that the keyword “indicates that the function returns the same result value whenever it is called with the same values for its parameters”, but if you think about the use of the verb indicates, you realize that they are conceding that the keyword itself doesn’t enforce the behavior.  Instead, it is curiously careful language chosen to sidestep the important fact that the PL/SQL compiler does not actually enforce the necessary behavior.

So as a result, it is possible to write the following function…

SQL> create or replace function test_func(in_col1 in number)
  2           return number deterministic
  3  is
  4           v_col1  number;
  5  begin
  6           select  col1
  7           into    v_col1
  8           from    test_tbl2
  9           where   col1 = in_col1;
 10           return(v_col1);
 11  end test_func;
 12  /
SQL> show errors
No errors.

Is this function really deterministic?  No, of course not.  Anyone else changing data in the TEST_TBL2 table can change the outcome of this function.

Yet, the DETERMINISTIC keyword did not cause compilation of the function to fail, as it should have.  Only the use of the pragma restrict_references using the qualifiers RNDS (i.e. read no database state), RNPS (i.e. read no package state), WNDS (i.e. write no database state), and WNPS (i.e. write no package state) would do that…

SQL> create or replace package test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number;
  5          pragma  restrict_references(test_func,RNPS,WNPS,RNDS,WNDS);
  6  end test_pkg;
  7  /

SQL> show errors
No errors.

SQL> create or replace package body test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number
  5          is
  6                  v_col1  number;
  7          begin
  8                  select  col1
  9                  into    v_col1
 10                  from    test_tbl2
 11                  where   col1 = in_col1;
 12                  return(v_col1);
 13          end test_func;
 14  end test_pkg;
 15  /

Warning: Package Body created with compilation errors.

SQL> show errors
Errors for PACKAGE BODY TEST_PKG:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/2      PLS-00452: Subprogram 'TEST_FUNC' violates its associated pragma

Notice that this pragma can only be used within a function declared in a PL/SQL package; it cannot be used in a standalone function.  But it proves that the PL/SQL compiler is capable of detecting the problem and failing the compilation.  They have the technology.
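To see that restriction for yourself, try attaching the pragma to a standalone function (a sketch; TEST_FUNC2 is a hypothetical name, and as I recall the compiler rejects the pragma with an error stating that it must be declared in a package specification):

```sql
SQL> create or replace function test_func2(in_col1 in number)
  2          return number
  3  is
  4          pragma  restrict_references(test_func2,RNPS,WNPS,RNDS,WNDS);
  5          v_col1  number;
  6  begin
  7          select  col1
  8          into    v_col1
  9          from    test_tbl2
 10          where   col1 = in_col1;
 11          return(v_col1);
 12  end test_func2;
 13  /
```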

Further, it is now possible to create a function-based index using this function…

SQL> create index test_tbl1_fbi01 on test_tbl1(test_func(col1))
  2  tablespace users compute statistics;

Index created.

…and that function-based index will be used by the Oracle optimizer for queries; after all, why shouldn’t it?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_col1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_COL1
-------------------- --------------------
                 170                  170

Execution Plan
Plan hash value: 357717947
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_col1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_COL1
-------------------- --------------------
                 170                  170

Execution Plan
Plan hash value: 1370928414
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |

SQL> select  t1.col1 t1_col1, t2.col1 t2_col1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

             T1_COL1              T2_COL1
-------------------- --------------------
                 170                  170

Execution Plan
Plan hash value: 2884964714
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |

So, whether the query uses the function-based index, performs a simple FULL table scan, or avoids the function-based index entirely, the results are the same.

But now suppose another session changes that row in TEST_TBL2?

SQL> update  test_tbl2
  2  set     col1 = 1700
  3  where   col1 = 170;

1 row updated.

SQL> commit;

Commit complete.

…and now someone performs a query using the function-based index?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_col1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_COL1
-------------------- --------------------
                 170                  170

Execution Plan
Plan hash value: 357717947
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |

How can that be?  We know that the UPDATE changed this data!  Here is proof, obtained by forcing a FULL table scan so that the function-based index is bypassed when evaluating the WHERE clause…

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

no rows selected

Execution Plan
Plan hash value: 1370928414
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |

And here is further proof, obtained by eliminating the function entirely from the query and instead performing a simple inner join…

SQL> select  t1.col1 t1_col1, t2.col1 t2_ool1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

no rows selected

Execution Plan
Plan hash value: 2884964714
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |

So, what PL/SQL has permitted us to do is create a situation in which it would be reasonable for end-users to conclude that the database has corrupted data.  In a sense, it has:  the corrupted data resides within the function-based index, where deterministic results are expected.
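To make the failure mode concrete, here is a reconstructed sketch of how such an index gets created in the first place (an assumption on my part: the names are taken from the plans above, but the exact DDL in the original test case may differ).  The root cause is a function that promises determinism while actually depending on another table:

```sql
-- Reconstructed sketch (assumption): the function falsely promises
-- determinism, which is the only reason Oracle permits the
-- function-based index to be created at all...
create or replace function test_func (p_col1 in number)
return number deterministic   -- a false promise: result depends on TEST_TBL2
is
    v_col1  number;
begin
    select col1 into v_col1 from test_tbl2 where col1 = p_col1;
    return v_col1;
exception
    when no_data_found then return null;
end test_func;
/

create index test_tbl1_fbi01 on test_tbl1 (test_func(col1));

-- Without the DETERMINISTIC keyword, the CREATE INDEX fails with
-- "ORA-30553: The function is not deterministic" -- which is exactly
-- the protection being defeated here.

-- Stopgap only: a rebuild refreshes the stale index entries, but the
-- index silently decays again the next time TEST_TBL2 changes...
alter index test_tbl1_fbi01 rebuild;
```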

I found this very situation on the loose in the wild;  that is, within a production application in use at a healthcare company.  Think about that.

I didn’t find it because someone complained about possible data corruption.  I found it because an AWR report pointed me at the SQL statement within the function, which was being executed 1.3 billion times over a 5 day period.  Each execution was quite fast, but if you do 1.3 billion of anything over less than a lifetime, someone will eventually notice.

If you consider that 1.3 billion executions over a 5-day period implies an average rate of about 3,000 executions per second, sustained, for every second of those 5 days, then you start to get an idea of the problem.  Especially when you consider that there were peaks and valleys in that activity.
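The arithmetic behind that rate is easy to verify:  5 days is 432,000 seconds, so…

```sql
SQL> select round(1300000000 / (5 * 24 * 60 * 60)) execs_per_sec from dual;

EXECS_PER_SEC
-------------
         3009
```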

So, I have raised the issue with the affected healthcare organization, and the problem is worming its way through change management.  In the meantime, this application continues to return incorrect information, over and over and over again.

Are you sure that none of your function-based indexes were built this way?


OOW13 and OTW13

Abstract selection for Oracle Open World 2013 (OOW13) completed recently, and two of the eleven abstracts that I had submitted have been accepted:  “Scaling To Infinity: Making Star Transformations Sing” on Monday 23-Sep at 1:45pm and “RDBMS Forensics: Troubleshooting Using ASH” on Sunday 22-Sep at 10:30am.

Lesser known, but more fun than a barrel of drunken DBAs, is the un-conference formerly known as “Oracle Closed World”, now known as “Oak Table World”.  Oak Table World 2013 (OTW13) is in the planning stages for a second year following the wild success of Oak Table World 2012.  OTW12 was cooked up at the last minute, less than 4 weeks before OOW12, and it had the impromptu energy of a Friday night kegger after a long week at work.  I had the honor of being first up at that event with a presentation on “The Fastest UPDATE Is An INSERT”, and it was a blast.  The event is going to be much bigger and more popular this year, particularly since OOW13 turned down so many excellent abstracts and presenters.

Deep technical content not suitable for OOW13?  Fine, we’ll give it an airing at OTW13!

OTW13 will once again be held at the Children’s Creativity Museum in Yerba Buena Gardens in San Francisco, CA on Monday-Tuesday, 23-24 September 2013.  The driving forces behind this wonderful event are Mogens Norgaard and Kyle Hailey, who have recruited more corporate sponsors (to be announced soon) and yes, EvDBT will once again be a sponsor.

I plan to present on “Three Types of Table Compression” on Monday 23-Sep at 9:00am.  The talk has to do with an experience I had last summer with a customer running the Oracle Demantra application, which had several quirks that prevented the use of either of the two types of table compression offered by Oracle (i.e. BASIC/OLTP compression and HCC compression).  But there is a third, little-known form of table compression rooted back in ancient Oracle versions which served the purpose, so most of the presentation is a detailed description of BASIC/OLTP compression and HCC compression, both of which are just a prelude to a discussion about trailing NULL columns.
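As a teaser for that discussion:  Oracle stores nothing at all for NULL columns at the end of a row, whereas a NULL that precedes a populated column still costs a length byte, so simply ordering rarely-populated columns last can shrink rows in any Oracle version.  A hypothetical illustration (table and column names invented for this sketch):

```sql
-- Hypothetical sketch: the sparsely-populated columns are deliberately
-- placed LAST, so that rows where they are NULL end early and the
-- trailing NULLs consume zero bytes of row storage.
create table claims_wide
( claim_id    number        not null
, claim_date  date          not null
, status      varchar2(10)  not null
, memo_text   varchar2(400)          -- NULL in most rows
, audit_note  varchar2(400)          -- NULL in most rows
);
```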

So, if you make it to the Moscone Center this September, be sure to take a brief detour from the trampled path between Moscone West and Moscone South, and take a peek behind the carousel at Yerba Buena Gardens, where Oak Table World 2013 (OTW13) awaits, and see the top-notch technical presentations that were declined for Open World.

Below is an extremely cool panoramic shot taken during my presentation at OTW12…

OTW12 20121001


This was fun, and I’ve been trying to remember to post about the things that are fun.

I was working with a customer using a purchased application from a relatively overbearing vendor.  You know the type:  they have very specific (and somewhat out-of-date) demands for the Oracle database environment on which their application is hosted.

My customer complained that some reports were taking 15-20 minutes to complete, when everyone who knew what the reports were doing was certain that they shouldn’t take more than a few seconds at most.  The search parameters to the report were very specific, and the report didn’t do a whole lot; it was only supposed to bring back a few bits ’n’ pieces, not the whole kit ’n’ caboodle.

So they extracted the text of the SQL statement from the application and emailed it to me.  Using that text, I searched the V$SQL view and found the SQL_ID.  Now we could have some fun…
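The V$SQL search itself is a one-liner; a sketch of it, with a hypothetical text fragment standing in for the real (recognizable) one:

```sql
-- Sketch: the LIKE pattern below is illustrative, standing in for a
-- distinctive fragment of the emailed SQL text...
select sql_id, child_number, plan_hash_value, executions, elapsed_time
from   v$sql
where  sql_text like '%qRoot%';
```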

The first step is to get some context for the problem, and that means looking at the recent “history” of executions (as displayed by my “sqlhistory.sql” script), which looked like this…

 |Plan HV     Min Snap  Max Snap  Execs       LIO/Exec       PIO/Exec       CPU/Exec    Elapsd/Exec |
 |1834398219  30594     30595     1           15,244,622     0              1,277.87    1,633.08    |
 ========== PHV = 1834398219==========
 First seen from "06/13/13 10:30:22" (snap #30594)
 Last seen from  "06/13/13 11:00:23" (snap #30595)
 Execs          LIO            PIO            CPU            Elapsed
 =====          ===            ===            ===            =======
 1              15,244,622     0              1,277.87       1,633.08
 Plan hash value: 1834398219

 | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
 |   0 | SELECT STATEMENT                  |                  |       |       |   124K(100)|          |
 |   1 |  COUNT STOPKEY                    |                  |       |       |            |          |
 |   2 |   VIEW                            |                  |     1 |    95 |   124K  (2)| 00:10:15 |
 |   3 |    SORT ORDER BY STOPKEY          |                  |     1 |    94 |   124K  (2)| 00:10:15 |
 |   4 |     NESTED LOOPS SEMI             |                  |     1 |    94 |   124K  (2)| 00:10:15 |
 |   5 |      NESTED LOOPS SEMI            |                  |     1 |    88 |    35   (0)| 00:00:01 |
 |   6 |       NESTED LOOPS                |                  |     1 |    64 |    33   (0)| 00:00:01 |
 |   7 |        TABLE ACCESS BY INDEX ROWID| CC_CLAIM         |     1 |    55 |    32   (0)| 00:00:01 |
 |   8 |         INDEX RANGE SCAN          | CC000000D0U3     |    11 |       |    22   (0)| 00:00:01 |
 |   9 |        INDEX UNIQUE SCAN          | CC000000DKU3     |     1 |     9 |     1   (0)| 00:00:01 |
 |  10 |       INDEX RANGE SCAN            | CC000000D2U6     |   346K|  8110K|     2   (0)| 00:00:01 |
 |  11 |      VIEW                         | VW_NSO_1         |    20 |   120 |   124K  (2)| 00:10:15 |
 |  12 |       NESTED LOOPS                |                  |    20 |   680 |   124K  (2)| 00:10:15 |
 |  13 |        INDEX FULL SCAN            | PERSONLASTFIRSTN |    20 |   380 |   124K  (2)| 00:10:14 |
 |  14 |        INDEX RANGE SCAN           | CC000000D3U3     |     1 |    15 |     2   (0)| 00:00:01 |

                                     Summary Execution Statistics Over Time
                                                                               Avg                 Avg
 Snapshot                          Avg LIO             Avg PIO          CPU (secs)      Elapsed (secs)
 Time            Execs            Per Exec            Per Exec            Per Exec            Per Exec
 ------------ -------- ------------------- ------------------- ------------------- -------------------
 13-JUN 10:30        1        3,362,907.00                0.00              284.33              358.08
 13-JUN 11:00        0       11,881,715.00                0.00              993.54            1,275.00
              -------- ------------------- ------------------- ------------------- -------------------
 avg                          7,622,311.00                0.00              638.94              816.54
 sum                 1

So: only one execution, which started sometime between 10:30am and 11:00am and continued into the 11:00-11:30am period, racking up a little over 27 minutes (1,633.08 seconds) of run-time.

Please note the plan_hash_value of 1834398219;  my next task is to create a SQL*Plus script that provides a reproducible test case producing the exact same execution plan.  Because I’m a command-line kind of guy…

I copied the text of the SQL statement and prettied it up manually, a process which is time-consuming but also lets me learn intimately how the query is written.  To find the data values to plug into the bind variables used by the SQL statement, I queried the V$SQL_BIND_CAPTURE view as follows…

SQL> select  child_number, name, position, datatype_string, value_string
   2  from    v$sql_bind_capture
   3  where   sql_id = '&&V_SQL_ID'
   4  order by 1, 3;
 old   3: where    sql_id = '&&V_SQL_ID'
 new   3: where    sql_id = '4xmb4k9uhqn90'

 CHILD_NUMBER NAME         POSITION DATATYPE_STRING                VALUE_STRING
 ------------ ---------- ---------- ------------------------------ ------------------------------
            0 :1                  1 NUMBER                         5220
              :2                  2 NUMBER                         2
              :3                  3 NUMBER                         1
              :4                  4 NUMBER                         4
              :5                  5 NUMBER                         6
              :6                  6 NUMBER                         8
              :7                  7 NUMBER                         15
              :8                  8 NUMBER                         20
              :9                  9 NUMBER                         32
              :10                10 VARCHAR2(32)                   lena
              :11                11 NUMBER                         10002
              :12                12 NUMBER                         92
              :13                13 NUMBER                         124
              :14                14 NUMBER                         4
              :15                15 NUMBER                         5221
              :16                16 NUMBER                         15

So, now I have a SQL*Plus script ready to try to replicate the plan hash value from real-life…

set echo on feedback on timing on linesize 300 tab off
-- AUTOTRACE TRACEONLY EXPLAIN just parses, doesn't execute the query
set autotrace traceonly explain
-- permits output from AUTOTRACE to display wide, and not wrap
col plan_plus_exp format a295
variable b1 number 
variable b2 number
variable b3 number
variable b4 number
variable b5 number
variable b6 number
variable b7 number
variable b8 number
variable b9 number
variable b10 varchar2(32)
variable b11 number
variable b12 number
variable b13 number
variable b14 number
variable b15 number
variable b16 number
-- plugging in the bind-variable values captured from V$SQL_BIND_CAPTURE
exec :b1 := 5220;
exec :b2 := 2;
exec :b3 := 1;
exec :b4 := 4;
exec :b5 := 6;
exec :b6 := 8;
exec :b7 := 15;
exec :b8 := 20;
exec :b9 := 32;
exec :b10 := 'lena';
exec :b11 := 10002;
exec :b12 := 92;
exec :b13 := 124;
exec :b14 := 4;
exec :b15 := 5221;
exec :b16 := 15;
spool x10

(...SQL statement itself edited out, because it might be recognizable...)

spool off
set echo off feedback 6 timing off linesize 130 tab on autotrace off

So now, I can use the SQL*Plus AUTOTRACE TRACEONLY EXPLAIN command to determine if I have created a reproducible test case…

   2  FROM    (SELECT         qRoot.ID col0,

(...SQL statement body edited out, because it might be recognizable...)

  38  WHERE   rownum <= :b16;
 Elapsed: 00:00:00.00

 Execution Plan
 Plan hash value: 605492508      <== not the same Plan Hash Value

 | Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
 |   0 | SELECT STATEMENT                     |                 |     1 |    95 |    44   (5)| 00:00:01 |
 |*  1 |  COUNT STOPKEY                       |                 |       |       |            |          |
 |   2 |   VIEW                               |                 |     1 |    95 |    44   (5)| 00:00:01 |
 |*  3 |    SORT ORDER BY STOPKEY             |                 |     1 |    88 |    44   (5)| 00:00:01 |
 |   4 |     NESTED LOOPS SEMI                |                 |     1 |    88 |    30   (0)| 00:00:01 |
 |   5 |      NESTED LOOPS                    |                 |     1 |    64 |    28   (0)| 00:00:01 |
 |*  6 |       TABLE ACCESS BY INDEX ROWID    | TABLE_A1        |     1 |    55 |    27   (0)| 00:00:01 |
 |*  7 |        INDEX RANGE SCAN              | XX000000D0U3    |     1 |       |    26   (0)| 00:00:01 |
 |   8 |         NESTED LOOPS                 |                 |       |       |            |          |
 |   9 |          NESTED LOOPS                |                 |     2 |    68 |    12   (0)| 00:00:01 |
 |  10 |           TABLE ACCESS BY INDEX ROWID| TABLE_B2        |     7 |   105 |     7   (0)| 00:00:01 |
 |* 11 |            INDEX RANGE SCAN          | XX000000D3N2    |     7 |       |     4   (0)| 00:00:01 |
 |* 12 |           INDEX UNIQUE SCAN          | SYS_C002390384  |     1 |       |     1   (0)| 00:00:01 |
 |* 13 |          TABLE ACCESS BY INDEX ROWID | TABLE_B3        |     1 |    19 |     2   (0)| 00:00:01 |
 |* 14 |       INDEX UNIQUE SCAN              | XX000000DKU3    |     1 |     9 |     1   (0)| 00:00:01 |
 |* 15 |      INDEX RANGE SCAN                | XX000000D2U6    |   508K|    11M|     2   (0)| 00:00:01 |

Sadly, my SQL*Plus test script could not reproduce the exact same execution plan.  I looked at the CBO statistics on the underlying tables (and their associated indexes and columns) to see if a statistics re-gather since the most recent executions had changed the input values provided to the Oracle cost-based optimizer (CBO), but that was not the case.

Another area to check is the set of session-level parameter values in effect when the SQL statement was parsed, which are stored in the V$SQL_OPTIMIZER_ENV view, and here I hit the jackpot…

SQL> select name, value, isdefault from v$sql_optimizer_env where sql_id = '4xmb4k9uhqn90' order by name;

 NAME                                     VALUE                     ISD
 ---------------------------------------- ------------------------- ---
 _always_semi_join                        nested_loops              NO
 _enable_row_shipping                     false                     NO
 _fast_full_scan_enabled                  false                     NO
 _hash_join_enabled                       false                     NO
 _optimizer_cost_based_transformation     off                       NO
 _optimizer_sortmerge_join_enabled        false                     NO
 _pga_max_size                            1677720 KB                NO
 active_instance_count                    1                         YES
 bitmap_merge_area_size                   1048576                   YES
 cell_offload_compaction                  ADAPTIVE                  YES
 cell_offload_plan_display                AUTO                      YES
 cell_offload_processing                  true                      YES
 cpu_count                                12                        YES
 cursor_sharing                           exact                     YES
 db_file_multiblock_read_count            128                       YES
 hash_area_size                           131072                    YES
 is_recur_flags                           0                         YES
 optimizer_capture_sql_plan_baselines     false                     YES
 optimizer_dynamic_sampling               2                         YES
 optimizer_features_enable                        YES
 optimizer_index_caching                  0                         YES
 optimizer_index_cost_adj                 100                       YES
 optimizer_mode                           all_rows                  YES
 optimizer_secure_view_merging            true                      YES
 optimizer_use_invisible_indexes          false                     YES
 optimizer_use_pending_statistics         false                     YES
 optimizer_use_sql_plan_baselines         false                     NO
 parallel_ddl_mode                        enabled                   YES
 parallel_degree                          0                         YES
 parallel_dml_mode                        disabled                  YES
 parallel_execution_enabled               true                      YES
 parallel_query_default_dop               0                         YES
 parallel_query_mode                      enabled                   YES
 parallel_threads_per_cpu                 2                         YES
 pga_aggregate_target                     8388608 KB                YES
 query_rewrite_enabled                    true                      YES
 query_rewrite_integrity                  enforced                  YES
 result_cache_mode                        MANUAL                    YES
 skip_unusable_indexes                    true                      YES
 sort_area_retained_size                  0                         YES
 sort_area_size                           65536                     YES
 star_transformation_enabled              false                     YES
 statistics_level                         typical                   YES
 transaction_isolation_level              read_commited             YES
 workarea_size_policy                     auto                      YES

 45 rows selected.

It turns out that the session running the SQL statement had eight (8) database parameters set by ALTER SESSION SET commands; seven (7) of them are undocumented “underscore” parameters.  Two (2) of the eight changed parameters happen to have the same value as the global database instance (i.e. “_pga_max_size” and “optimizer_use_sql_plan_baselines”), but the other six (6) underscore parameters do not.  So, to determine whether these changes were the deciding factor, I re-ran my test case with those six parameters set at the session level to the values shown above…
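For reference, replaying those six non-default underscore settings at the session level (values taken verbatim from the V$SQL_OPTIMIZER_ENV output above) looks like this.  Needless to say, setting undocumented parameters like these belongs only in a test environment, or at the direction of Oracle Support:

```sql
alter session set "_always_semi_join" = nested_loops;
alter session set "_enable_row_shipping" = false;
alter session set "_fast_full_scan_enabled" = false;
alter session set "_hash_join_enabled" = false;
alter session set "_optimizer_cost_based_transformation" = off;
alter session set "_optimizer_sortmerge_join_enabled" = false;
```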

Plan hash value: 1834398219

 | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
 |   0 | SELECT STATEMENT                  |                  |     1 |    95 |   804K  (1)| 01:06:08 |
 |*  1 |  COUNT STOPKEY                    |                  |       |       |            |          |
 |   2 |   VIEW                            |                  |     1 |    95 |   804K  (1)| 01:06:08 |
 |*  3 |    SORT ORDER BY STOPKEY          |                  |     1 |    94 |   804K  (1)| 01:06:08 |
 |   4 |     NESTED LOOPS SEMI             |                  |     1 |    94 |   804K  (1)| 01:06:08 |
 |   5 |      NESTED LOOPS SEMI            |                  |     1 |    88 |    41   (0)| 00:00:01 |
 |   6 |       NESTED LOOPS                |                  |     1 |    64 |    39   (0)| 00:00:01 |
 |*  7 |        TABLE ACCESS BY INDEX ROWID| CC_CLAIM         |     1 |    55 |    38   (0)| 00:00:01 |
 |*  8 |         INDEX RANGE SCAN          | CC000000D0U3     |    14 |       |    26   (0)| 00:00:01 |
 |*  9 |        INDEX UNIQUE SCAN          | CC000000DKU3     |     1 |     9 |     1   (0)| 00:00:01 |
 |* 10 |       INDEX RANGE SCAN            | CC000000D2U6     |   508K|    11M|     2   (0)| 00:00:01 |
 |* 11 |      VIEW                         | VW_NSO_1         |   338K|  1983K|   804K  (1)| 01:06:07 |
 |  12 |       NESTED LOOPS                |                  |   338K|    10M|   804K  (1)| 01:06:07 |
 |* 13 |        INDEX FULL SCAN            | PERSONLASTFIRSTN |   338K|  6274K|   126K  (4)| 00:10:27 |
 |* 14 |        INDEX RANGE SCAN           | CC000000D3U3     |     1 |    15 |     2   (0)| 00:00:01 |

…and that, my friends, is a direct hit.  I hadn’t expected the solution to be this easy.  Usually, a problem fights tooth-and-nail before yielding a solution.

So now the big question becomes:  which is truly the better plan?  The Oracle optimizer clearly states that plan hash value 1834398219 (i.e. the existing execution plan) has an estimated cost of 804K and an estimated elapsed time of 1 hour, 6 minutes, 8 seconds, whereas plan hash value 605492508 (i.e. the execution plan using the default parameter values) has an estimated cost of 44 and an estimated elapsed time of 1 second.

And those estimates proved directionally correct:  when I ran the query with the new execution plan and the same data value inputs, it returned no rows in 0.3 seconds.  Changing the session parameters and re-running the same query took at least 5 minutes before I killed it.

So the better execution plan for this query comes from the default database-level parameter settings, not the session-level settings.

Now the question becomes:  where are the session parameters being changed?

For that answer, I ran another query, first finding a list of all the recently-executed SQL statements whose session-level parameters have been changed, and then scanning the Oracle Active Session History (ASH) data for client information…

SQL> select   distinct module, action, client_id, sql_id
   2  from     v$active_session_history
   3  where    (sql_id,sql_child_number) in
   4           (select sql_id, child_number
   5            from   v$sql_optimizer_env
   6            where  name = '_fast_full_scan_enabled' and value = 'false');

 MODULE          ACTION                           CLIENT_ID       SQL_ID
 --------------- -------------------------------- --------------- -------------
 AppCenterXX     WebReq:SimpleXYZSearch           aconnell        1rbd5kmgy25fz
 AppCenterXX     WebReq:XYZProcess                scalderon       6w7zt7rgjrsnv
 sqlplus@prod01                                                   1wadmdpy49qg9

(...167 rows edited out for brevity...)

 AppCenterXX     WebReq:SimpleXYZSearch           jwilson         5239t67sg662v
 AppCenterXX     WebReq:SimpleXYZSearch           jlacaille       d9qtthk5000sd

 171 rows selected.

Note my own SQL*Plus session in the output above (the “sqlplus@prod01” row), which corroborates that ALTER SESSION SET commands are what changed the environment.  One other thing to note:  all of the application sessions connect to the same Oracle account, so I checked whether there were any AFTER LOGON database triggers on that (or any other) account in the database; while there were AFTER LOGON triggers, they did not set these parameters.
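The trigger check was a quick trip to the data dictionary, something along these lines:

```sql
-- Enumerate logon triggers, then inspect the body of each one for
-- ALTER SESSION SET commands...
select owner, trigger_name, status, triggering_event
from   dba_triggers
where  triggering_event like '%LOGON%';
```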

So, I can only conclude that the application code itself is changing these eight (8) database parameters, and we need to find out why.  It is quite possible that the vendor provides some form of AFTER LOGON functionality within the application, permitting administrators to specify parameter changes, “seeded” with pre-configured values.  In that case, we just have to locate that capability and change the values.  The worst-case scenario would be these ALTER SESSION SET commands hard-coded somewhere in the application code;  let’s keep our fingers crossed that it is not that!  :-)

At any rate, it was fun finding such a clear example of the pitfalls of fiddling with undocumented “underscore” parameters.  I’ll update this post if we find anything interesting in the application code.

Evergreen Database Technologies, Inc.