Category Archives: Oracle

RMOUG calendar page

For those who are always seeking FREE technical learning opportunities, the calendar webpage of the Rocky Mountain Oracle Users Group (RMOUG) is a great resource to bookmark and check back on weekly.

RMOUG volunteers compile notifications of webinars, meetings, and meetups from the internet and post them here for everyone to use.

The information technology (IT) industry is always evolving, always changing.

Free webinars, even if they seem too commercial at times, always have at least one solid nugget of information that can make the difference in a professional’s life and career.

You never know when a piece of information that nobody else has is going to come in handy.

Stay a step ahead…

Avoiding Regret

After working for a variety of companies in the 1980s, after working for Oracle in the 1990s, after trying (and failing) to build a company with friends at the turn of the century, and after more than a decade working as an independent consultant in this new century, I found myself in a professional dilemma last year.

I know I need to work at least another ten years, probably more like fifteen years, to be able to retire.  I had survived the nastiest economic downturn since the Great Depression, the Great Recession of 2008-2011, while self-employed, and felt ready to take on the economic upswing, so I was confident that I could work steadily as an independent Oracle performance tuning consultant for the next 15 years or more.

Problem was: I was getting bored.

I loved my work.  I enjoy the “sleuthiness” and the forensic challenge of finding performance problems, testing and recommending solutions, and finding a way to describe it all clearly so that my customer can make the right decision.  I am confident that I can identify and address any database performance problem facing an application built on Oracle Database, and I have dozens of successful consulting engagements to bear witness.  I have a legion of happy customers and a seemingly endless supply of referrals.

Being confident is a great feeling, and I had spent the past several years just enjoying that feeling of confidence on each engagement, relishing the challenge, the chase, and the conclusion.

But it was becoming routine.  The best explanation I have is that I felt like a hammer, and I addressed every problem as if it was some form of nail.  I could feel my mental acuity ossifying.

Then, opportunity knocked, in an unexpected form from an unexpected direction.

I have a friend and colleague whom I’ve known for almost 20 years, named Kyle Hailey.  Kyle is one of those notably brilliant people, the kind of person to whom you pay attention immediately, whether you meet online or in person.  We had both worked at Oracle in the 1990s, and I had stayed in touch with him over the years since.

About four years ago, I became aware that he was involved in a new venture, a startup company called Delphix.  I wasn’t sure what it was about, but I paid attention because Kyle was involved.  Then, about three years ago, I was working as a DBA for a Colorado-based company that had decided to evaluate Delphix’s product.

Representing my customer, my job was to prevent a disaster from taking place.  I was to determine if the product had any merit, if the problems being experienced were insurmountable, and if so let my customer know so they could kill the project.

Actually, what my boss said was, “Get rid of them.  They’re a pain in my ass“.  I was just trying to be nice in the previous paragraph.

So OK.  I was supposed to give Delphix the bum’s rush, in a valid techie sort of way.  So I called the Delphix person handling the problem, and on the other end of the phone was Kyle.  Hmmm, I can give a stranger the bum’s rush, but not a friend.  Particularly, not someone whom I respect like a god.  So, instead I started working with him to resolve the issue.  And resolve it we did.  As a result, the company bought Delphix, and the rest is history.

Here’s how we did it…

But first, what does Delphix do?  The product itself is a storage appliance on a virtual machine in the data center.  It’s all software.  It uses sophisticated compression, deduplication, and copy-on-write technology to clone production databases for non-production usage such as development and testing.  It does this by importing a copy of the production database into the appliance, compressing that base copy down to 25-30% of its original size.  Then, it provides “virtual databases” from that base copy, each virtual database consuming almost no space at first, since almost everything is read from the base copy.  As changes are made to each virtual database, copy-on-write technology stores those changes only for that virtual database.  So, each virtual database is presented as a full image of the source database, but costs practically nothing to store.  Even though the Oracle database instances reside on separate servers, the virtual databases actually reside on the Delphix engine appliance and are presented via NFS.

I was asked to understand why the virtual databases were slow.

On the other end of the phone was Kyle, and with repeatable tests he was easily able to show me where the performance problems were and what their nature was, and that they were predictable and resolvable.

But I’m not really writing just about Delphix, even though it is very cool and quite earthshaking.  Rather, I’m writing about something bigger that has stormed into our industry, bringing to fruition something that I had tried — and failed — to accomplish at the turn of the century.  Back then, when some colleagues and I tried to build a hosted-application services company, we failed in two ways:  1) we were ahead of our time and 2) we chose the wrong customers.

Being ahead of one’s time is not a failure, strictly speaking.  It shows a clarity of vision, but bad timing.

However, choosing the wrong customers is absolutely a failure.

Before I explain, please be aware that I’m going to change most of the names, to protect any innocent bystanders…

After leaving Oracle in July 1998, I founded my own little company-of-one, called Evergreen Database Technologies (a.k.a. EvDBT).  The dot-com boom was booming, and I wanted to operate as an independent consultant.  I had been living in Evergreen in the foothills west of Denver for years, so the choice of name for my company was by no means a brilliant leap of imagination; I just named it after my hometown.  Business was brisk, even a bit overheated as Y2K approached, and I was busy.  And very happy.

In early 2000, I had been working with another young company called Upstart, and we felt that the information technology (IT) industry was heading in one inescapable direction:  hosted services.  So I joined Upstart and we decided that providing hosted and managed Oracle E-Business Suites (EBS) was a good business.  EBS is the world’s second most prevalent Enterprise Resource Planning (ERP) system and can be dizzyingly complex to deploy.  It is deployed in hundreds of different industries and is infinitely customizable, so in order to avoid being eaten alive by customization requests from our potential customers, we at Upstart would have to focus on a specific industry and pre-customize EBS according to the best practices for that industry.  We chose the telecommunications industry, because it was an even bigger industry in Colorado then than it is now.  We wanted to focus on small telecommunications companies, being small ourselves.  At that time, small telecommunications companies were plentiful because of governmental deregulation in the industry.  These companies offered DSL internet and phone services, and in June 2000 our market research told us it was a US$9 billion industry segment and growing.

Unfortunately, all of the big primary telecommunications carriers bought the same market research and were just as quick to catch on, and they undercut the DSL providers, provided cheaper and better service, and put the myriad of small startups out of business practically overnight.  “Overnight” is not a big exaggeration.

By October 2000, we at Upstart were stunned to discover that our customer base had literally ceased answering their phones, which is chillingly ominous when you consider they were telecommunications companies.   Of course, the carnage directly impacted the companies that had hoped to make money from those customers, such as Upstart.  Only four months in business ourselves, we watched our target market vanish like smoke.  You might say we were…. a little disconcerted.

We at Upstart had a difference of opinion on how to proceed, with some of the principals arguing to continue sucking down venture-capital funding and stay the course, while others (including me) argued that we had to find some way, any way, to resume generating our own revenue on which to survive.  I advocated returning to Upstart’s previous business model of consulting services, but the principals who wanted to stay the course with the managed services model couldn’t be budged.  By December 2000, Upstart was bankrupt, and the managed-services principals ransacked the bank accounts and took home golden parachutes for themselves.  I jumped back into my own personal lifeboat, my little consulting services company-of-one, Evergreen Database Technologies.  And continued doing what I knew best.

This was a scenario that I’m told was repeated in many companies that flamed out during the “dot-bomb” era.  There are a million stories in the big city, and mine is one of them.

But let’s just suppose that Upstart had chosen a different customer base, one that didn’t disappear completely within months.  Would it have survived?

There is a good chance that we would still have failed, due to being ahead of our time and also ahead of the means to succeed.  Hosted and managed applications, which today are called “cloud services”, were made more difficult back then by the fact that software was (and is) designed with the intention of occupying entire servers.  The E-Business Suites documentation from Oracle assumed so, and support at Oracle assumed the same.  This meant that we at Upstart had to provision several entire server machines for each customer, which is precisely what they had been doing for themselves, so there was little reason we could do it cheaper.  We could not operate much more cheaply than our customers had, leaving us with very thin cost savings and ridiculously small profit margins.

Successful new businesses are not founded on incremental improvement.  They must be founded on massive change.

What we needed at that time was server virtualization, which came along a few years later in the form of companies like VMware.  Not until server virtualization permitted us to run enterprise software on virtual machines, which could be stacked en masse on physical server machines, could we have hoped to operate in a manner efficient enough to save costs and make money.

Fast forward to today.

Today, server virtualization is everywhere.  Server virtualization is deeply embedded in every data center.  You can create virtual machines on your laptop, a stack of blade servers, or a mainframe, emulating almost any operating system that has ever existed, and creating them in such a way that finally makes full use of all the resources of real, physical servers.  No longer would system administrators rack, wire, and network physical servers for individual applications using customized specifications for each server.  Instead, virtual machines could be configured according to the customized specifications, and those virtual machines run by the dozens on physical machines.

The advent of virtual machines also brought about the operations paradise of abstracting computer servers completely into software, so that they could be built, configured, operated, and destroyed entirely like the software constructs they were.  No more racking and wiring, one server per application.  Now, banks of “blades” were racked and wired generically, and virtual machines balanced within and across blades, with busier virtual machines moving toward available CPU, memory, and networking resources and quieter virtual machines yielding CPU, memory, and networking to others.  Best of all, all of this virtualization converted hardware into software, and could be programmed and controlled like software.

Everything is virtualized, and all is good.

Except storage.

Think about it.  It is easy and cheap to provision another virtual machine, using fractions of CPU cores and RAM.  But each of those virtual machines needs a full image of its operating system, application software, and database.  While server virtualization permitted data centers to use physical servers more efficiently, it set off a veritable supernova of storage consumption.  So much so that analysts like Gartner have predicted a “data crisis” before the end of the decade.

This is where Delphix comes in.

By virtualizing data as well as servers, it is now truly fast, cheap, and easy to provision entire virtual environments.  Delphix works with Oracle, and it also works with SQL Server, PostgreSQL, MySQL, DB2, and Sybase.  Even more importantly, it also virtualizes file-systems, so that application software as well as databases can be virtualized.

So back in early 2014, Kyle contacted me and asked if I would be interested in joining Delphix.  My first reaction was the one I always had, which was “no thanks, I’ve already got a job”.  I mean, I was a successful and flourishing independent consultant.  Why would I consider working within any company anymore?  Business was brisk, I never had downtime, and the economy was improving.  I had successfully been operating as an independent consultant for most of the past 15 years.  Why fix what wasn’t broken?

But here was the crux of the matter…

I wanted to try something new, to expand beyond what I had been doing for the 25 years since I first joined Oracle.  If it had been a large established company beckoning, I wouldn’t have considered it for a second.  But a promising startup company, with a great idea and a four-year track record of success already, and still pre-IPO as well?

I couldn’t resist the gamble.  What’s not to like?

It’s possible I’ve made an enormous mistake, but I don’t think so.

Not to turn excessively morbid, but all of us are just a heartbeat away from our common destination.  I believe that, when I’m at the last moments of my life, the thing I fear will not be death itself, or pain, or leaving life behind.

It is regret that I fear.

And regret can take many forms, but the most painful regret will undoubtedly be over what might have been, the opportunities passed or missed.  Career is only one aspect of life, and I don’t want to give it too much significance.  I’ve accumulated regrets in how I’ve lived my life and how I’ve treated the people in it; some are small, but some will always haunt me.

But with regards to my professional career, as Robert Frost said, I’ve taken the road less traveled, and that has made all the difference.  No regrets here.

Oracle ACE program

What is the Oracle ACE program?

The Oracle ACE program is a community advocacy program for Oracle technology evangelists and enthusiasts, sponsored and managed by Oracle Corporation.  As stated on the ACE overview page at “http://oracle.com/technetwork/community/oracle-ace”, it is both a network and a resource for everyone in the Oracle community.  It is not a certification program:  there is no course to study and no test to pass.  Rather, the ACE program is a program of recognition by one’s colleagues and peers, and joining is a process of nomination, review, and acceptance.

Who are Oracle ACEs?

They are your colleagues in the worldwide Oracle technology community, of which you and the RMOUG community here in Colorado are a significant part.  There are now more than 500 people in 55 countries around the world who have been recognized by their peers and by Oracle Corporation.  They are not employees of Oracle, but rather partners and customers of Oracle.

The ACE program is now 10 years old and comprises three levels…

ACE Associate – this is the entry point for the program and an ideal form of recognition for technology community advocates who are building their reputations

ACE – established advocates for Oracle technology who are well-known in their community

ACE Director – top-tier members of the worldwide community who engage with many users groups and advocates and with Oracle Corporation

The ACE program is always growing, and the current members of the program are always happy to help you step up to the next level in your career.  ACEs are expected to contribute in some or all of the following ways:

  • contribute actively to technical forums such as the OTN community forums or the ORACLE-L email list
  • publish articles in newsletters, magazines, and blogs
  • publish books
  • tweet and post on social media
  • organize community activities such as Oracle users groups
  • present sessions at Oracle conferences and Oracle user group meetings

Few people do all of these things, so don’t think that everyone does, but be aware that there are some who do, driving their careers above and beyond.

Joining the ACE program is not a one-time task, but an ongoing commitment to contributing and sharing in order to build your community, whether that community is local to where you live, national, or worldwide.

To find out more about the program, go to the web page at “http://oracle.com/technetwork/community/oracle-ace/become-an-ace” or just contact one of the Oracle ACEs near you and ask questions.

That’s why we’re here.

If you want something done, ask a busy person…

This is a re-post I originally made on the ODTUG website on 17-Jan 2013 at the beginning of my two-year term on the board of directors...

This past weekend, I attended my first face-to-face Board of Directors meeting with ODTUG. Monty Latiolais, current president of ODTUG, asked me to let him know if there was anything “less than stellar” about my experience, and I have to say the answer is “no”.  It was a stellar experience, all weekend.  Here’s why…

For 20 years, I’ve been a member of the Rocky Mountain Oracle Users Group.  My boss at Oracle at the time, Valerie Borthwick, told everyone in our team that the best thing we could do for our career and for our business practice was to “become famous”.  Not famous (or infamous) as in “celebrity” or “rock star”, but famous as in “known within our industry”.  Today, she would be telling us to blog and tweet, but back then, she was telling us to write and post white papers and to do presentations.  Put our ideas out there.  Discuss what we knew.  Submit to peer reviews.

The biggest thing I learned then is that you cannot claim to know something until you’ve tried to explain it to others.  Lots of people know something well.  But unless they’ve tried to explain it to others, there will be gaps in knowledge, fuzzy areas in understanding, and lack of depth.  Explaining to others fills gaps, clarifies fuzzy areas, and deepens the superficial.  Weak points are rapidly exposed while presenting information in public.  So, as I found ways to explain what I thought I already knew, I had to fix these problems, and my career flourished.

So in 1995, I joined the board of directors at RMOUG, because I wanted to spend more time around smart people and see how they make things happen.  That’s where I learned my next big lesson:  when you have an important task, give it to a busy person, because busy people get things done.

It seems a bit counter-intuitive, but as you stop and observe those around you, it becomes obvious…

  • Some people like to think about doing things, but never do it
  • Others plan to do things, but never do it
  • Others talk themselves out of doing things before they ever get started, so they never do it
  • And others simply refuse to do anything

The people who are always busy are always getting things done.

That is what I found with the board of ODTUG, busy people who have plenty to do already, doing one more thing.

My favorite kind of people.

Lovin’ la vida Oracle

As we prepare for the week of Oracle OpenWorld 2014, I look back on the 25 years I have spent within the orbit of Oracle Corporation.

I joined Oracle Consulting Services (OCS) as an employee on 15-January 1990 and had worked my way up to Technical Manager by the time I resigned to start my own consultancy on 31-July 1998.  I worked as an independent Oracle consultant from then (with a side trip into company-building with friends) until 30-April of this year.  On 01-May 2014, I joined startup Delphix.

Throughout this quarter-century of La Vida Oracle, I’ve made a great living, but it has also been a great way of life.  I started presenting at the Rocky Mountain Oracle Users Group in 1993, and joined the board of directors in 1995.  I’ve since worked with many other Oracle users groups as a volunteer and I’ve found the experiences to be incredibly educational, in so many ways.  I’ve also met a lot of amazing people through volunteering at Oracle users groups.  I met the junta of the Oak Table Network, and joined that group in 2002.  I was elected as an Oracle ACE in 2007, before I even knew the program existed, then I was made an ACE Director in 2012, which is an elevation I appreciate but still never sought.

But over it all, all throughout, is Oracle.  The Big Red O.  Some people have had bad experiences at Oracle Corporation, some have had REALLY bad experiences, just as people have good and bad experiences at any huge corporation.  In the spirit of a comment made famous by Winston Churchill, “Democracy is the absolute worst form of government.  Except for all the others.”  Oracle is populated by, and led by, some very human … beings.  I love them all, some more than others.

So for 25 years now, out of the 37 years Oracle has been in existence, I have had a really great life.  La vida Oracle.  I am so GLAD I met ya!  And I love this life!

And so it continues today.  For the first time in a quarter century, I’m out of the direct orbit of Oracle, now that I’m working at Delphix.  I’m still heavily involved with Oracle as an Oracle ACE Director and adviser to the boards of three local Oracle users groups (RMOUG, NoCOUG, and NEOOUG) and a board member at ODTUG.

Delphix builds data virtualization software for Oracle, PostgreSQL, SQL Server, and Sybase ASE, as well as file-system directories on Unix/Linux and Windows.  Virtualizing Oracle databases is a big part of Delphix’s business, but it is not the only part, and the non-Oracle parts are growing rapidly.  It’s refreshing to work with other database technologies.  But I still love working with Oracle Database, and I’m continually impressed by Oracle’s technology prowess, with the In-Memory option of Database12c a brilliant example.

Some say that Delphix competes with Oracle.  Be serious – please name a technology company that doesn’t compete with Oracle in one way or another, as the breadth of Oracle products and services is so expansive.

As an independent contractor at EvDBT for 16 years, I myself competed with Oracle Consulting in my own very small way.  But, at the same time, I cooperated with Oracle by optimizing the implementation of Oracle technology.  I sure as heck understand who holds the tent up.

The same is true with Delphix.  Delphix products can be said to compete with Oracle Enterprise Manager 12c Cloud Control in the niche area known as Database-as-a-Service (DBaaS), specifically the SnapClone functionality.  The Delphix software appliance is very similar to this SnapClone piece, but that piece is just a small part of the vast EM12c Cloud Control product suite.

In the same way, I as an independent consultant could have been said to have competed with the EM12c diagnostics pack and performance tuning pack, because the techniques I used and taught tended to make people independent of those tools.

That’s not to say I steered people away from EM12c; it’s just that I myself didn’t use it for performance tuning, though gradually I learned to appreciate many of its features, not least through paying attention to my wife Kellyn Pot’vin.

In fact, the Oracle Enterprise Manager 12c Cloud Control, using the Cloud API, can fully administer virtual databases created by Delphix.  After all, Delphix is just an alternate mechanism to implement data virtualization.  Instead of using the mechanism of Oracle DBaaS SnapClone, customers can also use Delphix.  So Delphix can become a part of EM12c.

So there is no competition between Delphix and Oracle.  Delphix is an alternative to the SnapClone mechanism underlying DBaaS, but Delphix virtual databases can still be orchestrated through the EM12c console.  It need not be an either-or choice.

Of course, I still have to write that extension through the EM12c cloud API, and I’m getting right on that.  Unless someone else gets to it first.

Keep your eye on the Oracle EM12c Extension Exchange webpage for more progress on integrating Delphix within EM12c…

#OakTable World at Oracle OpenWorld 2014

Where:  Children’s Creativity Museum, 221 4th St, San Francisco

When:  Mon-Tue, 29-30 September, 08:30 – 17:00 PDT

For the third year in a row at the same fantastic location right in the heart of the bustling Oracle OpenWorld 2014 extravaganza, OakTable World 2014 is bringing together the top geeks of the worldwide Oracle community to present on the topics not approved for the OpenWorld conference.  At the OpenWorld conference.  For free.

The beauty of this unconference is its ad-hoc nature.  In 2010, weary of flying from Europe to endure marketing-rich content, Mogens Norgaard conceived Oracle ClosedWorld as an informal venue for those who wanted to talk about cool deep-technical topics.  Oracle ClosedWorld was first held in the back dining room at Chevy’s Fresh Mex on 3rd and Howard, fueled by Mogens’ credit card holding an open tab.  The following year in 2011, ClosedWorld was moved a little ways down Howard Street to the upstairs room at the Thirsty Bear, once again fueled by Mogens’ (and other) credit cards keeping a tab open at the bar.

In 2012, Kyle Hailey took the lead, found a fantastic venue, herded all the cats to make a 2-day agenda, and arranged for corporate sponsorship from Delphix, Pythian, and Enkitec, who have continued to sponsor OakTable World each year since.

If you’re coming to Oracle OpenWorld 2014 and are hungry for good deep technical content, stop by at OakTable World 2014, located right between Moscone South and Moscone West, and get your mojo recharged.

If you’re local to the Bay Area but can’t afford Oracle OpenWorld, and you like deep technical stuff about Oracle database, stop by and enjoy the electricity of the largest Oracle conference in the world, and the best Oracle unconference right in the heart of it all.

OakTable World 2014 – driven by the OakTable Network, an informal society of drinkers with an Oracle problem.

#CloneAttack at Oracle OpenWorld 2014

Delphix and Dbvisit will be at the OTN Lounge in the lobby of Moscone South from 3:30 – 5:00pm on Monday 29-Sept.  Come join us to hear about #CloneAttack and #RepAttack, two great hands-on learning opportunities.

What:

#CloneAttack is your chance to install a complete Delphix lab environment on your Windows or Mac laptop, for you to play and experiment with at any time.  Experts Kyle Hailey, Steve Karam, Adam Bowen, Ben Prusinski, and I will be sharing USB “thumb” drives with the virtual machine OVA files for the lab environment, and we will be working one-on-one with you to help you get everything up and running, then to show you basic use-cases for cloning with Delphix.

Bring your laptop, bring your VMware, and get some data virtualization into your virtual life!

At the same time, #CloneAttack will be joined by #RepAttack by Dbvisit, where Arjen Visser, Jan Karremans, and the team will be helping you replicate Oracle to Oracle for zero downtime upgrades.

This just in!  #MonitorAttack from Confio SolarWinds will also be joining the party at the CCM on Tuesday to show you how to quickly and easily install Confio Ignite and enjoy the great features there.

Where:

Children’s Creativity Museum, 221 4th St, San Francisco

When:

Tuesday, Sept 30 from 10am – 5pm PDT

Before you arrive:

Hardware requirements (either Mac or Windows):

  • at least 8 GB RAM
  • at least 50 GB free disk space, but preferably 100 GB free
  • at least 2 GHz CPU, preferably dual-core or better

The DBA is dead. Again.

Mark Twain never said, “Reports of my death are greatly exaggerated.”  Instead, his comment in 1897 was less tongue-in-cheek than matter-of-fact.  Confronted with news reports that he was gravely ill, he responded, “James Ross Clemens, a cousin of mine, was seriously ill two or three weeks ago in London, but is well now.  The report of my illness grew out of his illness; the report of my death was an exaggeration.”  I can only hope that, while remaining equally matter-of-fact, my comments will also grow wittier in the retelling than they were as written.  It is a lot for which to hope, as past experience shows that my comments generally provoke unintended offense.

Every few years, when wondrous new automation appears imminent, reports surface about the long-anticipated death of the role of the database administrator.  Sometimes it seems these reports arise out of sheer frustration that DBAs and databases still exist, as seemed to happen in 2008 during a conversation on the Oak Table email list, which closely followed a similar discussion on the ORACLE-L list.  To wit:  the war is over, and we lost.

Alex Gorbachev commented succinctly at the time:

We have already “lost” the war many times, haven’t we?  We lost it to object-oriented databases (8i?)  We lost to XML databases (9i?)  We lost to grid databases (10g?)  And we are losing to what now with 11g?  The “fusion” will save us all with or *without* databases in the first place?  Yeah right … the end is close.

The focus of discussion on both email lists was a thought-provoking blog post in March 2008 by Dom Brooks entitled “The dea(r)th of Oracle RDBMS and contracting?” He commented that the tide of history had finally turned against the Oracle database and the highly-visible role of database administrator.  Stiff competition from open-source competitors, emerging scalable technologies, absurd license fees, and belt-tightening by many IT shops were the overwhelming trend.  Poor database design exacerbated by immature implementation only made it worse; if you’re going to produce a disaster, it’s probably best that it not cost as much as Oracle.

My response on both email threads on ORACLE-L and the Oak Table was this…

Back in the 1980s, I worked for a company that had built some really cool applications in the area of travel reservations.  Eventually, the travel providers (i.e. airlines, hotels, car rental agencies, etc) caught on to what we were doing and did it themselves, effectively putting us out of business overnight.  So, it came time to sell the company off in pieces.  We tried to sell the applications, but nobody wanted them — they had their own, or could buy or build better.  We sold the hardware and facilities, but for pennies on the dollar.  Then, when we tried to sell the data, we hit the jackpot — everybody wanted the data, and we were able to sell it over and over again, to multiple buyers.

I never forgot that lesson, and several years later traded being a programmer for being a DBA because (as Michael just said, below) I like working with data.  Data, not programs, is the only thing that matters — applications are transient and have no value except to acquire, manipulate, and display data.  Data is the only thing with value.  The long-term value of data is the reason I’ve moved toward data warehousing and business intelligence, too.

Data is important.  Databases manage data.  DBAs architect, configure, and manage databases.  So, being a skilled database administrator will always be necessary as long as data exists.  If the state of the art ceases advancing, then automation will finally catch up to extinguish the DBA role/job.  But until then, being a DBA is a career.

That’s my story.  And I’m stickin’ to it.

Doug Burns was following both threads and was kind enough to lend his support in a post to his blog entitled “There’s Hope For Us All”, in which he stated “although it doesn’t reflect my personal experience in the slightest, there was something about what he had to say and the way he said it that rung very true to me.”  Kinder words are rarely spoken, and thank you, Doug.  And thank you too Dom, for your follow-up comment to Doug’s post, “Solidarity Brother!  I’m sure Tim’s right and will continue to be right.  I was having an emotional moment… the flat earth society are everywhere!”

We all have those moments.

And here we are again, having another moment.

Once again, the topic of discussion on the Oak Table list was a blog post from Kenny Gorman (no relation) entitled “The Database Administrator Is Dead.”  My father, who was a police officer for 25 years, worked in a far more dangerous profession, and certainly several people wished him harm over his career and even acted on it, but it seems that, in a general way, my chosen profession has received more death threats than he ever did.

Now, the forces opposing the DBA are not necessarily cheaper, different, or disruptive technology, but better automation and provisioning.  The role of the DBA will supposedly be smothered out of existence as highly-automated management consoles extend their capabilities.  “Database As A Service” or “DBaaS”, cloud provisioning for databases, is the next development to render the database administrator obsolete.

The synchronicity of these discussions is spooky.  During the week previous to the discussion of Mr. [Kenny] Gorman’s blog post, I had related another particular story 4-5 separate times to 4-5 separate people, and now I found that I was relating it yet again, this time to the Oak Table email list.  It was something of a continuation from my earlier story…

In the 1990s, when I chose to move from being a developer to a DBA, the trend of out-sourcing was already abundantly evident, though not yet augmented by the trend of offshoring.  In 1999 I gave my first-ever keynote address, at a conference in Portland, Maine for Maine’s Oracle Users Group (MSOUG), on the topic of being a DBA in a world of out-sourcing.  I described a visualization of one of those water-holes in the Sahara.  A water-hole that is brimming and supporting a lush oasis during the rainy season, but that dries up and shrinks to a small muddy puddle during the dry season, surrounded by dead vegetation and dead animals that didn’t quite make it to the water-hole or another.

Repeating the comments in Doug’s blog, code comes and goes but data is the only thing with lasting value.  I visualized that the dead vegetation and dead animals surrounding the muddy remainders of the water-hole were developers and DBAs whose jobs were outsourced.  Right in the middle of the muddy water were two eyes above the surface, and this was the skilled DBA, guarding the remainder of the water-hole, containing only the most important stuff that couldn’t be outsourced or offshored.  I had long decided that I would be that DBA, and stay as close to the data as I could, and especially the most strategic data (i.e. data warehousing).

I figure y’all might have as much fun as the good folks at MSOUG did with that visualization, especially when subjected to Freudian and Jungian dream analysis.

Though it has nothing to do with why I’ve related this story 4-5 times previously this week, in this context, the author of the article (we’re not related) talks about having been an Oracle DBA 15 years ago, which is about the time I did my keynote for MSOUG.

Perhaps he left the field too early?  :-)

I completely agree with his “automate or die” comment, and I might add “keep learning or die”, and of course the job’s roles are changing, but besides DBaaS being a long way from the pointy-and-clicky utopia that this post implies, the question remains: who sets up the DBaaS environments?  DBaaS isn’t the end of the DBA role, it is more automation.

Who will set up DBaaS environments, if not DBAs?  Don’t get me wrong:  I agree that DBaaS is here.  And I think DBAs will set it up, use it, and improve on it.

That’s my story.  And I’m stickin’ to it.

15 years of EvDBT

I worked at Oracle Consulting for eight and a half years, from January 1990 until July 1998, starting as a senior consultant and finishing as a technical manager.  In the summer of 1998, I was experiencing a dual crisis in my career, one directional and one ethical.

From the directional perspective, Oracle Consulting was sending very clear signals that the way Gary Dodge and I were doing business in the Denver consulting practice was not aligned with corporate goals.  The corporation wanted vertical “centers of expertise” with global and national scope.  In Denver, Gary and I managed about a dozen generalists, with experience ranging from very junior to very senior, who effectively covered all types of technology.  Our goal was to let each person work locally on the type of work they enjoyed, occasionally coercing some to try something different.  Many of us had families, and all of us lived in Colorado for a reason.

Attempting to adhere to corporate direction, when we received a request from a local customer, we would first contact the relevant national or global “center of expertise”.  Most often, we would be told that nobody was available within the next few weeks (or months) and that, when someone did become available, the rates charged would reflect a very senior person plus travel expenses.  We would feed that response back to the customer, who understandably became concerned or irate and asked for one of our local generalists, whom they had probably used previously, which would have been our first response anyway.  In almost every case, we ended up staffing one of our local folks on the engagement, who often completed it before the national or global group’s person even became available.  As this continued, the pressure from corporate became more direct, complaining about a “black hole in the Rockies”.  So, looking ahead at Oracle, I saw a model of business with which I wasn’t comfortable:  our local people getting on planes to work elsewhere, while out-of-town personnel were flying into Colorado to work here.  Perhaps it looked good from a higher level, but from our street-level view, it was absurd.

However, I also had a more serious ethical problem.  I had been sent to Los Angeles to work an engagement involving my primary expertise at the time:  Oracle Parallel Server on IBM RS6000/SP clusters.  The customer was a start-up website job board.  Both IBM and Oracle were determined to sell some massive hardware and software in there, and were working together toward common purpose with rare cooperation.

Except the customer wasn’t cooperating.

Instead, they had come up with a far less-expensive scheme involving dozens of commodity servers, in which one server contained a master database to which new job postings were added and changes were made, and that master was then replicated to dozens of read-only database servers using backup/restore, with a connection load-balancer directing traffic.  This allowed their read-mostly website to scale as needed by off-loading the reads from the master database and segregating the writes from the read-only databases.  It was fast, cheap, and easy — a rare occasion when it wasn’t necessary to choose only two.  It was novel for the time; I was impressed, and said so.  Nowadays, such a thing is called a reader farm and can easily be implemented using Active Data Guard.
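
For the curious, here is a minimal sketch of what the Active Data Guard equivalent looks like, assuming a physical standby database has already been created for each read-only server; the service name JOBS_RO is purely hypothetical.  On each standby, the database is opened read-only, redo apply is restarted for real-time query, and a service is published for the read-only workload.

-- On each physical standby (requires the Active Data Guard option, 11g or later):
-- open the standby read-only, then restart redo apply for real-time query.
alter database open read only;
alter database recover managed standby database
      using current logfile disconnect from session;

-- Publish a service for read-only connections (the name JOBS_RO is hypothetical),
-- so that a load-balancer or client connect descriptor can direct read traffic here.
exec dbms_service.create_service(service_name => 'JOBS_RO', network_name => 'JOBS_RO');
exec dbms_service.start_service('JOBS_RO');

A connection load-balancer, or simply a client-side connect descriptor listing all of the read-only services, then plays the same traffic-directing role as the home-grown load-balancer described above.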

However, the IBM and Oracle teams were adamantly opposed – fast, cheap, and easy would ruin the lucrative deal they had planned for themselves.  So I was directly ordered by the regional vice-president in charge of the deal to reject the customer’s plans as unworkable, to extol the virtues of Oracle Parallel Server on IBM RS6000/SP clusters one way or the other, and to recommend that configuration strongly in conclusion.

What to do?

I certainly did not enjoy being ordered to lie.  Not asked, but ordered.  On the other hand, I worked for Oracle, I had a boss, and that boss had stated very clearly what I was to do, as he had every right to do.  After all, no blood would be spilled, no babies would be killed.

So my solution to the ethical dilemma was:

  1. Complete the engagement as directed
  2. Prevent it from happening again

I am not smart enough to avoid making mistakes, but I believe in making mistakes only once.  I did what I was told to do, enduring the astonished looks from the good folks who couldn’t believe I was spouting such nonsense.  I subsequently resigned from Oracle, to avoid ever having to make that mistake again.  But having resigned from one well-regarded corporation, the question became:  are there any corporations, anywhere in the world, where I would not be asked to do something like that again?

The answer was simple and, in August 1998, Evergreen Database Technologies, Inc opened for business.

The first person I told of my decision to resign was Gary Dodge.  He wasn’t my supervisor, but we were peers.  I entered his office and closed the door, and he looked up and commented, “Oh, that’s not a good sign.”  I sat down and told him, and he nodded and said, “Well, good thing you closed the door, because I’m leaving also.”  He didn’t leave Oracle, but he left consulting, for the same directional reasons as I.  So, we didn’t inform our management together, but we informed them at the same time.

EvDBT hasn’t been open continuously over the past 15 years;  I have far too much to learn.  I spent a few years attempting to start another consulting-services company with some colleagues, and that ended unsuccessfully.  Any deal that starts with handshakes inevitably ends with lawyers, so my lesson is to always start with lawyers so that it ends with handshakes.

At one point, I hired in with Compaq Professional Services because they offered an intriguing opportunity.  However, my timing was bad, as Compaq was absorbed by HP a few months after I started, and knowing that I would not enjoy the noise and mess of the mating of the elephants, I moved on.

Thank you all for the past 15 years, and I look forward to the next 15 years.

Update on Friday 18-Oct 2013:  I’ve received some criticism and questions about my perceived criticism of Oracle in this article, particularly with the ethical dilemma described above.  I didn’t write this to criticize Oracle as a company; the situation simply happened while I was working there.  It is a large company like many others.  Corporations are composed of people who respond in varying ways to the incentives given them.  I’m personally aware of many people in similar roles at Oracle who have never reacted, and never will react, to their incentives in that particular way.  Likewise, I know of a few who would have reacted far worse.  It’s all part of the grand pageant of human behavior.

The person who ordered me to do my job was not himself facing an ethical dilemma.  He had brought me onto the engagement to expedite the deal, and he never imagined that I would balk;  it just wasn’t professional.

He had a task to do, and I began to jeopardize the success of that task.  I would hope to be as decisive and effective as he.

Keyword DETERMINISTIC is anything but…

According to TheFreeDictionary.com, the word “deterministic” means…

deterministic
de·termin·istic adj. an inevitable consequence of antecedent sufficient causes

According to Wikipedia, the explanation of deterministic algorithm is…

In computer science, a deterministic algorithm is an algorithm which, given a particular
input, will always produce the same output, with the underlying machine always passing
through the same sequence of states.

In the Oracle PL/SQL Language documentation, it is used as a keyword, as follows…

DETERMINISTIC

Indicates that the function returns the same result value whenever it is called with the same values for its parameters.

You must specify this keyword if you intend to invoke the function in the expression of a function-based index or from the query of a materialized view that is marked REFRESH FAST or ENABLE QUERY REWRITE. When the database encounters a deterministic function in one of these contexts, it attempts to use previously calculated results when possible rather than re-executing the function. If you subsequently change the semantics of the function, then you must manually rebuild all dependent function-based indexes and materialized views.

Do not specify this clause to define a function that uses package variables or that accesses the database in any way that might affect the return result of the function. The results of doing so are not captured if the database chooses not to re-execute the function.

These semantic rules govern the use of the DETERMINISTIC clause:

  • You can declare a schema-level subprogram DETERMINISTIC.

  • You can declare a package-level subprogram DETERMINISTIC in the package specification but not in the package body.

  • You cannot declare DETERMINISTIC a private subprogram (declared inside another subprogram or inside a package body).

  • A DETERMINISTIC subprogram can invoke another subprogram whether the called program is declared DETERMINISTIC or not.

There is a subtle twist in this explanation.  It states that the keyword “indicates that the function returns the same result value whenever it is called with the same values for its parameters“, but if you think about the use of the verb indicates, you realize that they are conceding that the keyword itself doesn’t enforce the behavior.  It is carefully-chosen language that sidesteps the important fact that the PL/SQL compiler does not actually enforce the necessary behavior.

So as a result, it is possible to write the following function…

SQL> create or replace function test_func(in_col1 in number)
  2           return number deterministic
  3  is
  4           v_col1  number;
  5  begin
  6           select  col1
  7           into    v_col1
  8           from    test_tbl2
  9           where   col1 = in_col1;
 10           return(v_col1);
 11  end test_func;
 12  /
SQL> show errors
No errors.

Is this function really deterministic?  No, of course not.  Anyone else changing data in the TEST_TBL2 table can change the outcome of this function.

Yet, the DETERMINISTIC keyword did not cause compilation of the function to fail, as it should have.  Only the use of the pragma restrict_references using the qualifiers RNDS (i.e. read no database state), RNPS (i.e. read no package state), WNDS (i.e. write no database state), and WNPS (i.e. write no package state) would do that…

SQL> create or replace package test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number;
  5          pragma  restrict_references(test_func,RNPS,WNPS,RNDS,WNDS);
  6  end test_pkg;
  7  /

SQL> show errors
No errors.

SQL> create or replace package body test_pkg
  2  as
  3          function test_func(in_col1 in number)
  4                  return number
  5          is
  6                  v_col1  number;
  7          begin
  8                  select  col1
  9                  into    v_col1
 10                  from    test_tbl2
 11                  where   col1 = in_col1;
 12                  return(v_col1);
 13          end test_func;
 14  end test_pkg;
 15  /

Warning: Package Body created with compilation errors.

SQL> show errors
Errors for PACKAGE BODY TEST_PKG:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/2      PLS-00452: Subprogram 'TEST_FUNC' violates its associated pragma

Notice that this pragma can only be used within a function declared within a PL/SQL package;  this pragma cannot be used within a standalone function.  But it proves that the PL/SQL compiler is capable of detecting the problem, and failing the compilation.  They have the technology.

Further, it is now possible to create a function-based index using this function…

SQL> create index test_tbl1_fbi01 on test_tbl1(test_func(col1))
  2  tablespace users compute statistics;

Index created.

…and that function-based index will be used by the Oracle optimizer for queries.  After all, why shouldn’t it?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 357717947
-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 1370928414
-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |
-------------------------------------------------------------------------------

SQL> select  t1.col1 t1_col1, t2.col1 t2_ool1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 2884964714
-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

So, whether the query uses the function-based index, performs a simple FULL table scan, or avoids the function entirely with a join, the results are the same.

But now suppose another session changes that row in the TEST_TBL2 table?

SQL> update  test_tbl2
  2  set     col1 = 1700
  3  where   col1 = 170;

1 row updated.

SQL> commit;

Commit complete.

…and now someone performs a query using the function-based index?

SQL> select t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

             T1_COL1              T2_OOL1
-------------------- --------------------
                 170                  170

Execution Plan
----------------------------------------------------------
Plan hash value: 357717947
-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |    10 |   170 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TBL1       |    10 |   170 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST_TBL1_FBI01 |     4 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

How can that be?  We know that the UPDATE changed this data.  And here is proof, obtained by bypassing the function-based index in the WHERE clause by forcing a FULL table scan…

SQL> select /*+ full(t1) */ t1.col1 t1_col1, test_func(t1.col1) t2_ool1
  2  from test_tbl1 t1 where test_func(t1.col1) = 170;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1370928414
-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |    10 |   170 |     5   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST_TBL1 |    10 |   170 |     5   (0)| 00:00:01 |
-------------------------------------------------------------------------------

And here is further proof obtained by completely eliminating the function from the SELECT list and instead performing a simple inner-join…

SQL> select  t1.col1 t1_col1, t2.col1 t2_ool1
  2  from    test_tbl1 t1, test_tbl2 t2
  3  where   t2.col1 = t1.col1
  4  and     t1.col1 = 170;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 2884964714
-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |     1 |     8 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |              |     1 |     8 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN| TEST_TBL2_PK |     1 |     4 |     1   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| TEST_TBL1_PK |     1 |     4 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

So, what PL/SQL has permitted us to do is create a situation where it would be reasonable for end-users to conclude that the database has corrupted data.  In a sense, it does — the corrupted data is within the function-based index, where deterministic data is expected.

I found this very situation on the loose in the wild;  that is, within a production application in use at a healthcare company.  Think about that.

I didn’t find it because someone complained about possible data corruption.  I found it because an AWR report pointed me at the SQL statement within the function, which was being executed 1.3 billion times over a 5 day period.  Each execution was quite fast, but if you do 1.3 billion of anything over less than a lifetime, someone will eventually notice.

If you consider that 1.3 billion executions over a 5-day period implies an average rate of about 3,000 executions per second, sustained, for every second of those 5 days, then you start to get an idea of the problem.  Especially when you consider that there were peaks and valleys in that activity.

So, I have raised the issue with the affected healthcare organization, and the problem is worming its way through change management.  In the meantime, this application continues to return incorrect information, over and over and over again.

Are you sure that none of your function-based indexes were built this way?
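
If you want to find out, a starting point is to list every function-based index in the database along with the expression it indexes, and then review whether the functions those expressions call are truly deterministic.  The following query is only a sketch using the standard data dictionary views; adapt the filtering to your own schemas.

-- List function-based indexes and the expressions they index,
-- so the underlying functions can be reviewed for true determinism.
select  ie.index_owner, ie.index_name, ie.table_name, ie.column_expression
from    dba_ind_expressions ie
        join dba_indexes i
          on  i.owner = ie.index_owner
          and i.index_name = ie.index_name
where   i.index_type like 'FUNCTION-BASED%'
order by ie.index_owner, ie.index_name, ie.column_position;

From there, any user-defined function named in a COLUMN_EXPRESSION can be inspected in DBA_SOURCE to see whether it reads tables or package variables the way TEST_FUNC does.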