Category Archives: Oracle

Azure host-caching with Oracle database workloads on Azure VMs

There are almost 2,000 Azure customers running Oracle database workloads on Azure VMs, and that number is growing rapidly.  Running Oracle on Azure is not the first idea to occur to many people, but it should be, as there are several types of VMs appropriate for Oracle workloads within Azure, as well as multiple choices for storage, ranging from inexpensive to high-performance.  Overall, Oracle performs very well in Azure, and there are several great features in Azure that make running Oracle even easier, such as Azure VM Backup (a topic for another post).

One of the most common configurations is running one or more Oracle databases on an Azure VM with premium SSD managed disk.  When attaching managed disk to a VM, one has the option of enabling ReadOnly host-caching or ReadWrite host-caching for the managed disk device.

What is host-caching?  Essentially it is locally-attached SSD on the physical host underlying the virtual machines, and it is documented HERE.  When the VM is allocated to a physical host and running, that locally-attached SSD is used with a simple LRU caching algorithm for I/O between the VM and the virtualized network-attached managed disk device.  ReadOnly host-caching caches only read I/O from the VM to the managed disk device, while ReadWrite host-caching caches both read and write I/O operations from the VM to the managed disk.

Please note that ReadWrite host-caching is intended either for the VM OsDisk (for which it is the default setting) or for data disks using filesystems with barrier functionality enabled, as cited HERE.  It is not a good idea to use ReadWrite host-caching with Oracle database files, because of possible data-integrity issues resulting from lost writes should the VM crash.

But the rest of this article pertains to the use of ReadOnly host-caching for managed disk for Oracle database workloads.

If the Oracle online redo log files are isolated from other database files and reside on their own ReadOnly host-cached disk, then…

Generally, each block in the online redo log files is written once and then read at most once.  ReadOnly host-caching ignores the write altogether and does the work to cache the single read, as it is designed to do.  However, if there is never a second, third, or subsequent read, then the maintenance of the cache chains and cache buffers performed at that first read is entirely wasted.  Worse still, this happens with every single block written to the online redo log files, over and over again;  the cache is constantly manipulated, consuming compute resources, to no benefit.  I don’t believe that this activity slows I/O operations for the online redo log files significantly, but it does increase the consumption of compute resources which could otherwise be used productively, which of course means that the overall system is capable of accomplishing less work.

If the Oracle online redo log files are not isolated and reside on the same ReadOnly host-cached disk as other database files…

This is where actual performance decreases can result.  Oracle datafiles, tempfiles, controlfiles, BFILEs, files for external tables, block change tracking files, and Flashback log files do in fact use ReadOnly host-caching appropriately, with numerous reads of the same block after each write, not to mention the buffer caching within the RDBMS itself.  If online redo log file activity is mixed into the same workload as the datafiles, tempfiles, controlfiles, and the rest, then ReadOnly host-caching is forced to cache the online redo log blocks as well, even though there is zero benefit.  As a result, blocks stay in the cache for less time due to the constant rollover of buffers forced by the sequential online redo log workload, making the ReadOnly cache less effective for all files and causing more “overhead” as pages needlessly “flushed” from the cache by the sequentially-written redo need to be re-acquired.  This is the scenario in which ReadOnly host-caching is noticeably less effective and the average I/O latency does not improve.

Also, if using the D_v3 or E_v3 or M series of VMs…

ReadOnly host-caching actually provides lower I/O throughput on these VM instance types than uncached I/O, a shortcoming that is impressively fixed in the Dds_v4 and Eds_v4 series.  For example, the table below summarizes, from the documentation pages, the differences in IOPS and I/O throughput limits between the Standard_E32s_v3 and Standard_E32ds_v4 instance types…

Size              | vCPU | Memory (GB) | Max cached and temp storage IOPS | Max cached and temp storage throughput (MBps) | Max uncached disk IOPS | Max uncached disk throughput (MBps)
Standard_E32s_v3  | 32   | 256         | 64,000                           | 512                                           | 51,200                 | 768
Standard_E32ds_v4 | 32   | 256         | 308,000                          | 1,936                                         | 51,200                 | 768

Comparison of cached throughput limits of the same VM instance type in v3 vs v4

As you can see, I/O throughput for cached storage is lower than the I/O throughput for uncached storage for v3, while a more intuitive result is apparent for v4.

The same is true for other size instance types, of course; I just didn’t display them.

In short, if you wish to reap the expected benefits of ReadOnly host-caching, do not use D_v3 or E_v3 instances or anything in the M-series instance types.

So in summary, I think these are good rules for using Azure VM host-caching with Oracle database workloads…

  1. Don’t use ReadWrite host-caching with Oracle database files
  2. Isolate the online redo log files from other database files onto their own managed disk devices and disable host-caching on those devices
  3. Use ReadOnly host-caching for all other Oracle database files (see the sketch following this list)
  4. Do not use the D_v3, E_v3, or M-series VMs, because host-caching and ephemeral SSD are slow compared to the Dds_v4 and Eds_v4 series VMs
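Applying rules 2 and 3 with the Azure CLI might look something like the following sketch.  This is a hedged example, not a tested runbook: the resource group, VM, and disk names are placeholders, and the --caching parameter of "az vm disk attach" should be verified against the documentation for your CLI version.

# Disk holding datafiles, tempfiles, controlfiles, etc.: ReadOnly host-caching (rule 3)
az vm disk attach \
    --resource-group prod-rg \
    --vm-name oradb01 \
    --name oradb01-data01 \
    --caching ReadOnly

# Separate disk holding only the online redo log files: host-caching disabled (rule 2)
az vm disk attach \
    --resource-group prod-rg \
    --vm-name oradb01 \
    --name oradb01-redo01 \
    --caching None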

Oak Table World 2018

Oak Table World 2018 (OTW) just completed at the Children’s Creativity Museum in San Francisco.  The event website is “https://otw18.org“.

This year, it was my turn to organize this remarkable little un-conference on behalf of the Oak Table Network, which is often described as a drinking society with an Oracle problem.

What is an “un-conference”?  The dictionary definition is…

un·con·fer·ence   /ˈənkänf(ə)rəns/

noun: unconference; plural noun: unconferences

a loosely structured conference emphasizing the
informal exchange of information and ideas between
participants, rather than following a conventionally
structured program of events. Generally this manifests
as a "participant-driven" event.

OTW started out in 2010 with Mogens Norgaard holding a set of tables in the back room of the Chevy’s Tex-Mex Bar & Grill in San Francisco (since closed), just across 3rd St from the Moscone Center where Oracle OpenWorld was being held.

Rather than drinking from the flood pouring from the corporate marketing machines across the street, friends joined Mogens for beer and Tex-Mex to discuss and argue technical topics and drink beer.

Thus was born “Oracle ClosedWorld” as a true unconference.  The following year in 2011, Oracle ClosedWorld was held in the upstairs room at The Thirsty Bear, a few doors down from Chevy’s on Howard St, this time with an agenda and a more structured format.

However, the ever-vigilant legal department at Oracle Corporation was not in the least amused by the gentle gibe of the name “Oracle ClosedWorld” from a bunch of drunk geeks, and so after the event they quickly regurgitated a letter offering detailed demands and dire consequences, and so of course we sweetly complied, and the unconference was renamed “Oak Table World” (OTW).  To date, Oracle’s legal team has not imposed its will on our continued use of the word “World”, perhaps at least until Oracle achieves actual world domination.  So we wait with bated breath for their next move.

The following year, in 2012, Oak Table World found a new home at the Children’s Creativity Museum, which is located at the corner of 4th Street and Howard Street, smack dab in the middle of the Moscone complex.  The unconference was then organized by Kyle Hailey, who continued to do so in 2013, 2014, and 2015.  In 2016, it was organized by Kellyn Pot’vin-Gorman, in 2017 by Jeremiah Wilton, and in 2018 by Tim Gorman.

The event seems to be slipping away from its original un-conference format, becoming more professionalized, becoming more of a conference-within-a-bigger-conference.  I’m not sure if that is a good thing or a bad thing, but once you start laying out money for a venue and catering, things get more conventional pretty quickly.

Singing The NoCOUG Blues

This is a re-post of an interview between myself and Iggy Fernandez, editor of the Journal of the Northern California Oracle Users Group (NoCOUG), Oracle ACE, OakTable member, blogger, and simply an amazing person.  The interview starts on page 4 of the August 2014 issue of the NoCOUG Journal, and demonstrates how a gifted interviewer can make someone being interviewed more interesting and coherent.

Singing The NoCOUG Blues

You are old, father Gorman (as the young man said) and your hair has become very white. You must have lots of stories. Tell us a story!

Well, in the first place, it is not my hair that is white. In point of fact, I’m as bald as a cue ball, and it is my skin that is pale from a youth misspent in data centers and fluorescent-lit office centers.

It is a mistake to think of wisdom as something that simply accumulates over time. Wisdom accumulates due to one’s passages through the world, and no wisdom accumulates if one remains stationary. It has been said that experience is what one receives soon after they need it, and experience includes both success and failure. So wisdom accumulates with experience, but it accumulates fastest as a result of failure.

About four years ago, or 26 years into my IT career, I dropped an index on a 60 TB table with 24,000 hourly partitions; the index was over 15 TB in size. It was the main table in that production application, of course.
Over a quarter-century of industry experience as a developer, production support, systems administrator, and database administrator: if that’s not enough time to have important lessons pounded into one’s head, then how much time is needed?

My supervisor at the time was amazing. After the shock of watching it all happen and still not quite believing it had happened, I called him at about 9:00 p.m. local time and told him what occurred. I finished speaking and waited for the axe to fall—for the entirely justified anger to crash down on my head. He was silent for about 3 seconds, and then said calmly, “Well, I guess we need to fix it.”

And that was it.

No anger, no recriminations, no humiliating micro-management. We launched straight into planning what needed to happen to fix it.

He got to work notifying the organization about what had happened, and I got started on the rebuild, which eventually took almost 2 weeks to complete.


Side note…

I didn’t explain how we rebuilt the index within the NoCOUG article, but I did describe it in a recent post to the ORACLE-L email forum…

Here is a high-level description of what worked to rebuild it…

1) Run “create partitioned index … unusable” to create the partitioned index with all partitions empty.

2) Create a shell-script to run NN SQL*Plus processes simultaneously, where “NN” is a number of your choice, each process doing the following…

alter index <index-name> rebuild partition <partition-name> parallel <degree> nologging compute statistics

We ordered the SQL*Plus calls inside the shell-script so that the most recent partitions (the table was partitioned by a DATE column) were populated first, and then let the builds progress back in time.  Depending on the application, you can be doing some or all of the normal activities on the table while the rebuilds run.  Our assumption (which proved correct) was that all DML occurs against the newest partitions, so those were the partitions that needed to be made “usable” first.
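
For illustration, here is a minimal sketch of such a driver script, assuming bash, GNU split, OS authentication, a hypothetical index named MYSCHEMA.MYBIG_IDX, and a file named partitions.lst listing the partition names newest-first…

#!/bin/bash
# Hedged sketch only: the index name, file names, and connection method are hypothetical.
NN=8        # number of concurrent SQL*Plus streams
DEGREE=4    # PARALLEL degree used within each individual partition rebuild

# partitions.lst holds one partition name per line, newest first; a round-robin split
# (GNU "split -n r/N") lets all streams start on the newest partitions and work backward
split -n r/${NN} partitions.lst chunk_

for CHUNK in chunk_*
do
  (
    while read PART
    do
      sqlplus -s "/ as sysdba" <<EOF
alter index MYSCHEMA.MYBIG_IDX rebuild partition ${PART}
  parallel ${DEGREE} nologging compute statistics;
exit
EOF
    done < ${CHUNK}
  ) &    # run each chunk's rebuilds as a background stream
done
wait     # block until all ${NN} streams have completed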

This approach won’t eliminate downtime or performance problems, but it will likely minimize them.


It truly happens to all of us. And anyone who pretends otherwise simply hasn’t been doing anything important.

How did I come to drop this index? Well, I wasn’t trying to drop it; it resulted from an accident. I was processing an approved change during an approved production outage. I was trying to disable a unique constraint that was supported by the index. I wanted to do this so that a system-maintenance package I had written could perform partition exchange operations (which were blocked by an enabled constraint) on the table. When I tested the disabling of the constraint in the development environment, I used the command ALTER TABLE … DISABLE CONSTRAINT and it indeed disabled the unique constraint without affecting the unique index. Then I tested the same operation again in the QA/Test environment successfully. But when it came time to do so in production, it dropped the index as well.

Surprise!

I later learned that the unique constraint and the supporting unique index had been created out of line in the development and QA/test environments. That is, first the table was created, then the unique index was created, and finally the table was altered to create the unique constraint on the already-existing unique index.

But in production, the unique constraint and the supporting unique index had been created in-line. When the table was created, the CREATE TABLE statement had the unique constraint within, along with the USING INDEX clause to create the unique index.

So when I altered the table in production, disabling the constraint also caused the index to be dropped.

After the mishap, I found the additional KEEP INDEX syntax, which could have been added to the end of the ALTER TABLE … DISABLE CONSTRAINT command to preserve the index regardless of how it had been created.

But that was a discovery I experienced after I needed it.
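
To make the difference concrete, here is a hedged sketch using hypothetical table and constraint names…

-- Hypothetical names, for illustration only
-- Default behavior: if the unique index was created in-line by the constraint, this drops it
alter table big_table disable constraint big_table_uq;

-- Adding KEEP INDEX preserves the supporting index regardless of how it was created
alter table big_table disable constraint big_table_uq keep index;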

As to why my supervisor was so calm and matter-of-fact throughout this disaster, I was not surprised; he was always that way, it seemed. What I learned over beers long after this incident is that, in his early life, he learned the true meaning of the words “emergency” and “catastrophe.”

He was born in Afghanistan, and he was a young child during the 1980s after the Soviets invaded. His family decided to take refuge in Pakistan, so they sought the help of professional smugglers, similar to what we call “coyotes” on the Mexican-American border. These smugglers moved through the mountains bordering Afghanistan and Pakistan at night on foot, using camels to carry baggage and the very old, the sick and injured, and the very young.

My supervisor was about 9 years old at the time, so the smugglers put him on a camel so he would not slow them down. During the night, as they were crossing a ridge, they were spotted by the Soviets, who opened fire on them using machine guns with tracer bullets. Everyone in the caravan dove to the ground to take cover. Unfortunately, they all forgot about the 9-year-old boy on top of the 8-foot-high camel. My supervisor said he saw the bright tracer bullets arching up toward him from down below in the valley, passing over his head so close that he felt he could just reach up and grab them. He wanted to jump down, but he was so high off the ground he was terrified. Finally, someone realized that he was exposed and they pulled him down off the camel.

As he told this story, he laughed and commented that practically nothing he encountered in IT rose to the level of what he defined as an emergency. The worst that could happen was no more catastrophic than changing a tire on a car.

I’ve not yet been able to reach this level of serenity, but it is still something to which I aspire.

We love stories! Tell us another story!

A little over 10 years ago, I was working in downtown L.A. and arrived in the office early (5:00 a.m.) to start a batch job. I had a key card that got me into the building and into the office during the day, but I was unaware that at night they were locking doors in the elevator lobby. I banged on the doors and tried calling people, to no avail. Finally, after a half-hour, out of frustration, I grabbed one of the door handles and just yanked hard.

It popped open.

I looked at it in surprise, thought “sweet!”, walked in to the cubicle farm, sat down, and started my batch job. All was good.

Around 7:00 a.m., the LAPD showed up. There were about a dozen people in the office now, so the two officers began questioning folks nearest the door. From the opposite side of the room, I stood up, called out “Over here,” and ’fessed up.

They told me that if I hadn’t called them over immediately, they would have arrested me by the time they got to me.

Have a nice day, sir.

The NoCOUG Blues

NoCOUG membership and conference attendance have been declining for years. Are user groups still relevant in the age of Google? Do you see the same trends in other user groups? What are we doing wrong? What can we do to reverse the dismal trend? Give away free stuff like T-shirts and baseball caps? Bigger raffles? Better food?

Yes, the same trends are occurring in other users groups. IT organizations are lean and can’t spare people to go to training. The IT industry is trending older as more and more entry-level functions are sent offshore.
Users groups are about education. Education in general has changed over the past 20 years as online searches, blogs, and webinars have become readily available.

The key to users groups is the quality of educational content that is offered during live events as opposed to online events or written articles.

Although online events are convenient, we all know that we, as attendees, get less from them than we do from face-to-face live events. They’re better than nothing, but communities like NoCOUG have the ability to provide the face-to-face live events that are so effective.

One of the difficulties users groups face is fatigue. It is difficult to organize events month after month, quarter after quarter, year after year. There is a great deal of satisfaction in running such an organization, especially one with the long and rich history enjoyed by NoCOUG. But it is exhausting. Current volunteers have overriding work and life conflicts.

New volunteers are slow to come forward.

One thing to consider is reaching out to the larger national and international Oracle users groups, such as ODTUG, IOUG, OAUG, Quest, and OHUG. These groups have similar missions and most have outreach programs. ODTUG and IOUG in particular organize live onsite events in some cities, and have webinar programs as well. They have content, and NoCOUG has the membership and audience. NoCOUG members should encourage the board to contact these larger Oracle users groups for opportunities to partner locally.

Another growing trend is meet-ups, specifically through Meetup.com. This is a resource that has been embraced by all manner of tech-savvy people, from all points on the spectrum of the IT industry. I strongly urge all NoCOUG members to join Meetup.com, indicate your interests, and watch the flow of announcements arrive in your inbox. The meet-ups run the gamut from Hadoop to Android to Oracle Exadata to In-Memory to Big Data to Raspberry Pi to vintage Commodore. I think the future of local technical education lies in the online organization of local face-to-face interaction, as facilitated by Meetup.com.

Four conferences per year puts a huge burden on volunteers. There have been suggestions from multiple quarters that we organize just one big conference a year like some other user groups. That would involve changing our model from an annual membership fee of less than $100 for four single-day conferences (quarterly) to more than $300 for a single multiple-day conference (annual), but change is scary and success is not guaranteed. What are your thoughts on the quarterly vs. annual models?

I disagree with the idea that changing the conference format requires increasing annual dues. For example, RMOUG in Colorado (http://rmoug.org/) has one large annual conference with three smaller quarterly meetings, and annual dues are $75 and have been so for years. RMOUG uses the annual dues to pay for the three smaller quarterly education workshops (a.k.a. quarterly meetings) and the quarterly newsletter; the single large annual “Training Days” conference pays for itself with its own separate registration fees, which of course are discounted for members.

Think of a large annual event as a self-sufficient, self-sustaining organization within the organization, open to the public with a discount for dues-paying members.

Other Oracle users groups, such as UTOUG in Utah (http://utoug.org/), hold two large conferences annually (in March and November), and this is another way to distribute scarce volunteer resources. This offers a chance for experimentation as well, by hiring one conference-coordinator company to handle one event and another to handle the other, so that not all eggs are in one basket.

The primary goal of larger conferences is ongoing technical education of course, but a secondary goal is to raise funds for the continued existence of the users group and to help subsidize and augment the website, the smaller events, and the newsletter, if necessary.

It costs a fortune to produce and print the NoCOUG Journal, but we take a lot of pride in our unbroken 28-year history, in our tradition of original content, and in being one of the last printed publications by Oracle user groups. Needless to say it also takes a lot of effort. But is there enough value to show for the effort and the cost? We’ve been called a dinosaur. Should we follow the other dinosaurs into oblivion?

I don’t think so. There are all kinds of formats for publication, from tweets to LinkedIn posts to blogs to magazines to books. Magazines like the NoCOUG Journal are an important piece of the educational ecosystem. I don’t think that any of the Oracle users groups who no longer produce newsletters planned to end up this way. They ceased publishing because the organization could no longer sustain them.

I think today the hurdle is that newsletters can no longer be confined within the users group. Both NoCOUG and RMOUG have independently come to the realization that the newsletter must be searchable and findable online by the world, which provides the incentive for authors to submit content. Today, if it cannot be verified online, it isn’t real. If it isn’t real, then there is little incentive for authors to publish.

So making the NoCOUG Journal available online has been key to its own viability, and NoCOUG membership entitles one to a real hard-copy issue, which is a rare and precious bonus in this day and age.

Oracle Database 12c

Mogens Norgaard (the co-founder of the Oak Table Network) claims that “since Oracle 7.3, that fantastic database has had pretty much everything normal customers need,” but the rest of us are not confirmed Luddites. What are the must-have features of Oracle 12c that give customers the incentive to upgrade from 11g to 12c? We’ve heard about pluggable databases and the in-memory option, but they are extra-cost options aren’t they?

I know for a fact that the Automatic Data Optimization (ADO) feature obsolesces about 3,000 lines of complex PL/SQL code that I had written for Oracle 8i, 9i, 10g, and 11g databases. The killer feature within ADO is the ability to move partitions online, without interrupting query operations. Prior to Oracle 12c, accomplishing that alone consumed hundreds of hours of code development, testing, debugging, and release management.

Combining ADO with existing features like OLTP compression and HCC compression truly makes transparent “tiers” of storage within an Oracle database feasible and practical. The ADO feature alone is worth the effort of upgrading to Oracle 12c for an organization with longer data retention requirements for historical analytics or regulatory compliance.
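
As a hedged illustration of the kind of logic that no longer needs hand-written PL/SQL, an ADO policy and an online partition move in Oracle 12c might look something like this (the table, partition, and tablespace names are hypothetical; verify the exact syntax against the 12c documentation)…

-- ADO policy: compress rows after 90 days with no modification
alter table sales ilm add policy
  row store compress advanced row
  after 90 days of no modification;

-- Move an older partition to a cheaper storage tier without blocking queries or DML
alter table sales move partition sales_2013q1
  tablespace archive_ts
  row store compress advanced
  update indexes online;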

What’s not to love about pluggable databases? How different is the pluggable database architecture from the architecture of SQL Server, DB2, and MySQL?

I think that first, in trying to explain Oracle pluggable databases, most people make it seem more confusing than it should be.

Stop thinking of an Oracle database as consisting of software, a set of processes, and a set of database files.

Instead, think of a database server as consisting of an operating system (OS) and the Oracle 12c container database (CDB) software; a set of Oracle processes; and the basic control files, log files, and a minimal set of data files. When “gold images” of Oracle database servers are created, whether for jumpstart servers or for virtual machines, the Oracle 12c CDB should be considered part of that base operating system image.

Pluggable databases (PDBs) then are the data files installed along with the application software they support. PDBs are just tablespaces that plug into the working processes and infrastructure of the CDBs.

When PDBs are plugged in, all operational activities involving data protection—such as backups or redundancy like Data Guard replication—are performed at the higher CDB level.

Thus, all operational concerns are handled at the CDB level, separating the operational infrastructure from the PDBs and the applications they support.

Once the discussion is shifted to that high level, the similarities are more visible between the Oracle 12c database and other multitenant databases, such as SQL Server and MySQL. Of course there will always be syntactic and formatting differences, but functionally Oracle 12c has been heavily influenced by its predecessors, such as SQL Server and MySQL.
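
For those who want to see it in SQL, a hedged sketch of unplugging a PDB from one CDB and plugging it into another (the PDB name and manifest path are hypothetical) looks something like this…

-- In the source CDB: close the PDB and unplug it to an XML manifest
alter pluggable database hrpdb close immediate;
alter pluggable database hrpdb unplug into '/u01/app/oracle/hrpdb.xml';

-- In the target CDB: plug it in, reusing the existing datafiles, then open it
create pluggable database hrpdb using '/u01/app/oracle/hrpdb.xml' nocopy tempfile reuse;
alter pluggable database hrpdb open;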

Bonus Question
Do you have any career advice for the younger people reading this interview so that they can be like you some day? Other than actively participating in user groups!

This sounds corny and trite, but there is no such thing as a useless experience, and while it may be frustrating, it presents the opportunity to build. Understand that everyone starts at the bottom, and enjoy the climb.
Understand that learning causes stress. Stress is stress and too much can be unhealthy, but if it is a result of learning something new, then recognize it for what it is, know it is temporary and transitory, tough it out, and enjoy knowing the outcome when it arrives.

Also, don’t voice a complaint unless you are prepared to present at least one viable solution, if not several. Understand what makes each solution truly viable and what makes it infeasible. If you can’t present a solution to go with the complaint, then more introspection is needed. The term “introspection” is used deliberately, as it implies looking within rather than around.

Help people. Make an impact. Can we go wrong in pursuing either of those as goals? Sometimes I wish I had done more along these lines. Never do I wish I had done less.

Presenting “How Data Recovery Mechanisms Complicate Security, and What To Do About It” at #c16lv

Personally-identifiable information (PII) and classified information stored within an Oracle database is quite secure, as the capabilities of Oracle database ensure that it is accessible only by authenticated and authorized users.

But the definition of authentication and authorization can change, in a sense.

In a production application, authentication means verifying that one is a valid application user and authorization means giving that application user the proper privileges to access and manipulate data.

But now let’s clone the data from that production application to non-production systems (i.e. development, QA, testing, training, etc), so that we can develop new application functionality, fix existing functionality, test it, and deliver user training.

By doing so, in effect the community of valid users has changed, and now instead of being production application users, the community of valid users is developers and testers.  Or it is newly-hired employees being trained using “live” information from production.

And the developers and testers and trainees are indeed authenticated properly, and are thus authorized to access and manipulate this same data in their non-production systems.  The same information which can be used to defraud, steal identities, and cause all manner of mayhem, if one so chose.  This honor system is the way things have been secured in the information technology (IT) industry for decades.

Now of course, I can hear security officers from every point of the compass screaming and wildly gesticulating, “NO! That’s NOT true! The definition of authentication and authorization does NOT change, just because systems and data are copied from production to non-production!”  OK, then you explain what has been happening for the past 30 years?

In the case of classified data, these developers and testers go through a security clearance process to ensure that they can indeed be trusted with this information and that they will never misuse it.  Violating one’s security clearance in any respect is grounds for criminal prosecution, and for most people that is a scary enough prospect to wipe out any possible temptation.

Outside of government, organizations have simply relied on professionalism and trust to prevent this information from being used and abused.  And for the most part, for all these many decades, that has been effective.  I know that I have never even been tempted to share PII or classified data with which I’ve come in contact.  I’m not bragging; it is just part of the job.

Now, that doesn’t mean that I haven’t been tempted to look up my own information.  This is essentially the same as googling oneself, except here it is using non-public information.

I recall a discussion with the CFO of an energy company, who was upset that the Sarbanes-Oxley Act of 2002 held her and her CEO criminally liable for any information breaches within her company.  She snarled, “I’ll be damned if I’m going to jail because some programmer gets greedy.”  I felt this was an accurate analysis of the situation, though I had scant sympathy (“That’s why you’re paid the big bucks“).  Since that time we have all seen efforts to rely less on honor and trust, and more on securing data in non-production.  Because I think everyone realizes that the bucks just aren’t big enough for that kind of liability.

This has given rise to products and features which use encryption for data at rest and data in flight.  But as pointed out earlier, encryption is no defense against a change in the definition of authentication and authorization.  I mean, if you authenticate correctly and are authorized, then encryption is reversible obfuscation, right?

To deal with this reality, it is necessary to irreversibly obfuscate PII and classified data, which is also known as data masking.  There are many vendors of data masking software, leading to a robust and growing industry.  If you irreversibly obfuscate PII and classified information within production data as it is cloned to non-production, then you have completely eliminated the risk.

After all, from a practical standpoint, it is extremely difficult as well as conceptually questionable to completely artificially generate life-like data from scratch.  It’s a whole lot easier to start with real, live production data, then just mask the PII and classified data out of it.

Problem solved!

Or is it?

Unfortunately, there is one more problem, and it has nothing to do with the concept of masking and irreversible obfuscation.  Instead, it has to do with the capabilities of the databases in which data is stored.

Most (if not all) of this PII and classified data is stored within databases, which have mechanisms built in for the purpose of data recovery in the event of a mishap or mistake.  In Oracle database, such mechanisms include Log Miner and Flashback Database.

Because data masking mechanisms use SQL within Oracle databases, data recovery mechanisms might be used to recover the PII and classified data that existed before the masking jobs were executed.  It is not a flaw in the masking mechanism; rather, it is the perversion of a database feature for an unintended purpose.
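
For example, here is a hedged sketch of how Flashback Database could expose pre-masking values on a non-production copy, assuming a hypothetical guaranteed restore point created before the masking job ran…

-- Run as a privileged user on the non-production database after masking has completed;
-- the restore point name is hypothetical
shutdown immediate
startup mount
flashback database to restore point before_masking;
alter database open resetlogs;
-- The unmasked data is back.  LogMiner (DBMS_LOGMNR) can similarly reconstruct
-- pre-masking column values from the redo generated by the masking job itself.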

Ouch.

This topic became a 10-minute “lightning talk” presentation on “How Oracle Data Recovery Mechanisms Complicate Data Security, and What To Do About It” on Wednesday 13-April 2016 at the Collaborate 2016 conference in Las Vegas.  Please read the slidedeck for my notes on how to deal with the situation presented here.

You can download the slidedeck here.

Accelerating Oracle E-Business Suites upgrades

Taking backups during development, testing, or upgrades

The process of upgrading an Oracle E-Business Suites (EBS) system is one requiring dozens or hundreds of steps. Think of it like rock climbing. Each handhold or foothold is analogous to one of the steps in the upgrade.

What happens if you miss a handhold or foothold?

If you are close to the ground, then the fall isn’t dangerous. You dust yourself off, take a deep breath, and then you just start climbing again.

But what happens if you’re 25 or 35 feet off the ground? Or higher?

A fall from that height can result in serious injury, perhaps death, unless you are strapped to a climbing rope to catch your fall.  And hopefully that climbing rope is attached to a piton or chock not too far from your current position.  The higher you have climbed above your most recent anchor, the farther you fall, and the more painful the landing.  If you set the last chock 10 feet below, that means you’re going to fall 20 feet, which is going to HURT.  It makes more sense to set chocks every 5 feet, so your longest drop will only be 10 feet at worst.

And that is the role that backups play in any lengthy process with many discrete steps, such as an EBS upgrade. Performing a backup of the systems you’re upgrading every half-dozen or dozen steps allows you to restore to the most recent backup and resume work while having only lost a relatively small amount of work.

But think of the cost of taking a full backup of the systems you’re upgrading.

Let’s assume that the EBS database is 1 TB in size, the dbTier is 4 GB, and the appsTier is another 4 GB. All told, each backup takes up over a terabyte of backup media and takes 2 hours to perform, start to finish. So an upgrade process of 100 steps, with a backup every dozen steps, means performing 8-9 backups at 2 hours apiece, totaling 16-18 hours of backups alone! Not only do the backups represent a significant portion of the total number of steps to perform, but they also consume a significant portion of the total elapsed time of the entire upgrade process.

So, in the event of a mistake or mishap, it will probably take about 2 hours to restore to the previous backup, and then the intervening steps have to be replayed, which might take another couple of painstaking hours. Assuming there’s nothing wrong with the backup. Assuming that tapes haven’t been sent offsite to Iron Mountain. Assuming… yeah, assuming…

So, just as with climbing, taking periodic backups of your work protects you against a fall.  But if each backup takes 2 hours, are you going to do that after every 15 minutes?  After every hour?  After every 2 hours?  After every 4 hours?  Spacing out those time-consuming backups leaves more time for productive work, but think of how far you’re going to fall if you slip.

Because of this high cost of failure, believe that each step will be double-checked and triple-checked, perhaps by a second or third person, before it is committed or the ENTER key is pressed.

No wonder upgrading EBS is so exhausting and expensive!

The fastest operation is the one you never perform

What if you never had to perform all those backups, but you could still recover your entire EBS systems stack to any point-in-time as if you had?

This is what it is like to use data virtualization technology from Delphix. You start by linking a set of “source” EBS components into Delphix, and these sources are stored in compressed format within the Delphix appliance. Following that, these sources can produce virtual copies taking up very little storage in a matter of minutes. A Delphix administrator can provision virtual copies of the entire EBS systems stack into a single Delphix JetStream container for use by an individual or a team. The components of the container, which include the EBS database, the EBS dbTier, and the EBS appsTier, become the environment upon which the upgrade team is working.

Delphix captures all of the changes made to the EBS database, the EBS dbTier, and the EBS appsTier, which is called a timeflow. Additionally, Delphix JetStream containers allow users to create bookmarks, or named points-in-time, within the timeflow to speed up the location of a significant point-in-time, and easily restore to it.

So each developer or team can have their own private copy of the entire EBS systems stack. As the developer is stepping through each step of the upgrade, they can create a bookmark, which takes only seconds, after every couple steps. If they make a mistake or a mishap occurs, then they can rewind the entire JetStream container, consisting of the entire EBS systems stack, back to the desired bookmark in minutes, and then resume from there.

Let’s step through this, using the following example EBS system stack already linked and ready to use within a Delphix appliance…

Understanding concepts: timeflows, snapshots, and bookmarks

A dSource is the data loaded into Delphix from a source database or filesystem. Initially a dSource is loaded (or linked) into Delphix using a full backup, but after linking, all changes to the source database or filesystem are captured and stored in the Delphix dSource.

A timeflow is an incarnation of a dSource. The full lifecycle of a dSource may include several incarnations, roughly corresponding to RESETLOGS events in an Oracle database. Delphix retains not just the current point-in-time of a dSource (as replication tools like GoldenGate or Data Guard do), but every action ever performed across all incarnations, from the time the dSource was initially linked into Delphix up to the current point-in-time.

Even though Delphix compresses, de-duplicates, and performs other clever things to make the data loaded much smaller (i.e. usually about 30% of the original size), you still cannot retain everything forever. It’s just impossible. To deal with this, data is purged from timeflows that are older than the retention policy configured by the Delphix administrator.

Another useful concept is a snapshot, which is a small data structure that represents a usable full virtual image of a timeflow at a specific point-in-time.  Snapshots in Delphix can portray a full Oracle database as a virtual image at a point-in-time, and snapshots are subject to purging under the retention policy configured for the timeflow.

A bookmark is a named snapshot that is not subject to the configured retention policy on the timeflow, but instead can be configured with its own private retention policy, which defaults to “forever”.  Bookmarks are intended to represent points-in-time that have meaning to the end user.

An example EBS system stack in Delphix

So here we see a full E-Business Suites R12.1 system stack linked into Delphix. In the screenshot below, you can see three dSource objects in the left-hand navigation bar, under the Sources database group or folder. These three dSource objects are named…

  1. EBS R12.1 appsTier
  2. EBS R12.1 dbTechStack
  3. EBSDB

The first two items are vFiles, containing the file system sub-directories representing the E-Business Suites R12.1 applications tier and database tier, respectively. The last item is a virtual database or VDB, containing the Oracle database for the E-Business Suites R12.1 system. Each is indicated as a dSource by the circular black dSource icon with the letters “dS” within (indicated below by the red arrows)…

dSource icons indicated by red arrows in the main Delphix administration console

Registering EBS dSources as JetStream templates

The three dSources have been encapsulated within the Delphix JetStream user-interface as a template, as shown by the JetStream icons (a tiny hand holding out a tiny cylinder representing a datasource, indicated below by the red arrows) just to the right of the dSource icons…

JetStream template icons indicated by red arrows in the main Delphix administration console

Within the JetStream user-interface itself, we see that all three dSources are encapsulated within one JetStream template, with the three dSources comprising the three datasources for the template…

red arrows pointing to the names of the E-Business Suite system components comprising datasources

Encapsulating EBS VDBs as JetStream containers

Once dSources are encapsulated as a JetStream template, we can encapsulate VDBs and vFiles created from those dSources as JetStream containers. Here, we see the database group or folder called Targets circled in the left-hand navigation bar; the blue icons for VDBs and vFiles with the letter “V” within are indicated by the red arrow, while the darker icons for JetStream containers are indicated by the green arrow…

Red arrow pointing to the icon for a VDB, and the green arrow pointing to the icon for a JetStream container

Working with JetStream containers for EBS

With an entire EBS system stack under the control of Delphix JetStream, we can begin the lengthy process of upgrading EBS. Before we begin, we should create a JetStream bookmark called Upgrade Step 0 to indicate the starting point of the upgrade…

starting with a timeflow within a JetStream container

Then, after performing the first five steps of the upgrade in the EBS system stack, come back to the JetStream user-interface and create a JetStream bookmark named Upgrade Step 5, as shown below…

Choosing a point-in-time on a JetStream timeflow within a JetStream container, and creating a bookmark

After every couple of steps of the upgrade, or prior to any lengthy step, come back to the Delphix JetStream user-interface and create a bookmark named appropriately for the step in the upgrade process, as shown below…

Naming a JetStream bookmark

Performing a rewind in a JetStream container for EBS

So we’re cruising along, performing productive steps in the EBS upgrade process, and not worrying about making backups.

WHOOPS! At upgrade step 59, we made a serious mistake!

If we had been making backups every 12 steps or so, then the most recent backup may have been Upgrade Step 48, which was 11 steps ago. Besides the multiple hours it may take to restore the backup in the database, the dbTier, and the appsTier, we are going to have to perform 11 steps over again.

In contrast, using Delphix JetStream, we have better choices. From the Delphix JetStream user-interface, we can restore to any point-in-time on the timeflow, whether it is one of the bookmarks we previously created at a known step in the upgrade process, or any arbitrary point-in-time on the timeflow line, which may or may not fall in the middle of one of the upgrade steps, and thus may not be a viable point from which to restart.

So, because we want to restart the upgrade from a known and viable step in the process, we’re going to restore the entire EBS systems stack consistently back to the JetStream bookmark Upgrade Step 55

Selecting a JetStream bookmark in preparation for performing a restore operation

So, first let’s click on the JetStream bookmark named Upgrade Step 55 and then click on the icon for the Restore operation to start the rewind…

Starting a restore operation to a JetStream bookmark

After being prompted Are You Sure and then confirming Yes I Am, then the Restore process starts to create a brand new timeflow for all three datasources in the container…

Seeing the restore operation in progress, and the creation of a new timeflow within the container

Please notice that the old timeflow is still visible and accessible to the left in the JetStream user-interface, even with the new timeflow being restored to the right.

Finally, the restore operation completes, and now all three parts of the EBS systems stack (i.e. appsTier, dbTier, and database) have been restored to Upgrade Step 55 in about 15 minutes elapsed time, a fraction of the 2 hours it would have taken to restore from backups.

Completion of the restore operation, showing the old timeflow still available, and the new timeflow following the restore operation

Now we can resume the EBS upgrade process with barely any time lost!

Summary of benefits

So a few things to bear in mind from this example when using Delphix JetStream…

  1. Once a JetStream container has been provisioned by the Delphix administrators (usually a role occupied by database administrators or DBAs), all the operations shown above are performed by JetStream data users, who are usually the developers performing the upgrade themselves
    • No need to coordinate backups or restores between teams == less time wasted
  2. Using Delphix removed the need to perform backups, because all changes across the timeflow of all three components of the EBS systems stack (i.e. appsTier, dbTechStack, and database) are recorded automatically within Delphix
    • No backups == less storage consumed == less time wasted
  3. Using Delphix removed the need to restore from backups, because the timeflow can be restored to any previous point-in-time in a fraction of the time it takes to perform a restore from backup
    • No restores == less time wasted

It is difficult to imagine not performing EBS upgrades using the methods we’ve used for decades, but that time is here. Data virtualization, like server virtualization, is fast becoming the new norm, and it is changing everything for the better.

Free webinar

Please view the joint webinar featuring Mike Swing of TruTek, Kyle Hailey of Delphix, and myself, entitled “Avoiding The Pitfalls Of An Oracle E-Business Suites Upgrade Project”, about how data virtualization can reduce the risk and accelerate the completion of EBS upgrades, recorded on 07-Apr 2016.

You can also download the slidedeck from the webinar here. 


Presenting “Accelerating DevOps Using Data Virtualization” at #C16LV

Borrowing from Wikipedia, the term DevOps is defined as…

DevOps (a clipped compound of "development" and "operations") is a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software, can happen rapidly, frequently, and more reliably.

Now, I hate buzzwords as much as the next BTOM (a.k.a. bitter twisted old man)…

…but the idea behind DevOps, of building, testing, and releasing software more rapidly and reliably is simply amazing and utterly necessary.

As system complexity has increased, as application functionality has ballooned, and as the cost of production downtime has skyrocketed, writing and testing code leaves one a long way from the promised land of published and deployed production code.

As explained by DevOps visionaries like Gene Kim, the biggest barrier to that promised land is data, in the form of databases cloned from production for development and testing, in the form of application stacks cloned from production systems for development and testing.

The amount of time wasted waiting for data on which to develop or test dwarfs the amount of time spent developing or testing.  Consequently, IT has learned to be satisfied with only occasional refreshes of dev/test systems from production, resulting in humorously inadequate dev/test systems, and that has been the norm.

There is a new norm in town.

Data virtualization, like server virtualization, breaks through the constraint.  Over the past 10 years, IT has learned to wallow in the freedom of server virtualization, using tools like VMware and OpenStack to provision virtual machines for any purpose.

Unfortunately, data and storage did not benefit from virtualization in the same way.  This has resulted in a white-hot nova in the storage industry, and while that is good news for the storage industry, it still means that IT has cloned from production to non-production the same way it has for the past 40 years, in other words slowly, expensively, and painfully.

And we have continued to do it the old way, slowly, expensively, and painfully, because we didn’t know any better.

The IT industry couldn’t see any better way to clone from production to non-production.  Slow and painful was the norm.

But once one realizes the nature of making copies, and how modern file-system technology can share at the block level, compress, and de-duplicate, suddenly making copies of databases and file-system directories becomes easier and less expensive.

Here is a thought-provoking question:  why doesn’t every individual developer and tester have their own private full systems stack?  Why can’t they have several of them, one or more for each task on which they’re working?

I can literally hear all of the other BTOMs scoffing at that question:  “Nobody has that much infrastructure, you idiot!”

And that is the point.  You certainly do.

You just don’t have the right infrastructure.

This was presented at the Collaborate 2016 conference in Las Vegas on Monday 11-April 2016.

You can download the slidedeck here.

Presenting “UNIX/Linux Tools For The Oracle DBA” at #C16LV

Going back to the invention of the graphical user interface (GUI) in the 1970s, there has been tension between the advocates of the magical pointy-clickety GUI and the clickety-clackety command-line interface (CLI).

Part of it is stylistic… GUI’s are easier, faster, more productive.

Part of it is ego… CLI’s require more expertise and are endlessly customizable.

Given the evolutionary pressures on technology, the CLI should have gone extinct decades ago, as more and more expertise is packed into better and better GUI’s.  And in fact, that has largely happened, but the persistence of the CLI can be explained by four persistent justifications…

  1. not yet supported in the GUI
  2. need to reproduce in the native OS
  3. automation and scripting
  4. I still drive stick-shift in my car, too

This is the prime motivation for the presentation entitled “UNIX/Linux Tools For The Oracle DBA” which I gave at the Collaborate 2016 conference in Las Vegas (#c16lv) on Monday 11-April 2016.

When your monitoring tool gets to the end of the information it can provide, it can be life-saving to be able to go to the UNIX/Linux command-line and get more information.  Knowing how to interpret the output from CLI tools like “top”, “vmstat”, and “netstat” can be the difference between waiting helplessly for deus ex machina or a flat-out miracle, and obtaining more information to connect the dots to a resolution.

Oftentimes, you have a ticket open with vendor support, and they can’t use screenshots from the GUI tool or report you’re using.  Instead, they need information straight from the horse’s mouth, recognizable, trustworthy, and uninterpreted.  In that case, knowing how to obtain OS information from tools like “ps”, “pmap”, and “ifconfig” can keep a support case moving smoothly toward resolution.
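
For reference, here are a few hedged examples of the kinds of invocations meant here (Linux syntax; options and output formats vary by platform, and the PID shown is hypothetical)…

vmstat 5 5           # CPU, memory, and paging activity, sampled every 5 seconds, 5 times
top -b -n 1          # one batch-mode snapshot of the busiest processes
netstat -s           # per-protocol network statistics
ps -ef | grep pmon   # find Oracle background processes (one pmon per running instance)
pmap -x 12345        # detailed memory map of process ID 12345
ifconfig -a          # configuration and state of all network interfaces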

Likewise, the continued survival of CLI’s has a lot to do with custom scripting and automation.  Sure, there are all kinds of formal APIs, but a significant portion of the IT world runs on impromptu and quickly-developed scripts based on shell scripts and other CLI’s.

Last, my car still has a manual transmission, and both my kids drive stick as well.  There is no rational reason for this.  We don’t believe we go faster or do anything better than automatic transmission.  After almost 40 years behind the wheel, it can be argued that I’m too old to change, but that’s certainly not true of my kids.  We just like being a part of the car.

Hopefully all of those reasons help explain why it is useful for Oracle DBAs to learn more about the UNIX/Linux command-line.  You’re not always going to need it, but when you do, you’ll be glad you did.

You can download the slidedeck here.

RMOUG calendar page

For those who are always seeking FREE technical learning opportunities, the calendar webpage of the Rocky Mountain Oracle Users Group (RMOUG) is a great resource to bookmark and check back on weekly.

RMOUG volunteers compile notifications of webinars, meetings, and meetups from the internet and post them here for everyone to use.

The information technology (IT) industry is always evolving and therefore always changing.

Free webinars, even if they seem too commercial at times, always have at least one solid nugget of information that can make the difference in a professional’s life and career.

You never know when new information that nobody else knows is going to come in handy.

Stay a step ahead…

Avoiding Regret

After working for a variety of companies in the 1980s, after working for Oracle in the 1990s, after trying (and failing) to build a company with friends at the turn of the century, and after more than a decade working as an independent consultant in this new century, I found myself in a professional dilemma last year.

I know I need to work at least another ten years, probably more like fifteen years, to be able to retire.  I had survived the nastiest economic downturn since the Great Depression, the Great Recession of 2008-2011, while self-employed, and felt ready to take on the economic upswing, so I was confident that I could work steadily as an independent Oracle performance tuning consultant for the next 15 years or more.

Problem was: I was getting bored.

I loved my work.  I enjoy the “sleuthiness” and the forensic challenge of finding performance problems, testing and recommending solutions, and finding a way to describe it clearly so that my customer can make the right decision.  I am confident that I can identify and address any database performance problem facing an application built on Oracle database, and I’ve dozens of successful consulting engagements to bear witness.  I have a legion of happy customers and a seemingly endless supply of referrals.

Being confident is a great feeling, and I had spent the past several years just enjoying that feeling of confidence on each engagement, relishing the challenge, the chase, and the conclusion.

But it was becoming routine.  The best explanation I have is that I felt like a hammer, and I addressed every problem as if it was some form of nail.  I could feel my mental acuity ossifying.

Then, opportunity knocked, in an unexpected form from an unexpected direction.

I have a friend and colleague whom I’ve known for almost 20 years, named Kyle Hailey.  Kyle is one of those notably brilliant people, the kind of person to whom you pay attention immediately, whether you meet online or in person.  We had both worked at Oracle in the 1990s, and I had stayed in touch with him over the years since.

About four years ago, I became aware that he was involved in a new venture, a startup company called Delphix.  I wasn’t sure what it was about, but I paid attention because Kyle was involved.  Then, about 3 years ago, I was working as a DBA for a Colorado-based company who had decided to evaluate Delphix’s product.

Representing my customer, my job was to prevent a disaster from taking place.  I was to determine if the product had any merit, if the problems being experienced were insurmountable, and if so let my customer know so they could kill the project.

Actually, what my boss said was, “Get rid of them.  They’re a pain in my ass“.  I was just trying to be nice in the previous paragraph.

So OK.  I was supposed to give Delphix the bum’s rush, in a valid techie sort of way.  So I called the Delphix person handling the problem, and on the other end of the phone was Kyle.  Hmmm, I can give a stranger the bum’s rush, but not a friend.  Particularly, not someone whom I respect like a god.  So, instead I started working with him to resolve the issue.  And resolve it we did.  As a result, the company bought Delphix, and the rest is history.

Here’s how we did it…

But first, what does Delphix do?  The product itself is a storage appliance on a virtual machine in the data center.  It’s all software.  It uses sophisticated compression, deduplication, and copy-on-write technology to clone production databases for non-production usage such as development and testing.  It does this by importing a copy of the production database into the appliance, compressing that base copy down to 25-30% of its original size.  Then, it provides “virtual databases” from that base copy, each virtual database consuming almost no space at first, since almost everything is read from the base copy.  As changes are made to each virtual database, copy-on-write technology stores only those changes, and only for that virtual database.  So, each virtual database is presented as a full image of the source database, but costs practically nothing.  Even though the Oracle database instances reside on separate servers, the virtual database actually resides on the Delphix engine appliance and is presented via NFS.
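To make the copy-on-write idea concrete, here is a tiny conceptual sketch in Python.  It is only an illustration of the general technique, not Delphix’s actual implementation, and every name in it is hypothetical:  many virtual copies share one read-only base image, each copy stores only the blocks it changes, and reads fall back to the shared base.

# Conceptual sketch of copy-on-write provisioning (illustrative only,
# not Delphix's actual implementation; all names are hypothetical).

class BaseCopy:
    """A shared, read-only image of the source database, stored once."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # block_id -> block contents

class VirtualCopy:
    """A 'virtual database': stores only the blocks it has changed."""
    def __init__(self, base):
        self.base = base
        self.changed = {}               # private copy-on-write blocks

    def read(self, block_id):
        # Reads come from this copy's private blocks if changed,
        # otherwise they fall through to the shared base image.
        if block_id in self.changed:
            return self.changed[block_id]
        return self.base.blocks[block_id]

    def write(self, block_id, data):
        # Writes never touch the base; only the delta is stored per copy.
        self.changed[block_id] = data

# One base copy can back many virtual copies at almost no extra storage cost.
base = BaseCopy({1: "orders", 2: "customers"})
dev, test = VirtualCopy(base), VirtualCopy(base)
dev.write(2, "customers-masked")
print(dev.read(2), test.read(2))        # prints: customers-masked customers

Scale that idea up to database blocks compressed on an appliance and served over NFS, and you can see why each virtual database costs practically nothing until it begins to diverge from the base copy.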

I was asked to understand why the virtual databases were slow.

On the other end of the phone was Kyle, and with repeatable tests he was easily able to show me where the performance problems were and what their nature was, and that they were predictable and resolvable.

But I’m not really writing just about Delphix, even though it is very cool and quite earthshaking.  Rather, I’m writing about something bigger that has stormed into our industry, bringing to fruition something that I had tried, and failed, to accomplish at the turn of the century.  Back then, when some colleagues and I tried to build a hosted-application services company, we failed in two ways:  1) we were ahead of our time and 2) we chose the wrong customers.

Being ahead of one’s time is not a failure, strictly speaking.  It shows a clarity of vision, but bad timing.

However, choosing the wrong customers is absolutely a failure.

Before I explain, please be aware that I’m going to change most of the names, to protect any innocent bystanders…

After leaving Oracle in July 1998, I founded my own little company-of-one, called Evergreen Database Technologies (a.k.a. EvDBT).  The dot-com boom was booming, and I wanted to operate as an independent consultant.  I had been living in Evergreen in the foothills west of Denver for years, so the choice of name for my company was by no means a brilliant leap of imagination; I just named it after my hometown.  Business was brisk, even a bit overheated as Y2K approached, and I was busy.  And very happy.

In early 2000, I had been working with another young company called Upstart, and we felt that the information technology (IT) industry was heading in one inescapable direction:  hosted services.  So I joined Upstart and we decided that providing hosted and managed Oracle E-Business Suites (EBS) was a good business.  EBS is the world’s 2nd most prevalent Enterprise Resource Planning (ERP) system and can be dizzyingly complex to deploy.  It is deployed in hundreds of different industries and is infinitely customizable, so in order to avoid being eaten alive by customization requests from our potential customers, we at Upstart would have to focus on a specific industry and pre-customize EBS according to the best practices for that industry.  We chose the telecommunications industry, because it was an even bigger industry in Colorado then than it is now.  We wanted to focus on small telecommunications companies, being small ourselves.  At that time, small telecommunications companies were plentiful because of governmental deregulation in the industry.  These companies offered DSL internet and phone services, and in June 2000 our market research told us it was a US$9 billion industry segment and growing.

Unfortunately, all of the big primary telecommunications carriers bought the same market research and were just as quick to catch on, and they undercut the DSL providers, provided cheaper and better service, and put the myriad small startups out of business practically overnight.  “Overnight” is not a big exaggeration.

By October 2000, we at Upstart were stunned to discover that our customer base had literally ceased answering their phones, which is chillingly ominous when you consider they were telecommunications companies.  Of course, the carnage directly impacted the companies who had hoped to make money from them, such as Upstart.  Only 4 months in business ourselves, we watched our target market vanish like smoke.  You might say we were… a little disconcerted.

We at Upstart had a difference of opinion on how to proceed, with some of the principals arguing to continue sucking down venture-capital funding and stay the course, while others (including me) argued that we had to find some way, any way, to resume generating our own revenue on which to survive.  I advocated returning to Upstart’s previous business model of consulting services, but the principals who wanted to stay the course with the managed services model couldn’t be budged.  By December 2000, Upstart was bankrupt, and the managed-services principals ransacked the bank accounts and took home golden parachutes for themselves.  I jumped back into my own personal lifeboat, my little consulting services company-of-one, Evergreen Database Technologies.  And continued doing what I knew best.

This was a scenario that I’m told was repeated in many companies that flamed out during the “dot-bomb” era.  There are a million stories in the big city, and mine is one of them.

But let’s just suppose that Upstart had chosen a different customer base, one that didn’t disappear completely within months.  Would it have survived then?

There is a good chance that we would still have failed, because we were ahead of our time and also ahead of the means to succeed.  Hosted and managed applications, which today are called “cloud services”, were made more difficult back then by the fact that software was (and is) designed with the intention of occupying entire servers.  The E-Business Suites documentation from Oracle assumed so, and support at Oracle assumed the same.  This meant that we at Upstart had to provision several entire server machines for each customer, which is precisely what the customers had been doing for themselves.  As a result, we could not operate much more cheaply than our customers could, the cost savings and efficiencies were very thin, and our profit margins were ridiculously small.

Successful new businesses are not founded on incremental improvement.  They must be founded on massive change.

What we needed at that time was server virtualization, which came along a few years later in the form of companies like VMware.  Not until server virtualization permitted us to run enterprise software on virtual machines, which could be stacked en masse on physical servers, could we have hoped to operate efficiently enough to save costs and make money.

Fast forward to today.

Today, server virtualization is everywhere, deeply embedded in every data center.  You can create virtual machines on your laptop, on a stack of blade servers, or on a mainframe, emulating almost any operating system that has ever existed, and you can create them in a way that finally makes full use of all the resources of real, physical servers.  No longer do system administrators rack, wire, and network physical servers for individual applications using customized specifications for each server.  Instead, virtual machines are configured according to those customized specifications, and dozens of those virtual machines run on each physical machine.

The advent of virtual machines also brought about the operations paradise of abstracting computer servers completely into software, so that they could be built, configured, operated, and destroyed entirely like the software constructs they were.  No more racking and wiring, one server per application.  Now, banks of “blades” were racked and wired generically, and virtual machines were balanced within and across blades, with busier virtual machines moving toward available CPU, memory, and networking resources and quieter virtual machines yielding CPU, memory, and networking to others.  Best of all, this virtualization converted hardware into software, which could be programmed and controlled like software.

Everything was virtualized, and all was good.

Except storage.

Think about it.  It is easy and cheap to provision another virtual machine, using fractions of CPU cores and RAM.  But each of those virtual machines needs a full image of its operating system, application software, and database.  While server virtualization permitted data centers to use physical servers more efficiently, it caused a veritable supernova of storage growth.  So much so that analysts like Gartner have predicted a “data crisis” before the end of the decade.

This is where Delphix comes in.

By virtualizing data as well as servers, it is now truly fast, cheap, and easy to provision entire virtual environments.  Delphix works with Oracle, and it also works with SQL Server, PostgreSQL, MySQL, DB2, and Sybase.  Even more importantly, it also virtualizes file-systems, so that application software as well as databases can be virtualized.

So back in early 2014, Kyle contacted me and asked if I would be interested in joining Delphix.  My first reaction was the one I always had, which was “no thanks, I’ve already got a job“.  I mean, I was a successful and flourishing independent consultant.  Why would I consider working within any company anymore?  Business was brisk, I never had downtime, and the economy was improving.  I had successfully been operating as an independent consultant for most of the past 15 years.  Why fix what wasn’t broken?

But here was the crux of the matter…

I wanted to try something new, to expand beyond what I had been doing for the past 25 years since I first joined Oracle.  If it had been a large, established company beckoning, I wouldn’t have considered it for a second.  But a promising startup company, with a great idea and a four-year track record of success already, and still pre-IPO as well?

I couldn’t resist the gamble.  What’s not to like?

It’s possible I’ve made an enormous mistake, but I don’t think so.

Not to turn excessively morbid, but all of us are just a heartbeat away from our common destination.  I believe that, when I’m at the last moments of my life, the thing I fear will not be death itself, or pain, or leaving life behind.

It is regret that I fear.

And regret can take many forms, but the most painful regret will undoubtedly be over what might have been, the opportunities passed or missed.  Career is only one aspect of life, and I don’t want to give it too much significance.  I’ve accumulated regrets in how I’ve lived my life and how I’ve treated people in my life; in some cases they are small, but there are some that will always haunt me.

But with regards to my professional career, as Robert Frost said, I’ve taken the road less traveled, and that has made all the difference.  No regrets here.

Oracle ACE program

What is the Oracle ACE program?

The Oracle ACE program is a community advocacy program for Oracle technology evangelists and enthusiasts, sponsored and managed by Oracle Corporation.  As stated on the ACE overview page at “http://oracle.com/technetwork/community/oracle-ace”, it is both a network and a resource for everyone in the Oracle community.  It is not a certification program; there is no course to study and no test to pass.  Rather, the ACE program is recognition by one’s colleagues and peers, and joining is a process of nomination, review, and acceptance.

Who are Oracle ACEs?

They are your colleagues in the worldwide Oracle technology community, of which you and the RMOUG community here in Colorado are a significant part.  There are now more than 500 people in 55 countries around the world who have been recognized by their peers and by Oracle Corporation.  They are not employees of Oracle, but rather partners and customers of Oracle.

The ACE program is now 10 years old and comprises three levels…

ACE Associate – the entry point for the program and an ideal form of recognition for technology community advocates who are building their reputations

ACE – established advocates for Oracle technology who are well-known in their community

ACE Director – top-tier members of the worldwide community who engage with many users groups and advocates and with Oracle Corporation

The ACE program is always growing, and the current members are always happy to help you make the jump to the next level in your career.  ACEs are expected to contribute in some or all of the following ways:

  • contribute actively to technical forums such as the OTN community forum or the ORACLE-L email list
  • publish articles in newsletters, magazines, and blogs
  • publish books
  • post on social media, such as Twitter
  • organize community activities, such as Oracle users groups
  • present sessions at Oracle conferences and Oracle user group meetings

Few people do all of these things, so don’t think that everyone does, but please be aware that there are some who do, and they drive their careers above and beyond.

Joining the ACE program is not a one-time task, but an ongoing commitment to contributing and sharing in order to build your community, whether that community is local to where you live, national, or worldwide.

To find out more about the program, go to the web page at “http://oracle.com/technetwork/community/oracle-ace/become-an-ace” or just contact one of the Oracle ACEs near you and ask questions.

That’s why we’re here.