Automating Stored Procedures When Refreshing Virtual SQL Server Databases

So… you’re using a Delphix virtualization engine to provision SQL Server virtual databases (VDBs), but you’re finding that your SQL account passwords are reset to the production values after every refresh?  And likewise all of the application settings stored in the database?

Provisioning a new virtual database is easy.  Go into the Delphix administrative console, find the dSource you want, find the snapshot for the point-in-time you want on that dSource, click the Provision button, specify the database server where the new database should reside, and hey presto!  In 10 minutes or less, a new virtual database is ready for use.

Except not quite…

That new VDB still has all of the data from the source database, and if that source database is production, then that means that it still has production passwords.  And it still has confidential data.  All of that has to be fixed before the database can be used by developers or testers.

So, even though you can now clone from production in minutes instead of hours or days, you still have to do the same post-clone and post-refresh processing tasks you’ve always had to do prior to Delphix.  Now, you just do them immediately rather than later.

If you’re willing to automate those tasks, whether in T-SQL stored procedures or in Windows PowerShell scripts, then Delphix can help by embedding them as part of the provision or refresh operations.

Delphix offers hooks, which are programmatic callouts that fire before and after certain actions by the Delphix virtualization engine, such as a refresh action…

Hooks provide the ability to execute PowerShell code on the target Windows server as the Windows domain account registered with Delphix, either before or after the successful completion of an action.  Here is what the form for adding hooks within a VDB looks like…

Here are the actions for which hooks can be entered, also seen listed in the screenshot above…

  • Provision
    • Configure Clone hook fires after the action completes
  • Refresh
    • Pre-Refresh hook fires before the action begins
    • Post-Refresh hook fires after the action completes successfully
    • Configure Clone hook fires after the Post-Refresh hook completes successfully
  • Rewind
    • Pre-Rewind hook fires before the action begins
    • Post-Rewind hook fires after the action completes successfully
  • Snapshot
    • Pre-Snapshot hook fires before the action begins
    • Post-Snapshot hook fires after the action completes successfully
  • Start
    • Pre-Start hook fires before the action begins
    • Post-Start hook fires after the action completes successfully
  • Stop
    • Pre-Stop hook fires before the action begins
    • Post-Stop hook fires after the action completes successfully
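To make the sequencing around a refresh concrete, here is a tiny illustrative sketch (Python as a stand-in for illustration only; actual Delphix hooks run PowerShell on the target host):

```python
# Illustrative only: models the order in which hooks fire around a
# refresh action, per the list above.
def refresh_with_hooks(do_refresh):
    fired = []
    fired.append("Pre-Refresh")      # fires before the action begins
    do_refresh()                     # the refresh action itself
    fired.append("Post-Refresh")     # fires after the action completes successfully
    fired.append("Configure Clone")  # fires after Post-Refresh completes successfully
    return fired

order = refresh_with_hooks(do_refresh=lambda: None)
```

In other words, the Pre-Refresh hook is the last chance to capture state from the outgoing VDB, and the Post-Refresh and Configure Clone hooks are where that state can be restored.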

So back to the problem at hand…

We want some actions to take place automatically each time we refresh our virtual database (VDB).  As it turns out, we have two Transact-SQL stored procedures already coded to do the job…

  1. stored procedure MSDB.DBO.CAPTURE_ACCTS_PASSWDS
    • Saves all of the current accounts and account passwords to a set of tables in the MSDB system database
  2. stored procedure MSDB.DBO.REAPPLY_ACCTS_PASSWDS
    • Re-applies all of the current accounts and account passwords from the information previously stored in the MSDB system database

“@DatabaseName”, which is the name of the VDB, is the only parameter for these stored procedures.

I haven’t posted the T-SQL code for these stored procedures, partly because it will always be heavily customized to its environment, but mostly because I am not proficient with T-SQL myself, and would just be copying someone else’s code anyway.
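Conceptually, though, the pair is a save-and-restore bracket around the refresh.  This toy model (Python for illustration only; the real implementation would be T-SQL against tables in MSDB, and all names here are invented) captures the idea:

```python
# Toy model of the capture/reapply pattern: save account metadata to a
# store that survives the refresh (the MSDB tables play that role in
# the real system), let the refresh overwrite the VDB, then restore.
saved_store = {}  # stands in for the tables in MSDB

def capture_accts_passwds(database_name, accounts):
    # Pre-Refresh: snapshot the VDB's accounts and passwords
    saved_store[database_name] = dict(accounts)

def reapply_accts_passwds(database_name, accounts):
    # Post-Refresh: re-apply the saved values over what the refresh copied in
    accounts.update(saved_store[database_name])

vdb_accounts = {"app_user": "dev-password"}
capture_accts_passwds("VV11", vdb_accounts)
vdb_accounts["app_user"] = "prod-password"   # the refresh re-copies production data
reapply_accts_passwds("VV11", vdb_accounts)  # back to the pre-refresh value
```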

So looking at our list of Delphix hooks, it should be clear that we need to call the CAPTURE_ACCTS_PASSWDS stored procedure during the Pre-Refresh hook, and the REAPPLY_ACCTS_PASSWDS stored procedure during the Post-Refresh hook.  Since hooks execute PowerShell code rather than T-SQL, here is some PowerShell code we can use…

#======================================================================
# File:      callsp.ps1
# Type:      powershell script
# Author:    Delphix Professional Services
# Date:      02-Nov 2015
#
# Copyright and license:
#
#       Licensed under the Apache License, Version 2.0 (the "License");
#       you may not use this file except in compliance with the
#       License.
#
#       You may obtain a copy of the License at
#     
#               http://www.apache.org/licenses/LICENSE-2.0
#
#       Unless required by applicable law or agreed to in writing,
#       software distributed under the License is distributed on an
#       "AS IS" basis, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
#       either express or implied.
#
#       See the License for the specific language governing permissions
#       and limitations under the License.
#     
#       Copyright (c) 2015 by Delphix.  All rights reserved.
#
# Description:
#
#    Call the appropriate stored procedure within the DBO schema in
#    the MSDB database on behalf of the VDB.  The stored procedure
#    takes the name of the database as a parameter called
#    "@DatabaseName".
#
# Command-line parameters:
#
#    $fqSpName        fully-qualified stored procedure name
#
# Environment inputs expected:
#
#    VDB_DATABASE_NAME    SQL Server database name for the VDB
#    VDB_INSTANCE_NAME    SQL Server instance name for the VDB
#    VDB_INSTANCE_PORT    SQL Server instance port number for the VDB
#    VDB_INSTANCE_HOST    SQL Server instance hostname for the VDB
#
# Note:
#
# Modifications:
#    TGorman    02nov15    first version
#======================================================================
param([string]$fqSpName = "~~~")
#
#----------------------------------------------------------------------
# Verify the "$fqSpName" command-line parameter value...
#----------------------------------------------------------------------
 if ( $fqSpName -eq "~~~" ) {
     throw "Command-line parameter 'fqSpName' not found"
 } 
#
#----------------------------------------------------------------------
# Clean up a log file to capture future output from this script...
#----------------------------------------------------------------------
 $dirPath = [Environment]::GetFolderPath("Desktop")
 $timeStamp = Get-Date -UFormat "%Y%m%d_%H%M%S"
 $logFile = $dirPath + "\" + $env:VDB_DATABASE_NAME + "_" + $timeStamp + "_SP.LOG"
 "logFile is " + $logFile
#
#----------------------------------------------------------------------
# Output the variable names and values to the log file...
#----------------------------------------------------------------------
 "INFO: dirPath = '" + $dirPath + "'" | Out-File $logFile
 "INFO: fqSpName = '" + $fqSpName + "'" | Out-File $logFile -Append
 "INFO: env:VDB_INSTANCE_HOST = '" + $env:VDB_INSTANCE_HOST + "'" | Out-File $logFile -Append
 "INFO: env:VDB_INSTANCE_NAME = '" + $env:VDB_INSTANCE_NAME + "'" | Out-File $logFile -Append
 "INFO: env:VDB_INSTANCE_PORT = '" + $env:VDB_INSTANCE_PORT + "'" | Out-File $logFile -Append
 "INFO: env:VDB_DATABASE_NAME = '" + $env:VDB_DATABASE_NAME + "'" | Out-File $logFile -Append
#
#----------------------------------------------------------------------
# Housekeeping: remove any existing log files older than 15 days...
#----------------------------------------------------------------------
 "INFO: removing log files older than 15 days..." | Out-File $logFile -Append
 $ageLimit = (Get-Date).AddDays(-15)
 $logFilePattern = $env:VDB_DATABASE_NAME + "_*_SP.LOG"
 "INFO: logFilePattern = '" + $logFilePattern + "'" | Out-File $logFile -Append
 Get-ChildItem -Path $dirPath -recurse -include $logFilePattern |
     Where-Object { !$_.PSIsContainer -and $_.CreationTime -lt $ageLimit } |
     Remove-Item
#
#----------------------------------------------------------------------
# Run the stored procedure...
#----------------------------------------------------------------------
 "INFO: Running stored procedure '" + $fqSpName + "' within database '" +
     $env:VDB_DATABASE_NAME + "'..." | Out-File $logFile -Append
 try {
     "INFO: open SQL Server connection..." | Out-File $logFile -Append
     $sqlServer = $env:VDB_INSTANCE_HOST + "\" + $env:VDB_INSTANCE_NAME + ", " + $env:VDB_INSTANCE_PORT
     "INFO: sqlServer = '" + $sqlServer + "'" | Out-File $logFile -Append
#     [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null;
     $conn = New-Object System.Data.SqlClient.SqlConnection
     $conn.ConnectionString = "Server=$sqlServer; Database=MSDB; Integrated Security=SSPI;"
     "INFO: conn.ConnectionString = '" + $conn.ConnectionString + "'" | Out-File $logFile -Append
     $conn.Open()
     $cmd1 = New-Object System.Data.SqlClient.SqlCommand($fqSpName, $conn)
     $cmd1.CommandType = [System.Data.CommandType]::StoredProcedure
     $cmd1.Parameters.AddWithValue('@DatabaseName', $env:VDB_DATABASE_NAME) | Out-Null
     "INFO: calling " + $fqSpName + ", @DatabaseName = " + $env:VDB_DATABASE_NAME | Out-File $logFile -Append
     $exec1 = $cmd1.ExecuteReader()
     $exec1.Close()
     $conn.Close()
 } catch {
     $errMsg = $Error[0].Exception.Message
     "ERROR: " + $errMsg | Out-File $logFile -Append
     throw $errMsg
 }
 #
 "INFO: completed stored procedure '" + $fqSpName + "' within database '" +
     $env:VDB_DATABASE_NAME + "' successfully" | Out-File $logFile -Append
#
#----------------------------------------------------------------------
# Exit with success status...
#----------------------------------------------------------------------
exit 0

The formatting of the code is a little constrained by the narrow viewing area, so if you want the actual code correctly formatted, it can be downloaded from HERE.

Of course, this code does more than just calling the stored procedure with the database name as the sole parameter;  it is also qualifying command-line parameters, creating and updating a log file, and handling possible error conditions.

Once this script is saved on the target Windows host server where the VDB resides, then we can call it from the Pre-Refresh and Post-Refresh hooks.

Here we see the Pre-Refresh hook being configured to call the CALLSP.PS1 PowerShell script located in the C:\DELPHIX\SCRIPTS directory on the target Windows server, which in turn calls the CAPTURE_ACCTS_PASSWDS stored procedure…
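A hook body is simply PowerShell executed on the target host, so the Pre-Refresh hook can be as short as a single command.  Something like the following (a sketch only; the script path and procedure name follow this post’s example, so adjust them for your environment):

```powershell
# Hypothetical Pre-Refresh hook body: run the script, passing the
# fully-qualified stored procedure name as its sole parameter.
& "C:\DELPHIX\SCRIPTS\CALLSP.PS1" -fqSpName "MSDB.DBO.CAPTURE_ACCTS_PASSWDS"
```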

Likewise, we see here how the Post-Refresh hook is constructed to call the REAPPLY_ACCTS_PASSWDS stored procedure…

And finally, with both hooks created, we can see them in the Configuration panel together like this…

So now, whenever a refresh operation is performed on the VDB named VV11, these hooks will execute the PowerShell script, which will in turn execute the specified stored procedures, so that when the refresh completes, the account and password settings that were present before the refresh are still in place.

I hope this is useful.


This post can also be found on the Delphix website HERE


If you have any questions about this topic, or any questions about data virtualization or data masking using Delphix, please feel free to contact me at “tim.gorman@delphix.com”.

SQL Saturday 588 – May 20 in NYC

I’m really excited about speaking at SQL Saturday #588 at the Microsoft office in Times Square, where I’ll be discussing Accelerating TDM Using Data Virtualization.  This is my favorite topic, because for the 20+ years that I worked as a database administrator (DBA), I ran into the same challenges time and again.

The challenges of data volume.

More than 20 years ago, a friend named Cary Millsap was asked to define the term very large database, because its acronym (VLDB) had become popular.  Knowing that numbers are relative and that they obsolesce rapidly, he refused to cite a number.  Instead, he replied, “A very large database is one that is too large for you to manage comfortably“, which of course states the real problem, as well as the solution.

The problem of data volume has, if anything, become more severe, and there is no sign that it will abate and allow storage and networking technology to catch up.

So it becomes necessary to cease doing things the way we’ve done them for the past several decades, and find a more clever way.

And that is why I work for the solution.  Delphix data virtualization addresses the root of the problem, providing a far faster, less expensive way to provision and refresh huge volumes of data.  The result is revolutionary for the beleaguered software testing industry.

Every mishap or delay in the software development life-cycle means time pressure on the last step:  testing.  Creating full-size and fully-functional copies of the environment to be tested is also time-consuming, putting more time pressure on testing to prevent the slippage of delivery dates.  The end result is untested or poorly tested systems in production, despite heroic efforts by under-appreciated testers.

Constraining the creation of test data is data volume, which is growing beyond the capability of most enterprises.  “Storage is cheap” is not merely a cheap lie used to dismiss the problem without solving it; it is also irrelevant and beside the point.

The real issue is time, because it takes a lot of time to push terabytes around from place to place.  It is just physics.  It is time which is most expensive and, by the way, storage really isn’t cheap, especially not in the quality that is demanded, and certainly not in the quantities in which it is being demanded.

Provisioning a full environment for each tester, one for each task, of each project, seems unrealistic when each environment might require terabytes of storage.  As a result, testers are limited to working in shared DEV or TEST environments which are refreshed only every few months, if ever.  Code quality suffers because testing is incomplete, and the pace of delivery fails to live up to business needs.

Server virtualization unleashed a surge of productivity in IT infrastructure over the past ten years.  But each virtual server still requires actual storage, making storage and data the constraint which holds everything back.

Data virtualization is the solution.

Come learn how to accelerate testing and how to accelerate the pace of innovation.

Come learn how to inspire more A-ha! moments by liberating QA and testing, and eliminating the constraint of data under which the software development life-cycle — from waterfall to agile — has labored over the past decades.

Liberate testing now!

SQL Saturday – 25-March 2017 – Colorado Springs CO

I’m really excited about speaking at SQL Saturday on my favorite topic!

SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence and Analytics.  I’m all for that!

At this event, I’ll be discussing Accelerating DevOps and TDM Using Data Virtualization.  This is my favorite topic, because for the 20 years that I worked as a DBA, I ran into the same roadblock time and again.  The roadblock of data volume.

“DevOps” is a conflation of parts of the words “development” and “operations”, and it represents the grass-roots movement to merge application development, application testing, and IT operations into one continuous stream.  All tasks from coding to testing to operations must be automated so that new features and fixes can be delivered on a continual flow.

“TDM” is Test Data Management, the optimization of software quality testing to ensure that applications are built and maintained to operate according to their specifications.

Constraining both these goals is data volume, which is growing beyond the capability of most enterprises.  “Storage is cheap” is not merely a cheap lie used to dismiss the problem without solving it; it is also irrelevant and beside the point.

The real issue is time, because it takes a lot of time to push terabytes around from place to place; it is just physics.  It is time which is most expensive and, by the way, storage really isn’t cheap, especially not in the quality that is demanded, and certainly not in the quantities in which it is being demanded.

Provisioning a full environment for each developer or tester, one for each task, of each project, seems unrealistic when each environment might require terabytes of storage.  As a result, developers and testers are limited to working in shared DEV or TEST environments which are refreshed only every few months, if ever.  Code quality suffers, testing suffers, and the pace of delivery fails to live up to business needs.

Server virtualization unleashed a surge of productivity in IT infrastructure over the past ten years.  But each virtual server still requires actual storage, making storage and data the constraint which holds everything back.

Data virtualization is the solution.

Come learn how to accelerate development, how to accelerate testing, and how to accelerate the pace of innovation.

Come learn how to inspire more “a-ha!” moments by eliminating the constraint of data under which the software development lifecycle has labored over the past decades.

Presenting “How Data Recovery Mechanisms Complicate Security, and What To Do About It” at #c16lv

Personally-identifiable information (PII) and classified information stored within an Oracle database is quite secure, as the capabilities of Oracle database ensure that it is accessible only by authenticated and authorized users.

But the definition of authentication and authorization can change, in a sense.

In a production application, authentication means verifying that one is a valid application user and authorization means giving that application user the proper privileges to access and manipulate data.

But now let’s clone the data from that production application to non-production systems (development, QA, testing, training, etc.), so that we can develop new application functionality, fix existing functionality, test it, and deliver user training.

By doing so, in effect the community of valid users has changed, and now instead of being production application users, the community of valid users is developers and testers.  Or it is newly-hired employees being trained using “live” information from production.

And the developers and testers and trainees are indeed authenticated properly, and are thus authorized to access and manipulate this same data in their non-production systems.  The same information can be used to defraud, steal identities, and cause all manner of mayhem, if one so chose.  This honor system is the way things have been secured in the information technology (IT) industry for decades.

Now of course, I can hear security officers from every point of the compass screaming and wildly gesticulating, “NO! That’s NOT true! The definition of authentication and authorization does NOT change, just because systems and data are copied from production to non-production!”  OK, then you explain what has been happening for the past 30 years?

In the case of classified data, these developers and testers go through a security clearance process to ensure that they can indeed be trusted with this information and that they will never misuse it.  Violating one’s security clearance in any respect is grounds for criminal prosecution, and for most people that is a scary enough prospect to wipe out any possible temptation.

Outside of government, organizations have simply relied on professionalism and trust to prevent this information from being used and abused.  And for the most part, for all these many decades, that has been effective.  I know that I have never even been tempted to share PII or classified data with which I’ve come in contact.  I’m not bragging; it is just part of the job.

Now, that doesn’t mean that I haven’t been tempted to look up my own information.  This is essentially the same as googling oneself, except here it is using non-public information.

I recall a discussion with the CFO of an energy company, who was upset that the Sarbanes-Oxley Act of 2002 held her and her CEO criminally liable for any information breaches within her company.  She snarled, “I’ll be damned if I’m going to jail because some programmer gets greedy.”  I felt this was an accurate analysis of the situation, though I had scant sympathy (“That’s why you’re paid the big bucks“).  Since that time we have all seen efforts to rely less on honor and trust, and more on securing data in non-production.  Because I think everyone realizes that the bucks just aren’t big enough for that kind of liability.

This has given rise to products and features which use encryption for data at rest and data in flight.  But as pointed out earlier, encryption is no defense against a change in the definition of authentication and authorization.  I mean, if you authenticate correctly and are authorized, then encryption is reversible obfuscation, right?

To deal with this reality, it is necessary to irreversibly obfuscate PII and classified data, which is also known as data masking.  There are many vendors of data masking software, leading to a robust and growing industry.  If you irreversibly obfuscate PII and classified information within production data as it is cloned to non-production, then you have completely eliminated the risk.

After all, from a practical standpoint, it is extremely difficult as well as conceptually questionable to completely artificially generate life-like data from scratch.  It’s a whole lot easier to start with real, live production data, then just mask the PII and classified data out of it.

Problem solved!

Or is it?

Unfortunately, there is one more problem, and it has nothing to do with the concept of masking and irreversible obfuscation.  Instead, it has to do with the capabilities of the databases in which data is stored.

Most (if not all) of this PII and classified data is stored within databases, which have mechanisms built in for the purpose of data recovery in the event of a mishap or mistake.  In Oracle Database, such mechanisms include LogMiner and Flashback Database.

Since data masking mechanisms use SQL within Oracle databases, then data recovery mechanisms might be used to recover the PII and classified data that existed before the masking jobs were executed.  It is not a flaw in the masking mechanism, but rather it is the perversion of a database feature for an unintended purpose.

Ouch.

This topic became a 10-minute “lightning talk” presentation on “How Oracle Data Recovery Mechanisms Complicate Data Security, and What To Do About It” on Wednesday 13-April 2016 at the Collaborate 2016 conference in Las Vegas.  Please read the slidedeck for my notes on how to deal with the situation presented here.

You can download the slidedeck here.

Accelerating Oracle E-Business Suite upgrades

Taking backups during development, testing, or upgrades

The process of upgrading an Oracle E-Business Suite (EBS) system requires dozens or hundreds of steps. Think of it like rock climbing. Each handhold or foothold is analogous to one of the steps in the upgrade.

What happens if you miss a handhold or foothold?

If you are close to the ground, then the fall isn’t dangerous. You dust yourself off, take a deep breath, and then you just start climbing again.

But what happens if you’re 25 or 35 feet off the ground? Or higher?

A fall from that height can result in serious injury, perhaps death, unless you are strapped to a climbing rope to catch your fall.  And hopefully that climbing rope is attached to a piton or chock not too far from your current position.  The higher you’ve climbed above your most recent anchor, the further you fall, and the more painful the landing.  If you set the last chock 10 feet below, that means you’re going to fall 20 feet, which is going to HURT.  It makes more sense to set chocks every 5 feet, so your longest drop will only be 10 feet at worst.

And that is the role that backups play in any lengthy process with many discrete steps, such as an EBS upgrade. Performing a backup of the systems you’re upgrading every half-dozen or dozen steps allows you to restore to the most recent backup and resume work while having only lost a relatively small amount of work.

But think of the cost of taking a full backup of the systems you’re upgrading.

Let’s assume that the EBS database is 1 TB in size, the dbTier is 4 GB, and the appsTier is another 4 GB. All told, each backup consumes over a TB of backup media and takes 2 hours to perform, start to finish. So an upgrade process of 100 steps, with a backup every dozen steps, means performing 8-9 backups at 2 hours apiece, totaling 16-18 hours on backups alone! Not only do the backups represent a significant portion of the total number of steps to perform, but they also consume a significant portion of the total elapsed time of the entire upgrade process.
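As a quick sanity check on that arithmetic (all figures assumed from the scenario above):

```python
# Back-of-the-envelope estimate of backup overhead during the upgrade.
total_steps = 100      # steps in the upgrade process
steps_per_backup = 12  # take a backup every dozen steps
hours_per_backup = 2   # each full backup takes 2 hours

num_backups = total_steps // steps_per_backup  # 8 full intervals (8-9 in practice)
backup_hours = num_backups * hours_per_backup  # 16 hours (16-18 in practice)
```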

So, in the event of a mistake or mishap, it will probably take about 2 hours to restore to the previous backup, and then the intervening steps have to be replayed, which might take another couple of painstaking hours. Assuming there’s nothing wrong with the backup. Assuming that tapes haven’t been sent offsite to Iron Mountain. Assuming… yeah, assuming…

So just as with climbing, taking periodic backups of your work protects you against a fall.  But if each backup takes 2 hours, are you going to take one every 15 minutes?  Every hour?  Every 2 hours?  Every 4 hours?  Skipping those time-consuming backups allows more productive work, but think of how far you’re going to fall if you slip.

Because of this high cost of failure, you can be sure that each step will be double-checked and triple-checked, perhaps by a second or third person, before it is committed or the ENTER key is pressed.

No wonder upgrading EBS is so exhausting and expensive!

The fastest operation is the one you never perform

What if you never had to perform all those backups, but you could still recover your entire EBS systems stack to any point-in-time as if you had?

This is what it is like to use data virtualization technology from Delphix. You start by linking a set of “source” EBS components into Delphix, and these sources are stored in compressed format within the Delphix appliance. Following that, these sources can produce virtual copies taking up very little storage in a matter of minutes. A Delphix administrator can provision virtual copies of the entire EBS systems stack into a single Delphix JetStream container for use by an individual or a team. The components of the container, which include the EBS database, the EBS dbTier, and the EBS appsTier, become the environment upon which the upgrade team is working.

Delphix captures all of the changes made to the EBS database, the EBS dbTier, and the EBS appsTier, which is called a timeflow. Additionally, Delphix JetStream containers allow users to create bookmarks, or named points-in-time, within the timeflow to speed up the location of a significant point-in-time, and easily restore to it.

So each developer or team can have their own private copy of the entire EBS systems stack. As the developer is stepping through each step of the upgrade, they can create a bookmark, which takes only seconds, after every couple steps. If they make a mistake or a mishap occurs, then they can rewind the entire JetStream container, consisting of the entire EBS systems stack, back to the desired bookmark in minutes, and then resume from there.

Let’s step through this, using the following example EBS system stack already linked and ready to use within a Delphix appliance…

Understanding concepts: timeflows, snapshots, and bookmarks

A dSource is the data loaded into Delphix from a source database or filesystem. Initially a dSource is loaded (or linked) into Delphix using a full backup, but after linking, all changes to the source database or filesystem are captured and stored in the Delphix dSource.

A timeflow is an incarnation of a dSource. The full lifecycle of a dSource may include several incarnations, roughly corresponding to a RESETLOGS event in an Oracle database. Delphix retains not just the current point-in-time of a dSource (as replication tools like GoldenGate or Data Guard do), but every action ever performed across all incarnations, from the time the dSource was initially linked into Delphix up to the current point-in-time.

Even though Delphix compresses, de-duplicates, and performs other clever things to make the data loaded much smaller (usually about 30% of the original size), you still cannot retain everything forever. It’s just impossible. To deal with this, data older than the retention policy configured by the Delphix administrator is purged from timeflows.

Another useful concept to know is a snapshot, which is a small data structure that represents a usable full virtual image of a timeflow at a specific point-in-time.  Snapshots in Delphix can portray a full Oracle database as a virtual image at a point-in-time, and snapshots are subject to purge under the retention policy configured for the timeflow.

A bookmark is a named snapshot that is not subject to the configured retention policy on the timeflow, but instead can be configured with its own private retention policy, which defaults to “forever”.  Bookmarks are intended to represent points-in-time that have meaning to the end user.
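The difference between the two retention behaviors can be sketched like this (illustrative Python, not Delphix’s actual API; the retention period and dates are assumed examples):

```python
from datetime import datetime, timedelta

# Illustrative model: snapshots age out under the timeflow's retention
# policy, while bookmarks are exempt from it (kept "forever" by default).
def apply_retention(snapshots, bookmarks, retention_days, now):
    cutoff = now - timedelta(days=retention_days)
    kept_snapshots = [s for s in snapshots if s["time"] >= cutoff]
    kept_bookmarks = list(bookmarks)  # not subject to the timeflow policy
    return kept_snapshots, kept_bookmarks

now = datetime(2016, 1, 31)
snaps = [{"time": datetime(2016, 1, 1)}, {"time": datetime(2016, 1, 30)}]
marks = [{"name": "Upgrade Step 0", "time": datetime(2016, 1, 1)}]
kept_snaps, kept_marks = apply_retention(snaps, marks, retention_days=15, now=now)
```

With a 15-day policy, the month-old snapshot is purged, but the equally old bookmark survives.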

An example EBS system stack in Delphix

So here we see a full E-Business Suite R12.1 system stack linked into Delphix. In the screenshot below, you can see three dSource objects in the left-hand navigation bar, under the Sources database group or folder. These three dSource objects are named…

  1. EBS R12.1 appsTier
  2. EBS R12.1 dbTechStack
  3. EBSDB

The first two items are vFiles, containing the file system sub-directories representing the E-Business Suite R12.1 applications tier and database tier, respectively. The last item is a virtual database, or VDB, containing the Oracle database for the E-Business Suite R12.1 system. Each is indicated as a dSource by the circular black dSource icon with the letters “dS” within (indicated below by the red arrows)…

pointing out dSource icons
dSource icons indicated by red arrows in the main Delphix administration console

Registering EBS dSources as JetStream templates

The three dSources have been encapsulated within a template in the Delphix JetStream user-interface, as shown by the JetStream icons, each a tiny hand holding out a tiny cylinder representing a datasource (indicated below by the red arrows), just to the right of the dSource icons…

pointing out JetStream template icons
JetStream template icons indicated by red arrows in the main Delphix administration console

Within the JetStream user-interface itself, we see that all three dSources are encapsulated within one JetStream template, with the three dSources comprising the three datasources for the template…

indicating JetStream template datasources
red arrows pointing to the names of the E-Business Suite system components comprising datasources

Encapsulating EBS VDBs as JetStream containers

Once dSources are encapsulated as a JetStream template, then we can encapsulate VDBs and vFiles created from those dSources as JetStream containers. Here, we see the database group or folder called Targets circled in the left-hand navigation bar; the blue icons for VDBs and vFiles with the letter “V” within are indicated by the red arrow, while the darker icons for JetStream containers are indicated by the green arrow…

pointing to icons for VDBs and JetStream containers
Red arrow pointing to the icon for a VDB, and the green arrow pointing to the icon for a JetStream container

Working with JetStream containers for EBS

With an entire EBS system stack under the control of Delphix JetStream, we can begin the lengthy process of upgrading EBS. Before we begin, we should create a JetStream bookmark called Upgrade Step 0 to indicate the starting point of the upgrade…

Starting with a timeflow within a JetStream container

Then, after performing the first five steps of the upgrade in the EBS system stack, come back to the JetStream user-interface and create a JetStream bookmark named Upgrade Step 5, as shown below…

Choosing a point-in-time on a JetStream timeflow within a JetStream container, and creating a bookmark

After every couple of steps of the upgrade, or prior to any lengthy step, come back to the Delphix JetStream user-interface and create a bookmark named appropriately for the step in the upgrade process, as shown below…

Naming a JetStream bookmark

Performing a rewind in a JetStream container for EBS

So we’re cruising along, performing productive steps in the EBS upgrade process, and not worrying about making backups.

WHOOPS! At upgrade step 59, we made a serious mistake!

If we had been making backups every 12 steps or so, then the most recent backup may have been Upgrade Step 48, which was 11 steps ago. Besides the multiple hours it may take to restore the backup in the database, the dbTier, and the appsTier, we are going to have to perform 11 steps over again.

In contrast, using Delphix JetStream, we have better choices. From the Delphix JetStream user-interface, we can restore to any point-in-time on the timeflow, whether it is one of the bookmarks we previously created at a known step in the upgrade process, or any point-in-time location on the timeflow line, which may or may not fall in the middle of one of the upgrade steps, and thus might not be a viable point from which to restart.

So, because we want to restart the upgrade from a known and viable step in the process, we’re going to restore the entire EBS systems stack consistently back to the JetStream bookmark Upgrade Step 55…

Selecting a JetStream bookmark in preparation for performing a restore operation

So, first let’s click on the JetStream bookmark named Upgrade Step 55 and then click on the icon for the Restore operation to start the rewind…

Starting a restore operation to a JetStream bookmark

After being prompted Are You Sure and confirming Yes I Am, the Restore process starts to create a brand new timeflow for all three datasources in the container…

Seeing the restore operation in progress, and the creation of a new timeflow within the container

Please notice that the old timeflow is still visible and accessible to the left in the JetStream user-interface, even with the new timeflow being restored to the right.

Finally, the restore operation completes, and now all three parts of the EBS systems stack (i.e. appsTier, dbTier, and database) have been restored to Upgrade Step 55 in about 15 minutes elapsed time, a fraction of the 2 hours it would have taken to restore from backups.

Completion of the restore operation, showing the old timeflow still available, and the new timeflow following the restore operation

Now we can resume the EBS upgrade process with barely any time lost!

Summary of benefits

So a few things to bear in mind from this example when using Delphix JetStream…

  1. Once a JetStream container has been provisioned by the Delphix administrators (usually a role occupied by database administrators or DBAs), all the operations shown above are performed by JetStream data users, who are usually the developers performing the upgrade themselves
    • No need to coordinate backups or restores between teams == less time wasted
  2. Using Delphix removed the need to perform backups, because all changes across the timeflow of all three components of the EBS systems stack (i.e. appsTier, dbTier, and database) are recorded automatically within Delphix
    • No backups == less storage consumed == less time wasted
  3. Using Delphix removed the need to restore from backups, because the timeflow can be restored to any previous point-in-time in a fraction of the time it takes to perform a restore from backup
    • No restores == less time wasted

It is difficult to imagine not performing EBS upgrades using the methods we’ve used for decades, but that time is here. Data virtualization, like server virtualization, is fast becoming the new norm, and it is changing everything for the better.

Free webinar

Please view the joint webinar on these topics, “Avoiding The Pitfalls Of An Oracle E-Business Suites Upgrade Project”, featuring Mike Swing of TruTek, Kyle Hailey of Delphix, and myself, about how data virtualization can reduce the risk and accelerate the completion of EBS upgrades, recorded on 07-Apr 2016.

You can also download the slidedeck from the webinar here. 


Presenting “Accelerating DevOps Using Data Virtualization” at #C16LV

Borrowing from Wikipedia, the term DevOps is defined as…

DevOps (a clipped compound of "development" and "operations") is a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software, can happen rapidly, frequently, and more reliably.

Now, I hate buzzwords as much as the next BTOM (a.k.a. bitter twisted old man)…

…but the idea behind DevOps, of building, testing, and releasing software more rapidly and reliably is simply amazing and utterly necessary.

As system complexity has increased, as application functionality has ballooned, and as the cost of production downtime has skyrocketed, writing and testing code leaves one a long way from the promised land of published and deployed production code.

As explained by DevOps visionaries like Gene Kim, the biggest barrier to that promised land is data, in the form of databases and entire application stacks cloned from production systems for development and testing.

The amount of time wasted waiting for data on which to develop or test dwarfs the amount of time spent developing or testing.  Consequently, IT has learned to be satisfied with only occasional refreshes of dev/test systems from production, resulting in humorously inadequate dev/test systems, and that has been the norm.

There is a new norm in town.

Data virtualization, like server virtualization, breaks through the constraint.  Over the past 10 years, IT has learned to wallow in the freedom of server virtualization, using tools like VMware and OpenStack to provision virtual machines for any purpose.

Unfortunately, data and storage did not benefit from virtualization as well.  This has resulted in a white-hot nova in the storage industry, and while that is good news for the storage industry, it still means that IT has cloned from production to non-production the same way it has for the past 40 years, in other words slowly, expensively, and painfully.

And we have continued to do it the old way, slowly, expensively, and painfully, because we didn’t know any better.

The IT industry couldn’t see any better way to clone from production to non-production.  Slow and painful was the norm.

But once one realizes the nature of making copies, and how modern file-system technology can share at the block-level, compress, and de-duplicate, suddenly making copies of databases and file-system directories becomes easy and inexpensive.
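To make that block-level sharing and de-duplication concrete, here is a toy Python sketch (my own illustration, with made-up block contents) of how a file system can store each unique block exactly once and let many logical copies reference it:

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once; return (store, refs) where refs
    records, per logical block position, which stored block it uses."""
    store = {}   # content hash -> block bytes, stored exactly once
    refs = []    # logical layout: one content hash per block position
    for b in blocks:
        h = hashlib.sha256(b).hexdigest()
        store.setdefault(h, b)   # only the first copy of a block is kept
        refs.append(h)
    return store, refs

# Two "database copies" whose blocks are 75% identical (made-up contents).
copy_a = [b"hdr", b"data1", b"data2", b"data3"]
copy_b = [b"hdr", b"data1", b"data2", b"patch"]
store, refs = dedupe(copy_a + copy_b)
# 8 logical blocks are backed by only 5 physically stored blocks,
# yet either copy can be reconstructed in full from store + refs.
```

Eight logical blocks collapse to five physical blocks here; at the scale of a multi-terabyte database, where the overwhelming majority of blocks are identical across copies, the savings dominate.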

Here is a thought-provoking question:  why doesn’t every individual developer and tester have their own private full systems stack?  Why can’t they have several of them, one or more for each task on which they’re working?

I can literally hear all of the other BTOM’s scoffing at that question:  “Nobody has that much infrastructure, you idiot!”

And that is the point.  You certainly do.

You just don’t have the right infrastructure.

This was presented at the Collaborate 2016 conference in Las Vegas on Monday 11-April 2016.

You can download the slidedeck here.

Presenting “UNIX/Linux Tools For The Oracle DBA” at #C16LV

Ever since the invention of the graphical user interface (GUI) in the 1970s, there has been tension between the advocates of the magical pointy-clickety GUI and the clickety-clackety command-line interface (CLI).

Part of it is stylistic… GUI’s are easier, faster, more productive.

Part of it is ego… CLI’s require more expertise and are endlessly customizable.

Given the evolutionary pressures on technology, the CLI should have gone extinct decades ago, as more and more expertise is packed into better and better GUI’s.  And in fact, that has largely happened, but the persistence of the CLI can be explained by four persistent justifications…

  1. not yet supported in the GUI
  2. need to reproduce in the native OS
  3. automation and scripting
  4. I still drive stick-shift in my car, too

This is the prime motivation for the presentation entitled “UNIX/Linux Tools For The Oracle DBA” which I gave at the Collaborate 2016 conference in Las Vegas (#c16lv) on Monday 11-April 2016.

When your monitoring tool gets to the end of the information it can provide, it can be life-saving to be able to go to the UNIX/Linux command-line and get more information.  Knowing how to interpret the output from CLI tools like “top”, “vmstat”, and “netstat” can be the difference between waiting helplessly for deus ex machina or a flat-out miracle, and obtaining more information to connect the dots to a resolution.
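As a small illustration (with sample output hard-coded below, since live numbers vary from system to system), a few characters of shell can pull the two columns that usually matter most out of vmstat output:

```shell
# Sample "vmstat 1" output, hard-coded so the example is self-contained;
# on a live system you would pipe vmstat itself through the same awk.
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 812340 102400 904512    0    0     5    12  210  340 12  3 84  1  0'

# Column 1 (r) is the runnable queue; column 16 (wa) is % CPU stalled on I/O.
# A sustained r above the CPU count, or a high wa, points at the bottleneck.
echo "$sample" | awk 'NR > 2 { print "runnable=" $1, "io_wait=" $16 }'
```

Knowing which column is which, and what “normal” looks like on your servers, is the real skill; the awk is just the pliers.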

Oftentimes, you have a ticket open with vendor support, and they can’t use screenshots from the GUI tool or report you’re using.  Instead, they need information straight from the horse’s mouth, recognizable, trustworthy, and uninterpreted.  In that case, knowing how to obtain OS information from tools like “ps”, “pmap”, and “ifconfig” can keep a support case moving smoothly toward resolution.

Likewise, the continued survival of CLI’s has a lot to do with custom scripting and automation.  Sure, there are all kinds of formal APIs, but a significant portion of the IT world runs on impromptu and quickly-developed scripts based on shell scripts and other CLI’s.
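For example, the kind of impromptu, quickly-developed script that keeps IT running might look like this minimal POSIX-shell sketch (the database names are hypothetical), which reports which databases missed last night’s refresh:

```shell
# Hypothetical database names, for illustration only: compare the full
# inventory against the list that completed last night's refresh.
all_dbs='ORCL1
ORCL2
ORCL3'
refreshed='ORCL1
ORCL3'

missed=''
for db in $all_dbs; do
  case "$refreshed" in
    *"$db"*) ;;                  # found in the refreshed list: fine
    *) missed="$missed $db" ;;   # not found: flag it
  esac
done
echo "not refreshed:$missed"
```

The case-statement match is crude (ORCL1 would also match a hypothetical ORCL10), and that is rather the point: impromptu scripts trade rigor for speed, and the CLI is what makes that trade available at all.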

Last, my car still has a manual transmission, and both my kids drive stick as well.  There is no rational reason for this.  We don’t believe we go faster or do anything better than with an automatic transmission.  After almost 40 years behind the wheel, it can be argued that I’m too old to change, but that’s certainly not true of my kids.  We just like being a part of the car.

Hopefully all of those reasons help explain why it is useful for Oracle DBAs to learn more about the UNIX/Linux command-line.  You’re not always going to need it, but when you do, you’ll be glad you did.

You can download the slidedeck here.

Avoiding Regret

After working for a variety of companies in the 1980s, after working for Oracle in the 1990s, after trying (and failing) to build a company with friends at the turn of the century, and after more than a decade working as an independent consultant in this new century, I found myself in a professional dilemma last year.

I know I need to work at least another ten years, probably more like fifteen years, to be able to retire.  I had survived the nastiest economic downturn since the Great Depression, the Great Recession of 2008-2011, while self-employed, and felt ready to take on the economic upswing, so I was confident that I could work steadily as an independent Oracle performance tuning consultant for the next 15 years or more.

Problem was: I was getting bored.

I love my work.  I enjoy the “sleuthiness” and the forensic challenge of finding performance problems, testing and recommending solutions, and finding a way to describe it clearly so that my customer can make the right decision.  I am confident that I can identify and address any database performance problem facing an application built on Oracle database, and I have dozens of successful consulting engagements to bear witness.  I have a legion of happy customers and a seemingly endless supply of referrals.

Being confident is a great feeling, and I had spent the past several years just enjoying that feeling of confidence on each engagement, relishing the challenge, the chase, and the conclusion.

But it was becoming routine.  The best explanation I have is that I felt like a hammer, and I addressed every problem as if it was some form of nail.  I could feel my mental acuity ossifying.

Then, opportunity knocked, in an unexpected form from an unexpected direction.

I have a friend and colleague whom I’ve known for almost 20 years, named Kyle Hailey.  Kyle is one of those notably brilliant people, the kind of person to whom you pay attention immediately, whether you meet online or in person.  We had both worked at Oracle in the 1990s, and I had stayed in touch with him over the years since.

About four years ago, I became aware that he was involved in a new venture, a startup company called Delphix.  I wasn’t sure what it was about, but I paid attention because Kyle was involved.  Then, about 3 years ago, I was working as a DBA for a Colorado-based company that had decided to evaluate Delphix’s product.

Representing my customer, my job was to prevent a disaster from taking place.  I was to determine if the product had any merit, if the problems being experienced were insurmountable, and if so let my customer know so they could kill the project.

Actually, what my boss said was, “Get rid of them.  They’re a pain in my ass“.  I was just trying to be nice in the previous paragraph.

So OK.  I was supposed to give Delphix the bum’s rush, in a valid techie sort of way.  So I called the Delphix person handling the problem, and on the other end of the phone was Kyle.  Hmmm, I can give a stranger the bum’s rush, but not a friend.  Particularly, not someone whom I respect like a god.  So, instead I started working with him to resolve the issue.  And resolve it we did.  As a result, the company bought Delphix, and the rest is history.

Here’s how we did it…

But first, what does Delphix do?  The product itself is a storage appliance on a virtual machine in the data center.  It’s all software.  It uses sophisticated compression, deduplication, and copy-on-write technology to clone production databases for non-production usage such as development and testing.  It does this by importing a copy of the production database into the appliance, compressing that base copy down to 25-30% of its original size.  Then, it provides “virtual databases” from that base copy, each virtual database consuming almost no space at first, since almost everything is read from the base copy.  As changes are made to each virtual database, copy-on-write technology stores only those changes for only that virtual database.  So, each virtual database is presented as a full image of the source database, but costs practically nothing.  Even though the Oracle database instances reside on separate servers, the virtual database actually resides on the Delphix engine appliance and is presented via NFS.
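A toy Python model (my own sketch, not Delphix code) makes that copy-on-write behavior concrete: reads fall through to the shared base copy, and writes land in a private per-VDB delta map.

```python
class VirtualDB:
    """Toy model of a virtual database: reads fall through to a shared
    base copy, writes land in a delta map private to this VDB."""
    def __init__(self, base):
        self.base = base      # shared, read-only list of blocks
        self.delta = {}       # block index -> changed block, this VDB only

    def read(self, i):
        return self.delta.get(i, self.base[i])

    def write(self, i, block):
        self.delta[i] = block  # copy-on-write: the base is never touched

base = ["blk0", "blk1", "blk2", "blk3"]   # one imported, compressed base copy
vdb1, vdb2 = VirtualDB(base), VirtualDB(base)
vdb1.write(2, "blk2-changed")
# vdb1 sees its own change; vdb2 and the base copy are untouched, and
# vdb1's physical footprint is just the one changed block.
```

Each virtual database presents a full image of the source, yet physically holds only its own changes; that asymmetry is the entire trick.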

I was asked to understand why the virtual databases were slow.

On the other end of the phone was Kyle, and with repeatable tests he was easily able to show me where the performance problems were and what their nature was, and that they were predictable and resolvable.

But I’m not really writing just about Delphix, even though it is very cool and quite earthshaking.  Rather, I’m writing about something bigger that has stormed into our industry, bringing to fruition something that I had tried — and failed — to accomplish at the turn of the century.  Back then, when some colleagues and I tried to build a hosted-application services company, we failed in two ways:  1) we were ahead of our time and 2) we chose the wrong customers.

Being ahead of one’s time is not a failure, strictly speaking.  It shows a clarity of vision, but bad timing.

However, choosing the wrong customers is absolutely a failure.

Before I explain, please be aware that I’m going to change most of the names, to protect any innocent bystanders…

After leaving Oracle in July 1998, I founded my own little company-of-one, called Evergreen Database Technologies (a.k.a. EvDBT).  The dot-com boom was booming, and I wanted to operate as an independent consultant.  I had been living in Evergreen in the foothills west of Denver for years, so the choice of name for my company was by no means a brilliant leap of imagination; I just named it after my hometown.  Business was brisk, even a bit overheated as Y2K approached, and I was busy.  And very happy.

In early 2000, I had been working with another young company called Upstart, and we felt that the information technology (IT) industry was heading in one inescapable direction:  hosted services.  So I joined Upstart and we decided that providing hosted and managed Oracle E-Business Suites (EBS) was a good business.  EBS is the world’s 2nd most prevalent Enterprise Resource Planning (ERP) system and can be dizzyingly complex to deploy.  It is deployed in hundreds of different industries and is infinitely customizable, so in order to avoid being eaten alive by customization requests by our potential customers, we at Upstart would have to focus on a specific industry and pre-customize EBS according to the best practices for that industry.  We chose the telecommunications industry, because it was an even bigger industry in Colorado then, as it is now.  We wanted to focus on small telecommunications companies, being small ourselves.  At that time, small telecommunications companies were plentiful because of governmental deregulation in the industry.  These companies offered DSL internet and phone services, and in June 2000 our market research told us it was a US$9 billion industry segment and growing.

Unfortunately, all of the big primary telecommunications carriers bought the same market research and were just as quick to catch on, and they undercut the DSL providers, provided cheaper and better service, and put the myriad of small startups out of business practically overnight.  “Overnight” is not a big exaggeration.

By October 2000, we at Upstart were stunned to discover that our customer base had literally ceased answering their phones, which is chillingly ominous when you consider they were telecommunication companies.   Of course, the carnage directly impacted the companies who had hoped to make money from those companies, such as Upstart.  Only 4 months in business ourselves, we watched our target market vanish like smoke.  You might say we were…. a little disconcerted.

We at Upstart had a difference of opinion on how to proceed, with some of the principals arguing to continue sucking down venture-capital funding and stay the course, while others (including me) argued that we had to find some way, any way, to resume generating our own revenue on which to survive.  I advocated returning to Upstart’s previous business model of consulting services, but the principals who wanted to stay the course with the managed services model couldn’t be budged.  By December 2000, Upstart was bankrupt, and the managed-services principals ransacked the bank accounts and took home golden parachutes for themselves.  I jumped back into my own personal lifeboat, my little consulting services company-of-one, Evergreen Database Technologies.  And continued doing what I knew best.

This was a scenario that I’m told was repeated in many companies that flamed out during the “dot-bomb” era.  There are a million stories in the big city, and mine is one of them.

But let’s just suppose that Upstart had chosen a different customer base, one that didn’t disappear completely within months?  Would it still have survived?

There is a good chance that we would still have failed due to being ahead of our time and also ahead of the means to succeed.  Hosted and managed applications, which today are called “cloud services”, were made more difficult back then by the fact that software was (and is) designed with the intention of occupying entire servers.  The E-Business Suites documentation from Oracle assumed so, and support at Oracle assumed the same.  This meant that we, Upstart, had to provision several entire server machines for each customer, which is precisely what they were doing for themselves.  There was little way we could do it cheaper.  As a result, we could not operate much more cheaply than our customers had, leaving us with very thin cost savings and ridiculously small profit margins.

Successful new businesses are not founded on incremental improvement.  They must be founded on massive change.

What we needed at that time was server virtualization, which came along a few years later in the form of companies like VMware.  Not until server virtualization permitted us to run enterprise software on virtual machines, which could be stacked en masse on physical server machines, could we have hoped to operate in a manner efficient enough to save costs and make money.

Fast forward to today.

Today, server virtualization is everywhere.  Server virtualization is deeply embedded in every data center.  You can create virtual machines on your laptop, a stack of blade servers, or a mainframe, emulating almost any operating system that has ever existed, and creating them in such a way that finally makes full use of all the resources of real, physical servers.  No longer would system administrators rack, wire, and network physical servers for individual applications using customized specifications for each server.  Instead, virtual machines could be configured according to the customized specifications, and those virtual machines run by the dozens on physical machines.

The advent of virtual machines also brought about the operations paradise of abstracting computer servers completely into software, so that they could be built, configured, operated, and destroyed entirely like the software constructs they were.  No more racking and wiring, one server per application.  Now, banks of “blades” were racked and wired generically, and virtual machines balanced within and across blades, with busier virtual machines moving toward available CPU, memory, and networking resources and quieter virtual machines yielding CPU, memory, and networking to others.  Best of all, all of this virtualization converted hardware into software, and could be programmed and controlled like software.

Everything is virtualized, and all was good.

Except storage.

Think about it.  It is easy and cheap to provision another virtual machine, using fractions of CPU cores and RAM.  But each of those virtual machines needs a full image of operating system, application software, and database.  While server virtualization permitted data centers to use physical servers more efficiently, it caused a positive supernova explosion in storage.  So much so that analysts like Gartner have predicted a “data crisis” before the end of the decade.
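A bit of back-of-the-envelope Python arithmetic shows the shape of the problem; the 30% compressed-base figure matches the one cited earlier for a shared base copy, while the 1%-changed figure per copy is purely an assumption for illustration:

```python
# Back-of-the-envelope arithmetic for the storage explosion: 20 dev/test
# copies of a 2 TB database, fully copied versus sharing one compressed
# base copy plus small per-copy deltas.  All figures are illustrative.
db_tb = 2.0
copies = 20

physical_tb = db_tb * copies                  # every environment gets a full image
base_tb = db_tb * 0.30                        # one shared, compressed base copy
virtual_tb = base_tb + copies * 0.01 * db_tb  # plus a small delta per copy

print(f"full copies: {physical_tb:.1f} TB, virtualized: {virtual_tb:.1f} TB")
```

Forty terabytes versus one: even if the assumed delta is off by an order of magnitude, the gap is the point.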

This is where Delphix comes in.

By virtualizing data as well as servers, it is now truly fast, cheap, and easy to provision entire virtual environments.  Delphix works with Oracle, and it also works with SQL Server, PostgreSQL, MySQL, DB2, and Sybase.  Even more importantly, it also virtualizes file-systems, so that application software as well as databases can be virtualized.

So back in early 2014, Kyle contacted me and asked if I would be interested in joining Delphix.  My first reaction was the one I always had, which is “no thanks, I’ve already got a job“.  I mean, I’m a successful and flourishing independent consultant.  Why would I consider working within any company anymore?  Business was brisk, I never had downtime, and the economy was improving.  I had successfully been operating as an independent consultant for most of the past 15 years.  Why fix what wasn’t broken?

But here was the crux of the matter…

I wanted to try something new, to expand beyond what I had been doing for the past 25 years since I first joined Oracle.  If it was a large established company beckoning, I wouldn’t have considered it for a second.  But a promising startup company, with a great idea and a four-year track record of success already, and still pre-IPO as well?

I couldn’t resist the gamble.  What’s not to like?

It’s possible I’ve made an enormous mistake, but I don’t think so.

Not to turn excessively morbid, but all of us are just a heartbeat away from our common destination.  I believe that, when I’m at the last moments of my life, the thing I fear will not be death itself, or pain, or leaving life behind.

It is regret that I fear.

And regret can take many forms, but the most painful regret will undoubtedly be what might have been, the opportunities passed or missed.  Career is only one aspect of life, and I don’t want to give it too much significance.  I’ve accumulated regrets in how I’ve lived my life, and how I’ve treated people in my life, and in some cases they are small but there are some which will always haunt me.

But with regards to my professional career, as Robert Frost said, I’ve taken the road less traveled, and that has made all the difference.  No regrets here.

One year at Delphix

It’s been over a year since I leapt into the void.

OK, more than a little melodramatic.  In many respects, I was leaping from the void by joining a promising and exciting startup company like Delphix.

Business was still brisk as an independent consultant at EvDBT, but for the past several years, I was experiencing what I called “just-in-time engagements”.  That is, new consulting engagements were just mysteriously showing up at the right time just before the current one was ending.  Frankly, it was getting a bit spooky, and I had been on pins and needles for a couple years watching it happen, wondering like a farmer if the day would come when the rain did not appear on time.  That day had shown up previously, during the recession of 2001 – 2002, when I experienced about 2-3 weeks of no work early in 2002, but that was the only dry spell I encountered in almost 16 years at EvDBT.  However, I wasn’t eager to see another one…

So a little over twelve months ago, on 01-May 2014, I left the world of Oracle technology consulting that I had first entered on January 15, 1990.  Well, I haven’t really left Oracle technology, but I’m no longer making a living at it.  Oracle is a big part of Delphix, but only a part.

What have been the highlights during my first year at Delphix?

  • learning data virtualization
  • learning how to tune virtual databases for Oracle and SQL Server
  • learning VMware and earning my VCA certification
  • carving a personal niche within a small company
  • becoming proficient at Windows Powershell    <- no kidding!  true!
  • continuing to present at Oracle conferences, often traveling and presenting with my brilliant wife, Kellyn Gorman

Yes, Kellyn and I got married during the past year!  It took us each a few tries at marriage, but we each hung in there, and got it right this time.  They say that remarriage is the triumph of optimism over experience, and I’m delighted to say that optimism trumps experience.  We did the deed in grand style, spending the week at Oracle Open World in San Francisco, coming home on Friday and getting married that Sunday in a tiny ceremony with immediate family only.  We finished up the RMOUG TD2015 call-for-papers on Wednesday and Kellyn published the agenda on Thursday, and then on Saturday we flew to Europe to do a joint-keynote and present at the Slovenian Oracle Users Group in Ljubljana and the Croatian Oracle Users Group in Rovinj.  After spending time with Joze and his lovely wife Lili, we blasted out of Croatia and scooted over to beautiful Venice for a quiet, blissful week-long honeymoon, meeting with Lothar and his lovely wife Birgit during the week as well.

Then, it was back to reality through the end of the year, getting swept up in the preparations for the RMOUG conference.

Delphix had a spectacular Q4 ending on 31-January 2015, where that financial quarter alone equaled the earnings from the entire previous fiscal year.  Lots of celebrations, victory laps, and high fives at the company all-hands meeting in early February, but what none of us in the Professional Services division saw was the looming tsunami of freshly-sold new deployments cresting just over our heads.  That wave crested and crashed down on us, and I found myself buried in work.  Just now, four months later in the new fiscal year, I’m finally able to look up and look around to find that winter and spring have passed and summer has arrived.  My second year at Delphix has begun, and I’m curious as to what it will bring.

I’m continuing to heed the advice of Tim Minchin, who counsels against pursuing one’s Big Dream and instead suggests passionate dedication to short-term goals, in short that one be “micro-ambitious”.  That is to say, that thing that is right in front of you right now?  Do it the very best you can, and put your back into it.  Whether it is a blog post, cutting the lawn, an email, or a new wax ring for the toilet in the basement.  Especially the new wax ring – don’t bugger that up!  And then do the next thing the same way.  And the next.  And the next.  Before you know it, you’ve climbed a mountain and have made a pretty good career besides.

Not unexpectedly at a small fast-growing company, hints have been made of a transition from the technical track to the managerial track.  This would finally reverse the track switch I made over 20 years ago while at Oracle, when I stepped off the managerial track in favor of the technical track.  I’ve never looked back on that decision, but should I be “micro-ambitious” here as well, take the new task right in front of me, and work to excel?  Or stick to my guns, stay with the technical track?

I’ve learned that it is a mistake to just “go with the flow” and bob along with the prevailing current.  If one is to succeed at something new, it must be a whole-hearted plunge accompanied by a full-throated war cry.

So, if you hear a strange noise rather like a cross between a “Tarzan yell” and someone choking on an avocado pit, don’t be alarmed.  Just listen a little longer to find whether you hear the “splat” of my corpse hitting pavement, or the “whoosh” as I learn to fly again.

And rest assured, that wax ring in the downstairs toilet?  Nailed it.

Will the real data virtualization please stand up?

There is a post from a good friend at Oracle entitled “Will the REAL SnapClone functionality please stand up?” and, as well-written and technically rich as the post is, I am particularly moved to comment on the very last and conclusive sentence in the post…

So with all of that, why would you look at a point solution that only covers one part of managing your Oracle infrastructure?

The post does not refer to Delphix by name, and it could in fact be referring to any number of companies, but Delphix is the market leader in this space, so it is reasonable to assume that the “Product X” mentioned throughout the post is Delphix.  The same holds true for any post commenting on relational database technology, which can reasonably be assumed to refer to Oracle.  Regardless, I was struck by the use of the phrase point solution in that final sentence of the post, and how it really is a matter of perspective, and how interesting is that perspective.

First of all, before we go any further, please let me say that, as an Oracle DBA for the past 20 years, I think that the current release of Oracle’s Enterprise Manager, EM12c, is the finest and most complete release of the product since I tested early versions of Oracle EM alongside the Oracle8i database in the late 1990s.  At that time, the product was full of promise, but it wasn’t something upon which an enterprise could truly rely.  That has certainly changed, and it has been a long time coming, starting with the advent of utilities like the Automatic Workload Repository (AWR) and Active Session History (ASH).  If you have extensive Oracle technology in your organization, you should be using EM12c to manage it.  Not EM11g, or EM10g, but EM12c.  It really is that good, and it is getting better, and there are talented people behind it, and you simply need it if you want to maximize your investment in Oracle technology.

But even if EM12c is the center of the universe of Oracle technology, what about organizations for whom Oracle technology is merely one component?  Many organizations have diverse IT infrastructures comprising Microsoft, IBM, SAP, and open-source technologies, and all of those technology components share the need for the same basic use-cases: quickly and economically cloning production to create non-production environments supporting development, testing, reporting, archival, and training activities.

Should those diverse IT organizations employ a silo tool like EM12c just for cloning Oracle databases, and then find the same functionality separately for each of those other separate technologies?  Would doing so be a tactical or a strategic decision?

So in response to the final question in the SnapClone post, I ask another question in turn…

Why would one look at a point solution that covers only Oracle database?

Access to data is the biggest constraint limiting development and testing, so it makes sense to enable data virtualization for all applications, whether they are built on Oracle technology or not.  IT agility is a strategic capability important to the entire business, not a technical challenge for a component silo.

But perhaps, in the interest of continuing the Oracle-only focus of the SnapClone post, we could stay inside the bounds of Oracle.  Fair enough, as a theoretical exercise…

So, even if we limit the discussion only to Oracle technology, it quickly becomes obvious that another important question looms…

Why would one look at a point solution that covers only the Oracle database, leaving the application software, database software, configuration files, and all the other necessary parts of an application as a further problem to be solved?

Anybody who has managed IT environments knows that the database is just one part of a complete application stack.  This is true for applications by Oracle (e.g. E-Business Suite, PeopleSoft, JD Edwards, Demantra, Retek, etc.), as well as prominent applications like SAP, and every other application vendor on the planet, and beyond.

To do this, one needs a solution that virtualizes the file-system directories containing the software, configuration files, and everything else that comprises the application, not just an Oracle database.

To provision those complete environments for developers and testers quickly and inexpensively, one needs both server virtualization and data virtualization.

Unless you’ve spent the past 10 years in deep space chasing a comet, you’ve already got server virtualization on board.  Check.

Now, for data virtualization, you need to virtualize Oracle databases, check.  And you also need to virtualize SQL Server databases, check.  And PostgreSQL and Sybase databases, check and check.  In the near future, Delphix will likely be virtualizing IBM DB2 and MySQL databases, not to mention MongoDB and Hadoop, ‘cuz that’s what we do.  Check, check, … check-a-mundo dudes and dudettes.

And finally, even if you’re a single-vendor organization, you also need to virtualize file-system directories and files, on UNIX/Linux platforms as well as on Windows servers.

Delphix does all of the above, which is one reason why it is the market leader in this space.

And it has been in general use for years, and so a substantial portion of the Fortune 500 already relies on data virtualization from Delphix today, across their entire technology portfolio, as the partial list online here shows.

Perhaps it is only a point solution from one perspective, but be sure that your perspective is aligned with that of your whole IT organization, and that you’re not just thinking of a strategic business capability as merely “functionality” within a silo.