Friday, January 28, 2011

Spectral decomposition at Dickman

In our ongoing effort to relate seismic attributes to geologic features at Dickman, we have generated a series of narrow-band attribute volumes from the migrated data. Unlike traditional time-frequency methods (Chakraborty and Okaya, 1994) that seek a trade-off of time and frequency resolution, we have chosen to use a pure frequency isolation algorithm. In fact, the method we use is traditional Fourier bandpass filtering with a very narrow response centered on the frequency of interest. In Figure 1 we show (a) the original 3D migrated data spectrum as extracted in a 5x5 bin area, (b) the same after narrow band filtering around 6 Hz, and (c) after 43 Hz narrow band filtering.

Figure 1. Spectrum of Dickman seismic data and narrow band decomposition. (a) Fourier amplitude spectrum of 3D migrated data in a 5x5 bin area from 0-2 seconds. (b) Spectrum after narrow bandpass filtering centered on 6 Hz. (c) Spectrum after 43 Hz narrow bandpass filtering.
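Conceptually, the pure frequency isolation amounts to zeroing every Fourier component outside a narrow band around the frequency of interest. Here is a minimal sketch in Python/NumPy; the actual volumes were made with SeismicUnix's sufilter, and the trace length, sample rate, and boxcar mask below are illustrative only:

```python
import numpy as np

def narrow_bandpass(trace, dt, f0, half_width=1.0):
    """Zero all Fourier components outside f0 +/- half_width (Hz).

    Illustrative boxcar version of pure frequency isolation; the real
    work used sufilter, which applies a user-specified tapered passband.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    keep = np.abs(freqs - f0) <= half_width
    return np.fft.irfft(spec * keep, n)

# synthetic trace: 6 Hz plus 43 Hz components, 4 ms sampling, 2 s long
dt = 0.004
t = np.arange(500) * dt
trace = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 43 * t)
band6 = narrow_bandpass(trace, dt, 6.0)   # the 43 Hz energy is removed
```

A boxcar this sharp would ring on real data; sufilter's tapered amplitude response is the practical choice.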

In the vertical view (Figure 2) the narrow band results are not very enlightening. It is tempting to conclude the new data have no time information content, but in fact there is time localization in the amplitude modulation of each trace, although it is of little value in the vertical view.

Figure 2. Representative seismic line before and after filtering. (a) Original broadband data. (b) Narrow band 6 Hz data. (c) Narrow band 43 Hz data.

Figure 3 shows a time slice through the broadband data at 848 ms, roughly coincident with the Miss/Penn unconformity. The prominent incised channel is clearly shown. Note there are no clear trends in the data aligned with the yellow dashed lines, and the channel does not seem to approach the tip of the yellow arrow.

Figure 3. Broadband time slice at 848 ms, approximately coincident with the Miss/Penn unconformity. Features observed in the narrow band data (Figures 4 and 6) but not on the broadband data are indicated in yellow.

A coincident time slice through the 6 Hz data (Figure 4) shows a strong and remarkable diagonal alignment parallel to the yellow dashed lines. One always suspects acquisition footprint when linear features show up in seismic data, so Figure 5 shows the shot and receiver orientations for the Dickman 3D data. While it is possible the NW-SE trend is related to source line orientation, the conjugate direction has no association with the shooting geometry. We suspect these features indicate fracture orientation as described using curvature by Nissen et al. (2004, 2006). We plan to investigate this alignment further in the next quarter.

Figure 4. Narrow band (6 Hz) time slice at 848 ms. Yellow dashed lines indicate the orientation of diagonal features (perhaps related to fractures) not seen in Figure 3.

Figure 5. Shooting geometry orientation for Dickman 3D, shots (L) and receivers (R).

At a higher frequency band (43 Hz, Figure 6) we observe a dark channel-like feature as indicated at the tip of the yellow arrow. We are currently working to confirm or deny the reality of this channel feature. The process will consist of studying well logs inside and outside the feature, concentrating particularly on basal Pennsylvanian sediments and indications of a subtle structural low at the top Mississippian.

Figure 6. Narrow band (43 Hz) time slice at 848 ms. Yellow arrow indicates a channel feature not seen in Figure 3.


Spectral decomposition (SD) figures shown above were generated using SeismicUnix (SU) on segy data output from SMT's Kingdom software. Specifically, the sufilter program was driven by a shell script that implemented the following steps:

for each frequency of interest, stepping by 1 Hz {
    read segy and convert to SU format
    set params to isolate one frequency
    apply sufilter
    extract time slice
    create pdf figure
}
This allowed quick narrow band data scanning for features of interest in a common time slice. Once a particular narrow band was selected for further analysis, sufilter was applied to the original data and the entire SD attribute volume was output in segy format. This was imported to SMT for further analysis.

While the multiple-frequency scanning cannot currently be done in SMT, a narrow band SD of the type described here can be calculated (Figures 8 and 9) using Tools>>Trace_Pak...>>Process_Multiple_Traces....

Figure 8. Parameter window to create narrow band data directly in SMT Kingdom using TracePak, generating a new data type named amp_6Hz_SMT.

Figure 9. Narrow band (6 Hz) time slice generated and displayed in SMT Kingdom. Compare Figure 4, which shows 6 Hz data generated and displayed using SeismicUnix.

Similarity of the SU and SMT results is reassuring, suggesting the linear features are robust characteristics of the data and not merely artifacts of a particular implementation of bandpass filtering.


Nissen, S. E., T. R. Carr, and K. J. Marfurt, 2006, Using new 3-D seismic attributes to identify subtle fracture trends in Mid-Continent Mississippian carbonate reservoirs: Dickman Field, Kansas: Search and Discovery, Article #40189.

Nissen, S. E., K. J. Marfurt, and T. R. Carr, 2004, Identifying subtle fracture trends in the Mississippian saline aquifer unit using new 3-D seismic attributes: Kansas Geological Survey, Open-file Report 2004-56.

Wednesday, January 19, 2011

Oil Production Update: 2009

Since my time at Saudi Aramco (2005-2007) it has been a hobby to keep up with oil production numbers. Specifically, I track Oklahoma, Texas, United States, and world production of oil, condensate, and natural gas liquids.

Before getting to the production plots, I'll show an interesting cross plot of world population (GGDC data) and daily oil production (BP data), including a best fit quadratic curve.  There are bumps and valleys in this data, but the fitted curve captures the concave upward nature of the data.  That means population is growing faster than oil production.  Fascinating.
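The fit itself is a one-liner. A sketch with stand-in numbers (the actual GGDC/BP series are not reproduced here, and the axis orientation is my assumption):

```python
import numpy as np

# Stand-in values for the cross plot: x = world oil production (Mb/d),
# y = world population (billions). Invented for illustration, not the
# actual GGDC/BP numbers.
oil = np.array([20.0, 40.0, 60.0, 70.0, 80.0])
pop = np.array([2.6, 3.6, 5.0, 5.85, 6.8])

# Best-fit quadratic: pop ~ c2*oil**2 + c1*oil + c0. A positive c2 is
# the concave-upward shape: population rising faster than oil production.
c2, c1, c0 = np.polyfit(oil, pop, 2)
```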

You will notice that all plots relate to oil production, not reserves. Reserves are subject to large and poorly quantified uncertainty. Production is real data, tangible and measurable.

Below are my latest oil production plots updated through the end of 2009 (2008 for Oklahoma). USA is not shown; I'm still working on that.

Monday, January 17, 2011

Processing vs. Inversion

Gearing up for a new semester of teaching Geophysical Data Processing has me updating slides and thinking about the big picture. Part of that has been including an opening lecture about world oil production and CO2, since both will impact employment of geophysicists for decades to come. I have written elsewhere on CO2 (plus here) and may add an entry on oil production soon.

Anyway, another key concept is the distinction between data processing (P) and inversion (I). It is important enough that Jon Claerbout has written an excellent book with that phrase in the title... EARTH SOUNDINGS ANALYSIS: Processing versus Inversion (free download here).

The main point for me is a question of input and output. Observed data is input to both P and I, but the outputs are fundamentally different things. Processing spits out processed data that goes on down the line to the next process, ultimately yielding a migrated image of the subsurface. Inversion is designed to output not data, but an estimate of earth parameters. For seismic inversion, this means velocity and density as a function of (x,y,z). I am thinking here of prestack inversion, but a similar argument can be made for poststack inversion. The latter case, however, is a hybrid of P and I, since the starting point for this kind of inversion is a migrated image.

For processing, at least seismic data processing, there is only a mild feedback loop, and it is implemented manually. By this I mean that each process is run and the output inspected by an expert user (hopefully) who judges the output data based on experience. Parameter updates also tend to be done manually, with the user tweaking the process parameters and re-running the job a few times at most. In well-worked geographic areas, many processes are virtually automatic since parameters are well understood from previous jobs. Along the road from raw seismic field data to final seismic image, there are likely to be dozens of individual processes. This decoupling allows data quality and improvement to be judged many times, and each process tends to be relatively quick, inexpensive, and to involve the minimum number of user-chosen parameters. A decoupled processing flow works fine so long as the earth is well behaved. But in situations of extreme topography, velocity contrast, and structure, this simplified way of processing data breaks down and is unable to deliver a geologically reasonable subsurface image. In such cases, one grand processing scheme called prestack depth migration is applied. Cost-wise, it may rival inversion, but the output is still a seismic image, albeit an expensive one.

In the case of inversion, there are three key questions. 1) How much physics do we include in the simulation? This is where physics and finance collide, since more physics means longer compute time and higher cost. 2) How do we compare simulated and observed data: simple difference, least squares, correlation, something more exotic? 3) How do we update the earth parameters based on the misfit? This brings in the whole field of optimization theory: conjugate gradients, genetic algorithms, and so on.
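A toy version of that loop, with a single earth parameter, a deliberately cheap "simulation", a least-squares misfit, and a gradient-descent update; every number here is invented for illustration:

```python
def simulate(v, depth=1000.0):
    """Forward model: two-way vertical traveltime to a reflector (s)."""
    return 2.0 * depth / v

t_obs = 1.0          # observed two-way time; the true velocity is 2000 m/s
v = 1500.0           # initial guess at the single earth parameter
step = 1.0e6         # hand-tuned gradient-descent step size

for _ in range(200):
    t_sim = simulate(v)                        # 1) run the physics
    misfit = (t_sim - t_obs) ** 2              # 2) least-squares comparison
    if misfit < 1e-12:
        break
    # 3) update the parameter: analytic gradient of misfit w.r.t. v
    grad = 2.0 * (t_sim - t_obs) * (-2.0 * 1000.0 / v ** 2)
    v -= step * grad
# v has now converged to (about) 2000 m/s
```

With a real simulator each of the three choices gets harder: the forward model is expensive, the comparison must cope with noise, and the gradient is rarely available in closed form.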

To help get the basic ideas across, I used GraphViz (blog entry, web site) to make a couple of flow charts. In each, the input is shown in a blue box and the output is red. The code for each is also given below.

Processing Flow Diagram

Inversion Flow Diagram

GraphViz code for processing flow chart:

digraph processing {
    "Observed Data" -> Process
    "Proc Params" -> Process
    Process -> "Judge Quality"
    "Judge Quality" -> "OK"
    "Judge Quality" -> "Not OK"
    "Not OK" -> "Update Params"
    "Update Params" -> Process
    "OK" -> "Processed Data"
    "Processed Data" -> "Next Process"
    "Observed Data" [shape=box, color=blue]
    "Processed Data" [shape=box, color=red]
}

Code for inversion flow chart:

digraph inversion {
    "Initial Earth Params" -> Simulation
    Simulation -> "Sim Data"
    "Observed Data" -> Compare
    "Sim Data" -> Compare
    Compare -> "Fit OK"
    Compare -> "Fit Not OK"
    "Fit Not OK" -> "Update Params"
    "Update Params" -> Simulation
    "Fit OK" -> "Earth Model"
    "Observed Data" [shape=box, color=blue]
    "Earth Model" [shape=box, color=red]
}

Tuesday, January 11, 2011


[Note: A version of this blog entry appears in World Oil (October, 2010)]

Anyone keeping an eye on seismic advertisements in trade publications must have noticed the proliferation of azimuth (AZ). We are bombarded with WAZ (wide azimuth), NAZ (narrow azimuth), FAZ (full azimuth), RAZ (rich azimuth), etc. What is going on here? Why the blitz of promotion for AZ?

First, we need to understand that a prestack seismic trace is like a child: it has two parents, a source and a receiver. Each parent has a physical location, and the trace is considered to live at exactly the half-way point between source and receiver. This location is called the midpoint, so each trace is said to have a midpoint coordinate. But like a child, the trace inherits other characteristics that are a blend of the parents. Looking around from the trace location, we see there is a certain distance between parents, a quantity termed the offset. Furthermore, there is an imaginary line from source to midpoint to receiver whose orientation we call azimuth. This is the bearing that you would read off a compass: 0 is North, 90 is East, and so on around to 360, which is, again, North.
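In code, midpoint, offset, and azimuth come straight from the parent coordinates. A sketch (the function name and the map-view convention, x = Easting and y = Northing, are my own):

```python
from math import atan2, degrees, hypot

def trace_geometry(sx, sy, rx, ry):
    """Midpoint, offset, and compass azimuth for one source-receiver pair.

    Azimuth is the bearing of the source-to-receiver direction:
    0 = North, 90 = East. (The line itself is the same at +/- 180.)
    """
    midpoint = ((sx + rx) / 2.0, (sy + ry) / 2.0)
    offset = hypot(rx - sx, ry - sy)
    azimuth = degrees(atan2(rx - sx, ry - sy)) % 360.0
    return midpoint, offset, azimuth

# receiver 1000 m due east of the source: azimuth points East
mp, off, az = trace_geometry(0.0, 0.0, 1000.0, 0.0)
```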

We said earlier that a prestack seismic trace lives at the midpoint between source and receiver. This is true until we migrate the data. Migration is a process that maps observed seismic data into the earth. For example, imagine the source and receiver are very close together and we observe nothing on our trace except a reflection at 1 second.

How are we to make sense of this? Let's say the earth velocity is 2000 m/s; that means the wave went out, reflected, and came back, all in 1 second. Clearly the outward time is half of this, 1/2 second, traveling at 2000 m/s. So the reflecting object must be 1000 m away from the source/receiver/midpoint (S/R/M) location.

We know the object is 1000 m away, but in what direction? Ah, there is the rub. We do not, and cannot, know which way to look for the reflection point when we only have one prestack trace. But, strangely, that is no problem. Back in the 1940s some smart guys figured it out, and some other smart guys coded it up in the 1970s. The trick is this: Since we don't know where the reflection comes from, we put it everywhere it could possibly have come from.

In our example, we know the reflector is 1000 m away from one point (where S/R/M all live). In other words, it lies somewhere on a bowl (or hemisphere) centered on that location. Let's be a little more specific. Underneath the data we have built an empty 3D grid that we call 'image space'. We grab the trace, note its midpoint and reflection time, then take the observed reflection amplitude and spread it out evenly over the bowl. If we have only one trace with one event, that would be the migration result shipped to the client: a bowl-shaped feature embedded in a vast array of zeros. Good luck getting payment on that.

Those who know something about geology will immediately complain that the earth does not contain bowl-shaped objects. True. But remember, this is the result of migrating only one trace and we actually have many millions of traces in a modest 3D survey. All of these are migrated to generate a collection of closely spaced bowls in the subsurface and, when we add them up, something remarkable happens. The bowls tend to cancel in those places where nothing exists in the earth. But where the reflections actually originated the bowls constructively interfere to generate an image of the subsurface. Furthermore, this process can build all the interesting geology we want, including faults, channels, synclines, anticlines, and so on. Quite amazing actually.
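The bowl idea is easy to demonstrate in two dimensions, where the bowls become circles. The sketch below (Python/NumPy; velocity, grid, and geometry all invented for illustration) migrates synthetic zero-offset traces recorded over a single point diffractor and lets the circles interfere:

```python
import numpy as np

v = 2000.0                        # constant velocity (m/s)
dx = 10.0                         # image grid spacing (m)
nx, nz = 200, 120                 # image space: 2000 m wide, 1200 m deep
diff_x, diff_z = 1000.0, 800.0    # true diffractor location (m)

x = np.arange(nx) * dx
z = np.arange(nz) * dx
gx, gz = np.meshgrid(x, z, indexing="ij")

image = np.zeros((nx, nz))        # empty image space
for mx in np.arange(0.0, nx * dx, 50.0):      # midpoints along the surface
    # the "observed" trace: one arrival, at the two-way time to the diffractor
    t_obs = 2.0 * np.hypot(mx - diff_x, diff_z) / v
    r = v * t_obs / 2.0                       # back to distance, as argued above
    # migration: smear unit amplitude over every point at that distance
    image += np.abs(np.hypot(gx - mx, gz) - r) < dx

# the circular bands cancel except where they all pass: within a cell or
# two of the diffractor, image reaches its maximum
ix, iz = np.unravel_index(np.argmax(image), image.shape)
```

With millions of real traces the same cancellation and reinforcement builds the full subsurface image, faults and channels included.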

Where does azimuth come in? So far we have considered the prestack seismic trace to have S/R/M all at the same location. In other words, there is no offset and no azimuth. When the source and receiver separate they do so along an azimuth, which the seismic trace inherits. Migration now involves the same kind of process described earlier, except the primitive migration shape is now a stretched bowl with the long axis along the S/R azimuth (technically, an ellipsoid).

Before all the excitement and activity about WAZ, marine surveys were shot with one, or a few, cables towed behind a ship steaming in, say, E-W lines. This is a narrow azimuth survey. Since all the traces have about the same azimuth, those millions of migration bowls are lined up E-W along the acquisition azimuth. Not surprisingly, when all the bowls are summed in the subsurface image, we get a good view of geology with an E-W dip direction. Geology oriented any other way is blurred, smeared, or just not visible.

Strange things can appear in such a data set, taxing the interpretation ability of even the most experienced geologist. Faults are a good case in point. Consider our narrow azimuth E-W survey shot in an area with faults oriented parallel (E-W) and perpendicular (N-S) to the shooting direction. The explanation is a bit technical, but the bottom line is that N-S faults will be beautifully imaged, while those running E-W will be poorly seen, if at all.

With this kind of narrow azimuth data, the industry spent decades using ever faster computers and better migration algorithms, only to see the data improvements get smaller and smaller. But then overnight, it seemed, the introduction of wide azimuth shooting brought a quantum leap in image quality. Of course, WAZ came along late and unleashed all that pent-up computing and algorithm power making the improvement seem all the more remarkable.

Almost from the first days of petroleum seismology, practitioners knew that to unravel 3D dip required shooting in different directions. This lesson was always heeded in the development of land 3D. But offshore operational difficulties, and related costs, pushed the full azimuth goal aside. Then subsalt prospecting introduced a new class of imaging problem and narrow azimuth data was just not good enough. WAZ is here to stay.

Friday, January 7, 2011

Kurdish oil seep

About a month ago I was in Erbil (capital of Iraqi Kurdistan) teaching classes for Aspect on seismic migration and thrust belt interpretation. I had a chance to go into the field near the Gulf Keystone Shaikan-1 well. This discovery well was drilled in 2009 and had estimated reserves of 4.2 billion barrels on an anticline with beautiful surface exposure. Details can be found in this investor PDF.

The photo below was taken near the currently drilling Atrush-1 exploration well and shows an outcrop with natural oil seeps.