Monday, September 28, 2009

Dispersion and Processing: Attenuation and Anisotropy

The distinction between intrinsic and apparent frequency-dependent seismic properties is nowhere greater than in the areas of attenuation and anisotropy. First a brief overview of the attenuation problem.

If we take a rock sample from a well core and test it in the lab, we are likely to find that there is some small amount of intrinsic attenuation, meaning the irrecoverable loss or conversion of wave energy into heat. This will yield the frequency-dependent wave energy loss as observed over the length of a 1 inch rock sample. In the band of surface seismic data, 5-100 Hz, this attenuation is relatively constant and exceedingly small. Extrapolating this minute intrinsic attenuation to a geologic section several kilometers thick will still predict only a modest loss of wave energy, so small is the intrinsic attenuation effect. But what is actually observed in field data? From VSP and other downhole measurements we can monitor the evolving wave field and estimate attenuation through the rock column. It is seen to be significant, much stronger than the lab results would suggest. The reason is layering. In layered media, the waves continuously undergo reflection, transmission, and mode conversion. All of these mechanisms conserve energy (no conversion to heat) and thus do not qualify as attenuation in the intrinsic sense. But the observed downgoing wave field will rapidly lose amplitude due to these effects, and the amount of loss will be a strong function of frequency. High frequencies in the wavelet amplitude spectrum will erode much faster than low frequencies. On the other hand, interbed multiples cascade to reinforce the downgoing wave field, as shown by O'Doherty and Anstey (1971). So the picture that emerges about apparent attenuation is not one of a rock property that can be measured in the lab, but a frequency-dependent attenuation field.
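To see this layering loss in action, here is a small numerical sketch (Python; every layer value is invented for illustration, not taken from any real well). It computes the exact normal-incidence transmission response of a random acoustic layer stack via a transfer-matrix product. Internal multiples are included and no energy is converted to heat, yet high frequencies come through weaker than low frequencies:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical stack of 500 thin layers (values purely illustrative)
n, h = 500, 1.0                                  # layer count, thickness (m)
c = 2500.0 + 300.0 * rng.standard_normal(n)      # layer velocities, m/s
rho = 2300.0 + 100.0 * rng.standard_normal(n)    # densities, kg/m^3
Z = rho * c                                      # acoustic impedances
Z0, Zt = Z[0], Z[-1]                             # bounding half-spaces

def transmissivity(f):
    """|T(f)| through the stack at normal incidence, all internal
    multiples included, via the acoustic transfer-matrix product."""
    M = np.eye(2, dtype=complex)
    for ci, Zi in zip(c, Z):
        ph = 2.0 * np.pi * f * h / ci            # phase across one layer
        layer = np.array([[np.cos(ph), 1j * Zi * np.sin(ph)],
                          [1j * np.sin(ph) / Zi, np.cos(ph)]])
        M = M @ layer
    denom = M[0, 0] + M[0, 1] / Zt + Z0 * (M[1, 0] + M[1, 1] / Zt)
    return abs(2.0 / denom)

for f in (5.0, 25.0, 100.0):
    print(f, transmissivity(f))
```

As a sanity check, the f -> 0 limit reduces to the simple two half-space pressure transmission coefficient 2 Zt / (Z0 + Zt), so the matrix bookkeeping can be verified against a formula we know.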

The term attenuation ‘field’ deserves some explanation. One of the great unifying concepts of physics is the field. There are many kinds of fields, but our view of apparent attenuation is that of a scalar field that associates with each point in space a number representing the attenuation at that location. Imagine someone does a 20 Hz calculation for total attenuation in a layered medium and comes up with a value at some location. If we were to go to that location in the earth and take a physical sample of the material then test it in the lab, we will only find the intrinsic attenuation and are left wondering about this total attenuation value. Now our someone redoes the calculation for 10 Hz and assigns a different total attenuation value to the same location. Again we go there, extract a sample and test in the lab. The results are of course the same, since the intrinsic attenuation (a rock property) has not changed. Yet waves of 10 and 20 Hz moving through our earth model will actually experience different levels of attenuation in line with the total attenuation calculation.

To summarize, the attenuation a wave will see at any given location is composed of two parts, the intrinsic attenuation of the material at that spot and a frequency-dependent apparent attenuation field due to layering effects. Furthermore, this attenuation field is not just dispersive (a function of frequency), it is also anisotropic (a function of propagation direction). Attenuation anisotropy is a current area of research (Zhu et al., 2007).

This naturally brings up the topic of seismic velocity anisotropy and how it depends on frequency. We will restrict our comments here to VTI-type anisotropy, which is velocity variation with respect to the vertical axis in a horizontally layered earth. Unlike the attenuation case, lab-scale rock samples can show significant velocity anisotropy. Because it occurs at this fine scale, we term it intrinsic anisotropy. It is not a function of frequency and represents a rock property like density or bulk modulus. Of the sedimentary rock types, only shale is seen to be significantly anisotropic at this small scale. The origin of this behavior is the alignment of clay minerals at the microscopic level. Several recent publications have shown that shale velocity anisotropy is ubiquitous. For sandstone and carbonate rocks, VTI behavior develops in proportion to shale content. So intrinsic anisotropy is predictable in the rock column at any particular location from knowledge of the shale volume, and this in turn can be determined from standard well log analysis. This is the shale part of the VTI problem; the other part is layering.

Long before shale anisotropy was well-understood, there was a theoretical interest in waves traveling through layered media. There are many effects that arise from layering; we have already mentioned waveguides as a good example. But waveguides are a thick layer problem. Now we are discussing the effects of thin layers, meaning layer thickness is much smaller than the seismic wavelength. In fact, with respect to velocity you can think of a continuum of behavior as we go from high to low frequency. At very high frequencies, the wavelength is much smaller than the layer thickness and the waves see a homogeneous medium. At longer wavelengths the medium seems to be heterogeneous, or v(z). Finally, at very long wavelengths compared to the layer thicknesses, the material behaves like an anisotropic medium. The question is how to calculate the apparent anisotropic parameters of the layered medium as seen by very long waves. Backus (1962) solved this problem for the case where the layers are either isotropic or VTI, although he was hardly the first to work on it. He capped off 25 years of investigation on this problem by many researchers. Today, we have full wave sonic and density logs that give Vp, Vs, and density every half-foot down a borehole. Shale intervals can be detected by gamma logs, but there is no substitute for lab measurements on core to find intrinsic shale VTI parameters. This gives us all the raw material that Backus said we need, a thin layered elastic medium composed of some combination of isotropic (3 parameters) and VTI layers (5 parameters).

Armed with a layered model, Backus says we need to do a kind of averaging to find the equivalent medium. His theory showed that if we do the averaging with a suitable averaging length in depth, then as far as wave propagation is concerned the two models are the same. Let's be careful and precise about this. The original model consists of fine elastic layers with properties that vary arbitrarily from one layer to the next. Waves sent through such a medium can be measured at the top of the stack (reflected field) or at the bottom (transmitted field). Let's call these observations the original wave field. Now we do Backus averaging to come up with a new model that is smoother and more anisotropic than the original. We send the same waves through the new model and measure the field at top and bottom. What Backus said is this: if the averaging distance is small enough, the original and new wave fields will be the same, even though the original and new earth models look quite different. That is, these two earth models are identical with respect to wave propagation. Here we should also make clear the distinction between intrinsic anisotropy due to shale layers and layer-induced anisotropy that can occur even when every individual layer is isotropic. The total anisotropy is a combination of the two.
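Backus's recipe is simple enough to sketch in a few lines of Python. The layer values below are invented for illustration (a binary stack of stiff and soft isotropic layers of equal thickness); the averaging yields the five stiffnesses of the equivalent VTI medium, reported here as Thomsen parameters:

```python
import numpy as np

# Hypothetical isotropic thin layers: Vp, Vs (m/s), rho (kg/m^3)
vp  = np.array([3000.0, 2200.0, 3000.0, 2200.0, 3000.0, 2200.0])
vs  = np.array([1600.0, 1000.0, 1600.0, 1000.0, 1600.0, 1000.0])
rho = np.array([2400.0, 2250.0, 2400.0, 2250.0, 2400.0, 2250.0])

mu  = rho * vs**2                 # shear modulus of each layer
lam = rho * vp**2 - 2.0 * mu      # Lame parameter lambda
A   = lam + 2.0 * mu              # P-wave modulus of each layer

# Backus (1962) averages for equal-thickness isotropic layers
# (np.mean plays the role of the <.> average over the stack)
c33 = 1.0 / np.mean(1.0 / A)
c44 = 1.0 / np.mean(1.0 / mu)
c66 = np.mean(mu)
c13 = np.mean(lam / A) * c33
c11 = np.mean(4.0 * mu * (lam + mu) / A) + np.mean(lam / A)**2 * c33

# Thomsen parameters of the equivalent VTI medium
eps   = (c11 - c33) / (2.0 * c33)
gamma = (c66 - c44) / (2.0 * c44)
delta = ((c13 + c44)**2 - (c33 - c44)**2) / (2.0 * c33 * (c33 - c44))

print(f"epsilon={eps:.4f} gamma={gamma:.4f} delta={delta:.4f}")
```

Even though every individual layer is isotropic, epsilon and gamma come out positive: that is layer-induced anisotropy in miniature.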

So where does dispersion come into all this? It is buried in the thorny question of the averaging length. As the averaging length increases, the medium becomes smoother and more anisotropic, and the wave fields are only the same for long wavelengths or, conversely, low frequency. But it is only the layer-induced anisotropy that depends on the averaging length, shale anisotropy does not. This means that layer-induced anisotropy, and therefore total anisotropy, is dispersive.

As with attenuation, we come away with a concept that VTI-type anisotropy is a frequency-dependent field. A 20 Hz wave will see a different version of earth anisotropy than a 30 or 60 Hz wave. In principle, each frequency has a unique anisotropy and attenuation field. A challenge for seismic imaging in the future is to exploit this phenomenon, perhaps leading to frequency-dependent anisotropy and attenuation estimation as descendants of today's migration velocity analysis.


Next: Dispersion and Interpretation: Rough Surface Scattering

Refs:
Backus, G., 1962, Long-wave elastic anisotropy produced by horizontal layering: J. Geophys. Res., 67, 4427-4440.
O'Doherty, R. F., and Anstey, N. A., 1971, Reflections on amplitudes: Geophys. Prosp., 19, 430-458.
Zhu, Y., Tsvankin, I., Dewangan, P., and van Wijk, K., 2007, Physical modeling and analysis of P-wave attenuation anisotropy in transversely isotropic media: Geophysics, 72, D1-D7.

Monday, September 21, 2009

Dispersion and Processing: Near Surface

We are usually taught in college that dispersion is not an issue in seismic data processing. Sure, we are told, when we try to match rock physics, sonic log, and surface seismic estimates of velocity we find discrepancies, but that is because we are passing through orders-of-magnitude differences in frequency. In the 10-100 Hz band of typical surface seismic data, velocity is independent of frequency.

But that is a sloppy compression of reality. It is pretty nearly true for seismic body waves (P, S, and mode converted) moving around the far subsurface. In the near surface, however, velocities often show strong dispersion and the description is terribly inaccurate. Strangely, this is especially the case in marine shooting over shallow water. I say strangely because the speed of sound in water is independent of frequency to an exceptional degree, although it does depend on temperature, pressure, and salinity. It is only at immense frequencies, where wavelengths become vanishingly small, that sound speed begins to have any dependence on frequency. Yet, our standard 10-100 Hz data in shallow water leads to measured velocities well above and below the physical speed of sound waves in water.

This paradox arises because shallow water over an elastic earth forms a waveguide, bounded above by air and below by the seafloor. Like an organ pipe, sound gets trapped in the water layer and interferes to form a series of normal modes. Guitars and other stringed musical instruments are perhaps the most familiar example of such modes. The string is anchored at each end and can support a wave that spans the entire string, or harmonics of that wave that span one-half, one-third, and so on, of the total string length.

The understanding of trapped or guided waves involves generalizing our usual concept of velocity. At a basic level, we think of a wave traveling a certain distance in a certain time and the ratio of these is the velocity of the wave. This definition works to determine the speed of sound in water to high precision as pressure, temperature, and salinity are varied. But now fix all these so that our lab measurement of the speed of sound in water is, say, 1500 m/s. Fill an ocean with such water, make it a few tens of meters deep, set off an impulsive source, and listen with a sensor in the water a kilometer and a half away. We expect to observe the water wave arrival at one second (1500 m divided by 1500 m/s). This is the case for the highest frequencies in our data that have wavelengths (velocity divided by frequency) much smaller than the depth of the water column. In effect, they are not influenced by the seafloor. But lower frequencies have longer wavelengths, they feel the seafloor, tilting and jostling to fit in a water layer that looks increasingly thin as the frequency gets lower. In this regime, the concept of wave speed splits into two kinds of velocity, group and phase, neither of which is equal to the actual sound speed in water and both of which show dramatic, complicated variation with frequency. In other words, they are dramatically dispersive.

It is interesting that early work in quantum mechanics was also closely linked to phase and group velocity. In 1905 Einstein established that light comes in particles, or quanta, carrying energy and momentum like ordinary matter. Twenty years later de Broglie flipped this around and asserted that matter had to have wave characteristics. These matter waves were investigated by means of a thought experiment involving a plane wave of frequency 'w' (omega) and wavenumber 'k'. For such a wave the speed is the phase velocity given by v=w/k. When this wave is summed with a second plane wave having slightly different frequency 'w+dw' and wavenumber 'k+dk', the result is a low-frequency wave packet of frequency dw traveling at the group velocity 'dw/dk', and inside the wave packet the original wave is traveling with speed v=w/k. The picture is one of a wave packet moving at group velocity and a monochromatic wave moving inside it at a different speed. It is the group velocity that made physical sense in the case of matter waves, being simply the mechanical speed of the particle. The phase velocity is not so easy to understand, since it turned out to always be greater than the speed of light -- in apparent contradiction to the special theory of relativity.
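The two-wave thought experiment is easy to reproduce numerically. In this Python sketch (the values of w, k, dw, dk are arbitrary illustrations), the sum of the two plane waves factors exactly into a slow envelope moving at dw/dk times a carrier moving at essentially w/k:

```python
import numpy as np

w, k = 10.0, 2.0       # frequency (rad/s) and wavenumber (rad/m), illustrative
dw, dk = 0.4, 0.05     # small offsets defining the second wave

x = np.linspace(0.0, 400.0, 8000)
t = 1.0                # a snapshot in time

# Sum of the two plane waves
s = np.cos(w * t - k * x) + np.cos((w + dw) * t - (k + dk) * x)

# Trig identity: the sum is a slow envelope times a fast carrier
envelope = 2.0 * np.cos(0.5 * (dw * t - dk * x))
carrier = np.cos((w + dw / 2.0) * t - (k + dk / 2.0) * x)

v_phase = w / k        # speed of the wiggles inside the packet
v_group = dw / dk      # speed of the envelope
print(v_phase, v_group)
```

With these particular numbers the group velocity happens to exceed the phase velocity; for matter waves and for the water-layer modes of the previous post it goes the other way.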

Returning to the case of acoustic waves in a shallow water waveguide, we find mathematically similar phenomena. At distances from the source that are large compared to the water depth, the trapped waves form a spatial wave packet. The low-frequency envelope of the packet travels at the group velocity and represents the rate of energy transport by the wave field. The group velocity at high frequency is asymptotic to sound speed in water, then drops with decreasing frequency, until at low enough frequency it is no longer primarily controlled by the water layer but by the elastic substrate. As it approaches a cut-off frequency, the group velocity is about equal to the Rayleigh wave speed of the seafloor.

As the wave packet traverses this complicated velocity dispersion life cycle, the wave structures interior to the packet are traveling at the phase velocity. Phase velocity is also a strong function of frequency, but behaves differently. At high frequency, the phase velocity is, like group velocity, equal to sound speed in water. As frequency decreases, however, the phase velocity rises and is always greater than sound speed. We can think of this in terms of a plane wave front. At high frequency, this is vertical and represents the direct wave from the source. But as the frequency drops, the wave front tilts and receivers along the sea surface now measure the apparent velocity 'vw/cos(a)', where 'vw' is sound speed in water and 'a' is the propagation angle away from the horizontal. With lower and lower frequency, the wavefront has to tilt ever more to fit the increasingly long wavelength into the water column. As the cut-off frequency is approached, phase and group velocity meet once again at the Rayleigh wave speed of the substrate.
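To get a feel for these opposite behaviors, consider the simplest waveguide with closed-form answers: a water layer with a free surface above and a perfectly rigid bottom below. This is a deliberate simplification -- a rigid bottom, unlike the elastic seafloor discussed above, has no Rayleigh-wave limit, so in this toy model the group velocity falls to zero at cut-off instead. The Python sketch below uses illustrative values:

```python
import numpy as np

c = 1500.0   # sound speed in water, m/s
h = 30.0     # water depth, m (illustrative)

def cutoff(n):
    """Cut-off frequency of mode n for a free-surface/rigid-bottom guide."""
    return (2 * n - 1) * c / (4.0 * h)

def v_phase(f, n=1):
    return c / np.sqrt(1.0 - (cutoff(n) / f) ** 2)

def v_group(f, n=1):
    return c * np.sqrt(1.0 - (cutoff(n) / f) ** 2)

f = np.linspace(cutoff(1) * 1.01, 200.0, 500)   # frequencies above cut-off
vp, vg = v_phase(f), v_group(f)
print(cutoff(1), vp[0], vg[0], vp[-1], vg[-1])
```

Note that phase velocity is everywhere above sound speed and group velocity everywhere below, both approach c at high frequency, and vp * vg = c^2 -- a tidy relation special to this idealized guide.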

Since about 1981 there have been processing tools to image phase velocity curves of the kind that develop in shallow water exploration. Park et al. (1998) found a scanning method that works well with real 2D or 3D data, which is often irregularly sampled in space. Imaging of dispersive group velocity curves will be discussed at the 2009 SEG meeting in Houston in a paper by Liner, Bell, and Verm. The concept is this: if we look at a single trace far from the source in shallow water shooting, a time-frequency decomposition of this trace will reveal that low frequencies are traveling slower than high frequencies, precisely the behavior we expect for group velocity curves. Quantitative investigation of the observed curves supports this interpretation.

From a data processing viewpoint, dispersive guided waves represent strong noise in the data. They are therefore a prime target for some kind of filtering technology. But phase and group velocity curves also possess valuable information about elastic properties of the sea floor, particularly shear wave speeds that are difficult to estimate otherwise. In principle, every shot record could be used to estimate a laterally varying shear wave model for use in converted wave exploration.

Next... Dispersion and Processing: Attenuation and Anisotropy

Tuesday, September 15, 2009

Dispersion and acquisition

If we were to survey the universe of seismic sources in use today for production seismic data in the petroleum industry, we would find only a few serious contenders. Over the last 80 years or so, many seismic sources have been developed, tested, and tossed into the Darwinian struggle for market survival as a reliable commercial source. At present, three sources account for the vast majority of data acquisition.

In marine seismic applications the airgun is ubiquitous. There are several dispersive effects related to airguns and airgun arrays, including ghosting and radiation patterns. Recall that we are using dispersion in a generalized sense meaning frequency-dependent phenomena, not just seismic velocity variation with frequency. The ghost is an interesting example of dispersion where the physical source interacts with the ocean surface to form a plus-minus dipole that is a strong function of frequency. For a given source depth, the radiated field can have one or several interference notches along certain angular directions away from the source. These show up in the measured seismic data as spectral nulls called ghost notches. To further complicate the picture, ghosting occurs on both the source and receiver side of acquisition. The radiation pattern associated with an airgun array is an exercise in the theory of antenna design and analysis, again complicated by dipole characteristics due to ghosting.
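The ghost effect is compact enough to write down. For a source (or receiver) at depth d below a free surface with reflection coefficient of about -1, the surface reflection lags the direct arrival by tau = 2 d cos(theta)/c, and the far-field amplitude response is 2|sin(pi f tau)|, with notches wherever f tau is an integer. A quick Python sketch with illustrative numbers:

```python
import numpy as np

c = 1500.0     # sound speed in water, m/s
d = 7.5        # source (or receiver) depth, m -- illustrative

def ghost_amplitude(f, theta_deg=0.0):
    """Amplitude of the source-plus-ghost dipole at frequency f (Hz)
    for propagation angle theta away from vertical."""
    tau = 2.0 * d * np.cos(np.radians(theta_deg)) / c
    return 2.0 * np.abs(np.sin(np.pi * f * tau))

notch1 = c / (2.0 * d)     # first non-zero vertical-incidence notch, Hz
print(notch1, ghost_amplitude(notch1))
```

Note the response also vanishes at f = 0: the dipole cancels at low frequency, which is why towing deeper buys low-end signal at the cost of a lower first notch frequency.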

For land seismic data, there are two major sources in use worldwide: explosives and vibroseis. The explosive source has, in principle, the weakest dependence on frequency. Certainly it has a bandwidth determined by shot characteristics and local geology, but it is approximately an impulsive point source. A buried explosive shot will, like the marine airgun, develop a dipole nature due to ghosting. But this is often not as well-developed as in the marine case, likely due to lateral variations in near surface elastic properties and topography.

The other significant land source, vibroseis, has a host of dispersive effects. For a single vibe we can mention two fascinating phenomena, radiation pattern and harmonics. The theory of radiation for a circular disk on an isotropic elastic earth was developed by several investigators in the 1950s, most notably Miller and Pursey. They were able to show that the power emitted in various wave types (P, S, Rayleigh) ultimately depends only on the Poisson ratio. But even though the total power for a single vibe is not a function of frequency, in real world applications it is common to use a source array which will radiate seismic waves in a way that strongly depends on frequency.

A vibroseis unit injects a source signal (sweep) into the earth over the course of several seconds. The sweep is defined by time-frequency (T-F) characteristics and for simplicity we will consider a linear upsweep here (very common in practice). The emitted signal bounces around in the earth and is recorded by a surface sensor, the resulting time series being an uncorrelated seismic trace. Conceptually, when this uncorrelated time trace is transformed into the T-F plane by a suitable spectral decomposition method, we should see a representation of the sweep with a decaying tail of reflection energy. This is observed, but we also commonly see a series of other linear T-F features at frequencies higher than the sweep at any given time. These are vibroseis harmonics. Since the observed uncorrelated seismic trace is the summation of all frequencies in the T-F plane, these harmonics can interfere and distort the weak reflection events we are trying to measure.
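As a concrete illustration, a linear upsweep is just a sinusoid whose phase is quadratic in time, so its instantaneous frequency ramps linearly from f0 to f1. This Python sketch builds one (the sweep parameters are illustrative, not a field design):

```python
import numpy as np

# Linear upsweep: instantaneous frequency runs from f0 to f1 over T seconds
f0, f1, T, fs = 8.0, 80.0, 12.0, 1000.0   # Hz, Hz, s, samples/s
t = np.arange(0.0, T, 1.0 / fs)
phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * T))
sweep = np.sin(phase)

# Instantaneous frequency = (1/2pi) d(phase)/dt = f0 + (f1 - f0) * t / T
f_inst = f0 + (f1 - f0) * t / T
print(f_inst[0], f_inst[-1])
```

A spectral decomposition of this trace paints a single straight line in the T-F plane; the harmonics mentioned above would appear as additional lines at multiples of the instantaneous frequency.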

The origin of harmonics can be understood in relation to human hearing. As first discussed by Helmholtz in the 1860's, when a sound wave interacts with the human hearing apparatus something very interesting happens. First we need to realize that away from any obstacle, a sound wave proceeds by vibratory motion of the air particles and this motion is symmetric (equal amplitude fore and aft). But when a sound wave encounters the ear it pushes against the eardrum, which is a stretched elastic membrane with fluid behind. The amount of power in the sound wave is fixed, and that power will compress the eardrum (due to its elasticity) less than the sound wave will compress air. If we think of, say, a 200 Hz wave as a cosine, this interaction means the deflection will be asymmetric. It will be a waveform that repeats 200 times per second, but it will not be a symmetric cosine wave. How can something repeat 200 times per second and not be a pure 200 Hz wave? Helmholtz found the answer: it must be a 200 Hz wave plus a series of harmonics (400 Hz, 600 Hz, and so on). The fact that the material properties of the ear impede the motion due to sound necessarily means that harmonics will be generated.
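Helmholtz's argument can be checked numerically. In the Python toy below, a pure 200 Hz cosine is passed through an invented asymmetric distortion (an arbitrary polynomial standing in for the eardrum or a vibrator baseplate, not a model of either); the output still repeats 200 times per second but now carries energy at 400 Hz, 600 Hz, and beyond:

```python
import numpy as np

fs, f0 = 8000.0, 200.0                 # sample rate and tone frequency, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * f0 * t)       # symmetric input vibration

# Invented asymmetric distortion: responds differently fore and aft
y = x - 0.25 * x**2 + 0.1 * x**3

spec = np.abs(np.fft.rfft(y)) / len(t)           # half-amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

def level(f):
    """Spectral magnitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(level(200.0), level(400.0), level(600.0))
```

A symmetric distortion (odd in x) would generate only odd harmonics; it is the asymmetry that puts energy at even multiples like 400 Hz.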

Now back to the vibroseis case: when the mechanical apparatus of the vibrator pushes down against the earth it is resisted by the elastic nature of the near surface. On the upstroke the motion is unimpeded, asymmetry develops, and harmonics are generated. All this happens despite some pretty amazing engineering in the system. With modern T-F methods, we can think up various ways to remove the harmonics by processing the uncorrelated data traces. There is also ongoing discussion about how to use the harmonics rather than filter them out.

Next time.... Dispersion and processing

Tuesday, September 8, 2009

Seismic dispersion

All of seismology is based on waves and a primary property of any wave is the velocity (v) at which it travels. This is related to wavelength (L) and frequency (f) through v = f L. This shows that as the frequency changes, so does the wavelength in just such a way that their product is preserved as the constant velocity. But it is important to note that velocity itself is not a function of frequency, a situation termed nondispersive wave propagation. As the frequency is ramped up, the wavelength drops, and the waves always travel at the same speed. This is the case with waves in unbounded ideal gases, fluids, and elastic materials.

Porous media are another matter. Wave speed is a function of material properties (matrix and fluid) and environment variables (pressure, temperature, stress). Luckily for us, in the low frequency range (0-100 Hz) of surface seismic data, the velocity does not depend on frequency to within measurable tolerance. However, as frequency ramps up to sonic logging (10-20 kHz) and bench top rock physics (MHz), the wave speeds do become dispersive (the classic paper is Liu et al., 1976).

This is the classical meaning of the word 'dispersion', velocity is a function of frequency. Here we will take a more general definition that includes any wavefield property, not just speed. Examples will include velocity, of course, but also attenuation, anisotropy, and reflection characteristics. We could also lump all of these things under the name 'frequency dependence', but 'dispersion' is already out there with respect to velocity and it seems better to go with the shorter, more familiar term.

I am a bit embarrassed to admit that I made a strong point to a colleague (Jack Dvorkin, I believe) a few years ago about his use of 'dispersion' for something other than frequency-dependent velocity. I think he was talking about attenuation. Anyway, my tardy apologies because I have arrived at the same terminology.

It is curious that so much of classical seismology and wave theory is nondispersive: basic theory of P and S waves, Rayleigh waves in a half-space, geometric spreading, reflection and transmission coefficients, head waves, etc. Yet when we look at real data, strong dispersion abounds. The development of spectral decomposition has served to highlight this fact.

We will distinguish two kinds of dispersion. If the effect exists in unbounded media then we will consider it to be 'intrinsic' and thus a rock property that can be directly measured in the lab. On the other hand, if the dispersion only presents itself when layering is present then we will term it 'apparent', this case being responsible for the vast majority of dispersive wave behavior in the lower frequency band of 0-100 Hz.

To make some sense of the seismic dispersion universe, we will break down our survey into the traditional areas of acquisition, processing, and interpretation.

It is a fascinating and sometimes challenging topic. We will not seek out mathematical complexities for their own sake. Rather we will gather up interesting and concise results, presented in a common notation, and dwell more on the physical basis, detection and modeling tools, and especially the meaning of dispersive phenomena.

Reference:
Liu, H.-P., Anderson, D. L., and Kanamori, H., 1976, Velocity dispersion due to anelasticity; implications for seismology and mantle composition, Geophys. J. R. astr. Soc., 47, 41-58.

Monday, September 7, 2009

Comeback, DISC, and SMT depth conversion

After a hiatus of about 9 months, I am dusting off the Seismos blog. A few things have helped me come to this decision.

First, I set up a blog for my wife Dolores, former SEG assistant editor for The Leading Edge. If you are interested, here is the link: Proubasta Reader. The experience of setting up and customizing her blog gave me some good ideas about how to maintain my own. Back in January 2009, I was just playing around with the blog and found it onerous to come up with daily or even weekly entries. Now I see the point is not to wait for big things to write about, but to say a few words as things come up. Closer to a postcard than a book chapter.

Another push came from my being nominated for SEG Distinguished Instructor Short Course (DISC) for 2012. It still requires approval of the SEG/EAGE Executive Committees in about 6 weeks at the SEG convention here in Houston, which brings me to another push. For this approval meeting, I need to supply the DISC committee chairman, Tad Smith, with a 2 page summary of what I have in mind for my DISC. What better place for this to develop than on the blog, leading to a Seismos column in TLE (which will surely have to be published after the convention due to editorial backlog).

And finally, assuming I am approved, the DISC instructor must write a book that is used as notes whenever the 1-day short course is given. So as I get the book in shape over the next few months I can track progress and interesting topics on the blog.

Enough for now about the comeback and DISC.

************************ SMT depth conversion note ******************

I'm teaching a graduate 3D seismic interpretation class this semester at U Houston (GEOL6390). The software used for this class is Seismic MicroTechnology's (SMT) Kingdom software (v.8.4). We have a generous and important 30-seat license donation that makes this popular class possible. This semester we have 26 students, limited by good hardware seats and optimum instructor-student interaction.

I am also principal investigator for a DOE-funded CO2 sequestration characterization project in the Dickman field of Ness County, Kansas.

In both cases, the issue of depth conversion comes up. For the class we have 3 assignments, the last of which is prospect generation in the Gulf of Mexico using data donated by Fairfield Industries (thank you very much John Smythe). There is one well in the project that allows for depth control. For the Dickman project we have 135+ wells and a 3D seismic volume, so a more ambitious integrated depth map is in order.

But just today I was testing various ways to track and depth convert the Mississippian horizon at Dickman. First I carefully did 2D picking in each direction, keeping an eye on the event as it passed through each well with a Miss formation top pick. Using a shared time-depth (TD) curve all the wells lined up nicely with the seismic event. Next came 3D infill picking that did a good job.

But how to convert the time structure horizon to depth? I created grids to infill a few small tracking holes due to noise and weak amplitude. It seemed logical to then depth convert the time-structure grid to depth using the shared TD curve. But strangely, this did not let me constrain to an existing polygon. Further, it required some additional gridding parameters, even though you would think it just needs to look up each already gridded time value and find the associated depth from the TD curve.  I was also hoping for the ability to cross-plot the seismic gridded depth values at all wells that have a Miss formation top picked.  From the cross-plot there is enough information to generate a v(x,y) velocity field, extending the v(z) time-depth curve, so that all known tops are matched exactly and some kind of interpolation happens between known points.  No can do, as far as I can see.
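For the record, the per-node lookup I expected is a one-liner in any numerical language. This Python sketch (hypothetical TD curve values, and of course not SMT's internal machinery) converts gridded time values to depth by linear interpolation along a shared TD curve:

```python
import numpy as np

# Hypothetical shared time-depth (TD) curve: two-way time (ms) vs depth (m)
td_time = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
td_depth = np.array([0.0, 600.0, 1400.0, 2350.0, 3400.0])

def time_to_depth(t_ms):
    """Depth by linear interpolation along the TD curve -- the simple
    lookup a gridded time-to-depth conversion seems to require."""
    return np.interp(t_ms, td_time, td_depth)

# A few time-structure grid nodes (ms), converted node by node
time_nodes = np.array([900.0, 1000.0, 1100.0])
print(time_to_depth(time_nodes))
```

Extending this to a v(x,y) field that honors every well top would amount to scaling the curve at each well so the predicted depth matches the formation top, then interpolating those scale factors between wells.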

Finally, it would be nice to be able to plot the TD curve as a simple time-depth crossplot, and do this with several TD curves to investigate lateral variability.

A different, perhaps better, approach is to depth convert the seismic data directly using the TD curve (TracePAK) and then re-track the Miss event through the depth volume. SMT does not allow the tracked time horizon to be extracted through the depth volume, again strange since the TD curve is known.

The jury is still out on this one. Perhaps you have a better idea....