Thursday, December 17, 2009

Six O'Clock News Followup

The KTRK ABC Channel 13 news CO2 story mentioned in an earlier blog entry was aired last night. My cameo role was integrated seamlessly with the larger story line. An excellent background piece on local aspects of the CO2 sequestration issue. Thank you Ted Oberg for an enlightened and well-constructed piece.

Tuesday, December 15, 2009

iPhone update from AGU

My first AGU meeting. Lots of CO2 sequestration talks yesterday: fault permeability, basin-scale storage capacity, corrosion rates of steel tubulars, CO2 accounting strategies.

AGU is quite different from SEG. Abstracts are only a paragraph or two, rather than the 4-page expanded SEG format. Talks are similar in length and style, but session chairmen give a nice intro that SEG could use. Another good idea at AGU: the open areas near meeting rooms are populated with large round tables with 8-10 chairs. On each floor of Moscone West were maybe 50-60 such tables. Great discussion pods. Also, a series of long tables with power strips for laptop hookup.

The exhibition floor is smaller than SEG's and only opens on Tuesday (curious). But the shocker is the poster area: easily larger than the SEG exhibit floor, a vast warehouse of poster stands including large theme signs. Seismology, tectonophysics, deep earth physics, etc. Clearly the real scientific exchange here is at the posters. Compare this to SEG, which puts the poster area in strange places that seem designed to deter all but the hardiest. At SEG 2009, the posters sat in a cavernous concrete space away from the exhibit hall and presentation rooms. Downright gloomy, if not actually depressing.

Anyway, AGU got the poster thing right and SEG could learn a lesson or two from them.

Thursday, December 10, 2009

Six O'Clock News

Interesting phone call came in today. It was on the cell phone and caller ID gave an unfamiliar number. Below is my recall of the conversation.

"Good morning, is this Professor Liner? I'm Ted Oberg with KTRK Channel 13 News."
"Ah, Good morning."
"Do you have a minute to talk?"
"Sure, what's up?"
"Well I am reporting a story about CO2 sequestration and I saw you in the video at the University of Houston."    [Note: I am about half way in]
"You seem to make the concepts very simple, simple enough to explain on the air."
"Interesting, and thank you"
"The story is centered on an energy company in Dallas who is planning a CO2 pipeline from a CO2 field to an oil field for enhanced oil recovery. But they are also going to run it near potential CO2 capture industrial sites. The hope is that as CO2 capture and sequestration takes off, they will be able to have people tap into their new pipeline."
"I see."
"So do you think we could get together for a short interview?"
"When did you have in mind?"
"How about today? We could come by your office, say 1:30 or 2:00."
"Actually, I'm off campus today. How about if I come to the studio at 1:30?"
"That would be great."
"OK, see you then."

So the meeting came off as planned and I would like to thank Ted in this semi-public medium for the chance to bring some of these issues to a wider audience. I am honored.

In the interview we talked on camera for about 20 minutes and did a 'walking shot' down the hallway. He wanted to know about the big picture, how the carbon capture and sequestration activity might affect business and individuals. All this will be boiled down to a few comments embedded in the bigger story of the proposed pipeline, CO2 sequestration in Texas (there is very little so far), and the Copenhagen climate meeting.

The piece involving my interview will run on the six o'clock news on Dec 16, toward the end of the Copenhagen meeting when CO2 will be very much in the news. Unless you are a pro like Ted, you never know how you will appear in front of the camera. I will be a nervous viewer.

For an overview of my take on CO2 capture and sequestration, a good source is the first half of a recent seminar I gave at The University of Texas.

For the more industrious readers, I maintain a wiki of CO2-related links. It gives some small indication of the scope of what is going on in these very early days of carbon capture and sequestration.

Saturday, December 5, 2009

Seismic goes multisource

[Note: A version of this blog entry will appear in World Oil (Jan, 2010)]

The role of seismic data in hydrocarbon exploration is risk reduction: fewer dry holes, marginal producers, or seriously misjudged reserves. The method had early and spectacular successes, beginning with the 1924 discovery of the Orchard Field in Ft. Bend County, Texas. Those early surveys relied exclusively on explosives as an energy source, but this has obvious issues with safety and environmental damage. Just as limiting is the fact that explosive sources give us very little control over the emitted waveform; basically our only knob to turn for higher or lower frequency is the charge size. It is analogous to modern marine airguns. An individual airgun emits a characteristic spectrum that varies with gun size. Like a bell choir, small guns emit higher frequencies and big guns emit lower frequencies. In marine seismic acquisition we can compose any spectrum we want by designing an appropriate airgun array, but to do this onshore with drilled shot holes is a very costly enterprise. To be sure, it was tested in the old days of land shooting when there were no other viable source options. There are some very interesting 1950s papers in GEOPHYSICS in which empirical equations were found relating charge size and depth to the dimensions of the resulting blast hole. It must have been a wild time for the researchers, since charges up to 1 million pounds were tested.

But even as those experiments were underway, the seismic world was changing. An explosive charge contains a broad band of frequencies that form a pulse of energy injected into the earth over a brief time span, perhaps 1/20th of a second (50 milliseconds). In fact, Fourier theory tells us that the only way to build such a short duration pulse is by adding up a wide array of frequencies. William Doty and John Crawford of Conoco were issued a patent in 1954 describing a new kind of land seismic source that did not involve explosives. The new technology, called vibroseis, involved a truck-mounted vibrating baseplate. Vibroseis applies the Fourier concept literally, operating one frequency at a time and stepping through the desired frequency range over 10-14 s. Think for a moment about just one aspect of this advance, namely seismic power. With an explosive source the only way to put more power in the ground is to use a bigger charge. If one big charge is used, we have to live with lower frequency; if an array of charges is used, the cost skyrockets. With vibroseis there are many options for getting more power into the earth: a bigger vibe truck, multiple vibes, longer sweeps, or some combination of all these. In addition to customizing power, vibroseis also allowed complete control over the source spectrum. Little wonder that vibroseis soon became the source of choice for land applications worldwide, particularly after the original patents expired in the early 1970s. Today, explosives are used only in places where a truck cannot go, or because of some business rationale like crew availability or personal preference. From a science point of view vibroseis is a clear winner.
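To make the sweep idea concrete, here is a minimal numerical sketch (plain Python/numpy, with arbitrary choices of band and sweep length, not tied to any particular field system). It builds a linear 10-100 Hz upsweep and autocorrelates it to produce the Klauder wavelet, the effective source pulse seen on correlated vibroseis records.

    import numpy as np

    dt = 0.002                     # sample interval (s)
    T  = 12.0                      # sweep length (s), in the 10-14 s range mentioned above
    t  = np.arange(0.0, T, dt)
    f1, f2 = 10.0, 100.0           # start and end frequencies (Hz), an arbitrary band

    # Linear upsweep: instantaneous frequency ramps from f1 to f2 over the sweep
    phase = 2.0 * np.pi * (f1 * t + 0.5 * (f2 - f1) * t**2 / T)
    sweep = np.sin(phase)

    # Cosine taper at both ends to suppress start/stop transients
    ntap = int(0.5 / dt)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(ntap) / ntap))
    sweep[:ntap]  *= ramp
    sweep[-ntap:] *= ramp[::-1]

    # Autocorrelation of the sweep = Klauder wavelet, the effective impulse
    # that appears on correlated vibroseis data
    klauder = np.correlate(sweep, sweep, mode="full") / len(sweep)
    lag = (np.arange(len(klauder)) - (len(sweep) - 1)) * dt
    print("Klauder wavelet peak at lag %.3f s" % lag[np.argmax(klauder)])

Changing f1, f2, or T reshapes the source spectrum directly, which is exactly the control an explosive charge cannot offer.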

Over the last four decades, the land seismic industry has transitioned from 2D shooting with a few dozen channels to current 3D practice involving tens of thousands of channels. The higher channel count allows shooting with tighter trace spacing (bin size), better azimuth coverage, and higher fold. These advances have led to improved seismic data through better signal extraction, noise suppression, and improved suitability for processing algorithms. They have also led to astronomical growth in survey size, data volume, and acquisition time. A big shoot in 1970 took a few weeks; today it can take a year or more. This is not just a matter of money, although this new kind of data is very expensive; it is time that matters most. Large seismic surveys are now acquired on time scales equal to drilling several wells, and are even approaching lease terms. If it takes two years to shoot and one year to process a big 3D, then a three-year lease starts to look pretty short.

So what is the bottleneck? Why is it taking so long to shoot these surveys? The main answer goes back to a practice that was born with the earliest seismic experiments. The idea is to lay out the receiver spread, hook everything up, then head for the first shot point. With the source at this location, a subset of the receivers on the ground is lit up, waiting for a trigger signal telling them to record ground motion. The trigger comes, the source acts at that instant, the receivers listen for a while, and data flows back to the recording system along all those channels. The first shot is done. Now the source moves to shot location 2, the appropriate receiver subset is lit up, the source acts, and so on. So it was when Karcher shot the first seismic lines near the Belle Isle library in Oklahoma City back in the 1920s, and so it is with most land crews today.

The key feature of this procedure is that only one source acts at a time. Over the years, there has been great progress in efficiency of this basic model. One popular version (ping-pong shooting) has two or more sources ready to go and they trigger sequentially at the earliest possible moment, just barely avoiding overlap of earth response from one source to the next. There are many other clever methods, but in any form this is single source technology. It carries a fundamental time cost because no two sources are ever active at the same time, for good reason.

If two sources are active at once we will see overlapping data, similar to the confusion you would experience with a different conversation coming in each ear. Interference from a nearby seismic crew was observed and analyzed in the Gulf of Mexico in the 1980s, leading to rules of cooperation among seismic contractors to minimize the interference effect. Things stood pretty much right there until a recent resurgence of interest in overlapping sources, now termed simultaneous source technology (SST). Both land and marine shooting are amenable to SST, but we will explain the concept in relation to land data.

The promise of simultaneous source technology is clear and compelling. If we can somehow use two sources simultaneously, then the acquisition time for a big survey is effectively cut in half. Of course it is not quite that simple; some aspects of the survey time budget are unchanged, such as mobilization and demobilization, laydown and pickup times, and so on. But source time is a big part of the time budget and SST directly reduces it. Field tests using four or more simultaneous sources have been successfully carried out in international operations, and the time is ripe for SST to come onshore in the US.

As if we required another reason to prefer it over explosives, the high-tech aspects of vibroseis lead to elegant and accurate methods of simultaneous shooting. The details need not concern us, but theory has been developed and field tests done that show various ways of shooting vibroseis SST data. The nature of the universe has not changed: When multiple sources overlap in time, the wave field we measure is a combination of the earth response to each source. But with some high-powered science, one response can be separated almost entirely, just as we manage to isolate one conversation in a loud room from all the others. The data corresponding to each simultaneous source is pulled out in turn to make a shot record, as if that source had acted alone. In about the time it took to make one shot record the old way, we have many shot records. A survey that would take a couple of years with a single source could be done in a couple of months by using, say, 12 simultaneous sources. An amazing case of "If you can't fix it, feature it".
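Here is a toy illustration of the separation idea, with made-up numbers and drastic simplifications (two vibes, one receiver, noise-free linear earth); it is not any contractor's actual method. Give each source a different sweep, record the overlapping data, then correlate with each sweep in turn: each correlation compresses its own source's reflections into sharp events and smears the other source's energy into a weak background.

    import numpy as np

    dt = 0.002
    t = np.arange(0.0, 12.0, dt)             # 12 s sweeps

    def linear_sweep(f1, f2, t):
        T = t[-1]
        return np.sin(2.0 * np.pi * (f1 * t + 0.5 * (f2 - f1) * t**2 / T))

    sweep1 = linear_sweep(10.0, 90.0, t)     # source 1: upsweep
    sweep2 = linear_sweep(90.0, 10.0, t)     # source 2: downsweep

    # Simple earth responses: a few spike reflectors for each source
    nrec = len(t) + 2500                     # record length in samples
    e1 = np.zeros(nrec); e1[[400, 900, 1500]] = [1.0, -0.6, 0.4]
    e2 = np.zeros(nrec); e2[[600, 1100, 1800]] = [0.8, 0.5, -0.3]

    # Both sources shake at the same time; one receiver records the sum
    trace = np.convolve(e1, sweep1)[:nrec] + np.convolve(e2, sweep2)[:nrec]

    # Correlate with each sweep to pull out that source's shot record
    rec1 = np.correlate(trace, sweep1, mode="valid")
    rec2 = np.correlate(trace, sweep2, mode="valid")
    # rec1 shows sharp events near samples 400, 900, 1500 (source 1);
    # rec2 shows them near 600, 1100, 1800 (source 2); each carries a
    # low-level smeared residual (crosstalk) from the other source.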

For many years we have been shooting 3D land seismic data to fit the pocketbook, making compromises at every turn. Physics tells us what should be done, but we do what we can afford. Bin sizes are too large, fold is too low, only vertical component sensors are used, azimuth and offset distributions are far from ideal. When physics and finance collide, finance wins.

But now the game is changing. With simultaneous sources, the potential is there to make a quantum leap in seismic acquisition efficiency thereby driving time- and dollar-cost down. The seismic data we really need for better imaging and characterization of tough onshore problems may actually become affordable.

Monday, September 28, 2009

Dispersion and Processing: Attenuation and Anisotropy

The distinction between intrinsic and apparent frequency-dependent seismic properties is nowhere greater than in the areas of attenuation and anisotropy. First a brief overview of the attenuation problem.

If we take a rock sample from a well core and test it in the lab, we are likely to find that there is some small amount of intrinsic attenuation, meaning the irrecoverable loss or conversion of wave energy into heat. This will yield the frequency-dependent wave energy loss as observed over the length of a 1 inch rock sample. In the band of surface seismic data, 5-100 Hz, this attenuation is relatively constant and exceedingly small. Extrapolating this minute intrinsic attenuation to a geologic section several kilometers thick will still predict only a modest loss of wave energy, so small is the intrinsic attenuation effect. But what is actually observed in field data? From VSP and other downhole measurements we can monitor the evolving wave field and estimate attenuation through the rock column. It is seen to be significant, much stronger than the lab results would suggest. The reason is layering. For layered media, the waves continuously undergo reflection, transmission, and mode conversion. All of these mechanisms conserve energy (no conversion to heat) and thus do not qualify as attenuation in the intrinsic sense. But the observed downgoing wave field will rapidly lose amplitude due to these effects, and the amount of loss will be a strong function of frequency. High frequencies in the wavelet amplitude spectrum will erode much faster than low frequencies. On the other hand, interbed multiples cascade to reinforce the downgoing wave field, as shown by O'Doherty and Anstey (1971). So the picture that emerges about apparent attenuation is not one of a rock property that can be measured in the lab, but a frequency-dependent attenuation field.
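For reference, the O'Doherty-Anstey result can be stated compactly (in its simplest normal-incidence form, as I recall it): the amplitude spectrum of the transmitted downgoing wave after traveltime tau through a finely layered stack behaves approximately as

    |T(\omega)| \approx \exp[-R(\omega)\,\tau]

where R(omega) is the power spectrum of the reflection coefficient series expressed as a function of traveltime. The frequency dependence of the apparent attenuation thus mirrors the spectral character of the layering itself, not any property of the individual rock samples.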

The term attenuation ‘field’ deserves some explanation. One of the great unifying concepts of physics is the field. There are many kinds of fields, but our view of apparent attenuation is that of a scalar field that associates with each point in space a number representing the attenuation at that location. Imagine someone does a 20 Hz calculation for total attenuation in a layered medium and comes up with a value at some location. If we were to go to that location in the earth, take a physical sample of the material, and test it in the lab, we would find only the intrinsic attenuation and be left wondering about this total attenuation value. Now suppose this someone redoes the calculation for 10 Hz and assigns a different total attenuation value to the same location. Again we go there, extract a sample, and test it in the lab. The results are of course the same, since the intrinsic attenuation (a rock property) has not changed. Yet waves of 10 and 20 Hz moving through our earth model will actually experience different levels of attenuation in line with the total attenuation calculation.

To summarize, the attenuation a wave will see at any given location is composed of two parts: the intrinsic attenuation of the material at that spot and a frequency-dependent apparent attenuation field due to layering effects. Furthermore, this attenuation field is not just dispersive (a function of frequency); it is also anisotropic (a function of propagation direction). Attenuation anisotropy is a current area of research (Zhu et al., 2007).

This naturally brings up the topic of seismic velocity anisotropy and how it depends on frequency. We will restrict our comments here to VTI-type anisotropy, which is velocity variation with respect to the vertical axis in a horizontally layered earth. Unlike the attenuation case, lab-scale rock samples can show significant velocity anisotropy. Occurring at this fine scale, it is termed intrinsic anisotropy. It is not a function of frequency and represents a rock property like density or bulk modulus. Of the sedimentary rock types, only shale is seen to be significantly anisotropic at this small scale. The origin of this behavior is the alignment of clay minerals at the microscopic level. Several recent publications have shown that shale velocity anisotropy is ubiquitous. For sandstone and carbonate rocks, VTI behavior develops in proportion to shale content. So intrinsic anisotropy is predictable in the rock column at any particular location from knowledge of the shale volume, and this in turn can be determined from standard well logging analysis. This is the shale part of the VTI problem; the other part is layering.

Long before shale anisotropy was well understood, there was a theoretical interest in waves traveling through layered media. There are many effects that arise from layering; we have already mentioned waveguides as a good example. But waveguides are a thick layer problem. Now we are discussing the effects of thin layers, meaning layer thickness is much smaller than the seismic wavelength. In fact, with respect to velocity you can think of a continuum of behavior as we go from high to low frequency. At very high frequencies, the wavelength is much smaller than the layer thickness and the waves see a homogeneous medium. At longer wavelengths the medium seems to be heterogeneous, or v(z). Finally, at very long wavelengths compared to the layer thicknesses, the material behaves like an anisotropic medium. The question is how to calculate the apparent anisotropic parameters of the layered medium as seen by very long waves. Backus (1962) solved this problem for the case where the layers are either isotropic or VTI, although he was hardly the first to work on it. He capped off 25 years of investigation on this problem by many researchers. Today, we have full wave sonic and density logs that give Vp, Vs, and density every half-foot down a borehole. Shale intervals can be detected by gamma logs, but there is no substitute for lab measurements on core to find intrinsic shale VTI parameters. This gives us all the raw material that Backus said we need: a thinly layered elastic medium composed of some combination of isotropic layers (Vp, Vs, and density) and VTI layers (5 elastic constants plus density).

Armed with a layered model, Backus says we need to do a kind of averaging to find the equivalent medium. His theory showed that if we do the averaging with a suitable averaging length in depth, then as far as wave propagation is concerned the two models are the same. Let's be careful and precise about this. The original model consists of fine elastic layers with properties that vary arbitrarily from one layer to the next. Waves sent through such a medium can be measured at the top of the stack (reflected field) or at the bottom (transmitted field). Let's call these observations the original wave field. Now we do Backus averaging to come up with a new model that is smoother and more anisotropic than the original. We send the same waves through the new model and measure the field at top and bottom. What Backus said is this: if the averaging distance is small enough compared to the wavelength, the original and new wave fields will be the same, even though the original and new earth models look quite different. That is, these two earth models are identical with respect to wave propagation. Here we should also make clear the distinction between intrinsic anisotropy due to shale layers and layer-induced anisotropy that can occur even when every individual layer is isotropic. The total anisotropy is a combination of the two.
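Here is a minimal sketch of Backus averaging for the simplest case, a stack of isotropic layers (VTI layers require the more general formulas in the 1962 paper). The layer values below are made up for illustration; in practice they would come from the sonic and density logs just mentioned.

    import numpy as np

    def backus_isotropic(vp, vs, rho, thick):
        """Backus (1962) average of fine isotropic layers -> equivalent VTI medium.
        vp, vs in m/s, rho in kg/m^3, thick = layer thicknesses (weights)."""
        vp, vs, rho, thick = map(np.asarray, (vp, vs, rho, thick))
        w = thick / thick.sum()                 # thickness weights
        avg = lambda x: np.sum(w * x)           # thickness-weighted mean
        lam = rho * (vp**2 - 2.0 * vs**2)       # Lame parameters per layer
        mu  = rho * vs**2
        # Standard Backus relations for the equivalent VTI stiffnesses
        C33 = 1.0 / avg(1.0 / (lam + 2*mu))
        C44 = 1.0 / avg(1.0 / mu)
        C13 = avg(lam / (lam + 2*mu)) * C33
        C11 = avg(4*mu*(lam + mu)/(lam + 2*mu)) + avg(lam/(lam + 2*mu))**2 * C33
        C66 = avg(mu)
        # Thomsen parameters of the equivalent (long-wavelength) medium
        eps   = (C11 - C33) / (2*C33)
        gamma = (C66 - C44) / (2*C44)
        delta = ((C13 + C44)**2 - (C33 - C44)**2) / (2*C33*(C33 - C44))
        return eps, gamma, delta

    # Binary stack of two isotropic materials in equal proportion
    eps, gamma, delta = backus_isotropic(vp=[3000.0, 2200.0], vs=[1700.0, 900.0],
                                         rho=[2400.0, 2300.0], thick=[1.0, 1.0])
    print("epsilon %.3f  gamma %.3f  delta %.3f" % (eps, gamma, delta))

Even though every individual layer here is isotropic, the equivalent medium comes out anisotropic: this is the layer-induced anisotropy mentioned above.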

So where does dispersion come into all this? It is buried in the thorny question of the averaging length. As the averaging length increases, the medium becomes smoother and more anisotropic, and the wave fields are only the same for long wavelengths or, equivalently, low frequency. But it is only the layer-induced anisotropy that depends on the averaging length; shale anisotropy does not. This means that layer-induced anisotropy, and therefore total anisotropy, is dispersive.

As with attenuation, we come away with a concept that VTI-type anisotropy is a frequency-dependent field. A 20 Hz wave will see a different version of earth anisotropy than a 30 or 60 Hz wave. In principle, each frequency has a unique anisotropy and attenuation field. A challenge for seismic imaging in the future is to exploit these phenomena, perhaps leading to frequency-dependent anisotropy and attenuation estimation as descendants of today’s migration velocity analysis.


Next: Dispersion and Interpretation: Rough Surface Scattering

Refs:
Backus, G., 1962, Long-wave elastic anisotropy produced by horizontal layering: J. Geophys. Res., 67, 4427--4440.
O'Doherty, R. F. and Anstey, N. A., 1971, Reflections on amplitudes: Geophys. Prosp., 19, 430-458.
Zhu, Y., Tsvankin, I., Dewangan, P., and van Wijk, K., 2007, Physical modeling and analysis of P-wave attenuation anisotropy in transversely isotropic media: Geophysics, 72, D1-D7.

Monday, September 21, 2009

Dispersion and Processing: Near Surface

We are usually taught in college that dispersion is not an issue in seismic data processing. Sure, we are told, when we try to match rock physics, sonic log, and surface seismic estimates of velocity we find discrepancies, but that is because we are passing through orders of magnitude difference in frequency. In the 10-100 Hz band of typical surface seismic data, velocity is independent of frequency.

But that is a sloppy compression of reality. It is pretty nearly true for seismic body waves (P, S, and mode converted) moving around the far subsurface. In the near surface, however, velocities often show strong dispersion and the description is terribly inaccurate. Strangely, this is especially the case in marine shooting over shallow water. I say strangely because the speed of sound in water is independent of frequency to an exceptional degree, although it does depend on temperature, pressure, and salinity. It is only at immense frequencies, where wavelengths become vanishingly small, that sound speed begins to have any dependence on frequency. Yet, our standard 10-100 Hz data in shallow water leads to measured velocities well above and below the physical speed of sound waves in water.

This paradox arises because shallow water over an elastic earth forms a waveguide, bounded above by air and below by the seafloor. Like an organ pipe, sound gets trapped in the water layer and interferes to form a series of normal modes. Guitars and other stringed musical instruments are perhaps the most familiar example of such modes. The string is anchored at each end and can support a wave that spans the entire string, or harmonics of that wave that progressively span one-half, one-quarter, etc. of the total string length.

The understanding of trapped or guided waves involves generalizing our usual concept of velocity. At a basic level, we think of a wave traveling a certain distance in a certain time, and the ratio of these is the velocity of the wave. This definition works to determine the speed of sound in water at high precision as pressure, temperature, and salinity are varied. But now fix all these so that our lab measurement of the speed of sound in water is, say, 1500 m/s. Fill an ocean with such water, make it a few tens of meters deep, set off an impulsive source, and listen with a sensor in the water a kilometer and a half away. We expect to observe the water wave arrival at one second (1500 m divided by 1500 m/s). This is the case for the highest frequencies in our data that have wavelengths (velocity divided by frequency) much smaller than the depth of the water column. In effect, they are not influenced by the seafloor. But lower frequencies have longer wavelengths; they feel the seafloor, tilting and jostling to fit in a water layer that looks increasingly thin as the frequency gets lower. In this regime, the concept of wave speed splits into two kinds of velocity, group and phase, neither of which is equal to the actual sound speed in water and both of which show dramatic, complicated variation with frequency. In other words, they are dramatically dispersive.

It is interesting that early work in quantum mechanics was also closely linked to phase and group velocity. In 1905 Einstein established that light comes in particles, or quanta, carrying energy and momentum like matter. Twenty years later de Broglie flipped this around and asserted that matter had to have wave characteristics. The matter waves were investigated by means of a thought experiment involving a plane wave of frequency 'w' (omega) and wavenumber 'k'. For such a wave the speed is the phase velocity given by v=w/k. When this wave is summed with a second plane wave having slightly different frequency 'w+dw' and wavenumber 'k+dk', the result is a low-frequency wave packet of frequency dw traveling at the group velocity 'dw/dk', and inside the wave packet the original wave is traveling with speed v=w/k. The picture is one of a wave packet moving at group velocity and a monochromatic wave moving inside at a different speed. It is the group velocity that made physical sense in the case of matter waves, being simply the mechanical speed of the particle. The phase velocity is not so easy to understand, since it turned out to always be greater than the speed of light -- in apparent contradiction to the special theory of relativity.
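The thought experiment is just a trigonometric identity. Writing the two plane waves as cosines and adding them,

    \cos(kx - \omega t) + \cos[(k + dk)x - (\omega + d\omega)t]
      = 2\,\cos\!\left(\frac{dk\,x - d\omega\,t}{2}\right)
          \cos\!\left[\left(k + \tfrac{dk}{2}\right)x - \left(\omega + \tfrac{d\omega}{2}\right)t\right]

The slowly varying first factor is the wave packet (envelope), moving at the group velocity dw/dk, while the rapidly oscillating second factor moves at essentially the phase velocity w/k.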

Returning to the case of acoustic waves in a shallow water waveguide, we find mathematically similar phenomena. At a distance far from the source, compared to the water depth, the trapped waves form a spatial wave packet. The low-frequency envelope of the packet travels at group velocity and represents the rate of energy transport by the wave field. The group velocity at high frequency is asymptotic to sound speed in water, then drops with decreasing frequency, until at low enough frequency it is no longer primarily controlled by the water layer but by the elastic substrate. As it approaches a cut-off frequency, the group velocity is about equal to the Rayleigh wave speed of the seafloor.

As the wave packet traverses this complicated velocity dispersion life cycle, the wave structures interior to the wave packet are traveling at the phase velocity. Phase velocity is also a strong function of frequency, but behaves differently. At high frequency, the phase velocity is, like group velocity, equal to sound speed in water. As frequency decreases, however, the phase velocity rises and is always greater than sound speed. We can think of this in terms of a plane wave front. At high frequency, this is vertical and represents the direct wave from the source. But as the frequency drops, the wave front tilts and receivers along the sea surface now measure the apparent velocity 'vw/cos(a)', where 'vw' is sound speed in water and 'a' is the propagation angle away from the horizontal. With lower and lower frequency, the wavefront has to lie down ever more to fit the increasingly long wavelength in the water column. As the cut-off frequency is approached, phase and group velocity meet once again at the Rayleigh wave speed of the substrate.
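These curves can be sketched for the simplest possible waveguide: an ideal water layer with a free surface above and a perfectly rigid bottom below. This idealization differs from the elastic-seafloor case described above (there is no Rayleigh-speed limit; instead the group velocity collapses at cutoff), but it shows the same general character. For mode n the cutoff frequency is fc = (2n-1)c/(4h), and phase and group velocity satisfy c_phase * c_group = c^2.

    import numpy as np

    c = 1500.0        # sound speed in water (m/s)
    h = 30.0          # water depth (m), a shallow-water example
    n = 1             # mode number
    fc = (2*n - 1) * c / (4.0 * h)              # cutoff frequency of mode n (Hz)

    f = np.linspace(fc * 1.01, 100.0, 500)      # frequencies above cutoff
    c_phase = c / np.sqrt(1.0 - (fc / f)**2)    # > c, rises as f approaches fc
    c_group = c * np.sqrt(1.0 - (fc / f)**2)    # < c, drops toward zero at cutoff

    print("mode-1 cutoff: %.1f Hz" % fc)
    print("at 100 Hz: phase %.0f m/s, group %.0f m/s" % (c_phase[-1], c_group[-1]))
    # Both curves approach c = 1500 m/s at high frequency; near cutoff the
    # phase velocity blows up while the group velocity collapses.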

Since about 1981 there have been processing tools to image phase velocity curves of the kind that develop in shallow water exploration. Park et al. (1998) found a scanning method that works well with real 2D or 3D data, which are often irregularly sampled in space. Imaging of dispersive group velocity curves will be discussed at the 2009 SEG meeting in Houston in a paper by Liner, Bell, and Verm. The concept is this: if we look at a single trace far from the source in shallow water shooting, a time-frequency decomposition of this trace will reveal that low frequencies are traveling slower than high frequencies, precisely the behavior we expect for group velocity curves. Quantitative investigation of the observed curves supports this interpretation.
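The bare bones of such a phase-velocity scan can be written in a few lines (this is only a sketch of the idea, not the published algorithm or any commercial implementation): for each frequency, keep only the phase of each trace's spectral value, back out a trial plane-wave moveout across offsets, and stack. Coherent alignment at the right phase velocity produces a peak in the frequency-velocity image.

    import numpy as np

    def phase_velocity_scan(data, dt, offsets, freqs, velocities):
        """data: (ntraces, nsamples) shot record; offsets in m.
        Returns an (nfreq, nvel) image whose peaks trace the dispersion curves."""
        ntr, ns = data.shape
        spec = np.fft.rfft(data, axis=1)
        faxis = np.fft.rfftfreq(ns, dt)
        image = np.zeros((len(freqs), len(velocities)))
        for i, f in enumerate(freqs):
            k = np.argmin(np.abs(faxis - f))
            # keep phase only, so strong traces do not dominate the stack
            u = spec[:, k] / (np.abs(spec[:, k]) + 1e-12)
            for j, c in enumerate(velocities):
                steer = np.exp(1j * 2.0 * np.pi * f * np.asarray(offsets) / c)
                image[i, j] = np.abs(np.dot(u, steer)) / ntr
        return image

For guided waves in shallow water, the trial velocities might span a few hundred m/s up to a little above the water speed, and the frequencies the lower end of the seismic band.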

From a data processing viewpoint, dispersive guided waves represent strong noise in the data. They are therefore a prime target for some kind of filtering technology. But phase and group velocity curves also possess valuable information about elastic properties of the sea floor, particularly shear wave speeds that are difficult to estimate otherwise. In principle, every shot record could be used to estimate a laterally varying shear wave model for use in converted wave exploration.

Next... Dispersion and Processing: Attenuation and Anisotropy

Tuesday, September 15, 2009

Dispersion and acquisition

If we were to survey the universe of seismic sources used today for production seismic data in the petroleum industry, we would find only a few serious contenders. Over the last 80 years or so, many seismic sources have been developed, tested, and tossed into the Darwinian struggle for market survival as a reliable commercial source. At present, three sources account for the vast majority of data acquisition.

In marine seismic applications the airgun is ubiquitous. There are several dispersive effects related to airguns and airgun arrays, including ghosting and radiation patterns. Recall that we are using dispersion in a generalized sense meaning frequency-dependent phenomena, not just seismic velocity variation with frequency. The ghost is an interesting example of dispersion in which the physical source interacts with the ocean surface to form a plus-minus dipole that is a strong function of frequency. For a given source depth, the radiated field can have one or several interference notches along certain angular directions away from the source. These show up in the measured seismic data as spectral nulls called ghost notches. To further complicate the picture, ghosting occurs on both the source and receiver side of acquisition. The radiation pattern associated with an airgun array is an exercise in the theory of antenna design and analysis, again complicated by dipole characteristics due to ghosting.
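In equation form (a standard idealization assuming a perfect free surface with reflection coefficient -1 and water sound speed c), a source at depth d radiating at angle theta from the vertical has a ghosted far-field amplitude spectrum proportional to

    |G(f)| = 2\left|\sin\!\left(\frac{2\pi f\, d \cos\theta}{c}\right)\right|

with notches at f_n = n c / (2 d cos(theta)), n = 0, 1, 2, ... A deeper source pulls the first nonzero notch down into the seismic band; a shallower source pushes it up in frequency but weakens the low-frequency output.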

For land seismic data, there are two major sources in use worldwide: explosives and vibroseis. The explosive source has, in principle, the weakest dependence on frequency. Certainly it has a bandwidth determined by shot characteristics and local geology, but it is an approximately impulsive point source. A buried explosive shot will, like the marine airgun, develop a dipole nature due to ghosting. But this is often not as well developed as in the marine case, likely due to lateral variations in near surface elastic properties and topography.

The other significant land source, vibroseis, has a host of dispersive effects. For a single vibe we can mention two fascinating phenomena, radiation pattern and harmonics. The theory of radiation for a circular disk on an isotropic elastic earth was developed by several investigators in the 1950s, most notably Miller and Pursey. They were able to show that the power emitted in various wave types (P, S, Rayleigh) ultimately depends only on the Poisson ratio. But even though the total power for a single vibe is not a function of frequency, in real world applications it is common to use a source array, which will radiate seismic waves in a way that strongly depends on frequency.

A vibroseis unit injects a source signal (sweep) into the earth over the course of several seconds. The sweep is defined by time-frequency (T-F) characteristics and for simplicity we will consider a linear upsweep here (very common in practice). The emitted signal bounces around in the earth and is recorded by a surface sensor, the resulting time series being an uncorrelated seismic trace. Conceptually, when this uncorrelated time trace is transformed into the T-F plane by a suitable spectral decomposition method, we should see a representation of the sweep with a decaying tail of reflection energy. This is observed, but we also commonly see a series of other linear T-F features at frequencies higher than the sweep at any given time. These are vibroseis harmonics. Since the observed uncorrelated seismic trace is the summation of all frequencies in the T-F plane, these harmonics can interfere and distort the weak reflection events we are trying to measure.

The origin of harmonics can be understood in relation to human hearing. As first discussed by Helmholtz in the 1860s, when a sound wave interacts with the human hearing apparatus something very interesting happens. First we need to realize that away from any obstacle, a sound wave proceeds by vibratory motion of the air particles, and this motion is symmetric (equal amplitude fore and aft). But when a sound wave encounters the ear it pushes against the eardrum, which is a stretched elastic membrane with fluid behind. The amount of power in the sound wave is fixed, and that power will compress the eardrum (due to its elasticity) less than the sound wave will compress air. If we think of, say, a 200 Hz wave as a cosine, this interaction means the deflection will be asymmetric. It will be a waveform that repeats 200 times per second, but it will not be a symmetric cosine wave. How can something repeat 200 times per second and not be a pure 200 Hz wave? Helmholtz found the answer: it must be a 200 Hz wave plus a series of harmonics (400 Hz, 600 Hz, and so on). The fact that the material properties of the ear impede the motion due to sound necessarily means that harmonics will be generated.
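A tiny numerical check of the Helmholtz argument, using a made-up asymmetric response (a weak quadratic plus cubic distortion standing in for the eardrum, or for the baseplate-ground contact discussed next): feed in a pure 200 Hz cosine and the output picks up energy at 400 and 600 Hz.

    import numpy as np

    dt = 1.0 / 4000.0                      # 4 kHz sampling
    t = np.arange(0.0, 1.0, dt)            # one second of signal
    x = np.cos(2.0 * np.pi * 200.0 * t)    # pure 200 Hz input

    # Distortion with even and odd terms: the response to push and pull differs
    y = x + 0.2 * x**2 + 0.1 * x**3

    spec = np.abs(np.fft.rfft(y)) / len(t)
    f = np.fft.rfftfreq(len(t), dt)
    for target in (200.0, 400.0, 600.0):
        k = np.argmin(np.abs(f - target))
        print("%4.0f Hz amplitude: %.3f" % (target, spec[k]))
    # The output still repeats 200 times per second, but it is no longer a pure
    # cosine: harmonics appear at 400 Hz (from the squared term) and 600 Hz
    # (from the cubed term).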

Now back to the vibroseis case: when the mechanical apparatus of the vibrator pushes down against the earth, it is resisted by the elastic nature of the near surface. On the upstroke the motion is unimpeded, asymmetry develops, and harmonics are generated. All this happens despite some pretty amazing engineering in the system. With modern T-F methods, we can think up various ways to remove the harmonics by processing the uncorrelated data traces. There is also ongoing discussion about how to use the harmonics rather than filter them out.

Next time.... Dispersion and processing

Tuesday, September 8, 2009

Seismic dispersion

All of seismology is based on waves and a primary property of any wave is the velocity (v) at which it travels. This is related to wavelength (L) and frequency (f) through v = f L. This shows that as the frequency changes, so does the wavelength in just such a way that their product is preserved as the constant velocity. But it is important to note that velocity itself is not a function of frequency, a situation termed nondispersive wave propagation. As the frequency is ramped up, the wavelength drops, and the waves always travel at the same speed. This is the case with waves in unbounded ideal gases, fluids, and elastic materials.

Porous media are another matter. Wave speed is a function of material properties (matrix and fluid) and environmental variables (pressure, temperature, stress). Luckily for us, in the low frequency range (0-100 Hz) of surface seismic data, the velocity does not depend on frequency to within measurable tolerance. However, as frequency ramps up to sonic logging (10-20 kHz) and bench-top rock physics (MHz), the wave speeds do become dispersive (the classic paper is Liu et al., 1976).

This is the classical meaning of the word 'dispersion': velocity is a function of frequency. Here we will take a more general definition that includes any wavefield property, not just speed. Examples will include velocity, of course, but also attenuation, anisotropy, and reflection characteristics. We could also lump all of these things under the name 'frequency dependence', but 'dispersion' is already out there with respect to velocity and it seems better to go with the shorter, more familiar term.

I am a bit embarrassed to admit that I made a strong point to a colleague (Jack Dvorkin, I believe) a few years ago about his use of 'dispersion' for something other than frequency-dependent velocity. I think he was talking about attenuation. Anyway, my tardy apologies because I have arrived at the same terminology.

It is curious that so much of classical seismology and wave theory is nondispersive: basic theory of P and S waves, Rayleigh waves in a half-space, geometric spreading, reflection and transmission coefficients, head waves, etc. Yet when we look at real data, strong dispersion abounds. The development of spectral decomposition has served to highlight this fact.

We will distinguish two kinds of dispersion. If the effect exists in unbounded media then we will consider it to be 'intrinsic' and thus a rock property that can be directly measured in the lab. On the other hand, if the dispersion only presents itself when layering is present then we will term it 'apparent', this case being responsible for the vast majority of dispersive wave behavior in the lower frequency band of 0-100 Hz.

To make some sense of the seismic dispersion universe, we will break down our survey into the traditional areas of acquisition, processing, and interpretation.

It is a fascinating and sometimes challenging topic. We will not seek out mathematical complexities for their own sake. Rather we will gather up interesting and concise results, presented in a common notation, and dwell more on the physical basis, detection and modeling tools, and especially the meaning of dispersive phenomena.

Reference:
Liu, H.-P., Anderson, D. L., and Kanamori, H., 1976, Velocity dispersion due to anelasticity; implications for seismology and mantle composition, Geophys. J. R. astr. Soc., 47, 41-58.

Monday, September 7, 2009

Comeback, DISC, and SMT depth conversion

After a hiatus of about 9 months, I am dusting off the Seismos blog. A few things have helped me come to this decision.

First, I set up a blog for my wife Dolores, former SEG assistant editor for The Leading Edge. If you are interested, here is the link: Proubasta Reader. The experience of setting up and customizing her blog gave me some good ideas about how to maintain my own. Back in January 2009, I was just playing around with the blog and found it onerous to come up with daily or even weekly entries. Now I see the point is not to wait for big things to write about, but to say a few words as things come up. Closer to a postcard than a book chapter.

Another push came from my being nominated for SEG Distinguished Instructor Short Course (DISC) for 2012. It still requires approval of the SEG/EAGE Executive Committees in about 6 weeks at the SEG convention here in Houston, which brings me to another push. For this approval meeting, I need to supply the DISC committee chairman, Tad Smith, with a 2 page summary of what I have in mind for my DISC. What better place for this to develop than on the blog, leading to a Seismos column in TLE (which will surely have to be published after the convention due to editorial backlog).

And finally, assuming I am approved, the DISC instructor must write a book that is used as notes whenever the 1-day short course is given. So as I get the book in shape over the next few months, I can track progress and interesting topics on the blog.

Enough for now about the comeback and DISC.

************************ SMT depth conversion note ******************

I'm teaching a graduate 3D seismic interpretation class this semester at U Houston (GEOL6390). The software used for this class is Seismic MicroTechnology's (SMT) Kingdom software (v.8.4). We have a generous and important 30-seat license donation that makes this popular class possible. This semester we have 26 students, limited by the number of good hardware seats and the need for optimum instructor-student interaction.

I am also principal investigator for a DOE-funded CO2 sequestration characterization project in the Dickman field of Ness County, Kansas.

In both cases, the issue of depth conversion comes up. For the class we have 3 assignments, the last of which is prospect generation in the Gulf of Mexico using data donated by Fairfield Industries (thank you very much John Smythe). There is one well in the project that allows for depth control. For the Dickman project we have 135+ wells and a 3D seismic volume, so a more ambitious integrated depth map is in order.

But just today I was testing various ways to track and depth convert the Mississippian horizon at Dickman. First I carefully did 2D picking in each direction, keeping an eye on the event as it passed through each well with a Miss formation top pick. Using a shared time-depth (TD) curve, all the wells lined up nicely with the seismic event. Next came 3D infill picking, which did a good job.

But how to convert the time structure horizon to depth? I created grids to infill a few small tracking holes due to noise and weak amplitude. It seemed logical to then depth convert the time-structure grid to depth using the shared TD curve. But strangely, this did not let me constrain to an existing polygon. Further, it required some additional gridding parameters, even though you would think it just needs to look up each already gridded time value and find the associated depth from the TD curve.  I was also hoping for the ability to cross-plot the seismic gridded depth values at all wells that have a Miss formation top picked.  From the cross-plot there is enough information to generate a v(x,y) velocity field, extending the v(z) time-depth curve, so that all known tops are matched exactly and some kind of interpolation happens between known points.  No can do, as far as I can see.
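The workflow I was hoping for can be sketched generically (plain numpy with made-up numbers; this is not SMT functionality): look up each gridded time value in the shared TD curve, then compute the mistie at each well so a correction surface could be built that honors every known top.

    import numpy as np

    # Shared time-depth (TD) curve: two-way time (s) vs depth (ft), made-up values
    td_time  = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    td_depth = np.array([0.0, 2600.0, 5400.0, 8400.0, 11600.0])

    def time_to_depth(t_grid):
        """Depth-convert a time-structure grid by simple lookup in the TD curve."""
        return np.interp(t_grid, td_time, td_depth)

    # Gridded Mississippian time horizon (seconds), tiny 3x3 example
    t_miss = np.array([[0.82, 0.84, 0.85],
                       [0.83, 0.86, 0.88],
                       [0.85, 0.87, 0.90]])
    z_seis = time_to_depth(t_miss)

    # Wells with formation-top depths (ft) located at known grid nodes (row, col)
    wells = [((0, 0), 4450.0), ((1, 1), 4690.0), ((2, 2), 4905.0)]

    # Mistie at each well = actual top minus seismic depth from the TD curve;
    # gridding these residuals into a smooth correction surface (equivalently,
    # a v(x,y) field) would make the final depth map tie every top exactly
    for (i, j), z_top in wells:
        print("well at node (%d,%d): mistie %.0f ft" % (i, j, z_top - z_seis[i, j]))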

Finally, it would be nice to be able to plot the TD curve as a simple time-depth crossplot, and do this with several TD curves to investigate lateral variability.

A different, perhaps better, approach is to depth convert the seismic data directly using the TD curve (TracePAK) and then re-track the Miss event through the depth volume. SMT does not allow the tracked time horizon to be extracted through the depth volume, which is again strange since the TD curve is known.

The jury is still out on this one. Perhaps you have a better idea....

Thursday, January 15, 2009

Exact Verhulst Solution


Date: Thu, 8 Jan 2009 14:54:12 -0700 (MST)
From: Willy Hereman
To: John Stockwell
Cc: Doug Baldwin
Subject: Re: Chris Liner's paper using the Verhulst eq

John and Doug,

I read Liner's article (attached for you Doug).  Interesting!  I also did some work on the Verhulst equation, i.e. eq. (1) in his paper:

q'(t) = a_1 q - (a_1/a_2) q^2,

where a_1 is the intrinsic growth rate and a_2 is the saturation level
(also called the carrying capacity).

Eq. (1) has an exact solution which can be computed by separation of variables or by treating (1) as a Bernoulli equation. The solution is then represented as a rational expression involving an exponential function. That form of the solution can be found in almost any book on ODEs.

However, by looking at Liner's curve of the derivative, q'(t), in Fig. 1 (b), it came to me that the exact solution might be expressible in terms of a tanh function, for its derivative is then sech-squared (a bell-shaped curve, not to be confused with a true Gaussian, although they look alike).

Several years ago, Douglas Baldwin and I designed a Mathematica program that automatically computes the exact tanh solutions of ODEs and PDEs. So, I tried our program and here is the nice closed form solution of Eq. (1) produced by the code:

q(t) = (1/2) a_2 { 1 + tanh[ (1/2) a_1 t + delta ] }

and its derivative

q'(t) = (1/4) a_1 a_2 sech^2 [ (1/2) a_1 t + delta ]

These are the exact mathematical expressions of the curves Liner
plotted in Fig. 1 (a) and (b), respectively.

Well, I learned something today.  I had not realized up to now that Verhulst's logistic equation had a simple tanh solution!

I have attached the Mathematica notebook with the result obtained
by our PDESpecialSolutionsV2.m code (the code is also attached).

Best,

Willy

ps (5-apr-2011)

The delta in the solution is an arbitrary constant. It is equivalent to writing the solution as

q(t) = (1/2) a_2 { 1 + tanh[ (1/2) a_1 (t - t_0) ] }

for an arbitrary t_0, i.e., an initial value for time t. The remaining two constants are a_1 and a_2.

Willy

------------

Dr. Willy A. Hereman, Professor
Department of Mathematical and Computer Sciences
Colorado School of Mines
Golden, CO 80401-1887, U.S.A.

**************************************

The article discussed here is Seismos: To peak or not to peak (Liner, 2008, The Leading Edge, 27, p. 610); the figure in question is Fig. 1 of that column, showing q(t) in panel (a) and its derivative q'(t) in panel (b).
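Out of curiosity, here is a quick numerical check of Willy's closed form (a minimal sketch with arbitrary parameter values): evaluate the tanh expression and its derivative, and verify that they satisfy Eq. (1) to machine precision.

    import numpy as np

    a1, a2, delta = 0.3, 100.0, -3.0    # arbitrary growth rate, carrying capacity, shift
    t = np.linspace(0.0, 40.0, 4001)

    q  = 0.5 * a2 * (1.0 + np.tanh(0.5 * a1 * t + delta))      # proposed solution
    dq = 0.25 * a1 * a2 / np.cosh(0.5 * a1 * t + delta)**2     # its derivative
    rhs = a1 * q - (a1 / a2) * q**2                            # right-hand side of Eq. (1)

    print("max |q'(t) - rhs| =", np.max(np.abs(dq - rhs)))     # essentially zero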


Nonlinear waves

A bit late for this post, but I wanted to get it up here anyway to acknowledge the kindness of the sender.  The original email date was 10/22/2008.

*********************************************************************
Dear Chris,

I hope this e-mail is finding you well.

I read with interest your TLE column this month on harmonics.

I find it to be a fascinating topic.

Note that not all sources of non-linearity are due to imbalances in up and down strokes.

In water for example, non-linearity can come from change of velocity with pressure.

The higher the pressure, the higher the velocity; so when you send a sine function through water (with enough energy to affect velocity) peaks travel faster than troughs.

Thus the sine function gradually transforms into a see-saw function.

The Fourier transform of a see-saw function is a series of spikes (harmonics) with an amplitude following 1/n.

(Note that water also suffers from imbalance of up and down strokes: it is easier to push water than to pull it (a hard pull creates a vacuum in a phenomenon known as cavitation).)

Best regards,

Guillaume [Cambois]
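As a footnote to Guillaume's last point, the 1/n amplitudes follow from the Fourier series of an ideal sawtooth (see-saw) wave of amplitude A and period T:

    x(t) = \frac{2A}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin\!\left(\frac{2\pi n t}{T}\right)

Every harmonic of the fundamental is present, with amplitude falling off as 1/n.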