Saturday, December 5, 2009

Seismic goes multisource

[Note: A version of this blog entry will appear in World Oil (Jan, 2010)]

The role of seismic data in hydrocarbon exploration is risk reduction: avoiding dry holes, marginal producers, and seriously misjudged reserves. The method had early and spectacular successes, beginning with the 1924 discovery of the Orchard Field in Ft. Bend County, Texas. Those early surveys relied exclusively on explosives as an energy source, with obvious safety and environmental drawbacks. Just as limiting is the fact that explosive sources give us very little control over the emitted waveform; basically our only knob to turn for higher or lower frequency is the charge size. The situation is analogous to modern marine airguns. An individual airgun emits a characteristic spectrum that varies with gun size. Like a bell choir, small guns emit higher frequencies and big guns emit lower frequencies. In marine seismic acquisition we can compose any spectrum we want by designing an appropriate airgun array, but to do this onshore with drilled shot holes is a very costly enterprise. To be sure, it was tested in the old days of land shooting when there were no other viable source options. There are some very interesting 1950s papers in GEOPHYSICS in which empirical equations were found relating charge size and depth to the dimensions of the resulting blast hole. It must have been a wild time for the researchers, since charges up to 1 million pounds were tested.

But even as those experiments were underway, the seismic world was changing. An explosive charge contains a broad band of frequencies that form a pulse of energy injected into the earth over a brief time span, perhaps 1/20th of a second (50 milliseconds). In fact, Fourier theory tells us that the only way to build such a short-duration pulse is by adding up a wide array of frequencies. William Doty and John Crawford of Conoco were issued a patent in 1954 describing a new kind of land seismic source that did not involve explosives. The new technology, called vibroseis, involved a truck-mounted vibrating baseplate. Vibroseis applies the Fourier concept literally, operating one frequency at a time and stepping through the desired frequency range in a matter of 10-14 s. Think for a moment about just one aspect of this advance, namely seismic power. With an explosive source, the only way to put more power in the ground is to use a bigger charge. If one big charge is used, we have to live with lower frequency; if an array of charges is used, the cost skyrockets. With vibroseis there are many options for getting more power into the earth: a bigger vibe truck, multiple vibes, longer sweeps, or some combination of all three. In addition to customizing power, vibroseis also allows complete control over the source spectrum. Little wonder that vibroseis soon became the source of choice for land applications worldwide, particularly after the original patents expired in the early 1970s. Today, explosives are used only in places where a truck cannot go, or because of some business rationale like crew availability or personal preference. From a science point of view, vibroseis is a clear winner.
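To make the Fourier point concrete, here is a minimal Python sketch; the parameters are purely illustrative, not taken from any real survey or patent. It shows both halves of the idea: summing many frequencies builds a short pulse, and a linear sweep delivers the same band one frequency at a time.

```python
import numpy as np

dt = 0.001                     # sample interval, s (1 ms)
t = np.arange(0, 12, dt)       # a 12 s time window

# Fourier idea: summing many frequencies builds a short pulse.
# Adding cosines from 10 to 80 Hz concentrates energy near t = 0.
freqs = np.arange(10, 81)      # 10-80 Hz in 1 Hz steps (assumed band)
pulse = np.cos(2 * np.pi * np.outer(freqs, t)).sum(axis=0)
# 'pulse' is large only near t = 0: a broadband spike, like a shot.

# Vibroseis applies the same idea one frequency at a time:
# a linear sweep from f0 to f1 over sweep length T.
f0, f1, T = 10.0, 80.0, 12.0   # Hz, Hz, s (typical-looking values)
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))
# Correlating recorded data with this sweep compresses the 12 s of
# emitted energy back into a short, impulsive-looking wavelet.
```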

Over the last four decades, the land seismic industry has transitioned from 2D shooting with a few dozen channels to current 3D practice involving tens of thousands of channels. The higher channel count allows shooting with tighter trace spacing (bin size), better azimuth coverage, and higher fold. These advances have led to improved seismic data through better signal extraction, noise suppression, and improved suitability for processing algorithms. They have also led to astronomical growth in survey size, data volume, and acquisition time. A big shoot in 1970 took a few weeks; today it can take a year or more. This is not just a matter of money, although this new kind of data is very expensive; it is time that matters most. Large seismic surveys are now acquired on time scales equal to drilling several wells, and are even approaching lease terms. If it takes two years to shoot and one year to process a big 3D, then a three-year lease starts to look pretty short.

So what is the bottleneck? Why is it taking so long to shoot these surveys? The main answer goes back to a practice that was born with the earliest seismic experiments. The idea is to lay out the receiver spread, hook everything up, then head for the first shot point. With the source at this location, a subset of the receivers on the ground is lit up, waiting for a trigger signal telling them to record ground motion. The trigger comes, the source acts at that instant, the receivers listen for a while, and data flows back to the recording system along all those channels. The first shot is done. Now the source moves to shot location 2, the appropriate receiver subset is lit up, the source acts, and so on. So it was when Karcher shot the first seismic lines near the Belle Isle library in Oklahoma City back in the 1920s, and so it is with most land crews today.
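In code form, this one-source-at-a-time procedure is just a serial loop. The sketch below is a toy time-accounting model with invented numbers, not a description of any real crew's workflow:

```python
# Toy model of conventional single-source shooting (made-up numbers).
n_shots = 40_000          # shot points in a large 3D survey (assumed)
move_time = 20.0          # s for the source to reach the next point
sweep_time = 12.0         # s of vibroseis sweep per shot
listen_time = 6.0         # s of extra listening after the sweep ends

clock = 0.0
for shot in range(n_shots):
    clock += move_time    # source moves to the shot point
    clock += sweep_time   # the one active source sweeps
    clock += listen_time  # receivers keep recording the tail
    # only after data flows back does the next shot begin

print(f"continuous source effort: {clock / 86400:.0f} days")
# ~18 days of nonstop effort; with realistic crew hours, weather,
# and downtime, this stretches into months or years.
```

Every second of source effort sits on the critical path, which is exactly the cost that simultaneous sourcing attacks.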

The key feature of this procedure is that only one source acts at a time. Over the years, there has been great progress in the efficiency of this basic model. One popular version (ping-pong shooting) has two or more sources ready to go, triggering sequentially at the earliest possible moment that just barely avoids overlap of the earth response from one source to the next. There are many other clever methods, but in any form this is single-source technology. It carries a fundamental time cost because, for good reason, no two sources are ever active at the same time.

If two sources are active at once we will see overlapping data, similar to the confusion you would experience with a different conversation coming in each ear. Interference from a nearby seismic crew was observed and analyzed in the Gulf of Mexico in the 1980s, leading to rules of cooperation among seismic contractors to minimize the interference effect. Things stood pretty much right there until a recent resurgence of interest in overlapping sources, now termed simultaneous source technology (SST). Both land and marine shooting are amenable to SST, but we will explain the concept in relation to land data.

The promise of simultaneous source technology is clear and compelling. If we can somehow use two sources simultaneously, then the acquisition time for a big survey is effectively cut in half. Of course it is not quite that simple; some aspects of the survey time budget, such as mobilization and demobilization and equipment laydown and pickup, are unchanged. But source time is a big part of the budget, and SST directly reduces it. Field tests using four or more simultaneous sources have been successfully carried out in international operations, and the time is ripe for SST to come onshore in the US.
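A back-of-the-envelope sketch makes the point; the numbers below are invented for illustration, and only the source-related time divides by the number of simultaneous sources:

```python
# How simultaneous sources cut the time budget (toy numbers).
def survey_days(n_sources,
                source_days=300.0,   # serial source effort, one source
                fixed_days=60.0):    # mob/demob, laydown, pickup, ...
    # Fixed costs are untouched; only source time divides down.
    return fixed_days + source_days / n_sources

for n in (1, 2, 4, 12):
    print(n, "sources:", survey_days(n), "days")
# 1 -> 360, 2 -> 210, 4 -> 135, 12 -> 85. Note that two sources give
# 210 days, not 180: the halving claim is approximate because the
# fixed part of the budget does not shrink.
```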

As if we required another reason to prefer vibroseis over explosives, its high-tech aspects lead to elegant and accurate methods of simultaneous shooting. The details need not concern us, but theory has been developed and field tests done that show various ways of shooting vibroseis SST data. The nature of the universe has not changed: when multiple sources overlap in time, the wave field we measure is a combination of the earth response to each source. But with some high-powered science, each response can be separated out almost entirely, just as we manage to isolate one conversation in a loud room from all the others. The data corresponding to each simultaneous source is pulled out in turn to make a shot record, as if that source had acted alone. In about the time it took to make one shot record the old way, we have many shot records. A survey that would take a couple of years with a single source could be done in a couple of months by using, say, 12 simultaneous sources. An amazing case of "If you can't fix it, feature it."
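For the curious, here is a stripped-down cartoon of the separation idea, not any contractor's algorithm: one vibe runs an upsweep while a second runs a downsweep, and correlating the composite record with each pilot sweep pulls that source's response out while smearing the other into low-level background. Real SST separation is far more sophisticated, but the principle is the same.

```python
import numpy as np

dt = 0.002
t = np.arange(0, 12, dt)                       # 12 s sweep
f0, f1, T = 8.0, 64.0, t[-1]                   # assumed sweep band
up = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))
down = up[::-1]                                # time-reversed: downsweep

# Two synthetic earth responses: a dozen random reflectors each.
rng = np.random.default_rng(0)
nearth = 2000                                  # 4 s of earth response
rA = np.zeros(nearth); rA[rng.choice(nearth, 12)] = rng.normal(size=12)
rB = np.zeros(nearth); rB[rng.choice(nearth, 12)] = rng.normal(size=12)

# Both vibes sweep at once; a geophone records the summed response.
trace = np.convolve(rA, up) + np.convolve(rB, down)

# Correlating with each pilot compresses "its" sweep to a tight
# wavelet at that source's reflection times and spreads the other
# source out as low-level noise.
lag0 = len(up) - 1                             # zero-lag index ('full')
recA = np.correlate(trace, up, 'full')[lag0:lag0 + nearth]
recB = np.correlate(trace, down, 'full')[lag0:lag0 + nearth]
```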

For many years we have been shooting 3D land seismic data to fit the pocketbook, making compromises at every turn. Physics tells us what should be done, but we do what we can afford. Bin sizes are too large, fold is too low, only vertical-component sensors are used, and azimuth and offset distributions are far from ideal. When physics and finance collide, finance wins.

But now the game is changing. With simultaneous sources, the potential is there to make a quantum leap in seismic acquisition efficiency, driving both time and dollar costs down. The seismic data we really need for better imaging and characterization of tough onshore problems may actually become affordable.