Tuesday, November 30, 2010

End of the Rainbow

[Note: A version of this blog entry appears in World Oil (December, 2010)]

Over the last 12 months this column has covered topics from shale gas to seismic migration. When it comes to seismic data used for hydrocarbon exploration and production, the pot of gold, so to speak, is the Reflection Coefficient (RC) arising from layer boundaries deep in the earth. The RC indirectly contains information about rock and fluid properties. I say indirectly, because the RC mathematically just depends on velocity and density, but these in turn depend on porosity, pressure, oil saturation, and other reservoir parameters. One key parameter the RC does not even claim to supply is permeability, although it is sometimes estimated using an elaborate workflow involving well logs, core, and seismic (See this blog entry). Another, more direct, seismic permeability estimator is receiving attention these days and that is what I would like to talk about.

We won't be writing down any equations, but reflection coefficients are mathematical in origin. The simplest case involves a seismic wave traveling perpendicular to a layer boundary in an earth characterized by velocity and density. When this wave hits the boundary it splits into reflected and transmitted parts. The size of each is determined by two Boundary Conditions (BC) at the interface: continuity of pressure and continuity of particle motion. The two BCs lead to two equations in two unknowns, the reflection and transmission coefficients. Because it is based on a simple model of the earth where each point is described by P-wave speed and density, the RC found in this way depends only on these parameters. Importantly, the RC in this case is just a number, like 0.12, and it is the same number whether a 10 Hz wave is involved or a 100 Hz one.
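Equations aside, the arithmetic is tiny. Here is a minimal sketch in Python (the velocity and density values are invented for illustration):

def reflection_coefficient(v1, rho1, v2, rho2):
    """Classical normal-incidence RC from the acoustic impedance contrast."""
    z1, z2 = v1 * rho1, v2 * rho2      # acoustic impedances
    return (z2 - z1) / (z2 + z1)

# Shale over gas sand, say (velocities in m/s, densities in g/cc)
rc = reflection_coefficient(2700.0, 2.40, 2300.0, 2.10)
print(round(rc, 3))                    # one number (about -0.146), the same at 10 Hz or 100 Hz

Note that frequency never enters: the classical RC is flat across the seismic band.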

Now imagine a more complicated view of rock involving such things as porosity, mineralogy, permeability, and pore fluid properties (modulus, density, saturations, viscosity). A theory of waves traveling through such a porous, fluid-saturated rock was developed in the 1950s by Maurice Biot. It is the foundation of poro-elasticity theory and the subject of hundreds of papers making readers worldwide thankful he had a short name. Some odd predictions came from the Biot theory, in particular a new kind of P-wave. The usual kind of P-wave (called fast P) is a disturbance traveling through the mineral frame of the rock, but influenced by the pore fluid. The new wave (slow P) is a sound wave in the pore fluid, but influenced by the rock frame. In particular, the slow wave has to twist and turn through the pore space, compressing and decompressing fluid, and thus has a natural connection to permeability. Before long the slow wave was seen in the lab and the theory was set on firm experimental footing.

By the 1960s, researchers figured out how to calculate the reflection coefficient for an interface separating two Biot layers. Since there are three waves in each layer (fast P, slow P, and S) there are 3 reflected and 3 transmitted wave types, meaning we need 6 boundary conditions to solve everything. I won't rattle them off, but there are indeed 6 BCs, and the reflection coefficient was duly found, although it is enormously complicated. It would take several pages of small-type equations to write it down. As you can imagine, it took a while for people to understand the Biot Reflection Coefficient (BRC), a process that is by no means complete.

One tantalizing feature of the BRC is its dependence on permeability and pore fluid viscosity. This holds the hope of mapping things of direct use by reservoir engineers, and doing it without punching a hole in the ground. But things are not as easy as that. These important properties are competing with porosity, mineralogy, and other rock properties to influence the BRC. If the BRC were just a number, like the classic RC, then there would be little hope of unraveling all this.

But the BRC is not just a number, it is dispersive (a function of frequency). This means that a low frequency wave will see a different BRC than a high frequency one. It may not seem like this is much help, but there has luckily been a decade or two of research and development on something called Spectral Decomposition (SD). Like white sunlight bent and split by water droplets to form a rainbow of colors, SD pulls apart a broadband seismic trace into its constituent frequencies. This fancy trick has led to a universe of seismic attributes revealing ever more geological detail in 3D seismic data.
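As a toy illustration of the idea, a short-time Fourier transform will pull apart a synthetic trace built from a 20 Hz burst and a 60 Hz burst (everything below is invented for the example; only NumPy and SciPy are assumed):

import numpy as np
from scipy.signal import stft

dt = 0.002                                   # 2 ms sampling
t = np.arange(0, 2.0, dt)
trace = (np.exp(-((t - 0.6) / 0.03) ** 2) * np.cos(2 * np.pi * 20 * (t - 0.6)) +
         np.exp(-((t - 1.2) / 0.03) ** 2) * np.cos(2 * np.pi * 60 * (t - 1.2)))

# Spectral decomposition: split the broadband trace into its constituent frequencies
f, tau, S = stft(trace, fs=1.0 / dt, nperseg=128)

i20, i60 = np.argmin(abs(f - 20.0)), np.argmin(abs(f - 60.0))
print(tau[np.argmax(abs(S[i20]))], tau[np.argmax(abs(S[i60]))])
# the 20 Hz slice lights up near 0.6 s, the 60 Hz slice near 1.2 s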

One result of SD applied around the world is a growing realization that seismic data is always a strong function of frequency. We shoot seismic data with a bandwidth of about 10-100 Hz, but looking at, say, the 20 Hz part we see quite a different picture than 30 Hz, or 40 Hz. The main reason for this is a complex interference pattern set up by classical reflection coefficients in the earth. But researchers and companies are also thinking about mining this behavior for the frequency-dependent Biot reflection coefficient.

The BRC is naturally suited to high-porosity conventional sandstone reservoirs. But shale also has some very interesting properties that may be illuminated by the BRC. We now understand that a vast spectrum of rock type goes by the name of 'shale'. These rocks tend to have low (but variable) permeability, and anomalous attenuation affected by fluid viscosity that is dramatically different for gas, condensate, and oil.

There is much work to do in following this rainbow, but unraveling the many competing effects is a next logical step in seismic reservoir characterization. Stay tuned for Biot Attributes.

A fond farewell... With this column my year as a World Oil columnist comes to an end. Other duties call, including a book project titled A Practical Guide to Seismic Dispersion, requiring my full attention for the next few months. I have the deepest appreciation for the WO editorial staff who gave me this exceptional opportunity, and to the many readers who wrote with their thoughts on the column. You can keep up with me, as always, through my Seismos blog. Adios.

Sunday, November 7, 2010

Snap, Crackle, Pop

[Note: A version of this blog entry appears in World Oil (November, 2010)]

In an earlier column about shale gas (May, 2010) I mentioned microseismic data, but had no room there to develop the subject. Here is our chance.

Recall that conventional seismic is the result of generating waves with an active source, such as vibroseis or explosives, at the earth's surface. The waves bounce around in the subsurface and those that return are measured by surface sensors. This is the nature of 3D seismic, a multibillion dollar industry with nearly a century of development behind it. This kind of seismic data is processed using migration to create a subsurface image, as discussed in earlier columns.

Microseismic (MS) data is fundamentally different in three ways. First, the seismic source is a break or fracture in the rock, deep beneath the surface. In response to reservoir operations like fluid production, injection, or a frac job, stresses change and rocks crack, groan, and pop like the rigging of an old schooner. Unlike standard controlled source seismology, in MS each event is a small seismic source with an unknown (x,y,z) location and an unknown source time.

Second, microearthquakes are very weak sources with Richter magnitudes of 0 to -4. To put this in perspective, it would take about 8000 of those little -4 events to match the energy in an ordinary firecracker. Seismic waves from such a source are generally too weak to register at the earth's surface due to scattering and absorption by weathered near-surface layers. Consequently, MS data are best recorded by downhole sensors below the weathering zone. This means that, unlike conventional seismic data, MS data acquisition requires monitoring boreholes. Furthermore, with surface seismic data we tell the source when to act and can begin recording data at that time. Since MS sources can act anytime, we need to listen continuously.
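The firecracker comparison is easy to check with the standard Gutenberg-Richter magnitude-energy relation (the firecracker energy of a few hundred joules is my assumption):

def quake_energy_joules(magnitude):
    # Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8, with E in joules
    return 10 ** (1.5 * magnitude + 4.8)

e_minus4 = quake_energy_joules(-4.0)       # about 0.06 J for a magnitude -4 event
firecracker = 500.0                        # assumed firecracker energy, joules
print(firecracker / e_minus4)              # roughly 8000 events per firecracker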

Third, what it means to process MS data and the kind of product created are essentially different from conventional seismic imaging. MS is fundamentally and unavoidably elastic, meaning we must deal with both primary and shear (P and S) waves generated by each MS source. A method of detecting P and S arrivals is needed and must be coordinated among all sensors to ensure that picked events are correctly associated with a common MS event. Assuming all of that is done correctly, we are left with a triangulation problem to locate the MS event in 3D space. While conventional seismic creates a 3D image of the subsurface, microseismic generates a cloud of subsurface points.
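A bare-bones sketch of that triangulation step, assuming constant velocities and using the S-minus-P time at each sensor to fix its distance from the event (sensor coordinates and picks are made up; SciPy does the least squares):

import numpy as np
from scipy.optimize import least_squares

vp, vs = 4000.0, 2300.0                                   # assumed constant velocities, m/s

# Hypothetical downhole sensors (x, y, z in meters) in two monitoring wells
sensors = np.array([[0, 0, 2000], [0, 0, 2200], [300, 400, 2100], [500, -100, 2050]], float)

# Synthetic S-minus-P picks for a known test event (real picks come from the detector)
true_event = np.array([400.0, 150.0, 2300.0])
ts_minus_tp = np.linalg.norm(sensors - true_event, axis=1) * (1.0 / vs - 1.0 / vp)

def residuals(xyz):
    # S-P time fixes the distance to each sensor: d = (tS - tP) * vp * vs / (vp - vs),
    # which conveniently eliminates the unknown origin time of the event
    d_from_picks = ts_minus_tp * vp * vs / (vp - vs)
    return np.linalg.norm(sensors - xyz, axis=1) - d_from_picks

sol = least_squares(residuals, x0=np.array([200.0, 0.0, 2150.0]))
print(sol.x)                                              # close to the true (400, 150, 2300)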

There is more. From a long history of global earthquake science, we know that when rock ruptures, seismic waves are generated with different strength in various take-off directions away from the source. In other words, each MS event has not only a time and location, but also a radiation pattern. The radiation pattern is packed with information; it can be inverted to determine an equivalent set of forces that would create the same pattern, and this in turn can be used to determine the nature and orientation of the slippage that caused the MS event. Fractures that open horizontally can be discriminated from vertical ones, N-S trending fractures from those oriented E-W, etc. This is a vast amount of important information.

The heaviest use of MS data to date is frac job monitoring in shale gas plays. Although earlier MS work was done, some of it looking at mapping flow pathways related to conventional hydrocarbon production, shale gas has cranked up the effort by orders of magnitude. Service companies are forming MS divisions, small and nimble new service companies are springing up, and academic research efforts are underway.

So what is this vital information that MS tells us about frac jobs? Certainly we are not interested in the minutiae of a single event. It is the pattern of vast numbers of events that must matter. Shale is so tight that virtually no gas is produced from unfractured rock. Without MS technology it is simply assumed that the frac job has modified the desired volume of reservoir rock. With MS we can plot event locations and associate them with stages of the frac job to confirm the affected rock volume. This feedback, well after well, allows an operator to change practices and procedures to optimize frac coverage and thus maximize gas yield. The best operators and service companies are on a steep learning curve that is resulting in estimated ultimate recoveries increasing dramatically in just the last 2-3 years.

As good as this is, we are still left with an uneasy feeling, like the old joke about a man looking for his keys under a street lamp because that’s where the light is best. Frac job monitoring only happens where the well is drilled. But we also want to know if we are drilling in the best location, in the mythical Sweet Spot. Here, too, MS has the potential to help.

Over the last two decades the seismic industry has made tremendous advances in data quality through acquisition and processing research. In parallel, the entire field of 3D seismic attributes has developed until they number in the hundreds: coherence, curvature, variance, spice, and too many more to name. Many of these afford extraordinary views of the subsurface, including long linear features that appear to be fracture fairways and trends. But as anyone who interprets satellite imagery will tell you, lineaments (as they call them) are darn near everywhere and you only know what they mean by ground checking.

Where can we find ground truth about the many conflicting fracture indicators we get from 3D seismic? Consider a 3D seismic volume in which a horizontal shale well is drilled. The well is fractured and MS data acquired. Frac jobs tend to open up the rock first along pre-existing zones of weakness like natural fractures. By integrating MS data into the seismic volume, we can explore the universe of 3D attributes looking for a connection. Is there an attribute, or combination of attributes, that can highlight the natural fracture trends indicated by the frac job? If so, we have something new in the world: A validated fracture-mapping tool based on 3D seismic.

The game’s afoot.

Monday, October 25, 2010

WAZ up?

[Note: A version of this blog entry appears in World Oil (October, 2010)]

Anyone keeping an eye on seismic advertisements in trade publications must have noticed the proliferation of azimuth (AZ). We are bombarded with WAZ (wide azimuth), NAZ (narrow azimuth), FAZ (full azimuth), RAZ (rich azimuth), etc. What is going on here? Why the blitz of promotion for AZ?

First, we need to understand that a prestack seismic trace is like a child: it has two parents, a source and a receiver. Each parent has a physical location and the trace is considered to live at exactly the half-way point between source and receiver. This location is called the midpoint, so each trace is said to have a midpoint coordinate. But like a child, the trace inherits other characteristics that are a blend of the parents. Looking around from the trace location, we see there is a certain distance between parents, a quantity termed the offset. Furthermore, there is an imaginary line from source to midpoint to receiver whose orientation we call azimuth. This is the bearing that you would read off a compass; 0 is North, 90 is East, and so on around to 360 which is, again, North.
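A quick sketch of the inheritance, assuming x is Easting and y is Northing (the coordinates below are made up):

import math

def trace_geometry(sx, sy, rx, ry):
    """Midpoint, offset, and azimuth (degrees clockwise from North) for one prestack trace."""
    mx, my = (sx + rx) / 2.0, (sy + ry) / 2.0
    offset = math.hypot(rx - sx, ry - sy)
    azimuth = math.degrees(math.atan2(rx - sx, ry - sy)) % 360.0
    return (mx, my), offset, azimuth

# Source at the origin, receiver 3 km due East
print(trace_geometry(0.0, 0.0, 3000.0, 0.0))    # midpoint (1500, 0), offset 3000 m, azimuth 90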

We said earlier that a prestack seismic trace lives at the midpoint between source and receiver. This is true until we migrate the data. Migration is a process that maps observed seismic data into the earth. For example, imagine the source and receiver are very close together and we observe nothing on our trace except a reflection at 1 second.

How are we to make sense of this? Let's say the earth velocity is 2000 m/s. That means the wave went out, reflected, and came back, all in 1 second. Clearly the outward time is half of this, 1/2 second, traveling at 2000 m/s. So the reflecting object must be 1000 m away from the source/receiver/midpoint (S/R/M) location.

We know the object is 1000 m away, but in what direction? Ah, there is the rub. We do not, and cannot, know which way to look for the reflection point when we only have one prestack trace. But, strangely, that is no problem. Back in the 1940s some smart guys figured it out, and some other smart guys coded it up in the 1970s. The trick is this: Since we don't know where the reflection comes from, we put it everywhere it could possibly have come from.

In our example, we know the reflector is 1000 m away from one point (where S/R/M all live). In other words, it lies somewhere on a bowl (or hemisphere) centered on that location. Let's be a little more specific. Underneath the data we have built an empty 3D grid that we call 'image space'. We grab the trace, note its midpoint and reflection time, then take the observed reflection amplitude and spread it out evenly over the bowl. If we have only one trace with one event, that would be the migration result shipped to the client: a bowl-shaped feature embedded in a vast array of zeros. Good luck getting payment on that.
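Here is that idea reduced to a minimal sketch: one zero-offset trace, one event, constant velocity, and the amplitude smeared over its bowl in an image grid (grid size and numbers are invented; a real migration also applies obliquity and spreading weights):

import numpy as np

v = 2000.0                    # constant velocity, m/s
t_event = 1.0                 # two-way time of the lone reflection, s
amp = 1.0                     # its amplitude
radius = v * t_event / 2.0    # 1000 m from the S/R/M point

dx = 25.0                                         # image grid spacing, m
x = np.arange(-1500, 1500 + dx, dx)
y = np.arange(-1500, 1500 + dx, dx)
z = np.arange(0, 1500 + dx, dx)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")     # empty 'image space' under the midpoint

image = np.zeros_like(X)
dist = np.sqrt(X**2 + Y**2 + Z**2)                # distance from the S/R/M location
bowl = np.abs(dist - radius) < dx / 2.0           # grid points lying on the hemisphere
image[bowl] += amp / bowl.sum()                   # spread the amplitude evenly over the bowl

print(bowl.sum(), image.max())                    # one faint bowl in a vast array of zeros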

Those who know something about geology will immediately complain that the earth does not contain bowl-shaped objects. True. But remember, this is the result of migrating only one trace and we actually have many millions of traces in a modest 3D survey. All of these are migrated to generate a collection of closely spaced bowls in the subsurface and, when we add them up, something remarkable happens. The bowls tend to cancel in those places where nothing exists in the earth. But where the reflections actually originated the bowls constructively interfere to generate an image of the subsurface. Furthermore, this process can build all the interesting geology we want, including faults, channels, synclines, anticlines, and so on. Quite amazing actually.

Where does azimuth come in? So far we have considered the prestack seismic trace to have S/R/M all at the same location. In other words, there is no offset and no azimuth. When the source and receiver separate they do so along an azimuth, which the seismic trace inherits. Migration now involves the same kind of process described earlier, except the primitive migration shape is now a stretched bowl with the long axis along the S/R azimuth (technically, an ellipsoid).

Before all the excitement and activity about WAZ, marine surveys were shot with one, or a few, cables towed behind a ship steaming in, say, E-W lines. This is a narrow azimuth survey, since all the traces have about the same orientation and those millions of migration bowls are lined up E-W along the acquisition azimuth. Not surprisingly, when all the bowls are summed in the subsurface image, we get a good view of geology with an E-W dip direction. Geology oriented any other way is blurred, smeared, or just not visible.

Strange things can appear in such a data set, taxing the interpretation ability of even the most experienced geologist. Faults are a good case in point. Consider our narrow azimuth E-W survey shot in an area with faults oriented parallel (E-W) and perpendicular (N-S) to the shooting direction. The explanation is a bit technical, but the bottom line is that N-S faults will be beautifully imaged, while those running E-W will be poorly seen, if at all.

With this kind of narrow azimuth data, the industry spent decades developing ever faster computers and better migration algorithms, only to see the data improvements get smaller and smaller. But then overnight, it seemed, the introduction of wide azimuth shooting brought a quantum leap in image quality. Of course, WAZ came along late and unleashed all that pent-up computing and algorithm power, making the improvement seem all the more remarkable.

Almost from the first days of petroleum seismology, practitioners knew that to unravel 3D dip required shooting in different directions. This lesson was always heeded in the development of land 3D. But offshore operational difficulties, and related costs, pushed the full azimuth goal aside. Then subsalt prospecting introduced a new class of imaging problem and narrow azimuth data was just not good enough. WAZ is here to stay.

Tuesday, October 12, 2010

SEG Governance Changes

As some of you SEG members may know, there is a move afoot to change the governance structure of the organization. The details are not hard to find: there are SEG emails, newsletters, the latest issue of The Leading Edge and, of course, material online. Each member will have to consider the proposals and decide according to his/her own conscience.

In my opinion, there is great danger in the proposed structure. The excom currently is a president and six others, all elected at large by the entire SEG membership. The new rule would be an excom of 18, an executive group much bigger than those of vastly larger organizations such as Apple, Walmart, or the IEEE. Furthermore, certain seats are to be reserved on the excom for regions. Currently, the awards nominating committee is a small group of distinguished members with the charge to identify the best of the best in our society for special recognition, but this is slated to change. The new rule will require something like proportional representation of awards to mirror the composition of the society.

Both of these smack of political correctness to me, and a dispersal of influence and power to the corners of the earth from where it currently resides in North America. Ours is a mixed industrial and academic society and the industrial center of gravity, like it or not, is Houston. If the SEG spreads too thin, too far, too fast, I suspect there could be a rift in the society. If Texas spawned its own society in competition with SEG, it would immediately rival SEG in power and resources, even at only 10% of the size.

When the SEG broke away from AAPG in the 1930's there was good reason. The culture and goals of the two groups did not align. The men, for they were men, who set up the structure of our society did it with full knowledge of what they were doing. Power is distributed with checks and balances. It is not perfect, but it is more than adequate to serve the society and perhaps save it from a debilitating pulse of globalism for the wrong reasons.

Although I now live in Houston, I am not a Texan by birth or nature. And while I am an American I harbor no American agenda. The fact is that no one set out to make North America the power center of the SEG; it developed naturally from the energy, innovation, and passion of the geophysical industries born there. When this rare combination of enterprise, efficiency, and hard work springs up taller and stronger elsewhere in the world, I suspect the SEG power center will move with it, easily adapting within its original governance framework.

The engine of SEG governance may need a tune up, but not an overhaul.

Monday, September 13, 2010

Kinds of Migration

[Note: A version of this blog entry appears in World Oil (September, 2010)]

In an earlier column (The Age of Migration) we discussed the history of seismic migration. Here we consider a related question: “What are the different kinds of migration, and which one should I choose?”

Migration is an important and expensive process applied to reflection seismic data before interpretation. It is the last major process to hit the data and likely to be blamed for everything from low resolution to inconsistent amplitudes, even though these problems may arise from acquisition or earlier processing steps.

To effectively discuss imaging it helps to know about kinds of migration. It is a waste of time and money to re-migrate data because an inappropriate migration technique was recommended or requested. Here we develop a classification scheme to bring order out of apparent chaos and get everyone talking the same language.

Dimensionality is the first consideration: Migration is either 2D or 3D.

  • 2D: Appropriate only for a pure dip-line. Even then, out-of-plane energy remains that can blur the data.
  • 3D: The right thing to do, but needs data acquired in 3D. A close grid of 2D lines can be merged into a 3D volume and migrated, but this is not really 3D data even if you have 100 square miles of the stuff. What makes data 3D is rich azimuth content; a 2D line has only one azimuth (the compass direction along the line).

Next we have to think about the form of the data being input to the migration: Poststack or prestack.

  • Poststack: Migration of the stack data volume, i.e., one trace per bin for 3D data. This is much less expensive than prestack migration, but also less accurate in structurally complex areas.
  • Prestack: Migration of the prestack data volume containing many traces per bin. Every blip of amplitude on every prestack trace is processed, requiring huge computational effort. Much more expensive than poststack migration.

Another consideration is the handling of lateral velocity variation. Across faults, salt boundaries, and steep dips, velocity can change dramatically over a short distance bending seismic rays. For small velocity contrast (e.g., Gulf of Mexico above salt), we can use shortcuts that save processing time and money. From the viewpoint of physics: Migration can be time or depth.

  • Time: Time migration gives the correct treatment of constant and depth-dependent velocity. It can be adapted to handle mild lateral velocity changes, but is inferior to depth migration when those variations are strong.
  • Depth: Correct physics treatment of strong lateral velocity variations; rays and wavefronts are accurately bent through the velocity field. Tends to be much more expensive than time migration.

Note the migration terms ‘time’ and ‘depth’ are unrelated to whether the output data have a time or depth axis. Any migration can be delivered in time or depth. In current practice it is common to interpret the migrated data in time and depth-convert particular horizons using sonic logs and vertical seismic profiles. This makes depth conversion part of the interpretation process and depths can be guaranteed to match log tops at well locations. In areas of strong velocity variation, depth migration is used and output directly in depth. However, migration depth accuracy is limited and the depth section often needs final adjustment based on well data.

In the jargon of migration, a given method can be classified by concatenating the terms given above. For example, we can say 2D poststack time migration or 3D prestack depth migration.
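In other words, the name of a migration is just one pick from each of the three lists above; a few lines of Python enumerate the whole menu:

from itertools import product

for dim, form, physics in product(["2D", "3D"], ["poststack", "prestack"], ["time", "depth"]):
    print(f"{dim} {form} {physics} migration")    # eight combinations in all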

It is important to realize that data are never migrated just once. Migration velocity analysis involves iterating the migration many times. There are clever ways of avoiding repeated migration of the entire data volume, but iterating even part of a large survey can add up. More input traces mean more cost. For a given number of input traces, depth migration will be more expensive and (we hope) more accurate than time migration.

When should we request depth migration and when will time migration suffice? What about prestack and poststack? The controlling factors are dimensionality of the data, structural complexity and velocity variation. The migration should be 2D or 3D depending on dimensionality of the data. However, to hold down costs, selected 2D lines are often extracted from the 3D data for detailed prestack depth migration. These 2D lines should be extracted in the dip direction, if one exists, to minimize out-of-plane effects. If a migration with too many shortcuts is used on properly acquired 3D data, the clue will be lack of geological sense in the final image volume. On the other hand, you can always overkill a job with prestack depth migration and incur unnecessary costs.

Note we have classified migration at a rather abstract level; there is nothing here about individual algorithms. Something like 3D prestack depth migration can be accomplished by many methods: Kirchhoff, beam, wave equation, reverse time, etc. Choosing an optimum method is the realm of the seismic imaging specialist.

At first sight, prestack time migration seems curiously unwise. Much extra expense is incurred by working with prestack data, but no improved image can be expected because only time migration physics is going into the algorithm. Prestack time migration’s main role in the world is to prepare data for prestack interpretation, primarily Amplitude Versus Offset (AVO) analysis. We want to migrate before AVO work to improve lateral resolution, but not spend big money on depth migration and related velocity analysis...plus we have more faith in time migration amplitude behavior.

In summary, two things make migration expensive: More data or more physics, or both. More data comes from 3D versus 2D and prestack versus poststack. The level of physics is implied by the terms ‘time migration’ (less physics) and ‘depth migration’ (more physics).

This column does not prepare you to write, or even use, complex migration software. But you might be able to deal with contractors, processors, and contribute to asset team discussions. By way of analogy, we are not out to build a car, just be a savvy, knowledgeable buyer who can kick the tires, peek under the hood, and make informed decisions.

Tuesday, August 3, 2010

Alpha flow

Remember the great oil spill of 2010?

The BP well blew out on April 20 and live video feeds were soon available online. After viewing the video, I sent out an email to friends on May 13 estimating the flow at 68 000 bbl/day.

The basis of the calculation was a 9-inch pipe and a visual estimate of the oil flow rate of 10 ft/s. The calculation (linked below) was done in Wolfram|Alpha. It is a good example of using WA to do tricky conversions between units.
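For the record, the same arithmetic is a few lines of Python (the 9-inch diameter and 10 ft/s are the visual estimates; the rest is unit conversion):

import math

diameter_m = 9 * 0.0254                        # 9 inches in meters
speed_ms = 10 * 0.3048                         # 10 ft/s in m/s
area = math.pi * (diameter_m / 2) ** 2         # pipe cross-section, m^2

q_m3_per_day = area * speed_ms * 86400
q_bbl_per_day = q_m3_per_day / 0.158987        # 1 barrel = 0.158987 m^3
print(round(q_bbl_per_day, -3))                # about 68,000 bbl/day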

The official estimate came out yesterday, 2 months and 20 days later.

The long-awaited result? 62 000 bbl/day.

*************** Original Email ********************************

from Chris Liner
date Thu, May 13, 2010 at 1:05 PM
subject I'm no petroleum engineer...

... but the live feed oil spill video looks like a 9 inch pipe flowing at least 10 ft/sec. If you work up the numbers it comes out to about 68 000 barrels of fluid per day....

Wednesday, July 14, 2010

The Age of Migration

[Note: A version of this blog entry will appear in World Oil (July, 2010)]

Seismic data processing plays a key role in exploration. In the modern search for hydrocarbons, few wells are drilled without seismic data. The role of seismic is reduction of risk: risk of drilling dry holes, marginal producers, or getting reserve estimates seriously wrong. It seems appropriate to pause in 2010 and consider progress in seismic data processing over, say, the last 30 years. It is a big subject; the annual SEG meeting alone generates over 1100 expanded abstracts, the vast majority on seismic topics. We will touch here on a couple of first-order advances (migration and anisotropy) that have changed the way seismic work is done around the world every day.

First we should define seismic data processing. It is a vast and growing arsenal of computational techniques that attempt to remove wave propagation effects or noise in order to create an image of the subsurface. A popular free software package called SeismicUn*x, for example, consists of over 300 individual programs for the processing, manipulation, and display of seismic data. Importantly, seismic data processing does not include calculations like attributes, AVO, and inversion that mine the data for further information. These methods are important, but not properly part of our subject.

By 1982 the broad outlines of 2D prestack migration were taking firm shape. 2D may seem quaint today, but it was a massive strain on computer power. Furthermore, prestack migration was already known to be very sensitive to velocity errors, meaning not one, but many, passes of prestack migration were needed.

One approach to this difficulty was decoupling of prestack migration into separate steps. In this view, prestack migration was sequential application of normal moveout (NMO), a mystery process, common midpoint stacking, and finally poststack migration. This mystery process carried the burden of making the decoupled processing flow give precisely the same result as prestack migration. It came to be called dip moveout or DMO.

A working commercial implementation was available by 1978 and DMO was in general use by 1984. Over the next two decades, it was extended to include velocity variation (vertical and lateral), mode conversion, anisotropy, and 3D. Some migration purists see DMO as a trick; an annoying distraction of resources and effort that should have gone into the real problem of prestack migration. Perhaps. But DMO served a definite purpose in bridging the gap between 1980s computer power and the computational needs of prestack migration.

Seismic migration had a long history before this. Lateral positioning errors were understood almost as soon as people started shooting data. There was important work in the 1950s, and in the early 1970s a comprehensive view evolved of what migration was and how to do it. Then the great foundation papers came in 1978 – Kirchhoff, phase shift, and F-K migration – all in a single issue of GEOPHYSICS. Seismic migration was a mature science by 1985.

From the late 1980s through today, we have seen the age of prestack depth migration. The need for this technology arose from the failure of standard processing. The flow of NMO, DMO, common midpoint stack, and poststack migration gives a satisfactory image when the earth is simple. But exploration was pressing into deeper water, snooping around salt overhangs, testing subsalt rock formations, dealing with extreme topography, and trying to image overthrust areas. All these cases involve significant 3D lateral velocity variation. When things get tough enough everything is a migration problem. Under these conditions the decoupled processing flow simply fails to provide a geologically meaningful image and prestack depth migration is needed.

Standard processes are ultimately based on constant velocity physics or perhaps vertical variation. In a continuum of progress the migrators have incorporated more and more physics into the migration process – anisotropy, 3D, and strong lateral velocity variations. There has been remarkable theoretical progress in finding new ways to implement migration: various Fourier methods, finite difference (including reverse time), Gaussian beams, screen propagators, and the venerable time-space domain Kirchhoff migration.

Even today Kirchhoff depth migration is the dominant technique, perhaps because of its superior ability to accommodate arbitrary coordinates for each source and receiver. Increasingly during the late 1980's and early 1990's the burden of travel time computation was spun off from Kirchhoff migration and sequestered in ray tracers of ever increasing complexity. Radiation patterns, attenuation, various elastic wave phenomena, anisotropy, multi-pathing, and whatever else we think is important can ultimately be jammed into ray tracing. The migration program itself begins to look more and more like a database matching up coordinates, travel times, and amplitudes.

A bit more about anisotropy is in order. The tendency of seismic waves to have directional velocity was well known fifty years ago and the theory was worked out in detail fifty years before that. But anisotropy was profoundly ignored in seismic data processing until the late 1980's. Why? In standard land shooting we measure only the vertical component of motion, then do everything we can to suppress shear waves, and anisotropy is an elastic effect that was primarily studied in relation to shear waves. It seemed unlikely you could do anything to estimate it or remove its effects without measuring three-component data to observe the full elastic wave field. This all changed in 1986 when it was shown that anisotropy influences standard P-wave data. A major theme in migration since then has been inclusion of ever more complex anisotropy.

Once you look for anisotropy it is everywhere, due to shale, fractures, thin layering, regional stress, etc. The important thing is this: If the subsurface is significantly anisotropic and you do isotropic data processing the image is degraded – amplitudes are wrong, reflector segments are not at the correct depth or lateral position, fault terminations are smeared, and so on. The interpreter is compromised by the processors’ isotropic worldview.

The last 30 years deserve to be remembered as the foundation age of seismic migration. Progress will continue, but the foundation is only built once.

Thursday, June 10, 2010

Teaching Statement

Trite as it may sound, the goal of teaching is for students to learn. The courses I now teach are graduate classes and tailored to that audience. When it makes sense for a subject, I like to emphasize project data, teamwork, writing, and presentation skills. Below I discuss a couple of specific examples.

As an indication of quality, I was ranked 2nd for teaching in the 2010 internal faculty evaluation of the EAS Department, a large group (27) that includes three past winners of the college outstanding teacher award.

Not all teaching is in the classroom, of course. One-on-one work with graduate students is essential and I run two open meetings per week for students I advise. I also maintain a blog with helpful information and notices.

During my 15 years at The University of Tulsa I taught a wide array of courses. These included freshman-level Physical Geology and The Earth in Space (which I developed). Owing to the small faculty size, many courses were taught as mixed undergrad/grad. In this category were Petroleum Seismology, Advanced Petroleum Seismology, Advanced Global Seismology, and Environmental Geophysics. In addition, the courses Advanced Seismic Data Processing and Seismic Stratigraphy were pure graduate classes. The Environmental Geophysics course included field exercises using electrical conductivity and ground-penetrating radar equipment. Most of these graduate courses had fewer than eight students, while at The University of Houston there were 49 in my spring 2010 Geophysical Data Analysis (largest graduate course in the department). One has to be flexible and patient to deal with these extremes.

At the University of Houston I have been involved in the 2009 and 2011 Geophysical Field Camps held near Red Lodge, Montana. My field work at camp has involved 2D and 3D seismic surveys using single and multicomponent receivers and various sources. In the classroom at UH, I routinely teach the graduate classes 3D Seismic Interpretation I (synthetic seismograms, gridding, horizon tracking, faults, prospect generation) and Geophysical Data Processing (the science behind seismic data processing).

Seismic Interpretation I is a good example of how I teach. It would be possible to teach such a class in traditional lecture format, but that would leave the students not ready to do significant work themselves on, say, a thesis project. So I break the class into three major assignments. Each is a mini-workshop lasting 5-7 class periods, with a hard deadline, and involves working with real data in a commercial interpretation system. The hard deadline is intentional: it ramps up the sense of importance and simulates real-life deadlines for company projects, lease sales, and proposal submissions. The first assignment is not orally presented, but graded for slide style, clarity, logic, and completeness. Assignment 2 is more ambitious, on a shorter deadline, and with stricter grading of slide elements. This process can lead to some truly remarkable improvement in the mechanics of learning sophisticated software and preparing a presentation.

The last assignment is to generate a prospect using industry Gulf of Mexico data. Two class periods are used to generate three leads, one of which is chosen at random as 'the prospect'. This project requires fault mapping, structural interpretation, amplitude analysis, reserve estimates, and economics. In the fashion of the industry, prospects are named and will be promoted to an audience.

Once the prospect presentations are locked down (deadline again!), each student presents his/her prospect to the class in random order and completes an evaluation form on all other prospects. In a class of 30, this process takes five class periods. At the end of each, the class votes for Best Prospect of the Day. When all presentations are complete, the daily winners give a brief review and the class votes for Best Prospect. To make it fun, daily winners get a small surprise gift and the Best Prospect is awarded a trophy I buy at a local charity thrift store. The fall 2010 Best Prospect was Ambrosia and the trophy was a ceramic owl signifying wisdom. The last requirement of each student is to turn in one evaluation form I randomly specify. Not knowing which will be asked for, they must attend and stay engaged for all presentations.

This is a lot of detail, but the details make teaching effective or not. What does the student have at the end of this course? A respect for deadlines, the ability to jumpstart into complex software, knowledge of petroleum prospecting as well as structural, stratigraphic, and amplitude interpretation of modern 3D seismic data, and tangible evidence of this knowledge in the form of a presentation to show recruiters.

Some classes do not fit into the workshop format. My other recurring graduate class is Geophysical Data Analysis, a survey course on the physics and computing methods behind seismic data processing. The class opens with each student receiving a blank sheet of paper on which to write their name and a number called out as we head-count around the room. The papers are collected and I show a list of numbered topics (statics, fractures, velocity analysis, etc.). Thus every student is assigned a random subject, and I emphasize that if they have already studied the subject they will get a different one. There are three assignments related to the subject: 1) write an SEG 4-page abstract, 2) prepare a 15 min presentation, and 3) prepare a one-panel poster. For full credit the presented work must include a computational aspect done by the student. Hard deadlines are staged for each item in the order shown. The first 70% of the course is lecture format and the remainder is presentations (again in random order, with co-evaluation forms and a single random turn-in).

In this way the traditional material is taught along with the method and capability for quickly digging into a new subject (including bibliography, state of the field, recent progress), and students learn to present the new knowledge to a group of peers.

Another theme in my teaching is exposure to open source software. Research often uses powerful commercial software such as Matlab and Mathematica, as well as compiled languages. However, free open source packages like SeismicUnix, ImageJ, and MeVisLab are launchpads for building custom tools in an advanced development environment. I maintain an open source blog to help students get started.

In summary, I enjoy the variety and constant challenge of teaching, and my methods will continue to evolve.

------------------------- Selected Student Comments -------------------------


Research Statement

At the University of Houston, most of my students work on some form of seismic interpretation topic.  This can be fairly basic (horizon mapping in 3D seismic for CO2 sequestration) or very advanced (seismic simulation from flow simulator output, or deep amplitude anomalies in the Gulf of Mexico).

My published work includes a textbook, peer reviewed publications, dozens of meeting abstracts, and many general-audience articles (World Oil contributing editor for 2010). There are two central themes to my peer-reviewed body of work.

First is the broad field of wave propagation, including phenomena, numerical modeling, and inversion.  The second theme is digital data analysis including seismic data processing, multichannel time series analysis, and image processing.

My earliest work centered on a wave equation framework for prestack partial migration (also known as dip moveout, or DMO), a theory that formed important middle ground between poststack and prestack migration during the 1980s as computer power was ramping up to handle the full prestack imaging problem. Earlier researchers had established the kinematics, or travel time, aspects of DMO, but amplitude was not previously related to the physics of wave propagation.  A side benefit of my DMO work was an amplitude preserving relationship for inverse DMO that can be used for interpolation or regularization of prestack data.

In the theory of DMO amplitude, one comes across the idea of 2.5 dimensional wave propagation.  The concept is a wave in the coordinate system of a 2D seismic line, but having accurate 3D amplitude behavior. In the course of my investigations, I discovered a 2.5D wave equation with exactly these properties.  It turned out to be a form of the Klein-Gordon equation well known in quantum mechanics.  It is not often one finds an unpublished, fundamental wave equation in the field of classical wave theory.

Another discovery relates to Rayleigh waves. There were three ways of computing the wave speed for the constant-parameter isotropic case: Analytically solve the Rayleigh polynomial, numerically solve it, or use a rough approximation that Rayleigh wave speed is 92% of the shear wave speed. The first two are tricky because the polynomial has multiple roots that may contribute to the solution, while the 92% rule is not accurate. I was able to expand the Rayleigh polynomial about the 92% rule and derive an expression for the wave speed that is accurate across the entire range of parameters encountered in seismic exploration. The new expression is far better than the 92% rule and more straightforward than computing Rayleigh's polynomial.
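For context, the 'hard way' and the 92% rule are easy to compare numerically using the standard rationalized Rayleigh equation (the velocities below are illustrative, chosen at a high Poisson ratio where the 92% rule drifts):

import numpy as np
from scipy.optimize import brentq

def rayleigh_speed(vp, vs):
    """Solve the rationalized Rayleigh equation; the physical root has x = (c/vs)^2 in (0, 1)."""
    r2 = (vs / vp) ** 2
    f = lambda x: x**3 - 8 * x**2 + 8 * x * (3 - 2 * r2) - 16 * (1 - r2)
    return vs * np.sqrt(brentq(f, 1e-9, 1 - 1e-9))

vp, vs = 2000.0, 500.0                          # soft, water-saturated sediment, say
print(rayleigh_speed(vp, vs), 0.92 * vs)        # about 476 m/s versus the 92% rule's 460 m/s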

In 2002 I became interested in time-frequency methods and their application to seismic processing and analysis.  This is the topic of my SEG Distinguished Instructor Short Course (DISC) to be given worldwide in 2012, including a book in preparation with the working title of Seismic Dispersion.  Working in 2004 with PhD student Chun-Feng Li (now Tongji University), the continuous wavelet transform was used to generate a seismic attribute we termed SPICE (spectral imaging of correlative events). The idea is computation of a pointwise estimate of singularity strength (Hölder exponent) at every time level and every trace in a 2D or 3D migrated data volume.  The result is a remarkable view of geologic features in the data that are difficult or impossible to interpret otherwise. SPICE was patented by the University of Tulsa and commercialized by Fairfield Industries.

In a series of papers between 2006 and 2009 I worked with Saudi Aramco colleagues on layer-induced anisotropy, anisotropic prestack depth migration, and near-surface parameter estimation. In particular, I was able to show that modern full-wave sonic logs, which deliver both P-wave and S-wave velocities, are ideally suited to estimation of anisotropy parameters through a method originally published by Backus in 1962. The estimated anisotropy parameters were shown to improve prestack depth migration results.
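The Backus average itself is short enough to sketch here (equal layer thicknesses assumed, Thomsen epsilon and gamma only for brevity, and the 'log' is an invented two-lithology stack rather than real sonic data):

import numpy as np

def backus_vti(vp, vs, rho):
    """Backus (1962) average of thin isotropic layers -> Thomsen epsilon and gamma."""
    lam = rho * (vp**2 - 2 * vs**2)             # Lame parameters per layer
    mu = rho * vs**2
    c33 = 1.0 / np.mean(1.0 / (lam + 2 * mu))
    c11 = np.mean(4 * mu * (lam + mu) / (lam + 2 * mu)) + c33 * np.mean(lam / (lam + 2 * mu)) ** 2
    c44 = 1.0 / np.mean(1.0 / mu)
    c66 = np.mean(mu)
    return (c11 - c33) / (2 * c33), (c66 - c44) / (2 * c44)

# Alternating fast and slow thin layers (m/s and kg/m^3)
vp = np.array([3000.0, 2200.0] * 50)
vs = np.array([1600.0, 1000.0] * 50)
rho = np.array([2500.0, 2300.0] * 50)
print(backus_vti(vp, vs, rho))                  # positive epsilon and gamma: layer-induced anisotropy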

A fundamental area of current research is carbon capture and sequestration. In 2008 I stepped in as principal investigator on a CO2 sequestration site characterization project and have since been the lead CO2 sequestration researcher at the University of Houston. Our DOE-funded study site in Ness County, Kansas, involves 3D seismic, over 140 wells, digital well logs, production data, etc. My CO2 research team has progressed from seismic interpretation through geological model building, to scenario testing (cap rock integrity, fault and well bore leakage), and flow simulation spanning several hundred years. Site characterization and monitoring will rely heavily on geophysics as carbon capture begins on a large scale in the United States and worldwide. In 2010 I initiated research coordination with the CIUDEN carbon capture and sequestration project in Spain, one of the largest in Europe.

Time-frequency methods are central to ongoing areas of research.  In a recent paper with B. Bodmann, we re-examined a 1937 result by A. Wolf showing that reflection from a vertical transition zone results in a frequency-dependent reflection coefficient. Using modern analytical tools and methods, we showed how such phenomena could be detected and mined for important information. Another significant time-frequency application I developed is direct imaging of group velocity dispersion curves for observed data in shallow water settings around the world.  Phase velocity dispersion curves have been imaged since 1981, but this was the first reported method to image the group velocity curves that contain rich information about seafloor properties.
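In the spirit of that Wolf result, a toy normal-incidence calculation shows the effect: discretize a linear velocity ramp into thin layers and compute the reflection response with the standard layer recursion (all numbers invented; constant density assumed):

import numpy as np

def transition_zone_rc(freqs, v_top=2000.0, v_bot=3000.0, thickness=50.0, n=100, rho=2300.0):
    """|R(f)| at the top of a linear velocity ramp over a half-space, normal incidence."""
    v_layer = v_top + (v_bot - v_top) * (np.arange(n) + 0.5) / n   # thin-layer velocities
    z = rho * np.concatenate(([v_top], v_layer, [v_bot]))          # impedances
    r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])                        # interface RCs
    tau = (thickness / n) / v_layer                                # one-way time per thin layer

    w = 2 * np.pi * np.asarray(freqs, dtype=float)
    R = np.full(len(w), r[-1], dtype=complex)                      # start at the deepest interface
    for k in range(n - 1, -1, -1):                                 # recurse upward to the top
        phase = np.exp(2j * w * tau[k])
        R = (r[k] + R * phase) / (1 + r[k] * R * phase)
    return np.abs(R)

print(transition_zone_rc([5.0, 20.0, 80.0]))
# low frequencies see the ramp as a sharp step; high frequencies see almost no reflection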

A theory of full-bandwidth signal recovery from zero-crossings of the short-time Fourier transform is now under development in collaboration with B. Bodmann. The first paper was delivered at a 2010 American Mathematical Society meeting.

My research will continue in the broad field of advanced seismic interpretation, wave propagation, signal analysis, and time-frequency methods to develop new tools and deeper insight into petroleum seismic data, CO2 sequestration problems, and near surface characterization.

Tuesday, June 8, 2010

House of Mirrors

[Note: A version of this blog entry will appear in World Oil (June, 2010)]

Let's take a ride on a seismic wave. The setting is offshore and a cylindrical steel airgun is just now charging up, the pressure building till a signal triggers the release of compressed air. The gas expands rapidly, generating a bubble and pressure wave in the water; a seismic wave is born. The wave takes off in every direction, but let's follow it straight down toward the seafloor.

This wave has a certain amplitude, the excess pressure above ambient conditions as it passes by, and a corresponding amount of energy. Striking the seafloor, it splits into two parts, an upgoing reflected wave and a downgoing transmitted wave. Whether the seafloor is hard or soft, not much of the wave passes through the water-sediment interface. This may seem surprising since we have all seen those beautiful 3D seismic offshore images, and that must come from waves that made it through the seafloor. But the fact is that much of the wave action does not get through.

Even in the case of a muddy, soft seafloor, something like 45% of the wave is reflected back up into the water. For a hard seafloor, we can expect 60% or more of the wave to reflect. Now remember, we are following a vertical wave so this reflection heads back toward the ocean surface. The trip upward is not very exciting till the wave hits the water/air interface (or 'free surface').

You might think that with air being so compressible and low density compared to water, the sound wave would pass right on through. But, in fact, the opposite is true. Nothing gets through, and the wave reflects at full strength back to the seafloor, where about half of it bounces up again, then all of that reflects once more from the free surface, and so on. The multiples are periodic, each round trip taking the same amount of time. Have you ever been on an elevator with mirrors on two sides? Remember the infinite copies of yourself that peer back? That is the situation a receiver in the water sees as these waves ping back and forth between seafloor and surface. These are called free-surface multiples since they are bouncing off the air/water interface. Another kind of multiple can occur deeper in the earth, where extra bounces are taken in one or more layers. These are termed internal multiples.

Multiples of any kind are big trouble for seismic imaging, for a couple of reasons. First, it is not hard to imagine that our free-surface multiple will continue for as long as we record. That means eventually there will be a weak reflection we want from somewhere deep in the earth, and a multiple is likely to crash right into it. It is all too common around the world that our delicate, carefully nurtured reservoir reflection sits under a big, strong multiple. The second reason imaging suffers is more subtle. For half a century researchers have been devising ever better ways to image (or migrate) seismic data. But almost all of them have one thing in common: Only primary (one bounce) reflections are used for imaging, not multiples. So our data is full of multiples, but we consider it noise, not signal. That means multiples have to be removed before migration.

So how do we get rid of these multiples? The classic method is deconvolution, a word with a lot of meaning. It can be used for things like spectral whitening, source signature removal, wave-shaping, and many others. The application of decon to multiple removal goes back to the earliest days of digital seismic processing. The method also goes by the more descriptive name of prediction error filtering.

As an analogy, imagine that you are walking across a parking lot on a completely dark night. As you pace along, you suddenly bump your toe on a curb. Stepping over it, you continue and hit another curb farther on, and another. After a while, you start counting steps and find you can predict the next curb; not perfectly at first, but you get better and finally can avoid them perfectly. In effect, this is what the mathematical machinery of deconvolution does when processing a seismic trace. Based on earlier data values, decon predicts what will come a few time steps ahead, then goes there, checks the value, updates the prediction, and keeps trying. Since our multiple is periodic, decon will be able to predict it and thus remove it from the data. Reflections due to deeper geology, however, are by nature unpredictable and will not be removed. Decon has the remarkable ability to find the multiple pattern, or patterns, embedded in a random sequence of geological reflections. Very clever.
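That prediction machinery is worth seeing in miniature. Below is a toy Wiener prediction filter built from the trace's own autocorrelation and applied at a lag equal to the multiple period (the trace, period, and filter length are all synthetic choices for the demo; only NumPy and SciPy are assumed):

import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
nt, period = 1000, 80                          # samples; the multiple repeats every 80 samples

# Synthetic trace: sparse random 'geology' plus a decaying periodic multiple train
reflectivity = rng.normal(0, 1, nt) * (rng.random(nt) < 0.05)
trace = reflectivity.copy()
for k in range(1, 6):
    trace[k * period:] += (-0.6) ** k * reflectivity[: nt - k * period]

# Wiener prediction filter: predict the trace 'period' samples ahead from its own past
nfilt = 30
ac = np.correlate(trace, trace, "full")[nt - 1:]       # one-sided autocorrelation
c = ac[:nfilt].copy()
c[0] *= 1.01                                           # small prewhitening for stability
f = solve_toeplitz(c, ac[period:period + nfilt])

h = np.concatenate((np.zeros(period), f))              # prediction operator acting at lag = period
predicted = np.convolve(trace, h)[:nt]
out = trace - predicted                                # prediction-error output

print(np.std(trace - reflectivity), np.std(out - reflectivity))   # multiple energy before and after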

Going back to the dark parking lot, what if we change the situation so the repeating curbs are very far apart. So far, in fact, that as you walk across the lot you only hit one or two. In that case you would not be able to figure out the repeat pattern because you get too few looks at it. In the same way, decon is great for short-period multiples from shallow water, but fails for long-period multiples that occur in deep water. Over the last 20 years or so, a completely different multiple removal technique has been developed for exactly this situation. It has the cumbersome name of surface-related multiple elimination (SRME).

SRME is a vital tool in the modern exploration for deep water resources. It can handle much more complicated cases than the simple, vertical multiple we described above. SRME can handle situations where decon breaks down, like seafloor topography, non-vertical waves, and 3D scattering effects. The way SRME works is by considering the raypath for a particular free-surface multiple, and breaking it down to look like two primary reflections glued together. This insight allows a free-surface multiple that shows up only one time to be neatly and effectively removed. It has led to a vast improvement in detailed image quality in the deep water Gulf of Mexico and elsewhere.
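In one dimension the 'two primaries glued together' idea reduces to convolving the data with itself. This is a toy illustration only, leaving out the wavelet, the adaptive subtraction, and all the 3D surface bookkeeping that make real SRME work:

import numpy as np

dt, nt = 0.004, 600
primaries = np.zeros(nt)
primaries[125] = 0.4        # seafloor reflection at 0.5 s
primaries[250] = 0.1        # a deeper reflection at 1.0 s

# Convolving the primary response with itself, with a sign flip for the free surface,
# predicts the first-order surface multiples (transmission losses ignored)
multiples = -np.convolve(primaries, primaries)[:nt]
idx = np.flatnonzero(multiples)
print(list(zip(idx * dt, multiples[idx])))
# multiples predicted at 1.0 s (-0.16), 1.5 s (-0.08), and 2.0 s (-0.01)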

Tuesday, May 18, 2010

Technology of Shale Gas Plays

[Note: A version of this blog entry will appear in World Oil (May, 2010)]

There is no easy way to write about shale gas. It is remarkable how soon the previously unthinkable becomes routine. As recently as the late 1990s, it was commonplace during drilling to get a burp of gas while passing through shale, and ignore it. Shale was the 'background rock' that held the interesting formations: conventional sandstone and carbonate reservoirs with good porosity, permeability, and structural settings. Below conventional reservoirs, shale has always been important as source rock; a basic picture that has not changed.

In ancient oceans plankton and clay settled to the sea floor and were preserved by the low-oxygen environment. Sediment piled up over geologic time compressing, burying, and heating the organic-rich mud, transforming it to black shale containing a mixture of organic material called kerogen. Further burial and heating of the shale cracks the enormous kerogen molecules to generate oil and, at higher temperature, gas. Thus shale is the hydrocarbon kitchen generating oil and gas, then expelling it to be pooled and trapped farther up in those well-studied reservoirs. What we did not know until recently was that enough gas stayed behind to make shale itself a viable target. A century of study has been pointed at the conventional reservoir, while the idea of shale as a primary gas reservoir is barely 20 years old.

Shale is the finest-grained member of the clastic rock family that includes sandstone, siltstone, and mudstone. In a sense, grain size is all that defines a shale, but our common notion also includes something about clay minerals, depositional environment (e.g., deep water), and organic content. Shale variability is bewildering, including wide ranges of porosity (3-15%), permeability (milli to micro darcies), mineralogy (clay, silica, carbonate), total organic carbon (2-10%), and mechanical properties. A complicated picture indeed, and a case could be made that many of the famous 'shales' are not shale at all -- parts of the Bakken are dolomite, sandstone, and siltstone; the Barnett is mostly mudstone; and the Marcellus is part siltstone with up to 60% silica content. One is reminded of the old MIT physics story about a PhD qualifying exam where the candidate was asked to define the universe and give three examples.

But shale gas plays are here to stay. The motivation is clear. Take, for example, the Barnett play in Texas where shale gas production grew from 100 BCF/yr in 2000 to 1400 BCF/yr in 2008, an annual growth rate of 34%. The play names themselves have taken on a billion dollar buzz: Barnett, Haynesville, Utica, Woodford, Marcellus, and even the quaint Fayetteville. This is quite a shift of fortune for a shale previously famous only for a fault in the local railroad cut on Dickson Street in Fayetteville, AR.

Unconventional gas resources are generally grouped into tight gas sands, coal bed methane, and shale gas. Tight gas is the leader in proved reserves, but recent growth seen in U.S. reserves has been due almost entirely to shale gas. Recent estimates for 2008 indicate that shale gas accounts for more than 30% of U.S. gas reserves. What is it that sparked this shale gas boom, when we had been drilling right through it for decades?

Consider the most mature and best-studied shale gas play, the Barnett. From the first well in 1981, shale gas was on a slow growth curve until two key technologies combined to blow the cork out of the bottle -- horizontal drilling and massive fracture jobs. All those years when shale gas was just a curious show on the way down to a conventional target, the shale was being tested by a vertical well bore. Even the thickest gas shales have a relatively thin vertical zone with the best production. A vertical well encounters only this thin gas-rich zone, but a horizontal bore can open up several thousand feet of the good stuff. The best horizontal Barnett wells can make about three times the gas production of the best vertical wells. Since 2003, horizontal drilling has been standard procedure in every shale gas play. Meanwhile, a change in fracturing technology around 1998 brought the cost of frac jobs down significantly and allowed operators to do bigger treatments. As with the geology of shales, frac technology is a vast field of study. The goal is to expose more reservoir rock by injecting fluid to create fractures, and proppants to keep them open. Big jobs in the Barnett can involve pumping 7-8 million gallons of material down the well from an armada of pump trucks. No circus coming to town can match the spectacle.

As a geophysicist, I would be remiss not to mention seismic technologies related to shale gas. As with all things shale, the literature is vast; over 400 pages of technical papers published by the Society of Exploration Geophysicists alone. Daunting, but I'll mention a few highlights.

When faults are nearly vertical, the chances of cutting one in a vertical well are small. But when horizontal wells are steered laterally for several thousand feet the chances increase. At up to $10 million per well, an unmapped fault is not a pleasant surprise. Only seismic imaging can lead to optimized drill plans by mapping fault networks in 3D, while also indicating important natural fracture trends. Modern full azimuth 3D seismic data can deliver a truly remarkable view of the subsurface, including very small faults, using advanced seismic attributes like curvature and coherence. Finally, I can only mention the recent, and extraordinary, progress in microseismic monitoring of frac jobs.

Shale gas activity is, for now, concentrated in North America, and there is a mad scramble to find analogs elsewhere in the world. But like all unconventional resource plays, shale gas is a thin margin business, even in a hypercompetitive, high technology, open market situation like the U.S. Success elsewhere will depend on regulatory and business environments every bit as much as geology.

Acknowledgments

I would like to thank Larry Rairden (EOS Energy) and Mary Edrich (Geokinetics) for useful discussions and sharing of information.

Reference Links

O. Skagen, 2010, "Global gas reserves and resources: Trends, discontinuities and uncertainties" (Statoil) PPT

DOE Energy Information Administration (EIA), U.S. Shale Gas Production

R. LaFollette, 2007, "An Investor's Guide to Shale Gas" (BJ Services) PDF

Potential Gas Committee Report

Thursday, May 13, 2010

Why CO2 and global warming is a hard sell

This week I was at the 9th DOE conference on carbon capture and sequestration (CCS) in Pittsburgh. One of the speakers, Paal Frisvold of the NGO Bellona Foundation, asked for a show of hands as to who thought climate change was a major problem (maybe half) and how many thought that anthropogenic CO2 was a cause (less than half). And this was at a CCS meeting!

I have written elsewhere in this blog about my worldview of carbon dioxide (CO2).

At the 2010 OTC in Houston, I gave an overview talk on CCS. Part of the discussion is on point with this topic. The CO2 chain of effect is shown in the diagram below. Burning fossil fuel generates energy, water, and CO2. This is chemistry and cannot be refuted. Furthermore, there is direct, tangible evidence that atmospheric CO2 levels are tracking fossil fuel combustion.

The primary effect of increasing atmospheric CO2 is typically given as 'climate change', specifically global temperature increase (global warming). But as the diagram shows, anthropogenic CO2 is competing with many other climate change drivers, including the natural CO2 cycle. Although there is good scientific evidence connecting atmospheric CO2 and global temperature, it is a confusing and ambiguous argument for public consumption. Global warming is easy to ridicule, since much of the world population experiences a significant winter season every year. A global rise of 0.5 degrees C is alarming to researchers, but undetectable in everyday life.

A more direct effect of human CO2 emission is the acidity of the ocean. As atmospheric CO2 rises, uptake by the ocean increases, forming a weak acid and lowering the ocean pH. Unlike climate change, where there will be winners and losers, a falling ocean pH disrupts food chains and thus affects everyone on the planet.

In my opinion, ocean acidification -- rather than global warming -- should be the primary public message and motivation for CO2 emissions reduction.


Tuesday, May 11, 2010

Day 2 CCS Conference

Every time someone sees I am with U Houston, the Economides paper comes up. It claimed the entire CCS scientific community was full of crap about geologic storage capacity. A rebuttal from the EU Zero Emissions Platform says the paper is erroneous.

Heard on the floor:

It takes energy equivalent of 1/3 bbl of oil to produce 1 bbl of heavy oil from shale. Not much energy profit there.

A 500 MW power plant burns 200,000 kg of coal per hour (can anyone confirm this?)
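A back-of-the-envelope check says the number is in the right ballpark. The thermal efficiency and coal heating value below are my own assumptions, not figures from the floor:

# Rough check of the 200,000 kg/hr figure (efficiency and heating value assumed).
electric_power_MW = 500.0
thermal_efficiency = 0.35            # assumed thermal-to-electric efficiency
heating_value_MJ_per_kg = 24.0       # assumed, typical bituminous coal

thermal_power_MW = electric_power_MW / thermal_efficiency          # about 1430 MW thermal
coal_rate_kg_per_s = thermal_power_MW / heating_value_MJ_per_kg    # MJ/s divided by MJ/kg
coal_rate_kg_per_hr = coal_rate_kg_per_s * 3600.0

print(round(coal_rate_kg_per_hr))    # roughly 215,000 kg/hr, so 200,000 is plausible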



Speaker notes:

1. James Markowsky (DOE Asst. Sec. Fossil Fuels)

Obama administration committed to CCS. Presidential CCS task force in place. Planning 10 large scale CCS in US by 2014.

2. Paal Frisvold (Bellona Foundation)

Fossil fuels --CCS--> renewables

CCS is the bridging technology.

www.bellona.org

3. Nick Otter (Global CCS Institute; Founded 2009)

Fossil fuels ---> renewables

No silver bullet, silver buckshot .... Every viable action will be needed

4. Brendan Beck (International Energy Agency)...Stand-in Speaker for Beck

Need 50% CO2 emissions reduction to top out at 450 ppm CO2 (by 2050) to limit temp increase to 2 deg C.  This will require ~3000 CCS projects worldwide.  Projects in UK = 4.  Now a dedicated CCS unit within IEA.

Q: How to handle intellectual property rights in CCS technology? Public funds to develop shared knowledge; thorny.

Q: Is climate change skepticism on the rise?  Climate change is CCS driver. At political level there is none.  At popular level, there is a vocal faction that jumps on every opportunity.

5.  John Quigly (PA Dept Conservation and Nat Resources)

PA climate is marching north, by 2050 PA climate will be like northern Alabama today. Climate change is key issue.

Central issue for onshore CCS in US is assembling the necessary pore space. One 500 MW coal power plant CCS will require about 100 sq.mi. of pore space. (In Canada, everything below 15 m is owned by the crown).

PA is considering a '3% requirement', where 3% of electricity in PA must come from CCS-enabled power plants.

http://www.dcnr.state.pa.us/info/carbon/index.aspx

Q: Can pore space come from state and federal park system? Not in PA, and not likely overall. There has been some discussion about federalizing deep saline aquifers, but nothing definite.

6. Stu Dalton (Electric Power Research Institute)

US 2005 CO2 emissions = 588 MtCO2/yr. Worldwide funding of CCS about $30B.

Natural gas CO2 emission about half that of coal, but after 2030 even natural gas without CCS will not be good enough.

CCS will happen when: cost of CCS < cost of not doing CCS

CO2 Chain...... Capture, Compression, Transport, and Storage

7. David Mohler (CEO Duke Energy)

Current mix is 75% coal, by 2030 it will be 30%.

China claims CO2 capture cost for coal power at $12/tonne (unverified, in response to question)

Monday, May 10, 2010

Day 1 CCS Conference

This is day 1 of the DOE-sponsored carbon capture and sequestration (CCS) conference in Pittsburgh, Pennsylvania. My first trip to Pitt and I took some time yesterday and today to walk around and get a feel for the place. The Hilton (convention hotel) is downtown so that is the area I explored. The town was, surprisingly, founded by George Washington himself in 1758. How many places can claim that? Market Square is a standard destination easy walking distance from the hotel, but completely torn up due to construction. Yesterday I had the 'big fish' at Primanti Bros Bar and Grill. Fabulous, but not for the faint of heart. Cole slaw, french fries, and cheese ON the sandwich.

General impression of downtown is one of cool art deco buildings (or older) and, compared to Houston, not much activity. A workday in downtown Houston is like a beehive, here it is like a few beetles strolling around. Nice, but you get the feeling you came in late on a really good party.

First event at the conference was registration and happy hour. Funny how this small (~900) conference comes up with great finger food and free bar, when the SEG with 9000 has 1 drink ticket then a cash bar, and food is nothing but chaos. I visited every booth in an hour and had good conversations at each.

Met a guy I knew from email, Mark Wilkinson (Baker Hughes), and had a couple of drinks. Finally found out the cost of CO2 for enhanced oil recovery (EOR) projects onshore in the US. I had kept hearing oil people simply say 'CO2 is expensive', but with no info to back it up. A prof from New Mexico Tech said it was a closely held number that depends on pipeline access, purpose of use, recovery after use, etc. But basically, it is $1/MCF (~$19/tonne). It turns out, contrary to my previous suspicions, that CO2 used in EOR is pretty accurately accounted for on return to the surface. Something like 50% of the CO2 is lost to the formation. At the end of an oil EOR project, it is common for the operator to 'blow down' the reservoir in order to recover what CO2 he can for reuse or resale. An interesting discussion ensued about the CO2 credit of $10/tonne in the 2008 emergency stabilization bill. If this were upped to $20 (equal to the pure sequestration credit), and an end-of-project bonus added for CO2 left in the reservoir, then the economics of oil and tax incentives would go a long way toward our CO2 sequestration goals.
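For anyone who wants to check the unit conversion, here is a quick sketch; the CO2 density near standard conditions is my assumed value, not a number from the conversation:

# Check that $1/MCF works out to roughly $19/tonne of CO2 (density assumed).
m3_per_mcf = 28.317                  # 1 MCF (thousand cubic feet) in cubic meters
co2_density_kg_per_m3 = 1.87         # assumed CO2 density near standard conditions

kg_per_mcf = m3_per_mcf * co2_density_kg_per_m3    # about 53 kg of CO2 per MCF
mcf_per_tonne = 1000.0 / kg_per_mcf                # about 19 MCF per tonne

price_per_mcf = 1.0                                # $/MCF quoted on the floor
print(round(price_per_mcf * mcf_per_tonne, 1))     # about $19 per tonne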

At dinner there was great food, another open bar, and everyone in a big room. The keynote was by a BP bigwig, who had the doubly sad duty of talking about the Gulf oil spill and trying to get his talk out over 800 loud people eating dinner. One of the organizers mentioned that the Obama administration had a CCS working group, but had accepted the euro phrase 'carbon capture and storage', so it sounds like the term 'sequestration' is on the way out.

I can see the advantages of a small conference. Interesting that people were complaining about the shuttle busses, I guess earlier conferences were even smaller and just done in the hotel.

Monday, April 19, 2010

Fossil Fuel and CO2 Release (Chemical Reactions)

For those interested in the relationship between atmospheric CO2 and fossil fuels, you can use WolframAlpha to compose balanced chemical equations. A breakdown of average composition for produced natural gas can be found here.

Consider the dominant hydrocarbon present in natural gas, methane. Alpha shows that for each methane molecule burned, one CO2 molecule is created along with 2 water molecules. (Methane, CO2, and water vapor are all greenhouse gases.) One reason we have a growing CO2 concentration in the atmosphere is, of course, because we burn methane like crazy to get the other output of the reaction: energy (890 kJ/mole, an exothermic reaction).

The combustible part of crude oil is dominated by naphthenes, a complicated family of heavier hydrocarbon compounds. But taking naphthene itself as typical, Alpha shows that burning naphthene results in 10 CO2 molecules. Generally, more CO2 is generated per molecule as the hydrocarbon chain gets longer. It has to be this way, because the carbon atom in each CO2 molecule is coming from a carbon atom in the burned molecule. Burning C10H8 (naphthene) gives 10 CO2; burning C22H16 (name?) gives 22 CO2; and so on.
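If you do not have Alpha handy, the atom bookkeeping is easy to verify yourself. A small Python sketch checking the balance of the two reactions just described (the stoichiometric coefficients are standard chemistry, not output copied from Alpha):

# Check atom balance for the two combustion reactions discussed above (a sketch).
from collections import Counter

def atoms(formula_counts, coeff=1):
    """Multiply an atom-count dict by a stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

CH4   = {"C": 1, "H": 4}
O2    = {"O": 2}
CO2   = {"C": 1, "O": 2}
H2O   = {"H": 2, "O": 1}
C10H8 = {"C": 10, "H": 8}

# CH4 + 2 O2 -> CO2 + 2 H2O   (one CO2 per methane molecule, ~890 kJ/mol)
lhs = atoms(CH4) + atoms(O2, 2)
rhs = atoms(CO2) + atoms(H2O, 2)
print("methane balanced:", lhs == rhs)

# C10H8 + 12 O2 -> 10 CO2 + 4 H2O   (ten CO2 per molecule burned)
lhs = atoms(C10H8) + atoms(O2, 12)
rhs = atoms(CO2, 10) + atoms(H2O, 4)
print("C10H8 balanced:", lhs == rhs)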

All this leads to a general statement so often bandied about: gas is much more CO2-friendly than oil. But oil has its advantages, too. It packs more of a wallop -- burning the same number of methane and naphthene molecules, the naphthene releases 5.7 times as much energy. In other words, to get the same energy from burning gas (methane) and oil (naphthene) would require burning 5.7 times as much gas (in a molecular sense). But the CO2 score for gas (5.7 CO2) still comes out ahead of oil (10 CO2).

We have not mentioned coal so far, but let me just say it is hard to even put a chemical formula on it. One approximate chemical formula is C135H96O9NS (ppt file, slide 5). From what was said above, we know that burning one molecule of coal will generate 135 CO2 molecules. It would be nice to do the equal-energy comparison as we did with gas and oil, but even Alpha comes up empty on energy release for this reaction. But I think we can get an approximate answer.

Burning 1 kg of gas (methane) gives 2.74 kg of CO2 and 55.7 MJ energy

Burning 1 kg of oil (naphthene) gives 3.43 kg of CO2 and 40.3 MJ energy

Burning 1 kg of coal (C135H96O9NS) gives 2.58 kg of CO2 and 23.5 MJ energy (use 1 kilogram = 0.00110231131 short tons in this coal calculator)

On the basis of gCO2/MJ, fossil fuel CO2 output is:

gas.... 49.2
oil..... 85.1
coal... 109.8

where these numbers represent grams of CO2 created per megajoule of energy released by combustion (approximately). Normalizing this to coal, the following chart summarizes the result:


That is a big CO2 difference per unit energy generated. It says, for example, that a coal-fired electrical generation plant converted to natural gas reduces CO2 output by about 55%. The question then becomes the cost of coal-to-gas conversion vs. the cost of, say, carbon capture and sequestration. But cost can be more than just money: for large-scale subsurface injection and storage of CO2 there are also the issues of public acceptance, MVA, pore-space ownership, etc.
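As a quick check that the 55% figure follows from the table, here is a short sketch that recomputes the per-megajoule numbers from the per-kilogram figures given above:

# Recompute the gCO2/MJ figures from the per-kilogram numbers in this post.
fuels = {                      # (kg CO2 per kg fuel, MJ per kg fuel)
    "gas (methane)":   (2.74, 55.7),
    "oil (naphthene)": (3.43, 40.3),
    "coal":            (2.58, 23.5),
}

g_per_MJ = {name: 1000.0 * co2 / energy for name, (co2, energy) in fuels.items()}
for name, value in g_per_MJ.items():
    print(f"{name:16s} {value:6.1f} gCO2/MJ")          # about 49.2, 85.1, 109.8

reduction = 1.0 - g_per_MJ["gas (methane)"] / g_per_MJ["coal"]
print(f"coal-to-gas CO2 reduction: {reduction:.0%}")   # about 55%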

Tuesday, March 23, 2010

Salt

[Note: A version of this blog entry will appear in World Oil (April, 2010)]

Offshore, the new giants are rumbling. We see a stream of news about Angola's Kwanza and Congo Basin ultra deep water oil, Brazil's giant Tupi field, and deep water Gulf of Mexico headline discoveries like Tiber, Thunder Horse, and dozens more. Aside from being offshore, you might wonder what all these have in common. Two things, really.

First is a relentless push into deeper water, a natural expansion into less explored territory. Successful drilling in ten thousand feet of water has been reported, and five thousand is now almost routine. Like a medical scan, seismic is used to identify features of interest and reduce various kinds of exploration and production risk. From a seismic point of view, deep water presents no special problems. For years, academic geophysicists have been gathering and processing reflection data in some of the deepest water on earth. There are some peculiarities, like a sea floor reflection time of 14 seconds in the Marianas trench (11 km of water), but no intrinsic difficulties.

The second common denominator is salt: a massive headache, and an opportunity, for geophysicists. Salt is very simple and benign stuff when first deposited. It forms in low-slope coastal areas with tidal influx of sea water rich in minerals. Stranded waters evaporate to leave thin salt layers; the tide comes and goes, leading to more evaporation and more salt. In favorable circumstances the salt can build up to great thickness. But at this stage it is just a vast slab of salt. With geologic time, tectonic subsidence, and sedimentation, the salt is buried under an ever-thickening wedge of sandstone, shale, and limestone. As the sediments become more deeply buried, they lithify into rocks and, importantly, become more dense. Salt density changes little with burial, so at some point it is less dense than the overlying rock and it begins to move. Slowly over tens of millions of years, the buoyant salt grinds upward deforming, bending, fracturing, and faulting the overlying rock.

We see today a snapshot of this slow, powerful process. Gone are the days when we think of simple domes composed of smooth, ghost-like blobs of salt. We now understand that salt flows to form a vast and bizarre bestiary of shapes. But what is it that makes salt so seismically difficult?

The problem comes, not from density, but from the speed at which seismic waves travel through salt. In the Gulf of Mexico, for example, as we pass down from the ocean surface we first have water with a seismic velocity of about 1500 m/s, then sediments at maybe 2000 m/s, below that a progression of shale and sandstone with wave speeds of 2500-3500 m/s (depending on various rock frame and pore fluid properties), and finally salt at 5000 m/s. This sets up a difficult situation. It is often useful to think about seismic waves as a family of rays, like pencils of laser light. When a ray travels through the sediment and hits the salt, it bends according to a simple rule called Snell's law. The law depends only on the velocity contrast and the angle of the ray relative to a line perpendicular to the salt face. Importantly, the ray bends gently in the pile of overlying sediment, but kinks dramatically at the top salt interface and again at the base when the ray passes back into sedimentary rocks. To make things worse, it turns out geologic salt bodies are rarely smooth. They are irregular, deformed interfaces kicking the rays off in crazy directions. A ray and its neighbor can end up miles apart after whacking into salt.
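To see how dramatic the kink is, here is a small Snell's law sketch for a flat salt face, using rounded velocities consistent with the paragraph above (a cartoon, not a real velocity model):

import numpy as np

# Snell's law at a flat sediment/salt interface, rounded velocities from above.
v_sed = 3000.0     # m/s, deep sediment
v_salt = 5000.0    # m/s, salt (rounded value used in the text)

for inc_deg in [5.0, 15.0, 25.0, 35.0, 40.0]:
    s = (v_salt / v_sed) * np.sin(np.radians(inc_deg))
    if s < 1.0:
        print(f"incidence {inc_deg:4.1f} deg -> transmitted {np.degrees(np.arcsin(s)):4.1f} deg")
    else:
        print(f"incidence {inc_deg:4.1f} deg -> beyond critical angle, no transmitted P ray")

# Beyond the critical angle (about 37 degrees here) the ray cannot enter the salt at all.
print("critical angle:", round(np.degrees(np.arcsin(v_sed / v_salt)), 1), "deg")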

We care about rays because they must be accurately mapped for some kinds of seismic imaging to work. One of the lessons of the last decade or so is this: The salt is sometimes so complicated that no one can figure out the rays. But rays are a human invention, a useful and simplifying approximation when the earth is not too complicated. For extreme cases, like imaging through 2 km of salt in a soft-sediment basin, the ray idea breaks down or becomes enormously complicated. Consequently, there has been a subsalt push to move away from ray-based imaging (termed Kirchhoff migration) in favor of algorithms that use waves directly (wave equation migration). Unlike rays, wave fields are smooth, continuous, and easy to compute. The ultimate version of wave equation imaging is reverse time migration or RTM. Although RTM has been theoretically understood since about 1982, it is only recently that computer power has enabled people to do 3D prestack RTM on large surveys.

While we were building an understanding of salt tectonics and wave equation migration (and the computer science to make it work), there was a growing sense that something was missing. As researchers went to ever greater lengths to improve imaging algorithms, the improvements were becoming progressively smaller. Rays, waves, better physics, faster computers... it all started to look the same, like we were up against a wall; some kind of fundamental limit to image quality in complex subsalt areas. As it turns out, the next level of imaging came not from better algorithms or computers, but from good old fashioned communication. Over the decades, two groups had grown up in offshore exploration: acquisition and imaging. The one a pragmatic field campaign of cables, airguns, and high seas. The other cloistered in research labs, deriving and programming equations on supercomputers. You can imagine how the company picnic split up.

There have always been voices calling out the message that a fundamental link exists between acquisition and imaging, and that significant advances can only come by tuning both. We know this new way of seismic shooting as wide or full azimuth, but it is hardly new in concept. Land 3D shooting has been full azimuth for decades. Now it is happening offshore.

It is the twin advance of wave equation imaging and wide azimuth acquisition that has allowed us to peer better into the deep, unlocking a subsalt treasure trove around the world.

Friday, March 19, 2010

SeismicArt

Lately I've been doing a lot of finite difference simulation in realistic layered models. The code I'm running is Tong Fei's SeismicUnix (SU) program sufctanismod, although I have had to make a few modifications for large-scale simulations (3000 x 3000 grid, 9000 time steps). My modified code has been sent to John Stockwell at Colorado School of Mines, who is the keeper of SU.

One area of interest is the development of dispersive waves in shallow water. The first figure below is a simulated shot record, data(t,x). The second figure is its time Fourier transform, data(f,x), showing a lovely pattern. To those versed in the art, the curved features in the (f,x) plot indicate dispersive wave modes. Pretty enough to frame.

Fig 1.  Synthetic (t,x) shot record generated by SU code sufctanismod


Fig 2. The lovely Fourier transform (f,x) amplitude of the data in Fig 1.
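For anyone who wants to make the same (f,x) picture from their own shot record, here is a minimal numpy sketch; the random synthetic is only a stand-in for the sufctanismod output:

import numpy as np

# Sketch: turn a shot record data(t, x) into the (f, x) amplitude shown in Fig 2.
# A random synthetic stands in here for the real shot record, which would
# normally be read from the SU file.
nt, nx, dt = 1500, 240, 0.002               # time samples, traces, sample rate (s)
rng = np.random.default_rng(0)
data_tx = rng.standard_normal((nt, nx))     # placeholder for the real shot record

data_fx = np.fft.rfft(data_tx, axis=0)      # FFT along the time axis only
freqs = np.fft.rfftfreq(nt, d=dt)           # frequency axis in Hz
amplitude_fx = np.abs(data_fx)              # this is what gets plotted as (f, x)

print(amplitude_fx.shape, "frequencies 0 to", freqs[-1], "Hz")
# Dispersive modes show up as the curved features in the amplitude image.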

Friday, February 12, 2010

What value VSP?

[Note: A version of this blog entry will appear in World Oil (March, 2010)]

From a geoscience point of view, there are two worlds in hydrocarbon exploration. First is the depth realm of geology as revealed by drilling and wireline logs. Second is the reflection time world of geology as seen by seismic data. The connection between time and depth is, predictably, a time-depth curve derived from sonic log or vertical seismic profile (VSP).

The sonic device is a long metal cylinder lowered into the well on wireline like any other logging tool. Once in place at some depth in the well, it is slowly pulled up. As the tool crawls up the well, it generates data by emitting high-frequency pulses at one end and listening for pulse arrivals at the other end. The operating frequency for most sonic logs is ten to fifteen thousand Hertz, far beyond the 10-100 Hz range of surface seismic data. When the source acts, sound waves move through the drilling fluid and interact with rock along the well bore wall, which typically has a much greater sound speed. This sets up a thing called a head wave that runs along the bore hole wall, as opposed to a reflection that would only travel in the fluid and bounce off the wall. Anyway, since the exact distance between the source and receiver is known, along with the fluid sound speed and hole diameter, the wave speed in the rock formation can be found. The sonic log actually outputs the delay time divided by the source-receiver distance in units of microseconds per foot, and the formation velocity in ft/s can be found using (1,000,000)/sonic. The quantity measured by the sonic log is termed the interval velocity because it represents the local formation P-wave speed averaged along a short interval of the well bore equal to the source-receiver spacing of 1-2 m. A digital sonic log yields an interval velocity reading every 0.3 m (1 ft) in depth.

The process of converting a sonic log to a time-depth curve involves integration. In effect the sonic log is a model of the earth consisting of thin layers. Each layer is the same thickness (0.3 m) and we know the velocity, so the vertical two-way travel time through each layer can be calculated. Starting from the top, we sum up these layer times to find the reflection time to all depths in the well. The result is a time-depth curve, but one with many potential sources of error.
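A minimal sketch of both steps, the slowness-to-velocity conversion and the running-sum integration, using a made-up handful of sonic samples:

import numpy as np

# Sketch: sonic slowness (microseconds per foot) -> interval velocity -> time-depth.
dz_m = 0.3                                                    # sonic depth sampling (m)
sonic_us_per_ft = np.array([90.0, 85.0, 80.0, 70.0, 65.0])    # toy log values

velocity_ft_per_s = 1.0e6 / sonic_us_per_ft          # the (1,000,000)/sonic rule
velocity_m_per_s = velocity_ft_per_s * 0.3048

layer_twt_s = 2.0 * dz_m / velocity_m_per_s          # vertical two-way time per layer
twt_s = np.cumsum(layer_twt_s)                       # running sum down the well

depth_m = dz_m * np.arange(1, len(sonic_us_per_ft) + 1)
for z, t in zip(depth_m, twt_s):
    print(f"depth {z:4.1f} m   two-way time {1000.0 * t:6.3f} ms")
# Note the curve starts at the top of the log, not the surface; the missing
# near-surface time still has to be added, as discussed below.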

First, a sonic log never reaches the earth surface. Tool specifications require the bore hole diameter to be less than about 50 cm (20 in), meaning that onshore sonic logs in petroleum exploration and production wells rarely get within 100 m of the surface. In seismic terms, this is the weathered zone and likely to contain unpredictable low-velocity rock. The sonic contains no information about this interval, so it is necessary to somehow figure out the reflection time from ground surface to the top of the sonic log and add this to the time-depth curve.

Sonic logs are sensitive to washouts and other hole problems, and it is not easy to get accurate sonic velocity in slow formations (velocity less than the sound speed in the mud). We also have the problem of frequency mismatch between sonic and 3D surface seismic data. Not only does the sonic see a tiny volume of rock compared to surface seismic waves, it is also well established that seismic velocity in fluid-saturated porous rock varies with frequency. This leads to the notoriously difficult upscaling problem, which involves modifying observed sonic readings to better match the long wavelength velocities seen by surface seismic data. Not many people agree on how to do this.

To address the missing near surface and other time-depth problems, a check shot survey can be run in conjunction with sonic logging. In a check shot survey receivers are located sparsely down the well, usually at casing points and key geologic boundaries. The measured quantity is just the first arrival time.

Compare all of this to a vertical seismic profile recorded using a source at the surface and many receiver locations down the well. The receivers record full traces for interpretation and receiver spacing is determined by spatial aliasing considerations, usually something like 3 m (10 ft). This gives actual traveltimes from the surface to points in the earth. The VSP considered here is often called a zero offset VSP, meaning that only a single source position is used and that it is as close to the wellhead as possible. There are also multioffset and multiazimuth VSPs which use many source locations. These are much more expensive and sometimes useful for local, high resolution imaging. However, a zero offset VSP is sufficient for event identification and other standard uses.

A zero offset VSP is the best and most direct method of associating a 3D seismic event with a geologic horizon, since it has about the same frequency range (20-200 Hz) as surface data, the wavefield actually passes through the same near surface, and it is not sensitive to hole problems. And there are many important side benefits. In standard 3D shooting we use various acquisition techniques to isolate P waves, but the earth is actually elastic and all types of shear and mode-converted waves are bouncing around down there confusing our interpretation. The VSP is unique in its ability to distinguish upgoing from downgoing waves, S from P and mode-converted waves, and primary reflections from multiples. The last point is critically important. The entire machinery of seismic imaging is based on primary events that have reflected only once. Over the last half-century an arsenal of methods has been developed to detect and remove multiples in surface seismic data. Even so, our abilities are limited by the nature of the data. VSP data is fundamentally different and allows direct observation of these multiples (and non-P waves).
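One common way this separation is done in practice is to flatten the traces on the picked first-break times, estimate the downgoing wavefield with a median filter across receivers, and subtract. The sketch below is a bare-bones version of that idea, with synthetic placeholders standing in for real VSP traces and picks; it is not a production algorithm:

import numpy as np
from scipy.ndimage import median_filter

# Bare-bones up/down separation sketch for a zero-offset VSP.
# vsp is (receivers, time samples); first_break holds picked times in samples.
n_rec, n_samp = 48, 1000
rng = np.random.default_rng(1)
vsp = 0.01 * rng.standard_normal((n_rec, n_samp))       # placeholder VSP traces
first_break = np.linspace(100, 400, n_rec).astype(int)  # placeholder picks

# 1. Flatten on the first breaks so the downgoing wave aligns across receivers.
flat = np.array([np.roll(tr, -fb) for tr, fb in zip(vsp, first_break)])

# 2. A median filter across the receiver axis estimates the aligned downgoing field.
down_flat = median_filter(flat, size=(9, 1))

# 3. Subtract and unflatten to recover the upgoing (reflected) wavefield.
up_flat = flat - down_flat
upgoing = np.array([np.roll(tr, fb) for tr, fb in zip(up_flat, first_break)])
print(upgoing.shape)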

You don't need a hundred VSPs in a project area, but you are at a competitive disadvantage without at least one.

Fig 1. Finite difference modeling was used on the velocity model to create the VSP and shot record data. The model, VSP, and shot are arranged to show the connecting role that VSP plays in relating velocity model in depth and the shot record in time.  Can you identify a multiple?

Monday, February 8, 2010

Turhan Taner

I was saddened to hear today that Dr. Turhan Taner passed away over the weekend. He lived here in Houston and had been suffering from a long illness. My wife Dolores was a great friend of his and, through her, I had the chance to visit him several times. He did first class work in many aspects of applied geophysics, most recently in seismic attributes. His 1960s papers on travel time curves in layered media are still the foundation for understanding this key subject.

As you can see from the SEG Honors and Awards list, Tury won SEG Honorary Membership (1978), Best Paper Presentation at the Annual Meeting (1978), and the prestigious Maurice Ewing Medal (1993). In addition to being an excellent scientist, he was a kind and generous man. Many benefited from association with him, and he will be missed by all. A good biography of Tury can be found here.


***** Text of Tury's Obituary *******

Dr. M. Turhan “Tury” Taner passed away Saturday, February 6, 2010, in Houston, Texas at the age of 82. Dr. Taner will be laid to rest next to his parents in Istanbul, Turkey. A memorial service will be held after his family returns from Turkey at Emerson Unitarian Church, 1900 Bering Dr. on Sunday, March 21st at 3:00 pm.

Turhan was born in Akhisar, Turkey to Izzet and Kadriye Taner. He received a Diplome Engineer in 1950 from the Technical University of Istanbul and came to the United States in 1953 to the University of Minnesota for a postgraduate program in engineering. He co‐founded Scientific Computers in 1959, and in 1964 co‐founded Seiscom Delta, a geophysical service company, where he served as chairman, director of research, and later, senior VP for technology. In 1980 he started Seismic Research Corporation (SRC), and in 1998 SRC merged with Petrosoft and Discovery Bay to create Rock Solid Images. Widely known within the geophysical community, Tury was the recipient of numerous accolades including the SEG’s highest award, the Maurice Ewing Medal in 1993 and The EAGE’s highest recognition, the Desiderius Erasmus Award for lifetime contribution in 2004. Tury was a pioneer, teacher, scholar, great practitioner and a household name in geophysics. During his career he authored or co‐authored several groundbreaking papers on geophysical methods and contributed to the development of many technologies still in use today. In addition to his passion for creating and developing geophysical algorithms such as semblance and multitudes of other seismic attributes, he loved music, food and wine, traveling, art, and soccer, but most of all, he loved his friends and family.

Tury is survived by his loving family, including his son Jeffrey Taner and wife Andrea; daughter Jane Harris and her husband Christopher; son John Taner and his wife Julie; sister Turcan Sozeri; niece Selen Ozel and her husband Haluk; grandchildren Adam Harris, Emily Taner, Lilly Taner, Daniel Taner, Jack Taner; great‐niece Beren Ozel, and great‐nephew Devin Ozel. In lieu of flowers, the family requests that donations in honor of Dr. M. Turhan Taner be made to the National Parkinson Foundation, 1501 NW 9th Avenue, Bob Hope Road, Miami Florida 33136‐1494.

Saturday, February 6, 2010

miniInterview

Last Sunday at a gathering of Spanish expats in Houston (my wife is from Barcelona), geologist Daniel Minisini chatted with me and wanted to know if I would do a short video interview. He has a blog site, miniGeology, that archives brief (< 5 min) interviews with geoscientists. Check it out sometime. Unscripted, unbriefed, and unedited, the interview format yields a short spontaneous discussion. I've added my interview video (complete with interruption by the cleaning staff) to my Links sidebar.

It is an honor to be an early part of this creative and interesting video archive project. Thank you Daniel.

Monday, February 1, 2010

Millennium Mark

Just noticed today that the total number of visitors to this blog has passed 1000. Thanks to all who have taken the time to drop in. The visitor level has accelerated lately due to the World Oil column (What's New in Exploration) that appeared in January 2010 and footnoted the blog address. Another column is on the way for February. My original plan was to contribute 3-4 columns for 2010, but I'm honored to report that the Editor and Management of World Oil have asked me to serve as Contributing Editor. I have agreed to take on the monthly column for the next 12 months.

On the subject of writing, I was again honored to hear last week that ExxonMobil will order over a hundred copies of my book, Elements of 3D Seismology, as a reference for staff geoscientists in the technology group. This is good motivation for work toward the 3rd edition.

On an unrelated (?) note, I am in the third year of teaching the graduate class Geophysical Data Processing at U Houston. The class size numbers are: 2008 (18), 2009 (25), and 2010 (46). It is the largest graduate class in the Department of Earth and Atmospheric Sciences at U. Houston. Anybody see a trend here? This is one of the required core courses for MS Geophysics students, but that only accounts for 30. It is optional for all the rest. Interesting....


Class photo for University of Houston class GEO7341 Geophysical Data Processing for spring 2010 (head count = 46).