Friday, October 19, 2018

SEG 2018 Trip Report

Notice: 2018 SEG Abstracts are online until 22 Oct. After that, they will be available only to SEG members.

Saturday 10/13

Started with an early drive to Dana Point, a lovely seaside town. The waves were up and it was drizzling, but still beautiful. Breakfast at RJ's Cafe, which I highly recommend, but it is busy, so get there early or expect to wait in line. Back in Pasadena I ran across a Shakey's Pizza Parlor that brought back fond memories of the 1960s in Tulsa, when the young Liner family used to go to Shakey's. Later, wandered to The Cave, a local watering hole, with friends Kyle Spikes and Jeff Shragge, then finished off the day with a solo sushi dinner at the Tokyo Love Japanese Restaurant.


Attended the SEG Council Meeting. Nancy House's presidential address showed SEG finances recovering nicely from the crash of 2015. The big news was an SEG Bylaws change that was approved on a split vote. SEG trivia: a motion before the Council requires a 2/3 majority vote to pass, assuming a quorum of 1/3 of all members has been established, meaning that 2/3 of 1/3, or about 22%, of the Council is the minimum vote needed to pass a motion. The motion in question here was to allow Associate Members to vote in SEG elections, a right currently reserved for Active Members. For information on membership categories, see the SEG Bylaws Article III, Section 1. For the new rule to become the law of the land, the motion must be posted in The Leading Edge and then voted on by the Active SEG Members. In effect, the Active Members will be asked to dilute their own voting power, since the SEG membership composition is about 38% Active, 32% Associate, and 29% Student (2015 numbers). The Associate Member group is generally younger and more diverse than the Active group. Some of the Actives will want to do this and some won't; that is why we take the vote.
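The quorum arithmetic is easy to sanity-check. A throwaway snippet (my own, nothing official from SEG):

```python
from fractions import Fraction

# Minimum fraction of Council that must be present (quorum)
quorum = Fraction(1, 3)
# Fraction of those present that must vote yes (supermajority)
supermajority = Fraction(2, 3)

# Smallest fraction of the whole Council that can pass a motion
minimum = supermajority * quorum
print(minimum, "=", float(minimum))  # 2/9 = 0.222...
```

So "about 22%" is really 2/9, or 22.2% of the full Council.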

At the icebreaker a strong crowd appeared in good spirits. I spent most of the time at the Fairfield Technologies booth with friends John Smythe and Paul Docherty. Liner trivia: I worked for Fairfield in 1989 when they were still Golden Geophysical. The icebreaker buzz was in two areas: (1) the Houston Astros playoff game and (2) machine learning, meaning artificial intelligence and neural networks.

Before the meeting I was expecting the buzz to be full waveform inversion (FWI), distributed acoustic sensing (DAS) or, maybe, drone geophysics. These are all game changers. FWI is advancing steadily with computer power, particularly graphics processing unit (GPU) clusters, and algorithm development. When it works, the result is a subsurface velocity model of amazing clarity and detail, almost ready for interpretation in its own right, apart from the data that generated it. DAS is revolutionizing some kinds of seismic acquisition by doing away with discrete sensors and replacing them with a continuous DAS cable. Drone geophysics is a big deal in shallow studies. Various kinds of sensors enable quick, detailed mapping. Drones are also used for deployment of some seismic survey elements.

Despite all that, machine learning (ML) was the mega-buzz at SEG before one talk was given. Before diving into what ML is, let me say what it used to be. When I went to Colorado School of Mines (CSM) in the mid-1980s, there was an artificial intelligence (AI) project underway in geophysics. The goal was to develop an 'expert system' by coding up all the decision knowledge of a subject expert in a bewildering array of IF-THEN-ELSE statements in the Lisp programming language. Some progress was made, I suppose, but this kind of expert system is not the main branch of ML history. The early story of ML is the early history of neural networks (NN), which can be found on many sites (e.g., Stanford). In the SEG Abstracts literature, the first with 'neural network' in the title was Minimum-variance deconvolution using artificial neural networks (Zhao and Mendel, 1988). Fittingly, they were from USC, which is not far from Anaheim where the 2018 SEG meeting was held. The first TLE article was Waveform recognition using neural networks (Palaz and Weger, 1990) and for Geophysics it was Adaptive minimum prediction-error deconvolution and source wavelet estimation using Hopfield neural networks (Wang and Mendel, 1992). Heady stuff and a far cry from the ML that was buzzing at the 2018 SEG, where a kind of ML was being touted that is within technological reach of anyone and applies to the interpretation of data.

The terms used in this revolution can be confusing. Machine learning (ML) seems to be the most general term, with neural network (NN) being a specific approach to and subset of ML, and artificial intelligence (AI) out there in broader culture. 'Artificial intelligence' pulls along with it all the concepts related to human intelligence, such as rationality, consciousness, and self-awareness. All are misleading and unwelcome distractions from the actual status of the technology today. To my taste, ML is the preferred term. From the Google Trends five-year data below, it looks like I am not the only one.


The opening ceremony was very well attended, standing room only. The leadoff speakers were General Chair Dan Hollis (nice job!) and SEG President Nancy House, but the main speaker was Darryl Willis, VP of Oil, Gas and Energy at Google Cloud. An excellent speaker with strong petroleum industry experience, he has been with Google for seven months, and the new group has 25 employees, likely to be near 100 in early 2019. The team is composed of energy professionals (geologists, geophysicists, engineers, etc.) to focus and drive Google toward significant problems that machine learning can tackle. (Could Google have more geophysicists than a mid-size oil company by 2019?) Willis gave a vigorous pep talk insisting that progress in the energy sector is not coming fast enough. The only example given was reduction of the timeline for seismic acquisition/processing/interpretation from five years to one-and-a-half (I think). An interactive session followed with written questions from the audience. In response to "How does someone get started?" Willis recommended the book Machine, Platform, Crowd ... which I ordered on my phone two minutes after he mentioned it. The ultimate instant gratification. The question I wrote but did not submit: "Should intelligence, artificial or otherwise, be proprietary?"

Lunch with John Stockwell, recently retired from CSM but still teaching seismic data processing. After lunch I went to Art Weglein's talk (abstract) about multiples, but really about migration and high frequency (HF) approximations. He explained that migration has two parts: (1) wave field simulation and (2) an imaging condition. The HF approximation is limiting and can enter in either (1) or (2); even reverse time migration (RTM), using the full wave equation for simulation, can be a high frequency method if the imaging condition is HF. Art discussed three classic imaging conditions, two of which are HF and one that is not.
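For readers who have not seen an imaging condition written down, the classic zero-lag cross-correlation condition is the easiest to sketch: the image at each point is the time sum of the forward-propagated source wavefield times the back-propagated receiver wavefield. The arrays below are random stand-ins, not output from any real propagator and not Art's examples:

```python
import numpy as np

# Toy wavefields sampled as (time, x, z) arrays -- purely illustrative
nt, nx, nz = 100, 50, 50
rng = np.random.default_rng(1)
src = rng.normal(size=(nt, nx, nz))   # forward-propagated source wavefield
rec = rng.normal(size=(nt, nx, nz))   # back-propagated receiver wavefield

# Zero-lag cross-correlation imaging condition:
# I(x, z) = sum over t of S(t, x, z) * R(t, x, z)
image = np.einsum('txz,txz->xz', src, rec)
print(image.shape)  # (50, 50)
```

Both wavefield simulation and this pointwise product are full-wave operations; the HF character Art described enters through how such conditions are derived and applied, not through anything visible in this toy sum.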

Later, on the exhibit floor I got much-needed help at the GeoTeric booth from Geosciences Manager Rachael Moore. Even in the age of machine learning, loading SEGY data can be a challenge; I had been unable to load SEGY on my own at U Arkansas using an academic GeoTeric license (thank you!). GeoTeric has a great import tool and interface, but I needed to have someone walk me through it one time. Big shout out to Rachael for her invaluable help.

Maarten de Hoop's paper Deep neural-network architectures arising in seismic-inverse problems packed a big room. One section of the talk caught my attention: Maarten described building a deep neural network. (The term 'deep' is a code word among practitioners for a NN with more than one hidden layer. If you really want to drill down in ML, see the course notes for this Carnegie Mellon class.) After constructing his NN, Maarten applied it to data from a seismic station near a place where a landslide occurred. He had some data before, during, and after the landslide. Applying his NN, he found (asked for?) three classifications that you might call 'background state', 'landslide event', and everything else. It is the everything else that is interesting. This class began to show up well ahead of the landslide with increasing insistence as the landslide time approached. The amazing things about this are (1) the precursor class cannot be seen in the raw data, (2) the NN picked out the precursor class with no knowledge of landslide or seismic physics, (3) the trained NN could immediately be applied to areas with potential landslide hazard as an early warning system. Of course, people who study landslides have many kinds of non-seismic data, such as strain meter output, satellite or GPS ground motion, and even ground water sampling. Neural nets are marvels at working with big, diverse data sets. Presumably, Maarten's NN prediction would benefit from stuffing in all these data.
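To give a flavor of asking for three classes with no physics supplied, here is a toy unsupervised example. This is emphatically not Maarten's deep network; it is plain k-means on made-up two-feature data, where three hidden regimes (think 'background', 'precursor', 'event') are recovered from the samples alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed seismic features; the three regimes
# and their statistics are entirely invented for illustration.
background = rng.normal(0.0, 0.3, size=(200, 2))
precursor  = rng.normal(1.5, 0.3, size=(60, 2))
event      = rng.normal(4.0, 0.5, size=(20, 2))
X = np.vstack([background, precursor, event])

def kmeans(X, k=3, iters=50, seed=0):
    """Plain k-means: assign each sample to its nearest centroid, update."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(X, k=3)
print("samples per class:", np.bincount(labels, minlength=3))
```

The algorithm never hears the words 'landslide' or 'precursor'; a human looking at which class lights up when is where the interpretation comes in, which is exactly the point of the paragraph above.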

It is interesting to contemplate the roles of the human and the machine in machine learning. Currently the human roles are (1) posing the problem and writing the NN code, (2) preparing data for input to the NN, (3) training the NN, and (4) interpreting the NN output. The threat that concerns those who think about such things is automation of item (1): that would mean machines 'smart' enough to write ever-improving ML algorithms better than humans can, and we would begin along a very dangerous path. For now, that is science fiction. But the other human roles will be growth areas in the ML age. The interpretation of ML output is intriguing. In Maarten's example, the NN provided the three classes, but he (a human) studied the output and realized the precursor was in there. So maybe ML will not so much replace human interpreters as move them to a new level. Like a microscope or any other investigative tool, ML provides a new kind of information that someone has to interpret to find meaning. But those lofty, new-world keepers and interpreters of ML may be few compared to the legions of raw data interpreters we have now.

Next it was on to the Repro Zoo with Matt Hall of Agile Blog fame. The idea of the Repro Zoo is to get coders and wanna-be coders together and hack together a few figures from classic papers as little steps toward reproducible research. I was flattered to see my elastic attenuation (negative Q) paper was on the list. On this day, I was back-seat driver for Matt on a figure from Connolly's 1999 classic elastic impedance paper. You can find the results of the Repro Zoo on GitHub. While at the zoo, I met with my PhD student Lanrè Aboaba to review his slides for our Wednesday talk 3D Seismic interpretation of Paleozoic karst features in the Arkoma Basin of Oklahoma.
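The elastic impedance formula from that 1999 paper is simple enough to sketch. Below is the standard form, EI(theta) = Vp^(1 + tan^2 theta) * Vs^(-8K sin^2 theta) * rho^(1 - 4K sin^2 theta) with K = (Vs/Vp)^2; the numerical values are arbitrary test inputs of mine, not numbers from the paper or the Repro Zoo figure:

```python
import numpy as np

def elastic_impedance(vp, vs, rho, theta_deg, k=None):
    """Elastic impedance (Connolly, 1999) at incidence angle theta.

    k is the (Vs/Vp)^2 factor, normally taken as an average over the
    interval of interest; if not given, it is computed pointwise.
    """
    th = np.radians(theta_deg)
    if k is None:
        k = (vs / vp) ** 2
    a = 1.0 + np.tan(th) ** 2
    b = -8.0 * k * np.sin(th) ** 2
    c = 1.0 - 4.0 * k * np.sin(th) ** 2
    return vp ** a * vs ** b * rho ** c

# Sanity check: at theta = 0 the exponents collapse to (1, 0, 1),
# so EI reduces to ordinary acoustic impedance Vp * rho.
print(elastic_impedance(3000.0, 1500.0, 2.4, 0.0))  # 7200.0
```

The collapse to acoustic impedance at normal incidence is the quickest way to check any reimplementation before chasing a published figure.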

Later in the evening was the always fabulous Editor's Reception and awards. Valentina Socco is finishing up her term as Geophysics Editor and has done an outstanding job. Her successor is yet to be officially named. When Sven Treitel became Geophysics Editor in 1995, there was one Assistant Editor (Larry Lines), a number of Associate Editors, and since time immemorial the Editor was expected to review every paper. Sven added a second Assistant Editor (me) and pointed out that even if he could find time to review every paper, he should not do so, because no one is an expert in all areas of geophysics. Now there are seven Assistant Editors. I am told there is even a double-blind review system in place to minimize bias of various kinds in the peer review process. I suggested this in my term as Editor (1999-2001), but it was soundly rejected. Maybe I should have pressed the issue a bit harder. Glad to see it finally happened.


Went to Tom Smith's Geophysical Insights booth for a Rocky Roden talk on machine learning. If anyone can explain ML in a way that I can understand, Rocky is the guy. I was not disappointed; it was an overview with examples of application to interpretation. Tom saw the ML wave coming a few years ago and has created a fine product called Paradise.

After a chat about SEG politics with Lee Bell, we lunched with Larry Morely. Very nice to catch up. I also bumped into Gerard Wiggerink of EAGE on the exhibit floor, a Seismos Blog follower and good friend. I must say that Gerard wins the best-dressed award with a blue suit and matching shoe laces.

The last thing I managed to do at the convention was to stop by the DownUnder GeoSolutions (DUG) booth for some help, yes, loading SEGY data. The very capable Russell Holroyd wrestled my data into DUG Insight, then gave me a quick overview of the interface and capabilities. DUG Insight is another product that may help with research projects and course development back at the U of Arkansas. Thank you Russell and DUG for the great help.

During the exhibit floor refreshment hour, John Stockwell and I chatted comparing notes on talks and concepts seen at the meeting. In a philosophical moment, I asked him "If I hit a neural network with a hammer does it feel pain?" His reply, "No, but you might - if you were fond of it."

The last event of my 2018 SEG meeting was the President's Reception and Awards Ceremony. The award winners for this year can be found here. A highlight of the evening was the Maurice Ewing Medal, awarded posthumously to Albert Tarantola, who passed away in 2009. Albert's work in the 1980s laid the foundation for full waveform inversion (FWI). Later advances in algorithms and computer power have made FWI a reality, and it is only fitting that today's SEG should honor this remarkable pioneering scientist.
