January 2010, Vol. 231 No. 1

Columns: What’s new in exploration

Seismic goes multisource

CHRISTOPHER LINER, PROFESSOR, UNIVERSITY OF HOUSTON

The role of seismic data in hydrocarbon exploration is risk reduction: it seeks to avoid dry holes, marginal producers and seriously misjudged reserves. Early surveys relied exclusively on explosives as an energy source, but explosives carry obvious safety and environmental issues. Just as limiting is the fact that explosive sources give us very little control over the emitted waveform; basically, our only “knob” to turn for higher or lower frequency is the charge size. The journal Geophysics published some very interesting papers in the 1950s in which empirical equations related charge size and depth to the dimensions of the resulting blast hole. It must have been a wild ride for the researchers, since charges up to 1 million lb were tested.

But even as those experiments were underway, the seismic world was changing. An explosive charge contains a broad band of frequencies that form a pulse of energy injected into the earth over a brief time span, perhaps one-twentieth of a second (50 ms). In fact, Fourier theory tells us that the only way to build such a short-duration pulse is by adding up a wide array of frequencies. William Doty and John Crawford of Conoco were issued a patent in 1954 describing a new kind of land seismic source that did not involve explosives. The new technology, called vibroseis, used a truck-mounted vibrating baseplate. Vibroseis applies the Fourier concept literally, operating one frequency at a time and stepping through the desired frequency range over a sweep of 10–14 s. Vibroseis soon became the source of choice for land applications.
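A minimal sketch makes the Fourier idea tangible (NumPy only; the sweep length, sample rate and 8–80 Hz band below are illustrative assumptions, not field parameters). A long linear sweep visits one frequency at a time, yet correlating it with itself collapses those 12 s of vibration into a short broadband pulse, the Klauder wavelet, which plays the role of the brief explosive pulse described above.

```python
# Vibroseis in miniature: a long single-frequency-at-a-time sweep whose
# autocorrelation is a short broadband pulse. Parameters are assumed.
import numpy as np

dt = 0.002                     # sample interval, s (assumed)
T = 12.0                       # sweep length, s (within the 10-14 s range)
f0, f1 = 8.0, 80.0             # start and end frequencies, Hz (assumed)

t = np.arange(0.0, T, dt)
phase = 2.0 * np.pi * (f0 * t + (f1 - f0) / (2.0 * T) * t**2)
sweep = np.sin(phase)          # one frequency at a time, low to high

# Correlating the sweep with itself compresses 12 s of vibration into a
# pulse a few tens of milliseconds wide, rich in all swept frequencies.
klauder = np.correlate(sweep, sweep, mode="full") / len(sweep)
center = len(klauder) // 2
win = int(0.05 / dt)           # +/- 50 ms around zero lag
frac = np.sum(klauder[center - win:center + win + 1]**2) / np.sum(klauder**2)
print(f"{frac:.0%} of the pulse energy sits within +/- 50 ms of zero lag")
```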

Over the last four decades, the land seismic industry has transitioned from 2D shooting with a few dozen channels to current 3D practice involving tens of thousands of channels. The higher channel count allows shooting with tighter trace spacing (bin size), better azimuth coverage and higher fold. A big shoot in 1970 took a few weeks; today it can take a year or more. This is not just a matter of money: although this new kind of data is very expensive, it is time that matters most. Large seismic surveys are now acquired on time scales equal to that of drilling several wells, and are even approaching lease terms. If it takes two years to shoot and one year to process a big 3D, then a three-year lease starts to look pretty short.

So what is the bottleneck? Why does it take so long to shoot these surveys? The main answer goes back to a practice that was born with the earliest seismic experiments. The idea is to lay out the receiver spread, hook everything up, then head for the first shot point. With the source at this location, a subset of the receivers on the ground is lit up, waiting for a trigger signal telling them to record ground motion. The trigger comes, the source acts, the receivers listen for a while, and data flows back to the recording system along all those channels. The first shot is done. Now the source moves to shot location 2, the appropriate receiver subset is lit up, the source acts, and so on.

The key feature of this procedure is that only one source acts at a time. Over the years, there has been great progress in the efficiency of this basic model. One popular version (ping-pong shooting) has two or more sources ready to go, triggering sequentially at the earliest possible moment and just barely avoiding overlap of the earth responses. There are many other clever methods but, in any form, this is single-source technology. It carries a fundamental time cost because, for good reason, no two sources are ever active at the same time.

If two sources are active at once, we will see overlapping data. In the 1980s, rules of cooperation were established among seismic contractors to minimize interference. Things stood pretty much right there until a recent resurgence of interest in overlapping sources, now termed simultaneous source technology (SST). Both land and marine shooting are amenable to SST, but let’s focus on land data.

The promise of simultaneous source technology is clear and compelling. If we can somehow use two sources simultaneously, then the acquisition time for a big survey is effectively cut in half. Of course, it is not quite that simple; some aspects of the survey time budget are unchanged, such as mobilization and demobilization, laydown and pickup times. But source time is a big part of the time budget, and SST directly reduces it. Field tests using four or more simultaneous sources have been successfully carried out in international operations, and the time is ripe for SST to come onshore in the US.
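A back-of-envelope sketch makes that time budget concrete. Every number below (shot count, per-shot cycle time, fixed overhead) is an assumption invented for illustration, not a figure from this column or from any real survey.

```python
# Toy survey time budget: fixed overhead is untouched by SST, while the
# source time is shared across however many sources act simultaneously.
# All numbers are illustrative assumptions.

N_SHOTS = 50_000       # shot points in a large 3D survey (assumed)
CYCLE_S = 600.0        # seconds of source effort per shot point (assumed)
FIXED_DAYS = 60.0      # mob/demob, laydown, pickup (assumed; unchanged by SST)

def survey_days(n_sources: int) -> float:
    """Total days = fixed overhead + source time divided among the sources."""
    source_days = N_SHOTS * CYCLE_S / 86_400.0 / n_sources
    return FIXED_DAYS + source_days

for n in (1, 2, 4, 12):
    print(f"{n:2d} simultaneous source(s): about {survey_days(n):5.0f} days")
```

With these assumed numbers, one source needs roughly 400 days while twelve need about 90; the fixed overhead is why the speedup is dramatic but never a clean factor of the source count.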

The high-tech aspects of vibroseis lead to elegant and accurate methods of simultaneous shooting. The details need not concern us here, but theory has been developed, and field tests carried out, showing various ways of shooting vibroseis SST data. The nature of the universe has not changed: When multiple sources overlap in time, the wave field we measure is a combination of the earth response to each source. But with some high-powered science, each response can be separated out almost entirely, just as we manage to isolate one conversation in a loud room from all the others. The data corresponding to each source is pulled out in turn to make a shot record, as if that source had acted alone. In about the time it took to make one shot record the old way, we have many shot records. A survey that would take a couple of years with a single source could be done in a couple of months by using, say, 12 simultaneous sources: an amazing case of “If you can’t fix it, feature it.”
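One way to build intuition for the separation step is a toy experiment: give two simultaneous sources distinct pilot sweeps (here an upsweep and a downsweep, a deliberately crude encoding; real SST separation methods are far more sophisticated) and correlate the composite record with each pilot in turn. All reflector times and amplitudes below are invented for illustration.

```python
# Toy two-source separation by correlation with distinct pilot sweeps.
# Every parameter and reflector is an assumption made for illustration.
import numpy as np

dt, T = 0.002, 12.0
t = np.arange(0.0, T, dt)

def linear_sweep(f0: float, f1: float) -> np.ndarray:
    return np.sin(2.0 * np.pi * (f0 * t + (f1 - f0) / (2.0 * T) * t**2))

pilot_a = linear_sweep(8.0, 80.0)      # source A: upsweep
pilot_b = linear_sweep(80.0, 8.0)      # source B: downsweep

n_out = 2000                           # 4 s of listening time (assumed)
refl_a = np.zeros(n_out); refl_a[[300, 900, 1500]] = [1.0, -0.7, 0.5]
refl_b = np.zeros(n_out); refl_b[[450, 1100, 1700]] = [0.8, 0.6, -0.4]

# The geophone records one trace: the sum of both overlapping earth responses.
trace = np.convolve(refl_a, pilot_a) + np.convolve(refl_b, pilot_b)

# Correlation with pilot A compresses A's reflections to sharp pulses;
# B's energy stays smeared out as weak crosstalk (and vice versa for B).
rec_a = np.correlate(trace, pilot_a, mode="valid")
rec_a /= np.abs(rec_a).max()
print("A at its own reflectors:", np.round(rec_a[[300, 900, 1500]], 2))
print("A at B's reflectors    :", np.round(rec_a[[450, 1100, 1700]], 2))
```

In this toy case, correlating with pilot A returns its three reflectors at close to their true amplitudes while the downsweep energy appears only as low-level noise; the methods referenced above aim to drive that residual crosstalk down much further.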

For many years, we have been shooting 3D land seismic data to fit the pocketbook, making compromises at every turn. Physics tells us what should be done, but we do what we can afford. Bin sizes are too large, fold is too low, only vertical-component sensors are used, and azimuth and offset distributions are far from ideal. When physics and finance collide, finance wins.

But now the game is changing. With simultaneous sources, the potential is there to make a quantum leap in seismic acquisition efficiency. The data we really need for better imaging and characterization of tough onshore problems may actually become affordable.


C. L. Liner, a professor at the University of Houston, researches petroleum seismology and CO2 sequestration. He is the former Editor of Geophysics, author of the 2004 textbook Elements of 3D Seismology, and a member of SEG, AAPG, AGU and the European Academy of Sciences. To read a longer version of this column, visit his blog at http://seismosblog.blogspot.com/2009/12/bell-choir-seismology.html.


Comments? Write: cliner@uh.edu