
May 1998 Vol. 219 No. 5 
Feature Article 

What's new in geophysics / geology

Five new technologies help the E&P industry acquire, process and interpret seismic and log data to better evaluate and manage oil / gas reservoirs

Robert E. Snyder, Editor

The science of utilizing massive new data acquisitions, both from seismic surveys and well logs, to help operating and contracting / service companies get the truest picture of what a potential reservoir looks like, and / or what is happening as it is produced, is advancing by giant strides every year. The integration of services needed to acquire and evaluate the data is a major part of the operation.

To aid in that process, here are five innovations from the service sector and from operator / service-sector partnerships and alliances that are helping. These include: 1) automated 3-D software from GeoQuest to better define faults; 2) the definition and status of the DeepLook industry collaboration; 3) raster logs — what they are and what they can do for you; 4) a new computing language called Java, from Sun Microsystems; and 5) how Western Geophysical used solid streamers and a worldwide computing network to achieve a record-time offshore data turnaround. WO


Automated 3-D software interprets fault systems

Rutger Gras, GeoQuest, Houston

Accurate interpretation of fault surfaces from a three-dimensional perspective can be done in a time-efficient manner with the use of an automated 3-D fault interpretation workflow.

Today's high-resolution 3-D seismic data often reveals the complex relationships between faults. We know certain rules apply to faults: they almost never occur in isolation, and they are generally members of a logically arranged fault family, which reflects the regional tectonic setting as well as local influences. Faults also are finite-dipping surfaces with a central area of maximum displacement, which decreases from this center toward the tip lines at the edge of the fault surface. By using an automated 3-D fault interpretation workflow that honors these rules, geoscientists can expect significant gains in precision and efficiency.
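
To make the displacement rule concrete, the short sketch below evaluates a simple elliptical displacement model in which throw is greatest at the fault center and tapers to zero at the tip lines. The grid, fault dimensions and maximum throw are illustrative assumptions, not values from any survey discussed here.

```python
# Minimal illustration of the displacement rule, assuming an elliptical
# displacement model: throw is maximum at the fault center and tapers to zero
# at the tip lines. Fault half-length, half-height and maximum throw are
# made-up numbers.
import numpy as np

half_length, half_height, max_throw = 2000.0, 800.0, 120.0   # metres

# Sample the fault surface in its own (strike, dip) coordinates.
strike = np.linspace(-half_length, half_length, 81)
dip = np.linspace(-half_height, half_height, 33)
S, D = np.meshgrid(strike, dip)

# Normalized radial distance from the displacement center (1.0 = tip line).
r = np.sqrt((S / half_length) ** 2 + (D / half_height) ** 2)

# Elliptical taper: full throw at the center, zero at and beyond the tip lines.
throw = max_throw * np.sqrt(np.clip(1.0 - r ** 2, 0.0, None))

print(f"Throw at center: {throw.max():.0f} m; at the tip lines: {throw[0, 0]:.0f} m")
```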

The challenge: Interpreting faults in 3-D. Because of better subsurface detail due to 3-D seismic technology, interpretation of fault surfaces in a true three-dimensional perspective has become increasingly challenging. The challenges are three-fold. First, for any reservoir in a faulted terrain, a principal structural feature is the fault trace — the intersection of a horizon and a fault surface in map view.

Second, the displacement (or throw) of a fault in relation to any horizon surface is of key importance. Because of the displacement, a reservoir may be sealed across a fault, or the reservoir may be segmented into discrete flow units by permeability barriers. The displacement vector is a 3-D property due to displacement variations, both in vertical and horizontal sense.

Third, faults may crosscut each other in fault-fault intersections, in which case it is necessary to determine the age relationship between faults to identify which fault displaces the other one. The established rule then has to be honored in the entire 3-D volume. Extrapolation of fault surfaces should not extend beyond younger intersecting fault surfaces.

Fault traces, displacement patterns and fault-fault intersections all follow logically from the spatial arrangements of fault surfaces and intersected horizon surfaces. The entire collection of fault surfaces forms a systematic fault framework. Definition of this framework is a requirement for correct and truly integrated fault / surface interpretation.

The solution: Framework 3D. Each of the three elements noted above — fault trace, displacement and fault-fault intersection — may be interpreted for any horizon surface in isolation. Because of the systematic dip and displacement variations of each fault surface, every horizon surface has a unique fault trace and displacement pattern. However, if the precise extent, geometry and intersection rules of all the faults in a 3-D volume were known, all the above parameters could be calculated automatically, and significant improvements in accuracy and efficiency could be made.

GeoQuest has developed sophisticated software, named Framework 3D, to automate the fault interpretation process. The system provides a consistent fault framework that is intersected with all interpreted surfaces, and it calculates fault traces and displacement patterns for these horizons. Infill horizons generated by conformance mapping will automatically have a geometrically correct fault interpretation. In addition, having the fault framework in place allows for more precise volumetrics.
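
As a rough illustration of what intersecting a fault surface with an interpreted horizon involves, the sketch below finds a map-view fault trace where a gridded horizon and a locally planar fault surface reach the same depth. This is not GeoQuest's Framework 3D algorithm; the surfaces, grid spacing and fault geometry are invented for the example.

```python
# Hedged sketch (not the Framework 3D algorithm): a map-view fault trace is
# the locus where a gridded horizon surface and a fault surface reach the same
# depth. Both surfaces below are invented analytic grids.
import numpy as np

x = np.linspace(0.0, 5000.0, 101)          # map coordinates, metres
y = np.linspace(0.0, 5000.0, 101)
X, Y = np.meshgrid(x, y)

# Hypothetical horizon: a gentle dome, depths in metres subsea.
z_horizon = 2250.0 + 2.0e-5 * ((X - 2500.0) ** 2 + (Y - 2500.0) ** 2)

# Hypothetical south-dipping fault, approximated locally by a plane.
z_fault = 1000.0 + 0.6 * Y

# The trace lies where the depth difference changes sign between grid nodes.
diff = z_horizon - z_fault
trace = np.zeros_like(diff, dtype=bool)
trace[:, :-1] |= np.sign(diff[:, :-1]) != np.sign(diff[:, 1:])
trace[:-1, :] |= np.sign(diff[:-1, :]) != np.sign(diff[1:, :])

trace_xy = np.column_stack([X[trace], Y[trace]])
print(f"{len(trace_xy)} grid nodes lie on the fault trace")
```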

Case study: Gulf of Mexico survey. A large 3-D survey in the West Cameron area of the Gulf of Mexico was analyzed using the new software. The structural style of the area consists of predominantly south-dipping extensional listric faults. Individual faults often overlap and coalesce. A second-order fault class consists of north-dipping faults; generally, these occur between, and terminate against, the first-order south-dipping faults. First-order faults intersect the entire volume and can be traced from the deepest reflectors all the way up to the surface. Second-order faults are much more limited, both in areal extent and displacement.

Production is almost exclusively from structural traps, both tilted fault block and low-side fault plays, reservoired in a multitude of stacked Tertiary clastic pay zones. Clearly, correct and efficient handling of the fault framework in this area is critical for successful exploration / development.

3-D fault interpretation workflow. Initially, the faults were mapped as part of the seismic interpretation of a mid-Miocene horizon, using: both inlines and crosslines, correlation maps based on timeslices, and interpretation in the three-dimensional workspace. Wherever possible, geophysical fault interpretation was confirmed through fault gaps seen on geological cross sections. Once the interpretation was completed, individual fault cuts for 27 fault surfaces were loaded into the new software application using CPS-3 mapping and modeling software, and subsequently gridded, as shown in Fig. 1. During the gridding, several inconsistencies in fault interpretation were automatically highlighted and could be interactively remedied.

At this point, a raw, non-truncated fault framework was generated. As a next step, the faults were automatically sorted by geometry and the major-minor relationship was established. The software determined the correct fault hierarchy without user interaction; however, modifications in crosscutting relationships could have been made at this point if needed. Rules generated in this first pass were applied to produce a truncated fault framework.
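
The sorting-and-truncation idea can be sketched in a highly simplified form, with map-view fault traces reduced to 2-D line segments and segment length used as a stand-in for fault extent. None of this reflects the CPS-3 or Framework 3D internals; the fault names and coordinates are hypothetical.

```python
# Highly simplified sketch of fault sorting and truncation (not CPS-3 or
# Framework 3D). Fault traces are 2-D map-view segments; segment length is a
# crude proxy for areal extent, so the longer fault is treated as first-order.

def seg_intersection(p1, p2, q1, q2):
    """Intersection point of segments p1-p2 and q1-q2, or None if they miss."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0.0:
        return None                       # parallel segments
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def length(seg):
    (x1, y1), (x2, y2) = seg
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# Hypothetical map-view fault traces (names and coordinates are made up).
faults = {
    "F1": ((0.0, 2000.0), (5000.0, 2500.0)),   # long: treated as first-order
    "F2": ((2500.0, 0.0), (2600.0, 4000.0)),   # shorter: second-order
}

# Establish a crude major-minor hierarchy by extent.
major, minor = sorted(faults, key=lambda f: length(faults[f]), reverse=True)

# Truncate the minor fault where it crosses the major one, keeping its longer
# limb (a stand-in for keeping the side nearer its displacement center).
p1, p2 = faults[minor]
hit = seg_intersection(p1, p2, *faults[major])
if hit is not None:
    limb_a, limb_b = (p1, hit), (hit, p2)
    faults[minor] = max(limb_a, limb_b, key=length)
    print(f"{minor} truncated against {major} at ({hit[0]:.0f}, {hit[1]:.0f})")
```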

Fig. 2 shows how intersection of the fault framework with the interpreted horizon surface resulted in an accurately cut and gridded surface with well-defined contacts. The entire workflow and associated data results, Fig. 3, were saved and can be repeated and refined with additional interpreted surfaces. The process described was completed in less than one day.

Benefits. Generating a fault framework has huge benefits in the interpretation of faulted areas. A validated fault framework truly honors the three-dimensional character of faults and their interaction with surfaces and other faults. Truncation ensures that faults do not crosscut each other; and extrapolation ensures that faults are correctly extended beyond the original interpretation.

During framework generation, erroneous fault interpretation reveals itself automatically, allowing for rapid interpretation quality control. Integration of the fault framework with the horizon surfaces guarantees correct fault traces and displacement vectors, including for infill horizons that may be required downstream in the workflow. The process is repeatable with minimum effort, saving valuable time. And by properly honoring the volume below the fault shadow, the fault framework leads to correct volumetrics and, hence, more precise estimation of reserves / resources. WO


DeepLook researches a new strategy: Neural networks

Ed Stoessel, BP Exploration; Virginia Johnson and Leah Rogers, Lawrence Livermore National Laboratory; and Jay Scheevel, Chevron

DeepLook, a consortium of, at present, eight oil companies and five oil industry service companies, is supporting research in emerging technologies that will aid reservoir management decisions, particularly in recovering unswept oil.1 One of the initial projects involves rigorously searching for optimal field development strategies using artificial neural networks, a technology that is the computational analog of the function of brain cells. Researchers at Lawrence Livermore National Laboratory have developed neural network methods for optimal well design to clean up contaminated groundwater,2,3,4 and are now adapting their methods to oil recovery strategies.

The neural network. This is a type of computing that takes its name from the networks of nerve cells, i.e., neurons, in the brain. Artificial intelligence researchers borrowed some of the human brain's qualities, i.e., simple processors, yet extremely interconnected, and developed new computational algorithms.5 Neural networks excel at some problems that are cumbersome with traditional computational approaches; they are particularly adept at pattern recognition and classification. As evidence, one may note that individual electrical connections in the human brain are quite slow relative to today's supercomputers, yet even the youngest child can recognize a parent despite a change in his or her haircut.

This act of simple pattern recognition is quite challenging to duplicate using traditional ways of computing. In contrast, it is quite possible to "train" neural networks much as we train a child, by reasonable examples. This is known as supervised learning.

In the following approach, the reasonable examples will be input-output relationships known from runs of the reservoir simulator. The inputs will be well locations and completion histories, for both producers and injectors, chosen from a finite list of possible well locations. The neural network outputs will be diagnostic parameters, e.g., cumulative oil, gas or water production, used to judge the success of each hypothetical development strategy.
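
A hedged sketch of that supervised-learning step appears below. A toy function stands in for the reservoir simulator, and scikit-learn's MLPRegressor is used purely for brevity; neither is the consortium's actual code, and all run counts and benefit numbers are invented.

```python
# Hedged sketch of the supervised-learning step (not the LLNL code). A toy
# function stands in for the reservoir simulator, and scikit-learn's
# MLPRegressor is used only for brevity. In the real workflow the training
# targets come from actual flow-simulation runs of the reservoir model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_LOCATIONS, N_RUNS = 25, 500                 # candidate injector sites, simulator runs
benefit = rng.uniform(0.5, 2.0, N_LOCATIONS)  # hypothetical per-location sweep benefit

def toy_simulator(scenario):
    """Stand-in for the flow simulator: 'cumulative oil' for one scenario."""
    return scenario @ benefit * (1.0 - 0.05 * scenario.sum())

# Knowledge base: each row flags up to four locations converted to injectors.
scenarios = np.zeros((N_RUNS, N_LOCATIONS))
for row in scenarios:
    row[rng.choice(N_LOCATIONS, size=rng.integers(1, 5), replace=False)] = 1.0
cum_oil = np.array([toy_simulator(s) for s in scenarios])

# Train the proxy: a small network mapping development scenario -> production.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(scenarios[:400], cum_oil[:400])
print("held-out R^2:", round(net.score(scenarios[400:], cum_oil[400:]), 3))
```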

Figure 4. The well optimization process.

Reservoir modeling optimization. Many techniques have been used historically to decide where to drill the next production or injection well. In this effort, much of the detailed character of the reservoir is incorporated in a complex geologic model, then upscaled, i.e., coarsened or simplified — appropriately, one hopes — into a reservoir model used by a numerical simulator to predict future performance histories for various field development scenarios. Whatever their current limitations, reservoir models combined with flow simulation are the current state of the art for integration of field data with fluid transport theory aimed at predicting future reservoir behavior.

Since we believe that such reservoir models will continue to be used to make reservoir management decisions, how do we best exploit or leverage the considerable effort that goes into construction and computationally-intensive calibration of these models during our search for an optimal development scenario?

Neural-network-based optimization is one path to exploiting an existing reservoir model. Optimization techniques are commonly used to find strategies for industrial efficiency, providing a formal, mathematical search that maximizes the objectives while considering system constraints. In this arena, example objectives might be keeping costs down or production up, and constraints might be a total water production limit or a limit on the number of wells a drilling platform can support.

In a well-field design problem, optimization techniques need to evaluate the success of many individual take-point and injection scenarios. The dilemma is that part of the evaluation of just one of these scenarios is a call to the reservoir simulator, which may take several hours or even days to run on a fast system. How can one possibly evaluate thousands of development strategies in the search for an outstanding one? The use of neural networks offers one possible solution to this difficult question.

Training the network. A neural network can be trained to predict essential elements of flow simulation results from the reservoir model for different well combinations. Using this technique, one may rely on the neural network rather than a full numerical flow simulation of the reservoir model during the search for an optimal development scenario. By using the neural network, different development strategies can be evaluated in fractions of a second rather than several hours.

To provide training data for the network, an initial computational investment is required, namely, running a suite of actual flow simulations that would cover the range of possible development scenarios and, hopefully, production extremes. Once this base of actual simulation data is obtained, training runs are performed allowing the network to "fill-in" performance predictions for scenarios bracketed, but not specifically modeled, by the flow simulator.

These simulation runs can be done in parallel. Once the knowledge base is created, i.e., the training and testing set of actual flow simulation results, it may be recyclable. In other words, if one wants to change the objective function or constraints, a new search can begin without more runs of the reservoir model.

Gulf of Mexico test. Initial DeepLook projects are focusing on the Pompano field in the deepwater Gulf of Mexico as a test site for the developing methodologies; BP operates this field, which has been in production for a couple of years. The company has developed and calibrated a reservoir model for the Miocene section; and flow simulations have been run using Landmark's VIP software.

One management question posed by BP for the field is whether a water injection program will help maximize production. This waterflood question has been used as the cornerstone in formulating field management objectives. The search is for which combinations of up to four injector wells from 25 locations — preselected based on high transmissivity and spatial coverage — would maximize production and minimize cost. To support this search, a knowledge base of more than 500 flow simulation runs of the Pompano VIP reservoir model has been created. This knowledge base has been used to train neural networks to predict oil, gas and water production levels over a seven-year production history.

A genetic algorithm is being used as the search driver. The algorithm, based on concepts of natural selection and "survival of the fittest," creates patterns in a way similar to that in which chromosome division and recombination create new combinations of genes.6 The version of the genetic algorithm being used employs the operators of reproduction, crossover and mutation, with both random and pattern fitness-ranking components, to create new combinations of well suites and development histories.

The genetic algorithm relies on neural network predictions of how much oil, gas and water will be produced from these well combinations. Cost calculations involve the conversion of producer to injector, or the drilling of a new injector combined with additional refinements of cost, based on where the location is within the reservoir and relative to existing facilities.
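
The sketch below shows the genetic-algorithm mechanics described here, with a stub function standing in for the trained neural network proxy and an invented cost-per-injector figure. Only the selection, crossover and mutation loop is the point; none of the numbers come from the Pompano study.

```python
# Sketch of the search driver, assuming a trained proxy already exists. The
# predict_cum_oil() stub stands in for the neural network's prediction, and
# the cost numbers are invented; only the genetic-algorithm mechanics matter.
import random

N_LOCATIONS, MAX_INJECTORS = 25, 4
random.seed(1)
benefit = [random.uniform(0.5, 2.0) for _ in range(N_LOCATIONS)]

def predict_cum_oil(wells):
    """Stand-in for the neural-network proxy (fractions of a second per call)."""
    return sum(benefit[i] for i in wells) * (1.0 - 0.05 * len(wells))

def cost(wells):
    return 1.2 * len(wells)          # hypothetical cost per injector conversion

def fitness(wells):
    return predict_cum_oil(wells) - cost(wells)

def random_scenario():
    k = random.randint(1, MAX_INJECTORS)
    return tuple(sorted(random.sample(range(N_LOCATIONS), k)))

def crossover(a, b):
    pool = list(set(a) | set(b))
    k = min(random.randint(1, MAX_INJECTORS), len(pool))
    return tuple(sorted(random.sample(pool, k)))

def mutate(wells):
    wells = list(wells)
    wells[random.randrange(len(wells))] = random.randrange(N_LOCATIONS)
    return tuple(sorted(set(wells)))

population = [random_scenario() for _ in range(60)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                     # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
print("best injector locations:", best, "fitness:", round(fitness(best), 2))
```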

The project is expected to complete its initial phase in June 1998. Continuation may involve issues of uncertainty and using neural network prediction of actual fluid production-history curves instead of diagnostic summary numbers for total production.

The DeepLook Collaboration

The industry collectively faces a major and growing problem in how best to deal with unswept oil. In 1995, BP's Ed Stoessel issued "the call" to prospective collaborators on Fluid Imaging Technology at the SEG Development and Production Forum; a follow-up proposal was issued at the 65th Annual Meeting. An "Oil Industry Outreach" program was inaugurated in early 1996; and that summer, multi-company technical teams conducted brainstorming tours of 18 "far market" facilities. The group's initial projects began in fourth-quarter 1997.

Figure 5. DeepLook: A new model for collaboration.

Stoessel says, "DeepLook is the response of our oil company members (BP, Chevron, Conoco, Mobil, Saudi Aramco, Shell, Texaco and Unocal) to the unswept oil problem. We believe we represent a new direction in the genesis of oil industry technology."1 Projects focus on long-range indirect detection of hydrocarbons; sensor technology for continuous downhole and in-reservoir monitoring; and technologies for physical deployment of sensors, data mining, integration, fusion, modeling and interpretation.

In describing DeepLook, Stoessel says, "We prefer 'collaboration' because the term implies actively working together with a common purpose, even though the collaborators are competitors in other contexts. The common purpose is to access technologies that will deliver the mission." There are three major components of DeepLook: 1) member oil / gas producers, 2) member industry service companies, and 3) suppliers of "far market" technology. DeepLook is open to additional membership at any time. Companies interested in membership may participate in meetings with guest status. For further information, contact Ed Stoessel at: E-mail: stoesset@bp.com, Tel: 281 560 3266.

Literature Cited

  1. Stoessel, E. T., "Access and integration of emerging technologies: Keys to successful imaging of reservoir fluids in depth and time," paper OTC 8761, presented at the Offshore Technology Conference, Houston, May 4–7, 1998.
  2. Rogers, L. L. and F. U. Dowla, "Optimal groundwater remediation using artificial neural networks with parallel solute transport," Water Resources Research, 30 (2), 1994, pp. 457–497.
  3. Rogers, L. L., V. M. Johnson and F. U. Dowla, "Optimal field-scale groundwater remediation using neural networks and the genetic algorithm," Environmental Science and Technology, 29 (5), 1995, pp. 1,145–1,155.
  4. Johnson, V. M. and L. L. Rogers, "Location analysis in ground-water remediation using neural networks," Ground Water, 33 (5), 1995, pp. 749–758.
  5. Rumelhart, D. E. and J. L. McClelland, Parallel distributed processing: Explorations in the microstructure of cognition, MIT Press, Cambridge, Massachusetts, Vol. 1, 1986, pp. 318–362.
  6. Goldberg, D. E., Genetic algorithms in search, optimization and machine learning, Addison-Wesley, Reading, Massachusetts, 1989.

Raster log technology: A new standard for log display / analysis

Scott L. Montgomery, Petroleum Consultant, USA

Well logs comprise the data backbone of the petroleum industry. Log data is routinely used by geologists, geophysicists and engineers in a host of applications, from rank exploration to tertiary recovery. Current estimates place the total number of well logs worldwide at more than five million. The vast majority of this data exists in hardcopy form, i.e., paper and, to a lesser extent, microfiche, placing considerable limitations on storage, access and preservation of such data, much of which is historical and irreplaceable. Any technology capable of reducing these restrictions, while enhancing efficient use of log data, will provide a resource of enormous import and impact.

The widespread introduction of digitizing technology over the past two decades has marked an important advance in this area, but it does not constitute a general solution. Though digital (vector) logs allow for excellent online manipulation, editing and quantitative analysis, they remain time-consuming and expensive to produce. Interpretive judgment is often required for log digitization, especially with older data. Moreover, existing and foreseeable applications require quantitative analysis of only selected intervals, making whole-log digitization frequently cost-ineffective.

Recent advances in raster log technology provide a high-quality, low-cost alternative to both hardcopy and full-digital logs. Pioneered by Interpretive Imaging Inc. of Denver, Colorado, raster log technology allows for full online log display and manipulation, with integrated depth-calibration, cross section building, well data posting and zone-selective digitization capabilities. Per-log costs are comparable to those of paper logs.

In addition, the technology is designed for use on an ordinary PC, obviating any requirement for specialized equipment. The ease of use and flexibility of this technology, coupled with its potential employment in all existing and future log-related applications, make it likely to become a universal standard in the industry.

Capabilities: Basic systems. Raster logs are scanned versions of hardcopy logs with multiple software enhancements. A log of this type is known as an "intelligent raster," the most advanced version of which is the smartRASTER log produced by Interpretive Imaging. As a whole, raster technology can be divided into two systems. In the basic system, depth-calibrated well logs, with or without integrated tops data, can be displayed, examined and annotated individually, or placed side-by-side at a chosen spacing for correlation purposes.

Logs can be: 1) enlarged, reduced and overlapped; 2) standardized or altered in vertical and horizontal scale; 3) located on an accompanying map with a cross section line; and 4) edited in terms of newly designated zones of interest, e.g., porosity zone, marker shale, facies boundary, which can then be entered into the existing tops database. All relevant data is compatible with, and easily exported to, existing software mapping packages. Displayed raster logs can be instantly hung from any chosen datum and at any scale; both parameters may be changed with a single click.
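
A back-of-the-envelope sketch of the depth-calibration idea behind such displays is shown below: two depth ticks picked on the scanned image define a linear pixel-to-depth mapping, after which the log can be hung on any datum. The pixel rows, reference depths and datum are invented; the actual smartRASTER calibration scheme is proprietary.

```python
# Hedged sketch: depth-calibrating a scanned log image and hanging it on a
# datum. Pixel rows, reference depths and the datum are made-up numbers; the
# actual smartRASTER calibration is proprietary.

# Two calibration points picked on the scanned image: (pixel_row, depth_ft).
(top_px, top_depth), (base_px, base_depth) = (120, 5000.0), (4920, 5600.0)

ft_per_pixel = (base_depth - top_depth) / (base_px - top_px)

def pixel_to_depth(row):
    """Measured depth (ft) of any pixel row, by linear interpolation."""
    return top_depth + (row - top_px) * ft_per_pixel

def depth_to_offset(depth, datum_depth, ft_per_inch=100.0):
    """Display offset (inches below the hanging datum) for a given depth."""
    return (depth - datum_depth) / ft_per_inch

# Hang the log on a hypothetical stratigraphic datum (a correlated marker top).
datum = 5150.0
for row in (120, 2500, 4920):
    d = pixel_to_depth(row)
    print(f"pixel row {row}: {d:7.1f} ft, {depth_to_offset(d, datum):+6.2f} in. from datum")
```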

Basic raster systems have limited drawing capabilities and do not produce finished cross sections. However, they take up a minimum of RAM and are extremely easy to use in applications that benefit from rapid well-to-well correlation; individual log analysis; systematic, e.g., field-wide, identification of net or bypassed pay; reservoir quality distribution, and the like.

Advanced systems. Advanced raster systems include the capabilities of the basic system, plus data posting, cross section building enhancements, selective digitization and import of digital LAS log curves. Logs for display can be retrieved either from a map with plotted locations or from a candidate well list for a chosen area; a range of search functions is also available for locating and filtering desired wells. In cross section view, inter-well correlations can be given separate color designations coded to each specific interval.

Sections can be scrolled vertically and horizontally, re-sized, re-scaled, and re-hung on a new stratigraphic or structural datum, cropped to a selected vertical portion, and printed with presentation-style clarity, all with a point-and-click operation. Particularly helpful is the ability to instantly reposition any individual log vertically, to change well log order, add new logs, delete others and space logs horizontally proportional to actual map distance or along a line of projection.

The most recent software also adds the ability to post production test, DST, perforation, casing point, mud log and other information, either on a borehole schematic image or in the well log depth track column. Such capability suggests that highly useful "cross sections" of any such data might be built. For example, when companies enter a new play or take over an existing field, engineers and drillers need information on such parameters as completion practices, perforation intervals, casing design, mud weight, ROP and other data.

Display of such information for many wells, in a cross section format, would be valuable in almost any circumstance. Expanded possibilities along these lines can be envisioned for such data as sonic velocity, subsurface pressures, BHT, well treatments, and more.

It is the integration of selective digitizing capabilities and the ability to import LAS log curves, however, that makes advanced raster technology an excellent candidate for a new universal standard in log data format. There are two reasons for this. First, selective digitization — which, in a sense, is what geologists do in any case — is the only cost-effective, affordable means of taking advantage of LAS technology in a routine manner. Second, as digital logs represent a format largely incompatible with hardcopy logs, the ability to combine these into a single, comprehensive log display and manipulation package marks a significant advance.
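
To illustrate what zone-selective digitization involves, the sketch below converts pixel picks inside a single linearly scaled track to curve values over one zone of interest and writes simple depth / value lines in a loosely LAS-style layout. The track scale, picks and output format are illustrative assumptions, not Interpretive Imaging's format.

```python
# Hedged sketch of zone-selective digitization: converting pixel picks inside
# a single linear-scaled track to curve values over one zone of interest, then
# emitting simple depth/value lines. All numbers are illustrative.

# Track calibration: left and right edges of the track in pixels and in
# engineering units (e.g., a gamma ray track scaled 0-150 API).
left_px, right_px = 300, 900
left_val, right_val = 0.0, 150.0

def pixel_to_value(col):
    return left_val + (col - left_px) * (right_val - left_val) / (right_px - left_px)

# Picks over the selected zone only: (measured depth in ft, curve pixel column).
zone_picks = [(5200.0, 450), (5202.0, 610), (5204.0, 580), (5206.0, 820)]

print("~A  DEPT.FT     GR.GAPI")   # loosely LAS-style header line (illustrative only)
for depth, col in zone_picks:
    print(f"{depth:9.1f} {pixel_to_value(col):10.2f}")
```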

Case histories. Raster log technology is presently used in a wide variety of applications by both major and independent operators. In a number of cases, the technology has allowed companies to conceive and execute projects otherwise not possible due to time and cost constraints.

San Juan basin. An excellent example is work by Amoco Production Co. in the San Juan basin, located in NW New Mexico and SW Colorado. This work was intended to discover new pay opportunities and improve existing production within a large acreage position having a significant well density.

Using the basic raster system, Amoco geologists were able to systematically perform the following operations on a section-by-section, township-by-township basis: 1) identify pay zones and perforated intervals for productive formations in each well within company acreage; 2) establish gamma ray, log porosity and resistivity cutoffs for pay; 3) pick net pay thickness and respective porosity and resistivity data; 4) compute and map gas-in-place; 5) plot pay zones vs. perforations for each well; 6) map unopened and bypassed pay for each well; and 7) identify optimal areas for recompletion and / or new drilling. Part of an end-product map generated by this study is given in Fig. 1, showing total pay, thickness of unperforated pay, and ranked quality of pay.

Green River basin. The map and cross section of Fig. 2 and Fig. 3 were generated by the smartSECTION advanced raster system for an ongoing study of overpressured Cretaceous sandstone reservoirs in SW Wyoming's Green River basin. This area is currently the focus of considerable interest in the U.S., due to large estimated volumes of natural gas at moderate depths, in an area where data is fairly abundant and pipeline and treatment facilities are well established.

The map of Fig. 2 includes detailed well locations and a cross section line, whose easternmost well has been selected (highlighted in red) for display of the data in the boxes to the right. This information includes formation tops, DST, IP and production data, which is color coded for simultaneous display on the borehole schematic image. The map also indicates (red asterisks within well symbols) all those wells with relevant data on the chosen zone of interest, in this case the Upper Cretaceous Almond formation.

The cross section of Fig. 3 posts this same data in color-coded form (in this case perforation and IP interval), plus well header, inter-well spacing, location and other information. Designated tops are labeled on each log. As shown, the software will draw in complex formation or facies boundaries where required. This helps highlight updip pinchout of the lowermost Lewis shale zone and a related change in sand development within the uppermost Almond.

The ability to generate a cross section such as this in a matter of minutes has allowed for rapid three-dimensional analysis of the discontinuous yet abundant Almond sandbodies, deposited in a range of shallow marine, estuarine and coastal — e.g., barrier island — settings. Most sands, it should be noted, are less than 50 ft thick, making use of 2-D seismic data tentative or unreliable for detailed mapping. Employing advanced raster technology in this area thus satisfies the needs of conventional log analysis, while providing opportunities for enhanced, practical reservoir studies beyond those allowed by hardcopy logs, and at a fraction of the cost required for digital logs. WO


New computing language opens E&P opportunities

In an article presented in a question-and-answer format, Sun Microsystems Inc.'s Mark Tolliver, vice president, market development, discusses how his company's two-year-old computing language, called Java, raises the industry's ability to process / access critical seismic data. The new computing language offers software developers the ability to write applications once that will run on any computing system, regardless of operating system or instruction set.

This "Write Once, Run Anywhere" benefit has attracted strong interest among companies around the world because it means potentially saving millions of dollars in developing / deploying software. You don't have to waste time / money writing for several operating environments. It also offers improved network maintenance because you can "fix it once, fix it everywhere."

Because the petroleum industry is using applications for geographically dispersed asset teams — and employing a variety of computer system environments — the new technology can fill an existing void in E&P software development and support. Here's why the technology is well-suited to oil / gas exploration and why the developer is so sure its invention will be a big part of corporate networks.

Why is it so important for the petroleum industry to pay attention to this new computing language?

Most information system (IS) professionals know that getting different computing systems to operate in concert with one another — particularly on large seismic data and other exploration projects — can be a logistical / budgetary nightmare. So a lot of forward-looking companies are turning to network — or Web-based — computing to deal with this issue. Network computing has become the standard because it offers a unique opportunity to lower overall information technology (IT) costs, while significantly improving critical seismic data processing / access on projects.

The boom of the Internet and World Wide Web has extended that model so that, today, we can think in terms of: 1) a global network (the Internet), 2) a network that spans the globe, but only within our own company (an intranet), or 3) a network between our company and select customers or vendors (an extranet).

Why is this important?

In the old computing model, systems companies relied on the concept of a captive audience. A CIO would pay the price of admission by purchasing that provider's platform, and then the doors would shut behind him and he would be locked in. Once you bought into the platform provider's system, you would have to dance to their tune.

Many IS managers have been there before. When you're locked in and your applications work only on one platform, there is no choice. You're at a significant competitive and economic disadvantage. And your developers have to write applications for each of your platforms, spending time on interoperability that could be better spent addressing the scientific issues the software is meant to solve.

In the new model, innovation, creativity and time-to-market are becoming more important factors in determining product success. Every geoscientist or oil / gas executive should understand the importance of this.

The platform no longer locks a company into a certain application pool. The platform purchase is based on the feature set, the technology's integrity, and its reliability and scalability. A company does not have to accept the concern that, to get the latest and most appropriate technology, it will have to rip out and replace everything in favor of the "total solution." In fact, most oil companies are looking for ways to squeeze even more functionality out of what they already have in place for E&P.

With that in mind, many oil companies have stopped or drastically reduced in-house development of E&P applications. Instead, they are buying E&P software from independent software vendors or exploration service companies. These software vendors are writing competitive applications and targeting several hardware platforms. At this layer, the end users are no longer tied to a particular platform. When it comes time to upgrade, they can choose from any of the hardware vendors supported by the software vendor, and transfer the license should they change hardware.

Software vendors now have the burden of supporting the broad range of platforms that can be found in an E&P organization. Although Java currently does not have the number-crunching performance to do the bulk of the seismic processing, it is a good choice for the user interface. Since 30% to 40% of the effort of writing and maintaining an E&P software package involves the graphical user interface (GUI), the new computing language is a good choice because of its "Write Once, Run Anywhere" property and its ease of development / deployment.

What do users get with this new technology?

The software vendors get the benefit of only maintaining one version of their GUI, not one for each platform, as well as a reduced cost of development / maintenance. The oil company gets the benefit of not having to worry about the end user's choice of desktop system. The desktop can be chosen to fit the user's primary tasks, without losing the ability to run the applications targeted at secondary tasks, i.e., the company does not have to tailor the licenses to the current mix of desktop devices.

For example, with the technology, you could have geoscientists or E&P professionals working on the new Sun Ultra 60 workstations in one office, an SGI machine in another, and an H-P machine in a third — and they're all going to be able to share the same set of applications.

Under this new model, developers write applications in the new programming language using the enterprise Java application programming interfaces for accessing existing middleware services and applications. These applications can then be deployed on any server, running any operating system, with any installed base of middleware. The bottom-line benefit here is that you're extending your technology investment life and leveraging the existing infrastructure. So the platform is breathing new life into E&P hardware, operating systems and software, while giving enterprises critical access to existing databases and applications.

Do you think this is widely understood by the petroleum industry, specifically the E&P community?

The industry as a whole already understands the new rules of the IT game, which are characterized by one word: Open. Open interfaces, open file formats, open protocols. No one owns them; no one entity controls them. That means that industry is competing on implementations based on these open standards. This drives competition; competition drives innovation; and innovation drives customer choice.

These benefits have spurred a new movement in computing toward open standards, and that's why you are seeing so much fuss over this new technology in not only petroleum and E&P industries, but in other industries as well.

In the petroleum industry, there is an alliance called OpenSpirit that's been charged with devising an application-independent software platform to enable the plug-and-play integration of software applications across the exploration and production life cycle.

Note. The OpenSpirit Alliance comprises E&P operators and technology companies, including Shell, Elf Aquitaine, Statoil, Chevron, BG plc, CGG Petrosystems, IBM, Jason GeoSystems, de Groot-Bril Earth Sciences, Foster Findlay Associates, Prism Technologies and Shared Earth Technologies. For more information, see the OpenSpirit Web site at http://www.prismtechnologies.com/products/openspirit/index.html.

This alliance is working on a framework to provide both generic E&P components, such as coordinate transformations, and components that will be specific to subsurface interpretation. They've already decided that the GUI software components are going to be done in the new programming language because of its cross-platform portability and because it will support Web-based applications. Work is underway on the first version of the platform, to be out later this year.

Figure 9. Technical director at Interactive Network Technologies, Inc., Houston, works with his company's Java-based tools for writing visualization software.

So when are developers who are writing software for the petroleum industry going to climb aboard?

It's already happening. Many developers are closely watching consortiums like OpenSpirit. Meanwhile, a number of companies are writing applications in the new programming language. In Houston, a company called Interactive Network Technologies, Inc. (INT) has a powerful 2-D graphics tool kit, Carnac, for writing visualization software. INT took Carnac's main C++ components and wrapped them with a thin layer of Java technology, making it possible to write a complete Carnac application or component entirely in the new programming language.

What is the biggest misperception about the new language in the market?

Some people in petroleum think the new tool is being promoted as a way to do heavy number crunching or other complicated, data-intensive work. For now, the benefits are really those already outlined: the ability to reduce total cost of ownership, speed application development / deployment, allow true interoperability among various platforms, and ease the administration burden.

The Internet has been mentioned; how would geoscientists / E&P professionals capitalize on the technology and the Web?

First of all, it should be mentioned that the new system's applications (or applets) can be fully integrated into Hypertext Markup Language (HTML) pages, and accessed via a World Wide Web browser with the Java Virtual Machine across a variety of operating environments and geographically-dispersed project teams.

Along those lines, the company has created JavaSeis, a prototype environment for delivery of seismic data processing services via the Web. This network — or Web-based — computing model provides a collaborative framework for accessing / processing seismic data within a heterogeneous, distributed computing environment. Previous attempts at developing such a framework have been based on C++ and message-passing libraries.

An example of how the technology can improve seismic processing applications can be seen at ARCO, which uses the ARCO Seismic Benchmark Suite (ASBS) as a seismic prototyping environment. Using the suite, which is designed to be flexible and extendible, an employee can manage seismic data as parallel distributed objects, using traditional procedural computing languages (C and Fortran). New processes are developed using "inheritance by copying templates."

As a result, system changes often require extensive modifications to all copies of a few basic system templates. Parallelism is managed through an application programming interface (API) defined by the services required for implementing common geophysical data processing algorithms.

The existing object-based structure of ASBS and the well-defined parallel behavior of many geophysical algorithms make this application suite a natural fit with the constructs defined in the new language. Currently, maintenance and portability of applications in ASBS are made more difficult by the lack of a coherent object model and a consistent parallel programming model. Implementation of ASBS on the new language platform would significantly improve the portability and maintainability of seismic computing applications, and would also provide a pathway to the use of network-based parallel computing for E&P seismic processing. WO


Solid streamer / worldwide computing network speed up the seismic cycle

Chuck Toles, Vice President Far East Operations, Western Geophysical

"I want it yesterday," a demand often hear by service companies, is an increasingly common request from oil and gas companies to the seismic contractors. The urgency may stem from a lease-commitment deadline or the basic goal of earning a faster return on E&P investment. Recent concerted efforts by Western Geophysical's data acquisition, software development, and data processing groups have cut typical seismic data acquisition and processing cycles from months to weeks.

The dramatic time reduction is exemplified by the new industry record for data turnaround set by the Western Legend vessel, Fig. 1, for 3-D surveys in the Natuna Sea, offshore Indonesia. The Legend delivered 1,800 km2 of fully processed 3-D data just 27 days after the last shotpoint, cutting the previously announced record by 43 days.

Fast, reliable acquisition with solid streamer. The contractor has made major strides toward faster, more reliable data acquisition with deployment of its Sentry solid-streamer service. The solid system incorporates seismic sensors in a solid, yet flexible material, an improvement over conventional oil-filled plastic cables, which are prone to physical damage from external factors such as shark bites and floating debris.

The data acquisition window in hostile environments is also increased as a result of the system's lack of response to low-frequency excitation, such as that caused by paravanes in rough seas. This low-noise characteristic allows the system to record high-quality data in the presence of sea noise. For example, during the Natuna Sea survey, Legend was able to continue data acquisition during rough seas, which forced shutdown of conventional-streamer vessels operating in the same area. And the Western Patriot, during its recent offshore Croatia survey, was also able to take advantage of an extended weather window using the solid streamer.

Shipboard data integration, quality control. The high volume of 3-D data acquisition and the complexity of positioning data have severely taxed data-handling facilities aboard marine seismic vessels. To substantially automate the process, Western developed the MIDAS system (Marine Integrated Data Acquisition System), which uses a robotic tape library linked to one or more IBM SP-2 parallel computers, coordinated by the contractor's Prospect Data Logger (PDL). As a result, front-end processing steps, such as seismic noise analyses, geometry assignment and geometry-based positioning validation using seismic first breaks, are performed automatically onboard the vessel.

On-board / remote processing. To speed up processing and ensure on-site quality control, more of the data processing tasks have been steadily moved to the survey site, Fig. 2. In-field processing does not occur in isolation; onboard processing facilities are linked to the contractor's worldwide data processing network through high-speed VSAT communication links.

Such links allow the onboard processors to consult with data processing experts at the company's centers, or with client staff, or make use of in-center processing facilities as needed. An important factor in the seamless integration of in-center and onboard processing is use of the contractor's Omega Seismic Processing System software throughout the processing network. Consequently, the data processing and user interfaces are consistent at all sites.

The onboard data processing capabilities extend from basic acquisition support analysis and 2-D and 3-D "hazard" surveys to fully processed 3-D volumes. For the Natuna Sea survey, Legend's onboard staff performed the full processing flow, including Radon demultiple, 3-D DMO, three 3-D AVA volume stacks, a final full-stack 3-D volume, and final 3-D migrations of all four final 3-D volumes. All of these tasks were accomplished onboard in just 27 days without sacrificing data quality. Frequency content of 100 Hz plus was obtained at the objective times.

Achieving fast seismic turnaround is not simply a matter of moving processing resources to the field. Best results can only be achieved with interactive consultation between the field crew, the contractor's in-center operations and technical management, and client staff during all phases of acquisition / processing. With the creation of a worldwide computing and communications network, the contractor can help oil and gas clients achieve their exploration and production goals with the benefit of the latest technology, geoscientific expertise and worldwide experience. WO
