November 2004
Features

Vol. 225 No. 11

Information Technology

Forces converging to make interoperability a reality

Software vendors continue to offer proprietary open standards, while others offer translation software as a substitute. Real-time data is driving the need for workable standards that focus on metadata, or data about the data, offering hope for standardizing some data types.

David Gorsuch, Schlumberger Information Solutions, Houston

The characteristics of openness and plug and play have been recognized as desirable qualities for E&P software for more than 20 years. However, as you look around the industry, you see remarkably little of either.

For there to be widespread uptake of a new technology, two conditions must be met. First, the technology must evolve to a point at which it is easier to use than not to use. For example, the personal computer uses the same principles as the IBM 360 the author used at college, but it fits on a desk and has a familiar keyboard as an interface. Over the years, as the price of PCs dropped, more and more people could afford to buy and use them. Innovation and evolution were essential to making computers user-friendly and affordable.

The second requirement for widespread uptake is a compelling need or desire. For example, messaging technology for mobile phones was created by the people who developed them so they could communicate privately. For the general public, it remained an inconvenient curiosity for a long time. Then teenagers started to buy mobile phones. They loved the instant messaging technology because they could communicate with their friends at school without attracting the teacher's attention. Messaging has now become a huge business because of the compelling need and subsequent adoption by this new group of users.

In the oil and gas industry, plug and play and open technologies have evolved to the point where both of these conditions are satisfied. The technological evolution, with its lower costs and higher usability, has bumped up against a compelling need for real-time and near-real-time oilfield data. The expected increase in uptake is already underway.

UNDERSTANDING THE TERMS

First, let's define openness and plug and play. Openness means providing knowledge and facilities that enable others to use your open technology the same way you do. Openness can exist on a number of levels. At one extreme is open source software such as Linux. Linux is fully open, meaning that you can even modify the source code.

In the middle is proprietary open technology, which includes reservoir, drilling, financial and most other vendor-supplied software. You can use the published developer kits so your own software works closely with the vendor's, but you cannot change the underlying code. At the other end of the spectrum is proprietary software such as computer games. You either use it as it is, or not at all.

Plug and play means you can use a particular component with a range of others, commonly from different suppliers. Plug and play typically relies on adoption of industry standards. An ordinary PC is a great example of standards-based plug and play:

  • The VGA standard means that you can view the video output on a monitor, on a projector or in a visualization center that uses this standard.
  • The USB standard port allows connection of a wide range of hardware devices.
  • The networking standard means that you can exchange data by connecting to other machines using the same network standard.
  • Standard operating systems ensure that the software you buy will run on the target machine that is enabled for a particular operating system.

EVOLUTION OF STANDARDS

The first approach to getting technologies to work interoperably was the standard data model. This can be likened to every nation learning to speak a purpose-built world language, such as Esperanto. If everyone agreed on how data should be stored, then everyone would be able to get to it and use it. Plug and play, at least at the data level, would be achieved and everyone would be happy, right?

Not necessarily. Although this approach might deliver interoperability, it also has some less desirable side effects and raises some difficult issues:

  • Who would be custodian of this standard data model?
  • How would extensions and changes be implemented?
  • Who would decide what the standard should be?
  • Wouldn't the process of evolving the model inevitably be slow and unwieldy?

Those issues were not the biggest ones, however. Many of the vendor companies did not really see this approach as being in their best interests. They feared that the rigidities of the system would make the process of evolving the data model too slow and would stifle their ability to innovate. So, although vendors “played along,” their hearts really weren't in it at all.

An alternative approach was to develop a “translation layer” between the different data stores and a common representation of the data. This can be likened to speaking your own language and employing an interpreter when you need to talk to people of other nationalities. The translation layer looks after the relationship between the “standard” representation and the way data is stored in various vendor data stores. This has the great advantage of not requiring vendors to standardize the way they store data, which is much more palatable to vendor companies.
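
To make the idea concrete, here is a minimal sketch of a translation layer in Python. The class names, the WellHeader record and the two vendor record layouts are invented purely for illustration; they do not correspond to any actual vendor data model.

# A minimal, hypothetical "translation layer": each adapter converts one
# vendor's native record layout into a shared representation, so applications
# never need to know how a particular vendor stores its data.
from dataclasses import dataclass


@dataclass
class WellHeader:
    """Common representation shared by all applications."""
    name: str
    total_depth_m: float


class VendorAAdapter:
    # Hypothetical vendor A stores depth in feet under the key "TD_FT".
    def to_common(self, record: dict) -> WellHeader:
        return WellHeader(name=record["WELL_NAME"],
                          total_depth_m=record["TD_FT"] * 0.3048)


class VendorBAdapter:
    # Hypothetical vendor B stores depth in meters under nested keys.
    def to_common(self, record: dict) -> WellHeader:
        return WellHeader(name=record["well"]["label"],
                          total_depth_m=record["well"]["td_m"])


ADAPTERS = {"vendor_a": VendorAAdapter(), "vendor_b": VendorBAdapter()}


def load_well(store: str, record: dict) -> WellHeader:
    """Applications call this and see only the common representation."""
    return ADAPTERS[store].to_common(record)


print(load_well("vendor_a", {"WELL_NAME": "A-1", "TD_FT": 12500.0}))

Each vendor keeps its own storage layout; only the corresponding adapter needs to change when a data store changes.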

In the past, however, the available technologies have tended to introduce rigidities into the creation and use of the translation layer. The software has been challenging to use, deploy and extend to new data types. This has led to translation layers that cover only a subset of the overall required range of data types, causing difficulties in operational environments.

What is so exciting today? In a word, XML, or Extensible Markup Language, a close relative of HTML, the language of the Internet. Moving from HTML to XML is like bundling an instruction manual with a product. Fig. 1 compares XML and HTML. The most prominent difference between the two is that HTML describes presentation, while XML describes content. An HTML document rendered in a Web browser is readable by people. XML is designed to be both human and machine readable, which makes the data far more useful.

Fig. 1. Difference between HTML and XML. HTML (left) describes presentation; XML (right) describes content.

XML is a standard, albeit a flexible one. Flexible means that XML is self-describing, using tags to identify what sort of data each element contains. The XML file also references a schema, which carries further information about the nature of the various data types, so a programmer knows what can be done with them. This data about the data is known as metadata, and it makes XML both very powerful and very flexible.

The concept of metadata can also be applied to software applications to make them self-describing. This makes it possible to create a Web services environment in which multiple applications can be run interchangeably. XML and, more generally, metadata need not be confined to Web applications. Through the use of a program called a parser, XML can be read and written by application code in any language.
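
As a concrete illustration, the short Python sketch below uses the standard-library parser to read a small XML fragment. The element and attribute names are hypothetical, invented only to show how self-describing tags let a program pick out content without any knowledge of presentation.

# Parsing a small, hypothetical XML fragment with Python's standard-library
# parser. The tags name the content, so a program can extract values
# without knowing anything about how the data will be displayed.
import xml.etree.ElementTree as ET

xml_text = """
<drillingReport well="A-1">
  <depth uom="m">2450.5</depth>
  <rop uom="m/hr">18.2</rop>
</drillingReport>
"""

root = ET.fromstring(xml_text)
depth = float(root.find("depth").text)
rop = float(root.find("rop").text)
print(f"Well {root.get('well')}: depth {depth} m, ROP {rop} m/hr")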

Self-description lowers the obstacles to delivering interoperability, openness and plug and play. It allows for easier translation between data models because, even when there are changes to the way data are stored, the metadata can be used to rebuild the standard representation. The widespread adoption of XML as the medium for this self-description has virtually eliminated the risk of failure due to multiple and/or proprietary standards. This evolution and innovation in the technology has reached the point where there is a dramatic increase in the viability of openness and plug and play.

REAL-TIME DATA IS THE DRIVER

The compelling business need for openness and plug and play has proved to be the increasing volume of real-time information. In the downstream world of refining and petrochemicals, real-time data has long been the norm. Only recently has it become a practical reality in the upstream sector, which is quite different from its downstream relative.

The oil field is a very hostile environment for facilities and equipment. Vendor companies invest considerable effort and resources not only to create downhole measurement and mechanical devices, but also to achieve the required levels of durability and reliability. In addition, there is considerable fragmentation and specialization in the upstream vendor arena. Different companies or divisions may have expertise in different parts of the exploration, drilling, completions and production processes. Last, upstream processes and variables are more complex than those of our downstream cousins. For example, if the temperature at the top of a distillation column is too high, you turn down the steam. But if the watercut in a well is too high, there is a wide range of possible responses to choose among, and choosing well requires good information and a good method of interpreting it.

Consider an average drilling job. Saipem might drill the well using Sperry-Sun MWD tools and mud logging services provided by Expro. All of these operations provide streams of real-time data. The wireline logging data from Schlumberger are transmitted to the head office in real time. Then the well is tested by Schlumberger and completed by Baker Hughes using downhole sensors and intelligent completions. In the past, decisions were made by the company man on the rig using a small subset of the available information printed on field reports. The experts back at the operator's headquarters or the vendor offices were not involved because they couldn't get the information fast enough to contribute in a timely fashion. Whether they received the data the following day or the following week made no difference.

The advent of real-time data has changed all this. Real-time data can now be on the desktops of professionals at headquarters or vendor offices within seconds of being acquired, so these experts can bring their training and expertise to bear on every aspect of field development. But they need to have suitable tools to use the data as fast as it arrives, no matter which vendor supplied the data in which format. These tools need to work together, be able to access data from wherever it is stored, and be able to keep up with the pace at which decisions are required.

The benefits of being able to make immediate operational decisions are both tangible and quantifiable, unlike the abstract and qualitative advantages used to justify IT investments in the past. For instance, how many hours does an expensive offshore rig sit idle while waiting for a completion or abandonment decision? When can you stop drilling a horizontal well because it has enough reservoir contact to meet the production target? The ability to calculate these things quickly is finally driving the E&P software industry to deliver on its long-awaited promise of openness and plug and play.
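
To see how tangible those numbers are, here is a back-of-the-envelope calculation; the day rate and idle time below are purely hypothetical assumptions, not figures from any actual operation.

# Back-of-the-envelope cost of rig idle time (all figures hypothetical).
day_rate_usd = 250_000      # assumed offshore rig day rate
idle_hours = 6              # assumed wait for a completion decision
idle_cost = day_rate_usd / 24 * idle_hours
print(f"Cost of {idle_hours} idle hours: ${idle_cost:,.0f}")  # about $62,500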

ARE WE THERE YET?

Ideally, all the data streams from the various contractors at the wellsite should arrive in a common, standard format and flow smoothly into both applications and long-term storage. We would like to be able to use the incoming data – along with data in existing commercial and proprietary data stores – in a variety of mainstream commercial applications. And finally, we all would like to use our own proprietary applications to interpret the data to maximum advantage.

The industry-standard format for transmitting real-time drilling, completion and well services data from the wellsite to the corporate office is now here. It is called WITSML, or Wellsite Information Transfer Standard Markup Language. It is XML-based and covers a wide range of well-related data types, Table 1. WITSML is enjoying widespread uptake by both operators and vendors. The Petrotechnical Open Standards Consortium (POSC) has taken over running and caring for the WITSML project, ensuring that it will remain a vendor-neutral industry standard.

   Table 1. Data objects supported by WITSML.
   BHA Run                Conventional core
   Server capabilities    Subscription
   Cement job             Fluids report
   Sidewall core          Survey program
   Formation marker       Operations report
   Target                 Wellbore*
   Log*                   Real time
   Trajectory*            Well*
   Message                Tubular/Bit record/Open hole
   Mud Log                Wellbore geometry
   Rig/Rig equipment
   *Initial implementation delivered in 2002
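
As a concrete, heavily simplified illustration of the WITSML idea, the Python sketch below builds an XML message of the kind that might travel from the wellsite to the office. The element and attribute names are hypothetical; they do not follow the actual WITSML schema and are shown only to convey the tagged, self-describing format.

# Building a simplified, WITSML-style XML message for transmission from the
# wellsite to the office. Element names are hypothetical, not the actual
# WITSML schema; they only illustrate wrapping wellsite data in tagged XML.
import xml.etree.ElementTree as ET

entry = ET.Element("mudLogEntry", well="A-1")
ET.SubElement(entry, "depth", uom="m").text = "3120.4"
ET.SubElement(entry, "gas", uom="%").text = "1.7"
ET.SubElement(entry, "comment").text = "Connection gas observed"

print(ET.tostring(entry, encoding="unicode"))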

As for access to existing data, OpenSpirit Corp. has established itself as the leader in this field and is now moving rapidly to embrace the benefits of two new technologies. First, by moving to a metadata-based description of its data model, the company will greatly increase its ability to support additional data types in a timely manner. Second, the new approach will dramatically improve the performance of the company's translation layer when accessing many data types. The combination of these two changes will greatly enhance the practicality of multistore data access.

In addition to supplying adaptors, the company has committed to openness by giving its customers the ability to develop their own adaptors, making data in their own data stores available to any OpenSpirit-enabled application. The company already provides a vendor-neutral, data-store-independent integration framework and, over the next 18 months, will release additional functionality giving even faster access to a significantly wider range of data.

A NEW OPEN DEVELOPMENT FRAMEWORK

Schlumberger Information Solutions is developing a proprietary open development framework for E&P software called Ocean. Based on the .NET framework from Microsoft, it is designed to accelerate the delivery of innovation by providing much of the infrastructure that is common to many E&P applications. For example, many geoscience applications today need to present information or results on a 3D canvas. Much of the capability of a 3D canvas is generic – it makes little sense to create a new one every time. So, the new framework provides a 3D canvas. Applications can extend this 3D canvas to meet their individual needs, but they do not have to re-invent the core capability. By applying this approach to a large range of facilities such as data access, data selection, presentation environment and back-office compute servers, significant efficiencies in application development can be attained.
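
The sketch below illustrates the extension idea in generic terms. It is written in Python for brevity, whereas Ocean itself is built on the .NET framework, and the class names are invented for illustration; they are not Ocean's actual API.

# A generic sketch of "extend the framework's canvas": the framework supplies
# the common canvas plumbing once, and each application adds only its own
# drawing behavior. All class names here are invented for illustration.
class Canvas3D:
    """Framework-provided canvas: camera, lighting, picking and so on."""
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def render(self):
        for layer in self.layers:
            layer.draw(self)        # the framework drives the drawing loop


class HorizonLayer:
    """Application-specific extension: only the drawing logic is new."""
    def __init__(self, name):
        self.name = name

    def draw(self, canvas):
        print(f"drawing horizon '{self.name}' on the shared canvas")


canvas = Canvas3D()
canvas.add(HorizonLayer("Top Cretaceous"))
canvas.render()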

In addition to the re-use of common functionalities, the new framework facilitates interoperability by using a Shared Earth Model Framework to provide a common view of different earth models. It also streamlines the upstream workflow, which can involve numerous applications and data sets. Using the Process Manager, users can create a path to support a specific workflow that is repeated often. And rather than having to migrate existing data stores, the new framework will use OpenSpirit to access data wherever it is kept, in whatever format.

Ocean has been architected to be open at every level. This means that not only will the full range of SIS workflows fit into it, but competitors will also be able to use it to develop E&P applications that interoperate with the framework and, thus, with SIS software.

Imagine, for example, that researchers at Some University developed a new technique for calculating contributing porosity in dolomitic limestones. They can use the new framework to deliver the technique to potential users as an attractive and usable software package, as well as have it usable with other SIS modeling and simulation software.

BENEFITS TO THE ENTERPRISE

The benefits of openness and plug and play will flow to geoscientists and engineers, software development teams and the IT department. Geoscientists will get to use powerful, highly usable software components that work the same way and share common characteristics. They will be able to integrate software products from different sources and disciplines, including mainstream commercial software vendors, niche providers and in-house developed solutions. They will be able to create and store their own workflows, weaving their way through capabilities from different sources. Much of their interpretation process relies on checking models for consistency with data. The wider the range of data that is consistent with the model, the greater is the probability that the model is valid. They will also get to integrate their technical workflows tightly with their office productivity tools, reducing the time required to generate technical reports.

Software development teams will get the chance to focus on innovative, unique functionalities rather than re-inventing existing technology. The combination of modern IT infrastructure and access to pre-fab components can dramatically reduce the time and cost involved in delivering new applications. Thus, programmers can put new capabilities into the hands of users much faster. Even small, niche software developers will be able to have their products plug and play with the mammoth application suites developed by the big firms.

IT departments will enjoy a lower total cost of ownership because employees will use the same machine for both technical and office productivity applications. Using standard data formats and applications will reduce the burdens of the IT support and maintenance staff members.

SUMMARY

Technological evolution now allows openness and plug and play to be delivered in practical, cost-effective ways. At the same time, the increase in real-time data has created a compelling need to link the wellsite with the office using a finely meshed web of data sources and software tools. Software vendors are positioning themselves to provide the full benefits of this combination to customers by adopting vendor-neutral industry standards and delivering a fully open software development environment. WO


THE AUTHOR


David Gorsuch is the product champion for Ocean, the software framework underlying the next generation of software solutions from Schlumberger. Prior to this, he served as Schlumberger Information Solutions (SIS) operations manager for Continental Europe, responsible for all SIS activities in the area. Gorsuch has 23 years of experience in the E&P industry. He initially worked as a reservoir engineer and, during the past 15 years, has held a variety of positions involved with the development, marketing and support of E&P software. Gorsuch holds an MA degree in chemical engineering from Cambridge University, an MSc degree in petroleum engineering from Imperial College, London, and an MBA from Warwick University.

 

       