October 2013
Columns

What’s new in exploration

Processing squared

William J. Pike / World Oil

It seems like only yesterday that I was slipping a 5¼-in. floppy disk into my newly acquired TRS-80. I was at the top of the computing heap. I had spreadsheets, word processing programs, the works. Never mind that the machine did not have enough internal memory to hold even one of these programs, so they all had to be run from the floppy disks. I was there, immersed in the nascent computing boom.

In the 30 years since I first switched on that primitive machine, progress in computing has been exponential. Nowhere is that more true than in seismic processing. Fast-forward 15 years from that first machine (light years in data volume and processing terms) to a tour of a next-generation data storage facility that had recently been installed by my then-employer, PGS. In it were large carousels holding storage drives that could be retrieved from their holders and written to or read from. It was cutting-edge stuff, with terabytes (TB) of combined storage in the frigid, warehouse-like room. I was, as they say, gobsmacked, both by the storage capacity and by the computing power that it required. Yet, in today's environment, that facility would hardly be earth-shaking.

Today's processing and storage capabilities almost defy description. And yet, while the technology leaps forward, the same basic considerations have governed computing logic and design over the past 30 years. Those considerations, as they relate to seismic processing, were presented in a recent World Oil webcast:

  • Are you getting good ROI on your latest CPU and GPU (graphics processing unit) investments?
  • Are your latest seismic surveys significantly bigger than before?
  • Are you interpreting larger volumes, but your infrastructure can’t keep up?
  • Have you audited your data, catalogued your applications, and profiled interaction between the two?
  • Are you trying the latest generation of analytical tools to get more out of all your data?

The answers to these questions provide a roadmap of current and future processing and storage capabilities. First is computing power. Today's computers are lightning fast. In our industry, a large part of that speed can be attributed to GPU computing, which pairs a GPU with a CPU (central processing unit) to accelerate scientific and engineering applications. The compute-intensive parts of an application are offloaded to the GPU, while the remainder runs on the CPU, greatly enhancing speed. According to Nvidia, which developed GPU computing, the CPU + GPU combination is powerful because CPUs consist of a few cores optimized for serial processing, while GPUs consist of thousands of smaller, more efficient cores designed for parallel performance. Serial portions of the code run on the CPU, while parallel portions run on the GPU.
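
To make that division of labor concrete, here is a minimal, illustrative CUDA sketch of my own (not code from the webcast or from Nvidia): the CPU performs the serial work of setting up an array of synthetic trace samples, then hands the embarrassingly parallel part, here just a per-sample gain, to the GPU. The apply_gain kernel and its data are hypothetical stand-ins for the far heavier kernels of a real seismic workflow.

    // Minimal sketch of the CPU + GPU offload model described above.
    // The parallel portion (a per-sample gain) runs on the GPU; the
    // serial portion (preparing the data) runs on the CPU.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Parallel portion: one GPU thread per trace sample.
    __global__ void apply_gain(float *samples, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            samples[i] *= gain;
    }

    int main()
    {
        const int n = 1 << 20;                 // ~1 million samples (illustrative)
        const size_t bytes = n * sizeof(float);

        // Serial portion: the CPU prepares (here, synthesizes) the data.
        float *h_samples = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i)
            h_samples[i] = 1.0f;

        // Copy to the GPU, run the parallel kernel, copy the result back.
        float *d_samples;
        cudaMalloc((void **)&d_samples, bytes);
        cudaMemcpy(d_samples, h_samples, bytes, cudaMemcpyHostToDevice);

        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        apply_gain<<<blocks, threadsPerBlock>>>(d_samples, n, 2.0f);

        cudaMemcpy(h_samples, d_samples, bytes, cudaMemcpyDeviceToHost);
        printf("First sample after gain: %.1f\n", h_samples[0]);

        cudaFree(d_samples);
        free(h_samples);
        return 0;
    }

Real migration or filtering kernels are vastly more complex, but the split is the same: serial orchestration on the CPU, data-parallel number crunching across thousands of GPU cores.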

But that is only half of today's continually changing processing story. All that data, processed at the speed of light, has to reside somewhere. Enter serious advances in data storage. Today, my home office has a couple of terabytes of storage in portable devices, and a terabyte external drive is now the norm for many personal computers. In seismic applications, however, that kind of capacity can be a limiting factor.

As mentioned above, evolutions in computing regularly deliver faster CPUs and GPUs. PCIe 3.0 is available with the latest motherboards, allowing more throughput to and from GPUs. Combine that with the massive amounts of data inherent in seismic processing, and the result is job bottlenecks in storage. Thus, to many minds, storage capacity and access time are the keys to raising the bar in computing speed and efficiency. At present, three primary options exist for realistic data storage and retrieval in seismic processing: NAND flash storage, hard disk drives (HDDs) and solid-state drives (SSDs). Each has its merits and pitfalls.
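
To see why storage tends to be the choke point, a rough back-of-the-envelope comparison helps (the figures are nominal, period-appropriate values assumed here for illustration, not numbers from the webcast): a PCIe 3.0 x16 link can move roughly 16 GB/sec, while a single 7,200-rpm hard drive sustains something on the order of 150 MB/sec, so

    \[
    \frac{16\ \mathrm{GB/s}}{0.15\ \mathrm{GB/s}} \approx 100
    \]

In other words, it would take on the order of a hundred conventional drives working in parallel just to keep one such link saturated.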

NAND flash is a card-and-chip-based memory option, typically used in applications where large volumes of data are frequently uploaded or replaced. USB drives and digital devices use NAND flash memory. However, it can be written to only a finite number of times before individual cell failures lead to overall degradation. At the appropriate point in the wear cycle, a NAND card can be replaced with no adverse effects on the device. Efforts continue to make NAND flash smaller and more robust, with increased storage and shorter access times. However, NAND flash is raw memory, with no accompanying control or data management functions, making it somewhat suspect for long-term, intense data storage.

HDDs are all around us, and have been for ages. The 1956 IBM 350 RAMAC offered 3.75 megabytes of storage in a space the size of two commercial refrigerators. Of course, storage has increased, and sizes have shrunk. Current HDDs max out around 4 TB for 3.5-in. drives and 2 TB for 2.5-in. drives. Hard drives are, essentially, metal discs with a magnetic coating. The discs spin at high speed, while a read/write head records and/or accesses the data. Tried and long-lasting, HDDs are nonetheless vulnerable to impact, but they are much less expensive than SSDs.

SSDs are much more recent, with the types we know today dating back only to the late 2000s. The major benefits of SSDs are swifter data access and no moving parts, which makes them somewhat more impact-resistant. The tradeoff, however, is price: SSDs are up to six times more expensive than traditional HDDs.

But these options are the here and now. In the works, undoubtedly, are systems that will make current computer technology look like a kid’s Lego set. That’s what we got a glimpse of in the webcast.

About the Author
William J. Pike
World Oil
William J. Pike has 47 years’ experience in the upstream oil and gas industry, and serves as Chairman of the World Oil Editorial Advisory Board.