Amazon oil and gas blogger-in-chief Don Bulmer reports on ‘serverless*’ seismic data management as proposed in a white paper from UK-based seismic boutique Osokey. Osokey’s services include SEG-Y and SEG-D data storage in the Amazon S3 cloud. Data is then available to processes running as AWS Lambda instances. Other tools in the offering include the DynamoDB metadata store and ‘Athena’ for search.
The whitepaper describes a cloud-based seismic data management
offering that enables a ‘lift and shift’ of SEG-Y or SEG-D
formatted data into Amazon Simple Storage Service (S3). An
‘event-driven’ architecture ingests seismic data and
generates a file inventory that can be searched using Amazon Athena.
Data is passed through AWS Lambda to extract header information which
is stored in Amazon DynamoDB. Trace-level indexes allow for data
viewing output to geoscience applications on premise. Using multiple
AWS Lambda instances, an aggregate read performance of 42 GB/s was
achieved. Amazon S3 batch operations are said to be a cost-effective
way of bulk processing files and de-duplicating SEG-D files.
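The header-extraction step described above could be sketched as follows. This is a minimal illustration, not Osokey’s actual code: it parses three well-known fields from a SEG-Y rev 1 binary file header (the 400-byte header that follows the 3200-byte textual header) using the standard byte offsets; the function name and field selection are our own.

```python
import struct

def parse_segy_binary_header(buf: bytes) -> dict:
    """Parse a few key fields from a SEG-Y rev 1 binary file header.

    `buf` is the 400-byte binary header that follows the 3200-byte
    textual header. Values are big-endian 16-bit integers; offsets
    below are relative to the start of the binary header.
    """
    sample_interval_us, = struct.unpack_from(">h", buf, 16)  # file bytes 3217-3218
    samples_per_trace, = struct.unpack_from(">h", buf, 20)   # file bytes 3221-3222
    format_code, = struct.unpack_from(">h", buf, 24)         # file bytes 3225-3226
    return {
        "sample_interval_us": sample_interval_us,
        "samples_per_trace": samples_per_trace,
        "format_code": format_code,  # e.g. 1 = IBM float, 5 = IEEE float
    }

# Synthetic header: 2 ms sampling, 1500 samples per trace, IEEE float
hdr = bytearray(400)
struct.pack_into(">h", hdr, 16, 2000)
struct.pack_into(">h", hdr, 20, 1500)
struct.pack_into(">h", hdr, 24, 5)
print(parse_segy_binary_header(bytes(hdr)))
```

In the architecture described, output like this would be written to DynamoDB as searchable metadata rather than printed.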
* You may be wondering what ‘serverless’ actually means. In this context it appears to refer to AWS Lambda which Wikipedia tells us is a virtual machine that spins up for the duration of a process, running Amazon’s own-brand Linux.
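The ‘event-driven’ ingest pattern the footnote alludes to can be sketched in a few lines. A Lambda function receives a notification when an object lands in S3 and reacts to it; the handler below only extracts the object keys from the (real) S3 event format, with the rest of the pipeline left as a comment. The function and field names beyond the standard event shape are illustrative.

```python
def handler(event, context):
    """Minimal AWS Lambda handler for S3 'ObjectCreated' notifications.

    The event delivers one record per new object; a real ingest
    function would fetch each object, extract its SEG-Y headers and
    write them to a metadata store such as DynamoDB.
    """
    keys = [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]
    return {"ingested": keys}

# Locally simulated invocation with a fake S3 notification
event = {"Records": [{"s3": {"object": {"key": "surveys/line1.segy"}}}]}
print(handler(event, None))
```

Because each invocation is independent, many such instances can run in parallel, which is how the aggregate read figure quoted above is reached.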
Another seismic blog, authored by Amazon’s Mehdi Far, investigates the automated interpretation of 3D seismics with Amazon SageMaker. SageMaker enables data scientists to build, train and run machine learning models in the cloud. Far used the Apache MXNet deep learning library running within SageMaker to create a horizon picking algorithm. He shows how to identify salt bodies on seismics using U-Net semantic segmentation trained on the TGS Kaggle public domain image dataset.
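Semantic segmentation of salt bodies produces a per-pixel binary mask, and the TGS Kaggle challenge scored such masks by intersection-over-union (IoU). As a hedged illustration of the metric (not Far’s code, and framework-agnostic rather than MXNet-specific), a minimal NumPy version:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # Two empty masks agree perfectly by convention
    return 1.0 if union == 0 else float(inter / union)

# Toy 2x2 masks: one pixel agreed on, one predicted in excess
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(iou(pred, truth))  # → 0.5
```

The same metric applies whether the mask comes from a U-Net or any other segmentation model.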
Comment: Amazon Athena, DynamoDB, Lambda, S3, SageMaker … anyone smell ‘lock-in’? See this month’s editorial.
© Oil IT Journal - all rights reserved.