The ‘International Semantic and FAIR* Knowledge Graph Alliance’, aka the KGA, was inaugurated on 10 November 2023 in Brussels, Belgium. The KGA is to promote the use of the semantic knowledge graph (SKG) by research organizations, universities and standardization bodies to facilitate the development of industry-standard models and ‘alleviate the uncertainty of ROI in SKG transition projects’. The KGA believes that future demand for SKG in the industrial sector will require an ‘immense effort’ in the development of standardized semantic models, methodology and tools for multifarious industrial domains and applications. The SKG will enable storing, managing, representing, exchanging and analyzing industrial data through standardization and harmonization of semantic data models and the development of methodology and tools for industrial domain knowledge. The KGA also claims relevance to future applications of AI technologies in the context of Industry 5.0.
TotalEnergies’ Frédérick Verhelst cited a potential use case in data ‘liberation’, notably by making the PPDM upstream data model more ‘precise’ and extending it to more data domains. Also of note is TotalEnergies’ Semantic Framework to ‘further disambiguate data and prepare for ontology-based interoperability’. The KGA is to address governance of reference data in complex master data systems including digital twins and ERP systems. Again, the KGA is presented as a prerequisite for input to AI and ML tools.
TotalEnergies’ Jean-Charles Leclerc provided some background to the TotalEnergies Semantic Framework, quoting none other than Albert Camus: ‘Misnaming things adds to the world’s unhappiness’. Thus ‘the value of adopting common semantic modeling practices is huge’. Alignment with semantic web standards, including OWL and RDF graph technology, heralds corporate knowledge modeling, the breakdown of information silos, interoperability and AI. Leclerc introduced a knowledge hierarchy, quoting Immanuel Kant: ‘All our knowledge begins with the senses, proceeds then to the understanding, and ends with reason. There is nothing higher than reason.’ The hierarchy extends from tables, through graphs and taxonomies, on to ‘semantic knowledge graphs’, with only the latter capable of capturing ‘knowledge and wisdom’. Unfortunately, not everybody is developing ontologies the same way. Leclerc compared (unfavorably?) the Cfihos semantic approach with that of the KGA, his ‘preferred ontology’.
Also of note is the work of TotalEnergies’ semantic guru (now retired) Claude Fauconnet, developer of SousLeSens, a suite of tools designed to navigate, visualize and enrich SKOS vocabularies and OWL ontologies. It is free under an MIT license and available as a GitHub repository. Fauconnet’s SousLeSensVocables is described as ‘the engine and toolbox to set up an information methodology in a flexible and sustainable direction’.
KGA membership costs run from €250 individual to €15,000 for a corporate with over €1 billion turnover.
* FAIR refers to the 2016 ‘FAIR Guiding Principles for scientific data management and stewardship’ as per facilitating Findability, Accessibility, Interoperability and Reuse of digital assets.
At the inaugural Digital Innovation Forum of the Petroleum Technology Alliance Canada, David Chan, from AMII, the Alberta Machine Intelligence Institute, demonstrated how generative AI will impact the energy industry. Generative AI that leverages large language models à la ChatGPT eliminates model training, allowing users to go straight to predictions for some classes of problem. LLMs can be used out of the box for ideation, embedded in a toolset such as the Jasper AI copilot (https://www.jasper.ai/) or as a chatbot. LLMs can be fine-tuned with domain-specific text corpora to create a model that better understands a specific domain. LLMs can also be built from scratch, but this is extremely expensive, with costs in the millions of dollars (Chan advises ‘forget it’).
OpenAI’s ChatGPT has been trained on the public internet with no particular specialization. This has led to ‘hallucinations’. Chan showed one such example, asking ChatGPT for the world record for crossing the English Channel on foot. ChatGPT returned some nonsensical facts and figures (the ‘record holder’ being Christof Wandratsch in 14 hours and 51 minutes). ChatGPT makes up such rubbish and this is a serious problem. Another potential problem is information leakage between users. Using LLMs in a commercial context, retraining them by adding proprietary information, may expose this information to your competitors.
Hallucinations and other issues can be mitigated using an approach known as retrieval-augmented generation (RAG). According to IBM*, RAG is ‘an AI framework for retrieving facts from an external knowledge base to ground large language models on the most accurate, up-to-date information and to give users insight into the LLM’s generative process’. RAG combines information retrieval with natural language generation and is used in question-and-answer systems, chatbots or in specialized LLMs for a specific domain, assuring confidentiality. An in-house RAG system can be used to create a knowledge base of your own domain, concocting its responses from in-house documents. The LLM technology then returns information to the user in natural language. Chan concluded, saying that ChatGPT has ‘opened the door to information access much more quickly than anyone thought possible’.
* Earlier this year, IBM unveiled its new AI and data platform, Watsonx, which includes a RAG function. More on RAG in a business environment from IBM.
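The RAG pattern described above can be sketched in a few lines of Python. This is a toy illustration, not anyone’s actual implementation: the ‘knowledge base’ is a list of strings, the retriever is a naive keyword-overlap ranking standing in for a real vector search, and the grounded prompt that results would be sent to an LLM of your choice.

```python
def retrieve(question, documents, top_k=2):
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(question, documents):
    """Ground the LLM on retrieved in-house facts before it answers."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, documents))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# Hypothetical in-house documents for illustration.
docs = [
    "Well A-12 was spudded in March 2021 in the Ekofisk area.",
    "The compressor at site B tripped twice in August.",
]
print(build_prompt("When was well A-12 spudded?", docs))
```

In a production system the retriever would query a vector database of document embeddings, but the control flow, retrieve then ground then generate, is the same.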
Mark Derry explained how Cenovus Energy used to deploy a data pipeline connecting data files, scripts, models, analytical tools and visualization. This approach suffered from many potential points of failure. If a password changed, or if a server was down, ‘everything blows up’, the whole thing breaks, people lose trust and stop using the solution. Cenovus has now deployed Databricks in its corporate data platform, collapsing the whole data pipeline into a single system that provides lineage, line of sight, error tracking and visualization in Power BI. The open source Delta Lake from the Linux Foundation forms the base layer of the system. The open source nature of Delta Lake prevents vendor lock-in. Switching from Databricks to another solution in the future could be achieved without the pain associated with systems such as SAP and Oracle.
Previously, the many different pathways to data made it hard to know which versions of data people were using. In the new system, every user has an individual account. All queries are logged such that Cenovus knows exactly who is touching what data, what different data is being used in an analysis and so on. This is achieved through a single query interface for the entire company, with some 10,000 users provisioned in the system. The system is a work in progress. Derry figures that it is about 30% complete now, with a target of 90% completeness in a couple of years. Running atop the whole thing is a ChatGPT-like assistant which helps users generate SQL queries for the system from English language inputs. Derry emphasized that the Cenovus system differs from OpenAI or Azure AI models in that it can provide the context of the tables and data being queried. Derry also gave a shout-out to MLflow, an open source platform for tracking machine learning models, which has dramatically sped up model development.
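The ‘table context’ idea behind such an assistant can be sketched as follows: the prompt sent to the LLM embeds the schemas of relevant tables, so generated SQL references real columns. Table and column names below are invented for illustration, not Cenovus’ actual schema.

```python
# Hypothetical table schemas the assistant would expose to the LLM.
SCHEMAS = {
    "well_production": ["well_id", "prod_date", "oil_bbl", "gas_mcf"],
    "well_master": ["well_id", "well_name", "field", "spud_date"],
}

def sql_prompt(question):
    """Build an LLM prompt embedding table context alongside the question."""
    context = "\n".join(
        f"TABLE {name} ({', '.join(cols)})" for name, cols in SCHEMAS.items()
    )
    return (f"Given these tables:\n{context}\n"
            f"Write a SQL query for: {question}")

print(sql_prompt("total oil produced per field in 2023"))
```

Without this schema context, a general-purpose model can only guess at table and column names, which is precisely the difference Derry highlighted.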
More from PTAC and watch the Forum video here.
Back in the last millennium, Sir Tim Berners-Lee came up with what was to be the next big thing for the world-wide web (his invention), the ‘semantic web’. This was an attempt to transform the WWW from a collection of idiosyncratic pages into something more structured. As befits a pronouncement from the web’s guru-in-chief, the semweb got a lot of attention. Far more than it deserved, in fact. TBL imagined a web where information items were all tagged such that they became machine readable. His original suggestion for the tagging mechanism was the Resource Description Framework (RDF). Immediately, the IT world went mad, believing that RDF was going to make the web infinitely more useful. The flaw in this reasoning is that it was never the tagging mechanism that was going to change the world. What could have done this was a massive effort on the part of developers to make their websites more accessible to machines (whether via RDF or another mechanism). Needless to say, this was a big ask. Website owners were more concerned with making cute stuff that allowed adverts to ‘pop up’ and pester folks. The flaw in TBL’s semweb revolution was that website owners in general do not want their stuff ‘exposed’ to machine-to-machine communications. The idiosyncrasy of the WWW is a feature, not a bug!
On the shelves in my office I count ten books (remember them?) covering the semweb, ontologies and such, dating from the noughties. These often spoke enthusiastically of the semweb’s potential. The blurb from one book, ‘Spinning the Semantic Web’*, speaks of ‘an exciting new type of hierarchy and standardization that will replace the current web of links with a web of meaning’. I think it is fair to say that this has not happened.
What did happen in the early years of the 21st century was that the semweb was taken on board by the research establishment (at least in Europe) in a big way. The workings of this may not be familiar to you. In other contexts, the generally accepted principle for the advancement of humanity is via a multiplicity of competing initiatives, of which the best will thrive. This is not how things work in EU research, where the politicos who decide what research to fund are constantly on the lookout for the next big thing. Most recently this has been AI. Before that, albeit briefly, it was blockchain. And before blockchain, the semantic web; before that, business objects, and so on. The EU then announces the direction that researchers will have to take to get funded. And off they go. It is a curious way of doing business. A multiplicity of institutions ‘compete’ for funding, but the competition is limited to what has previously been selected as ‘worthwhile’. The approach is more like the monolithic culture of the Soviet Union than Silicon Valley.
While the semweb has not by any stretch of the imagination fulfilled its original promise, its impact on EU research has been huge and lives on today. SEMIC 2023, the Semantic Interoperability Conference, broke all records with some 458 on-site visitors and 803 followers online. In Brussels recently, another semantic initiative saw the light of day. The TotalEnergies-backed Knowledge Graph Alliance (see elsewhere in this issue) is seeking to get the semantic ball rolling, echoing a much earlier failed attempt by the W3C to get the oil and gas industry to finance further semantic research.
At ECIM (and, we have it on good authority, in some EU Majors’ databases) we heard that naming stuff remains an issue, despite the problem having been ‘solved’ twenty years ago. Why is this? Why not have a ‘standard’ convention? Something along the lines of the KGA or a host of other proposals that have been made over the years (I expect OSDU has its own … it should). But the existence of a standard way of writing a well name does not fix the problem. These problems stay with us because the standards focus on the container (a database format, an ontology or what have you) and not on the content.
You might be curious to know how the problem was solved twenty years ago. It was solved by having a look-up table of well names across the various databases. Petris Winds (later acquired by Halliburton) was one early example of a commercial implementation of such a system. This simple system just leaves one other problem to fix: making sure that the well name, or whatever attribute you are concerned with, is always the same in each database or application, now and for all future data entries. If you have a lot of legacy data, and a lot of new ‘big’ data, then this is a hard problem that involves the ‘management’ part of data management. Over the years I have seen companies shy away from these mundane issues (bad or missing data, wrong names, duplicates etc.) to focus on the more intellectually stimulating problems of standards and formats. Vendors don’t help. People who want to ‘move fast and break things’ don’t help. Managers who are blindsided by their IT/data folks don’t help. The issue of in-house vs. outsourced services plays a big role here, but that’s a whole other story (and editorial?).
* Spinning the Semantic Web, Dieter Fensel et al., MIT Press 2003, ISBN 0-262-06232-1.
ECIM is definitely the best oil and gas data event in Europe and probably in the world. The format is unbeatable. No ties to any particular data organization. Great presentations from the EU Majors. And probably best of all, a format that encourages vendors to present their wares in a reasonably open fashion, instead of the information-retentive presentations of the major tradeshows. Our summary below only covers a fraction of the 80 plus presentations. Choosing which to attend is hard. But ECIM provides post-event access to all the presentations to registered attendees.
Louis Vos (NPD*) presented Diskos 2.0, Norway’s energy data bank. The Norwegian Petroleum Directorate re-tenders the Diskos contract every few years and the system (previously operated by CGG) is now back with Halliburton/Landmark, with Kadme providing the trade module. The latest Landmark Diskos went live in April 2023 and is described as exposing an ‘open API’, supporting virtual data rooms and as an ‘OSDU-compliant*’ iEnergy cloud platform. Diskos’ 80 member companies have access to the 16 PB online resource. Migration to the new platform involved some 176 AWS Snowball Edge devices and a 1 GB/sec connection. On the desktop, Landmark’s DecisionSpace 365 suite provides, inter alia, a data completeness function for QC. Norway has freed up its data confidentiality rules (Petroleum Regulations Section 85), making more data public. Interpreted data is now public after 5 years, ‘commercial’ data after 10 and ‘other’ data after two years (down from 20). The latter change has placed some 30k data sets into the public domain. Vos gave a shout-out to Troika, which advised on the use of the SEG-Y Rev 2.0/2.1 seismic data formats that allow for automated data ingestion and QC. A report on the ‘Value of Data’ (executed by Menon Economics for the NPD) found that ‘the annual gains for licensees and operators on the Norwegian continental shelf amount to NOK 1.5 billion in time and resources saved’. The report is publicly available here (in Norwegian). Vos stressed the intangible added value of the Norwegian data, with notably an AI proof-of-concept study by Earth Science Analytics underway, using machine learning to build a geological model of the whole of the Norwegian continental shelf.
*In the Q&A, Vos was questioned on what was meant by ‘OSDU-compliant’. He replied that ‘If Industry goes the way of OSDU we will be able to follow’.
* The NPD has just changed name to the Norwegian Offshore Directorate.
Gianluca Monachese (Kadme) presented on machine access to entitled data in the cloud. Kadme connects its software to National Data Repositories around the world, leveraging various application programming interfaces. APIs play a pivotal role in that they enable machine-to-machine communication, minimizing human interaction and maximizing data use. Monachese cited oil and gas APIs from Diskos, the UK NSTA and Norway’s License2Share. There have been attempts to standardize APIs from OSDU, Esri/ArcGIS and OData, but these bring issues with functionality, dependencies, performance and support. The Diskos trade module depends on the Diskos API, on DocuSign and on an SMS API for notifications. But, warns Monachese, code is like humor: ‘when you have to explain it, it’s bad’. Poorly-designed APIs pose problems with authentication, documentation, error handling and, in some cases, obsolete technology. User accounts may be locked for ‘unaccepted behavior’. Metadata tags (like ‘rock&core’, ‘rock_core’ etc.) should all map to the same category (an issue that was ‘fixed’ over 20 years ago!). Entitlements are hard to share across APIs, leading to mistrust by users. All of which impacts the developer’s costs and reputation. How to fix? Talk to API providers about documentation, error handling, versioning etc. The effort is worth it. APIs are much more important than the user interface. Machine-to-machine communication is where the action is!
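Monachese’s metadata tag problem lends itself to a simple illustration. The tag variants are from the talk; the normalization rule below is our own minimal sketch, collapsing case, punctuation and separators to one canonical category key.

```python
import re

def canonical_tag(tag):
    """Collapse case, punctuation and separators to a canonical tag key."""
    return re.sub(r"[^a-z0-9]+", "_", tag.strip().lower()).strip("_")

# Variant spellings that should all land in the same category.
variants = ["rock&core", "rock_core", "Rock Core", "rock-core"]
print({v: canonical_tag(v) for v in variants})  # all map to 'rock_core'
```

A real repository would also need a curated synonym table for variants that no mechanical rule can catch, which is where the ‘management’ part of data management comes back in.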
Lee Hatfield (Flare) was up next with what appears to be a solution to part of the nomenclature issue (the one that was solved 20 years ago). A UK NDR reporting and compliance case study found that source data for reporting was scattered across operators’ SharePoint, DMS and physical devices. Using the new NDR API for ingestion was plagued by inconsistent data. An operator may locate a VSP QC document but not the VSP itself. The solution adds search using Flare’s energy taxonomy, providing semi-automated assignment of classification tags (C-Tags) and well names, all done in Flare MiNDR. NDR API data is published via a Microsoft Graph endpoint to the cloud. Retagging provides quick wins, and similar workflows and tools could be used in Norway or other NDRs.
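The semi-automated tagging idea can be illustrated with a toy keyword classifier. To be clear, the taxonomy below is invented for illustration and is not Flare’s actual energy taxonomy; in a real workflow the suggested tags would go to a human for confirmation.

```python
# Invented mini-taxonomy mapping tags to trigger keywords.
TAXONOMY = {
    "VSP": ["vsp", "vertical seismic profile"],
    "Well Log": ["well log", "las file", "gamma ray"],
    "QC Report": ["qc", "quality control"],
}

def suggest_tags(title):
    """Return candidate classification tags for human review."""
    lower = title.lower()
    return [tag for tag, keywords in TAXONOMY.items()
            if any(k in lower for k in keywords)]

print(suggest_tags("Well 34/10-A VSP QC report"))
```

Even this crude matching would link a ‘VSP QC document’ to the VSP category, the kind of quick win retagging provides.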
Paulo de Tarso Silva Antunes from Brazil’s ANP presented its 44 petabyte BDEP E&P database, a JV between ANP and Brazil’s Geological Survey. This runs alongside ‘Hermes’, an IBM tape robot. ANP’s PROMAR program leverages BDEP data to promote mature offshore basins.
The OSDU* boot camp was one of the best attended events of the show. We only caught the closing moments and heard how the OSDU domain data management services DDMS are extending (from well logs) to other domains including wellbore, seismic and reservoir models. Well delivery, rock and fluid samples and production monitoring are in the offing and ‘later’ windfarm, solar and hydrogen are ‘envisaged’**. The H2 high level architecture patterns are described as ‘data mesh-y’.
* Originally the open subsurface data universe. Now just ‘OSDU’.
** As a matter of fact Shell presented an OSDU-based Hydrogen Digital Platform back in 2020.
James Tomlinson (Ikon Science) observed that while large organizations are piloting OSDU, there remain some who are (still) happy with the Petrel/Kingdom project approach (app and data on the workstation). The question for those looking to execute novel workflows is: are the data foundations deep enough? As an example, a workflow that spans petrophysical to fluid/pore pressure analysis might involve a 10 day process. A ‘log scale prediction’ workflow might mean 20 days of data management for a couple of days of interpretation. A single well pilot study for Arc Resources (Canada) involved 11k files and 44 GB of data. The presentation turned into a soft sell for Ikon Curate and its API that allows a broad range of data (core, imagery, logs …) to be assembled for such novel workflows. Also key is the addition of ‘consistent naming’ (again!) and depth matching. All built into a digital core database exposed through an API for added AI/ML.
Steffan Sørenes is defining Equinor’s new IT and ‘digital direction’ (in what has come to be known as the Lindesnes project). Equinor is a ‘learning organization’. Complex problems are cross-functional and are addressed by long-lived teams. It is no longer about ‘us and them’ (the technology providers), it is about ‘we’. Computer security is teamwork, built into the workflow from day zero. Data is handled as ‘products’, leveraging a data mesh architecture to overcome silos. The data mesh is currently being implemented ‘at scale’. Data governance requires balance between centralized control and distributed anarchy. This is not easy. A resilient architecture prepares the organization for change. It is not just about technology and open APIs etc., but also the architecture of the teams and how they interact. There is huge potential for automation, especially in IT operations and regulatory reporting and compliance. ‘We ask our people to do less manual admin, leverage automation and let people do what they are trained for and like to do.’ Sørenes uses design thinking and ‘agile’ values. ‘Everything we have developed has been open since day one.’ Users get ownership of a huge collective knowledge base. Some 1,000 employees have taken part in Lindesnes workshops. The plan is now to open up to partners and suppliers.
More on Equinor’s new IT direction and the Lindesnes project.
Possibly noteworthy is the absence of any mention of OSDU in the Lindesnes literature.
Jone Myhre-Bakkevig explained how Aker BP is using large language models in the enterprise. ChatGPT represents a paradigm shift. Aker BP has built its own ChatGPT-like system and plans to be the ‘world’s first data driven E&P company’. Myhre-Bakkevig works on data science, analytics and a multitude of other ‘data’ activities. His team was tasked with getting its own ChatGPT up and running ‘in a week’. The Aker BP system was built with Azure OpenAI services and involved some IT, data science and ChatGPT itself! A lot of manual work was involved, using ChatGPT to ask questions of previous reports. Now, asking the system for the main challenges to a particular development, the answer comes back: ‘the key challenge is to reduce drilling risk in the xyz formation’. This involves analysis of inconsistent drilling reports and varying nomenclature (again!) across, for example, BOP function tests. An operations query, ‘show me the compressor discharge sensor data for such and such’, returns the appropriate data set. In the Q&A, Myhre-Bakkevig acknowledged that this is a pilot and the technology is not mature. ‘We cannot keep pace with Microsoft and Google’s LLM efforts.’ There is one ongoing LLM use case in exploration.
More on AkerBP’s LibraryChat project.
Olivia Winck and David Holmes (Dell Technologies) presented on sustainability and how to accelerate the energy transition, advance decarbonization and ensure energy security. Carbon data is hard to characterize and uncertainty in measurement is glossed over. Dell advocates a shift from qualitative to quantitative carbon data capture, notably in a joint venture with Intel and the American Council for an Energy-Efficient Economy (ACEEE). BEACN, UC Berkeley’s student-run environmental and strategy consulting group, got a plug for its work comparing emissions reductions accounting tools and methods from Shell, the Open Footprint Forum, GRI and Engie. Dell is working with Context Labs to evaluate emissions throughout the supply chain. The familiar issues of data, metadata, data quality and formats are all present. They have not been solved, but there is a big opportunity to apply upstream data management learnings to carbon data. Holmes observed that in the US, some 9 million people work in finance. In carbon accounting? A few thousand. Can environmental performance be monetized? Seemingly it can, when carbon data ‘becomes an asset’ thanks to accurate emissions measurement and optimized use of clean electricity. ‘The history of energy will be written over the next three decades as follows: the 20s, digital energy transformation; the 30s, accelerating and scaling decarbonization; the 40s, transformation of the energy ecosystem.’
In a panel debate on ‘disruption as the new normal’, David Holmes (Dell) observed that the biggest disruptor is the transition to sustainable energy, where there is an ‘existential need to act’. Steffan Sørenes (Equinor) responded, saying ‘I think a lot about what Equinor will be like in 2050’. The war in Ukraine has underlined the energy trilemma* of sustainability, security and affordability. Holmes agreed: before the war there was zero possibility of building new LNG terminals, opening coal mines and so on. Now energy security is the N°1 priority. Mirroring this is the disruptor of the cloud. Many oils have done deals with the cloud providers, although there is some rebalancing as the bills come in. John Spens (Thoughtworks) agreed: today, big checks are being written to run models in the cloud, but the cost savings are not ‘ubiquitous’. The opportunity from cloud providers lies less in cost saving and more in the new tools that allow employees to concentrate on higher order tasks. Mahdis Moradi (NTNU) observed that the cloud was expected to free users from vendor lock-in. This has led to ‘unrealistic expectations’ as the problems of migrating apps between different clouds emerge.
Sørenes added that the cloud paradigm is now coming out to the edge, allowing for operation without connectivity and respecting privacy/legal requirements. In any case, today there is so much local data on a facility that the cost of a move to the cloud would be ‘insane’. On the topic of cloud-agnosticism, Holmes reported that, in the context of OSDU deployment, Schlumberger is now talking about ‘polycloud’ deployments that leverage the ‘unique characteristics of individual cloud providers’. Sørenes said it depends what problems you are trying to solve. ‘I respect the agnostic approach, but it does have a cost.’ You need to avoid over-complicated processes and architectures and avoid lock-in. ‘I am a proponent of agnostic/polycloud.’ Finally the discussion turned to AI as the great disruptor. The issues of AI ethics, of job destruction (and creation) and of coding (or not) the AI beast were touched upon without any startling pronouncements, although Holmes did wonder if we could ‘ask ChatGPT to write an OSDU’.
* See for instance the UK Parliament debate on the energy trilemma.
John Tomlinson (Halliburton/Landmark) dove deeper into Diskos 2.0. Today’s cloud capabilities have changed a lot, notably shifting expenditure from capex to opex. Halliburton is working to achieve performance in the cloud while keeping costs under control. Diskos 2.0 runs on Landmark’s iEnergy platform in the Oslo AWS public cloud. The NPD’s FactPages and FactMaps are crawled for metadata on well names and NPD guidelines. 16 companies currently use the API. Diskos now dovetails with the Landmark environment, with applications accessing Diskos data through the DecisionSpace Integration Foundation. OSDU appeared on the overview slide, but as one of seven ‘disparate data sources’. Inter-company data transfer is now a matter of entitled data download. This paradigm is to be extended in the future with a flexible, configurable environment, ‘not just a product’. Landmark and Diskos 2 is a ‘journey’, from implementation project to design and validation workshops, with over 100 developers involved in 10 scrum teams and Diskos reference groups for user acceptance testing. The operation took about a year. Technology has progressed a lot since the last migration. In the Q&A, Oil IT Journal asked if a user could just ‘switch on your workstation and start interpreting’. The answer is no, not today, although this is a goal. Tomlinson was also quizzed as to how OSDU could be leveraged in this environment. The answer is that the Halliburton environment is proprietary. The R&D roadmap envisages making it visible to OSDU. ‘That’s how it will be OSDU compliant.’
Tobias Beiche and Mamadou Guessere Gaye teamed up to present Terra Nova, Wintershall Dea’s ‘first taste of OSDU’, the open subsurface data universe. Terra Nova is actually a combination of OSDU and Schlumberger’s Enterprise Data Solution (EDS) that make up a ‘fully tested OSDU data platform that’s ready for business’. Work started in Q4 2022 and is currently (Q3 2023) a ‘minimum viable product’. The platform includes Schlumberger Delfi, Dataiku’s AI and Earth Science Analytics. The MVP shows how OSDU can enhance data management workflows and governance. One test, ingestion of a bid round package (the New Zealand Minerals dataset), was executed by Schlumberger, with data mapping through the OSDU well-known schema format for various data types. The EDS data workspace is used to browse and QC data in OSDU. A video showed data streaming into Petrel, ‘not copied but streaming from the OSDU environment’.
In the Q&A, Oil IT Journal asked if this would displace any SLB data tools in either Wintershall or SLB. The answer was not exactly clear, although one SLB wag suggested that ‘maybe Studio could be rebuilt with an OSDU platform in perhaps 15, 20 or 30 years!’
Lars Gåseby presented Shell’s journey with OSDU. The aim of the exercise is efficiency and faster decision making from a single central catalog of all data, along with lineage and quality metadata. Shell’s target landscape differs from Halliburton’s and Wintershall’s (above), placing the OSDU core platform at the base of the whole stack. Shell is ‘all-in with OSDU systems of record, insight and engagement’. With regard to Shell’s vast legacy data platforms, the idea is to keep managing data in legacy systems, but develop data copy pipelines into an OSDU system of record. At the same time, data ‘mastership’ (which we understand covers data ownership and terminological master data) is moving to OSDU. However, data is currently in the original databases, waiting on migration next year. This will be ‘a much harder job’, for which Shell will need global data standards in the face of many local standards and usages. The program involves crawling multiple Recall instances around the world and then deciding on what the new standard will look like. It is hard to tweak the ingestion pipeline for every datatype and situation, so a staging database will be deployed to assemble standard data before bulk loading to OSDU. OSDU is to be the system of record for Shell’s new data. But what will happen to Shell’s application portfolio? Applications and dashboards will have to be retired or retooled to be OSDU-compliant. Gåseby’s philosophy is ‘think ESSA: eliminate, standardize, simplify and automate’. In the Q&A, Gåseby was quizzed on the role of software vendors in the new environment. Will there be collaboration? Maybe; Shell is looking for products in the OSDU marketplace and is working with vendors on this.
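The staging step Gåseby describes, normalizing records from heterogeneous legacy sources into one standard shape before bulk load, can be sketched as below. The legacy field names and the mapping are hypothetical, for illustration only.

```python
# Hypothetical mapping from legacy field names to a standard schema.
FIELD_MAP = {
    "WELLNAME": "well_name",
    "Well Name": "well_name",
    "TD_M": "total_depth_m",
    "TotalDepth": "total_depth_m",
}

def to_staging(record):
    """Map a legacy record onto the standard schema, dropping unknown fields."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

# Records as they might arrive from two different legacy databases.
legacy = [
    {"WELLNAME": "34/10-A-1", "TD_M": 3120, "VENDOR": "x"},
    {"Well Name": "34/10-A-2", "TotalDepth": 2980},
]
staged = [to_staging(r) for r in legacy]
print(staged)
```

Doing this once in a staging database, rather than tweaking the ingestion pipeline per datatype and source, is exactly the trade-off the talk described.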
In the concluding plenary session, Michael Cleminson (SLB) presented on ‘AI and the radically modernized data platform’. SLB’s ‘Data & AI Platform’ is built on OSDU plus the clouds (for E&P) and on Cognite (for operations). Use cases include ML-assisted fault extraction from seismics, document ‘insights’ from scans, etc. The next big thing is generative AI and LLMs. However, Cleminson, citing SLB tech guru Jerome Massot, warned: ‘if you don’t deal with [data] provenance from a legal, correctness and domain perspective now, problems will compound as the value proposition for this technology is so strong’. LLMs have a role to play in the QC of scanned documents and are claimed to best manual or pre-defined correction models. Likewise, olde-worlde Google-style keyword search is to be replaced by a ChatGPT engine, constrained by the SLB technical glossary. But there is another caveat. Here is Massot again: ‘We always had garbage in, garbage out. But the difference now is that the garbage out is much more difficult to spot!’ Cleminson wound up with another quote-cum-caveat, from Snowflake CEO Frank Slootman, viz: ‘You cannot just indiscriminately let these LLMs loose on data that people don’t understand* in terms of its quality, definition and lineage’. So there you have it!
* That of course begs the question as to whether the LLM ‘understands’ the data!
The next ECIM is scheduled for September 2024. More from ECIM.
Altair’s ‘A Human’s Guide to Generative AI’ is a good starting point for those seeking enlightenment on large language model technology à la ChatGPT. Following a backgrounder on traditional AI and deep learning, the Guide jumps into a very readable explanation of the humongous foundation models on which generative AI applications are built. The main foundation model for business is the large language model, trained on text. These perform the same classification, pattern recognition and prediction functions as simpler machine learning models, but can also create sophisticated, human-sounding content. Key examples of LLMs are Google’s BERT (Bidirectional Encoder Representations from Transformers) and OpenAI’s GPT (Generative Pre-trained Transformer), both released in 2018. Google’s Language Model for Dialogue Applications (LaMDA) adds a conversational interface, while Microsoft and Nvidia have co-developed an open-source model called Megatron-Turing Natural Language Generation (MT-NLG). While the sheer scale of the computing power necessary to develop these models puts their development out of reach for most, there are many ways to use them in business.
One way is to access a foundation model through an intermediary, or by fine-tuning a model on corporate data. This likely means working with one of the major players like Google or Amazon, who can help a business create a walled garden for generative AI access. This keeps company data private both from the outside world and from the provider itself. Another option is to use a programmable LLM such as OpenAI’s Codex. This is one of the options that Altair has integrated into its RapidMiner platform, which provides access to models like ChatGPT to build LLMs for new or proprietary use cases. Users get access to some 300,000 Hugging Face models that ‘can compete with some of the biggest models on the market’.
Cognite has published the ‘definitive guide’ to generative AI for industry, or ‘How digital mavericks are redefining the rules of digital transformation’, available as a 93-page download. Large language models (LLMs) are ‘perhaps the biggest buzzword since blockchain’, but ‘unlike its complex and challenging-to-implement predecessor, business professionals can leverage LLMs without extensive prerequisites’. The Guide provides a short history of AI in industry before introducing ‘hybrid AI’, which combines AI’s analytical power with domain knowledge. Hybrid AI blends physics and AI analytics to provide a ‘more comprehensive and understandable rationale for its recommendations, increasing trust and facilitating human acceptance’. Generative AI, unlike ‘traditional AI’, can generate new data, content, or solutions based on patterns and insights derived from existing data. Generative AI models can learn from context-enriched data without explicit guidance, enabling them to create novel outputs that mimic the characteristics of the training data. Generative AI models can ingest diverse data sets, including historical maintenance records, sensor data, work orders, and unstructured data such as maintenance reports or equipment manuals. Ideally input data should be augmented with ‘semantic, meaningful relationships’ (context). Conversational AI means that operators can interact with models using everyday language, making the technology more accessible to a broader range of users. One significant development is retrieval augmented generation (RAG), which feeds industrial data to the LLM. RAG lets us use off-the-shelf LLMs and control their behavior with private contextualized data. The approach is said to provide ‘deterministic’ answers as opposed to a ‘probabilistic response’ based on existing public information.
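The RAG pattern described above can be sketched in a few lines: retrieve relevant private documents, then prepend them as context to the prompt sent to an off-the-shelf LLM. In this minimal sketch the keyword-overlap retriever is a toy stand-in for a real vector store, and all names are illustrative, not Cognite’s actual API.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# A naive keyword-overlap scorer stands in for a real vector store;
# the assembled prompt would be sent to an off-the-shelf LLM.

def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, documents):
    """Assemble an augmented prompt: private context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Pump P-101 was last serviced in June; bearing vibration is trending up.",
    "Compressor C-203 operates within normal temperature limits.",
]
print(build_prompt("What is the status of pump P-101?", docs))
```

Because the LLM is constrained to the retrieved context, its answer is grounded in private, contextualized data rather than whatever public information was in its training set.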
Cognite has prototyped an AI copilot for reliability-centered maintenance using LangChain technology to better equip operators and reliability engineers to check damaged equipment. The copilot incorporates standards, documentation, and images to run high-fidelity engineering calculations through a human-language interface. Cognite’s Guide does a good job of introducing some of the key concepts and blending them with its existing industrial data offering. The Guide winds up with a short Q&A with CTO Jason Schern who advises on two main misconceptions. Users of ChatGPT may feel that LLMs understand text. They do not! Moreover, generative AI cannot understand industrial data, which lacks the contextual cues that text provides. Sensor data from machinery lacks such context: ‘The machine this sensor belongs to, the work orders that have previously been performed, operating conditions, operating throughput, maintenance history, and other critical contextual information are not included in the sensor data’.
DNV has issued a new recommended practice for the ‘safe application of industrial AI’. AI-enabled systems assurance (DNV-RP-0671) describes a framework for assuring AI-enabled systems, providing guidance on how to assure that AI-enabled systems are trustworthy and managed responsibly throughout their entire life cycle. The advent of AI requires a new approach to risk. ‘Whereas conventional mechanical or electric systems degrade over years, AI-enabled systems change within milliseconds’. Consequently, a conventional certificate provided by DNV, which normally has a three- to five-year validity, could be invalidated with each collected data point (!). This necessitates a different assurance methodology and a thorough understanding of the intricate interplay between system and AI, allowing for a proper assessment of failure modes as well as potential for real-world performance enhancement. The new RP is an addition to DNV’s digital recommended practices that provide guidelines and best practices for procurement, development, and operation of AI and other digital solutions. Access to the AI-enabled systems assurance document is available via a Rules and Standards Explorer subscription. Sign up here for a 14-day free trial subscription. According to Remi Eriksen, DNV Group President and CEO, the new RP will help companies meet the requirements of the EU Artificial Intelligence Act, the world’s first AI law.
An explainer from Esri proposes to demystify GeoAI. GeoAI, a ‘revolutionary technology’, fuses AI with geospatial data, science, and technology to ‘accelerate workflows, uncover valuable insights, and solve spatial problems’. GeoAI is said to offer better situational awareness, insights and predictions. It applies neural net-based deep learning to spatial data to automate the extraction, classification, and detection of geospatial information from imagery, videos, point clouds and text. GeoAI solutions include out-of-the-box pretrained models, models that can be fine-tuned to address specific issues, and custom models for specific requirements. Pretrained GeoAI models are available for highway maintenance with automated road crack detection, or to help with disaster response following a hurricane by comparing before and after imagery. Esri offers some 50 pretrained deep learning models for various use cases which can be fine-tuned to specific needs. More from the ArcGIS GeoAI capability page.
The IBM Institute for Business Value’s CEO’s Guide to Generative AI sustainability has it that ‘old school sustainability is obsolete’. Generative AI is ‘unlike any technology that has come before. It’s swiftly disrupting business and society, forcing leaders to rethink their assumptions, plans, and strategies in real time’. Generative AI’s ‘whiz-kid’ capabilities can ‘analyze environmental data in an instant, uncovering patterns that lead to game-changing insights, offering solutions to stubborn problems across the sustainability spectrum’. Generative AI can ‘optimize operations for both sustainability and profitability, helping leaders avoid suboptimal tradeoffs’.
A recent two-day event hosted by the US National Academies looked at the present and future of AI in advancing scientific discovery. Yolanda Gil, principal scientist at the University of Southern California’s Information Sciences Institute speculated that AI scientists would have the capacity to perform scientists’ core competencies, not just gathering and analyzing data but also providing a ‘reflection process’ and the creativity to come up with new paradigms and ideas. Hiroaki Kitano, CEO of Sony AI, explained his proposal for the Nobel Turing Challenge, to come up with AI systems by 2050 that can make major discoveries autonomously, at the level of discoveries worthy of a Nobel Prize. The National Academies’ workshop sessions can be viewed here.
On Oct. 30, President Biden signed Executive Order 14110 directing the National Science Foundation to establish a pilot program for a National Artificial Intelligence Research Resource (NAIRR), a shared national research infrastructure that will connect US researchers to ‘responsible and trustworthy AI resources’, as well as the needed computational, data, software, training and educational resources to fuel AI research and discovery. NAIRR ‘seeks to democratize access to AI innovation and support critical work advancing trustworthy AI’. The Executive Order directs NSF to expand its network of National AI Research Institutes and advance the development of privacy-enhancing technologies (PETs).
Another Executive Order has led the National Institute of Standards and Technology (NIST) to call for participants in a new consortium supporting development of innovative methods for evaluating artificial intelligence systems to improve the rapidly growing technology’s safety and trustworthiness. The new NIST-led U.S. AI Safety Institute was announced at the recent AI Safety Summit held at Bletchley Park, UK. More too from the Government AI home page.
For a more measured analysis of AI in the context of search we watched Charlie Hull’s (Open Source Connections) presentation on YouTube at the recent Search Solutions event run by the British Computer Society. Hull described the AI, LLMs, GPT ‘buzzword soup’ that is changing search, with better semantic matching (no need for exact word matching), multimodal search for images with text, and multilingual search. Machines can now write plausible text although ‘plausible does not mean good or correct’. LLMs don’t ‘know’ when they’re wrong which can be very dangerous in some settings. Some are trying to measure this, notably Vectara which proposes to ‘cut the bull’ by detecting hallucinations in LLMs. There are also issues with IP and licensing – even of notionally open source LLMs such as Meta’s Llama2. Other issues are panicked regulators, crazy money and the ‘huge, planet-destroying compute footprint’. Returning to his specialization, search, Hull opined that vector search (LLMs) isn’t magic, ‘exciting as it is, it can’t do everything’. A hybrid approach is worth considering for real-world applications, such as reciprocal rank fusion (RRF) that ‘combines multiple result sets with different relevance indicators – like lexical and dense or sparse vector search’. One problem with LLM/GPT search offerings is that ‘they do not come with a manual! They come with a Twitter influencer manual instead, where lots of people online loudly boast about the things they can do (with a very low accuracy rate), which is really frustrating.’ Hull summed up saying that search hasn’t gone away, it isn’t being replaced ‘but we are getting some great new tools’. The specialists have been using ML in search for years and the traditional techniques are still the best for many use cases where exact matching is required.
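The reciprocal rank fusion that Hull mentions is simple enough to show in full: each result list contributes a score of 1/(k + rank) per document, and the fused ranking sorts by the summed scores. This is a generic sketch of the published RRF formula (with the commonly used constant k=60), not code from any particular search engine.

```python
# Reciprocal rank fusion (RRF): merge ranked result lists from different
# retrieval methods (e.g. lexical and vector search) by summing 1/(k + rank).

def rrf(rankings, k=60):
    """rankings: list of ranked lists of document ids, best first."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc_a", "doc_b", "doc_c"]   # keyword search results
vector = ["doc_c", "doc_a", "doc_d"]    # vector search results
print(rrf([lexical, vector]))
# → ['doc_a', 'doc_c', 'doc_b', 'doc_d']
```

Documents that appear high in both lists (here doc_a and doc_c) float to the top, which is what makes RRF attractive for hybrid lexical-plus-vector search.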
Is it worth it? Yes according to an IBM IBV study of AI and automation that found that ‘Three out of four CEOs say their competitive advantage rests on generative AI’ (already??). Moreover ‘companies at the forefront of generative AI adoption and data-led innovation are already reaping outsized rewards, reporting 72% greater annual net profits and 17% more annual revenue growth than peers’.
IBM’s findings appear to be at odds with a recent Financial Times analysis of company filings. The FT reported that while chief executives extol the benefits of AI in earnings calls, ‘rushing to show how they will be beneficiaries of the new technology’, analysis of their regulatory filings suggests much of the talk is just … talk. Almost 40% of companies in the blue-chip S&P 500 index have mentioned AI or related terms in earnings calls in the latest financial quarter, according to data from Alphasense. But fewer than 16% mentioned it in their corresponding regulatory filings, ‘highlighting how AI has yet to make a material impact for the vast majority of companies’. Asset manager Bryant Van Cronkhite said ‘Some companies are saying they’re doing AI when they’re really just trying to figure out the basics of automation. The pretenders will be shown up for that at some point’. Among the non-filing AI boosters were KFC owner Yum Brands and Chipotle, which touts AI as ‘making better tortilla chips’.
According to a new report from the IBM Institute for Business Value, to navigate an ever-changing energy landscape, oil and gas companies must accelerate their transformation into ‘digital energy’ companies. To date, Oil IT Journal has published some 74 articles referring to ‘digital energy’, the first in 1999. So how come, in 2023, digital energy is still tantalizingly a transformation to come? Well, IBM proposes no less than eight strategic domains that require focus and offers lessons learned from a subgroup of companies that are excelling in these areas.
What is new is the conflation of digital energy and the ‘daunting sustainability challenge’. Oil and gas are deemed major factors in global warming, and oil company operations account for 15% of total energy-related greenhouse gas emissions. Facing mounting pressure from the public and regulators to shift to renewable energy sources and clean up production, transport, and processing operations, ‘these companies find themselves at a critical point’.
Ok so far so good… now for the complete non sequitur … ‘[Oil Companies] must evolve into digital energy companies of the future—organizations that embrace energy transition and leverage data and digital technologies to operate their assets more cleanly, safely, securely and reliably’.
The IBV study of 2,000 global oil and gas executives, co-authored by SAP found that ‘most oil and gas companies are making strides in this direction’ (really?). But there is still work to be done since ‘only two in five of the organizations have even set a net-zero emissions target, and only 39% say they are effective at executing digital transformation.’ (a reverse non sequitur?)
IBM sees eight domains where focus is required to become a ‘future-ready, digital energy company’. Domain N° 1 is future energy – i.e. solar, nuclear, hydrogen, and biofuels. Here one intriguing finding is that the companies surveyed are actually planning to reduce spend in solar and offshore wind over the next five years. Domain N° 2, GHG reduction, sees a surprising 62% of respondents divesting from hydrocarbons. One suspects a bit of EU major bias here. In a similar vein, some 58% say they plan to repurpose/retrofit hydrocarbon assets to produce ‘other products’ in five years. Again, interesting, but hardly ‘digital’. Domain N° 3 is data. Here ‘Visionary vanguards and data-driven deciders are well ahead of their peers in cultivating a data-driven culture’, which sounds like a tautology. Needless to say, generative AI and cloud computing are highlighted with a plug for the Shell.ai initiative.
Domains 4 (automation), 5 (mobility) and 6 (training) are hardly revolutionary in 2023. The new business models of Domain 7 sound a lot like the Future Energy of Domain 1. Domain 8, ‘establishing an ecosystem of green partners’ is perhaps an entreaty to give IBM some of the digital cake.
The study concludes with a three-step plan for transforming to a digital energy company of the future. A combination of all of the above will ‘empower companies to usher in a new era of eco-friendly operations and groundbreaking products that propel the world toward a low-carbon future’.
Of course there is practically nothing an oil company can do to eliminate the vast majority of its emissions (à la Scope 3). Fugitive emissions, flaring and other operational ‘pollution’ can be addressed, but the cloud/big data/AI tropes are secondary to a real desire to eliminate – or at least report – emissions, notably with the installation of measuring devices in the field and out on satellites. See for instance the recent Bloomberg story on ExxonMobil’s methane rule-breaking.
AspenONE V14 includes some 140 sustainability sample models, now ‘enhanced with AI’. These include new ‘green H2’ models in Aspen Plus/HYSYS and new GHG emissions monitoring and reporting in Aspen AURA. Aspen SeisEarth has new machine learning capabilities for ‘quick and accurate analysis of rock properties’.
CGG has announced an ‘Outcome-as-a-Service’ offering for high performance computing and artificial intelligence scientific and engineering applications. OaaS offers a ‘guaranteed cost-effective approach’ for users wanting to transition their HPC and AI workloads to a production-based model aligned with their business objectives. It removes the requirement for significant CAPEX and avoids unpredictable usage-based billing. The offering includes AMD EPYC Genoa CPUs, Intel Xeon Sapphire Rapids CPUs and Nvidia H100 GPUs. More from CGG.
Codesys has ‘substantially revised’ its Static Analysis platform for IEC 61131-3-compliant automation systems. The new version 5.0 of CODESYS Static Analysis reports problems in the code directly upon input including bad array accesses, divisions by 0 and null pointers, with a virtual programming assistant that detects bad code ‘smell’ and provides tips for correction. More from Codesys.
Ceetronics has announced release 2023.10 of ResInsight, its open source reservoir flow modeler. More in the release notes and on Github.
Denodo reports ‘significant enhancements’ to its eponymous data management platform with a new FinOps Dashboard that provides production operations and finance staff with views and reports of the various costs incurred, from all analytic and operational data workloads managed by the system. Denodo now offers embedded massively parallel processing capabilities based on Presto, an open-source parallel SQL query engine, to improve performance when processing large data volumes. More from Denodo.
East Daley Analytics has rolled out a new Gas Customer and Shipper Contract Pipeline Data product to view and analyze gas pipeline contracts geospatially across the United States. The new product is available through East Daley’s Energy Data Studio.
Version 4.28 of the ArcGIS Maps SDK for JavaScript has introduced three new date-focused field types: date-only, time-only and timestamp-offset, compliant with ISO 8601 formats. Timestamp-offset fields are used when mapping events that occur across various time zones such as airline schedules, crime statistics or (perhaps) methane emissions. More from Esri.
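The three field types correspond to distinct ISO 8601 shapes; the timestamp-offset variant keeps the local UTC offset with the value, so an event retains its original time zone. The snippet below illustrates the formats themselves using Python’s standard library, not Esri’s storage representation.

```python
# ISO 8601 shapes behind the three field types (illustrative values only).
from datetime import date, time, datetime

date_only = date.fromisoformat("2023-11-02")      # date-only field
time_only = time.fromisoformat("14:30:00")        # time-only field
# timestamp-offset: the -06:00 UTC offset travels with the value
ts_offset = datetime.fromisoformat("2023-11-02T14:30:00-06:00")

print(ts_offset.isoformat())
# → 2023-11-02T14:30:00-06:00
```

Without the offset, a plain timestamp is ambiguous across time zones, which is exactly the problem the new field type addresses for use cases like airline schedules.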
Emerson’s DeltaV Edge includes a sandbox environment to test applications with ‘easy, secure, contextualized’ data access. Users can deploy and execute artificial intelligence analytics ‘close to the data source’ with secure connectivity to contextualized OT data across the cloud and enterprise. The solution is said to expose data that is otherwise ‘trapped’ beneath layers of systems and networks.
Emerson has also announced the Plantweb Insight Valve Health application, combining Fisher control valve expertise with advanced analytic algorithms. Users can monitor an entire connected fleet of valves and prioritize actions based on the health index of each valve.
GE Aerospace Research’s new Sensiworm looks ‘eerily close’ to a living organism. Derived from GE’s Pipe-worm robot, the ‘Soft ElectroNics Skin-Innervated robot worm’ is an inches-long device that can act as eyes, ears, nose and fingers for machine service operators. The device moves autonomously to inspect machinery from the inside, ‘gripping surfaces, exploring environments, and mapping out internal networks’, to achieve an array of inspections and repairs. The device leverages Flex Tech technology from SEMI, an industry-led public-private partnership focused on innovations in hybrid electronics. Check out the worm here.
The new edition of Geographix’ GeoVerse, aka ‘ORION 2023’, brings new features to the geoscience interpretation package. These include auto-track digitizing of curves on raster images, IsoMap layer display and Spotfire integration. More from Geographix.
IOTech’s Edge Central is described as a flexible, open edge data platform that makes industrial data easily accessible, actionable, and manageable. Edge Central is a ‘commercialized’ version of the EdgeX Foundry, said to be the world’s ‘top open-source edge data platform’. Edge Central provides aggregation and analysis of sensor data via standard APIs and ‘seamless’ data flow to and from the cloud with support for AWS, Azure, Google and IBM clouds. More from IOTech.
The Acuity Industrial Cloud Suite from Yokogawa unit KBC will henceforth serve as the umbrella for cloud delivery of KBC’s software and solutions. Powered by the Yokogawa Cloud Platform, Acuity provides centralized plant data management, ‘single pane of glass’ visualization and analytics and native integration with Petro-SIM, Maximus and other KBC applications.
On which topic, KBC has just released PetroSim 7.4, now officially a ‘digital twin’ platform for the hydrocarbon value chain across refineries, petrochemicals, upstream and LNG.
Esri bloggeuse Michelle Bush reports that Kubernetes deployment was a popular topic at the 2023 UC. What does it mean for an ArcGIS Enterprise deployment, when should you consider moving to ArcGIS Enterprise on Kubernetes? Read her blog to find out.
Altair RapidMiner 2023 includes low-code and no-code capabilities and new tools for integrating large language models into business applications. An enterprise can create a version of ChatGPT fine-tuned to its nomenclature, product universe, applications and clients. RapidMiner utilizes ChatGPT’s new API so users can further customize without writing code. Users can access all 300,000 Hugging Face models and fine-tune models with billions of parameters. Expanded AutoML and no-code development is said to ‘bring data science to all’. Altair AI Cloud workspaces allow developers to use a standard IDE to develop production-ready Python code based on governed, centrally-provisioned environments. More from Altair.
SLB has announced the availability of machine learning for Petrel 2022.9 and 2023.3. The new ML solutions target horizon and fault prediction, extraction and property modeling in Petrel. Tasks that previously took ‘weeks, or even months’ can now be completed in ‘days or even hours’. More from SLB and maybe also from this 2021 video by Third I Geoscience.
A new Wellsite Watch application from Intelligent Wellhead Systems provides simplified personnel management for wellsite operations. The application can be accessed via a personal device or tablet with an intelligent digital sign-in either directly via the application or by scanning a QR code. Digital check-in and out ‘replaces days of clipboards and paper’. Data is handled electronically with cloud backup and maintenance to provide a consistently up-to-date roster. More from IWS.
Peloton’s new Platform V2 includes direct access to the latest versions of Peloton’s applications and a new version of the Web API (Data API V2.1). Peloton now also offers renewable wind and solar land data management with a combination of LandView and Peloton Map. The new tool supports global agreements and jurisdictions, both onshore and offshore and is scalable to different environments. More from Peloton.
Esri has announced the ‘PUG Knowledge Hub’, a compendium of past proceedings from Esri Petroleum User Group events and related online materials, all tagged by source, workflow, tech trend, product, and more.
Siemens has announced new cloud services, devices and low-code software for its Industrial Edge ecosystem. The new tools promise better integration of information technology and operations. One component, Industrial Edge Management is a software portal for managing IoT solutions across a plant, providing central management of all devices, applications and users. The solution is also delivered as a hosted, cloud-based system. Industrial Edge is a component of the Siemens Xcelerator business platform. More from Siemens.
The British Antarctic Survey has been testing a new drone from UK-based Windracers in the extreme conditions of the Antarctic Peninsula. Windracers’ ‘Ultra’ is an autonomous, twin-engine, 10m fixed-winged aircraft, capable of carrying 100kg of cargo and/or sensors up to 1,000km. The flexible platform can also be configured to carry a range of sensors for collecting scientific data. More from Windracers.
IOTech has appointed David King, former CEO of FogHorn as board member and company advisor.
Riverbend MD Eric Danziger is joining the Qube Technologies board of directors following the recent funding round (see Done Deals).
The Construction Industry Institute’s executive committee and The University of Texas Cockrell School of Engineering have confirmed Jamie Gerbrecht (hitherto Interim Senior Director) to the position of CII’s Executive Director. Similarly, Daniel Oliveira sees the ‘Interim’ tag dropped. He is now Director of Research and Operations.
Abdulaziz Al-Gudaimi, formerly with Saudi Aramco has joined institutional investor EIG as Senior Advisor and Chairman of MENA Operations.
Kurt Machnizh has joined Eliis as VP North America. He hails from AspenTech.
Howard Energy Partners has appointed Scott Tinker to its board of directors. He was previously with the US Bureau of Economic Geology.
Ralph Haupter has joined the Hexagon AB board. He hails from Microsoft.
The International Oil & Gas Producers Association (IOGP) has named Faye Gerard as new Energy Transition Director, succeeding Concetto Fischetti who returns to Eni. Gerard is seconded from BP and will be based at IOGP’s Houston office.
Intelligent Wellhead Systems has appointed Tracy Gray as Director of Strategy & Marketing and to its executive team. She hails from Sodexo.
NOV has announced the retirement of Isaac Joseph, president, wellbore technologies, and Kirk Shelton, president of completion and production solutions. The company is also consolidating its reporting structure into two segments: Energy Equipment (led by Joe Rovig) and Energy Products and Services (with Scott Livingston as president).
Sabine Brink is now Principal with Shell Ventures. Her previous position as ‘Global blockchain and Web3 manager’ has been taken by Vikram Seth.
Sid Perkins (CEO) and Ike Perkins have founded Snapper Creek Energy to offer a ‘data-driven, client-first’ approach to energy trading. Sid hails from ION Energy Group, Ike from Mercuria Energy America.
Ashraf Jahangir is joining Verdantas as Chief Strategy Officer. He was previously with Kleinfelder.
W Energy Software has appointed Gary Napotnik as SVP Marketing and Trey Simonton as SVP Revenue. Napotnik hails from M-Surge, Simonton from MadCap Software.
Deaths
James Grant, CIO with New European Offshore, died recently of a heart attack. Read his own memento mori on LinkedIn.
Object Management Group Chairman and CEO Richard Soley has died. More on the OMG website.
Jean Burrus, a basin modeling pioneer and head of the Geology and Reservoir Engineering departments at IFPen, has died. His death, aged 66, was accidental and occurred as he was distilling schnapps in his family home in Villé, Alsace (France). More on LinkedIn.
Denodo provider of a logical data warehouse to, inter alia, Oxy, recently announced a $336 million investment from ‘alternative’ asset manager TPG.
CGG has sold its 49% stake in Arabian Geophysical and Surveying Company (ARGAS) to Industrialization and Energy Services Company (TAQA). CEO Sophie Zurquiyah described the sale as ‘the final step in CGG’s move to become an asset-light company, exiting the data acquisition services business and strengthening the focus on our differentiated high-end technology businesses’.
Open source seismic interpretation software developer dGB Earth Sciences is inviting the ‘free’ users of its OpendTect flagship to contribute to a GoFundMe campaign to fund the development of a state-of-the-art well tie module that will be accessible for everyone. dGB plans to overhaul the existing module and create one with industry-standard displays, QC tools and new functionality such as Roy White’s wavelet estimation method. The project will span 3 to 4 months and requires $75,000. Chip in on GoFundMe.
Expro has acquired ‘Hook-to-Hanger’ subsea well completions specialist PRT Offshore in an approx. $106 million cash and paper deal. RBC Capital Markets advised on the deal.
Forum Energy Technologies is to acquire downhole technology solutions provider Variperm Energy Services for a consideration of $150 million of cash and 2 million shares of FET’s common stock.
The French Ministry of Economy, as part of its foreign direct investment review process, has rejected Flowserve’s acquisition of Velan Inc. Flowserve will now terminate the deal. No fees are payable by either party.
IOTech, an open IoT edge computing specialist, has received an investment from Dell Technologies Capital. Existing investors SPDG (Société Anonyme de Participation et De Gestion), the holding company of the Périer-D’Ieteren family, Northstar Ventures and Scottish Enterprise also took part in the fundraising.
Kahuna Workforce Solutions has secured some $21 million Series B funding in a round led by Resolve Growth Partners.
One Equity Partners is to acquire TechnipFMC’s Measurement Solutions Business. OEP has completed twenty-one similar transactions across its latest three funds, including eleven operating in industrial-focused markets.
Continuous emissions monitoring solutions provider Qube has secured Series B Funding from Riverbend Energy Group. The monies will be used to further develop Qube’s solutions including ‘physics-guided AI algorithms’.
SAP has completed its acquisition of LeanIX, a provider of enterprise IT architecture management. LeanIX’ software-as-a-service solutions enables organizations to visualize, assess and manage the transition towards a target IT architecture. The combined offering is said to provide a comprehensive foundation for ‘AI-enabled’ transformation.
The five-day Face to Face meetup of the IOGP’s CFIHOS*, hosted by Eastman in Kingsport, Tennessee, brought together over 100 attendees from 47 companies. For a backgrounder on this report you may like to read or re-read our update on Oil country construction standards in our last issue.
Manuel Becker* from German industrial gases and engineering company Linde showed how Cfihos, along with other international standards have informed Linde’s project information management. Linde has extended the Cfihos data model into its own interpretation of the US Construction Industry Institute’s (CII) Advanced Work Package (AWP). In practice, Linde has separate AWPs for engineering, procurement, construction and installation. The AWP approach is said to ‘steer’ each activity with data-driven project tracking.
* Becker is project manager for Linde in the EU ‘Backbone’ program that is to deliver ‘hydrogen on tap’.
Shell’s Anders Thostrup also addressed the issue of multiple construction industry standards. Emerging standards such as Cfihos, the CII’s AWP, ISO 19008 and RDS PP (the Referenced Designation System for Power Plants) are ‘overlapping, incomplete and inconsistent’. Shell’s approach is to map its data requirements to the relevant international standards using an ‘envelope concept’. This allows for the use of requirements from standards via references or mappings and allows for multiple standards to be embedded in corporate standards and software, adapted to local requirements. The approach also has the potential to feed back into and improve the parent standards. The Envelope, an overarching engineering information specification, is a superset of information requirements for sub-domains of asset, project and contract information management. Cfihos specifications, data model and implementation guides play a significant role in the system but the Shell data model ‘includes additional entities that are not part of the international standards’. Shell provides EPCs with an overview of the system to show ‘what is aligned with and what deviates from the standards’. Thostrup also touched on Shell’s 3D Model DEP, currently under review, that is based on the USPI-NL Facility Lifecycle 3D Model Standard (FL3DMS).

Duhwan Mun, Professor in the School of Mechanical Engineering at Korea University, cited Sword’s Phusion as an example of a commercial handover package that supports the Cfihos data model and meets handover procedures and requirements. Phusion is appreciated by owner-operators and data management contractors alike. Some owner-operators have deployed (and paid for) Phusion to be used by project participants. This means that a data management contractor tool and content can be passed on to the owner at handover. Hanwha Ocean and Samsung Heavy Industries have leveraged Phusion on the Inpex Ichthys LNG project.
Mun is now engaged in mapping the Cfihos data model to legacy systems from Aras PLM and ERPNext’s ‘free and open source’ ERP software built on the Frappe low-code framework. According to Sudharshan Nambiar (Petronas), many contractors have never heard of Cfihos. But Petronas has been an enthusiastic adopter of the standard in its engineering projects since 2021, proactively engaging with EPC contractors for Cfihos adoption throughout the project cycle. This means helping contractors transfer data from their own datasheets into the Cfihos format. Nambiar observed that mapping between engineering tools, O&M systems and Cfihos ‘requires a lot of effort’ but ensures standardization. Cfihos is a component of Petronas’ Nested Engineering approach to handover and data centric review. A key component of the ‘nested single source of truth’ is eMTR, the electronic master tag register, which enables progressive data handover. The eMTR is accessible by all parties from a dedicated URL. See also the P-EDMS portal to the Pengerang refinery complex.
* The Capital facilities information handover standard workgroup of the UK-headquartered International Oil and Gas Producers’ association. Also known as the IOGP JIP 36.
Cybersecurity in Energy Infrastructure, a joint publication from two Linux Foundation entities, LF Energy and the Open Source Security Foundation, is an 18 page explainer of the OSSF’s philosophy as it applies to the energy transition. ‘Contrary to common misconceptions, OSS offers not just affordability and adaptability but also a robust shield against cyber threats.’ We learn, inter alia, that ‘Blockchain technology … allows consumers with micro-generation power to purchase and sell excess electricity to other consumers rather than the local utility’. ‘Utilities will no longer have total control over energy systems when distributed energy resources replace centralized power generation’. The OSSF project’s core values include ‘public good, openness and transparency, maintainers first, diversity, inclusion, and representation, agility and delivery, credit where credit is due, neutrality, and empathy’. Sentiments which are unlikely to cut much ice with the hackers.
Blogging on the ISA website, Nikhil Kapoor (Bechtel) proposes an introduction to cyber risk assessment of operational technology/industrial control systems with the implementation of standards like IEC 62443. Assessing OT cybersecurity risk is challenging: there are misconceptions about cybersecurity as it relates to the process industry, limited industry databases on cybersecurity events, rapidly changing technology and a continually evolving threat landscape. Kapoor shows how an IT cyber risk matrix can be adapted to OT/ICS specifics.
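To make the matrix idea concrete, here is a minimal sketch, not Kapoor’s actual matrix: a conventional IT-style 5x5 likelihood-times-consequence scoring, adapted for OT by rating consequences along several dimensions (confidentiality, availability, safety) and letting the worst-case dimension, typically safety, drive the rating. The thresholds and dimension names are illustrative assumptions, not taken from the blog or from IEC 62443.

```python
def classify(score: int) -> str:
    # Illustrative 5x5 matrix thresholds (assumed, not from the source)
    if score >= 12:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

def ot_risk(likelihood: int, consequences: dict) -> str:
    """likelihood: 1-5. consequences: per-dimension scores 1-5.
    For OT/ICS the worst-case dimension (often safety) drives the rating."""
    return classify(likelihood * max(consequences.values()))

# An unpatched controller: negligible data impact, but a credible
# process-safety event if exploited.
rating = ot_risk(3, {"confidentiality": 1, "availability": 4, "safety": 5})
```

The point of the adaptation is the `max` over dimensions: an IT matrix that scores only data confidentiality would rate this scenario low, while a safety-weighted OT matrix rates it high.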
Another ISA blogger, Kaashif Nawaz, takes a closer look at the IT/OT airgap. In theory an air gap sounds like a good strategy, but it’s not that simple. Assessments often prove that most assumed air gaps aren’t really air gapped. ICS adjustments, fixes and updates often require a connection to the OT network. Employees may charge their phones with a USB connection! Air gapped networks may inspire ‘workaround’ tactics like mobile wi-fi hotspots. Air gaps don’t protect against spies, criminals, disgruntled, tired, or lazy staff carrying out dangerous or malicious activities. Finally, computers ‘leak’ electromagnetic spectrum that can be picked up remotely.
Siemens has announced SINEC, a new ‘all-in-one’ cyber security test suite for industrial networks. SINEC provides access to a software framework including a suite of cybersecurity tools and solutions for asset detection and identification, compliance checks, malware scans and vulnerability checks. The SINEC Security Inspector was originally developed for internal use by Siemens and is now available for any industrial environment. Along with tools developed by Siemens, the SSI includes vulnerability management solutions from Tenable. More on the SSI from Siemens.
The US NIST has released an Interagency Report (NIST IR 8476) from its 3rd High-Performance Computing Security Workshop. HPC systems often have unique hardware, software, and user environments that pose distinct cybersecurity challenges. The workshop heard, inter alia, from researchers working on an encrypted Message Passing Interface (MPI) library called CryptMPI, on the detection of configuration-based vulnerabilities in HPC workload managers, on searchable encryption for scientific data, and on regulated research and the NIST SP 800-223 HPC security standard.
As reported by XBRL.org the US Securities and Exchange Commission (SEC) has published its final rule on Cybersecurity risk management, strategy, governance, and incident disclosure by public companies, requiring companies to make annual disclosures on cybersecurity risk management, strategy, and governance, as well as additional disclosures following any material incidents (using Inline XBRL). More from the SEC’s announcement and the final rule itself.
TXOne Networks has released its Edge V2 engine to protect complex, large-scale OT environments. Edge V2’s automatic rule generation capability protects sectors, including manufacturing, oil and gas and automotive, by simplifying network segmentation and keeping operations running throughout cybersecurity events.
The Secure Supply Chain Consumption Framework (S2C2F) is a ‘robust strategy for the secure use of open source software tools to build software’. S2C2F uses a threat-based risk-reduction approach to mitigate real world threats. The protocol was originally built by Microsoft and has now been adopted by the OpenSSF under the Supply Chain Integrity Working Group. More too from Microsoft.
A new report from DNV, ‘Energy Cyber Priority 2023’, draws on a survey of 601 energy professionals along with a number of in-depth interviews. The report finds a step change in cyber threat awareness, but the energy transition is mandating even more robust cyber security. ‘Digitally advanced’ organizations are more likely than the average to believe cyber-attacks are a major threat to their organization. In these companies, security by design and supply chain vulnerabilities are at the top of the agenda although investment, skills shortage and poor collaboration remain ‘major challenges’. DNV found that while there is an increased awareness of heightened cyber security risk (over 2022), there has been a ‘failure to invest accordingly’. There is hope though that regulation (especially the EU’s NIS2 directive), ‘the foremost driver of investment in cyber security in today’s energy industry’, is going to ‘unlock increased budget at these organizations’. Another catalyst for increased spending is a cyber incident, or a near miss.
Petras (a.k.a. the UK National Centre of Excellence for the Internet of Things) has issued a six-part animation video series on real-world scenarios posed by Internet of Things cybersecurity for industry sectors, and how its researchers are tackling the challenges. Complex IoT cyber challenges are presented through the lens of various industry sectors including mobility, supply chains and control systems. More from Petras and on the Petras YouTube channel.
The 4.2 release of Network Perception’s NP-View platform is powered by a second-generation path analysis algorithm that offers performance improvements, including faster loading of access rules and object groups reports. OT network auditors now have greater visibility into rule review, and therefore greater context for organizational rulesets, and the ability to analyze networks more quickly. NP-View also offers enhanced parsing capabilities for configuration files with a large number of access rules (10,000+ per device) and object groups (30,000+ per device). Improved parallel processing performance reduces large file analysis to less than one hour. More from Network Perception.
The National Cybersecurity Center of Excellence (NCCoE) has published the final version of NIST Interagency Report (NIST IR) 8406, Cybersecurity Framework Profile for Liquefied Natural Gas. The LNG Cybersecurity Framework Profile can be used by the LNG industry to address and mitigate cybersecurity risks associated with LNG processes and systems.
EverLine has opened a Security Operations Center to provide tailored OT security incident response services to industrial users. The SOC deploys tools from Darktrace, Claroty and Amulet to provide 24x7x365 OT network monitoring, situational awareness, threat intelligence, incident response, and physical security monitoring. More from EverLine.
The German Namur User Association of Automation Technology in Process Industries has published a ‘WG-PRAXIS’ document covering attack detection pursuant to the German IT Security Act 2.0. As of May 2023, operators of energy supply networks and energy facilities defined as critical infrastructure must deploy adequate attack detection systems in their information technology systems, components and processes. Such systems shall continuously and automatically capture and evaluate attack parameters, be able to identify and prevent threats and initiate suitable action to eliminate disruptions. More from Namur.
Mandiant (part of Google Cloud) has just published M-Trends 2023, a 118 page report on recent cyber activity. Of particular note is a 12 page section covering the invasion of Ukraine: Cyber Operations During Wartime. The comprehensive report covers a lot including a ‘red team’ challenge of clients’ networks that adopted the same ‘brazen tactics’ as the hackers, gaining initial access by showing up in person pretending to be a technician. Download the must-read M-Trends free here.
The 2023 edition of The NightWatch, an annual event hosted by DNV’s Applied Risk unit, addressed the future of operational technology security. Jelle Niemantsverdriet (Microsoft) presented Microsoft’s Digital Defense Report and suggested ‘leveraging technologies such as cloud computing and artificial intelligence in the industrial sector’ to build a ‘more resilient, faster-moving and innovative business’. However, technology cannot mitigate people and process problems or ‘fix the burning platform that we are all on!’ (!!). In the Q&A, Alfred Schroder (IV Offshore) observed that much of today’s infrastructure (Ethernet and IP) is built on legacy protocols from the 1970s that are ‘the roots of all cybersecurity risks’. ‘Is it not time for the large corporations such as Microsoft to direct the vast resources at their disposal into developing a replacement for Ethernet and IP that is 100% secure and totally hack proof?’ Another questioner with a process safety background added that ‘cloud adoption is a challenge for us as cloud = hacked!’ Another asked if Microsoft could make a non-consumer grade operations technology OS with specific security features with free upgrades for a minimum defined period, ‘so our sensitive industrial assets don't end up connected to unpatched, unsupported operating systems every time Microsoft releases a consumer grade OS upgrade’. We have to report that Niemantsverdriet was stumped by these attacks. ‘I guess we could [deliver a non-consumer grade OS], it’s a hard question’.
Shell’s Madina Doup asked ‘are we ready’ for the technology disruption that is heading for operations technology. OT is at the heart and core of Shell, it ‘touches’ all Shell’s businesses and is ‘very vulnerable’. While ‘OT is not IT’, disruption is coming to both as OT opens up with bidirectional connections to the cloud. Of course disruption has benefits: resource optimization, less asset downtime. But there are risks in cybersecurity and in the workforce’s understanding of new tech. OT is a ‘slow, old fashioned environment’, how do we bring operators up to speed? The industrial internet of things (IIoT) is ‘growing like mushrooms across assets, are we in control?’ ‘How do we protect direct to cloud access?’ AI is another risk as artefacts can be hard to distinguish from reality, although we can use AI to combat AI! We are also looking at edge computing, where things are moving ‘as fast as AI’. But this is even more dangerous as vendors’ dashboards and analytics are out of Shell’s control. Regarding the challenge of data volumes (up 3,000x over IT), Doup expects quantum computing to help. Doup’s problem as enterprise architect is ‘How do I ensure standardization and data scope across all of this?’ ‘People are doing a lot of stuff independently.’ Doup is working with the TOGAF methodology to establish an enterprise architecture for OT. This is ‘quite a journey!’ Folks are reluctant! So far, EA governance of OT has been established, with a strategy and reference architecture that puts control and monitoring ‘all in one place’. This is now under review by OT Governance. Doup wants to ‘bypass all the politics’ (OT in Shell is quite political!) to build a reference architecture ‘speaking with one voice across assets’ to mitigate technology disruption. Asked in the Q&A what standards would be embedded in the new architecture, Doup replied ‘OPC UA, OPAC, and security standards’. Vendor architectures are to be merged through the OPC reference architecture.
One questioner observed that ‘OT is close to maintenance and other disciplines, it has nothing to do with IT!’ Doup admitted that when she started the environment was ‘kingdom based’. The disruptive move to bidirectional cloud connectivity was ‘not what companies want’ but rather ‘imposed by technology evolution’.
Marianne Mangersnes spoke to a shift from security-focus to resilience in Equinor’s OT. Using the bow tie risk analysis paradigm, Mangersnes stated that the traditional defense is on the left hand side of the bow tie, reducing the likelihood of attacks. But even if this is OK, there will always be a way for a dedicated attacker to succeed. Hence the shift to resilience. In the last year there has been more data exchange between IT and OT, to gather production data and enable remote operations. Use cases include machine learning for condition monitoring, robotics and Industrie 4.0-style interoperability with vendors. There are many problems with OT. Patches cause problems that may mean shut down and production loss. OT has a longer lifetime than IT. Some obsolete components may not update. Cyber protection may be limited. Mangersnes advises ‘don’t fix what ain’t broke’, training is key. Cybersecurity needs to be a distinct discipline. Focus on the right hand side of the bow tie, on post-incident recovery that allows production to continue even with an ongoing attack. Establish recovery plans and exercises across the value chain. Here the scenario must be realistic and severity should be the worst case. Risk is never zero, resilience is crucial. In the Q&A Mangersnes proffered some more advice. Vendors will bring their own devices, this is OK but you need to virus check devices and USB sticks. Backups can’t be tested on live systems so test them on a simulator to make sure a backup is a real backup. In greenfield development, cybersecurity requirements are written into contracts.
More from The Night Watch home page.
Aveva and Microsoft are to team on the ‘industrial metaverse’. What is it? An ‘interconnected, immersive and persistent virtual universe where teams can interact with each other and digital objects in real time’, a.k.a. the ‘digital twin on steroids’. More (much more!) in a similar vein here.
ABB has leveraged technology from Luna Innovations in what is claimed to be the world’s largest fiber optic-based infrastructure monitoring system, deployed along the Trans-Anatolian Natural Gas Pipeline (TANAP) from Azerbaijan into the EU. The system provides leak detection, pipeline and facility security across the 1,811 km route, in a project spanning six years.
BP is to ‘leverage the power of generative AI’ with a deployment of Microsoft’s Copilot for Office 365. BP is one of the first companies globally to act as a launch partner for the ‘intelligent AI assistant’ which is to roll out access across a substantial part of its global workforce from early 2024.
CGG has signed with AI boutique LightOn to leverage CGG’s industrial High-Performance Computing (HPC) solutions to evaluate and test large language models in industry. LightOn’s AI Paradigm platform is to run at CGG’s HPC & AI Centre of Excellence using ‘proprietary immersive cooling and renewable energy’.
In another deal, CGG is teaming with French data center operator Eclairion to establish a state-of-the-art, energy-efficient infrastructure capable of hosting the high-power densities of next-generation servers. CGG’s HPC and immersion cooling expertise is to be deployed in a new ‘low-carbon’ modular data center in Bruyères-le-Châtel, France to serve industrial, academic and government users.
Dragos has signed a memorandum of understanding with Saudi Aramco for the cyber protection of industrial infrastructure in Saudi Arabia. The MOU envisages the creation of a local hardware assembly facility and OT cyber training academy for the Kingdom, Aramco and its affiliates.
Halliburton is teaming with Norwegian Sekal AS on the joint development of well construction automation solutions as part of a longer-term strategy to deliver fully automated drilling operations. Halliburton’s well construction solutions are to combine with the Sekal DrillTronics automation platform. More in the release.
Equinor is to deploy four cutting-edge K-Sim Offshore simulators delivered by Kongsberg Digital across its North Sea operations. The simulators will be integrated with Kongsberg’s K-Pos Dynamic Positioning systems and Norbit’s Oil Spill Detection system, Aptomar. The simulator suite will be installed at the North Cape Simulator Centre in Honningsvaag, Norway.
MFE Inspection Solutions is partnering with Dnota to provide air quality monitoring solutions. MFE’s Canadian offices will offer Dnota’s Bettair IoT Nodes for precise, large-scale mapping of pollution in cities or industrial environments.
Peloton is now ‘Powered by Snowflake’. This lets the company manage its large confidential client datasets, sharing critical well data via Snowflake under strict governance. Clients with their own Snowflake implementation can include well data from Peloton without copy or cloning to their own instance, combining it with sensor and downhole data for further insights.
Eni has awarded SLB (formerly Schlumberger) a global methane emissions reporting project. SLB’s End-to-end Emissions Solutions business (SEES) will deliver comprehensive fugitive methane emissions measurement and reporting plans for Eni’s global operating facilities in alignment with the UN’s Oil & Gas Methane Partnership 2.0 directives. More in the release.
Siemens has partnered with workflow specialist ServiceNow to offer ‘transparency’ in industrial asset management. The Software-as-a-Service is said to provide recognition, identification, and management of OT devices with the NowPlatform from ServiceNow. The status of all devices on the network, regardless of manufacturer or device type, is now available from a single tool. More in the release and from Siemens.
Microsoft has made Azure RTOS, including all of its components, available as the Eclipse ThreadX open source project, a vendor-neutral, open source, safety-certified OS for real-time applications.
The Open Geospatial Consortium (OGC) has approved version 1.0 of CoverageJSON as an official OGC Community Standard. CoverageJSON enables the development of interactive visualizations that display and manipulate spatio-temporal data within a web browser. OGC has also kicked off an Open Science Persistent Demonstrator (OSPD) pilot, a multi-year project that focuses on connecting geospatial and earth observation data. The pilot will produce a web portal to demonstrate how platforms operated by different organizations can be used for collaborative research and data representation.
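To give a feel for the CoverageJSON format, here is a minimal ‘Coverage’ object for a single-point temperature time series, built and serialized in Python. The structure (domain, parameters, ranges) follows the OGC CoverageJSON community standard; the location, times and values are invented for illustration, and a real document would also carry a ‘referencing’ section declaring the coordinate reference system.

```python
import json

# Minimal CoverageJSON sketch: one observation point, two time steps.
coverage = {
    "type": "Coverage",
    "domain": {
        "type": "Domain",
        "domainType": "PointSeries",
        "axes": {
            "x": {"values": [4.35]},     # longitude (invented)
            "y": {"values": [50.85]},    # latitude (invented)
            "t": {"values": ["2023-11-10T00:00:00Z",
                             "2023-11-11T00:00:00Z"]},
        },
    },
    "parameters": {
        "temperature": {
            "type": "Parameter",
            "unit": {"symbol": "K"},
            "observedProperty": {"label": {"en": "Air temperature"}},
        },
    },
    "ranges": {
        "temperature": {
            "type": "NdArray",
            "dataType": "float",
            "axisNames": ["t"],
            "shape": [2],
            "values": [285.2, 285.6],    # one value per time step
        },
    },
}

doc = json.dumps(coverage, indent=2)  # ready to serve to a browser client
```

Because the payload is plain JSON keyed by parameter name, a browser visualization can fetch it and index `ranges` directly, which is the interactivity the standard is aiming at.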
A new report, IOGP 696 - Remotely Piloted Aircraft Systems (a free download) provides recommended practices for RPAS operations that are either operated directly or subcontracted by IOGP member companies. This report is part of IOGP’s Oil and Gas Aviation Recommended Practices. IOGP has also announced a joint industry ‘sprint’ to develop a material digital passport specification. The MDP aims to reduce fraud by enhancing traceability and proof of origin of commodities in the supply chain. MDP will deploy component tagging, unique universal identifiers (UUID) and ‘digital technologies’. More from the IOGP’s Digital Transformation Committee.
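Since the MDP specification is still at the ‘sprint’ stage, no schema has been published; the following is purely a hypothetical sketch of how a UUID component tag plus a content hash could support the traceability and proof-of-origin goals. All field names and the helper functions are invented for illustration.

```python
import hashlib
import json
import uuid

def make_passport(component: str, heat_number: str, mill: str) -> dict:
    """Build an illustrative material digital passport record."""
    record = {
        "passport_id": str(uuid.uuid4()),  # the component's unique identifier
        "component": component,
        "heat_number": heat_number,
        "mill_of_origin": mill,
    }
    # A digest over the canonicalized record gives downstream parties a
    # cheap tamper check as the passport changes hands in the supply chain.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the digest over everything except the digest itself."""
    body = {k: v for k, v in record.items() if k != "digest"}
    return record["digest"] == hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
```

Any edit to the record after issue (say, a substituted mill of origin) breaks the digest, which is the fraud-reduction property the IOGP announcement alludes to.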
The OPC Foundation has published a standard for the ‘Metaverse’! The standard is the outcome of an OPC workgroup chaired by Erich Barnstedt (Microsoft) that has leveraged OPC UA standards for information modelling, information exchange, cloud connectivity and asset identification. Use cases include remote or on-premises condition monitoring using a HoloLens for remote assisted machinery maintenance and process training in safe, simulated environments. Code for these solutions is open-source and published on the OPC Foundation GitHub repositories as per the above links. See too the OPC Foundation YouTube channel for more on condition monitoring across wind farm assets.
The Open Source Geospatial Foundation has released version 3.8 of GDAL, its C++ geospatial data access library for raster and vector file formats, databases and web services. More on the new release from OSGeo and the repository.
The POSC Caesar Association has released its Industrial Data Ontology (IDO), a W3C OWL 2 ontology for use across the life cycle of industrial assets and processes. IDO is used to build vocabularies and manage asset models that employ Reference Data Libraries (RDLs). IDO is suitable as a top-level ontology for industrial data terminology that defines generic terms for things that exist in more than one industrial domain. The ontology was previously known as ISO 15926-14. IDO is the initial part of the new multipart ISO standard ISO/NP 23726, Ontology Based Interoperability, with the number ISO/NP 23726-3. The initial working draft of IDO is available.
PIDX is in the process of finalizing its ETDX Scope 3 Emissions Reporting Standard and aligning it with the WBCSD’s Partnership for Carbon Transparency (PACT) standard. A production-ready standard is planned for year-end 2023.
Speaking at the recent Open Group Summit in Houston (as reported on LinkedIn) Petrobras’ Pedro Vieira floated a proposal for a new engineering data standard. The standard takes inspiration from The Open Group’s OSDU and has been provisionally named the Open Engineering Data Universe (OEDU).
The Society of Exploration Geophysicists has released an update to the SEG-Y seismic data exchange standard. SEG-Y Revision 2.1 provides a method of capturing and recording user knowledge via an XML file, written between the binary header and the trace data. This renders datasets machine readable and suitable as inputs for machine-learning and artificial intelligence applications. The new format can be downloaded from the SEG Technical Standards web page. A PDF is available here. This is (old?) music to our ears as Oil IT Journal, back in 1999, argued for a ‘re-looking’ of SEG-Y à la XML!
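For readers unfamiliar with the file layout the XML is being slotted into: a SEG-Y file opens with a 3,200-byte textual header followed by a 400-byte binary header, after which (in Rev 2.x) extended headers such as the new XML record can precede the trace data. A minimal sketch of pulling a few well-known fields from the binary header with Python’s standard `struct` module is shown below; the byte offsets follow our reading of the published standard (sample interval at bytes 3217-3218 of the file, samples per trace at 3221-3222, format code at 3225-3226, revision bytes at 3501-3502) and should be checked against the SEG document before use.

```python
import struct

def read_binary_header(segy_bytes: bytes) -> dict:
    """Parse selected fields from the SEG-Y 400-byte binary file header."""
    bin_hdr = segy_bytes[3200:3600]            # skip the 3200-byte textual header
    interval, = struct.unpack(">h", bin_hdr[16:18])   # sample interval, microseconds
    n_samples, = struct.unpack(">h", bin_hdr[20:22])  # samples per data trace
    fmt_code, = struct.unpack(">h", bin_hdr[24:26])   # data sample format code
    # Rev 2.x stores major/minor revision as two single bytes (our reading)
    major, minor = bin_hdr[300], bin_hdr[301]
    return {
        "sample_interval_us": interval,
        "samples_per_trace": n_samples,
        "format_code": fmt_code,
        "revision": f"{major}.{minor}",
    }
```

All multi-byte header values are big-endian 16-bit integers, hence the `">h"` format string; a full reader would go on to locate any extended (XML) headers before the first trace.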
Researchers from Saudi Aramco and U Massachusetts have come up with a new solution for IoT-style monitoring of pumps and other instrumentation in harsh hydrocarbon environments. A new ‘epoxy and fluoroelastomer’ glue was used to attach an accelerometer to a downhole pump. After extensive submersion testing in high-temperature (150 °C) hydrocarbon fluid for over 10,000 hours, microscopic imaging and FTIR spectroscopy revealed negligible degradation. The study was said to ‘enhance reliability and safety for vertical oil pump condition monitoring in downhole applications, benefiting the oil and gas sector’.
What we found intriguing is that this 18 page paper* was published in a Nature Scientific Report. One might have thought that a modest SPE paper would be more appropriate. How does this grunt-level engineering work get into Nature? Seemingly the back door to Nature is via the Springer Nature SharedIt content-sharing initiative. As we understand it, researchers apply (and pay) for a DOI and then just push their paper up to the Nature shared repository, bypassing any editorial control**. Maybe we are somewhat conspiratorial, but perhaps a publication in Nature is worth more to a researcher than an SPE paper. Who knows?
* Wankhede, S.P., Du, X., Brashler, K.W. et al. Encapsulating commercial accelerometers with epoxy and fluoroelastomer for harsh hydrocarbon fluid environment. Sci Rep 13, 19815 (2023) and on Nature.
** We did ask the Springer/Nature team for clarification. Our query has been ‘escalated’ to the production team but so far, no clear answer.
SAP has just celebrated the ‘momentous’ 40th anniversary of its own programming language, ABAP (Advanced business application programming). Sonja Liénard, VP ABAP Developer Tools at SAP said, ‘If you really want to have an impact on the world, without anyone noticing it, then you should learn ABAP. Because ABAP is everywhere. About 80% of all business transactions run on ABAP.’
ABAP is SAP’s own programming language used to build and operate business-critical applications. SAP co-founder Klaus Tschira had an idea for a document analysis program using assembler macros to process real-time financial and materials management data. In the 1980s, Gerd Rodé created the first ABAP version as a standalone fourth generation programming language (4GL) that, along with Tschira’s system, became a front-end to SAP R/2. In the early 1990s, SAP R/3 was developed with ABAP. SAP continues to invest in ABAP, lately with the addition of a ‘completely refactored’ cloud edition.
We were curious to see what ABAP code looked like and found the Discovering ABAP resource. A quick spin through some of the examples suggests that ABAP has adopted the object-oriented paradigm. Also it would appear that a good knowledge of database programming with SQL might be handy to a newcomer. Low code it is not! For those seeking simpler manipulation of the SAP ERP system, there is SAP’s own low-code offering.