SMi/TNO Production Monitoring Master Class

Production monitoring is a prerequisite to i-Field applications—and requires a significant planning effort for ‘sustainability.’ Production monitoring and optimization is not your average IT project!

Anton Leemhuis introduced the SMi/TNO Real Time Production Monitoring master class, held last month in London, by asking ‘why monitor?’ While the goal of production optimization (PO) is fairly clear, in reality take-up and sustainability have been limited. Frequently, the immediate need is for better monitoring—‘you can only control what is monitored, and you can only optimize what is controlled.’ One significant goal is to reduce engineers’ dependency on spreadsheets; better information flows can also enable predictive maintenance and reduced downtime. There is no framework or cookbook for production monitoring, which needs to be implemented on a case-by-case basis.

TNO’s Ruud van der Linden took over to explain the basic concepts of monitoring. If you know the complete state of a system at a point in time, you can compute its future behavior from its inputs without knowledge of its past. In reality, however, a system’s state is only partially observable and therefore only partially known. The essence of monitoring is to use past measurements to reconstruct the state of a partially observable system. This is why high-frequency monitoring is needed for control. The downside of high frequency is that it leads to a data ‘tsunami.’ Even so, TNO warns, ‘don’t compress data, don’t throw away information.’ Real time (RT) means different things to different communities. The trick is to keep all user communities happy with a common data set.
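The state-reconstruction idea—recovering an unmeasured quantity from a history of partial, noisy measurements—is the job of a state estimator such as a Kalman filter. The sketch below is illustrative only: the two-state position/velocity model, noise levels and function names are assumptions, not anything presented at the master class.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle: advance the state estimate x (covariance P)
    through the process model F, then correct it with measurement z seen
    through the observation matrix H."""
    # Predict: propagate state and uncertainty through the process model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend in the measurement according to relative uncertainty
    y = z - H @ x                    # innovation (measurement surprise)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative two-state system (position, velocity); only position is measured,
# so the state is partially observable.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity process model
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

rng = np.random.default_rng(0)
true_x = np.array([0.0, 1.0])            # true velocity is 1.0
x, P = np.zeros(2), np.eye(2)
for _ in range(50):
    true_x = F @ true_x
    z = H @ true_x + rng.normal(0, 0.5, size=1)
    x, P = kalman_step(x, P, z, F, H, Q, R)

# The unmeasured velocity has been reconstructed from position history alone.
print(x[1])
```

The point of the toy: the filter never sees velocity, yet its estimate converges on the true value—past measurements stand in for the unobservable part of the state, which is exactly why high-frequency data matters for control.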

The move from offline to RT involves upheaval. Data reconciliation is required to leverage models such as OLGA, HySys and Petex. Models can also be built from data using correlation, neural nets and the like. TNO analyzes the model landscape along two axes: knowledge of the underlying physical process and the data volumes involved. The resulting white/grey/black box models each have particular areas of application. Oil and gas models typically fall in the ‘grey box’ area, with low data input and medium knowledge of the physical processes, due to uncertainties at the reservoir. But the trend is ‘up,’ with bigger models and more measurement.
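A grey-box model can be sketched in a few lines: physics fixes the functional form, and data supplies the unknown coefficient. The toy below assumes a quadratic pressure-drop-versus-flow relation with a friction coefficient fitted by least squares—the equation, numbers and function name are illustrative assumptions, not TNO’s models.

```python
import numpy as np

# Grey-box sketch: physics says pipe pressure drop scales with flow squared,
# dp = k * q**2, but the friction coefficient k is unknown and fitted to data.
def fit_greybox(q, dp):
    """Least-squares fit of k in dp = k * q**2 (linear in the parameter k)."""
    phi = q**2                        # physics-derived regressor
    return (phi @ dp) / (phi @ phi)   # closed-form least-squares solution

# Synthetic measurements: true k = 0.8 plus noise (illustrative numbers).
rng = np.random.default_rng(1)
q = np.linspace(1.0, 10.0, 20)        # flow rates
dp = 0.8 * q**2 + rng.normal(0, 0.5, size=q.size)

k = fit_greybox(q, dp)
print(k)
```

A white-box model would derive k from first principles; a black-box model would fit an unconstrained curve (or neural net) to the same data. The grey box sits between: little data is needed because the physics constrains the shape.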

TNO advocates using in-house models that capture process knowledge and minimize vendor lock-in. These leverage components such as HySys and OLGA and the ubiquitous PI System. The master class went on to look at implementation, noting that ‘monitoring is not a regular IT project.’ But it is definitely gaining traction: at the 2009 Intelligent Operations conference in Trondheim, BP showed how it uses models everywhere.

Model management and maintenance is itself an under-appreciated discipline. Models may be built for front-end engineering design (FEED) and then abandoned, but in the last decade or so FEED models have seen take-up in the plant. Model-based optimization is already accepted in the downstream. More from links/1001_10.


© Oil IT Journal - all rights reserved.