‘Rapid,’ ML4Shale

Apache has tried conventional decline curve analysis and found it wanting. Enter ‘Rapid,’ machine learning for shale production forecasting. But is it ‘reliable technology’ à la SEC?

Speaking at the 12th Annual Ryder Scott reserves conference this month, David Fulford described Apache Corp.’s use of machine learning to model and forecast liquids-rich shale wells. Working on production data from its unconventional wells, Apache ran into three issues. First, popular methods of decline curve analysis gave a poor fit to the data. Second, least squares curve fitting failed to forecast ultimate recovery accurately. Third, human surveillance of production is impractical given the huge number of wells that need re-forecasting every quarter.

But as one of the first movers in the Eagle Ford play, Apache has a lot of data. Some of its 2008 wells are amongst the oldest multi-fractured horizontal shale wells in the world. This has allowed extensive look-back ‘hindcasting,’ testing various methods of forecasting both production and ultimate recovery against what the wells actually went on to deliver.
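To make the hindcasting idea concrete, here is a minimal sketch in Python (ours, not Apache’s): fit a simple Arps hyperbolic decline to the first months of a well’s history and score the forecast against the held-out tail. Function names and parameter values are illustrative assumptions.

```python
# Hypothetical hindcast sketch - not Apache's code.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic rate: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def hindcast_error(t, q, n_fit):
    """Fit on t[:n_fit]; return mean absolute % error on the held-out tail."""
    popt, _ = curve_fit(arps_hyperbolic, t[:n_fit], q[:n_fit],
                        p0=[q[0], 0.1, 1.0])
    q_hat = arps_hyperbolic(t[n_fit:], *popt)
    return np.mean(np.abs(q_hat - q[n_fit:]) / q[n_fit:]) * 100.0

# Example: synthetic 96-month history, fit on the first 24 months only.
t = np.arange(1, 97, dtype=float)
q = arps_hyperbolic(t, 1000.0, 0.3, 1.2) * np.random.lognormal(0, 0.05, t.size)
print(f"Hold-out error: {hindcast_error(t, q, 24):.1f}%")
```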

Shale liquids production is complicated by the fact that flow regimes change over time. In the early stages, when fractures are wide open, a linear flow regime predominates. Later, the fracs begin to close up and subtle changes to the production mechanism occur. These are relatively well understood and can be characterized by a variety of parameters controlling production, both within the different flow regimes and during the transitions between them.
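The changing flow regimes can be read off a standard rate-transient diagnostic: on a log-log plot of rate versus time, linear flow shows a slope of roughly -1/2, steepening as the well transitions toward boundary-dominated flow. A minimal sketch of that slope calculation, on synthetic data:

```python
import numpy as np

def loglog_slope(t, q):
    """d(log q)/d(log t): ~ -0.5 signals linear flow; steeper (more
    negative) values suggest a transition to boundary-dominated flow."""
    return np.gradient(np.log(q), np.log(t))

# Synthetic well: pure linear flow (q ~ t**-0.5) early, steeper decline late.
t = np.arange(1.0, 61.0)
q = np.where(t < 24, 100.0 * t ** -0.5,
             100.0 * 24 ** -0.5 * np.exp(-0.05 * (t - 24)))
print(loglog_slope(t, q)[:5])   # ~ -0.5 while linear flow dominates
```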

The fly in the ointment of the conventional flow-modeling/physics-based approach is that evaluating the different flow parameters, particularly the onset and duration of the transition period, is hard and subjective. Apache found that its forecasting was yielding unreliable results, particularly using the ‘overwhelmingly popular’ approach to reserves forecasting, the modified hyperbolic model.
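For readers unfamiliar with it, the modified hyperbolic model is the Arps hyperbolic decline switched to exponential decline once the instantaneous decline rate falls to a terminal value. A sketch, with illustrative parameter values:

```python
import numpy as np

def modified_hyperbolic(t, qi, di, b, d_lim):
    """Arps hyperbolic until the instantaneous decline D(t) = di/(1 + b*di*t)
    falls to d_lim, then exponential decline at d_lim thereafter."""
    t_switch = (di / d_lim - 1.0) / (b * di)          # where D(t) == d_lim
    q_switch = qi / (1.0 + b * di * t_switch) ** (1.0 / b)
    return np.where(
        t < t_switch,
        qi / (1.0 + b * di * t) ** (1.0 / b),
        q_switch * np.exp(-d_lim * (t - t_switch)),
    )

# Example: 1000 bbl/d initial rate, 30%/yr initial decline, b = 1.3,
# 6%/yr terminal decline (all values illustrative).
t = np.linspace(0.0, 20.0, 241)                       # years
q = modified_hyperbolic(t, 1000.0, 0.30, 1.3, 0.06)
```

Much of the subjectivity Fulford flags lives in the b-factor and the choice of terminal decline, both of which are hard to pin down from early-time data.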

Fulford summarized the situation, saying, ‘for the specific case of forecasting production from shale wells there is no theoretical justification or convincing empirical validation of the modified hyperbolic model.’

So if the physics is faulty, how about letting the machine learn from the data? Enter Apache’s ‘Rapid’ (rate analytics with probabilistic inference and diagnostics). Rapid uses Markov chain Monte Carlo simulation, a ‘proven technology with over 20 years of oilfield use.’ In fact, Fulford believes the approach could qualify as ‘reliable technology’ in the SEC’s sense of the term. In any event, Apache has got something right with its recently announced ‘3 billion barrel’ find on the southern Delaware basin’s Alpine High.
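Rapid’s internals have not been published, but the ingredients Fulford names, MCMC and probabilistic inference over decline parameters, can be sketched with a generic random-walk Metropolis sampler. Everything below is an assumption for illustration, not Apache’s implementation:

```python
# Generic Metropolis-Hastings sketch over Arps parameters (qi, di, b) -
# illustrative only, not Rapid.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, t, log_q, sigma=0.1):
    """Gaussian log-likelihood on log-rates plus flat priors on a box."""
    qi, di, b = theta
    if not (0 < qi < 10_000 and 0 < di < 5 and 0 < b < 3):
        return -np.inf
    model = np.log(qi) - np.log1p(b * di * t) / b
    return -0.5 * np.sum((log_q - model) ** 2) / sigma**2

def metropolis(t, log_q, theta0, n_steps=20_000, step=(20.0, 0.02, 0.05)):
    """Plain random-walk Metropolis; returns the second half of the chain."""
    theta = np.array(theta0, float)
    lp = log_posterior(theta, t, log_q)
    chain = np.empty((n_steps, 3))
    for i in range(n_steps):
        prop = theta + rng.normal(0, step)
        lp_prop = log_posterior(prop, t, log_q)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_steps // 2:]                     # discard burn-in

# Synthetic 36-month history, then sample the posterior.
t = np.arange(1.0, 37.0)
log_q = np.log(800.0) - np.log1p(1.1 * 0.25 * t) / 1.1 \
        + rng.normal(0, 0.1, t.size)
chain = metropolis(t, log_q, theta0=(700.0, 0.2, 1.0))
print(chain.mean(axis=0))   # posterior means for (qi, di, b)
```

The payoff of the probabilistic approach is the chain itself: rather than a single best-fit forecast, each posterior sample yields its own forecast, giving a distribution of ultimate recoveries instead of one number.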

Comment – if Rapid is not ‘reliable technology,’ where does that leave the conventional approaches it was designed to fix? Visit this and other presentations from the conference and brush up on the SEC’s position as de facto ‘rating agency’ for shale operators in the downturn.


© Oil IT Journal - all rights reserved.