A recent article, ‘How to make models more useful’, in the Proceedings of the (US) National Academy of Sciences (PNAS), posits that while ‘computational modeling has become a valuable tool for science and policy […] community standards to share model details have not kept pace’. For research to be replicated, evaluated and improved, the computer code behind the model should be comprehensible and published alongside the articles that describe the results. This is not yet the case for most modeling science. Models in PNAS’ sights include earth tectonics, global temperature change, sea-level rise, loss of biodiversity and more.
The problem is that published articles do not usually contain enough information to reproduce the models, and their source code, if available, may not be understandable and runnable by others. This is a particularly important consideration when computational models are the basis for high-impact policy decisions regarding such things as climate change and disease spread.
While a lot has been written about open sharing of data and software, notably on the ‘FAIR’ (findability, accessibility, interoperability and reusability) principles, a recent survey of some 8,000 articles on model-based research found that a majority do not make the code available. For the most recent articles, over 80% do not provide access to the model code. While peer review follows widely understood and accepted scientific norms, there are no equivalent standards for code. There are currently no guidelines on applying the FAIR principles to model code.
The 2009 ‘climategate’ affair, when climate researchers’ emails were hacked, was particularly damaging to public confidence in climate science because of the lack of scientific transparency and restricted access to climate models and data sets. More than a decade later, ‘little has changed’.
Confusingly, the PNAS authors consider the attacks on climate models as ‘somewhat ironic’ since they have ‘some of the most rigorously tested and reliable scientific code’. While the Community Earth System Model (CESM) from the US National Center for Atmospheric Research stands out as ‘one of the few climate models used by the IPCC to make its code and data openly accessible and documented’, this is not enough […]: ‘the CESM is complicated to download and difficult to install and run’.
These issues, articulated a decade ago, still persist. Why are models not more accessible? Developers may be concerned that they will not receive recognition or rewards for the extra effort involved. Plagiarism is also a concern. To meet these challenges, representatives of leading organizations that support computational modeling met in December 2021 to establish the Open Modeling Foundation (OMF) to proselytize the use of FAIR principles in computer modeling. A central mission of the OMF is to adopt existing standards, or develop new ones if needed, to help modeling researchers, research and academic organizations, journals, funders and other stakeholders define what it means for a model to be FAIR. It will also offer guidance to help researchers meet these standards. More from the OMF’s development site on Git.
© Oil IT Journal - all rights reserved.