Five Essential Tips to Keep a Lid on Spending When Running Oil and Gas Simulations in the Cloud
Few industries pose a more significant high-performance computing (HPC) challenge than oil and gas exploration and production (E&P). The dataset gathered from a single seismic survey can reach 1 PB or more.[i] HPC clusters are used in everything from seismic analysis to reservoir modeling to well, drill, and pipeline flow modeling. HPC environments also increasingly support large-scale analytic and machine learning (ML) workloads, analyzing the massive amounts of data collected from ever-more-instrumented oil fields.
With survey costs in the range of USD 30K per square kilometer and the cost of drilling an offshore well being more than USD 50M, there is little margin for error.[ii] Oil and gas companies depend on computer simulation to devise extraction strategies that minimize cost, pose the least environmental risk, and maximize field yield and productivity.
Cloud Computing Brings New Opportunities
Cloud computing is attractive to energy companies for good reasons. They can reduce capital investments in IT, simplify operations, and take advantage of state-of-the-art instance types, GPUs, interconnects, and file systems. More importantly, companies can run simulations at a larger scale, providing precious additional time for reservoir and project analysis. Given the capital-intensive nature of oil and gas exploration, it often makes sense to spend aggressively on large-scale simulations. Firms can complete higher-fidelity simulations faster and avoid leaving skilled people and expensive assets idle.
While some seismic models are challenging to shift to the cloud due to the sheer amount of data involved, other applications are much more cloud friendly. For example, in reservoir simulation, a compute-intensive activity, seismic data sets are distilled into more manageable input files in the range of hundreds of megabytes to gigabytes depending on the model resolution. Other applications involve even smaller datasets.
Given the benefits, E&P companies are rapidly moving workloads to the cloud. Some oil and gas independent software vendors (ISVs) have even set up private-label cloud services to address this market need.[iii]
Managing Cloud Spending Is a Major Challenge
While cloud computing provides clear advantages for some workloads, it also comes with risks. Gartner estimates that 80% of organizations are likely to overspend on infrastructure-as-a-service (IaaS) services. In a March 2020 report, Gartner explains some of the reasons, including complex and multifaceted pricing schemes, granular cloud bills, lack of standardization among cloud platforms, and the ease with which users can over-provision cloud services.[iv]
To help keep a lid on cloud spending, we offer the following recommendations:
- Stay flexible as to where workloads run. Many energy companies have sunk investments in on-premises data centers. While it is sensible to embrace the cloud for some workloads, other jobs may run better on-premises. As oil and gas ISVs align themselves with specific cloud providers, it is prudent to pursue a hybrid, multi-cloud strategy. E&P companies should avoid becoming locked in to a single cloud provider.
- Ensure that cloud bursting is seamless, frictionless, and transparent. To achieve the flexibility described above, organizations need management software that delivers a consistent user experience both on-premises and across multiple clouds. The use of containers and a cloud-friendly workload manager will make it easier to accommodate business demands by shifting workloads to maximize utilization and throughput while managing costs.
- Minimize the time that cloud assets are deployed and maximize their utilization. Orphaned or idle cloud instances and unused data in file systems and object stores are chronic causes of overspending. Make sure workload and cloud management tools can detect idle and under-utilized instances and cloud services, and that cloud instances are provisioned only for the time they are needed.
- Consider data gravity and store cloud data assets at the appropriate tier and service level. Data locality should be considered when scheduling workloads. If you decide to bring data-intensive workloads to the cloud, factor data transfer into the cost-benefit analysis. Avoid storing unneeded datasets in costly storage tiers such as parallel file systems. Leverage dependent jobs or post-job execution features to migrate datasets to lower-cost object storage classes and shut down unused file system services automatically.[v]
- Ensure adequate monitoring and spend-management controls are in place. A golden rule in business is that “you can’t manage what you can’t measure.” Reporting systems and dashboards need to provide up-to-date accounting on cloud utilization and spending by group, project, business unit, and cost center. Ideally, cloud usage and spending-related metrics should be linked to workload management policies to avoid groups accidentally exceeding their cloud spending budgets.
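To make the idle-instance recommendation above concrete, the detection logic can be sketched as a simple policy function. The 5% utilization threshold and six-sample window here are illustrative assumptions, not values from any particular tool; real workload and cloud managers expose similar tunable knobs fed by metrics from the cloud provider.

```python
# Sketch: flag cloud instances as idle when average CPU utilization
# stays below a threshold over a sampling window. Threshold and window
# sizes are illustrative assumptions, not vendor defaults.

def is_idle(cpu_samples, threshold_pct=5.0, min_samples=6):
    """Return True if an instance looks idle.

    cpu_samples: recent CPU utilization readings, in percent
    threshold_pct: average utilization below this counts as idle
    min_samples: require enough evidence before deciding
    """
    if len(cpu_samples) < min_samples:
        return False  # too little data; keep the instance running
    return sum(cpu_samples) / len(cpu_samples) < threshold_pct

def find_idle_instances(metrics):
    """metrics: dict mapping instance id -> list of CPU samples."""
    return [iid for iid, samples in metrics.items() if is_idle(samples)]

# Example: one busy node, one idle node, one too new to judge
metrics = {
    "i-busy": [80, 75, 90, 85, 70, 88],
    "i-idle": [1, 0, 2, 1, 0, 1],
    "i-new":  [0, 0],
}
print(find_idle_instances(metrics))  # ['i-idle']
```

In practice the samples would come from the cloud provider's monitoring service, and the flagged instances would be drained and terminated by the workload manager rather than simply listed.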
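As a rough illustration of the last recommendation, linking spend metrics to workload management policy, here is a minimal budget-gate sketch. The group names, budget figures, and per-node-hour price are hypothetical; in a real deployment the spend data would come from billing APIs and the gate would run inside the workload manager's submission policy.

```python
# Sketch: gate job submission on a group's remaining cloud budget.
# All figures below are hypothetical; real spend data would come from
# the cloud provider's billing APIs.

budgets = {"reservoir-team": 10000.0, "seismic-team": 25000.0}    # USD per month
spend_to_date = {"reservoir-team": 9500.0, "seismic-team": 4000.0}

def estimated_job_cost(node_hours, price_per_node_hour):
    """Simple linear cost model: node-hours times on-demand price."""
    return node_hours * price_per_node_hour

def can_submit(group, node_hours, price_per_node_hour=2.50):
    """Allow a job only if its estimated cost fits the group's remaining budget."""
    remaining = budgets[group] - spend_to_date[group]
    return estimated_job_cost(node_hours, price_per_node_hour) <= remaining

print(can_submit("reservoir-team", 400))  # False: 1000 estimated > 500 remaining
print(can_submit("seismic-team", 400))    # True: 1000 estimated <= 21000 remaining
```

Even this crude linear cost model is enough to stop a group from accidentally blowing through its monthly allocation; a production policy would also account for spot pricing, storage, and data-transfer charges.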
How Altair Can Help
Altair workload managers, including Altair PBS Professional® and Altair Grid Engine®, support leading seismic and reservoir modeling applications, including CMGL, Rockflow Dynamics tNavigator®, Halliburton SeisSpace®, Landmark Nexus®, Emerson Roxar Tempest™, Schlumberger® ECLIPSE®, and others.[vi] Both provide seamless cloud bursting to quickly marshal resources on leading public clouds, along with rich policy controls to maximize throughput and utilization and minimize costs.
Reservoir modeling is inherently multi-disciplinary. Altair Access™ improves productivity by enabling domain experts worldwide to collaborate on the same models and datasets. Altair Control™ and Altair NavOps™ provide centralized cloud automation for managing, optimizing, and forecasting HPC resource usage and spending both on-premises and in the cloud.
To learn how to improve throughput and reduce cloud spending for oil and gas workloads, you can request a free no-obligation consultation and demonstration.
[i] SeismicZone – Big Data Challenges Facing Seismic Data and Exploration Geophysics - https://www.seismiczone.com/big-data-challenges-facing-seismic-data-and-exploration-geophysics/
[ii] Actual costs will vary depending on the rig type, water depth, well depth, distance offshore and time required. Source: Reservoir Exploration and Appraisal, 2013, Luiz Amado - https://www.sciencedirect.com/topics/engineering/drilling-cost
[iii] Schlumberger offers a secure private cloud for E&P customers. https://www.software.slb.com/services/digital-transformation-services/transition-services/infrastructure-as-a-service-solution
[iv] Gartner Information Technology Research – How to Manage and Optimize Costs of Public Cloud IaaS and PaaS - https://www.gartner.com/en/documents/3982411/how-to-manage-and-optimize-costs-of-public-cloud-iaas-an
[v] Features such as prologue and epilogue scripts and job dependencies can be used in Altair PBS Professional to trigger these kinds of automated actions. https://www.altair.com/pdfs/pbsworks/PBSUserGuide2020.1.pdf. Altair Grid Engine provides similar functionality.
[vi] All of the listed ISVs advertise support for PBS, OpenPBS, PBS Professional and/or Sun Grid Engine (SGE), Oracle Grid Engine (OGE), Univa Grid Engine (UGE), or Altair Grid Engine – CMGL, tNavigator, Halliburton SeisSpace, Landmark Nexus, Roxar Tempest, Schlumberger Software.