TECHNICAL PROGRAMME | Energy Technologies – Future Pathways
The Energy Transition: The Role of Digitalisation, AI, and Cybersecurity
Forum 23 | Digital Poster Plaza 4
30 April | 10:00–12:00 (UTC+3)
Digitalisation, AI, and cybersecurity are key enablers of the energy transition, providing the tools and frameworks needed to manage complex energy systems, optimise operations, and protect against cyber threats. AI offers a broad scope of applications, including the creation of virtual replicas of physical assets, processes, and systems from real-time data and simulations. It can also automate complex and repetitive tasks, such as drilling and production, improving the efficiency, quality, and consistency of operations while reducing costs. This session will explore the latest advancements in these areas and discuss how they are transforming the energy industry to meet future challenges.
Objectives/Scope:
We have previously shown that deep interaction neural networks based on graphs can learn complex flow-physics relationships from reservoir models to accelerate forward simulations [1]. The generalization and scale-up of this approach to the Subsurface Graph Network Simulator (SGNS) [2] renders fast and accurate, long-term spatio-temporal predictions of fluid and pressure propagation in structurally diverse reservoir model grids [3]. This paper builds on the latter and introduces the benchmarking and deployment of the SGNS in an operational engineering simulation environment.
Methods, Procedures, Process:
The SGNS is a data-driven surrogate framework consisting of a subsurface graph neural network (SGNN) to model the evolution of fluids and a 3D U-Net to model the evolution of pressure. The SGNN uses an encoder-processor-decoder architecture to encode the properties and dynamics of each grid cell, and the cell-cell relations, into graph node and edge features. A multilayer perceptron computes the interaction between neighboring cells and updates the state of the cells. The pressure dynamics manifest a shorter equilibrium time and faster global spatial evolution than the fluids, and are better captured with the hierarchical structure and modified order of operations of the 3D U-Net convolution layers.
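The encoder-processor-decoder interaction step described above can be sketched as a single message-passing update over the cell graph. This is a minimal numpy illustration, not the SGNS implementation: the MLP shapes, parameter layout, and residual update are assumptions.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for the edge/node update nets."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

def message_passing_step(nodes, edges, edge_index, params):
    """One processor step: compute cell-cell interactions along edges,
    aggregate incoming messages per cell, and update the cell states."""
    src, dst = edge_index  # arrays of source/destination node indices
    # Edge update: interaction of the two neighboring cells plus edge features
    edge_in = np.concatenate([nodes[src], nodes[dst], edges], axis=1)
    messages = mlp(edge_in, *params["edge"])
    # Aggregate (sum) the messages arriving at each destination node
    agg = np.zeros_like(nodes)
    np.add.at(agg, dst, messages)
    # Node update: current state plus aggregated interactions, residual-style
    node_in = np.concatenate([nodes, agg], axis=1)
    return nodes + mlp(node_in, *params["node"])
```

In this sketch the message dimension equals the node-state dimension so that the scatter-add and residual update line up; a real implementation would typically stack several such steps between the encoder and decoder.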
Results, Observations, Conclusions:
We deploy the SGNS on a synthetic, single-porosity/single-permeability reservoir model with multi-phase flow, a large-scale simulation grid, and large numbers of injectors and producers with variable positioning and geometry. We construct a network graph whose nodes represent reservoir grid cells and are encoded with tens of static, dynamic, computed, and control features. The wells are encoded via well completion factors. The graph edges represent interactions between the nodes, with encoded features such as transmissibility, direction, and fluxes. We implement sector-based training with multi-step rollout to enable the use of large-scale models. The loss function is a joint mean squared error combining misfits in oil and water volumes and pressure. We present comparative results between the SGNS and the full-physics simulation for up to 30-year predictions of the 3D pressure, oil and water saturation, and the dynamic well responses.
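A joint mean-squared-error of the kind described, combining misfits in oil and water volumes and pressure, could look as follows. The per-term weights are an assumption; the abstract does not give the weighting scheme.

```python
import numpy as np

def joint_mse_loss(pred, target, weights=(1.0, 1.0, 1.0)):
    """Joint MSE over oil volume, water volume, and pressure misfits.

    `pred` and `target` are dicts of arrays keyed by field name. The
    relative weights are illustrative; in practice they would be tuned
    to balance the very different scales of volumes and pressure."""
    terms = []
    for w, key in zip(weights, ("oil", "water", "pressure")):
        terms.append(w * np.mean((pred[key] - target[key]) ** 2))
    return sum(terms)
```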
Novelty/Significance/Additive Information:
The SGNS framework is a novel, industry-unique technology with promising scalability, generalization, and prediction accuracy. Immediate applications include accelerated well placement and production forecasting studies. Going forward, we are integrating the well prediction model into the rollout of the evolution model, incorporating the encoding of well production constraints into the training scheme, and deploying state-of-the-art architectures to enable multi-GPU training.
References:
- [1] M. Maucec and R. Jalali (2022). "GeoDIN: Geoscience-Based Deep Interaction Networks for Predicting Flow Dynamics in Reservoir Simulation Models", SPE-203952-PA, SPE Journal 27 (03): 1671–1689.
- [2] T. Wu et al. (2022). "Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator", 2022 ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
- [3] M. Maucec, R. Jalali, and H. Hamam (2024). "Predicting Subsurface Reservoir Flow Dynamics at Scale with Hybrid Neural Network Simulator", IPTC-24367-MS.
Objectives/Scope:
Piping and Instrumentation Diagrams (P&IDs) are schematic representations of functional relationships between equipment, pipelines, and instrumentation in industrial processes. They are critical for design, operation, and maintenance in industries like oil and gas and chemicals. This paper presents an AI-driven approach for digitizing P&IDs, transforming image-based diagrams into structured, analyzable data for efficient and accurate digital representation.
Methods, Procedures, Process:
This study employs a comprehensive AI pipeline integrating object detection, Optical Character Recognition (OCR), and advanced machine learning techniques. The process includes identifying symbols, instruments, and lines from scanned P&IDs and associating them with relevant textual data. Methods like template matching, advanced vessel detection, and OCR are combined with data augmentation and parallel processing to handle complex P&ID layouts while ensuring high accuracy and scalability.
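Template matching of the sort the pipeline relies on for symbol detection can be illustrated with normalized cross-correlation. The sketch below is a simplified numpy stand-in for a library routine such as OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED`; it is not the paper's detector.

```python
import numpy as np

def match_template(image, template, threshold=0.9):
    """Slide `template` over `image` and return (row, col) positions where
    the normalized cross-correlation exceeds `threshold`.

    A brute-force illustration of symbol matching on a binarized P&ID raster;
    a production pipeline would use an optimized library implementation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    hits = []
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tn
            if denom > 0 and (p * t).sum() / denom >= threshold:
                hits.append((r, c))
    return hits
```

Detected symbol locations would then be associated with nearby OCR'd text to build the structured representation the paper describes.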
Results, Observations, Conclusions:
The AI-driven system demonstrated exceptional precision and recall in digitizing P&IDs, achieving over 97% accuracy in symbol detection and nearly 100% accuracy in text recognition. Key outcomes include accurate identification of symbols, association with relevant metadata, and efficient handling of large-scale, complex diagrams. The system efficiently processed diverse diagram layouts and orientations, demonstrating robustness against variations in diagram styles, symbol sizes, and text placements. Advanced OCR methods mitigated common text recognition challenges, such as distinguishing similar characters, improving overall reliability.
The resulting structured digital data facilitates enhanced usability in applications like system modeling, operational analysis, and compliance reporting. Visual outputs, such as annotated diagrams, allow for seamless verification and validation of results. By significantly reducing manual effort and minimizing errors, this approach accelerates industrial workflows, supporting informed decision-making. The pipeline’s scalability ensures adaptability to extensive industrial datasets, offering a transformative solution for managing and digitizing P&IDs at scale.
Novel/Additive Information:
This paper introduces a novel end-to-end AI solution for P&ID digitization, combining advanced OCR, template matching, and object detection to achieve unparalleled accuracy and efficiency. It sets a new standard in industrial diagram digitalization, addressing long-standing challenges in processing complex P&ID layouts.
As shale gas wells enter the middle and late stages of development, abnormal wellbore conditions such as liquid loading, tubing blockage, and tubing perforation occur with increasing frequency, severely constraining production capacity. Accurate and timely diagnosis of daily production monitoring data is therefore essential to ensure the efficient development of shale gas resources. Conventional diagnostic methods for abnormal wellbore conditions are limited by their strong dependence on labeled data, inadequate capability for multimodal information integration, and the continued requirement for manual post-interpretation of results. To address these challenges, an intelligent diagnostic approach based on a vision–language model (VLM) is proposed. In this method, historical production time-series data are normalized, segmented by sliding windows, and encoded into two-dimensional image representations, which are then combined with text-based expert knowledge prompts to construct structured training samples. The VLM is fine-tuned using task-specific instructions and structured training data, enabling anomaly type classification, start–end time identification, diagnostic interpretation, and drainage strategy recommendation. In addition, structured output templates are introduced to ensure that the generated results are not only accurate but also consistent and interpretable. The proposed method was validated on 500 expert-annotated shale gas well cases and compared with traditional machine learning methods and time-series diagnostic approaches based on large language models. The results demonstrated that, under limited labeled data conditions, the VLM maintained high diagnostic accuracy and, in particular, significantly outperformed existing methods in reducing false alarm rates. 
The model was further shown to automatically generate operator-oriented textual explanations of abnormal conditions, thereby reducing the need for manual interpretation and enhancing both human–machine collaboration and real-time performance in the diagnostic process. Additional analysis revealed that the VLM exhibits clear advantages in handling long-term production monitoring data from shale gas wells: by preserving both local fluctuations and global trends in the time series through image representations, the model can capture short-term anomalies as well as long-term cumulative effects, thereby improving diagnostic robustness under complex operating conditions. Moreover, the incorporation of textual prompts effectively compensates for the lack of domain knowledge in purely data-driven models, ensuring that diagnostic outputs align more closely with field operational logic and provide directly actionable recommendations. In summary, the proposed VLM-based diagnostic framework for shale gas well anomalies not only enhances the accuracy and reliability of anomaly diagnosis while reducing reliance on manual intervention, but also offers a scalable and low-maintenance solution for real-time diagnosis and response under complex operating conditions. This work opens a new avenue for intelligent anomaly diagnosis and supports the advancement of digital and automated production management in the oil and gas industry.
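The preprocessing described above, normalizing the production series, segmenting it with sliding windows, and encoding each window as a two-dimensional image, can be sketched as follows. The abstract does not name the image encoding; the Gramian Angular Field used here is one plausible choice, shown purely as an illustration.

```python
import numpy as np

def sliding_windows(series, width, stride):
    """Split a 1-D production time series into (possibly overlapping) windows."""
    return np.array([series[i:i + width]
                     for i in range(0, len(series) - width + 1, stride)])

def gramian_angular_field(window):
    """Encode one window as a width-by-width image.

    Values are min-max scaled to [-1, 1], mapped to angles, and pairwise
    summed, so both local fluctuations and the global trend of the window
    survive in the 2-D representation fed to the vision-language model."""
    lo, hi = window.min(), window.max()
    x = 2.0 * (window - lo) / (hi - lo + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])
```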
Co-author/s:
Liang Xue, Associate Dean, China University of Petroleum.
Dr. Haiyang Chen, Student, China University of Petroleum.
Shengdon Zhang, Student, Computer Network Information Center, Chinese Academy of Sciences.
The Kuwait Integrated Digital Field (KwIDF) is an intelligent oil field platform that integrates artificial intelligence (AI), advanced data analytics, and a management-by-exception framework. This platform aligns with Kuwait Oil Company’s (KOC) digitalization strategy by significantly enhancing operational efficiency, sustainability, and resilience in oil production.
KwIDF enables objective detection of production anomalies through comprehensive real-time data (RTD) analytics, facilitating timely interventions and informed decision-making. This abstract highlights four real-world use cases from active oil fields in Kuwait, showcasing how data-driven methodologies delivered operational improvements:
Well-0001 (T1): Identified as producing gas exclusively, deviating from expected liquid output.
Well-0002: Showed significant variance from historical production benchmarks.
Well-0003: Displayed irregular flow measurements, suggesting impaired operations.
Well-0004: Experienced unexplained production decline despite equipment operating within normal parameters.
Targeted interventions based on these insights included zone adjustments, historical production evaluations, workover operations, and real-time decision-making. The platform employed advanced analytical techniques such as K-means clustering for production grouping, statistical process control (SPC) for adaptive benchmarking, principal component analysis (PCA) for managing multivariate data, and linear regression with signal normalization for predictive modeling.
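The adaptive benchmarking via statistical process control mentioned above can be sketched as a rolling control band around a well's recent daily rate. The window length and 3-sigma limits are illustrative defaults, not KwIDF's actual configuration.

```python
import numpy as np

def spc_limits(history, window=30, k=3.0):
    """Rolling SPC limits from the trailing `window` of a well's daily rates.

    Returns (center, upper, lower); the band adapts as new data arrives,
    so the benchmark tracks the well's own recent behavior."""
    recent = np.asarray(history[-window:], dtype=float)
    mu, sigma = recent.mean(), recent.std(ddof=1)
    return mu, mu + k * sigma, mu - k * sigma

def flag_anomaly(rate, history, window=30, k=3.0):
    """True if today's rate breaches the adaptive control band."""
    _, ucl, lcl = spc_limits(history, window, k)
    return rate > ucl or rate < lcl
```

In a management-by-exception setting, only wells that breach their band would be surfaced to engineers for review.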
As a result, the platform recovered an estimated 2,000 to 2,700 barrels per day across the four wells, demonstrating substantial operational and economic benefits. Additionally, KwIDF enhanced safety by reducing manual interventions, increased manpower productivity through automation, and promoted operational excellence through predictive decision-making.
Further analysis revealed a broader optimization potential, identifying 308,752 barrels/day as recoverable lost production across 1,258 wells, and 183,414 barrels/day as actionable opportunities across 854 wells.
In summary, these use cases validate how AI-driven tools and analytics within KwIDF have improved production efficiency, enabled timely action, and uncovered untapped potential in mature oil fields.
The demand for intelligent systems capable of real-time decision-making is rapidly increasing as the global energy sector advances toward net-zero emissions. This study introduces an AI-augmented framework for Distributed Fiber-Optic Sensing, designed specifically to monitor the evolution of hydraulic fractures in enhanced geothermal systems (EGS) and unconventional oil and gas reservoirs. By utilizing high-resolution data from Distributed Acoustic Sensing (DAS), Distributed Temperature Sensing (DTS), and Distributed Strain Sensing (DSS), the system integrates advanced machine learning algorithms and digital twin technologies to provide continuous, real-time fracture diagnostics. This approach tracks fracture dynamics from initiation through propagation, offering valuable insights into fracture geometry and behavior over time.
The proposed system enhances operational efficiency by enabling precise fracture monitoring, optimizing energy extraction, and minimizing operational risks. Integrating AI-powered predictive analytics facilitates early detection of potential issues, allowing for proactive maintenance and reduced downtime. Additionally, the system offers environmental benefits by optimizing hydraulic fracturing processes, minimizing water and energy consumption, and reducing the ecological footprint of energy extraction.
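As one concrete example of flagging events in DAS-like acoustic records, the classical short-term/long-term average (STA/LTA) trigger is sketched below. It is a conventional detector shown for illustration only, not the AI model the study describes.

```python
import numpy as np

def sta_lta(signal, n_short, n_long):
    """Short-term / long-term average ratio over signal power.

    Sudden energy arrivals (e.g., fracture-related acoustic events) raise
    the short-term average well above the long-term background, producing
    a high ratio at the event onset."""
    power = np.asarray(signal, dtype=float) ** 2
    kernel = lambda n: np.convolve(power, np.ones(n) / n, mode="same")
    sta, lta = kernel(n_short), kernel(n_long)
    return sta / (lta + 1e-12)
```

A learned model would replace this fixed-ratio rule, but the ratio trace is a common input feature and a useful baseline for validating more complex detectors.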
Supported by large-scale physical modeling and field validation in high-pressure, high-temperature environments, the framework is applicable to real-world conditions. Key innovations include an adaptive signal-interpretation engine for accurate fracture characterization and an AI-driven framework that continually adapts to real-time data inputs. A resilient cybersecurity architecture further ensures data integrity across the monitoring infrastructure, protecting sensitive information throughout energy operations.
This AI-enhanced approach represents a transformative leap in real-time fracture monitoring, providing unprecedented insights into subsurface dynamics. By combining AI with state-of-the-art fiber-optic sensing technologies, the system responds dynamically to complex subsurface conditions. This integrated solution optimizes operational performance, strengthens safety protocols, and improves the overall efficiency of energy production systems.
The framework’s real-time monitoring of hydraulic fractures is essential for optimizing energy production efficiency. Its predictive capabilities enable proactive maintenance and operational adjustments, ensuring continuous optimization. Additionally, the system contributes to environmental sustainability by helping energy producers reduce waste and mitigate ecological risks typically associated with energy extraction.
Aligned with the broader goals of the energy transition, this study demonstrates how the convergence of AI, fiber-optic sensing, and real-time monitoring technologies can unlock new levels of sustainability, efficiency, and safety in energy systems. By providing a holistic solution that optimizes fracture characterization, mitigates risks, and enhances energy production efficiency, this approach represents a significant step forward in the development of sustainable and resilient energy systems for the future.
Today, digital technologies driven by data science hold immense potential for enhancing oil and gas industry performance. Value-realisation governance and a focus on adoption are key factors in ensuring sustainable digital investment strategies. The NOC established a digital program in 2019 to support the growth of operations and to optimize resources and cost, during a period when the asset base was doubling in size. Key regional challenges and opportunities for further innovation were evident in the following areas: vast reservoirs and sustainable production methods; maintaining aging infrastructure, legacy wells, and facilities; and balancing sustainability with cost efficiency. The digitalization journey started with uncertainty about the associated value.
This framework is a structured approach that helps the organization assess, prioritize, and measure the impact of digital initiatives. Its importance lies in ensuring alignment with business goals, maximizing value, and providing a systematic way to evaluate digital projects.
Key transformation areas were identified as the main value streams of the program, driven by the following transformation needs: occupational and process safety, sustainability, harmonizing legacy with innovation, and managing complex operations.
The transformational and cost-efficiency drivers were addressed through several use cases covering different value streams, including:
- Sustainability / cost efficiency: offshore logistics optimization
- Production: Production Optimisation tool, Data Analytics tool
- Reservoir and geoscience: pressure conditioning recommendations tool
- Reliability / process safety: health check for rotating equipment (predictive maintenance)
- Maintenance planning and scheduling: Offshore Activity Optimisation tool
The key components of the value framework are:
- Business-objective alignment through a predefined checklist
- Predefined value levers that let the business navigate a list of levers and find the dimension of value matching their business case
- A prioritization and KPI-driven decision-making framework built on a value realization tree
- Assumption registers that create a common understanding of all assumptions made while defining the business case
Finally, this framework, with its value estimates, contributed to a 15% increase in digital program value. Its development was driven by several challenges: the lack of precise methods for measuring value, leading to underestimation; a lack of data-driven decision-making, impeding focus on investment in high-value digital use cases; and a lack of baseline capabilities in business entities.
Best practices captured from this project include harnessing the potential of cross-functional collaboration: the digital team provides value consultation as a service to business entities, aiding data-driven validation to demonstrate the value case. Finally, like all frameworks, there is room to refine and enrich it further.
In mature reservoirs, the identification and placement of infill wells with high productivity indices (PI) is critical for maximizing recovery and extending field life. Traditional methods of infill well planning rely heavily on reservoir simulation, geostatistical mapping, and expert judgment, which can be time-consuming and may overlook complex, nonlinear relationships between subsurface properties and well performance. Recent advancements in Artificial Intelligence (AI) and Machine Learning (ML) offer transformative potential to enhance infill well planning by leveraging large volumes of historical and real-time reservoir data to identify optimal drilling locations and predict well productivity with greater accuracy.
This paper presents a novel AI/ML-driven workflow for designing high-productivity infill wells by integrating multi-source data, including static (geological, petrophysical) and dynamic (production, pressure, injection) datasets. The workflow begins with a thorough data pre-processing and feature engineering phase, followed by supervised ML modeling techniques such as gradient boosting, random forest, and deep neural networks to predict productivity index (PI) and other key performance indicators (KPIs). Spatial correlation techniques and unsupervised learning methods, such as clustering and self-organizing maps (SOMs), are then used to identify underdeveloped sweet spots and optimize well placement within the reservoir.
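The gradient-boosting step of the workflow can be illustrated with a deliberately tiny boosted-stump regressor. This is a didactic stand-in for a production library such as scikit-learn or XGBoost, and the hyperparameters are arbitrary.

```python
import numpy as np

class StumpBoost:
    """Minimal gradient boosting for regression using depth-1 trees (stumps).

    Each round fits a stump to the current residuals and adds a shrunken
    copy to the ensemble, the same additive scheme a PI-prediction model
    would use at scale with deeper trees and many more features."""

    def __init__(self, n_rounds=50, lr=0.1):
        self.n_rounds, self.lr, self.stumps = n_rounds, lr, []

    def _fit_stump(self, X, resid):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if not left.any() or left.all():
                    continue
                lv, rv = resid[left].mean(), resid[~left].mean()
                err = ((resid - np.where(left, lv, rv)) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, j, t, lv, rv)
        return best[1:]

    def fit(self, X, y):
        self.base = y.mean()
        pred = np.full_like(y, self.base, dtype=float)
        for _ in range(self.n_rounds):
            j, t, lv, rv = self._fit_stump(X, y - pred)
            self.stumps.append((j, t, lv, rv))
            pred += self.lr * np.where(X[:, j] <= t, lv, rv)
        return self

    def predict(self, X):
        pred = np.full(len(X), self.base)
        for j, t, lv, rv in self.stumps:
            pred += self.lr * np.where(X[:, j] <= t, lv, rv)
        return pred
```

In the actual workflow, the feature matrix would hold the engineered static and dynamic attributes per candidate location, and the target would be the observed productivity index.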
A case study from a mature carbonate field in the Middle East demonstrates the practical application and benefits of this AI/ML-based approach. Historical production data from over 150 wells were used to train and validate the model, achieving a PI prediction accuracy of over 85% when compared to actual results. The AI-recommended well locations not only avoided areas of interference and pressure depletion but also targeted zones with higher remaining hydrocarbons and better reservoir quality, leading to a 25% improvement in average PI compared to traditionally planned infill wells.
The results underscore the potential of AI and ML to significantly improve the efficiency and success rate of infill drilling campaigns. By automating pattern recognition and decision-making based on vast and complex datasets, these technologies can reduce planning cycles, enhance reservoir understanding, and ultimately improve economic outcomes. This study advocates for a paradigm shift in well planning strategies, where AI/ML complements domain expertise to achieve optimized field development in a cost-effective and timely manner.
Objective/Scope:
Calibrating simulation models and estimating uncertain parameters rely heavily on history matching. However, the presence of subsurface uncertainties often makes this process highly intricate. To tackle this complexity, we present a new methodology that combines Bayesian inversion with the Coarse-grid Network model, offering a more efficient and streamlined path for history matching.
Methods, Procedures, Process:
The process follows a structured five-step framework. Initially, key variables such as injection rate, porosity, capillary pressure, oil viscosity, density, and relative permeability are selected through Global Sensitivity Analysis (GSA). Their initial ranges and statistical distributions are established based on prior information. In the second stage, Latin Hypercube Sampling (LHS) is applied to generate both training and testing datasets, which are then simulated using a high-resolution numerical model. A Coarse-grid Network (CgNet) model is subsequently developed to learn the input-output relationships of these parameters, while Bayesian optimization is employed to fine-tune the model's hyperparameters automatically. In the fourth step, real-world field data are incorporated into the Bayesian inversion framework, where the Markov Chain Monte Carlo (MCMC) algorithm is used to compute posterior distributions of the uncertain parameters. Lastly, the resulting parameter estimates are verified by comparing the simulated pressure outcomes with the actual observations. Should significant mismatches occur, the performance of the CgNet surrogate model is reassessed and enhanced accordingly.
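The fourth step above (MCMC-based Bayesian inversion) can be sketched for a single uncertain parameter. The analytic "surrogate" below is a hypothetical stand-in for the trained CgNet model, and all numbers (prior bounds, noise level, proposal width) are illustrative assumptions:

```python
# Metropolis-Hastings sampling of one uncertain parameter (porosity)
# against noisy observed pressures. The analytic surrogate is a toy
# stand-in for CgNet; all constants are illustrative.
import math
import random

def surrogate_pressure(porosity, t):
    # Hypothetical pressure response; the real surrogate is CgNet.
    return 300.0 - 400.0 * porosity * math.log1p(t)

rng = random.Random(42)
TRUE_PORO, SIGMA = 0.22, 2.0
times = list(range(1, 11))
observed = [surrogate_pressure(TRUE_PORO, t) + rng.gauss(0.0, SIGMA)
            for t in times]

def log_posterior(poro):
    if not 0.05 <= poro <= 0.40:          # uniform prior bounds (assumed)
        return -math.inf
    misfit = sum((obs - surrogate_pressure(poro, t)) ** 2
                 for obs, t in zip(observed, times))
    return -misfit / (2.0 * SIGMA ** 2)

# Random-walk Metropolis-Hastings with burn-in
samples, current = [], 0.20
current_lp = log_posterior(current)
for i in range(5000):
    proposal = current + rng.gauss(0.0, 0.01)
    lp = log_posterior(proposal)
    if math.log(rng.random()) < lp - current_lp:
        current, current_lp = proposal, lp
    if i >= 1000:                          # discard burn-in samples
        samples.append(current)

posterior_mean = sum(samples) / len(samples)
```

The chain concentrates near the value that generated the observations, mirroring how the full workflow narrows the prior ranges of injection rate, porosity, viscosity, and the other GSA-selected parameters.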
Results, Observations, Conclusions:
The workflow is applied to the COSTA model, which accounts for a range of physical phenomena, including capillary, viscous, and gravitational forces. This model features a high-resolution grid of 100 million cells and includes 240 injection and production wells, all operating under realistic control scenarios. The results demonstrate that the integration of Bayesian inversion with the CgNet model successfully identifies uncertain parameters, significantly narrowing the range of subsurface uncertainties. The difference between the actual and estimated pressure responses is minimal, highlighting the accuracy of the method. Additionally, when compared to other surrogate-based Bayesian inversion techniques—such as kriging, support vector machines, and polynomial chaos expansion—the CgNet-based method shows improved performance. This enhancement is attributed to CgNet's strength in handling temporal data efficiently, making it particularly well-suited for the dynamic nature of reservoir simulation and history matching.
David Smethurst
Chair
Oil and Gas Production Department Director
Board of Various Start ups
Eszter Varga
Vice Chair
Downstream Portfolio Evaluation & Strategy Expert, Downstream Evaluation & Long-term planning
MOL plc
The Kuwait Integrated Digital Field (KwIDF) is an intelligent oil field platform that integrates artificial intelligence (AI), advanced data analytics, and a management-by-exception framework. This platform aligns with Kuwait Oil Company’s (KOC) digitalization strategy by significantly enhancing operational efficiency, sustainability, and resilience in oil production.
KwIDF enables objective detection of production anomalies through comprehensive real-time data (RTD) analytics, facilitating timely interventions and informed decision-making. This abstract highlights four real-world use cases from active oil fields in Kuwait, showcasing how data-driven methodologies delivered operational improvements:
Well-0001 (T1): Identified as producing gas exclusively, deviating from expected liquid output.
Well-0002: Showed significant variance from historical production benchmarks.
Well-0003: Displayed irregular flow measurements, suggesting impaired operations.
Well-0004: Experienced unexplained production decline despite equipment operating within normal parameters.
Targeted interventions based on these insights included zone adjustments, historical production evaluations, workover operations, and real-time decision-making. The platform employed advanced analytical techniques such as K-means clustering for production grouping, statistical process control (SPC) for adaptive benchmarking, principal component analysis (PCA) for managing multivariate data, and linear regression with signal normalization for predictive modeling.
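The SPC-based adaptive benchmarking mentioned above can be sketched as a control-limit check: a well is flagged when its latest rate falls outside mean ± 3σ of its own trailing history. Well names and rates below are illustrative, not KwIDF data:

```python
# SPC-style anomaly flag: latest rate outside mean ± k*sigma of the
# well's own trailing history. Wells and rates are illustrative.
from statistics import mean, stdev

def spc_flags(histories, latest, k=3.0):
    flagged = []
    for well, rates in histories.items():
        mu, sd = mean(rates), stdev(rates)
        if abs(latest[well] - mu) > k * sd:
            flagged.append(well)
    return flagged

histories = {
    "Well-A": [1000, 1010, 990, 1005, 995],   # stable producer
    "Well-B": [480, 500, 510, 495, 505],      # stable producer
}
latest = {"Well-A": 800, "Well-B": 498}       # Well-A declines sharply

flags = spc_flags(histories, latest)
```

In the platform this check runs on real-time data streams, so the benchmark window adapts as new measurements arrive, which is what makes the flagging "adaptive" rather than a fixed threshold.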
As a result, the platform recovered an estimated 2,000 to 2,700 barrels per day across the four wells, demonstrating substantial operational and economic benefits. Additionally, KwIDF enhanced safety by reducing manual interventions, increased manpower productivity through automation, and promoted operational excellence through predictive decision-making.
Further analysis revealed a broader optimization potential, identifying 308,752 barrels/day as recoverable lost production across 1,258 wells, and 183,414 barrels/day as actionable opportunities across 854 wells.
In summary, these use cases validate how AI-driven tools and analytics within KwIDF have improved production efficiency, enabled timely action, and uncovered untapped potential in mature oil fields.
Objectives/Scope:
Piping and Instrumentation Diagrams (P&IDs) are schematic representations of functional relationships between equipment, pipelines, and instrumentation in industrial processes. They are critical for design, operation, and maintenance in industries like oil and gas and chemicals. This paper presents an AI-driven approach for digitizing P&IDs, transforming image-based diagrams into structured, analyzable data for efficient and accurate digital representation.
Methods, Procedures, Process:
This study employs a comprehensive AI pipeline integrating object detection, Optical Character Recognition (OCR), and advanced machine learning techniques. The process includes identifying symbols, instruments, and lines from scanned P&IDs and associating them with relevant textual data. Methods like template matching, advanced vessel detection, and OCR are combined with data augmentation and parallel processing to handle complex P&ID layouts while ensuring high accuracy and scalability.
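The template-matching component of the pipeline can be illustrated on a tiny binary raster: slide a small symbol template over the image and report positions where every pixel agrees. Real P&ID digitization operates on scanned images with scoring and tolerance rather than exact matches; the grid below is purely illustrative:

```python
# Toy exact template matching on a binary raster. Real pipelines use
# scored matching on scanned images; this grid is illustrative.
def match_template(image, template):
    """Return (row, col) of every window that matches the template exactly."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits

# 6x8 "diagram" containing two copies of a diagonal 2x2 "symbol"
image = [[0] * 8 for _ in range(6)]
for r, c in [(1, 2), (2, 3), (3, 5), (4, 6)]:
    image[r][c] = 1

template = [[1, 0],
            [0, 1]]
hits = match_template(image, template)
```

Each hit would then be passed to the OCR/association stage to link the detected symbol with nearby tag text.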
Results, Observations, Conclusions:
The AI-driven system demonstrated exceptional precision and recall in digitizing P&IDs, achieving over 97% accuracy in symbol detection and nearly 100% accuracy in text recognition. Key outcomes include accurate identification of symbols, association with relevant metadata, and efficient handling of large-scale, complex diagrams. The system efficiently processed diverse diagram layouts and orientations, demonstrating robustness against variations in diagram styles, symbol sizes, and text placements. Advanced OCR methods mitigated common text recognition challenges, such as distinguishing similar characters, improving overall reliability.
The resulting structured digital data facilitates enhanced usability in applications like system modeling, operational analysis, and compliance reporting. Visual outputs, such as annotated diagrams, allow for seamless verification and validation of results. By significantly reducing manual effort and minimizing errors, this approach accelerates industrial workflows, supporting informed decision-making. The pipeline’s scalability ensures adaptability to extensive industrial datasets, offering a transformative solution for managing and digitizing P&IDs at scale.
Novel/Additive Information:
This paper introduces a novel end-to-end AI solution for P&ID digitization, combining advanced OCR, template matching, and object detection to achieve unparalleled accuracy and efficiency. It sets a new standard in industrial diagram digitalization, addressing long-standing challenges in processing complex P&ID layouts.
Objectives/Scope:
We have previously shown that deep interaction neural networks based on graphs can learn complex flow-physics relationships from reservoir models to accelerate forward simulations [1]. The generalization and scale-up to a deep learning architecture, the Subsurface Graph Network Simulator (SGNS) [2], renders fast and accurate long-term spatio-temporal predictions of fluid and pressure propagation in structurally diverse reservoir model grids [3]. This paper builds on the latter and introduces the benchmarking and deployment of SGNS in an operational engineering simulation environment.
Methods, Procedures, Process:
The SGNS is a data-driven surrogate framework that consists of a subsurface graph neural network (SGNN) to model the evolution of fluids, and a 3D-U-Net to model the evolution of pressure. The SGNN uses an encoder-processor-decoder architecture to encode the properties and dynamics of each grid cell, and the cell-cell relations, into graph node and edge features. A multilayer perceptron computes the interactions between neighboring cells and updates the cell states. The pressure dynamics manifests a shorter equilibrium time and faster global spatial evolution than the fluids, and is better captured with the hierarchical structure and a modified order of operations of the 3D-U-Net convolution layers.
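The processor stage can be sketched as one message-passing step: each node (grid cell) aggregates MLP-transformed messages from its neighbors, scaled by an edge feature such as transmissibility. The single tanh layer, weights, and features below are illustrative stand-ins for the learned SGNN, not its actual architecture:

```python
# One illustrative message-passing step. The single-layer "MLP",
# weights, and features are stand-ins for the learned SGNN model.
import math

W = [[0.5, -0.25], [0.1, 0.3]]   # assumed dense-layer weights
B = [0.0, 0.1]                   # assumed biases

def mlp(vec):
    # single tanh layer standing in for the learned multilayer perceptron
    return [math.tanh(sum(w * x for w, x in zip(row, vec)) + b)
            for row, b in zip(W, B)]

def message_pass(h, edges):
    """h: node -> feature vector; edges: (u, v, transmissibility) tuples."""
    dim = len(next(iter(h.values())))
    agg = {n: [0.0] * dim for n in h}
    for u, v, t in edges:
        for node, other in ((v, u), (u, v)):      # symmetric exchange
            msg = mlp([t * x for x in h[other]])  # edge-scaled message
            agg[node] = [a + m for a, m in zip(agg[node], msg)]
    # residual update of the node states
    return {n: [x + m for x, m in zip(h[n], agg[n])] for n in h}

# three cells: 'c' is isolated, so its state should remain unchanged
h = {"a": [0.2, -0.1], "b": [0.5, 0.3], "c": [1.0, 1.0]}
edges = [("a", "b", 0.8)]
h_new = message_pass(h, edges)
```

Stacking such steps lets information propagate multiple cells per rollout step, which is why graph processors can track fluid fronts across the grid.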
Results, Observations, Conclusions:
We deploy the SGNS on a synthetic, single-porosity/single-permeability reservoir model with multi-phase flow, a large-scale simulation grid, and large numbers of injectors and producers with variable positioning and geometry. We construct a network graph whose nodes represent reservoir grid cells and are encoded with tens of static, dynamic, computed, and control features. The wells are encoded via well-completion factors. The graph edges represent interactions between the nodes, with encoded features such as transmissibility, direction, and fluxes. We implement sector-based training with multi-step rollout to enable the use of large-scale models. The loss function is a joint mean squared error combining misfits in oil and water volumes and pressure. We present comparative results between the SGNS and the full-physics simulation for up to 30-year predictions of the 3D pressure, oil and water saturation, as well as the dynamic well responses.
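The joint loss described above can be sketched as a weighted sum of per-quantity mean squared errors; the equal weights and the toy values below are assumptions for illustration:

```python
# Joint MSE over oil-volume, water-volume, and pressure misfits.
# Equal weights and the toy values are assumed for illustration.
def joint_mse(pred, target, weights=(1.0, 1.0, 1.0)):
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return sum(w * mse(pred[k], target[k])
               for w, k in zip(weights, ("oil", "water", "pressure")))

pred = {"oil": [1.0, 2.0], "water": [0.3, 0.4], "pressure": [200.0, 210.0]}
target = {"oil": [1.0, 1.0], "water": [0.3, 0.4], "pressure": [200.0, 200.0]}
loss = joint_mse(pred, target)   # 0.5 (oil) + 0.0 (water) + 50.0 (pressure)
```

In practice the weights would balance the very different scales of saturation and pressure so that one misfit does not dominate training.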
Novelty/Significance/Additive Information:
The SGNS framework is a novel, industry-unique technology with promising scalability, generalization and prediction accuracy. The immediate applications involve accelerated well placement and production forecasting studies. Going forward, we are in the process of integrating the well prediction model in the roll-out of the evolution model, incorporating the encoding of well production constraints into the training scheme and deploying state-of-the-art architectures to enable multi-GPU training.
References:
[1] M. Maucec, R. Jalali (2022). "GeoDIN – Geoscience-Based Deep Interaction Networks for Predicting Flow Dynamics in Reservoir Simulation Models", SPE-203952-PA, SPE Journal, 27 (03): 1671–1689.
[2] T. Wu et al. (2022). "Learning Large-Scale Subsurface Simulations with a Hybrid Graph Network Simulator", 2022 ACM SIGKDD.
[3] M. Maucec, R. Jalali, H. Hamam (2024). "Predicting Subsurface Reservoir Flow Dynamics at Scale with Hybrid Neural Network Simulator", IPTC-24367-MS.
Today, digital technologies driven by data science hold immense potential for enhancing oil and gas industry performance. Value realisation governance and adoption focus are key factors in ensuring sustainable digital investment strategies. The NOC established a digital program in 2019 to support the growth of operations and to optimize resources and cost during a period when the asset base was doubling in size. Key regional challenges and opportunities for further innovation were evident in the following areas: vast reservoirs and sustainable production methods; maintaining aging infrastructure, legacy wells, and facilities; and balancing sustainability with cost efficiency. The digitalization journey started with uncertainty about the associated value.
This framework is a structured approach that helps the organization assess, prioritize, and measure the impact of digital initiatives. It ensures alignment with business goals, maximizes value, and provides a systematic way to evaluate digital projects.
Key transformation areas were identified as the main value streams of the program, driven by the following transformation needs: occupational and process safety, sustainability, harmonizing legacy with innovation, and managing complex operations.
The transformational and cost-efficiency drivers were addressed through several use cases covering different value streams, including:
- Sustainability/cost efficiency: Offshore logistics optimization
- Production: Production optimisation tool, data analytics tool
- Reservoir and geoscience: Pressure conditioning recommendations tool
- Reliability/process safety: Health check for rotating equipment (predictive maintenance)
- Maintenance planning and scheduling: Offshore activity optimisation tool
The key components of the value framework are:
- Business objective alignment through creating a predefined checklist
- Predefined value levers that help the business find the dimension of value matching their business case
- Prioritization and KPI driven decision-making framework through value realization tree
- Assumption registers to create a common understanding of all the assumptions made while defining the business case
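The framework components listed above can be sketched as a small prioritization model. This is a hypothetical illustration, assuming a simple score of estimated value weighted by checklist alignment; the class and field names are invented and the actual value realization tree is richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One digital use case evaluated by the value framework (hypothetical schema)."""
    name: str
    value_stream: str                 # e.g. "Production", "Sustainability"
    estimated_value_musd: float       # estimated value, million USD
    checks_passed: int                # business-objective checklist items met
    checks_total: int                 # items in the predefined checklist
    assumptions: list = field(default_factory=list)  # the assumption register

    def priority_score(self) -> float:
        # KPI-driven score: estimated value weighted by objective alignment
        if self.checks_total == 0:
            return 0.0
        return self.estimated_value_musd * (self.checks_passed / self.checks_total)

def prioritize(use_cases):
    """Rank use cases for the value realization tree, highest score first."""
    return sorted(use_cases, key=lambda u: u.priority_score(), reverse=True)
```

A well-aligned, modest-value use case can outrank a higher-value one with poor objective alignment, which is the behaviour the predefined checklist is meant to enforce.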
Finally, the framework's value estimation contributed to a 15% increase in digital program value. Its adoption was driven by several challenges: the lack of precise methods of measuring value, which led to underestimation; a lack of data-driven decision-making, which impeded focus on investment in high-value digital use cases; and limited baseline capabilities within business entities.
Best practices captured from this project include harnessing the potential of cross-functional collaboration: the digital team provides value consultation as a service to business entities, which aids the data-driven validation needed to demonstrate the value case. Finally, as with all frameworks, there remains room to refine and enrich it further.
Jin Tang
Speaker
Senior Engineer
Research Institute of Petroleum Exploration and Development, PetroChina
The demand for intelligent systems capable of real-time decision-making is rapidly increasing as the global energy sector advances toward net-zero emissions. This study introduces an AI-augmented framework for Distributed Fiber-Optic Sensing, designed specifically to monitor the evolution of hydraulic fractures in enhanced geothermal systems (EGS) and unconventional oil and gas reservoirs. By utilizing high-resolution data from Distributed Acoustic Sensing (DAS), Distributed Temperature Sensing (DTS), and Distributed Strain Sensing (DSS), the system integrates advanced machine learning algorithms and digital twin technologies to provide continuous, real-time fracture diagnostics. This approach tracks fracture dynamics from initiation through propagation, offering valuable insights into fracture geometry and behavior over time.
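A simple way to see how fracture-related events can be pulled out of a distributed-sensing stream is short-time energy detection on a single DAS channel. The sketch below is a hypothetical, minimal stand-in for the paper's adaptive signal interpretation engine: the window sizes, threshold rule, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def das_event_energy(trace, win=64, hop=32):
    """Short-time RMS energy of a DAS strain-rate trace; energy spikes are a
    crude proxy for fracture-related acoustic events."""
    frames = [trace[i:i + win] for i in range(0, len(trace) - win + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def flag_anomalies(energy, k=3.0):
    """Flag frames whose energy exceeds mean + k * std (threshold detector)."""
    mu, sd = energy.mean(), energy.std()
    return energy > mu + k * sd
```

In a full system this per-channel detector would feed DAS, DTS, and DSS features jointly into the machine-learning models and the digital twin, rather than thresholding one trace in isolation.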
The proposed system enhances operational efficiency by enabling precise fracture monitoring, optimizing energy extraction, and minimizing operational risks. Integrating AI-powered predictive analytics facilitates early detection of potential issues, allowing for proactive maintenance and reduced downtime. Additionally, the system offers environmental benefits by optimizing hydraulic fracturing processes, minimizing water and energy consumption, and reducing the ecological footprint of energy extraction.
Supported by large-scale physical modeling and field validation in high-pressure, high-temperature environments, the framework ensures its applicability in real-world conditions. Key innovations include an adaptive signal interpretation engine for accurate fracture characterization and an AI-driven framework that continually adapts based on real-time data inputs. A resilient cybersecurity architecture further ensures the integrity of the data across the monitoring infrastructure, protecting sensitive information throughout energy operations.
This AI-enhanced approach represents a transformative leap in real-time fracture monitoring, providing unprecedented insights into subsurface dynamics. By combining AI with state-of-the-art fiber-optic sensing technologies, the system responds dynamically to complex subsurface conditions. This integrated solution optimizes operational performance, strengthens safety protocols, and improves the overall efficiency of energy production systems.
The framework’s real-time monitoring of hydraulic fractures is essential for optimizing energy production efficiency. Its predictive capabilities enable proactive maintenance and operational adjustments, ensuring continuous optimization. Additionally, the system contributes to environmental sustainability by helping energy producers reduce waste and mitigate ecological risks typically associated with energy extraction.
Aligned with the broader goals of the energy transition, this study demonstrates how the convergence of AI, fiber-optic sensing, and real-time monitoring technologies can unlock new levels of sustainability, efficiency, and safety in energy systems. By providing a holistic solution that optimizes fracture characterization, mitigates risks, and enhances energy production efficiency, this approach represents a significant step forward in the development of sustainable and resilient energy systems for the future.
As shale gas wells enter the middle and late stages of development, abnormal wellbore conditions such as liquid loading, tubing blockage, and tubing perforation occur with increasing frequency, severely constraining production capacity. Accurate and timely diagnosis of daily production monitoring data is therefore essential to ensure the efficient development of shale gas resources. Conventional diagnostic methods for abnormal wellbore conditions are limited by their strong dependence on labeled data, inadequate capability for multimodal information integration, and the continued requirement for manual post-interpretation of results. To address these challenges, an intelligent diagnostic approach based on a vision–language model (VLM) is proposed. In this method, historical production time-series data are normalized, segmented by sliding windows, and encoded into two-dimensional image representations, which are then combined with text-based expert knowledge prompts to construct structured training samples. The VLM is fine-tuned using task-specific instructions and structured training data, enabling anomaly type classification, start–end time identification, diagnostic interpretation, and drainage strategy recommendation. In addition, structured output templates are introduced to ensure that the generated results are not only accurate but also consistent and interpretable. The proposed method was validated on 500 expert-annotated shale gas well cases and compared with traditional machine learning methods and time-series diagnostic approaches based on large language models. The results demonstrated that, under limited labeled data conditions, the VLM maintained high diagnostic accuracy and, in particular, significantly outperformed existing methods in reducing false alarm rates. 
The model was further shown to automatically generate operator-oriented textual explanations of abnormal conditions, thereby reducing the need for manual interpretation and enhancing both human–machine collaboration and real-time performance in the diagnostic process. Additional analysis revealed that the VLM exhibits clear advantages in handling long-term production monitoring data from shale gas wells: by preserving both local fluctuations and global trends in the time series through image representations, the model can capture short-term anomalies as well as long-term cumulative effects, thereby improving diagnostic robustness under complex operating conditions. Moreover, the incorporation of textual prompts effectively compensates for the lack of domain knowledge in purely data-driven models, ensuring that diagnostic outputs align more closely with field operational logic and provide directly actionable recommendations. In summary, the proposed VLM-based diagnostic framework for shale gas well anomalies not only enhances the accuracy and reliability of anomaly diagnosis while reducing reliance on manual intervention, but also offers a scalable and low-maintenance solution for real-time diagnosis and response under complex operating conditions. This work opens a new avenue for intelligent anomaly diagnosis and supports the advancement of digital and automated production management in the oil and gas industry.
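The pre-processing step described above — normalizing the production time series, segmenting it with sliding windows, and encoding each segment as a 2D image for the VLM — can be sketched as follows. The line-plot rasterization shown here is an assumption for illustration; the paper does not specify its exact image encoding, and the function names are hypothetical.

```python
import numpy as np

def normalize(series):
    """Min-max normalize a production time series to [0, 1]."""
    lo, hi = series.min(), series.max()
    if hi == lo:
        return np.zeros_like(series, dtype=float)
    return (series - lo) / (hi - lo)

def windows_to_images(series, win=32, hop=8, size=32):
    """Slide a window over the series and rasterize each segment into a
    size x size binary image (a simple line-plot raster; the actual
    encoding used in the paper may differ)."""
    imgs = []
    for start in range(0, len(series) - win + 1, hop):
        seg = normalize(series[start:start + win])
        img = np.zeros((size, size), dtype=np.uint8)
        xs = np.linspace(0, size - 1, win).round().astype(int)   # time axis
        ys = (size - 1 - seg * (size - 1)).round().astype(int)   # value axis, top = max
        img[ys, xs] = 1
        imgs.append(img)
    return np.stack(imgs)
```

Each image would then be paired with a text prompt carrying expert knowledge (well type, recent operations, known anomaly signatures) to form one structured training sample for fine-tuning the VLM.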
Co-author/s:
Liang Xue, Associate Dean, China University of Petroleum.
Dr. Haiyang Chen, Student, China University of Petroleum.
Shengdon Zhang, Student, Computer Network Information Center, Chinese Academy of Sciences.


