-
Publication No.: US20200167647A1
Publication Date: 2020-05-28
Application No.: US16662460
Filing Date: 2019-10-24
Applicant: ExxonMobil Research and Engineering Company
Inventor: Stijn De Waele , Myun-Seok Cheon , Kuang-Hung Liu , Shivakumar Kameswaran , Francisco Trespalacios , Dimitri J. Papageorgiou
Abstract: Aspects of the technology described herein comprise a surrogate model for a chemical production process. A surrogate model is a machine-learned model that uses a collection of inputs and outputs from a simulation of the chemical production process and/or actual production data as training data. Once trained, the surrogate model can estimate an output of a chemical production process given an input to the process. Surrogate models are not directly constrained by physical conditions in a plant. This can cause them to suggest optimized outputs that are not possible to produce in the real world. It is a significant challenge to train a surrogate model to only produce outputs that are possible. The technology described herein improves upon previous surrogate models by constraining the output of the surrogate model to outputs that are possible in the real world.
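For readers unfamiliar with surrogate models, the minimal Python sketch below illustrates the general idea only, not the patented method: a simple model is fit to simulated input/output pairs and its predictions are clipped to a known feasible range, standing in for the feasibility constraints the abstract describes. The toy data, variable names, and bounds are all hypothetical.

```python
# Minimal sketch (hypothetical names and values): fit a surrogate to
# simulator data, then constrain predictions to a physically feasible range.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for simulator runs: inputs (e.g. feed rate, temperature) -> yield.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=200)

# Fit a simple linear surrogate (a real system would use a richer learned model).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x, lo=0.0, hi=1.0):
    """Predict yield for input x, constrained to the feasible range [lo, hi]."""
    pred = float(np.append(x, 1.0) @ coef)
    return float(np.clip(pred, lo, hi))  # only return physically possible outputs

print(surrogate(np.array([0.9, 0.8])))
```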
-
Publication No.: US10915073B2
Publication Date: 2021-02-09
Application No.: US16218650
Filing Date: 2018-12-13
Applicant: ExxonMobil Research and Engineering Company
Inventor: Thomas A. Badgwell , Kuang-Hung Liu , Niranjan A. Subrahmanya , Wei D. Liu , Michael H. Kovalski
Abstract: Systems and methods are provided for using a Deep Reinforcement Learning (DRL) agent to provide adaptive tuning of process controllers, such as Proportional-Integral-Derivative (PID) controllers. The agent can monitor process controller performance, and if unsatisfactory, can attempt to improve it by making incremental changes to the tuning parameters for the process controller. The effect of a tuning change can then be observed by the agent and used to update the agent's process controller tuning policy. It has been unexpectedly discovered that providing adaptive tuning based on incremental changes in tuning parameters, as opposed to making changes independent of current values of the tuning parameters, can provide enhanced or improved control over a controlled variable of a process.
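As a rough illustration of the incremental-tuning idea (not the patent's implementation), the sketch below adjusts PID gains for a toy first-order plant by proposing small deltas to the current gains and keeping those that improve tracking. A simple accept/reject rule stands in for the learned DRL policy, and the plant model, cost, and step sizes are assumptions.

```python
# Sketch of incremental tuning (hypothetical process, cost, and gains).
# A real DRL agent would learn the tuning policy; here a simple accept/reject
# rule stands in for it to show the "apply a small delta to current gains" idea.
import numpy as np

def run_episode(kp, ki, kd, steps=200, dt=0.1, setpoint=1.0):
    """Simulate a first-order process under PID control; return tracking cost."""
    y, integral, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)              # simple first-order plant
        cost += err ** 2
        prev_err = err
    return cost

rng = np.random.default_rng(1)
gains = np.array([0.5, 0.1, 0.0])        # current Kp, Ki, Kd
best = run_episode(*gains)
for _ in range(50):
    delta = rng.normal(scale=0.05, size=3)   # incremental change to current gains
    trial = np.clip(gains + delta, 0.0, None)
    cost = run_episode(*trial)
    if cost < best:                          # keep the change if control improves
        gains, best = trial, cost
print(gains, best)
```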
-
Publication No.: US20190187631A1
Publication Date: 2019-06-20
Application No.: US16218650
Filing Date: 2018-12-13
Applicant: ExxonMobil Research and Engineering Company
Inventor: Thomas A. Badgwell , Kuang-Hung Liu , Niranjan A. Subrahmanya , Wei D. Liu , Michael H. Kovalski
Abstract: Systems and methods are provided for using a Deep Reinforcement Learning (DRL) agent to provide adaptive tuning of process controllers, such as Proportional-Integral-Derivative (PID) controllers. The agent can monitor process controller performance, and if unsatisfactory, can attempt to improve it by making incremental changes to the tuning parameters for the process controller. The effect of a tuning change can then be observed by the agent and used to update the agent's process controller tuning policy. It has been unexpectedly discovered that providing adaptive tuning based on incremental changes in tuning parameters, as opposed to making changes independent of current values of the tuning parameters, can provide enhanced or improved control over a controlled variable of a process.
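To illustrate the monitoring step this abstract mentions (the agent acting only when controller performance is unsatisfactory), the hypothetical sketch below scores a closed-loop response using an integral-absolute-error and overshoot check; the metrics and thresholds are assumptions, not taken from the patent.

```python
# Hypothetical monitoring step: compute a simple performance score for the
# controlled variable and flag when retuning is needed.
import numpy as np

def performance_ok(setpoint, measured, max_iae=5.0, max_overshoot=0.2):
    """Return True if tracking quality is acceptable (thresholds are assumptions)."""
    measured = np.asarray(measured)
    iae = float(np.sum(np.abs(setpoint - measured)))      # integral absolute error
    overshoot = float(np.max(measured) - setpoint)
    return iae <= max_iae and overshoot <= max_overshoot

# The agent would only attempt incremental tuning changes when this returns False.
trace = 1.0 - np.exp(-0.1 * np.arange(100))               # example step response
print(performance_ok(1.0, trace))
```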
-
Publication No.: US20200302293A1
Publication Date: 2020-09-24
Application No.: US16785855
Filing Date: 2020-02-10
Applicant: ExxonMobil Research and Engineering Company
Inventor: Kuang-Hung Liu , Michael H. Kovalski , Myun-Seok Cheon , Xiaohui Wu
IPC: G06N3/08 , G05B19/4155
Abstract: An example apparatus for optimizing output of resources from a predefined field can comprise an Artificial Intelligence (AI)-assisted reservoir simulation framework configured to produce a performance profile associated with resources output from the field. The apparatus can further comprise an optimization framework configured for determining one or more financial constraints associated with the field, the optimization framework providing the one or more financial constraints to the AI-assisted reservoir simulation framework, and a deep learning framework configured for training a neural network for use by the optimization framework. The AI-assisted reservoir simulation framework determines, as an output, a plurality of actions for optimizing output of resources from the field.
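As a loose, hypothetical sketch of the loop this abstract describes, the code below fits a simple surrogate to placeholder simulator data (standing in for the trained neural network) and then searches for actions that maximize predicted output subject to a budget limit (standing in for the financial constraints from the optimization framework). Every model, name, and constant is an assumption, not the patented apparatus.

```python
# Hypothetical sketch of optimizing actions over a learned surrogate under a
# financial (budget) constraint.
import numpy as np

rng = np.random.default_rng(2)

# Placeholder "simulator" data: action vector -> cumulative production.
actions = rng.uniform(0.0, 1.0, size=(300, 3))
production = actions @ np.array([2.0, 1.5, 1.0]) - 0.5 * (actions ** 2).sum(axis=1)

# "Deep learning framework": here just a quadratic least-squares surrogate.
features = np.hstack([actions, actions ** 2, np.ones((len(actions), 1))])
coef, *_ = np.linalg.lstsq(features, production, rcond=None)

def predict(a):
    """Predicted production profile value for a candidate action vector."""
    return float(np.concatenate([a, a ** 2, [1.0]]) @ coef)

# "Optimization framework": random search under an assumed cost model and budget.
cost_per_unit, budget = np.array([3.0, 2.0, 1.0]), 4.0
best_a, best_p = None, -np.inf
for _ in range(2000):
    a = rng.uniform(0.0, 1.0, size=3)
    if a @ cost_per_unit > budget:      # skip actions that violate the budget
        continue
    p = predict(a)
    if p > best_p:
        best_a, best_p = a, p
print(best_a, best_p)
```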