1.
Publication No.: US12001934B2
Publication Date: 2024-06-04
Application No.: US18196897
Filing Date: 2023-05-12
Inventors: Edwin Olson, Dhanvin H. Mehta, Gonzalo Ferrer
CPC Classifications: G06N3/02, G06N3/008, G06N3/084, G06N7/01, H04N1/00002
Abstract: In Multi-Policy Decision-Making (MPDM), many computationally expensive forward simulations are performed to predict the performance of a set of candidate policies. In risk-aware formulations of MPDM, only the worst outcomes affect the decision-making process, and efficiently finding these influential outcomes becomes the core challenge. Recently, stochastic gradient optimization algorithms using a heuristic function were shown to be significantly superior to random sampling. This disclosure shows that accurate gradients can be computed, even through a complex forward simulation, using approaches similar to those in deep networks. The proposed approach finds influential outcomes more reliably and is faster than earlier methods, allowing one to evaluate more policies while eliminating the need to design an easily differentiable heuristic function.
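The core technique these abstracts describe is computing the gradient of an outcome's cost by backpropagating through the forward simulation itself, then following that gradient to climb toward influential (worst-case) outcomes. The sketch below illustrates that idea only and is not the patented implementation: the toy robot/pedestrian dynamics, the cost function, and every name in it (forward_simulate, risk, policy_gain) are invented for illustration, with jax.grad standing in for the backpropagation-style gradient computation the abstract mentions.

```python
# Minimal sketch (not the patented method): use automatic differentiation
# through a toy forward simulation to find a high-cost ("influential")
# outcome for one candidate policy, as in risk-aware MPDM.
import jax
import jax.numpy as jnp

def forward_simulate(policy_gain, pedestrian_velocity, steps=50, dt=0.1):
    """Toy differentiable forward simulation: a robot drives toward a goal
    while a pedestrian (whose velocity is the sampled outcome variable)
    crosses its path. Returns the closest robot-pedestrian approach."""
    robot = jnp.array([0.0, 0.0])
    pedestrian = jnp.array([2.0, -1.0])
    goal = jnp.array([5.0, 0.0])
    min_clearance = jnp.inf
    for _ in range(steps):
        robot = robot + dt * policy_gain * (goal - robot)   # simple go-to-goal policy
        pedestrian = pedestrian + dt * pedestrian_velocity  # constant-velocity agent
        min_clearance = jnp.minimum(min_clearance,
                                    jnp.linalg.norm(robot - pedestrian))
    return min_clearance

def risk(pedestrian_velocity, policy_gain=0.8):
    """Cost of an outcome: higher when the closest approach is smaller."""
    return -forward_simulate(policy_gain, pedestrian_velocity)

# Gradient of the outcome cost w.r.t. the sampled agent behavior, obtained
# by differentiating through the entire simulation rollout.
risk_grad = jax.grad(risk)

# Gradient ascent on the outcome variable climbs toward the worst-case
# (most influential) outcome for this candidate policy.
v = jnp.array([0.0, 0.5])   # initial sampled pedestrian velocity
for _ in range(100):
    v = v + 0.05 * risk_grad(v)
print("influential pedestrian velocity:", v, " risk:", risk(v))
```

In the risk-aware formulation the abstracts describe, a search of this kind would run for each candidate policy, and the policy whose worst discovered outcome is least bad would be preferred.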
2.
Publication No.: US11087200B2
Publication Date: 2021-08-10
Application No.: US15923577
Filing Date: 2018-03-16
Inventors: Edwin Olson, Dhanvin H. Mehta, Gonzalo Ferrer
Abstract: In Multi-Policy Decision-Making (MPDM), many computationally expensive forward simulations are performed to predict the performance of a set of candidate policies. In risk-aware formulations of MPDM, only the worst outcomes affect the decision-making process, and efficiently finding these influential outcomes becomes the core challenge. Recently, stochastic gradient optimization algorithms using a heuristic function were shown to be significantly superior to random sampling. This disclosure shows that accurate gradients can be computed, even through a complex forward simulation, using approaches similar to those in deep networks. The proposed approach finds influential outcomes more reliably and is faster than earlier methods, allowing one to evaluate more policies while eliminating the need to design an easily differentiable heuristic function.
3.
Publication No.: US11681896B2
Publication Date: 2023-06-20
Application No.: US17371221
Filing Date: 2021-07-09
Inventors: Edwin Olson, Dhanvin H. Mehta, Gonzalo Ferrer
CPC Classifications: G06N3/02, G06N3/008, G06N3/084, G06N7/01, H04N1/00002
Abstract: In Multi-Policy Decision-Making (MPDM), many computationally expensive forward simulations are performed to predict the performance of a set of candidate policies. In risk-aware formulations of MPDM, only the worst outcomes affect the decision-making process, and efficiently finding these influential outcomes becomes the core challenge. Recently, stochastic gradient optimization algorithms using a heuristic function were shown to be significantly superior to random sampling. This disclosure shows that accurate gradients can be computed, even through a complex forward simulation, using approaches similar to those in deep networks. The proposed approach finds influential outcomes more reliably and is faster than earlier methods, allowing one to evaluate more policies while eliminating the need to design an easily differentiable heuristic function.