-
Publication No.: US20230249083A1
Publication Date: 2023-08-10
Application No.: US17650287
Filing Date: 2022-02-08
Inventors: Peter Wurman, Leon Barrett, Piyush Khandelwal, Dion Whitehead, Rory Douglas, Houmehr Aghabozorgi, Justin V Beltran, Rabih Abdul Ahad, Bandaly Azzam
IPC Classes: A63F13/67, A63F13/352
CPC Classes: A63F13/67, A63F13/352, A63F2300/531, A63F2300/6027
Abstract: An artificial intelligence agent can act as a player in a video game, such as a racing video game. The game can be completely external to the agent and can run in real time, making the training system much more like a real-world system. The consoles on which the game runs for training the agent are provided in a cloud computing environment. The agents and the trainers can run on other computing devices in the cloud, where the system can choose the trainer and agent compute based, for example, on proximity to the console. Users can choose the game they want to run and submit code that can be built and deployed to the cloud system. A resource management service can monitor game console allocation between human users and research usage and identify experiments to suspend so that enough game consoles remain available for human users.
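For illustration, a minimal Python sketch of the console-reallocation idea described in this abstract; the data structures, the priority field, and the human-reserve threshold are hypothetical stand-ins, not details taken from the patent.

from dataclasses import dataclass
from typing import List


@dataclass
class Experiment:
    name: str
    consoles_used: int
    priority: int  # lower value = less important, suspended first


def experiments_to_suspend(
    experiments: List[Experiment],
    total_consoles: int,
    consoles_reserved_for_humans: int,
) -> List[Experiment]:
    """Pick the least important experiments to suspend until enough
    consoles are free for human players."""
    free = total_consoles - sum(e.consoles_used for e in experiments)
    to_suspend: List[Experiment] = []
    for exp in sorted(experiments, key=lambda e: e.priority):
        if free >= consoles_reserved_for_humans:
            break
        to_suspend.append(exp)
        free += exp.consoles_used
    return to_suspend


# Example: 120 consoles total, two experiments using 100, reserve 50 for humans.
pool = [
    Experiment("lap-time-baseline", consoles_used=40, priority=1),
    Experiment("overtaking-curriculum", consoles_used=60, priority=5),
]
print([e.name for e in experiments_to_suspend(pool, 120, 50)])  # ['lap-time-baseline']

Here the lowest-priority experiments are suspended first until the human reserve is met; the abstract does not specify a particular suspension policy.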
-
Publication No.: US20230249082A1
Publication Date: 2023-08-10
Application No.: US17650275
Filing Date: 2022-02-08
Inventors: Peter Wurman, Leon Barrett, Piyush Khandelwal, Dion Whitehead, Rory Douglas, Houmehr Aghabozorgi, Justin V Beltran, Rabih Abdul Ahad, Bandaly Azzam
IPC Classes: A63F13/67, A63F13/352, G06F8/71
CPC Classes: A63F13/67, A63F13/352, G06F8/71
Abstract: An artificial intelligence agent can act as a player in a video game, such as a racing video game. The agent can race against, and often beat, the best players in the world. The game can be completely external to the agent and can run in real time, making the training system much more like a real-world system. The consoles on which the game runs for training the agent are provided in a cloud computing environment. The agents and the trainers can run on other computing devices in the cloud, where the system can choose the trainer and agent compute based, for example, on proximity to the console. Users can choose the game they want to run and submit code that can be built and deployed to the cloud system. Metrics, logs, and artifacts from the game can be sent to cloud storage.
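As an illustration of the proximity-based compute placement mentioned above, a minimal Python sketch follows; the region names and the latency table are hypothetical.

from typing import Dict, List, Tuple


def pick_compute_near_console(
    console_region: str,
    candidate_regions: List[str],
    latency_ms: Dict[Tuple[str, str], float],
) -> str:
    """Return the candidate compute region with the lowest measured
    latency to the console's region."""
    return min(
        candidate_regions,
        key=lambda region: latency_ms.get((console_region, region), float("inf")),
    )


# Example with a hypothetical latency table.
latencies = {
    ("us-west", "us-west"): 2.0,
    ("us-west", "us-east"): 70.0,
    ("us-west", "eu-central"): 140.0,
}
print(pick_compute_near_console("us-west", ["us-east", "us-west", "eu-central"], latencies))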
-
Publication No.: US11745109B2
Publication Date: 2023-09-05
Application No.: US17650287
Filing Date: 2022-02-08
Inventors: Peter Wurman, Leon Barrett, Piyush Khandelwal, Dion Whitehead, Rory Douglas, Houmehr Aghabozorgi, Justin V Beltran, Rabih Abdul Ahad, Bandaly Azzam
IPC Classes: A63F13/67, A63F13/352
CPC Classes: A63F13/67, A63F13/352, A63F2300/531, A63F2300/6027
Abstract: An artificial intelligence agent can act as a player in a video game, such as a racing video game. The game can be completely external to the agent and can run in real time, making the training system much more like a real-world system. The consoles on which the game runs for training the agent are provided in a cloud computing environment. The agents and the trainers can run on other computing devices in the cloud, where the system can choose the trainer and agent compute based, for example, on proximity to the console. Users can choose the game they want to run and submit code that can be built and deployed to the cloud system. A resource management service can monitor game console allocation between human users and research usage and identify experiments to suspend so that enough game consoles remain available for human users.
-
Publication No.: US20230237370A1
Publication Date: 2023-07-27
Application No.: US17650295
Filing Date: 2022-02-08
Inventors: Thomas J. Walsh, Varun Kompella, Samuel Barrett, Michael D. Thomure, Patrick MacAlpine, Peter Wurman
IPC Classes: G06N20/00
CPC Classes: G06N20/00
Abstract: A method for training an agent uses a mixture of scenarios designed to teach specific skills helpful in a larger domain, such as mixing general racing and very specific tactical racing scenarios. Aspects of the method can include one or more of the following: (1) training the agent to be very good at time trials by having one or more cars spread out on the track; (2) running the agent in various racing scenarios with a variable number of opponents starting in different configurations around the track; (3) varying the opponents by using game-provided agents, agents trained according to aspects of the present invention, or agents controlled to follow specific driving lines; (4) setting up specific short scenarios with opponents in various racing situations with specific success criteria; and (5) having a dynamic curriculum based on how the agent performs on a variety of evaluation scenarios.
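A minimal Python sketch of one way such a dynamic curriculum could weight scenarios by evaluation performance; the scenario names, the failure-rate weighting, and the sampling floor are illustrative assumptions, not taken from the patent.

import random
from typing import Dict


def scenario_weights(eval_success_rate: Dict[str, float]) -> Dict[str, float]:
    """Weight each scenario by its failure rate so weaker skills are
    practiced more often; a small floor keeps every scenario in the mix."""
    floor = 0.05
    return {name: max(1.0 - rate, floor) for name, rate in eval_success_rate.items()}


def sample_scenario(eval_success_rate: Dict[str, float]) -> str:
    """Draw the next training scenario according to the current weights."""
    weights = scenario_weights(eval_success_rate)
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]


# Example: the agent is weakest at slipstream passing, so it is sampled most often.
evals = {"time_trial": 0.95, "open_racing": 0.70, "slipstream_pass": 0.40}
print([sample_scenario(evals) for _ in range(5)])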
-
Publication No.: US20230249074A1
Publication Date: 2023-08-10
Application No.: US17650300
Filing Date: 2022-02-08
IPC Classes: A63F13/5375, A63F13/573, A63F13/58, A63F13/803, A63F13/49, G06N3/02
CPC Classes: A63F13/5375, A63F13/573, A63F13/58, A63F13/803, A63F13/49, G06N3/02, A63F2300/8017
Abstract: Dynamic driving aids, such as driving lines, turn indicators, braking indicators, and acceleration indicators, can be provided for players participating in a racing game. Typically, driving lines are provided for each class of cars. However, even within a class, each car differs enough that the ideal driving lines and braking points can vary. Therefore, with an agent trained via reinforcement learning, ideal lines and other driving aids can be established for every individual car. These guides can even be varied to account for variations in the weather or other track conditions.
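For illustration, a minimal Python sketch of deriving a per-car line by rolling out a trained policy and recording the positions it visits; the environment and policy interfaces here are placeholders, not the game's actual API.

from typing import Callable, List, Tuple

Observation = dict            # whatever state the game exposes (placeholder)
Action = dict                 # steering/throttle/brake commands (placeholder)
Position = Tuple[float, float]


def record_driving_line(
    reset_env: Callable[[str], Observation],                 # start a lap with the chosen car
    step_env: Callable[[Action], Tuple[Observation, bool]],  # returns (next obs, lap_done)
    policy: Callable[[Observation], Action],                 # the trained agent
    car_id: str,
    max_steps: int = 10_000,
) -> List[Position]:
    """Run one lap with the trained agent in the chosen car and return the
    sequence of (x, y) positions as that car's recommended line."""
    obs = reset_env(car_id)
    line: List[Position] = []
    for _ in range(max_steps):
        line.append((obs["x"], obs["y"]))
        obs, lap_done = step_env(policy(obs))
        if lap_done:
            break
    return line

Repeating this per car (and per weather or track condition) would yield the per-car guides the abstract describes.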
-
Publication No.: US20220365493A1
Publication Date: 2022-11-17
Application No.: US17314351
Filing Date: 2021-05-07
Inventors: Samuel Barrett, James MacGlashan, Varun Kompella, Peter Wurman, Goker Erdogan, Fabrizio Santini
Abstract: Systems and methods are used to adapt the coefficients of a proportional-integral-derivative (PID) controller through reinforcement learning. The approach can include an outer loop of reinforcement learning, where the PID coefficients are tuned to changes in the environment, and an inner loop of PID control for quickly reacting to changing errors. The outer loop can learn and adapt as the environment changes and can be configured to run only at a predetermined frequency, i.e., after a given number of steps. The outer loop can use summary statistics about the error terms, together with any other information sensed about the environment, to form an observation. This observation can be used to compute the next action, for example, by feeding it into a neural network representing the policy. The resulting action is the set of PID coefficients and the tunable parameters of components such as filters.
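A minimal Python sketch of the two-loop structure described in this abstract: an inner PID loop reacts to the error at every step, while an outer loop periodically maps summary statistics of recent errors to new gains. The policy here is a plain callable standing in for the learned (e.g., neural-network) policy, and the update period and chosen statistics are illustrative.

from dataclasses import dataclass
from statistics import mean, pstdev
from typing import Callable, List, Tuple


@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def step(self, error: float, dt: float) -> float:
        """One inner-loop control step."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def run_adaptive_pid(
    read_error: Callable[[], float],          # current setpoint error from the plant
    apply_control: Callable[[float], None],   # send the control signal to the plant
    policy: Callable[[List[float]], Tuple[float, float, float]],  # learned gain selector
    steps: int,
    outer_period: int = 100,
    dt: float = 0.01,
) -> None:
    pid = PID(kp=1.0, ki=0.0, kd=0.0)
    recent_errors: List[float] = []
    for t in range(steps):
        error = read_error()
        apply_control(pid.step(error, dt))    # inner loop: runs every step
        recent_errors.append(error)
        if (t + 1) % outer_period == 0:       # outer loop: runs every N steps
            observation = [mean(recent_errors), pstdev(recent_errors), recent_errors[-1]]
            pid.kp, pid.ki, pid.kd = policy(observation)  # new gains from the policy
            recent_errors.clear()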
-
Publication No.: US12017148B2
Publication Date: 2024-06-25
Application No.: US17804716
Filing Date: 2022-05-31
Inventors: Rory Douglas, Dion Whitehead, Leon Barrett, Piyush Khandelwal, Thomas Walsh, Samuel Barrett, Kaushik Subramanian, James MacGlashan, Leilani Gilpin, Peter Wurman
IPC Classes: A63F13/67, A63F13/573, A63F13/803
CPC Classes: A63F13/67, A63F13/573, A63F13/803
Abstract: A user interface (UI) for analyzing model training runs and for tracking and visualizing various aspects of machine learning experiments can be used when training an artificial intelligence agent in, for example, a racing game environment. The UI can be web-based and can allow researchers to easily see the status of their experiments. The UI can include a synchronized event viewer that synchronizes visualizations, videos, and timeline/metrics graphs for an experiment, allowing researchers to see how experiments unfold in great detail. The UI can further generate experiment event annotations, which can be displayed via the synchronized event viewer. The UI can also present consolidated results across experiments, including videos; for example, it can provide a reusable dashboard that captures and compares metrics across multiple experiments.
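As a rough illustration of the synchronization idea behind the event viewer, a minimal Python sketch that maps a timeline cursor to the matching video frame and metric sample; the sorted-timestamp layout is an assumption for illustration, not the patented design.

from bisect import bisect_right
from typing import List, Tuple


def synced_indices(
    cursor_time: float,
    frame_times: List[float],    # sorted timestamps of video frames
    metric_times: List[float],   # sorted timestamps of logged metrics
) -> Tuple[int, int]:
    """Return (frame_index, metric_index) for the samples at or just before
    the timeline cursor, so every panel shows the same moment."""
    frame_idx = max(bisect_right(frame_times, cursor_time) - 1, 0)
    metric_idx = max(bisect_right(metric_times, cursor_time) - 1, 0)
    return frame_idx, metric_idx


# Example: a cursor at t=2.5 s selects frame 2 and metric sample 1.
print(synced_indices(2.5, [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0]))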
-
Publication No.: US20230381660A1
Publication Date: 2023-11-30
Application No.: US17804716
Filing Date: 2022-05-31
Inventors: Rory Douglas, Dion Whitehead, Leon Barrett, Piyush Khandelwal, Thomas Walsh, Samuel Barrett, Kaushik Subramanian, James MacGlashan, Leilani Gilpin, Peter Wurman
IPC Classes: A63F13/67, A63F13/573, A63F13/803
CPC Classes: A63F13/67, A63F13/573, A63F13/803
Abstract: A user interface (UI) for analyzing model training runs and for tracking and visualizing various aspects of machine learning experiments can be used when training an artificial intelligence agent in, for example, a racing game environment. The UI can be web-based and can allow researchers to easily see the status of their experiments. The UI can include a synchronized event viewer that synchronizes visualizations, videos, and timeline/metrics graphs for an experiment, allowing researchers to see how experiments unfold in great detail. The UI can further generate experiment event annotations, which can be displayed via the synchronized event viewer. The UI can also present consolidated results across experiments, including videos; for example, it can provide a reusable dashboard that captures and compares metrics across multiple experiments.
-
Publication No.: US12083429B2
Publication Date: 2024-09-10
Application No.: US17650300
Filing Date: 2022-02-08
IPC Classes: A63F13/5375, A63F13/49, A63F13/573, A63F13/58, A63F13/803, G06N3/02
CPC Classes: A63F13/5375, A63F13/49, A63F13/573, A63F13/58, A63F13/803, G06N3/02, A63F2300/8017
Abstract: Dynamic driving aids, such as driving lines, turn indicators, braking indicators, and acceleration indicators, can be provided for players participating in a racing game. Typically, driving lines are provided for each class of cars. However, even within a class, each car differs enough that the ideal driving lines and braking points can vary. Therefore, with an agent trained via reinforcement learning, ideal lines and other driving aids can be established for every individual car. These guides can even be varied to account for variations in the weather or other track conditions.
-
Publication No.: US20220101064A1
Publication Date: 2022-03-31
Application No.: US17036913
Filing Date: 2020-09-29
Inventors: Varun Kompella, James MacGlashan, Peter Wurman, Peter Stone
Abstract: A task-prioritized experience replay (TaPER) algorithm enables simultaneous off-policy learning of multiple RL tasks. The algorithm can prioritize samples that were part of fixed-length episodes that led to the achievement of tasks, enabling the agent to quickly learn task policies by bootstrapping over its early successes. TaPER can also improve performance on all tasks simultaneously, which is a desirable characteristic for multi-task RL. Unlike conventional ER algorithms, which are applied to single-task RL settings or require rewards to be binary or abundant or goals to be provided as a parameterized specification, TaPER poses no such restrictions and supports arbitrary reward and task specifications.
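For illustration, a minimal Python sketch of the prioritization idea: transitions from episodes that achieved a task are sampled preferentially over the rest. The buffer layout and the fixed mixing ratio are simplifying assumptions and not part of TaPER as described.

import random
from typing import Any, List, Tuple

Transition = Tuple[Any, Any, float, Any]  # (state, action, reward, next_state)


class TaskPrioritizedReplay:
    """Replay buffer that favors transitions from task-achieving episodes."""

    def __init__(self, success_fraction: float = 0.7) -> None:
        self.success_fraction = success_fraction
        self.successful: List[Transition] = []  # from episodes that achieved a task
        self.other: List[Transition] = []       # everything else

    def add_episode(self, episode: List[Transition], achieved_task: bool) -> None:
        (self.successful if achieved_task else self.other).extend(episode)

    def sample(self, batch_size: int) -> List[Transition]:
        n_success = min(int(batch_size * self.success_fraction), len(self.successful))
        batch = random.sample(self.successful, n_success) if n_success else []
        remainder = batch_size - len(batch)
        if remainder and self.other:
            batch += random.choices(self.other, k=remainder)
        return batch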