DEEP REINFORCEMENT LEARNING INTELLIGENT DECISION-MAKING PLATFORM BASED ON UNIFIED ARTIFICIAL INTELLIGENCE FRAMEWORK

    Publication Number: US20240338570A1

    Publication Date: 2024-10-10

    Application Number: US18747561

    Filing Date: 2024-06-19

    CPC classification number: G06N3/092

    Abstract: A deep reinforcement learning (DRL) intelligent decision-making platform based on a unified AI framework includes a parameter configuration module, a general-purpose module, an original environment module, an environment vectorization module, an environment maker, a mathematical utilities module, a model library, and a runner. Parameters of a DRL model are selected through the parameter configuration module and read by the general-purpose module. Based on the read parameters, a representer, a policy module, a learner, and an intelligent agent are called from the model library and created, with the necessary function definitions and optimizers called from the mathematical utilities module. Based on the read parameters, the environment vectorization module is created from the original environment module. The intelligent agent and the vectorized environments are input into the runner, which computes and executes the action output to realize intelligent decision-making.
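
    A minimal sketch of the workflow described in this abstract, assuming hypothetical names (Config, make_vec_env, RandomAgent, Runner) and the Gymnasium vector-environment API rather than the platform's actual modules: parameters are read, an agent is assembled from a model-library stand-in, the original environment is vectorized, and the runner computes and executes actions.

```python
import gymnasium as gym


class Config:
    """Stand-in for the parameter configuration module (hypothetical fields)."""
    env_id = "CartPole-v1"   # original environment
    num_envs = 4             # width of the environment vectorization
    total_steps = 1_000      # length of the decision-making loop


def make_vec_env(cfg):
    """Environment vectorization: wrap the original environment into parallel copies."""
    return gym.vector.SyncVectorEnv(
        [lambda: gym.make(cfg.env_id) for _ in range(cfg.num_envs)]
    )


class RandomAgent:
    """Stand-in for the representer/policy/learner stack called from the model library."""

    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, observations):
        # A real agent would map observations to actions through its learned policy.
        return self.action_space.sample()


class Runner:
    """Runner: feeds observations to the agent and executes the resulting actions."""

    def __init__(self, agent, envs, cfg):
        self.agent, self.envs, self.cfg = agent, envs, cfg

    def run(self):
        obs, _ = self.envs.reset()
        for _ in range(self.cfg.total_steps):
            actions = self.agent.act(obs)
            obs, rewards, terminated, truncated, _ = self.envs.step(actions)


if __name__ == "__main__":
    cfg = Config()
    envs = make_vec_env(cfg)
    agent = RandomAgent(envs.action_space)
    Runner(agent, envs, cfg).run()
```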

    DYNAMIC OBSTACLE AVOIDANCE METHOD BASED ON REAL-TIME LOCAL GRID MAP CONSTRUCTION

    Publication Number: US20230161352A1

    Publication Date: 2023-05-25

    Application Number: US18147484

    Filing Date: 2022-12-28

    Abstract: A dynamic obstacle avoidance method based on real-time local grid map construction includes: acquiring Red-Green-Blue-Depth (RGB-D) image data of a real indoor scene and inputting it into a trained obstacle detection and semantic segmentation network to extract obstacles of different types and semantic segmentation results in the real indoor scene and generate 3D point cloud data with semantic information; extracting state information of dynamic obstacles from the 3D point cloud data and inputting it into a trained dynamic obstacle trajectory prediction model to predict dynamic obstacle trajectories in the real indoor scene and build a local grid map; and, based on a dynamic obstacle avoidance model, sending speed commands in real time to the mobile robot so that it avoids various obstacles during navigation.
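
    One concrete step of this pipeline, rasterizing predicted dynamic-obstacle positions into a robot-centered local occupancy grid, can be sketched as follows. The function name, grid size, and resolution are illustrative assumptions, not the patented implementation.

```python
import numpy as np


def build_local_grid(predicted_points, robot_xy, grid_size=100, resolution=0.05):
    """Return a grid_size x grid_size occupancy grid (cells of `resolution` meters)
    centered on the robot, with cells containing predicted obstacle points set to 1."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size * resolution / 2.0
    for x, y in predicted_points:                 # predicted obstacle positions (world frame)
        dx, dy = x - robot_xy[0], y - robot_xy[1]
        if -half <= dx < half and -half <= dy < half:
            col = int((dx + half) / resolution)
            row = int((dy + half) / resolution)
            grid[row, col] = 1                    # mark the cell as occupied
    return grid


# Example: two predicted obstacle points near a robot at the origin.
local_map = build_local_grid([(0.5, 0.2), (-1.0, 0.8)], robot_xy=(0.0, 0.0))
print(local_map.sum())   # -> 2 occupied cells
```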

ENERGY MANAGEMENT METHOD BASED ON MULTI-AGENT REINFORCEMENT LEARNING IN ENERGY-CONSTRAINED ENVIRONMENTS

    Publication Number: US20250166093A1

    Publication Date: 2025-05-22

    Application Number: US18754120

    Filing Date: 2024-06-25

    Abstract: The present invention relates to an energy flow scheduling method based on multi-agent reinforcement learning, the method comprising: designing an energy flow transmission mode for clustered islands to describe the energy transmission processes between the clustered islands; building an energy flow transmission model for the clustered islands based on the transmission mode; establishing an energy management model for the clustered islands' energy system; and realizing energy flow scheduling for the clustered islands by solving the energy management strategy with multi-agent reinforcement learning methods. By using multi-agent reinforcement learning and taking into account the location characteristics of the clustered islands, the reserves of renewable resources, and the mobile energy storage of electric vessels, the method adapts to changes in the load requirements of inhabited islands.
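
    A hedged sketch of how one island's energy balance in such a scheduling problem might be modeled as a single environment step; all field names, the storage capacity, and the transmission-loss factor are illustrative assumptions rather than the patented model.

```python
from dataclasses import dataclass


@dataclass
class IslandState:
    storage: float      # energy currently in the island's storage (kWh)
    renewable: float    # renewable generation this step (kWh)
    load: float         # local demand this step (kWh)


def step_island(state, transfer_out, capacity=500.0, line_loss=0.05):
    """Apply one scheduling step: local balance plus energy sent to a neighboring island
    (e.g., via electric-vessel mobile storage), with a simple transmission-loss factor."""
    net = state.renewable - state.load - transfer_out
    new_storage = min(max(state.storage + net, 0.0), capacity)
    delivered = transfer_out * (1.0 - line_loss)   # energy arriving at the receiving island
    unmet = max(-(state.storage + net), 0.0)       # demand that could not be served locally
    return IslandState(new_storage, 0.0, 0.0), delivered, unmet


# Example: an island with surplus renewables ships 20 kWh to a neighbor.
s, delivered, unmet = step_island(IslandState(storage=100.0, renewable=80.0, load=50.0),
                                  transfer_out=20.0)
print(round(s.storage, 1), round(delivered, 1), unmet)   # 110.0 19.0 0.0
```

    In a multi-agent reinforcement learning setup, each island would be one agent whose observation is its own IslandState and whose action chooses the transfer amounts; the rewards would penalize unmet demand and transmission losses across the cluster.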
