Formation Flight of Unmanned Aerial Vehicles

    Publication number: US20210403159A1

    Publication date: 2021-12-30

    Application number: US17285684

    Application date: 2019-01-22

    Abstract: A method (100) for managing a group of Unmanned Aerial Vehicles (UAVs) operable to fly in formation is disclosed, each UAV being programmed with a task to perform. The method is performed in a controller UAV of the group and comprises receiving UAV status information from the UAVs in the group (110), obtaining information on the current formation of the group (120), and combining the received UAV status information with the information on the current group formation to form a representation of a first state of the group (130). The method further comprises using a machine learning model to predict, on the basis of the first state of the group, an optimal formation transition to a new formation (140), and instructing the UAVs in the group to perform the predicted optimal formation transition (150). An optimal formation transition is a transition to the formation that minimises the predicted total energy consumption for all UAVs in the group to complete their tasks.
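    The abstract outlines steps (110)-(150) without fixing an implementation. The Python sketch below shows one plausible way the controller UAV could fuse status and formation information into a state vector and query a model for the energy-minimising transition. It is a minimal illustration only: EnergyModel, build_state, the status fields, and the formation encodings are hypothetical stand-ins, not taken from the patent.

        import numpy as np

        class EnergyModel:
            """Stand-in for the trained machine learning model: maps a
            (state, candidate formation) vector to predicted total energy
            for the group to complete its tasks. A real system would use
            a trained regressor or policy network."""
            def predict(self, x):
                return float(np.sum(x ** 2))  # placeholder scoring only

        def build_state(uav_statuses, formation):
            # Steps (110)-(130): fuse per-UAV status with an encoding of
            # the current formation into one state vector. The status
            # fields are illustrative, not specified by the patent.
            status = np.concatenate([
                [s["battery"], s["task_progress"], *s["position"]]
                for s in uav_statuses
            ])
            return np.concatenate([status, formation])

        def predict_optimal_transition(model, state, candidates):
            # Step (140): the optimal transition is the candidate
            # formation with the lowest predicted total energy for all
            # UAVs in the group to complete their tasks.
            scores = [model.predict(np.concatenate([state, c])) for c in candidates]
            return candidates[int(np.argmin(scores))]

        # Step (150), illustrative: the controller UAV would instruct the
        # group to perform the chosen transition.
        statuses = [
            {"battery": 0.82, "task_progress": 0.4, "position": (0.0, 0.0, 50.0)},
            {"battery": 0.67, "task_progress": 0.4, "position": (5.0, 0.0, 50.0)},
        ]
        line = np.array([1.0, 0.0, 0.0])  # hypothetical formation encodings
        vee = np.array([0.0, 1.0, 0.0])
        state = build_state(statuses, line)
        best = predict_optimal_transition(EnergyModel(), state, [line, vee])

    In the patented method the model is trained so that its predictions reflect total group energy consumption; the placeholder scoring above exists only to make the example executable.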

    Adjusting alignment for microwave transmissions based on an RL model

    Publication number: US11894881B2

    Publication date: 2024-02-06

    Application number: US17432311

    Application date: 2019-02-20

    Abstract: A method is provided for adjusting alignment for microwave transmissions from a microwave transmitter to a microwave receiver based on a reinforcement learning (RL) model. The method comprises the steps of: obtaining a state space comprising an external state space and an internal state space, the external state space comprising at least one value of a parameter related to environmental conditions and the internal state space relating to the alignment of the microwave transmitter; determining an action in an action space, the action space comprising actions to adjust the alignment of the microwave transmitter; obtaining a measurement of path loss for a transmission from the microwave transmitter to the microwave receiver; determining a reward value based on the path loss, wherein an increase in path loss results in a reduced reward value; and adjusting the RL model based on the obtained state space, the determined action, and the determined reward value.
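    The abstract specifies the state space, action space, and path-loss-based reward but leaves the RL algorithm open. Below is a minimal tabular Q-learning sketch of that loop in Python, with the external state reduced to a single quantised rain value and the alignment to integer steps. measure_path_loss, the constants, and the hidden optimal angle are all illustrative assumptions, not taken from the patent.

        import random
        from collections import defaultdict

        ACTIONS = (-1, 0, 1)  # adjust alignment: step down / hold / step up

        def measure_path_loss(alignment, optimal):
            # Stand-in for a real path-loss measurement (dB): loss grows
            # as the transmitter drifts from the (hidden) optimal angle.
            return 90.0 + 2.0 * abs(alignment - optimal) + random.gauss(0.0, 0.3)

        q = defaultdict(float)        # tabular Q(state, action)
        alpha, gamma, epsilon = 0.1, 0.9, 0.2
        alignment, optimal = 0, 4     # internal state; optimum unknown to agent
        rain = 1                      # external state: quantised rain intensity

        state = (rain, alignment)
        for step in range(500):
            # Epsilon-greedy choice of an alignment-adjustment action.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])

            alignment += action
            path_loss = measure_path_loss(alignment, optimal)
            reward = -path_loss       # higher path loss => lower reward

            # Adjust the RL model from (state, action, reward), here via
            # a standard Q-learning update.
            next_state = (rain, alignment)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state

    The reward is simply the negative of the measured path loss, so any action that increases path loss reduces the reward, matching the claim; a deployed system would keep adapting the RL model online as the environmental conditions in the external state change.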