OFFLOADING METHOD, OF SATELLITE-TO-GROUND EDGE COMPUTING TASK, ASSISTED BY SATELLITE AND HIGH-ALTITUDE PLATFORM

    Publication No.: US20230156072A1

    Publication Date: 2023-05-18

    Application No.: US17628571

    Application Date: 2021-04-25

    CPC classification number: H04L67/10 H04B7/0456 H04B7/18504 H04B7/18513

    Abstract: An offloading method for a satellite-to-ground edge computing task, assisted by a satellite and a high-altitude platform, can offload a computing task of a ground user equipment (GUE) to a low-Earth-orbit satellite (LEO SAT) to meet the computing requirement of the GUE and to reduce latency and energy consumption. The method includes four main steps: 1. The GUE selects an associated high-altitude platform. 2. The GUE uses multiple-input multiple-output (MIMO) transmission to offload the computing task to the high-altitude platform. 3. The high-altitude platform may also use MIMO transmission to offload the computing task of the GUE to the LEO SAT. 4. The high-altitude platform and the LEO SAT cooperate to process the computing task of the GUE and reasonably allocate computing resources to reduce energy consumption. In MIMO edge computing, the GUE or the high-altitude platform uses the same time-domain and frequency-domain resources.
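The four steps above can be illustrated with a minimal sketch. This is not the patented method itself: the nearest-HAP association rule, the equal-finish-time task split, and the dynamic-power energy model E = k·f²·C (with hypothetical effective-capacitance constants `k_hap`, `k_leo` and CPU frequencies `f_hap`, `f_leo`) are common textbook assumptions standing in for the patent's actual channel and resource-allocation models.

```python
# Illustrative sketch (assumed models, not the patented algorithm):
# a GUE associates with a high-altitude platform (HAP), then the HAP
# splits the task's CPU cycles between itself and the LEO satellite.

def select_hap(gue_pos, hap_positions):
    """Step 1: associate with the nearest HAP (a proxy for the best MIMO link)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(hap_positions)), key=lambda i: dist2(gue_pos, hap_positions[i]))

def split_task(cycles, f_hap, f_leo, k_hap=1e-27, k_leo=1e-27):
    """Steps 3-4: split `cycles` CPU cycles between HAP and LEO SAT so that
    both finish at the same time (balancing latency), then report the
    energy under the assumed model E = k * f^2 * cycles."""
    c_hap = cycles * f_hap / (f_hap + f_leo)   # equal-finish-time split
    c_leo = cycles - c_hap
    latency = c_hap / f_hap                    # identical to c_leo / f_leo
    energy = k_hap * f_hap ** 2 * c_hap + k_leo * f_leo ** 2 * c_leo
    return c_hap, c_leo, latency, energy

hap_idx = select_hap((0, 0), [(5, 5), (1, 2), (9, 0)])
c_hap, c_leo, latency, energy = split_task(1e9, f_hap=2e9, f_leo=4e9)
```

Because the split is proportional to each node's CPU frequency, the faster LEO SAT receives the larger share of cycles and neither side idles while the other finishes.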

    INTENTION-DRIVEN REINFORCEMENT LEARNING-BASED PATH PLANNING METHOD

    Publication No.: US20240219923A1

    Publication Date: 2024-07-04

    Application No.: US17923114

    Application Date: 2021-12-13

    CPC classification number: G05D1/644 B63B79/40 G05D2101/15 G05D2109/30

    Abstract: The present invention discloses an intention-driven reinforcement learning-based path planning method, including the following steps: 1: acquiring, by a data collector, a state of a monitoring network; 2: selecting a steering angle of the data collector according to the positions of surrounding obstacles, sensor nodes, and the data collector; 3: selecting a speed of the data collector, a target node, and a next target node as an action of the data collector according to an ε-greedy policy; 4: determining, by the data collector, the next time slot according to the selected steering angle and speed; 5: obtaining rewards and penalties according to the intentions of the data collector and the sensor nodes, and updating a Q value; 6: repeating step 1 to step 5 until a termination state or a convergence condition is satisfied; and 7: selecting, by the data collector, the action with the maximum Q value in each time slot as the planning result, and generating an optimal path. The method provided in the present invention can complete data-collection path planning with a higher probability of success and performance closer to the intention.
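Steps 3 and 5 rely on the standard ε-greedy action selection and tabular Q-value update, which can be sketched as follows. The state and action encodings, the reward, and the hyperparameters (`eps`, `alpha`, `gamma`) are hypothetical placeholders; the patent's intention-shaped rewards and steering-angle/speed action space are not reproduced here.

```python
import random

# Sketch of the ε-greedy Q-learning core used in steps 3 and 5
# (generic tabular Q-learning; not the patent's exact model).

def epsilon_greedy(Q, state, actions, eps=0.1):
    """Step 3: explore with probability eps, else pick the max-Q action."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """Step 5: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Q = {}
actions = ["slow", "fast"]
# One toy transition: taking "fast" in state 0 earns reward 1.0.
q_update(Q, 0, "fast", reward=1.0, next_state=1, actions=actions)
```

After convergence (step 6), step 7 amounts to calling `epsilon_greedy` with `eps=0.0` in each time slot, so the collector always follows the maximum-Q action along the learned path.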
