RADAR-AIDED SINGLE IMAGE THREE-DIMENSIONAL DEPTH RECONSTRUCTION

    Publication Number: US20200286247A1

    Publication Date: 2020-09-10

    Application Number: US16809348

    Filing Date: 2020-03-04

    Abstract: Disclosed are techniques for radar-aided single-image three-dimensional (3D) depth reconstruction. In an aspect, at least one processor of an on-board computer of an ego vehicle receives, from a radar sensor of the ego vehicle, at least one radar image of an environment of the ego vehicle, receives, from a camera sensor of the ego vehicle, at least one camera image of the environment of the ego vehicle, and generates, using a convolutional neural network (CNN), a depth image of the environment of the ego vehicle based on the at least one radar image and the at least one camera image.
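
    The abstract above describes fusing a radar image with a camera image in a CNN to produce a depth image. The sketch below illustrates that general idea in PyTorch only; the two-branch encoder, the concatenation-based fusion, the layer sizes, and the name RadarCameraDepthNet are assumptions made for this example, not the patented architecture.

        # Illustrative sketch only (assumed architecture, not the patented design):
        # a two-branch CNN that fuses a radar image and a camera image into a
        # dense depth image.
        import torch
        import torch.nn as nn

        class RadarCameraDepthNet(nn.Module):
            def __init__(self):
                super().__init__()
                # Separate encoders for the camera image (3 RGB channels) and the
                # radar image (1 channel, e.g. range-azimuth intensity).
                self.camera_enc = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.radar_enc = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                # Decoder upsamples the fused features to a one-channel depth image.
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
                )

            def forward(self, camera_img, radar_img):
                fused = torch.cat([self.camera_enc(camera_img),
                                   self.radar_enc(radar_img)], dim=1)
                return self.decoder(fused)

        # Example: a 128x128 camera frame and a radar image projected to the same grid.
        depth = RadarCameraDepthNet()(torch.rand(1, 3, 128, 128),
                                      torch.rand(1, 1, 128, 128))
        print(depth.shape)  # torch.Size([1, 1, 128, 128])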

    THREE-DIMENSIONAL TARGET ESTIMATION USING KEYPOINTS

    Publication Number: US20230087261A1

    Publication Date: 2023-03-23

    Application Number: US17480016

    Filing Date: 2021-09-20

    Abstract: Systems and techniques are described for performing object detection and tracking. For example, a tracking object can obtain an image comprising a target object at least partially in contact with a surface. The tracking object can obtain a plurality of two-dimensional (2D) keypoints based on one or more features associated with one or more portions of the target object in contact with the surface in the image. The tracking object can obtain information associated with a contour of the surface. Based on the plurality of 2D keypoints and the information associated with the contour of the surface, the tracking object can determine a three-dimensional (3D) representation associated with the plurality of 2D keypoints.
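
    As a rough illustration of lifting 2D contact keypoints to 3D using information about the surface, the sketch below intersects camera viewing rays with a flat surface plane. The claimed method works with a general surface contour, so the planar surface, the intrinsics matrix K, and the helper lift_keypoints_to_plane are simplifying assumptions for this example.

        # Illustrative sketch only: back-project pixel keypoints of the contact
        # points onto a known surface, here simplified to a plane satisfying
        # n . X + d = 0 in the camera frame.
        import numpy as np

        def lift_keypoints_to_plane(keypoints_2d, K, plane_normal, plane_offset):
            """Intersect the viewing ray of each pixel keypoint with the plane."""
            K_inv = np.linalg.inv(K)
            points_3d = []
            for u, v in keypoints_2d:
                ray = K_inv @ np.array([u, v, 1.0])       # viewing ray direction
                t = -plane_offset / (plane_normal @ ray)  # ray-plane intersection
                points_3d.append(t * ray)                 # 3D point on the surface
            return np.array(points_3d)

        # Example: pinhole camera 1 m above a horizontal surface (y axis points down),
        # so surface points satisfy y - 1 = 0, i.e. n = (0, 1, 0), d = -1.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        pts = lift_keypoints_to_plane([(350, 300), (400, 320)], K,
                                      plane_normal=np.array([0.0, 1.0, 0.0]),
                                      plane_offset=-1.0)
        print(pts)  # 3D positions of the contact keypoints on the surface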

    EGO-VELOCITY ESTIMATION USING RADAR OR LIDAR BEAM STEERING

    Publication Number: US20220171069A1

    Publication Date: 2022-06-02

    Application Number: US17107421

    Filing Date: 2020-11-30

    Abstract: Methods, systems, computer-readable media, and apparatuses for radar or LIDAR measurement are presented. Some configurations include transmitting, via a transceiver, a first beam having a first frequency characteristic; calculating a distance between the transceiver and a moving object based on information from at least one reflection of the first beam; transmitting, via the transceiver, a second beam having a second frequency characteristic that is different than the first frequency characteristic, wherein the second beam is directed such that an axis of the second beam intersects a ground plane; and calculating an ego-velocity of the transceiver based on information from at least one reflection of the second beam. Applications relating to road vehicular (e.g., automobile) use are described.
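
    The ground-directed second beam enables ego-velocity estimation because the ground is stationary: the radial (Doppler) velocity measured on the ground return is the projection of the ego-velocity onto the beam axis. The sketch below inverts that projection under a straight-line-motion assumption; the function name and angle conventions are illustrative, not taken from the patent.

        # Illustrative sketch only: recover ego speed from the Doppler of a beam
        # steered toward the stationary ground plane.
        import math

        def ego_velocity_from_ground_beam(radial_velocity, depression_angle_deg,
                                          azimuth_angle_deg=0.0):
            """radial_velocity:      measured closing speed toward the ground patch (m/s)
            depression_angle_deg: beam tilt below the horizontal (degrees)
            azimuth_angle_deg:    beam yaw away from the direction of travel (degrees)
            """
            cos_total = (math.cos(math.radians(depression_angle_deg)) *
                         math.cos(math.radians(azimuth_angle_deg)))
            return radial_velocity / cos_total

        # Example: beam tilted 30 degrees toward the ground; the ground return shows
        # a radial speed of 12.99 m/s, so the ego speed is about 15 m/s.
        print(ego_velocity_from_ground_beam(12.99, 30.0))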

    RESOLUTION OF ELEVATION AMBIGUITY IN ONE-DIMENSIONAL RADAR PROCESSING

    Publication Number: US20200217950A1

    Publication Date: 2020-07-09

    Application Number: US16734779

    Filing Date: 2020-01-06

    Abstract: Systems and methods for resolving elevation ambiguity include acquiring, using a 1-D horizontal radar antenna array, a radar frame with range and azimuth information, and predicting a target elevation based on the frame by computing a depth map with a plurality of target depths assigned to corresponding azimuth-elevation pairs. Computing the depth map includes processing the radar frame with an encoder-decoder structured deep convolutional neural network (CNN). The CNN may be trained with a dataset including training radar frames acquired in a number of environments, and compensated ground truth depth maps associated with those environments. The compensated ground truth depth maps may be generated by subtracting a ground-depth from a corresponding ground truth depth map. The ground truth depth maps may be acquired with a 2-D range sensor, such as a LiDAR sensor, a 2-D radar sensor, and/or an IR sensor. The radar frame may also include Doppler data.
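
    The ground-depth compensation step can be illustrated under a flat-ground assumption: a ray below the horizon reaches the ground at a depth of roughly sensor_height / sin(|elevation|), and subtracting that expected ground depth from the ground truth leaves the residual the CNN is trained to predict. The sketch below is a simplified NumPy version of that idea; the grid shape, sensor height, and function name are assumptions made for this example.

        # Illustrative sketch only: subtract a flat-ground depth from a ground-truth
        # depth map indexed by (elevation, azimuth).
        import numpy as np

        def compensate_ground_depth(gt_depth, elevation_angles_rad, sensor_height):
            """gt_depth:             (num_elevations, num_azimuths) depth map from a
                                     2-D range sensor such as LiDAR
            elevation_angles_rad: per-row elevation angles (negative = below horizon)
            sensor_height:        sensor height above the ground plane (m)
            """
            ground_depth = np.zeros_like(gt_depth)
            below = elevation_angles_rad < 0
            ground_depth[below, :] = (
                sensor_height / np.sin(-elevation_angles_rad[below]))[:, None]
            return gt_depth - ground_depth

        # Example: 8 elevation rows spanning -10 to +10 degrees, 16 azimuth bins.
        elev = np.deg2rad(np.linspace(-10.0, 10.0, 8))
        gt = np.random.uniform(5.0, 50.0, size=(8, 16))
        print(compensate_ground_depth(gt, elev, sensor_height=1.5).shape)  # (8, 16)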

    RADAR-AIDED SINGLE IMAGE THREE-DIMENSIONAL DEPTH RECONSTRUCTION

    Publication Number: US20220180537A1

    Publication Date: 2022-06-09

    Application Number: US17678955

    Filing Date: 2022-02-23

    Abstract: Disclosed are techniques for radar-aided single-image three-dimensional (3D) depth reconstruction. In an aspect, at least one processor of an on-board computer of an ego vehicle receives, from a radar sensor of the ego vehicle, at least one radar image of an environment of the ego vehicle, receives, from a camera sensor of the ego vehicle, at least one camera image of the environment of the ego vehicle, receives, from a light detection and ranging (LiDAR) sensor of the ego vehicle, at least one LiDAR image of the environment of the ego vehicle, and generates a depth image of the environment of the ego vehicle based on the at least one radar image, the at least one LiDAR image, and the at least one camera image.
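
    The sketch below illustrates one simple way to combine the three modalities: radar, LiDAR, and camera images resampled to a common grid are stacked channel-wise and passed through a single encoder-decoder (early fusion). This is an assumed variant for illustration and not necessarily the fusion scheme claimed here; the name TriSensorDepthNet and the layer sizes are likewise made up.

        # Illustrative sketch only: early fusion of camera (3 channels), radar
        # (1 channel), and sparse LiDAR depth (1 channel) into one depth image.
        import torch
        import torch.nn as nn

        class TriSensorDepthNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, camera_img, radar_img, lidar_img):
                return self.net(torch.cat([camera_img, radar_img, lidar_img], dim=1))

        # Example with all three inputs resampled to a 128x128 grid.
        out = TriSensorDepthNet()(torch.rand(1, 3, 128, 128),
                                  torch.rand(1, 1, 128, 128),
                                  torch.rand(1, 1, 128, 128))
        print(out.shape)  # torch.Size([1, 1, 128, 128])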
