RADAR-CAMERA FUSION FOR VEHICLE NAVIGATION

    Publication number: US20240391494A1

    Publication date: 2024-11-28

    Application number: US18691634

    Application date: 2022-10-18

    Abstract: Systems and methods for navigating a host vehicle based on RADAR-camera fusion are disclosed. In one implementation, a system includes a processor configured to receive images acquired by a camera onboard the host vehicle; identify a representation of a target vehicle in one of the images; receive from a RADAR system an indicator of a range between the host vehicle and the target vehicle; based on analysis of the images, identify in the image a ground intersection point associated with the target vehicle and a road surface; and determine an elevation value for the road surface based on the indicator of the range between the host vehicle and the target vehicle, the determined ground intersection point, and an angle of inclination between an optical axis of the camera and a ray directed toward a location of the ground intersection point.
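The elevation determination described in this abstract can be sketched as simple trigonometry; this is an illustrative reading, not the patent's implementation, and the function name, the camera-height parameter, and the assumption that the ray's depression angle is the camera pitch plus the measured inclination are all ours:

```python
import math

def road_elevation(radar_range_m, camera_height_m, camera_pitch_rad, ray_inclination_rad):
    """Hedged sketch: estimate the road-surface elevation at the target
    vehicle's ground intersection point.

    The angle of inclination between the camera's optical axis and the ray
    toward the ground point, combined with the camera's pitch, gives the ray's
    depression below horizontal; the RADAR range then fixes the vertical drop
    along that ray, and the camera's mounting height converts the drop into an
    elevation relative to the host vehicle's road plane.
    """
    depression = camera_pitch_rad + ray_inclination_rad  # assumed total downward angle of the ray
    drop = radar_range_m * math.sin(depression)          # vertical drop from camera to ground point
    return camera_height_m - drop                        # elevation relative to the host's road plane
```

On a flat road the ground point sits exactly one camera height below the camera, so the returned elevation is zero, which is a quick sanity check on the geometry.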

    Navigation based on radar-cued visual imaging

    Publication number: US10690770B2

    Publication date: 2020-06-23

    Application number: US16375554

    Application date: 2019-04-04

    Abstract: A navigation system for a vehicle may include at least one image capture device configured to acquire a plurality of images of an environment of a vehicle and a radar sensor to detect an object in the environment of the vehicle and to provide an output including range information indicative of at least one of a range or range rate between the vehicle and the object. The system may also include at least one processing device programmed to: receive the plurality of images from the at least one image capture device; receive the output from the radar sensor; determine, for each of a plurality of image segments in a first image, from among the plurality of images, and corresponding image segments in a second image, from among the plurality of images, an indicator of optical flow; use range information determined based on the output of the radar sensor together with the indicators of optical flow determined for each of the plurality of image segments in the first image and the corresponding image segments in the second image to calculate for each of a plurality of imaged regions at least one value indicative of a focus of expansion; identify a target object region, including at least a subset of the plurality of imaged regions that share a substantially similar focus of expansion; and cause a system response based on the identified target object region.
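The focus-of-expansion value per imaged region can be illustrated with a standard least-squares construction: each optical-flow vector should point radially away from the FOE, so the line through each segment's pixel along its flow vector should pass through it. This is a generic sketch of that idea, not the patent's method, and the function name and input layout are assumptions:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Hedged sketch: least-squares focus of expansion for one imaged region.

    For each segment center p_i with optical-flow vector f_i, the FOE (x, y)
    should satisfy the line constraint f_y*(x - p_x) - f_x*(y - p_y) = 0,
    i.e. the flow is radial about the FOE. Stacking one constraint per
    segment gives an overdetermined linear system solved by least squares.
    """
    p = np.asarray(points, dtype=float)   # (N, 2) segment centers in pixels
    f = np.asarray(flows, dtype=float)    # (N, 2) optical-flow vectors
    A = np.stack([f[:, 1], -f[:, 0]], axis=1)     # [f_y, -f_x] per segment
    b = f[:, 1] * p[:, 0] - f[:, 0] * p[:, 1]     # f_y*p_x - f_x*p_y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x, y) in image coordinates
```

Regions whose estimated FOEs cluster closely would then be candidates for grouping into a single target object region, as the abstract describes.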

    LIDAR and REM localization

    Invention grant

    Publication number: US11573090B2

    Publication date: 2023-02-07

    Application number: US17809632

    Application date: 2022-06-29

    Abstract: A navigation system for a host vehicle may include a processor programmed to: receive, from an entity remotely located relative to the host vehicle, a sparse map associated with at least one road segment to be traversed by the host vehicle; receive point cloud information from a LIDAR system onboard the host vehicle, the point cloud information being representative of distances to various objects in an environment of the host vehicle; compare the received point cloud information with at least one of a plurality of mapped navigational landmarks in the sparse map to provide a LIDAR-based localization of the host vehicle relative to at least one target trajectory; determine at least one navigational action for the host vehicle based on the LIDAR-based localization of the host vehicle relative to the at least one target trajectory; and cause the at least one navigational action to be taken by the host vehicle.
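The comparison step, matching LIDAR observations of landmarks against their mapped positions to localize the host relative to a target trajectory, can be illustrated with a minimal translation-only correction; a real implementation would also solve for heading (e.g. one ICP iteration), and everything here, including the name and the 2D landmark representation, is an assumption:

```python
import numpy as np

def localize_against_landmarks(observed_xy, mapped_xy):
    """Hedged sketch: given LIDAR-derived landmark positions (already
    transformed into map coordinates using the current pose estimate) and the
    corresponding mapped landmark positions from the sparse map, estimate the
    residual translation of the pose as the mean landmark offset.
    """
    obs = np.asarray(observed_xy, dtype=float)     # (N, 2) observed landmarks
    mapped = np.asarray(mapped_xy, dtype=float)    # (N, 2) mapped landmarks
    correction = (mapped - obs).mean(axis=0)       # average residual per axis
    return correction                              # add to the pose estimate
```

The corrected pose then gives the host's lateral and longitudinal offset from the target trajectory, from which a navigational action can be determined.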

    Navigation based on radar-cued visual imaging

    Publication number: US10274598B2

    Publication date: 2019-04-30

    Application number: US15597148

    Application date: 2017-05-16

    Abstract: A navigation system for a vehicle may include at least one image capture device configured to acquire a plurality of images of an environment of a vehicle and a radar sensor to detect an object in the environment of the vehicle and to provide an output including range information indicative of at least one of a range or range rate between the vehicle and the object. The system may also include at least one processing device programmed to: receive the plurality of images from the at least one image capture device; receive the output from the radar sensor; determine, for each of a plurality of image segments in a first image, from among the plurality of images, and corresponding image segments in a second image, from among the plurality of images, an indicator of optical flow; use range information determined based on the output of the radar sensor together with the indicators of optical flow determined for each of the plurality of image segments in the first image and the corresponding image segments in the second image to calculate for each of a plurality of imaged regions at least one value indicative of a focus of expansion; identify a target object region, including at least a subset of the plurality of imaged regions that share a substantially similar focus of expansion; and cause a system response based on the identified target object region.

    Vehicle navigation based on aligned image and LIDAR information

    Publication number: US11953599B2

    Publication date: 2024-04-09

    Application number: US16478994

    Application date: 2018-01-25

    Abstract: Systems and methods are provided for navigating an autonomous vehicle. In one implementation, a navigational system for a host vehicle may include at least one processor programmed to: receive a stream of images captured by a camera onboard the host vehicle, wherein the captured images are representative of an environment surrounding the host vehicle; and receive an output of a LIDAR onboard the host vehicle, wherein the output of the LIDAR is representative of a plurality of laser reflections from at least a portion of the environment surrounding the host vehicle. The at least one processor may also be configured to determine at least one indicator of relative alignment between the output of the LIDAR and at least one image captured by the camera; attribute LIDAR reflection information to one or more objects identified in the at least one image based on the at least one indicator of the relative alignment between the output of the LIDAR and the at least one image captured by the camera; and use the attributed LIDAR reflection information and the one or more objects identified in the at least one image to determine at least one navigational characteristic associated with the host vehicle.
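The attribution step, assigning LIDAR reflections to objects identified in the image once relative alignment is known, is commonly done by projecting the returns through a pinhole camera model and testing them against detected bounding boxes. The sketch below assumes that standard model; the function name, the box format, and the first-match assignment rule are illustrative, not from the patent:

```python
import numpy as np

def attribute_lidar_to_boxes(lidar_xyz, K, R, t, boxes):
    """Hedged sketch: project LIDAR returns into the image using an assumed
    pinhole model (intrinsics K, LIDAR-to-camera alignment [R|t]), then
    attribute each return to the first detected object whose bounding box
    (x_min, y_min, x_max, y_max) contains its projection.
    """
    pts_cam = (R @ np.asarray(lidar_xyz, dtype=float).T).T + t  # LIDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0                                # keep points ahead of the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                               # perspective divide to pixels
    assignments = np.full(len(pts_cam), -1)                     # -1 = no object matched
    for j, (x0, y0, x1, y1) in enumerate(boxes):
        inside = (in_front & (uv[:, 0] >= x0) & (uv[:, 0] <= x1)
                  & (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
        assignments[inside & (assignments == -1)] = j
    return assignments  # index of the matched box per LIDAR point
```

Per-object range statistics over the attributed returns would then supply the navigational characteristic (e.g. distance to a lead vehicle) that the abstract describes.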
