PARALLEL PROCESSING OF VEHICLE PATH PLANNING SUITABLE FOR PARKING

    Publication Number: US20220404829A1

    Publication Date: 2022-12-22

    Application Number: US17352777

    Application Date: 2021-06-21

    Abstract: To determine a path through a pose configuration space, trajectories of poses may be evaluated in parallel based at least on translating the trajectories along at least one axis of the pose configuration space (e.g., an orientation axis). A trajectory may include at least a portion of a turn having a fixed turn radius. Turns or turn portions that share the same turn radius and initial orientation can be shifted along the orientation axis and processed in parallel, as they are translated copies of each other with different starting points. Trajectories may be evaluated based at least on processing the variables used to evaluate reachability as bit vectors, with threads effectively performing large vector operations in synchronization. A parallel reduction pattern may be used to account for dependencies that may exist between sections of a trajectory when evaluating reachability, allowing the sections to be processed in parallel.
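
    As a rough illustration of the bit-vector and parallel-reduction ideas above, the sketch below treats each bit of a mask as one translated copy of a trajectory (one starting orientation along the orientation axis) and resolves the dependency between consecutive trajectory sections with a prefix-AND scan. The mask layout, function name, and serial loop are illustrative assumptions; on a GPU, each scan step would correspond to one synchronized pass of parallel threads.

```python
def prefix_and_scan(section_masks: list[int]) -> list[int]:
    """Hillis-Steele style inclusive scan with bitwise AND as the operator.

    Bit j of section_masks[i] means translated copy j is collision-free in
    section i; bit j of the result means copy j is free through sections 0..i.
    """
    masks = list(section_masks)
    step = 1
    while step < len(masks):
        masks = [
            masks[i] & masks[i - step] if i >= step else masks[i]
            for i in range(len(masks))
        ]
        step *= 2
    return masks


# Example: 8 translated trajectory copies (8 bits), 3 consecutive turn sections.
free_per_section = [0b11111011, 0b11101111, 0b10111011]
reachable = prefix_and_scan(free_per_section)
print([format(m, "08b") for m in reachable])  # last entry: copies that complete the turn
```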

    GROUND TRUTH DATA GENERATION FOR DEEP NEURAL NETWORK PERCEPTION IN AUTONOMOUS DRIVING APPLICATIONS

    Publication Number: US20220277193A1

    Publication Date: 2022-09-01

    Application Number: US17187350

    Application Date: 2021-02-26

    Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
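
    A minimal sketch of the synchronization step described above: frames from two sensors are paired by nearest timestamp within a tolerance, so that each annotation scene groups sensor data representing a similar world state. The timestamps, tolerance, and function name are illustrative assumptions rather than the pipeline's actual interface.

```python
from bisect import bisect_left


def align_frames(reference_ts, other_ts, tolerance_s=0.05):
    """Pair each reference timestamp with the nearest timestamp from another
    sensor, dropping pairs that are farther apart than tolerance_s seconds."""
    other_sorted = sorted(other_ts)
    pairs = []
    for t in reference_ts:
        i = bisect_left(other_sorted, t)
        candidates = [c for c in (i - 1, i) if 0 <= c < len(other_sorted)]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(other_sorted[c] - t))
        if abs(other_sorted[best] - t) <= tolerance_s:
            pairs.append((t, other_sorted[best]))
    return pairs


# Example: 10 Hz camera frames aligned against LiDAR sweeps with slight offsets.
camera_ts = [0.00, 0.10, 0.20, 0.30]
lidar_ts = [0.02, 0.11, 0.24, 0.31]
print(align_frames(camera_ts, lidar_ts))
```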

    LANE MASK GENERATION FOR AUTONOMOUS MACHINE APPLICATIONS

    Publication Number: US20210241005A1

    Publication Date: 2021-08-05

    Application Number: US17234487

    Application Date: 2021-04-19

    Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
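
    To make the lane mask idea above concrete, the sketch below rasterizes lane polygons into a grid of lane IDs using a basic point-in-polygon test at each cell center. The grid resolution, polygon format, and function names are illustrative assumptions, not the patented method.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def rasterize_lane_mask(lane_polygons, rows, cols, cell_size=1.0):
    """Build a rows x cols grid where each cell holds a lane ID or None."""
    mask = [[None] * cols for _ in range(rows)]
    for lane_id, polygon in enumerate(lane_polygons):
        for r in range(rows):
            for c in range(cols):
                # Sample the cell center in metric coordinates.
                x, y = (c + 0.5) * cell_size, (r + 0.5) * cell_size
                if point_in_polygon(x, y, polygon):
                    mask[r][c] = lane_id
    return mask


# Example: two adjacent 3 m wide, 10 m long lanes on a 1 m grid.
lanes = [
    [(0, 0), (3, 0), (3, 10), (0, 10)],   # lane 0
    [(3, 0), (6, 0), (6, 10), (3, 10)],   # lane 1
]
mask = rasterize_lane_mask(lanes, rows=10, cols=6)
print(mask[0])  # first row of lane IDs across the driving surface
```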

    OBJECT FENCE GENERATION FOR LANE ASSIGNMENT IN AUTONOMOUS MACHINE APPLICATIONS

    Publication Number: US20210241004A1

    Publication Date: 2021-08-05

    Application Number: US17234475

    Application Date: 2021-04-19

    Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
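
    A simplified sketch of the overlap scoring described above follows: an object fence is approximated by the lane-mask cells it covers, the overlap ratio with each lane is computed, and every lane whose ratio meets a threshold is assigned to the object. The grid representation, 0.2 threshold, and function names are illustrative assumptions.

```python
from collections import Counter


def lane_overlap_ratios(lane_mask, fence_cells):
    """Return {lane_id: fraction of fence cells that fall inside that lane}."""
    counts = Counter(lane_mask[r][c] for r, c in fence_cells)
    total = len(fence_cells)
    return {lane: n / total for lane, n in counts.items() if lane is not None}


def assign_lanes(lane_mask, fence_cells, min_ratio=0.2):
    """Assign the object to every lane whose overlap ratio meets the threshold."""
    ratios = lane_overlap_ratios(lane_mask, fence_cells)
    return sorted(lane for lane, ratio in ratios.items() if ratio >= min_ratio)


# Example: a small lane mask with lanes 0 and 1; the object fence straddles both.
lane_mask = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
fence = [(0, 1), (0, 2), (1, 1), (1, 2)]  # half the cells in each lane
print(assign_lanes(lane_mask, fence))  # -> [0, 1]
```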

    OBJECT DETECTION AND CLASSIFICATION USING LIDAR RANGE IMAGES FOR AUTONOMOUS MACHINE APPLICATIONS

    Publication Number: US20210063578A1

    Publication Date: 2021-03-04

    Application Number: US17005788

    Application Date: 2020-08-28

    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
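
    As a rough sketch of the annotation propagation described above, the snippet below projects LiDAR points into the camera image with an assumed pinhole model and lets points that land inside a labeled 2D box inherit that label. The calibration matrices, box format, and function name are assumptions for illustration and not the patented cross-injection procedure itself.

```python
import numpy as np


def propagate_labels(points_lidar, boxes_2d, T_cam_from_lidar, K):
    """points_lidar: (N, 3) xyz; boxes_2d: list of (label, x1, y1, x2, y2);
    T_cam_from_lidar: (4, 4) extrinsics; K: (3, 3) camera intrinsics.
    Returns one label (or None) per LiDAR point."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]          # points in the camera frame
    labels = [None] * n
    for i, (x, y, z) in enumerate(pts_cam):
        if z <= 0:                                           # behind the camera
            continue
        u, v = (K @ np.array([x, y, z]))[:2] / z             # pixel coordinates
        for label, x1, y1, x2, y2 in boxes_2d:
            if x1 <= u <= x2 and y1 <= v <= y2:
                labels[i] = label
                break
    return labels
```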

    MAP CREATION AND LOCALIZATION FOR AUTONOMOUS DRIVING APPLICATIONS

    Publication Number: US20210063199A1

    Publication Date: 2021-03-04

    Application Number: US17008100

    Application Date: 2020-08-31

    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
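
    The per-modality fusion described in the last sentence can be sketched as below: each sensor modality contributes its own localization result with a confidence weight, and the results are combined into one fused pose. The confidence-weighted average, with headings averaged as unit vectors, is an illustrative choice rather than the fusion method claimed here.

```python
import math


def fuse_localizations(results):
    """results: list of (x, y, yaw_radians, weight). Returns fused (x, y, yaw)."""
    total_w = sum(w for *_, w in results)
    x = sum(r[0] * r[3] for r in results) / total_w
    y = sum(r[1] * r[3] for r in results) / total_w
    # Average headings via unit vectors so angles near +/-pi fuse correctly.
    sin_sum = sum(math.sin(r[2]) * r[3] for r in results)
    cos_sum = sum(math.cos(r[2]) * r[3] for r in results)
    return x, y, math.atan2(sin_sum, cos_sum)


# Example: camera-, LiDAR-, and radar-based results with different confidences.
print(fuse_localizations([
    (10.2, 4.9, 0.02, 0.5),   # camera-based localization result
    (10.0, 5.1, -0.01, 0.3),  # LiDAR-based localization result
    (10.4, 5.0, 0.00, 0.2),   # radar-based localization result
]))
```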

    LEVERAGING OBSTACLE AND LANE DETECTIONS TO DETERMINE LANE ASSIGNMENTS FOR OBJECTS IN AN ENVIRONMENT

    Publication Number: US20210042535A1

    Publication Date: 2021-02-11

    Application Number: US16535440

    Application Date: 2019-08-08

    Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
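
    To illustrate the final sentence above, the toy filter below uses the lane assignments to pick the closest object sharing the ego-vehicle's lane, which a planner could then consider when determining a path or trajectory. The data layout, field names, and selection rule are illustrative assumptions.

```python
def closest_in_lane_object(objects, ego_lane_id):
    """objects: list of dicts with 'lane_ids' (from the overlap scoring above)
    and 'distance_m' (longitudinal distance ahead of the ego-vehicle).
    Returns the nearest object sharing the ego lane, or None."""
    in_lane = [o for o in objects if ego_lane_id in o["lane_ids"]]
    return min(in_lane, key=lambda o: o["distance_m"]) if in_lane else None


# Example: only objects assigned to lane 1 matter for an ego-vehicle in lane 1.
objects = [
    {"id": "car_a", "lane_ids": [0], "distance_m": 12.0},
    {"id": "car_b", "lane_ids": [1, 2], "distance_m": 30.5},
    {"id": "truck_c", "lane_ids": [1], "distance_m": 48.0},
]
print(closest_in_lane_object(objects, ego_lane_id=1)["id"])  # -> car_b
```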

    INTERSECTION POSE DETECTION IN AUTONOMOUS MACHINE APPLICATIONS

    Publication Number: US20200341466A1

    Publication Date: 2020-10-29

    Application Number: US16848102

    Application Date: 2020-04-14

    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
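
    A small sketch of one possible decoding step mentioned above follows: key points are recovered from a predicted heat map by keeping local peaks above a confidence threshold. The window size, threshold, and output format are illustrative assumptions rather than the patented post-processing.

```python
import numpy as np


def decode_keypoints(heatmap, threshold=0.5, window=3):
    """Return (row, col, score) for every local maximum above threshold."""
    half = window // 2
    keypoints = []
    rows, cols = heatmap.shape
    for r in range(rows):
        for c in range(cols):
            score = heatmap[r, c]
            if score < threshold:
                continue
            patch = heatmap[max(0, r - half):r + half + 1,
                            max(0, c - half):c + half + 1]
            if score >= patch.max():              # local peak within the window
                keypoints.append((r, c, float(score)))
    return keypoints


# Example: a tiny heat map with two confident peaks.
hm = np.zeros((6, 6))
hm[1, 2] = 0.9
hm[4, 4] = 0.7
print(decode_keypoints(hm))  # -> [(1, 2, 0.9), (4, 4, 0.7)]
```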
