Abstract:
A method of calibrating multiple vehicle-based image capture devices of a vehicle. An image is captured by at least one image capture device. A reference object is identified in the captured image. The reference object has known world coordinates. Known features of the vehicle are extracted from the captured image. A location and orientation of the vehicle in world coordinates is determined relative to the reference object. Each of the multiple image capture devices is calibrated utilizing intrinsic and extrinsic parameters of the at least one image capture device as a function of the location and orientation of the vehicle in world coordinates.
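The pose chain this abstract describes (reference object → vehicle → camera) amounts to rigid-transform composition. Below is a minimal 2D (SE(2)) sketch of that idea, not the patented method itself: the function names, the planar restriction, and the pose representation `(x, y, theta)` are all assumptions; a real calibration would work in SE(3) and also estimate the intrinsic parameters.

```python
import math

def compose(a, b):
    """Compose two SE(2) poses (x, y, theta): pose b expressed in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of an SE(2) pose, so compose(p, invert(p)) is the identity."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-(x * c + y * s), -(-x * s + y * c), -t)

def camera_world_pose(ref_world, ref_in_vehicle, cam_mount):
    """World pose of a camera: start from the reference object's known world
    pose, recover the vehicle's world pose from where the reference appears
    in the vehicle frame, then apply the camera's mounting pose."""
    vehicle_world = compose(ref_world, invert(ref_in_vehicle))
    return compose(vehicle_world, cam_mount)
```

For example, a reference at world pose (10, 5, 0) seen 2 m ahead of the vehicle puts the vehicle at (8, 5, 0); a camera mounted 1 m forward then sits at (9, 5, 0) in world coordinates.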
Abstract:
A system and method for registering range images from objects detected by multiple LiDAR sensors on a vehicle. The method includes aligning frames of data from at least two LiDAR sensors having overlapping fields-of-view in a sensor signal fusion operation so as to track objects detected by the sensors. The method defines a transformation value for at least one of the LiDAR sensors that identifies an orientation angle and position of the sensor, and provides target scan points from the objects detected by the sensors, where the target scan points for each sensor provide a separate target point map. The method projects the target point map from the at least one sensor to another one of the LiDAR sensors using a current transformation value to overlap the target scan points from the sensors.
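Projecting one sensor's target point map into another sensor's frame with a current transformation value reduces, in 2D, to applying a rigid transform and scoring the overlap. A minimal sketch under that simplification (the names and the planar restriction are assumptions; real registration would iterate the transform, e.g. ICP-style, to drive the error down):

```python
import math

def project_points(points, transform):
    """Project scan points from one LiDAR frame into another using a 2D rigid
    transform (theta, tx, ty) standing in for the sensor's orientation angle
    and position."""
    theta, tx, ty = transform
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def alignment_error(points_a, points_b):
    """Mean squared distance between corresponding points from two sensors;
    a registration loop would adjust the transform to minimize this."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(points_a, points_b)) / len(points_a)
```

With the correct transform, projected target scan points from both sensors overlap and the error approaches zero.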
Abstract:
A system and method for providing target selection and threat assessment for vehicle collision avoidance purposes that employ probability analysis of radar scan returns. The system determines a travel path of a host vehicle and provides a radar signal transmitted from a sensor on the host vehicle. The system receives multiple scan return points from detected objects, processes the scan return points to generate a distribution signal defining a contour of each detected object, and processes the scan return points to provide a position, a translation velocity and an angular velocity of each detected object. The system selects the objects that may enter the travel path of the host vehicle, and makes a threat assessment of those objects by comparing the number of scan return points that indicate that the object may enter the travel path to the total number of scan return points received for that object.
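The threat assessment described above is a ratio: scan return points predicted to enter the travel path versus all scan points received for the object. A minimal sketch, assuming predicted point positions are given as (longitudinal, lateral) offsets relative to the host path and that the path is a straight corridor of fixed half-width (both simplifications not stated in the abstract):

```python
def threat_probability(predicted_points, path_half_width):
    """Fraction of an object's predicted scan-point positions that fall inside
    the host vehicle's travel corridor (|lateral offset| <= half-width).
    Each point is (longitudinal, lateral) relative to the host path."""
    in_path = sum(1 for _, lateral in predicted_points
                  if abs(lateral) <= path_half_width)
    return in_path / len(predicted_points)
```

An object whose predicted points mostly fall inside the corridor gets a probability near 1 and would be flagged as a threat; one with a low ratio would not.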
Abstract:
A method for calculating a virtual target path around a target object that includes providing scan points identifying detected objects and separating the scan points into target object scan points and other object scan points. The method identifies a closest scan point from the target object scan points and identifies a path point that is a predetermined safe distance from the closest scan point. The method determines a straight target line adjacent to the target object that goes through the path point, and determines a distance between the target line and each of the other objects and determines whether all of the distances are greater than a predetermined threshold distance. The method identifies curve points for each other object whose distance is less than the predetermined threshold distance, and identifies a curve path that connects the curve points to be the virtual target path using a quadratic polynomial function.
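Two steps of this method lend themselves to a short sketch: offsetting a path point a safe distance from the closest scan point, and fitting the quadratic virtual path through the curve points. The code below is an illustration only; the direction of the safe-distance offset (here, toward the host vehicle) and the use of exact Lagrange interpolation through three points are assumptions, since the abstract does not fix either.

```python
import math

def path_point(target_points, host, safe_distance):
    """Find the closest target scan point to the host, then return a path
    point a safe distance from it, offset toward the host (one plausible
    reading of the method)."""
    cx, cy = min(target_points,
                 key=lambda p: math.hypot(p[0] - host[0], p[1] - host[1]))
    d = math.hypot(cx - host[0], cy - host[1])
    scale = (d - safe_distance) / d  # assumes d > safe_distance
    return (host[0] + (cx - host[0]) * scale, host[1] + (cy - host[1]) * scale)

def virtual_path(curve_points):
    """Quadratic y(x) through three curve points (Lagrange form), standing in
    for the abstract's quadratic-polynomial virtual target path."""
    (x0, y0), (x1, y1), (x2, y2) = curve_points
    def y(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return y
```

With more than three curve points a least-squares quadratic fit would replace the exact interpolation.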
Abstract:
An exemplary cruise control system includes an application that integrates curvature speed control, speed limit control, and adaptive speed control and generates an optimized speed profile that is used to control the vehicle.
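One common way to integrate curvature speed control, speed limit control, and adaptive speed control into a single profile is a point-wise minimum of the three targets; the sketch below assumes that integration strategy (the abstract does not specify it) and a standard lateral-acceleration bound for the curvature speed, v = sqrt(a_lat / |kappa|).

```python
import math

def curvature_speed(kappa, a_lat_max=2.0):
    """Speed cap from road curvature kappa (1/m) under a lateral-acceleration
    bound a_lat_max (m/s^2): v = sqrt(a_lat_max / |kappa|). Straight road
    (kappa == 0) imposes no cap."""
    return math.inf if kappa == 0 else math.sqrt(a_lat_max / abs(kappa))

def optimized_profile(curvatures, speed_limit, adaptive_cap):
    """Optimized speed profile along the horizon: at each point, the minimum
    of the curvature cap, the posted speed limit, and the adaptive (lead
    vehicle) cap."""
    return [min(curvature_speed(k), speed_limit, adaptive_cap)
            for k in curvatures]
```

For example, on a straight stretch the adaptive cap governs, while in a curve with kappa = 0.02 1/m the curvature cap of 10 m/s takes over.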
Abstract:
A system and method for fusing the outputs from multiple LiDAR sensors on a vehicle that includes cueing the fusion process in response to an object being detected by a radar sensor and/or a vision system. The method includes providing object files for objects detected by the LiDAR sensors at a previous sample time, where the object files identify the position, orientation and velocity of the detected objects. The method projects object models in the object files from the previous sample time to provide predicted object models. The method also includes receiving a plurality of scan returns from objects detected in the field-of-view of the sensors at a current sample time and constructing a point cloud from the scan returns. The method then segments the scan points in the point cloud into predicted scan clusters, where each cluster identifies an object detected by the sensors.
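The projection and segmentation steps above can be sketched as constant-velocity prediction followed by gated nearest-neighbor clustering. This is a simplified stand-in, not the patented fusion: the object-file layout `(x, y, vx, vy)`, the constant-velocity model, and the fixed gate distance are all assumptions.

```python
import math

def predict(object_files, dt):
    """Project object models (x, y, vx, vy) from the previous sample time
    forward by dt under a constant-velocity assumption."""
    return [(x + vx * dt, y + vy * dt, vx, vy) for x, y, vx, vy in object_files]

def segment(point_cloud, predicted, gate):
    """Segment scan points into predicted scan clusters: each point joins the
    nearest predicted object model; points farther than `gate` from every
    model are left unassigned."""
    clusters = [[] for _ in predicted]
    for px, py in point_cloud:
        dists = [math.hypot(px - ox, py - oy) for ox, oy, _, _ in predicted]
        i = min(range(len(predicted)), key=dists.__getitem__)
        if dists[i] <= gate:
            clusters[i].append((px, py))
    return clusters
```

Unassigned points would then seed new object files, and each cluster updates the position, orientation and velocity of its object.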
Abstract:
A method for crowd-sourcing lane line map data for a vehicle includes receiving a plurality of observations. The method also includes classifying the plurality of observations into a plurality of observation categories. Each of the plurality of observation categories includes at least one of the plurality of observations. The method also includes determining a plurality of aligned point clouds based at least in part on the plurality of observations. One of the plurality of aligned point clouds corresponds to each of the plurality of observation categories. The method also includes determining a plurality of lane line maps based at least in part on the plurality of aligned point clouds. One of the plurality of lane line maps corresponds to each of the plurality of aligned point clouds. The method also includes updating a map database based at least in part on the plurality of lane line maps.
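The classify-then-align pipeline can be sketched with a crude substitute for each stage: bucketing observations by a category label, then "aligning" each category's point cloud by averaging lateral offsets in longitudinal bins. The observation schema (`category`, `points` keys) and the binning approach are assumptions; production alignment would use proper point-cloud registration.

```python
from collections import defaultdict

def classify(observations):
    """Bucket observations into categories (e.g. 'solid-left', 'dashed-right')
    and pool each category's lane-line points."""
    buckets = defaultdict(list)
    for obs in observations:
        buckets[obs["category"]].extend(obs["points"])
    return buckets

def lane_line_map(points, bin_size=1.0):
    """Crude aligned point cloud -> lane line map: average the lateral (y)
    offsets of all crowd-sourced points falling in the same longitudinal
    (x) bin, returned in order along the road."""
    bins = defaultdict(list)
    for x, y in points:
        bins[round(x / bin_size)].append(y)
    return sorted((k * bin_size, sum(v) / len(v)) for k, v in bins.items())
```

Each category's resulting polyline would then be written into the map database as one lane line.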
Abstract:
A method for validating autonomous vehicle performance using nearby traffic patterns includes receiving remote vehicle data. The remote vehicle data includes at least one remote-vehicle motion parameter about a movement of a plurality of remote vehicles during a predetermined time interval. The method further includes determining a traffic pattern of the plurality of remote vehicles using the at least one remote-vehicle motion parameter. The method includes determining a similarity between the traffic pattern of the plurality of remote vehicles and movements of a host vehicle, and determining whether that similarity is less than a predetermined threshold. When the similarity is less than the predetermined threshold, the method includes commanding the host vehicle to adjust its movements to match the traffic pattern of the plurality of remote vehicles.
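A minimal sketch of the compare-and-command logic, assuming the traffic pattern is summarized by the mean remote-vehicle speed and similarity is a normalized speed difference; the abstract does not define either the motion parameter or the similarity measure, so both choices here are illustrative.

```python
def traffic_similarity(remote_speeds, host_speed):
    """Similarity in [0, 1] between the host's speed and the traffic pattern,
    here summarized as the mean remote-vehicle speed (an assumption)."""
    mean = sum(remote_speeds) / len(remote_speeds)
    return 1.0 - min(1.0, abs(host_speed - mean) / max(mean, 1e-9))

def command(host_speed, remote_speeds, threshold=0.8):
    """If similarity falls below the threshold, command the host to match the
    traffic pattern; otherwise leave its speed unchanged."""
    mean = sum(remote_speeds) / len(remote_speeds)
    if traffic_similarity(remote_speeds, host_speed) < threshold:
        return mean
    return host_speed
```

A host moving far slower than surrounding traffic would be commanded up to the traffic's mean speed; one already moving with the flow is left alone.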
Abstract:
A perception system is adapted to receive visual data from a camera and includes a controller having a processor and tangible, non-transitory memory on which instructions are recorded. A subsampling module, an object detection module and an attention module are each selectively executable by the controller. The controller is configured to sample an input image from the visual data to generate a rescaled whole image frame, via the subsampling module. The controller is configured to extract feature data from the rescaled whole image frame, via the object detection module. A region of interest in the rescaled whole image frame is identified, based on an output of the attention module. The controller is configured to generate a first image based on the rescaled whole image frame and a second image based on the region of interest, the second image having a higher resolution than the first image.
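The two-image output — a rescaled whole frame plus a full-resolution region of interest — can be sketched with plain nested lists as images. Decimation-by-striding for the subsampling module is an assumption (real rescaling would filter before downsampling), and the ROI here is a given rectangle rather than the attention module's learned output.

```python
def subsample(image, factor):
    """Subsampling module (sketch): rescale the whole frame by keeping every
    `factor`-th row and column."""
    return [row[::factor] for row in image[::factor]]

def crop_roi(image, top, left, height, width):
    """Second image (sketch): a full-resolution crop of the region of
    interest identified by the attention module."""
    return [row[left:left + width] for row in image[top:top + height]]
```

The object detector then sees the low-resolution whole scene for context and the high-resolution crop for fine detail in the region that matters.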
Abstract:
A method for determining a lane centerline includes detecting a remote vehicle ahead of a host vehicle, determining a trajectory of the remote vehicle that is ahead of the host vehicle, extracting features of the trajectory of the remote vehicle to generate a trajectory feature vector, and classifying the trajectory of the remote vehicle using the trajectory feature vector to determine whether the trajectory of the remote vehicle includes a lane change. The method further includes determining a centerline of the current lane using the trajectory of the remote vehicle that does not include the lane change, and commanding the host vehicle to move autonomously along the centerline of the current lane to maintain the host vehicle in the current lane.
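The feature-extract/classify/centerline steps can be sketched with a deliberately tiny feature vector and a threshold classifier; the features chosen (net lateral displacement and peak lateral rate), the 2.5 m displacement threshold, and point-wise averaging for the centerline are all assumptions, since the abstract leaves the feature vector and classifier unspecified.

```python
def trajectory_features(lateral_offsets, dt=0.1):
    """Tiny trajectory feature vector: (net lateral displacement in meters,
    peak absolute lateral rate in m/s) from sampled lateral offsets."""
    net = lateral_offsets[-1] - lateral_offsets[0]
    rates = [(b - a) / dt for a, b in zip(lateral_offsets, lateral_offsets[1:])]
    return (net, max(abs(r) for r in rates))

def is_lane_change(features, disp_thresh=2.5):
    """Classify: a net lateral displacement near a lane width (threshold is
    an assumed value) indicates a lane change."""
    return abs(features[0]) > disp_thresh

def centerline(lane_keeping_trajectories):
    """Average the lateral offsets of the lane-keeping (non-lane-change)
    trajectories point-wise to estimate the lane centerline."""
    return [sum(ys) / len(ys) for ys in zip(*lane_keeping_trajectories)]
```

Trajectories classified as lane changes are excluded before averaging, so only lane-keeping traffic shapes the centerline the host is commanded to follow.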