Abstract:
Technical solutions are described for preventing collisions with a vehicle while the vehicle is in a parked condition. An example method includes performing stationary safety monitoring when the vehicle is in the parked condition. The stationary safety monitoring includes detecting the presence of a moving object within a predetermined region around the vehicle. Further, the method includes, in response to detecting the moving object in the predetermined region, initiating a notification for the moving object to prevent a collision with the vehicle.
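A minimal sketch of the stationary safety monitoring loop described above, assuming a hypothetical read_proximity_sensor() that reports object positions relative to the parked vehicle and a sound_alert() notification hook; the region radius and speed threshold are illustrative values, not specifics from the abstract.

```python
import math
import time

MONITOR_RADIUS_M = 2.0      # predetermined region around the parked vehicle (assumed value)
SPEED_THRESHOLD_MPS = 0.1   # minimum speed to treat an object as "moving" (assumed value)

def is_moving_object_nearby(prev_positions, positions, dt):
    """Return True if any tracked object inside the region moved between samples."""
    for obj_id, (x, y) in positions.items():
        if math.hypot(x, y) > MONITOR_RADIUS_M:
            continue  # outside the predetermined region
        if obj_id in prev_positions:
            px, py = prev_positions[obj_id]
            speed = math.hypot(x - px, y - py) / dt
            if speed > SPEED_THRESHOLD_MPS:
                return True
    return False

def stationary_safety_monitor(read_proximity_sensor, sound_alert, dt=0.1):
    """Poll the sensor while parked; notify the moving object when one is detected."""
    prev = read_proximity_sensor()
    while True:
        time.sleep(dt)
        curr = read_proximity_sensor()
        if is_moving_object_nearby(prev, curr, dt):
            sound_alert()   # e.g. flash lights or chirp the horn toward the moving object
        prev = curr
```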
Abstract:
A system and method to fuse a radar system and a vision sensor system include obtaining radar reflections resulting from transmissions of radio frequency (RF) energy. The method includes obtaining image frames from one or more vision sensor systems, and generating region of interest (ROI) proposals based on the radar reflections and the image frames. Information is provided about objects detected based on the ROI proposals.
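A hedged sketch of one way the radar-vision ROI generation could look, assuming the radar reflections are already expressed in the camera frame and are projected with a pinhole intrinsic matrix K; the projection, ROI size, and function names are assumptions, not details from the abstract.

```python
import numpy as np

def project_radar_to_image(radar_points_xyz, K):
    """Project 3-D radar reflection points into pixel coordinates using a pinhole
    camera model with intrinsic matrix K (points assumed already in the camera frame)."""
    pts = np.asarray(radar_points_xyz, dtype=float)   # shape (N, 3)
    uvw = (K @ pts.T).T                               # shape (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                   # pixel (u, v) per reflection

def roi_proposals_from_radar(radar_points_xyz, K, image_shape, half_size=64):
    """Generate one square ROI per radar reflection, clipped to the image frame."""
    h, w = image_shape[:2]
    rois = []
    for u, v in project_radar_to_image(radar_points_xyz, K):
        x0, y0 = max(0, int(u) - half_size), max(0, int(v) - half_size)
        x1, y1 = min(w, int(u) + half_size), min(h, int(v) + half_size)
        if x1 > x0 and y1 > y0:
            rois.append((x0, y0, x1, y1))   # ROI handed to the vision-based detector
    return rois
```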
Abstract:
An autonomous vehicle control system includes a perception module of a spatial monitoring system that is disposed to monitor a spatial environment proximal to the autonomous vehicle. A method for evaluating vehicle dynamics operation includes determining a desired trajectory for the autonomous vehicle, wherein the desired trajectory includes desired vehicle positions including an x-position, a y-position, and a heading. Vehicle control commands are determined based upon the desired trajectory and include a commanded steering angle, an acceleration command, and a braking command. Actual vehicle states responsive to the vehicle control commands are determined. An estimated trajectory is determined based upon the actual vehicle states, and a trajectory error is determined based upon a difference between the desired trajectory and the estimated trajectory. The trajectory error is monitored over a time horizon, and a first state of health (SOH) is determined based upon the trajectory error over the time horizon.
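A hedged worked example of the trajectory-error and state-of-health computation, assuming the per-sample error combines position and heading deviations and that the SOH maps an RMS error over the time horizon onto [0, 1] against a threshold; the abstract does not fix these particulars.

```python
import math

def trajectory_error(desired, estimated):
    """Per-sample error between desired and estimated (x, y, heading) states."""
    return [
        math.hypot(dx - ex, dy - ey) + abs(dh - eh)
        for (dx, dy, dh), (ex, ey, eh) in zip(desired, estimated)
    ]

def state_of_health(desired, estimated, horizon, threshold=0.5):
    """Monitor the trajectory error over the last `horizon` samples and map it
    to a first state of health (1.0 = healthy, 0.0 = error at or above threshold)."""
    errs = trajectory_error(desired, estimated)[-horizon:]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return max(0.0, 1.0 - rms / threshold)
```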
Abstract:
Systems and methods are provided for controlling a vehicle. In one embodiment, a method includes receiving vehicle and object environment data. A search graph is generated based upon the received data. The search graph contains a grid of points for locating objects and is used to determine a desired trajectory for the vehicle.
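A minimal sketch of a search graph built as a grid of points with cells blocked by detected objects, searched here with Dijkstra's algorithm; the grid connectivity and search method are assumptions, since the abstract only states that the graph is used to determine a desired trajectory.

```python
import heapq

def build_search_graph(width, height, blocked):
    """Grid of points; 4-connected edges skip cells occupied by detected objects."""
    graph = {}
    for x in range(width):
        for y in range(height):
            if (x, y) in blocked:
                continue
            nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            graph[(x, y)] = [n for n in nbrs
                             if 0 <= n[0] < width and 0 <= n[1] < height and n not in blocked]
    return graph

def desired_trajectory(graph, start, goal):
    """Dijkstra search over the grid graph; returns the point sequence to follow."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        for nbr in graph.get(node, []):
            nd = d + 1.0
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path))
```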
Abstract:
A method and apparatus for controlling movement of a liftgate are provided. The method includes: capturing an image of a pattern on a surface; determining a distance between the liftgate and the surface based on the image of the pattern; and controlling a movement of the liftgate based on the determined distance.
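A small illustrative sketch, assuming the distance is inferred from the apparent pixel width of a pattern of known physical size via a pinhole camera model and that the liftgate opening angle is limited by the available clearance; these relations and the numbers are assumptions, not details from the abstract.

```python
def distance_from_pattern(pattern_pixel_width, pattern_real_width_m, focal_length_px):
    """Pinhole-camera estimate: distance = focal_length * real_width / pixel_width."""
    return focal_length_px * pattern_real_width_m / pattern_pixel_width

def liftgate_open_angle(distance_m, full_open_deg=90.0, clearance_per_deg_m=0.01):
    """Limit how far the liftgate opens based on clearance to the surface above it."""
    allowed = distance_m / clearance_per_deg_m   # degrees of travel that fit in the clearance
    return min(full_open_deg, max(0.0, allowed))

# Example: a 0.20 m pattern appearing 80 px wide with an 800 px focal length
d = distance_from_pattern(80, 0.20, 800)   # -> 2.0 m to the surface
angle = liftgate_open_angle(d)             # opens fully (90 deg) with 2.0 m of clearance
```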
Abstract:
A method for localizing a vehicle in a digital map. GPS raw measurement data is retrieved from satellites. Based on the raw measurement data, a digital map of a region traveled by the vehicle is retrieved from a database. The digital map includes a geographic mapping of a traveled road and registered roadside objects, which are positionally identified in the digital map by earth-fixed coordinates. Roadside objects in the region traveled by the vehicle are sensed using distance data and bearing angle data, and the sensed roadside objects are matched to the digital map. A vehicle position on the traveled road is determined by fusing the raw measurement data with sensor measurements of the identified roadside objects. The position of the vehicle is represented as a function of the linearized raw measurement data and sensor measurement data, as derived by a Jacobian matrix and normalized measurements, respectively.
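A hedged sketch of the linearization step, assuming range and bearing measurements to roadside objects at known earth-fixed map coordinates and a Gauss-Newton correction of the vehicle pose; GPS pseudorange rows would be stacked into the same Jacobian in an analogous way. The exact fusion filter is not specified in the abstract.

```python
import numpy as np

def range_bearing_residual_and_jacobian(state, landmarks, meas):
    """Linearize the range/bearing measurement model about the current pose
    (x, y, heading). `landmarks` are earth-fixed (x, y) map coordinates and
    `meas` are (distance, bearing) sensor measurements of those objects."""
    x, y, th = state
    H, r = [], []
    for (lx, ly), (d_meas, b_meas) in zip(landmarks, meas):
        dx, dy = lx - x, ly - y
        d = np.hypot(dx, dy)
        b = np.arctan2(dy, dx) - th
        # predicted-minus-measured residuals (bearing residual wrapped to [-pi, pi])
        r += [d - d_meas, np.arctan2(np.sin(b - b_meas), np.cos(b - b_meas))]
        # Jacobian of [range, bearing] with respect to (x, y, heading)
        H += [[-dx / d, -dy / d, 0.0],
              [dy / d**2, -dx / d**2, -1.0]]
    return np.array(r), np.array(H)

def localize_step(state, landmarks, meas):
    """One Gauss-Newton correction of the vehicle pose from matched roadside objects."""
    state = np.asarray(state, dtype=float)
    r, H = range_bearing_residual_and_jacobian(state, landmarks, meas)
    delta = np.linalg.lstsq(H, -r, rcond=None)[0]
    return state + delta
```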
Abstract:
A method and system are disclosed for tracking a remote vehicle which is driving in a lateral position relative to a host vehicle. Target data from two radar sensors are provided to an object detection fusion system. Wheels on the remote vehicle are identified as clusters of radar points with essentially the same location but substantially varying Doppler range rate values. If both wheels on the near side of the remote vehicle can be identified, a fusion calculation is performed using the wheel locations measured by both radar sensors, yielding an accurate estimate of the position, orientation and velocity of the remote vehicle. The position, orientation and velocity of the remote vehicle are used to trigger warnings or evasive maneuvers in a Lateral Collision Prevention (LCP) system. Radar sensor alignment can also be calibrated with an additional fusion calculation based on the same wheel measurement data.
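A minimal sketch of the wheel-detection idea, flagging a cluster of radar points that share essentially one location but show a wide spread of Doppler range rates; the clustering radius and Doppler-spread threshold are illustrative assumptions.

```python
import math

def find_wheel_clusters(points, pos_eps=0.3, doppler_spread_min=2.0):
    """Group radar points by proximity; flag a cluster as a wheel when the points
    share essentially one location but show widely varying Doppler range rates.
    Each point is (x, y, doppler_mps). Threshold values are illustrative."""
    unassigned = list(points)
    wheels = []
    while unassigned:
        seed = unassigned.pop()
        cluster = [seed]
        cluster += [p for p in unassigned
                    if math.hypot(p[0] - seed[0], p[1] - seed[1]) < pos_eps]
        unassigned = [p for p in unassigned if p not in cluster]
        dopplers = [p[2] for p in cluster]
        if len(cluster) >= 3 and max(dopplers) - min(dopplers) > doppler_spread_min:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            wheels.append((cx, cy))   # wheel center used in the pose fusion step
    return wheels
```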
Abstract:
A method for providing redundant vehicle speed estimation. The method includes providing sensor output signals from a plurality of primary sensors and providing inertial measurement signals from an inertial measurement unit. The method also includes estimating the vehicle speed in a primary module using the primary sensor signals, and buffering the estimated vehicle speed values from the primary module for a predetermined period of time. The method further includes determining that one or more of the primary sensors or the primary module has failed, and if so, estimating the vehicle speed in a secondary module using the buffered vehicle speed values and the inertial measurement signals. The method can use GPS signal data and/or range data from static objects to improve the estimated vehicle speed in the secondary module if they are available.
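A minimal sketch of the secondary module's fallback behavior, assuming the last buffered primary speed seeds a dead-reckoned estimate integrated from IMU longitudinal acceleration, with GPS speed blended in when available; the class name and blending weight are assumptions.

```python
from collections import deque

class SecondarySpeedEstimator:
    """Fallback speed estimate used after a primary-sensor failure: start from the
    last buffered primary estimate and integrate IMU longitudinal acceleration."""

    def __init__(self, buffer_seconds=2.0, dt=0.01):
        self.dt = dt
        self.buffer = deque(maxlen=int(buffer_seconds / dt))  # buffered primary speeds
        self.speed = None

    def update_primary(self, primary_speed_mps):
        """Normal operation: store the primary module's estimate in the buffer."""
        self.buffer.append(primary_speed_mps)
        self.speed = primary_speed_mps

    def update_after_failure(self, accel_long_mps2, gps_speed_mps=None):
        """After a detected failure: dead-reckon from the buffered speed and the IMU."""
        if self.speed is None:
            self.speed = self.buffer[-1] if self.buffer else 0.0   # seed from buffered values
        self.speed += accel_long_mps2 * self.dt                    # integrate IMU acceleration
        if gps_speed_mps is not None:                              # blend in GPS speed if available
            self.speed = 0.9 * self.speed + 0.1 * gps_speed_mps
        return self.speed
```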
Abstract:
A method for autonomously aligning a tow hitch ball on a towing vehicle with a trailer drawbar on a trailer through a human-machine interface (HMI) assisted visual servoing process. The method includes providing rearview images from a rearview camera, touching the tow ball on a display to register a location of the tow ball in the image, and touching the drawbar on the display to register a location of a target where the tow ball will be properly aligned with the drawbar. The method provides a template pattern around the target on the image and autonomously moves the vehicle so that the tow ball moves toward the target. As the vehicle moves, the method predicts a new location of the target and identifies the target in new images by comparing the previous template pattern with an image patch around the predicted location.
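A hedged sketch of the target re-identification step, assuming the stored template is compared against image patches around the motion-predicted location using normalized cross-correlation; the abstract does not name the matching score or search strategy, so both are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equally sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) or 1.0
    return float((a * b).sum() / denom)

def relocate_target(image, template, predicted_uv, search_radius=15):
    """Find the target in the new frame by comparing the stored template with
    image patches around the location predicted from the vehicle's motion."""
    th, tw = template.shape
    pu, pv = predicted_uv
    best_score, best_uv = -1.0, predicted_uv
    for du in range(-search_radius, search_radius + 1):
        for dv in range(-search_radius, search_radius + 1):
            u, v = pu + du, pv + dv
            patch = image[v:v + th, u:u + tw]
            if patch.shape != template.shape:
                continue   # skip windows that fall off the image
            score = ncc(patch.astype(float), template.astype(float))
            if score > best_score:
                best_score, best_uv = score, (u, v)
    return best_uv, best_score
```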