1. Anchor lane selection method using navigation input in road change scenarios
    Document Type: Granted Patent
    Legal Status: In Force

    Publication No.: US08706417B2

    Publication Date: 2014-04-22

    Application No.: US13561755

    Filing Date: 2012-07-30

    IPC Classes: G05D1/00 G08G1/137

    Abstract: A method for selecting an anchor lane for tracking in a vehicle lane tracking system. Digital map data and leading vehicle trajectory data are used to predict lane information ahead of a vehicle. Left and right lane boundary markers are also detected, where available, using a vision system. The lane marker data from the vision system is combined with the lane information from the digital map data and the leading vehicle trajectory data in a lane curvature fusion calculation. The left and right lane marker data from the vision system are also evaluated for conditions such as parallelism and sudden jumps in offsets, while considering the presence of entrance or exit lanes as indicated by the map data. An anchor lane for tracking is selected based on the evaluation of the vision system data, using either the fused curvature calculation or the digital map and leading vehicle trajectory data.

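    As a rough illustration of the kind of curvature fusion the abstract describes (not the patented algorithm itself), the sketch below averages curvature estimates from the vision system, the digital map, and the leading-vehicle trajectory with assumed confidence weights; the function name, weights, and fallback behavior are hypothetical.

```python
# Minimal sketch: fuse lane-curvature estimates from three sources with
# assumed confidence weights. Names and weighting scheme are illustrative only.

def fuse_curvature(kappa_vision, kappa_map, kappa_lead,
                   w_vision=0.5, w_map=0.3, w_lead=0.2):
    """Weighted average of curvature estimates (1/m); any source may be None."""
    terms = [(k, w) for k, w in ((kappa_vision, w_vision),
                                 (kappa_map, w_map),
                                 (kappa_lead, w_lead)) if k is not None]
    if not terms:
        raise ValueError("no curvature source available")
    total_w = sum(w for _, w in terms)
    return sum(k * w for k, w in terms) / total_w

# Example: vision sees a gentle left curve; map and lead-vehicle trail agree.
fused = fuse_curvature(kappa_vision=0.0012, kappa_map=0.0010, kappa_lead=0.0011)
```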

2. ANCHOR LANE SELECTION METHOD USING NAVIGATION INPUT IN ROAD CHANGE SCENARIOS
    Document Type: Patent Application
    Legal Status: In Force

    Publication No.: US20140032108A1

    Publication Date: 2014-01-30

    Application No.: US13561755

    Filing Date: 2012-07-30

    IPC Classes: G06K9/62 G01C21/34

    Abstract: A method for selecting an anchor lane for tracking in a vehicle lane tracking system. Digital map data and leading vehicle trajectory data are used to predict lane information ahead of a vehicle. Left and right lane boundary markers are also detected, where available, using a vision system. The lane marker data from the vision system is combined with the lane information from the digital map data and the leading vehicle trajectory data in a lane curvature fusion calculation. The left and right lane marker data from the vision system are also evaluated for conditions such as parallelism and sudden jumps in offsets, while considering the presence of entrance or exit lanes as indicated by the map data. An anchor lane for tracking is selected based on the evaluation of the vision system data, using either the fused curvature calculation or the digital map and leading vehicle trajectory data.

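    The parallelism and offset-jump checks mentioned in this abstract could, in principle, look something like the following sketch; the thresholds, sampling scheme, and function name are assumptions made for illustration, not taken from the patent.

```python
# Illustrative check (not the patented algorithm): decide whether the vision
# lane markers are trustworthy by testing parallelism and sudden offset jumps.

def vision_markers_reliable(left_offsets, right_offsets,
                            max_width_spread=0.5, max_jump=0.75):
    """left_offsets/right_offsets: lateral offsets (m) sampled along the look-ahead."""
    widths = [r - l for l, r in zip(left_offsets, right_offsets)]
    parallel = max(widths) - min(widths) <= max_width_spread
    no_jump = all(abs(b - a) <= max_jump
                  for track in (left_offsets, right_offsets)
                  for a, b in zip(track, track[1:]))
    return parallel and no_jump

left = [-1.8, -1.8, -1.9, -1.9]
right = [1.7, 1.7, 1.6, 1.6]
use_fused_curvature = vision_markers_reliable(left, right)
# If False (e.g. an exit ramp pulls the right marker away), a system of this
# kind could fall back to the map / leading-vehicle estimate as the anchor.
```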

3. Lane tracking system
    Document Type: Granted Patent
    Legal Status: In Force

    Publication No.: US09139203B2

    Publication Date: 2015-09-22

    Application No.: US13289517

    Filing Date: 2011-11-04

    IPC Classes: G06F19/00 B60W30/12

    CPC Classes: B60W30/12 B60W2420/42

    Abstract: A lane tracking system for tracking the position of a vehicle within a lane includes a camera configured to provide a video feed representative of a field of view and a video processor configured to receive the video feed from the camera and to generate latent video-based position data indicative of the position of the vehicle within the lane. The system further includes a vehicle motion sensor configured to generate vehicle motion data indicative of the motion of the vehicle, and a lane tracking processor. The lane tracking processor is configured to receive the video-based position data, updated at a first frequency; receive the sensed vehicle motion data, updated at a second frequency; estimate the position of the vehicle within the lane from the sensed vehicle motion data; and fuse the video-based position data with the estimate of the vehicle position within the lane using a Kalman filter.

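    A minimal sketch of the multi-rate fusion idea in this abstract, assuming a one-dimensional state (lateral offset within the lane) and scalar noise variances; the class name, noise values, and motion model are illustrative, not the patent's implementation.

```python
# Minimal 1-D Kalman-filter sketch: the lateral offset y within the lane is
# propagated from vehicle motion at a high rate and corrected by the slower,
# latent camera-based measurement when it arrives.

class LateralKalman:
    def __init__(self, y0=0.0, p0=1.0, q=0.05, r=0.2):
        self.y, self.p = y0, p0      # state estimate and its variance
        self.q, self.r = q, r        # process and measurement noise variances

    def predict(self, lateral_velocity, dt):
        """Propagate with sensed vehicle motion (e.g. speed times heading error)."""
        self.y += lateral_velocity * dt
        self.p += self.q * dt

    def update(self, y_camera):
        """Fuse a video-based lateral-offset measurement."""
        k = self.p / (self.p + self.r)        # Kalman gain
        self.y += k * (y_camera - self.y)
        self.p *= (1.0 - k)

kf = LateralKalman()
for _ in range(10):                 # motion data arrives at the faster rate
    kf.predict(lateral_velocity=0.02, dt=0.01)
kf.update(y_camera=0.25)            # camera frame arrives at the slower rate
```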

4. LANE TRACKING SYSTEM
    Document Type: Patent Application
    Legal Status: In Force

    Publication No.: US20130116854A1

    Publication Date: 2013-05-09

    Application No.: US13289517

    Filing Date: 2011-11-04

    IPC Classes: G06F7/00

    CPC Classes: B60W30/12 B60W2420/42

    Abstract: A lane tracking system for tracking the position of a vehicle within a lane includes a camera configured to provide a video feed representative of a field of view and a video processor configured to receive the video feed from the camera and to generate latent video-based position data indicative of the position of the vehicle within the lane. The system further includes a vehicle motion sensor configured to generate vehicle motion data indicative of the motion of the vehicle, and a lane tracking processor. The lane tracking processor is configured to receive the video-based position data, updated at a first frequency; receive the sensed vehicle motion data, updated at a second frequency; estimate the position of the vehicle within the lane from the sensed vehicle motion data; and fuse the video-based position data with the estimate of the vehicle position within the lane using a Kalman filter.


5. Enhanced data association of fusion using weighted Bayesian filtering
    Document Type: Granted Patent
    Legal Status: In Force

    Publication No.: US08705797B2

    Publication Date: 2014-04-22

    Application No.: US13413861

    Filing Date: 2012-03-07

    IPC Classes: G06K9/00 H04N5/225

    Abstract: A method of associating targets from at least two object detection systems. An initial prior correspondence matrix is generated based on prior target data from a first object detection system and a second object detection system. Targets are identified in a first field-of-view of the first object detection system based on a current time step. Targets are identified in a second field-of-view of the second object detection system based on the current time step. The prior correspondence matrix is adjusted based on respective targets entering and leaving the respective fields-of-view. A posterior correspondence matrix is generated as a function of the adjusted prior correspondence matrix. A correspondence is identified in the posterior correspondence matrix between a respective target of the first object detection system and a respective target of the second object detection system.

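    The prior-to-posterior update described above might be sketched as follows, assuming a Gaussian distance likelihood and row-wise normalization; the likelihood model and all names are illustrative assumptions rather than the patent's exact formulation.

```python
# Illustrative sketch only: a Bayesian-style update of a correspondence matrix
# between targets reported by two object detection systems.
import math

def posterior_correspondence(prior, targets_a, targets_b, sigma=2.0):
    """prior[i][j]: prior probability that A-target i matches B-target j.
    targets_a/targets_b: (x, y) positions from the two systems."""
    post = []
    for i, (ax, ay) in enumerate(targets_a):
        row = []
        for j, (bx, by) in enumerate(targets_b):
            d2 = (ax - bx) ** 2 + (ay - by) ** 2
            likelihood = math.exp(-d2 / (2.0 * sigma ** 2))
            row.append(prior[i][j] * likelihood)
        s = sum(row) or 1.0
        post.append([v / s for v in row])   # normalize each row
    return post

prior = [[0.5, 0.5], [0.5, 0.5]]
post = posterior_correspondence(prior, [(10.0, 1.0), (30.0, -2.0)],
                                        [(10.5, 1.2), (29.0, -1.5)])
# Pick the most likely B-target for each A-target from the posterior matrix.
matches = [max(range(len(row)), key=row.__getitem__) for row in post]
```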

6. ENHANCED DATA ASSOCIATION OF FUSION USING WEIGHTED BAYESIAN FILTERING
    Document Type: Patent Application
    Legal Status: In Force

    Publication No.: US20130236047A1

    Publication Date: 2013-09-12

    Application No.: US13413861

    Filing Date: 2012-03-07

    IPC Classes: G06K9/00

    Abstract: A method of associating targets from at least two object detection systems. An initial prior correspondence matrix is generated based on prior target data from a first object detection system and a second object detection system. Targets are identified in a first field-of-view of the first object detection system based on a current time step. Targets are identified in a second field-of-view of the second object detection system based on the current time step. The prior correspondence matrix is adjusted based on respective targets entering and leaving the respective fields-of-view. A posterior correspondence matrix is generated as a function of the adjusted prior correspondence matrix. A correspondence is identified in the posterior correspondence matrix between a respective target of the first object detection system and a respective target of the second object detection system.

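    The adjustment of the prior correspondence matrix for targets entering or leaving a field of view, mentioned in this abstract, could be handled along the following lines; dropping a column and renormalizing, and adding a uniform row, are assumptions made for illustration only.

```python
# Sketch under assumptions: adjust a prior correspondence matrix when targets
# leave or enter one sensor's field of view before the Bayesian update.

def drop_b_target(prior, col):
    """Remove a departed B-target's column and renormalize each row."""
    trimmed = [[v for j, v in enumerate(row) if j != col] for row in prior]
    return [[v / (sum(row) or 1.0) for v in row] for row in trimmed]

def add_a_target(prior, n_b_targets):
    """Append a new A-target with a uniform prior over all B-targets."""
    return prior + [[1.0 / n_b_targets] * n_b_targets]

prior = [[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]]
prior = drop_b_target(prior, 2)   # a B-target left the second system's view
prior = add_a_target(prior, 2)    # a new A-target entered the first system's view
```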

7. FUSION OF OBSTACLE DETECTION USING RADAR AND CAMERA
    Document Type: Patent Application
    Legal Status: In Force

    Publication No.: US20140035775A1

    Publication Date: 2014-02-06

    Application No.: US13563993

    Filing Date: 2012-08-01

    IPC Classes: G01S13/86 G01S13/93

    Abstract: A vehicle obstacle detection system includes an imaging system for capturing objects in a field of view and a radar device for sensing objects in a substantially same field of view. The substantially same field of view is partitioned into an occupancy grid having a plurality of observation cells. A fusion module receives radar data from the radar device and imaging data from the imaging system. The fusion module projects the occupancy grid and associated radar data onto the captured image. The fusion module extracts features from each corresponding cell using sensor data from the radar device and imaging data from the imaging system. A primary classifier determines whether an extracted feature extracted from a respective observation cell is an obstacle.

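    A simplified sketch of the occupancy-grid fusion step, assuming radar returns given in vehicle coordinates and a hypothetical per-cell image feature (edge density) standing in for the camera-derived features; the grid dimensions, feature choice, and threshold rule are illustrative, not the patent's primary classifier.

```python
# Rough sketch: bin radar returns into an occupancy grid, pair each cell with
# an assumed image feature, and flag obstacle cells with a simple threshold rule.

def build_grid(radar_points, cell_size=1.0, grid_w=20, grid_h=40):
    """radar_points: (x_forward, y_lateral, intensity) in vehicle coordinates."""
    grid = {}
    for x, y, intensity in radar_points:
        col = int((y + grid_w * cell_size / 2) // cell_size)
        row = int(x // cell_size)
        if 0 <= row < grid_h and 0 <= col < grid_w:
            cell = grid.setdefault((row, col), {"hits": 0, "energy": 0.0})
            cell["hits"] += 1            # per-cell radar features
            cell["energy"] += intensity
    return grid

def classify_cells(grid, image_edge_density, min_hits=2, min_edges=0.3):
    """image_edge_density: hypothetical per-cell feature from the camera image."""
    return {cell: feats["hits"] >= min_hits and
                  image_edge_density.get(cell, 0.0) >= min_edges
            for cell, feats in grid.items()}

grid = build_grid([(12.3, -0.4, 5.1), (12.8, -0.2, 4.7), (30.0, 6.0, 1.0)])
obstacles = classify_cells(grid, image_edge_density={(12, 9): 0.6})
```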

8. Fusion of obstacle detection using radar and camera
    Document Type: Granted Patent
    Legal Status: In Force

    Publication No.: US09429650B2

    Publication Date: 2016-08-30

    Application No.: US13563993

    Filing Date: 2012-08-01

    Abstract: A vehicle obstacle detection system includes an imaging system for capturing objects in a field of view and a radar device for sensing objects in a substantially same field of view. The substantially same field of view is partitioned into an occupancy grid having a plurality of observation cells. A fusion module receives radar data from the radar device and imaging data from the imaging system. The fusion module projects the occupancy grid and associated radar data onto the captured image. The fusion module extracts features from each corresponding cell using sensor data from the radar device and imaging data from the imaging system. A primary classifier determines whether an extracted feature extracted from a respective observation cell is an obstacle.


9. NOVEL SENSOR ALIGNMENT PROCESS AND TOOLS FOR ACTIVE SAFETY VEHICLE APPLICATIONS
    Document Type: Patent Application
    Legal Status: In Force

    Publication No.: US20120290169A1

    Publication Date: 2012-11-15

    Application No.: US13104704

    Filing Date: 2011-05-10

    IPC Classes: G06F7/00 G06F17/10

    Abstract: A method and tools for virtually aligning object detection sensors on a vehicle without having to physically adjust the sensors. A sensor misalignment condition is detected during normal driving of a host vehicle by comparing different sensor readings to each other. At a vehicle service facility, the host vehicle is placed in an alignment target fixture, and alignment of all object detection sensors is compared to ground truth to determine alignment calibration parameters. Alignment calibration can be further refined by driving the host vehicle in a controlled environment following a leading vehicle. Final alignment calibration parameters are authorized and stored in system memory, and applications which use object detection data henceforth adjust the sensor readings according to the calibration parameters.

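    One way the misalignment detection and correction described above might be sketched: estimate an angular offset from bearing discrepancies toward matched targets, then rotate later detections by that stored calibration parameter. The averaging scheme and function names are assumptions, not the patented process.

```python
# Hypothetical sketch of the general idea: estimate an angular misalignment as
# the mean bearing discrepancy toward targets two sources agree on, then rotate
# subsequent readings by that stored calibration parameter.
import math

def estimate_misalignment(pairs):
    """pairs: [(azimuth_sensor_deg, azimuth_reference_deg), ...] for matched targets."""
    diffs = [ref - sen for sen, ref in pairs]
    return sum(diffs) / len(diffs)          # estimated angular offset in degrees

def apply_alignment(point, offset_deg):
    """Rotate an (x, y) detection by the stored calibration parameter."""
    a = math.radians(offset_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

offset = estimate_misalignment([(10.2, 9.0), (25.4, 24.1), (-3.8, -5.0)])
corrected = apply_alignment((50.0, 8.7), offset)
```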

10. Sensor alignment process and tools for active safety vehicle applications
    Document Type: Granted Patent
    Legal Status: In Force

    Publication No.: US08775064B2

    Publication Date: 2014-07-08

    Application No.: US13104704

    Filing Date: 2011-05-10

    IPC Classes: G06F7/00 G06F17/10 G01M11/00

    Abstract: A method and tools for virtually aligning object detection sensors on a vehicle without having to physically adjust the sensors. A sensor misalignment condition is detected during normal driving of a host vehicle by comparing different sensor readings to each other. At a vehicle service facility, the host vehicle is placed in an alignment target fixture, and alignment of all object detection sensors is compared to ground truth to determine alignment calibration parameters. Alignment calibration can be further refined by driving the host vehicle in a controlled environment following a leading vehicle. Final alignment calibration parameters are authorized and stored in system memory, and applications which use object detection data henceforth adjust the sensor readings according to the calibration parameters.
