METHOD AND SYSTEM FOR AUTOMATED SEQUENCING OF VEHICLES IN SIDE-BY-SIDE DRIVE-THRU CONFIGURATIONS VIA APPEARANCE-BASED CLASSIFICATION
    82.
    Invention Application
    METHOD AND SYSTEM FOR AUTOMATED SEQUENCING OF VEHICLES IN SIDE-BY-SIDE DRIVE-THRU CONFIGURATIONS VIA APPEARANCE-BASED CLASSIFICATION (Pending, Published)

    Publication Number: US20170039432A1

    Publication Date: 2017-02-09

    Application Number: US15297572

    Filing Date: 2016-10-19

    Abstract: This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, a computer-implemented method of automated sequencing of vehicles in a side-by-side drive-thru comprises: a) capturing, with an image capturing device, video of a merge-point area where multiple lanes of traffic merge; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of the detected vehicles.

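    The abstract describes a detect-then-classify pipeline at the merge point. The following is a minimal sketch of such a pipeline in Python with OpenCV, assuming a fixed merge-point region of interest and a placeholder appearance classifier; the ROI coordinates, the toy classification rule, and all function names are illustrative rather than taken from the patent.

```python
import cv2

# Hypothetical merge-point region of interest (x, y, w, h) in pixels.
MERGE_ROI = (200, 300, 240, 160)


def vehicle_in_roi(fg_mask, roi, min_area=5000):
    """Return True if a large foreground blob overlaps the merge-point ROI."""
    x, y, w, h = roi
    contours, _ = cv2.findContours(fg_mask[y:y + h, x:x + w],
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)


def classify_lane(frame, roi):
    """Placeholder appearance-based classifier returning 'lane_A' or 'lane_B'.

    In practice this would be a trained classifier applied to the ROI crop."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    # Toy rule for illustration only: a brighter left half suggests lane_A.
    left, right = crop[:, :w // 2].mean(), crop[:, w // 2:].mean()
    return "lane_A" if left > right else "lane_B"


def build_merge_sequence(video_path):
    """Aggregate per-vehicle lane classifications into a merge sequence."""
    cap = cv2.VideoCapture(video_path)
    bg_sub = cv2.createBackgroundSubtractorMOG2()
    sequence, vehicle_present = [], False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = bg_sub.apply(frame)
        in_roi = vehicle_in_roi(fg, MERGE_ROI)
        if in_roi and not vehicle_present:  # rising edge: a new vehicle entered
            sequence.append(classify_lane(frame, MERGE_ROI))
        vehicle_present = in_roi
    cap.release()
    return sequence  # e.g. ['lane_A', 'lane_B', 'lane_A', ...]
```

    The rising-edge check yields one classification per vehicle even though each vehicle occupies the region of interest for many consecutive frames.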

    Video tracking based method for automatic sequencing of vehicles in drive-thru applications
    83.
    Invention Grant
    Video tracking based method for automatic sequencing of vehicles in drive-thru applications (In Force)

    Publication Number: US09471889B2

    Publication Date: 2016-10-18

    Application Number: US14260915

    Filing Date: 2014-04-24

    Abstract: A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at least near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of the tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.

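    The sequence-update step can be sketched in plain Python: each tracked subject is reduced to a position history, a subject counts as merged once its track crosses a merge line, and the pending end-event records are reordered to match the observed merge order. The merge-line coordinate, the data structures, and the field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class TrackedSubject:
    subject_id: int
    positions: list = field(default_factory=list)  # (x, y) per frame


# Illustrative merge point: a vertical line in image coordinates.
MERGE_POINT_X = 400


def crossed_merge_point(subject: TrackedSubject) -> bool:
    """A subject has merged once its most recent position passes the merge line."""
    return bool(subject.positions) and subject.positions[-1][0] >= MERGE_POINT_X


def observed_merge_order(subjects, already_sequenced):
    """Append newly merged subjects to the observed sequence in the order
    their tracks reach the merge point."""
    order = list(already_sequenced)
    for s in subjects:
        if s.subject_id not in order and crossed_merge_point(s):
            order.append(s.subject_id)
    return order


def update_end_events(end_events, observed_order):
    """Reorder pending end-event records (dicts keyed by 'subject_id', e.g.
    order-fulfilment events) so they match the observed single-lane sequence."""
    by_id = {e["subject_id"]: e for e in end_events}
    reordered = [by_id[sid] for sid in observed_order if sid in by_id]
    # Keep events whose subjects have not yet been observed at the merge point.
    reordered += [e for e in end_events if e["subject_id"] not in observed_order]
    return reordered
```

    Calling observed_merge_order once per frame and update_end_events whenever the order grows keeps the end-event sequence consistent with what the camera actually saw at the merge point.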

    COMPUTER-VISION BASED PROCESS RECOGNITION
    85.
    Invention Application
    COMPUTER-VISION BASED PROCESS RECOGNITION (Pending, Published)

    Publication Number: US20160234464A1

    Publication Date: 2016-08-11

    Application Number: US14688230

    Filing Date: 2015-04-16

    Abstract: A computer-vision based method for validating an activity workflow of a human performer includes identifying a target activity. The method includes determining an expected sequence of actions associated with the target activity. The method includes receiving a video stream from an image capture device monitoring an activity performed by an associated human performer. The method includes determining an external cue in the video stream. The method includes associating a frame capturing the external cue as a first frame in a key frame sequence. The method includes determining an action being performed by the associated human performer in the key frame sequence. In response to determining that the action in the key frame sequence matches an expected action in the target activity, the method includes verifying the action as being performed in the monitored activity. In response to not determining the action in the key frame sequence, the method includes generating an alert indicating an error in the monitored activity.

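    A sketch of the cue-triggered validation loop follows, with the external-cue detector and the per-frame action classifier passed in as callables; detect_cue and classify_action are hypothetical stand-ins for whatever detectors a deployment provides, and the alert wording is illustrative.

```python
def validate_workflow(frames, expected_actions, detect_cue, classify_action):
    """Validate an observed activity against an expected sequence of actions.

    `frames` is any iterable of video frames. `detect_cue(frame)` returns True
    on the external cue that opens the key frame sequence; `classify_action(frame)`
    returns an action label or None. Returns a list of alert strings; an empty
    list means every expected action was verified.
    """
    alerts = []
    in_key_sequence = False
    step = 0          # index into expected_actions
    last_action = None
    for i, frame in enumerate(frames):
        if not in_key_sequence:
            in_key_sequence = detect_cue(frame)  # cue marks the first key frame
            continue
        if step >= len(expected_actions):
            break                                # all expected actions checked
        action = classify_action(frame)
        if action is None or action == last_action:
            continue                             # no new action in this frame
        last_action = action
        if action == expected_actions[step]:
            step += 1                            # verified; advance expectation
        else:
            alerts.append(f"frame {i}: expected '{expected_actions[step]}', "
                          f"observed '{action}'")
            step += 1                            # count the step as consumed
    if step < len(expected_actions):
        alerts.append(f"incomplete activity: '{expected_actions[step]}' not observed")
    return alerts
```

    For example, validate_workflow(frames, ["pick", "scan", "bag"], detect_cue, classify_action) returns an empty list when the three actions are observed in that order after the cue.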

    Reconstructing an image of a scene captured using a compressed sensing device

    Publication Number: US09412185B2

    Publication Date: 2016-08-09

    Application Number: US14753238

    Filing Date: 2015-06-29

    Abstract: A method for reconstructing an image of a scene captured using a compressed sensing device. A mask is received which identifies at least one region of interest in an image of a scene. Measurements are then obtained of the scene using a compressed sensing device comprising, at least in part, a spatial light modulator configuring a plurality of spatial patterns according to a set of basis functions each having a different spatial resolution. A spatial resolution is adaptively modified according to the mask. Each pattern focuses incoming light of the scene onto a detector which samples sequential measurements of light. These measurements comprise a sequence of projection coefficients corresponding to the scene. Thereafter, an appearance of the scene is reconstructed utilizing a compressed sensing framework which reconstructs the image from the sequence of projection coefficients.
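    The measure-then-reconstruct loop can be sketched with simple block-indicator patterns standing in for the basis functions: blocks that overlap the region-of-interest mask use a finer block size than the background, each pattern yields one projection coefficient at the detector, and the image is rebuilt from those coefficients. The block sizes and the block-averaging reconstruction are simplifying assumptions for illustration; a real system would use a proper compressed sensing solver.

```python
import numpy as np


def block_patterns(shape, mask, fine=4, coarse=16):
    """Build single-pixel-camera style measurement patterns: each pattern is the
    indicator of one block, with a finer block size wherever the ROI mask is set
    (mask-adaptive spatial resolution)."""
    h, w = shape
    patterns, covered = [], np.zeros(shape, dtype=bool)
    for size, want_roi in ((fine, True), (coarse, False)):
        for y in range(0, h, size):
            for x in range(0, w, size):
                block = np.zeros(shape, dtype=bool)
                block[y:y + size, x:x + size] = True
                if want_roi and not mask[y:y + size, x:x + size].any():
                    continue
                keep = block & ~covered
                if keep.any():
                    patterns.append(keep.astype(float))
                    covered |= keep
    return np.stack(patterns)


def measure(scene, patterns):
    """Sequential detector measurements: one projection coefficient per pattern."""
    return patterns.reshape(len(patterns), -1) @ scene.ravel()


def reconstruct(coefficients, patterns):
    """Simplest possible reconstruction: spread each coefficient back over its
    block as a mean value (a stand-in for a full compressed sensing solver)."""
    flat = patterns.reshape(len(patterns), -1)
    weights = flat.sum(axis=1, keepdims=True)
    image = (flat / weights).T @ coefficients
    return image.reshape(patterns.shape[1:])
```

    With a 64 by 64 scene and a mask covering a small region of interest, the fine 4-pixel blocks resolve detail inside the mask while the coarse 16-pixel blocks keep the total number of measurements far below the pixel count.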

    MODEL-LESS BACKGROUND ESTIMATION FOR FOREGROUND DETECTION IN VIDEO SEQUENCES
    88.
    Invention Application
    MODEL-LESS BACKGROUND ESTIMATION FOR FOREGROUND DETECTION IN VIDEO SEQUENCES (In Force)

    Publication Number: US20160217575A1

    Publication Date: 2016-07-28

    Application Number: US14606469

    Filing Date: 2015-01-27

    Abstract: A camera outputs video as a sequence of video frames having pixel values in a first (e.g., relatively low dimensional) color space, where the first color space has a first number of channels. An image-processing device maps the video frames to a second (e.g., relatively higher dimensional) color representation of video frames. The mapping causes the second color representation of video frames to have a greater number of channels relative to the first number of channels. The image-processing device extracts a second color representation of a background frame of the scene. The image-processing device can then detect foreground objects in a current frame of the second color representation of video frames by comparing the current frame with the second color representation of a background frame. The image-processing device then outputs an identification of the foreground objects in the current frame of the video.

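    A minimal sketch of the mapping-and-comparison idea, using OpenCV and NumPy: each 3-channel BGR frame is mapped to a 9-channel representation by stacking BGR, HSV and Lab values (one possible higher-dimensional mapping, chosen here for illustration), a background frame is taken as the per-pixel temporal median in that space, and foreground pixels are those whose distance to the background exceeds a threshold. The specific mapping, the median estimate, and the threshold are assumptions, not prescriptions from the patent.

```python
import cv2
import numpy as np


def to_high_dim(frame_bgr):
    """Map a 3-channel BGR frame to a 9-channel representation by stacking
    BGR, HSV and Lab channels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    return np.dstack([frame_bgr, hsv, lab]).astype(np.float32)


def estimate_background(frames_bgr):
    """Model-less background estimate: per-pixel temporal median in the
    9-channel representation, computed over a list of BGR frames."""
    stack = np.stack([to_high_dim(f) for f in frames_bgr])
    return np.median(stack, axis=0)


def detect_foreground(frame_bgr, background, threshold=60.0):
    """Label as foreground every pixel whose 9-channel distance to the
    background exceeds the threshold; returns a binary uint8 mask."""
    diff = to_high_dim(frame_bgr) - background
    distance = np.linalg.norm(diff, axis=2)
    return (distance > threshold).astype(np.uint8) * 255
```

    Comparing frames in the richer representation makes pixels that look similar in BGR but differ in hue or lightness easier to separate from the background.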

    Handheld cellular apparatus for volume estimation
    90.
    Invention Grant
    Handheld cellular apparatus for volume estimation (In Force)

    Publication Number: US09377294B2

    Publication Date: 2016-06-28

    Application Number: US13920241

    Filing Date: 2013-06-18

    CPC classification number: G01B11/00 G01B11/2513 H04M2250/52

    Abstract: What is disclosed is a wireless cellular device capable of determining a volume of an object in an image captured by a camera of that apparatus. In one embodiment, the present wireless cellular device comprises an illuminator for projecting a pattern of structured light with known spatial characteristics, and a camera for capturing images of an object for which a volume is to be estimated. The camera is sensitive to a wavelength range of the projected pattern of structured light. A spatial distortion is introduced by a reflection of the projected pattern off a surface of the object. A processor executes machine readable program instructions for performing the method of: receiving an image of the object from the camera; processing the image to generate a depth map; and estimating a volume of the object from the depth map. A method for using the present wireless cellular device is also provided.

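    The depth-map and volume-estimation steps can be sketched with a pinhole-camera approximation: the shift of the structured-light pattern is triangulated into depth, and the height of each pixel above a known support plane is integrated over the metric footprint of that pixel. The focal length, the projector-camera baseline, and the flat support plane are illustrative assumptions, not values from the patent.

```python
import numpy as np


def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth (metres) from the structured-light pattern shift
    (disparity, in pixels) between a projector and camera separated by
    `baseline_m` metres."""
    return focal_length_px * baseline_m / np.maximum(disparity_px, 1e-6)


def estimate_volume(depth_map, plane_depth, focal_length_px):
    """Estimate object volume (cubic metres) from a metric depth map.

    Assumes the object rests on a flat support plane at `plane_depth` metres;
    the height above the plane is integrated over the metric area each pixel
    covers at the object's surface (pinhole model: z / f metres per pixel)."""
    height = np.clip(plane_depth - depth_map, 0.0, None)  # metres above plane
    pixel_area = (depth_map / focal_length_px) ** 2       # square metres
    return float(np.sum(height * pixel_area))
```

    As a sanity check, a 0.10 m tall box with a 0.08 m square footprint integrates to roughly 6.4e-4 cubic metres under this approximation.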
