-
Publication No.: US20110044536A1
Publication Date: 2011-02-24
Application No.: US12543141
Filing Date: 2009-08-18
Applicants: Wesley Kenneth Cobb, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Min-Jung Seow, Gang Xu, Lon William Risinger, Jeff Graham
Inventors: Wesley Kenneth Cobb, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Min-Jung Seow, Gang Xu, Lon William Risinger, Jeff Graham
CPC Class: G06K9/46, G06K9/3233, G06K9/4647, G06K9/48, G06K2009/485, H04N7/18
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
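The abstract above describes a pipeline rather than a specific algorithm: extract pixel-level features without training data, pack them into a micro-feature vector, and let a micro-classifier group objects into type clusters. The sketch below is a minimal illustration of that idea, not the patented implementation; the choice of features (brightness, contrast, edge energy) and the nearest-centroid online clustering with a distance threshold are assumptions made here for demonstration.

```python
# Illustrative sketch (not the patented implementation): extract simple
# pixel-level "micro-features" from an image patch without any training
# data, then group patches into type clusters via online nearest-centroid
# updates, so the classifier self-trains as it runs.
import numpy as np

def micro_feature_vector(patch):
    """Build a small feature vector from raw pixel statistics of a patch."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)            # local gradients (shape/edge cues)
    return np.array([
        patch.mean(),                      # brightness
        patch.std(),                       # texture contrast
        np.abs(gx).mean(),                 # horizontal edge energy
        np.abs(gy).mean(),                 # vertical edge energy
    ])

class MicroClassifier:
    """Self-training nearest-centroid clusterer: no labelled data needed."""
    def __init__(self, threshold=10.0, rate=0.1):
        self.centroids = []                # one centroid per object type
        self.threshold = threshold         # distance that spawns a new cluster
        self.rate = rate                   # online centroid learning rate

    def assign(self, vec):
        if self.centroids:
            d = [np.linalg.norm(vec - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] < self.threshold:      # close enough: adapt that cluster
                self.centroids[k] += self.rate * (vec - self.centroids[k])
                return k
        self.centroids.append(vec.copy())  # otherwise start a new cluster
        return len(self.centroids) - 1

clf = MicroClassifier()
bright = np.full((8, 8), 200.0)            # synthetic "object" patches
dark = np.full((8, 8), 20.0)
labels = [clf.assign(micro_feature_vector(p)) for p in (bright, dark, bright)]
print(labels)                              # bright patches share one cluster
```

Because clusters are created on the fly whenever a vector falls far from every centroid, the system never needs a training phase, matching the "adaptive and self-trains" behavior the abstract claims.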
-
Publication No.: US08131012B2
Publication Date: 2012-03-06
Application No.: US12028484
Filing Date: 2008-02-08
Applicants: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
IPC Class: G06K9/00
CPC Class: G06K9/00771, G08B13/19608, G08B13/19613
Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. This way, the system learns rapidly and in real-time normal and abnormal behaviors for any environment by analyzing movements or activities or absence of such in the environment and identifies and predicts abnormal and suspicious behavior based on what has been learned.
-
Publication No.: US08620028B2
Publication Date: 2013-12-31
Application No.: US13413549
Filing Date: 2012-03-06
Applicants: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
IPC Class: G06K9/62
CPC Class: G06K9/00771, G08B13/19608, G08B13/19613
Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. This way, the system learns rapidly and in real-time normal and abnormal behaviors for any environment by analyzing movements or activities or absence of such in the environment and identifies and predicts abnormal and suspicious behavior based on what has been learned.
-
Publication No.: US20080193010A1
Publication Date: 2008-08-14
Application No.: US12028484
Filing Date: 2008-02-08
Applicants: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
IPC Class: G06F15/18
CPC Class: G06K9/00771, G08B13/19608, G08B13/19613
Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. This way, the system learns rapidly and in real-time normal and abnormal behaviors for any environment by analyzing movements or activities or absence of such in the environment and identifies and predicts abnormal and suspicious behavior based on what has been learned.
-
Publication No.: US20110043689A1
Publication Date: 2011-02-24
Application No.: US12543281
Filing Date: 2009-08-18
Applicants: Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Tao Yang, Lon William Risinger
Inventors: Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Tao Yang, Lon William Risinger
CPC Class: H04N5/275, G06K9/00765, G06T7/246, G06T2207/10016, H04N5/147
Abstract: Techniques are disclosed for detecting a field-of-view change for a video feed. These techniques differentiate between a new or changed scene and a temporary variation in the scene to accurately detect field-of-view changes for the video feed. A field-of-view change is detected when the position of a camera providing the video feed changes, the video feed is switched to a different camera, the video feed is disconnected, or the camera providing the video feed is obscured. A false-positive field-of-view change is not detected when the scene changes due to a sudden variation in illumination, obstruction of a portion of the camera providing the video feed, blurred images due to an out-of-focus camera, or a transition between bright and dark light when the video feed transitions between color and near infrared capture modes.
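The key distinction in the abstract above is persistence: a real field-of-view change endures, while an illumination flash or momentary obstruction does not. The sketch below illustrates that one idea with hypothetical logic of this editor's choosing (mean frame difference plus a consecutive-frame counter); the patent itself does not specify these particular thresholds or statistics.

```python
# Illustrative sketch (hypothetical logic, not the patented detector):
# flag a field-of-view change only when a large scene difference persists
# for several consecutive frames, so brief illumination flicker is ignored.
import numpy as np

class FovChangeDetector:
    def __init__(self, diff_thresh=60.0, persist_frames=3):
        self.reference = None              # background scene for the feed
        self.diff_thresh = diff_thresh     # mean abs-diff counting as "changed"
        self.persist_frames = persist_frames
        self.run = 0                       # consecutive changed frames so far

    def update(self, frame):
        frame = frame.astype(float)
        if self.reference is None:
            self.reference = frame         # first frame defines the scene
            return False
        changed = np.abs(frame - self.reference).mean() > self.diff_thresh
        self.run = self.run + 1 if changed else 0
        if self.run >= self.persist_frames:  # persistent: real FOV change
            self.reference = frame           # adopt the new scene
            self.run = 0
            return True
        return False                         # transient variation: ignore

det = FovChangeDetector()
scene = np.full((4, 4), 50.0)
flash = np.full((4, 4), 255.0)             # one-frame illumination spike
new_view = np.full((4, 4), 200.0)          # camera moved to a new scene
events = [det.update(f) for f in (scene, scene, flash, scene,
                                  new_view, new_view, new_view)]
print(events)                              # only the persistent change fires
```

The single flash frame resets the counter when the original scene returns, so no event fires; three consecutive new-view frames exceed `persist_frames` and raise one field-of-view-change event.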
-
Publication No.: US20110043625A1
Publication Date: 2011-02-24
Application No.: US12543223
Filing Date: 2009-08-18
Applicants: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
CPC Class: G06K9/00771, G06T7/254, G06T2207/20081, G06T2207/30232
Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene preset. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise a new scene preset is created based on the current background scene.
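The first step the abstract above names is generating a quadtree decomposition of the background scene into window portions. As a minimal sketch of that step only (the split criterion here, pixel variance against a threshold, is an assumption; the patent's phase-spectrum matching is not reproduced), a quadtree recursively divides the image into quadrants until each window is nearly uniform:

```python
# Illustrative sketch (assumed variance criterion, not the patented method):
# recursively split a square image into quadrants until each window is
# nearly uniform, yielding the window list of a quadtree decomposition.
import numpy as np

def quadtree_windows(img, x=0, y=0, size=None, var_thresh=50.0, min_size=2):
    """Return (x, y, size) windows from a variance-driven quadtree split."""
    if size is None:
        size = img.shape[0]                 # assume a square 2^n image
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]               # uniform enough: keep as one leaf
    half = size // 2                        # otherwise split into 4 quadrants
    wins = []
    for dy in (0, half):
        for dx in (0, half):
            wins += quadtree_windows(img, x + dx, y + dy, half,
                                     var_thresh, min_size)
    return wins

img = np.zeros((8, 8))
img[:4, :4] = 255.0                         # one bright quadrant forces a split
wins = quadtree_windows(img)
print(wins)                                 # four uniform 4x4 leaf windows
```

Saturated or relit regions end up isolated in their own small windows, which is why a per-window comparison is more robust than matching the whole frame at once when lighting changes only locally.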
-
Publication No.: US08797405B2
Publication Date: 2014-08-05
Application No.: US12551332
Filing Date: 2009-08-31
Applicants: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
CPC Class: G06K9/00771, G06K9/00973
Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
-
Publication No.: US09805271B2
Publication Date: 2017-10-31
Application No.: US12543223
Filing Date: 2009-08-18
Applicants: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
CPC Class: G06K9/00771, G06T7/254, G06T2207/20081, G06T2207/30232
Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene preset. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise a new scene preset is created based on the current background scene.
-
Publication No.: US20110050897A1
Publication Date: 2011-03-03
Application No.: US12551332
Filing Date: 2009-08-31
Applicants: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
IPC Class: H04N7/16
CPC Class: G06K9/00771, G06K9/00973
Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
-
Publication No.: US08705861B2
Publication Date: 2014-04-22
Application No.: US13494605
Filing Date: 2012-06-12
Applicants: John Eric Eaton, Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
CPC Class: G06K9/00771, G06K9/00335, G06T7/11, G06T2207/10016
Abstract: Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.
-