USER INTENTION ANALYSIS APPARATUS AND METHOD BASED ON IMAGE INFORMATION OF THREE-DIMENSIONAL SPACE
    1. Invention Application (In Force)

    Publication No.: US20160335485A1

    Publication Date: 2016-11-17

    Application No.: US15092726

    Filing Date: 2016-04-07

    Inventor: Jin Woo KIM

    Abstract: Provided are a user intention analysis apparatus and method based on image information of a three-dimensional (3D) space. The user intention analysis apparatus includes: a 3D space generator configured to generate a 3D virtual space corresponding to an ambient environment, based on the physical relative positions of a plurality of cameras and on image information generated by photographing the ambient environment with the cameras; a 3D image analyzer configured to estimate the relative position between a first object and a second object included in the image information in the 3D virtual space, and to generate contact information of the first object and the second object based on their relative positions; an action pattern recognizer configured to compare the contact information with a pre-learned action pattern to recognize an action pattern of a user who manipulates the first object or the second object; and a user intention recognizer configured to infer, based on ontology, a user intention corresponding to the recognized action pattern.

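    The pipeline the abstract describes (relative-position estimation → contact information → comparison with pre-learned action patterns) can be sketched roughly as follows. This is an illustrative sketch only: the contact threshold, the pattern table, and all function names are assumptions, not the patent's actual implementation.

    ```python
    import numpy as np

    CONTACT_THRESHOLD = 0.05  # metres; hypothetical tolerance for "in contact"

    def contact_info(pos_a, pos_b):
        """Derive contact information from the relative position of two objects."""
        dist = float(np.linalg.norm(pos_b - pos_a))
        return {"distance": dist, "in_contact": dist < CONTACT_THRESHOLD}

    # Hypothetical pre-learned action patterns: short sequences of contact states.
    PATTERNS = {
        "grasp":   [False, False, True, True],
        "release": [True, True, False, False],
    }

    def recognize_action(contact_sequence):
        """Compare an observed contact-state sequence with the pre-learned patterns."""
        for name, pattern in PATTERNS.items():
            if contact_sequence[-len(pattern):] == pattern:
                return name
        return None
    ```

    A downstream intention recognizer would then map the returned action label (e.g. "grasp") onto an intention via an ontology, which is outside the scope of this sketch.
    
    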

    SYSTEM AND METHOD FOR FUSION RECOGNITION USING ACTIVE STICK FILTER

    Publication No.: US20210224616A1

    Publication Date: 2021-07-22

    Application No.: US17150391

    Filing Date: 2021-01-15

    Abstract: Provided are a system and method for fusion recognition using an active stick filter. The system includes: a data input unit configured to receive input information for calibration between an image and a heterogeneous sensor; a matrix calculation unit configured to calculate a correlation for projecting the information of the heterogeneous sensor; a projection unit configured to project the information of the heterogeneous sensor onto an image domain using the correlation; and a two-dimensional (2D) heterogeneous sensor fusion unit configured to perform stick calibration modeling and design, and to apply a stick calibration filter.
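    The projection step in the abstract — mapping heterogeneous sensor data (e.g. LiDAR points) onto the image domain using a calibration correlation — commonly takes the pinhole form x ~ K(RX + t). A minimal sketch under that assumption (the patent's actual correlation calculation and stick filter design are not reproduced here; K, R, and t are standard intrinsic/extrinsic placeholders):

    ```python
    import numpy as np

    def project_points(points_3d, K, R, t):
        """Project N x 3 sensor points into the image domain via x ~ K (R X + t).

        points_3d: (N, 3) array of 3-D points in the sensor frame.
        K: (3, 3) camera intrinsic matrix; R: (3, 3) rotation; t: (3,) translation.
        Returns an (N, 2) array of pixel coordinates.
        """
        cam = (R @ points_3d.T) + t.reshape(3, 1)  # transform into the camera frame
        uv = K @ cam                               # apply intrinsics
        return (uv[:2] / uv[2]).T                  # perspective divide
    ```

    With the calibration fixed, each heterogeneous-sensor measurement lands at a pixel location where it can be fused with image-domain detections.
    
    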

    METHOD AND APPARATUS FOR ANALYZING CONCENTRATION LEVEL OF DRIVER
    3. Invention Application (In Force)

    Publication No.: US20140218188A1

    Publication Date: 2014-08-07

    Application No.: US14161057

    Filing Date: 2014-01-22

    CPC classification number: G08B21/02 B60K28/06

    Abstract: Provided are an apparatus and method for analyzing the concentration level of a driver. The method includes: analyzing quantitative data on the time during which the driver's line of sight (LOS) is dispersed and the time during which it is focused; analyzing the reaction speed of a human-machine interface (HMI) when a command based on device input information, voice information, or gesture information is entered; and evaluating the degree of the driver's LOS dispersion based on the quantitative data and the reaction speed of the HMI's user interface (UI).

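    A hedged sketch of how the two signals in the abstract — gaze dispersion/focus times and HMI reaction speed — might be combined into a single dispersion score. The weights, the 2-second reaction-time ceiling, and the function name are illustrative assumptions, not the patent's scoring rule.

    ```python
    def los_dispersion_score(t_dispersed, t_focused, reaction_time_s,
                             w_time=0.7, w_reaction=0.3):
        """Combine gaze statistics and HMI reaction speed into one score in [0, 1].

        t_dispersed, t_focused: seconds the driver's LOS was dispersed / focused.
        reaction_time_s: HMI UI reaction time for the last command, in seconds.
        Higher scores indicate worse concentration.
        """
        total = t_dispersed + t_focused
        dispersion_ratio = t_dispersed / total if total > 0 else 0.0
        # Normalise reaction time against a hypothetical 2-second ceiling.
        reaction_penalty = min(reaction_time_s / 2.0, 1.0)
        return w_time * dispersion_ratio + w_reaction * reaction_penalty
    ```

    A deployed system would presumably threshold such a score to trigger the G08B21/02-style driver warning the CPC classification suggests.
    
    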

    APPARATUS AND METHOD OF LEARNING POSE OF MOVING OBJECT

    Publication No.: US20190164301A1

    Publication Date: 2019-05-30

    Application No.: US16198536

    Filing Date: 2018-11-21

    Inventor: Jin Woo KIM

    Abstract: Provided is a method of learning the pose of a moving object. The method includes: determining 3D feature points in a 3D mesh model obtained by modeling the general shape of a moving object in advance; fitting the 3D mesh model to a 2D learning image, obtained by photographing the real shape of the moving object in advance, with respect to the determined 3D feature points; obtaining learning data associated with pose estimation of the moving object from the 2D learning image with the 3D mesh model fitted to it; and learning a pose estimation model that estimates the pose of a target moving object in a real image obtained by a camera, using the learning data.
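    The fitting step — aligning 3D mesh feature points to their 2D detections in a learning image — can be illustrated with a weak-perspective (scale plus translation) least-squares fit. This is a simplified stand-in for whatever fitting the patent actually uses; the weak-perspective assumption and all names here are illustrative.

    ```python
    import numpy as np

    def fit_weak_perspective(model_pts_3d, image_pts_2d):
        """Least-squares fit of scale s and translation t so that s * X_xy + t ≈ x.

        model_pts_3d: (N, 3) mesh feature points; image_pts_2d: (N, 2) detections.
        """
        X = model_pts_3d[:, :2]          # orthographic projection of the mesh points
        x = image_pts_2d
        Xc = X - X.mean(axis=0)          # centre both point sets
        xc = x - x.mean(axis=0)
        s = float((xc * Xc).sum() / (Xc * Xc).sum())
        t = x.mean(axis=0) - s * X.mean(axis=0)
        return s, t

    def make_learning_sample(model_pts_3d, image_pts_2d):
        """Pair 2-D detections with the fitted mesh points: one pose-learning sample."""
        s, t = fit_weak_perspective(model_pts_3d, image_pts_2d)
        fitted_2d = s * model_pts_3d[:, :2] + t
        return {"image_pts": image_pts_2d, "fitted_pts": fitted_2d,
                "scale": s, "translation": t}
    ```

    Many such (image, fitted-mesh) pairs would then supervise the pose estimation model that the abstract says is learned for real camera images.
    
    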
