APPARATUS AND METHOD FOR DETECTING OBJECT USING MULTI-DIRECTIONAL INTEGRAL IMAGE
    Invention Application (Granted)

    Publication Number: US20160110631A1

    Publication Date: 2016-04-21

    Application Number: US14529767

    Application Date: 2014-10-31

    CPC classification number: G06K9/4647 G06K9/4614 G06K2019/06253

    Abstract: An apparatus and method for detecting an object using a multi-directional integral image are disclosed. The apparatus includes an area segmentation unit, an integral image calculation unit, and an object detection unit. The area segmentation unit places windows having a size of x*y on a full image having w*h pixels so that they overlap each other at their edges, thereby segmenting the full image into a single area, a double area and a quadruple area. The integral image calculation unit calculates a single directional integral image for the single area, and calculates multi-directional integral images for the double and quadruple areas. The object detection unit detects an object for the full image using the single directional integral image and the multi-directional integral images.
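    The "single directional integral image" above is the standard summed-area table, which lets any rectangular window sum be read off in four lookups. A minimal sketch (the function names and the top-left scan direction are illustrative, not the patent's multi-directional scheme):

```python
import numpy as np

def integral_image(img):
    """Single-directional (top-left origin) integral image:
    ii[y, x] = sum of img[0..y, 0..x]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def window_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive window [y0..y1, x0..x1]
    using four lookups into the integral image."""
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(window_sum(ii, 1, 1, 2, 2))  # sum of img[1:3, 1:3] -> 30
```

    The patent's contribution is computing such tables in multiple scan directions over the overlapping single, double, and quadruple window areas; the sketch shows only the conventional one-direction case.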


    HUMAN BEHAVIOR RECOGNITION APPARATUS AND METHOD

    Publication Number: US20200074158A1

    Publication Date: 2020-03-05

    Application Number: US16213833

    Application Date: 2018-12-07

    Abstract: Disclosed herein are a human behavior recognition apparatus and method. The human behavior recognition apparatus includes a multimodal sensor unit for generating at least one of image information, sound information, location information, and Internet-of-Things (IoT) information of a person using a multimodal sensor, a contextual information extraction unit for extracting contextual information for recognizing actions of the person from the at least one piece of generated information, a human behavior recognition unit for generating behavior recognition information by recognizing the actions of the person using the contextual information and recognizing a final action of the person using the behavior recognition information and behavior intention information, and a behavior intention inference unit for generating the behavior intention information based on context of action occurrence related to each of the actions of the person included in the behavior recognition information.

    APPARATUS AND METHOD FOR CREATING PROBABILITY-BASED RADIO MAP FOR COOPERATIVE INTELLIGENT ROBOTS
    Invention Application (Granted)

    Publication Number: US20140195049A1

    Publication Date: 2014-07-10

    Application Number: US14017718

    Application Date: 2013-09-04

    Abstract: An apparatus for creating a radio map includes a radio signal acquiring unit that acquires information on radio signals between one or more cooperative intelligent robots, a radio environment modeling unit that estimates radio strength for each cell configuring the radio map from the information on radio signals acquired by the radio signal acquiring unit, and a radio map creating unit that classifies a communication region of each cell and models the radio map according to the radio strength for each cell estimated by the radio environment modeling unit.
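    One plausible reading of the per-cell estimation step is grid-based aggregation of signal-strength samples. The sketch below uses simple per-cell averaging and a fixed threshold to classify each cell's communication region; the cell size, sample format, and averaging rule are assumptions, not the patent's model:

```python
from collections import defaultdict

def estimate_cell_strength(samples, cell_size):
    """Average RSSI samples falling into each grid cell.
    samples: iterable of (x, y, rssi) measurements between robots."""
    acc = defaultdict(lambda: [0.0, 0])
    for x, y, rssi in samples:
        cell = (int(x // cell_size), int(y // cell_size))
        acc[cell][0] += rssi
        acc[cell][1] += 1
    return {cell: s / n for cell, (s, n) in acc.items()}

def classify_region(strength, threshold=-70.0):
    """Label each cell's communication region by its mean strength (dBm)."""
    return {cell: ('connected' if v >= threshold else 'weak')
            for cell, v in strength.items()}

samples = [(0.5, 0.5, -60), (0.8, 0.2, -64), (3.1, 3.4, -82)]
strength = estimate_cell_strength(samples, cell_size=1.0)
print(classify_region(strength))
```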


    APPARATUS AND METHOD FOR CLASSIFYING CLOTHING ATTRIBUTES BASED ON DEEP LEARNING

    Publication Number: US20230053151A1

    Publication Date: 2023-02-16

    Application Number: US17496588

    Application Date: 2021-10-07

    Abstract: Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
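    The final classification unit combines the whole-image result with the mask-guided result. The abstract does not state the fusion rule, so the weighted score average below is purely an illustrative stand-in:

```python
import numpy as np

def fuse_classifications(first_scores, second_scores, weight=0.5):
    """Combine the two per-attribute score vectors into a final class index.
    The weighted average is an assumed fusion rule; the patent only states
    that both classification results feed the final decision."""
    fused = weight * first_scores + (1.0 - weight) * second_scores
    return int(fused.argmax(axis=-1))

# Scores over 3 candidate values of one clothing attribute (e.g. sleeve length)
first = np.array([0.2, 0.5, 0.3])   # first classifier: whole input image
second = np.array([0.1, 0.3, 0.6])  # second classifier: mask-tensor guided
print(fuse_classifications(first, second))  # -> 2
```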

    APPARATUS FOR DETERMINING SPEECH PROPERTIES AND MOTION PROPERTIES OF INTERACTIVE ROBOT AND METHOD THEREOF

    Publication Number: US20190164548A1

    Publication Date: 2019-05-30

    Application Number: US16102398

    Application Date: 2018-08-13

    Abstract: Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method for determining the speech and motion properties of an interactive robot includes receiving interlocutor conversation information including at least one of voice information and image information about an interlocutor that interacts with an interactive robot, extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information, determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor, and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property of the interactive robot.
