VIDEO CODEC AND MOTION ESTIMATION METHOD
    12.
    Invention Application

    Publication No.: US20170324968A1

    Publication Date: 2017-11-09

    Application No.: US15661539

    Filing Date: 2017-07-27

    CPC classification number: H04N19/433 H04N19/51

    Abstract: The invention provides a video codec. In one embodiment, the video codec is coupled to an outer memory storing a reference frame, and comprises an interface circuit, an in-chip memory, a motion estimation circuit, and a controller. The interface circuit obtains in-chip data from the reference frame stored in the outer memory. The in-chip memory stores the in-chip data. The motion estimation circuit retrieves search window data from the in-chip data with a search window, and performs a motion estimation process on a current macroblock according to the search-window data. The controller shifts the location of the search window when the current macroblock is shifted, marks a macroblock shifted out from the search window as an empty macroblock, and controls the interface circuit to obtain an updated macroblock for replacing the empty macroblock in the in-chip memory from the reference frame stored in the outer memory.
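
    A minimal sketch of the search-window maintenance the abstract describes, written in Python only for illustration: the circular reuse of in-chip slots, and the names SearchWindow, fetch_macroblock, and EMPTY, are assumptions rather than details taken from the patent.

```python
EMPTY = None  # marker for an in-chip macroblock slot whose data has been shifted out

class SearchWindow:
    def __init__(self, win_w_mb, win_h_mb, frame_w_mb):
        self.win_w = win_w_mb      # search-window width, in macroblocks
        self.win_h = win_h_mb      # search-window height, in macroblocks
        self.frame_w = frame_w_mb  # reference-frame width, in macroblocks
        self.origin_x = 0          # leftmost macroblock column covered by the window
        # in-chip memory modeled as (column slot, row) -> macroblock data
        self.buffer = {(c, r): EMPTY for c in range(win_w_mb) for r in range(win_h_mb)}

    def shift_right(self, fetch_macroblock):
        """Advance the window one macroblock column when the current macroblock
        advances: mark the departing column as empty, then replace it with the
        updated column fetched from the reference frame in the outer memory."""
        slot = self.origin_x % self.win_w         # in-chip slot reused circularly
        for r in range(self.win_h):
            self.buffer[(slot, r)] = EMPTY        # macroblocks shifted out become empty
        self.origin_x += 1
        new_col = self.origin_x + self.win_w - 1  # column newly covered by the window
        if new_col < self.frame_w:                # stop fetching at the frame edge
            for r in range(self.win_h):
                # the interface circuit supplies the updated macroblock
                self.buffer[(slot, r)] = fetch_macroblock(new_col, r)

# Example with a stub fetcher standing in for the interface circuit:
win = SearchWindow(win_w_mb=4, win_h_mb=3, frame_w_mb=120)
win.shift_right(lambda col, row: ("MB", col, row))
```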

    VIDEO CODEC AND MOTION ESTIMATION METHOD
    13.
    Invention Application

    Publication No.: US20170302942A1

    Publication Date: 2017-10-19

    Application No.: US15584572

    Filing Date: 2017-05-02

    CPC classification number: H04N19/433 H04N19/51

    Abstract: The invention provides a video codec. In one embodiment, the video codec is coupled to an outer memory storing a reference frame, and comprises an interface circuit, an in-chip memory, a motion estimation circuit, and a controller. The interface circuit obtains in-chip data from the reference frame stored in the outer memory. The in-chip memory stores the in-chip data. The motion estimation circuit retrieves search window data from the in-chip data with a search window, and performs a motion estimation process on a current macroblock according to the search-window data. The controller shifts the location of the search window when the current macroblock is shifted, marks a macroblock shifted out from the search window as an empty macroblock, and controls the interface circuit to obtain an updated macroblock for replacing the empty macroblock in the in-chip memory from the reference frame stored in the outer memory.
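
    The abstract leaves the motion estimation process itself open; one common realization is an exhaustive sum-of-absolute-differences (SAD) search over the search-window samples held in the in-chip memory. The sketch below assumes NumPy arrays of luma samples and a hypothetical name full_search_sad; it illustrates the idea rather than the claimed circuit.

```python
import numpy as np

def full_search_sad(current_mb, window, mb_size=16):
    """Compare the current macroblock against every candidate position inside
    the search-window samples and return the displacement with the lowest
    sum of absolute differences (SAD)."""
    best_sad, best_mv = None, (0, 0)
    rows, cols = window.shape
    cur = current_mb.astype(np.int32)
    for dy in range(rows - mb_size + 1):
        for dx in range(cols - mb_size + 1):
            cand = window[dy:dy + mb_size, dx:dx + mb_size].astype(np.int32)
            sad = int(np.abs(cur - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# Example: a 16x16 macroblock searched inside a 48x48 window of luma samples.
rng = np.random.default_rng(0)
window = rng.integers(0, 256, size=(48, 48), dtype=np.uint8)
current = window[10:26, 20:36].copy()        # plant the block at offset (20, 10)
print(full_search_sad(current, window))      # -> ((20, 10), 0)
```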

    AVATAR FACIAL EXPRESSION AND/OR SPEECH DRIVEN ANIMATIONS
    14.
    Invention Application
    Legal Status: Pending (Published)

    Publication No.: US20170039750A1

    Publication Date: 2017-02-09

    Application No.: US14914561

    Filing Date: 2015-03-27

    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial expression and speech tracker to respectively receive a plurality of image frames and audio of a user, and analyze the image frames and the audio to determine and track facial expressions and speech of the user. The tracker may further select a plurality of blend shapes, including assignment of weights of the blend shapes, for animating the avatar, based on tracked facial expressions or speech of the user. The tracker may select the plurality of blend shapes, including assignment of weights of the blend shapes, based on the tracked speech of the user, when visual conditions for tracking facial expressions of the user are determined to be below a quality threshold. Other embodiments may be disclosed and/or claimed.
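
    The fallback rule in the abstract, expression-driven blend-shape weights under good visual conditions and speech-driven weights when those conditions drop below a quality threshold, can be summarized in a few lines. The function name, the input dictionaries, and the 0.5 threshold below are hypothetical stand-ins for the tracker's outputs.

```python
def select_blend_shapes(expression_weights, speech_weights, visual_quality,
                        quality_threshold=0.5):
    """Return blend-shape weights for animating the avatar: expression-driven
    when visual tracking conditions meet the threshold, speech-driven otherwise."""
    if expression_weights and visual_quality >= quality_threshold:
        # Good visual conditions: weights follow the tracked facial expressions.
        return dict(expression_weights)
    # Visual conditions below the quality threshold: fall back to weights derived
    # from the tracked speech (e.g. mouth shapes from recognized visemes).
    return dict(speech_weights)

# Example: poor lighting pushes the quality score below the threshold,
# so the mouth is driven by speech instead of the unreliable video.
print(select_blend_shapes(
    expression_weights={"jaw_open": 0.1, "eye_blink_left": 1.0},
    speech_weights={"jaw_open": 0.6},
    visual_quality=0.3,
))  # -> {'jaw_open': 0.6}
```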

    AVATAR VIDEO APPARATUS AND METHOD
    18.
    Invention Application
    Legal Status: Granted (in force)

    Publication No.: US20160300379A1

    Publication Date: 2016-10-13

    Application No.: US14775324

    Filing Date: 2014-11-05

    Abstract: Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs, and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the one or more animation messages, and drive a plurality of avatar models, to animate and render the plurality of avatars with the facial expression depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars, to generate a video. Other embodiments may be described and/or claimed.
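
    A minimal sketch of how the three components named in the abstract could be wired together, assuming each is a plain Python callable; the names make_avatar_video and AnimationMessage and the dict-based message format are assumptions, not the claimed apparatus.

```python
from typing import Callable, Iterable, List

AnimationMessage = dict  # facial-expression parameters for one avatar (assumed format)

def make_avatar_video(inputs,
                      expression_engines: Iterable[Callable],
                      render_engine: Callable,
                      video_generator: Callable):
    """Wire the components named in the abstract together:
    inputs -> animation messages -> animated and rendered avatars -> video."""
    # 1. Each facial expression engine turns the video/voice/text inputs it
    #    understands into animation messages with facial-expression parameters.
    messages: List[AnimationMessage] = []
    for engine in expression_engines:
        messages.extend(engine(inputs))
    # 2. The animation-rendering engine drives the avatar models to animate and
    #    render the avatars with the expressions the messages depict.
    frames = render_engine(messages)
    # 3. The video generator captures the animation and rendering as a video.
    return video_generator(frames)
```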

    FACIAL EXPRESSION AND/OR INTERACTION DRIVEN AVATAR APPARATUS AND METHOD
    19.
    Invention Application
    Legal Status: Pending (Published)

    Publication No.: US20160042548A1

    Publication Date: 2016-02-11

    Application No.: US14416580

    Filing Date: 2014-03-19

    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth, and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw, and roll, head movement along the horizontal and vertical directions, and the head coming closer to or moving farther from the camera. Other embodiments may be described and/or claimed.
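
    The inter-frame difference with pixel sampling can be illustrated with a short sketch; the region coordinates, sampling stride, and threshold below are illustrative assumptions, and a real tracker would evaluate such signals per facial region (mouth, eyes, head) and over time.

```python
import numpy as np

def region_changed(prev_frame, cur_frame, region, step=4, threshold=12.0):
    """Sample pixels inside a region of interest of two consecutive frames and
    report whether the mean absolute difference exceeds a threshold, the kind
    of inter-frame signal that can flag a blink (eye region) or a mouth opening
    or closing (mouth region)."""
    x0, y0, x1, y1 = region
    prev_patch = prev_frame[y0:y1:step, x0:x1:step].astype(np.float32)
    cur_patch = cur_frame[y0:y1:step, x0:x1:step].astype(np.float32)
    return float(np.abs(cur_patch - prev_patch).mean()) > threshold

# Example: two synthetic grayscale frames where only the eye region changes.
prev = np.zeros((240, 320), dtype=np.uint8)
cur = prev.copy()
cur[60:80, 100:140] = 200                                    # simulate a closing eyelid
print(region_changed(prev, cur, region=(100, 60, 140, 80)))  # -> True
```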
