UNSUPERVISED MODEL DRIFT ESTIMATION SYSTEM FOR DATASET SHIFT DETECTION AND MODEL SELECTION

    Publication No.: US20240370731A1

    Publication Date: 2024-11-07

    Application No.: US18573527

    Filing Date: 2021-11-03

    Abstract: Systems, apparatuses and methods include technology that identifies a first neural network, wherein the first neural network is associated with a first training parameter and first population data that are generated during a process to train the first neural network. The technology executes a first neural network process to serve input data with the first neural network, and estimates a first drift of the first neural network based on the first neural network process, the first training parameter and the first population data to determine whether to retrain the first neural network.
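As a rough illustration of unsupervised drift estimation, the Population Stability Index (PSI) compares the distribution of serving-time input data against population statistics captured at training time. The choice of PSI, the binning scheme, and the retraining thresholds below are assumptions for the sketch, not the patented estimator.

```python
import numpy as np

def population_stability_index(train_sample, serve_sample, bins=10):
    """Population Stability Index between the training population and the
    data a model is currently serving. PSI near 0 means little shift;
    values above ~0.2 are commonly read as significant drift."""
    # Shared equal-width bins spanning both samples.
    lo = min(train_sample.min(), serve_sample.min())
    hi = max(train_sample.max(), serve_sample.max())
    edges = np.linspace(lo, hi, bins + 1)
    train_frac = np.histogram(train_sample, bins=edges)[0] / len(train_sample)
    serve_frac = np.histogram(serve_sample, bins=edges)[0] / len(serve_sample)
    # Clip empty bins so the log term stays finite.
    train_frac = np.clip(train_frac, 1e-6, None)
    serve_frac = np.clip(serve_frac, 1e-6, None)
    return float(np.sum((serve_frac - train_frac) * np.log(serve_frac / train_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # population data captured at training time
steady = rng.normal(0.0, 1.0, 10_000)   # serving data with no shift
shifted = rng.normal(1.5, 1.0, 10_000)  # serving data after a mean shift

print(population_stability_index(train, steady))   # small: no retraining signal
print(population_stability_index(train, shifted))  # large: candidate for retraining
```

A real system would track one such statistic per input feature and combine them into the retraining decision.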

    CONVOLUTIONAL NEURAL NETWORK TUNING SYSTEMS AND METHODS

    Publication No.: US20220207375A1

    Publication Date: 2022-06-30

    Application No.: US17572487

    Filing Date: 2022-01-10

    Abstract: Systems and methods are provided that tune a convolutional neural network (CNN) to increase both its accuracy and computational efficiency. In some examples, a computing device storing the CNN includes a CNN tuner that is a hardware and/or software component that is configured to execute a tuning process on the CNN. When executing according to this configuration, the CNN tuner iteratively processes the CNN layer by layer to compress and prune selected layers. In so doing, the CNN tuner identifies and removes links and neurons that are superfluous or detrimental to the accuracy of the CNN.
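The layer-by-layer pruning the abstract describes can be gestured at with magnitude pruning, a standard stand-in criterion for identifying superfluous links; the patent's actual test for which links are superfluous or detrimental is not specified here.

```python
import numpy as np

def prune_layer(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a layer's weights.
    Magnitude is a common proxy for a link being superfluous; a tuner
    like the one described would re-check accuracy after each layer."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# A toy 3x3 convolution kernel: 8 output channels, 4 input channels.
rng = np.random.default_rng(1)
kernel = rng.normal(size=(8, 4, 3, 3))
pruned = prune_layer(kernel, sparsity=0.5)
print(np.mean(pruned == 0))  # roughly half the links removed
```

Iterating this per layer, with an accuracy check and a compression step between layers, mirrors the tuning loop the abstract outlines.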

    AUTOMATIC PERSPECTIVE CONTROL USING VANISHING POINTS
    Status: Invention application, pending (published)

    Publication No.: US20170076434A1

    Publication Date: 2017-03-16

    Application No.: US14853272

    Filing Date: 2015-09-14

    CPC classification number: G06T3/60 G06T3/00 G06T3/0093 G06T7/13

    Abstract: Techniques related to automatic perspective control of images using vanishing points are discussed. Such techniques may include determining a perspective control vanishing point associated with the image based on lines detected within the image, rotating the image based on the perspective control vanishing point to generate an aligned image, and warping the aligned image by aligning two of the detected lines that meet at the perspective control vanishing point.

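One common way to locate the vanishing point that a set of detected lines share: represent each line in homogeneous coordinates as the cross product of two of its points, then take the least-squares intersection as the smallest right singular vector of the stacked line matrix. This is a generic estimation step, not necessarily the patent's method.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of image line segments.
    Each segment is ((x1, y1), (x2, y2)); each line is the cross product
    of its endpoints in homogeneous coordinates, and the vanishing point
    is the singular vector minimizing the summed line-point residuals."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        line = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        lines.append(line / np.linalg.norm(line))
    _, _, vt = np.linalg.svd(np.asarray(lines))
    v = vt[-1]
    return v[:2] / v[2]  # back to inhomogeneous image coordinates

# Three segments lying on lines that all pass through (100, -50).
segs = [((0, -100), (50, -75)),   # slope  0.5
        ((0, -150), (50, -100)),  # slope  1.0
        ((0, 0), (50, -25))]      # slope -0.5
print(vanishing_point(segs))  # ≈ [100., -50.]
```

The estimated point then fixes the rotation angle for alignment and anchors the two lines used to drive the warp.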

    3D FACE MODEL RECONSTRUCTION APPARATUS AND METHOD
    Status: Invention application, granted

    Publication No.: US20160275721A1

    Publication Date: 2016-09-22

    Application No.: US14443337

    Filing Date: 2014-06-20

    Abstract: Apparatuses, methods and storage medium associated with 3D face model reconstruction are disclosed herein. In embodiments, an apparatus may include a facial landmark detector, a model fitter and a model tracker. The facial landmark detector may be configured to detect a plurality of landmarks of a face and their locations within each of a plurality of image frames. The model fitter may be configured to generate a 3D model of the face from a 3D model of a neutral face, in view of detected landmarks of the face and their locations within a first one of the plurality of image frames. The model tracker may be configured to maintain the 3D model to track the face in subsequent image frames, successively updating the 3D model in view of detected landmarks of the face and their locations within each of successive ones of the plurality of image frames. In embodiments, the facial landmark detector may include a face detector, an initial facial landmark detector, and one or more facial landmark detection linear regressors. Other embodiments may be described and/or claimed.

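The "facial landmark detection linear regressors" mentioned in the abstract are typically cascade stages, each mapping image features sampled around the current landmark estimate to a corrective offset. A minimal single-stage sketch with synthetic features (real systems use shape-indexed pixel features; everything below is illustrative):

```python
import numpy as np

def train_stage(features, offsets, ridge=1e-3):
    """Fit one cascade stage by ridge least squares: learn W mapping
    features extracted at the current landmark estimates to the residual
    offsets toward the ground-truth landmark locations."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ offsets)

def apply_stage(W, features, landmarks):
    """Refine the landmark estimate with the learned linear update."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return landmarks + X @ W

rng = np.random.default_rng(2)
true_map = rng.normal(size=(6, 4))   # hidden feature-to-offset relation
feats = rng.normal(size=(200, 6))
offsets = feats @ true_map           # 2 landmarks x (x, y) = 4 output dims
W = train_stage(feats, offsets)
coarse = np.zeros((200, 4))          # initial landmark guesses
refined = apply_stage(W, feats, coarse)
print(np.abs(refined - offsets).max())  # near zero: stage recovered the map
```

Stacking several such stages, each trained on the residuals of the last, gives the regressor cascade; the fitted landmarks then constrain the 3D model fitter and tracker.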

    AVATAR VIDEO APPARATUS AND METHOD
    Status: Invention application, granted

    Publication No.: US20160300379A1

    Publication Date: 2016-10-13

    Application No.: US14775324

    Filing Date: 2014-11-05

    Abstract: Apparatuses, methods and storage medium associated with creating an avatar video are disclosed herein. In embodiments, the apparatus may include one or more facial expression engines, an animation-rendering engine, and a video generator. The one or more facial expression engines may be configured to receive video, voice and/or text inputs, and, in response, generate a plurality of animation messages having facial expression parameters that depict facial expressions for a plurality of avatars based at least in part on the video, voice and/or text inputs received. The animation-rendering engine may be configured to receive the one or more animation messages, and drive a plurality of avatar models, to animate and render the plurality of avatars with the facial expression depicted. The video generator may be configured to capture the animation and rendering of the plurality of avatars, to generate a video. Other embodiments may be described and/or claimed.

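The three-stage pipeline in the abstract (expression engines emitting animation messages, a rendering engine consuming them, a video generator capturing the result) can be sketched as a message flow; all field and function names below are illustrative, not the patent's message format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AnimationMessage:
    """Facial expression parameters for one avatar at one frame."""
    avatar_id: int
    frame: int
    expression_params: List[float]  # e.g. blendshape weights

def expression_engine(inputs):
    """Stand-in for a video/voice/text-driven expression engine: maps each
    input sample to per-avatar expression parameters."""
    return [AnimationMessage(avatar_id=0, frame=i, expression_params=[float(x)])
            for i, x in enumerate(inputs)]

def render_engine(messages):
    """Stand-in renderer: 'draws' each animated avatar frame as a string."""
    return [f"avatar{m.avatar_id}/frame{m.frame}: {m.expression_params}"
            for m in messages]

def video_generator(frames):
    """Stand-in for capturing rendered frames and encoding a video."""
    return "\n".join(frames)

video = video_generator(render_engine(expression_engine([0.1, 0.7])))
print(video)
```

The point of the message layer is decoupling: several expression engines (video-, voice- or text-driven) can feed one renderer through the same message type.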

    FACIAL EXPRESSION AND/OR INTERACTION DRIVEN AVATAR APPARATUS AND METHOD
    Status: Invention application, pending (published)

    Publication No.: US20160042548A1

    Publication Date: 2016-02-11

    Application No.: US14416580

    Filing Date: 2014-03-19

    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict the facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth, and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw and roll, head movement along the horizontal and vertical directions, and the head moving closer to or farther from the camera. Other embodiments may be described and/or claimed.

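The inter-frame-difference detection the abstract mentions can be sketched as thresholding mean pixel change inside a facial region of interest (an eye or mouth bounding box). The regions and threshold below are toy assumptions, not the patented tracker.

```python
import numpy as np

def region_motion(prev_frame, cur_frame, region, threshold=10.0):
    """Flag facial action in a region from inter-frame pixel differences.
    region is (top, bottom, left, right) in pixel coordinates; returns the
    mean absolute difference and whether it exceeds the threshold."""
    t, b, l, r = region
    diff = np.abs(cur_frame[t:b, l:r].astype(float)
                  - prev_frame[t:b, l:r].astype(float))
    score = float(diff.mean())
    return score, score > threshold

# Toy 64x64 grayscale frames: a hypothetical eye region darkens between
# frames (a blink) while the rest of the face stays static.
prev = np.full((64, 64), 200, dtype=np.uint8)
cur = prev.copy()
cur[10:20, 15:30] = 80  # the "eye" closes

score, moved = region_motion(prev, cur, region=(10, 20, 15, 30))
_, still = region_motion(prev, cur, region=(40, 60, 10, 50))
print(moved, still)  # blink region fires, static region does not
```

A full tracker would convert such per-region signals into the facial motion parameters (mouth open/close, blink) that drive the avatar, with head pose handled by a separate estimator.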
