    12. SINGLE-IMAGE INVERSE RENDERING
    Invention Application

    Publication No.: US20230081641A1

    Publication Date: 2023-03-16

    Application No.: US17551046

    Filing Date: 2021-12-14

    Abstract: A single two-dimensional (2D) image can be used as input to obtain a three-dimensional (3D) representation of the 2D image. An encoder extracts features from the 2D image, and a trained 2D convolutional neural network (CNN) determines a 3D representation of the 2D image from those features. Volumetric rendering is then run on the 3D representation to combine features within one or more viewing directions, and the combined features are provided as input to a multilayer perceptron (MLP) that predicts and outputs color (or multi-dimensional neural features) and density values for each point within the 3D representation. As a result, single-image inverse rendering may be performed using only a single 2D image as input to create a corresponding 3D representation of the scene in the single 2D image.
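    The pipeline the abstract describes (encoder features, a CNN-derived 3D representation, volumetric feature aggregation along viewing directions, and an MLP head for color and density) can be sketched roughly as follows. This is a minimal, hypothetical PyTorch sketch; the module sizes, the depth-binned feature volume, and the mean aggregation along rays are assumptions for illustration, not the patented implementation.

```python
# Hypothetical PyTorch sketch: a 2D encoder produces a coarse 3D feature volume,
# features are aggregated along viewing rays, and an MLP maps the aggregated
# features to color and density. All shapes and sizes are assumptions.
import torch
import torch.nn as nn

class SingleImageInverseRenderer(nn.Module):
    def __init__(self, feat=32, depth_bins=16):
        super().__init__()
        # 2D CNN encoder: lifts the RGB image to a (feat * depth_bins)-channel map,
        # reshaped into a coarse 3D feature volume.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat * depth_bins, 3, padding=1),
        )
        self.depth_bins = depth_bins
        self.feat = feat
        # MLP head: predicts RGB color and density for each aggregated sample.
        self.mlp = nn.Sequential(
            nn.Linear(feat, 64), nn.ReLU(),
            nn.Linear(64, 4),  # 3 color channels + 1 density
        )

    def forward(self, image):
        b, _, h, w = image.shape
        vol = self.encoder(image).view(b, self.feat, self.depth_bins, h, w)
        # Toy "volumetric rendering": combine features along the depth axis
        # (a stand-in for sampling along each viewing ray).
        ray_feat = vol.mean(dim=2)                      # (b, feat, h, w)
        ray_feat = ray_feat.permute(0, 2, 3, 1)         # (b, h, w, feat)
        out = self.mlp(ray_feat)                        # (b, h, w, 4)
        color, density = out[..., :3], out[..., 3:]
        return torch.sigmoid(color), torch.relu(density)

model = SingleImageInverseRenderer()
rgb, sigma = model(torch.rand(1, 3, 64, 64))  # single 2D image in, color/density out
```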

    13. VIEW SYNTHESIS USING NEURAL NETWORKS
    Invention Application

    Publication No.: US20200294194A1

    Publication Date: 2020-09-17

    Application No.: US16299062

    Filing Date: 2019-03-11

    Abstract: A video stitching system combines video from different cameras to form a panoramic video that, in various embodiments, is temporally stable and tolerant to strong parallax. In an embodiment, the system provides a smooth spatial interpolation that can be used to connect the input video images. In an embodiment, the system applies an interpolation layer to slices of the overlapping video sources, and the network learns a dense flow field to smoothly align the input videos with spatial interpolation. Various embodiments are applicable to areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
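    As a rough illustration of learning a dense flow field that aligns overlapping video slices and blending them with a spatial interpolation, the following hypothetical PyTorch sketch predicts per-pixel flow over the overlap region, warps one slice toward the other, and blends with a linear ramp. The tiny network and the linear blend are assumptions, not the patented system.

```python
# Sketch of flow-based stitching for two overlapping slices: a small (assumed)
# network predicts a dense flow field, the right slice is backward-warped with
# grid_sample, and a linear spatial interpolation blends the aligned slices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverlapFlowNet(nn.Module):
    """Predicts a dense 2D flow field from a pair of overlapping image slices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

def warp(img, flow):
    """Backward-warp img by a per-pixel flow field using grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float()       # (h, w, 2) pixel coordinates
    grid = base + flow.permute(0, 2, 3, 1)             # add predicted flow
    gx = 2 * grid[..., 0] / (w - 1) - 1                # normalize to [-1, 1]
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

left = torch.rand(1, 3, 64, 32)     # overlapping slices from two cameras
right = torch.rand(1, 3, 64, 32)
flow = OverlapFlowNet()(left, right)
aligned = warp(right, flow)
alpha = torch.linspace(0, 1, 32).view(1, 1, 1, 32)     # spatial interpolation ramp
stitched_overlap = (1 - alpha) * left + alpha * aligned
```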

    14. Interaction with and display of photographic images in an image stack
    Granted Invention Patent (in force)

    Publication No.: US09412042B2

    Publication Date: 2016-08-09

    Application No.: US13870901

    Filing Date: 2013-04-25

    Abstract: A number of images of a scene are captured and stored. The images are captured over a range of values for an attribute (e.g., a camera setting). One of the images is displayed. A location of interest in the displayed image is identified. Regions that correspond to the location of interest are identified in each of the images. Those regions are evaluated to identify which of the regions is rated highest with respect to the attribute relative to the other regions. The image that includes the highest-rated region is then displayed.
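    A minimal numpy sketch of this interaction, assuming a focus-style stack and using local variance as a stand-in rating metric: rate the region around the selected location in every image of the stack, then switch the display to the highest-rated image.

```python
# Minimal sketch of the described interaction: given a stack of images captured
# across an attribute range (e.g., focus), rate a small region around a selected
# location in every image and pick the image whose region rates best.
# The sharpness metric (local variance) is an assumed stand-in.
import numpy as np

def best_image_for_location(stack, x, y, radius=16):
    """Return the index of the image whose region around (x, y) rates highest."""
    scores = []
    for img in stack:                      # stack: list of (H, W) grayscale arrays
        h, w = img.shape
        y0, y1 = max(0, y - radius), min(h, y + radius)
        x0, x1 = max(0, x - radius), min(w, x + radius)
        region = img[y0:y1, x0:x1].astype(np.float64)
        scores.append(region.var())        # higher local variance ~ sharper region
    return int(np.argmax(scores))

# Usage: user selects location (x=200, y=120); display the best-rated image.
stack = [np.random.rand(480, 640) for _ in range(5)]
selected = best_image_for_location(stack, x=200, y=120)
```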

    15. UNIFIED OPTIMIZATION METHOD FOR END-TO-END CAMERA IMAGE PROCESSING FOR TRANSLATING A SENSOR CAPTURED IMAGE TO A DISPLAY IMAGE
    Invention Application (in force)

    Publication No.: US20150206504A1

    Publication Date: 2015-07-23

    Application No.: US14600507

    Filing Date: 2015-01-20

    Abstract: A computer implemented method of determining a latent image from an observed image is disclosed. The method comprises implementing a plurality of image processing operations within a single optimization framework, wherein the single optimization framework comprises solving a linear minimization expression. The method further comprises mapping the linear minimization expression onto at least one non-linear solver. Further, the method comprises using the non-linear solver, iteratively solving the linear minimization expression in order to extract the latent image from the observed image, wherein the linear minimization expression comprises: a data term, and a regularization term, and wherein the regularization term comprises a plurality of non-linear image priors.
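    The objective the abstract describes, a data term plus a regularization term built from non-linear image priors that is minimized iteratively, can be illustrated with a toy numpy example. The identity observation operator, the Charbonnier (smoothed total-variation) prior, and plain gradient descent below are assumptions chosen for brevity, not the disclosed solver.

```python
# Illustrative sketch of minimizing a data term plus a non-linear image prior
# with an iterative solver. The specific prior, step size, and observation model
# are assumptions for demonstration only.
import numpy as np

def data_grad(x, b):
    """Gradient of 0.5 * ||x - b||^2 (identity observation operator assumed)."""
    return x - b

def prior_grad(x, eps=1e-3):
    """Gradient of a Charbonnier prior sum(sqrt(dx^2 + eps^2)) on horizontal differences."""
    dx = np.diff(x, axis=1)
    g = dx / np.sqrt(dx ** 2 + eps ** 2)
    out = np.zeros_like(x)
    out[:, :-1] -= g        # each pixel appears with -1 in the difference at its column
    out[:, 1:] += g         # and with +1 in the difference at the previous column
    return out

def solve_latent(observed, lam=0.1, step=0.2, iters=200):
    """Iteratively estimate the latent image from the observed image."""
    x = observed.copy()
    for _ in range(iters):
        x -= step * (data_grad(x, observed) + lam * prior_grad(x))
    return x

latent = solve_latent(np.random.rand(64, 64))
```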

    16. Techniques for registering and warping image stacks
    Granted Invention Patent (in force)

    Publication No.: US08929683B2

    Publication Date: 2015-01-06

    Application No.: US13874357

    Filing Date: 2013-04-30

    CPC classification number: G06T5/50 G06T5/005 G06T2207/20208 G06T2207/20221

    Abstract: A set of images is processed to modify and register the images to a reference image in preparation for blending the images to create a high-dynamic range image. To modify and register a source image to a reference image, a processing unit generates correspondence information for the source image based on a global correspondence algorithm, generates a warped source image based on the correspondence information, estimates one or more color transfer functions for the source image, and fills the holes in the warped source image. The holes in the warped source image are filled based on either a rigid transformation of a corresponding region of the source image or a transformation of the reference image based on the color transfer functions.
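    A compact sketch of the modify-and-register step for one source/reference pair, using OpenCV primitives as stand-ins: Farneback optical flow for the global correspondence, backward warping, a per-channel gain as the color transfer function, and hole filling from the color-transferred reference. All of these specific choices are assumptions for illustration, not the patented method.

```python
# Sketch: register a source exposure to a reference image, then fill warp holes
# from the color-transferred reference. Assumes 8-bit BGR inputs of equal size.
import cv2
import numpy as np

def register_to_reference(source, reference):
    src_gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

    # 1. Dense correspondence from reference to source (stand-in for the
    #    global correspondence algorithm).
    flow = cv2.calcOpticalFlowFarneback(ref_gray, src_gray, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)

    # 2. Warp the source into the reference frame; out-of-bounds pixels become holes.
    warped = cv2.remap(source, map_x, map_y, cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_CONSTANT, borderValue=0)

    # 3. Estimate a simple per-channel gain as the color transfer function
    #    (maps reference colors toward the source exposure).
    hole_mask = (warped.sum(axis=2) == 0)   # also flags truly black pixels; fine for a sketch
    valid = ~hole_mask
    gains = [warped[..., c][valid].mean() / (reference[..., c][valid].mean() + 1e-6)
             for c in range(3)]

    # 4. Fill holes from the color-transferred reference image.
    filled = warped.copy()
    for c in range(3):
        transfer = np.clip(reference[..., c] * gains[c], 0, 255).astype(warped.dtype)
        filled[..., c][hole_mask] = transfer[hole_mask]
    return filled
```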

    POSE TRANSFER FOR THREE-DIMENSIONAL CHARACTERS USING A LEARNED SHAPE CODE

    Publication No.: US20240070987A1

    Publication Date: 2024-02-29

    Application No.: US18110287

    Filing Date: 2023-02-15

    CPC classification number: G06T19/00 G06T7/10 G06T17/20

    Abstract: Transferring pose to three-dimensional characters is a common computer graphics task that typically involves transferring the pose of a reference avatar to a (stylized) three-dimensional character. Because three-dimensional characters are created by professional artists through imagination and exaggeration, and therefore, unlike human or animal avatars, have distinct shapes and features, matching the pose of a three-dimensional character to that of a reference avatar generally requires manually creating the shape information needed for pose transfer. The present disclosure provides for the automated transfer of a reference pose to a three-dimensional character, based specifically on a learned shape code for the three-dimensional character.
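    A hypothetical PyTorch sketch of the conditioning structure implied by the abstract: a point-wise encoder pools the character's vertices into a learned shape code, and a deformation network re-poses the vertices given that code plus the reference pose parameters. The dimensions, the max pooling, and the pose encoding are assumptions, not the disclosed architecture.

```python
# Sketch of pose transfer conditioned on a learned shape code: encode the
# stylized character's geometry into a code, then predict per-vertex offsets
# from (vertex, shape code, reference pose). Sizes are assumptions.
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.pointwise = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                       nn.Linear(128, code_dim))

    def forward(self, verts):                             # verts: (B, N, 3)
        return self.pointwise(verts).max(dim=1).values    # pooled shape code (B, code_dim)

class PoseTransfer(nn.Module):
    def __init__(self, code_dim=64, pose_dim=72):         # e.g. 24 joints x 3 DoF (assumed)
        super().__init__()
        self.deform = nn.Sequential(
            nn.Linear(3 + code_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3))                             # per-vertex offset

    def forward(self, verts, shape_code, ref_pose):
        b, n, _ = verts.shape
        cond = torch.cat([shape_code, ref_pose], dim=-1)   # (B, code + pose)
        cond = cond.unsqueeze(1).expand(b, n, cond.shape[-1])
        return verts + self.deform(torch.cat([verts, cond], dim=-1))

verts = torch.rand(1, 5000, 3)                 # stylized character mesh vertices
ref_pose = torch.rand(1, 72)                   # reference avatar pose parameters
code = ShapeEncoder()(verts)                   # learned shape code
posed = PoseTransfer()(verts, code, ref_pose)  # character re-posed automatically
```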

    Guided hallucination for missing image content using a neural network

    Publication No.: US10922793B2

    Publication Date: 2021-02-16

    Application No.: US16353195

    Filing Date: 2019-03-14

    Abstract: Missing image content is generated using a neural network. In an embodiment, a high resolution image and associated high resolution semantic label map are generated from a low resolution image and associated low resolution semantic label map. The input image/map pair (low resolution image and associated low resolution semantic label map) lacks detail and is therefore missing content. Rather than simply enhancing the input image/map pair, data missing in the input image/map pair is improvised or hallucinated by a neural network, creating plausible content while maintaining spatio-temporal consistency. Missing content is hallucinated to generate a detailed zoomed in portion of an image. Missing content is hallucinated to generate different variations of an image, such as different seasons or weather conditions for a driving video.
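    A rough PyTorch sketch of the input/output structure described, with a low-resolution image and its semantic label map going in and an upsampled image plus label map coming out. The generator architecture, class count, and 4x scale factor are assumptions rather than the disclosed network.

```python
# Sketch of guided hallucination I/O: concatenate a low-resolution image with a
# one-hot semantic label map, upsample, and output a high-resolution image plus
# high-resolution label logits. Architecture details are assumptions.
import torch
import torch.nn as nn

class GuidedHallucinator(nn.Module):
    def __init__(self, num_classes=19, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 + num_classes, 3, padding=1),  # HR image + HR label logits
        )
        self.num_classes = num_classes

    def forward(self, lr_image, lr_labels):
        # One-hot encode the label map so it can be concatenated with the image.
        onehot = torch.nn.functional.one_hot(lr_labels, self.num_classes)
        onehot = onehot.permute(0, 3, 1, 2).float()        # (B, C, H, W)
        out = self.net(torch.cat([lr_image, onehot], dim=1))
        hr_image, hr_label_logits = out[:, :3], out[:, 3:]
        return torch.sigmoid(hr_image), hr_label_logits

lr_image = torch.rand(1, 3, 64, 128)                 # low-resolution frame
lr_labels = torch.randint(0, 19, (1, 64, 128))       # low-resolution semantic map
hr_image, hr_logits = GuidedHallucinator()(lr_image, lr_labels)
```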
