Abstract:
Apparatuses, systems, and techniques are presented to modify media content using inferred attention. In at least one embodiment, a network is trained to predict a gaze of one or more users on one or more image features based, at least in part, on one or more prior gazes of the one or more users, wherein the prediction is to be used to modify at least one of the one or more image features.
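As an illustration of the kind of predictor the abstract describes, here is a minimal sketch in PyTorch: a recurrent network that maps a sequence of prior 2D gaze points to the next predicted gaze point. The GRU architecture, layer sizes, and single-step prediction horizon are illustrative assumptions, not the patented design.

```python
# Minimal sketch (PyTorch): predict the next 2D gaze point from prior gazes.
# Architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GazePredictor(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # predicted (x, y) in [0, 1] screen coords

    def forward(self, prior_gazes):             # prior_gazes: (batch, seq_len, 2)
        _, h = self.rnn(prior_gazes)            # h: (num_layers, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))  # next gaze point per user

model = GazePredictor()
history = torch.rand(8, 16, 2)                  # 8 users, 16 prior gaze samples each
next_gaze = model(history)                      # (8, 2); could drive feature modification
```

The predicted gaze would then steer which image features are modified, e.g., sharpening or shading the predicted fixation region more heavily.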
Abstract:
When a computer image is generated from a real-world scene containing a semi-reflective surface (e.g., a window), the image will include, at the semi-reflective surface from the viewpoint of the camera, both a reflection of the scene in front of the semi-reflective surface and a transmission of the scene behind it. Just as for a person viewing the real-world scene from different locations and angles, the reflection and transmission may change, and move relative to each other, as the viewpoint of the camera changes. Unfortunately, this dynamic behavior of the reflection and transmission degrades the performance of many computer applications, and performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
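A minimal sketch of such a separation network, assuming a simple convolutional encoder that splits an input image I into a transmission layer T and a reflection layer R, with a reconstruction constraint I ≈ T + R. The architecture and loss are illustrative stand-ins; the disclosure's actual network and training objectives may differ.

```python
# Minimal sketch (PyTorch): split an image into transmission + reflection layers.
# Architecture and the additive reconstruction constraint are assumptions.
import torch
import torch.nn as nn

class SeparationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 6, 3, padding=1),   # 3 channels for T, 3 for R
        )

    def forward(self, image):                 # image: (batch, 3, H, W)
        t, r = self.body(image).chunk(2, dim=1)
        return torch.sigmoid(t), torch.sigmoid(r)

net = SeparationNet()
mixed = torch.rand(1, 3, 128, 128)            # image containing a semi-reflective surface
transmission, reflection = net(mixed)
loss = nn.functional.mse_loss(transmission + reflection, mixed)  # reconstruction term
```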
Abstract:
A system, computer-readable medium, and method are provided for generating images based on adaptations of the human visual system. An input image is received, an effect-provoking change is received, and an afterimage resulting from a cumulative effect of human visual adaptation is computed based on the effect-provoking change and a per-photoreceptor-type physiological adaptation of the human visual system. The computed afterimage may include a bleaching afterimage effect and/or a local adaptation afterimage effect. The computed afterimage is then accumulated into an output image for display.
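A minimal sketch of the idea, assuming a simple exponential adaptation model: each photoreceptor type carries an adaptation state that drifts toward the current stimulus with its own time constant, and the residual adaptation produces a negative afterimage that is accumulated into the output frame. The time constants and the first-order model are illustrative assumptions, not the disclosure's physiological model.

```python
# Minimal sketch (NumPy): per-photoreceptor-type adaptation and afterimage accumulation.
# Time constants and the exponential model are illustrative assumptions.
import numpy as np

TAU = np.array([0.5, 1.0, 2.0, 4.0])      # assumed time constants (s), one per receptor type

def step_adaptation(adapt, stimulus, dt):
    """Move each receptor type's adaptation state toward the current stimulus."""
    alpha = 1.0 - np.exp(-dt / TAU)        # per-type blend factor, broadcast over H, W
    return adapt + alpha * (stimulus - adapt)

def afterimage(adapt):
    """Bleaching-style negative afterimage: adapted regions respond less."""
    return -adapt.mean(axis=-1)

h, w = 4, 4
adapt = np.zeros((h, w, len(TAU)))
bright = np.ones((h, w, len(TAU)))         # stare at a bright stimulus...
for _ in range(120):                       # ...for two seconds at 60 Hz
    adapt = step_adaptation(adapt, bright, dt=1 / 60)
gray = np.full((h, w, len(TAU)), 0.5)      # effect-provoking change: cut to a gray field
output = np.clip(gray.mean(-1) + 0.5 * afterimage(adapt), 0.0, 1.0)
```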
Abstract:
One embodiment of a method includes predicting one or more three-dimensional (3D) mesh representations based on a plurality of digital images, wherein the one or more 3D mesh representations are refined by minimizing at least one difference between the one or more 3D mesh representations and the plurality of digital images.
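A minimal sketch of gradient-based mesh refinement, assuming a toy differentiable renderer: vertices are projected into the image and splatted as soft Gaussians, so a photometric difference against the input images can be minimized directly over vertex positions. The splat renderer, orthographic projection, and loss are illustrative stand-ins for a full differentiable rasterization pipeline.

```python
# Minimal sketch (PyTorch): refine mesh vertices by minimizing a photometric
# difference through a toy differentiable (Gaussian-splat) renderer.
import torch

def soft_render(verts, size=32, sigma=1.5):
    """Orthographic projection + Gaussian splatting -> (size, size) image."""
    xy = (verts[:, :2] * 0.5 + 0.5) * (size - 1)            # project to pixel coords
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    grid = torch.stack([xs, ys], -1).float()                 # (size, size, 2)
    d2 = ((grid[None] - xy[:, None, None]) ** 2).sum(-1)     # (V, size, size)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(0).clamp(0, 1)

target = soft_render(torch.randn(50, 3))                     # stand-in for an input photo
verts = torch.randn(50, 3, requires_grad=True)               # initial mesh vertices
opt = torch.optim.Adam([verts], lr=0.05)
for _ in range(200):                                         # refinement loop
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(soft_render(verts), target)
    loss.backward()
    opt.step()
```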
Abstract:
A target image corresponding to a novel view may be synthesized from two source images, corresponding source camera poses, and pixel attribute correspondences between the two source images. A particular object in the target image need only be visible in one of the two source images for successful synthesis. Each pixel in the target image is defined according to an identified pixel in one of the two source images. The identified source pixel provides attributes such as color, texture, and feature descriptors for the target pixel. The source and target camera poses are used to define geometric relationships for identifying the source pixels. In an embodiment, the pixel attribute correspondences are optical flow that defines movement of attributes from a first image of the two source images to a second image of the two source images.
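A minimal sketch of flow-based view interpolation under strong simplifying assumptions: the target view is parameterized by t in [0, 1] between the two source poses, the optical flow is scaled linearly rather than reprojected through the full camera geometry, and a warped validity mask serves as the per-pixel occlusion test that selects which source supplies each target pixel.

```python
# Minimal sketch (PyTorch): synthesize a target view from two sources plus flow.
# Linear flow scaling and the occlusion test are simplifying assumptions.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (1,C,H,W) by a dense flow field (1,2,H,W) in pixels."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], 0).float()[None] + flow        # sample locations
    grid = torch.stack([grid[:, 0] / (w - 1), grid[:, 1] / (h - 1)], -1) * 2 - 1
    return F.grid_sample(img, grid, align_corners=True)

src0 = torch.rand(1, 3, 64, 64)
src1 = torch.rand(1, 3, 64, 64)
flow01 = torch.zeros(1, 2, 64, 64)         # stand-in for estimated optical flow
t = 0.5                                     # target pose between the two sources
from0 = warp(src0, flow01 * t)             # move src0 attributes toward the target
from1 = warp(src1, -flow01 * (1 - t))      # move src1 attributes toward the target
visible0 = (warp(torch.ones_like(src0), flow01 * t) > 0.5).float()
target = visible0 * from0 + (1 - visible0) * from1   # per-pixel source selection
```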
Abstract:
A single two-dimensional (2D) image can be used as input to obtain a three-dimensional (3D) representation of the 2D image. This is done by using an encoder to extract features from the 2D image and determining a 3D representation of the 2D image utilizing a trained 2D convolutional neural network (CNN). Volumetric rendering is then run on the 3D representation to combine features within one or more viewing directions, and the combined features are provided as input to a multilayer perceptron (MLP) that predicts and outputs color (or multi-dimensional neural features) and density values for each point within the 3D representation. As a result, single-image inverse rendering may be performed using only a single 2D image as input to create a corresponding 3D representation of the scene in the single 2D image.
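A minimal sketch of the volumetric rendering step described above: features sampled along a viewing ray are fed to a small MLP that predicts per-sample color and density, and standard alpha compositing combines the samples into a pixel. Layer sizes, sample counts, and the feature lookup are illustrative assumptions.

```python
# Minimal sketch (PyTorch): MLP color/density prediction + alpha compositing
# along one viewing ray. Sizes and the feature source are assumptions.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))  # -> RGB + density

def render_ray(point_features, delta=0.1):
    """point_features: (num_samples, 32) features along one ray, near to far."""
    out = mlp(point_features)
    rgb = torch.sigmoid(out[:, :3])                       # per-sample color
    sigma = torch.relu(out[:, 3])                         # per-sample density
    alpha = 1 - torch.exp(-sigma * delta)                 # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), 0)
    weights = trans * alpha                               # compositing weights
    return (weights[:, None] * rgb).sum(0)                # final pixel color

pixel = render_ray(torch.randn(64, 32))                   # 64 samples on one ray
```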
Abstract:
A video stitching system combines video from different cameras to form a panoramic video that, in various embodiments, is temporally stable and tolerant to strong parallax. In an embodiment, the system provides a smooth spatial interpolation that can be used to connect the input video images. In an embodiment, the system applies an interpolation layer to slices of the overlapping video sources, and the network learns a dense flow field to smoothly align the input videos with spatial interpolation. Various embodiments are applicable to areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
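A minimal sketch of the stitching step, assuming two frames whose overlapping columns are already aligned: each column of the overlap is blended with a weight that ramps smoothly from the left source to the right source. In the disclosure, a learned dense flow field would first align the overlap region; this sketch omits that alignment and shows only the spatial interpolation.

```python
# Minimal sketch (NumPy): spatially interpolated blend across an overlap region.
# Assumes pre-aligned frames; the learned dense-flow alignment is omitted.
import numpy as np

def stitch(left, right, overlap):
    """left, right: (H, W, 3) frames whose last/first `overlap` columns coincide."""
    ramp = np.linspace(0, 1, overlap)[None, :, None]        # per-column blend weight
    seam = (1 - ramp) * left[:, -overlap:] + ramp * right[:, :overlap]
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

pano = stitch(np.random.rand(90, 160, 3), np.random.rand(90, 160, 3), overlap=40)
print(pano.shape)   # (90, 280, 3)
```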
Abstract:
A number of images of a scene are captured and stored. The images are captured over a range of values for an attribute (e.g., a camera setting). One of the images is displayed. A location of interest in the displayed image is identified. Regions that correspond to the location of interest are identified in each of the images. Those regions are evaluated to identify which of the regions is rated highest with respect to the attribute relative to the other regions. The image that includes the highest-rated region is then displayed.
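A minimal sketch of the selection step, assuming focus is the captured attribute and a Laplacian-based sharpness measure as a stand-in for the unspecified rating function: a window around the location of interest is scored in every image of the stack, and the image containing the highest-rated region is selected for display.

```python
# Minimal sketch (NumPy): pick the best image from a stack captured over a
# range of focus settings. The sharpness metric is an assumed stand-in.
import numpy as np

def laplacian_sharpness(region):
    """Variance of a discrete Laplacian: higher means more in-focus detail."""
    lap = (-4 * region[1:-1, 1:-1] + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return lap.var()

def best_image(stack, y, x, radius=16):
    """stack: list of (H, W) grayscale images over the attribute range."""
    scores = [laplacian_sharpness(img[y - radius:y + radius, x - radius:x + radius])
              for img in stack]
    return int(np.argmax(scores))          # index of the image to display

stack = [np.random.rand(256, 256) for _ in range(5)]
print(best_image(stack, y=128, x=128))
```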
Abstract:
A computer-implemented method of determining a latent image from an observed image is disclosed. The method comprises implementing a plurality of image processing operations within a single optimization framework, wherein the single optimization framework comprises solving a linear minimization expression. The method further comprises mapping the linear minimization expression onto at least one non-linear solver. Further, the method comprises using the non-linear solver to iteratively solve the linear minimization expression in order to extract the latent image from the observed image, wherein the linear minimization expression comprises a data term and a regularization term, and wherein the regularization term comprises a plurality of non-linear image priors.
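A minimal sketch of one such minimization, x* = argmin_x ||Ax - b||^2 + λ Σ φ(Dx), solved iteratively: A models the image formation (here, a 1D blur), D is a finite-difference operator, and the non-linear sparsity prior φ is handled by iteratively reweighted least squares. The operators and the single prior are illustrative assumptions; the disclosure combines several non-linear priors in one framework.

```python
# Minimal sketch (NumPy): iterative solve of a data term + non-linear prior
# via iteratively reweighted least squares. Operators are assumptions.
import numpy as np

n = 64
A = np.eye(n)
for k in range(1, 3):                         # simple symmetric blur matrix
    A += np.eye(n, k=k) + np.eye(n, k=-k)
A /= A.sum(axis=1, keepdims=True)
D = np.eye(n) - np.eye(n, k=1)                # finite-difference (gradient) operator

latent = np.zeros(n); latent[20:40] = 1.0     # ground-truth latent signal
b = A @ latent + 0.01 * np.random.randn(n)    # observed (blurred + noisy) signal

lam, eps, x = 0.05, 1e-3, b.copy()
for _ in range(20):                           # IRLS: re-linearize the prior, solve
    w = 1.0 / np.sqrt((D @ x) ** 2 + eps)     # weights approximating phi(u) = |u|
    x = np.linalg.solve(A.T @ A + lam * D.T @ (w[:, None] * D), A.T @ b)
print(np.abs(x - latent).mean())              # recovered latent vs. ground truth
```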
Abstract:
A set of images is processed to modify and register the images to a reference image in preparation for blending the images to create a high-dynamic range image. To modify and register a source image to a reference image, a processing unit generates correspondence information for the source image based on a global correspondence algorithm, generates a warped source image based on the correspondence information, estimates one or more color transfer functions for the source image, and fills the holes in the warped source image. The holes in the warped source image are filled based on either a rigid transformation of a corresponding region of the source image or a transformation of the reference image based on the color transfer functions.
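A minimal sketch of registering one exposure to the reference: a precomputed dense correspondence warps the source, a per-channel linear color transfer is fit on valid pixels, and remaining holes are filled from the color-transformed reference. The linear transfer model and the given integer flow are simplifying assumptions standing in for the global correspondence algorithm and transfer functions of the disclosure.

```python
# Minimal sketch (NumPy): warp a source exposure to the reference, fit a
# per-channel color transfer, and fill holes from the transformed reference.
import numpy as np

def register(source, reference, flow, valid):
    """source, reference: (H, W, 3); flow: (H, W, 2) int offsets; valid: (H, W) bool."""
    h, w, _ = source.shape
    ys, xs = np.mgrid[:h, :w]
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    warped = source[sy, sx]                       # warp source toward the reference
    out = warped.copy()
    for c in range(3):                            # fit transfer on valid correspondences
        a, b = np.polyfit(reference[valid][:, c], warped[valid][:, c], deg=1)
        out[..., c] = np.where(valid, warped[..., c], a * reference[..., c] + b)
    return out

h, w = 64, 64
ref = np.random.rand(h, w, 3)
src = np.clip(0.5 * ref + 0.1, 0, 1)              # darker exposure of the same scene
flow = np.zeros((h, w, 2), dtype=int)             # stand-in for global correspondence
valid = np.random.rand(h, w) > 0.1                # holes where correspondence failed
registered = register(src, ref, flow, valid)      # ready for HDR blending
```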