Abstract:
The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, both the first system and the second system can be in a cloud, and a video is transmitted to a local system.
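The split described above can be illustrated with a minimal sketch (all function bodies are illustrative stand-ins, not the claimed implementation): a remote system supplies the direct-lighting term, a local system supplies the indirect term, and the local system combines the two per pixel.

```python
# Minimal sketch of split lighting: a remote system computes direct
# lighting, a local system computes indirect lighting, and the local
# system sums the two contributions per pixel. Values are illustrative.

def direct_lighting(pixel):
    # Remote (server) side: e.g. shadow-mapped direct illumination (stub).
    return 0.6

def indirect_lighting(pixel):
    # Local (client) side: e.g. a cheap global-illumination term (stub).
    return 0.25

def combine(direct, indirect):
    # Final shading is the sum of the two lighting portions.
    return direct + indirect

# The client combines its own results with the server's forwarded results.
image = [combine(direct_lighting(p), indirect_lighting(p)) for p in range(4)]
```

In a real deployment the direct term would arrive over the network from the server; here both terms are computed locally purely for illustration.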
Abstract:
A method for displaying a near-eye light field display (NELD) image is disclosed. The method comprises determining a pre-filtered image to be displayed, wherein the pre-filtered image corresponds to a target image. It further comprises displaying the pre-filtered image on a display. Subsequently, it comprises producing a near-eye light field after the pre-filtered image travels through a microlens array adjacent to the display, wherein the near-eye light field is operable to simulate a light field corresponding to the target image. Finally, it comprises altering the near-eye light field using at least one converging lens, wherein the altering allows a user to focus on the target image at an increased depth of field at an increased distance from an eye of the user and wherein the altering increases spatial resolution of said target image.
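The geometry behind the microlens step can be sketched in one dimension (a hedged illustration; the lens height, ray direction, and gap are assumed values, not parameters from the disclosure): a ray leaving the eye through a microlens center continues undeviated to the display plane sitting a small gap behind the array, which determines the display pixel that must carry that ray's radiance in the pre-filtered image.

```python
# 1-D sketch of microlens-array geometry: a ray through a microlens
# center is traced back to the display plane a fixed gap behind the
# array. All parameters are illustrative assumptions.

def display_pixel_for_ray(lens_center, ray_slope, gap):
    # A ray through the lens center is undeviated, so its intersection
    # with the display plane is a simple linear offset.
    return lens_center - ray_slope * gap

# Ray with slope 0.1 through a lens centered at height 0.5 mm, with a
# 2.0 mm lens-to-display gap:
pixel_height = display_pixel_for_ray(lens_center=0.5, ray_slope=0.1, gap=2.0)
```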
Abstract:
A method to drive a pixelated display of an electronic device arranged in sight of a user of the device. The method includes receiving a signal that encodes a display image, and controlling the pixelated display based on the signal to form the display image in addition to a latent image, the latent image being configured to illuminate an eye of the user with light of such characteristics as to be unnoticed by the user, but to reveal an orientation of the eye on reflection into a machine-vision system.
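The superposition idea can be sketched as follows (a hedged illustration with made-up pixel values; the actual characteristics that make the latent image unnoticeable are not modeled here): the driven pixel values are the display image plus a low-amplitude latent pattern, and a machine-vision system that knows the display image can subtract it out to recover the pattern.

```python
# Sketch of a latent image superposed on a display image: the latent
# pattern is added at an amplitude small relative to 8-bit pixel values,
# and recovered by subtracting the known display image. Illustrative only.

EPS = 1  # latent amplitude, small relative to 0-255 pixel values

def drive(display_image, latent_pattern):
    # Form the driven signal: display image plus low-amplitude pattern.
    return [d + EPS * l for d, l in zip(display_image, latent_pattern)]

def recover(driven, display_image):
    # Machine-vision side: subtract the known display image to reveal
    # the latent pattern (e.g. one revealing eye orientation).
    return [(x - d) // EPS for x, d in zip(driven, display_image)]

img = [120, 130, 125, 128]
pattern = [1, 0, 1, 0]
recovered = recover(drive(img, pattern), img)
```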
Abstract:
A technique for efficiently compressing rendered three-dimensional images in a remote rendering system adds a novel render-assisted prediction function to an existing video compression framework, such as the standard H.264/5 framework. Auxiliary rendering information is separated by a server system from the rendering information used to describe a reference image. A client system may alter the auxiliary data and generate a new image based on the reference image and rendered scene information from the auxiliary data without requiring additional network bandwidth or server workload.
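The core of render-assisted prediction can be sketched in a few lines (a hedged illustration with flat lists standing in for images; the real framework operates on blocks inside an H.264/5-style codec): the client can locally reproduce a rendered prediction of the frame, so only the small residual between the true frame and that prediction needs to be encoded and transmitted.

```python
# Sketch of render-assisted prediction: the codec predicts a frame from a
# locally rendered approximation and encodes only the residual, which is
# small and compresses well. Lists stand in for images.

def encode(target, rendered_prediction):
    # Server side: residual between the true frame and the prediction.
    return [t - p for t, p in zip(target, rendered_prediction)]

def decode(residual, rendered_prediction):
    # Client side: re-render the same prediction and add the residual.
    return [r + p for r, p in zip(residual, rendered_prediction)]

frame      = [10, 12, 11, 13]
prediction = [10, 11, 11, 12]           # reproducible on the client
residual   = encode(frame, prediction)  # small values compress well
decoded    = decode(residual, prediction)
```

Because the client regenerates the prediction itself, the prediction never crosses the network, which is why the technique avoids extra bandwidth.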
Abstract:
A method of generating an image. The method includes simulating a presence of at least one light source within a virtualized three dimensional space. Within the virtualized three dimensional space, a light sensing plane is defined. The light sensing plane includes a matrix of a number of pixels to be displayed on a display screen. The method further includes computing, using a light transport procedure, a gradient value for each pixel of the matrix to produce a number of gradient values. The gradient computation involves selecting a plurality of light path pairs that contribute to a pixel, wherein the selection is biased towards light paths that pass through pixels having larger gradient values. The plurality of gradient values is converted to a plurality of light intensity values which represent the image.
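The final conversion step can be illustrated in one dimension (a hedged sketch: real gradient-domain rendering solves a 2-D Poisson problem, whereas this reduces to a simple running sum, and the pixel values are made up): once per-pixel gradients are known, intensities are recovered by integrating the gradients outward from a known base value.

```python
# 1-D illustration of gradient-to-intensity conversion: estimate forward
# differences, then integrate them from a known first pixel to recover
# the intensities. Integer values keep the round trip exact.

def gradients(intensities):
    # Forward differences between neighbouring pixels.
    return [b - a for a, b in zip(intensities, intensities[1:])]

def reconstruct(base, grads):
    # Integrate the gradients starting from the known first pixel.
    out = [base]
    for g in grads:
        out.append(out[-1] + g)
    return out

true_image = [1, 4, 9, 10]
g = gradients(true_image)              # [3, 5, 1]
recovered = reconstruct(true_image[0], g)
```

In the 2-D case the integration becomes a screened Poisson solve, but the principle, intensities recovered from gradients plus a boundary value, is the same.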
Abstract:
The disclosure provides a method for audio calibration that uses audio simulation and reconstructed surface information from images or video recordings along with recorded sound. The surface component of the method introduces knowledge that enables audio wave propagation simulation for a particular location. Using the simulation results, the sound distribution can be optimized. For example, unwanted audio reflection and occlusion can be recognized and resolved. In one example, the disclosure provides a method for improving acoustics at a location that includes: (1) generating a geometric model of the location using visual data obtained from the location, wherein the location includes an audio system, and (2) simulating, using the geometric model, movement of sound waves in the location that originate from the audio system. The disclosure also provides a computer system, a computer program product, and a mobile computing device that include features for improving acoustics at a location.
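A hedged sketch of the geometric idea behind the simulation step (the room layout, positions, and the choice of the image-source method are assumptions for illustration, not the disclosure's specific algorithm): given a geometric model, arrival delays of the direct path and a first-order wall reflection can be estimated, which is the kind of information needed to recognize unwanted reflections.

```python
# Image-source sketch: estimate arrival delays of the direct sound and a
# first-order reflection off a wall, using a 2-D room model. Positions
# and geometry are illustrative assumptions.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def delay(src, listener):
    # Propagation delay over the straight-line distance.
    return math.dist(src, listener) / SPEED_OF_SOUND

def first_order_reflection(src, listener, wall_x):
    # Mirror the source across a wall at x = wall_x (image-source method);
    # the reflected path length equals the image-to-listener distance.
    image = (2 * wall_x - src[0], src[1])
    return delay(image, listener)

speaker, seat = (1.0, 2.0), (4.0, 2.0)
direct = delay(speaker, seat)                              # direct arrival
echo = first_order_reflection(speaker, seat, wall_x=0.0)   # reflected arrival
```

Comparing `echo` against `direct` (and against a perceptual threshold) is one simple way such a simulation could flag a problematic reflection.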
Abstract:
Virtual reality (VR) displays are computer displays that present images or video in a manner that simulates a real experience for the viewer. In many cases, VR displays are implemented as head-mounted displays (HMDs) which provide a display in the line of sight of the user. Because current HMDs are composed of a display panel and magnifying lens with a gap therebetween, proper functioning of the HMDs limits their design to a box-like form factor, thereby negatively impacting both comfort and aesthetics. The present disclosure provides a different configuration for a virtual reality display which allows for improved comfort and aesthetics, including specifically at least one coherent light source, at least one holographic waveguide coupled to the at least one coherent light source to receive light therefrom, and at least one spatial light modulator coupled to the at least one holographic waveguide to modulate the light.