Abstract:
A bidirectional texture function (BTF) synthesizer synthesizes BTFs on arbitrary manifold surfaces using “surface textons,” given a sample BTF as input. The synthesized BTFs fit the surface geometry naturally and seamlessly; they not only look similar to the sample BTF under all viewing and lighting conditions, but also exhibit a consistent mesostructure as the viewing and lighting directions change. Further, the synthesized BTFs capture the fine-scale shadows, occlusions, and specularities caused by surface mesostructures, thereby improving the perceived realism of the textured surfaces. In addition, the BTF synthesizer can describe real-world textures, allowing a user to decorate real-world geometry with real-world textures. Finally, BTF synthesis using surface textons works well for any material that can be described by three-dimensional textons.
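As a concrete illustration of the texton idea, here is a minimal Python sketch of how per-texel BTF vectors might be clustered into “surface textons.” The (H, W, D) layout, in which D stacks the per-texel view/light samples, and the plain k-means step are assumptions for illustration; synthesis would then match small neighborhoods of texton labels rather than full BTF vectors.

```python
import numpy as np

def build_surface_textons(btf, k=8, iters=10, seed=0):
    """Cluster per-texel BTF reflectance vectors into k 'textons' with
    plain k-means. btf is (H, W, D); D stacks view/light samples per
    texel (a hypothetical layout, not the patent's exact data model)."""
    rng = np.random.default_rng(seed)
    X = btf.reshape(-1, btf.shape[-1]).astype(float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each texel to its nearest texton center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), 1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels.reshape(btf.shape[:2]), centers
```

The point of such a clustering is dimensionality reduction: matching small neighborhoods of discrete texton labels during synthesis is far cheaper than matching raw high-dimensional BTF data.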
Abstract:
A “mesostructure renderer” uses pre-computed multi-dimensional “generalized displacement maps” (GDM) to provide real-time rendering of general non-height-field mesostructures on both open and closed surfaces of arbitrary geometry. In general, the GDM represents the distance to solid mesostructure along any ray cast from any point within a volumetric sample. Given the pre-computed GDM, the mesostructure renderer then computes mesostructure visibility jointly in object space and texture space, thereby enabling both control of texture distortion and efficient computation of texture coordinates and shadowing. Further, in one embodiment, the mesostructure renderer uses the GDM to render mesostructures with either local or global illumination as a per-pixel process using conventional computer graphics hardware to accelerate the real-time rendering of the mesostructures. Further acceleration of mesostructure rendering is achieved in another embodiment by automatically reducing the number of triangles in the rendering pipeline according to a user-specified threshold for acceptable texture distortion.
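A brute-force sketch of the GDM precomputation described above, assuming a boolean voxel volume `solid` and a fixed set of sampled ray directions (the names, data layout, and uniform ray marching are assumptions; the patent's renderer evaluates the precomputed table per pixel on graphics hardware rather than on the CPU):

```python
import numpy as np

def precompute_gdm(solid, directions, max_steps=64, step=1.0):
    """For each voxel and each sampled ray direction, march until a
    solid voxel is hit and record the distance traveled. Returns a
    table of shape (nx, ny, nz, num_directions) with inf where the
    ray exits the sample without hitting mesostructure."""
    nx, ny, nz = solid.shape
    gdm = np.full((nx, ny, nz, len(directions)), np.inf)
    for di, d in enumerate(directions):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        for x in range(nx):
            for y in range(ny):
                for z in range(nz):
                    p = np.array([x, y, z], float)
                    for s in range(1, max_steps):
                        q = np.round(p + s * step * d).astype(int)
                        if (q < 0).any() or (q >= solid.shape).any():
                            break  # ray left the volumetric sample
                        if solid[tuple(q)]:
                            gdm[x, y, z, di] = s * step
                            break
    return gdm
```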
Abstract:
Message values included in a set of valid message values that constitute a coding scheme are each encoded in an image region, called an encoded signal block, composed of a spatially arranged pattern of colored sub-regions. The colored sub-regions have color values produced by modulating a reference color value by a color change quantity expressed as a color space direction in a multi-dimensional color space such that the average color of all of the sub-region colors is the reference color. There is a unique pattern of color-modulated sub-regions for each valid message value in the coding scheme. In one embodiment, the color space direction is computed to be simultaneously detectable by a digital image capture device such as a scanner and substantially imperceptible to a human viewer, so that the embedded data represented by the pattern of color modulations are visually imperceptible in the encoded signal block. When the reference color is determined to be the average color of an image region in an original color image, the encoded signal block may replace the image region in the original image, producing an encoded image version of the original image having little or no image degradation. In this case, the original image colors become carriers of the encoded data. Signal blocks may be arranged to encode data in only one dimension in an image, which allows for less complex decoding algorithms, or in a two-dimensional array or grid-like structure, which allows for a high encoded-data density.
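A minimal sketch of the modulation scheme, assuming hypothetical 2x2 sign patterns and a simple additive color model. Each pattern's signs sum to zero, so the block's average color remains the reference color as the abstract requires; decoding projects each sub-region's color difference onto the same color-space direction and matches the recovered sign pattern to a message value.

```python
import numpy as np

# Hypothetical balanced 2x2 sign patterns, one per message value.
# Every pattern sums to zero, keeping the block mean at the reference.
PATTERNS = {
    0: np.array([[+1, -1], [-1, +1]]),
    1: np.array([[-1, +1], [+1, -1]]),
    2: np.array([[+1, +1], [-1, -1]]),
    3: np.array([[-1, -1], [+1, +1]]),
}

def encode_block(reference_rgb, value, direction, amplitude=4.0):
    """Modulate the reference color along a color-space direction;
    balanced +/- signs keep the block's mean color at reference_rgb."""
    d = amplitude * np.asarray(direction, float)
    return np.clip(reference_rgb + PATTERNS[value][..., None] * d, 0, 255)

def decode_block(block, reference_rgb, direction):
    """Project sub-region color differences onto the direction and
    match the sign pattern to the closest message value."""
    proj = np.sign(((block - reference_rgb) * direction).sum(-1))
    return min(PATTERNS, key=lambda v: np.abs(PATTERNS[v] - proj).sum())
```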
Abstract:
A system for reflectance acquisition of a target includes a light source, an image capture device, and a reflectance reference chart. The reflectance reference chart is fixed relative to the target. The light source provides a uniform band of light across at least a dimension of the target. The image capture device is configured and positioned to encompass at least a portion of the target and at least a portion of the reflectance reference chart within a field-of-view of the image capture device. The image capture device captures a sequence of images of the target and the reflectance reference chart during a scan thereof. Reflectance responses are calculated for the pixels in the sequence of images. Reference reflectance response distribution functions are matched to the calculated reflectance responses, and an image of the target is reconstructed based at least in part on the matched reference reflectance response distribution functions.
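One plausible reading of the matching step, sketched in Python: each pixel's response trace over the scan is normalized against the reference chart and matched to the nearest reference reflectance response distribution function. All names, the chart normalization, and the L2 metric are assumptions for illustration.

```python
import numpy as np

def match_reflectance(pixel_traces, chart_trace, reference_traces):
    """pixel_traces: (N, frames) per-pixel responses over the scan;
    chart_trace: (frames,) response of a known chart patch;
    reference_traces: (R, frames) reference response functions.
    Returns the index of the best-matching reference per pixel."""
    # Normalize out illumination/exposure using the chart's response.
    norm = pixel_traces / np.maximum(chart_trace, 1e-8)
    # L2 distance from every pixel trace to every reference trace.
    d = ((norm[:, None, :] - reference_traces[None]) ** 2).sum(-1)
    return np.argmin(d, axis=1)
```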
Abstract:
Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided.
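A minimal sketch of decoder-aware selection under the stated idea, assuming each target decoder reports a hypothetical headroom figure and each encoding profile carries a known decode cost (the dictionary keys and the cycles-based model are placeholders; the patent leaves the exact decoder state unspecified):

```python
def choose_encoding(decoder_states, profiles):
    """Pick the highest-quality profile that every target decoder can
    still decode, given each decoder's reported per-frame headroom."""
    headroom = min(s["available_cycles_per_frame"] for s in decoder_states)
    feasible = [p for p in profiles if p["decode_cost"] <= headroom]
    return max(feasible, key=lambda p: p["quality"]) if feasible else None
```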
Abstract:
A method of encoding a sequence of video images is described. The method receives the sequence of video images. The method iteratively examines different encoding solutions for the sequence of video images to identify an encoding solution that optimizes image quality while meeting a target bit rate and satisfying a set of constraints regarding flow of encoded data through an input buffer of a hypothetical reference decoder for decoding the encoded video sequence. The iterative examining includes, for each encoding solution, determining whether the input buffer of the hypothetical reference decoder underflows while processing the encoding solution for any set of images within the video sequence.
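The underflow test can be sketched as a simple constant-bit-rate buffer simulation. The frame sizes, frame duration, and initial buffering delay are the assumed inputs; a real HRD model also tracks overflow and variable arrival schedules.

```python
def hrd_underflows(frame_sizes, frame_duration, bit_rate, initial_delay):
    """Simulate the hypothetical reference decoder's input buffer:
    bits arrive at bit_rate; each frame's bits are removed in full at
    its decode time. Returns True on any underflow."""
    buffered = bit_rate * initial_delay       # bits present at first decode
    for size in frame_sizes:
        if buffered < size:
            return True                       # frame not fully arrived
        buffered -= size                      # instantaneous removal
        buffered += bit_rate * frame_duration # arrivals until next decode
    return False
```

An encoder iterating over candidate encoding solutions would run this check on each candidate's frame sizes and discard any solution for which it returns True.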
Abstract:
Some embodiments provide a video recording device for capturing a video clip. The video recording device receives a selection of a non-temporally compressed encoding scheme from several different encoding schemes for encoding the video clip. The different encoding schemes include at least one temporally compressed encoding scheme and at least the selected non-temporally compressed encoding scheme. The video recording device captures the video clip as several frames. The video recording device non-temporally encodes each of the frames as several slices. The slices of a particular frame are for decoding by several processing units of a video decoding device. The video recording device stores the video clip in a storage device.
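A sketch of the slicing step, assuming slices are horizontal row bands handed to separate processing units (an assumption for illustration; real codecs align slice boundaries to macroblock rows):

```python
def frame_to_slices(frame, num_units):
    """Partition a frame (sequence of pixel rows) into one
    independently decodable slice per processing unit."""
    rows = len(frame)
    per = -(-rows // num_units)  # ceiling division
    return [frame[i:i + per] for i in range(0, rows, per)]
```

Because each slice of a non-temporally compressed frame is decodable on its own, a multi-core decoder can assign one slice per processing unit and decode them in parallel.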
Abstract:
Output textures may be generated by synthesizing an input texture comprising discrete elements subject to a set of boundary conditions. Elements of the input texture are copied to an output texture that is defined by the set of boundary conditions and are then refined. The elements of the output texture are refined by assigning domain and/or attribute information to each output texture element by minimizing an energy function that measures the similarity between neighborhoods of the output texture and their best-matching neighborhoods in the input texture.
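The energy term can be sketched as follows, assuming grayscale textures and an exhaustive search for each output neighborhood's best-matching input neighborhood. This is a deliberately brute-force illustration; the refinement step would alternate this matching with an update that lowers the energy.

```python
import numpy as np

def synthesis_energy(output, inp, patch=5, stride=4):
    """Sum, over sampled output neighborhoods, of the squared distance
    to the best-matching input neighborhood of the same size."""
    H, W = output.shape[:2]
    e = 0.0
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            n = output[y:y + patch, x:x + patch]
            best = np.inf
            for v in range(inp.shape[0] - patch + 1):
                for u in range(inp.shape[1] - patch + 1):
                    m = inp[v:v + patch, u:u + patch]
                    best = min(best, ((n - m) ** 2).sum())
            e += best
    return e
```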
Abstract:
A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh.
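A minimal sketch of the metric-to-mesh pipeline, with a linear model standing in for the trained algorithm. The joint pairs, weight matrix W, and bias b are all hypothetical placeholders; the patent's algorithm is learned from a model in varied poses and varied models in a single pose.

```python
import numpy as np

def characteristic_metrics(joints, pairs):
    """Distances between predetermined virtual-skeleton points;
    'pairs' names joint pairs such as shoulder-to-shoulder (assumed)."""
    return np.array([np.linalg.norm(joints[a] - joints[b])
                     for a, b in pairs])

def predict_body_mesh(metrics, W, b):
    """Linear stand-in for the trained model: maps the metric vector
    to flattened (V, 3) mesh vertex positions."""
    return (W @ metrics + b).reshape(-1, 3)
```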
Abstract:
A mechanism is disclosed for capturing rays reflected from a surface. First and second lenses aligned along a common optical axis are configured so that a beam of light collimated parallel to that axis and directed at a first side is converged toward the axis on a second side. A first light beam source between the first and second lenses directs a light beam toward the first lens parallel to the optical axis. One or more second light beam sources on the second side of the first lens direct a light beam toward a focal plane of the first lens at a desired angle. An image capturing component at the second side of the second lens has an image capture surface directed toward the second lens to capture images of the light reflected from a sample capture surface at the focal plane of the first lens.
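This arrangement implies a simple paraxial mapping from reflection angle to lateral position, which is what lets the sensor record reflectance as a function of angle. A thin-lens sketch (the focal length f1 and the on-axis sample point are assumptions):

```python
import math

def angle_to_lateral_offset(theta_deg, f1):
    """A ray leaving a sample point at the front focal plane of the
    first lens at angle theta exits that lens parallel to the optical
    axis at height f1 * tan(theta), so reflection angle maps to a
    lateral position that downstream optics can image onto the sensor."""
    return f1 * math.tan(math.radians(theta_deg))
```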