Abstract:
An integrated circuit package includes at least two electronic circuits. A first of the at least two electronic circuits includes a digital input, a digital output, and a test mode control line for setting the first electronic circuit into a predetermined test mode. The digital input includes at least two parallel input paths, and the digital output includes at least two parallel output paths. The at least two parallel input paths and the at least two parallel output paths provide a corresponding number of internal paths by which the first electronic circuit and a second electronic circuit can be tested essentially simultaneously.
Abstract:
An “Image Denoiser” provides a probabilistic process for denoising color images by segmenting an input image into regions, estimating statistics within each region, and then estimating a clean (or denoised) image using a probabilistic model of image formation. In one embodiment, the estimated blur between regions is used to reduce artificial sharpening of region boundaries resulting from denoising the input image. In further embodiments, the estimated blur is used for additional purposes, including sharpening edges between one or more regions, and selectively blurring or sharpening one or more specific regions of the image (i.e., “selective focus”) while maintaining the original blurring between the various regions.
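The region-based denoising idea above can be illustrated with a short Python sketch. The segmentation step (a crude color k-means), the Gaussian region model, and the noise level sigma_n are assumptions chosen for illustration, and the boundary blur estimation mentioned in the abstract is omitted; this is a minimal sketch of per-region statistical shrinkage, not the patented method.

```python
# Minimal sketch: segment by color, then shrink each pixel toward its region
# mean under an assumed Gaussian region model with observation noise sigma_n.
import numpy as np

def kmeans_segment(img, k=8, iters=10, seed=0):
    """Crude color k-means used as a stand-in for a real segmentation step."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        dist = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centers[c] = members.mean(0)
    return labels.reshape(h, w)

def denoise(img, sigma_n=10.0, k=8):
    """Pull each pixel toward its region mean; sigma_n is the assumed noise level."""
    labels = kmeans_segment(img, k)
    out = img.astype(np.float64)
    for c in np.unique(labels):
        mask = labels == c
        region = out[mask]                      # (N, 3) pixels of this region
        mu = region.mean(0)                     # per-channel region mean
        var = region.var(0) + 1e-6              # observed per-channel variance
        gain = var / (var + sigma_n ** 2)       # shrinkage weight toward the mean
        out[mask] = mu + gain * (region - mu)
    return np.clip(out, 0, 255).astype(np.uint8)
```

With a noisy uint8 RGB array `noisy`, `denoise(noisy, sigma_n=15.0)` returns an image whose pixels have been pulled toward their region means in proportion to the assumed noise level.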
Abstract:
A Bayesian two-color image demosaicer and method for processing a digital color image to demosaic the image in such a way as to reduce image artifacts. The method and system are an improvement on and an enhancement to previous demosaicing techniques. A preliminary demosaicing pass is performed on the image to assign each pixel a fully specified RGB triple color value. The final color value of each pixel in the processed image is restricted to be a linear combination of two colors. The fully specified RGB triple color values for each pixel in the image are used to find two clusters representing the two favored colors. The amount of contribution from these two favored colors to the final color value is then determined. The method and system also can process multiple images to improve the demosaicing results. When using multiple images, sampling can be performed at a finer resolution, known as super resolution.
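The two-color constraint can be sketched as follows, assuming a preliminary full-RGB image `prelim` is already available (for example, from a simple bilinear demosaicing pass). The 5x5 window, the tiny 2-means clustering, and the projection used to compute the blend fraction are illustrative assumptions rather than the Bayesian formulation itself.

```python
# Sketch: force every pixel onto the line segment between the two colors
# favored by its neighborhood, found by clustering preliminary RGB estimates.
import numpy as np

def two_means(samples, iters=5):
    """2-cluster k-means over an (N, 3) set of RGB samples."""
    c = np.array([samples.min(0), samples.max(0)], dtype=np.float64)
    for _ in range(iters):
        lab = ((samples[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in (0, 1):
            if np.any(lab == j):
                c[j] = samples[lab == j].mean(0)
    return c[0], c[1]

def two_color_pass(prelim, radius=2):
    """Refine a preliminarily demosaiced full-RGB image with the two-color rule."""
    src = prelim.astype(np.float64)
    out = src.copy()
    h, w, _ = src.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = src[y - radius:y + radius + 1,
                         x - radius:x + radius + 1].reshape(-1, 3)
            c1, c2 = two_means(window)
            v = c2 - c1
            denom = float(v @ v)
            if denom < 1e-9:                 # flat neighborhood: nothing to do
                continue
            # Fraction of c2 that best explains this pixel, clamped to [0, 1].
            a = float(np.clip((src[y, x] - c1) @ v / denom, 0.0, 1.0))
            out[y, x] = (1.0 - a) * c1 + a * c2
    return np.clip(out, 0, 255).astype(np.uint8)
```

Restricting each output pixel to a blend of two neighborhood colors is what suppresses the speckled color artifacts that per-channel interpolation tends to produce near edges.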
Abstract:
A technique for estimating the optical flow between images of a scene and a segmentation of the images is presented. This involves first establishing an initial segmentation of the images and an initial optical flow estimate for each segment of each image relative to its neighboring image or images. A refined optical flow estimate is computed for each segment of each image from the initial segmentation of that image and the initial optical flow of the segments of that image. Next, the segmentation of each image is refined from the last-computed optical flow estimates for each segment of the image. This process can continue in an iterative manner by further refining the optical flow estimates for the images using their respective last-computed segmentations, followed by further refining the segmentation of each image using its respective last-computed optical flow estimates, until a prescribed number of iterations has been completed.
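The alternating structure of the method, flow refined per segment and segmentation refined from the flow, can be sketched in Python under strong simplifications: grayscale input frames, k-means segmentation, and a single integer translation per segment found by exhaustive search. These are stand-ins meant to show the iteration, not the estimators described in the abstract.

```python
# Sketch of alternating refinement: flow given segments, then segments given flow.
import numpy as np

def segment(features, k=6, iters=8, seed=0):
    """Plain k-means over per-pixel feature vectors, returning a label map."""
    h, w, c = features.shape
    pts = features.reshape(-1, c).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        lab = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = pts[lab == j].mean(0)
    return lab.reshape(h, w)

def segment_flow(img0, img1, labels, max_d=4):
    """For every segment, pick the integer (dy, dx) shift minimizing SSD."""
    flows = {}
    for s in np.unique(labels):
        mask = labels == s
        best, best_cost = (0, 0), np.inf
        for dy in range(-max_d, max_d + 1):
            for dx in range(-max_d, max_d + 1):
                shifted = np.roll(img1, (-dy, -dx), axis=(0, 1))
                cost = ((img0[mask] - shifted[mask]) ** 2).sum()
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
        flows[s] = best
    return flows

def alternate(img0, img1, n_iters=3, k=6, flow_weight=20.0):
    """Alternate flow-given-segments and segments-given-flow refinements."""
    img0 = np.asarray(img0, dtype=np.float64)
    img1 = np.asarray(img1, dtype=np.float64)
    labels = segment(img0[..., None], k)              # initial segmentation
    flows = {}
    for _ in range(n_iters):
        flows = segment_flow(img0, img1, labels)      # refine flow per segment
        fy = np.zeros_like(img0)
        fx = np.zeros_like(img0)
        for s, (dy, dx) in flows.items():
            fy[labels == s], fx[labels == s] = dy, dx
        # Re-segment on intensity plus (scaled) flow so that pixels moving
        # together tend to land in the same segment.
        labels = segment(np.dstack([img0, flow_weight * fy, flow_weight * fx]), k)
    return labels, flows
```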
Abstract:
A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities, and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to areas of a prescribed size surrounding depth discontinuities found in the image using a disparity map thereof.
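A rough construction of the two layers from an RGB image and its disparity map might look like the following. The gradient threshold, the band radius, the foreground/background split inside the band, and the constant alpha values are simplifying assumptions; the described system computes proper matting alphas rather than the binary placeholder used here.

```python
# Sketch: find depth discontinuities in the disparity map, take a band of
# prescribed size around them, and split that band into a background (main)
# layer and a foreground (boundary) layer.
import numpy as np

def two_layer(img, disp, grad_thresh=4.0, radius=3):
    img = np.asarray(img, dtype=np.float64)
    disp = np.asarray(disp, dtype=np.float64)

    # 1. Depth discontinuities: large local disparity jumps.
    gy, gx = np.gradient(disp)
    edges = np.hypot(gy, gx) > grad_thresh

    # 2. Band of prescribed size around the discontinuities (naive dilation).
    band = np.zeros_like(edges)
    for y, x in zip(*np.nonzero(edges)):
        band[max(0, y - radius):y + radius + 1,
             max(0, x - radius):x + radius + 1] = True

    # 3. Inside the band, call a pixel "foreground" if its disparity is closer
    #    to the local maximum than to the local minimum; fill the main layer
    #    at that location with the lowest-disparity (background) neighbor.
    fg = np.zeros_like(band)
    main_color, main_disp = img.copy(), disp.copy()
    for y, x in zip(*np.nonzero(band)):
        y0, y1 = max(0, y - radius), y + radius + 1
        x0, x1 = max(0, x - radius), x + radius + 1
        win_d, win_c = disp[y0:y1, x0:x1], img[y0:y1, x0:x1]
        if disp[y, x] > 0.5 * (win_d.min() + win_d.max()):
            fg[y, x] = True
            bg_idx = np.unravel_index(win_d.argmin(), win_d.shape)
            main_disp[y, x] = win_d[bg_idx]
            main_color[y, x] = win_c[bg_idx]

    # 4. Boundary layer: foreground color, disparity and alpha within the band.
    boundary = {
        "color": np.where(fg[..., None], img, 0.0),
        "disparity": np.where(fg, disp, 0.0),
        "alpha": fg.astype(np.float64),   # placeholder for matted alpha values
    }
    main = {"color": main_color, "disparity": main_disp}
    return main, boundary
```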
Abstract:
A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
Abstract:
A system and process for computing a 3D reconstruction of a scene from multiple images thereof, using a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per-pixel depth map if the reconstruction application calls for it.
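A toy version of this pipeline, for two rectified grayscale images and a precomputed segmentation label map, is sketched below. The matching cost, the soft score used to turn costs into a DSD, and the neighbor mixing weight are illustrative choices, and the final per-pixel disparity smoothing stage is omitted for brevity.

```python
# Sketch: per-segment disparity space distribution (DSD), refined with
# neighboring segments and collapsed to a per-segment disparity estimate.
import numpy as np

def initial_dsd(img0, img1, labels, disparities):
    """P(d | segment) from the mean matching cost of shifting img1 by d."""
    segs = np.unique(labels)
    dsd = np.zeros((len(segs), len(disparities)))
    for i, s in enumerate(segs):
        mask = labels == s
        costs = []
        for d in disparities:
            shifted = np.roll(img1, -d, axis=1)          # horizontal disparity
            costs.append(((img0[mask] - shifted[mask]) ** 2).mean())
        costs = np.array(costs)
        dsd[i] = np.exp(-costs / (costs.mean() + 1e-9))  # soft matching score
        dsd[i] /= dsd[i].sum()
    return segs, dsd

def refine_dsd(labels, segs, dsd, mix=0.5, iters=3):
    """Blend each segment's DSD with the average DSD of adjacent segments."""
    index = {s: i for i, s in enumerate(segs)}
    nbrs = {i: set() for i in range(len(segs))}
    pairs = set()
    pairs.update(zip(labels[:, :-1].ravel().tolist(), labels[:, 1:].ravel().tolist()))
    pairs.update(zip(labels[:-1, :].ravel().tolist(), labels[1:, :].ravel().tolist()))
    for a, b in pairs:
        if a != b:
            nbrs[index[a]].add(index[b]); nbrs[index[b]].add(index[a])
    for _ in range(iters):
        new = dsd.copy()
        for i, ns in nbrs.items():
            if ns:
                new[i] = (1 - mix) * dsd[i] + mix * dsd[list(ns)].mean(0)
                new[i] /= new[i].sum()
        dsd = new
    return dsd

def disparity_map(labels, segs, dsd, disparities):
    """Per-pixel disparity as the expectation under each segment's DSD."""
    expected = dsd @ np.asarray(disparities, dtype=float)
    out = np.zeros(labels.shape)
    for s, e in zip(segs, expected):
        out[labels == s] = e
    return out
```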
Abstract:
A system and process for providing an interactive video tour of a tour site to a user is presented. In general, the system and process provides an image-based rendering system that enables users to explore remote real-world locations, such as a house or a garden. The present approach is based directly on filming an environment, and then using image-based rendering techniques to replay the tour in an interactive manner. As such, the resulting experience is referred to as an Interactive Video Tour. The experience is interactive in that the user can move freely along a path, choose between different directions of motion at branch points in the path, and look around in any direction. The user experience is additionally enhanced with multimedia elements such as overview maps, video textures, and sound.
Abstract:
A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and a longer exposure. The exposure for each frame is set, before that frame is captured, as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. In each set of corresponding pixels, at least one pixel is identified as trustworthy. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set so as to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
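The radiance map and tone mapping steps can be illustrated with a short sketch, assuming the frames have already been aligned so that pixel (y, x) corresponds across exposures and that the camera response is linear. The hat-shaped trust weight and the global tone curve are stand-ins for the trustworthy-pixel test and tone mapper described above; the exposure times in the usage comment are hypothetical.

```python
# Sketch: fuse aligned frames of different exposures into a radiance map,
# down-weighting clipped pixels, then tone-map to an 8-bit frame.
import numpy as np

def trust_weight(pixel_values):
    """Trust mid-range pixels most; keep a small floor so weights never vanish."""
    w = 1.0 - np.abs(pixel_values / 255.0 - 0.5) * 2.0
    return np.clip(w, 0.05, 1.0)

def radiance_map(frames, exposures):
    """Per-pixel radiance from aligned uint8 frames with known exposure times."""
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, t in zip(frames, exposures):
        f = frame.astype(np.float64)
        w = trust_weight(f)
        num += w * (f / t)     # each frame votes for radiance = value / exposure
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(radiance):
    """Map radiance to an 8-bit frame with a simple global compressive curve."""
    l = radiance / (radiance.mean() + 1e-9)   # normalize around the average level
    mapped = l / (1.0 + l)
    return (255.0 * mapped).astype(np.uint8)

# Usage (hypothetical exposures):
# hdr8 = tone_map(radiance_map([short_frame, long_frame], [1 / 120, 1 / 30]))
```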
Abstract:
The illustrated and described embodiments provide techniques for capturing data that describes 3-dimensional (3-D) aspects of a face, transforming facial motion from one individual to another in a realistic manner, and modeling skin reflectance.