Abstract:
A method of producing vertically projecting three-dimensional images using virtual 3D models (22), wherein said 3D models (22) are created by the simultaneous localization and depth mapping of the physical features of real objects. A camera (17) is used to take a first image (22A) from a first perspective and a subsequent image (22N) from a subsequent perspective, wherein the camera's autofocus system (18) provides a first set of depth mapping data and a subsequent set of depth mapping data. The first set of depth mapping data and the subsequent set of depth mapping data are used to generate a disparity mapping (21). A virtual 3D model (32) is created from the disparity mapping (21). The virtual 3D model (32) is imaged to obtain images that can be viewed as three-dimensional. Enhanced 3D effects are added to the virtual 3D model (32).
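The pipeline from two autofocus-derived depth maps to a disparity mapping can be sketched as below. This is an illustrative toy, not the patented method: the function name, the baseline/focal-length model (disparity = baseline × focal length ÷ depth, averaged over the two views), and all parameter values are assumptions for demonstration.

```python
import numpy as np

def disparity_map(depth_first, depth_subsequent, baseline, focal_length):
    """Toy disparity mapping from two autofocus depth maps.

    Assumes the simple pinhole relation disparity = baseline * focal_length
    / depth and averages the estimates from the first and subsequent views.
    """
    d1 = baseline * focal_length / depth_first
    d2 = baseline * focal_length / depth_subsequent
    return (d1 + d2) / 2.0

# Depth maps (metres) as the autofocus system might report them for a
# 4x4 patch, from the first and the subsequent perspective.
depth_a = np.full((4, 4), 2.0)
depth_n = np.full((4, 4), 2.5)
disp = disparity_map(depth_a, depth_n, baseline=0.06, focal_length=800.0)
```

A real implementation would also register the two perspectives before combining them; here the views are assumed pre-aligned.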
Abstract:
The invention is directed to recording, transmitting, and displaying a three-dimensional image of a user's face in a video stream. Light reflected from a curved or geometrically shaped screen is employed to provide multiple perspective views of the user's face, which are transformed into the image and communicated to remotely located other users. A head-mounted projection display system is employed to capture the reflected light. The system includes a frame that, when worn by a user, wraps around and grips the user's head. At least two separate image capture modules are included on the frame, generally positioned adjacent to the left and right eyes of the user when the system is worn. Each module includes one or more sensor components, such as cameras, arranged to detect at least reflected non-visible light from a screen positioned in front of the user.
Abstract:
Enables real-time depth modifications to stereo images of a 3D virtual reality environment locally, for example without an iterative workflow in which designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map gives the pixel shift between the left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map and applied directly to the stereo images, without re-rendering the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
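Applying a translation map directly to a stereo image amounts to shifting pixels horizontally by a per-point amount. The sketch below is a deliberately simplified, integer-shift stand-in for that idea (the function name and flat image representation are assumptions; the actual map is defined over a sphere around the viewer).

```python
import numpy as np

def apply_translation_map(left, shift_map):
    """Synthesize a right-eye view by shifting each left-eye pixel
    horizontally by its entry in shift_map (integer pixels).

    A toy stand-in for applying the per-point spherical translation map;
    occlusion handling is crude: later (larger-x) writes simply win.
    """
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            nx = min(max(x + int(shift_map[y, x]), 0), w - 1)
            right[y, nx] = left[y, x]
    return right
```

Editing depth then reduces to editing `shift_map` and re-applying it, with no re-render of the full 3D model.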
Abstract:
Constructing a user's face model using particle filters is disclosed, including: using a first particle filter to generate a new plurality of sets of extrinsic camera information particles corresponding to respective ones of a plurality of images based at least in part on a selected face model particle; selecting a subset of the new plurality of sets of extrinsic camera information particles corresponding to respective ones of the plurality of images; and using a second particle filter to generate a new plurality of face model particles corresponding to the plurality of images based at least in part on the selected subset of the new plurality of sets of extrinsic camera information particles.
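The abstract alternates two particle filters, one over extrinsic camera information and one over face model parameters. The single generic predict/weight/resample step below is a much-simplified sketch of what each filter iterates; the scalar state, the Gaussian likelihood, and all names are illustrative assumptions, not the claimed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, likelihood, noise=0.1):
    """One generic particle filter step: perturb (predict), score against
    the observation (weight), then resample proportionally to the weights.
    A simplified stand-in for the extrinsic-camera and face-model filters."""
    proposed = particles + rng.normal(0.0, noise, size=particles.shape)
    weights = np.array([likelihood(p, observation) for p in proposed])
    weights /= weights.sum()
    idx = rng.choice(len(proposed), size=len(proposed), p=weights)
    return proposed[idx]

# Toy usage: estimate a scalar parameter whose true value is 3.0.
particles = rng.normal(0.0, 1.0, size=200)
start_error = abs(particles.mean() - 3.0)
for _ in range(100):
    particles = particle_filter_step(
        particles, 3.0, lambda p, o: np.exp(-(p - o) ** 2))
end_error = abs(particles.mean() - 3.0)
```

In the claimed construction, the selected face model particle conditions the camera filter and the selected camera particles condition the face model filter, coupling two such loops across the plurality of images.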
Abstract:
The invention relates to forming a scene model and determining a first group of scene points, the first group of scene points being visible from a rendering viewpoint, determining a second group of scene points, the second group of scene points being at least partially obscured by the first group of scene points viewed from the rendering viewpoint, forming a first render layer using the first group of scene points and a second render layer using the second group of scene points, and providing the first and second render layers for rendering a stereo image. The invention also relates to receiving a first render layer and a second render layer comprising pixels, the first render layer comprising pixels corresponding to first parts of a scene viewed from a rendering viewpoint and the second render layer comprising pixels corresponding to second parts of the scene viewed from the rendering viewpoint, wherein the second parts of the scene are obscured by the first parts viewed from the rendering viewpoint, placing pixels of the first render layer and pixels of the second render layer in a rendering space, associating a depth value with the pixels, and rendering a stereo image using said pixels and said depth values.
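The core layering idea — nearest scene point per viewing ray goes to the first render layer, the next (obscured) point to the second — can be sketched minimally as follows. The ray keys, the `depth`/`color` fields, and the function name are illustrative assumptions.

```python
def split_render_layers(points_per_ray):
    """Partition scene points into a first (visible) render layer and a
    second (obscured) render layer per viewing ray from the rendering
    viewpoint. A minimal sketch of the two-layer scheme described above."""
    first_layer, second_layer = {}, {}
    for ray, points in points_per_ray.items():
        ordered = sorted(points, key=lambda p: p["depth"])
        first_layer[ray] = ordered[0]        # nearest point is visible
        if len(ordered) > 1:
            second_layer[ray] = ordered[1]   # next point is obscured
    return first_layer, second_layer

# Toy scene: one ray hits a chair in front of a wall, another hits only wall.
scene = {
    (0, 0): [{"depth": 5.0, "color": "wall"}, {"depth": 2.0, "color": "chair"}],
    (0, 1): [{"depth": 5.0, "color": "wall"}],
}
first, second = split_render_layers(scene)
```

Keeping the obscured points in a second layer is what lets a stereo renderer reveal them when the viewpoint shifts between the two eyes.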
Abstract:
A method and system for determining depth of an image using a single imager and a lens having a variable focal length is provided. The system comprises a microfluidic lens having a variable focal length controlled by a lens controller and an imager receiving an image of an object from the lens, wherein the imager is configured to receive a first image comprising a first plurality of pixels from the lens at a first focal length and a second image comprising a second plurality of pixels from the lens at a second focal length, the second focal length being different than the first focal length, non-volatile memory, wherein the first image and the second image are stored in the non-volatile memory, and a depth module configured to determine a distance between the lens and the object based on a comparison of the first image of the object and the second image of the object.
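One common way such a comparison can work is depth-from-focus: score each image's sharpness and report the distance that the sharpest image's focal length brings into focus. The sketch below illustrates that idea only; the focus measure, the `in_focus_distance` calibration table, and all names are assumptions, not the claimed depth module.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus measure: sharper images score higher."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def depth_from_focus(images_by_focal_length, in_focus_distance):
    """Return the object distance associated with the focal length whose
    captured image is sharpest. in_focus_distance is a hypothetical
    calibration table mapping each focal length to the distance it brings
    into focus; a simplified sketch, not the patented comparison."""
    best = max(images_by_focal_length,
               key=lambda f: sharpness(images_by_focal_length[f]))
    return in_focus_distance[best]
```

With a microfluidic lens, the two (or more) focal lengths can be swept electronically, so both captures come from the single imager the abstract describes.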
Abstract:
Systems and methods for a virtualized-computing or cloud-computing network with distributed input devices and at least one remote server computer for automatically analyzing received video, audio, and/or image inputs to provide social security and/or surveillance for a surveillance environment, surveillance event, and/or surveillance target.
Abstract:
A compressed light field imaging system is described. The light field 3D data is analyzed to determine an optimal subset of light field samples to be acquired or rendered, while the remaining samples are generated using multi-reference depth-image-based rendering. The light field is encoded and transmitted to the display. The 3D display directly reconstructs the light field and avoids the data expansion that usually occurs in conventional imaging systems. The present invention enables the realization of a full-parallax 3D compressed imaging system that achieves high compression performance while minimizing memory and computational requirements.