Abstract:
A method of producing vertically projecting three-dimensional images using virtual 3D models (22), wherein said 3D models (22) are created by the simultaneous localization and depth-mapping of the physical features of real objects. A camera (17) is used to take a first image (22A) from a first perspective, and a subsequent image (22N) from a subsequent perspective, wherein an autofocus system (18) provides a first set of depth mapping data and a subsequent set of depth mapping data. The first set of depth mapping data and the subsequent set of depth mapping data are used to generate a disparity mapping (21). A virtual 3D model (32) is created from the disparity mapping (21). The virtual 3D model (32) is imaged to obtain images that can be viewed as three-dimensional. Enhanced 3D effects are added to the virtual 3D model (32).
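As a rough illustration of going from depth data to a disparity mapping, the sketch below fuses two per-pixel depth estimates and applies the pinhole stereo relation d = f·B/Z. The averaging step and the baseline and focal-length values are assumptions for illustration, not taken from the abstract:

```python
import numpy as np

def disparity_from_depths(depth_first, depth_second, baseline=0.05, focal=800.0):
    # Average the two per-pixel depth estimates (metres), then convert
    # depth to disparity (pixels) with the pinhole stereo relation
    # d = f * B / Z. Baseline and focal length are illustrative values.
    depth = (np.asarray(depth_first, dtype=float) +
             np.asarray(depth_second, dtype=float)) / 2.0
    return focal * baseline / depth

print(disparity_from_depths([[2.0, 4.0]], [[2.0, 4.0]]))  # -> [[20. 10.]]
```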
Abstract:
A system for determining a person's field of view with respect to captured data (e.g., recorded audiovisual data). A multi-camera capture system (e.g., a bodycam, vehicular camera, stereoscopic, or 360-degree capture system) records audiovisual data of an event. The field of capture of a capture system, or of captured data combined from multiple capture systems, may be greater than the field of view of a person. Facial features (e.g., eyes, ears, nose, and jawlines) of the person may be detected from the captured data. Facial features may be used to determine the field of view of the person with respect to the captured data. Sensors that detect head orientation may also be used to determine the field of view of the person with respect to the captured data. The field of view of the person may be shown with respect to the captured data when the captured data is played back.
Abstract:
Computer-implemented method for generating a panoramic image intended for stereoscopic reproduction. According to the invention, the method comprises the following steps: a) capturing at least one image pair with a camera arrangement (1), wherein each image pair consists of a first image and a second image, and wherein a capture position (7a) of the first image is shifted and/or rotated relative to a capture position (7b) of the second image; b) rotating the camera arrangement (1) about a first rotation axis (3) by a rotation angle increment (Δ) from one rotation angle position (4a) to the next rotation angle position (4b); c) repeating steps a) and b) until a first rotation angle range (6), which is preferably predefinable, has been swept, in order to generate a sequence of captures of the at least one image pair for the successive rotation angle positions (4a, 4b, 4c, ..., 4n).
Abstract:
A camera apparatus, e.g., a stereoscopic camera apparatus, includes a slideable filter plate inserted into a slot in a dual element mounting plate. The slideable filter plate includes a plurality of selectable pairs of filter mounting positions and changes between pairs of filter mounting positions may be performed without altering camera alignments. Each pair of filter mounting positions includes a right eye filter mounting position and a left eye filter mounting position. Each pair of filter mounting positions in the slideable filter plate includes a pair of filters or no filters.
Abstract:
High dynamic range depth generation is described for 3D imaging systems. One example includes receiving a first exposure of a scene having a first exposure level, determining a first depth map for the first exposure, receiving a second exposure of the scene having a second exposure level, determining a second depth map for the second exposure, and combining the first and second depth maps to generate a combined depth map of the scene.
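One plausible way to combine the two depth maps is a per-pixel fallback: where the first exposure yields no valid depth (e.g. saturated or underexposed regions), take the value from the second. The validity marker and fallback scheme are assumptions for illustration, not details from the abstract:

```python
import numpy as np

def combine_depth_maps(depth_low, depth_high, invalid=0.0):
    # Hypothetical merge: keep the low-exposure depth where it is valid,
    # otherwise fall back to the high-exposure depth. A value equal to
    # `invalid` (here 0.0) marks pixels with no recovered depth.
    return np.where(depth_low != invalid, depth_low, depth_high)

low  = np.array([[1.2, 0.0], [0.0, 2.5]])   # 0.0 marks missing depth
high = np.array([[1.1, 3.0], [2.8, 2.4]])
print(combine_depth_maps(low, high))        # -> [[1.2 3. ] [2.8 2.5]]
```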
Abstract:
The invention is directed to recording, transmitting, and displaying a three-dimensional image of a user's face in a video stream. Light reflected from a curved or geometrically shaped screen is employed to provide multiple perspective views of the user's face, which are transformed into the image that is communicated to remotely located other users. A head-mounted projection display system is employed to capture the reflected light. The system includes a frame that, when worn by a user, wraps around and grips the user's head. At least two separate image capture modules are included on the frame, generally positioned adjacent to the user's left and right eyes when the system is worn. Each module includes one or more sensor components, such as cameras, arranged to detect at least the non-visible light reflected from a screen positioned in front of the user.
Abstract:
A method of compressing a stereoscopic video including a left view frame and a right view frame is provided, the method including: determining a texture saliency value for a first block in the left view frame by intra prediction (1101); determining a motion saliency value for the first block by motion estimation (1102); determining a disparity saliency value between the first block and a corresponding second block in the right view frame (1103); determining a quantization parameter based on the disparity saliency value, the texture saliency value, and the motion saliency value (1104); and performing quantization of the first block in accordance with the quantization parameter (1105).
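A minimal sketch of how the three saliency values might feed into a quantization parameter: higher combined saliency gets a lower QP (finer quantization). The weights, QP range, and linear mapping below are illustrative assumptions, not from the abstract:

```python
def quantization_parameter(texture_s, motion_s, disparity_s,
                           base_qp=32, qp_min=20, qp_max=44):
    # Hypothetical scheme: blend the three saliency values (each in [0, 1])
    # with fixed weights, then shift the QP down for salient blocks and up
    # for non-salient ones, clamped to [qp_min, qp_max].
    saliency = 0.4 * texture_s + 0.3 * motion_s + 0.3 * disparity_s
    qp = round(base_qp + (0.5 - saliency) * (qp_max - qp_min))
    return max(qp_min, min(qp_max, qp))

print(quantization_parameter(1.0, 1.0, 1.0))  # fully salient  -> 20
print(quantization_parameter(0.0, 0.0, 0.0))  # non-salient    -> 44
```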
Abstract:
A method includes applying power in a plurality of pulses to a projection source to control the projection source to emit coherent light during an exposure interval comprising at least two sub-intervals, the applied power causing the projection source to have different temperatures during first and second sub-intervals of the at least two sub-intervals; and emitting light from the projection source, wherein the projection source emits light having different wavelengths during the first and second sub-intervals in accordance with the different temperatures of the projection source.
Abstract:
A first image capture component may capture a first image of a scene, and a second image capture component may capture a second image of the scene. There may be a particular baseline distance between the first image capture component and the second image capture component, and at least one of the first image capture component or the second image capture component may have a focal length. A disparity may be determined between a portion of the scene as represented in the first image and the portion of the scene as represented in the second image. Possibly based on the disparity, the particular baseline distance, and the focal length, a focus distance may be determined. The first image capture component and the second image capture component may be set to focus to the focus distance.
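Determining a focus distance from disparity, baseline, and focal length rests on the standard stereo triangulation relation Z = f·B/d, which can be sketched as:

```python
def focus_distance(disparity_px, baseline_m, focal_length_px):
    # Standard stereo triangulation: distance Z = f * B / d.
    # Units: disparity and focal length in pixels, baseline in metres.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 10 mm baseline, 20 px disparity -> 0.5 m
print(focus_distance(20, 0.010, 1000))  # -> 0.5
```

Both capture components would then be driven to focus at the returned distance; the example values are illustrative, not taken from the abstract.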