Abstract:
The subject disclosure is directed towards a hybrid stereo image/motion parallax system that uses stereo 3D vision technology for presenting different images to each eye of a viewer, in combination with motion parallax technology to adjust each image for the positions of a viewer's eyes. In this way, the viewer receives both stereo cues and parallax cues as the viewer moves while viewing a 3D scene, which tends to result in greater visual comfort/less fatigue to the viewer. Also described is the use of goggles for tracking viewer position, including training a computer vision algorithm to recognize goggles instead of only heads/eyes.
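The hybrid described above renders a separate view for each eye while also moving both views with the tracked head. A minimal sketch of the per-eye camera placement, assuming a fixed interpupillary distance (the ~64 mm default, the `eye_positions` name, and the right-axis convention are illustrative, not from the disclosure):

```python
import numpy as np

def eye_positions(head, ipd=0.064, right_axis=(1.0, 0.0, 0.0)):
    """Place two virtual cameras half an interpupillary distance (IPD)
    to either side of the tracked head position.

    Re-rendering both views whenever the head moves supplies the
    motion-parallax cue on top of the stereo cue.
    """
    head = np.asarray(head, dtype=float)
    right = np.asarray(right_axis, dtype=float)
    offset = 0.5 * ipd * right
    return head - offset, head + offset
```

Tracking the goggles rather than bare eyes, as the abstract notes, only changes where `head` comes from; the stereo camera placement is unchanged.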
Abstract:
A method is provided of generating virtual reality multimedia at a developer computing device having a processor interconnected with a memory. The method comprises: capturing, at the processor, a point cloud representing a scene, the point cloud including colour and depth data for each of a plurality of points corresponding to locations in a capture volume; generating, at the processor, a two-dimensional projection of a selected portion of the point cloud, the projection including the colour and depth data for the selected portion; and storing the two-dimensional projection in the memory.
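The two-dimensional projection step can be sketched as a pinhole projection of the coloured points into an image that keeps both colour and depth per pixel. The intrinsics, image size, and z-buffer policy below are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def project_point_cloud(points, colors, fx=100.0, fy=100.0, cx=32.0, cy=32.0,
                        width=64, height=64):
    """Project a coloured point cloud onto a 2D image plane.

    points : (N, 3) array of x, y, z camera-space coordinates (z > 0).
    colors : (N, 3) array of RGB values.
    Returns an (H, W, 4) array holding RGB plus depth per pixel; a simple
    z-buffer keeps the nearest point where several project to one pixel.
    """
    image = np.zeros((height, width, 4), dtype=np.float32)
    image[..., 3] = np.inf  # depth channel, initialised to "far"
    for (x, y, z), rgb in zip(points, colors):
        if z <= 0:
            continue  # behind the camera plane
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height and z < image[v, u, 3]:
            image[v, u, :3] = rgb
            image[v, u, 3] = z
    return image
```

Storing depth alongside colour is what lets a consumer of the stored projection later re-synthesise nearby viewpoints.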
Abstract:
The present technology relates to a display control apparatus, a display control method, and a program which can implement a UI that allows a user to intuitively select an intended region in an image. A position of a display surface on which a display apparatus displays an image is detected, and a projected image obtained by projecting an image model of a predetermined image is displayed on the display surface along a straight line passing through a position of the user and a pixel of the display surface whose position is detected. The present technology can be applied to apparatuses having an image displaying function, such as a smartphone and a tablet terminal.
Abstract:
A method and system for displaying a 2D representation of a 3D world on an image plane of a simulator. The image plane defines a fixed viewing region of the replica environment of the simulator and also corresponds to a view observed by an operator of the simulator. The method includes the steps of determining a head position of the operator of the replica environment and modifying a viewing volume of the 3D world based on the head position of the operator while keeping the image plane constant to form a modified viewing volume. The 2D representation based on the modified viewing volume is then generated and displayed on the image plane.
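Keeping the image plane constant while modifying the viewing volume for the tracked head is the classic off-axis (asymmetric) frustum construction. A minimal sketch, assuming screen-centred coordinates with the head at distance `hz` in front of the screen (the function name and the glFrustum-style return convention are assumptions):

```python
def off_axis_frustum(head, screen_half_w, screen_half_h, near):
    """Asymmetric viewing volume for a fixed image plane (screen) from a
    tracked head position.

    head : (x, y, z) of the operator's head in screen-centred coordinates,
           z > 0 being the distance from the screen plane.
    Returns (left, right, bottom, top) at the near plane, as used by a
    glFrustum-style projection; the image plane itself never moves.
    """
    hx, hy, hz = head
    scale = near / hz  # similar triangles: near plane vs. screen plane
    left = (-screen_half_w - hx) * scale
    right = (screen_half_w - hx) * scale
    bottom = (-screen_half_h - hy) * scale
    top = (screen_half_h - hy) * scale
    return left, right, bottom, top
```

When the head is centred the frustum is symmetric; as the operator moves sideways the frustum skews so that the fixed image plane shows the correct perspective.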
Abstract:
The present invention relates to a method for generating 3D viewpoint video content. The method comprises the steps of receiving videos shot by cameras distributed to capture an object; forming a 3D graphic model of at least part of the scene of the object based on the videos; receiving information related to viewpoint and 3D region of interest (ROI) in the object; and combining the 3D graphic model and the videos related to the 3D ROI to form a hybrid 3D video content.
Abstract:
A method and a system for a mobile terminal to achieve user interaction by simulating a real scene are disclosed. The method comprises: invoking a preset 3D virtual scene corresponding to the real scene and formulating a scene task for the 3D virtual scene, or using a real scene photo captured by the mobile terminal to form a 3D virtual scene corresponding to the real scene and then formulating a scene task for the 3D virtual scene by the mobile terminal; uploading the information of the 3D virtual scene and the information of the scene task to a server to obtain a shared link; searching for nearby mobile terminals and transmitting the shared link to the nearby mobile terminals, sending an invitation to the nearby mobile terminals and waiting for participation of the nearby mobile terminals; if the invitation is received by the nearby mobile terminals, then reading the information of the 3D virtual scene and the information of the scene task stored in the server and uploading the corresponding personal information by the nearby mobile terminals; and changing locations of user roles in the 3D virtual scene according to positioning information of the mobile terminal, receiving a user operation instruction to make interactions via the user roles, and recording the user behaviors corresponding to the personal information.
Abstract:
A method and an apparatus for achieving transformation of a virtual view into a 3D view are provided. The method comprises the following steps of: S1. capturing position coordinates of a human eye by a human-eye tracking module; S2. determining a rotation angle of a virtual scene according to the position coordinates of the human eye and coordinates of a center of a screen of a projection displaying module and rotating the virtual scene according to the rotation angle to obtain a virtual holographic 3D view matrix by a first image processing module; S3. determining a shearing angle for each of the viewpoints according to coordinates of a center of the virtual scene, position coordinates of a viewer in the scene and coordinates of the viewpoints to generate a shearing matrix for each viewpoint in one-to-one correspondence, and post-multiplying the shearing matrix by the corresponding viewpoint model matrix to generate a left view and a right view by a second image processing module; and S4. projecting the left view and the right view of each of the viewpoints by the projection displaying module. In this way, the method and the apparatus for achieving transformation of a virtual view into a 3D view according to the present disclosure can achieve the purpose of holographic 3D displaying.
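Steps S2 and S3 can be sketched as two small matrix-building helpers. The yaw-only rotation, the simplified x-by-z shear, and all function names are illustrative assumptions; the abstract does not fix the exact matrix forms:

```python
import math
import numpy as np

def rotation_angle(eye, screen_center):
    """Step S2 (sketch): yaw angle turning the virtual scene toward the
    tracked eye, from the horizontal offset of the eye relative to the
    screen centre."""
    dx = eye[0] - screen_center[0]
    dz = eye[2] - screen_center[2]
    return math.atan2(dx, dz)

def shear_matrix(scene_center, viewer, viewpoint):
    """Step S3 (sketch): x-shear proportional to the viewpoint's
    horizontal offset from the viewer, normalised by the viewer's depth
    relative to the scene centre. Post-multiply the viewpoint's model
    matrix by this to obtain that viewpoint's sheared view."""
    depth = viewer[2] - scene_center[2]
    s = (viewpoint[0] - viewer[0]) / depth
    m = np.eye(4)
    m[0, 2] = s  # shear x by z
    return m
```

Generating the left and right views then amounts to evaluating `shear_matrix` once per eye viewpoint and applying `model @ shear` before projection.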
Abstract:
The invention provides a method and apparatus for rendering an object for a plurality of 3D displays. The method comprises determining one of the plurality of 3D displays to render the object according to the relationship, in a global coordinate system, between the position of the object and a region defined from a user's eyes to the 3D display; and rendering the object on the determined 3D display.
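The display-selection test can be sketched as a containment check: the region from the eyes to a display is the pyramid through the display rectangle, and the object belongs to the first display whose pyramid contains it. The screens-parallel-to-the-xy-plane assumption and the `display_for_object` name are illustrative:

```python
def display_for_object(obj, eye, displays):
    """Pick the display whose eye-to-screen region contains the object.

    Each display is (center, half_w, half_h) with the screen assumed
    parallel to the xy-plane; its region is the pyramid from the eye
    through the display rectangle, extended behind the screen. Returns
    the index of the first matching display, or None.
    """
    for i, (center, half_w, half_h) in enumerate(displays):
        dz = center[2] - eye[2]
        oz = obj[2] - eye[2]
        if oz / dz <= 0:
            continue  # object is not on the display side of the eye
        t = dz / oz  # project the object onto the screen plane along the eye ray
        px = eye[0] + (obj[0] - eye[0]) * t
        py = eye[1] + (obj[1] - eye[1]) * t
        if abs(px - center[0]) <= half_w and abs(py - center[1]) <= half_h:
            return i
    return None
```

In a multi-display setup this check, run per object in the shared global coordinate system, routes each object to the one display on which it should be rendered.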
Abstract:
An eye-point detection unit (26) tracks the eye-points of a user after detecting the user who views a three-dimensional image including left-eye and right-eye parallax images obtained when a subject is viewed by the user from a given position established as a view reference position. When the movement speed of the eye-points becomes equal to or greater than a predetermined magnitude, a dynamic parallax correction unit (30) determines, on the basis of the movement amounts of the eye-points, dynamic parallax correction amounts for the respective left-eye and right-eye parallax images, thereby generating a dynamic-parallax-corrected three-dimensional image and outputting it to a display unit (36). When the movement speed of the eye-points becomes less than the predetermined magnitude, the dynamic parallax correction unit (30) stepwise reduces the correction amount of the dynamic parallax correction until the left-eye and right-eye parallax images return to the respective parallax images obtained when the subject is viewed from the view reference position, thereby generating and outputting a three-dimensional image to the display unit (36).
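The two regimes of the dynamic parallax correction unit (fast eye movement drives the correction; slow movement decays it stepwise back to the view-reference parallax) can be sketched as one update function. The threshold, gain, and decay-step values are illustrative assumptions, not from the abstract:

```python
def update_correction(correction, eye_speed, eye_delta,
                      speed_threshold=0.05, gain=0.5, decay_step=0.01):
    """One update of the dynamic parallax correction amount.

    While the eye-point moves fast (speed >= threshold) the correction
    follows the movement amount; once it slows down, the correction is
    reduced stepwise until the parallax images return to those for the
    view reference position (correction == 0).
    """
    if eye_speed >= speed_threshold:
        return correction + gain * eye_delta
    if correction > decay_step:
        return correction - decay_step
    if correction < -decay_step:
        return correction + decay_step
    return 0.0  # snap to the view-reference parallax once within one step
```

The stepwise decay, rather than an immediate reset, is what avoids a visible jump back to the reference-position parallax when the viewer stops moving.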
Abstract:
A stereoscopic image display apparatus (10) which allows all regions of a stereoscopic image to be visually recognized accurately without using a varifocal lens, and which can form a natural three-dimensional image on the retina with a reduced processing load on the computer, even when the image is viewed by a plurality of viewers from arbitrary positions. In the apparatus (10) for generating the stereoscopic image, a critical parallax, that is, the boundary of parallax capable of forming a three-dimensional image on a retina of a viewer, is calculated; a space including an object is divided into subspaces using rectangular parallelepipeds inscribed in a sphere having the critical parallax as its diameter; a stereoscopic image is generated for each divided space; and the generated stereoscopic images are pasted together to generate a single stereoscopic image.
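Under one reading of the division step, the subspaces are cubes inscribed in a sphere whose diameter equals the critical parallax, so each cube's space diagonal equals that diameter. A minimal sketch of sizing the grid (the cubic-cell reading and the `subdivide` helper are assumptions):

```python
import math

def subdivide(bounds_min, bounds_max, critical_parallax):
    """Count the cells per axis when the object space is split into cubes
    inscribed in a sphere of diameter equal to the critical parallax.

    A cube inscribed in a sphere of diameter d has side d / sqrt(3),
    since its space diagonal (side * sqrt(3)) must equal d.
    Returns the number of cells along each of the three axes.
    """
    side = critical_parallax / math.sqrt(3.0)
    return tuple(
        max(1, math.ceil((hi - lo) / side))
        for lo, hi in zip(bounds_min, bounds_max)
    )
```

Generating one stereoscopic image per cell keeps the parallax inside each cell below the critical boundary, which is what lets the pasted-together result fuse on the retina.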