Abstract:
A simulated 3D image display method is provided for a display device. The method includes capturing at least two images of a scene for 3D scene reconstruction; extracting depth and color information from the at least two images; continuously tracking movement of a user to determine a relative position between the user and the display device; and, based on the relative position, using an interpolation algorithm to reconstruct an image of the scene corresponding to the user's current viewpoint from a plurality of view images of a plurality of viewpoints generated from the at least two images, for display on a display screen of the display device.
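The viewpoint-dependent interpolation step can be sketched as follows, assuming grayscale views stored as nested lists, a linear blend between two neighboring views, and a linear mapping from tracked user position to blend weight. The function names and the mapping are illustrative assumptions, not taken from the patent.

```python
def select_alpha(user_x, screen_width):
    """Map the tracked user position to an interpolation weight in [0, 1]
    (hypothetical linear mapping; the actual method uses the full relative
    position between the user and the display device)."""
    return max(0.0, min(1.0, user_x / screen_width))

def interpolate_view(left_view, right_view, alpha):
    """Blend two view images into an intermediate-viewpoint image.
    alpha = 0 reproduces the left view, alpha = 1 the right view."""
    return [
        [(1 - alpha) * l + alpha * r for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left_view, right_view)
    ]
```

In practice the blend would run over the full set of generated view images, picking the two views that bracket the user's current position.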
Abstract:
A method is provided for a 3D virtual training system. The 3D virtual training system includes a 3D display screen and an operating device, and the method includes initializing a virtual medical training session to be displayed on the 3D display screen, where 3D display contents include at least a 3D virtual image of a surgery site. The method also includes obtaining user interaction inputs via the operating device and the 3D display screen, and displaying on the 3D display screen a virtual surgery device and a virtual surgery operation on the surgery site by the virtual surgery device. Further, the method includes determining an operation consequence based on the user interaction inputs and the surgery site, rendering the operation consequence based on the surgery site and effects of the virtual surgery operation, and displaying 3D virtual images of the rendered operation consequence on the 3D display screen.
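The determine-consequence-then-render loop can be illustrated with a toy state model. The `SurgerySite` fields, the `scalpel` tool, and the effect rule are all hypothetical simplifications of how an operation consequence might be determined from a user interaction input.

```python
from dataclasses import dataclass, field

@dataclass
class SurgerySite:
    """Toy model of the surgery-site state (fields are assumptions)."""
    tissue_integrity: float = 1.0
    incisions: list = field(default_factory=list)

def apply_operation(site, tool, depth_mm):
    """Determine an operation consequence from one user interaction input.
    Only a 'scalpel' tool is modeled; the effect rule is illustrative."""
    if tool == "scalpel":
        site.incisions.append(depth_mm)
        site.tissue_integrity = max(0.0, site.tissue_integrity - 0.1 * depth_mm)
    return site
```

The rendered consequence displayed on the 3D screen would then be driven by the updated site state after each input.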
Abstract:
A method, an apparatus, and a smart wearable device for fusing augmented reality (AR) and virtual reality (VR) are provided. The method comprises: acquiring, in real time from an AR operation, real-world scene information collected by dual cameras that mimic human eyes; generating a fused scene based on VR scene information from a VR operation and the acquired real-world scene information; and displaying the fused scene.
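The abstract does not specify the fusion operator; one minimal sketch is per-pixel compositing in which virtual pixels marked transparent let the real-world camera pixel show through (a hypothetical keying rule). With dual cameras, one such call would run per eye.

```python
def fuse_frames(real_frame, virtual_frame, transparent=0):
    """Composite one VR frame over one real-world camera frame.
    Virtual pixels equal to `transparent` let the real-world pixel
    show through (an assumed keying rule, not the patent's operator)."""
    return [
        [r if v == transparent else v for r, v in zip(rrow, vrow)]
        for rrow, vrow in zip(real_frame, virtual_frame)
    ]
```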
Abstract:
The present disclosure provides a 3D image display method for displaying a 3D image on a display screen of a handheld terminal. The method includes the following steps. The handheld terminal detects a current screen display mode to determine whether a horizontal-vertical screen display mode change is triggered. When the mode change is triggered, the handheld terminal determines the current screen display mode as a horizontal mode or a vertical mode. The handheld terminal determines 3D image arrangement parameters based on the current screen display mode. The handheld terminal rearranges the 3D image to be displayed based on the determined 3D image arrangement parameters to obtain a revised 3D image. The handheld terminal displays the revised 3D image in the current screen display mode of the handheld terminal.
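The rearrangement step can be sketched by switching the interleave direction of the left/right sub-images with the screen mode. The specific scheme below (column interleaving when horizontal, row interleaving when vertical) is an illustrative assumption about the arrangement parameters, not the patent's exact layout.

```python
def rearrange(left, right, mode):
    """Re-interleave left/right sub-images for the current screen mode.
    Views are nested lists of pixels; the interleave choice per mode
    is a hypothetical arrangement parameter."""
    if mode == "horizontal":
        # interleave columns: L R L R ... within each row
        return [
            [px for pair in zip(lrow, rrow) for px in pair]
            for lrow, rrow in zip(left, right)
        ]
    if mode == "vertical":
        # interleave whole rows: L-row, R-row, L-row, ...
        return [row for pair in zip(left, right) for row in pair]
    raise ValueError(f"unknown screen mode: {mode}")
```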
Abstract:
A method for detecting three-dimensional (3D) pseudoscopic images and a display device for detecting 3D pseudoscopic images are provided. The method includes extracting corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image; calculating an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view; based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, determining whether the current 3D image is pseudoscopic or not; and processing the current 3D image when it is determined that the current 3D image is pseudoscopic.
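The comparison of average feature-point coordinates can be sketched directly. The decision rule below (flag the pair when the first view's features sit, on average, to the right of the second view's, i.e. the views appear swapped) is a hypothetical instance of the patent's comparison; the abstract does not state the exact inequality.

```python
def mean_x(points):
    """Average x-coordinate of matched feature points [(x, y), ...]."""
    return sum(x for x, _ in points) / len(points)

def is_pseudoscopic(first_view_points, second_view_points):
    """Assumed decision rule: the 3D image is pseudoscopic when the
    first (intended-left) view's features lie, on average, to the
    right of the second view's."""
    return mean_x(first_view_points) > mean_x(second_view_points)
```

When the check fires, the processing step could be as simple as swapping the two views before display.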
Abstract:
An object tracking method is provided. The method includes obtaining current position coordinates of a tracked object for a first time using a tracking algorithm and initializing a filter acting on a time-varying system based on the obtained current position coordinates. The method further includes updating a system state through the filter when the tracking algorithm outputs new current position coordinates of the tracked object with a spatial delay, and comparing a speed of the tracked object to a compensation determination threshold. Further, the method includes, when the speed of the tracked object is greater than the compensation determination threshold, compensating the new current position coordinates of the tracked object for the spatial delay through the filter, and outputting the compensated current position coordinates.
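One way to realize the filter stage is an alpha-beta (constant-velocity) filter on one coordinate axis; the patent's "filter acting on a time-varying system" could equally be a Kalman filter. The class name, gains, and the extrapolation-by-delay compensation rule below are all illustrative assumptions.

```python
class DelayCompensatingFilter:
    """Alpha-beta filter sketch with speed-gated delay compensation."""

    def __init__(self, x0, alpha=0.85, beta=0.3):
        self.x, self.v = x0, 0.0          # position estimate, velocity estimate
        self.alpha, self.beta = alpha, beta

    def update(self, measured, dt):
        """Fold a new (delayed) measurement into the state estimate."""
        predicted = self.x + self.v * dt
        residual = measured - predicted
        self.x = predicted + self.alpha * residual
        self.v += self.beta * residual / dt
        return self.x

    def compensate(self, delay, speed_threshold):
        """Push the estimate forward by the known delay, but only when
        the object moves fast enough to cross the threshold."""
        if abs(self.v) > speed_threshold:
            return self.x + self.v * delay
        return self.x
```

Gating the compensation on speed avoids amplifying filter noise for slow or stationary objects, where the delay causes little visible error.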
Abstract:
A method for 2D/3D switchable displaying includes: detecting a 3D display area in real time; when a change of the 3D display area is detected, calculating a gradient coefficient based on a number of frames of the change and a rate of the change of the 3D display area; adjusting a 3D image area and a 3D grating area based on the calculated gradient coefficient; and performing a stereoscopic display using the adjusted 3D image area and the adjusted 3D grating area. When the 3D display area starts and ends a change, the display is gradually switched to 2D and back to 3D, respectively, so that a gradient visual effect is achieved, and the image jitter and 3D-effect errors caused by unsynchronized pixel arrangement and hardware control in the 3D display area are avoided.
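A minimal sketch of the gradient step, assuming a linear ramp over the transition frames and rectangular `(x, y, w, h)` areas; the patent derives its coefficient from the number of frames and the rate of area change, so the exact formula here is an assumption.

```python
def gradient_coefficient(frame_index, total_frames):
    """Linear fade weight in [0, 1] for frame `frame_index` of a
    `total_frames`-frame transition (hypothetical linear ramp)."""
    return min(1.0, max(0.0, frame_index / total_frames))

def blend_area(start_area, end_area, g):
    """Interpolate the (x, y, w, h) bounds of the 3D image/grating
    area by coefficient g, rounding to whole pixels."""
    return tuple(round(s + g * (e - s)) for s, e in zip(start_area, end_area))
```

Applying the same coefficient to both the image area and the grating area keeps the pixel arrangement and the hardware-controlled grating in step during the transition.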
Abstract:
A method for processing two-dimensional (2D)/three-dimensional (3D) images on a same display area is provided. The method includes receiving image data containing both 2D and 3D images and creating a plurality of image containers including at least one top level image container and at least one sub-level image container, where each image container is provided with a display dimension identity and a coverage area identity. The method also includes determining display positions, dimensions, and occlusion relationships of the 2D and 3D images based on the plurality of image containers. Further, the method includes displaying images in the image containers with corresponding display dimension identities and coverage area identities with the display positions, dimensions, and occlusion relationships, where the display dimension identities include a 2D display and a 3D display.
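The container hierarchy can be sketched as a small tree whose nodes carry the two identities named in the abstract. The occlusion rule below (deeper, later-listed containers draw over their ancestors) is an assumed convention; the patent does not spell out the traversal order.

```python
from dataclasses import dataclass, field

@dataclass
class ImageContainer:
    """Hypothetical container with a display dimension identity and a
    coverage area identity, plus optional sub-level containers."""
    dimension: str                      # "2D" or "3D"
    area: tuple                         # coverage area: (x, y, w, h)
    children: list = field(default_factory=list)

def render_order(container, depth=0):
    """Depth-first flatten of the container tree; entries later in the
    list occlude earlier ones (an assumed occlusion rule)."""
    out = [(depth, container.dimension, container.area)]
    for child in container.children:
        out.extend(render_order(child, depth + 1))
    return out
```

A renderer would then draw each entry in order, switching the display mode per entry according to its dimension identity.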