Abstract:
A method is provided for controlling stereoscopic display. The method includes a collection device obtaining a position variation between a spatial position of a viewer at a current time and a spatial position of the viewer at a previous time, wherein the position variation is a translational offset of the viewer's spatial position parallel to a display panel. The method also includes an adjusting device adjusting a stereoscopic display apparatus based on the spatial position of the viewer at the current time and the position variation.
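As a non-authoritative illustration of the position-tracking and adjustment steps described above, the following sketch assumes a simple planar position model; the names (Position, position_variation, adjust_display) are hypothetical and this is not the claimed apparatus.

```python
# Illustrative sketch only: hypothetical viewer-tracking step for a
# stereoscopic display; names and the adjustment model are assumptions.
from dataclasses import dataclass

@dataclass
class Position:
    x: float  # horizontal offset relative to the display panel (mm)
    y: float  # vertical offset relative to the display panel (mm)

def position_variation(current: Position, previous: Position) -> Position:
    """Translational offset of the viewer parallel to the display panel."""
    return Position(current.x - previous.x, current.y - previous.y)

def adjust_display(current: Position, delta: Position) -> None:
    # Placeholder: a real system would shift the rendered views or the
    # light-separation element in proportion to the viewer's movement.
    print(f"viewer at ({current.x}, {current.y}), moved by ({delta.x}, {delta.y})")

prev = Position(0.0, 0.0)
curr = Position(12.0, -3.0)          # e.g. from a camera-based tracker
adjust_display(curr, position_variation(curr, prev))
```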
Abstract:
A method, an apparatus, and a smart wearable device for fusing augmented reality and virtual reality are provided. The method for fusing augmented reality (AR) and virtual reality (VR) comprises: acquiring, in real time from an AR operation, real-world scene information collected by dual cameras mimicking human eyes; generating a fused scene based on virtual reality scene information from a VR operation and the acquired real-world scene information; and displaying the fused scene.
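The following sketch illustrates one possible form of the fusion step, under the assumption that a simple per-eye alpha blend stands in for whatever fusion rule the system actually uses; all names and values are hypothetical.

```python
# Illustrative sketch only: fusing a real-world camera frame with a VR frame.
# The arrays below are stand-ins for frames from the dual cameras and the VR renderer.
import numpy as np

def fuse(real_frame: np.ndarray, vr_frame: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a real-world camera frame with a rendered VR frame (assumed rule)."""
    return (alpha * real_frame + (1.0 - alpha) * vr_frame).astype(np.uint8)

# Per-eye step: the left frame comes from one of the dual cameras mimicking
# human eyes and is combined with the corresponding VR view.
left_real = np.zeros((720, 1280, 3), dtype=np.uint8)    # stand-in for a camera frame
left_vr = np.full((720, 1280, 3), 128, dtype=np.uint8)  # stand-in for a rendered VR view
fused_left = fuse(left_real, left_vr)
```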
Abstract:
A multiple-viewer auto-stereoscopic display apparatus includes a display unit, an eye-tracking unit, a light transmission control unit, a light separation unit, and a synchronization control unit. The display unit is configured to display a view sequence of a plurality of view images of a 3D image in multiple viewing zones to one or more viewers. The light transmission control unit is configured to control light transmission to a particular viewing zone. The light separation unit is configured to separate the plurality of view images so that the viewers perceive a 3D display. Further, the synchronization control unit is configured to synchronize refreshing of the display unit and the light transmission control unit, wherein a refreshing rate of the display unit is equal to a refreshing rate of the light transmission control unit, and to dynamically adjust the view sequence based on position information of the one or more viewers provided by the eye-tracking unit.
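A rough sketch of the synchronization and view-sequence adjustment is given below; the shared refresh rate, the zone-to-view reordering rule, and all names are assumptions made purely for illustration.

```python
# Illustrative sketch only: the display unit and the light transmission
# control unit share one refresh clock, and the view sequence is reordered
# from tracked viewer positions. The reordering rule is an assumption.
import time

REFRESH_HZ = 120                      # same rate drives both units (assumed value)
FRAME_PERIOD = 1.0 / REFRESH_HZ

def view_sequence_for(viewer_zones: list[int], num_views: int) -> list[int]:
    """Assumed rule: start the repeating view order at the first occupied zone."""
    start = min(viewer_zones) % num_views if viewer_zones else 0
    return [(start + i) % num_views for i in range(num_views)]

def refresh_once(sequence: list[int]) -> None:
    # Placeholder for: the display unit shows the views in `sequence` while the
    # light transmission control unit opens the matching viewing zones.
    pass

zones = [2, 5]                        # e.g. from the eye-tracking unit
for _ in range(3):                    # three synchronized refresh cycles
    refresh_once(view_sequence_for(zones, num_views=8))
    time.sleep(FRAME_PERIOD)
```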
Abstract:
A three-dimensional (3D) display system is provided for displaying a 3D image. The 3D display system includes a display panel having pixels arranged at a pixel spatial period, and a lens grating disposed together with the display panel. The lens grating further includes a plurality of lens units arranged at a first period and a plurality of non-lens units arranged at a second period. The second period is greater than one-third of the first period and less than two-thirds of the first period, and the plurality of lens units and the plurality of non-lens units are arranged such that the lens grating has a spatial period different from the pixel spatial period, to reduce the Moiré fringe effect between the lens grating and the pixels of the display panel.
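The stated period relationship amounts to T1/3 < T2 < 2·T1/3, where T1 is the lens-unit period and T2 is the non-lens-unit period. A small check of this constraint, with made-up numbers:

```python
# Illustrative check only: the abstract's constraint on the non-lens-unit
# period T2 relative to the lens-unit period T1. Values are made up.
def period_ok(t1: float, t2: float) -> bool:
    return t1 / 3.0 < t2 < 2.0 * t1 / 3.0

print(period_ok(t1=0.9, t2=0.4))   # True: 0.3 < 0.4 < 0.6
print(period_ok(t1=0.9, t2=0.7))   # False: 0.7 falls outside (0.3, 0.6)
```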
Abstract:
A three-dimensional (3D) display system is provided. The 3D display system includes a display device and a switchable grating. The display device is configured to display a set of images with parallax for 3D display. The switchable grating comprises a plurality of switchable grating units, each having a variable width and a variable refractive index, i.e., optical parameters controllable by electric signals. The switchable grating is coupled with the display device and configured to enable directional light transmission so as to separate light of the set of images into predetermined viewing directions to effect the 3D display. Further, the switchable grating includes a first substrate having a plurality of first-type electrodes; a second substrate having at least one second-type electrode; and an optical material contained between the first substrate and the second substrate.
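A possible control-side representation of such switchable grating units is sketched below; the field names and the voltage-to-refractive-index relation are assumptions, not the claimed design.

```python
# Illustrative sketch only: a control-side representation of switchable
# grating units whose width and refractive index follow applied voltages.
from dataclasses import dataclass

@dataclass
class GratingUnit:
    first_electrodes: list[int]   # indices of first-type electrodes on the first substrate
    voltage: float                # drive voltage applied via the second-type electrode

    def refractive_index(self) -> float:
        # Hypothetical monotone mapping from voltage to the effective index of the
        # optical material between the substrates; the real relation is device-specific.
        return 1.5 + 0.1 * min(self.voltage, 5.0) / 5.0

# Four units, each spanning two first-type electrodes, driven at 2.5 V.
units = [GratingUnit(first_electrodes=[i, i + 1], voltage=2.5) for i in range(0, 8, 2)]
```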
Abstract:
A method is provided for a 3D virtual training system. The 3D virtual training system includes a 3D display screen and an operating device, and the method includes initializing a virtual medical training session to be displayed on the 3D display screen, where 3D display contents include at least a 3D virtual image of a surgery site. The method also includes obtaining user interaction inputs via the operating device and the 3D display screen, and displaying on the 3D display screen a virtual surgery device and a virtual surgery operation on the surgery site by the virtual surgery device. Further, the method includes determining an operation consequence based on the user interaction inputs and the surgery site, rendering the operation consequence based on the surgery site and effects of the virtual surgery operation, and displaying 3D virtual images of the rendered operation consequence on the 3D display screen.
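The order of operations in this method can be sketched as a simplified interaction loop; every class and function name below is a hypothetical stand-in rather than the system's API.

```python
# Illustrative sketch only: the sequence of steps in the described method.
# The classes below are minimal stand-ins, not the training system's API.
class SurgerySite:
    def apply(self, user_input: str) -> str:
        # Determine the operation consequence from the input and the site.
        return f"rendered consequence of {user_input}"

class Screen3D:
    def show(self, content: str) -> None:
        print("3D display:", content)

def run_training_session(screen: Screen3D, site: SurgerySite, inputs: list[str]) -> None:
    screen.show("3D virtual image of the surgery site")    # initialize the session
    for user_input in inputs:                               # inputs from the operating device
        screen.show(f"virtual surgery device performing {user_input}")
        screen.show(site.apply(user_input))                 # render and display the consequence

run_training_session(Screen3D(), SurgerySite(), ["incision", "suture"])
```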
Abstract:
A three-dimensional (3D) display apparatus is provided for displaying a 3D image. The 3D display apparatus includes a display panel and a grating device coupled to the display panel. The display panel includes a plurality of display pixels arranged in a two-dimensional array, and each pixel includes multiple sub-pixels. The grating device includes a plurality of grating elements based on liquid crystal to guide light associated with the plurality of display pixels into predetermined viewing directions. Further, the grating device is one of a lenticular lens grating and a slit grating, and the plurality of grating elements are arranged in parallel. The plurality of grating elements cover the plurality of display pixels and are tilted at an inclination angle with respect to the display pixels, and each grating element comprises a plurality of electrodes arranged at the inclination angle. Further, a width of each electrode is less than or equal to a width of a sub-pixel, and a spacing between any two adjacent electrodes is less than or equal to the width of a sub-pixel.
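A small sketch checking the stated electrode constraints and the horizontal shift implied by the inclination angle; all numbers are made up for illustration.

```python
# Illustrative sketch only: electrode width/spacing constraints from the abstract
# and the per-row horizontal shift implied by the tilted grating elements.
import math

SUBPIXEL_W = 25.0          # sub-pixel width (um), assumed
ELECTRODE_W = 20.0         # electrode width, must be <= SUBPIXEL_W
ELECTRODE_GAP = 15.0       # spacing between adjacent electrodes, must be <= SUBPIXEL_W
TILT_DEG = 18.0            # inclination angle of the grating elements, assumed
PIXEL_H = 75.0             # pixel height (um), assumed

assert ELECTRODE_W <= SUBPIXEL_W and ELECTRODE_GAP <= SUBPIXEL_W

# Horizontal displacement of an inclined grating element across one pixel row.
shift_per_row = PIXEL_H * math.tan(math.radians(TILT_DEG))
print(f"grating element shifts {shift_per_row:.1f} um per pixel row")
```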
Abstract:
A three-dimensional (3D) display system is provided for displaying a 3D image including a first view image and a second view image to a viewer. The 3D display system includes an arrangement module, a processing module, and a displaying module. The arrangement module is configured to alternately arrange display units of the first view image and display units of the second view image on a display panel. The processing module is configured to obtain an information difference between a display unit of the second view image and the display units of the first view image, and to re-calculate a pixel value of the display unit of the second view image. The displaying module is configured to display to the viewer, via a light separation device, the display unit of the second view image with the re-calculated pixel value.
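A sketch of the alternation and re-calculation steps follows; the averaging/blending rule used to re-compute the second-view units is an assumption chosen only to make the example concrete, not the claimed processing.

```python
# Illustrative sketch only: alternate first/second-view display units by column
# and re-compute each second-view unit from its first-view neighbors.
import numpy as np

def interleave_and_recalculate(view1: np.ndarray, view2: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """view1, view2: (H, W) grayscale views with parallax; returns panel content."""
    v1 = view1.astype(float)
    v2 = view2.astype(float)
    panel = v1.copy()
    # Neighboring first-view units on both sides of each second-view column.
    neighbor_avg = 0.5 * (np.roll(v1, 1, axis=1) + np.roll(v1, -1, axis=1))
    # Information difference of the second-view unit from the first-view units.
    diff = v2 - neighbor_avg
    # Re-calculated second-view values placed on alternating (odd) columns.
    panel[:, 1::2] = (neighbor_avg + weight * diff)[:, 1::2]
    return np.clip(panel, 0, 255).astype(np.uint8)

panel = interleave_and_recalculate(
    np.full((4, 8), 100, dtype=np.uint8), np.full((4, 8), 140, dtype=np.uint8))
```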
Abstract:
A parallax barrier device includes a first electrode, a second electrode, a liquid crystal layer, a polarizer, and a controller. The first electrode includes a plurality of first sub-electrodes, and the second electrode includes a plurality of second sub-electrodes arranged to intersect the plurality of first sub-electrodes. The liquid crystal layer is disposed between the first electrode and the second electrode, and forms respective display windows corresponding to regions formed by the intersections of the first sub-electrodes and the second sub-electrodes. The polarizer is disposed on the first electrode or the second electrode, on a side away from the liquid crystal layer. Further, the controller is coupled to the first electrode and the second electrode and is configured to control voltages on the plurality of first sub-electrodes and the plurality of second sub-electrodes to form a parallax barrier.
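One way the controller's voltage pattern over the crossed sub-electrodes might be represented is sketched below; the on/off pattern and voltage values are assumptions, not the claimed control scheme.

```python
# Illustrative sketch only: a voltage pattern over crossed sub-electrodes
# forming barrier (blocking) and transmissive display windows.
import numpy as np

N_FIRST, N_SECOND = 16, 8          # counts of first/second sub-electrodes, assumed
V_ON, V_OFF = 5.0, 0.0             # drive voltages, assumed

# Each intersection of a first and a second sub-electrode is a display window;
# alternating columns of energized windows act as the parallax barrier (assumed rule).
first_pattern = np.array([V_ON if i % 2 == 0 else V_OFF for i in range(N_FIRST)])
second_pattern = np.full(N_SECOND, V_ON)
windows = np.outer(second_pattern, first_pattern) > 0    # True where the barrier blocks light
print(windows.astype(int))
```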
Abstract:
A method for processing two-dimensional (2D) and three-dimensional (3D) images on a same display area is provided. The method includes receiving image data containing both 2D and 3D images, and creating a plurality of image containers including at least one top-level image container and at least one sub-level image container, where each image container is provided with a display dimension identity and a coverage area identity. The method also includes determining display positions, dimensions, and occlusion relationships of the 2D and 3D images based on the plurality of image containers. Further, the method includes displaying the images in the image containers according to their corresponding display dimension identities and coverage area identities, using the determined display positions, dimensions, and occlusion relationships, where the display dimension identities include a 2D display and a 3D display.
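A data-structure sketch of the image containers described above, with hypothetical field names and a simple z-order rule standing in for the occlusion relationships:

```python
# Illustrative sketch only: a possible in-memory form of the image containers,
# each carrying a display dimension identity and a coverage area identity.
# Field names and the occlusion rule are assumptions.
from dataclasses import dataclass, field

@dataclass
class ImageContainer:
    dimension: str                      # display dimension identity: "2D" or "3D"
    area: tuple[int, int, int, int]     # coverage area identity: (x, y, width, height)
    z_order: int = 0                    # used to resolve occlusion relationships
    children: list["ImageContainer"] = field(default_factory=list)   # sub-level containers

root = ImageContainer(dimension="2D", area=(0, 0, 1920, 1080), z_order=0)
root.children.append(ImageContainer(dimension="3D", area=(200, 100, 800, 600), z_order=1))

# Occlusion: containers are drawn in ascending z-order, so later ones cover earlier ones.
draw_order = sorted([root, *root.children], key=lambda c: c.z_order)
```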