Abstract:
An apparatus is described that includes an integrated two-dimensional image capture and three-dimensional time-of-flight depth capture system. The three-dimensional time-of-flight depth capture system includes an illuminator to generate light. The illuminator includes arrays of light sources, each array dedicated to a different partition within the illuminator's partitioned field of view.
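A minimal sketch of the partitioned-illuminator idea described above: one emitter array per field-of-view partition, so only the region being depth-scanned is lit. All class and method names here are illustrative, not from the patent.

    from dataclasses import dataclass

    @dataclass
    class EmitterArray:
        partition: tuple   # (row, col) of the FOV partition this array covers
        powered: bool = False

    class PartitionedIlluminator:
        def __init__(self, rows: int = 2, cols: int = 2):
            # one dedicated light-source array per field-of-view partition
            self.arrays = {(r, c): EmitterArray((r, c))
                           for r in range(rows) for c in range(cols)}

        def illuminate(self, partition: tuple) -> None:
            """Drive only the array dedicated to the requested partition."""
            for key, array in self.arrays.items():
                array.powered = (key == partition)

    illum = PartitionedIlluminator()
    illum.illuminate((0, 1))   # light only the upper-right partition for TOF capture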
Abstract:
This specification relates to a portable device that controls a capture mode, and to a control method therefor. According to one embodiment, a method of controlling a portable device that controls a capture mode may include detecting, using at least one of a first camera unit and a second camera unit, a first marker spaced a first distance from the portable device and a second marker spaced a second distance from the portable device, and executing a capture mode for capturing an image. Here, when the first distance of the detected first marker and the second distance of the detected second marker are substantially equal, a 2D capture mode is executed, and when the first distance of the detected first marker and the second distance of the detected second marker differ, a 3D capture mode is executed, wherein the 3D capture mode may be a mode that generates a 3D image using the binocular disparity of the first camera unit and the second camera unit.
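A minimal sketch of the mode-selection rule described above, assuming the device can report a distance for each detected marker. The tolerance value and function name are illustrative, not taken from the claims.

    def select_capture_mode(d1: float, d2: float, tol: float = 0.01) -> str:
        """Return '2D' when the two marker distances are substantially equal,
        otherwise '3D' (stereo capture using both camera units)."""
        return "2D" if abs(d1 - d2) <= tol else "3D"

    assert select_capture_mode(1.50, 1.505) == "2D"   # markers equidistant: flat subject
    assert select_capture_mode(1.2, 2.0) == "3D"      # markers at different depths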
Abstract:
Disclosed herein are three-dimensional projection systems and related methods employing two electronically controlled projectors and a retro-reflective screen. The retro-reflective screen produces a known non-linear light reflection pattern when images are projected thereon. Image computational means are used to calculate flat image information for each projector based upon inputted stereopair images and information regarding the projectors and screen. In preferred embodiments of the present invention, the projection system uses an image computational device that employs a neural network feedback calculation to calculate the appropriate flat image information and the appropriate images to be projected on the screen by the projectors at any given time. More than two projectors can be employed to produce multiple aspect views, to support multiple viewers, and the like. In another embodiment, the projection system includes a digital camera that provides feedback data on the output images.
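A minimal sketch of the feedback idea, not the patent's neural-network implementation: iteratively correct a projector's flat image so that the screen's known non-linear reflection reproduces the desired view. The reflect() function below is an assumed placeholder for the screen model; in the preferred embodiment a trained network would supply this relationship.

    import numpy as np

    def reflect(flat_image: np.ndarray) -> np.ndarray:
        # placeholder non-linear reflection model (illustrative gamma-like response)
        return np.clip(flat_image, 0.0, 1.0) ** 1.8

    def solve_flat_image(target: np.ndarray, steps: int = 200, lr: float = 0.5) -> np.ndarray:
        flat = target.copy()
        for _ in range(steps):
            error = reflect(flat) - target    # feedback: observed minus desired
            flat -= lr * error                # simple corrective update
        return flat

    target = np.random.default_rng(0).random((4, 4))
    flat = solve_flat_image(target)
    print(np.abs(reflect(flat) - target).max())   # residual error after feedback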
Abstract:
The 3D video game software (10) is played by a Player and generates a stream of 3D visuals through a game engine that outputs 3D game data. Video games are written using one of several common Application Programming Interfaces (APIs) for handling the rendering and display functions of the game. The 3D game data are output with API function calls to conventional API drivers (12), which render the 3D game data into display image data that are fed to a graphics display card (14) and result in a 2D image displayed on a 2D display monitor (16). The 3D game data output of the video game software (10) are intercepted and redirected to pseudo API drivers (20), which generate right (R) and left (L) stereoscopic image outputs to right and left stereoscopic display cards (22, 24) that generate the resulting 3D stereoscopic display on a 3D display device (26).
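A minimal sketch of the interception idea: draw calls that would go to the conventional driver are redirected to a pseudo driver that renders the same 3D data twice, with left- and right-eye camera offsets. The classes below are illustrative stand-ins, not a real graphics API.

    class MonoDriver:
        def draw(self, scene, camera):
            return f"frame({scene}, cam={camera})"

    class PseudoStereoDriver:
        """Intercepts draw calls and emits a left/right stereoscopic pair."""
        def __init__(self, backend: MonoDriver, eye_separation: float = 0.065):
            self.backend = backend
            self.half = eye_separation / 2.0

        def draw(self, scene, camera: float):
            left = self.backend.draw(scene, camera - self.half)    # left-eye view
            right = self.backend.draw(scene, camera + self.half)   # right-eye view
            return left, right

    driver = PseudoStereoDriver(MonoDriver())
    left, right = driver.draw("level1", camera=0.0)   # one game call, two eye views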
Abstract:
A visual indicator such as a cursor (3) is moved between two or more screens (1, 2) of a multi-layered display system, via an input device. The input device can be a touch screen, where varying the degree of pressure applied to the touch screen determines on which screen the cursor is displayed. The plurality of screens (1, 2) may comprise liquid crystal displays, and provide a three dimensional depth effect.
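A minimal sketch of the pressure-to-screen mapping, assuming the touch screen reports a normalized pressure in [0, 1]: light touches place the cursor on the front screen, harder presses push it to deeper screens. The thresholds are illustrative.

    def screen_for_pressure(pressure: float, num_screens: int = 2) -> int:
        """Map touch pressure to a screen index: 0 = front layer."""
        pressure = min(max(pressure, 0.0), 1.0)
        return min(int(pressure * num_screens), num_screens - 1)

    assert screen_for_pressure(0.2) == 0   # light press: cursor on front screen
    assert screen_for_pressure(0.9) == 1   # firm press: cursor on rear screen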
Abstract:
A multi-level visual display system has a plurality of screens (1, 2) spaced in the depth direction. A user can move a visual indicator such as a cursor (3) between the screens (1, 2), via an input device such as a mouse button. In drawing applications a visual link such as a line can be created between two screens. In game applications a user can move an image both within and between screens (1, 2), by dragging a cursor while moving it between the screens, to provide an illusion of three dimensional movement. The screens (1, 2) may comprise layered liquid crystal displays.
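A minimal sketch of the drag behavior described above: while dragging, an image follows the cursor within its screen and jumps layers when the cursor crosses to another screen, producing the apparent movement in depth. The event model is an assumption for illustration.

    from dataclasses import dataclass

    @dataclass
    class Item:
        x: float
        y: float
        layer: int   # 0 = front screen, 1 = rear screen

    def drag(item: Item, cursor_x: float, cursor_y: float, cursor_layer: int) -> None:
        item.x, item.y = cursor_x, cursor_y    # move within the current screen
        if cursor_layer != item.layer:         # cursor crossed to the other screen
            item.layer = cursor_layer          # item follows, appearing to move in depth

    piece = Item(10, 10, layer=0)
    drag(piece, 40, 25, cursor_layer=1)   # dragged from the front to the rear screen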
Abstract:
A viewer for viewing stereo images, either downloaded over a network such as the Internet or resident on a personal computer, uses a graphical user interface (GUI) to facilitate the display of wireframes, with or without texture applied, in a variety of formats. In stereo mode, the GUI permits adjustment of the neutral plane and of camera offset. The file sizes utilized with the viewer are very small and permit rapid transmission over a network. The files contain wireframe information, texture map information, and animation information.
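A minimal sketch of the two stereo controls the GUI exposes: camera offset (the inter-camera distance) and the neutral plane (the depth rendered with zero parallax). The projection below is a standard off-axis approximation used for illustration, not necessarily the viewer's exact math.

    def screen_x(x: float, z: float, eye: int, offset: float, neutral_z: float) -> float:
        """Project a wireframe vertex for one eye (-1 = left, +1 = right).
        Points at z == neutral_z land at the same x for both eyes."""
        half = eye * offset / 2.0
        return (x - half) * neutral_z / z + half

    xl = screen_x(0.0, 4.0, -1, offset=0.1, neutral_z=2.0)
    xr = screen_x(0.0, 4.0, +1, offset=0.1, neutral_z=2.0)
    print(xr - xl)   # positive parallax: the point appears behind the neutral plane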
Abstract:
Immersive video, or television, images of a real-world scene are synthesized (i) on demand, (ii) in real time, (iii) as linked to a particular perspective on the scene or to an object or event in the scene, (iv) in accordance with user-specified parameters of presentation, including panoramic or magnified presentations, and/or (v) stereoscopically. The synthesis of virtual images is based on computerized video processing -- called "hypermosaicing" -- of multiple live video perspectives on the scene. In hypermosaicing, a knowledge database contains information about the scene, for example scene geometry, shapes and behaviors of objects in the scene, and/or internal and/or external camera calibration models. Multiple video cameras, each at a different spatial location, produce multiple two-dimensional video images of the scene. A viewer/user specifies viewing criteria at a viewer interface. A computer, typically one or more engineering-workstation-class computers or better, includes in software and/or hardware (i) a video data analyzer for detecting and tracking scene objects and their locations, (ii) an environmental model builder that combines multiple scene images to build a 3-D dynamic model recording scene objects and their instantaneous spatial locations, (iii) a viewer criterion interpreter, and (iv) a visualizer for generating from the 3-D model, in accordance with the viewing criteria, one or more particular 2-D video images of the scene. A video display receives and displays the synthesized 2-D video image(s). Notwithstanding being built and maintained through simplifying assumptions, the 3-D dynamic model is powerful, flexible, and useful in permitting diverse scene views.
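A minimal skeleton of the four-stage hypermosaicing pipeline named above: analyze the live feeds, fold them into a 3-D environmental model, interpret the viewer's criteria, and synthesize the requested 2-D view. Every function body is a placeholder, since the abstract defines only the stages, not their internals.

    def analyze(frames):                       # (i) detect and track scene objects
        return [{"id": 0, "pos": (0.0, 0.0, 0.0)} for _ in frames]

    def build_model(objects, knowledge_db):    # (ii) 3-D dynamic model of the scene
        return {"objects": objects, "geometry": knowledge_db.get("geometry")}

    def interpret(criteria):                   # (iii) viewer criteria -> camera request
        return {"pose": criteria.get("perspective", "default"),
                "zoom": criteria.get("zoom", 1.0)}

    def visualize(model, view):                # (iv) render a 2-D image from the model
        return f"2D view of {len(model['objects'])} objects from {view['pose']}"

    knowledge_db = {"geometry": "stadium"}
    frames = ["cam0", "cam1", "cam2"]          # multiple live perspectives on the scene
    model = build_model(analyze(frames), knowledge_db)
    image = visualize(model, interpret({"perspective": "goal-line", "zoom": 2.0}))
    print(image)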