Abstract:
A mobile device is provided, which includes a camera unit, a sensor unit, a see-through display, and a processor. The camera unit captures an image of a finger and a surface. The sensor unit generates a sensor signal in response to a motion of the finger. The image capture and the sensor-signal generation are synchronized. The see-through display displays a GUI on the surface. The processor is coupled to the camera unit, the sensor unit, and the see-through display. The processor uses both the image and the sensor signal to detect a touch of the finger on the surface, and adjusts the GUI in response to the touch.
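The fusion described above can be sketched as a simple two-channel agreement test: a touch is registered only when the synchronized image analysis and sensor analysis both indicate contact, which reduces false positives from either channel alone. This is a minimal illustration, not the patented method; the function names, thresholds, and input features (fingertip-to-surface distance in pixels, an acceleration spike magnitude) are all hypothetical.

```python
def finger_near_surface(fingertip_to_surface_px: float, threshold_px: float = 3.0) -> bool:
    """Image channel: the fingertip appears within a few pixels of the surface.
    The threshold is an illustrative assumption, not a value from the source."""
    return fingertip_to_surface_px <= threshold_px

def motion_indicates_tap(accel_spike: float, threshold: float = 2.5) -> bool:
    """Sensor channel: a deceleration spike consistent with a fingertip landing."""
    return accel_spike >= threshold

def detect_touch(fingertip_to_surface_px: float, accel_spike: float) -> bool:
    # Both synchronized channels must agree before the GUI is adjusted.
    return finger_near_surface(fingertip_to_surface_px) and motion_indicates_tap(accel_spike)
```

For example, a fingertip 2 px from the surface with a strong spike registers a touch, while either channel alone does not.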
Abstract:
A device for acquiring a depth image, and a calibrating method and a measuring method therefor, are provided. The device includes at least one projecting device, at least one image sensing device, a mechanism device, and a processing unit. The projecting device projects a projection pattern onto a measured object. The image sensing device is controlled to adjust a focal length and a focus position, and thereby senses real images. The mechanism device adjusts a location and/or a convergence angle of the image sensing device. The processing unit calibrates the at least one image sensing device and generates a three-dimensional (3D) measuring parameter set at a model focal length according to a plurality of image setting parameter reference sets corresponding to the model focal length and a plurality of default node distances, respectively, and then estimates a depth map or depth information of the measured object.
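One way to read the calibration step above is that a parameter set for an arbitrary working distance is derived from reference sets calibrated at the default node distances, after which depth follows from standard triangulation. The sketch below assumes linear interpolation between bracketing node distances and the classic relation depth = f·b/d; both choices are illustrative assumptions, not details from the source.

```python
def interpolate_parameter_set(node_distances, parameter_sets, distance):
    """Linearly interpolate each calibration parameter between the two
    reference sets whose node distances bracket the requested distance."""
    for i in range(len(node_distances) - 1):
        d0, d1 = node_distances[i], node_distances[i + 1]
        if d0 <= distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return {k: (1 - t) * parameter_sets[i][k] + t * parameter_sets[i + 1][k]
                    for k in parameter_sets[i]}
    raise ValueError("distance outside calibrated range")

def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Classic triangulation: depth = focal length * baseline / disparity."""
    return focal_length_px * baseline_mm / disparity_px
```

With reference sets at node distances of 1.0 and 2.0, a query at 1.5 yields the midpoint of each parameter, and the interpolated focal length then feeds the triangulation step.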
Abstract:
An image synthesis method for a virtual object and an apparatus thereof are provided. The image synthesis method comprises: providing a first depth image of a scene and a first two-dimensional image of the scene; providing a second depth image of the virtual object; adjusting a second depth value of the virtual object in the first depth image according to an objective location in the first depth image and a reference point of the second depth image; rendering a second two-dimensional image of the virtual object; and synthesizing the first two-dimensional image and the second two-dimensional image according to a lighting direction of the first two-dimensional image, the adjusted second depth value, and the objective location in the first depth image.
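The depth-adjustment and synthesis steps above can be sketched as shifting the virtual object's depth samples so its reference point lands at the objective location's depth, then compositing with a per-pixel depth test. This is a simplified sketch over flat pixel lists; the lighting-direction handling is omitted, and all function names are hypothetical.

```python
def adjust_depth(virtual_depth, target_depth, reference_depth):
    """Shift every depth sample of the virtual object so that its reference
    point coincides with the objective location's depth in the scene."""
    offset = target_depth - reference_depth
    return [d + offset for d in virtual_depth]

def composite(scene_rgb, scene_depth, virt_rgb, virt_depth):
    """Per-pixel depth test: keep the virtual pixel only where it is nearer
    to the camera than the scene (None marks pixels the object does not cover)."""
    out = []
    for sc, sd, vc, vd in zip(scene_rgb, scene_depth, virt_rgb, virt_depth):
        out.append(vc if vd is not None and vd < sd else sc)
    return out
```

The depth test is what lets scene geometry correctly occlude the inserted object rather than the object always being pasted on top.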
Abstract:
An optical see-through head-mounted display (HMD) system is provided. The system includes a camera for generating image frames, a display device, and a processor. The processor performs an interactive operation on each image frame. In the interactive operation, an image analysis is performed on the image frame to obtain positioning information of a marker and three-dimensional information of an input device. According to the positioning information, the three-dimensional information, and an eye position of a user, an image shielding process is performed to correct the portion of the frame to be displayed that corresponds to the input device, and a collision test is performed according to the positioning information and the three-dimensional information of the input device to determine whether the input device touches the virtual image displayed by the HMD. An event corresponding to the touch position on the virtual image is then executed.
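The collision test described above can be sketched as checking whether the input-device tip lies on the plane of the virtual image (within a depth tolerance) and inside its bounds, returning the 2D touch position for event dispatch. The axis-aligned panel model, the tolerance value, and the coordinate convention are all illustrative assumptions.

```python
def collision_test(tip_xyz, panel_origin, panel_size, depth_tolerance=5.0):
    """Return the 2D touch position on the virtual panel if the input-device
    tip lies on the panel plane (within the depth tolerance) and inside its
    bounds; return None when there is no touch."""
    x, y, z = tip_xyz
    ox, oy, oz = panel_origin
    w, h = panel_size
    if abs(z - oz) > depth_tolerance:
        return None  # tip is in front of or behind the virtual image plane
    if ox <= x <= ox + w and oy <= y <= oy + h:
        return (x - ox, y - oy)  # panel-local touch coordinates
    return None
```

The returned panel-local coordinates would then select which event (e.g. a virtual button press) to execute.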