Abstract:
A three-dimensional (3D) sensing method and an apparatus thereof are provided. The 3D sensing method includes the following steps. A resolution scaling process is performed on a first pending image and a second pending image so as to produce a first scaled image and a second scaled image. A full-scene 3D measurement is performed on the first and second scaled images so as to obtain a full-scene depth image. The full-scene depth image is analyzed to set a first region of interest (ROI) and a second ROI. A first ROI image and a second ROI image are obtained according to the first and second ROIs. Then, a partial-scene 3D measurement is performed on the first and second ROI images accordingly, such that a partial-scene depth image is produced.
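A minimal sketch of this coarse-to-fine flow, assuming a rectified stereo pair and using OpenCV's semi-global block matching as a stand-in for the unspecified 3D measurement; the ROI policy (bounding box of the nearest region) is a hypothetical choice for illustration:

```python
import cv2
import numpy as np

def coarse_to_fine_depth(left, right, scale=0.25, num_disp=64):
    """Full-scene measurement at low resolution, then a partial-scene
    measurement over an ROI cropped from the full-resolution pair."""
    # Resolution scaling: produce the first and second scaled images.
    small_l = cv2.resize(left, None, fx=scale, fy=scale)
    small_r = cv2.resize(right, None, fx=scale, fy=scale)

    # Full-scene 3D measurement (disparity serves as inverse depth here).
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disp, blockSize=5)
    full_disp = matcher.compute(small_l, small_r).astype(np.float32) / 16.0

    # Analyze the full-scene result to set an ROI: bounding box of the
    # nearest (largest-disparity) pixels, mapped back to full resolution.
    mask = full_disp > np.percentile(full_disp[full_disp > 0], 90)
    ys, xs = np.nonzero(mask)
    x0, x1 = int(xs.min() / scale), int(xs.max() / scale)
    y0, y1 = int(ys.min() / scale), int(ys.max() / scale)

    # Partial-scene 3D measurement on the full-resolution ROI images.
    # (A real system would also rescale the disparity search range.)
    roi_disp = matcher.compute(left[y0:y1, x0:x1],
                               right[y0:y1, x0:x1]).astype(np.float32) / 16.0
    return full_disp, roi_disp, (x0, y0, x1, y1)
```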
Abstract:
A virtual image display apparatus configured to be in front of at least one eye of a user includes an image display unit, a first beam splitting unit, and a reflection-refraction unit. The image display unit provides an image beam. The first beam splitting unit disposed on transmission paths of the image beam and an object beam causes at least one portion of the object beam to propagate to the eye and causes at least one portion of the image beam to propagate to the reflection-refraction unit. The reflection-refraction unit includes a lens portion and a reflecting portion on a first curved surface of the lens portion. At least part of the image beam travels through the lens portion, is reflected by the reflecting portion, travels through the lens portion again, and is propagated to the eye by the first beam splitting unit, in sequence.
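The abstract leaves the optics unspecified, but the double pass through the lens portion with one reflection in between admits a standard thin-element estimate (a Mangin-mirror-style approximation, not the patent's actual design): neglecting element spacings,

```latex
\phi_{\mathrm{eff}} \approx 2\,\phi_{\mathrm{lens}} + \phi_{\mathrm{mirror}},
\qquad \phi_{\mathrm{mirror}} = \frac{2}{R},
```

where \phi_{\mathrm{lens}} is the power of the lens portion and R is the radius of curvature of the first curved surface carrying the reflecting portion.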
Abstract:
A 3D model construction device includes a camera and a wearable display coupled to the camera. The camera obtains multiple first frames, a second frame, and depth information. The wearable display includes a display unit, a processing unit, a storage unit, and a projection unit. The storage unit stores a first module and a second module. When the first module is executed by the processing unit, the processing unit calculates a first pose of the wearable display. When the second module is executed by the processing unit, the processing unit calculates a 3D model according to the first frames, the depth information, the first pose, and calibration parameters, and updates the 3D model according to the second frame. The projection unit projects the 3D model and the second frame onto the display unit according to the first pose, so that they are displayed with a real image on the display unit.
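A minimal sketch of the model-construction step, assuming pinhole calibration parameters and using point-cloud accumulation as a stand-in for whatever fusion the device actually performs; backproject and update_model are hypothetical names:

```python
import numpy as np

def backproject(depth, intrinsics):
    """Lift a depth map (H x W, metres) to camera-space points using
    pinhole intrinsics fx, fy, cx, cy (the calibration parameters)."""
    fx, fy, cx, cy = intrinsics
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]          # keep valid depth only

def update_model(model_pts, depth, pose, intrinsics):
    """Fuse one depth frame into the accumulated point-cloud model.
    `pose` is a 4x4 camera-to-world matrix (the wearable display's first
    pose); a real system would use TSDF fusion rather than concatenation."""
    pts = backproject(depth, intrinsics)
    pts_w = (pose[:3, :3] @ pts.T).T + pose[:3, 3]   # camera -> world
    return np.vstack([model_pts, pts_w]) if model_pts.size else pts_w
```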
Abstract:
An insole design method and an insole design system are provided, and the method includes: capturing an uncompressed free foot model with a depth camera to obtain a free foot model three-dimensional image; capturing a pressed foot model stepping on a transparent pedal with the depth camera to obtain a pressed foot model three-dimensional image; aligning the free foot model three-dimensional image with the pressed foot model three-dimensional image; calculating a plantar deformation quantity according to the aligned free foot model three-dimensional image and the aligned pressed foot model three-dimensional image; and completing the insole design according to the plantar deformation quantity and either a sole projection plane or a three-dimensional profile of the sole.
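A minimal sketch of the deformation step, assuming the two scans have already been rigidly registered (e.g. by ICP upstream) and are given as N x 3 point arrays; plantar_deformation is a hypothetical name:

```python
import numpy as np
from scipy.spatial import cKDTree

def plantar_deformation(free_pts, pressed_pts):
    """Per-point plantar deformation between the aligned free-foot and
    pressed-foot scans: for every point on the free model, the distance
    to its nearest neighbour on the pressed model. The rigid alignment
    itself is assumed done before this step."""
    tree = cKDTree(pressed_pts)            # index the pressed-foot scan
    dist, _ = tree.query(free_pts)         # nearest-neighbour distances
    return dist                            # deformation quantity per point
```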
Abstract:
An image synthesis method of a virtual object and an apparatus thereof are provided. The image synthesis method of the virtual object comprises: providing a first depth image of a scene and a first two-dimensional image of the scene; providing a second depth image of the virtual object; adjusting a second depth value of the virtual object in the first depth image according to an objective location in the first depth image and a reference point of the second depth image; rendering a second two-dimensional image of the virtual object; and synthesizing the first two-dimensional image and the second two-dimensional image according to a lighting direction of the first two-dimensional image, the adjusted second depth value, and the objective location in the first depth image.
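A minimal sketch of the depth adjustment and occlusion-aware synthesis, assuming the object has been rendered with scene-consistent lighting upstream; composite and its parameters are hypothetical names:

```python
import numpy as np

def composite(scene_rgb, scene_depth, obj_rgb, obj_depth, obj_mask,
              target_depth, ref_depth):
    """Shift the virtual object's depth so its reference point lands at the
    objective location's depth, then keep each pixel from whichever layer
    is nearer. Lighting-consistent shading is assumed done upstream."""
    adjusted = obj_depth + (target_depth - ref_depth)  # adjusted 2nd depth
    front = obj_mask & (adjusted < scene_depth)        # object occludes scene
    out = scene_rgb.copy()
    out[front] = obj_rgb[front]
    return out
```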
Abstract:
A ranging apparatus including an image sensor, an imaging lens, and a processor is provided. The imaging lens is configured to image an object on the image sensor to produce an image signal having at least one image parameter, wherein the at least one image parameter changes with a change of an object distance of the object. The processor is configured to determine the change of the object distance according to a change of the at least one image parameter. A ranging method and an interactive display system are also provided.
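A minimal sketch of the ranging principle, assuming a monotonic calibration curve between the image parameter (blur radius is one possible parameter; the abstract does not name it) and object distance; the numbers are illustrative only:

```python
import numpy as np

# Hypothetical calibration: the measured image parameter (e.g. blur radius
# in pixels) at known object distances (mm). Any monotonic parameter-vs-
# distance relationship works the same way.
calib_distance = np.array([200., 400., 600., 800., 1000.])
calib_param    = np.array([9.5,  5.1,  3.4,  2.6,  2.1])

def estimate_distance(param):
    """Invert the calibration curve: map an observed image parameter back
    to an object distance by interpolation (curve must be monotonic)."""
    # np.interp needs ascending x, so flip the descending parameter axis.
    return np.interp(param, calib_param[::-1], calib_distance[::-1])
```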
Abstract:
The disclosure provides a stereo display system including a stereo display, a depth detector, and a computing processor. The stereo display displays a left eye image and a right eye image, such that a left eye and a right eye of a viewer perceive a parallax to view a stereo image. The depth detector captures depth data of a three-dimensional space. The computing processor controls image display of the stereo display. The computing processor analyzes the eye positions of the viewer according to the depth data, and when the viewer moves horizontally, vertically, or obliquely in the three-dimensional space relative to the stereo display, the computing processor adjusts the left eye image and the right eye image based on variations of the eye positions. Furthermore, an image interaction system, a method for detecting finger position, and a control method of a stereo display are also provided.
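A minimal sketch of the view adjustment, using a global image shift proportional to the tracked eye displacement; real systems would warp or re-render per eye, and adjust_stereo_pair, ref_pos, and px_per_mm are hypothetical:

```python
import numpy as np

def adjust_stereo_pair(left, right, eye_pos, ref_pos, px_per_mm=2.0):
    """Translate each eye image opposite to the viewer's head motion so the
    stereo image appears stable as the viewer moves. A global shift is a
    crude stand-in for per-eye re-rendering."""
    dx, dy = (np.asarray(eye_pos[:2]) - np.asarray(ref_pos[:2])) * px_per_mm
    def shift(img, sx, sy):
        # np.roll wraps at the borders; acceptable for a sketch.
        return np.roll(np.roll(img, int(-sy), axis=0), int(-sx), axis=1)
    return shift(left, dx, dy), shift(right, dx, dy)
```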
Abstract:
An optical see-through head-mounted display (HMD) system is provided. The optical see-through HMD system has a camera for generating image frames, a display device, and a processor. The processor performs an interactive operation on each image frame. In the interactive operation, an image analysis is performed on the image frame to obtain positioning information of a marker and three-dimensional information of an input device. According to the positioning information, the three-dimensional information, and an eye position of a user, an image shielding process is performed to correct a portion of the frame to be displayed that corresponds to the input device, and a collision test is performed according to the positioning information and the three-dimensional information of the input device to determine whether the input device touches the virtual image displayed by the HMD. Then, an event corresponding to the touch position on the virtual image is executed.
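Two minimal sketches of the interactive operation's geometric parts, assuming the input device's tip has been tracked to a 3D point and the virtual image occupies an axis-aligned box; all names are hypothetical, and the shielding uses an alpha mask rather than the patent's unspecified correction:

```python
import numpy as np

def collision_test(tip_pos, obj_min, obj_max):
    """Axis-aligned bounding-box test standing in for the collision test:
    does the tracked tip of the input device touch the virtual image
    region defined by corners obj_min and obj_max?"""
    tip = np.asarray(tip_pos)
    return bool(np.all(tip >= obj_min) and np.all(tip <= obj_max))

def shield(virtual_rgba, device_mask):
    """Image shielding sketch: zero the virtual image's alpha where the
    real input device lies in front of it, so the virtual content is not
    drawn over the user's hand or stylus."""
    out = virtual_rgba.copy()
    out[device_mask, 3] = 0                # make occluded pixels transparent
    return out
```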