Abstract:
An optical see-through head-mounted display (HMD) system is provided. The optical see-through HMD system has a camera for generating image frames, a display device, and a processor. The processor performs an interactive operation on each image frame. In the interactive operation, an image analysis is performed on the image frame to obtain positioning information of a marker and 3-dimensional information of an input device. According to the positioning information, the 3-dimensional information, and the eye position of a user, an image shielding process is performed to correct the portion of the frame to be displayed that corresponds to the input device. A collision test is performed according to the positioning information and the 3-dimensional information of the input device to determine whether the input device touches the virtual image displayed by the HMD. Then, an event corresponding to the touch position on the virtual image is executed.
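The collision test described above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes the virtual image is a planar rectangle anchored at the marker, and all function and parameter names (`collision_test`, `marker_pose`, `fingertip_xyz`) are hypothetical.

```python
import numpy as np

def collision_test(marker_pose, fingertip_xyz, plane_size, threshold=0.01):
    """Check whether the input device (here a fingertip) touches the
    virtual image plane anchored at the marker.

    marker_pose: 4x4 homogeneous transform of the marker in camera space.
    fingertip_xyz: fingertip position in camera space (3-vector).
    plane_size: (width, height) of the virtual image in metres.
    threshold: maximum distance to the plane counted as a touch.
    Returns (touched, (x, y)) with the touch position in marker space.
    """
    # Transform the fingertip into the marker's coordinate frame.
    inv_pose = np.linalg.inv(marker_pose)
    p = inv_pose @ np.append(fingertip_xyz, 1.0)
    x, y, z = p[:3]
    w, h = plane_size
    # Touch if the point lies near the plane (z ~ 0) and inside its bounds.
    touched = abs(z) <= threshold and 0.0 <= x <= w and 0.0 <= y <= h
    return touched, (x, y)
```

The returned touch position `(x, y)` is what the system would use to dispatch the event corresponding to the touched region of the virtual image.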
Abstract:
A virtual image display apparatus configured to be in front of at least one eye of a user includes an image display unit, a first beam splitting unit, and a reflection-refraction unit. The image display unit provides an image beam. The first beam splitting unit disposed on transmission paths of the image beam and an object beam causes at least one portion of the object beam to propagate to the eye and causes at least one portion of the image beam to propagate to the reflection-refraction unit. The reflection-refraction unit includes a lens portion and a reflecting portion on a first curved surface of the lens portion. At least part of the image beam travels through the lens portion, is reflected by the reflecting portion, travels through the lens portion again, and is propagated to the eye by the first beam splitting unit, in sequence.
Abstract:
A ranging apparatus including an image sensor, an imaging lens, and a processor is provided. The imaging lens is configured to image an object on the image sensor to produce an image signal having at least one image parameter, wherein the at least one image parameter changes with a change of an object distance of the object. The processor is configured to determine the change of the object distance according to a change of the at least one image parameter. A ranging method and an interactive display system are also provided.
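The abstract does not name the image parameter it tracks; a common choice for a single-lens ranging setup is image sharpness, which varies with defocus as the object distance changes (depth from defocus). The sketch below uses that assumption; the focus measure, the linear `gain` calibration, and all names are illustrative, not the patented design.

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian as a simple focus measure.
    img: 2-D numpy array of grayscale intensities."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def estimate_distance_change(sharp_prev, sharp_curr, gain=1.0):
    """Map a change in the focus measure to a change in object distance.
    A real system would calibrate this mapping per lens; a linear gain
    is only a placeholder."""
    return gain * (sharp_curr - sharp_prev)
```

An in-focus, high-contrast object yields a larger focus measure than a defocused (smoothed) one, so the sign of the change indicates whether the object moved toward or away from the focal plane.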
Abstract:
An information querying method is provided. A query image is received via a login gateway among a plurality of serving gateways. At least one query description of the query image is generated. At least one first candidate object is sifted from a plurality of candidate objects according to a location of the login gateway and locations of the candidate objects, wherein a plurality of candidate descriptions of each of the candidate objects are recorded in a database. The at least one query description of the query image is compared with the candidate descriptions of the at least one first candidate object, so as to obtain a target object from the candidate objects according to a similarity level between the at least one query description and the candidate descriptions of the at least one first candidate object.
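The two-stage flow above (sift by gateway location, then rank by description similarity) can be sketched as follows. The Jaccard similarity over description sets is an illustrative stand-in; the abstract does not fix a similarity measure, and all names and the `radius` cutoff are assumptions.

```python
from math import hypot

def find_target(query_descs, login_location, candidates, radius=1.0):
    """Sift candidates near the login gateway, then return the one whose
    recorded descriptions best match the query descriptions.

    candidates: list of dicts with 'location' (x, y) and 'descs' (set of
    description strings). Returns the best candidate dict, or None.
    """
    qx, qy = login_location
    # Stage 1: keep only candidates near the login gateway.
    near = [c for c in candidates
            if hypot(c["location"][0] - qx, c["location"][1] - qy) <= radius]

    # Stage 2: rank survivors by Jaccard similarity of description sets.
    def similarity(c):
        q, d = set(query_descs), set(c["descs"])
        return len(q & d) / len(q | d) if q | d else 0.0

    return max(near, key=similarity, default=None)
```

Sifting by location first keeps the expensive description comparison restricted to objects that could plausibly appear in an image taken near that gateway.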
Abstract:
An object recognition method and an object recognition apparatus using the same are provided. In one or more embodiments, a real-time image including a first object is acquired, and a chamfer distance transform is performed on the first object of the real-time image to produce a chamfer image including a first modified object. Preset image templates each including a second object are acquired, and the chamfer distance transform is performed on the second object of each preset image template to produce a chamfer template including a second modified object. When the difference between the first modified object and the second modified object is less than a first preset error threshold, the object recognition apparatus may operate according to a control command corresponding to the preset image template.
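The chamfer distance transform named above can be sketched with the classic two-pass 3-4 chamfer algorithm, plus a simple matching score. This is a textbook version for illustration; the thresholds, weights, and function names are not taken from the patent.

```python
INF = 10 ** 9

def chamfer_dt(binary):
    """Two-pass 3-4 chamfer distance transform of a binary edge map
    (1 = edge pixel). Returns per-pixel distance to the nearest edge."""
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate distances from top-left.
    for y in range(h):
        for x in range(w):
            best = d[y][x]
            if x > 0:
                best = min(best, d[y][x - 1] + 3)
            if y > 0:
                best = min(best, d[y - 1][x] + 3)
                if x > 0:
                    best = min(best, d[y - 1][x - 1] + 4)
                if x < w - 1:
                    best = min(best, d[y - 1][x + 1] + 4)
            d[y][x] = best
    # Backward pass: propagate distances from bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            best = d[y][x]
            if x < w - 1:
                best = min(best, d[y][x + 1] + 3)
            if y < h - 1:
                best = min(best, d[y + 1][x] + 3)
                if x < w - 1:
                    best = min(best, d[y + 1][x + 1] + 4)
                if x > 0:
                    best = min(best, d[y + 1][x - 1] + 4)
            d[y][x] = best
    return d

def chamfer_score(dt, template_edges):
    """Average distance of template edge pixels under the image's DT;
    lower means the template matches the image edges better."""
    pts = [(y, x) for y, row in enumerate(template_edges)
           for x, v in enumerate(row) if v]
    return sum(dt[y][x] for y, x in pts) / len(pts)
```

Comparing `chamfer_score` against a preset error threshold, as the abstract describes, decides whether the real-time object matches a given template and hence which control command to issue.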
Abstract:
A mobile device is provided, which includes a camera unit, a sensor unit, a see-through display, and a processor. The camera unit takes an image of a finger and a surface. The sensor unit generates a sensor signal in response to a motion of the finger. The taking of the image and the generation of the sensor signal are synchronous. The see-through display displays a GUI on the surface. The processor is coupled to the camera unit, the sensor unit, and the see-through display. The processor uses both of the image and the sensor signal to detect a touch of the finger on the surface. The processor adjusts the GUI in response to the touch.
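The fusion of the two synchronized signals can be sketched as below. The abstract only states that the image and the sensor signal are used together to detect the touch; treating the vision cue as a boolean and the motion cue as an acceleration spike, with the threshold shown, is purely an illustrative assumption.

```python
def detect_touch(frames, spike_threshold=2.0):
    """Detect a finger touch by fusing synchronized camera and motion cues.

    frames: list of (finger_near_surface, accel) pairs, one per
    synchronized camera frame / sensor sample, where finger_near_surface
    is a bool from image analysis and accel is the sensed acceleration.
    Returns the index of the first frame where the finger appears to be
    on the surface AND the sensor shows a landing spike, or -1 if none.
    """
    for i, (near, accel) in enumerate(frames):
        if near and abs(accel) >= spike_threshold:
            return i
    return -1
```

Requiring both cues suppresses false touches: hovering produces the vision cue without the spike, and hand jitter produces spikes without the vision cue.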