Abstract:
A touch-control system is provided. The touch-control system includes: at least two image capturing units, configured to capture a plurality of hand images of a user; and an electronic device, coupled to the image capturing units, configured to recognize a target object from the hand images, and detect motions of the target object in an operating space, wherein the electronic device includes a display unit, and the operating space includes a virtual touch-control plane, wherein when the target object touches a virtual touch-control point on the virtual touch-control plane, the electronic device generates a touch-control signal and performs an associated touch-control operation at a position corresponding to the virtual touch-control point on the display unit.
Abstract:
An input device applied to an electronic device is provided, including a keyboard, an image-detection module, and a processing device. A selection-operating area, a gesture-operating area, and a key-in area are defined above the keyboard. The image-detection module is embedded in the keyboard to detect the location of the user's hand. The processing device informs the electronic device to operate in the selection-operating mode, the gesture-operating mode, or the key-in mode, according to the location of the hand detected by the image-detection module.
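The mode decision described above can be sketched as a simple zone classification. This is a hypothetical illustration: the three areas are modeled as x-coordinate ranges, and the boundary values are assumptions not given in the abstract.

```python
# Hypothetical sketch: the three areas above the keyboard are modeled as
# x-coordinate ranges; the boundary values are assumed for illustration only.
SELECTION_AREA = (0, 100)
GESTURE_AREA = (100, 200)
KEY_IN_AREA = (200, 300)

def operating_mode(hand_x):
    """Map the detected hand location to one of the three operating modes."""
    if SELECTION_AREA[0] <= hand_x < SELECTION_AREA[1]:
        return "selection-operating"
    if GESTURE_AREA[0] <= hand_x < GESTURE_AREA[1]:
        return "gesture-operating"
    return "key-in"
```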
Abstract:
An input device is equipped with a cursor pointing unit including a mechanical control stick and an optical sensor. The optical sensor is mounted on the mechanical control stick and has a contact surface for sensing object motion thereon. A cursor signal is generated when the mechanical control stick is inclined by an exerted pressure. When the mechanical control stick is not inclined, a position frame of the object on the contact surface is retrieved as a reference frame. A real-time position frame of the object is retrieved while the object remains on the contact surface. A speed of the object is calculated according to the reference frame and the real-time position frame. When the speed does not exceed a threshold, a cursor-movement signal is generated according to the speed; and when the speed exceeds the threshold, a switch signal is generated to initiate a gesture-controlling mode.
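The speed test described above can be sketched as follows. This is a minimal illustration, not the patented implementation: positions are (x, y) pixel coordinates of the tracked object between the reference frame and the real-time frame, and the threshold value is an assumption.

```python
# Minimal sketch of the speed-threshold decision; the threshold is assumed.
import math

SPEED_THRESHOLD = 50.0  # pixels per second (assumed value)

def object_speed(ref_pos, cur_pos, dt):
    """Speed of the object between the reference frame and the real-time frame."""
    dx = cur_pos[0] - ref_pos[0]
    dy = cur_pos[1] - ref_pos[1]
    return math.hypot(dx, dy) / dt

def handle_motion(ref_pos, cur_pos, dt):
    """Return a cursor move below the threshold, a gesture-mode switch above it."""
    speed = object_speed(ref_pos, cur_pos, dt)
    if speed <= SPEED_THRESHOLD:
        return ("cursor_move", speed)
    return ("gesture_mode", speed)
```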
Abstract:
A method of removing raindrops from video images is provided. The method includes the steps of: training a raindrop image recognition model using a plurality of raindrop training images labeled in a plurality of rainy-scene images; recognizing a plurality of raindrop images from a plurality of scene images in a video sequence using the raindrop image recognition model; and in response to a specific raindrop image in a current scene image satisfying a predetermined condition, replacing the specific raindrop image in the current scene image with an image region corresponding to the specific raindrop image in a specific scene image prior to the current scene image to generate an output scene image.
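The replacement step can be sketched as copying co-located pixels from an earlier frame. This is a hedged illustration, assuming frames are NumPy arrays and raindrop detections are bounding boxes; the recognition model and the predetermined condition are left as placeholders.

```python
# Sketch of the raindrop-replacement step; frames are numpy arrays,
# detections are (x, y, w, h) boxes, and `condition` stands in for the
# predetermined condition, which the abstract does not specify.
import numpy as np

def remove_raindrops(current, previous, raindrop_boxes, condition):
    """Replace each detected raindrop region that satisfies `condition`
    with the co-located region from a prior scene image."""
    output = current.copy()
    for (x, y, w, h) in raindrop_boxes:
        if condition(current[y:y + h, x:x + w]):
            output[y:y + h, x:x + w] = previous[y:y + h, x:x + w]
    return output
```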
Abstract:
An image processing apparatus including a wide-angle camera, an auxiliary camera, and a controller is provided. The wide-angle camera has a first Field Of View (FOV), and captures a first image of a first area of a scene. The auxiliary camera has a second FOV which is narrower than the first FOV, and captures a second image of a second area of the scene. In particular, the wide-angle camera and the auxiliary camera are disposed on the same surface of the image processing apparatus, and synchronized to capture the first image and the second image, respectively. The controller determines a portion of the first image, which corresponds to the second area of the scene, and superimposes the second image on the portion of the first image to generate an enhanced image.
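The superimposition step can be sketched as a region copy. This is a minimal illustration, assuming images are NumPy arrays and that the offset locating the second area within the first image comes from calibration or registration, which the abstract does not detail.

```python
# Sketch of superimposing the narrow-FOV image onto the wide-FOV image;
# `top_left` (row, col) is assumed to come from prior camera registration.
import numpy as np

def enhance(wide_img, aux_img, top_left):
    """Paste the second (narrow-FOV) image onto the portion of the first
    (wide-FOV) image that corresponds to the second area of the scene."""
    output = wide_img.copy()
    r, c = top_left
    h, w = aux_img.shape[:2]
    output[r:r + h, c:c + w] = aux_img
    return output
```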
Abstract:
An optical touch control system is disclosed, including a display panel, first and second optical sensors, first and second light-emitting devices, and a controller. The first and second optical sensors are respectively disposed at opposite corners of the display panel. The first and second light-emitting devices are disposed on the first and second optical sensors, respectively. The controller turns off the first and second light-emitting devices and turns on the first optical sensor to obtain a first frame, only turns on the second light-emitting device and the first optical sensor to obtain a second frame, turns off the first and second light-emitting devices and turns on the second optical sensor to obtain a third frame, only turns on the first light-emitting device and the second optical sensor to obtain a fourth frame, and determines a gesture according to the first through fourth frames.
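The four-frame sequence can be written out as a capture schedule. This is a hedged sketch of the sequencing only; the device callbacks and the gesture classifier are assumed placeholders, not part of the abstract.

```python
# Sketch of the four-frame capture schedule; each entry is
# (LED1 on?, LED2 on?, active sensor). Device callbacks are placeholders.
CAPTURE_SCHEDULE = [
    (False, False, "sensor1"),  # frame 1: both LEDs off, sensor 1 reads ambient
    (False, True,  "sensor1"),  # frame 2: only LED 2 on, sensor 1 reads
    (False, False, "sensor2"),  # frame 3: both LEDs off, sensor 2 reads ambient
    (True,  False, "sensor2"),  # frame 4: only LED 1 on, sensor 2 reads
]

def capture_frames(set_led1, set_led2, read_sensor):
    """Run the schedule and collect the four frames for gesture determination."""
    frames = []
    for led1, led2, sensor in CAPTURE_SCHEDULE:
        set_led1(led1)
        set_led2(led2)
        frames.append(read_sensor(sensor))
    return frames
```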
Abstract:
A touch control system is provided and connected to a display surface, including a stylus and at least two sensing devices. The stylus includes a housing, a first opening, a second opening, a contacting member, and a reflecting member. The housing has a first end and a second end opposite to the first end. The first opening is formed at the first end. The second opening is formed between the first end and the second end. The contacting member protrudes from the first opening. The reflecting member is disposed in the housing, situated between the first opening and the second opening, and connected to the contacting member. When the contacting member contacts the display surface and moves along a direction from the first opening toward the second opening, the reflecting member is exposed to the second opening and reflects a light signal emitted from the sensing devices.
Abstract:
The invention provides an optical touch system including a camera, an active light source, and a processor. The camera has a lens and an image sensor to capture an image of a touch object on the image sensor through the lens. The active light source lights the touch object. The processor determines the distance between the touch object and the camera according to the size or the brightness of the image on the image sensor, determines the direction of the touch object according to the position of the image on the image sensor, and calculates the position of the touch object.
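One way the size-based ranging could work is sketched below. This is an assumption-laden illustration, not the patented method: it assumes distance is inversely proportional to apparent image size (calibrated by a reference measurement) and that direction follows from the image position on the sensor via a pinhole model.

```python
# Hypothetical sketch: size-based ranging plus position-based direction.
# All parameter names and the pinhole/inverse-size model are assumptions.
import math

def touch_position(image_size, image_pos, focal_len, ref_size, ref_dist):
    """Estimate the touch object's position relative to the camera.

    image_size: apparent size of the object's image on the sensor
    image_pos:  offset of the image from the sensor center
    focal_len:  lens focal length (same units as image_pos)
    ref_size/ref_dist: one calibrated size-at-distance measurement
    """
    distance = ref_dist * ref_size / image_size   # inverse-size ranging
    angle = math.atan2(image_pos, focal_len)      # direction from position
    return (distance * math.cos(angle), distance * math.sin(angle))
```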
Abstract:
An image capturing device used to capture an image of an object is provided. The image capturing device includes a camera, a first light source, and a controller. The first light source is disposed adjacent to the camera. The controller is electrically connected to the camera and the first light source and is configured to: control the camera to capture a first picture of the object under the condition that the first light source does not emit any light; control the camera to capture a second picture of the object under the condition that the first light source emits a light; subtract the first picture from the second picture to filter out the background brightness of the second picture to obtain a first filtered picture; and analyze the first filtered picture to obtain a probability of the object matching a certain state.
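The subtraction step can be sketched as follows. This is a minimal illustration assuming 8-bit images stored as NumPy arrays; the clamp-at-zero behavior is an assumption, and the downstream state analysis is out of scope.

```python
# Sketch of the background-subtraction step: subtract the unlit (first)
# picture from the lit (second) picture, clamping at zero. 8-bit images
# and the clamping behavior are assumptions.
import numpy as np

def filtered_picture(lit, unlit):
    """Suppress background brightness by subtracting the unlit picture
    from the lit picture."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```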
Abstract:
A virtual reality device, including a first sensor, a second sensor, a processor, and a display screen, is provided. The first sensor senses a first rotation-amplitude corresponding to the virtual reality device, and outputs a first sensing signal corresponding to the first rotation-amplitude. The second sensor senses a second rotation-amplitude corresponding to the user's eyes, and outputs a second sensing signal corresponding to the second rotation-amplitude. The processor generates a panoramic image, and obtains an initial display picture. The display screen displays the initial display picture. The processor further outputs a first display picture according to the first sensing signal and the second sensing signal, and the display screen displays the first display picture.