Abstract:
A video frame processing method, which comprises: (a) capturing at least two video frames via a multi-view camera system comprising a plurality of cameras; (b) recording a timestamp for each video frame; (c) determining a major camera and a first sub camera out of the multi-view camera system based on the timestamps, wherein the major camera captures a major video sequence comprising at least one major video frame, and the first sub camera captures a video sequence of a first view comprising at least one video frame of the first view; (d) generating a first reference video frame of the first view according to a first reference major video frame of the major video frames, which is at a reference timestamp corresponding to the first reference video frame of the first view, and according to at least one video frame of the first view surrounding the reference timestamp; and (e) generating a multi-view video sequence comprising a first multi-view video frame, wherein the first multi-view video frame is generated based on the first reference video frame of the first view and the first reference major video frame.
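As a rough illustration of steps (b) through (e), the sketch below (Python with NumPy) picks the camera with the densest timestamps as the major camera and blends the two first-view frames surrounding a reference timestamp. The selection rule, the linear blending, and all function names are assumptions made for this sketch; the abstract does not prescribe any particular interpolation.

import numpy as np

def pick_major_camera(timestamp_lists):
    # Assumed rule: the camera with the highest effective frame rate is major.
    rates = [len(ts) / (ts[-1] - ts[0]) for ts in timestamp_lists]
    return int(np.argmax(rates))

def synthesize_first_view_frame(ref_ts, sub_frames, sub_ts):
    # Blend the two first-view frames surrounding the reference timestamp.
    idx = int(np.clip(np.searchsorted(sub_ts, ref_ts), 1, len(sub_ts) - 1))
    t0, t1 = sub_ts[idx - 1], sub_ts[idx]
    w = (ref_ts - t0) / (t1 - t0) if t1 != t0 else 0.0
    return (1.0 - w) * sub_frames[idx - 1] + w * sub_frames[idx]

# Toy 2x2 "frames": a 60 fps major camera and a 30 fps first sub camera.
major_ts = [0.000, 0.017, 0.033, 0.050]
sub_ts = [0.010, 0.043, 0.076]
major_frames = [np.full((2, 2), 50.0 + 10 * i) for i in range(len(major_ts))]
sub_frames = [np.full((2, 2), 10.0 * (i + 1)) for i in range(len(sub_ts))]

major = pick_major_camera([major_ts, sub_ts])            # index 0 here
ref_ts = major_ts[2]                                     # reference timestamp
first_view_ref = synthesize_first_view_frame(ref_ts, sub_frames, sub_ts)
multi_view_frame = np.stack([major_frames[2], first_view_ref])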
Abstract:
Various examples pertaining to camera view synthesis on a head-mounted display (HMD) for virtual reality (VR) and augmented reality (AR) are described. A method involves receiving, from a plurality of tracking cameras disposed around an HMD, image data of a scene which is on a first side of the HMD. The method also involves performing, using the image data and depth information pertaining to the scene, view synthesis to generate a see-through effect of viewing the scene from a viewing position on a second side of the HMD opposite the first side thereof.
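A see-through effect of this kind can be sketched as depth-based reprojection: back-project each tracking-camera pixel using the scene depth, move it into the eye's coordinate frame, and project it again. The following Python sketch shows only that generic step; the intrinsics, the camera-to-eye transform, and the function name reproject_to_eye are assumptions, not details from the abstract.

import numpy as np

def reproject_to_eye(image, depth, K_cam, K_eye, R, t):
    # Forward-warp a tracking-camera image to a virtual eye viewpoint:
    # back-project each pixel with its depth, apply the camera-to-eye rigid
    # transform, and re-project with the eye intrinsics.
    h, w = depth.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(float)
    pts_cam = np.linalg.inv(K_cam) @ pix * depth.reshape(1, -1)  # 3D, camera frame
    pts_eye = R @ pts_cam + t.reshape(3, 1)                      # 3D, eye frame
    proj = K_eye @ pts_eye
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[ok], u[ok]] = image.reshape(-1)[ok]
    return out

# Tiny example: a constant-depth plane seen from a slightly offset viewpoint.
K = np.array([[2.0, 0.0, 1.5], [0.0, 2.0, 1.5], [0.0, 0.0, 1.0]])
img = np.arange(16.0).reshape(4, 4)
depth = np.full((4, 4), 2.0)
warped = reproject_to_eye(img, depth, K, K, np.eye(3), np.array([1.0, 0.0, 0.0]))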
Abstract:
A synchronization controller for a multi-sensor camera device includes a detection circuit and a control circuit. The detection circuit detects asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles. The control circuit controls an operation of the multi-sensor camera device in response to the asynchronization detected by the detection circuit. In addition, a synchronization method applied to a multi-sensor camera device includes the following steps: detecting asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles; and controlling an operation of the multi-sensor camera device in response to the detected asynchronization.
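One way to picture the detection circuit's job is as a timestamp comparison across the per-sensor outputs, with the control circuit acting when the spread exceeds a tolerance. The Python sketch below assumes such a timestamp-based check and a simple resynchronize-and-drop policy; the threshold value and the names detect_asynchronization and control_camera are illustrative, not taken from the abstract.

from dataclasses import dataclass

# Assumed tolerance: half a frame period at 30 fps.
SYNC_THRESHOLD_S = 0.5 / 30.0

@dataclass
class FrameMeta:
    sensor_id: int
    timestamp: float  # capture time in seconds

def detect_asynchronization(metas):
    # Flag asynchronization when the per-sensor capture timestamps spread
    # wider than the allowed tolerance.
    stamps = [m.timestamp for m in metas]
    return (max(stamps) - min(stamps)) > SYNC_THRESHOLD_S

def control_camera(metas, resync_fn, drop_fn):
    # One possible control policy: re-synchronize the sensors and drop the
    # mismatched frame group when asynchronization is detected.
    if detect_asynchronization(metas):
        resync_fn()
        drop_fn(metas)

# Example: two sensors captured ~20 ms apart, so asynchronization is detected.
metas = [FrameMeta(0, 1.000), FrameMeta(1, 1.020)]
print(detect_asynchronization(metas))  # True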
Abstract:
A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
Abstract:
A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
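The adjustment in step (c) is not specified further in either of the two abstracts above; one plausible reading is a photometric correction in which the second camera's candidate frame is matched to the first camera's frame. The Python sketch below assumes a simple mean-brightness matching for that purpose; the function name and the choice of brightness as the adjusted property are assumptions.

import numpy as np

def adjust_candidate_frame(first_frame, candidate_second_frame):
    # Match the candidate frame's global brightness to the first camera's
    # frame to produce the target single-view frame.  Mean-brightness
    # matching is only one plausible adjustment.
    gain = first_frame.mean() / max(candidate_second_frame.mean(), 1e-6)
    return np.clip(candidate_second_frame * gain, 0, 255)

# Example: the second camera's frame is darker; the gain lifts it to match.
first = np.full((2, 2), 120.0)
second = np.full((2, 2), 60.0)
print(adjust_candidate_frame(first, second))  # all 120.0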
Abstract:
A stereo preview apparatus has an auto-stereoscopic display, an input interface, a motion detection circuit, and a visual transition circuit. The input interface receives at least one input stereo image pair including a left-view image and a right-view image generated from an image capture device. The motion detection circuit evaluates a motion status of the image capture device. The visual transition circuit generates an output stereo image pair based on the input stereo image pair, and outputs the output stereo image pair to the auto-stereoscopic display for stereo preview, wherein the visual transition circuit refers to the evaluated motion status to configure the adjustment made to the input stereo image pair when generating the output stereo image pair.
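One conceivable adjustment the visual transition circuit could configure from the motion status is a disparity reduction, shifting the left and right views toward each other while the capture device moves quickly so the auto-stereoscopic preview stays comfortable. The Python sketch below assumes exactly that adjustment; the abstract itself does not say which property is modified, and the function name visual_transition is illustrative.

import numpy as np

def visual_transition(left, right, motion_level, max_shift=8):
    # Shift the two views toward each other in proportion to the motion
    # level, reducing disparity when the capture device moves fast.
    shift = int(round(max_shift * np.clip(motion_level, 0.0, 1.0)))
    if shift == 0:
        return left, right
    out_left = np.roll(left, shift, axis=1)     # move left view rightwards
    out_right = np.roll(right, -shift, axis=1)  # move right view leftwards
    return out_left, out_right

# Example: a strongly moving camera (motion_level=1.0) gets the full shift.
l = np.arange(16.0).reshape(4, 4)
r = l + 100
out_l, out_r = visual_transition(l, r, motion_level=1.0, max_shift=1)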
Abstract:
A synchronization controller for a multi-sensor camera device includes a detection circuit and a control circuit. The detection circuit detects asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles. The control circuit controls an operation of the multi-sensor camera device in response to the asynchronization detected by the detection circuit. In addition, a synchronization method applied to a multi-sensor camera device includes the following steps: detecting asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles; and controlling an operation of the multi-sensor camera device in response to the detected asynchronization.
Abstract:
A method and apparatus of auto focusing for a camera based on analysis of the image content in a target window are disclosed. According to the present invention, image content in a target window is analyzed to determine a state, a state change, or both associated with the target window. The information associated with the state, the state change, or both is provided to update the camera parameters. The state may be the size, position, pose, behavior, or gesture of one or more objects, or areas associated with one or more regions in the target window. The state may correspond to the motion field or optical flow associated with the target window. The state may correspond to object motion, extracted features, or scales of the objects in the target window. The state may correspond to an image content description of the segmented regions or a deformable object contour in the target window.
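As a concrete, simplified reading: the state of the target window can be its pixel content, a state change can be flagged when that content shifts strongly between frames, and the camera parameter update can be a lens-position adjustment. The Python sketch below uses those assumptions; the threshold, the step size, and all function names are invented for illustration.

import numpy as np

# Illustrative threshold; the abstract does not specify numeric values.
CHANGE_THRESHOLD = 12.0

def window_state(frame, box):
    # Crop the target window.  Here the "state" is just the pixel content;
    # the abstract also mentions size, pose, motion field, contours, etc.
    x, y, w, h = box
    return frame[y:y + h, x:x + w].astype(float)

def needs_refocus(prev_state, curr_state):
    # Flag a state change when the window content differs strongly.
    return np.abs(curr_state - prev_state).mean() > CHANGE_THRESHOLD

def update_focus(lens_position, refocus_requested, step=5):
    # Toy parameter update: nudge the lens position when a change is seen.
    return lens_position + step if refocus_requested else lens_position

# Example: the tracked object moves and the window content changes.
prev = np.zeros((8, 8))
curr = np.full((8, 8), 30.0)
print(update_focus(100, needs_refocus(prev, curr)))  # 105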