Abstract:
An image processing method includes: receiving a plurality of images, the images being captured under different viewpoints; and performing image alignment for the plurality of images by warping the plurality of images, where the plurality of images are warped according to a set of parameters, and the set of parameters is obtained by finding a solution constrained to predetermined ranges of physical camera parameters. In particular, the step of performing the image alignment further includes: automatically performing the image alignment to reproduce a three-dimensional (3D) visual effect, where the plurality of images is captured by utilizing a camera module, and the camera module is not calibrated with regard to the viewpoints. For example, the 3D visual effect can be a multi-angle view (MAV) visual effect. In another example, the 3D visual effect can be a 3D panorama visual effect. An associated apparatus is also provided.
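To make the constrained-alignment idea concrete, the following sketch aligns two views by fitting a similarity transform (rotation angle, scale, translation) whose parameters are bounded to preset ranges standing in for the "predetermined ranges of physical camera parameters". It is a minimal illustration using ORB matching and a bounded least-squares solve, not the patented algorithm; all thresholds and bounds are assumptions.

```python
# Minimal sketch (not the patented algorithm): align two 8-bit grayscale views
# with a similarity transform whose parameters are confined to preset ranges.
import cv2
import numpy as np
from scipy.optimize import least_squares

def match_points(img_a, img_b, max_matches=200):
    """Detect ORB features in both images and return matched point pairs."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return pts_a, pts_b

def fit_bounded_similarity(pts_a, pts_b):
    """Fit (angle, scale, tx, ty) with each parameter kept inside a preset range."""
    def residual(p):
        angle, scale, tx, ty = p
        c, s = np.cos(angle), np.sin(angle)
        warped = scale * (pts_a @ np.array([[c, -s], [s, c]]).T) + np.array([tx, ty])
        return (warped - pts_b).ravel()
    # The bounds stand in for "predetermined ranges of physical camera parameters".
    lower = [-np.deg2rad(15), 0.8, -100.0, -100.0]
    upper = [ np.deg2rad(15), 1.2,  100.0,  100.0]
    return least_squares(residual, x0=[0.0, 1.0, 0.0, 0.0], bounds=(lower, upper)).x

def align(img_a, img_b):
    """Warp img_a toward img_b using the bounded similarity transform."""
    angle, scale, tx, ty = fit_bounded_similarity(*match_points(img_a, img_b))
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    warp = np.float32([[c, -s, tx], [s, c, ty]])
    h, w = img_b.shape[:2]
    return cv2.warpAffine(img_a, warp, (w, h))
```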
Abstract:
A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
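As one illustration of "adjusting" a candidate second-camera frame based on a first-camera frame, the sketch below resizes the candidate frame to the first frame's resolution and matches its per-channel mean and standard deviation. This is an assumed, simplified form of adjustment (8-bit BGR frames are assumed), not the claimed method.

```python
# Illustrative sketch only: adjust a candidate second-camera frame so its size
# and global color statistics match a first-camera frame (8-bit BGR assumed).
import cv2
import numpy as np

def adjust_candidate(first_frame, candidate_second_frame):
    """Return a single-view frame derived from the adjusted candidate frame."""
    h, w = first_frame.shape[:2]
    out = cv2.resize(candidate_second_frame, (w, h)).astype(np.float32)
    ref = first_frame.astype(np.float32)
    for c in range(3):
        # Per-channel mean/std matching as one simple example of "adjusting".
        out[..., c] = (out[..., c] - out[..., c].mean()) / (out[..., c].std() + 1e-6)
        out[..., c] = out[..., c] * ref[..., c].std() + ref[..., c].mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```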
Abstract:
A three-dimensional (3D) image capture method, employed in an electronic device with a monocular camera and a 3D display, includes at least the following steps: while the electronic device is moving, deriving a 3D preview image from a first preview image and a second preview image generated by the monocular camera, and providing 3D preview on the 3D display according to the 3D preview image, wherein at least one of the first preview image and the second preview image is generated while the electronic device is moving; and when a capture event is triggered, outputting the 3D preview image as a 3D captured image.
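A rough sketch of the monocular 3D-preview idea is given below: a short buffer of preview frames captured while the device moves supplies an earlier frame and the most recent frame as the two views, which are composed side by side. The buffer length and the side-by-side composition are illustrative assumptions rather than the disclosed processing.

```python
# Rough sketch: pair an earlier preview frame with the latest one while the
# device moves, and compose them side by side as the 3D preview image.
from collections import deque
import numpy as np

class MonocularPreview3D:
    def __init__(self, baseline_frames=5):
        # Keep a few recent preview frames; the oldest serves as the left view.
        self.buffer = deque(maxlen=baseline_frames)

    def update(self, preview_frame):
        """Feed a new monocular preview frame; return a side-by-side 3D preview."""
        self.buffer.append(preview_frame)
        left = self.buffer[0]       # frame generated earlier during the motion
        right = self.buffer[-1]     # most recent frame
        return np.hstack([left, right])

    def capture(self, preview_frame):
        """On a capture event, output the current 3D preview as the 3D captured image."""
        return self.update(preview_frame)
```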
Abstract:
A stereo preview apparatus has an auto-stereoscopic display, an input interface, a motion detection circuit, and a visual transition circuit. The input interface receives at least an input stereo image pair including a left-view image and a right-view image generated from an image capture device. The motion detection circuit evaluates a motion status of the image capture device. The visual transition circuit generates an output stereo image pair based on the input stereo image pair, and outputs the output stereo image pair to the auto-stereoscopic display for stereo preview, wherein the visual transition circuit refers to the evaluated motion status to configure the adjustment made to the input stereo image pair when generating the output stereo image pair.
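The sketch below illustrates one way such a visual transition circuit might behave: a crude frame-difference motion metric on the left view controls how strongly the right view is blended toward the left view, so the preview degrades gracefully toward 2D while the device moves fast. The motion metric and blending law are assumptions, not the disclosed design.

```python
# Assumed behavior for illustration: fast device motion blends the right view
# toward the left view so the stereo preview degrades gracefully toward 2D.
import cv2
import numpy as np

class VisualTransition:
    def __init__(self):
        self.prev_gray = None

    def motion_level(self, left_image):
        """Crude motion status: mean absolute difference of consecutive left views."""
        gray = cv2.cvtColor(left_image, cv2.COLOR_BGR2GRAY)
        level = 0.0 if self.prev_gray is None else \
            float(np.mean(cv2.absdiff(gray, self.prev_gray))) / 255.0
        self.prev_gray = gray
        return level

    def process(self, left_image, right_image):
        """Return the output stereo pair, adjusted according to the motion status."""
        m = min(1.0, self.motion_level(left_image) * 10.0)  # 0 = static, 1 = fast
        adjusted_right = cv2.addWeighted(right_image, 1.0 - m, left_image, m, 0)
        return left_image, adjusted_right
```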
Abstract:
An auto-convergence system includes a disparity unit, a convergence unit and an active learning unit. The disparity unit performs a disparity analysis upon an input stereo image pair, and accordingly obtains a disparity distribution of the input stereo image pair. The convergence unit adjusts the input stereo image pair adaptively according to the disparity distribution and a learned convergence range, and accordingly generates an output stereo image pair for playback. The active learning unit actively learns a convergence range during playback of stereo image pairs, and accordingly determines the learned convergence range.
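A simplified sketch of the auto-convergence flow follows, with several assumed details: block-matching disparity stands in for the disparity analysis, the adaptive adjustment is a horizontal shift of the right view toward the learned range, and the actively learned convergence range is approximated by running percentiles of disparities observed during playback.

```python
# Simplified sketch: block-matching disparity, shift-based convergence, and a
# percentile-based "learned convergence range" updated during playback.
import cv2
import numpy as np

class AutoConvergence:
    def __init__(self):
        self.stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        self.learned_range = (-8.0, 8.0)   # convergence range in pixels (assumed)

    def disparity_distribution(self, left, right):
        """Disparity analysis: return the valid disparity values of the pair."""
        left_g = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        right_g = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
        disp = self.stereo.compute(left_g, right_g).astype(np.float32) / 16.0
        return disp[disp > 0]

    def converge(self, left, right):
        """Shift the right view so the median disparity falls inside the learned range."""
        disp = self.disparity_distribution(left, right)
        median = float(np.median(disp)) if disp.size else 0.0
        lo, hi = self.learned_range
        shift = int(round(median - np.clip(median, lo, hi)))
        warp = np.float32([[1, 0, shift], [0, 1, 0]])
        return left, cv2.warpAffine(right, warp, (right.shape[1], right.shape[0]))

    def learn(self, disp_values):
        """Actively refine the convergence range from disparities seen during playback."""
        if disp_values.size:
            self.learned_range = (float(np.percentile(disp_values, 10)),
                                  float(np.percentile(disp_values, 90)))
```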
Abstract:
A stereoscopic control method includes: establishing a specific mapping relation between a specific disparity value and a specific set of a first focal setting value of a first sensor of a stereo camera and a second focal setting value of a second sensor of the stereo camera; and controlling stereoscopic focus of the stereo camera according to the specific mapping relation. In addition, a stereoscopic control apparatus includes a mapping unit and a focus control unit. The mapping unit is arranged for establishing at least a specific mapping relation between a specific disparity value and a specific set of a first focal setting value of a first sensor of a stereo camera and a second focal setting value of a second sensor of the stereo camera. The focus control unit is arranged for controlling stereoscopic focus of the stereo camera according to the specific mapping relation.
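As a toy illustration of the mapping unit and focus control unit, the sketch below records calibration samples that pair a disparity value with a set of focal setting values for the two sensors, and interpolates that table to pick focal settings for a measured disparity. The table-plus-interpolation scheme is an assumption made for illustration only.

```python
# Toy illustration: a calibration table maps disparity values to pairs of focal
# setting values, and focus control interpolates it for a measured disparity.
import numpy as np

class StereoFocusController:
    def __init__(self):
        self.disparities = []   # recorded disparity values
        self.focus_pairs = []   # matching (first_focal, second_focal) settings

    def add_mapping(self, disparity, first_focal, second_focal):
        """Establish one mapping: a disparity value <-> a set of focal setting values."""
        self.disparities.append(disparity)
        self.focus_pairs.append((first_focal, second_focal))

    def focus_settings_for(self, measured_disparity):
        """Control stereoscopic focus: pick focal settings for a measured disparity."""
        d = np.array(self.disparities, dtype=float)
        pairs = np.array(self.focus_pairs, dtype=float)
        order = np.argsort(d)
        first = np.interp(measured_disparity, d[order], pairs[order, 0])
        second = np.interp(measured_disparity, d[order], pairs[order, 1])
        return first, second
```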
Abstract:
An image viewing method includes: determining at least a first partial image corresponding to a portion of a first image directly selected from a plurality of images, and driving a display apparatus according to the first partial image; in accordance with a user interaction input, determining a second partial image corresponding to a portion of a second image directly selected from the images; and driving the display apparatus according to at least the second partial image. In one implementation, the first image and the second image are spatially correlated, and a field of view (FOV) of each of the first image and the second image is larger than an FOV of the display apparatus.
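An illustrative sketch of the viewing flow is shown below: a viewport cropped directly from the currently selected wide-FOV image drives the display, and a user interaction can pan the viewport or directly select a second image. The windowing details and interaction API are assumptions, not the claimed implementation.

```python
# Illustrative viewer: crop a display-sized portion directly from the selected
# wide-FOV image, pan it, or switch to a second image on user interaction.
import numpy as np

class PartialImageViewer:
    def __init__(self, images, view_w=640, view_h=480):
        self.images = images             # spatially correlated wide-FOV images
        self.index = 0                   # currently selected image
        self.x, self.y = 0, 0            # top-left corner of the displayed portion
        self.view_w, self.view_h = view_w, view_h

    def current_partial(self):
        """Return the partial image that drives the display apparatus."""
        img = self.images[self.index]
        return img[self.y:self.y + self.view_h, self.x:self.x + self.view_w]

    def on_user_interaction(self, dx=0, dy=0, switch_to=None):
        """Pan within the current image and/or directly select a second image."""
        if switch_to is not None:
            self.index = switch_to
        img = self.images[self.index]
        self.x = int(np.clip(self.x + dx, 0, img.shape[1] - self.view_w))
        self.y = int(np.clip(self.y + dy, 0, img.shape[0] - self.view_h))
        return self.current_partial()
```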
Abstract:
A video frame processing method, which comprises: (a) capturing at least two video frames via a multi-view camera system comprising a plurality of cameras; (b) recording a timestamp for each video frame; (c) determining a major camera and a first sub camera out of the multi-view camera system, based on the timestamps, wherein the major camera captures a major video sequence comprising at least one major video frame, and the first sub camera captures a video sequence of first view comprising at least one video frame of first view; (d) generating a first reference video frame of first view according to one first reference major video frame of the major video frames, which is at a reference timestamp corresponding to the first reference video frame of first view, and according to at least one video frame of first view surrounding the reference timestamp; and (e) generating a multi-view video sequence comprising a first multi-view video frame, wherein the first multi-view video frame is generated based on the first reference video frame of first view and the first reference major video frame.
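The following sketch, under assumed details, illustrates the timestamp-driven steps: the camera with the higher frame rate is taken as the major camera, and a first-view frame at the major frame's reference timestamp is synthesized by temporally weighting the surrounding first-view frames. The weighted blending is an illustrative stand-in for whatever frame generation the claims actually cover.

```python
# Assumed details for illustration: the higher-frame-rate camera is the major
# camera, and a first-view frame at the major frame's timestamp is synthesized
# by temporally weighting the two surrounding first-view frames.
import bisect
import numpy as np

def pick_major(timestamps_a, timestamps_b):
    """Determine the major camera from the recorded timestamps (more frames wins)."""
    return 'a' if len(timestamps_a) >= len(timestamps_b) else 'b'

def synth_first_view_frame(ref_ts, sub_timestamps, sub_frames):
    """Generate a first-view frame at ref_ts from the surrounding first-view frames."""
    i = bisect.bisect_left(sub_timestamps, ref_ts)
    if i == 0:
        return sub_frames[0]
    if i >= len(sub_frames):
        return sub_frames[-1]
    t0, t1 = sub_timestamps[i - 1], sub_timestamps[i]
    w = (ref_ts - t0) / float(t1 - t0)        # weight toward the later frame
    blended = (1.0 - w) * sub_frames[i - 1].astype(np.float32) \
              + w * sub_frames[i].astype(np.float32)
    return blended.astype(sub_frames[i].dtype)

def multi_view_frame(major_frame, ref_ts, sub_timestamps, sub_frames):
    """Pair the major frame with the synthesized first-view frame at the same timestamp."""
    return major_frame, synth_first_view_frame(ref_ts, sub_timestamps, sub_frames)
```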