Abstract:
A method for adjusting moving depths for a video is provided, which is adapted for 2D to 3D conversion. The method includes receiving a plurality of frames at a plurality of time points and calculating a plurality of local motion vectors and a global motion vector in each of the frames. The method also includes determining a first difference degree between the local motion vectors and the global motion vector in the frames. The method further includes determining a second difference degree between a current frame and the other frames. The method also includes calculating a gain value according to the first difference degree and the second difference degree. The method further includes adjusting original moving depths of the current frame according to the gain value. Accordingly, a phenomenon of depth inversion can be avoided or mitigated.
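A minimal Python sketch of the adjustment described above, assuming the difference degrees are mean vector deviations and the gain is a simple decreasing function of their product; the function names, parameters, and exact mapping are illustrative, not taken from the abstract:

```python
import numpy as np

def adjust_moving_depths(local_mvs, global_mvs, frame_idx, original_depths,
                         max_gain=1.0):
    """Scale the moving depths of the current frame by a gain derived from
    two difference degrees (hypothetical formulation)."""
    # First difference degree: per-frame average deviation of the local
    # motion vectors from that frame's global motion vector.
    first_deg = np.array([
        np.mean(np.linalg.norm(lmv - gmv, axis=-1))
        for lmv, gmv in zip(local_mvs, global_mvs)
    ])
    # Second difference degree: how much the current frame's deviation
    # differs from that of the other frames (assumed: mean absolute gap).
    others = np.delete(first_deg, frame_idx)
    second_deg = np.mean(np.abs(first_deg[frame_idx] - others))
    # Assumed gain: shrinks toward zero when both degrees are large, which
    # suppresses moving depths that could otherwise cause depth inversion.
    gain = max_gain / (1.0 + first_deg[frame_idx] * second_deg)
    return original_depths * gain
```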
Abstract:
A depth generation method adapted for a 2D to 3D image conversion device is provided. The depth generation method includes the following steps. Motion vectors in an image frame are obtained by motion estimation. A global motion vector of the image frame is obtained. Motion differences between the motion vector of each block and the global motion vector are calculated. A depth-from-motion of each block is obtained based on the motion differences. Furthermore, a depth generation apparatus using the same is also provided.
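One way to read these steps in code, assuming the global motion vector is estimated as the median of the block motion vectors and that larger motion differences map to larger (nearer) depth values; the names and normalization are illustrative:

```python
import numpy as np

def depth_from_motion(block_mvs, depth_max=255):
    """Derive a per-block depth-from-motion map from block motion vectors
    (hypothetical formulation)."""
    # Global motion vector of the frame, assumed here to be the median of
    # all block motion vectors.
    global_mv = np.median(block_mvs.reshape(-1, 2), axis=0)
    # Motion difference between each block's motion vector and the global one.
    motion_diff = np.linalg.norm(block_mvs - global_mv, axis=-1)
    # Normalize the differences into depth values: blocks that move
    # differently from the background are treated as nearer (larger depth).
    peak = motion_diff.max()
    if peak == 0:
        return np.zeros(motion_diff.shape, dtype=np.uint8)
    return (motion_diff / peak * depth_max).astype(np.uint8)
```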
Abstract:
A device and method are provided for two dimension (2D) to three dimension (3D) conversion. A 2D to 3D conversion device receives a 2D image data. The 2D to 3D conversion device assigns position data of a predetermined window. The 2D to 3D conversion device generates a depth map including a depth data of the 2D image data according to the 2D image data and the position data of the predetermined window. The 2D to 3D conversion device converts the 2D image data into a 3D image data according to the depth data of the depth map and the position data of the predetermined window.
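A rough Python sketch of the conversion pipeline described above, assuming the predetermined window is simply assigned a nearer depth and the 3D pair is synthesized by depth-proportional horizontal shifts; the depth values, shift scale, and names are illustrative:

```python
import numpy as np

def convert_2d_to_3d(image, window_pos, window_size,
                     base_depth=64, window_depth=200):
    """Build a depth map that brings a predetermined window forward, then
    synthesize left/right views by shifting pixels (hypothetical sketch)."""
    h, w = image.shape[:2]
    depth = np.full((h, w), base_depth, dtype=np.uint8)
    y, x = window_pos
    wh, ww = window_size
    depth[y:y + wh, x:x + ww] = window_depth   # window region appears closer
    # Horizontal parallax proportional to depth (simple DIBR-style shift).
    shift = (depth.astype(np.int32) - base_depth) // 32
    rows = np.arange(h)[:, None]
    cols = np.arange(w)
    left = image[rows, np.clip(cols - shift, 0, w - 1)]
    right = image[rows, np.clip(cols + shift, 0, w - 1)]
    return depth, left, right
```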
Abstract:
A method for detecting a static logo is provided. The method includes following steps. An edge detection is performed on each of a plurality of blocks to be detected in an image frame so as to obtain edge detection information. A motion estimation is performed on a plurality of blocks within a respective surrounding area of each of the blocks to be detected so as to obtain distribution information of motion vectors. Whether a logo is a static logo is determined according to the edge detection information and the distribution information of motion vectors. Accuracy of the logo detection can be increased by using the method.
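The abstract does not spell out the decision rule, so the sketch below assumes one plausible combination: a block is reported as a static logo when it is edge-rich and the motion vectors estimated in its surrounding area are mostly near zero. All thresholds and names are illustrative:

```python
import numpy as np

def is_static_logo(block, surrounding_mvs,
                   edge_thresh=1000.0, static_ratio_thresh=0.5):
    """Combine edge detection information with the distribution of
    surrounding motion vectors (hypothetical decision rule)."""
    # Edge detection information: total gradient magnitude inside the block.
    gy, gx = np.gradient(block.astype(np.float32))
    edge_strength = float(np.sum(np.hypot(gx, gy)))
    # Distribution information of motion vectors: share of vectors in the
    # surrounding area whose magnitude is (nearly) zero.
    mags = np.linalg.norm(surrounding_mvs.reshape(-1, 2), axis=1)
    static_ratio = float(np.mean(mags < 1.0))
    # Assumed rule: strong edges plus a mostly static neighborhood.
    return edge_strength > edge_thresh and static_ratio > static_ratio_thresh
```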
Abstract:
A multimedia device and a play mode determination method of the same are provided. The multimedia device includes a frame difference calculation unit, a global threshold determination unit and a play mode determination unit. The frame difference calculation unit calculates the frame difference between two continuous frames to obtain a global variation. The global threshold determination unit determines a film mode threshold corresponding to a film mode and a video mode threshold corresponding to a video mode according to a current frame of the two frames and a previous global variation, and selects a global threshold from the film mode threshold and the video mode threshold. The selected threshold is smaller than the film mode threshold. The play mode determination unit compares the global variation with the global threshold, and enables the multimedia device to enter one of the film mode and the video mode according to the comparison result.
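A hypothetical Python sketch of this decision flow, assuming the global variation is a sum of absolute frame differences, the two thresholds are scaled from the current frame and the previous global variation, and a small variation indicates a repeated (film-originated) frame; none of these choices are fixed by the abstract:

```python
import numpy as np

def determine_play_mode(curr_frame, prev_frame, prev_global_variation,
                        film_scale=0.02, video_scale=0.01):
    """Select a global threshold and pick film or video mode by comparing it
    with the global variation (hypothetical formulation)."""
    curr = curr_frame.astype(np.int64)
    prev = prev_frame.astype(np.int64)
    # Frame difference between the two frames -> global variation.
    global_variation = int(np.sum(np.abs(curr - prev)))
    # Thresholds derived from the current frame and the previous global
    # variation (assumed weighting).
    base = 0.5 * float(np.sum(curr)) + 0.5 * prev_global_variation
    film_threshold = film_scale * base
    video_threshold = video_scale * base
    # The selected global threshold is smaller than the film mode threshold.
    global_threshold = min(video_threshold, film_threshold)
    # Assumed comparison: a small variation suggests a repeated frame,
    # which is characteristic of film (pulldown) content.
    mode = "film" if global_variation < global_threshold else "video"
    return mode, global_variation
```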
Abstract:
An image processing method is disclosed. A 2D image is virtually divided into a plurality of blocks. With respect to each block, an optimum contrast value and a corresponding focus step are obtained. An object distance for an image in each block is obtained according to the respective focus step of each block. A depth map is obtained from the object distances of the blocks. The 2D image is synthesized to form a 3D image according to the depth map.
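A compact sketch of the depth-from-focus idea, assuming the per-block contrast is measured across a stack of images captured at successive focus steps and that a calibration function maps a focus step to an object distance; the names and block size are illustrative:

```python
import numpy as np

def depth_map_from_focus(focus_stack, focus_to_distance, block=16):
    """For each block, pick the focus step with the best contrast, convert it
    to an object distance, and collect a depth map (hypothetical sketch).

    focus_stack: array of shape (num_steps, H, W), one image per focus step.
    focus_to_distance: calibration mapping a focus step to an object distance.
    """
    steps, h, w = focus_stack.shape
    depth = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            patch = focus_stack[:, by * block:(by + 1) * block,
                                   bx * block:(bx + 1) * block]
            # Contrast per focus step: variance of pixels inside the block.
            contrast = patch.reshape(steps, -1).var(axis=1)
            best_step = int(np.argmax(contrast))          # optimum contrast
            depth[by, bx] = focus_to_distance(best_step)  # object distance
    return depth
```

The resulting block-level depth map can then drive the 3D synthesis step, for example with a depth-proportional pixel shift as in the earlier sketch.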
Abstract:
A motion estimation method is provided for generating a motion vector of a to-be-generated frame between two continuous reference frames. The method includes the following steps. A candidate motion vector is obtained according to the position of a to-be-generated block of a to-be-generated frame. Two first reference blocks are obtained from the two reference frames by extending the candidate motion vector from the to-be-generated block to the two reference frames, respectively. Two second reference blocks are obtained from the two reference frames by extending the candidate motion vector from one reference frame to another reference frame. Whether the candidate motion vector is valid is determined according to the positions of the two reference blocks obtained in each obtaining step. The corresponding motion vector of the to-be-generated block is determined according to the valid candidate motion vector.
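One plausible reading of the validity check in code, assuming the test is whether every projected reference block stays inside the frame boundaries; the geometry (half-vector versus full-vector projection) and all names are illustrative:

```python
def validate_candidate_mv(block_pos, candidate_mv, frame_size, block_size=16):
    """Project a candidate motion vector from the to-be-generated block into
    both reference frames and across them, then accept the candidate only if
    every projected block lies inside the frame (hypothetical rule)."""
    (bx, by), (dx, dy) = block_pos, candidate_mv
    w, h = frame_size

    def inside(x, y):
        return 0 <= x <= w - block_size and 0 <= y <= h - block_size

    # First reference blocks: half the candidate vector toward each of the
    # two reference frames (the to-be-generated frame sits between them).
    first = [(bx - dx // 2, by - dy // 2), (bx + dx // 2, by + dy // 2)]
    # Second reference blocks: the full vector carried from one reference
    # frame to the other (assumed interpretation of the cross projection).
    second = [(bx - dx, by - dy), (bx + dx, by + dy)]
    return all(inside(x, y) for x, y in first + second)
```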
Abstract:
A multimedia device and a motion compensation method thereof are provided for generating a middle frame between two frames. The multimedia device includes an interpolation unit, a linear process unit and a combination unit. The interpolation unit generates a first reference pixel data according to the relationship among a first pixel data, a second pixel data, and a third pixel data. The first and the second pixel data are obtained according to the location of a to-be-generated pixel and a relative motion vector. The third pixel data is obtained according to the two pixel data. The linear process unit provides a linear combination of the first and the second pixel data to generate a second reference pixel data. The combination unit combines the two reference pixel data according to the difference between the first and the second pixel data to generate a pixel data for the to-be-generated pixel.
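A per-pixel Python sketch of this combination, assuming p1 and p2 are the two motion-compensated pixels, p3 is the third pixel data (how it is derived from the two is left open by the abstract, so it is passed in here), the first reference pixel is the median of the three, and the blend weight grows with the disagreement between p1 and p2; all constants and names are illustrative:

```python
def compensate_pixel(p1, p2, p3, blend_range=64.0):
    """Blend a median-based interpolation of three pixels with a linear
    average of two, weighted by how much p1 and p2 differ (hypothetical)."""
    # First reference pixel data: median of the three candidate pixels,
    # a common robust choice in motion-compensated interpolation.
    ref1 = sorted((p1, p2, p3))[1]
    # Second reference pixel data: linear combination of the first two.
    ref2 = 0.5 * p1 + 0.5 * p2
    # Combination weight from the difference between p1 and p2; a large
    # difference favors the linear result over the median-based one.
    w = min(abs(p1 - p2) / blend_range, 1.0)
    return (1.0 - w) * ref1 + w * ref2
```

Weighting by the p1/p2 difference is a hedge: when the two motion-compensated pixels disagree strongly, the motion vector is likely unreliable, so the smoother linear result dominates.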