Abstract:
The invention provides an input-output calibration method performed by a processing unit connected to an output device and an input device. The output device and the input device correspond to an output device coordinate system and an input device coordinate system, respectively. The processing unit uses the input device to derive a plurality of lines in the input device coordinate system for M calibration points by sensing a viewer specifying the M calibration points' positions, wherein the plurality of lines extend between the M calibration points and different positions of the viewer's predetermined object, and M is a positive integer equal to or larger than three. The processing unit then derives the M calibration points' coordinates in the input device coordinate system according to the plurality of lines and uses the M calibration points' coordinates in the output device coordinate system and in the input device coordinate system to derive the relationship between the two coordinate systems.
Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) for a motion vector (MV) of a current block of a current picture in Inter, Merge, or Skip mode. The method selects a co-located block corresponding to a co-located picture and receives one or more reference motion vectors (MVs) of one or more co-located reference blocks associated with the co-located block. The method also determines a search set and a search order for the search set; if the search MV corresponding to a given reference list is not available, the search order proceeds to the search MV corresponding to a reference list different from the given reference list. Finally, the method determines the MVP for the current block based on the search set and the search order and provides the MVP for the current block.
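As an illustration only (not the claimed implementation), the fallback between reference lists could be sketched as follows; the function and argument names are assumptions, and the co-located MVs are modeled as a simple dictionary keyed by reference list index:

```python
def find_search_mv(colocated_mvs, given_list):
    """Illustrative sketch of the described search order.

    colocated_mvs maps a reference list index (0 or 1) to the co-located
    block's MV as an (mvx, mvy) tuple, or to None if unavailable.
    The MV of the given reference list is tried first; if it is not
    available, the MV of the other reference list is searched instead.
    """
    mv = colocated_mvs.get(given_list)
    if mv is not None:
        return mv
    other = 1 - given_list          # the reference list different from the given one
    return colocated_mvs.get(other)  # may still be None if neither list has an MV
```

For example, with the list-0 MV unavailable, the search falls through to the list-1 MV of the co-located reference block.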
Abstract:
One of the embodiments of the invention provides an input-output calibration method performed by a processing unit connected to an output device and an input device. The output device and the input device correspond to an output device coordinate system and an input device coordinate system, respectively. The processing unit first uses the input device to derive a plurality of lines in the input device coordinate system for M calibration points by sensing a viewer specifying the M calibration points' positions, wherein the plurality of lines extend between the M calibration points and different positions of the viewer's predetermined object, and M is a positive integer equal to or larger than three. Then, the processing unit derives the M calibration points' coordinates in the input device coordinate system according to the plurality of lines and uses the M calibration points' coordinates in the output device coordinate system and coordinates in the input device coordinate system to derive the relationship between the output device coordinate system and the input device coordinate system.
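Purely as an illustration (not part of the claimed method), recovering a calibration point's coordinates from several lines can be sketched as a least-squares intersection: each line runs from one position of the viewer's predetermined object toward the calibration point, and the point minimizing the total squared distance to all lines is solved for. All names below are assumptions:

```python
import numpy as np

def intersect_lines(points, dirs):
    """Least-squares intersection of 3D lines.

    Each line i is given by a point points[i] (e.g. one sensed position of
    the viewer's predetermined object) and a direction dirs[i] toward the
    calibration point. Returns the 3D point minimizing the summed squared
    distance to all lines.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line direction
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

With two or more non-parallel lines the normal matrix is invertible, and the sketch generalizes directly to the M calibration points by solving once per point.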
Abstract:
A method and apparatus for three-dimensional video encoding or decoding using sub-block based inter-view prediction are disclosed. The method partitions a texture block into texture sub-blocks and determines disparity vectors of the texture sub-blocks. The inter-view reference data is derived based on the disparity vectors of the texture sub-blocks and a reference texture frame in a different view. The inter-view reference data is then used as prediction of the current block for encoding or decoding. One aspect of the present invention addresses partitioning the current texture block. Another aspect of the present invention addresses derivation of disparity vectors for the current texture sub-blocks.
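As an illustrative sketch only, the per-sub-block inter-view fetch can be modeled as follows, assuming integer horizontal disparities and a hypothetical callback `disparity_of(sy, sx)` supplying each sub-block's derived disparity; none of these names come from the source:

```python
import numpy as np

def interview_prediction(ref_frame, block_y, block_x, block_h, block_w,
                         sub_size, disparity_of):
    """Partition a texture block into square sub-blocks and build the
    inter-view reference data: each sub-block is fetched from the
    reference texture frame of a different view, shifted horizontally
    by that sub-block's disparity vector.
    """
    pred = np.zeros((block_h, block_w), dtype=ref_frame.dtype)
    for sy in range(0, block_h, sub_size):
        for sx in range(0, block_w, sub_size):
            d = disparity_of(sy, sx)        # integer disparity for this sub-block
            src_y = block_y + sy
            src_x = block_x + sx + d        # horizontal shift into the other view
            pred[sy:sy + sub_size, sx:sx + sub_size] = \
                ref_frame[src_y:src_y + sub_size, src_x:src_x + sub_size]
    return pred
```

The resulting array then serves as the prediction of the current block for encoding or decoding.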
Abstract:
Calibration methods for calibrating image capture devices of an around view monitoring (AVM) system mounted on a vehicle are provided, the calibration method including: extracting local patterns from images captured by each image capture device, wherein each local pattern is respectively disposed at a position within the image capturing range of one of the image capture devices; acquiring an overhead-view (OHV) image from an OHV point above the vehicle, wherein the OHV image includes first patterns relative to the local patterns for the image capture devices; generating global patterns from the OHV image using the first patterns, each global pattern corresponding to one of the local patterns; matching the local patterns with the corresponding global patterns to determine camera parameters and transformation information corresponding thereto for each image capture device; and calibrating each image capture device using the determined camera parameters and transformation information corresponding thereto so as to generate an AVM image.
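As a simplified illustration only, the matching step can be sketched as fitting a transform from local-pattern points (in a camera image) to the corresponding global-pattern points (in the overhead view). The sketch below fits a 2x3 affine transform by linear least squares; a full AVM calibration would also recover intrinsic camera parameters, and all names here are assumptions:

```python
import numpy as np

def fit_affine(local_pts, global_pts):
    """Estimate a 2x3 affine transform M mapping local-pattern points onto
    the corresponding global-pattern points, i.e. M @ (x, y, 1) ~ (u, v),
    by linear least squares over the point correspondences.
    """
    local_pts = np.asarray(local_pts, dtype=float)
    global_pts = np.asarray(global_pts, dtype=float)
    ones = np.ones((len(local_pts), 1))
    A = np.hstack([local_pts, ones])              # rows (x, y, 1)
    # Solve A @ X ~= global_pts for X (3x2), then transpose to the 2x3 form
    X, *_ = np.linalg.lstsq(A, global_pts, rcond=None)
    return X.T
```

Three or more non-collinear correspondences per camera suffice for this fit; the recovered transform stands in for the "transformation information" used to stitch the per-camera views into one AVM image.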
Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) are disclosed. The MVP is selected from one or more spatial and temporal MVP candidates. The method determines a value of a flag in a video bitstream, where the flag is utilized for selectively disabling use of one or more temporal MVP candidates for motion vector prediction. The method selects, based on an index derived from the video bitstream, the MVP from one or more non-temporal MVP candidates responsive to the flag indicating that said one or more temporal MVP candidates are not to be utilized for motion vector prediction. Further, the method provides the MVP for the current block.
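The selection rule can be sketched as below, purely for illustration; the flag, index, and candidate lists are modeled with assumed names, with MVs as simple tuples:

```python
def select_mvp(temporal_enabled_flag, mvp_index, spatial_cands, temporal_cands):
    """Pick the MVP by index from the candidate list.

    When the bitstream flag disables temporal MVPs, the index selects only
    among the non-temporal (spatial) candidates; otherwise both spatial and
    temporal candidates are available for selection.
    """
    if temporal_enabled_flag:
        candidates = spatial_cands + temporal_cands
    else:
        candidates = spatial_cands   # temporal candidates excluded by the flag
    return candidates[mvp_index]
```

Disabling temporal candidates this way also means the decoder never needs the co-located picture's motion field to resolve the index, which is the practical effect the flag enables.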
Abstract:
An apparatus and method for temporal motion vector prediction for a current block in a picture are disclosed. In the present method, one temporal block in a first reference picture in a first list selected from a list group comprising list 0 and list 1 is determined. When the determined temporal block has at least one motion vector, a candidate set is determined based on the motion vector of the temporal block. The temporal motion vector predictor or temporal motion vector predictor candidate, or temporal motion vector or temporal motion vector candidate, for the current block is determined from the candidate set by checking for the presence of a motion vector pointing to a reference picture in a first specific list among said at least one motion vector, wherein the first specific list is selected from the list group based on a priority order.
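An illustrative sketch of the priority-ordered check follows; the dictionary model and names are assumptions, not the claimed structure:

```python
def temporal_mvp(temporal_block_mvs, priority_order=(0, 1)):
    """Check the temporal block's motion vectors in priority order.

    temporal_block_mvs maps a reference list (0 or 1) to the temporal
    block's MV pointing to a reference picture in that list, or None.
    The lists are examined following the priority order; the first list
    in which an MV is present supplies the temporal MVP candidate.
    """
    for lst in priority_order:
        mv = temporal_block_mvs.get(lst)
        if mv is not None:
            return mv
    return None  # no MV present in any list of the priority order
```

Changing `priority_order` to `(1, 0)` models selecting list 1 as the first specific list instead.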
Abstract:
A method and apparatus for three-dimensional video coding using virtual depth information are disclosed. For a current texture block in the dependent view, the method incorporating the present invention first derives an estimated disparity vector to locate a corresponding texture block in a coded view. A collocated depth block in the coded view, collocated with the corresponding texture block, is identified and used to derive the virtual depth information. One aspect of the present invention addresses the derivation process for the estimated disparity vector. Another aspect of the present invention addresses the usage of the derived virtual depth information.
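The locate-and-fetch step can be sketched as below, for illustration only; integer pixel disparities and the array/argument names are assumptions:

```python
import numpy as np

def virtual_depth(depth_map_coded_view, x, y, w, h, est_disparity):
    """Derive virtual depth for a dependent-view texture block.

    The current block's position (x, y) is shifted by the estimated
    disparity vector (dx, dy) to locate the corresponding block in the
    coded view; the depth block collocated with that corresponding block
    is then read out as the virtual depth information.
    """
    dx, dy = est_disparity
    cx, cy = x + dx, y + dy                       # corresponding block position
    return depth_map_coded_view[cy:cy + h, cx:cx + w]
```

The returned depth block can then be used by later coding tools, e.g. to refine per-sub-block disparity vectors for the dependent view.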
Abstract:
A method and apparatus for performing hybrid multihypothesis prediction during video coding of a coding unit include: processing a plurality of sub-coding units in the coding unit; and performing disparity vector (DV) derivation when the coding unit is processed by a 3D or multi-view coding tool, or performing block vector (BV) derivation when the coding unit is processed by intra picture block copy (IntraBC) mode. The step of performing DV or BV derivation includes deriving a plurality of vectors for multihypothesis motion-compensated prediction of a specific sub-coding unit from at least one other sub-coding/coding unit. The other sub-coding/coding unit is coded before the corresponding DV or BV is derived for multihypothesis motion-compensated prediction of the specific sub-coding unit. A linear combination of a plurality of pixel values derived from the plurality of vectors is used as a predicted pixel value of the specific sub-coding unit.
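The final linear-combination step can be sketched as a weighted average of the hypotheses, purely as an illustration with assumed names and integer-pixel vectors:

```python
import numpy as np

def multihypothesis_pred(ref, x, y, w, h, vectors, weights=None):
    """Multihypothesis prediction of a sub-coding unit at (x, y).

    Each derived vector (DV or BV) fetches one prediction block from the
    reference samples; the predicted pixel values are a linear combination
    of those hypotheses (equal weights by default).
    """
    if weights is None:
        weights = [1.0 / len(vectors)] * len(vectors)
    pred = np.zeros((h, w), dtype=float)
    for (vx, vy), wgt in zip(vectors, weights):
        pred += wgt * ref[y + vy:y + vy + h, x + vx:x + vx + w]
    return pred
```

Unequal weights model a biased combination of hypotheses; with a single vector the sketch degenerates to ordinary single-hypothesis compensation.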