Abstract:
A method and system for compressing and decompressing video image data in real time employs thresholding and facsimile-based encoding to eliminate the need for computationally intensive two-dimensional transform-based compression techniques. The method operates first by forming a difference frame which contains only information pertaining to the difference between a current video image frame and a computed approximation of that frame. The difference frame is fed to a thresholder which categorizes each pixel in the frame as belonging either to a first set having intensities at or above a preset threshold, or to a second set having intensities below the threshold. A facsimile-based compression algorithm is then employed to encode the locations of the first set of pixels. To compress the intensity data for each pixel in the first set, a quantizer and lossless encoder are preferably employed, with the quantizer serving to categorize the intensities by groups, and the lossless encoder using conventional coding, such as Huffman coding, to compress the intensity data further. Various techniques may be employed with the embodiments of the invention to adjust the actual amount of compressed data generated by the method and system to accommodate communication lines with different data rate capabilities.
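The difference-frame thresholding step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the array shapes, the threshold value, and the function name are all assumptions introduced here.

```python
# Hypothetical sketch of forming a difference frame and splitting its pixels
# into an at-or-above-threshold set and a below-threshold set. All names and
# values here are illustrative assumptions, not the patent's actual design.
import numpy as np

def threshold_difference_frame(current, approximation, threshold):
    """Return a boolean significance map (True where the absolute difference
    meets or exceeds the threshold) and the intensities at those pixels."""
    diff = current.astype(np.int16) - approximation.astype(np.int16)
    significant = np.abs(diff) >= threshold   # first set: at/above threshold
    intensities = diff[significant]           # values to quantize and encode
    return significant, intensities

current = np.array([[10, 12], [200, 11]], dtype=np.uint8)
approx  = np.array([[10, 10], [100, 10]], dtype=np.uint8)
sig, vals = threshold_difference_frame(current, approx, threshold=3)
```

The significance map would then feed the facsimile-style location coder, and the intensity values the quantizer and lossless coder.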
Abstract:
Methods, systems, and computer program products for the generation of multiple layers of scaled encoded video data compatible with the HEVC standard. Residue from prediction processing may be transformed into coefficients in the frequency domain. The coefficients may then be sampled to create a layer of encoded data. The coefficients may be sampled in different ways to create multiple respective layers. The layers may then be multiplexed and sent to a decoder. There, one or more of the layers may be chosen. The choice of certain layer(s) may be dependent on the desired attributes of the resulting video. A certain level of video quality, frame rate, resolution, and/or bit depth may be desired, for example. The coefficients in the chosen layers may then be assembled to create a version of the residue to be used in video decoding.
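The layering idea above can be illustrated with a toy coefficient-sampling scheme. The particular split (a low-frequency base layer plus an enhancement layer of the remaining coefficients) is an assumption for illustration; the abstract does not specify how coefficients are sampled.

```python
# Illustrative sketch: sample transform coefficients into two layers, then
# assemble a chosen subset of layers back into a coefficient block. The
# sampling pattern here is an assumption, not the claimed method.
import numpy as np

def make_layers(coeffs):
    """Base layer = low-frequency corner; enhancement = remaining coeffs."""
    base = np.zeros_like(coeffs)
    base[:2, :2] = coeffs[:2, :2]      # sample the low-frequency coefficients
    enhancement = coeffs - base        # everything the base layer omitted
    return base, enhancement

def assemble(layers):
    """Sum the decoder's chosen layers back into one coefficient block."""
    return sum(layers)

coeffs = np.arange(16, dtype=float).reshape(4, 4)
base, enh = make_layers(coeffs)
```

A decoder wanting lower quality could assemble only `base`; assembling both layers recovers the full coefficient block.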
Abstract:
Method and apparatus for deriving a motion vector at a video decoder. A block-based motion vector may be produced at the video decoder by utilizing motion estimation among available pixels relative to blocks in one or more reference frames. The available pixels could be, for example, spatially neighboring blocks in the sequential scan coding order of a current frame, blocks in a previously decoded frame, or blocks in a downsampled frame in a lower pyramid when layered coding has been used.
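A decoder-side motion search of the kind described can be sketched as a SAD minimization over a small search window, using already-available pixels as the template. The search range, block geometry, and tie-breaking are illustrative assumptions.

```python
# Toy sketch of decoder-side motion estimation: find the displacement that
# best matches available (already-decoded) pixels against a reference frame.
# Search range and geometry are assumptions, not the patent's parameters.
import numpy as np

def derive_mv(template, ref, top_left, search_range=2):
    """Return the (dy, dx) minimizing SAD between the template and the
    displaced region in the reference frame."""
    h, w = template.shape
    y0, x0 = top_left
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # skip candidates falling outside the reference
            cand = ref[y:y + h, x:x + w].astype(int)
            sad = np.abs(template.astype(int) - cand).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

ref = np.zeros((8, 8), dtype=np.uint8)
template = np.array([[10, 20], [30, 40]], dtype=np.uint8)
ref[3:5, 4:6] = template                 # the block sits at (3, 4) in ref
mv = derive_mv(template, ref, top_left=(2, 3))
```

Because the search uses only pixels both encoder and decoder possess, the motion vector need not be transmitted.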
Abstract:
Systems and methods for detecting an object using motion estimation may include a processor and motion estimation and object detection logic coupled to the processor. The motion estimation and object detection logic may be configured to detect an object in a frame of a video based on motion estimation. The video may include a first frame and a second frame. The motion estimation may be performed on a region of the second frame using the sum of absolute differences between the region of the second frame and a corresponding region of the first frame.
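The region-level SAD comparison described above can be sketched directly. The threshold value and the decision rule (flag motion when SAD exceeds the threshold) are illustrative assumptions.

```python
# Minimal sketch: compare a region of the second frame against the
# corresponding region of the first frame via sum of absolute differences.
# The threshold is an illustrative assumption, not a claimed value.
import numpy as np

def region_sad(frame_a, frame_b, top, left, h, w):
    """SAD between corresponding h-by-w regions of two frames."""
    a = frame_a[top:top + h, left:left + w].astype(int)
    b = frame_b[top:top + h, left:left + w].astype(int)
    return int(np.abs(a - b).sum())

def region_has_motion(frame_a, frame_b, top, left, h, w, threshold=50):
    """Flag the region as containing motion when SAD exceeds the threshold."""
    return region_sad(frame_a, frame_b, top, left, h, w) > threshold

frame1 = np.zeros((4, 4), dtype=np.uint8)
frame2 = frame1.copy()
frame2[1:3, 1:3] = 40        # something moved into this region
sad = region_sad(frame1, frame2, 1, 1, 2, 2)
```

Regions flagged this way could then be passed to the object detection stage, limiting detection work to areas where motion occurred.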
Abstract:
Systems, devices and methods are described including, at an enhancement layer (EL) video encoder, determining an intra mode for a current block of an EL frame based, at least in part, on one or more first intra mode candidates obtained from at least one of a lower level EL frame or a base layer (BL) frame.
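Candidate-based intra mode selection can be sketched as evaluating each candidate mode (here gathered, say, from a co-located base-layer block) and keeping the cheapest. The tiny predictor set (DC, horizontal, vertical) and SAD cost are illustrative assumptions; they stand in for the full HEVC intra mode set.

```python
# Hedged sketch: pick an enhancement-layer intra mode from a candidate list
# by minimizing SAD against the source block. The three toy predictors and
# the cost function are assumptions, not the HEVC or patented procedure.
import numpy as np

def predict(mode, left_col, top_row, size):
    """Tiny intra predictors: 'dc', 'h' (horizontal), 'v' (vertical)."""
    if mode == "dc":
        return np.full((size, size), (left_col.mean() + top_row.mean()) / 2)
    if mode == "h":
        return np.tile(left_col.reshape(-1, 1), (1, size))
    if mode == "v":
        return np.tile(top_row.reshape(1, -1), (size, 1))
    raise ValueError(mode)

def choose_intra_mode(block, left_col, top_row, candidates):
    """Return the candidate mode with the lowest SAD against the block."""
    costs = {m: np.abs(block - predict(m, left_col, top_row,
                                       block.shape[0])).sum()
             for m in candidates}
    return min(costs, key=costs.get)

block = np.tile(np.array([1.0, 2.0, 3.0, 4.0]), (4, 1))  # vertical pattern
top_row = np.array([1.0, 2.0, 3.0, 4.0])
left_col = np.ones(4)
mode = choose_intra_mode(block, left_col, top_row, ["dc", "v"])
```

Restricting the search to candidates inherited from the BL or lower-EL frame is what saves the encoder from testing every mode.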
Abstract:
Apparatus and methods for adaptively detecting motion instability in video. In embodiments, video stabilization is predicated on adaptive detection of motion instability. Adaptive motion instability detection may entail determining an initial motion instability state associated with a plurality of video frames. Subsequent transitions of the instability state may be detected by comparing a first level of instability associated with a first plurality of the frames to a second level of instability associated with a second plurality of the frames. Image stabilization of received video frames may be toggled first based on the initial instability state, and thereafter based on detected changes in the instability state. Output video frames, which may be stabilized or non-stabilized, may then be stored to a memory. In certain embodiments, video motion instability is scored based on a probability distribution of video frame motion jitter values.
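One plausible reading of jitter-distribution scoring is an empirical exceedance probability: the fraction of frames whose jitter value passes a cutoff. The cutoffs below and the toggle rule are illustrative assumptions, not the patented scoring.

```python
# Illustrative sketch of scoring motion instability from per-frame jitter
# values and toggling stabilization on the score. Cutoff values are
# assumptions; the abstract does not specify the distribution model.
import numpy as np

def instability_score(jitter_values, jitter_cutoff=2.0):
    """Empirical probability that per-frame jitter exceeds the cutoff."""
    jitter = np.asarray(jitter_values, dtype=float)
    return float((jitter > jitter_cutoff).mean())

def should_stabilize(jitter_values, score_threshold=0.5):
    """Toggle stabilization on when the instability score is high enough."""
    return instability_score(jitter_values) > score_threshold
```

Re-evaluating the score over successive windows of frames would yield the state transitions the abstract describes, switching stabilization on and off as instability rises and falls.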