Abstract:
In one embodiment, a method includes receiving a prediction unit (PU) for a coding unit (CU) of the video content. The method analyzes the prediction unit to determine a size of the prediction unit. A size of a transform unit is determined from the size of the prediction unit according to a set of rules. The set of rules specifies that the size of the transform unit is linked to the size of the prediction unit and not to a size of the coding unit. The method then outputs the size of the transform unit for use in a transform operation.
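The rule-based derivation described above can be sketched as follows. This is a hypothetical illustration: the rule table and the "largest square transform that fits the PU" policy are assumptions for demonstration, not the actual rules of any codec.

```python
# Illustrative rule set: the TU is the largest supported square transform
# that fits within both dimensions of the PU. Note the CU size never enters
# the calculation -- the TU size depends on the PU alone.
SUPPORTED_TU_SIZES = [32, 16, 8, 4]  # assumed supported square transform sizes

def tu_size_for_pu(pu_width: int, pu_height: int) -> int:
    """Return a transform-unit size based only on the PU dimensions."""
    limit = min(pu_width, pu_height)
    for size in SUPPORTED_TU_SIZES:
        if size <= limit:
            return size
    raise ValueError("PU smaller than smallest supported transform")

# A 16x8 PU maps to an 8x8 transform regardless of the enclosing CU size.
print(tu_size_for_pu(16, 8))  # 8
```

Because the mapping ignores the CU, the same PU shape always yields the same transform size wherever it appears in the coding tree.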
Abstract:
A coding method is provided. The coding may include preparing video compression data based on source pictures utilizing a processor. The preparing may include processing a generated transform unit, including generating a significance map having a significance map array with y-x locations corresponding to those of the transform unit array. The generating may include scanning, utilizing a zigzag scanning pattern, a plurality of significance map elements in the significance map array. The generating may also include determining, utilizing the zigzag scanning pattern, a context model for coding a significance map element of the plurality of significance map elements based on a value associated with at least one coded neighbor significance map element of that element in the significance map array. A decoding method is also provided, including processing of video compression data generated in the coding.
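The zigzag scan and neighbor-based context selection can be sketched as below. The neighbor set (left and above) and the context rule (count of significant coded neighbors) are illustrative assumptions; the actual context derivation is codec-specific. Left and above neighbors always lie on an earlier anti-diagonal, so under a zigzag scan they are guaranteed to be already coded.

```python
def zigzag_positions(n):
    """Yield (y, x) positions of an n x n array in zigzag scan order."""
    for d in range(2 * n - 1):  # walk the anti-diagonals
        coords = [(y, d - y) for y in range(max(0, d - n + 1), min(d, n - 1) + 1)]
        # Alternate diagonal direction to form the zigzag.
        yield from (coords if d % 2 else reversed(coords))

def context_for(sig_map, y, x):
    """Illustrative context model: count of significant coded neighbors."""
    ctx = 0
    if x > 0 and sig_map[y][x - 1]:   # left neighbor, already coded
        ctx += 1
    if y > 0 and sig_map[y - 1][x]:   # above neighbor, already coded
        ctx += 1
    return ctx

sig_map = [[1, 1, 0, 0],
           [1, 0, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
order = list(zigzag_positions(4))          # [(0,0), (0,1), (1,0), (2,0), ...]
contexts = [context_for(sig_map, y, x) for y, x in order]
```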
Abstract:
A method is provided for processing an incoming video signal into a compressed video bitstream. The processing includes determining indexed pathways of blocks in the incoming video signal. The processing also includes determining flexible partitioning of the blocks utilizing partitioning lines, where the partitioning lines are based on index units in the determined indexed pathways. The processing also includes generating PIFP information associated with the determined flexible partitioning and encoding the generated PIFP information in association with the PIFP-encoded video. A method is also provided for processing received PIFP-encoded video utilizing received encoded PIFP information associated with that video.
Abstract:
In various embodiments, a significance map of a matrix of video data coefficients is encoded or decoded using context-based adaptive binary arithmetic coding (CABAC). The significance map is scanned line-by-line along a scanning pattern. Each line may be a vertical, horizontal, or diagonal section of the scanning pattern. Context models for each element processed in a particular line are chosen based on the values of neighboring elements that are not in that line. The neighboring elements may be limited to those contained within one or two other scanning lines. Avoiding reliance on neighbors in the same scanning line facilitates parallel processing.
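The parallelism argument can be made concrete with a sketch. Here the scan lines are anti-diagonals and the context neighbors are left and above, which by construction always lie on earlier diagonals, never on the element's own line; both choices are illustrative assumptions rather than any codec's actual design.

```python
def diagonal_lines(n):
    """Group (y, x) positions of an n x n map into anti-diagonal scan lines."""
    lines = [[] for _ in range(2 * n - 1)]
    for y in range(n):
        for x in range(n):
            lines[y + x].append((y, x))
    return lines

def context_index(sig_map, y, x):
    """Context from left/above neighbors, which are never on the same
    anti-diagonal as (y, x) -- so no intra-line data dependency exists."""
    left = sig_map[y][x - 1] if x > 0 else 0
    above = sig_map[y - 1][x] if y > 0 else 0
    return left + above  # 0, 1, or 2

sig_map = [[1, 0, 1],
           [1, 0, 0],
           [0, 0, 0]]
# Elements within each line are mutually independent, so each inner list
# comprehension could be dispatched to parallel workers.
per_line_contexts = [[context_index(sig_map, y, x) for y, x in line]
                     for line in diagonal_lines(3)]
```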
Abstract:
The invention concerns strains of B. coagulans for lactic acid production and related methods, in which the carbon sources are pentose or hexose, a mixture of both, or agricultural or industrial wastes containing them. According to the invention, the highest amount of L-lactic acid produced from glucose is 173 g/L, the optical purity is over 99%, the yield is up to 0.98, and the productivity is up to 2.4 g/L per hour. The highest amount of L-lactic acid produced from xylose is 195 g/L, the optical purity is over 99%, the yield is up to 0.98, and the productivity is up to 2.7 g/L per hour. The highest amount of L-lactic acid produced from reducing sugars in xylitol byproducts is 106 g/L, the optical purity is over 99%, and the productivity is up to 2.08 g/L per hour. The B. coagulans strains XZL4 (DSM No. 23183) and XZL9 (DSM No. 23184) of the invention can directly utilize various reducing sugars in xylitol byproducts to produce high amounts of L-lactic acid, which improves production efficiency at low cost, and the strains are thus appropriate for industrial production.
Abstract:
A method for determining quantization parameters is provided. The method includes determining one or more first units of video content in a grouping of units and analyzing whether the one or more first units of video content within a region in the grouping of units have coefficients for the video content that are zero. The method then determines whether a quantization parameter for one or more second units of video content different from the one or more first units of video content is to be used to derive the quantization parameter for the one or more first units of video content. When the quantization parameter for the one or more second units of video content is to be used, the quantization parameter for the one or more first units of video content is derived from the quantization parameter for the one or more second units of video content.
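The derivation described above can be sketched as follows. This is a hypothetical illustration: the choice of "previously coded unit in the grouping" as the second unit, and the initial QP value, are assumptions for demonstration.

```python
def derive_qps(units, initial_qp=26):
    """For each unit, use its signaled QP when it has non-zero coefficients;
    otherwise derive its QP from a previously coded unit (assumed rule)."""
    derived = []
    last_qp = initial_qp  # assumed slice-level starting QP
    for unit in units:
        if any(unit["coeffs"]):   # non-zero coefficients: QP is signaled
            last_qp = unit["qp"]
        # All-zero unit: no QP is signaled, so reuse the last coded QP.
        derived.append(last_qp)
    return derived

units = [
    {"coeffs": [3, 0, 1], "qp": 30},
    {"coeffs": [0, 0, 0]},            # all-zero unit: QP must be derived
    {"coeffs": [1, 0, 0], "qp": 28},
]
print(derive_qps(units))  # [30, 30, 28]
```

The point of the derivation is that a unit with all-zero coefficients never needs a QP of its own in the bitstream, saving signaling overhead.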
Abstract:
In one embodiment, a spatial merge mode for a block of video content may be used in merging motion parameters. Spatial merge parameters are inferred and do not require bits, flags, or indexing to be signaled at the encoder or decoder. If the spatial merge mode is determined, the method merges the block of video content with a spatially-located block, where merging shares motion parameters between the spatially-located block and the block of video content.
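The sharing of motion parameters can be sketched as below. The structure names and fields (`MotionParams`, `mv`, `ref_idx`) are illustrative placeholders, not identifiers from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class MotionParams:
    mv: tuple        # motion vector (dx, dy)
    ref_idx: int     # reference picture index

@dataclass
class Block:
    motion: MotionParams = None

def spatial_merge(block: Block, neighbor: Block) -> None:
    """Merge: the block shares the spatial neighbor's motion parameters,
    so nothing about its own motion needs to be transmitted."""
    block.motion = neighbor.motion

left = Block(MotionParams(mv=(4, -2), ref_idx=0))
current = Block()
spatial_merge(current, left)
print(current.motion.mv)  # (4, -2)
```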
Abstract:
In one embodiment, a method receives a unit of video content. The unit of video content is coded in a bi-prediction mode. A motion vector predictor candidate set is determined for a first motion vector for the unit. The method then determines a first motion vector predictor from the motion vector predictor candidate set for the first motion vector and calculates a second motion vector predictor for a second motion vector for the unit of video content. The second motion vector predictor is calculated based on the first motion vector or the first motion vector predictor.
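One way the second predictor could be calculated from the first motion vector is temporal scaling, sketched below. The mirroring rule (scaling by the ratio of temporal distances to the two reference pictures) is an illustrative assumption, not necessarily the calculation used in the embodiment.

```python
def second_mvp_from_first(mv1, dist1, dist2):
    """Predict the second motion vector by scaling the first by the ratio
    of temporal distances; the sign is negative when the two references
    lie on opposite sides of the current picture."""
    scale = dist2 / dist1
    return (round(mv1[0] * scale), round(mv1[1] * scale))

mv1 = (8, -4)  # first motion vector, toward a reference 2 pictures back
mvp2 = second_mvp_from_first(mv1, dist1=2, dist2=-1)  # forward reference
print(mvp2)  # (-4, 2)
```

The benefit is that no candidate set needs to be built or signaled for the second motion vector; it follows deterministically from the first.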
Abstract:
A system is configured to transcode a first MPEG stream to a second MPEG stream. The system includes a first MPEG decoder capable of decoding the first MPEG stream and a second MPEG encoder capable of producing the second MPEG stream. The second MPEG encoder is configured to maintain a decoded picture type of I, P, or B. The second MPEG encoder is also configured to maintain a decoded picture structure of frame or field, identify metadata for each macroblock (MB) of an MB pair of the first MPEG stream, and determine whether to re-encode the MB into the second MPEG stream using one of a frame or a field mode based on the identified metadata. The second MPEG encoder is further configured to re-encode the MB pair into the second MPEG stream using one of the frame or the field mode based on the identified metadata.
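The per-MB-pair decision can be sketched as below. Both the metadata fields (`original_mode`, `vertical_motion`) and the fallback heuristic are illustrative assumptions; the embodiment does not specify this particular rule.

```python
def choose_mb_mode(metadata: dict) -> str:
    """Pick 'frame' or 'field' re-encoding mode for an MB pair using
    metadata recovered while decoding the first stream."""
    # Reuse the original coding decision when available, preserving the
    # first encoder's frame/field choice through the transcode.
    if "original_mode" in metadata:
        return metadata["original_mode"]
    # Otherwise fall back on a motion-based heuristic: strong vertical
    # motion between fields tends to favor field coding.
    return "field" if metadata.get("vertical_motion", 0) > 4 else "frame"

print(choose_mb_mode({"original_mode": "field"}))  # field
print(choose_mb_mode({"vertical_motion": 2}))      # frame
```

Reusing decoded metadata this way avoids repeating the full frame/field mode decision analysis in the second encoder.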
Abstract:
In one embodiment, a method for encoding or decoding video content is provided. The method includes determining a set of interpolation filters for use in interpolating sub-pel pixel values and a mapping between interpolation filters in the set of interpolation filters and different prediction indexes of the video content. A unit of video content is received and a prediction index is determined from among a plurality of prediction indexes that are used to determine a prediction block for the unit of video content. The method then determines an interpolation filter in the set of interpolation filters based on the mapping between the interpolation filter and the prediction index, and uses it to interpolate a sub-pel pixel value for use in a temporal prediction process for the unit of video content.
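The filter-set and mapping can be sketched as below. The tap values are illustrative placeholders, not the actual interpolation filters of any codec, and the dict-based mapping is an assumed representation.

```python
# Assumed set of interpolation filters, keyed by prediction index.
FILTER_SET = {
    0: [-1, 4, -10, 58, 17, -5, 1],   # illustrative 7-tap filter
    1: [-1, 4, -11, 40, 40, -11, 4],  # illustrative half-pel-style filter
    2: [1, -5, 17, 58, -10, 4, -1],   # illustrative mirrored filter
}

def filter_for_prediction_index(pred_idx: int):
    """Select the interpolation filter mapped to a prediction index, so the
    choice of filter needs no separate signaling."""
    return FILTER_SET[pred_idx]

def interpolate(samples, taps):
    """Apply the filter taps to integer-pel samples for a sub-pel value."""
    acc = sum(s * t for s, t in zip(samples, taps))
    return acc / sum(taps)  # normalize by the tap sum

taps = filter_for_prediction_index(1)
print(interpolate([10, 10, 10, 10, 10, 10, 10], taps))  # 10.0 on flat input
```

Because the filter follows from the prediction index already present in the bitstream, the mapping lets encoder and decoder agree on the filter without extra syntax.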