Abstract:
Disclosed is an image coding method, including: determining a coding mode of a coding block; and performing coding on the coding block using multiple coding modes, including performing coding on pixel sample segments in the coding block using one of the multiple coding modes. Further disclosed is an image decoding method, including: parsing bitstream data of a decoding block, and determining decoding modes of pixel sample segments of the decoding block; and performing hybrid decoding on the decoding block using multiple decoding modes, including performing decoding on the pixel sample segments of the decoding block using corresponding decoding modes. The technical scheme described above can fully exploit the characteristics of each coding region in the image to improve image compression performance.
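A minimal sketch of per-segment mode selection under assumptions not stated in the abstract: the candidate modes (a raw mode and a toy run-length mode), the rate-only cost criterion, and all function names are illustrative, not the patented scheme.

```python
# Hypothetical per-segment mode selection: each pixel-sample segment of a
# coding block is coded with whichever candidate mode costs fewer bits.
from typing import Callable, Dict, List, Sequence, Tuple

Segment = Sequence[int]                  # a run of pixel samples
Coder = Callable[[Segment], bytes]

def code_block(segments: List[Segment], modes: Dict[str, Coder]) -> List[Tuple[str, bytes]]:
    coded = []
    for seg in segments:
        # Try every candidate mode and keep the cheapest result (rate-only cost here).
        best_mode, best_bits = min(
            ((name, coder(seg)) for name, coder in modes.items()),
            key=lambda item: len(item[1]),
        )
        coded.append((best_mode, best_bits))
    return coded

def raw_mode(seg: Segment) -> bytes:
    # Toy mode 1: copy the samples verbatim.
    return bytes(seg)

def rle_mode(seg: Segment) -> bytes:
    # Toy mode 2: run-length code runs of identical samples.
    out = bytearray()
    i = 0
    while i < len(seg):
        j = i
        while j < len(seg) and seg[j] == seg[i] and j - i < 255:
            j += 1
        out += bytes((j - i, seg[i]))
        i = j
    return bytes(out)

if __name__ == "__main__":
    block = [[10, 10, 10, 10], [1, 2, 3, 4]]
    print(code_block(block, {"raw": raw_mode, "rle": rle_mode}))
```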
Abstract:
Provided are information interaction methods, devices and a system for a Wireless Body Area Network (WBAN). The method includes: a Hub sends beacon frames to nodes according to a preset superframe structure and receives, according to the preset superframe structure, information sent by the nodes. The preset superframe structure includes three phases. The first phase includes a first beacon period, a first timeslot period and a second timeslot period, the first timeslot period being used by nodes to send an emergency service and the second timeslot period being used by nodes to send an ordinary service. The second phase includes a second beacon period and a third timeslot period, the third timeslot period being used by nodes to complementarily send a service. The third phase includes a fourth timeslot period, the fourth timeslot period being used by nodes to send a sleep application and/or an access application.
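A sketch of the three-phase superframe as a plain data structure; the field names, slot units, and example durations are assumptions for readability and are not drawn from the specification.

```python
# Illustrative representation of the three-phase superframe described above.
# Durations are counted in allocation slots; names are assumptions.
from dataclasses import dataclass

@dataclass
class Phase1:
    beacon_period: int        # first beacon period (Hub -> nodes)
    emergency_slots: int      # first timeslot period: nodes send emergency services
    ordinary_slots: int       # second timeslot period: nodes send ordinary services

@dataclass
class Phase2:
    beacon_period: int        # second beacon period
    makeup_slots: int         # third timeslot period: nodes complementarily send a service

@dataclass
class Phase3:
    request_slots: int        # fourth timeslot period: sleep and/or access applications

@dataclass
class Superframe:
    phase1: Phase1
    phase2: Phase2
    phase3: Phase3

    def length(self) -> int:
        p1, p2, p3 = self.phase1, self.phase2, self.phase3
        return (p1.beacon_period + p1.emergency_slots + p1.ordinary_slots
                + p2.beacon_period + p2.makeup_slots + p3.request_slots)

if __name__ == "__main__":
    sf = Superframe(Phase1(1, 4, 8), Phase2(1, 4), Phase3(2))
    print("superframe length (slots):", sf.length())
```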
Abstract:
Provided are a method and device for coding an image, and a method and device for decoding an image. The method for coding the image includes: coding mode parameters and parameter groups of a coding block are respectively divided, according to a specified rule, into multiple types of coding mode parameters and parameter groups corresponding to the multiple types; Quantization Parameters (QPs) included in the multiple types of coding mode parameters are determined according to a preset target bit rate; a QP of the coding block is determined according to the reconstruction quality for the coding block; a coding mode parameter to be used is selected from the multiple types of coding mode parameters according to the QP of the coding block, a parameter group corresponding to the selected coding mode parameter is set, and a QP difference is calculated; and the coding mode parameter, the parameter group used by the coding block and the QP difference are written into a video bitstream.
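A rough sketch of the QP-driven selection step. The "specified rule" for splitting parameters, the toy rate model, and the closest-QP selection criterion are all assumptions; the abstract does not give them.

```python
# Hypothetical selection of a coding-mode parameter type by the block QP,
# plus the QP difference that would be signalled in the bitstream.
from typing import Dict, List, Tuple

def split_params_by_rule(params: List[str]) -> Dict[str, List[str]]:
    # Stand-in for the "specified rule": lossless-leaning vs. lossy-leaning types.
    return {
        "lossless_like": [p for p in params if p.startswith("ll_")],
        "lossy_like":    [p for p in params if not p.startswith("ll_")],
    }

def qp_for_target_bitrate(target_kbps: float) -> Dict[str, int]:
    # Toy rate model: lower target bitrate -> larger QP for the lossy-leaning type.
    lossy_qp = max(0, min(51, int(51 - target_kbps / 100)))
    return {"lossless_like": 0, "lossy_like": lossy_qp}

def choose_params(block_qp: int, type_qps: Dict[str, int],
                  typed_params: Dict[str, List[str]]) -> Tuple[str, List[str], int]:
    # Pick the type whose QP is closest to the block QP; signal the difference.
    best_type = min(type_qps, key=lambda t: abs(type_qps[t] - block_qp))
    qp_delta = block_qp - type_qps[best_type]
    return best_type, typed_params[best_type], qp_delta

if __name__ == "__main__":
    typed = split_params_by_rule(["ll_pred", "ll_copy", "transform", "quant"])
    qps = qp_for_target_bitrate(target_kbps=2000)
    print(choose_params(block_qp=27, type_qps=qps, typed_params=typed))
```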
Abstract:
Disclosed are a method and device for generating a predicted picture, the method comprising: determining a reference rectangular block of pixels according to parameter information which includes a location of a target rectangular block of pixels and/or depth information of a reference view; mapping the reference rectangular block of pixels to a target view according to the depth information of the reference view to obtain a projection rectangular block of pixels; and acquiring a predicted picture block from the projection rectangular block of pixels. This solves the technical problem in the prior art of relatively strong data dependence caused by simultaneously using the depth picture of the target view and the depth picture of the reference view when generating the predicted picture, and achieves the technical effects of reducing data dependence and improving encoding and decoding efficiency.
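A sketch of the mapping step under a simplifying assumption not stated in the abstract: rectified (parallel) cameras, so that the reference view's depth converts to a purely horizontal disparity (focal length times baseline divided by depth). The hole-filling and all parameter values are placeholders.

```python
# Map a reference rectangular block of pixels to the target view using the
# reference view's depth, assuming rectified cameras (horizontal disparity only).
import numpy as np

def warp_block(ref_block: np.ndarray, ref_depth: np.ndarray,
               focal: float, baseline: float) -> np.ndarray:
    """Forward-map ref_block (H x W) into the target view.

    ref_depth holds per-pixel depth of the reference view for this block;
    the return value is the projection rectangular block of pixels.
    """
    h, w = ref_block.shape
    proj = np.zeros_like(ref_block)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            disparity = int(round(focal * baseline / max(ref_depth[y, x], 1e-3)))
            tx = x - disparity                    # shift toward the target view
            if 0 <= tx < w:
                proj[y, tx] = ref_block[y, x]
                filled[y, tx] = True
    # Trivial hole filling: reuse the left neighbour (real codecs do better).
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                proj[y, x] = proj[y, x - 1]
    return proj

if __name__ == "__main__":
    block = np.arange(16, dtype=np.uint8).reshape(4, 4)
    depth = np.full((4, 4), 100.0)
    print(warp_block(block, depth, focal=500.0, baseline=0.2))
```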
Abstract:
Disclosed are an image encoding and decoding method, an image processing device, and a computer storage medium. The encoding method comprises: when one of a palette replication means and a pixel-string replication means is used to perform replication encoding of a current encoding block, generating a new palette color according to the pixels of said current encoding block; generating the palette of said current encoding block according to said new palette color and/or a palette color candidate set shared by the palette and pixel-string replication means; and performing palette and pixel-string replication encoding using said palette, and generating a video code stream comprising the replication means and replication parameters. The decoding method comprises: parsing the video code stream to obtain at least one of the following information: the replication means, the replication parameters, and the new palette color of the palette and pixel-string replication; generating a palette according to said new palette color and/or the palette color candidate set shared by the palette and pixel-string replication means; and performing palette and pixel-string replication decoding using said palette.
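A rough sketch of building the block palette from new colors derived from the block's pixels plus a candidate set shared between the two replication means. The maximum palette size, the frequency-based selection of new colors, and the new-colors-first ordering are assumptions.

```python
# Derive "new" palette colors from the current block's pixels, then merge them
# with the shared candidate set to form the block palette. Sizes are illustrative.
from collections import Counter
from typing import List, Tuple

Color = Tuple[int, int, int]

def new_palette_colors(block_pixels: List[Color], candidates: List[Color],
                       max_new: int = 4) -> List[Color]:
    # Most frequent block colors that are not already in the shared candidate set.
    counts = Counter(p for p in block_pixels if p not in set(candidates))
    return [c for c, _ in counts.most_common(max_new)]

def build_palette(new_colors: List[Color], candidates: List[Color],
                  max_size: int = 8) -> List[Color]:
    palette: List[Color] = []
    for c in new_colors + candidates:        # new colors first, then shared candidates
        if c not in palette:
            palette.append(c)
        if len(palette) == max_size:
            break
    return palette

if __name__ == "__main__":
    shared = [(0, 0, 0), (255, 255, 255)]
    pixels = [(10, 20, 30)] * 5 + [(0, 0, 0)] * 3 + [(200, 10, 10)] * 2
    fresh = new_palette_colors(pixels, shared)
    print(build_palette(fresh, shared))
```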
Abstract:
Disclosed are methods and devices for coding and decoding depth information, which relate to a Three-Dimensional Video (3DV) coding technology. The coding method includes: arranging all elements in a Depth Lookup Table (DLT) in an ascending order of values, wherein the DLT is a data structure representing depth values by index numbers; coding a value of a first element in the DLT, and writing the coded bits into a bitstream; and, for each element in the DLT other than the first element, coding a differential value between the value of that element and the value of an element with a smaller index number, and writing the coded differential value into the bitstream. A method for decoding depth information and related coding and decoding devices are also provided. By the technical solutions of the disclosure, the efficiency of coding and decoding depth information is improved, and resource occupation during depth information coding is reduced.
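A minimal sketch of this DLT coding idea, assuming each later element is differenced against the immediately preceding (smaller-index) element; the abstract only requires the reference element to have a smaller index, and no bit widths or entropy coding are shown here.

```python
# Sort the DLT values ascending, code the first value directly, then code each
# later element as its difference from the previous element.
from typing import List, Tuple

def encode_dlt(depth_values: List[int]) -> Tuple[int, List[int]]:
    dlt = sorted(set(depth_values))          # ascending order; index = position
    first = dlt[0]
    diffs = [dlt[i] - dlt[i - 1] for i in range(1, len(dlt))]
    return first, diffs                      # these would be written into the bitstream

def decode_dlt(first: int, diffs: List[int]) -> List[int]:
    dlt = [first]
    for d in diffs:
        dlt.append(dlt[-1] + d)              # reverse the differential coding
    return dlt

if __name__ == "__main__":
    first, diffs = encode_dlt([50, 12, 200, 12, 90])
    print(first, diffs)                      # 12 [38, 40, 110]
    print(decode_dlt(first, diffs))          # [12, 50, 90, 200]
```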
Abstract:
Provided is an image encoding method. The method includes: determining an intra prediction mode of an encoding block, and constructing a first prediction value of the encoding block according to the intra prediction mode; determining a filtering parameter according to the first prediction value and an original value of the encoding block, where the filtering parameter includes a filtering indication parameter; in a case where the filtering indication parameter indicates performing filtering processing on the first prediction value, performing the filtering processing on the first prediction value to obtain an intra prediction value; calculating a prediction difference parameter according to a difference between the original value of the encoding block and the intra prediction value; and encoding the intra prediction mode, the filtering parameter, and the prediction difference parameter, and writing encoded bits into a bitstream.
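A minimal encoder-side sketch of this flow. The intra modes, the placeholder smoothing filter, and the SAD criterion for setting the filtering indication parameter are assumptions; the abstract only says the filtering parameter is determined from the first prediction value and the original value.

```python
# Build a first intra prediction, decide whether to filter it, and form the
# prediction difference (residual) that would be entropy-coded with the mode
# and filtering parameter.
import numpy as np

def first_prediction(above: np.ndarray, left: np.ndarray, mode: str, size: int) -> np.ndarray:
    if mode == "DC":
        return np.full((size, size), (above.mean() + left.mean()) / 2.0)
    if mode == "VER":
        return np.tile(above[:size], (size, 1)).astype(float)
    return np.tile(left[:size, None], (1, size)).astype(float)   # "HOR"

def smooth(pred: np.ndarray) -> np.ndarray:
    # Placeholder 3-tap smoothing standing in for the filtering processing.
    out = pred.copy()
    out[:, 1:-1] = (pred[:, :-2] + 2 * pred[:, 1:-1] + pred[:, 2:]) / 4.0
    return out

def encode_block(original: np.ndarray, above: np.ndarray, left: np.ndarray, mode: str):
    pred = first_prediction(above, left, mode, original.shape[0])
    smoothed = smooth(pred)
    # Filtering indication parameter: filter only if it brings the prediction
    # closer to the original (SAD criterion is an assumption).
    filter_flag = np.abs(original - smoothed).sum() < np.abs(original - pred).sum()
    intra_pred = smoothed if filter_flag else pred
    residual = original - intra_pred          # prediction difference parameter
    return mode, bool(filter_flag), residual  # inputs to the entropy coder

if __name__ == "__main__":
    orig = np.arange(16, dtype=float).reshape(4, 4)
    above = np.array([1.0, 2.0, 3.0, 4.0]); left = np.array([0.0, 4.0, 8.0, 12.0])
    print(encode_block(orig, above, left, "VER"))
```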
Abstract:
Devices, systems and methods for deriving intra prediction samples when the intra prediction mode for a coding block is the planar mode are described. In an aspect, a method for video coding includes selecting a first set of reference samples that are reconstructed neighboring samples of a current block, and determining a prediction value for a prediction sample of the current block by interpolating at least one of the first set and a second set of reference samples, where a reference sample of the second set of reference samples is based on a weighted sum of a first sample and a second sample from the first set of reference samples, and where the reference sample is aligned horizontally with the first sample, aligned vertically with the second sample, and positioned on an opposite side of the prediction sample with respect to either the first or the second sample.
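A sketch of a planar-style predictor built along these lines: the second-set references on the right column and bottom row (the far side of the block) are weighted sums of a left-neighbour sample in the same row and a top-neighbour sample in the same column. The weights and the bilinear interpolation form are illustrative, not the exact derivation in the described techniques.

```python
# Planar-like prediction where far-side references are weighted sums of two
# reconstructed neighbours: one horizontally aligned, one vertically aligned.
import numpy as np

def planar_like_predict(top: np.ndarray, left: np.ndarray, size: int,
                        w_h: float = 0.5, w_v: float = 0.5) -> np.ndarray:
    """top/left: reconstructed neighbouring samples of length size (first set)."""
    pred = np.zeros((size, size))
    for y in range(size):
        for x in range(size):
            # Second-set references: aligned horizontally with left[y] and
            # vertically with top[x], placed on the opposite side of (x, y).
            right_ref = w_h * left[y] + (1 - w_h) * top[x]
            bottom_ref = w_v * top[x] + (1 - w_v) * left[y]
            # Interpolate between first-set and second-set samples.
            hor = ((size - 1 - x) * left[y] + (x + 1) * right_ref) / size
            ver = ((size - 1 - y) * top[x] + (y + 1) * bottom_ref) / size
            pred[y, x] = (hor + ver) / 2.0
    return pred

if __name__ == "__main__":
    top = np.array([10.0, 20.0, 30.0, 40.0])
    left = np.array([10.0, 12.0, 14.0, 16.0])
    print(np.round(planar_like_predict(top, left, 4), 1))
```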
Abstract:
Techniques for encoding, decoding, and extracting one or more bitstreams to form one or more sub-bitstreams are described. In one example aspect, a method for video or picture processing includes partitioning a picture into one or more tiles and generating one or more bitstreams using one or more configurations based on the one or more tiles. Generating each of the one or more bitstreams includes partitioning each of the one or more tiles into one or more slices, and performing, for each slice among the one or more slices, a first encoding step to encode a tile identifier in a header of the slice, and a second encoding step to encode, in the header of the slice, a second address of the slice that indicates a location of the slice in the tile.
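A sketch of the two header fields and how they enable sub-bitstream extraction. The fixed 16-bit byte layout is purely an assumption for illustration; real bitstreams use entropy-coded syntax elements.

```python
# Each slice header carries the tile identifier and a second address giving
# the slice's location inside that tile; extraction keeps slices by tile id.
import struct
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Slice:
    tile_id: int        # identifier of the tile the slice belongs to
    slice_address: int  # location of the slice within the tile (e.g. first coding unit)
    payload: bytes

def write_slice_header(s: Slice) -> bytes:
    # Fixed 16-bit fields purely for illustration.
    return struct.pack(">HH", s.tile_id, s.slice_address)

def read_slice_header(data: bytes) -> Slice:
    tile_id, slice_address = struct.unpack(">HH", data[:4])
    return Slice(tile_id, slice_address, data[4:])

def extract_sub_bitstream(slices: List[bytes], wanted_tiles: Set[int]) -> List[bytes]:
    # A sub-bitstream keeps only the slices whose header names a wanted tile.
    return [s for s in slices if read_slice_header(s).tile_id in wanted_tiles]

if __name__ == "__main__":
    coded = [write_slice_header(Slice(t, a, b"")) + b"data"
             for t, a in [(0, 0), (0, 64), (1, 0)]]
    print(len(extract_sub_bitstream(coded, {0})))   # -> 2
```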
Abstract:
The present disclosure provides picture encoding and decoding methods, picture encoding and decoding devices, as well as a decoder and an encoder. The picture decoding method includes: parsing a video bitstream to obtain candidate reshaping parameters from a picture-layer and/or slice-layer data unit of the video bitstream, and determining a reshaping parameter used for reshaping a reconstructed picture according to the obtained candidate reshaping parameters; and reshaping the reconstructed picture by using the reshaping parameter. The reconstructed picture is a picture obtained by decoding the video bitstream before the reshaping is performed. The picture-layer and/or slice-layer data unit includes at least one of the following data units: a picture-layer parameter set and/or a slice-layer parameter set different from a picture parameter set (PPS), a parameter data unit included in an access unit (AU) corresponding to the reconstructed picture, a slice header, and a system-layer picture parameter data unit.
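A sketch of the decoder-side step under assumptions not given in the abstract: the candidates are modeled as piecewise-linear mappings, a slice-layer candidate (if present) overrides the picture-layer one, and the reshaping is applied as a per-sample lookup table.

```python
# Pick a reshaping parameter from the carried candidates and apply it to the
# reconstructed picture as a per-sample lookup.
import numpy as np
from typing import Dict, List

def build_lut(pivots: List[int], mapped: List[int], bit_depth: int = 8) -> np.ndarray:
    # Piecewise-linear mapping from reconstructed sample values to reshaped values.
    lut = np.zeros(1 << bit_depth, dtype=np.uint16)
    for (x0, x1), (y0, y1) in zip(zip(pivots[:-1], pivots[1:]),
                                  zip(mapped[:-1], mapped[1:])):
        xs = np.arange(x0, x1 + 1)
        lut[x0:x1 + 1] = np.round(y0 + (xs - x0) * (y1 - y0) / max(x1 - x0, 1))
    return lut

def select_reshaper(candidates: Dict[str, dict], slice_key: str) -> dict:
    # Slice-layer candidate (if present) overrides the picture-layer one.
    return candidates.get(slice_key, candidates["picture"])

def reshape(recon: np.ndarray, params: dict) -> np.ndarray:
    lut = build_lut(params["pivots"], params["mapped"])
    return lut[recon]

if __name__ == "__main__":
    candidates = {"picture": {"pivots": [0, 128, 255], "mapped": [0, 96, 255]}}
    recon = np.array([[0, 64, 128, 255]], dtype=np.uint8)
    print(reshape(recon, select_reshaper(candidates, "slice0")))
```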