Abstract:
Provided is an image decoding method of decoding an image, the image decoding method including: obtaining at least one of block shape information and split shape information about a first coding unit included in the image, from a bitstream; determining at least one second coding unit included in the first coding unit based on at least one of the block shape information and the split shape information; and decoding the image based on the at least one second coding unit, wherein the block shape information indicates a shape of the first coding unit and the split shape information indicates whether the first coding unit is split into the at least one second coding unit. Also provided is an image encoding method corresponding to the image decoding method, as well as an image encoding apparatus and an image decoding apparatus for respectively performing the image encoding method and the image decoding method.
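The splitting step described above can be sketched as follows. This is an illustrative sketch only, not the patented syntax: the shape names, the set of split modes, and the split rules are assumptions chosen to show how split shape information could map a first coding unit onto second coding units.

```python
def split_coding_unit(x, y, w, h, split_shape):
    """Return the second coding units (x, y, width, height) produced from a
    first coding unit at (x, y) with size w x h.

    The block shape is implied by w and h (square vs. non-square);
    split_shape selects no split, a quad split, or a binary split.
    Mode names and rules are assumptions for illustration.
    """
    if split_shape == "NO_SPLIT":
        return [(x, y, w, h)]
    if split_shape == "QUAD":          # square block into four equal squares
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if split_shape == "BINARY_HOR":    # split across the horizontal axis
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if split_shape == "BINARY_VER":    # split across the vertical axis
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    raise ValueError(split_shape)

# Example: a 64x64 first coding unit quad-split into four 32x32 second coding units.
units = split_coding_unit(0, 0, 64, 64, "QUAD")
```

Decoding would then proceed per second coding unit, recursing while the split shape information for a unit indicates a further split.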
Abstract:
A motion vector encoding apparatus includes: a predictor configured to obtain motion vector predictor candidates of a plurality of predetermined motion vector resolutions by using a spatial candidate block and a temporal candidate block of a current block, and to determine a motion vector predictor of the current block, a motion vector of the current block, and a motion vector resolution of the current block by using the motion vector predictor candidates; and an encoder configured to encode information representing the motion vector predictor of the current block, a residual motion vector between the motion vector of the current block and the motion vector predictor of the current block, and information representing the motion vector resolution of the current block, wherein the plurality of predetermined motion vector resolutions include a resolution of a pixel unit that is greater than a resolution of a one-pel unit.
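One way to see how multiple predetermined resolutions interact with motion vectors is the rounding of a vector to a chosen resolution. The sketch below assumes motion vectors are stored in quarter-pel units and uses a round-half-up convention; both are assumptions for illustration, not the patented method. Resolutions of 2-pel and 4-pel are examples of "a pixel unit that is greater than a one-pel unit."

```python
def round_mv_to_resolution(mv_qpel, resolution_pel):
    """Round a motion vector stored in quarter-pel units to a coarser
    resolution given in pixel units (0.25, 0.5, 1, 2, 4, ...).

    Storage unit and rounding convention are assumptions for illustration.
    """
    step = int(resolution_pel * 4)     # the resolution expressed in quarter-pel steps

    def r(c):
        # round to the nearest multiple of step (half rounds up)
        return ((c + (step >> 1)) // step) * step

    return (r(mv_qpel[0]), r(mv_qpel[1]))

# A quarter-pel vector (13, -6) rounded to 2-pel resolution (a step of 8
# quarter-pel units) becomes (16, -8); quarter-pel resolution leaves it unchanged.
```

Coarser resolutions shorten the residual motion vector to be encoded at the cost of prediction accuracy, which is why signaling a per-block resolution can help compression.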
Abstract:
Provided is a method of decoding an image, the method including determining at least one reference region to be referenced by a target region in the image to which a low-quality coding mode is applied; extracting a certain type of information from the determined at least one reference region; and changing pixel values of the target region, based on the extracted type of information.
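The three steps above (determine reference regions, extract a certain type of information, change target pixel values) can be sketched as follows. The choice of extracted information (mean intensity) and the blending rule are assumptions for illustration; the patent does not specify them here.

```python
def enhance_target_region(target, references):
    """Sketch: adjust the pixel values of a low-quality target region using
    information extracted from its reference regions.

    Here the extracted information is each reference region's mean
    intensity, and the adjustment shifts the target toward the average of
    those means. Both choices are illustrative assumptions.
    """
    ref_mean = sum(sum(r) / len(r) for r in references) / len(references)
    tgt_mean = sum(target) / len(target)
    offset = ref_mean - tgt_mean       # shift target statistics toward the references
    return [p + offset for p in target]

# Example: a target region [10, 20] adjusted using reference regions with
# means 30 and 40 is shifted up by 20 to [30, 40].
```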
Abstract:
A method of encoding a video is provided, the method including: determining a filtering boundary on which deblocking filtering is to be performed based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths indicating a number of times at least one maximum coding unit is spatially split, and a plurality of prediction units and a plurality of transformation units respectively for prediction and transformation of the plurality of coding units; determining a filtering strength at the filtering boundary based on a prediction mode of a coding unit to which pixels adjacent to the filtering boundary belong from among the plurality of coding units, and transformation coefficient values of the pixels adjacent to the filtering boundary; and performing deblocking filtering on the filtering boundary based on the determined filtering strength.
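The filtering-strength decision above, which depends on the prediction mode and the transform coefficients of the blocks adjacent to the boundary, can be sketched along the lines of the familiar H.264/HEVC boundary-strength rule; the exact patented criteria may differ.

```python
def boundary_strength(p_intra, q_intra, p_has_coeff, q_has_coeff):
    """Sketch of selecting a deblocking filtering strength at a boundary
    between blocks P and Q (modeled on H.264/HEVC; an assumption here):
      2 - strongest filtering if either adjacent block is intra-predicted,
      1 - weak filtering if either side has nonzero transform coefficients,
      0 - no filtering otherwise.
    """
    if p_intra or q_intra:
        return 2
    if p_has_coeff or q_has_coeff:
        return 1
    return 0
```

Intra-predicted blocks tend to produce the strongest blocking artifacts, which is why the prediction mode dominates the decision before the coefficient check.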
Abstract:
Provided are a method and apparatus for encoding a video and a method and apparatus for decoding a video. The encoding method includes: splitting a picture of the video into one or more maximum coding units; encoding the picture based on coding units according to depths, which are obtained based on a partition type determined according to the depths, determining coding units according to coded depths with respect to each of the coding units according to depths, and thus determining coding units having a tree structure; and outputting data that is encoded based on the partition type and the coding units having the tree structure, information about the coded depths and an encoding mode, and coding unit structure information indicating a size and a variable depth of a coding unit.
Abstract:
A video decoding method including: extracting, from a bitstream of an encoded video, at least one of information indicating independent parsing of a data unit and information indicating independent decoding of a data unit; extracting encoded video data and information about a coded depth and an encoding mode according to maximum coding units by parsing the bitstream based on the information indicating independent parsing of the data unit; and decoding at least one coding unit according to a coded depth of each maximum coding unit of the encoded video data, based on the information indicating independent decoding of the data unit and the information about the coded depth and the encoding mode according to maximum coding units.
Abstract:
A mobile terminal includes: a communicator configured to communicate with wearable devices; a memory configured to store capability information indicating capabilities of the wearable devices; and a processor configured to determine a first wearable device and a second wearable device among the wearable devices capable of executing a function of the mobile terminal, based on the capability information, the first wearable device being configured to perform a first sub-function for executing the function of the mobile terminal, the second wearable device being configured to perform a second sub-function to be executed together with the first sub-function to execute the function of the mobile terminal, the processor being configured to control the first wearable device to perform the first sub-function and to control the second wearable device to perform the second sub-function.
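The capability-based selection of wearable devices described above can be sketched as a matching step. The capability names, the data layout, and the first-fit matching rule below are assumptions for illustration only.

```python
def assign_sub_functions(required_capabilities, devices):
    """Sketch: pick one wearable device per required sub-function based on
    stored capability information.

    required_capabilities: capabilities the function needs, e.g.
        ["audio_out", "display"] (one per sub-function; an assumption).
    devices: mapping of device name -> set of its capabilities.
    Returns a capability -> device assignment, or None if the function
    cannot be executed by the available devices.
    """
    assignment = {}
    for capability in required_capabilities:
        for name, caps in devices.items():
            # first-fit: use each device for at most one sub-function
            if capability in caps and name not in assignment.values():
                assignment[capability] = name
                break
        else:
            return None
    return assignment

# Example: a call function split into audio output (earbuds) and a caller-ID
# display (watch), each sub-function delegated to a different wearable.
```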
Abstract:
A first display apparatus includes a detector configured to detect a second display apparatus in proximity with the first display apparatus; a transparent display configured to display an image and receive a user input in an overlapping area of the transparent display, wherein the overlapping area is in proximity with the second display apparatus; a communicator configured to form a communication link with the second display apparatus when the second display apparatus is detected, and transmit, to the second display apparatus, a request related to an object corresponding to the user input via the communication link when the user input is received; and a controller configured to obtain data corresponding to the request via the communicator.
Abstract:
Provided are a portable device, a wearable device, and a system including the same for setting a reception of a notification message in the wearable device. The portable device includes: a wireless communicator configured to receive, from a wearable device via wireless communication, information regarding a notification setting; and a controller configured to identify an event and to determine, based on the received information, whether to transmit a notification to the wearable device in response to the identified event.
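The controller's decision above, forwarding a notification only if the wearable's reported setting allows it, can be sketched as below. The settings schema (a global switch plus per-event-type flags with a default) is an assumption for illustration.

```python
def should_forward(event_type, notification_settings):
    """Sketch: decide whether the portable device transmits a notification
    for an identified event to the wearable device, based on the notification
    setting information received from the wearable.

    notification_settings layout is an assumption, e.g.:
        {"enabled": True, "sms": True, "default": False}
    """
    if not notification_settings.get("enabled", True):
        return False                   # wearable disabled all notifications
    # fall back to the default flag when the event type has no explicit setting
    return notification_settings.get(event_type,
                                     notification_settings.get("default", False))
```

Keeping the decision on the portable device, rather than filtering on the wearable, avoids transmitting notifications the wearable would discard anyway.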