Abstract:
Provided is a method of encoding an image, the method including: determining at least one compression unit included in the image; determining a scheme of changing a plurality of samples included in a current compression unit; changing the plurality of samples of the current compression unit based on the determined at least one compression unit and the determined scheme; splitting one of maximum coding units of the image including the changed plurality of samples into at least one coding unit; performing prediction by using at least one prediction unit determined from the at least one coding unit; and encoding the at least one coding unit by performing transformation by using at least one transformation unit determined from the at least one coding unit, wherein the encoding includes generating a bitstream including first information indicating the determined scheme.
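The change-then-encode flow described above can be sketched in a few lines. The abstract does not specify the sample-changing schemes, so the scheme set here (identity, horizontal flip, vertical flip) and the function names are hypothetical; the essential point is that the scheme index is signalled in the bitstream alongside the changed samples.

```python
def apply_scheme(samples, scheme_id):
    """Hypothetical scheme set: 0 = identity, 1 = horizontal flip,
    2 = vertical flip, applied to a 2-D compression unit of samples."""
    if scheme_id == 0:
        return [row[:] for row in samples]
    if scheme_id == 1:
        return [row[::-1] for row in samples]
    if scheme_id == 2:
        return [row[:] for row in samples[::-1]]
    raise ValueError("unknown scheme")

def encode_unit(samples, scheme_id):
    """Change the samples of the compression unit, then emit the scheme
    index as the first information in a toy bitstream, together with
    the changed samples to be coded."""
    changed = apply_scheme(samples, scheme_id)
    return {"scheme_id": scheme_id, "samples": changed}
```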
Abstract:
An example user terminal device includes a display unit, including a display, configured to display a lock screen. The lock screen includes content-representative information representing content included in a message provided by an acquaintance of a user of the user terminal device, and a first user interface element. The device also includes a control unit, including a processor, configured to, when a user input signal is received via the first user interface element, execute an application capable of reproducing the content and reproduce the content.

Abstract:
Disclosed is an inter-layer video decoding method including decoding a first layer image, determining a reference location of the first layer image corresponding to a location of a second layer current block, determining neighboring sample values by using sample values of a boundary of the first layer image when neighboring sample locations of the reference location are outside the boundary of the first layer image, and determining an illumination compensation parameter of the second layer current block based on the neighboring sample values.
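The boundary-clamping step above can be sketched directly: out-of-bounds neighboring sample locations are clamped to the first-layer picture boundary before their values are read. The offset-only parameter derivation at the end (mean difference of neighbors) is an assumption for illustration; the abstract does not fix the exact parameter model.

```python
def clamp(v, lo, hi):
    return max(lo, min(v, hi))

def neighboring_samples(ref_layer, ref_x, ref_y, block_w, block_h):
    """Collect samples above and left of the reference location,
    clamping locations outside the picture to the boundary samples."""
    h, w = len(ref_layer), len(ref_layer[0])
    samples = []
    for dx in range(block_w):                      # row above the block
        x = clamp(ref_x + dx, 0, w - 1)
        y = clamp(ref_y - 1, 0, h - 1)
        samples.append(ref_layer[y][x])
    for dy in range(block_h):                      # column left of the block
        x = clamp(ref_x - 1, 0, w - 1)
        y = clamp(ref_y + dy, 0, h - 1)
        samples.append(ref_layer[y][x])
    return samples

def illumination_offset(ref_neighbors, cur_neighbors):
    """Hypothetical offset-only compensation parameter: the mean
    difference between current-layer and reference-layer neighbors."""
    return (sum(cur_neighbors) - sum(ref_neighbors)) // len(cur_neighbors)
```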
Abstract:
Provided is a multi-layer video decoding method. The multi-layer video decoding method includes: obtaining, from a bitstream, dependency information indicating whether a first layer refers to a second layer; if the dependency information indicates that the first layer refers to the second layer, obtaining a reference picture set of the first layer, based on whether type information of the first layer and type information of the second layer are equal to each other; and decoding encoded data of a current image included in the first layer, based on the reference picture set.
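The dependency check above can be sketched as follows. The data layout (a dependency map and a per-layer type table) and the same-type/cross-type labelling policy are hypothetical; the abstract only fixes that a referenced layer enters the reference picture set when the dependency flag is set, and that how it enters depends on whether the two layers' type information is equal.

```python
def build_reference_layer_set(dependency, layer_types, current_layer):
    """Include a layer in the inter-layer reference set only when the
    dependency information says current_layer refers to it; tag each
    entry by whether the two layers' type information is equal
    (a hypothetical policy standing in for the abstract's rule)."""
    ref_set = []
    for other_layer, depends in dependency.get(current_layer, {}).items():
        if not depends:
            continue
        same_type = layer_types[current_layer] == layer_types[other_layer]
        ref_set.append((other_layer, "same_type" if same_type else "cross_type"))
    return ref_set
```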
Abstract:
Provided is an inter-layer video decoding method. The inter-layer video decoding method includes: determining whether a current block is split into two or more regions by using a depth block corresponding to the current block; generating a merge candidate list including at least one merge candidate for the current block, based on a result of the determination; determining motion information of the current block by using motion information of one of the at least one merge candidate included in the merge candidate list; and decoding the current block by using the determined motion information, wherein the generating of the merge candidate list includes determining whether a view synthesis prediction candidate is available as the merge candidate according to the result of the determination.
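The merge-list gating above can be sketched as follows. The split test here (more than one distinct value in the corresponding depth block) is a simplified stand-in for the patent's partition determination, and the direction of the rule, excluding the view synthesis prediction (VSP) candidate when the block is split, is one plausible policy since the abstract leaves the direction open.

```python
def is_split_by_depth(depth_block):
    """Simplified stand-in for the partition test: the current block
    counts as split into two or more regions when its corresponding
    depth block holds more than one distinct sample value."""
    flat = [v for row in depth_block for v in row]
    return len(set(flat)) > 1

def build_merge_list(spatial_candidates, depth_block):
    """One plausible policy: the VSP candidate is available as a merge
    candidate only when the depth test says the block is NOT split."""
    merge_list = list(spatial_candidates)
    if not is_split_by_depth(depth_block):
        merge_list.append("VSP")
    return merge_list
```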
Abstract:
Provided is an inter-layer video decoding method including: obtaining prediction mode information of a depth image; generating a prediction block of a current block forming the depth image, based on the obtained prediction mode information; and decoding the depth image by using the prediction block, wherein the obtaining of the prediction mode information includes obtaining a first flag, which indicates whether the depth image allows a method of predicting the depth image by splitting blocks forming the depth image into at least two partitions using a wedgelet as a boundary, and a second flag, which indicates whether the depth image allows a method of predicting the depth image by splitting the blocks forming the depth image into at least two partitions using a contour as a boundary.
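The two flags above map straightforwardly onto the set of depth-intra partition modes a decoder may use; a minimal sketch, with hypothetical flag and mode names (the abstract does not name the syntax elements):

```python
def allowed_depth_intra_modes(wedgelet_flag, contour_flag):
    """Map the two bitstream flags to the partition-based prediction
    modes allowed for blocks of this depth image."""
    modes = ["conventional_intra"]
    if wedgelet_flag:   # first flag: straight-line (wedgelet) partition allowed
        modes.append("wedgelet")
    if contour_flag:    # second flag: arbitrary-shape (contour) partition allowed
        modes.append("contour")
    return modes
```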
Abstract:
Provided is an inter-layer video decoding method including: obtaining motion inheritance information from a bitstream; when the motion inheritance information indicates that motion information of a block of a first layer, which corresponds to a current block of a second layer, is usable as motion information of the second layer, determining whether motion information of a sub-block including a pixel at a predetermined location of the block of the first layer from among sub-blocks of the block of the first layer, which correspond to sub-blocks of the current block, is usable; when it is determined that the motion information of the sub-block including the pixel at the predetermined location of the block of the first layer is usable, obtaining motion information of the sub-blocks of the block of the first layer; and determining motion information of the sub-blocks of the current block based on the obtained motion information of the sub-blocks of the block of the first layer.
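The gating-and-inheritance logic above can be sketched as follows. Here the "predetermined location" is taken to be the sub-block at a given index (e.g., the center), and unavailable sub-blocks fall back to that sub-block's motion; the fallback policy and data layout are assumptions for illustration.

```python
def inherit_sub_block_motion(base_sub_blocks, predetermined_index):
    """base_sub_blocks maps sub-block index -> motion info tuple or None.
    Inheritance is gated on the sub-block containing the predetermined
    pixel: if its motion is unusable, inheritance is off for the whole
    block; otherwise each current sub-block copies the motion of its
    co-located first-layer sub-block, falling back to the gated one."""
    gate_mi = base_sub_blocks.get(predetermined_index)
    if gate_mi is None:
        return None  # motion inheritance not usable for this block
    return {idx: (mi if mi is not None else gate_mi)
            for idx, mi in base_sub_blocks.items()}
```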
Abstract:
An inter-view video decoding method may include determining a disparity vector of a current second-view depth block by using a specific sample value selected within a sample value range determined based on a preset bit-depth, detecting a first-view depth block corresponding to the current second-view depth block by using the disparity vector, and reconstructing the current second-view depth block by generating a prediction block of the current second-view depth block based on coding information of the first-view depth block.
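The first step above, picking a specific sample value inside the range allowed by a preset bit-depth and converting it to a disparity, can be sketched as follows. The mid-range sample choice and the linear depth-to-disparity conversion parameters (scale, offset, shift) are hypothetical camera-parameter stand-ins, not values fixed by the abstract.

```python
def disparity_from_depth(bit_depth, scale=4, offset=0, shift=6):
    """Select the mid-range sample value within [0, 2^bit_depth - 1]
    and convert it to a horizontal-only disparity vector via a linear
    depth-to-disparity mapping with hypothetical camera parameters."""
    max_value = (1 << bit_depth) - 1
    sample = 1 << (bit_depth - 1)          # mid-range of the allowed range
    assert 0 <= sample <= max_value
    disparity = (sample * scale + offset) >> shift
    return (disparity, 0)                  # (horizontal, vertical)
```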
Abstract:
Provided is an interlayer video decoding method. The interlayer video decoding method includes: obtaining brightness compensation information indicating whether brightness compensation is performed on a second layer current block; determining whether a candidate of the second layer current block is usable as a merge candidate, based on whether the brightness compensation information indicates that brightness compensation is performed and whether the candidate performs temporal-direction inter prediction; generating a merge candidate list including at least one merge candidate based on a result of the determining; and determining motion information of the second layer current block by using motion information of one of the at least one merge candidate.
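The candidate-filtering rule above can be sketched as follows. The direction of the rule, treating a candidate as unusable when brightness compensation is on and the candidate's motion comes from time-direction inter prediction, is one plausible reading of the abstract; the candidate representation is hypothetical.

```python
def usable_merge_candidates(candidates, brightness_compensation):
    """candidates: list of (name, is_time_direction) pairs.
    When the current block performs brightness compensation, candidates
    whose motion comes from time-direction inter prediction are treated
    as unusable (assumed policy); all others pass through."""
    usable = []
    for name, is_time_direction in candidates:
        if brightness_compensation and is_time_direction:
            continue
        usable.append(name)
    return usable
```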
Abstract:
An electronic device, a wearable device, and a controlling method thereof are provided. The method of controlling an electronic device according to an exemplary embodiment includes receiving a touch command with respect to an object displayed on a display screen, generating an inductive current based on a signal pattern corresponding to the object for which the touch command is received and to the electronic device, and transmitting the generated inductive current to a wearable device through the body of the user who touches the object. Accordingly, an exemplary embodiment may minimize or reduce procedures for user authentication at a terminal or a service.