Abstract:
A method of processing video signals containing multiple images. The method comprises dividing a first image into regions and associating a first plurality of regions of the first image with a first image layer and a second plurality of regions of the first image with a second image layer. Motion estimation is performed on pixel values that are derived from pixel values associated with the first image layer, substantially in isolation from pixel values associated with the second image layer, to generate a first motion vector field, and the first motion vector field is used to perform motion-compensated processing on pixel values that are derived from pixel values associated with the first image layer. The result of the motion-compensated processing is combined with pixel values that are derived from pixel values associated with the second image layer to generate an output image.
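As an illustration only, and not the claimed method, the sketch below runs block-matching motion estimation over the pixels assigned to one layer while ignoring the other layer, then composites the motion-compensated result with the second layer. The block size, search range, SAD cost, and boolean layer mask are assumptions; images are taken to be numpy arrays.

import numpy as np

def estimate_layer_vectors(prev, curr, layer1_mask, block=8, search=4):
    # One vector per block, computed from layer-1 pixels only (layer-2 pixels
    # inside the block are excluded from the matching cost).
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            m = layer1_mask[by:by + block, bx:bx + block]
            if not m.any():
                continue                     # block contains no layer-1 pixels
            ref = curr[by:by + block, bx:bx + block].astype(float)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                        continue
                    sad = np.abs((ref - prev[y0:y0 + block, x0:x0 + block])[m]).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

def compose_output(compensated_layer1, layer2, layer1_mask):
    # Combine the motion-compensated first-layer pixels with the second layer.
    return np.where(layer1_mask, compensated_layer1, layer2)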
Abstract:
Three-dimensional [3D] display device (100) for processing a depth-related signal (122), the 3D display device comprising: an input (120) for obtaining the depth-related signal, the depth-related signal comprising depth-related values distributed within a signal depth-related range (310), the depth-related values enabling 3D rendering of a two-dimensional [2D] image signal (124) on the 3D display device; a processor (140) for mapping the depth-related values to a display depth-related range (320) of the 3D display device to obtain adjusted depth-related values for use in the 3D rendering; and the processor (140) being arranged for, as part of said mapping the depth-related values, adjusting a distribution of the depth-related values within the display depth-related range (320) based on limitation data (126), the limitation data being indicative of a perceptual quality provided to a viewer as a function of a degree of depth established by the 3D display device.
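A minimal sketch of one way such a mapping could look, assuming the limitation data reduces to a single exponent that compresses depths where perceived quality falls off; the ranges, exponent, and linear normalisation are illustrative assumptions, not the claimed mapping.

import numpy as np

def map_depth(depth, signal_range, display_range, quality_exponent):
    # Normalise depth-related values from the signal range to [0, 1], reshape
    # their distribution, then scale into the display's depth-related range.
    lo_s, hi_s = signal_range
    lo_d, hi_d = display_range
    t = (np.asarray(depth, dtype=float) - lo_s) / (hi_s - lo_s)
    t = np.clip(t, 0.0, 1.0) ** quality_exponent   # stand-in for limitation data
    return lo_d + t * (hi_d - lo_d)

# Example: squeeze an 8-bit depth signal into a display that can only render
# a smaller depth range without degrading perceived quality.
adjusted = map_depth([0, 64, 128, 255], (0, 255), (0, 96), quality_exponent=0.7)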
Abstract:
A technique for frame rate conversion that utilizes motion estimation and motion compensated temporal interpolation includes obtaining a first image and a second image, where the first and second images correspond to different instances in time, compressing the second image using multiple motion vectors that result from motion estimation between the first image and the second image to generate a compressed image, and generating an interpolated image using the compressed image.
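Purely as an illustration of the idea, not the claimed technique, the sketch below "compresses" the second image as per-block motion vectors into the first image plus a coarsely quantised residual, then builds a mid-way frame from that compressed representation. Block size, quantisation step, and the half-vector projection are assumptions, and frame dimensions are taken to be multiples of the block size.

import numpy as np

def compress_second_image(img1, img2, vectors, block=8, q=8):
    # Predict img2 from img1 with per-block motion vectors and keep only the
    # vectors plus a lossy residual as the "compressed image".
    h, w = img2.shape
    pred = np.zeros((h, w), dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors[by // block, bx // block]
            y0 = int(np.clip(by + dy, 0, h - block))
            x0 = int(np.clip(bx + dx, 0, w - block))
            pred[by:by + block, bx:bx + block] = img1[y0:y0 + block, x0:x0 + block]
    residual = np.round((np.asarray(img2, dtype=float) - pred) / q)
    return pred, residual

def interpolate_midframe(img1, pred, residual, vectors, block=8, q=8):
    # Decode the second image, then fetch each block half-way along its motion
    # vector from both frames and average them to form the t = 0.5 frame.
    recon = pred + residual * q
    h, w = recon.shape
    out = np.zeros((h, w), dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors[by // block, bx // block]
            y1 = int(np.clip(by + dy // 2, 0, h - block))
            x1 = int(np.clip(bx + dx // 2, 0, w - block))
            y2 = int(np.clip(by - dy // 2, 0, h - block))
            x2 = int(np.clip(bx - dx // 2, 0, w - block))
            out[by:by + block, bx:bx + block] = 0.5 * (
                img1[y1:y1 + block, x1:x1 + block] + recon[y2:y2 + block, x2:x2 + block])
    return out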
Abstract:
An image compression unit for compressing image data (ID) having a plurality of pixels, each with one or more color samples, into a compressed format (CF) is provided. The image compression unit comprises a color lookup table generation unit (311) for generating a color lookup table having one or more colors and for transmitting the color lookup table in the compressed format (CF). The image compression unit further comprises a pixel mode determination unit (312) for determining for at least one pixel of the image data (ID) a pixel mode and for transmitting for the at least one pixel a pixel mode identifier for identifying the pixel mode in the compressed format (CF). A first pixel mode corresponds to the case that the at least one pixel matches one of the one or more colors in the color lookup table and a second pixel mode corresponds to the case that the at least one pixel does not match one of the one or more colors in the color lookup table. The image compression unit further comprises a color lookup table encoding unit (313) for encoding pixels whose pixel mode is the first pixel mode and a predictive encoding unit (314) for predictively encoding pixels whose pixel mode is the second pixel mode.
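The sketch below shows the two pixel modes on a single scanline of RGB tuples, not the unit itself: a small lookup table built from the most frequent colours, an index token for pixels that match it, and a previous-pixel delta for pixels that do not. The table size, token format, and the "most frequent colours" heuristic are assumptions.

from collections import Counter

def compress_scanline(pixels, lut_size=8):
    # pixels: list of (r, g, b) tuples. Returns the colour lookup table and a
    # token stream with one (mode, payload) entry per pixel.
    lut = [c for c, _ in Counter(pixels).most_common(lut_size)]
    index = {c: i for i, c in enumerate(lut)}
    stream, prev = [], (0, 0, 0)
    for p in pixels:
        if p in index:
            stream.append(('LUT', index[p]))     # first pixel mode: table hit
        else:
            delta = tuple(a - b for a, b in zip(p, prev))
            stream.append(('PRED', delta))       # second pixel mode: prediction
        prev = p
    return lut, stream

def decompress_scanline(lut, stream):
    # Inverse of compress_scanline; the two modes round-trip losslessly.
    out, prev = [], (0, 0, 0)
    for mode, payload in stream:
        p = lut[payload] if mode == 'LUT' else tuple(a + b for a, b in zip(prev, payload))
        out.append(p)
        prev = p
    return out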
Abstract:
A device for motion estimation in video image data is provided. The device comprises a motion estimation unit (11, 21) for estimating a current motion vector for an area of a current image by determining a set of temporal and/or spatial candidate motion vectors and selecting a best motion vector from the set of candidate motion vectors. The motion estimation unit (11, 21) is further adapted for substantially doubling one or more of the candidate motion vectors and for including the one or more substantially doubled candidate motion vectors in the set of candidate motion vectors.
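A toy version of candidate-set matching for a single block, assuming the spatial and/or temporal candidates are supplied by the caller; the doubling step simply adds 2x copies of each candidate before the SAD comparison, and block size, cost function, and border handling are illustrative.

import numpy as np

def select_vector(prev, curr, by, bx, candidates, block=8):
    # Extend the candidate set with (substantially) doubled candidates, then
    # keep the candidate with the lowest SAD for the block at (by, bx).
    doubled = [(2 * dy, 2 * dx) for dy, dx in candidates]
    h, w = curr.shape
    ref = np.asarray(curr[by:by + block, bx:bx + block], dtype=float)
    best, best_v = np.inf, (0, 0)
    for dy, dx in list(candidates) + doubled:
        y0, x0 = by + dy, bx + dx
        if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
            continue
        sad = np.abs(ref - prev[y0:y0 + block, x0:x0 + block]).sum()
        if sad < best:
            best, best_v = sad, (dy, dx)
    return best_v

# Synthetic example: the current image is the previous image shifted by (1, 3),
# and that vector is among the candidates, so it wins the SAD comparison.
prev_img = np.random.rand(128, 128)
curr_img = np.roll(prev_img, (-1, -3), axis=(0, 1))
vector = select_vector(prev_img, curr_img, 64, 64, [(0, 0), (1, 3), (-2, 5)])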
Abstract:
A method of identifying a periodic pattern of image repetition within a succession of video images comprising receiving a signal indicative of one or more difference values that represent a degree of difference between images in the succession of video images, forming a sequence of difference values, and processing a subset of the sequence of difference values in order to generate an identifier of the periodic pattern.
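One simple way to realise this, offered only as a sketch: threshold the difference values into a repeat/new bit string and compare the most recent subset against known cadences. The threshold, the example cadence strings, and the rotation-tolerant matching are assumptions.

def identify_cadence(diffs, patterns, threshold):
    # diffs: sequence of inter-image difference values, oldest first.
    # patterns: e.g. {'2:2 pulldown': '01', '3:2 pulldown': '01011'} where '0'
    # marks a repeated image (difference below threshold) -- illustrative only.
    bits = ''.join('0' if d < threshold else '1' for d in diffs)
    for name, pattern in patterns.items():
        period = len(pattern)
        window = bits[-2 * period:]          # recent subset of the sequence
        if len(window) == 2 * period and window in pattern * 4:
            return name                      # matches any phase of the cadence
    return None

print(identify_cadence([2, 90, 3, 95, 88, 1, 93, 2, 96, 91],
                       {'2:2 pulldown': '01', '3:2 pulldown': '01011'},
                       threshold=10))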
Abstract:
A display device (2) for displaying a scene (104) comprising a shared image component (102) and a private image component (106), wherein the display device is adapted to display a plurality of perspectives of the shared image component and a plurality of views of each of the plurality of perspectives such that a multi-view perspective (P1; P2) of the shared image component is visible at each of a plurality of viewing zones, the display device being further adapted to display the private image component such that it is visible at one or more, but not all of the viewing positions.
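A hypothetical sketch of how content might be assigned to views, just to make the shared/private split concrete: every viewing zone receives its own perspective of the shared component, and the private component is added only in selected zones. The zone count, views-per-zone structure, and dictionary representation are assumptions, not the device described above.

def assemble_views(shared_perspectives, private_image, private_zones, views_per_zone=2):
    # shared_perspectives: one rendered perspective of the shared component per
    # viewing zone; private_zones: indices of zones allowed to see the private
    # component. Returns the content shown in each individual view.
    views = []
    for zone, perspective in enumerate(shared_perspectives):
        for v in range(views_per_zone):
            content = {'zone': zone, 'view': v, 'shared': perspective}
            if zone in private_zones:
                content['private'] = private_image   # visible in this zone only
            views.append(content)
    return views

views = assemble_views(['P1', 'P2'], private_image='menu overlay', private_zones={0})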
Abstract:
A cache management policy is provided, comprising a method for writing back to a memory (104) a data element set (122) stored in a cache (110). The method reduces the time some items stay in the cache, and thereby improves the utilization of the cache for some applications, especially video applications. The method comprises determining that each one of the multiple data elements of the data element set has been updated through at least one write request; marking the data element set as a write-back candidate in dependence on said determination; and writing the write-back candidate to the memory.
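A toy model of such a policy, assuming fixed-size element sets and a dict standing in for main memory; the class name, the flush-on-completion trigger, and the absence of real eviction handling are all illustrative assumptions.

class EagerWriteBackCache:
    # Toy write-back policy: once every element of a data element set has been
    # updated by at least one write, the whole set becomes a write-back
    # candidate and is flushed to memory without waiting for eviction.
    def __init__(self, memory, set_size):
        self.memory = memory      # dict standing in for the memory (104)
        self.set_size = set_size  # number of data elements per set
        self.sets = {}            # set_id -> {offset: value} of updated elements

    def write(self, set_id, offset, value):
        elements = self.sets.setdefault(set_id, {})
        elements[offset] = value
        if len(elements) == self.set_size:   # every element updated at least once
            self.write_back(set_id)

    def write_back(self, set_id):
        for offset, value in self.sets.pop(set_id).items():
            self.memory[(set_id, offset)] = value

memory = {}
cache = EagerWriteBackCache(memory, set_size=4)
for offset in range(4):
    cache.write(set_id=7, offset=offset, value=offset * 10)
# memory now holds the full set; it left the cache as soon as it was complete.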