Abstract:
A method and apparatus are provided for reversible, polynomial-based image scaling. The apparatus includes a video scaler for performing image scaling from a first base resolution image to a higher resolution image, and from the higher resolution image to a second base resolution image. The first and the second base resolution images are equal on a pixel-by-pixel basis for an entirety of the first and the second base resolution images. A scaling function used for the image scaling is based on a polynomial function of degree two or higher.
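A minimal 1-D sketch of how such reversibility can be obtained, assuming an interpolating cubic (Catmull-Rom) polynomial and an integer scaling factor of 2; the function names and the factor are illustrative choices, not details taken from the abstract. Because the interpolating polynomial passes exactly through the original samples, taking every second sample of the upscaled signal recovers the base signal pixel for pixel.

```python
import numpy as np

def upscale_1d(row, factor=2):
    """Upscale a 1-D signal by an integer factor with Catmull-Rom cubic
    (degree-3 polynomial) interpolation; original samples are kept exactly."""
    n = len(row)
    out = np.empty(n * factor, dtype=np.float64)
    padded = np.pad(row.astype(np.float64), 2, mode='edge')
    for i in range(n):
        p0, p1, p2, p3 = padded[i + 1], padded[i + 2], padded[i + 3], padded[i + 4]
        out[i * factor] = p1                     # original sample passes through unchanged
        for k in range(1, factor):
            t = k / factor
            out[i * factor + k] = 0.5 * (        # Catmull-Rom cubic polynomial in t
                2 * p1 + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
    return out

def downscale_1d(row, factor=2):
    """Inverse step: keep every `factor`-th sample, recovering the base signal."""
    return row[::factor]

base = np.array([10, 40, 90, 160, 250], dtype=np.float64)
high = upscale_1d(base, 2)
recovered = downscale_1d(high, 2)
assert np.array_equal(base, recovered)           # pixel-by-pixel equality of the two base images
```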
Abstract:
Substantial elimination of errors in the detection and location of overlapping human objects in an image of a playfield is achieved, in accordance with at least one aspect of the invention, by performing a predominantly shape-based analysis of one or more characteristics obtained from a specified portion of the candidate non-playfield object, by positioning a human object model substantially over the specified portion of the candidate non-playfield object in accordance with information based at least in part on information from the shape-based analysis, and by removing an overlapping human object from the portion of the candidate non-playfield object identified by the human object model. In one exemplary embodiment, the human object model is an ellipse whose major and minor axes are variable in relation to one or more characteristics identified from the specified portion of the candidate non-playfield object.
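A rough sketch of the ellipse-model step under stated assumptions: the "specified portion" is taken to be the upper part of the candidate blob, and the ellipse axes are derived from that portion's width; the function name, `top_fraction`, and `axis_ratio` are hypothetical parameters, not taken from the abstract.

```python
import numpy as np

def remove_overlapping_human(mask, top_fraction=0.5, axis_ratio=2.5):
    """Analyse the upper portion of a candidate blob, position an ellipse
    model there, and clear the pixels the ellipse covers (illustrative only)."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return mask
    y0, y1 = ys.min(), ys.max()
    cut = y0 + int((y1 - y0) * top_fraction)         # upper part of the blob
    part_ys, part_xs = np.nonzero(mask[y0:cut + 1, :])
    if len(part_xs) == 0:
        return mask
    # Shape-based characteristics of the portion: centroid and horizontal extent.
    cx = part_xs.mean()
    cy = y0 + part_ys.mean()
    minor = max((part_xs.max() - part_xs.min()) / 2.0, 1.0)   # half-width of the portion
    major = minor * axis_ratio                                 # axes vary with the portion's shape
    # Remove pixels falling inside the positioned ellipse.
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    inside = ((xx - cx) / minor) ** 2 + ((yy - cy) / major) ** 2 <= 1.0
    out = mask.copy()
    out[inside] = 0
    return out
```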
Abstract:
A method and apparatus are disclosed and described for providing a synchronized workstation with two-dimensional and three-dimensional outputs. The apparatus includes a video decoder (315) for decoding picture data. The video decoder includes a data manager (320) for receiving video production commands and managing a video playback of the picture data in at least one of a two-dimensional video output mode and a three-dimensional video output mode responsive to the video production commands. The two-dimensional video output mode and the three-dimensional video output mode are capable of being used independently and simultaneously.
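A small sketch of how a data manager might keep the two output modes independently and simultaneously usable, assuming a flag-based mode set and string production commands; the class, method, and command names are assumptions, not the patent's own API.

```python
from enum import Flag, auto

class OutputMode(Flag):
    # Modes can be combined, reflecting independent and simultaneous use.
    TWO_D = auto()
    THREE_D = auto()

class DataManager:
    """Routes decoded pictures to the active output mode(s) in response to
    production commands (illustrative sketch)."""
    def __init__(self):
        self.active = OutputMode.TWO_D

    def on_production_command(self, command):
        # A command may switch a mode on or off; both may be active at once.
        if command == "enable_3d":
            self.active |= OutputMode.THREE_D
        elif command == "disable_3d":
            self.active &= ~OutputMode.THREE_D
        elif command == "enable_2d":
            self.active |= OutputMode.TWO_D
        elif command == "disable_2d":
            self.active &= ~OutputMode.TWO_D

    def playback(self, picture):
        outputs = []
        if OutputMode.TWO_D in self.active:
            outputs.append(("2D", picture))
        if OutputMode.THREE_D in self.active:
            outputs.append(("3D", picture))
        return outputs
```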
Abstract:
A method and apparatus are disclosed and described for providing bit rate configuration for multi-view video coding. In the video encoder, the method includes encoding image data for at least one picture for at least two joint views of multi-view video content, the at least two joint views including a base view and at least one dependent view. The bit rate configuration for encoding the image data is determined to include an average bit rate and a maximum bit rate for the base view and the average bit rate and the maximum bit rate for the at least two joint views (235, 215, 220).
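A minimal configuration sketch consistent with the abstract: average and maximum bit rates are carried both for the base view alone and for the joint views. The field names, the example values, and the validation rule are assumptions added for illustration.

```python
from dataclasses import dataclass

@dataclass
class BitRateConfig:
    """Bit-rate configuration for multi-view coding (illustrative field names)."""
    base_avg_bps: int    # average bit rate for the base view
    base_max_bps: int    # maximum bit rate for the base view
    joint_avg_bps: int   # average bit rate for the joint (base + dependent) views
    joint_max_bps: int   # maximum bit rate for the joint views

    def validate(self):
        # The joint budget must at least cover the base view's budget.
        assert self.joint_avg_bps >= self.base_avg_bps
        assert self.joint_max_bps >= self.base_max_bps
        assert self.base_max_bps >= self.base_avg_bps
        assert self.joint_max_bps >= self.joint_avg_bps

cfg = BitRateConfig(base_avg_bps=4_000_000, base_max_bps=6_000_000,
                    joint_avg_bps=7_000_000, joint_max_bps=10_000_000)
cfg.validate()
```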
Abstract:
Film grain is simulated in an output image using pre-established blocks of film grain from a pool of pre-established blocks. Successive film grain blocks are selected by matching the average intensity of a block from the pool to the average intensity of a successive one of a set of M×N pixel blocks in an incoming image. Once all of the successive pixel blocks from the image are matched to selected film grain blocks, the selected film grain blocks are “mosaiced”, that is, composited into a larger image mapped to the incoming image.
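A sketch of the block-matching and mosaicking step under stated assumptions: grayscale input, square blocks, and a pool whose blocks each carry an associated average intensity. The names, the 16×16 block size, and the stand-in pool are illustrative, not the patented blending pipeline.

```python
import numpy as np

def mosaic_film_grain(image, grain_pool, pool_intensities, block=16):
    """For every block-sized tile of the image, pick the pre-established grain
    block whose associated average intensity is closest to the tile's mean,
    then composite ("mosaic") the selections into a grain image aligned with
    the input."""
    h, w = image.shape
    grain = np.zeros((h, w), dtype=np.float64)
    pool_intensities = np.asarray(pool_intensities, dtype=np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile_mean = image[y:y + block, x:x + block].mean()
            idx = int(np.argmin(np.abs(pool_intensities - tile_mean)))  # closest-intensity match
            grain[y:y + block, x:x + block] = grain_pool[idx]
    return grain

rng = np.random.default_rng(0)
pool = [rng.normal(0, s, (16, 16)) for s in (1.0, 2.0, 3.0)]   # stand-in pre-established blocks
intensities = [32.0, 128.0, 224.0]                              # average intensity each block targets
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
grain_image = mosaic_film_grain(img, pool, intensities, block=16)
```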
Abstract:
A method of segmenting regions of an image wherein a number of partitions is determined based on a range of an image histogram in a logarithmic luminance domain. Regions are defined by the partitions. A mean value of each region is calculated by K-means clustering wherein the clustering is initialized, data is assigned, and centroids are updated. Anchor points are determined based on the centroids and a weight of each pixel is computed based on the anchor points.
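A sketch of the described pipeline assuming a grayscale luminance input; the specific rule tying the partition count to the histogram range, the Gaussian weighting, and all parameter values are assumptions added for illustration.

```python
import numpy as np

def segment_log_luminance(luma, bins_per_stop=1.0, sigma=0.5):
    """Pick the number of partitions from the log-luminance range, refine the
    partition means with a simple K-means loop, use the centroids as anchor
    points, and give every pixel a weight per anchor (illustrative sketch)."""
    log_l = np.log2(np.maximum(luma, 1e-6))
    dyn_range = log_l.max() - log_l.min()
    k = max(int(np.ceil(dyn_range * bins_per_stop)), 2)   # partitions grow with histogram range
    centroids = np.linspace(log_l.min(), log_l.max(), k)  # initialisation: evenly spread centroids
    data = log_l.ravel()
    for _ in range(20):
        # Assignment step: each pixel goes to its nearest centroid.
        labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
        # Update step: move each centroid to the mean of its region.
        for j in range(k):
            members = data[labels == j]
            if members.size:
                centroids[j] = members.mean()
    anchors = centroids                                   # anchor points from the centroids
    # Per-pixel weights relative to each anchor (Gaussian falloff is an assumption).
    w = np.exp(-0.5 * ((data[:, None] - anchors[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return anchors, w.reshape(*luma.shape, k)
```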
Abstract:
One or more implementations access a digital image containing one or more bands. Adjacent bands of the one or more bands have a difference in color resulting in a contour between the adjacent bands. The one or more implementations apply an algorithm to at least a portion of the digital image for reducing visibility of a contour. The algorithm is based on a value representing the fraction of pixels in a region of the digital image having a particular color value.
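One hedged reading of such an algorithm, assuming the per-region fraction drives a probabilistic dither that breaks up the hard edge between adjacent bands; the window radius, the one-code-value dither rule, and the 8-bit range are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def reduce_contour(channel, radius=8, seed=0):
    """For each pixel, measure the fraction of pixels in its local region
    whose value lies above the pixel's own value, and use that fraction as
    the probability of dithering the pixel up one code value."""
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    out = channel.astype(np.int32).copy()
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            region = channel[y0:y1, x0:x1]
            frac_above = np.mean(region > channel[y, x])   # fraction holding the brighter band's value
            if rng.random() < frac_above:
                out[y, x] += 1
    return np.clip(out, 0, 255).astype(channel.dtype)
```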
Abstract:
In an implementation, a pixel is selected from a target digital image. Multiple candidate pixels, from one or more digital images, are evaluated based on values of the multiple candidate pixels. For the selected pixel, a corresponding set of pixels is determined from the multiple candidate pixels based on the evaluations of the multiple candidate pixels and on whether a predetermined threshold number of pixels have been included in the corresponding set. Further for the selected pixel, a substitute value is determined based on the values of the pixels in the corresponding set of pixels. Various implementations described provide adaptive pixel-based spatio-temporal filtering of images or video to reduce film grain or noise. Implementations may achieve an “even” amount of noise reduction at each pixel while preserving as much picture detail as possible by, for example, averaging each pixel with a constant number, N, of temporally and/or spatially correlated pixels.
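A per-pixel sketch of the spatio-temporal gathering described above, assuming closeness in value is the evaluation criterion and a fixed target count of pixels per set; the parameter values and search pattern are illustrative choices, not the implementation's own.

```python
import numpy as np

def filter_pixel(frames, t, y, x, n_required=8, radius=1, threshold=10):
    """Evaluate spatially and temporally neighbouring candidate pixels against
    the target pixel, keep those whose values are close (within `threshold`),
    stop once `n_required` pixels are collected, and average them with the
    target (illustrative sketch)."""
    target = float(frames[t][y, x])
    candidates = []
    for dt in (0, -1, 1, -2, 2):                     # current frame first, then neighbours
        ft = t + dt
        if ft < 0 or ft >= len(frames):
            continue
        frame = frames[ft]
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dt == 0 and dy == 0 and dx == 0:
                    continue                         # skip the target pixel itself
                yy, xx = y + dy, x + dx
                if 0 <= yy < frame.shape[0] and 0 <= xx < frame.shape[1]:
                    candidates.append(float(frame[yy, xx]))
    # Evaluation: closeness in value to the target pixel.
    candidates.sort(key=lambda v: abs(v - target))
    chosen = [v for v in candidates if abs(v - target) <= threshold][:n_required]
    # Substitute value: average of the target and its corresponding set.
    return (target + sum(chosen)) / (1 + len(chosen))
```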
Abstract:
There are provided methods and apparatus for film grain SEI message insertion for bit-accurate simulation in a video system. A method for simulating film grain in an ordered sequence includes the steps of providing film grain supplemental information corresponding to a plurality of intra coded pictures, and providing additional film grain supplemental information corresponding to inter coded pictures between consecutive intra coded pictures, in decode order. The inter coded pictures are selected based upon display order.
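A sketch of one possible selection rule consistent with the abstract: film grain SEI accompanies every intra coded picture, and additional SEI accompanies inter coded pictures between consecutive intras chosen by display order. The fixed stride and the data-class fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Picture:
    decode_order: int
    display_order: int
    is_intra: bool

def pictures_needing_sei(sequence, inter_step=4):
    """Select every intra picture plus, between consecutive intras (in decode
    order), inter pictures picked by display order at a fixed stride
    (illustrative rule)."""
    seq = sorted(sequence, key=lambda p: p.decode_order)
    selected = [p for p in seq if p.is_intra]
    intra_positions = [i for i, p in enumerate(seq) if p.is_intra]
    for a, b in zip(intra_positions, intra_positions[1:]):
        inters = sorted(seq[a + 1:b], key=lambda p: p.display_order)
        selected.extend(inters[::inter_step])        # pick inter pictures by display order
    return sorted(selected, key=lambda p: p.decode_order)
```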
Abstract:
The simulation of film grain in a video image occurs by first creating a block (i.e., a matrix array) of transformed coefficients for a set of cut frequencies fHL, fVL, fHH and fVH associated with a desired grain pattern. (The cut frequencies fHL, fVL, fHH and fVH represent cut-off frequencies, in two dimensions, of a filter that characterizes the desired film grain pattern). The block of transformed coefficients undergoes an inverse transform to yield a bit-accurate film grain sample, and the bit-accurate sample undergoes scaling to enable blending with a video signal to simulate film grain in the signal.
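A sketch under stated assumptions: the coefficient band bounded by the cut frequencies (fHL..fHH horizontally, fVL..fVH vertically) is filled with Gaussian noise, inverse transformed with an orthonormal DCT, and the resulting sample is scaled and blended additively. The block size, noise model, and blend rule are illustrative choices, not the bit-accurate procedure itself.

```python
import numpy as np

def idct2(coeffs):
    """Orthonormal 2-D inverse DCT-II built from the DCT basis matrix."""
    n = coeffs.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    basis[0, :] = np.sqrt(1.0 / n)
    return basis.T @ coeffs @ basis

def film_grain_block(f_hl, f_hh, f_vl, f_vh, size=16, sigma=1.0, seed=0):
    """Fill only the coefficient band delimited by the cut frequencies with
    Gaussian noise, then inverse-transform to obtain a grain sample."""
    rng = np.random.default_rng(seed)
    coeffs = np.zeros((size, size))
    coeffs[f_vl:f_vh + 1, f_hl:f_hh + 1] = rng.normal(
        0.0, sigma, (f_vh - f_vl + 1, f_hh - f_hl + 1))
    return idct2(coeffs)

def blend(frame, grain, scale=8.0):
    """Scale the grain sample and blend it additively with the video signal."""
    return np.clip(frame.astype(np.float64) + scale * grain, 0, 255).astype(frame.dtype)

grain = film_grain_block(f_hl=2, f_hh=9, f_vl=2, f_vh=9, size=16)
frame_block = np.full((16, 16), 128, dtype=np.uint8)
grainy = blend(frame_block, grain)
```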