Abstract:
A method includes selecting a target pixel and comparing a value of the target pixel with a respective value of each of a plurality of pixels located in an area that includes the target pixel. Further, for each pixel of the plurality of pixels that has a value different by at least a threshold amount from the value of the target pixel, the value of such pixel is replaced by the value of the target pixel. A filter function is then applied to a set of pixel values that includes the value of the target pixel and the current values, after the selective replacement step, of the plurality of pixels.
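As a rough illustration of this flow, the following sketch assumes an 8-bit grayscale image held in a NumPy array, a square window around an interior target pixel, and a simple mean as the filter function; the window radius and threshold are placeholders rather than details taken from the abstract.

```python
import numpy as np

def conditional_replace_filter(image, row, col, radius=1, threshold=20):
    """Sketch of the described method for one (interior) target pixel.

    Pixels in the (2*radius+1)^2 window whose values differ from the
    target by at least `threshold` are replaced by the target value
    before a simple mean filter is applied.  The window size, threshold,
    and mean filter are illustrative choices."""
    target = image[row, col]
    window = image[row - radius:row + radius + 1,
                   col - radius:col + radius + 1].astype(np.int32)

    # Selective replacement: outlier pixels take on the target's value.
    window[np.abs(window - int(target)) >= threshold] = target

    # Filter function applied to the target value plus the (possibly
    # replaced) neighborhood values; a mean filter stands in here.
    return window.mean()
```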
Abstract:
A system, apparatus, method and article to filter media signals are described. The apparatus may include a media processor. The media processor may include an image signal processor having multiple processing elements to process a pixel matrix to determine a set of filter support pixels for a target pixel, select reference pixels from the filter support pixels using a complexity value, and filter noise from the target pixel using the reference pixels. Other embodiments are described and claimed.
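The complexity measure and the final filter are not specified in the abstract, so the sketch below substitutes illustrative choices: a small pixel matrix centered on the target as the filter support, absolute deviation from the target pixel as the complexity value, and an average of the selected reference pixels as the noise filter.

```python
import numpy as np

def filter_target_pixel(patch, complexity_threshold=15.0):
    """Hypothetical sketch: `patch` is the pixel matrix around the target
    (target at the center).  All patch pixels serve as filter support;
    reference pixels are those kept after a complexity test, and the
    target is filtered as their average."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]

    # Filter support pixels: the whole neighborhood of the target.
    support = patch.astype(np.float32)

    # Complexity value: here, deviation from the target pixel; pixels
    # below the threshold are selected as reference pixels.
    complexity = np.abs(support - float(center))
    reference = support[complexity <= complexity_threshold]

    # Filter noise from the target using only the reference pixels.
    return reference.mean()
```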
Abstract:
A method includes making a first determination as to whether a current pixel has a value which reflects a mosquito noise artifact, and determining whether to apply a filtering process at the current pixel based on a result of the first determination. In addition, or alternatively, a method includes making a second determination as to whether a current pixel has a value which reflects a ringing artifact, and determining whether to apply a filtering process at the current pixel based on a result of the second determination.
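The determinations themselves are not spelled out in the abstract, so the sketch below stands in crude heuristics: a strong edge somewhere in the surrounding block combined with low activity immediately around the current pixel is taken as evidence of a mosquito-noise or ringing artifact, and only then is a smoothing filter applied. Both tests and all thresholds are assumptions.

```python
import numpy as np

def maybe_filter(block, edge_threshold=60, flat_threshold=8):
    """Illustrative decision logic only.  `block` is a small (e.g. 5x5)
    neighborhood around the current pixel, which sits at its center."""
    center = block.shape[0] // 2
    ints = block.astype(np.int32)

    # Crude edge strength: largest horizontal/vertical jump in the block.
    grad = max(np.abs(np.diff(ints, axis=0)).max(),
               np.abs(np.diff(ints, axis=1)).max())

    # Local activity right around the current pixel.
    local = block[center - 1:center + 2, center - 1:center + 2]
    activity = local.std()

    # Mosquito noise and ringing tend to appear in otherwise smooth areas
    # next to strong edges; only then is the filtering process applied.
    if grad > edge_threshold and activity < flat_threshold:
        return local.mean()           # apply the filtering process
    return block[center, center]      # leave the pixel untouched
```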
Abstract:
A system, apparatus, method and article to filter media signals are described. The apparatus may include a media processor. The media processor may include an image signal processor having multiple processing elements to determine a level of noise for an image using an internal spatial region of said image, select filter parameters based on the level of noise, and filter the image using the filter parameters. Other embodiments are described and claimed.
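As one way such a pipeline might look, the sketch below estimates the noise level as the standard deviation of a high-pass residual computed over an interior region of the frame, maps that estimate to a Gaussian smoothing strength, and filters the whole image; the estimator, the margin, and the sigma mapping are all illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_with_estimated_noise(image, border=16):
    """Sketch under assumptions: noise is measured only inside an
    interior region (excluding a `border` margin), and the selected
    filter parameter is a Gaussian sigma scaled from that estimate."""
    interior = image[border:-border, border:-border].astype(np.float32)

    # Noise estimate from the internal spatial region only: the residual
    # after a light blur approximates the high-frequency noise.
    residual = interior - gaussian_filter(interior, sigma=1.0)
    noise_level = residual.std()

    # Select filter parameters based on the measured noise level.
    sigma = np.clip(noise_level / 10.0, 0.5, 3.0)

    # Filter the image using the selected parameters.
    return gaussian_filter(image.astype(np.float32), sigma=float(sigma))
```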
Abstract:
A lighting apparatus is disclosed. The lighting apparatus comprises a casing, at least one light source and a microstructure cover. The light source is disposed on one side of the casing. The microstructure cover is mounted on the casing opposite to the reflective face thereof. The microstructure cover has a plurality of guiding micro-structures to guide light and a plurality of dispersing micro-structures to disperse light.
Abstract:
Adaptive filtering may be used to increase the quality of tone mapped, baseline layer encoded information. As a result, scalable video codecs may be implemented with improved picture quality in some embodiments.
Abstract:
A video system includes an analyzer and a bit depth predictor. The analyzer receives a first coded video signal, which is indicative of first values for pixels. The first values are associated with a first bit depth. For each pixel, the analyzer analyzes the first values of the pixels located in a neighborhood that contains that pixel. The bit depth predictor, based at least in part on the analysis, generates a second coded video signal that is indicative of second values for the pixels. The second values are associated with a second bit depth that is different from the first bit depth.
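A hypothetical sketch of such a predictor follows, assuming an 8-bit input frame predicted up to 10 bits: each pixel's prediction blends the pixel with the mean of its neighborhood before rescaling, which stands in for the per-pixel neighborhood analysis described above. The bit depths, window size, and blend weights are assumptions.

```python
import numpy as np

def predict_higher_bit_depth(frame8, radius=1):
    """Hypothetical sketch: predicts 10-bit pixel values from an 8-bit
    frame using a per-pixel neighborhood analysis."""
    frame = frame8.astype(np.float32)
    h, w = frame.shape
    predicted = np.empty_like(frame)

    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            local_mean = frame[y0:y1, x0:x1].mean()

            # Blend the pixel with its neighborhood mean, then scale the
            # result from the 8-bit range to the 10-bit range.
            smoothed = 0.75 * frame[y, x] + 0.25 * local_mean
            predicted[y, x] = smoothed * (1023.0 / 255.0)

    return np.clip(np.round(predicted), 0, 1023).astype(np.uint16)
```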
Abstract:
In one embodiment, an apparatus and method for an angular-directed spatial deinterlacer are disclosed. In one embodiment, the method comprises calculating a cost measure for each of multiple angle candidates for a target pixel block to be deinterlaced in a spatial-only domain, determining a horizontal angle measure for the target pixel block, establishing a global minimum angle from the multiple angle candidates by determining the lowest cost measure from the multiple angle candidates, establishing a local minimum angle from the multiple angle candidates by sifting through the angle candidates in a hierarchical manner, and filtering the global minimum angle and the local minimum angle to create a value for interpolating the target pixel block for deinterlacing. Other embodiments are also described.
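The sketch below illustrates the general shape of such a decision for one missing pixel, assuming a sum of absolute differences over short segments as the cost measure, a coarse near-vertical search standing in for the hierarchical sift, and a simple average of the two directional estimates as the final filtering; none of these specifics come from the abstract.

```python
import numpy as np

def deinterlace_pixel(above, below, x,
                      offsets=(-3, -2, -1, 0, 1, 2, 3), seg=2):
    """Illustrative sketch only.  `above` and `below` are the existing
    lines bracketing the missing line, `x` is the column to interpolate
    (assumed far enough from the borders), and each offset is an angle
    candidate expressed as a horizontal shift."""
    def cost(d):
        # SAD between a short segment of the upper line shifted by +d
        # and of the lower line shifted by -d.
        a = above[x + d - seg:x + d + seg + 1].astype(np.int32)
        b = below[x - d - seg:x - d + seg + 1].astype(np.int32)
        return np.abs(a - b).sum()

    costs = {d: cost(d) for d in offsets}

    # Global minimum angle: lowest cost over all candidates.
    d_global = min(costs, key=costs.get)

    # Local minimum angle: restrict to near-vertical candidates
    # (a crude stand-in for the hierarchical sift).
    d_local = min((d for d in offsets if abs(d) <= 1), key=lambda d: costs[d])

    def interp(d):
        return 0.5 * (float(above[x + d]) + float(below[x - d]))

    # Filter (blend) the two directional estimates into the output value.
    return 0.5 * (interp(d_global) + interp(d_local))
```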
Abstract:
A method of encoding a video sequence including a sequence of video images includes comparing elements of a portion of a first video image with elements of a portion of a second video image to generate respective intensity difference values for the element comparisons. Then, a first value is assigned to the intensity difference values that are at least above a visually perceptible threshold value and a second value is assigned to the intensity difference values that are not at least above the visually perceptible threshold value. Next, the method includes dividing the portion of the first video image into sub-portions and summing the first and second values associated with each corresponding sub-portion to generate respective sums. If a respective sum is at least greater than a decision value, a variable associated with that sub-portion is set to a first value. If a respective sum is not at least greater than the decision value, the variable associated with that sub-portion is set to a second value. The values associated with the variables are then added. Depending on the result of the addition, the portion of the first video image is either motion compensated or not.
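A condensed sketch of this decision flow follows, with assumed numbers throughout: a 16x16 macroblock split into four 8x8 sub-portions, a fixed just-noticeable-difference threshold, and small decision values chosen only for illustration.

```python
import numpy as np

def should_motion_compensate(curr_mb, prev_mb, jnd=4,
                             block_decision=8, mb_decision=2):
    """Sketch of the decision flow for one 16x16 macroblock; all
    parameter values are illustrative, not taken from the abstract."""
    # Per-pixel intensity differences, mapped to 1 (visible) or 0 (not).
    diff = np.abs(curr_mb.astype(np.int32) - prev_mb.astype(np.int32))
    visible = (diff > jnd).astype(np.int32)

    # Sum the 1/0 values inside each 8x8 sub-portion and set that
    # sub-portion's variable against the decision value.
    flags = 0
    for by in (0, 8):
        for bx in (0, 8):
            sub_sum = visible[by:by + 8, bx:bx + 8].sum()
            flags += 1 if sub_sum > block_decision else 0

    # Add the sub-portion variables and decide whether to motion
    # compensate this portion of the image.
    return flags >= mb_decision
```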
Abstract:
A method of encoding a video sequence including a sequence of video images includes first comparing elements of a portion of a first video image (e.g., pixels of a macroblock of a current frame) with elements of a portion of a second video image (e.g., corresponding pixels of a macroblock of a previous frame) to generate respective intensity difference values for the element comparisons. Then, a first value (e.g., one) is assigned to the intensity difference values that are above a visually perceptible threshold value and a second value (e.g., zero) is assigned to the intensity difference values that are at or below the visually perceptible threshold value. Next, the method includes summing the first and second values to generate a sum. If the sum is greater than a predetermined decision value, the portion of the first video image is encoded (e.g., motion compensated). The method is fully compatible with, and thus may be implemented within, video standards such as, for example, H.261, H.263, Motion-JPEG, MPEG-1, and MPEG-2.
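The simpler variant might be sketched as follows, again with an assumed perceptibility threshold and decision value.

```python
import numpy as np

def encode_macroblock(curr_mb, prev_mb, jnd=4, decision=32):
    """Minimal sketch: pixel differences between the current and previous
    macroblocks are binarized against a visually perceptible threshold
    and the ones are counted; the block is motion compensated only if
    the count exceeds the decision value.  Both values are illustrative."""
    diff = np.abs(curr_mb.astype(np.int32) - prev_mb.astype(np.int32))
    ones = (diff > jnd).sum()          # first value = 1, second value = 0
    return ones > decision             # True -> encode / motion compensate
```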