Abstract:
The present invention relates to a method and apparatus for detecting and tracking vehicles. One embodiment of a system for detecting and tracking an object (e.g., vehicle) in a field of view includes a moving object indication stage for detecting a candidate object in a series of input video frames depicting the field of view and a track association stage that uses a joint probabilistic graph matching framework to associate an existing track with the candidate object.
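For illustration only (not the patented joint probabilistic graph matching framework), the sketch below associates existing tracks with candidate detections by solving a bipartite assignment over a Euclidean cost matrix; the gating distance and cost definition are assumptions.

```python
# Illustrative sketch only: track-to-detection association posed as a bipartite
# assignment over a Euclidean cost matrix. The gating distance and the cost
# definition are assumptions, not the patented graph matching formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_positions, detection_positions, gate=50.0):
    """Return (track_idx, detection_idx) pairs whose cost is within the gate."""
    tracks = np.asarray(track_positions, dtype=float)      # (T, 2) predicted track positions
    dets = np.asarray(detection_positions, dtype=float)    # (D, 2) candidate detections
    cost = np.linalg.norm(tracks[:, None, :] - dets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)               # Hungarian assignment
    # Pairs beyond the gate are rejected (likely new objects or lost tracks).
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Two existing tracks, three candidates from the moving object indication stage.
print(associate([(10, 10), (100, 40)], [(12, 11), (300, 300), (98, 42)]))  # [(0, 0), (1, 2)]
```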
Abstract:
The present invention relates to a method and system for creating a strong classifier based on motion patterns wherein the strong classifier may be used to determine an action being performed by a body in motion. When creating the strong classifier, action classification is performed by measuring similarities between features within motion patterns. Embodiments of the present invention may utilize candidate part-based action sets and training samples to train one or more weak classifiers that are then used to create a strong classifier.
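As an illustration, the sketch below combines weak classifiers into a strong classifier in the AdaBoost style; generic numeric features and single-feature threshold stumps stand in for the part-based motion-pattern similarity measurements described in the abstract.

```python
# Illustrative AdaBoost-style sketch of combining weak classifiers into a strong one.
# Generic numeric features stand in for the motion-pattern similarity measurements
# of the abstract; each weak classifier is a threshold on a single feature.
import numpy as np

def train_strong_classifier(X, y, rounds=10):
    """X: (n, d) features; y: labels in {-1, +1}. Returns (feature, thresh, polarity, weight) tuples."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, d = X.shape
    w = np.full(n, 1.0 / n)                          # per-sample weights
    strong = []
    for _ in range(rounds):
        best = None
        for j in range(d):                           # exhaustively pick the best threshold stump
            for t in np.unique(X[:, j]):
                for p in (1, -1):
                    pred = np.where(p * X[:, j] < p * t, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, t, p, pred)
        err, j, t, p, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))   # weak classifier weight
        w = w * np.exp(-alpha * y * pred)                   # emphasize misclassified samples
        w /= w.sum()
        strong.append((j, t, p, alpha))
    return strong

def predict(strong, X):
    X = np.asarray(X, dtype=float)
    score = sum(a * np.where(p * X[:, j] < p * t, 1, -1) for j, t, p, a in strong)
    return np.sign(score)

X = [[1.0], [2.0], [3.0], [4.0]]
y = [-1, -1, 1, 1]
print(predict(train_strong_classifier(X, y, rounds=3), X))   # [-1. -1.  1.  1.]
```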
Abstract:
A system and method of compressing a video signal can include the steps of: receiving a video signal, the video signal including frames; analyzing, for each frame, the video signal on a macroblock-by-macroblock level; determining whether to downsample a macroblock residual for each of the macroblocks; selectively downsampling a macroblock residual for some of the macroblocks; and coding the macroblocks. A system and method of decompressing a video signal can include the steps of receiving a compressed video signal, the video signal including frames; analyzing, for each frame, the video signal on a macroblock-by-macroblock level; determining whether to upsample a macroblock residual for each of the macroblocks; selectively upsampling a macroblock residual for some of the macroblocks; and decoding the macroblocks.
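A minimal sketch of the per-macroblock downsampling decision, assuming 16x16 residual blocks and a simple "energy lost by downsampling" criterion; the transform, quantization, and entropy coding of an actual codec are omitted.

```python
# Minimal sketch of the per-macroblock decision, assuming 16x16 residual blocks and a
# simple "energy lost by downsampling" criterion; the transform, quantization, and
# entropy coding of an actual codec are omitted.
import numpy as np

def downsample_2x(block):
    """Average 2x2 neighborhoods: 16x16 -> 8x8."""
    return block.reshape(8, 2, 8, 2).mean(axis=(1, 3))

def upsample_2x(block):
    """Nearest-neighbor expansion: 8x8 -> 16x16 (a decoder would typically interpolate)."""
    return np.kron(block, np.ones((2, 2)))

def process_residual(residual, energy_threshold=100.0):
    """Downsample the macroblock residual only when little detail would be lost."""
    lost = residual - upsample_2x(downsample_2x(residual))   # detail removed by downsampling
    if np.sum(lost ** 2) < energy_threshold:
        return downsample_2x(residual), True                 # code at reduced resolution
    return residual, False                                   # code at full resolution

smooth = np.full((16, 16), 3.0)                              # low-detail residual
noisy = np.random.default_rng(0).normal(0, 5, (16, 16))      # high-detail residual
print(process_residual(smooth)[1], process_residual(noisy)[1])   # True False
```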
Abstract:
A method and system for creating a histogram of oriented occurrences (HO2) is disclosed. A plurality of entities in at least one image are detected and tracked. One of the plurality of entities is designated as a reference entity. A local 2-dimensional ground plane coordinate system centered on and oriented with respect to the reference entity is defined. The 2-dimensional ground plane is partitioned into a plurality of non-overlapping bins, the bins forming a histogram, with each bin tracking a number of occurrences of an entity class. An occurrence of at least one other entity of the plurality of entities located in the at least one image may be associated with one of the plurality of non-overlapping bins. The numbers of occurrences of entities of at least one entity class in at least one bin may be concatenated into a vector to define an HO2 feature.
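A minimal sketch of assembling an HO2-style feature, assuming ground-plane positions are already known; the bin layout (4 range rings by 8 angular sectors) and the class list are illustrative assumptions.

```python
# Minimal sketch of an HO2-style feature, assuming ground-plane positions are already
# known. The bin layout (4 range rings x 8 angular sectors) and the class list are
# illustrative assumptions.
import numpy as np

def ho2_feature(ref_pos, ref_heading, entities, classes=("person", "vehicle"),
                range_edges=(0, 5, 15, 40, 100), n_angles=8):
    """entities: iterable of (class_name, x, y) in world ground-plane coordinates."""
    n_ranges = len(range_edges) - 1
    hist = np.zeros((len(classes), n_ranges, n_angles))
    c, s = np.cos(-ref_heading), np.sin(-ref_heading)
    for cls, x, y in entities:
        if cls not in classes:
            continue
        # Express the entity in the reference entity's local frame (translate, then rotate).
        dx, dy = x - ref_pos[0], y - ref_pos[1]
        lx, ly = c * dx - s * dy, s * dx + c * dy
        r, theta = np.hypot(lx, ly), np.arctan2(ly, lx) % (2 * np.pi)
        rb = np.searchsorted(range_edges, r, side="right") - 1
        if 0 <= rb < n_ranges:
            ab = int(theta / (2 * np.pi / n_angles)) % n_angles
            hist[classes.index(cls), rb, ab] += 1             # count the occurrence in its bin
    return hist.ravel()                                       # bin counts stacked into the HO2 vector

feat = ho2_feature((0, 0), np.pi / 2, [("person", 2, 3), ("vehicle", -20, 5)])
print(feat.shape, feat.sum())   # (64,) 2.0
```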
Abstract:
An adaptive image acquisition system and method that generates a virtual view of a surveillance scene for a user (operator) who operates the system. By viewing the virtual view, the user controls the sensors that create it. The sensors comprise at least one first sensor having a higher resolution than at least one second sensor. Images from the second sensor are processed to create an image mosaic that is overlaid with images from the higher-resolution first sensor. In one embodiment of the invention, the first sensor is moved using Saccade motion. In another embodiment of the invention, a user's intent is used to control the Saccade motion.
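A minimal sketch of the compositing step only, assuming the placement of the high-resolution (foveal) image inside the mosaic is already known; sensor control and Saccade motion are outside its scope.

```python
# Minimal sketch of the compositing step, assuming the placement of the high-resolution
# (foveal) image inside the mosaic is already known; sensor control and Saccade motion
# are outside its scope.
import numpy as np

def overlay_foveal_patch(mosaic, patch, top_left):
    """Paste a high-resolution patch into the mosaic at (row, col) top_left."""
    out = mosaic.copy()
    r, c = top_left
    h, w = patch.shape[:2]
    out[r:r + h, c:c + w] = patch            # foveal pixels replace mosaic pixels
    return out

mosaic = np.zeros((240, 320))                # wide field-of-view mosaic (second sensor)
patch = np.ones((60, 80))                    # zoomed, higher-resolution view (first sensor)
virtual_view = overlay_foveal_patch(mosaic, patch, (90, 120))
print(virtual_view.sum())                    # 4800.0
```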
Abstract:
A method for enhancing color fidelity in multi-generation reproduction includes scanning an image to be reproduced, wherein the image contains an invisible digital watermark including color information; decoding the color information contained in the watermark; comparing the decoded color information with the scanned image; generating a correction table from the differences between the decoded color information and the scanned image; and performing color correction on the scanned image using the correction table. This method confines the color error to one generation, even when copies go through multiple generations of reproduction.
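A minimal sketch of building and applying the correction, assuming the decoded watermark yields matched pairs of original and scanned colors for a few reference patches and that a per-channel lookup table suffices (a real system might use a full 3-D color transform).

```python
# Minimal sketch of the correction, assuming the decoded watermark yields matched pairs
# of original and scanned colors for a few reference patches, and that a per-channel
# lookup table suffices (a real system might use a full 3-D color transform).
import numpy as np

def build_correction_lut(scanned_colors, true_colors):
    """Return a 3x256 table mapping scanned channel values back toward the originals."""
    scanned = np.asarray(scanned_colors, dtype=float)   # (n, 3) colors as scanned
    true = np.asarray(true_colors, dtype=float)         # (n, 3) colors decoded from the watermark
    lut = np.zeros((3, 256))
    for ch in range(3):
        order = np.argsort(scanned[:, ch])
        lut[ch] = np.interp(np.arange(256), scanned[order, ch], true[order, ch])
    return lut

def correct(image, lut):
    """Apply the per-channel table to an HxWx3 uint8 image."""
    channels = [lut[ch][image[..., ch]] for ch in range(3)]
    return np.clip(np.stack(channels, axis=-1).round(), 0, 255).astype(np.uint8)

# Example: the scan comes out ~10% too dark; the table restores the encoded values.
true = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
scanned = [(0, 0, 0), (115, 115, 115), (230, 230, 230)]
print(build_correction_lut(scanned, true)[0, 115])   # 128.0
```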
Abstract:
A processed (e.g., captured) video sequence is temporally, spatially, and/or histogram registered to the corresponding original video sequence by generating, for each set of one or more processed frames, a mapping from a selected set of one or more original frames to the processed set, wherein (1) each selected set depends on the selected set corresponding to a previous processed set, (2) each mapping minimizes a local prediction error between the original set and the corresponding processed set, and (3) the accumulated prediction error for the entire processed video sequence is minimized.
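A minimal sketch of the temporal part of this registration, with frames modeled as 2-D arrays: each processed frame is matched to an original frame, each choice is constrained to follow the previous one, and dynamic programming minimizes the accumulated mean-squared prediction error. Spatial and histogram registration are omitted, and the window size is an assumption.

```python
# Minimal sketch of the temporal part of this registration, with frames as 2-D arrays:
# each processed frame is matched to an original frame, each choice is constrained to
# follow the previous one, and dynamic programming minimizes the accumulated
# mean-squared prediction error. Spatial and histogram registration are omitted.
import numpy as np

def temporal_register(original, processed, max_skip=3):
    """Return one original-frame index per processed frame, minimizing accumulated error."""
    P, O = len(processed), len(original)
    err = np.array([[np.mean((p - o) ** 2) for o in original] for p in processed])
    cost = np.full((P, O), np.inf)
    back = np.zeros((P, O), dtype=int)
    cost[0] = err[0]
    for t in range(1, P):
        for j in range(O):
            lo = max(0, j - max_skip)                        # selection depends on the previous one
            prev = int(np.argmin(cost[t - 1, lo:j + 1])) + lo
            cost[t, j] = cost[t - 1, prev] + err[t, j]
            back[t, j] = prev
    path = [int(np.argmin(cost[-1]))]                        # trace the minimum accumulated-error path
    for t in range(P - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
orig = [rng.normal(size=(8, 8)) for _ in range(6)]
proc = [orig[i] + 0.01 for i in (0, 2, 3, 5)]                # processed copy dropped frames 1 and 4
print(temporal_register(orig, proc))                         # [0, 2, 3, 5]
```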
Abstract:
A method of image compression includes digitizing an image and segmenting the image in a plurality of different manners to generate a plurality of segmented images. Each of the segmented images is compressed. The method further includes determining a bit rate for each of the compressed images and determining how much image distortion results from each compression. Finally, the manner of segmentation that results in an optimal compromise between rate and distortion is selected.
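A minimal sketch of the final selection step, assuming the bit rate and distortion of each candidate segmentation have already been measured; a Lagrangian cost D + λR with an illustrative λ picks the rate/distortion compromise.

```python
# Minimal sketch of the final selection step, assuming the bit rate and distortion of
# each candidate segmentation have already been measured; a Lagrangian cost D + lambda*R
# with an illustrative lambda picks the rate/distortion compromise.
def select_segmentation(candidates, lam=1e-4):
    """candidates: (name, rate_bits, distortion) tuples. Returns the name minimizing D + lam * R."""
    return min(candidates, key=lambda c: c[2] + lam * c[1])[0]

candidates = [
    ("quadtree",     120_000, 14.0),   # fine segmentation: higher rate, lower distortion
    ("block_16x16",   80_000, 21.0),   # coarse segmentation: lower rate, higher distortion
    ("object_based",  95_000, 15.5),
]
print(select_segmentation(candidates))   # object_based
```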
Abstract:
A method for segmenting an image using a background-based segmentation process is provided. A document image (102) is low-pass filtered and decimated. The decimated image is processed at low resolution by a low-resolution segmentation (104) stage. Segmentation results include identification of a main background and one or more objects. Objects that cannot be classified into text or picture classes are further segmented into a local background and smaller objects. This process is repeated until all objects are classified into text or picture classes. The results are overlaid on the image (102) during an original-resolution refinement (106) stage to refine the segmentation.
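A minimal sketch of the low-resolution stage, assuming a grayscale page image: low-pass filter and decimate, take the dominant gray level as the main background, and label the remaining connected regions as objects. A crude variance test stands in for the text/picture classification and the recursive local-background step.

```python
# Minimal sketch of the low-resolution stage for a grayscale page: low-pass filter and
# decimate, take the dominant gray level as the main background, label the remaining
# pixels as objects, and use a crude variance test in place of the text/picture
# classification and the recursive local-background step.
import numpy as np
from scipy import ndimage

def segment_low_res(page, decimate=4, bg_tolerance=12, text_variance=400.0):
    # Low-pass filter, then decimate so segmentation runs at low resolution.
    low = ndimage.uniform_filter(page.astype(float), size=decimate)[::decimate, ::decimate]
    # Main background = most frequent gray level in the decimated image.
    hist, edges = np.histogram(low, bins=64, range=(0, 255))
    background = edges[np.argmax(hist)]
    # Connected regions far from the background become candidate objects.
    objects, count = ndimage.label(np.abs(low - background) > bg_tolerance)
    labels = {}
    for i in range(1, count + 1):
        region = low[objects == i]
        # High-contrast regions are treated as text, smooth ones as pictures.
        labels[i] = "text" if region.var() > text_variance else "picture"
    return objects, labels
```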