Abstract:
A camera outputs video as a sequence of video frames having pixel values in a first (e.g., relatively low-dimensional) color space, where the first color space has a first number of channels. An image-processing device maps the video frames to a second (e.g., relatively high-dimensional) color representation of the video frames. The mapping causes the second color representation of the video frames to have a greater number of channels relative to the first number of channels. The image-processing device extracts a second color representation of a background frame of the scene. The image-processing device can then detect foreground objects in a current frame of the second color representation of the video frames by comparing the current frame with the second color representation of the background frame. The image-processing device then outputs an identification of the foreground objects in the current frame of the video.
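As one illustrative reading of this pipeline, the sketch below lifts each 3-channel BGR frame into a 9-channel representation by concatenating BGR, HSV, and Lab, then thresholds the per-pixel distance to a lifted background frame. The specific mapping, the threshold value, and the function names are assumptions; the abstract does not fix any of them.

```python
# A minimal sketch, assuming OpenCV/NumPy and color-space concatenation as
# the higher-dimensional mapping (one of many possible choices).
import cv2
import numpy as np

def lift_color_space(frame_bgr):
    """Map a 3-channel BGR frame to a 9-channel representation
    (BGR + HSV + Lab), one possible higher-dimensional mapping."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2Lab)
    return np.dstack([frame_bgr, hsv, lab]).astype(np.float32)

def detect_foreground(frame_bgr, background_lifted, threshold=60.0):
    """Flag pixels whose lifted representation differs from the lifted
    background frame by more than a Euclidean-distance threshold."""
    lifted = lift_color_space(frame_bgr)
    dist = np.linalg.norm(lifted - background_lifted, axis=2)
    return dist > threshold  # boolean foreground mask
```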
Abstract:
This disclosure provides a static occlusion handling method and system for use with appearance-based video tracking algorithms where static occlusions are present. The method and system assume that the objects to be tracked move according to structured motion patterns within a scene, such as vehicles moving along a roadway. A primary concept is to replicate pixels associated with the tracked object from previous frames to current or future frames when the tracked object coincides with a static occlusion, the predicted motion of the tracked object serving as the basis for replicating the pixels.
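A minimal sketch of the pixel-replication idea, under the assumption of a constant-velocity motion prediction and a known binary mask of the static occlusion; the helper name and the shift-based replication are illustrative, not from the disclosure.

```python
import numpy as np

def replicate_occluded_pixels(current, previous, occlusion_mask, velocity):
    """Copy pixels of the tracked object from the previous frame into the
    current frame at the predicted location, wherever the static occlusion
    mask is set. `velocity` is the predicted per-frame (dy, dx) motion."""
    dy, dx = velocity
    # Shift the previous frame by the predicted motion (constant velocity).
    shifted = np.roll(np.roll(previous, dy, axis=0), dx, axis=1)
    out = current.copy()
    out[occlusion_mask] = shifted[occlusion_mask]
    return out
```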
Abstract:
A method and system for adaptable video-based object tracking includes acquiring video data from a scene of interest and identifying an initial instance of an object of interest in the acquired video data. A representation of a target object is then established. One or more motion parameters associated with said scene of interest are used to adjust the size of a search neighborhood associated with said target object. The target object is then tracked frame-by-frame in the video data.
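One way to realize a motion-adapted search neighborhood is to scale a template-matching window with the object's estimated speed, as in the sketch below; the base window size, the gain, and the matching score are assumed parameters, not taken from the abstract.

```python
import cv2
import numpy as np

def track_in_adaptive_window(frame_gray, template, last_xy, speed,
                             base=16, gain=2.0):
    """Search for `template` near `last_xy` (top-left corner), expanding the
    search window in proportion to the estimated speed (pixels/frame)."""
    radius = int(base + gain * speed)  # motion-adapted neighborhood size
    th, tw = template.shape
    x, y = last_xy
    x0, y0 = max(0, x - radius), max(0, y - radius)
    x1 = min(frame_gray.shape[1], x + tw + radius)
    y1 = min(frame_gray.shape[0], y + th + radius)
    roi = frame_gray[y0:y1, x0:x1]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return (x0 + max_loc[0], y0 + max_loc[1])  # updated top-left corner
```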
Abstract:
The method facilitates efficient motion estimation for video sequences captured with a camera that is stationary with respect to an object. For video captured with this type of camera, a main cause of changes between adjacent frames is object motion. In this setting, the output from the motion compensation stage is the set of motion vectors produced by the block matching algorithm, describing the way pixel blocks move between adjacent frames. For video captured with cameras mounted on moving vehicles (e.g., school buses, public transportation vehicles, and police cars), the motion of the vehicle itself is the largest source of apparent motion in the captured video. In both cases, the encoded set of motion vectors is a good descriptor of the apparent motion of objects within the field of view of the camera.
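The block matching step this refers to can be sketched as an exhaustive search for the minimum-SAD displacement of each pixel block between adjacent frames; the block size and search range below are typical but assumed values.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=16, search=8):
    """Exhaustive block matching: for each block in `prev`, find the
    best-matching (minimum sum-of-absolute-differences) displacement
    in `curr` within +/- `search` pixels."""
    h, w = prev.shape
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(np.int32)
            best, best_dxdy = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = curr[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()
                    if best is None or sad < best:
                        best, best_dxdy = sad, (dx, dy)
            vectors.append(((bx, by), best_dxdy))
    return vectors  # [(block origin, (dx, dy)), ...]
```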
Abstract:
A method is provided for using a single-pixel imager to spatially reconstruct an image of a scene. The method can comprise the following: configuring a light filtering device including an array of imaging elements to perform a spatially varying optical filtering process on incoming light according to a series of spatial patterns corresponding to sampling functions. The light filtering device can be a transmissive filter including a first membrane, a second membrane, and a variable gap therebetween. The method further comprises tuning a controller for manipulating a variable dimension of the gap; and, measuring, using a photodetector of the single-pixel imager, a magnitude of an intensity of the filtered light across pixel locations in the series of spatial patterns. The magnitude of the intensity can be equivalent to an integral value of the scene across the pixel locations.
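Abstracting away the tunable-gap filter hardware, the measurement model reduces to integrating the scene against a series of spatial patterns and inverting the resulting linear system. The sketch below simulates this with random binary masks and a least-squares reconstruction; both the pattern choice and the solver are assumptions.

```python
import numpy as np

def single_pixel_reconstruct(scene, n_patterns=None, rng=None):
    """Measure a (small) scene through a series of spatial patterns; each
    measurement is the integral of the masked scene, as a single
    photodetector would record. Reconstruct by solving the linear system."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_pixels = scene.size
    n_patterns = n_pixels if n_patterns is None else n_patterns
    patterns = rng.integers(0, 2, size=(n_patterns, n_pixels))  # 0/1 masks
    measurements = patterns @ scene.ravel()  # photodetector intensities
    recon, *_ = np.linalg.lstsq(patterns.astype(float),
                                measurements.astype(float), rcond=None)
    return recon.reshape(scene.shape)
```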
Abstract:
A method for automatically determining a dynamic queue configuration includes acquiring a series of frames from an image source surveying a queue area. The method includes detecting at least one subject in a frame. The method includes tracking locations of each detected subject across the series of frames. The method includes generating calibrated tracking data by mapping the tracking locations to a predefined coordinate system. The method includes localizing a queue configuration descriptor based on the calibrated tracking data.
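A rough sketch of the calibration and localization steps, assuming a known image-to-ground homography for the predefined coordinate system and a straight-line fit as the queue configuration descriptor; both choices are illustrative stand-ins.

```python
import cv2
import numpy as np

def calibrate_tracks(track_points_px, homography):
    """Map tracked image locations (N x 2, in pixels) to a predefined
    ground-plane coordinate system using a known 3x3 homography."""
    pts = track_points_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, homography).reshape(-1, 2)

def queue_descriptor(calibrated_pts):
    """A crude queue-configuration descriptor: a straight line fitted to
    the calibrated track locations, returned as (vx, vy, x0, y0)."""
    return cv2.fitLine(calibrated_pts.astype(np.float32),
                       cv2.DIST_L2, 0, 0.01, 0.01).ravel()
```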
Abstract:
A system and method for detecting customer drive-off/walk-off from a customer queue. An embodiment includes acquiring images of a retail establishment, said images including at least a portion of a customer queue region, determining a queue configuration within the images, analyzing the images to detect entry of a customer into the customer queue, tracking a customer detected in the customer queue as the customer progresses within the queue, analyzing the images to detect if the customer leaves the customer queue, and generating a drive-off notification if a customer leaves the queue.
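One simple decision rule consistent with this description: a tracked customer who leaves the queue region without ever coming near the service point is flagged as a drive-off/walk-off. The region-membership test, service point, and radius below are hypothetical parameters.

```python
import numpy as np

def detect_drive_off(track, in_queue_region, service_point, service_radius=1.0):
    """Flag a drive-off/walk-off: the customer's track (list of (x, y))
    exits the queue region without ever coming within `service_radius`
    of the service point. `in_queue_region` is a point-in-region test."""
    served = any(
        np.hypot(x - service_point[0], y - service_point[1]) <= service_radius
        for x, y in track
    )
    left_queue = not in_queue_region(track[-1])
    return left_queue and not served
```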
Abstract:
A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at least near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of a tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence indicating where the tracked subject places among other subjects approaching an end-event point; and, updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.
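The merge-point logic can be sketched as recording the order in which tracked subjects first reach the merge point and re-sequencing the pending end-events to match; the id-keyed data layout below is an assumption.

```python
import numpy as np

def observed_merge_order(tracks, merge_point, radius=0.5):
    """Return subject ids in the order their tracks first come within
    `radius` of the merge point. `tracks` maps id -> list of (frame, x, y)."""
    first_arrival = {}
    for sid, pts in tracks.items():
        for frame, x, y in pts:
            if np.hypot(x - merge_point[0], y - merge_point[1]) <= radius:
                first_arrival[sid] = frame
                break
    return sorted(first_arrival, key=first_arrival.get)

def update_end_events(end_events, observed_order):
    """Re-sequence end-events (id -> event payload) to match the observed
    arrival order of subjects at the merge point."""
    return [(sid, end_events[sid]) for sid in observed_order
            if sid in end_events]
```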
Abstract:
A method for computing output using a non-contact (invisible) input signal includes acquiring depth data of a scene captured by a depth-capable sensor. The method includes generating a temporal series of depth maps corresponding to the depth data. The method includes generating at least one volumetric attribute from the depth data. The method includes generating an output based on the volumetric attribute to control actions.
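One plausible volumetric attribute is the summed per-pixel depth deficit relative to a static background depth map, thresholded over the temporal series to produce a control output; the attribute and threshold below are illustrative, not specified by the abstract.

```python
import numpy as np

def volumetric_attribute(depth_map, background_depth, pixel_area=1.0):
    """Estimate the volume (depth units * pixel_area) occupied in front of a
    static background: sum of per-pixel depth deficits where the scene is
    closer to the sensor than the background."""
    deficit = np.clip(background_depth - depth_map, 0, None)
    return deficit.sum() * pixel_area

def gesture_output(volume_series, on_threshold=5000.0):
    """Turn the temporal series of volumetric attributes into a simple
    control signal: 'on' while the volume exceeds a threshold."""
    return ['on' if v > on_threshold else 'off' for v in volume_series]
```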
Abstract:
A method for removing false foreground image content in a foreground detection process performed on a video sequence includes, for each current frame, comparing a feature value of each current pixel against a feature value of a corresponding pixel in a background model. Each current pixel is classified as belonging to one of a candidate foreground image and a background based on the comparing. A first classification image representing the candidate foreground image is generated using the current pixels classified as belonging to the candidate foreground image. Each pixel in the first classification image is classified as belonging to one of a foreground image and a false foreground image using a previously trained classifier. A modified classification image representing the foreground image is generated using the pixels classified as belonging to the foreground image, while the pixels classified as belonging to the false foreground image are removed.
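A minimal sketch of the two-stage classification, assuming frame differencing against the background model for the candidate mask and a scikit-learn-style classifier applied to simple connected-component features; the feature set is an assumption, since the abstract leaves the classifier's inputs open.

```python
import cv2
import numpy as np

def remove_false_foreground(frame_gray, background_model, classifier,
                            thresh=25):
    """Build a candidate foreground mask by per-pixel comparison against the
    background model, then keep only connected components that the
    previously trained `classifier` labels as true foreground (label 1)."""
    candidate = (np.abs(frame_gray.astype(np.int16)
                        - background_model.astype(np.int16))
                 > thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
    cleaned = np.zeros_like(candidate)
    for i in range(1, n):  # label 0 is the image background
        x, y, w, h, area = stats[i]
        feats = np.array([[area, w / max(h, 1), w, h]])  # simple blob features
        if classifier.predict(feats)[0] == 1:  # assumed sklearn-style API
            cleaned[labels == i] = 1
    return cleaned  # modified classification image, false foreground removed
```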