Abstract:
Methods and systems obtain data representative of a scene across spectral bands using a compressive-sensing-based hyperspectral imaging system comprising optical elements. These methods and systems sample two modes of a three-dimensional tensor corresponding to a hyperspectral representation of the scene using sampling matrices, one for each of the two modes, to generate a modified three-dimensional tensor. After sampling the two modes, such methods and systems sample a third mode of the modified three-dimensional tensor using a third sampling matrix to generate a further modified three-dimensional tensor. Then, the methods and systems reconstruct hyperspectral data from the further modified three-dimensional tensor using the sampling matrices and the third sampling matrix.
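The two-stage tensor sampling described above can be sketched as a sequence of mode-n products. This is a minimal NumPy illustration under assumed dimensions (32×32 spatial, 16 spectral bands) with random Gaussian sampling matrices; the actual optical sampling matrices and the reconstruction stage are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperspectral cube: 32x32 spatial, 16 spectral bands.
X = rng.standard_normal((32, 32, 16))

# Sampling matrices: one for each of the two spatial modes, and a
# third for the spectral mode (dimensions are illustrative).
A = rng.standard_normal((8, 32))   # mode-1 sampling matrix
B = rng.standard_normal((8, 32))   # mode-2 sampling matrix
C = rng.standard_normal((4, 16))   # third (spectral) sampling matrix

def mode_n_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    T = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, T, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

# First stage: sample the two spatial modes to get the modified tensor.
Y = mode_n_product(mode_n_product(X, A, 0), B, 1)   # shape (8, 8, 16)
# Second stage: sample the third mode of the modified tensor.
Z = mode_n_product(Y, C, 2)                          # shape (8, 8, 4)
```

The further modified tensor `Z` is what a reconstruction algorithm would invert, given knowledge of `A`, `B`, and `C`.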
Abstract:
A method and system for efficient non-persistent object motion detection comprise evaluating a video segment to identify at least two first pixel classes, corresponding to a plurality of stationary pixels and a plurality of pixels in apparent motion, and evaluating the video segment to identify at least two second pixel classes, corresponding to a background and a foreground indicative of the presence of a non-persistent object. The first pixel classes and the second pixel classes can be combined to define a final motion mask in the video segment indicative of the presence of a non-persistent object. An output can provide an indication that the object is in motion.
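The combination step can be sketched as an element-wise intersection of the two classifications. This is a toy example with hand-made 3×3 masks; how each classification is produced (motion estimation, background subtraction) is left abstract.

```python
import numpy as np

# First classification: True where the pixel is in apparent motion.
motion_mask = np.array([[0, 1, 1],
                        [0, 1, 0],
                        [0, 0, 0]], dtype=bool)

# Second classification: True where the pixel belongs to the foreground.
foreground_mask = np.array([[0, 0, 1],
                            [0, 1, 1],
                            [0, 0, 0]], dtype=bool)

# Final motion mask: pixels flagged by both classifications, i.e. a
# non-persistent object that is both foreground and in apparent motion.
final_mask = motion_mask & foreground_mask

# Output indication that an object is in motion.
object_in_motion = bool(final_mask.any())
```

Requiring agreement between the two independent classifications suppresses pixels that only one cue flags, such as waving foliage or persistent background structure.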
Abstract:
A method, non-transitory computer readable medium, and apparatus for compressive imaging of a scene in a single pixel camera are disclosed. For example, the method moves a pseudo-random pattern media behind an aperture until a pseudo-random sampling function of a plurality of pseudo-random sampling functions is viewable through the aperture, records a value of an intensity of modulated light from the scene with a detector, wherein the intensity of the modulated light is representative of an inner product between the pseudo-random sampling function and an image of the scene, and repeats the moving and the recording until a required number of the plurality of inner products has been processed.
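The move-and-record loop can be simulated numerically: each "pattern position" exposes one pseudo-random binary sampling function, and the detector reading equals the inner product of that pattern with the (flattened) scene image. The sizes below (64 pixels, 16 measurements) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64                 # pixels in the flattened scene image (assumed)
m = 16                 # number of inner products to record (assumed)
scene = rng.random(n)  # hypothetical scene image, flattened

patterns, measurements = [], []
for _ in range(m):
    # "Move the pattern media": expose the next pseudo-random
    # sampling function (a 0/1 mask) through the aperture.
    phi = rng.integers(0, 2, size=n).astype(float)
    # The detector records one intensity value: the inner product
    # between the sampling function and the scene image.
    measurements.append(phi @ scene)
    patterns.append(phi)

Phi = np.array(patterns)      # m x n sampling matrix
y = np.array(measurements)    # m recorded detector intensities
```

A compressive-sensing solver would then recover the scene from `y` and `Phi` with far fewer measurements than pixels.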
Abstract:
When performing video-based speed enforcement, a main camera and a secondary RGB traffic camera are employed to provide improved accuracy of speed measurement and improved evidentiary photo quality compared to single-camera approaches. The RGB traffic camera provides sparse secondary video data at a lower cost than a conventional stereo camera. Sparse stereo processing is performed using the main camera data and the sparse RGB camera data to estimate the height of one or more tracked vehicle features, which in turn is used to improve speed-estimate accuracy. By using the secondary video, spatio-temporally sparse stereo processing is enabled specifically for estimating the height of a vehicle feature above the road surface.
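The role of feature height in speed accuracy can be sketched with two standard relations: stereo disparity gives depth (Z = f·B/d), and a feature at height h above the road, when projected onto the road plane from a camera at height H, has its apparent ground speed inflated by H/(H − h). All numbers below (focal length, baseline, camera height, recovered feature height) are illustrative assumptions, not values from the disclosure.

```python
# Assumed geometry for the sketch.
f_px   = 1000.0   # focal length in pixels
base_m = 0.5      # baseline between main and secondary cameras, metres

def depth_from_disparity(disparity_px):
    """Classic sparse-stereo relation: Z = f * B / d."""
    return f_px * base_m / disparity_px

H = 6.0                          # camera height above the road, metres
Z = depth_from_disparity(50.0)   # depth to the tracked feature: 10 m
# The feature height h would follow from projecting the feature ray at
# depth Z; here a hypothetical recovered value stands in.
h = 0.8                          # feature height above road, metres

apparent_speed = 25.0            # m/s, from road-plane tracking
# Correct the overestimate caused by the feature's height above the road.
corrected_speed = apparent_speed * (H - h) / H
```

Without the height correction, a taillight tracked 0.8 m above the road would, in this geometry, overstate the vehicle's speed by roughly 15%.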
Abstract:
This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, a computer-implemented method of automated sequencing of vehicles in a side-by-side drive-thru comprises: a) capturing, with an image capturing device, video of a merge-point area associated with multiple lanes of merging traffic; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle traversing the merge-point area as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of the detected vehicles.
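Steps c) and d) can be sketched as per-detection lane scoring followed by aggregation in detection order. The classifier scores and lane names below are invented for illustration; the actual appearance-based classifier is left abstract.

```python
# Hypothetical classifier outputs for three vehicles detected crossing
# the merge-point area, in the order they crossed.
detections = [
    {"id": 1, "scores": {"left": 0.9, "right": 0.1}},
    {"id": 2, "scores": {"left": 0.2, "right": 0.8}},
    {"id": 3, "scores": {"left": 0.7, "right": 0.3}},
]

# Step c): assign each detection to the higher-scoring lane.
# Step d): aggregating the assignments in crossing order yields the
# merge sequence.
merge_sequence = [max(d["scores"], key=d["scores"].get)
                  for d in detections]
```

The resulting sequence (here, left-lane, right-lane, left-lane) tells the drive-thru which order in which to fulfil the lanes' orders after the merge.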
Abstract:
Systems and methods are disclosed for background modeling in a computer vision system to enable foreground object detection. A video acquisition module receives video data from a sequence of frames. A fit test module identifies a foreground object from the video data and defines a foreground mask representative of the identified foreground object. A foreground-aware background estimation module defines a first background model from the video data and then defines an updated background model from an association of a current frame of the video data, the first background model, and the foreground mask.
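A common way to realize a foreground-aware update is a running average that blends the current frame into the background model only where the foreground mask is clear, so foreground objects never contaminate the model. This is a minimal sketch, assuming an exponential blend with a hypothetical learning rate `alpha`; the disclosure's exact association of frame, model, and mask is not specified here.

```python
import numpy as np

def update_background(background, frame, foreground_mask, alpha=0.05):
    """Blend the current frame into the background model, but freeze
    the model under pixels the fit test flagged as foreground."""
    blended = (1 - alpha) * background + alpha * frame
    return np.where(foreground_mask, background, blended)

bg    = np.zeros((2, 2))           # first background model
frame = np.full((2, 2), 10.0)      # current frame
fg    = np.array([[True, False],   # foreground mask from the fit test
                  [False, False]])

bg = update_background(bg, frame, fg)
# The model stays 0 under the foreground pixel and drifts toward 10
# (by alpha per frame) everywhere else.
```

Masking the update this way avoids the classic failure mode where a slow-moving foreground object is gradually absorbed into the background.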
Abstract:
This disclosure provides a static occlusion handling method and system for use with appearance-based video tracking algorithms where static occlusions are present. The method and system assume that the objects to be tracked move according to structured motion patterns within a scene, such as vehicles moving along a roadway. The primary concept is to replicate pixels associated with the tracked object from previous frames into current or future frames when the tracked object coincides with a static occlusion, with the predicted motion of the tracked object serving as the basis for replication of the pixels.
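The replication idea can be shown on a one-dimensional strip of pixels: object pixels from the previous frame are shifted by the predicted motion and pasted over the occluded positions in the current frame. The pixel values, occlusion extent, and predicted displacement below are invented for illustration.

```python
import numpy as np

# One row of a frame; a static occlusion (value 9) covers columns 4-5.
prev_frame = np.array([0, 0, 7, 8, 0, 0, 0, 0], dtype=float)
curr_frame = np.array([0, 0, 0, 0, 9, 9, 0, 0], dtype=float)
occlusion  = np.array([0, 0, 0, 0, 1, 1, 0, 0], dtype=bool)

object_cols  = np.array([2, 3])  # tracked-object pixels in prev_frame
predicted_dx = 2                 # predicted motion, columns per frame

patched = curr_frame.copy()
for col in object_cols:
    dest = col + predicted_dx
    if occlusion[dest]:
        # Replicate the object pixel from the previous frame into the
        # position the motion model predicts it now occupies.
        patched[dest] = prev_frame[col]
```

The appearance-based tracker then sees a plausible object appearance at the predicted location instead of the occluder, so the track survives the occlusion.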
Abstract:
A method for training a vehicle detection system used in street occupancy estimation of stationary vehicles. The method includes defining first and second areas on an image plane of an image capture device associated with monitoring for detection of vehicles. The method includes receiving video data from a sequence of frames captured by the image capture device. The method includes determining candidate frames that include objects relevant to a classification task in the second area. The method includes extracting the objects from the candidate frames, extracting features of each extracted object, and assigning labels to each extracted object. The method includes training at least one classifier using the labels and extracted features. The method includes using the at least one trained classifier to classify a stationary vehicle detected in the first area.
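The train-then-classify flow can be sketched with a stand-in classifier. Here a nearest-centroid rule plays the role of the trained classifier, and the two-dimensional feature vectors and 0/1 labels are invented; the disclosure does not specify the classifier type or features.

```python
import numpy as np

# Hypothetical features extracted from objects found in the second
# (training) area, with assigned labels (0 = non-vehicle, 1 = vehicle).
train_features = np.array([[0.1, 0.2],
                           [0.2, 0.1],
                           [0.9, 0.8],
                           [0.8, 0.9]])
train_labels   = np.array([0, 0, 1, 1])

# "Training": compute one centroid per class from labeled features.
centroids = np.array([train_features[train_labels == c].mean(axis=0)
                      for c in (0, 1)])

def classify(feature):
    """Assign the label of the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - feature, axis=1)))

# Apply the trained classifier to a detection in the first area.
label = classify(np.array([0.85, 0.85]))
```

Training on the second area and deploying on the first lets the system learn appearance statistics offline without labeling the monitored parking region itself.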
Abstract:
A method for processing an image of a scene of interest includes receiving an original target image of the scene of interest at an image processing device from an image source device, the original target image exhibiting shadowing effects associated with the scene of interest when the original target image was captured, the original target image comprising a plurality of elements and representing an instantaneous state of the scene of interest. The method further includes pre-processing the original target image using a modification identification algorithm to identify elements of the original target image to be modified, and generating a copy mask with a mask region representing the elements to be modified and a non-mask region representing the other elements of the original target image. An image processing device for processing an image of a scene of interest and a non-transitory computer-readable medium are also provided.
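The copy-mask step can be sketched with a simple stand-in modification identification rule. An intensity threshold is used here purely as an assumption (the disclosure leaves the algorithm open): dark elements are treated as shadowed and marked for modification.

```python
import numpy as np

# Hypothetical 3x3 grayscale target image; low values are shadowed.
image = np.array([[200, 180,  40],
                  [190,  35,  30],
                  [210, 205, 200]], dtype=float)

# Stand-in modification identification rule: threshold on intensity.
shadow_threshold = 100.0

copy_mask = image < shadow_threshold   # mask region: elements to modify
non_mask  = ~copy_mask                 # non-mask region: other elements
```

Downstream processing would then alter only the elements inside the mask region, leaving the non-mask region of the original target image untouched.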
Abstract:
A method is provided for using a single-pixel imager to spatially reconstruct an image of a scene. The method can comprise configuring a light filtering device, including an array of imaging elements, to apply a spatially varying optical filtering process to incoming light according to a series of spatial patterns corresponding to sampling functions. The light filtering device can be a transmissive filter including a first membrane, a second membrane, and a variable gap therebetween. The method further comprises tuning a controller to manipulate a variable dimension of the gap, and measuring, using a photodetector of the single-pixel imager, a magnitude of an intensity of the filtered light across pixel locations in the series of spatial patterns. The magnitude of the intensity can be equivalent to an integral value of the scene across the pixel locations.
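The final equivalence can be demonstrated numerically: the filter transmits light only at the pixel locations selected by the current spatial pattern, and the single photodetector integrates (sums) whatever passes through, so its reading equals the integral of the scene over those locations. The 2×2 scene and binary pattern below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 2x2 scene and one binary spatial pattern applied by the
# gap-tunable transmissive filter (1 = transmit, 0 = block).
scene   = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
pattern = np.array([[1, 0],
                    [1, 1]], dtype=float)

# Spatially varying filtering of the incoming light.
filtered = scene * pattern

# The photodetector integrates the filtered light into one magnitude,
# equal to the integral of the scene across the transmitting pixels.
reading = filtered.sum()
```

Cycling through the series of spatial patterns yields one such integral per pattern, from which the scene image is spatially reconstructed.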