Abstract:
A method for detecting a vehicle running a stop signal positioned at an intersection includes acquiring a sequence of frames from at least one video camera monitoring the intersection signaled by the stop signal. The method includes defining a first region of interest (ROI) including a road region located before the intersection on the image plane. The method includes searching the first ROI for a candidate violating vehicle. In response to detecting the candidate violating vehicle, the method includes tracking at least one trajectory of the detected candidate violating vehicle across a number of frames. The method includes classifying the candidate violating vehicle as either a violating vehicle or a non-violating vehicle based on the at least one trajectory.
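A minimal sketch of this pipeline is shown below, assuming frames are numpy arrays from an OpenCV capture, candidates are found with background subtraction (e.g., a subtractor created via cv2.createBackgroundSubtractorMOG2()), and a trajectory counts as violating if it crosses a horizontal stop line while the signal is red. The stop-line rule, thresholds, and the signal_red flag are illustrative assumptions, not the patented method itself.

```python
# Sketch of the ROI search / track / classify flow described in the abstract.
# Assumptions (not from the abstract): background subtraction yields the
# candidate vehicle, and a trajectory is "violating" if it crosses a
# horizontal stop line y_stop during a red phase.
import cv2
import numpy as np

def detect_candidates(frame, roi, subtractor, min_area=500):
    """Return centroids of moving blobs inside the ROI given as (x, y, w, h)."""
    x, y, w, h = roi
    mask = subtractor.apply(frame[y:y + h, x:x + w])
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            cx, cy, cw, ch = cv2.boundingRect(c)
            centroids.append((x + cx + cw // 2, y + cy + ch // 2))
    return centroids

def classify_trajectory(trajectory, y_stop, signal_red):
    """A trajectory is violating if it crosses the stop line during a red phase."""
    crossed = any(p[1] < y_stop <= q[1] or q[1] < y_stop <= p[1]
                  for p, q in zip(trajectory, trajectory[1:]))
    return "violating" if crossed and signal_red else "non-violating"
```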
Abstract:
The method facilitates efficient motion estimation for video sequences captured with a camera that is stationary with respect to the scene. For video captured with this type of camera, the main cause of change between adjacent frames is object motion. In this setting, the output of the motion compensation stage is a set of motion vectors, produced by a block matching algorithm, describing how pixel blocks move between adjacent frames. For video captured with cameras mounted on moving vehicles (e.g., school buses, public transportation vehicles, and police cars), the motion of the vehicle itself is the largest source of apparent motion in the captured video. In both cases, the encoded set of motion vectors is a good descriptor of the apparent motion of objects within the field of view of the camera.
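The sketch below illustrates plain exhaustive block matching between two adjacent grayscale frames, producing one motion vector per block via the sum of absolute differences. The block size and search range are illustrative choices; the abstract does not prescribe a particular matching criterion.

```python
# Minimal block-matching sketch: for each block of the previous frame, find the
# best-matching block in the current frame within a small search window and
# record the displacement as a motion vector.
import numpy as np

def block_matching(prev, curr, block=16, search=8):
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(np.int32)
            best, best_dxy = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 <= h - block and 0 <= x0 <= w - block:
                        cand = curr[y0:y0 + block, x0:x0 + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()  # sum of absolute differences
                        if best is None or sad < best:
                            best, best_dxy = sad, (dx, dy)
            vectors[by // block, bx // block] = best_dxy
    return vectors  # per-block (dx, dy) motion vectors
```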
Abstract:
A method for detecting a vehicle running a stop signal includes acquiring at least two evidentiary images of a candidate violating vehicle captured from at least one camera monitoring an intersection. The method includes extracting feature points in each of the at least two evidentiary images. The method includes computing feature descriptors for each of the extracted feature points. The method includes determining a correspondence between feature points having matching feature descriptors at different locations in the at least two evidentiary images. The method includes extracting at least one attribute for each correspondence. The method includes determining whether the candidate violating vehicle has run the stop signal using the at least one extracted attribute.
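A minimal sketch of the matching step follows, using ORB keypoints and descriptors with brute-force Hamming matching between the two evidentiary images. Here the per-correspondence attribute is the pixel displacement and the decision rule is a simple motion-magnitude threshold; both are illustrative assumptions rather than the attributes or rule claimed in the abstract.

```python
# Sketch: extract feature points and descriptors in two evidentiary images,
# match them, and derive one attribute per correspondence (here, the pixel
# displacement of each matched point).
import cv2
import numpy as np

def correspondence_displacements(img1, img2):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # One attribute per correspondence: displacement of the matched point.
    disp = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.array(disp)

def is_violation(displacements, min_motion_px=20.0):
    # Illustrative rule (an assumption): the vehicle moved through the stop if
    # the median displacement magnitude exceeds a threshold.
    if len(displacements) == 0:
        return False
    return np.median(np.linalg.norm(displacements, axis=1)) > min_motion_px
```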
Abstract:
A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at or near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of the tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and, updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.
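The sketch below covers only the final sequence-update step, assuming the upstream tracking already yields, for each subject, the frame index at which it reached the merge point. The data structures and the handling of not-yet-observed subjects are assumptions made for illustration.

```python
# Sketch of the sequence-update step: given the frame index at which each
# tracked subject reached the merge point, reorder the pending end-events to
# match the observed order in the single queue lane. Detection near the
# start-point ROI and per-frame tracking are assumed done upstream.
def update_event_sequence(pending_events, merge_frame_by_subject):
    """
    pending_events: list of (subject_id, event_payload) in the originally assumed order.
    merge_frame_by_subject: dict mapping subject_id -> frame index at the merge point.
    Returns events re-sorted into observed merge order; subjects not yet observed
    at the merge point keep their relative order at the end.
    """
    observed = [e for e in pending_events if e[0] in merge_frame_by_subject]
    unobserved = [e for e in pending_events if e[0] not in merge_frame_by_subject]
    observed.sort(key=lambda e: merge_frame_by_subject[e[0]])
    return observed + unobserved
```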
Abstract:
A method and structure for estimating parking occupancy within an area of interest can include the use of at least two image capture devices and a processor (e.g., a computer) which form at least part of a network. A method for estimating the parking occupancy within the area of interest can include the use of vehicle entry and exit data from the area of interest, as well as an estimated transit time for vehicles transiting through the area of interest without parking.
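A minimal sketch of the counting logic follows, assuming timestamped entry and exit events from the two capture devices. Vehicles still inside the area whose dwell time exceeds the estimated transit time are treated as parked; the FIFO pairing of entries to exits is a simplifying assumption.

```python
# Sketch of the occupancy estimate: vehicles currently inside the area are those
# that entered and have not exited; of these, only vehicles present longer than
# the estimated transit time are counted as parked (the rest are likely passing
# through). Timestamps are in seconds.
from collections import deque

def estimate_parked(entry_times, exit_times, now, est_transit_s):
    """Return an estimate of vehicles currently parked in the area of interest."""
    entries = deque(sorted(entry_times))
    for _ in sorted(exit_times):
        if entries:
            entries.popleft()  # FIFO pairing: each exit closes the oldest open entry
    # Remaining entries are vehicles still inside; those whose dwell time exceeds
    # the estimated transit time are unlikely to be merely transiting.
    return sum(1 for t in entries if now - t > est_transit_s)
```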
Abstract:
A method and system for on-street vehicle parking occupancy estimation via curb detection comprises training a computer system to identify a curb, evaluating image data of a region of interest to determine a region wherein the curb is visible within said region of interest, and estimating a parking occupancy of said region of interest according to said region where the curb is visible.
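A minimal sketch of the estimation step is given below, assuming a pre-trained binary classifier that labels fixed-width segments of the curb band as visible or occluded; occupancy is then taken as the occluded fraction. The classifier, segment width, and the strip geometry are assumptions for illustration.

```python
# Sketch: slide along the curb band of the region of interest, classify each
# segment as curb-visible or not, and estimate occupancy as the fraction of the
# curb that is NOT visible (i.e., occluded by parked vehicles).
def estimate_occupancy(curb_band, is_curb_visible, segment_px=50):
    """curb_band: image strip (numpy array) covering the curb;
    is_curb_visible: caller-supplied trained classifier, segment -> bool."""
    h, w = curb_band.shape[:2]
    segments = [curb_band[:, x:x + segment_px]
                for x in range(0, w - segment_px + 1, segment_px)]
    visible = sum(1 for s in segments if is_curb_visible(s))
    return 1.0 - visible / max(len(segments), 1)  # occluded fraction ~ occupancy
```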
Abstract:
A method for detecting parking occupancy includes receiving video data from a sequence of frames taken from an associated image capture device monitoring a parking area. The method includes determining at least one candidate region in the parking area. The method includes comparing a size of the candidate region to a size threshold. In response to the size of the candidate region meeting or exceeding the size threshold, the method includes determining whether the candidate region includes at least one object or no objects. The method includes classifying at least one object in the candidate region as belonging to one of at least two vehicle types. The method further includes providing vehicle occupancy information to a user.
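The sketch below mirrors that decision flow with caller-supplied placeholder classifiers for the object/no-object test and the vehicle-type label; the classifiers, the size threshold, and the candidate-region representation are assumptions, not specifics from the abstract.

```python
# Sketch of the decision flow: compare each candidate region against a size
# threshold, decide whether it contains an object, and classify any object into
# one of at least two vehicle types.
def classify_candidates(candidate_regions, frame, contains_object, vehicle_type_of,
                        size_threshold=1500):
    """candidate_regions: list of (x, y, w, h) boxes; contains_object and
    vehicle_type_of: placeholder classifiers supplied by the caller."""
    occupancy = []
    for (x, y, w, h) in candidate_regions:
        if w * h < size_threshold:
            continue                                   # region too small to consider
        patch = frame[y:y + h, x:x + w]
        if not contains_object(patch):
            continue                                   # empty parking space
        occupancy.append(((x, y, w, h), vehicle_type_of(patch)))  # e.g. "car" / "truck"
    return occupancy                                   # vehicle occupancy information
```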
Abstract:
Methods and systems for automatically managing parking payment and enforcement. In general, real-time data regarding vehicles located in a parking zone can be acquired. The number of vehicles in the parking zone can be determined from the acquired real-time data, and from such data, the number of those vehicles that are paid for can be calculated. An operation can then compare, with respect to the current time, the number of vehicles in the parking zone with the number of vehicles that are paid for; if the number of vehicles present exceeds the number that are paid for, unpaid violations are determined.
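A minimal sketch of that comparison is shown below; the representation of paid sessions as (start, end) time intervals is an assumption made for illustration.

```python
# Sketch of the payment/occupancy comparison: count active paid sessions at the
# current time and flag any surplus of detected vehicles as unpaid violations.
from datetime import datetime

def count_unpaid_violations(detected_vehicle_count, paid_sessions, now=None):
    """paid_sessions: list of (start, end) datetimes for payments in the zone."""
    now = now or datetime.now()
    paid = sum(1 for start, end in paid_sessions if start <= now <= end)
    # Violations exist only if more vehicles are present than are paid for.
    return max(0, detected_vehicle_count - paid)
```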
Abstract:
Provided is a method and system for efficient localization in still images. According to one exemplary method, a sliding-window-based two-dimensional (2-D) space search is performed to detect a parked vehicle in a video frame acquired from a fixed parking occupancy video camera having a field of view associated with a parking region.
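A minimal sketch of a sliding-window 2-D search follows, with a caller-supplied scoring function standing in for a trained vehicle classifier; the window size, stride, and score threshold are illustrative assumptions.

```python
# Sketch of the sliding-window 2-D search: slide a fixed-size window over the
# frame, score each window with a supplied classifier, and keep windows whose
# score passes a threshold.
def sliding_window_detect(frame, score_fn, win=(64, 128), stride=16, thresh=0.5):
    wh, ww = win
    detections = []
    for y in range(0, frame.shape[0] - wh + 1, stride):
        for x in range(0, frame.shape[1] - ww + 1, stride):
            patch = frame[y:y + wh, x:x + ww]
            s = score_fn(patch)            # e.g. a trained vehicle classifier
            if s >= thresh:
                detections.append((x, y, ww, wh, s))
    return detections
```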
Abstract:
Methods, systems, and processor-readable media for detecting the side window of a vehicle. A spatial probability map can be calculated, which includes data indicative of likely side window locations of a vehicle in an image. A side window detector can be run with respect to the image of the vehicle to determine detection scores. The detection scores can be weighted based on the spatial probability map. A detected region of interest can be extracted from the image as an image patch. An image classification can then be performed with respect to the extracted patch to indicate whether or not a passenger is present in the vehicle.
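The sketch below ties these steps together with placeholders for the detector, the spatial probability map, and the patch classifier; all three are caller-supplied assumptions, and weighting the detector score by the map value at the box center is one simple choice among several.

```python
# Sketch of the weighted side-window detection step: detector scores are
# multiplied by a spatial probability map, the best-scoring region is extracted
# as an image patch, and a separate classifier labels it passenger / no-passenger.
import numpy as np

def detect_and_classify(image, detector, prob_map, classify_patch):
    """detector(image) -> list of ((x, y, w, h), score); prob_map: HxW array in [0, 1]."""
    best_score, best_box = -np.inf, None
    for (x, y, w, h), score in detector(image):
        cx, cy = x + w // 2, y + h // 2
        weighted = score * prob_map[cy, cx]    # weight by likely side-window location
        if weighted > best_score:
            best_score, best_box = weighted, (x, y, w, h)
    if best_box is None:
        return "no-passenger"
    x, y, w, h = best_box
    patch = image[y:y + h, x:x + w]            # extracted region of interest
    return "passenger" if classify_patch(patch) else "no-passenger"
```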