Abstract:
This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, the automated sequencing method is a computer-implemented method comprising: a) capturing, with an image capturing device, video of a merge-point area associated with multiple lanes of merging traffic; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of detected vehicles.
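Steps b) through d) can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the appearance-based classifier of step c) is stood in by a hypothetical position rule, and all names (`classify_lane`, `merge_sequence`, `merge_x`) are assumptions for the example.

```python
def classify_lane(x_centroid, merge_x=320):
    # Stand-in for the appearance-based classifier of step c): vehicles
    # detected left of the (assumed) merge line are labelled lane 1,
    # all others lane 2.
    return 1 if x_centroid < merge_x else 2

def merge_sequence(detections):
    # Step d): detections is a list of (timestamp, x_centroid) pairs for
    # vehicles traversing the merge-point area; classify each detection
    # in temporal order and aggregate into the merge sequence.
    return [classify_lane(x) for _, x in sorted(detections)]
```

For example, `merge_sequence([(0.0, 100), (1.2, 500), (2.5, 90)])` yields the merge sequence `[1, 2, 1]`.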
Abstract:
This disclosure provides vehicle detection methods and systems including irrelevant search window elimination and/or window score degradation. According to one exemplary embodiment, provided is a method of detecting one or more parked vehicles in a video frame, wherein candidate search windows are limited to one or more predefined window shapes. According to another exemplary embodiment, the method includes degrading a classification score of a candidate search window based on aspect ratio, window overlap area, and/or a global maximal classification score.
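The two embodiments can be sketched as below. The shape test, the penalty formulas, and the `target_aspect` parameter are hypothetical simplifications of the disclosed degradation criteria.

```python
def eliminate_windows(windows, allowed_shapes):
    # Irrelevant-window elimination: keep only candidate search windows
    # (x, y, w, h) whose (width, height) matches a predefined shape.
    return [w for w in windows if (w[2], w[3]) in allowed_shapes]

def degrade_score(score, width, height, overlap_frac, global_max,
                  target_aspect=2.0):
    # Degrade a window's classification score by (i) deviation of its
    # aspect ratio from an expected vehicle shape, (ii) its overlap with
    # a higher-scoring window, and (iii) its gap to the global maximal
    # classification score.
    aspect = width / height
    ar_penalty = min(1.0, abs(aspect - target_aspect) / target_aspect)
    degraded = score * (1.0 - ar_penalty) * (1.0 - overlap_frac)
    if global_max > 0:
        degraded *= min(1.0, score / global_max)
    return degraded
```

A well-shaped, non-overlapping window at the global maximum keeps its score; overlapping or oddly shaped windows are attenuated before non-maximum suppression.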
Abstract:
Multi-stage vehicle detection systems and methods for side-by-side drive-thru configurations. One or more video cameras (or an image-capturing unit) can be employed for capturing video of a drive-thru of interest in a monitored area. A group of modules can be provided, which define multiple virtual detection loops in the video and sequentially perform classification with respect to each virtual detection loop among the multiple virtual detection loops, starting from the virtual detection loop closest to an order point, when a vehicle having a car ID is sitting in a drive-thru queue, so as to improve vehicle detection performance in automated post-merge sequencing.
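The sequential scan over virtual detection loops can be sketched as follows; the early-exit rule (stop at the first empty loop, since a queue fills from the order point backward) is an assumed simplification, and `is_occupied` stands in for the per-loop classifier.

```python
def scan_detection_loops(loops, is_occupied):
    # Sequentially classify each virtual detection loop, starting from
    # the loop closest to the order point. Because the drive-thru queue
    # fills from the order point backward, scanning can stop at the
    # first unoccupied loop.
    occupied = []
    for loop in loops:
        if not is_occupied(loop):
            break
        occupied.append(loop)
    return occupied
```

With loops ordered nearest-to-farthest, an occupancy pattern of occupied/occupied/empty/occupied returns only the first two loops, ignoring the spurious far detection.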
Abstract:
A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at least near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of a tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and, updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.
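The final updating step can be sketched as a reordering of pending end-events to match the observed post-merge order. The event dictionaries and the `subject` key are hypothetical illustrations.

```python
def update_end_events(pending_events, observed_sequence):
    # Reorder the pending end-events (e.g. order-fulfilment events keyed
    # by subject ID) so that they match the observed sequence of
    # subjects in the single post-merge queue lane.
    by_subject = {e['subject']: e for e in pending_events}
    return [by_subject[s] for s in observed_sequence if s in by_subject]
```

If subject B is observed to merge ahead of subject A, B's end-event is promoted ahead of A's, keeping the service order consistent with the physical queue.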
Abstract:
Methods and systems for tag recognition in captured images. A candidate region can be localized from regions of interest with respect to a tag and a tag number shown in the regions of interest within a side image of a vehicle. A number of confidence levels can then be calculated with respect to each digit recognized as a result of an optical character recognition operation performed on the tag number. Optimal candidates within the candidate region can be determined for the tag number based on individual character confidence levels among the confidence levels. The optimal candidates can then be validated against a pool of valid tag numbers using prior appearance probabilities, and data indicative of the most probable detected tag can be returned, improving image recognition accuracy.
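The validation step can be sketched as below: each valid tag is scored by the product of its per-character OCR confidences and its prior appearance probability. The data layout and scoring rule are assumptions for illustration.

```python
def most_probable_tag(char_candidates, valid_tags, priors):
    # char_candidates: one dict per character position, mapping a
    # recognised character to its OCR confidence. Score each tag in the
    # pool of valid tag numbers by the product of its per-character
    # confidences and its prior appearance probability, then return the
    # most probable tag.
    def score(tag):
        p = priors.get(tag, 0.0)
        for pos, ch in enumerate(tag):
            p *= char_candidates[pos].get(ch, 0.0)
        return p
    return max(valid_tags, key=score)
```

Note how the prior can overturn a raw OCR reading: with equal priors the higher-confidence characters win, but a tag seen far more often in the valid pool can be selected despite a weaker first-character confidence.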
Abstract:
Methods and systems for localizing numbers and characters in captured images. A side image of a vehicle captured by one or more cameras can be preprocessed to determine a region of interest. A confidence value can be calculated for a series of windows of different sizes and aspect ratios within the regions of interest, each window potentially containing a structure of interest. Highest confidence candidate regions can then be identified with respect to the regions of interest, along with at least one region adjacent to the highest confidence candidate regions. An OCR operation can then be performed in the adjacent region. An identifier can then be returned from the adjacent region in order to localize numbers and characters in the side image of the vehicle.
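The localization step can be sketched as a search over candidate windows followed by selection of an adjacent OCR region. The right-adjacent placement and the `confidence` callable are assumptions; the disclosed system may score and place regions differently.

```python
def localize_identifier(candidate_windows, confidence):
    # candidate_windows: (x, y, w, h) tuples of differing sizes and
    # aspect ratios inside the region of interest. Pick the
    # highest-confidence window for the structure of interest, then
    # return the region immediately to its right, where the identifier
    # is assumed to sit and where OCR is performed.
    x, y, w, h = max(candidate_windows, key=confidence)
    adjacent = (x + w, y, w, h)
    return (x, y, w, h), adjacent
```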
Abstract:
A video sequence can be continuously acquired at a predetermined frame rate and resolution by an image capturing unit installed at a location. A video frame can be extracted from the video sequence when a vehicle is detected at an optimal position for license plate recognition by detecting a blob corresponding to the vehicle and a virtual line on an image plane. The video frame can be pruned to eliminate false positives and multiple frames with respect to a similar vehicle before transmitting the frame via a network. A license plate detection/localization can be performed on the extracted video frame to identify sub-regions of the video frame that are most likely to contain a license plate. A license plate recognition operation can be performed and an overall confidence assigned to the license plate recognition result.
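The frame-extraction and pruning stages can be sketched as follows; the virtual-line trigger and the minimum-frame-gap pruning rule are assumed simplifications of the disclosed duplicate/false-positive elimination.

```python
def select_frames(blob_tracks, line_y, min_frame_gap=30):
    # blob_tracks: (frame_index, blob_centroid_y) pairs in frame order.
    # Trigger an extraction when a blob reaches the virtual line, then
    # prune repeat triggers for a similar vehicle by enforcing a minimum
    # frame gap before the next extraction is allowed.
    selected, last = [], -min_frame_gap
    for idx, y in blob_tracks:
        if y >= line_y and idx - last >= min_frame_gap:
            selected.append(idx)
            last = idx
    return selected
```

Only the surviving frames would then be transmitted over the network for plate localization and recognition, keeping bandwidth proportional to vehicle count rather than frame rate.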
Abstract:
Provided is a method and system of tracking partially occluded objects using an elastic deformation model. According to an exemplary method and system, partially occluded vehicles are detected and tracked in a scene including side-by-side drive-thru lanes.
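A part-based stand-in for the elastic deformation idea can be sketched as below: each tracked part stores a nominal offset from the object centre, so the centre can be re-estimated from whichever parts remain visible. This is a deliberately loose illustration; the disclosed elastic model would also penalize deformation of the part layout.

```python
def estimate_center(part_offsets, visible_parts):
    # part_offsets: part ID -> nominal (dx, dy) offset from the centre.
    # visible_parts: part ID -> observed (x, y) image position.
    # Each visible part "votes" for a centre location; averaging the
    # votes keeps the track alive under partial occlusion.
    votes = [(px - part_offsets[pid][0], py - part_offsets[pid][1])
             for pid, (px, py) in visible_parts.items()]
    n = len(votes)
    return (sum(x for x, _ in votes) / n, sum(y for _, y in votes) / n)
```

If the rear of a vehicle is occluded by a neighbouring lane, the visible front part alone still recovers the same centre estimate.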
Abstract:
A method for training a vehicle detection system used in street occupancy estimation of stationary vehicles. The method includes defining first and second areas on an image plane of an image capture device associated with monitoring for detection of vehicles. The method includes receiving video data from a sequence of frames captured by the image capture device. The method includes determining candidate frames that include objects relevant to a classification task in the second area. The method includes extracting the objects from the candidate frames, extracting features of each extracted object, and assigning labels to each extracted object. The method includes training at least one classifier using the labels and extracted features. The method includes using the at least one trained classifier to classify a stationary vehicle detected in the first area.
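The final training-and-classification step can be sketched with a simple nearest-neighbour stand-in for the trained classifier; the feature vectors, labels, and the 1-NN rule are illustrative assumptions, not the disclosed classifier.

```python
def classify_nearest(training_samples, query_features):
    # training_samples: (feature_vector, label) pairs built from objects
    # extracted and labelled in the second (training) area. A 1-nearest-
    # neighbour rule then classifies a stationary vehicle detected in
    # the first area from its extracted features.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_samples,
               key=lambda s: sq_dist(s[0], query_features))[1]
```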