Abstract:
Hybrid methods, systems, and processor-readable media for video- and vision-based access control for parking occupancy determination. One or more image frames of a parking area of interest can be acquired, and two or more regions of interest can be defined with respect to the parking area of interest. The regions of interest can be analyzed for motion detection or image content change detection. An image content classification operation can be performed with respect to a first region of interest among the regions of interest based on the result of the image content change detection. An object tracking operation can then be performed with respect to a second region of interest among the regions of interest if the result of the image content classification operation indicates the presence of one or more objects of interest within the parking area of interest.
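The staged gating described above (cheap change detection triggering classification, which in turn triggers tracking) can be sketched briefly. The following is a minimal Python/OpenCV sketch, not the claimed implementation; the ROI layout, the pixel-difference thresholds, the edge-density stand-in for the classifier, and the KCF tracker are all illustrative assumptions.

```python
import cv2
import numpy as np

CHANGE_FRACTION = 0.02  # fraction of changed pixels that triggers classification

def content_changed(prev_gray, curr_gray, roi):
    """Image content change detection inside an ROI given as (x, y, w, h)."""
    x, y, w, h = roi
    diff = cv2.absdiff(prev_gray[y:y+h, x:x+w], curr_gray[y:y+h, x:x+w])
    return np.count_nonzero(diff > 25) / diff.size > CHANGE_FRACTION

def classify_vehicle(frame, roi):
    """Stand-in for the image content classification stage; a deployed
    system would run a trained vehicle classifier here."""
    x, y, w, h = roi
    edges = cv2.Canny(frame[y:y+h, x:x+w], 100, 200)
    return np.count_nonzero(edges) / edges.size > 0.05  # placeholder heuristic

def process(video_path, first_roi, second_roi):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    tracker = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if tracker is None:
            # Stage 1: cheap change detection gates the classifier.
            if content_changed(prev_gray, gray, first_roi):
                # Stage 2: classification gates the tracker.
                if classify_vehicle(frame, first_roi):
                    # Stage 3: object tracking in the second ROI
                    # (cv2.legacy.TrackerKCF_create on some OpenCV builds).
                    tracker = cv2.TrackerKCF_create()
                    tracker.init(frame, second_roi)
        else:
            ok, box = tracker.update(frame)
            if not ok:
                tracker = None  # object lost; fall back to change detection
        prev_gray = gray
    cap.release()
```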
Abstract:
A method and structure for estimating parking occupancy within an area of interest can include the use of at least two image capture devices and a processor (e.g., a computer) which form at least part of a network. A method for estimating the parking occupancy within the area of interest can include the use of vehicle entry and exit data from the area of interest, as well as an estimated transit time for vehicles transiting through the area of interest without parking.
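A minimal sketch of the entry/exit bookkeeping with a transit-time correction may help; the Event structure, the 90-second transit time, and the coarse in-transit estimate are assumptions, not details from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float  # seconds
    kind: str         # "entry" or "exit"

def parked_occupancy(events, query_time, transit_time_s=90.0):
    """Parked count at query_time: net (entries - exits), discounted by
    vehicles that entered recently enough to still be transiting."""
    entries = [e.timestamp for e in events
               if e.kind == "entry" and e.timestamp <= query_time]
    exits = [e for e in events
             if e.kind == "exit" and e.timestamp <= query_time]
    inside = len(entries) - len(exits)
    # Coarse upper bound on pass-through traffic: every vehicle that
    # entered within the last transit_time_s may still be in transit.
    in_transit = sum(1 for t in entries if query_time - t < transit_time_s)
    return max(inside - in_transit, 0)
```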
Abstract:
A method, system, and apparatus for video frame alignment comprises collecting video data comprising at least two video frames; extracting a line profile along at least one line in each of the at least two video frames; selecting one of the at least two video frames as a reference video frame; segmenting each of the line profiles into a plurality of segments; aligning the plurality of segmented line profiles with the corresponding segmented line profiles in the reference video frame; translating each of the at least two video frames according to the plurality of corresponding segmented line profile alignments; and removing a camera shift from the at least two video frames according to the translation and alignment of the plurality of segmented line profiles with the plurality of segmented line profiles in the reference video frame.
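The profile-extract/segment/align/translate chain lends itself to a short sketch. The following assumes horizontal line profiles, a purely horizontal camera shift, fixed-length segments, and a brute-force normalized-correlation search; none of these choices are specified by the abstract.

```python
import numpy as np

def extract_profile(frame_gray, row):
    """Intensity profile along one image row."""
    return frame_gray[row, :].astype(np.float64)

def segment_profile(profile, seg_len=64):
    """Split a 1-D profile into fixed-length segments (partial tail dropped)."""
    n = len(profile) // seg_len
    return [profile[i*seg_len:(i+1)*seg_len] for i in range(n)]

def segment_shift(seg, ref_seg, max_shift=16):
    """Best integer shift of seg against ref_seg by mean-removed correlation.
    np.roll wraps at the borders, which the sketch ignores."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        rolled = np.roll(seg, s)
        score = np.dot(rolled - rolled.mean(), ref_seg - ref_seg.mean())
        if score > best_score:
            best, best_score = s, score
    return best

def remove_camera_shift(frame_gray, ref_gray, row=240):
    segs = segment_profile(extract_profile(frame_gray, row))
    refs = segment_profile(extract_profile(ref_gray, row))
    # Consensus over segments makes the estimate robust to local motion.
    shift = int(np.median([segment_shift(s, r) for s, r in zip(segs, refs)]))
    return np.roll(frame_gray, -shift, axis=1)  # translate to cancel the shift
```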
Abstract:
A method, system, and apparatus for parking occupancy detection comprises collecting video of a blockface with at least one video recording module, identifying a number of possible parking spaces along the blockface in the collected video, defining a region of interest for each of the possible parking spaces, detecting a time-dependent occupancy of the defined regions of interest for each of the possible parking spaces, and reporting the time-dependent occupancy. Drift correction of the recorded video and ground-truth comparisons of occupancy determinations may be provided.
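A minimal sketch of the per-space, time-dependent occupancy detection follows; the empty-street reference frame, the sampling interval, and the change thresholds are illustrative assumptions (drift correction and ground-truth comparison are omitted).

```python
import cv2
import numpy as np

def roi_occupied(frame_gray, empty_gray, roi, thresh=0.15):
    """A space reads occupied when enough pixels differ from the
    empty-street reference inside its (x, y, w, h) region of interest."""
    x, y, w, h = roi
    diff = cv2.absdiff(frame_gray[y:y+h, x:x+w], empty_gray[y:y+h, x:x+w])
    return np.count_nonzero(diff > 30) / diff.size > thresh

def occupancy_timeline(video_path, empty_gray, rois, sample_every=30):
    """Returns a list of (frame_index, [occupied flag per ROI])."""
    cap = cv2.VideoCapture(video_path)
    timeline, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            timeline.append((idx, [roi_occupied(gray, empty_gray, r)
                                   for r in rois]))
        idx += 1
    cap.release()
    return timeline
```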
Abstract:
Parking occupancy detection methods, systems, and processor-readable media. A laser device unit includes a laser range finder, and a programmable pan-tilt unit is deployable on site to monitor one or more parking spaces. A laser emitting and receiving unit associated with the laser range finder determines the distance of an object by estimating the time difference between an emitted laser pulse and its received return. The laser range finder is controllable by the programmable pan-tilt unit and scans the parking spaces. A signal-processing unit can convert the measured distance profile into parking occupancy data to provide continuous parking space estimation data for use in parking occupancy detection.
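The time-of-flight ranging and the conversion of a scanned distance profile into per-space occupancy can be sketched as follows; the empty-lot baseline profile, the half-meter tolerance, and the majority vote over each space's pan angles are assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_from_time_of_flight(dt_seconds):
    """Round-trip time between emitted pulse and received return
    converts to one-way distance: d = c * dt / 2."""
    return C * dt_seconds / 2.0

def occupancy_from_profile(measured_m, baseline_m, space_slices, tol_m=0.5):
    """measured_m/baseline_m: distance per pan angle; space_slices maps each
    parking space to its slice of pan angles, e.g. [slice(0, 30), slice(30, 60)].
    A space reads occupied when returns come back meaningfully closer than
    the empty-lot baseline."""
    occupied = []
    for sl in space_slices:
        hits = [b - m > tol_m for m, b in zip(measured_m[sl], baseline_m[sl])]
        occupied.append(sum(hits) > len(hits) // 2)  # majority vote per space
    return occupied
```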
Abstract:
A system and method of video-based chew counting by receiving image frames from a video camera, determining feature points within the image frames, generating a motion signal based on movement of the feature points across the image frames, and determining a chew count based on the motion signal.
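The signal chain (feature points, motion signal, peak counting) can be sketched with OpenCV optical flow and SciPy peak detection; restricting the feature points to the jaw region, the tracking parameters, and the minimum peak spacing are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def chew_count(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                                  minDistance=10)
    motion = []  # mean vertical displacement per frame
    while True:
        ok, frame = cap.read()
        if not ok or pts is None or len(pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good = status.ravel() == 1
        if good.any():
            motion.append(float(np.mean(nxt[good, 0, 1] - pts[good, 0, 1])))
            pts = nxt[good].reshape(-1, 1, 2)
        prev = gray
    cap.release()
    # Integrate per-frame displacements into a position signal; each chew
    # cycle then shows up as one peak.
    position = np.cumsum(np.array(motion))
    peaks, _ = find_peaks(position, distance=5)
    return len(peaks)
```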