Abstract:
The present invention provides a method for identifying a loitering event involving an object within an area of interest from a video stream. The method comprises detecting one or more objects entering the area of interest; extracting (104) an entering time and properties of each of the objects; storing (104) the entering times and properties of the objects; computing (110) a time-stamp for each object based on the difference between the current time and the entering time of the corresponding object; and identifying (112) a loitering event when the time-stamp is longer than a predetermined period.
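The loitering check itself can be illustrated with a short sketch. The Python snippet below assumes object detection and tracking are handled elsewhere; the names ObjectRecord and check_loitering and the 60-second threshold are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch of the loitering check described above. Detection/tracking of
# objects is assumed to be handled elsewhere; LOITER_THRESHOLD_S and
# ObjectRecord are illustrative names, not from the source.
import time
from dataclasses import dataclass, field

LOITER_THRESHOLD_S = 60.0  # predetermined period (assumed value)

@dataclass
class ObjectRecord:
    entering_time: float                              # time the object entered the area of interest
    properties: dict = field(default_factory=dict)    # stored object properties

def check_loitering(records: dict[str, ObjectRecord], now: float | None = None) -> list[str]:
    """Return IDs of objects whose dwell time exceeds the predetermined period."""
    now = time.time() if now is None else now
    loitering = []
    for obj_id, rec in records.items():
        time_stamp = now - rec.entering_time          # difference between current and entering time
        if time_stamp > LOITER_THRESHOLD_S:
            loitering.append(obj_id)
    return loitering

# Example: an object that entered 90 s ago is flagged, one that entered 10 s ago is not.
records = {
    "obj-1": ObjectRecord(entering_time=time.time() - 90, properties={"color": "red"}),
    "obj-2": ObjectRecord(entering_time=time.time() - 10),
}
print(check_loitering(records))   # -> ['obj-1']
```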
Abstract:
With the growing market for video surveillance in the security sector, there is a need for an automated system that can track and detect human intention based on a particular human motion. The present invention relates to a system and a method for identifying human behavioral intention based on effective motion analysis, wherein the system obtains a sequence of raw images taken from a live scene and processes the raw images in an activity analysis component. The activity analysis component is further provided with an activity enrollment component and an activity detection component.
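As a rough illustration of how the activity analysis component might be organised, the Python skeleton below separates an enrollment component (storing reference motion features) from a detection component (matching features extracted from live frames against the enrolled ones). The class names, the feature representation and the distance measure are assumptions made for this sketch, not the claimed design.

```python
# Illustrative skeleton of the activity-analysis pipeline described above.
# Component names and the feature representation are assumptions.
from typing import Callable, Dict, List

class ActivityEnrollment:
    """Stores reference motion patterns (enrolled activities)."""
    def __init__(self) -> None:
        self.templates: Dict[str, List[float]] = {}

    def enroll(self, name: str, motion_features: List[float]) -> None:
        self.templates[name] = motion_features

class ActivityDetection:
    """Compares motion features from live frames with enrolled templates."""
    def __init__(self, enrollment: ActivityEnrollment) -> None:
        self.enrollment = enrollment

    def detect(self, motion_features: List[float]) -> str | None:
        best, best_dist = None, float("inf")
        for name, template in self.enrollment.templates.items():
            dist = sum((a - b) ** 2 for a, b in zip(template, motion_features))
            if dist < best_dist:
                best, best_dist = name, dist
        return best

class ActivityAnalysis:
    """Receives raw image sequences and routes extracted features to detection."""
    def __init__(self, extractor: Callable[[list], List[float]]) -> None:
        self.enrollment = ActivityEnrollment()
        self.detection = ActivityDetection(self.enrollment)
        self.extractor = extractor    # placeholder motion-feature extractor

    def analyse(self, raw_images: list) -> str | None:
        return self.detection.detect(self.extractor(raw_images))
```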
Abstract:
The present invention relates to a surveillance system having a method for tampering detection and correction. The surveillance system is able to detect tampering of the camera view and, thereupon, adjust its video analytics configuration parameters so that video analytics can still be performed even though the orientation of the camera (10) has been changed, as long as a portion of the ROI remains within the tampered camera view. The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
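A minimal sketch of the tampering check and the ROI adjustment is given below. It assumes tampering can be flagged when the current frame's grey-level histogram no longer correlates with a reference frame, and that the analytics ROI is simply clipped to whatever portion of it remains inside the changed view; the threshold values and function names are illustrative only.

```python
# Hedged sketch: flag tampering via histogram dissimilarity against a reference
# frame, then keep only the part of the ROI that is still inside the view.
import cv2
import numpy as np

def is_tampered(reference_gray: np.ndarray, frame_gray: np.ndarray,
                similarity_threshold: float = 0.5) -> bool:
    ref_hist = cv2.calcHist([reference_gray], [0], None, [64], [0, 256])
    cur_hist = cv2.calcHist([frame_gray], [0], None, [64], [0, 256])
    cv2.normalize(ref_hist, ref_hist)
    cv2.normalize(cur_hist, cur_hist)
    similarity = cv2.compareHist(ref_hist, cur_hist, cv2.HISTCMP_CORREL)
    return similarity < similarity_threshold

def clip_roi_to_view(roi: tuple, view_shape: tuple) -> tuple | None:
    """Keep only the part of the ROI (x, y, w, h) still inside the tampered view."""
    x, y, w, h = roi
    H, W = view_shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    if x1 <= x0 or y1 <= y0:
        return None                       # ROI fully outside the tampered view
    return (x0, y0, x1 - x0, y1 - y0)     # partial ROI still usable for analytics
```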
Abstract:
The present invention relates to a method for estimating a possible route from incomplete tag access information. The method comprises the steps of receiving all tag access information, including tag identification numbers, connected region identifications and need tag values; creating a region ontology for each piece of tag access information received; setting the rows and columns of the region ontology according to the connected region identifications; filling up the region ontology with tag identification numbers and need tag values; generating intensity profile data based on historical data; searching for probable routes based on the region ontology; estimating the best route based on the probable routes found and the intensity profile data; and displaying the best route.
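One way to picture the route search is sketched below: regions form an adjacency structure (standing in for the region ontology), and candidate routes are scored so that regions with higher historical intensity are preferred. The graph layout, the cost function and the Dijkstra-style search are assumptions made for illustration, not the claimed method.

```python
# Hedged sketch: prefer routes through regions that were historically busier
# (higher "intensity"), using a simple Dijkstra-style search.
from itertools import count
import heapq

def best_route(adjacency: dict[str, list[str]],
               intensity: dict[str, float],
               start: str, goal: str) -> list[str] | None:
    """Higher historical intensity -> lower traversal cost."""
    tie = count()
    queue = [(0.0, next(tie), start, [start])]
    visited = set()
    while queue:
        cost, _, region, path = heapq.heappop(queue)
        if region == goal:
            return path
        if region in visited:
            continue
        visited.add(region)
        for nxt in adjacency.get(region, []):
            if nxt not in visited:
                step_cost = 1.0 / (1.0 + intensity.get(nxt, 0.0))
                heapq.heappush(queue, (cost + step_cost, next(tie), nxt, path + [nxt]))
    return None

# Example: regions A-D with historical traversal counts; the busier corridor wins.
adjacency = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
intensity = {"B": 120.0, "C": 5.0, "D": 60.0}
print(best_route(adjacency, intensity, "A", "D"))   # -> ['A', 'B', 'D']
```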
Abstract:
A method of identity recognition via human (subject) lip images is proposed. The method includes registration (140) of templates (135) of the lip images of known subjects for later matching with lip images from subjects to be identified, by digitally matching (740) with the registered templates (135). The lip portions are divided into four quadrants (410-440) for feature extraction, which permits template matching (740) even when only partial lip images are available for identification. The method includes classifying (220) the lips into categories so that defined characteristic features can be extracted, where the different categories of lips (310-350) contain different prominent features that are unique for representation. The feature extraction from the quadrants may be done in different orientations (610-640). The acquisition of the lip images (110, 710) is performed by available image sensor technologies such as optical imaging, thermal imaging, ultrasonic imaging, passive capacitance and active capacitance imaging.
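The quadrant-based matching can be sketched as follows. The snippet splits a lip image into four quadrants, computes a simple grey-level histogram per quadrant, and scores a probe against a registered template using only the quadrants that are present, so partial images can still be matched. The feature choice and distance measure are placeholders, not the features claimed in the abstract.

```python
# Hedged sketch of quadrant-wise feature extraction and partial matching.
import numpy as np

def quadrants(lip_img: np.ndarray) -> list[np.ndarray]:
    h, w = lip_img.shape[:2]
    return [lip_img[:h // 2, :w // 2], lip_img[:h // 2, w // 2:],
            lip_img[h // 2:, :w // 2], lip_img[h // 2:, w // 2:]]

def quadrant_features(lip_img: np.ndarray) -> list[np.ndarray]:
    """One normalised grey-level histogram per quadrant (placeholder feature)."""
    feats = []
    for q in quadrants(lip_img):
        hist, _ = np.histogram(q, bins=32, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return feats

def match_score(template: list[np.ndarray], probe: list[np.ndarray | None]) -> float:
    """Average distance over the quadrants present in the probe (None = missing)."""
    dists = [np.abs(t - p).sum() for t, p in zip(template, probe) if p is not None]
    return float(np.mean(dists)) if dists else float("inf")
```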
Abstract:
A method and an automated system for tracking and tagging objects, wherein each object is tracked and tagged as a motion block. The method (100) includes detecting a plan view and a lateral view of the motion blocks in a current frame (102) to identify occlusion of the motion blocks in the current frame (104), extracting color information from the motion blocks in the current frame (108) to identify matching color information between the motion blocks in the current frame and all motion blocks in previous frames (110), and assigning a tag to the motion blocks in the current frame (112). The automated system includes a first video camera to detect the plan view (200) of the motion blocks in the current frame and a second video camera to detect the lateral view (208) of the motion blocks in the current frame; a processor comprising means for identifying occlusion of the motion blocks in the current frame, means for extracting color information from the motion blocks in the current frame to identify matching color information between the motion blocks in the current frame and all motion blocks in previous frames, and means for assigning a tag to the motion blocks in the current frame; and a data storage system.
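An illustrative sketch of the colour-matching and tag-assignment step is shown below. It compares colour histograms of motion blocks in the current frame with histograms stored from previous frames and reuses the tag of the best match, issuing a new tag otherwise. The occlusion handling based on the plan and lateral views is omitted, and the threshold and names are assumptions.

```python
# Hedged sketch: assign tags to motion blocks by colour-histogram similarity
# against blocks seen in previous frames.
import cv2
import numpy as np
from itertools import count

_new_tag = count(1)   # simple tag generator for previously unseen blocks

def color_signature(block_bgr: np.ndarray) -> np.ndarray:
    hist = cv2.calcHist([block_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def assign_tags(current_blocks: list[np.ndarray],
                known: dict[int, np.ndarray],
                match_threshold: float = 0.6) -> dict[int, np.ndarray]:
    """Return {tag: signature} for current blocks, reusing tags of matched blocks."""
    assigned: dict[int, np.ndarray] = {}
    for block in current_blocks:
        sig = color_signature(block)
        best_tag, best_sim = None, match_threshold
        for tag, prev_sig in known.items():
            sim = cv2.compareHist(prev_sig.astype(np.float32),
                                  sig.astype(np.float32), cv2.HISTCMP_CORREL)
            if sim > best_sim:
                best_tag, best_sim = tag, sim
        assigned[best_tag if best_tag is not None else next(_new_tag)] = sig
    return assigned
```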
Abstract:
A method to detect an abnormal event in a surveillance system using wide-view video images is disclosed herein. More particularly, the invention provides a solution to overcome the image distortions that are associated with wide-view video images.
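As a hedged sketch of one common way to counter wide-angle lens distortion before analysis, the snippet below undistorts a frame with OpenCV's fisheye camera model. The abstract does not specify the actual correction used; the camera matrix and distortion coefficients shown are placeholders that would normally come from a calibration step.

```python
# Hedged sketch: reduce wide-angle (fisheye) distortion with OpenCV before
# running analytics. K and D below are placeholder calibration values.
import cv2
import numpy as np

def undistort_wide_frame(frame: np.ndarray,
                         K: np.ndarray, D: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

# Placeholder calibration values (illustrative only).
K = np.array([[300.0, 0.0, 640.0], [0.0, 300.0, 360.0], [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])
```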