Abstract:
A method of and a system (1) for sensor signal data analysis, wherein sensor signal data (12) from a plurality of sensors are acquired (2). Signal processing (3; 4) is performed on the sensor signal data (12) to extract (4) one or more features of the sensor signal data (12). The features are signal extracts that are distinguishable from one another and reproducible across the sensor signal data (12). A plurality of information attributes is associated (10) with at least one of the features, and information evaluation (11) is performed on the plurality of information attributes.
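
For illustration only, the short Python sketch below mirrors the pipeline this abstract describes: acquire sensor signal data, extract reproducible signal extracts as features, associate information attributes with them, and evaluate those attributes. The peak-based feature definition and all function names are assumptions introduced here, not details taken from the abstract.

    import numpy as np

    def acquire(num_samples=1000, seed=0):
        # Acquire synthetic sensor signal data (stand-in for real sensors).
        rng = np.random.default_rng(seed)
        t = np.linspace(0, 10, num_samples)
        return np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(num_samples)

    def extract_features(signal, threshold=0.8):
        # Feature = local maximum above a threshold, used here as a
        # distinguishable, reproducible signal extract (an assumption).
        features = []
        for i in range(1, len(signal) - 1):
            if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] >= signal[i + 1]:
                features.append({"index": i, "amplitude": float(signal[i])})
        return features

    def associate_attributes(features, sensor_id="sensor-1"):
        # Associate a plurality of information attributes with each feature.
        for f in features:
            f["attributes"] = {"sensor": sensor_id, "kind": "peak", "strength": f["amplitude"]}
        return features

    def evaluate(features):
        # Information evaluation over the associated attributes.
        strengths = [f["attributes"]["strength"] for f in features]
        return {"count": len(features), "mean_strength": float(np.mean(strengths)) if strengths else 0.0}

    signal = acquire()
    print(evaluate(associate_attributes(extract_features(signal))))
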
Abstract:
The invention concerns a method of accessing information using a wireless mobile device (1) having a display (3) and a video capturing unit, the method comprising: establishing a video call with a remote server (6) such that said remote server receives video images captured by said mobile device during the video call; performing image recognition to identify at least one first object (4) in said captured video; and generating a signal for transmission to said mobile device, said signal comprising information relating to said first object.
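
A minimal server-side sketch, in Python, of the flow the abstract outlines: receive a frame captured during the video call, run image recognition on it, and build the information signal sent back to the mobile device. The brightness-based stub recognizer and the information store are placeholders assumed for this sketch, not part of the described system.

    import numpy as np

    def identify_objects(frame):
        # Placeholder recognizer: a real system would run an image-recognition
        # model here. This stub "identifies" an object when the frame's mean
        # brightness exceeds a threshold (illustrative assumption only).
        return [{"label": "object", "confidence": 0.9}] if frame.mean() > 100 else []

    def handle_video_call_frame(frame, info_store):
        # Server-side handling of one frame received during the video call:
        # recognize objects, then build the signal sent back to the device.
        objects = identify_objects(frame)
        results = []
        for obj in objects:
            info = info_store.get(obj["label"], "no information available")
            results.append({"object": obj["label"], "info": info})
        return {"type": "object-info", "results": results}

    # Example: one synthetic frame and a toy information store.
    frame = np.full((480, 640, 3), 128, dtype=np.uint8)
    print(handle_video_call_frame(frame, {"object": "example description"}))
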
Abstract:
A method, system and computer program product for improving error discrimination in biometric authentication systems. The error discrimination is set in accordance with a predetermined security policy. A plurality of biometric samples are provided and authenticated by a computer system in conjunction with a security token. An alternate embodiment allows inputting of the plurality of biometric samples in a predetermined sequence. The predetermined input sequence is maintained as an authentication secret which may be used to further reduce the authentication transaction error rate. In use, a user inputs one or more biometric samples, a portion of which are inputted in a predetermined sequence; a set of processing units that will generate intermediate results from the processing of the biometric samples is selected from among a plurality of available processing units; at least a portion of the biometric samples is processed by the selected set of processing units to provide the intermediate results; the predetermined sequence is verified; and the intermediate results are arbitrated to generate a final result which at least meets the predetermined security policy. Various embodiments provide for a security token to perform at least a portion of the processing or the arbitration function.
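
The sketch below is one hedged interpretation of the claimed flow: samples are checked against the predetermined input sequence, selected processing units produce intermediate results, and an arbitration step decides whether the security policy is met. The matcher scores, the 0.8 policy threshold and the all-must-pass arbitration rule are assumptions; the abstract also allows a security token to carry out part of this work.

    # Illustrative sketch of sequence-verified, arbitrated biometric matching.
    def fingerprint_unit(sample):
        return sample.get("fingerprint_score", 0.0)

    def face_unit(sample):
        return sample.get("face_score", 0.0)

    AVAILABLE_UNITS = {"fingerprint": fingerprint_unit, "face": face_unit}

    def authenticate(samples, expected_sequence, selected_units, policy_threshold=0.8):
        # Verify that samples were presented in the predetermined sequence
        # (the sequence itself acts as an additional authentication secret).
        presented_sequence = [s["kind"] for s in samples]
        if presented_sequence != expected_sequence:
            return False
        # Each selected processing unit produces an intermediate result.
        intermediate = [AVAILABLE_UNITS[name](s)
                        for s in samples
                        for name in selected_units if name == s["kind"]]
        # Arbitration rule (assumed): every intermediate result must meet the policy.
        return all(score >= policy_threshold for score in intermediate)

    samples = [{"kind": "fingerprint", "fingerprint_score": 0.93},
               {"kind": "face", "face_score": 0.88}]
    print(authenticate(samples, ["fingerprint", "face"], ["fingerprint", "face"]))
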
Abstract:
Method and system for performing event detection and object tracking in image streams by installing, in the field, a set of image acquisition devices, where each device includes a local programmable processor for converting the acquired image stream, which consists of one or more images, to a digital format, and a local encoder for generating features from the image stream. These features are parameters that are related to attributes of objects in the image stream. The encoder also transmits a feature stream whenever the motion features exceed a corresponding threshold. Each image acquisition device is connected to a data network through a corresponding data communication channel. An image processing server that determines the threshold and processes the feature stream is also connected to the data network. Whenever the server receives features from a local encoder through its corresponding data communication channel and the data network, the server provides indications regarding events in the image streams by processing the feature stream and transmitting these indications to an operator.
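
As a rough sketch of the local-encoder/server split, the Python below computes a motion feature per frame pair, transmits it only when it exceeds the server-supplied threshold, and has the server turn received features into event indications. Frame differencing as the motion feature is an assumption made for illustration.

    import numpy as np

    def motion_features(prev_frame, frame):
        # Local encoder step: derive motion-related parameters from the
        # image stream (here, mean absolute frame difference; an assumption).
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return {"motion_energy": float(diff.mean())}

    def encoder_loop(frames, threshold, transmit):
        # Transmit the feature stream only when the motion features exceed
        # the threshold provided by the image processing server.
        prev = frames[0]
        for frame in frames[1:]:
            features = motion_features(prev, frame)
            if features["motion_energy"] > threshold:
                transmit(features)
            prev = frame

    def server_indicate(features):
        # Server side: turn received features into an event indication
        # for the operator (printed here for illustration).
        print("event indication:", features)

    rng = np.random.default_rng(1)
    frames = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(5)]
    encoder_loop(frames, threshold=20.0, transmit=server_indicate)
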
Abstract:
A method for processing arrival or removal of packages within the field of view of a video camera includes providing a database for recording packages placed in the field of view. Based on real-time analysis of successive image frames from the camera, a human person's entry into and exit from the field of view of the camera are also detected. Delivery or removal of objects is recorded in the database. In one embodiment, the method also determines whether or not a newly arrived package is placed alongside or on top of an existing package.
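
A toy Python sketch of the recording logic, under the simplifying assumption that a large frame-to-frame change marks a person entering the field of view and that a delivery is logged once the scene settles again; the real method analyses the frames in much more detail, including whether a new package sits alongside or on top of an existing one.

    import numpy as np

    def region_changed(prev_frame, frame, threshold=15.0):
        # Simple change test between successive frames (illustrative stand-in
        # for the real-time image analysis in the abstract).
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff.mean() > threshold

    class PackageDatabase:
        # Minimal database of package events in the field of view.
        def __init__(self):
            self.events = []
        def record(self, kind, frame_index):
            self.events.append({"event": kind, "frame": frame_index})

    def process_frames(frames, db):
        person_present = False
        for i in range(1, len(frames)):
            changed = region_changed(frames[i - 1], frames[i])
            if changed and not person_present:
                person_present = True           # person entered the field of view
            elif not changed and person_present:
                person_present = False          # person left; assume a delivery occurred
                db.record("package_delivered", i)

    db = PackageDatabase()
    frames = [np.zeros((120, 160), dtype=np.uint8) for _ in range(3)]
    frames.insert(1, np.full((120, 160), 255, dtype=np.uint8))  # frame with a large change
    process_frames(frames, db)
    print(db.events)
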
Abstract:
A recognition device (100) and method (200) for recognizing a target (302). The recognition device (100) includes a sensor (112) and an electronic processor (102). The electronic processor (102) is configured to receive a characteristic of the target (302) from the sensor (112). The electronic processor (102) identifies a profile based on the characteristic and compares the profile to a plurality of predetermined profiles (103) to determine an identity profile. The electronic processor (102) identifies the target (302) based on the identity profile and determines, based on at least one selected from the group consisting of a location of the target (302), a speed of the target (302), and a direction of movement of the target (302), a virtual geographic boundary (300). The electronic processor (102) causes a transmission of at least one selected from the group consisting of the identity profile and the characteristic to at least one associated device located in the virtual geographic boundary (300).
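
The sketch below illustrates only the boundary step: deriving a circular virtual geographic boundary from the target's location, speed and direction of movement, and selecting the associated devices inside it. The circular shape, the 30-second look-ahead horizon and the planar coordinates are assumptions made for this example.

    import math

    def virtual_boundary(location, speed_mps, heading_deg, horizon_s=30.0):
        # Place a circular virtual geographic boundary ahead of the target
        # along its direction of movement (assumed shape and horizon).
        dx = speed_mps * horizon_s * math.sin(math.radians(heading_deg))
        dy = speed_mps * horizon_s * math.cos(math.radians(heading_deg))
        centre = (location[0] + dx, location[1] + dy)
        radius = max(50.0, speed_mps * horizon_s)
        return centre, radius

    def devices_in_boundary(devices, centre, radius):
        # Select the associated devices located inside the boundary; these
        # receive the identity profile and/or the characteristic.
        return [d for d in devices if math.dist(d["position"], centre) <= radius]

    centre, radius = virtual_boundary(location=(0.0, 0.0), speed_mps=10.0, heading_deg=90.0)
    devices = [{"id": "cam-1", "position": (280.0, 10.0)},
               {"id": "cam-2", "position": (2000.0, 0.0)}]
    print([d["id"] for d in devices_in_boundary(devices, centre, radius)])
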
Abstract:
Implementations described herein disclose a road hazard detection method including receiving a plurality of mobile device sensor signals from one or more sensors located on a mobile device within a vehicle, determining a road hazard encountered by the vehicle by analyzing the plurality of mobile device sensor signals, and reporting the existence of the road hazard to other users. In one implementation, the road hazard detection method also uses sensor signals from various sensors located on the vehicle and/or sensor signals from mobile devices of users in other vehicles to determine the road hazard.
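
For illustration, a small Python routine that flags a road hazard from a phone's vertical-acceleration signal while the vehicle is moving, then reports it; the 1.5 g spike threshold and the minimum-speed check are assumed values, not details taken from the described implementation.

    import numpy as np

    def detect_road_hazard(accel_z, speed_kmh, spike_g=1.5):
        # Flag a hazard when the vertical acceleration measured by the mobile
        # device shows a spike while the vehicle is moving (assumed criteria).
        if speed_kmh < 10:
            return None
        peak = float(np.max(np.abs(accel_z - np.mean(accel_z))))
        if peak > spike_g:
            return {"type": "pothole-like bump", "peak_g": peak}
        return None

    def report_hazard(hazard, location):
        # Report the hazard so that other users can be warned.
        print(f"hazard reported at {location}: {hazard}")

    accel_z = np.concatenate([np.ones(50), [3.2], np.ones(50)])  # simulated bump
    hazard = detect_road_hazard(accel_z, speed_kmh=45)
    if hazard:
        report_hazard(hazard, location=(52.0, 4.0))
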
Abstract:
Techniques for augmenting video content to enhance context of the video content are described herein. In some instances, a video may be captured at a first location and transmitted to a second location, where the video is output in real-time. A context surrounding a user that is capturing the video and/or a user that is viewing the video may be used to augment the video with additional content. For example, the techniques may process speech or other input associated with either user, a gaze associated with either user, a previous conversation for either user, an area of interest identified by either user, a level of understanding of either user, an environmental condition, and so on. Based on the processing, the techniques may determine augmentation content. The augmentation content may be displayed with the video in an overlaid manner to enhance the experience of the user viewing the video.
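
A minimal sketch of the augmentation-selection step: toy context processing picks augmentation content from spoken keywords and the viewer's gaze point, and the content is attached to the frame for overlaying. The keyword lookup and the metadata-based overlay are stand-ins assumed for this example, not the techniques' actual context processing.

    def choose_augmentation(context, knowledge_base):
        # Pick augmentation content from processed context signals (speech
        # keywords and the gazed-at region); a toy stand-in for the context
        # processing described in the abstract.
        for word in context.get("speech", "").lower().split():
            if word in knowledge_base:
                return {"text": knowledge_base[word], "anchor": context.get("gaze", (0.5, 0.5))}
        return None

    def overlay(frame_metadata, augmentation):
        # "Overlay" by attaching the content to the frame's metadata; a real
        # system would render it on top of the video in real time.
        frame_metadata.setdefault("overlays", []).append(augmentation)
        return frame_metadata

    context = {"speech": "what is that bridge", "gaze": (0.7, 0.3)}
    kb = {"bridge": "example bridge description"}
    frame_metadata = {"frame_id": 42}
    aug = choose_augmentation(context, kb)
    if aug:
        print(overlay(frame_metadata, aug))
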
Abstract:
A method and system for recognizing a face image using a plurality of processing nodes. Nodes obtain parts of the face image and extract features of the obtained part, thereby generating a feature template. Nodes compare the feature template with stored subject templates and calculate an initial similarity score in respect of each comparison, thereby generating an initial score vector associated with a plurality of subjects. Each node averages the initial similarity score vectors generated by it and by at least two predefined nodes, giving rise to an intermediate score vector. The intermediate score vector is repeatedly averaged until a convergence condition is met, thereby generating a final score vector. A node associates the face image with the subject corresponding to the highest score in the final score vector, thereby recognizing the face image.
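
Since the abstract describes an iterative averaging (consensus) scheme, the sketch below runs it on toy score vectors: each node repeatedly averages its similarity score vector with those of its predefined neighbour nodes until the vectors converge, then the highest-scoring subject is chosen. The node topology, tolerance and example scores are assumptions made for illustration.

    import numpy as np

    def consensus_recognition(initial_scores, neighbours, subjects, tol=1e-6, max_iters=1000):
        # Each row of initial_scores is one node's initial similarity score
        # vector over the enrolled subjects. Every node repeatedly averages
        # its vector with those of its predefined neighbour nodes until the
        # vectors converge, then the highest-scoring subject is selected.
        scores = np.array(initial_scores, dtype=float)
        for _ in range(max_iters):
            new_scores = np.array([np.mean(scores[[i] + neighbours[i]], axis=0)
                                   for i in range(len(scores))])
            if np.max(np.abs(new_scores - scores)) < tol:   # convergence condition
                scores = new_scores
                break
            scores = new_scores
        final = scores[0]                                    # nodes agree once converged
        return subjects[int(np.argmax(final))]

    subjects = ["subject-A", "subject-B", "subject-C"]
    initial = [[0.2, 0.7, 0.1],    # node 0's initial score vector (e.g. eyes region)
               [0.3, 0.6, 0.1],    # node 1's initial score vector (e.g. nose region)
               [0.1, 0.8, 0.1]]    # node 2's initial score vector (e.g. mouth region)
    neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}           # two predefined peers per node
    print(consensus_recognition(initial, neighbours, subjects))
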