Abstract:
A video processing system is configured to process videos (150) related to activities of a user, to identify events (260) that facilitate the organization and storage of the videos (152, 154, 156) for subsequent recall. Preferably, the user wears one or more camera devices (110) that continuously record the activities of the user. Processing elements (120) analyze the recorded videos to recognize events, such as a greeting exchanged with another person. The recognized events (240, 250) are used to index or otherwise organize the videos to facilitate recollection (160), such as recalling "people I met today" or answering queries such as "when did I last speak to Wendy?" The topic of a recorded conversation, as well as recognized key words or phrases, can also be used to characterize events. A hierarchy of archives organizes events and the corresponding videos on a daily, weekly, monthly, and yearly basis.
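As a rough illustration of the hierarchical archive described in this abstract, the following Python sketch indexes recognized events under daily, weekly, monthly, and yearly keys and answers queries like the two quoted above. All names here (Event, Archive, people_met_on, last_spoke_to) are hypothetical illustrations, not taken from the disclosure.

# Minimal sketch of the hierarchical event archive; names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class Event:
    kind: str               # e.g. "greeting", "conversation"
    person: str | None      # recognized participant, if any
    topic: str | None       # conversation topic or key phrase
    timestamp: datetime
    video_ref: str          # pointer into the stored video stream

class Archive:
    """Indexes recognized events on daily/weekly/monthly/yearly keys."""
    def __init__(self):
        self.by_period = defaultdict(list)

    def add(self, event: Event):
        t = event.timestamp
        for key in (t.strftime("%Y-%m-%d"),                # daily
                    f"{t.year}-W{t.isocalendar().week}",   # weekly
                    t.strftime("%Y-%m"),                   # monthly
                    str(t.year)):                          # yearly
            self.by_period[key].append(event)

    def people_met_on(self, day: str) -> set[str]:
        # Answers a query like: people I met today.
        return {e.person for e in self.by_period[day]
                if e.kind == "greeting" and e.person}

    def last_spoke_to(self, person: str) -> Event | None:
        # Answers a query like: when did I last speak to Wendy?
        events = [e for evs in self.by_period.values() for e in evs
                  if e.person == person and e.kind == "conversation"]
        return max(events, key=lambda e: e.timestamp, default=None)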
Abstract:
A system is provided for integrative analysis of intrinsic and extrinsic audiovisual information, such as a system for analysing and correlating features in a film with features not present in the film but available through the Internet. The system comprises an intrinsic content analyser communicatively connected to an audio-visual source, e.g. a film source, for searching the film for intrinsic data and extracting the intrinsic data using an extraction algorithm. The system further comprises an extrinsic content analyser communicatively connected to an extrinsic information source, such as a film screenplay available through the Internet, for searching the extrinsic information source and retrieving extrinsic data using a retrieval algorithm. The intrinsic data and the extrinsic data are correlated in a multisource data structure, which is transformed into a high-level information structure and presented to a user of the system. The user may browse the high-level information structure for such information as the identification of actors in a film.
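A minimal sketch of how intrinsic and extrinsic data might be correlated into such a multisource data structure, assuming the intrinsic data are time-coded dialogue lines extracted from the film and the extrinsic data are (character, line) pairs from a screenplay. The fuzzy string matching via Python's difflib is an assumption of this sketch, not the patent's disclosed algorithm.

# Hedged sketch: pair extracted film dialogue with screenplay lines so each
# spoken line can be attributed to a character (toward actor identification).
from difflib import SequenceMatcher

def correlate(intrinsic_lines, screenplay_lines, threshold=0.8):
    """Build a 'multisource' structure pairing each extracted dialogue line
    with the best-matching screenplay line and its speaking character."""
    multisource = []
    for time_code, spoken in intrinsic_lines:        # (timestamp, text)
        best = max(screenplay_lines,                 # (character, text)
                   key=lambda s: SequenceMatcher(None, spoken, s[1]).ratio())
        score = SequenceMatcher(None, spoken, best[1]).ratio()
        if score >= threshold:
            multisource.append({"time": time_code,
                                "dialogue": spoken,
                                "character": best[0],
                                "confidence": score})
    return multisource

def characters_in_film(multisource):
    """High-level view for browsing: which characters speak in the film."""
    return sorted({entry["character"] for entry in multisource})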
Abstract:
A method (Figure 1) and system (Figure 2) synthesize a smart MMS message (10) derived from a preliminary SMS message (34) that conveys one or more actions and concepts. A user (12) archives (in 28; Steps 56, 58) user-acquired video clips (20) that are semantically annotated (22, 102; Step 54) both generically, according to objectively sensible content, and personally, according to user-subjective content. A service provider (18) archives (in 28; Steps 52, 58) service provider-acquired MMS clips (20) that are semantically annotated (22, 110; Step 50) generically. Annotating is effected pursuant to an ontology (118; Steps 50, 52). When the user (12) wishes to send an MMS message (10) to a recipient (16), the preliminary message (34) is sent (12, 130; Step 62) to the service provider (18), who extracts (at 132; Step 64) those semantically annotated (at 102 and 110; Steps 50, 54) archived (at 28) clips that, according to the ontology (118), match user-selected (Steps 68, 76) aspects of the action or concept of the preliminary message. When the user makes a final selection (Step 70) from the extracted clips presented thereto (Step 66), the selected clips are combined (at 142; Step 72) with each other and with the preliminary message and sent to the recipient (Step 74).
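A minimal sketch of the ontology-driven clip extraction, assuming the ontology is a simple broader-term map over annotation concepts. All names (ONTOLOGY, broader_terms, extract_clips) and the word-overlap matching are illustrative assumptions rather than the disclosed method.

# Hedged sketch: match a preliminary SMS message against semantically
# annotated clips, expanding each annotation via an ontology of broader terms.
ONTOLOGY = {"birthday": "celebration", "wedding": "celebration",
            "celebration": "event", "goal": "sport", "sport": "event"}

def broader_terms(term):
    """Yield the term and all of its ancestors in the ontology."""
    while term:
        yield term
        term = ONTOLOGY.get(term)

def extract_clips(preliminary_text, archive):
    """Return archived clips whose semantic annotations match any concept of
    the preliminary message, directly or via a broader ontology term."""
    words = set(preliminary_text.lower().split())
    matches = []
    for clip in archive:                    # clip: {"id", "annotations"}
        tags = {t for a in clip["annotations"] for t in broader_terms(a)}
        if words & tags:
            matches.append(clip)
    return matches

archive = [{"id": "clip1", "annotations": ["birthday"]},
           {"id": "clip2", "annotations": ["goal"]}]
print(extract_clips("happy birthday wendy", archive))   # -> [clip1 entry]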