Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording and presenting voice conference history information based on speaker-related information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information. The AEFS then informs a user of the conference history information, such as by presenting a transcript of the voice conference and/or related information items on a display of a conferencing device associated with the user.
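As a concrete illustration of the recording step, the following minimal Python sketch keeps a per-utterance history tagged with speaker-related information and renders it as a transcript. All names (ConferenceHistory, HistoryEntry, record, transcript) are illustrative only and are not drawn from the described embodiments.

```python
# Minimal sketch of recording and presenting voice conference history.
# All names are illustrative, not taken from the described embodiments.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class HistoryEntry:
    timestamp: datetime
    speaker: str                 # identifier of the current speaker
    text: str                    # transcribed utterance
    related_items: List[str] = field(default_factory=list)  # e.g. email subjects, document titles


@dataclass
class ConferenceHistory:
    entries: List[HistoryEntry] = field(default_factory=list)

    def record(self, speaker: str, text: str, related_items=()) -> None:
        """Append one transcribed utterance, tagged with speaker-related information."""
        self.entries.append(HistoryEntry(datetime.now(), speaker, text, list(related_items)))

    def transcript(self) -> str:
        """Render the stored history as a transcript suitable for display."""
        lines = []
        for e in self.entries:
            lines.append(f"[{e.timestamp:%H:%M:%S}] {e.speaker}: {e.text}")
            for item in e.related_items:
                lines.append(f"    related: {item}")
        return "\n".join(lines)


if __name__ == "__main__":
    history = ConferenceHistory()
    history.record("Alice", "Let's review the Q3 budget.", ["email: Q3 budget draft"])
    history.record("Bob", "I sent the updated spreadsheet yesterday.")
    print(history.transcript())
```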
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user.
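One plausible way to realize the speaker-recognition step is to compare an embedding of the incoming utterance against enrolled voice profiles and then look up information items keyed by the matched speaker. The sketch below assumes the embeddings are produced upstream by some speaker-recognition model; identify_speaker, SPEAKER_INFO, and the similarity threshold are hypothetical.

```python
# Illustrative speaker identification by nearest-neighbour search over
# enrolled voice embeddings, followed by a lookup of speaker-related items.
# The embedding extractor itself is assumed to run upstream.
from typing import Dict, Optional
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify_speaker(utterance_embedding: np.ndarray,
                     enrolled: Dict[str, np.ndarray],
                     threshold: float = 0.7) -> Optional[str]:
    """Return the enrolled speaker whose profile best matches the utterance,
    or None if no profile is similar enough."""
    best_name, best_score = None, threshold
    for name, profile in enrolled.items():
        score = cosine_similarity(utterance_embedding, profile)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


# Hypothetical speaker-related information keyed by speaker identifier.
SPEAKER_INFO = {
    "alice": {"title": "Project lead", "items": ["email: design review notes"]},
    "bob": {"title": "Engineer", "items": ["document: test plan v2"]},
}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    # Simulate an utterance whose embedding is close to Alice's profile.
    utterance = enrolled["alice"] + 0.1 * rng.normal(size=128)
    who = identify_speaker(utterance, enrolled)
    print(who, SPEAKER_INFO.get(who))
```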
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing image data. An example AEFS receives data that represents an image of a vehicle. The AEFS analyzes the received data to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
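A simple image-based heuristic for the threat determination is to estimate time-to-contact from how quickly the vehicle's apparent size grows between frames. The sketch below assumes bounding boxes are supplied by an upstream detector; time_to_contact, threat_level, and the 3-second warning threshold are illustrative choices, not the claimed method.

```python
# Sketch of an image-based threat estimate: approximate time-to-contact (TTC)
# from the growth of a vehicle's bounding-box height between two frames.
# Producing the bounding boxes is assumed to happen upstream.

def time_to_contact(height_prev: float, height_curr: float, dt: float) -> float:
    """TTC ~ size / (d size / dt). Returns infinity if the object is not growing."""
    growth = (height_curr - height_prev) / dt
    if growth <= 0:
        return float("inf")
    return height_curr / growth


def threat_level(ttc_seconds: float, warn_below: float = 3.0) -> str:
    return "WARNING: possible collision" if ttc_seconds < warn_below else "no immediate threat"


if __name__ == "__main__":
    # Bounding-box heights (pixels) of the same vehicle in two frames 0.1 s apart.
    ttc = time_to_contact(height_prev=80.0, height_curr=92.0, dt=0.1)
    print(f"estimated TTC: {ttc:.1f} s -> {threat_level(ttc)}")
```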
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first language into a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information. The AEFS then presents the message in the second language to the user.
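The language selection and translation steps could be wired together roughly as sketched below. The language inference and the translate function are stand-ins; a real system would plug in language-identification and machine-translation components. All names and the country-to-language mapping are hypothetical.

```python
# Sketch of a translation step driven by speaker-related information.
# The language inference and translation back-ends are placeholders; a real
# system would use language-identification and machine-translation models.
from typing import Dict


def infer_source_language(speaker_info: Dict[str, str], default: str = "en") -> str:
    """Prefer an explicit language attribute, fall back to country of origin."""
    if "language" in speaker_info:
        return speaker_info["language"]
    return {"DE": "de", "FR": "fr", "ES": "es"}.get(speaker_info.get("country", ""), default)


def translate(text: str, source: str, target: str) -> str:
    """Placeholder translator; substitute any machine-translation service or model here."""
    return f"[{source}->{target}] {text}"


def handle_utterance(text: str, speaker_info: Dict[str, str], user_language: str = "en") -> str:
    source = infer_source_language(speaker_info)
    if source == user_language:
        return text
    return translate(text, source, user_language)


if __name__ == "__main__":
    speaker = {"name": "Hans", "country": "DE"}
    print(handle_utterance("Guten Morgen, wie geht es Ihnen?", speaker))
```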
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing audio signals. An example AEFS receives data that represents an audio signal emitted by a vehicle. The AEFS analyzes the audio signal to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
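One crude audio-only indicator of an approaching vehicle is a steadily rising sound level; a deployed system would likely combine this with Doppler and bearing cues. The sketch below frames the signal and compares early versus late RMS levels; frame_rms, is_approaching, and the 1.5x rise threshold are illustrative assumptions, not the claimed analysis.

```python
# Crude audio-based approach detector: if the short-frame RMS level of a
# vehicle sound rises steadily, the source is likely getting closer.
import numpy as np


def frame_rms(signal: np.ndarray, frame_len: int) -> np.ndarray:
    """Root-mean-square level of consecutive, non-overlapping frames."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))


def is_approaching(signal: np.ndarray, frame_len: int = 1024, min_rise: float = 1.5) -> bool:
    """Flag a threat if the level in the last frames exceeds the first frames by min_rise x."""
    rms = frame_rms(signal, frame_len)
    if len(rms) < 4:
        return False
    q = len(rms) // 4
    early = rms[:q].mean()
    late = rms[-q:].mean()
    return late > min_rise * early


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 16_000)
    # Simulated engine noise whose amplitude grows as the vehicle nears.
    signal = (0.2 + 1.8 * t) * np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=t.size)
    print("WARNING: vehicle approaching" if is_approaching(signal) else "no threat detected")
```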
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
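The road-based device's threat analysis could, under a constant-velocity assumption, predict the closest approach between a sensed vehicle and the user and warn when it falls below a safety radius. The sketch below is such a kinematic check; Track, minimum_separation, and the 2-meter radius are hypothetical choices, not taken from the embodiments.

```python
# Sketch of a roadside unit's threat check: under a constant-velocity
# assumption, find the minimum future distance between a sensed vehicle and
# a pedestrian, and warn if it falls below a safety radius.
import math
from dataclasses import dataclass


@dataclass
class Track:
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float


def minimum_separation(a: Track, b: Track, horizon_s: float = 10.0) -> float:
    """Closest approach distance within the prediction horizon."""
    rx, ry = a.x - b.x, a.y - b.y          # relative position
    vx, vy = a.vx - b.vx, a.vy - b.vy      # relative velocity
    speed_sq = vx * vx + vy * vy
    t_star = 0.0 if speed_sq == 0 else max(0.0, min(horizon_s, -(rx * vx + ry * vy) / speed_sq))
    return math.hypot(rx + vx * t_star, ry + vy * t_star)


if __name__ == "__main__":
    vehicle = Track(x=-40.0, y=0.0, vx=12.0, vy=0.0)    # approaching along the road
    pedestrian = Track(x=0.0, y=1.0, vx=0.0, vy=0.0)    # standing near the kerb
    d = minimum_separation(vehicle, pedestrian)
    print(f"closest approach: {d:.1f} m")
    if d < 2.0:
        print("WARNING: transmit alert to pedestrian's wearable device")
```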
Abstract:
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. In one embodiment, the AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs a user of the speaker-related information, such as by presenting the speaker-related information on a display of a conferencing device associated with the user.
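The presentation step might be driven by changes of the active speaker: whenever a new speaker takes the floor, the conferencing device is updated with that speaker's related information items. The sketch below stubs out the information store and the display; INFO_ITEMS, display, and run_conference are illustrative names only.

```python
# Sketch of pushing speaker-related information to a conferencing display
# whenever the active speaker changes. The information store and display
# are stand-ins for whatever back-end a deployment would use.
from typing import Dict, Iterable, List

# Hypothetical per-speaker information items (emails, documents, etc.).
INFO_ITEMS: Dict[str, List[str]] = {
    "alice": ["email: agenda for today's call", "document: roadmap.pdf"],
    "bob": ["email: follow-up on action items"],
}


def display(user: str, lines: Iterable[str]) -> None:
    """Stand-in for rendering on the user's conferencing device."""
    for line in lines:
        print(f"[{user}'s screen] {line}")


def run_conference(user: str, speaker_turns: Iterable[str]) -> None:
    current = None
    for speaker in speaker_turns:
        if speaker != current:                 # only update on a change of speaker
            current = speaker
            display(user, [f"Now speaking: {speaker}"] + INFO_ITEMS.get(speaker, []))


if __name__ == "__main__":
    run_conference("carol", ["alice", "alice", "bob", "alice"])
```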
Abstract:
According to various embodiments, a mobile device continuously and/or automatically scans a user environment for tags containing non-human-readable data. The mobile device may continuously and/or automatically scan the environment for tags without being specifically directed at a particular tag. The mobile device may be adapted to scan for audio tags, radio frequency tags, and/or image tags. The mobile device may be configured to scan for and identify tags within the user environment that satisfy a user preference. The mobile device may perform an action in response to identifying a tag that satisfies a user preference. The mobile device may be configured to scan for a wide variety of tags, including tags in the form of quick response codes, steganographic content, audio watermarks, audio outside of a human audible range, radio frequency identification tags, long wavelength identification tags, near field communication tags, and/or a Memory Spot device.
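A background scanning loop of this kind could normalize tags from several detector pipelines and act only on those matching the user's preferences. In the sketch below the camera, radio, and microphone scanners are stubs, and Tag, scan_environment, and dispatch are hypothetical names chosen for illustration.

```python
# Sketch of a continuous tag-scanning loop: several scanners (image, radio,
# audio) feed detected tags into one dispatcher, which checks each tag against
# the user's preferences and triggers an action on a match. The scanners here
# are stubs standing in for camera, NFC/RFID, and microphone pipelines.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Tag:
    kind: str      # e.g. "qr", "nfc", "audio_watermark"
    payload: str   # decoded, non-human-readable data


def qr_scanner() -> List[Tag]:
    return [Tag("qr", "https://example.com/offer/123")]


def nfc_scanner() -> List[Tag]:
    return [Tag("nfc", "store:checkout:42")]


def audio_scanner() -> List[Tag]:
    return [Tag("audio_watermark", "broadcast:station-7")]


def scan_environment(scanners: Iterable[Callable[[], List[Tag]]]) -> List[Tag]:
    """One automatic pass over all scanners, with no user action required."""
    found: List[Tag] = []
    for scan in scanners:
        found.extend(scan())
    return found


def dispatch(tags: Iterable[Tag], prefers: Callable[[Tag], bool]) -> None:
    """Perform an action for each detected tag that satisfies a user preference."""
    for tag in tags:
        if prefers(tag):
            print(f"action triggered for {tag.kind} tag: {tag.payload}")


if __name__ == "__main__":
    # User preference: only act on QR and audio-watermark tags.
    wanted = {"qr", "audio_watermark"}
    tags = scan_environment([qr_scanner, nfc_scanner, audio_scanner])
    dispatch(tags, lambda t: t.kind in wanted)
```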