Abstract:
Systems, apparatuses and methods may provide for technology that automatically detects a user reaction of a first occupant of a shared vehicle based on first data from one or more sensors associated with the shared vehicle. Additionally, the technology may automatically determine a root cause of the user reaction based on one or more of the first data or second data. In one example, co-occupant selection criteria associated with the first occupant are automatically updated if the root cause is a second occupant of the shared vehicle.
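The pipeline in this abstract (detect a reaction, attribute a root cause, update selection criteria) can be illustrated with a minimal sketch. Everything here is hypothetical: the heart-rate and noise thresholds, the data layout, and the function names are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the detect -> root-cause -> update-criteria flow.
def detect_reaction(sensor_data):
    # Assumption: a heart-rate spike above 100 bpm signals a negative reaction.
    return "negative" if sensor_data["heart_rate"] > 100 else "neutral"

def find_root_cause(second_data):
    # Assumption: the root cause is the loudest co-occupant, if any
    # exceeds an illustrative 70 dB limit.
    noise = second_data["occupant_noise"]
    loudest = max(noise, key=noise.get)
    return loudest if noise[loudest] > 70 else None

def update_criteria(criteria, root_cause):
    # Record the offending co-occupant so future matching avoids them.
    if root_cause is not None:
        criteria.setdefault("excluded_occupants", set()).add(root_cause)
    return criteria

criteria = {}
if detect_reaction({"heart_rate": 115}) == "negative":
    cause = find_root_cause({"occupant_noise": {"occupant_2": 85}})
    criteria = update_criteria(criteria, cause)
```

After the run, `criteria` excludes the co-occupant identified as the root cause from future pairings.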
Abstract:
Technologies for virtual attribute assignment include a compute device. The compute device is configured to receive an attribute assignment command from a user and analyze the attribute assignment command to determine a user-selected virtual object, a user-referenced attribute of the user-selected virtual object, a user-selected real object, and a user-referenced attribute of the user-selected real object. Based on the attribute assignment command, the compute device is further configured to determine a state of the user-referenced attribute of the user-selected real object and update a state of the user-referenced attribute of the user-selected virtual object based on the state of the user-referenced attribute of the user-selected real object.
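The described flow (parse an assignment command, read the real object's attribute state, mirror it onto the virtual object) can be sketched as follows. The command grammar, object dictionaries, and function name are illustrative assumptions only.

```python
def apply_attribute_assignment(command, real_objects, virtual_objects):
    # Assumption: commands take the form
    # "set <virtual_object>.<attribute> from <real_object>.<attribute>".
    target, source = command.replace("set ", "").split(" from ")
    v_obj, v_attr = target.split(".")
    r_obj, r_attr = source.split(".")
    state = real_objects[r_obj][r_attr]        # determine real-object state
    virtual_objects[v_obj][v_attr] = state     # update virtual-object state
    return virtual_objects

real = {"lamp": {"color": "red"}}
virtual = {"avatar_lamp": {"color": "blue"}}
apply_attribute_assignment("set avatar_lamp.color from lamp.color",
                           real, virtual)
```

After the call, the virtual lamp's `color` reflects the real lamp's state.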
Abstract:
A method is described to facilitate behavioral nudging. The method includes receiving sensory data from one or more wearable devices, determining a context for a user wearing the one or more wearable devices based on the sensory data, determining a mechanism to nudge the user to reinforce user behavior based on stored preferences and policies and transmitting a nudging stimulus to at least one of the wearable devices via the determined mechanism to provide a notification to the user.
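A minimal sketch of the nudging loop: derive a context from sensory data, look up a nudge mechanism in stored preferences, and build the stimulus to transmit. The step-count heuristic, preference format, and device names are assumptions for illustration.

```python
def determine_context(sensory_data):
    # Assumption: fewer than 50 steps in the last hour means "sedentary".
    return "sedentary" if sensory_data["steps_last_hour"] < 50 else "active"

def nudge(sensory_data, preferences):
    # preferences maps a context to a (wearable device, stimulus) pair.
    context = determine_context(sensory_data)
    device, stimulus = preferences.get(context, ("watch", "vibration"))
    return {"device": device, "stimulus": stimulus, "context": context}

result = nudge({"steps_last_hour": 12},
               {"sedentary": ("wristband", "haptic_pulse")})
```

The returned record stands in for the nudging stimulus transmitted to the chosen wearable.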
Abstract:
Systems and techniques for computer vision and sensor assisted contamination tracking are described herein. It may be identified that a food item has moved to a monitored area using computer vision. Sensor readings may be obtained from a sensor array. A contamination of the food item may be determined using the sensor readings. The contamination of the food item may be associated with a contamination area in the monitored area using the computer vision. A notification may be output for display in the contamination area indicating the contamination.
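The determination-and-notification step can be sketched without the computer-vision front end: sensor readings are compared against thresholds, and a notification is produced for the contamination area. The reading names and thresholds are illustrative assumptions.

```python
def check_contamination(readings, thresholds):
    # A reading with no configured threshold is never flagged.
    return [name for name, value in readings.items()
            if value > thresholds.get(name, float("inf"))]

def track(item, area, readings, thresholds):
    # Associate any flagged readings with the monitored area and
    # build the notification text for display there.
    contaminants = check_contamination(readings, thresholds)
    if not contaminants:
        return None
    return {"item": item, "area": area, "contaminants": contaminants,
            "notification": f"Contamination detected in {area}"}

report = track("chicken_breast", "counter_3",
               {"bacteria_ppm": 500, "temp_c": 4},
               {"bacteria_ppm": 100})
```

In a full system the `area` would come from the computer-vision tracking described in the abstract.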
Abstract:
Systems, apparatuses and methods may provide for visually or audibly indicating to users what areas are being covered or monitored by cameras, microphones, motion sensors, capacitive surfaces, or other sensors. Indicators such as projectors, audio output devices, ambient lighting, haptic feedback devices, and augmented reality may indicate the coverage areas based on a query from a user.
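Answering a user's coverage query amounts to finding which sensors cover a given point so the corresponding indicators can be driven. A simple circular-range model is assumed here purely for illustration.

```python
import math

def covering_sensors(sensors, point):
    # Assumption: each sensor covers a circular region of radius "range"
    # around its (x, y) position.
    x, y = point
    return [s["id"] for s in sensors
            if math.hypot(s["x"] - x, s["y"] - y) <= s["range"]]

sensors = [{"id": "cam_1", "x": 0, "y": 0, "range": 5},
           {"id": "mic_1", "x": 10, "y": 0, "range": 3}]
covered_by = covering_sensors(sensors, (1, 1))
```

The resulting sensor IDs would be handed to an indicator (projector, ambient light, haptics, AR overlay) to show the coverage.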
Abstract:
One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually, or they may be linked and activated automatically when they come within a threshold proximity or when a user performs a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for a response, routing and timing the responses accordingly.
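The urgency-based routing in the last sentence can be sketched as a simple dispatcher. The urgency score, threshold, and channel names are illustrative assumptions, not details from the patent.

```python
def route_event(event, threshold=0.8):
    # Assumption: events carry a normalized urgency score in [0, 1].
    # Extremely urgent events go immediately to every nearby device;
    # less-urgent events are queued back to the originating device.
    if event["urgency"] >= threshold:
        return {"channel": "immediate", "targets": event["nearby_devices"]}
    return {"channel": "queued", "targets": [event["origin"]]}

urgent = route_event({"urgency": 0.95, "origin": "unit_1",
                      "nearby_devices": ["unit_2", "unit_3"]})
routine = route_event({"urgency": 0.3, "origin": "unit_1",
                       "nearby_devices": ["unit_2"]})
```

In the ad-hoc network described above, the `targets` list would map to whichever wearable or vehicle-mounted indicators are currently in range.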
Abstract:
Systems, devices, and techniques are provided for occupancy assessment of a vehicle. For one or more occupants of the vehicle, the occupancy assessment establishes position and/or identity for some or all of the occupant(s).
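A minimal sketch of establishing position and identity for vehicle occupants, assuming seat weight sensors and a hypothetical paired-device registry; the 20 kg occupancy threshold and data layout are illustrative only.

```python
def assess_occupancy(seat_sensors, id_registry):
    # Assumption: a seat is occupied when its weight reading exceeds 20 kg.
    # Identity (when available) comes from a paired-device lookup.
    occupants = []
    for seat, reading in seat_sensors.items():
        if reading["weight_kg"] > 20:
            occupants.append({"seat": seat,
                              "identity": id_registry.get(reading.get("device_id"))})
    return occupants

occupants = assess_occupancy(
    {"front_left": {"weight_kg": 72, "device_id": "phone_1"},
     "rear_right": {"weight_kg": 0}},
    {"phone_1": "Alice"})
```

The result gives position for every detected occupant and identity only where a lookup succeeds, matching the abstract's "position and/or identity" framing.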
Abstract:
Various systems and methods for transmitting a message to a secondary computing device are described herein. An apparatus comprises a context processing module, a context-aware message mode module, and a message retrieval module. The context processing module retrieves a context of a user of a primary computing device. The context-aware message mode module identifies a message mode for communicating with a secondary computing device of the user based on the context. The message retrieval module receives a communication message at the primary computing device, determines that the communication message is to be transmitted to the secondary computing device of the user based on the message mode, and, based on that determination, translates the communication message into a translated message according to the message mode and transmits the translated message to the secondary computing device from the primary computing device.
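The mode-selection and translation steps can be sketched as below. The context-to-mode mapping, the 40-character truncation, and the text-to-speech wrapper are all illustrative assumptions.

```python
def select_mode(context):
    # Assumption: driving demands audio delivery; meetings demand short text.
    return {"driving": "audio", "meeting": "short_text"}.get(context, "full_text")

def translate(message, mode):
    if mode == "short_text":
        return message[:40]              # truncate for a glanceable wearable
    if mode == "audio":
        return f"<tts>{message}</tts>"   # wrap for text-to-speech playback
    return message

def deliver(message, context):
    # Combine mode identification and translation before transmission.
    mode = select_mode(context)
    return {"mode": mode, "payload": translate(message, mode)}

out = deliver("Call me when you land", "driving")
```

A real implementation would then transmit `payload` from the primary device to the secondary device over whatever link the two share.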
Abstract:
Methods, apparatus, systems and articles of manufacture to monitor tasks performed in an environment of interest are disclosed. One such apparatus is a task monitor that includes a signature comparator to compare a first signature to a second signature. The first signature is generated based on first audio collected in an environment of interest, and the second signature is generated based on second audio collected during performance of a task in the environment of interest. The task monitor also includes an object identifier to identify an object corresponding to a location determined based on an angle of arrival of the first audio at a sensor. The task monitor further includes a task identifier to identify the task as the cause of the first audio when the signature comparator determines that the first signature matches the second signature and the object identifier identifies the object as the same object used to perform the task.
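The two-condition identification (signature match plus same object at the audio's bearing) can be sketched as follows. The fixed-length signature format, mean-absolute-difference comparison, 10-degree bearing bins, and tolerance value are illustrative assumptions.

```python
def match_signature(sig_a, sig_b, tolerance=0.1):
    # Compare two fixed-length audio signatures by mean absolute difference.
    diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return diff <= tolerance

def identify_task(live_sig, angle_deg, task_db, object_map):
    # object_map: bearing (rounded to 10 degrees) -> object at that location.
    # task_db: task name -> (reference signature, object used for the task).
    obj = object_map.get(round(angle_deg / 10) * 10)
    for task, (ref_sig, task_obj) in task_db.items():
        # Identify the task only when BOTH conditions hold.
        if match_signature(live_sig, ref_sig) and obj == task_obj:
            return task
    return None

task = identify_task([0.50, 0.50], 42,
                     {"drilling": ([0.52, 0.48], "drill")},
                     {40: "drill"})
```

Note the function returns `None` when the signatures match but the object at the arrival angle is not the one used for the task, mirroring the conjunction in the abstract.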
Abstract:
A mechanism is described for facilitating personal assistance for curation of multimedia and generation of stories at computing devices according to one embodiment. A method of embodiments, as described herein, includes receiving, by one or more capturing/sensing components at a computing device, one or more media items relating to an event, and capturing a theme from the one or more media items, where the theme is captured based on at least one of activities, textual content, and scenes associated with the event. The method may further include forming a plurality of story elements to generate a story relating to the event, where the plurality of story elements are formed based on at least one of one or more characters, the theme associated with the event, and one or more emotions associated with the one or more characters, wherein the story is presented, via one or more display devices, to one or more users having access to the one or more display devices.
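The story-element formation step can be illustrated with a toy aggregator: pick the dominant theme across media items, collect characters and emotions, and combine them into story elements. The per-item tag layout and the pairing rule are assumptions made for this sketch.

```python
from collections import Counter

def build_story(media_items):
    # Assumption: each media item carries theme, character, and emotion tags
    # (in practice these would come from the capturing/sensing components).
    themes = Counter(item["theme"] for item in media_items)
    theme = themes.most_common(1)[0][0]
    characters = sorted({c for item in media_items for c in item["characters"]})
    emotions = sorted({item["emotion"] for item in media_items})
    elements = [f"{c} felt {e}" for c in characters for e in emotions]
    return {"theme": theme, "characters": characters, "elements": elements}

story = build_story([
    {"theme": "birthday", "characters": ["Ana"], "emotion": "joy"},
    {"theme": "birthday", "characters": ["Ana", "Ben"], "emotion": "surprise"},
])
```

The resulting structure stands in for the story elements that the abstract describes being presented via one or more display devices.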