Abstract:
A system for using a drone for network connectivity is described. The system may comprise: a connectivity module to detect an error associated with network traffic on a network connection utilized by a user device, and to query a connection datastore to retrieve at least one access point location that at least one device of the user has utilized within a predetermined period; and a drone coordination module to transmit configuration settings to the drone, the configuration settings including the at least one access point location and a mode of operation for the drone, and to route at least a portion of the network traffic of the user device to the drone for transmission according to the configuration settings.
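As a rough illustration of the flow this abstract describes, the sketch below pairs a connectivity module with a drone coordination module. Every name here (ConnectivityModule, DroneCoordinator, the dict-based datastore, the dropped-packet error signal) is a hypothetical stand-in for illustration, not the claimed implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class ConfigurationSettings:
    access_point_locations: list  # locations the user's devices recently used
    mode_of_operation: str        # e.g. "relay" or "mobile-access-point"

class ConnectivityModule:
    """Detects traffic errors and looks up recently used access points."""

    def __init__(self, connection_datastore):
        # device id -> [(access point location, last-used unix timestamp), ...]
        self.datastore = connection_datastore

    def detect_error(self, traffic_status):
        # Hypothetical signal: any dropped packets count as a connection error.
        return traffic_status.get("dropped_packets", 0) > 0

    def recent_access_points(self, device_id, period_s=3600.0):
        # Retrieve access points the device used within the predetermined period.
        cutoff = time.time() - period_s
        return [loc for loc, last_used in self.datastore.get(device_id, [])
                if last_used >= cutoff]

class DroneCoordinator:
    """Configures the drone and routes a portion of traffic through it."""

    def configure(self, drone, settings):
        drone["config"] = settings  # stand-in for a real radio uplink

    def route_traffic(self, drone, packets):
        drone.setdefault("queue", []).extend(packets)

store = {"phone-1": [("cafe-ap", time.time() - 600)]}
cm, dc = ConnectivityModule(store), DroneCoordinator()
if cm.detect_error({"dropped_packets": 12}):
    settings = ConfigurationSettings(cm.recent_access_points("phone-1"), "relay")
    drone = {}
    dc.configure(drone, settings)
    dc.route_traffic(drone, [b"pkt-0", b"pkt-1"])
```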
Abstract:
One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually or they may be linked and activated automatically when they come within a threshold proximity or when a user does a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for response, routing and timing the responses accordingly.
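The urgency triage in the last sentence could look like the following minimal sketch, assuming each event already carries a numeric urgency score; the threshold and the two handlers (respond_immediately, queue_for_response) are invented for illustration.

```python
URGENT_THRESHOLD = 0.8  # hypothetical cutoff between immediate and deferrable events

def respond_immediately(event):
    print(f"ALERT now: {event['kind']}")  # e.g. notify the nearest teammate

def queue_for_response(event, queue):
    queue.append(event)  # handled at the next scheduled check-in

def triage(events, queue):
    # Route and time responses by urgency, most urgent first.
    for event in sorted(events, key=lambda e: e["urgency"], reverse=True):
        if event["urgency"] >= URGENT_THRESHOLD:
            respond_immediately(event)
        else:
            queue_for_response(event, queue)

deferred = []
triage([{"kind": "fall detected", "urgency": 0.95},
        {"kind": "low battery", "urgency": 0.20}], deferred)
```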
Abstract:
Apparatuses, methods and storage media for providing augmented reality (AR) effects in stop-motion content are described. In one instance, an apparatus may include a processor, a content module to be operated by the processor to obtain a plurality of frames having stop-motion content, some of which may include an indication of an augmented reality effect, and an augmentation module to be operated by the processor to detect the indication of the augmented reality effect and add the augmented reality effect corresponding to the indication to some of the plurality of frames. Other embodiments may be described and claimed.
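As a loose sketch of the detect-and-augment loop, assuming the "indication" is a per-frame marker string and the effect is simply a label appended to the frame; a real augmentation module would detect indications in the image content itself.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    pixels: object                # stand-in for stop-motion image data
    ar_marker: str | None = None  # indication of an AR effect, if any
    effects: list = field(default_factory=list)

# Hypothetical marker -> effect table for this sketch.
EFFECTS = {"spark": "particle-sparks", "smoke": "volumetric-smoke"}

def augment(frames):
    for frame in frames:
        if frame.ar_marker in EFFECTS:                      # detect the indication
            frame.effects.append(EFFECTS[frame.ar_marker])  # add the AR effect

clip = [Frame(pixels=None, ar_marker="spark"), Frame(pixels=None)]
augment(clip)
```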
Abstract:
A mechanism is described for dynamically facilitating tracking of targets and generating and communicating messages at computing devices according to one embodiment. An apparatus of embodiments, as described herein, includes one or more capturing/sensing components to facilitate seeking of the apparatus, where the apparatus is associated with a user, and recognition/transformation logic to recognize the apparatus. The apparatus may further include command and data analysis logic to analyze a command received at the apparatus from the user, where the command indicates sending a message to the apparatus. The apparatus may further include message generation and preparation logic to generate the message based on the analysis of the command, and communication/compatibility logic to communicate the message.
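A toy version of the command-to-message pipeline, under the assumption of a trivial "send <recipient> <text>" command grammar; the function names mirror the logic blocks named above but are otherwise hypothetical.

```python
def analyze_command(command):
    # Hypothetical grammar: "send <recipient> <text...>"
    verb, recipient, *words = command.split()
    if verb != "send":
        raise ValueError("only 'send' commands are modeled in this sketch")
    return {"recipient": recipient, "text": " ".join(words)}

def generate_message(analysis):
    # Message generation and preparation, reduced to a dict.
    return {"to": analysis["recipient"], "body": analysis["text"]}

def communicate(message, outbox):
    outbox.append(message)  # stand-in for a real radio/network transport

outbox = []
communicate(generate_message(analyze_command("send alice meet at dock 4")), outbox)
```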
Abstract:
Technologies for providing cues to a user of a cognitive cuing system are disclosed. The cues can be based on the context of the user. The cognitive cuing system communicates with a knowledge-based system which provides information based on the context, such as the name of a person and the relationship the user of the cognitive cuing system has with the person. The cues can be provided to the user of the cognitive cuing system through visual, auditory, or haptic means.
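One way to picture the lookup-and-cue flow, assuming the context has already been reduced to a key (here a hypothetical face identifier) and the knowledge base is a plain dict; a real system would query a remote knowledge-based system and render the cue visually, audibly, or haptically.

```python
KNOWLEDGE_BASE = {  # hypothetical context key -> facts
    "face:42": {"name": "Maria", "relationship": "neighbor"},
}

def cue_for_context(context_key):
    facts = KNOWLEDGE_BASE.get(context_key)
    if facts is None:
        return None
    return f"This is {facts['name']}, your {facts['relationship']}."

def deliver(cue, channel="auditory"):
    # A real device would render this through a visual, auditory, or haptic channel.
    print(f"[{channel}] {cue}")

cue = cue_for_context("face:42")
if cue:
    deliver(cue)
```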
Abstract:
Examples include a determination of how to manage storage of a video clip generated from recorded video based upon a sensor event. Managing storage of the video clip may include determining whether to save or delete the video clip based on an imprint associated with an object that indicates whether the object is included in the video clip.
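The save-or-delete decision reduces to a small predicate over the imprint, as in this sketch; VideoClip and the imprint strings are illustrative stand-ins.

```python
from dataclasses import dataclass

@dataclass
class VideoClip:
    clip_id: str
    imprints: set  # object imprints detected in the clip

def manage_storage(clip, object_of_interest):
    # Keep the clip only if the imprint indicates the object appears in it.
    return "save" if object_of_interest in clip.imprints else "delete"

clip = VideoClip("cam0-000317", {"person:alice", "vehicle:truck"})
print(manage_storage(clip, "person:alice"))  # save
print(manage_storage(clip, "person:bob"))    # delete
```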
Abstract:
In embodiments, apparatuses, methods and storage media (transitory and non-transitory) are described that receive sensor data from one or more sensor devices, the sensor data depicting a user gesture in three-dimensional space, determine a flight path based at least in part on the sensor data, and store the flight path in memory for use to control operation of a drone. Other embodiments may be described and/or claimed.
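A minimal sketch of turning gesture samples into a stored flight path, assuming the sensor data arrives as (x, y, z) positions and that a fixed scale factor maps hand motion to drone waypoints; both assumptions are invented for illustration.

```python
# Hypothetical gesture samples: (x, y, z) positions traced by the user's hand.
gesture = [(0.0, 0.0, 1.0), (0.5, 0.1, 1.2), (1.0, 0.0, 1.5), (1.5, -0.1, 1.5)]

def to_flight_path(samples, scale=10.0):
    # Scale hand-sized motion up to drone-sized waypoints.
    return [(x * scale, y * scale, z * scale) for x, y, z in samples]

# Store the derived path under a name for later use in controlling the drone.
stored_paths = {"wave-and-climb": to_flight_path(gesture)}
```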
Abstract:
System and techniques for user input via elastic deformation of a material are described herein. The morphology of an elastic material may be observed with a sensor. The observations may include a first and a second morphological sample of the elastic material. The first and second morphological samples may be compared against each other to ascertain a variance. The variance may be filtered to produce an output. The output may be translated into a user input parameter. A device action corresponding to the user input parameter may be invoked.
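The compare/filter/translate pipeline might be sketched as below, assuming each morphological sample is a list of surface readings and that the noise floor and squeeze threshold are made-up constants.

```python
def variance(sample_a, sample_b):
    # Point-by-point difference between two morphological samples.
    return [b - a for a, b in zip(sample_a, sample_b)]

def filtered(deltas, noise_floor=0.05):
    # Suppress readings below a hypothetical sensor-noise threshold.
    return [d if abs(d) > noise_floor else 0.0 for d in deltas]

def to_input_parameter(deltas):
    # Translate the surviving deformation into a scalar, e.g. squeeze strength.
    return max(deltas, default=0.0)

resting = [0.00, 0.01, 0.02, 0.01]
pressed = [0.00, 0.30, 0.45, 0.02]
squeeze = to_input_parameter(filtered(variance(resting, pressed)))
if squeeze > 0.25:  # invoke a device action for a firm squeeze
    print("volume up")
```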
Abstract:
Various systems and methods for personal sensory drones are described herein. A personal sensory drone system includes a drone remote control system comprising: a task module to transmit a task to a drone swarm for the drone swarm to execute, the drone swarm including at least two drones; a transceiver to receive information from the drone swarm related to the task; and a user interface module to present a user interface based on the information received from the drone swarm.
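A compact sketch of the three-part remote control described above (task module, transceiver, user interface module); the class, the uplink callable, and the report format are all hypothetical.

```python
class DroneRemoteControl:
    """Sketch of the task / transceiver / UI split described above."""

    def __init__(self, swarm_ids):
        assert len(swarm_ids) >= 2, "a swarm includes at least two drones"
        self.swarm_ids = swarm_ids
        self.reports = []

    def transmit_task(self, task, uplink):
        for drone_id in self.swarm_ids:  # task module: send to every drone
            uplink(drone_id, task)

    def receive(self, report):
        self.reports.append(report)      # transceiver: collect task-related info

    def render_ui(self):
        # User interface module: summarize what the swarm has reported.
        return f"{len(self.reports)} report(s) from {len(self.swarm_ids)} drones"

rc = DroneRemoteControl(["d1", "d2"])
rc.transmit_task({"kind": "survey", "area": "north-field"},
                 uplink=lambda d, t: rc.receive({"drone": d, "ack": t["kind"]}))
print(rc.render_ui())
```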