Abstract:
Computationally implemented methods and systems include receiving first data that at least identifies one or more augmentations that were remotely displayed in one or more remotely displayed augmented views of one or more actual scenes; receiving second data indicating one or more user reactions of one or more users in response to the remote display of the one or more remotely displayed augmented views; and correlating the one or more user reactions with the one or more augmentations that were remotely displayed through the one or more remotely displayed augmented views. In addition to the foregoing, other aspects are described in the claims, drawings, and text.
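As a minimal sketch of this flow, the Python fragment below groups received reaction records (the second data) under the identifiers of the remotely displayed augmentations (the first data); the Reaction class, the correlate function, and the reaction kinds are all hypothetical, not drawn from the claims.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Reaction:
    user_id: str
    augmentation_id: str
    kind: str  # e.g. "gaze_dwell", "dismissal", "selection" (illustrative)

def correlate(displayed_augmentation_ids, reactions):
    """Pair the second data (reactions) with the first data
    (identifiers of remotely displayed augmentations)."""
    shown = set(displayed_augmentation_ids)
    by_augmentation = defaultdict(list)
    for reaction in reactions:
        if reaction.augmentation_id in shown:
            by_augmentation[reaction.augmentation_id].append(reaction)
    return dict(by_augmentation)
```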
Abstract:
Disclosed herein are example embodiments for base station control for an unoccupied flying vehicle (UFV). For certain example embodiments, at least one machine, such as a base station, may: (i) obtain at least one indicator of at least one flight attribute corresponding to a first UFV; or (ii) transmit to a second UFV at least one indicator of at least one flight attribute corresponding to the first UFV. However, claimed subject matter is not limited to any particular described embodiments, implementations, examples, or so forth.
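A minimal sketch of the two recited operations, assuming a simple in-memory base station and a link object exposing a send() method (both hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlightAttributeIndicator:
    source_ufv: str   # the first UFV
    attribute: str    # e.g. "heading_deg", "altitude_m" (illustrative)
    value: float

@dataclass
class BaseStation:
    received: List[FlightAttributeIndicator] = field(default_factory=list)

    def obtain(self, indicator: FlightAttributeIndicator) -> None:
        # (i) obtain an indicator of a flight attribute of the first UFV
        self.received.append(indicator)

    def relay(self, second_ufv_link) -> None:
        # (ii) transmit the obtained indicators to a second UFV; the
        # link's send() method is an assumption, not from the source
        for indicator in self.received:
            second_ufv_link.send(indicator)
```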
Abstract:
Disclosed herein are example embodiments for base station multi-vehicle coordination. For certain example embodiments, at least one machine, such as a base station, may: (i) effectuate one or more communications with at least a first UFV and a second UFV; or (ii) transmit to the first UFV at least one command based at least partially on the one or more communications with at least the first UFV and the second UFV. However, claimed subject matter is not limited to any particular described embodiments, implementations, examples, or so forth.
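The following sketch illustrates one way such coordination could look, assuming hypothetical UFVLink objects and a toy altitude-deconfliction rule standing in for whatever command logic an embodiment might use:

```python
class UFVLink:
    """Hypothetical communication link to a single UFV."""
    def __init__(self, altitude_m):
        self.altitude_m = altitude_m
        self.sent = []

    def report(self):              # (i) a communication from the UFV
        return {"altitude_m": self.altitude_m}

    def send(self, command):       # (ii) a command to the UFV
        self.sent.append(command)

def coordinate(first, second, min_sep_m=50.0):
    """Command the first UFV based on communications with both UFVs
    (a toy vertical-deconfliction rule, purely illustrative)."""
    gap = abs(first.report()["altitude_m"] - second.report()["altitude_m"])
    if gap < min_sep_m:
        first.send({"command": "climb", "delta_m": min_sep_m - gap})
```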
Abstract:
Computationally implemented methods and systems include obtaining visual data of an actual view of a scene from a real environment, determining whether activity-inferring data that infer at least initial occurrence of one or more user activities associated with the scene from the real environment have at least been acquired, and presenting, in response at least in part to determining that the activity-inferring data have at least been acquired, an augmented view of the scene from the real environment, the augmented view including one or more augmentations that have been included into the augmented view based, at least in part, on the activity-inferring data. In addition to the foregoing, other aspects are described in the claims, drawings, and text.
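A minimal sketch of the gating described above, assuming activity-inferring data arrives as a mapping of inferred activities to confidence scores (a representation chosen here for illustration only):

```python
def present_view(frame, activity_scores, pick_augmentations):
    """activity_scores maps inferred activity -> confidence, or is
    None/empty if activity-inferring data has not yet been acquired."""
    if not activity_scores:
        return frame, []                 # plain actual view, no overlays
    activity = max(activity_scores, key=activity_scores.get)
    return frame, pick_augmentations(activity)   # augmented view

# Illustrative use (all values hypothetical):
view, overlays = present_view(
    "camera-frame",
    {"jogging": 0.8, "commuting": 0.2},
    lambda activity: [f"{activity}-pace-widget"],
)
```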
Abstract:
Structures and protocols are presented for signaling a status or decision (processing or transmitting a medical record or other resource, e.g.) conditionally. Such signaling may be partly based on one or more symptoms, regimen attributes, performance indicia (compliance indications, e.g.), privacy considerations (patient consent, e.g.), contextual considerations (being in or admitted by a care facility, e.g.), sensor data, or other such determinants. In some contexts, such signaling may trigger an incentive being manifested (as a dispensation of an item, e.g.), an intercommunication (telephone call, e.g.) beginning, a device being configured (enabled or customized, e.g.), data distillations being presented or tracked, or other such results.
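As an illustration of the conditional-signaling pattern, the sketch below checks a few of the named determinants before allowing a record to be transmitted; the field names and the all-must-hold policy are assumptions, not the disclosed protocol:

```python
def decide_signal(context):
    """context: a dict of determinant values; the keys and the
    all-determinants-must-hold policy are illustrative assumptions."""
    determinants = (
        context.get("patient_consent"),        # privacy consideration
        context.get("admitted_to_facility"),   # contextual consideration
        context.get("regimen_compliant"),      # performance indicium
    )
    if all(determinants):
        return "transmit_record"   # could also trigger an incentive,
    return "withhold"              # a call, or a device configuration
```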
Abstract:
Computationally implemented methods and systems include detecting one or more user reactions of a user in response to a display to the user of an augmented view of an actual scene from a real environment, the augmented view that was displayed including one or more augmentations, and correlating the detected one or more user reactions with at least one or more aspects associated with the one or more augmentations that were included in the augmented view that was displayed. In addition to the foregoing, other aspects are described in the claims, drawings, and text.
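A minimal sketch of the detect-and-correlate step, assuming reactions arrive as screen-coordinate events and each augmentation carries a bounding box (both representational choices made here for illustration):

```python
def correlate_reactions(reaction_events, augmentations):
    """reaction_events: (x, y, kind) tuples, e.g. gaze fixations;
    augmentations: dicts with an "id", an "aspect", and a "bbox"."""
    correlations = []
    for x, y, kind in reaction_events:
        for aug in augmentations:
            x0, y0, x1, y1 = aug["bbox"]
            if x0 <= x <= x1 and y0 <= y <= y1:
                correlations.append({
                    "augmentation": aug["id"],
                    "aspect": aug.get("aspect", "form"),
                    "reaction": kind,
                })
    return correlations
```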
Abstract:
Methods, apparatuses, computer program products, devices, and systems are described that carry out accepting a request associated with at least one of an item, an aspect, or an element that is not present in a field of view of a user's augmented reality device; presenting in a display of the augmented reality device at least one augmented reality representation related to the at least one item, aspect, or element in response to accepting the request; and processing the request and any related interaction of the user via the at least one augmented reality representation.
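One way to sketch the accept/present/process sequence, with a hypothetical ARSession class standing in for the device-side logic:

```python
class ARSession:
    """Hypothetical device-side logic; none of these names are
    from the disclosure."""

    def __init__(self, items_in_view):
        self.items_in_view = set(items_in_view)
        self.display = []

    def accept_request(self, item):
        # Accept a request for an item absent from the field of view
        if item not in self.items_in_view:
            self.display.append(f"AR representation of {item}")

    def process_interaction(self, item, gesture):
        # Handle user interaction carried out via the representation
        return f"handled {gesture} on representation of {item}"
```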
Abstract:
According to various embodiments, a mobile device continuously and/or automatically scans a user environment for tags containing non-human-readable data. The mobile device may continuously and/or automatically scan the environment for tags without being specifically directed at a particular tag. The mobile device may be adapted to scan for audio tags, radio frequency tags, and/or image tags. The mobile device may be configured to scan for and identify tags within the user environment that satisfy a user preference. The mobile device may perform an action in response to identifying a tag that satisfies a user preference. The mobile device may be configured to scan for a wide variety of tags, including tags in the form of quick response codes, steganographic content, audio watermarks, audio outside of the human audible range, radio frequency identification tags, long wavelength identification tags, near field communication tags, and/or Memory Spot devices.
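A minimal sketch of the continuous, undirected scanning loop, assuming hypothetical sensor objects whose read() yields decoded tags and a user-preference predicate:

```python
import time

def scan_loop(sensors, matches_preference, act, poll_s=0.5):
    """Continuously and automatically poll every sensor (audio, radio
    frequency, image) for decoded tags; no aiming at a particular tag
    is required. Runs until interrupted."""
    while True:
        for sensor in sensors:
            for tag in sensor.read():        # read() is assumed to
                if matches_preference(tag):  # yield decoded tag payloads
                    act(tag)
        time.sleep(poll_s)
```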
Abstract:
Computationally implemented methods and systems include receiving augmentation data associated with one or more first augmentations, the one or more first augmentations having been included in a first augmented view of a first actual scene that was remotely displayed at a remote augmented reality (AR) device, displaying one or more second augmentations in a second augmented view of a second actual scene, the displaying of the one or more second augmentations being in response, at least in part, to the augmentation data, and transmitting to the remote AR device usage data that indicates usage information related at least to usage or non-usage of the received augmentation data. In addition to the foregoing, other aspects are described in the claims, drawings, and text.
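A minimal sketch of the device-to-device exchange, assuming the augmentation data arrives as a dict and send_to_remote is a hypothetical callback back to the originating AR device:

```python
def handle_received_augmentation(data, local_view, send_to_remote):
    """data: augmentation data for the first augmentation(s), received
    from the remote AR device; local_view: state of the second augmented
    view; send_to_remote: callback for the usage report."""
    usable = data.get("kind") in local_view["supported_kinds"]
    if usable:
        # Display one or more second augmentations derived from the data
        local_view["overlays"].append({"derived_from": data["id"]})
    # Report usage or non-usage back to the originating device
    send_to_remote({"augmentation_id": data["id"], "used": usable})
```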
Abstract:
Computationally implemented methods and systems include presenting a first augmented view of a first scene from a real environment, the first augmented view to be presented including one or more persistent augmentations in a first one or more formats, the inclusion of the one or more persistent augmentations in the first augmented view being independent of the presence of one or more visual cues in the actual view of the first scene from the real environment, obtaining an actual view of a second scene from the real environment that is different from the actual view of the first scene, and presenting a second augmented view of the second scene from the real environment, the second augmented view to be presented including the one or more persistent augmentations in a second one or more formats that are based, at least in part, on multiple input factors.
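A minimal sketch of per-scene reformatting of a persistent augmentation, where the input factors and the style rules are purely illustrative:

```python
def format_persistent_augmentation(content, factors):
    """The augmentation is always included (independent of visual cues
    in the scene); only its format varies with the input factors."""
    if factors.get("ambient_light", 1.0) < 0.3:
        style = "high-contrast"
    elif factors.get("user_moving", False):
        style = "minimal"
    else:
        style = "detailed"
    return {"content": content, "format": style}

# First scene vs. second scene (factor values illustrative):
first = format_persistent_augmentation("time-and-weather", {"ambient_light": 0.9})
second = format_persistent_augmentation("time-and-weather", {"user_moving": True})
```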