Abstract:
Content creation collection and navigation techniques and systems are described. In one example, a representative image is used by a content sharing service to interact with a collection of images provided as part of a search result. In another example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics. In a further example, a user interface image navigation control is configured to support user navigation through images based on one or more metrics identified for an object selected from the image. In yet another example, collections of images are leveraged as part of content creation. In another example, data obtained from a content sharing service is leveraged to indicate suitability of images of a user for licensing as part of the service.
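The metric-driven selection and navigation described above can be sketched as follows. This is a minimal illustration, not the patented method: the metric names ("views", "downloads") and the scoring scheme are assumptions introduced for the example.

```python
# Hypothetical sketch: choosing a representative image for a collection
# and ordering the collection for a metric-based navigation control.

def representative_image(collection, metric="views"):
    """Return the image with the highest value for the given metric."""
    return max(collection, key=lambda img: img.get(metric, 0))

def order_for_navigation(collection, metric="downloads"):
    """Order images so a navigation control can step through them by metric."""
    return sorted(collection, key=lambda img: img.get(metric, 0), reverse=True)

collection = [
    {"id": "a", "views": 120, "downloads": 4},
    {"id": "b", "views": 300, "downloads": 9},
    {"id": "c", "views": 45,  "downloads": 12},
]
```

A content sharing service could surface `representative_image(collection)` as the thumbnail standing in for the whole collection in a search result.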
Abstract:
Image compensation for an occluding direct-view augmented reality system is described. In one or more embodiments, an augmented reality apparatus includes an emissive display layer for presenting emissive graphics to an eye of a user and an attenuation display layer for presenting attenuation graphics between the emissive display layer and a real-world scene to block light of the real-world scene from the emissive graphics. A light region compensation module dilates an attenuation graphic based on an attribute of an eye of a viewer, such as size of a pupil, to produce an expanded attenuation graphic that blocks additional light to compensate for an unintended light region. A dark region compensation module camouflages an unintended dark region with a replica graphic in the emissive display layer that reproduces an appearance of the real-world scene in the unintended dark region. A camera provides the light data used to generate the replica graphic.
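The dilation step performed by the light region compensation module can be sketched as a morphological dilation of a binary attenuation mask. The mapping from pupil diameter to dilation radius below is an illustrative assumption, not the calibration used by the described apparatus.

```python
# Hypothetical sketch: dilating a binary attenuation mask so it blocks
# additional light around its edges, with the dilation radius growing
# with the viewer's pupil size (an assumed linear mapping).

def dilate(mask, radius):
    """Morphologically dilate a binary mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

def expanded_attenuation(mask, pupil_diameter_mm):
    """Larger pupils admit more stray light, so dilate the mask further."""
    radius = max(1, round(pupil_diameter_mm / 2))
    return dilate(mask, radius)
```

The expanded mask is what the attenuation display layer would present in place of the original attenuation graphic.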
Abstract:
Digital content interaction and navigation techniques and systems in virtual and augmented reality are described. In one example, techniques are employed to aid user interaction within a physical environment in which the user is disposed while viewing a virtual or augmented reality environment. In another example, techniques are described to support a world relative field of view and a fixed relative field of view. The world relative field of view is configured to follow motion of the user (e.g., movement of the user's head or mobile phone) within the environment to support navigation to different locations within the environment. The fixed relative field of view is configured to remain fixed during this navigation such that digital content disposed in this field of view remains at that relative location to a user's field of view.
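The distinction between the two fields of view can be sketched as a small coordinate transform: world-anchored content shifts on screen as the user's head rotates, while fixed-relative content keeps its screen position. The degree-based yaw model below is an illustrative assumption.

```python
# Hypothetical sketch: world relative vs. fixed relative fields of view.
# Angles are horizontal yaw in degrees; the linear mapping is assumed.

def screen_angle(item, head_yaw):
    """Horizontal screen angle of an item for the current head yaw."""
    if item["anchor"] == "world":
        return item["yaw"] - head_yaw  # follows motion through the environment
    return item["yaw"]                 # stays fixed relative to the user's view

menu = {"anchor": "fixed", "yaw": -30.0}   # e.g., a persistent control panel
statue = {"anchor": "world", "yaw": 45.0}  # e.g., digital content in the scene
```

Turning the head 45 degrees brings the world-anchored item to the center of view while the fixed-relative item stays put.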
Abstract:
Font replacement based on visual similarity is described. In one or more embodiments, a font descriptor includes multiple font features derived from a visual appearance of a font by a font visual similarity model. The font visual similarity model can be trained using a machine learning system that recognizes similarity between visual appearances of two different fonts. A source computing device embeds a font descriptor in a document, which is transmitted to a destination computing device. The destination computing device compares the embedded font descriptor to font descriptors corresponding to local fonts. Based on distances between the embedded and the local font descriptors, at least one matching font descriptor is determined. The local font corresponding to the matching font descriptor is deemed similar to the original font. The destination computing device controls presentation of the document using the similar local font. Computation of font descriptors can be outsourced to a remote location.
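The distance-based matching step can be sketched as a nearest-neighbor search over descriptor vectors. The 4-dimensional feature vectors and font names below are illustrative assumptions; a real font visual similarity model would produce much higher-dimensional features.

```python
# Hypothetical sketch: matching an embedded font descriptor against
# descriptors of locally installed fonts by Euclidean distance.
import math

def distance(a, b):
    """Euclidean distance between two font descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_local_match(embedded, local_fonts):
    """Return the local font whose descriptor is closest to the embedded one."""
    return min(local_fonts, key=lambda name: distance(embedded, local_fonts[name]))

local_fonts = {
    "SerifA": [0.9, 0.1, 0.4, 0.2],
    "SansB":  [0.1, 0.8, 0.3, 0.7],
    "MonoC":  [0.5, 0.5, 0.9, 0.1],
}
embedded = [0.2, 0.7, 0.35, 0.65]
```

The destination device would then render the document with the returned local font in place of the unavailable original.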
Abstract:
Digital content view control techniques are described. In one example, a virtual control is used to supplement navigation of digital content without movement of a user's head, through use of an appendage such as a hand, finger, and so forth. In another example, a digital content view control technique is configured to reduce potential nausea when viewing the digital content. Techniques and systems are described that incorporate a threshold to control how the digital content is viewed in relation to an amount of movement to be experienced by the user in viewing the digital content. This amount of movement is compared to the threshold to control output of transitional viewpoints between the first and second viewpoints.
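The threshold check can be sketched as follows: small movements are smoothed with interpolated transitional viewpoints, while large movements jump directly to the destination to reduce nausea. The threshold value, step count, and one-dimensional viewpoint model are illustrative assumptions.

```python
# Hypothetical sketch: comparing the amount of movement between two
# viewpoints to a threshold to decide whether transitional viewpoints
# are output.

def transitional_viewpoints(start, end, threshold=10.0, steps=4):
    """Return the sequence of viewpoints output between start and end."""
    movement = abs(end - start)
    if movement > threshold:
        # Movement is large: skip transitional viewpoints and jump,
        # which avoids the prolonged visual motion that can cause nausea.
        return [end]
    # Movement is small: interpolate a smooth transition.
    return [start + (end - start) * i / steps for i in range(1, steps + 1)]
```

A short hop yields the interpolated frames; a long hop yields only the destination viewpoint.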
Abstract:
Selection of an area of an image can be received. Selection of a subset of a plurality of predefined patterns may be received. A plurality of patterns can be generated. At least one generated pattern in the plurality of patterns may be based at least in part on one or more predefined patterns in the subset. Selection of another subset of patterns may be received. At least one pattern in the other subset of patterns may be selected from the plurality of predefined patterns and/or the generated patterns. Another plurality of patterns can be generated. At least one generated pattern in this plurality of patterns may be based at least in part on one or more patterns in the other subset. Selection of a generated pattern from the generated other plurality of patterns may be received. The selected area of the image may be populated with the selected generated pattern.
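The iterative loop above, in which each round of generated patterns derives from the user's current selection, can be sketched as below. Modeling patterns as parameter dictionaries and varying a single "scale" parameter are illustrative assumptions, not the disclosed generation method.

```python
# Hypothetical sketch: deriving new candidate patterns from a selected
# subset, then repeating with a new selection drawn from predefined and
# generated patterns alike.
import random

def generate_variations(selected, count=4, rng=None):
    """Derive new candidate patterns from the selected patterns."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    variations = []
    for _ in range(count):
        parent = rng.choice(selected)
        child = dict(parent)
        child["scale"] = round(parent["scale"] * rng.uniform(0.8, 1.2), 3)
        variations.append(child)
    return variations

predefined = [{"name": "dots", "scale": 1.0}, {"name": "stripes", "scale": 2.0}]
round1 = generate_variations(predefined)           # from the first selection
round2 = generate_variations(predefined + round1)  # second, mixed selection
```

A final pattern chosen from `round2` would then be used to populate the selected image area.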
Abstract:
Systems and methods are provided for providing a navigation interface to access or otherwise use electronic content items. In one embodiment, an augmentation application identifies at least one entity referenced in a document. The entity can be referenced in at least two portions of the document by at least two different words or phrases. The augmentation application associates the at least one entity with at least one multimedia asset. The augmentation application generates a layout including at least some content of the document referencing the at least one entity and the at least one multimedia asset associated with the at least one entity. The augmentation application renders the layout for display.
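The entity-to-asset association can be sketched as matching an entity's aliases against portions of the document and pairing each referencing portion with the entity's multimedia asset. The alias table, asset filename, and sample text are illustrative assumptions.

```python
# Hypothetical sketch: an entity referenced by different words or phrases
# is associated with a multimedia asset, and a simple layout pairs each
# referencing portion of the document with that asset.

def find_entity_portions(document_portions, aliases):
    """Return portions that reference the entity under any of its aliases."""
    return [p for p in document_portions
            if any(a.lower() in p.lower() for a in aliases)]

def build_layout(document_portions, aliases, asset):
    """Pair each referencing portion with the entity's multimedia asset."""
    return [{"text": p, "asset": asset}
            for p in find_entity_portions(document_portions, aliases)]

portions = [
    "Ada Lovelace wrote the first published algorithm.",
    "The countess collaborated with Charles Babbage.",
    "Unrelated paragraph about typography.",
]
layout = build_layout(portions, ["Ada Lovelace", "the countess"], "lovelace.jpg")
```

Rendering `layout` would display each entity-referencing passage alongside the associated asset, even though the two passages use different phrases for the same entity.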