Abstract:
Model placement metadata is defined and stored for a three-dimensional (“3D”) model. The model placement metadata specifies constraints on the presentation of the 3D model when rendered in a view of a real-world environment, such as a view generated by a wearable computing device like an augmented reality (“AR”) or virtual reality (“VR”) device. A wearable computing device can analyze the geometry of a real-world environment to determine a configuration for the 3D model that satisfies the constraints set forth by the model placement metadata when the 3D model is rendered in a view of the environment. Once the configuration for the 3D model has been computed, the wearable device can render the 3D model according to the computed configuration and display the rendering in a view of the real-world environment.
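Conceptually, the placement step reduces to a constraint check over surfaces detected in the environment. The following is a minimal sketch, assuming simple axis-aligned footprints; the metadata fields (footprint, allowed_surfaces, min_clearance) are hypothetical illustrations, not the actual metadata schema:

```python
# Minimal sketch: pick the first detected surface that satisfies the
# placement constraints carried by the model's metadata.
# All field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PlacementMetadata:
    footprint: tuple[float, float]   # required width, depth in meters
    allowed_surfaces: set[str]       # e.g. {"floor", "table"}
    min_clearance: float             # free space required around the model

@dataclass
class Surface:
    kind: str                        # "floor", "table", "wall", ...
    width: float
    depth: float

def find_placement(meta: PlacementMetadata, surfaces: list[Surface]) -> Surface | None:
    """Return the first scanned surface that satisfies the metadata constraints."""
    for s in surfaces:
        fits = (s.width >= meta.footprint[0] + 2 * meta.min_clearance and
                s.depth >= meta.footprint[1] + 2 * meta.min_clearance)
        if s.kind in meta.allowed_surfaces and fits:
            return s
    return None                      # no valid configuration; skip rendering

meta = PlacementMetadata(footprint=(0.6, 0.6), allowed_surfaces={"table"}, min_clearance=0.1)
scanned = [Surface("floor", 4.0, 5.0), Surface("table", 1.2, 0.9)]
print(find_placement(meta, scanned))  # -> the table surface
```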
Abstract:
A method is provided for displaying, on a second device, content that is displayed on one or more first devices. The method includes receiving a request to display, on a second device, content currently displayed on a first device, the request including a gesture made on a screen of the first device, and pairing the first device to the second device. The method further includes transmitting instructions to the second device to display the content currently displayed on the first device, and transmitting that content to the second device for display thereon.
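The described flow reduces to: recognize the gesture, pair the devices, send a display instruction, then send the content. A minimal sketch, assuming hypothetical Device and send abstractions that stand in for the unspecified transport and gesture recognizer:

```python
# Minimal sketch of the gesture-triggered pairing and mirroring flow.
# Device, send, and the trigger gesture name are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    paired_with: str | None = None

    def send(self, message: dict) -> None:
        print(f"-> {self.device_id}: {message}")

def handle_mirror_request(first: Device, second: Device, gesture: str, content: bytes) -> None:
    """Pair the devices on a recognized gesture, then mirror the content."""
    if gesture != "swipe_toward_device":      # hypothetical trigger gesture
        return
    first.paired_with, second.paired_with = second.device_id, first.device_id
    second.send({"instruction": "display_incoming_content"})
    second.send({"content": content})

handle_mirror_request(Device("phone"), Device("tv"), "swipe_toward_device", b"<frame>")
```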
Abstract:
The disclosed technologies identify opportunities to display relevant three-dimensional (“3D”) model data within a real-world environment as a user wears a wearable device. The 3D model data can be associated with objects, or items, and the 3D model data rendered for display is relevant in the sense that the items are determined to be of interest to the user and the items fit within the real-world environment in which the user is currently located. For instance, the techniques described herein can recognize items typically found in a kitchen or a dining room of a user's house, an office space at the user's place of work, etc. The characteristics of the recognized items can be identified and subsequently analyzed together to determine preferred characteristics of the user. In this way, the disclosed technologies can retrieve and display an item that correlates to (e.g., matches) the preferred characteristics of the user.
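One way to realize the characteristic-aggregation step is to tally attribute values across the recognized items and score candidates against the tally. A minimal sketch, with illustrative attribute names and a simple overlap score standing in for the unspecified correlation logic:

```python
# Minimal sketch: build a preference profile from recognized items and
# score catalog candidates against it. Attribute names are illustrative.
from collections import Counter

def preferred_characteristics(recognized_items: list[dict]) -> Counter:
    """Tally attribute values (e.g. color, style) across recognized items."""
    prefs = Counter()
    for item in recognized_items:
        for attr, value in item.items():
            prefs[(attr, value)] += 1
    return prefs

def best_match(prefs: Counter, catalog: list[dict]) -> dict:
    """Pick the catalog item sharing the most attribute values with the profile."""
    return max(catalog, key=lambda c: sum(prefs[(a, v)] for a, v in c.items()))

kitchen = [{"color": "white", "style": "modern"}, {"color": "white", "style": "rustic"}]
catalog = [{"color": "white", "style": "modern"}, {"color": "black", "style": "modern"}]
print(best_match(preferred_characteristics(kitchen), catalog))
```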
Abstract:
An attribute correlation system reduces network traffic and processing cycles associated with impromptu item selections by generating attribute preference models based on disparate attribute spectrums. The attribute correlation system deploys the attribute preference models to select individual items from various disparate “candidate” item categories. Generally described, the attribute preference models facilitate analyzing item sets across a wide variety of disparate “candidate” item categories to preemptively identify individual items for a user. In this way, the individual items may be identified and, ultimately, selected for the user even absent any indication that the user has searched for or otherwise identified these items, or even other items from within the disparate “candidate” item categories. The “candidate” item categories may be determined to be disparate from one another based on a relationship void, i.e., an absence of predefined relationships between these “candidate” item categories.
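The shared-spectrum idea can be illustrated by placing items from unrelated categories on a common numeric scale and transferring a preference across it. A minimal sketch, in which the 0-to-1 “boldness” spectrum and the category data are assumptions for illustration:

```python
# Minimal sketch: carry a preference learned in one item category across a
# shared attribute spectrum into a disparate category with no predefined
# relationship to the first. Spectrum and data are illustrative assumptions.
def infer_spectrum_preference(selections: list[float]) -> float:
    """Estimate where on the 0-1 spectrum the user's past selections sit."""
    return sum(selections) / len(selections)

def select_from_disparate_category(preference: float, candidates: dict[str, float]) -> str:
    """Pick the candidate whose spectrum position is closest to the preference."""
    return min(candidates, key=lambda name: abs(candidates[name] - preference))

# Preference learned from clothing selections (bold patterns ~ high values)...
pref = infer_spectrum_preference([0.8, 0.9, 0.7])
# ...applied to an unrelated category sharing only the spectrum itself.
print(select_from_disparate_category(pref, {"plain mug": 0.2, "patterned mug": 0.85}))
```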
Abstract:
An enhanced product recommendation service observes a user engaging in an activity to automatically recommend products that facilitate performance of the activity. Photographs and/or video of the user performing the activity may be analyzed to identify an output that results from the activity and/or an activity task sequence that includes multiple tasks associated with completing the activity. Then, the enhanced product recommendation service may identify a product that is usable to generate the output(s) of the activity and/or complete the activity without performing one or more individual tasks of the activity task sequence. The product may be an existing product. Alternatively, the product may be a customized product that is designed based on observing the user engage in the activity. Physical measurements of the customized product may be determined based on various measurements determined by analyzing the photographs and/or video of the user performing the activity.
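At its simplest, the recommendation step matches an observed task sequence against products that each automate some subset of those tasks. A minimal sketch, with hypothetical task and product names:

```python
# Minimal sketch: recommend the product that eliminates the most tasks in
# an observed activity task sequence. Task/product names are illustrative.
def recommend(tasks: list[str], products: dict[str, set[str]]) -> str | None:
    """Return the product that automates the most tasks in the observed sequence."""
    observed = set(tasks)
    best = max(products, key=lambda p: len(products[p] & observed), default=None)
    if best and products[best] & observed:
        return best
    return None   # no product covers any observed task

observed_tasks = ["measure flour", "knead dough", "proof dough", "bake"]
catalog = {"bread machine": {"knead dough", "proof dough", "bake"},
           "stand mixer": {"knead dough"}}
print(recommend(observed_tasks, catalog))   # -> "bread machine"
```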
Abstract:
A system described herein uses data obtained from a wearable device of a first user to identify a second user and/or to determine that the first user is within a threshold distance of the second user. The system can then access an account of the second user to identify one or more items and retrieve model data for the item(s). The system causes the wearable device of the first user to render, for display in an immersive 3D environment (e.g., an augmented reality environment), an item associated with the account of the second user. The item can be rendered for display at a location on a display that is proximate to the second user (e.g., within a threshold distance of the second user) such that the item graphically corresponds to the second user. The item rendered for display may be an item of interest to the first user.
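The proximity gate and account lookup can be sketched as follows; the threshold value, account store, and render placeholder are all assumptions for illustration, not the system's actual interfaces:

```python
# Minimal sketch: when a second user is within a threshold distance, fetch
# items from that user's account and render them anchored near the user.
import math

THRESHOLD_M = 3.0                           # assumed proximity threshold
accounts = {"user_b": ["hiking boots"]}     # stand-in for the account store

def maybe_render_items(first_pos: tuple[float, float],
                       second_id: str,
                       second_pos: tuple[float, float]) -> list[str]:
    """If the second user is within the threshold, fetch and render their items."""
    if math.dist(first_pos, second_pos) > THRESHOLD_M:
        return []
    items = accounts.get(second_id, [])
    # Render each item at a display location anchored near the second user.
    return [f"render '{i}' near {second_id} at {second_pos}" for i in items]

print(maybe_render_items((0.0, 0.0), "user_b", (1.5, 2.0)))
```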
Abstract:
An enhanced product customization service automatically generates product customization parameters for customizing a product for a user in accordance with an interest of the user. Signals received from various sources are analyzed to identify an interest of the user, and search queries that are generated by the user are also analyzed to identify an intention of the user to acquire a product. Then, based on having identified both the interest of the user and the intention of the user to acquire the product, product customization parameters are generated for customizing a physical characteristic of the product in accordance with the identified interest. In this way, embodiments of the enhanced product customization service may be deployed to customize a product for a user preemptively, even without the user expressly indicating a specific interest in such a customized product.
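The two-signal gate (interest plus acquisition intent) can be illustrated with a toy keyword matcher standing in for the unspecified signal analysis. A minimal sketch, with hypothetical themes and parameters:

```python
# Minimal sketch: emit customization parameters only when both an interest
# and an intent to acquire are identified. Themes and the keyword-based
# intent check are illustrative assumptions.
INTEREST_THEMES = {"astronomy": {"color": "midnight blue", "motif": "constellations"}}

def identify_interest(signals: list[str]) -> str | None:
    """Find the first theme mentioned in any received signal."""
    for theme in INTEREST_THEMES:
        if any(theme in s for s in signals):
            return theme
    return None

def customization_parameters(signals: list[str], query: str) -> dict | None:
    """Generate parameters only when both interest and acquisition intent exist."""
    interest = identify_interest(signals)
    wants_product = "buy" in query or "shop" in query   # crude intent check
    if interest and wants_product:
        return INTEREST_THEMES[interest]
    return None

print(customization_parameters(["read astronomy article"], "buy phone case"))
```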
Abstract:
Techniques are described herein for efficiently translating user signals that are received in association with an online auction to render a virtual environment that has a visually perceptible competitive landscape. Various participants' acquisition interest levels are determined by analyzing the participants' user activity in association with the online auction. Avatars that represent the participants are rendered differently based on the participants' level of interest in (e.g., motivation toward) acquiring the item that is being auctioned. In this way, the individual participants' avatars are rendered in the virtual environment in a manner such that the individual participants' level of interest in acquiring the item is visually perceptible. As a specific example, an avatar may be rendered to appear more (or less) excited about the item as the corresponding participant's user activity indicates that the participant is more (or less) likely to competitively bid on the item in a genuine attempt to win the online auction.
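The translation from user activity to a visually perceptible avatar state can be sketched as a scoring function followed by rendering thresholds; the signal weights and activity fields below are illustrative assumptions:

```python
# Minimal sketch: map auction activity signals to an avatar excitement level.
# The weights, caps, and activity fields are illustrative assumptions.
def interest_level(activity: dict) -> float:
    """Score 0-1 from bid count, watch time, and recency of the last bid."""
    score = (0.5 * min(activity.get("bids", 0) / 5, 1.0) +
             0.3 * min(activity.get("watch_minutes", 0) / 30, 1.0) +
             0.2 * (1.0 if activity.get("bid_in_last_minute") else 0.0))
    return min(score, 1.0)

def avatar_state(level: float) -> str:
    """Translate the score into a visually perceptible avatar rendering."""
    if level > 0.7:
        return "excited"     # e.g. leaning forward, animated
    if level > 0.3:
        return "attentive"
    return "idle"

print(avatar_state(interest_level({"bids": 4, "watch_minutes": 25,
                                   "bid_in_last_minute": True})))  # -> "excited"
```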