-
Publication No.: US20240338902A1
Publication Date: 2024-10-10
Application No.: US18298101
Filing Date: 2023-04-10
Applicant: Shopify Inc.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
CPC classification number: G06T19/006 , G06F3/017 , G06V10/44 , G06V10/761 , G06V2201/07
Abstract: Described herein are systems and methods for generating AR-enriched media feeds for comparing attributes of objects. A user operates an AR device to collect or extract object information from a media feed that includes a current object. The AR device identifies a comparison object using attributes of the current object. Once the comparison object has been identified, the AR device generates and presents an AR overlay in the graphical user interface that shows the selected attribute of the comparison object near, or on top of, the corresponding attribute of the current object in the real-time media feed containing the current object.
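The following is a minimal, hypothetical sketch of the comparison step described above: picking a catalog object whose shared numeric attributes differ least from the detected object, then composing the label an AR layer could render. The data structures, catalog contents, and the distance scoring are illustrative assumptions, not the method claimed in the publication.

    # Illustrative sketch only; dataclasses, catalog, and scoring are assumptions.
    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        name: str
        attributes: dict  # e.g. {"price": 499.0, "screen_in": 6.1}

    CATALOG = [
        DetectedObject("phone_a", {"price": 449.0, "screen_in": 6.4}),
        DetectedObject("phone_b", {"price": 999.0, "screen_in": 6.7}),
    ]

    def pick_comparison(current: DetectedObject) -> DetectedObject:
        """Choose the catalog object whose shared numeric attributes differ least."""
        def distance(candidate: DetectedObject) -> float:
            shared = set(current.attributes) & set(candidate.attributes)
            return sum(abs(current.attributes[k] - candidate.attributes[k]) for k in shared)
        return min(CATALOG, key=distance)

    def overlay_label(current: DetectedObject, attribute: str) -> str:
        """Compose the text an AR overlay could show next to the current object."""
        other = pick_comparison(current)
        return f"{attribute}: {current.attributes[attribute]} (vs {other.name}: {other.attributes[attribute]})"

    print(overlay_label(DetectedObject("phone_seen", {"price": 479.0, "screen_in": 6.3}), "price"))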
-
Publication No.: US20250166037A1
Publication Date: 2025-05-22
Application No.: US18515913
Filing Date: 2023-11-21
Applicant: Shopify Inc.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
IPC: G06Q30/0601
Abstract: A computer system obtains a first embedding in an embedding space, where the first embedding represents a first item in an item store. The computer system identifies, based on at least a second embedding representing a second item in the item store, an item from the item store, wherein the second embedding is in the embedding space, and the identified item is identified based on a position of the second embedding in the embedding space relative to a position of the first embedding in the embedding space. The computer system outputs an identification of the identified item.
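A toy sketch of identifying an item from the position of one embedding relative to another follows. The abstract does not specify a distance metric or how "relative position" is defined, so the cosine similarity and the midpoint query used here are assumptions for illustration.

    # Minimal sketch; metric and "relative position" definition are assumptions.
    import numpy as np

    item_ids = ["sku_1", "sku_2", "sku_3"]
    item_embeddings = np.random.default_rng(0).normal(size=(3, 8))  # placeholder vectors

    def identify_item(first_embedding: np.ndarray, second_embedding: np.ndarray) -> str:
        """Return the item whose embedding is most similar to a query point derived
        from the second embedding's position relative to the first (here: the midpoint)."""
        query = (first_embedding + second_embedding) / 2.0  # assumed notion of relative position
        sims = item_embeddings @ query / (
            np.linalg.norm(item_embeddings, axis=1) * np.linalg.norm(query) + 1e-9
        )
        return item_ids[int(np.argmax(sims))]

    print(identify_item(item_embeddings[0], item_embeddings[1]))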
-
Publication No.: US20240192770A1
Publication Date: 2024-06-13
Application No.: US18108334
Filing Date: 2023-02-10
Applicant: SHOPIFY INC.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
IPC: G06F3/01 , G06F3/04845
CPC classification number: G06F3/013 , G06F3/04845 , G06F2203/04804
Abstract: In virtual reality (VR) and augmented reality (AR), eye tracking may be performed to determine the user's gaze direction. The gaze direction may be used to enhance user interaction. However, when a user gazes in a particular direction, there may be multiple items located in that gaze direction, each at a different depth. The gaze direction alone might not indicate which item the user is looking at. Therefore, in some embodiments, to further enhance user interaction, a gaze depth of the gaze may be determined. Some embodiments are directed to performing eye tracking to detect a gaze depth of a human's gaze and modifying a user interface (UI) responsive to a change in the gaze depth.
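One common way to estimate gaze depth is from eye vergence, and the sketch below illustrates that idea together with picking the UI layer nearest the estimated depth. The abstract does not say how depth is measured, so the symmetric-vergence geometry, units, and layer-selection rule are assumptions.

    # Hedged sketch: vergence-based depth estimate and nearest-layer selection.
    import math

    def gaze_depth_from_vergence(ipd_m: float, vergence_deg: float) -> float:
        """Depth (metres) at which the two gaze rays converge, given the
        interpupillary distance and the total vergence angle."""
        half_angle = math.radians(vergence_deg) / 2.0
        return (ipd_m / 2.0) / math.tan(half_angle)

    def pick_ui_layer(depth_m: float, layer_depths_m: list) -> int:
        """Select the index of the UI layer nearest to the estimated gaze depth."""
        return min(range(len(layer_depths_m)), key=lambda i: abs(layer_depths_m[i] - depth_m))

    depth = gaze_depth_from_vergence(ipd_m=0.063, vergence_deg=3.0)
    print(depth, pick_ui_layer(depth, [0.5, 1.2, 3.0]))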
-
Publication No.: US20250094025A1
Publication Date: 2025-03-20
Application No.: US18468460
Filing Date: 2023-09-15
Applicant: Shopify Inc.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello
IPC: G06F3/0484 , G06F40/56 , H04L51/02
Abstract: A computer system maintains low-rank adaptation (LoRA) models, where each LoRA model includes a set of weights configured to modify parameters of a large-language model (LLM) to cause the LLM to generate text having a corresponding property. The computer system presents a set of manipulable user-interface controls that allow configuration of properties of LLM-generated text. Output of the LLM is modified using LoRA models that are selected based on a state of the user-interface controls as manipulated. A preview is provided of LLM output corresponding to the current state of the user-interface controls during presentation and manipulation thereof. To provide this preview, the computer system iteratively provides a prompt to the LLM and outputs the output of the LLM responsive to that prompt for each iteration. For each iteration, the LLM output is modified using the LoRA models selected based on the current state of the user-interface controls as manipulated.
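A toy sketch of the control-to-adapter idea follows: each "LoRA model" is represented as a named weight delta, and the slider state decides which deltas are blended into a base weight matrix before the next preview is generated. The adapter names, scaling rule, and matrix shapes are placeholder assumptions, not the claimed implementation.

    # Toy illustration; adapter names, scaling, and shapes are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    base_weights = rng.normal(size=(4, 4))          # stand-in for an LLM weight matrix
    lora_deltas = {
        "formal":  rng.normal(scale=0.01, size=(4, 4)),
        "playful": rng.normal(scale=0.01, size=(4, 4)),
    }

    def apply_controls(controls: dict) -> np.ndarray:
        """Blend in each selected adapter, scaled by its slider value in [0, 1]."""
        weights = base_weights.copy()
        for name, strength in controls.items():
            if strength > 0.0:
                weights += strength * lora_deltas[name]
        return weights

    # Each UI manipulation would re-run this and regenerate a preview with the result.
    preview_weights = apply_controls({"formal": 0.8, "playful": 0.2})
    print(preview_weights.shape)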
-
Publication No.: US20240272711A1
Publication Date: 2024-08-15
Application No.: US18642181
Filing Date: 2024-04-22
Applicant: SHOPIFY INC.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
IPC: G06F3/01 , G06F3/04845
CPC classification number: G06F3/013 , G06F3/04845 , G06F2203/04804
Abstract: In virtual reality (VR) and augmented reality (AR), eye tracking may be performed to determine the user's gaze direction. The gaze direction may be used to enhance user interaction. However, when a user gazes in a particular direction, there may be multiple items located in that gaze direction, each at a different depth. The gaze direction alone might not indicate which item the user is looking at. Therefore, in some embodiments, to further enhance user interaction, a gaze depth of the gaze may be determined. Some embodiments are directed to performing eye tracking to detect a gaze depth of a human's gaze and modifying a user interface (UI) responsive to a change in the gaze depth.
-
Publication No.: US20240249443A1
Publication Date: 2024-07-25
Application No.: US18124060
Filing Date: 2023-03-21
Applicant: SHOPIFY INC.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
Abstract: To improve user experience when interacting with AR content within an AR environment, the AR content may be overlaid over a proxy object in a real-world space. Differences in dimension between the proxy object and the virtual model may be such that the proxy object is larger than the virtual model, which may result in portions of the proxy object appearing to protrude from behind the virtual model, decreasing user enjoyment. In some embodiments, an AR system for the overlay of AR content on a proxy object and concealment of the proxy object may be implemented. The system may overlay a virtual model on a proxy object, and then conceal any remaining visible portions of the proxy object from the visual field of a device displaying the AR environment. The system may overlay the virtual model so that any remaining visible portion of the proxy object is a single continuous region.
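The mask arithmetic below illustrates the concealment step in its simplest form: the proxy object's visible pixels not covered by the rendered virtual model are the region left to hide (for example by inpainting or extending the overlay). The rectangular masks are placeholder geometry, not the publication's method.

    # Illustrative mask arithmetic only; geometry is a placeholder assumption.
    import numpy as np

    frame_h, frame_w = 120, 160
    proxy_mask = np.zeros((frame_h, frame_w), dtype=bool)
    proxy_mask[30:90, 40:120] = True          # where the proxy object appears in the frame
    model_mask = np.zeros((frame_h, frame_w), dtype=bool)
    model_mask[35:85, 45:115] = True          # where the rendered virtual model lands

    leftover = proxy_mask & ~model_mask       # visible proxy pixels still to conceal
    print("pixels to conceal:", int(leftover.sum()))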
-
Publication No.: US20240249442A1
Publication Date: 2024-07-25
Application No.: US18124059
Filing Date: 2023-03-21
Applicant: SHOPIFY INC.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
Abstract: To improve user experience when interacting with AR content within an AR environment, the AR content may be overlaid over a proxy object in a real-world space. However, issues such as the AR content and the proxy object not being aligned, or occlusion of the proxy object leading to glitching of the AR content, may decrease user enjoyment. In some embodiments, an AR system for the overlay of AR content may be implemented. The system may anchor a virtual model to the proxy object based on detected features on the proxy object. The anchoring may include aligning elements of the virtual model and proxy object. In response to an occluding object occluding some features on the proxy object, the system may anchor the virtual model to the occluding object, or to both the proxy object and the occluding object, based on detected features on the occluding object.
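A sketch of the anchor-switching idea under simple assumptions follows: if too few tracked feature points on the proxy object remain visible, the virtual model is re-anchored to features detected on the occluding object (or to whatever features remain). The feature threshold and data shapes are assumptions for illustration only.

    # Sketch of anchor switching; threshold and fallback rule are assumptions.
    from typing import Sequence, Tuple

    Point = Tuple[float, float]
    MIN_FEATURES = 4  # assumed minimum number of visible features for a stable anchor

    def choose_anchor(proxy_features: Sequence[Point],
                      occluder_features: Sequence[Point]) -> str:
        """Decide which detected feature set the virtual model should follow."""
        if len(proxy_features) >= MIN_FEATURES:
            return "proxy"
        if len(occluder_features) >= MIN_FEATURES:
            return "occluder"
        return "both"  # fall back to combining whatever features remain visible

    print(choose_anchor([(1, 1), (2, 2)], [(0, 0), (1, 0), (0, 1), (1, 1)]))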
-
Publication No.: US11995232B1
Publication Date: 2024-05-28
Application No.: US18108334
Filing Date: 2023-02-10
Applicant: SHOPIFY INC.
Inventor: Russ Maschmeyer , Eric Andrew Florenzano , Brennan Letkeman , Diego Macario Bello , Daniel Beauchamp
IPC: G06F3/01 , G06F3/04845
CPC classification number: G06F3/013 , G06F3/04845 , G06F2203/04804
Abstract: In virtual reality (VR) and augmented reality (AR), eye tracking may be performed to determine the user's gaze direction. The gaze direction may be used to enhance user interaction. However, when a user gazes in a particular direction, there may be multiple items located in that gaze direction, each at a different depth. The gaze direction alone might not indicate which item the user is looking at. Therefore, in some embodiments, to further enhance user interaction, a gaze depth of the gaze may be determined. Some embodiments are directed to performing eye tracking to detect a gaze depth of a human's gaze and modifying a user interface (UI) responsive to a change in the gaze depth.