Abstract:
A method includes obtaining a request for one of multiple operational modes from an application installed on an extended reality (XR) device or from an XR runtime/renderer of the XR device. The method also includes selecting a first mode of the operational modes based at least partly on real-time system performance of the XR device. The method also includes publishing the selected first mode to the XR runtime/renderer or the application. The method also includes performing a task related to at least one of image rendering or computer vision calculations for the application, using an algorithm associated with the selected first mode.
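The request/select/publish flow described in this abstract could be sketched as follows. The class name `ModeArbiter`, the mode names, and the GPU-load and battery thresholds are all illustrative assumptions, not details taken from the abstract.

```python
# Hypothetical sketch of mode arbitration on an XR device.
# Mode names and thresholds are assumed for illustration only.
class ModeArbiter:
    """Selects an operational mode based on real-time system performance."""

    MODES = ("high_fidelity", "balanced", "power_saver")

    def __init__(self, subscribers=None):
        # Subscribers stand in for the XR runtime/renderer and applications
        # that receive the published mode.
        self.subscribers = subscribers or []

    def select_mode(self, requested_mode, gpu_load, battery_pct):
        # Honor the requested mode only if real-time performance allows it.
        if gpu_load > 0.9 or battery_pct < 0.15:
            selected = "power_saver"
        elif gpu_load > 0.7:
            selected = "balanced"
        else:
            selected = requested_mode
        self.publish(selected)
        return selected

    def publish(self, mode):
        # Publish the selected mode back to the runtime/renderer or application.
        for callback in self.subscribers:
            callback(mode)
```

A rendering or computer-vision task would then pick the algorithm variant keyed by the published mode.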
Abstract:
A computer-implemented method of providing an emotion-aware reactive interface in an electronic device includes receiving an image of a user as an input and identifying a multi-modal non-verbal cue in the image. The method further includes interpreting the multi-modal non-verbal cue to determine a categorization and outputting a reactive interface event determined based on the categorization.
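The cue-to-category-to-event chain could be sketched as below. The specific cues, categories, and interface events are assumptions; the abstract does not enumerate them.

```python
# Illustrative sketch only: cue names, categories, and the event mapping
# are assumed, not specified by the abstract.
def interpret_cues(cues):
    """Map detected multi-modal non-verbal cues to a categorization."""
    table = {
        ("smile", "open_posture"): "positive",
        ("frown", "crossed_arms"): "negative",
    }
    return table.get(tuple(cues), "neutral")

def reactive_interface_event(cues):
    # Output a reactive interface event determined from the categorization.
    category = interpret_cues(cues)
    return {"positive": "show_encouragement",
            "negative": "offer_help",
            "neutral": "no_op"}[category]
```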
Abstract:
One embodiment provides a method comprising classifying one or more objects present in an input comprising visual data by executing a first set of models associated with a domain on the input. Each model corresponds to an object category and is trained to generate a visual classifier result relating to its object category in the input, with an associated confidence value indicative of the accuracy of the result. The method further comprises aggregating a first set of visual classifier results based on the confidence value associated with each visual classifier result of each model of the first set of models. At least one other model is selectable for execution on the input, based on the aggregated first set of visual classifier results, for additional classification of the objects. One or more visual classifier results are returned to an application running on an electronic device for display.
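The confidence-driven cascade could be sketched as follows. The model interface (each model returns a `(label, confidence)` pair) and the 0.6 threshold are assumptions made for illustration.

```python
# Sketch of a confidence-driven classifier cascade. The (label, confidence)
# model interface and the threshold value are assumed.
def classify(visual_data, first_models, extra_models, threshold=0.6):
    # Execute the first set of models on the input.
    results = [model(visual_data) for model in first_models]
    # Aggregate: keep results whose confidence clears the threshold.
    confident = [r for r in results if r[1] >= threshold]
    # If aggregation is inconclusive, select further models for execution.
    if not confident:
        confident = [m(visual_data) for m in extra_models]
    # Return the visual classifier results for display.
    return confident
```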
Abstract:
Methods, systems, and computer readable media for social grouping are provided to perform social grouping of a user's contacts based on the user's interactions with the contacts. A set of attributes associated with interactions between a user and a set of contacts may be determined by a first device. The set of attributes associated with the interactions may be related to the first device. The set of contacts may be organized into a set of groups based on the set of attributes.
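A minimal sketch of attribute-based grouping is shown below. Representing an interaction as a `(contact, channel)` pair observed on the device is an assumption; the abstract leaves the attributes unspecified.

```python
# Minimal sketch: contacts sharing the same set of device-side interaction
# attributes (here, communication channels) fall into the same group.
from collections import defaultdict

def group_contacts(interactions):
    """interactions: list of (contact, channel) pairs observed on the device."""
    attributes = defaultdict(set)
    for contact, channel in interactions:
        attributes[contact].add(channel)
    groups = defaultdict(list)
    for contact, channels in attributes.items():
        # Organize contacts into groups keyed by their attribute set.
        groups[frozenset(channels)].append(contact)
    return dict(groups)
```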
Abstract:
An electronic system includes a control unit configured to: generate encrypted information based on encrypting an information type; generate a mapping table including the encrypted information, the information type, or a combination thereof; and generate restored information based on mapping decomposed information of the encrypted information, categorized according to a decomposition rule, to a corresponding instance of the information type in the mapping table. The system also includes a user interface, coupled to the control unit, configured to display the restored information on an activity dashboard for receiving a user entry to calibrate the decomposition rule.
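The mapping-table restoration flow could be sketched as below. The "encryption" here is a toy token substitution so the decompose-and-map step is visible; a real system would use an actual cipher, and the `|` decomposition rule is an assumption.

```python
# Toy sketch of the mapping-table flow; enc_N tokens stand in for real
# encrypted information, and "|" is an assumed decomposition rule.
def build_mapping_table(info_types):
    # Associate each information type with its encrypted counterpart.
    return {f"enc_{i}": t for i, t in enumerate(info_types)}

def restore(encrypted_blob, table, decomposition_rule="|"):
    # Decompose the encrypted information per the rule, then map each
    # piece to the corresponding information type in the table.
    pieces = encrypted_blob.split(decomposition_rule)
    return [table[p] for p in pieces if p in table]
```

A user entry on the activity dashboard would adjust `decomposition_rule` when the restored output looks wrong.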
Abstract:
Social grouping using a device may include determining, by the device, a set of attributes associated with interactions between a user and a set of contacts, wherein the set of attributes associated with the interactions is related to the device. The contacts may be organized into a plurality of groups. The plurality of groups may be hierarchically ordered, with at least one group of the plurality of groups being a subgroup of another group of the plurality of groups.
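The hierarchical ordering could be sketched as follows, assuming (as an illustration, not from the abstract) that one group is a subgroup of another when its contact set is strictly contained in the other's.

```python
# Sketch: a group is treated as a subgroup of another when its member set
# is a strict subset of the other's. The containment criterion is assumed.
def order_hierarchically(groups):
    """groups: dict mapping group name -> set of contacts.
    Returns (subgroup, supergroup) relations."""
    relations = []
    for a, members_a in groups.items():
        for b, members_b in groups.items():
            if a != b and members_a < members_b:
                relations.append((a, b))  # a is a subgroup of b
    return relations
```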
Abstract:
A method includes obtaining, from a memory of an electronic device connected to a head mounted display (HMD), a first reference frame, wherein the first reference frame comprises a first set of pixels associated with a first time. The method includes rendering, at the electronic device, a source image as a new frame, wherein the new frame includes a second set of pixels associated with a display to be provided by the HMD at a second time, and generating, by the electronic device, a differential frame, wherein the differential frame is based on a difference operation between pixels of the new frame and pixels of the first reference frame to identify pixels unique to the new frame. Still further, the method includes sending the differential frame to the HMD, and storing the new frame in the memory of the electronic device as a second reference frame.
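The difference operation could be sketched as below. Frames are modeled as flat lists of pixel values and `None` marks unchanged pixels; both choices are illustrative assumptions, not details from the abstract.

```python
# Minimal sketch of differential-frame generation and reconstruction.
# Flat pixel lists and the None sentinel for unchanged pixels are assumed.
def make_differential_frame(reference, new_frame):
    # Keep only pixels unique to the new frame; mark the rest unchanged.
    return [n if n != r else None
            for r, n in zip(reference, new_frame)]

def apply_differential_frame(reference, diff):
    # HMD side: rebuild the new frame from the reference plus the differential.
    return [r if d is None else d for r, d in zip(reference, diff)]
```

After sending, the electronic device would store `new_frame` as the reference for the next iteration.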
Abstract:
A device may include a camera configured to capture live image data comprising an image of an object, and a processor coupled to the camera. The processor may be configured to recognize content of a selected portion of the live image data, based on contextual information relevant to the object, using computer vision technology, and to generate visual information based on the recognized content. The device may also include interface circuitry coupled to the processor. The interface circuitry may be configured to present the live image data and overlay the live image data with the visual information.
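The recognize-then-overlay flow could be sketched as follows. `recognize_content` is a stand-in stub for whatever computer-vision model the device would actually run; its behavior and the overlay structure are assumptions.

```python
# Illustrative pipeline only: recognize_content is a hypothetical stub,
# not a real computer-vision call.
def recognize_content(region, context):
    # Assumed stub: pair the selected region with the contextual information.
    return f"{context}:{region}"

def annotate_frame(frame, region, context):
    # Recognize content of the selected portion, generate visual information,
    # and overlay it on the live image data.
    content = recognize_content(region, context)
    visual_info = {"label": content, "region": region}
    return {"frame": frame, "overlay": visual_info}
```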