Abstract:
A method, device, and system for sharing at least one display attribute associated with content displayed on the device with a target device is provided. The method includes identifying the at least one display attribute associated with content displayed in a source device, processing the identified at least one display attribute in accordance with an interoperability display ratio, and displaying the content on the target device after applying the processed at least one display attribute to the content. When content is shared between two devices, the exchange of display attributes associated with the content helps ensure that the user experience of the content is similar at both the source device and the target device.
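The abstract does not define how the interoperability display ratio is computed or applied. A minimal Python sketch of one plausible reading, in which the ratio is derived from the two devices' resolutions and applied to size-like attributes (the attribute names and the min-axis scaling rule are assumptions, not from the source):

```python
def interoperability_ratio(source_res, target_res):
    # Per-axis ratio of target to source display dimensions (width, height).
    return (target_res[0] / source_res[0], target_res[1] / source_res[1])

def adapt_attributes(attrs, source_res, target_res):
    """Scale size-like display attributes so shared content looks similar
    on the target device. Attribute names here are hypothetical."""
    rx, ry = interoperability_ratio(source_res, target_res)
    scale = min(rx, ry)  # use the smaller axis ratio to avoid overflow
    adapted = dict(attrs)
    for key in ("font_size", "margin", "line_spacing"):
        if key in adapted:
            adapted[key] = round(adapted[key] * scale, 2)
    return adapted  # non-size attributes (e.g. color) pass through unchanged
```

For example, sharing from a 1920x1080 phone to a 960x540 display would halve font sizes and margins while leaving colors untouched.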
Abstract:
Provided is a method for predicting an emotion of a user by an electronic device. The method includes receiving, by the electronic device, a user context, a device context and an environment context from the electronic device and one or more other electronic devices connected to the electronic device, and determining, by the electronic device, a combined representation of the user context, the device context and the environment context. The method also includes determining, by the electronic device, a plurality of user characteristics based on the combined representation of the user context, the device context and the environment context; and predicting, by the electronic device, an emotion of the user based on the combined representation of the user context, the device context, the environment context and the plurality of user characteristics.
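The pipeline described (combine contexts, derive user characteristics, then predict) can be sketched in Python. This is a toy stand-in, not the patented model: the contexts are assumed to be numeric feature vectors, the "combined representation" is plain concatenation, and the scoring rule is invented for illustration.

```python
def combine_contexts(user_ctx, device_ctx, env_ctx):
    # Combined representation: here simply a concatenated feature vector.
    return user_ctx + device_ctx + env_ctx

def derive_characteristics(combined):
    # Toy user characteristics: mean level and dynamic range of the signal.
    return [sum(combined) / len(combined), max(combined) - min(combined)]

def predict_emotion(combined, characteristics, threshold=0.5):
    # Hypothetical scorer: mean context level plus a small variability bonus.
    score = sum(combined) / len(combined) + 0.1 * characteristics[1]
    return "positive" if score >= threshold else "negative"
```

A real implementation would replace the scorer with a learned model; the sketch only shows how the three stages feed into each other.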
Abstract:
A method for generating at least one segment of a video by an electronic device is provided. The method includes identifying at least one of a context associated with the video and an interaction of a user in connection with the video, analyzing at least one parameter in at least one frame of the video with reference to at least one of the context and the interaction of the user, wherein the at least one parameter includes at least one of a subject, an environment, an action of the subject, and an object, determining the at least one frame in which a change in the at least one parameter occurs, and generating at least one segment of the video comprising the at least one frame in which the parameter changed as a temporal boundary of the at least one segment.
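The core segmentation rule (place a temporal boundary at every frame where a monitored parameter changes) can be sketched directly. In this Python sketch, each frame is reduced to a tuple of its analyzed parameters (subject, environment, action, object); the per-frame analysis itself is assumed to have already happened.

```python
def segment_video(frame_params):
    """frame_params: one tuple of analyzed parameters per frame.
    A new segment starts at every frame where any parameter changes,
    so the changed frame becomes a temporal boundary of a segment."""
    segments = []
    start = 0
    for i in range(1, len(frame_params)):
        if frame_params[i] != frame_params[i - 1]:
            segments.append((start, i - 1))  # close the previous segment
            start = i
    if frame_params:
        segments.append((start, len(frame_params) - 1))
    return segments  # list of (first_frame, last_frame) index pairs
```

For instance, frames annotated (cat, park), (cat, park), (cat, beach) would split into two segments at the park-to-beach transition.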
Abstract:
A method for intelligently reading displayed contents by an electronic device is provided. The method includes obtaining a screen representation based on a plurality of contents displayed on a screen of the electronic device. The method includes extracting a plurality of insights comprising at least one of intent, importance, emotion, sound representation and information sequence of the plurality of contents from the plurality of contents based on the screen representation. The method includes generating audio emulating the extracted plurality of insights.
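A minimal Python sketch of the insight-driven reading order, assuming each displayed content item has already been annotated with an importance score and an emotion (the item schema is hypothetical). The sketch produces an ordered, emotion-annotated script that a TTS engine could render; actual audio generation is out of scope.

```python
def plan_audio(contents):
    """Order items by importance (the information sequence) and prefix each
    with its emotion tag so a speech engine could emulate it."""
    ordered = sorted(contents, key=lambda c: -c["importance"])
    return ["[{}] {}".format(c["emotion"], c["text"]) for c in ordered]
```

For example, an urgent banner would be read before a low-priority status line, each with its own emotional coloring.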
Abstract:
Accordingly, embodiments herein disclose a method and apparatus for retrieving intelligent information from an electronic device (100). The method includes receiving, by the electronic device (100), an input from a user. Further, the method includes identifying, by the electronic device (100), at least one data item to generate at least one metadata tag. Further, the method includes automatically generating, by the electronic device (100), the at least one metadata tag related to the at least one data item based on a plurality of parameters. Further, the method includes providing, by the electronic device (100), at least one priority to the at least one metadata tag. Further, the method includes storing, by the electronic device (100), the at least one metadata tag at the electronic device (100).
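The tag-generation and prioritization steps can be sketched as follows. This Python sketch assumes a simple data-item schema and uses usage frequency as the prioritization parameter; both choices are illustrative, not from the source.

```python
def generate_tags(data_item):
    # Hypothetical tagger: metadata tags from the item's type and keywords.
    return {data_item["type"], *data_item.get("keywords", [])}

def prioritize(tags, usage_counts):
    """Assign priority from a usage-frequency parameter:
    most-used tags first, ties broken alphabetically."""
    return sorted(tags, key=lambda t: (-usage_counts.get(t, 0), t))
```

The prioritized tag list would then be stored alongside the item so later user inputs can retrieve it by its highest-priority tags.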
Abstract:
Methods and systems are provided for predicting keystrokes using a neural network that analyzes the cumulative effects of a plurality of factors impacting the typing behavior of a user. The factors may include the typing pattern, previous keystrokes, specifics of the keyboard used for typing, and contextual parameters pertaining to the user and to a device displaying the keyboard. A plurality of features may be extracted and fused to obtain a plurality of feature vectors. The plurality of feature vectors can be optimized and processed by the neural network to identify known features and learn unknown features that impact the typing behavior. The neural network thereby predicts keystrokes using the known and unknown features.
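The fuse-then-score flow can be illustrated with a Python sketch. The fusion here is simple concatenation of per-factor feature vectors, and a linear scorer stands in for the neural network; both simplifications are assumptions made for the sketch.

```python
def fuse_features(feature_sets):
    # Fuse per-factor feature vectors (typing pattern, previous keystrokes,
    # keyboard specifics, context) into one vector by concatenation.
    fused = []
    for features in feature_sets:
        fused.extend(features)
    return fused

def predict_keystroke(fused, key_weights):
    """Toy linear scorer standing in for the neural network:
    returns the key whose weight vector best matches the fused features."""
    def score(weights):
        return sum(x * w for x, w in zip(fused, weights))
    return max(key_weights, key=lambda k: score(key_weights[k]))
```

In practice the fused vector would feed a trained network rather than a fixed weight table.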
Abstract:
A method for retrieving content includes processing a user request received by a first device, identifying other devices in proximity to the first device, and retrieving content related to the user request from the other devices based on one or more menu trees.
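A minimal Python sketch of the menu-tree lookup, assuming each nearby device exposes its content as a nested tree of named nodes (the tree schema and matching rule are assumptions):

```python
def search_menu_tree(node, query):
    # Depth-first search of one device's menu tree for names matching the query.
    hits = []
    if query.lower() in node["name"].lower():
        hits.append(node["name"])
    for child in node.get("children", []):
        hits.extend(search_menu_tree(child, query))
    return hits

def retrieve(query, nearby_devices):
    """nearby_devices: mapping of device name -> menu-tree root.
    Returns, per device, the menu entries matching the user request."""
    results = {}
    for device, tree in nearby_devices.items():
        found = search_menu_tree(tree, query)
        if found:
            results[device] = found
    return results
```

Devices whose trees contain no match are simply omitted from the result.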
Abstract:
Disclosed is a method for social interaction by a robot device. The method includes receiving an input from a user, determining an emotional state of the user by mapping the received input with a set of emotions, and dynamically interacting with the user based on the determined emotional state in response to the input. Dynamically interacting with the user includes generating contextual parameters based on the determined emotional state. The method includes determining an action in response to the received input based on the generated contextual parameters and performing the determined action. The method further includes receiving another input from the user in response to the performed action and dynamically updating the mapping of the received input with the set of emotions based on the other input for interacting with the user.
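The map-then-update loop can be sketched as a small Python class. The mapping here is a plain dictionary updated from the user's follow-up input; a real system would use a learned model, so treat this purely as an illustration of the feedback loop.

```python
class EmotionMapper:
    """Maps user inputs to emotions and updates the mapping
    when a follow-up input suggests a correction."""

    def __init__(self, mapping):
        self.mapping = dict(mapping)

    def determine(self, user_input):
        # Unknown inputs fall back to a neutral state.
        return self.mapping.get(user_input, "neutral")

    def update(self, user_input, corrected_emotion):
        # Dynamically revise the mapping based on the user's next input.
        self.mapping[user_input] = corrected_emotion
```

After an interaction, the robot would call `update` so the same input is mapped more accurately next time.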
Abstract:
A method for providing context based multimodal predictions in an electronic device is provided. The method includes detecting an input on a touch screen keyboard displayed on a screen of the electronic device. Further, the method includes generating one or more context based multimodal predictions based on the detected input from a language model. Furthermore, the method includes displaying the one or more context based multimodal predictions in the electronic device. An electronic device includes a processor configured to detect an input through a touch screen keyboard displayed on a screen of the electronic device, generate one or more context based multimodal predictions in accordance with the detected input from a language model, and cause the screen to display the one or more context based multimodal predictions in the electronic device.
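A toy Python sketch of context-based multimodal prediction, using a bigram count model in place of the language model. "Multimodal" is represented by letting candidate tokens be words or emoji; the corpus and model form are invented for illustration.

```python
from collections import Counter

def build_bigram_model(corpus):
    # Map each word to a count of the tokens (words or emoji) that follow it.
    model = {}
    for sentence in corpus:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            model.setdefault(a, Counter())[b] += 1
    return model

def predict_next(model, prev_word, top_k=3):
    # Return up to top_k most frequent continuations of the detected input.
    counts = model.get(prev_word)
    return [w for w, _ in counts.most_common(top_k)] if counts else []
```

The returned candidates would then be displayed above the touch screen keyboard for one-tap insertion.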
Abstract:
A method and apparatus for managing applications by an electronic device are provided. The method and apparatus include identifying, by a processor, a secondary application based on an application executed on the electronic device or content included in the application, displaying a representation corresponding to the secondary application on the electronic device, selecting the representation based on an input, and invoking the secondary application corresponding to the selected representation on the electronic device.
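The identify-and-invoke flow can be sketched with a simple rule table. The table below maps a primary app and a content type to a secondary app; the rules and app names are hypothetical examples, not from the source.

```python
# Hypothetical rule table: (primary app, content type) -> secondary app.
SECONDARY_APP_RULES = {
    ("messages", "address"): "maps",
    ("messages", "date"): "calendar",
    ("browser", "phone_number"): "dialer",
}

def identify_secondary_app(primary_app, content_type):
    # Pick the secondary app based on the running app and its content.
    return SECONDARY_APP_RULES.get((primary_app, content_type))

def invoke_if_selected(primary_app, content_type, selected):
    """Invoke the secondary app only if the user selected its displayed
    representation; returns the invoked app name or None."""
    app = identify_secondary_app(primary_app, content_type)
    return app if (app and selected) else None
```

For example, an address inside a message would surface a maps shortcut, invoked only when the user taps its representation.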