Abstract:
Disclosed herein are systems and methods for enabling mixed reality collaboration. A method may include receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via a transmissive display of a wearable device, wherein the first position is based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is based on the location data, and wherein the third position is further based on the first position.
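The chain of positions described above can be sketched as simple vector arithmetic: the object's position is an offset from the session handle, and the remote user's avatar is placed by re-applying that user's offset from their own session handle relative to the local one. This is a minimal illustrative sketch; the function names and tuple representation are assumptions, not the patented implementation.

```python
# Illustrative sketch of the position chain from the abstract.
# Positions are 3-D tuples; all names are hypothetical.

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def object_position(session_handle_pos, object_offset):
    # Second position: derived from the first (session handle) position.
    return add(session_handle_pos, object_offset)

def avatar_position(local_handle_pos, remote_handle_pos, remote_user_pos):
    # Third position: the second user's offset from their own session
    # handle, re-applied relative to the local session handle.
    offset = sub(remote_user_pos, remote_handle_pos)
    return add(local_handle_pos, offset)
```

For example, a remote user standing one unit in front of their handle appears one unit in front of the local handle: `avatar_position((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))` yields `(1.0, 1.0, 0.0)`.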
Abstract:
Provided herein is a technique by which static content may be presented in an underlying relationship to dynamic content. An example method may include providing for display of static content and providing for display of dynamic content, where the static content may be displayed in an underlying relationship relative to the dynamic content. The dynamic content may be responsive to a user input and the dynamic content may change in response to a change in the static content. The dynamic content may include a dynamic content response where the dynamic content response is selected from a plurality of available dynamic content responses. The static content may include an image of a page of a book and the dynamic content may include an animated character configured to read the page of the book.
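The selection of a dynamic content response from a plurality of available responses, triggered by a change in the static content, can be sketched as follows. The response table and function names are assumptions made for illustration only.

```python
import random

# Hypothetical sketch: static content (a book page) underlies dynamic
# content (an animated character); a page change triggers one dynamic
# response chosen from a plurality of available responses.

RESPONSES = ["read_aloud", "point_at_text", "turn_toward_page"]

def on_static_content_change(page_number, rng=random):
    # Select one dynamic content response from those available.
    response = rng.choice(RESPONSES)
    return {"page": page_number, "response": response}
```

Passing a seeded `random.Random` as `rng` makes the selection reproducible for testing.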
Abstract:
A system, device, method, and computer program product are provided for allowing a user of the device to more easily annotate data files and/or images received by or created by the electronic device or system. For example, according to one embodiment, when a user takes a digital picture using a camera-equipped mobile phone, annotation data may be automatically presented to the user when a preview of the image is first displayed on the electronic display. The annotation data may be presented to the user as a list that semi-transparently overlays the preview of the image. The annotation list and/or the individual annotations that make up the list may be customizable. The annotation choices in the list may correspond to keys on the electronic device. Annotation data may be stored with the image or file as embedded metadata. The selected annotation data may also be used to create file folders in a memory device and/or store the image or file in a particular file folder in the memory device.
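The key-to-annotation mapping and folder routing described above can be sketched as below. The annotation table, metadata layout, and folder scheme are illustrative assumptions, not the claimed implementation.

```python
import os

# Hypothetical sketch: each key on the device maps to one annotation
# choice; the selected annotation is embedded as metadata and also
# determines the destination folder for the image.

ANNOTATIONS = {"1": "family", "2": "work", "3": "travel"}

def annotate(image_meta, key):
    tag = ANNOTATIONS.get(key)
    if tag is not None:
        # Store the annotation with the image as embedded metadata.
        image_meta = dict(image_meta, annotation=tag)
    return image_meta

def destination_folder(image_meta, root="photos"):
    # Route the file into a folder named after its annotation, if any.
    return os.path.join(root, image_meta.get("annotation", "unsorted"))
```

Pressing key "2" after capture would tag the image as "work" and route it to a "work" folder under the photo root.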
Abstract:
Provided herein is a technique by which content may be shared with a remote user. An example method may include providing for display of content on a first device, synchronizing content between the first device and a second device, providing for display of an image captured by the second device on the first device, and providing for presentation of audio captured by the second device by the first device. The content may include an image of a page of a book. Synchronizing content between the first device and the second device may include directing advancing of a page on the second device in response to receiving an input directing the advancing of a page on the first device. Providing for display of an image captured by the second device on the first device may include providing for display of a video captured by the second device on the first device. The content may be an e-book shared by participants of a video call or video conference.
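The page-synchronization step, where advancing a page on one device directs the paired device to advance as well, can be sketched as follows. The class and method names are assumptions; a real system would carry this over a network link rather than a direct object reference.

```python
# Minimal sketch (assumed design) of page synchronization between two
# paired reading devices, as in a shared e-book during a video call.

class Reader:
    def __init__(self):
        self.page = 1
        self.peer = None

    def pair(self, other):
        self.peer, other.peer = other, self

    def advance(self, from_peer=False):
        self.page += 1
        # Direct the paired device to advance too, but avoid echoing
        # the command back to the originator.
        if self.peer is not None and not from_peer:
            self.peer.advance(from_peer=True)
```

After pairing, one call to `advance()` on either device moves both readers to the same page.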
Abstract:
In one exemplary embodiment, a method includes: capturing image data for physical content, where the physical content includes a recipient image indicative of at least one desired recipient; performing image recognition on the captured image data to identify the recipient image; matching the recipient image with corresponding contact information to obtain address information for the at least one desired recipient; and addressing an electronic communication to the at least one desired recipient using the address information, where the electronic communication includes the captured image data for the physical content.
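The recognize-match-address flow above can be sketched with the recognition step stubbed out, since real image recognition is beyond an abstract. The contact store, labels, and message layout are illustrative assumptions.

```python
# Hedged sketch of the addressing pipeline from the abstract. Image
# recognition is a placeholder; only the matching/addressing flow is shown.

CONTACTS = {"alice": "alice@example.com"}  # assumed contact store

def recognize_recipient(image_data):
    # Placeholder for real image recognition of the recipient image.
    return image_data.get("recipient_label")

def address_communication(image_data):
    label = recognize_recipient(image_data)
    address = CONTACTS.get(label)
    if address is None:
        raise LookupError("no contact matches the recognized recipient")
    # The captured image data travels with the electronic communication.
    return {"to": address, "attachment": image_data}
```

A photo recognized as depicting "alice" is thus addressed to her stored contact address with the image attached.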
Abstract:
A system, device, method, and computer program product are provided for allowing a user of the device to more easily annotate data files and/or images received by or created by the electronic device or system. For example, according to one embodiment, when a user takes a digital picture using a camera-equipped mobile phone, annotation data may be automatically presented to the user when a preview of the image is first displayed on the electronic display. The annotation data may be presented to the user as a list that semi-transparently overlays the preview of the image. The annotation list and/or the individual annotations that make up the list may be customizable. The annotation choices in the list may correspond to keys on the electronic device. Annotation data may be stored with the image or file as embedded metadata. The selected annotation data may also be used to create file folders in a memory device and/or store the image or file in a particular file folder in the memory device.
Abstract:
An apparatus for removing one or more echoes from audio content may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including receiving combined audio content including voice data associated with speech of users in a call and information including audio data provided to the users. The computer program code may further cause the apparatus to remove a first echo of a first item of voice data, associated with one or more of the users, from the combined audio content, based in part on a prior detection of the first item of voice data. The computer program code may further cause the apparatus to remove a second echo of the audio data, from the combined audio content, based in part on a previous detection of the audio data or a previous detection of data corresponding to the audio data. Corresponding methods and computer program products are also provided.
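The two-stage removal above can be sketched as subtracting scaled copies of the previously detected signals from the combined stream. Real echo cancellation uses adaptive filtering; the fixed gains and sample-aligned subtraction here are simplifying assumptions for illustration.

```python
# Simplified sketch: because the first item of voice data and the shared
# audio data were both detected earlier, scaled copies of those known
# signals can be subtracted from the combined audio content.

def remove_echo(combined, reference, gain):
    # Subtract a scaled copy of a previously detected signal.
    return [c - gain * r for c, r in zip(combined, reference)]

def clean(combined, first_voice, audio_data, voice_gain=0.5, audio_gain=0.3):
    # First stage: remove the echo of the first item of voice data.
    without_voice_echo = remove_echo(combined, first_voice, voice_gain)
    # Second stage: remove the echo of the shared audio data.
    return remove_echo(without_voice_echo, audio_data, audio_gain)
```

With both references known, the remaining samples approximate the other participants' speech alone.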
Abstract:
A method including forming a query to specifically request at least one user interface element not resident upon a device, transmitting the query to a remote repository comprising a plurality of user interface element definitions, dynamically retrieving response data from the remote repository in response to the query, and applying the response data to a user interface of the device.
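The query-retrieve-apply flow can be sketched with the remote repository modeled as a dictionary of user interface element definitions. The repository contents, element names, and "apply" semantics are assumptions for illustration.

```python
# Illustrative sketch of dynamically fetching a UI element definition
# that is not resident on the device and applying it to the device's UI.

REPOSITORY = {
    "volume_slider": {"type": "slider", "min": 0, "max": 100},
}

def query_repository(element_name):
    # Form and transmit a query specifically requesting one element.
    definition = REPOSITORY.get(element_name)
    if definition is None:
        raise KeyError("no definition for " + element_name)
    return definition

def apply_to_ui(ui, element_name):
    if element_name not in ui:  # element not resident upon the device
        ui[element_name] = query_repository(element_name)
    return ui
```

A device whose UI lacks `volume_slider` would fetch its definition on demand and merge it into the local interface.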
Abstract:
Disclosed is an apparatus comprising a processor and memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: receiving information from a first external apparatus indicating that the first external apparatus received a document associated with a uniform resource locator, evaluating the received information and historic document information, aggregating at least part of the received information into the historic document information based at least in part on the evaluation, and providing at least part of the aggregated historic document information to a second external apparatus.
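The receive-evaluate-aggregate-provide cycle can be sketched as a per-URL history that folds in reports from external apparatuses. The record layout, the deduplication-by-reporter "evaluation" rule, and all names are assumptions made for illustration.

```python
# Hedged sketch: reports that an external apparatus received a document
# at a URL are aggregated into per-URL historic document information,
# which can then be provided to a second external apparatus.

HISTORY = {}

def record_receipt(url, report):
    entry = HISTORY.setdefault(url, {"receipts": 0, "reporters": set()})
    # Evaluate the received information: aggregate only reports from
    # apparatuses not already counted for this URL.
    if report["apparatus_id"] not in entry["reporters"]:
        entry["reporters"].add(report["apparatus_id"])
        entry["receipts"] += 1
    return entry

def history_for(url):
    # Provide part of the aggregated historic document information.
    entry = HISTORY.get(url, {"receipts": 0, "reporters": set()})
    return {"url": url, "receipts": entry["receipts"]}
```

Repeated reports from the same apparatus leave the aggregate unchanged, while reports from new apparatuses increase it.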
Abstract:
Disclosed are a method and system for displaying content on a client terminal device received over a communication link, such as a computer network (e.g., the Internet) or a mobile telecommunications network. A request for selected content is received at a host terminal device from the client terminal device. A set of font images is then selected from a font library based on a determined frequency of selected font characteristics; the selected set of font images has fewer font images than the font library. The selected set of font images is preferably compressed and sent from the host terminal device to the client terminal device over the communication link. Next, the content, which contains content location information and font pattern codes, is preferably compressed and sent to the client terminal device over the communication link on a page-by-page basis. The content is then displayed at the client terminal device based on the selected set of font images, the content location information, and the font pattern codes for the content.
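The font-subsetting step above can be sketched by counting character frequency in the requested content and shipping only the glyph images actually used, rather than the whole library. The data structures and names are illustrative assumptions.

```python
from collections import Counter

# Sketch (assumed details) of the font-subset selection: keep only the
# glyph images for characters that occur in the requested content, so
# the set sent to the client is smaller than the full font library.

def select_font_subset(content, font_library):
    freq = Counter(content)
    return {ch: font_library[ch] for ch in freq if ch in font_library}
```

For a 26-glyph library and the content "abba", only the "a" and "b" glyphs would be compressed and sent to the client.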