Abstract:
A method for controlling an application in a mobile device is disclosed. The method includes receiving environmental information, inferring an environmental context from the environmental information, and controlling activation of the application based on a set of reference models associated with the inferred environmental context. In addition, the method may include receiving a sound input, extracting a sound feature from the sound input, transmitting the sound feature to a server configured to group a plurality of mobile devices into at least one similar context group, and receiving, from the server, information on a leader device or a non-leader device and the at least one similar context group.
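The activation decision described above can be sketched minimally. Assumptions not in the abstract: contexts are inferred by nearest-prototype matching on a feature vector, and the per-context reference models are reduced to a simple activation decision; the context labels and vectors are hypothetical.

```python
# Sketch: infer an environmental context from environmental information,
# then consult the reference models associated with that context to decide
# whether to activate the application. Prototype matching and the
# context -> decision mapping are illustrative assumptions.

def infer_context(env_info, context_prototypes):
    """Pick the context whose prototype feature vector is nearest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(context_prototypes, key=lambda c: dist(env_info, context_prototypes[c]))

def should_activate(env_info, context_prototypes, reference_models):
    """Activate only if the models for the inferred context call for it."""
    context = infer_context(env_info, context_prototypes)
    return reference_models.get(context, False)
```

For example, an environmental feature vector close to the "office" prototype would activate the application only if the "office" reference models permit it.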
Abstract:
Estimating a location of a mobile device is performed by comparing environmental information, such as environmental sound, associated with the mobile device with that of other devices to determine whether the environmental information is similar enough to conclude that the mobile device is in a location comparable to that of another device. The devices may be in comparable locations in that they are in geographically similar locations (e.g., same store, same street, same city, etc.). The devices may also be in comparable locations even though they are located in geographically dissimilar locations, because the environmental information of the two locations demonstrates that the devices are in the same perceived location. With knowledge that the devices are in comparable locations, and with knowledge of the location of one of the devices, certain actions, such as targeted advertising, may be taken with respect to another device that is within a comparable location.
Abstract:
A method includes tracking an object in each of a plurality of frames of video data to generate a tracking result. The method also includes performing object processing of a subset of frames of the plurality of frames selected according to a multi-frame latency of an object detector or an object recognizer. The method includes combining the tracking result with an output of the object processing to produce a combined output.
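The combining step above can be sketched as a loop in which a cheap tracker runs on every frame while a slower detector fires only on a subset of frames, set by its multi-frame latency. The tracker, detector, and frame representation below are hypothetical stand-ins, not any specific implementation.

```python
# Sketch: per-frame tracking combined with the output of an object
# detector/recognizer that runs only every DETECTOR_LATENCY frames.

DETECTOR_LATENCY = 3  # assumed multi-frame latency of the detector

def track(prev_box, frame):
    """Cheap per-frame tracker: drift the box by the frame's motion hint."""
    dx, dy = frame.get("motion", (0, 0))
    x, y, w, h = prev_box
    return (x + dx, y + dy, w, h)

def detect(frame):
    """Slow but accurate detector: returns a box only when it has one."""
    return frame.get("truth")

def combine(tracked_box, detected_box):
    """Favor the detector output when available; otherwise keep tracking."""
    return detected_box if detected_box is not None else tracked_box

def process(frames, initial_box):
    box = initial_box
    outputs = []
    for i, frame in enumerate(frames):
        box = track(box, frame)                 # every frame
        if i % DETECTOR_LATENCY == 0:           # subset of frames only
            box = combine(box, detect(frame))   # fuse when detector fires
        outputs.append(box)
    return outputs
```

Between detector outputs the tracking result carries the object forward; when the detector catches up, its output corrects any accumulated tracking drift.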
Abstract:
A method for recognizing a text block in an object is disclosed. The text block includes a set of characters. A plurality of images of the object are captured and received. The object in the received images is then identified by extracting a pattern in one of the object images and comparing the extracted pattern with predetermined patterns. Further, a boundary of the object in each of the object images is detected and verified based on predetermined size information of the identified object. Text blocks in the object images are identified based on predetermined location information of the identified object. Interim sets of characters in the identified text blocks are generated based on format information of the identified object. Based on the interim sets of characters, a set of characters in the text block in the object is determined.
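The final step, determining the set of characters from several interim sets, can be sketched as a per-position majority vote constrained by format information. Both the voting rule and the use of expected length as the format check are illustrative assumptions, not the method claimed.

```python
# Sketch: combine interim character sets recognized across multiple
# captured images into one result by per-position majority vote,
# discarding interim sets that violate the expected format (here, length).

from collections import Counter

def combine_interim(interim_sets, expected_length):
    """Majority vote at each character position across interim recognitions."""
    candidates = [s for s in interim_sets if len(s) == expected_length]
    if not candidates:
        return None
    return "".join(
        Counter(s[i] for s in candidates).most_common(1)[0][0]
        for i in range(expected_length)
    )
```

A single misread character in one frame (e.g., "8" for "B") is outvoted by correct reads of the same position in the other frames.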
Abstract:
A method for generating an anti-model of a sound class is disclosed. A plurality of candidate sound data is provided for generating the anti-model. A plurality of similarity values between the plurality of candidate sound data and a reference sound model of a sound class is determined. An anti-model of the sound class is generated based on at least one candidate sound data having the similarity value within a similarity threshold range.
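The selection step can be sketched directly from the abstract: score each candidate against the reference sound model and build the anti-model from candidates whose similarity falls within the threshold range. The similarity measure (negative squared distance to a mean feature vector) and averaging as the model-building step are illustrative assumptions.

```python
# Sketch: generate an anti-model of a sound class from candidate sound
# data whose similarity to the reference model lies within [lo, hi].

def similarity(candidate, reference_mean):
    """Assumed measure: negative squared distance to the reference mean."""
    return -sum((c - r) ** 2 for c, r in zip(candidate, reference_mean))

def build_anti_model(candidates, reference_mean, lo, hi):
    """Average the candidates whose similarity lies within the range."""
    selected = [c for c in candidates
                if lo <= similarity(c, reference_mean) <= hi]
    if not selected:
        return None
    dim = len(selected[0])
    return [sum(c[i] for c in selected) / len(selected) for i in range(dim)]
```

The threshold range excludes candidates that are too similar to the reference model (they belong to the class) as well as those too dissimilar to be useful as near-misses.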
Abstract:
A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be a location of a reference device having the greatest degree of similarity.
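The server-side step can be sketched as an argmax over similarity scores: compare the mobile device's sound signature with each reference device's signature and return the location of the best match. Cosine similarity between signature vectors is an assumed measure for illustration.

```python
# Sketch: determine the mobile device's location as the location of the
# reference device with the greatest ambient-sound similarity.

import math

def cosine(a, b):
    """Cosine similarity between two sound-signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def locate(mobile_sig, reference_devices):
    """reference_devices maps a location to its device's sound signature."""
    return max(reference_devices,
               key=lambda loc: cosine(mobile_sig, reference_devices[loc]))
```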
Abstract:
The various aspects are directed to automatic device-to-device connection control. An aspect extracts (1106) a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, receives (1108) a second sound signature from a peer device (102; 206; 650; 706; 708; 710; 712; 802; 804; 806), compares (1110) the first sound signature to the second sound signature, and pairs (1112) with the peer device. An aspect extracts (1206) a first sound signature, wherein the extracting the first sound signature comprises extracting a sound signature from a sound signal emanating from a certain direction, sends (1208) the first sound signature to a peer device (102; 206; 650; 706; 708; 710; 712; 802; 804; 806), and pairs (1210) with the peer device. An aspect detects (1306) a beacon sound signal, wherein the beacon sound signal is detected from a certain direction, extracts (1308) a code embedded in the beacon sound signal, and pairs (1312) with a peer device (102; 206; 650; 706; 708; 710; 712; 802; 804; 806).
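The first pairing flow above (extract, receive, compare, pair) can be sketched minimally. The signature itself (a normalized per-segment energy histogram), the comparison tolerance, and the helper names are illustrative assumptions; directional capture of the sound signal is assumed to have happened upstream.

```python
# Sketch: pair with a peer device when a sound signature extracted from a
# directionally captured sound signal matches the signature received from
# the peer.

def extract_signature(samples, bins=4):
    """Coarse signature: normalized energy per equal-length segment."""
    n = max(1, len(samples) // bins)
    energies = [sum(s * s for s in samples[i:i + n])
                for i in range(0, n * bins, n)]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def signatures_match(sig_a, sig_b, tol=0.1):
    """Element-wise comparison within an assumed tolerance."""
    return all(abs(a - b) <= tol for a, b in zip(sig_a, sig_b))

def try_pair(own_samples, peer_signature):
    """Return True (i.e., proceed to pair) when the signatures match."""
    return signatures_match(extract_signature(own_samples), peer_signature)
```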
Abstract:
A method of scanning an image of a document with a portable electronic device includes interactively indicating, in substantially real time on a user interface of the portable electronic device, an instruction for capturing at least one portion of an image to enhance quality. The indication is in response to identifying degradation associated with the portion(s) of the image. The method also includes capturing the portion(s) of the image with the portable electronic device according to the instruction. The method further includes stitching the captured portion(s) of the image in place of a degraded portion of a reference image corresponding to the document, to create a corrected stitched image of the document.
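The stitching step can be sketched as writing a freshly captured patch over the degraded region of the reference image. Representing images as 2-D lists and the degraded region as an axis-aligned rectangle are simplifying assumptions.

```python
# Sketch: stitch a captured portion in place of a degraded portion of the
# reference image to produce the corrected stitched image.

def stitch(reference, patch, top, left):
    """Return a copy of `reference` with `patch` written at (top, left)."""
    out = [row[:] for row in reference]
    for r, patch_row in enumerate(patch):
        for c, value in enumerate(patch_row):
            out[top + r][left + c] = value
    return out
```

In a real pipeline the patch would first be registered (aligned and warped) to the reference image; that alignment is omitted here.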
Abstract:
A sleep monitoring application is installed on a mobile device. The mobile device is placed in a location when a user sleeps and records environmental sound. The sleep monitoring application determines indicators of sleep activity such as breathing sounds made by the user, and determines a sleep state of the user based on the indicators of sleep activity. Sleep disorders can be detected from the indicators of sleep activity. The sleep monitoring application may generate a report that summarizes the user's sleep states and alerts the user to any sleep disorders. The sleep monitoring application can use the environmental sound and the determined sleep states to determine ambient sound that is associated with good sleep. Later, if the sleep monitoring application determines the user is having problems sleeping, the sleep monitoring application can play the determined ambient sound to help the user sleep.
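The mapping from indicators of sleep activity to a sleep state can be sketched with one indicator, a per-epoch breathing-rate estimate derived from the recorded sound. The thresholds and state names below are illustrative assumptions, not clinical values or values from the source.

```python
# Sketch: classify a sleep state per epoch from a breathing-rate indicator,
# then summarize the night as the fraction of epochs in each state.

def sleep_state(breaths_per_min):
    """Assumed thresholds on breathing rate -> coarse sleep state."""
    if breaths_per_min <= 0:
        return "unknown"
    if breaths_per_min < 12:
        return "deep"
    if breaths_per_min < 16:
        return "light"
    return "awake"

def summarize(epochs):
    """Report: fraction of monitored epochs spent in each sleep state."""
    states = [sleep_state(b) for b in epochs]
    return {s: states.count(s) / len(states) for s in set(states)}
```

A report like the abstract describes could be built from the output of `summarize`, e.g., flagging a night with an unusually high fraction of "awake" epochs.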
Abstract:
A method for responding in an augmented reality (AR) application of a mobile device to an external sound is disclosed. The mobile device detects a target. A virtual object is initiated in the AR application. Further, the external sound is received, by at least one sound sensor of the mobile device, from a sound source. Geometric information between the sound source and the target is determined, and at least one response for the virtual object to perform in the AR application is generated based on the geometric information.
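The geometric step can be sketched as computing the direction from the detected target to the sound source and deriving a response from it. The planar positions, the angle computation, and a "turn toward the sound" response are illustrative assumptions about one possible response.

```python
# Sketch: derive geometric information between the sound source and the AR
# target, and generate a response for the virtual object to perform.

import math

def direction_to_sound(target_pos, source_pos):
    """Angle (degrees) from the target to the sound source in the plane."""
    dx = source_pos[0] - target_pos[0]
    dy = source_pos[1] - target_pos[1]
    return math.degrees(math.atan2(dy, dx))

def response(target_pos, source_pos):
    """Assumed response: the virtual object turns to face the sound source."""
    return {"action": "turn", "heading_deg": direction_to_sound(target_pos, source_pos)}
```

Other responses keyed to the same geometry (e.g., moving toward or away from the source depending on distance) could be generated from the same inputs.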