Abstract:
The invention provides a robust method of controlling interactive media using gestures. A method of retrieving metadata information from a multimedia outlet device comprises the steps of: (1) extracting image hot spot areas in a current captured image using face detection; (2) detecting a human gesture directive in at least one image hot spot area using gesture recognition; (3) determining whether the gesture directive matches a command pre-assigned to a rich interaction module; (4) sending a signal corresponding to the detected pre-assigned command to the rich interaction module; (5) extracting a media image hot spot area from electrical signals sent from the multimedia outlet device; (6) matching any human gesture detected in at least one image hot spot area using gesture recognition to a specific pixel on a device screen; and (7) retrieving information from metadata assigned to an area of pixels on the screen.
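Steps (6) and (7) above can be illustrated with a minimal sketch: mapping a gesture position detected inside a camera-image hot spot to a screen pixel, then looking up metadata assigned to the pixel region containing it. The rectangle representation, region list, and all names here are illustrative assumptions, not the patented implementation.

```python
def retrieve_metadata(gesture_xy, hotspot, screen_size, metadata_regions):
    """Map a gesture inside a hot spot to a screen pixel and look up
    the metadata assigned to the pixel area containing that pixel.
    hotspot: (x, y, w, h) rectangle in camera-image coordinates.
    metadata_regions: list of ((x, y, w, h), metadata) screen regions."""
    hx, hy, hw, hh = hotspot
    gx, gy = gesture_xy              # gesture position in the camera image
    sw, sh = screen_size
    # Normalise the position within the hot spot, then scale to screen pixels
    px = int((gx - hx) / hw * sw)
    py = int((gy - hy) / hh * sh)
    # Find the metadata entry whose pixel area contains (px, py)
    for (x, y, w, h), meta in metadata_regions:
        if x <= px < x + w and y <= py < y + h:
            return (px, py), meta
    return (px, py), None            # no metadata assigned at this pixel
```

A gesture at the centre-right of the hot spot thus lands in the corresponding right-hand screen region, whose metadata is returned.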
Abstract:
The invention relates to a robust method of controlling interactive media using gestures. A method of controlling a multimedia device using face detection and (hot spot) motion provides robust accuracy in issued commands, the method involving the following steps: extracting a hot spot area from a current captured image (Ci); calculating and analyzing the difference between the current captured image (Ci) and the previous captured image (Ci−1), resulting in a difference image (Di); applying an erosion to Di to remove small areas; applying the extracted (hot spot) motion areas as masks to filter out motion outside the hot spot areas; adding Di to build a motion image; finding the largest and smallest x and y coordinates of all detected motion connected components, denoted lx, ly, sx and sy; and performing an algorithm to determine whether a hand gesture represents a command to control the multimedia device.
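The frame-differencing steps above can be sketched as follows. This is a minimal NumPy illustration, assuming greyscale frames, a simple 4-neighbour erosion, and an illustrative difference threshold; the function and parameter names are assumptions, not the patent's implementation.

```python
import numpy as np

def detect_motion_bbox(ci, ci_prev, hotspot_mask, threshold=20, erode_iters=1):
    """Compute Di = |Ci - Ci-1|, erode to remove small areas, mask to the
    hot spots, and return the smallest/largest motion coordinates
    (sx, sy, lx, ly), or None if no motion remains."""
    # Di: thresholded absolute difference between current and previous frame
    di = np.abs(ci.astype(np.int16) - ci_prev.astype(np.int16)) > threshold

    # Crude binary erosion: a pixel survives only if its 4 neighbours are set
    for _ in range(erode_iters):
        padded = np.pad(di, 1)
        di = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])

    # Apply the extracted hot-spot areas as a mask, filtering other motion
    di &= hotspot_mask.astype(bool)

    ys, xs = np.nonzero(di)
    if xs.size == 0:
        return None  # no motion detected inside any hot spot
    sx, sy, lx, ly = xs.min(), ys.min(), xs.max(), ys.max()
    return sx, sy, lx, ly
```

The returned bounding box (sx, sy)–(lx, ly) is what a downstream gesture classifier would examine to decide whether the motion represents a command.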
Abstract:
Embodiments of packet loss concealment in a hearing assistance device are generally described herein. A method for packet loss concealment can include receiving, at a first hearing assistance device, a signal frame and a first encoded packet stream from a second hearing assistance device. The method can include encoding, at the first hearing assistance device, the signal frame and determining, at the first hearing assistance device, that a second encoded packet stream was not received from the second hearing assistance device within a predetermined time. In response to determining that the second encoded packet stream was not received, the method can include decoding, at the first hearing assistance device, the encoded signal frame, and outputting the signal frame and the decoded signal frame.
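The concealment flow can be sketched as below. This is a minimal illustration under stated assumptions: the codec is a trivial stand-in (copying lists), and the deadline check is abstracted into the peer packet being absent; class and method names are hypothetical.

```python
class PacketLossConcealer:
    """Sketch of the abstract's flow: the local frame is always encoded,
    and when the peer's packet misses its deadline, the locally encoded
    frame is decoded and output as a substitute."""

    def encode(self, frame):
        return list(frame)   # stand-in for a real audio encoder

    def decode(self, packet):
        return list(packet)  # stand-in for a real audio decoder

    def process(self, local_frame, peer_packet=None):
        """Return (local output, peer output) for one frame period.
        peer_packet is None when the peer's stream was not received
        within the predetermined time."""
        encoded_local = self.encode(local_frame)  # always encode locally
        if peer_packet is None:
            # Peer packet lost: decode our own encoded frame and output
            # it alongside the local signal frame.
            return local_frame, self.decode(encoded_local)
        return local_frame, self.decode(peer_packet)
```

Substituting the locally encoded frame keeps the decoder fed at a constant frame rate even when the inter-device link drops packets.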
Abstract:
Disclosed herein, among other things, are apparatus and methods for neural network-driven frequency translation for hearing assistance devices. Various embodiments include a method of signal processing an input signal in a hearing assistance device, the hearing assistance device including a receiver and a microphone. The method includes performing neural network processing to train a processor to identify acoustic features in a plurality of audio signals and predict target outputs for the plurality of audio signals, and using the trained processor to control frequency translation of the input signal.
Abstract:
A method and system for calibration and compensation of color in a three dimensional display system includes user calibration of individual color channels using a multiplicity of grey screens viewed through three dimensional glasses. Look-up tables are generated to ease conversion of input pixels to color corrected pixels, pre-distorting the color of the pixels driven by the three dimensional display system. Input pixels are then converted using the look-up tables and color corrected frames are displayed to a user. The pre-distortion allows the user to perceive colors in the three dimensional system as intended, compensating for the distortions caused by the viewing glasses and other aspects of the three dimensional display system.
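The look-up-table conversion can be sketched as below. This is a minimal illustration assuming 8-bit channels and that calibration has produced a measured response curve per channel; the inversion step (finding the drive level whose measured output best matches each target level) and all names are assumptions.

```python
import numpy as np

def build_luts(measured, target):
    """Build per-channel 256-entry look-up tables. measured[ch][i] is the
    output level observed through the glasses when driving level i;
    target[ch][t] is the level the user should perceive for input t."""
    luts = []
    for ch in range(3):
        # Invert the measured response: for each target level, pick the
        # drive level whose measured output is closest to it (pre-distortion).
        lut = np.array([int(np.abs(measured[ch] - t).argmin())
                        for t in target[ch]], dtype=np.uint8)
        luts.append(lut)
    return luts

def apply_luts(frame, luts):
    """Convert an H x W x 3 input frame to a color-corrected frame."""
    out = np.empty_like(frame)
    for ch in range(3):
        out[..., ch] = luts[ch][frame[..., ch]]  # per-channel table lookup
    return out
```

Indexing a 256-entry table with the whole channel plane converts every pixel in one vectorised step, which is why LUTs "ease conversion" at display frame rates.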
Abstract:
Peer-to-peer mobility management in heterogeneous IP networks provides a peer-to-peer mobility module operable to intercept a data packet received at a communication protocol layer of an Internet Protocol communication stack. A translation table may be stored on a memory device. The translation table stores the real address of each of one or more network interfaces and a corresponding virtual address. The peer-to-peer mobility module may be further operable to modify the intercepted data packet using the real and virtual addresses stored in the translation table.
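The translation-table rewrite can be sketched as follows. This is a minimal illustration, assuming string addresses and a simple packet record; the class, field names, and the outbound/inbound split are assumptions about how such a module might apply the table.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

class MobilityModule:
    """Sketch of the mobility module: a translation table mapping real
    interface addresses to stable virtual addresses, used to rewrite
    intercepted packets in both directions."""

    def __init__(self):
        self.real_to_virtual = {}
        self.virtual_to_real = {}

    def register(self, real_addr, virtual_addr):
        # One translation-table entry: real address <-> virtual address
        self.real_to_virtual[real_addr] = virtual_addr
        self.virtual_to_real[virtual_addr] = real_addr

    def outbound(self, pkt):
        # Upper layers address peers by stable virtual address; replace it
        # with the peer's current real address before transmission.
        pkt.dst = self.virtual_to_real.get(pkt.dst, pkt.dst)
        return pkt

    def inbound(self, pkt):
        # Present the sender's stable virtual address to upper layers,
        # hiding any change of the underlying real interface address.
        pkt.src = self.real_to_virtual.get(pkt.src, pkt.src)
        return pkt
```

Because applications only ever see virtual addresses, a peer can move between heterogeneous interfaces by updating the table entry, without breaking established sessions.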
Abstract:
Disclosed are RNAi target sequences that can be used to treat AIDS by targeting HIV. Based on these target sequences, recombinant expression vectors, packaging vectors and cells were constructed which express siRNA and/or miRNA and/or ribozyme and/or antisense oligonucleotide targeting HIV. Also disclosed are applications of said recombinant expression vectors, packaging vectors and cells in preparing a medicament for treating AIDS.
Abstract:
Method and apparatus for microphone matching for wearable directional hearing assistance devices are provided. An embodiment includes a method for matching at least a first microphone to a second microphone using the user's voice as received from the user's mouth. The user's voice, as received by at least one microphone, is processed to determine a frequency profile associated with the voice of the user. Intervals where the user is speaking are detected using the frequency profile. Variations in microphone reception between the first microphone and the second microphone are adaptively canceled during these intervals, when the first microphone and second microphone are in a relatively constant spatial position with respect to the user's mouth.
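The adaptive cancellation gated on voice intervals can be sketched as below. This is a deliberately simplified illustration: a single adaptive gain (an LMS-style update) stands in for the per-frequency matching a real device would perform, and all names and the step size are assumptions.

```python
def match_microphones(mic1, mic2, voice_intervals, mu=0.05):
    """Estimate a gain g such that g * mic2 tracks mic1, updating only
    during sample intervals flagged as the user's own voice.
    mic1, mic2: sample sequences; voice_intervals: per-sample booleans."""
    g = 1.0
    for x1, x2, speaking in zip(mic1, mic2, voice_intervals):
        if not speaking:
            continue           # adapt only while the user is speaking
        e = x1 - g * x2        # residual mismatch between the channels
        g += mu * e * x2       # LMS-style gradient step on the gain
    return g
```

Gating the update on detected voice intervals ensures the adaptation sees a source at a fixed position (the user's mouth), so the estimated mismatch reflects the microphones rather than a moving external source.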