Abstract:
A wireless local area network system and a headset for the system. The headset uses voice-input information to set up the parameters needed to connect the headset to the corresponding access point and then starts the connection process. When the connection succeeds or fails, an appropriate voice prompt or visible signal informs the user of the headset's connection status.
Abstract:
The application relates to a network-assisted calling service comprising a service call to an assistance entity which translates an indication of a destination party into a destination identifier which is subsequently used for the establishment of a user call between the calling and the destination party. In the prior art the user call must either be set up manually by the calling terminal or be set up by the network node to which the assistance entity is connected, which suffers from disadvantages with respect to routing, complex charging schemes, etc. In the method of the invention, this is resolved by said destination identifier being returned through a signalling channel associated with the operative connection. The destination identifier is returned either to the calling party's terminal (C) or to a network node (N). Either the terminal (C) or the network node (N) subsequently sets up the user call.
Abstract:
The present invention concerns methods and apparatus for performing voice-controlled actions during an ongoing voice telephony session. In particular, the methods and apparatus of the present invention provide a voice-operated user interface to perform actions during an ongoing voice telephony session. Many of the actions that can be performed during the ongoing voice telephony session are context-sensitive and relate to the context of the telephone call. In addition, context information relating to the ongoing voice telephony session can be used to greatly simplify both the operation of the voice-controlled user interface and the programming of actions requested using the voice-controlled interface.
Abstract:
Systems and techniques for identifying a participant in a teleconference allow other participants at remote locations to be informed of the identity of a current speaker. A currently speaking participant is identified in a teleconference conducted using a connection on a circuit-switched network, and a signal indicating an identifier of the currently speaking participant is transmitted over the connection on the circuit-switched network. The signal is received, and the identifier of the currently speaking participant is displayed at a participating location. The displayed identifier is determined based on the indication in the signal.
Abstract:
A method for improving recognition results of a speech recognizer uses supplementary information to confirm recognition results. A user inputs speech to a speech recognizer residing on a mobile device or on a server at a remote location. The speech recognizer determines a recognition result based on the input speech. A confidence measure is calculated for the recognition result. If the confidence measure is below a threshold, the user is prompted for supplementary data. The supplementary data is determined dynamically based on ambiguities between the input speech and the recognition result; the supplementary data distinguishes the input speech from potential incorrect results. The supplementary data may be a subset of the alphanumeric characters that comprise the input speech, or other data associated with a desired result, such as an area code or location. The user may provide the supplementary data verbally, or manually using a keypad, touchpad, touchscreen, or stylus pen.
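The confidence-threshold flow in this abstract can be illustrated with a minimal sketch. All names, the toy scoring function, and the threshold value are illustrative assumptions, not the patented implementation; the point is only the control flow: recognize, check confidence, and on low confidence prompt for supplementary data chosen dynamically from the ambiguous positions.

```python
# Illustrative sketch of the confidence-check flow described in the abstract.
# The scorer, threshold, and helper names are assumptions for demonstration.

CONFIDENCE_THRESHOLD = 0.9

def recognize(speech, candidates):
    """Toy recognizer: score each candidate by positional character overlap."""
    def score(c):
        matches = sum(1 for a, b in zip(speech, c) if a == b)
        return matches / max(len(speech), len(c))
    best = max(candidates, key=score)
    return best, score(best)

def ambiguous_positions(speech, result):
    """Positions where input and result disagree -- the characters the user
    would be asked to confirm as supplementary data."""
    return [i for i, (a, b) in enumerate(zip(speech, result)) if a != b]

def confirm(speech, candidates, ask_user):
    result, confidence = recognize(speech, candidates)
    if confidence >= CONFIDENCE_THRESHOLD:
        return result
    # Low confidence: determine supplementary data dynamically from the
    # ambiguous positions, then keep only candidates consistent with it.
    positions = ambiguous_positions(speech, result)
    extra = ask_user(positions)  # e.g. {position: confirmed character}
    consistent = [c for c in candidates
                  if all(i < len(c) and c[i] == ch for i, ch in extra.items())]
    return consistent[0] if consistent else result
```

In use, a high-confidence result is returned directly, while a noisy input triggers the prompt and the supplementary characters disambiguate between close candidates (e.g. two phone numbers differing in one digit).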
Abstract:
A wireless communication system has a central computer (36), one or more access points (34), and personal badges (32) that communicate with the one or more access points (34). The central computer (36) includes a database indexed by either user identification codes or badge serial numbers. When a user activates a badge (32), the central computer assigns the badge so that any message for the user is directed to the badge. The user may also access his personal data section in a central computer database through the badge (32) while the activation is valid. When the user deactivates the badge, the central computer (36) deletes the association and returns the badge (32) to a non-user-specific state. Where docking stations are available to hold badges (32) not in use, the badges (32) may be configured so that they are activated when decoupled from the docking stations and deactivated when coupled to the docking stations.
Abstract:
The invention relates to a text input method for a communication terminal having a display device, at least two key functions, and a speech recognizer. The first key function initiates speech recognition and, as a result, an n-best list of the recognized and scored input is presented on the display device; the second key function makes a selection within the n-best list; and the first or a further key function then transfers the selected input into a text.
Abstract:
A system and method for providing wireless communication which is controlled by voice recognition software running on a controller. The system includes an earset communicator and a Base Station that allows wireless communication between these elements. The earset communicator rests comfortably on the user's ear and is held in place by an earhook. The transceiver Base Station communicates with the earset communicator and connects to a host controller, such as a personal computer ("PC") or a household product, and to a network interface such as an internet connection or phone line. Voice commands are used for many functions for controlling the system. The Base Station routes the earset microphone audio to the controller software for speech recognition and command processing. Speech recognition software on the controller interprets the voice command and acts accordingly.
Abstract:
A verbal input sentence is received from the user in the SR system (102), and a weight is assigned for the level of match for each sentence from a plurality of determined sentences according to the content of the input sentence. The N (206) sentences having the highest weight are selected from the plurality. If the weight of at least one of the N (206) sentences is higher than the threshold, the sentence having the highest weight is output as the recognized input sentence. If the weight of each selected sentence is lower than the threshold, the weight of each selected sentence is varied according to different predetermined criteria. If the varied weight of at least one of the N (206) sentences is higher than the threshold, the sentence having the highest varied weight is output as the recognized input sentence (209); otherwise an indication corresponding to an unrecognized input sentence is provided.
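The two-pass scheme in this abstract can be sketched as follows. The match-weight function, the threshold value, and the weight-adjustment step are placeholder assumptions; the abstract's "different predetermined criteria" are represented here by a caller-supplied `adjust` function.

```python
# Illustrative sketch of the two-pass n-best weighting described above.
# Scoring, threshold, and the adjustment criteria are illustrative only.

THRESHOLD = 0.7
N = 3  # size of the n-best list

def match_weight(input_sentence, candidate):
    """Placeholder match score: fraction of shared words."""
    a, b = set(input_sentence.split()), set(candidate.split())
    return len(a & b) / max(len(a), len(b))

def recognize_sentence(input_sentence, sentences, adjust):
    # First pass: weight every candidate and keep the N best.
    weighted = sorted(((match_weight(input_sentence, s), s) for s in sentences),
                      reverse=True)[:N]
    best_w, best_s = weighted[0]
    if best_w > THRESHOLD:
        return best_s
    # Second pass: vary the weights of the selected sentences according to
    # additional criteria (delegated here to the caller's `adjust`).
    varied = sorted(((adjust(w, s), s) for w, s in weighted), reverse=True)
    best_w, best_s = varied[0]
    if best_w > THRESHOLD:
        return best_s
    return None  # indication: input sentence not recognized
```

An exact match clears the threshold on the first pass; a partial match can still be recognized when the adjustment criteria boost one candidate above the threshold, and otherwise the unrecognized indication (here `None`) is returned.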