Abstract:
Embodiments of the present invention provide an open application programming interface (OpenAPI) selection method and device. The method includes: receiving an invocation request from a user, where the invocation request includes an OpenAPI function parameter; determining an OpenAPI equivalent set according to the OpenAPI function parameter; and selecting a target OpenAPI from multiple OpenAPIs according to a QoS attribute value that corresponds to each OpenAPI in the OpenAPI equivalent set. By adopting the embodiments of the present invention, an OpenAPI with better performance can be selected for a user from numerous OpenAPIs with equivalent functions, thereby improving the quality of service for the user.
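The selection step can be illustrated with a short Python sketch. The QoS attributes (latency_ms, availability), the registry structure, and the ranking rule below are illustrative assumptions rather than part of the disclosed method; the sketch only shows how a target OpenAPI might be picked from an equivalent set determined by the function parameter.

from dataclasses import dataclass

@dataclass
class OpenApi:
    name: str
    function: str          # the function this OpenAPI provides
    latency_ms: float      # assumed QoS attribute: average response time
    availability: float    # assumed QoS attribute: fraction of successful calls

def select_target_openapi(registry, function_parameter):
    """Determine the equivalent set for the requested function, then pick
    the member with the best QoS attribute values (an assumed ranking)."""
    # Step 1: build the OpenAPI equivalent set from the function parameter.
    equivalent_set = [api for api in registry if api.function == function_parameter]
    if not equivalent_set:
        raise LookupError(f"no OpenAPI offers function {function_parameter!r}")
    # Step 2: rank by QoS: higher availability first, then lower latency.
    return min(equivalent_set, key=lambda api: (-api.availability, api.latency_ms))

registry = [
    OpenApi("map-a", "geocode", latency_ms=120.0, availability=0.995),
    OpenApi("map-b", "geocode", latency_ms=80.0, availability=0.999),
    OpenApi("sms-a", "send_sms", latency_ms=200.0, availability=0.990),
]
print(select_target_openapi(registry, "geocode").name)  # -> map-b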
Abstract:
The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. According to the present invention, after speech data is acquired, user attribute recognition and content recognition are separately performed on the speech data to obtain a first user attribute recognition result and a content recognition result, and a corresponding operation is performed according to at least the first user attribute recognition result and the content recognition result.
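The two-branch recognition flow can be sketched in Python as follows. The recognizer functions are hypothetical stubs standing in for real acoustic and speech-to-text models; only the overall structure, with attribute recognition and content recognition performed separately and both results feeding one response decision, reflects the method above.

def recognize_user_attributes(speech_data: bytes) -> dict:
    # Hypothetical stub: a real implementation would infer attributes such as
    # an age group from acoustic features of the speech signal.
    return {"age_group": "child"}

def recognize_content(speech_data: bytes) -> str:
    # Hypothetical stub: a real implementation would run speech-to-text and
    # intent parsing on the audio.
    return "play a story"

def respond(speech_data: bytes) -> str:
    attributes = recognize_user_attributes(speech_data)   # first user attribute recognition result
    content = recognize_content(speech_data)              # content recognition result
    # The corresponding operation depends on both results, as described above.
    if attributes.get("age_group") == "child" and "story" in content:
        return "playing a children's story"
    return f"handling request: {content}"

print(respond(b"..."))  # -> playing a children's story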
Abstract:
An application program activation method is disclosed. The method includes: acquiring, by a user terminal, one or more pieces of annotation information of a current media object, and displaying the one or more pieces of annotation information (201), where the one or more pieces of annotation information are information obtained by annotating the current media object at a semantic layer; determining, by the user terminal, one or more application programs associated with the one or more pieces of annotation information (202); and responding, by the user terminal, to target annotation information selected by a user, and activating a target application program associated with the target annotation information (203), where the one or more application programs include the target application program, the one or more pieces of annotation information include the target annotation information, and the user is a user that operates the user terminal.
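The annotation-to-application flow can be sketched in Python as follows. The annotation strings, the mapping table, and the activate helper are illustrative assumptions; the sketch only mirrors steps (201) through (203) of the method.

# Step (201): annotation information of the current media object, displayed to the user.
annotations = ["restaurant: Sea Breeze", "song: Morning Light"]

# Step (202): association between annotation information and application programs.
annotation_to_apps = {
    "restaurant: Sea Breeze": ["maps", "food-review"],
    "song: Morning Light": ["music-player"],
}

def activate(app_name: str) -> None:
    # Hypothetical launcher; a real user terminal would start the application
    # through the operating system.
    print(f"activating {app_name}")

def on_user_selects(target_annotation: str) -> None:
    # Step (203): respond to the selected annotation information by activating
    # the associated target application program.
    apps = annotation_to_apps.get(target_annotation, [])
    if apps:
        activate(apps[0])

for note in annotations:
    print("annotation:", note)

on_user_selects("song: Morning Light")  # -> activating music-player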