Adaptive self-trained computer engines with associated databases and methods of use thereof

    Publication number: US11114088B2

    Publication date: 2021-09-07

    Application number: US15952802

    Application date: 2018-04-13

    Abstract: In some embodiments, the present invention provides for an exemplary computer system which includes at least the following components: an adaptive self-trained computer engine programmed, during a training stage, to electronically receive initial speech audio data generated by a microphone of a computing device; dynamically segment the initial speech audio data and the corresponding initial text into a plurality of user-specific subject-specific phonemes; dynamically associate a plurality of first timestamps with the plurality of user-specific subject-specific phonemes; and, during a transcription stage, to electronically receive to-be-transcribed speech audio data of at least one user; dynamically split the to-be-transcribed speech audio data into a plurality of to-be-transcribed speech audio segments; dynamically assign each timestamped to-be-transcribed speech audio segment to a particular core of a multi-core processor; and dynamically transcribe, in parallel, the plurality of timestamped to-be-transcribed speech audio segments based on a user-specific subject-specific speech training model.
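    As a rough illustration of the transcription-stage flow described in the abstract (split the incoming audio into timestamped segments, assign each segment to a processor core, transcribe the segments in parallel, and reassemble the results in time order), the following is a minimal Python sketch. It is not the patented implementation: the segment length, assumed audio format, and the AudioSegment, split_audio, and transcribe_segment names are hypothetical placeholders, and the per-user, per-subject speech model is stubbed out.

# Minimal sketch (not the patented implementation) of the transcription-stage
# flow: split timestamped audio segments across CPU cores and transcribe them
# in parallel. All names below are hypothetical placeholders.

from dataclasses import dataclass
from multiprocessing import Pool, cpu_count
from typing import List, Tuple


@dataclass
class AudioSegment:
    start_ms: int      # timestamp of the segment within the source audio
    end_ms: int
    samples: bytes     # raw audio payload for this segment


def split_audio(audio: bytes, segment_ms: int, bytes_per_ms: int) -> List[AudioSegment]:
    """Split raw audio into fixed-length, timestamped segments."""
    step = segment_ms * bytes_per_ms
    return [
        AudioSegment(start_ms=i // bytes_per_ms,
                     end_ms=min(i + step, len(audio)) // bytes_per_ms,
                     samples=audio[i:i + step])
        for i in range(0, len(audio), step)
    ]


def transcribe_segment(segment: AudioSegment) -> Tuple[int, str]:
    """Stand-in for decoding one segment with a user-specific, subject-specific model."""
    # A real engine would run acoustic and language-model inference here.
    return segment.start_ms, f"<transcript of {segment.start_ms}-{segment.end_ms} ms>"


def transcribe_parallel(audio: bytes) -> str:
    # Assumed format: 16 kHz, 16-bit mono (32 bytes per millisecond), 5-second segments.
    segments = split_audio(audio, segment_ms=5000, bytes_per_ms=32)
    # One worker per core; the pool assigns each timestamped segment to a core.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(transcribe_segment, segments)
    # Reassemble by timestamp so the transcript reads in spoken order.
    return " ".join(text for _, text in sorted(results))


if __name__ == "__main__":
    fake_audio = bytes(320_000)  # roughly 10 seconds of silence at the assumed rate
    print(transcribe_parallel(fake_audio))

    Sorting the results by each segment's first timestamp preserves the spoken order regardless of which core finishes first, which is the role the timestamps play in the abstract's parallel-transcription step.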

    ADAPTIVE SELF-TRAINED COMPUTER ENGINES WITH ASSOCIATED DATABASES AND METHODS OF USE THEREOF

    Publication number: US20210375266A1

    Publication date: 2021-12-02

    Application number: US17444683

    Application date: 2021-08-09

    Abstract: In some embodiments, the present invention provides for an exemplary computer system which includes at least the following components: an adaptive self-trained computer engine programmed, during a training stage, to electronically receive initial speech audio data generated by a microphone of a computing device; dynamically segment the initial speech audio data and the corresponding initial text into a plurality of user-specific subject-specific phonemes; dynamically associate a plurality of first timestamps with the plurality of user-specific subject-specific phonemes; and, during a transcription stage, to electronically receive to-be-transcribed speech audio data of at least one user; dynamically split the to-be-transcribed speech audio data into a plurality of to-be-transcribed speech audio segments; dynamically assign each timestamped to-be-transcribed speech audio segment to a particular core of a multi-core processor; and dynamically transcribe, in parallel, the plurality of timestamped to-be-transcribed speech audio segments based on a user-specific subject-specific speech training model.