-
Publication Number: US11705134B2
Publication Date: 2023-07-18
Application Number: US17314176
Filing Date: 2021-05-07
Applicant: NICE LTD.
Inventor: Alon Menahem Shoa , Roman Frenkel , Tamir Caspi
Abstract: Methods for voice authentication include receiving a plurality of mono telephonic interactions between customers and agents; creating a mapping of the plurality of mono telephonic interactions that illustrates which agent interacted with which customer in each of the interactions; determining how many agents each customer interacted with; identifying one or more customers an agent has interacted with that have the fewest interactions with other agents; and selecting a predetermined number of interactions of the agent with each of the identified customers. In some embodiments, the methods further include creating a voice print from first and second speaker components of each interaction; comparing the voice prints of a first selected interaction to the voice prints from a second selected interaction; calculating a similarity score between the voice prints; aggregating scores; and identifying the voice prints that are associated with the agent.
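The interaction-selection steps described in this abstract can be pictured with a short, hypothetical Python sketch. It is not the patented implementation: it only builds a mapping of which agents spoke with which customers, ranks an agent's customers by how few other agents they contacted, and selects a predetermined number of that agent's interactions with each. All names (`Interaction`, `select_interactions`, `per_customer`) and the ranking rule are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Interaction:
    agent_id: str
    customer_id: str
    audio_path: str  # mono recording of the call

def select_interactions(interactions, agent_id, num_customers=3, per_customer=2):
    """Illustrative sketch: pick interactions of `agent_id` with the customers
    who have spoken with the fewest other agents (assumed selection rule)."""
    # Map each customer to the set of agents they interacted with.
    agents_per_customer = defaultdict(set)
    for it in interactions:
        agents_per_customer[it.customer_id].add(it.agent_id)

    # Customers this agent spoke with, ranked by how few *other* agents they contacted.
    candidates = sorted(
        (c for c, agents in agents_per_customer.items() if agent_id in agents),
        key=lambda c: len(agents_per_customer[c] - {agent_id}),
    )[:num_customers]

    # Take a predetermined number of this agent's interactions with each selected customer.
    selected = []
    for customer in candidates:
        calls = [it for it in interactions
                 if it.agent_id == agent_id and it.customer_id == customer]
        selected.extend(calls[:per_customer])
    return selected
```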
-
Publication Number: US10885920B2
Publication Date: 2021-01-05
Application Number: US16236655
Filing Date: 2018-12-31
Applicant: NICE LTD
Inventor: Alon Menahem Shoa , Roman Frenkel , Matan Keret
IPC: G10L17/22
Abstract: A method for separating and authenticating speech of a speaker on an audio stream of speakers over an audio channel may include receiving audio stream data of the audio stream with speech from a speaker to be authenticated speaking with a second speaker. A voiceprint may be generated for each data chunk in the audio stream data divided into a plurality of data chunks. The voiceprint for each data chunk may be assessed as to whether the voiceprint has speech belonging to the speaker to be authenticated or to the second speaker using representative voiceprints of both speakers. An accumulated voiceprint may be generated using the verified data chunks with speech of the speaker to be authenticated. The accumulated voiceprint may be compared to the reference voiceprint of the speaker to be authenticated for authenticating the speaker speaking with the second speaker over the audio channel.
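As a rough illustration of the per-chunk assessment step, the hypothetical sketch below splits a stream of frame embeddings into fixed-size chunks, forms a simple voiceprint per chunk (a mean embedding, assumed purely for illustration), and attributes each chunk to the speaker to be authenticated or to the second speaker by cosine similarity against their representative voiceprints. The function names, chunk size, and cosine scoring are assumptions, not the patented method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_chunks(frame_embeddings, target_vp, other_vp, chunk_size=50):
    """Assign each chunk of the audio stream to the speaker to be authenticated
    or to the second speaker (illustrative mean-embedding voiceprints)."""
    verified_chunks = []
    for start in range(0, len(frame_embeddings), chunk_size):
        chunk = frame_embeddings[start:start + chunk_size]
        chunk_vp = np.mean(chunk, axis=0)        # simple per-chunk voiceprint
        if cosine(chunk_vp, target_vp) > cosine(chunk_vp, other_vp):
            verified_chunks.append(chunk_vp)     # speech attributed to the target speaker
    return verified_chunks
```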
-
Publication Number: US11646038B2
Publication Date: 2023-05-09
Application Number: US17099803
Filing Date: 2020-11-17
Applicant: NICE LTD
Inventor: Alon Menahem Shoa , Roman Frenkel , Matan Keret
IPC: G10L17/22
CPC classification number: G10L17/22
Abstract: A method for separating and authenticating speech of a speaker on an audio stream of speakers over an audio channel may include receiving audio stream data of the audio stream with speech from a speaker to be authenticated speaking with a second speaker. A voiceprint may be generated for each data chunk in the audio stream data divided into a plurality of data chunks. The voiceprint for each data chunk may be assessed as to whether the voiceprint has speech belonging to the speaker to be authenticated or to the second speaker using representative voiceprints of both speakers. An accumulated voiceprint may be generated using the verified data chunks with speech of the speaker to be authenticated. The accumulated voiceprint may be compared to the reference voiceprint of the speaker to be authenticated for authenticating the speaker speaking with the second speaker over the audio channel.
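Continuing the same hypothetical sketch, the final step of this abstract can be pictured as accumulating the verified chunks into a single voiceprint and comparing it to the enrolled reference voiceprint with a threshold decision. The averaging scheme and the threshold value are illustrative assumptions only.

```python
import numpy as np

def authenticate(verified_chunk_vps, reference_vp, threshold=0.7):
    """Build an accumulated voiceprint from the verified chunks and compare it
    to the reference voiceprint of the claimed speaker (assumed threshold rule)."""
    if not verified_chunk_vps:
        return False  # no speech attributable to the claimed speaker
    accumulated_vp = np.mean(verified_chunk_vps, axis=0)
    score = float(np.dot(accumulated_vp, reference_vp) /
                  (np.linalg.norm(accumulated_vp) * np.linalg.norm(reference_vp)))
    return score >= threshold
```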
-
Publication Number: US11031016B2
Publication Date: 2021-06-08
Application Number: US16453497
Filing Date: 2019-06-26
Applicant: NICE LTD.
Inventor: Alon Menahem Shoa , Roman Frenkel , Tamir Caspi
Abstract: Methods for voice authentication include receiving a plurality of mono telephonic interactions between customers and agents; creating a mapping of the plurality of mono telephonic interactions that illustrates which agent interacted with which customer in each of the interactions; determining how many agents each customer interacted with; identifying one or more customers an agent has interacted with that have the fewest interactions with other agents; and selecting a predetermined number of interactions of the agent with each of the identified customers. In some embodiments, the methods further include creating a voice print from first and second speaker components of each interaction; comparing the voice prints of a first selected interaction to the voice prints from a second selected interaction; calculating a similarity score between the voice prints; aggregating scores; and identifying the voice prints that are associated with the agent.
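The comparison and aggregation steps of this abstract can likewise be pictured with a short hypothetical sketch: each selected interaction contributes two voiceprints (its first and second speaker components), similarity scores are computed across two interactions involving the same agent, and the best-matching pair is attributed to the agent. The pairing logic and scoring rule are illustrative assumptions, not the claimed method.

```python
import numpy as np
from itertools import product

def cosine(a, b):
    # Cosine similarity between two voiceprint vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_agent_voiceprints(interaction_a, interaction_b):
    """Each interaction is a pair (speaker1_vp, speaker2_vp) of voiceprints.
    Because the agent appears in both calls, the highest-scoring
    cross-interaction pair is attributed to the agent (assumed aggregation rule)."""
    scores = {
        (i, j): cosine(vp_a, vp_b)
        for (i, vp_a), (j, vp_b) in product(enumerate(interaction_a),
                                            enumerate(interaction_b))
    }
    (idx_a, idx_b), best_score = max(scores.items(), key=lambda kv: kv[1])
    return interaction_a[idx_a], interaction_b[idx_b], best_score
```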