-
Publication No.: US20250165852A1
Publication Date: 2025-05-22
Application No.: US18514391
Application Date: 2023-11-20
Applicant: Oracle International Corporation
Inventor: Tomas Feith , Arno Schneuwly , Saeid Allahdadian , Matteo Casserini , Felix Schmidt
IPC: G06N20/00
Abstract: During pretraining, a computer generates three untrained machine learning models: a token sequence encoder, a token predictor, and a decoder that infers a frequency distribution of graph traversal paths. A sequence of lexical tokens is generated that represents a lexical text in a training corpus, and a graph is generated that also represents the lexical text. In the graph, multiple traversal paths are selected that collectively represent a sliding subsequence of the sequence of lexical tokens. From that subsequence, the token sequence encoder infers an encoded sequence that represents the subsequence. The decoder and the token predictor each accept the encoded sequence as input for their respective inferences, for which respective training losses are measured. Both training losses are combined into a combined loss that is used to increase the accuracy of all three machine learning models, for example by backpropagation of the combined loss.
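The combined-loss pretraining described in this abstract could look roughly like the following PyTorch sketch. The choice of an LSTM encoder, the linear prediction heads, the tensor shapes, and the simple sum of the two losses are illustrative assumptions, not the claimed architecture.

```python
# Minimal sketch of pretraining a token sequence encoder together with a token
# predictor and a path-distribution decoder on a single combined loss.
import torch
import torch.nn as nn

VOCAB, HIDDEN, NUM_PATHS = 1000, 128, 50   # assumed sizes

embed = nn.Embedding(VOCAB, HIDDEN)
encoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)   # token sequence encoder
token_predictor = nn.Linear(HIDDEN, VOCAB)            # predicts the next token
path_decoder = nn.Linear(HIDDEN, NUM_PATHS)           # infers a path frequency distribution

params = (list(embed.parameters()) + list(encoder.parameters())
          + list(token_predictor.parameters()) + list(path_decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

def pretrain_step(subseq, next_token, path_histogram):
    """subseq: (B, T) token ids of the sliding subsequence;
    next_token: (B,) target token ids;
    path_histogram: (B, NUM_PATHS) target frequency distribution of traversal paths."""
    _, (h, _) = encoder(embed(subseq))
    encoded = h[-1]                                    # encoded sequence, shape (B, HIDDEN)

    token_loss = nn.functional.cross_entropy(token_predictor(encoded), next_token)
    path_loss = nn.functional.kl_div(
        nn.functional.log_softmax(path_decoder(encoded), dim=-1),
        path_histogram, reduction="batchmean")

    combined_loss = token_loss + path_loss             # both losses combined into one
    optimizer.zero_grad()
    combined_loss.backward()                           # backpropagation of the combined loss
    optimizer.step()                                   # adjusts all three models at once
    return combined_loss.item()
```

Training all three models against one backpropagated loss is what ties the token-level and graph-level objectives together in this sketch.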
-
Publication No.: US20250110961A1
Publication Date: 2025-04-03
Application No.: US18374209
Application Date: 2023-09-28
Applicant: Oracle International Corporation
Inventor: Tomas Feith , Arno Schneuwly , Saeid Allahdadian , Matteo Casserini , Kristopher Leland Rice , Felix Schmidt
IPC: G06F16/2457 , G06F16/248
Abstract: Herein is dynamic and contextual ranking of reference documentation based on an interactively selected position in new source logic. A computer receives: a vocabulary of lexical tokens; a sequence of references that contains a first reference to a first reference document before a second reference to a second reference document; the respective subsets of the vocabulary that occur in the first and second reference documents; a new source logic that contains a sequence of lexical tokens; respective measurements of semantic distance between the new source logic and the first and second reference documents; and a selected position in the sequence of lexical tokens. Based on the selected position, the measurements of semantic distance are selectively increased. Based on that increase in the measurements of semantic distance, the relative ordering of the first and second references is reversed to generate and display a reordered sequence of references.
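A minimal Python sketch of this context-sensitive reranking is shown below. It assumes semantic distance is a single number per reference document and that documents sharing no vocabulary with the tokens around the selected position have their distance increased; the window size and penalty value are illustrative, not the claimed method.

```python
# Rerank reference documents after selectively increasing semantic distances
# based on an interactively selected position in the new source logic.

def rerank(references, distances, doc_vocab, source_tokens, selected_pos,
           window=5, penalty=1.0):
    """references: ordered list of reference-document ids
    distances:     {doc_id: semantic distance to the new source logic}
    doc_vocab:     {doc_id: set of vocabulary tokens occurring in that document}
    source_tokens: the new source logic as a sequence of lexical tokens
    selected_pos:  interactively selected index into source_tokens"""
    lo, hi = max(0, selected_pos - window), selected_pos + window + 1
    context = set(source_tokens[lo:hi])        # tokens around the selected position

    adjusted = dict(distances)
    for doc in references:
        # selectively increase the distance of documents unrelated to the context
        if not (context & doc_vocab[doc]):
            adjusted[doc] += penalty

    # reordering may reverse the relative order of two references
    return sorted(references, key=lambda doc: adjusted[doc])

# Example: the second reference overtakes the first after the adjustment.
order = rerank(["docA", "docB"],
               {"docA": 0.2, "docB": 0.3},
               {"docA": {"open"}, "docB": {"close"}},
               ["file", "close", "handle"], selected_pos=1)
print(order)   # ['docB', 'docA']
```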
-
Publication No.: US20250060951A1
Publication Date: 2025-02-20
Application No.: US18235461
Application Date: 2023-08-18
Applicant: Oracle International Corporation
Inventor: Tomas Feith , Arno Schneuwly , Saeid Allahdadian , Matteo Casserini , Felix Schmidt
IPC: G06F8/41 , G06F16/901
Abstract: In an embodiment providing natural language processing (NLP), a computer generates a histogram that correctly represents a graph, which in turn represents a lexical text, and generates a token sequence encoder that is trainable but initially untrained. During training, such as pretraining, the token sequence encoder infers an encoded sequence that incorrectly represents the lexical text; the encoded sequence is dense and therefore saves space. To increase the accuracy of the token sequence encoder by learning, the encoder is adjusted based on an indirectly measured numeric difference between the encoded sequence that incorrectly represents the lexical text and the histogram that correctly represents the graph.
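One way to realize the "indirectly measured numeric difference" is sketched below in PyTorch: an auxiliary linear decoder (an assumption of this sketch, not named in the abstract) maps the dense encoding to a predicted histogram, and the training loss compares that prediction against the correct graph histogram.

```python
# Minimal sketch: adjust a token sequence encoder so that its dense encoding
# indirectly matches the histogram that correctly represents the graph.
import torch
import torch.nn as nn

VOCAB, HIDDEN, BINS = 1000, 128, 64   # assumed sizes

embed = nn.Embedding(VOCAB, HIDDEN)
encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)   # trainable, initially untrained
decoder = nn.Linear(HIDDEN, BINS)                     # maps the encoding to histogram bins
optimizer = torch.optim.Adam(
    list(embed.parameters()) + list(encoder.parameters()) + list(decoder.parameters()),
    lr=1e-3)

def train_step(token_ids, graph_histogram):
    """token_ids: (B, T) lexical tokens; graph_histogram: (B, BINS) correct histogram."""
    _, h = encoder(embed(token_ids))
    encoded = h[-1]                                   # dense, space-saving encoded sequence
    predicted = torch.softmax(decoder(encoded), dim=-1)
    loss = nn.functional.mse_loss(predicted, graph_histogram)  # indirect numeric difference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                  # adjusts the encoder to increase accuracy
    return loss.item()
```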
-
Publication No.: US20250036934A1
Publication Date: 2025-01-30
Application No.: US18227758
Application Date: 2023-07-28
Applicant: Oracle International Corporation
Inventor: Tomas Feith , Arno Schneuwly , Saeid Allahdadian , Matteo Casserini , Felix Schmidt
IPC: G06N3/08
Abstract: Herein is validation of a trained classifier based on novel and accelerated estimation of a confusion matrix. In an embodiment, a computer hosts a trained classifier that infers, from many objects, a frequency of each class. An upscaled magnitude of each class is generated from the inferred frequency of the class, and an integer of each class is generated from the upscaled magnitude of the class. Based on those integers of the classes and a target integer for each class, counts are generated of the objects that are true positives, false positives, and false negatives of each class. Based on those counts, estimated totals of true positives, false positives, and false negatives are generated that characterize the fitness of the trained classifier. In an embodiment, those counts and totals are downscaled to be fractions from zero to one.
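The estimation pipeline in this abstract (frequencies → upscaled magnitudes → integers → per-class counts → downscaled totals) could be sketched as follows. The upscaling factor and the min/max counting rule are assumptions made for illustration, not the patented procedure.

```python
# Estimate aggregate confusion-matrix totals from per-class frequencies alone,
# without per-object ground-truth labels.

def estimate_confusion(inferred_freq, target_freq, scale=1000):
    """inferred_freq, target_freq: {class: fraction of objects in that class}."""
    # upscaled magnitude and integer count for each class
    inferred = {c: round(f * scale) for c, f in inferred_freq.items()}
    target = {c: round(f * scale) for c, f in target_freq.items()}

    tp = {c: min(inferred[c], target[c]) for c in target}       # true positives
    fp = {c: max(0, inferred[c] - target[c]) for c in target}   # false positives
    fn = {c: max(0, target[c] - inferred[c]) for c in target}   # false negatives

    totals = {"TP": sum(tp.values()), "FP": sum(fp.values()), "FN": sum(fn.values())}
    # downscale the totals back to fractions from zero to one
    return {k: v / scale for k, v in totals.items()}

print(estimate_confusion({"a": 0.6, "b": 0.4}, {"a": 0.5, "b": 0.5}))
# {'TP': 0.9, 'FP': 0.1, 'FN': 0.1}
```

Because only class-level counts are compared, the estimate is fast to compute, which matches the "accelerated" framing of the abstract.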
-