Systems and Methods for Synthesizing Code from Input and Output Examples

    Publication number: US20210019125A1

    Publication date: 2021-01-21

    Application number: US16929467

    Filing date: 2020-07-15

    Applicant: Google LLC

    Abstract: The present disclosure provides systems and methods for synthesizing computer-readable code based on the receipt of input and output examples. A computing system in accordance with the disclosure can be configured to receive a given input and output, access a library of operations, and perform a search of that library for operations (e.g., transpose, slice, norm, etc.) that can be applied to the input. By applying the operations to the input and tracking the results, the computing system may identify an expression comprising one operation, or a combination of operations, that, when applied to the input, generates the output. In this manner, implementations of the disclosure may be used to identify one or more solutions that a user having access to the library of operations may use to generate the output from the input.
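    The search the abstract describes can be sketched as a breadth-first enumeration over compositions of library operations. The operation library, matrix representation, and search depth below are illustrative assumptions, not the patent's actual operation set.

```python
def transpose(m):
    # Swap rows and columns of a tuple-of-tuples matrix.
    return tuple(zip(*m))

def double(m):
    # Multiply every element by two.
    return tuple(tuple(2 * v for v in row) for row in m)

def first_row(m):
    # Slice out the first row.
    return (m[0],)

# Hypothetical stand-in for the library of operations (transpose, slice, ...).
LIBRARY = {"transpose": transpose, "double": double, "first_row": first_row}

def synthesize(inp, out, max_depth=3):
    # Breadth-first search over compositions of library operations,
    # tracking the sequence of operation names that produced each value.
    frontier = [(inp, [])]
    for _ in range(max_depth):
        next_frontier = []
        for value, expr in frontier:
            for name, op in LIBRARY.items():
                try:
                    result = op(value)
                except Exception:
                    continue  # operation not applicable to this value
                candidate = expr + [name]
                if result == out:
                    return candidate  # expression that generates the output
                next_frontier.append((result, candidate))
        frontier = next_frontier
    return None  # no expression found within the depth bound

inp = ((1, 2), (3, 4))
out = ((2, 6), (4, 8))  # the transpose of inp, with every element doubled
print(synthesize(inp, out))  # -> ['transpose', 'double']
```

    Tracking the expression alongside each intermediate value is what lets the search report a solution the user can apply directly, rather than just confirming one exists.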

    ITERATIVE NEURAL CODE TRANSLATION
    Invention publication

    Publication number: US20240184555A1

    Publication date: 2024-06-06

    Application number: US18076189

    Filing date: 2022-12-06

    Applicant: Google LLC

    CPC classification number: G06F8/51 G06F8/42 G06F11/3616 G06N3/0455 G06N3/08

    Abstract: Techniques are described herein for iterative code generation using neural language models. In various implementations, an original source code snippet in a first programming language may be processed using a translation machine learning model to generate a first translation of the original source code snippet in a second programming language. The first translation of the original source code snippet may be evaluated to identify error(s) in the first translation. Based on the error(s), respective mask(s) may be inserted to generate a masked first translation of the original source code snippet in the second programming language. The masked first translation of the original source code snippet may be processed using the translation machine learning model to generate a second translation of the original source code snippet in the second programming language. The second translation may include infill(s) of corrected source code in place of one or more of the masks.
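    The translate / evaluate / mask / infill loop can be sketched as below. The toy model, the `len` -> `length` mistranslation, and the error detector are all illustrative assumptions standing in for a neural language model and a real static analyzer.

```python
import re

MASK = "<mask>"

class ToyTranslationModel:
    """Deterministic stand-in for a neural translation model (assumption)."""

    def translate(self, source):
        # Deliberately imperfect first pass: mistranslates `len(...)`.
        return source.replace("def ", "function ").replace("len(", "length(")

    def infill(self, masked):
        # Second pass: fill each mask with corrected source code.
        return masked.replace(MASK, "xs.length")

def find_errors(translation):
    # Toy evaluator: flag calls to the nonexistent `length(...)` builtin.
    return [m.span() for m in re.finditer(r"length\([^)]*\)", translation)]

def iterative_translate(model, source):
    first = model.translate(source)
    errors = find_errors(first)
    if not errors:
        return first
    # Insert a mask over each erroneous span (right to left keeps offsets valid).
    masked = first
    for start, end in reversed(errors):
        masked = masked[:start] + MASK + masked[end:]
    # Re-run the model so it infills corrected code in place of the masks.
    return model.infill(masked)

src = "def size(xs): return len(xs)"
print(iterative_translate(ToyTranslationModel(), src))
# -> function size(xs): return xs.length
```

    The key design point is that only the erroneous spans are masked, so the second pass preserves the parts of the first translation that were already correct.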

    TRANSLATING LARGE SOURCE CODE USING SPARSE SELF-ATTENTION

    Publication number: US20230350657A1

    Publication date: 2023-11-02

    Application number: US17731593

    Filing date: 2022-04-28

    Applicant: Google LLC

    CPC classification number: G06F8/51

    Abstract: Techniques are described herein for translating source code using sparse self-attention. In various implementations, a source code snippet in a first programming language may be processed to obtain graph(s) representing snippet tokens, and relationships therebetween. Based on the graph(s), a subset of snippet token pairs may be identified from a superset of all possible token pairs in the source code snippet. Each token pair of the subset may include snippet tokens that are represented by nodes connected by one or more edges of the one or more graphs. A self-attention network of a translation machine learning model may be adapted to sparsely attend across the identified subset of token pairs. The source code snippet may then be processed based on the adapted translation machine learning model to generate a translation of the source code snippet in a second programming language.
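    The restriction to graph-connected token pairs can be sketched as an attention mask applied before the softmax. The toy edge list and plain-Python single-row attention below are illustrative assumptions; a real model would do this over batched query/key score matrices.

```python
import math

def sparse_attention_mask(num_tokens, edges):
    # Token pair (i, j) may attend only if i == j (self-connection) or the
    # pair is connected by an edge of the program graph.
    allowed = [[i == j for j in range(num_tokens)] for i in range(num_tokens)]
    for i, j in edges:
        allowed[i][j] = True
        allowed[j][i] = True
    return allowed

def masked_softmax(scores, allowed_row):
    # Disallowed pairs contribute zero attention weight; allowed pairs
    # share the probability mass as in an ordinary softmax.
    exps = [math.exp(s) if ok else 0.0 for s, ok in zip(scores, allowed_row)]
    total = sum(exps)
    return [e / total for e in exps]

# Four snippet tokens; edges from a hypothetical syntax/data-flow graph.
mask = sparse_attention_mask(4, edges=[(0, 1), (1, 2)])
weights = masked_softmax([0.5, 1.0, 2.0, 3.0], mask[1])
print(weights[3])  # token 3 is not connected to token 1 -> 0.0
```

    Because the masked pairs never enter the softmax, attention cost scales with the number of graph edges rather than with the square of the snippet length, which is what makes large source files tractable.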

    Annotations for developers
    Invention grant

    Publication number: US11775271B1

    Publication date: 2023-10-03

    Application number: US17316331

    Filing date: 2021-05-10

    Applicant: Google LLC

    CPC classification number: G06F8/51 G06N3/04 G06N3/08

    Abstract: Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. A method includes: receiving first source code in a first higher-level programming language; processing the first source code, or an intermediate representation thereof, using a sequence-to-sequence neural network model to generate a sequence of outputs, each including a probability distribution; generating second source code in a second higher-level programming language by, for each output in the sequence of outputs: determining a highest probability in the probability distribution associated with the output; in response to the highest probability exceeding a first threshold, generating a predicted portion of the second source code based on a token that corresponds to the highest probability; and in response to the highest probability not exceeding the first threshold, generating a placeholder; and outputting the second source code.
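    The per-output threshold rule can be sketched as a small decoding loop. The vocabulary, probability distributions, threshold value, and placeholder text below are illustrative assumptions, not values from the patent.

```python
PLACEHOLDER = "<placeholder>"

def decode_with_placeholders(distributions, vocab, threshold=0.8):
    # For each probability distribution in the model's output sequence,
    # emit the argmax token when confident, else a placeholder annotation.
    tokens = []
    for dist in distributions:
        best = max(range(len(dist)), key=dist.__getitem__)
        if dist[best] > threshold:
            # Highest probability exceeds the threshold: generate the token.
            tokens.append(vocab[best])
        else:
            # Not confident: leave a placeholder for the developer to fill in.
            tokens.append(PLACEHOLDER)
    return " ".join(tokens)

vocab = ["print", "puts", "(", "x", ")"]
dists = [
    [0.90, 0.10, 0.00, 0.00, 0.00],  # confident: "print"
    [0.00, 0.00, 0.95, 0.05, 0.00],  # confident: "("
    [0.30, 0.30, 0.00, 0.40, 0.00],  # ambiguous -> placeholder
    [0.00, 0.00, 0.02, 0.00, 0.98],  # confident: ")"
]
print(decode_with_placeholders(dists, vocab))
# -> print ( <placeholder> )
```

    The placeholder acts as the "annotation for developers" of the title: it marks exactly the positions where the model was unsure, instead of silently emitting a low-confidence guess.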

    Predicting and/or applying symbolic transformation templates

    Publication number: US12147794B2

    Publication date: 2024-11-19

    Application number: US18070015

    Filing date: 2022-11-28

    Applicant: Google LLC

    Abstract: Implementations are described herein for predicting symbolic transformation templates to automate source code transformations. In various implementations, pair(s) of predecessor and successor source code snippets may be processed using a symbolic transformation template prediction (STTP) model to predict a symbolic transformation template that includes a predecessor portion that matches the predecessor source code snippet(s) of the pair(s) and a successor portion that matches the successor source code snippet(s) of the pair(s). At least one additional predecessor source code snippet may be identified that matches the predecessor portion of the predicted symbolic transformation template. Placeholders of the predecessor portion of the predicted symbolic transformation template may be bound to one or more tokens of the at least one additional predecessor source code snippet to create binding(s). The successor portion of the predicted symbolic transformation template may be applied to the bindings to generate additional successor source code snippet(s).
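    The bind-and-apply step can be sketched with a regex-based template, where capture groups play the role of placeholders. The `has_key` -> `in` rewrite and the pattern syntax are illustrative assumptions; the patent's templates are predicted by an STTP model rather than written by hand.

```python
import re

# Predecessor portion: a pattern whose capture groups are the placeholders.
# Successor portion: references those same placeholders via backreferences.
PREDECESSOR = r"(\w+)\.has_key\((\w+)\)"
SUCCESSOR = r"\2 in \1"

def apply_template(snippet):
    # re.sub binds the predecessor's placeholders to concrete tokens of the
    # snippet, then instantiates the successor portion with those bindings.
    return re.sub(PREDECESSOR, SUCCESSOR, snippet)

print(apply_template("if config.has_key(name): load(config[name])"))
# -> if name in config: load(config[name])
```

    A snippet that does not match the predecessor portion is returned unchanged, so the template can be applied safely across a whole codebase once it has been predicted from a few predecessor/successor example pairs.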

    Iterative neural code translation
    Invention grant

    Publication number: US12093672B2

    Publication date: 2024-09-17

    Application number: US18076189

    Filing date: 2022-12-06

    Applicant: Google LLC

    CPC classification number: G06F8/51 G06F8/42 G06F11/3616 G06N3/0455 G06N3/08

    Abstract: Techniques are described herein for iterative code generation using neural language models. In various implementations, an original source code snippet in a first programming language may be processed using a translation machine learning model to generate a first translation of the original source code snippet in a second programming language. The first translation of the original source code snippet may be evaluated to identify error(s) in the first translation. Based on the error(s), respective mask(s) may be inserted to generate a masked first translation of the original source code snippet in the second programming language. The masked first translation of the original source code snippet may be processed using the translation machine learning model to generate a second translation of the original source code snippet in the second programming language. The second translation may include infill(s) of corrected source code in place of one or more of the masks.
