Energy-based associative memory neural networks

    Publication No.: US12277487B2

    Publication Date: 2025-04-15

    Application No.: US17441463

    Filing Date: 2020-05-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing associative memory. In one aspect, a system comprises an associative memory neural network that processes an input to generate an output defining an energy corresponding to the input. A reading subsystem retrieves stored information from the associative memory neural network. The reading subsystem performs operations including receiving a given (i.e., query) input and retrieving a data element from the associative memory neural network that is associated with the given input. The retrieval is performed by iteratively adjusting the given input using the associative memory neural network.
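
    Retrieval in this design amounts to descending the energy surface the network defines: starting from the query, the input is repeatedly adjusted toward a nearby low-energy point, which is the recalled pattern. Below is a minimal sketch of that loop, not the patented implementation: the toy softmin energy, the finite-difference gradient, and the names energy and retrieve are all illustrative assumptions.

        # Sketch of energy-descent retrieval (illustrative, not the patent's method).
        import numpy as np

        stored = np.array([[1.0, 1.0], [-1.0, 1.0]])  # patterns the memory should recall

        def energy(x):
            # Toy energy: low near stored patterns (softmin over squared distances).
            d = np.sum((stored - x) ** 2, axis=1)
            return -np.log(np.sum(np.exp(-d)))

        def energy_grad(x, eps=1e-5):
            # Finite-difference gradient of the energy with respect to the input.
            g = np.zeros_like(x)
            for i in range(len(x)):
                dx = np.zeros_like(x)
                dx[i] = eps
                g[i] = (energy(x + dx) - energy(x - dx)) / (2 * eps)
            return g

        def retrieve(query, steps=200, lr=0.1):
            # Iteratively adjust the given input to reduce its energy.
            x = query.copy()
            for _ in range(steps):
                x = x - lr * energy_grad(x)
            return x

        print(retrieve(np.array([0.8, 0.9])))  # converges near the stored [1, 1]

    In the described system the energy would come from a trained neural network and the gradient from automatic differentiation; the shape of the iterative-adjustment loop is the point of the sketch.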

    GATED ATTENTION NEURAL NETWORKS
    Invention Application

    Publication No.: US20220366218A1

    Publication Date: 2022-11-17

    Application No.: US17763984

    Filing Date: 2020-09-07

    Abstract: A system is described that includes an attention neural network configured to receive an input sequence and process it to generate an output. The attention neural network includes an attention block configured to receive a query input, a key input, and a value input that are derived from an attention block input. The attention block includes an attention neural network layer configured to: receive an attention layer input derived from the query input, the key input, and the value input, and apply an attention mechanism to the query input, the key input, and the value input to generate an attention layer output for the attention neural network layer; and a gating neural network layer configured to apply a gating mechanism to the attention block input and the attention layer output of the attention neural network layer to generate a gated attention output.
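
    The gating wraps an otherwise standard attention layer: instead of a plain residual sum, the block input and the attention layer output are combined through a learned gate. A minimal sketch follows, assuming single-head dot-product attention and a sigmoid interpolation gate (one of several mechanisms such a block could use); all weights are random placeholders.

        # Sketch of a gated attention block (illustrative assumptions throughout).
        import numpy as np

        rng = np.random.default_rng(0)
        d = 8                         # model width
        x = rng.normal(size=(5, d))   # attention block input: 5 positions

        # Query, key, and value inputs derived from the attention block input.
        Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv

        def softmax(z):
            z = z - z.max(axis=-1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        # Attention layer output.
        attn = softmax(q @ k.T / np.sqrt(d)) @ v

        # Gating layer: a sigmoid gate computed from the block input and the
        # attention output interpolates between the two.
        Wg = rng.normal(size=(2 * d, d)) * 0.1
        gate = 1.0 / (1.0 + np.exp(-np.concatenate([x, attn], axis=-1) @ Wg))
        gated_output = gate * x + (1.0 - gate) * attn
        print(gated_output.shape)  # (5, 8)

    Sigmoid interpolation is only one candidate gate; GRU-style gates, for example, fit the same slot between block input and attention output.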

    AUGMENTING ATTENTION-BASED NEURAL NETWORKS TO SELECTIVELY ATTEND TO PAST INPUTS

    Publication No.: US20240046103A1

    Publication Date: 2024-02-08

    Application No.: US18486060

    Filing Date: 2023-10-12

    CPC classification number: G06N3/084 G06N3/08 G06F18/2148 G06N3/047

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input that is a sequence to generate a network output. In one aspect, one of the methods includes, for each particular sequence of layer inputs: for each attention layer in the neural network: maintaining episodic memory data; maintaining compressed memory data; receiving a layer input to be processed by the attention layer; and applying an attention mechanism over (i) the compressed representation in the compressed memory data for the layer, (ii) the hidden states in the episodic memory data for the layer, and (iii) the respective hidden state at each of the plurality of input positions in the particular network input to generate a respective activation for each input position in the layer input.
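
    Each attention layer therefore sees three ranges of context: a compressed summary of the distant past, an episodic buffer of recent hidden states, and the current segment. A minimal sketch of one such layer follows, assuming mean pooling as the compression function and a fixed buffer size; the names episodic, compressed, and step are illustrative, not taken from the patent.

        # Sketch of attention over compressed + episodic + current hidden states
        # (illustrative assumptions throughout).
        import numpy as np

        rng = np.random.default_rng(1)
        d, cap, rate = 8, 6, 2          # width, episodic capacity, compression rate
        episodic = list(rng.normal(size=(cap, d)))  # recent hidden states, oldest first
        compressed = []                 # coarser summaries of evicted states

        def softmax(z):
            z = z - z.max(axis=-1, keepdims=True)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        def step(segment):
            # Attend over compressed memory, episodic memory, and the current segment.
            mem = np.array(compressed + episodic + list(segment))
            out = softmax(segment @ mem.T / np.sqrt(d)) @ mem
            # Update memories: evict the oldest episodic states into compressed form.
            episodic.extend(segment)
            while len(episodic) > cap:
                evicted = [episodic.pop(0) for _ in range(rate)]
                compressed.append(np.mean(evicted, axis=0))
            return out

        print(step(rng.normal(size=(4, d))).shape)  # (4, 8)

    Mean pooling stands in for whatever learned compression function the layer actually uses; the structural point is that old states are summarized rather than discarded, so attention can still reach them.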

    Augmenting attention-based neural networks to selectively attend to past inputs

    Publication No.: US11829884B2

    Publication Date: 2023-11-28

    Application No.: US17033396

    Filing Date: 2020-09-25

    CPC classification number: G06N3/084 G06F18/2148 G06N3/047 G06N3/08

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input that is a sequence to generate a network output. In one aspect, one of the methods includes, for each particular sequence of layer inputs: for each attention layer in the neural network: maintaining episodic memory data; maintaining compressed memory data; receiving a layer input to be processed by the attention layer; and applying an attention mechanism over (i) the compressed representation in the compressed memory data for the layer, (ii) the hidden states in the episodic memory data for the layer, and (iii) the respective hidden state at each of the plurality of input positions in the particular network input to generate a respective activation for each input position in the layer input.

    AUGMENTING ATTENTION-BASED NEURAL NETWORKS TO SELECTIVELY ATTEND TO PAST INPUTS

    Publication No.: US20210089829A1

    Publication Date: 2021-03-25

    Application No.: US17033396

    Filing Date: 2020-09-25

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input that is a sequence to generate a network output. In one aspect, one of the methods includes, for each particular sequence of layer inputs: for each attention layer in the neural network: maintaining episodic memory data; maintaining compressed memory data; receiving a layer input to be processed by the attention layer; and applying an attention mechanism over (i) the compressed representation in the compressed memory data for the layer, (ii) the hidden states in the episodic memory data for the layer, and (iii) the respective hidden state at each of the plurality of input positions in the particular network input to generate a respective activation for each input position in the layer input.

    GATED ATTENTION NEURAL NETWORKS
    Invention Publication

    Publication No.: US20240320469A1

    Publication Date: 2024-09-26

    Application No.: US18679200

    Filing Date: 2024-05-30

    CPC classification number: G06N3/044 G06N3/048 G06N3/08

    Abstract: A system is described that includes an attention neural network configured to receive an input sequence and process it to generate an output. The attention neural network includes an attention block configured to receive a query input, a key input, and a value input that are derived from an attention block input. The attention block includes an attention neural network layer configured to: receive an attention layer input derived from the query input, the key input, and the value input, and apply an attention mechanism to the query input, the key input, and the value input to generate an attention layer output for the attention neural network layer; and a gating neural network layer configured to apply a gating mechanism to the attention block input and the attention layer output of the attention neural network layer to generate a gated attention output.

    ENERGY-BASED ASSOCIATIVE MEMORY NEURAL NETWORKS

    Publication No.: US20220180147A1

    Publication Date: 2022-06-09

    Application No.: US17441463

    Filing Date: 2020-05-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing associative memory. In one aspect, a system comprises an associative memory neural network that processes an input to generate an output defining an energy corresponding to the input. A reading subsystem retrieves stored information from the associative memory neural network. The reading subsystem performs operations including receiving a given (i.e., query) input and retrieving a data element from the associative memory neural network that is associated with the given input. The retrieval is performed by iteratively adjusting the given input using the associative memory neural network.

    Scalable and compressive neural network data storage system

    Publication No.: US11983617B2

    Publication Date: 2024-05-14

    Application No.: US17102318

    Filing Date: 2020-11-23

    CPC classification number: G06N3/045 G06F16/2272 G06N3/08

    Abstract: A system for compressed data storage using a neural network. The system comprises a memory comprising a plurality of memory locations configured to store data; a query neural network configured to process a representation of an input data item to generate a query; an immutable key data store comprising key data for indexing the plurality of memory locations; an addressing system configured to process the key data and the query to generate a weighting associated with the plurality of memory locations; a memory read system configured to generate output memory data from the memory based upon the generated weighting associated with the plurality of memory locations and the data stored at the plurality of memory locations; and a memory write system configured to write received write data to the memory based upon the generated weighting associated with the plurality of memory locations.
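
    The read/write path pairs an immutable key store with mutable memory: the query neural network maps an input item to a query, the addressing system scores that query against every key to produce a weighting, and both reads and writes are blends over memory locations under that weighting. A minimal sketch under those assumptions follows; the dot-product addressing, the linear stand-in for the query network, and all names are hypothetical.

        # Sketch of key-addressed compressed storage (hypothetical details).
        import numpy as np

        rng = np.random.default_rng(2)
        n_loc, d_key, d_val = 16, 8, 4
        keys = rng.normal(size=(n_loc, d_key))      # immutable key data store
        memory = np.zeros((n_loc, d_val))           # mutable memory locations
        Wq = rng.normal(size=(d_key, d_key)) * 0.1  # linear stand-in for the query network

        def address(item):
            # Addressing: similarity of the query to every key -> softmax weighting.
            query = item @ Wq
            scores = keys @ query / np.sqrt(d_key)
            e = np.exp(scores - scores.max())
            return e / e.sum()

        def write(item, value):
            # Memory write: spread the value across locations by the weighting.
            memory[:] += np.outer(address(item), value)

        def read(item):
            # Memory read: weighting-blended contents of the memory locations.
            return address(item) @ memory

        item = rng.normal(size=d_key)
        write(item, np.array([1.0, 2.0, 3.0, 4.0]))
        print(read(item))  # the written value, scaled by the weighting's self-overlap

    Keeping the keys immutable while only the memory contents change is what makes the same weighting reusable for both the read and the write side of the system.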
