MULTI-TASK NEURAL NETWORKS WITH TASK-SPECIFIC PATHS

    Publication Number: US20200380372A1

    Publication Date: 2020-12-03

    Application Number: US16995655

    Application Date: 2020-08-17

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
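
    The abstract above describes routing a task-specific path through a layered "super" network of modular sub-networks, so that only a proper subset of the modules in each layer is active for a given task, followed by a per-task output layer. The sketch below is a minimal NumPy illustration of that routing idea; the class and parameter names (SuperNetwork, set_path, modules_per_layer) and the averaging of active module outputs are assumptions made for this example, not the patented implementation.

```python
# Minimal sketch of task-specific paths through a layered "super" network.
# Names and the output-combination rule are illustrative assumptions.
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class SuperNetwork:
    """A layered super network; each layer holds several modular sub-networks."""

    def __init__(self, dim, num_layers, modules_per_layer, task_output_dims, seed=0):
        rng = np.random.default_rng(seed)
        # Each modular sub-network is reduced here to a single dense weight matrix.
        self.layers = [
            [rng.normal(0.0, 0.1, size=(dim, dim)) for _ in range(modules_per_layer)]
            for _ in range(num_layers)
        ]
        # A separate output layer per machine learning task.
        self.output_layers = {
            task: rng.normal(0.0, 0.1, size=(dim, out_dim))
            for task, out_dim in task_output_dims.items()
        }
        self.task_paths = {}  # task -> per-layer list of active module indices

    def set_path(self, task, path):
        """path[i] is the proper subset of module indices active in layer i."""
        self.task_paths[task] = path

    def forward(self, x, task):
        for layer, active in zip(self.layers, self.task_paths[task]):
            # Only modules designated active by the task's path process the input;
            # their outputs are combined (here, by averaging).
            x = np.mean([relu(x @ layer[m]) for m in active], axis=0)
        # Finish with the output layer corresponding to the identified task.
        return x @ self.output_layers[task]


net = SuperNetwork(dim=8, num_layers=3, modules_per_layer=4,
                   task_output_dims={"task_a": 5, "task_b": 2})
net.set_path("task_a", [[0, 2], [1], [0, 3]])
print(net.forward(np.ones((1, 8)), "task_a").shape)  # -> (1, 5)
```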

    Training neural networks using posterior sharpening

    Publication Number: US10824946B2

    Publication Date: 2020-11-03

    Application Number: US16511496

    Application Date: 2019-07-15

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.
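
    The abstract outlines a training loop in which every network parameter carries its own distribution parameters (for example, a Gaussian mean and log standard deviation), a temporary gradient yields a sharpened training value per parameter, and the distribution parameters are then updated from those training values. The NumPy sketch below is a loose illustration of that loop on a toy regression problem; the loss, the step sizes eta and lr, and the simple update rules are assumptions for illustration, not the claimed method.

```python
# Rough sketch of the loop in the abstract: sample weights from a per-parameter
# Gaussian posterior, compute a temporary gradient, form a sharpened training
# value, and update the distribution parameters toward it. The loss and update
# rules are simplified assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem standing in for an arbitrary network and loss.
X = rng.normal(size=(64, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=64)

# Distribution parameters maintained per network parameter:
# a posterior mean and a log standard deviation.
mu = np.zeros(3)
log_sigma = np.full(3, -2.0)

eta, lr = 0.1, 0.05  # sharpening step size and outer learning rate (assumed)
for _ in range(500):
    sigma = np.exp(log_sigma)
    w = mu + sigma * rng.normal(size=3)        # sample weights from the posterior

    # Temporary gradient value for each parameter on the current batch.
    grad = (2.0 / len(y)) * X.T @ (X @ w - y)

    # Current training value: the sampled weight, sharpened along the gradient.
    w_sharp = w - eta * grad

    # Update the distribution parameters in accordance with the training values
    # (a crude pull toward w_sharp; stands in for the full variational update).
    mu += lr * (w_sharp - mu)
    log_sigma += 0.1 * lr * (np.log(np.abs(w_sharp - mu) + 1e-8) - log_sigma)

print("posterior means:", np.round(mu, 2), " true weights:", w_true)
```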

    Recommending content using neural networks

    Publication Number: US10438114B1

    Publication Date: 2019-10-08

    Application Number: US14821463

    Application Date: 2015-08-07

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for content recommendation using neural networks. One of the methods includes receiving context information for an action recommendation; processing the context information using a neural network that comprises one or more Bayesian neural network layers to generate, for each of the actions, one or more parameters of a distribution over possible action scores for the action; and selecting an action from the plurality of possible actions using the parameters of the distributions over the possible action scores for the action.
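
    The abstract describes producing, for every candidate action, the parameters of a distribution over its possible score (via Bayesian neural network layers) and then selecting an action using those distributions. The sketch below illustrates one plausible selection rule, a Thompson-sampling-style draw from per-action Gaussian score distributions; the network sizes, the single Bayesian output layer, and the sampling rule are illustrative assumptions rather than the patented method.

```python
# Sketch of selecting an action from per-action score distributions produced
# by a network with a Bayesian output layer. Sizes and the sampling rule are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
context_dim, hidden_dim, num_actions = 6, 16, 4

# Deterministic hidden layer followed by a Bayesian output layer whose weights
# carry a Gaussian posterior (mean and standard deviation per weight).
W_hidden = rng.normal(0, 0.3, size=(context_dim, hidden_dim))
W_out_mu = rng.normal(0, 0.3, size=(hidden_dim, num_actions))
W_out_sigma = np.full((hidden_dim, num_actions), 0.1)


def score_distributions(context):
    """Return per-action mean and std of the action-score distribution."""
    h = np.tanh(context @ W_hidden)
    mean = h @ W_out_mu
    # Variance of a sum of independent Gaussian-weighted terms.
    std = np.sqrt((h ** 2) @ (W_out_sigma ** 2))
    return mean, std


def recommend(context):
    mean, std = score_distributions(context)
    sampled_scores = rng.normal(mean, std)     # one draw per candidate action
    return int(np.argmax(sampled_scores))      # pick the best sampled action


context = rng.normal(size=context_dim)
print("recommended action:", recommend(context))
```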

    Training neural networks using posterior sharpening

    Publication Number: US11836630B2

    Publication Date: 2023-12-05

    Application Number: US17024217

    Application Date: 2020-09-17

    CPC Classification Numbers: G06N3/084; G06N3/044; G06N3/047

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.

    Multi-task neural networks with task-specific paths

    Publication Number: US10748065B2

    Publication Date: 2020-08-18

    Application Number: US16526240

    Application Date: 2019-07-30

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.

    Neural episodic control
    Invention Grant

    Publication Number: US10664753B2

    Publication Date: 2020-05-26

    Application Number: US16445523

    Application Date: 2019-06-19

    Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the plurality of actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
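
    The abstract describes per-action episodic memories of key embeddings mapped to return estimates, embedding the current observation, finding the p nearest stored keys for each action, and forming a Q value from their returns before picking the highest-valued action. The NumPy sketch below mirrors that selection procedure; the random-projection embedding, the inverse-distance kernel, and the memory sizes are simplified assumptions for illustration.

```python
# Compact sketch of the action-selection procedure in the abstract. The
# embedding network, kernel, and memory contents are simplified assumptions.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, key_dim, num_actions, p = 10, 4, 3, 5

# Stand-in embedding network: a single random projection with a nonlinearity.
W_embed = rng.normal(0, 0.5, size=(obs_dim, key_dim))

# Episodic memory per action: stored key embeddings and their return estimates.
memory = {
    a: {"keys": rng.normal(size=(50, key_dim)),
        "returns": rng.normal(size=50)}
    for a in range(num_actions)
}


def embed(observation):
    return np.tanh(observation @ W_embed)


def q_value(key, action):
    keys, returns = memory[action]["keys"], memory[action]["returns"]
    dists = np.linalg.norm(keys - key, axis=1)
    nearest = np.argsort(dists)[:p]              # p nearest key embeddings
    weights = 1.0 / (dists[nearest] + 1e-3)      # inverse-distance kernel
    return float(np.sum(weights * returns[nearest]) / np.sum(weights))


def select_action(observation):
    key = embed(observation)
    q_values = [q_value(key, a) for a in range(num_actions)]
    return int(np.argmax(q_values))              # act greedily on the Q values


print("selected action:", select_action(rng.normal(size=obs_dim)))
```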

    TRAINING NEURAL NETWORKS USING POSTERIOR SHARPENING

    Publication Number: US20200005152A1

    Publication Date: 2020-01-02

    Application Number: US16511496

    Application Date: 2019-07-15

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.

    NEURAL EPISODIC CONTROL
    Invention Application

    Publication Number: US20190303764A1

    Publication Date: 2019-10-03

    Application Number: US16445523

    Application Date: 2019-06-19

    Abstract: A method includes maintaining respective episodic memory data for each of multiple actions; receiving a current observation characterizing a current state of an environment being interacted with by an agent; processing the current observation using an embedding neural network in accordance with current values of parameters of the embedding neural network to generate a current key embedding for the current observation; for each action of the plurality of actions: determining the p nearest key embeddings in the episodic memory data for the action to the current key embedding according to a distance measure, and determining a Q value for the action from the return estimates mapped to by the p nearest key embeddings in the episodic memory data for the action; and selecting, using the Q values for the actions, an action from the multiple actions as the action to be performed by the agent.
