-
Publication No.: US20180174050A1
Publication Date: 2018-06-21
Application No.: US15380399
Filing Date: 2016-12-15
Applicant: Google Inc.
Inventor: Jason E. Holt, Marcello Herreshoff
IPC: G06N3/08
Abstract: The present disclosure provides systems and methods that enable adaptive training of a channel coding model including an encoder model, a channel model positioned structurally after the encoder model, and a decoder model positioned structurally after the channel model. The channel model can have been trained to emulate a communication channel, for example, by training the channel model on example data that has been transmitted via the communication channel. The channel coding model can be trained on a loss function that describes a difference between input data input into the encoder model and output data received from the decoder model. In particular, such a loss function can be backpropagated through the decoder model while modifying the decoder model, backpropagated through the channel model while the channel model is held constant, and then backpropagated through the encoder model while modifying the encoder model.
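The training procedure the abstract describes can be sketched with plain NumPy linear maps and manual backpropagation. This is a minimal illustration, not the patent's implementation: all names (`We`, `Wc`, `Wd`, `train_channel_coding`) and the choice of linear layers are assumptions made for brevity.

```python
import numpy as np

def train_channel_coding(steps=200, lr=0.05, batch=32, dim=4, seed=0):
    """Train an encoder/decoder pair around a frozen channel model (sketch)."""
    rng = np.random.default_rng(seed)
    # Trainable encoder and decoder weights, initialized near identity.
    We = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
    Wd = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
    # Channel model: assumed already trained to emulate the real
    # communication channel; held constant for this whole procedure.
    Wc = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))

    def loss_of(x):
        xhat = ((x @ We) @ Wc) @ Wd
        return float(np.mean((xhat - x) ** 2))

    x_eval = rng.standard_normal((batch, dim))
    initial = loss_of(x_eval)
    for _ in range(steps):
        x = rng.standard_normal((batch, dim))
        z = x @ We        # encoder output (codeword)
        y = z @ Wc        # emulated channel output
        xhat = y @ Wd     # decoder output
        g = 2.0 * (xhat - x) / xhat.size   # dLoss/dxhat
        dWd = y.T @ g     # backprop through decoder: it will be modified
        g = g @ Wd.T
        g = g @ Wc.T      # backprop through channel: gradient flows, no update
        dWe = x.T @ g     # backprop through encoder: it will be modified
        Wd -= lr * dWd
        We -= lr * dWe
    return initial, loss_of(x_eval)
```

Running `train_channel_coding()` returns the reconstruction loss before and after training; the drop shows the encoder and decoder adapting around the fixed channel emulator, which is the core of the claimed method.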
-
Publication No.: US20180032871A1
Publication Date: 2018-02-01
Application No.: US15222997
Filing Date: 2016-07-29
Applicant: Google Inc.
Inventor: Jason E. Holt, Marcello Herreshoff
IPC: G06N3/08
Abstract: The present disclosure provides systems and methods that enable training of an encoder model based on a decoder model that performs an inverse transformation relative to the encoder model. In one example, an encoder model can receive a first set of inputs and output a first set of outputs. The encoder model can be a neural network. The decoder model can receive the first set of outputs and output a second set of outputs. A loss function can describe a difference between the first set of inputs and the second set of outputs. According to an aspect of the present disclosure, the loss function can be sequentially backpropagated through the decoder model without modifying the decoder model and then through the encoder model while modifying the encoder model, thereby training the encoder model. Thus, an encoder model can be trained to have enforced consistency relative to the inverse decoder model.
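The loss-flow described above can be sketched in a few lines of NumPy, assuming for illustration that the fixed decoder is a linear inverse transformation. The function name and all variable names (`D`, `We`) are hypothetical, not from the patent.

```python
import numpy as np

def train_encoder_against_fixed_decoder(steps=300, lr=0.05,
                                        batch=32, dim=3, seed=1):
    """Train only the encoder so that decoder(encoder(x)) reconstructs x."""
    rng = np.random.default_rng(seed)
    # Fixed decoder performing the inverse transformation; never updated.
    D = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
    We = np.eye(dim)  # trainable encoder weights

    def loss_of(x):
        xhat = (x @ We) @ D
        return float(np.mean((xhat - x) ** 2))

    x_eval = rng.standard_normal((batch, dim))
    initial = loss_of(x_eval)
    for _ in range(steps):
        x = rng.standard_normal((batch, dim))     # first set of inputs
        h = x @ We        # first set of outputs (encoder)
        xhat = h @ D      # second set of outputs (decoder)
        g = 2.0 * (xhat - x) / xhat.size   # dLoss/dxhat
        g = g @ D.T       # backprop through decoder WITHOUT modifying it
        dWe = x.T @ g     # gradient reaches the encoder only
        We -= lr * dWe    # modify the encoder
    return initial, loss_of(x_eval)
```

Because only `We` receives updates, the encoder is driven toward the inverse of `D`, i.e. it gains enforced consistency relative to the fixed inverse decoder.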
-
Publication No.: US10552738B2
Publication Date: 2020-02-04
Application No.: US15380399
Filing Date: 2016-12-15
Applicant: Google Inc.
Inventor: Jason E. Holt, Marcello Herreshoff
IPC: G06N3/08
Abstract: The present disclosure provides systems and methods that enable adaptive training of a channel coding model including an encoder model, a channel model positioned structurally after the encoder model, and a decoder model positioned structurally after the channel model. The channel model can have been trained to emulate a communication channel, for example, by training the channel model on example data that has been transmitted via the communication channel. The channel coding model can be trained on a loss function that describes a difference between input data input into the encoder model and output data received from the decoder model. In particular, such a loss function can be backpropagated through the decoder model while modifying the decoder model, backpropagated through the channel model while the channel model is held constant, and then backpropagated through the encoder model while modifying the encoder model.
-
Publication No.: US10482379B2
Publication Date: 2019-11-19
Application No.: US15222997
Filing Date: 2016-07-29
Applicant: Google Inc.
Inventor: Jason E. Holt, Marcello Mathias Herreshoff
Abstract: The present disclosure provides systems and methods that enable training of an encoder model based on a decoder model that performs an inverse transformation relative to the encoder model. In one example, an encoder model can receive a first set of inputs and output a first set of outputs. The encoder model can be a neural network. The decoder model can receive the first set of outputs and output a second set of outputs. A loss function can describe a difference between the first set of inputs and the second set of outputs. According to an aspect of the present disclosure, the loss function can be sequentially backpropagated through the decoder model without modifying the decoder model and then through the encoder model while modifying the encoder model, thereby training the encoder model. Thus, an encoder model can be trained to have enforced consistency relative to the inverse decoder model.
-