-
1.
Publication number: US20240202507A1
Publication date: 2024-06-20
Application number: US18555479
Application date: 2022-04-15
Applicant: Nokia Technologies Oy
Inventor: Francesco CRICRÌ , Jani LAINEMA , Ramin GHAZNAVI YOUVALARI , Honglei ZHANG , Yat Hong LAM , Maria Claudia SANTAMARIA GOMEZ , Hamed REZAZADEGAN TAVAKOLI , Miska Matias HANNUKSELA
Abstract: An apparatus, with a corresponding method and computer program product, is provided. The apparatus includes at least one processor and at least one non-transitory memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the steps (1600) of: training or finetuning at least one neural network (NN) based at least on a temporal persistence scope; and encoding or decoding one or more media frames based at least on the trained or finetuned at least one NN. A further apparatus, with a corresponding method and computer program product, is also provided. The further apparatus is configured to carry out the steps (1700) of: receiving a weight-update prediction error from an encoder side; predicting a weight-update based on one or more reference weight-updates and a prediction function or algorithm; and reconstructing a weight-update by combining the predicted weight-update and the prediction error.
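Read as an algorithm, the further apparatus describes a predictive-coding loop over weight-updates. Below is a minimal Python/PyTorch sketch of that idea; the choice of prediction function (here, a plain average of the reference weight-updates) and all names are illustrative assumptions, since the abstract leaves the function or algorithm open.

```python
import torch

def predict_weight_update(reference_updates):
    # Illustrative prediction function: average the reference
    # weight-updates; the abstract leaves the function/algorithm open.
    return torch.stack(reference_updates).mean(dim=0)

def reconstruct_weight_update(prediction_error, reference_updates):
    # Decoder side: predicted weight-update plus the prediction error
    # received from the encoder side.
    return predict_weight_update(reference_updates) + prediction_error

refs = [torch.tensor([0.10, -0.20, 0.05]),   # previously decoded
        torch.tensor([0.12, -0.18, 0.07])]   # weight-updates
error = torch.tensor([0.01, -0.01, 0.00])    # signalled by the encoder
print(reconstruct_weight_update(error, refs))
# tensor([ 0.1200, -0.2000,  0.0600])
```

Because only the prediction error is signalled, a better prediction function directly shrinks the bitstream needed to update the decoder-side network.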
-
2.
Publication number: US20240249514A1
Publication date: 2024-07-25
Application number: US18560430
Application date: 2022-05-13
Applicant: Nokia Technologies Oy
Inventor: Jani LAINEMA , Francesco CRICRÌ , Honglei ZHANG , Hamed REZAZADEGAN TAVAKOLI , Yat Hong LAM , Miska Matias HANNUKSELA , Nannan ZOU
IPC: G06V10/82 , G06V10/771 , H04N19/117 , H04N19/159 , H04N19/172 , H04N19/70 , H04N19/82
CPC classification number: G06V10/82 , G06V10/771 , H04N19/117 , H04N19/159 , H04N19/172 , H04N19/70 , H04N19/82
Abstract: Various embodiments provide an apparatus, a method, and a computer program product. The apparatus includes at least one processor and at least one non-transitory memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: train or finetune one or more additional parameters of at least one neural network (NN), or of a portion of the at least one NN, wherein the one or more additional parameters comprise one or more scaling parameters; and encode or decode one or more media elements based on the at least one NN, or the portion of the at least one NN, comprising the trained or finetuned one or more additional parameters.
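A minimal PyTorch sketch of one plausible shape of this scheme, under assumptions of my own: the base layer is frozen and only newly added per-channel scaling parameters (the "additional parameters") are finetuned; the layer type, loss, and names are illustrative, not the claimed design.

```python
import torch
import torch.nn as nn

class ScaledConv(nn.Module):
    # A frozen convolution wrapped with learnable per-channel scales;
    # only the scales (the "additional parameters") are finetuned.
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():      # freeze the base layer
            p.requires_grad = False
        self.scale = nn.Parameter(torch.ones(conv.out_channels))

    def forward(self, x):
        return self.conv(x) * self.scale.view(1, -1, 1, 1)

layer = ScaledConv(nn.Conv2d(3, 8, kernel_size=3, padding=1))
opt = torch.optim.Adam([layer.scale], lr=1e-3)  # finetune scales only

x = torch.randn(1, 3, 16, 16)   # stand-in for a media element
loss = layer(x).pow(2).mean()   # placeholder for a codec-side loss
loss.backward()
opt.step()
```

Since only the small scale vector changes, the corresponding weight-update that has to be encoded or signalled stays small.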
-
3.
Publication number: US20240265240A1
Publication date: 2024-08-08
Application number: US18567736
Application date: 2022-06-17
Applicant: Nokia Technologies Oy
Inventor: Honglei ZHANG , Francesco CRICRÌ , Ramin GHAZNAVI YOUVALARI , Hamed REZAZADEGAN TAVAKOLI , Nannan ZOU , Vinod Kumar MALAMAL VADAKITAL , Miska Matias HANNUKSELA , Yat Hong LAM , Jani LAINEMA , Emre Baris AKSU
IPC: G06N3/0455
CPC classification number: G06N3/0455
Abstract: An example apparatus includes at least one processor and at least one non-transitory memory comprising computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: learn the importance of one or more parameters by using a training dataset; define one or more masks indicating the importance of the one or more parameters for model finetuning; share at least one mask of the one or more masks with at least one of an encoder or a decoder; finetune at least one parameter of the one or more parameters based at least on the at least one mask; and send or signal one or more weight-updates corresponding to the at least one parameter in a bitstream to the decoder.
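The listed steps read as a concrete pipeline. The PyTorch sketch below fills in the unspecified parts with assumptions of my own: accumulated gradient magnitude as the importance measure, a top-25% threshold for the mask, and a toy regression model; the abstract fixes none of these choices.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)
data = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(10)]

# 1) Learn importance: accumulate gradient magnitudes over the dataset.
importance = torch.zeros_like(model.weight)
for x, y in data:
    model.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    importance += model.weight.grad.abs()

# 2) Define a mask marking the top 25% most important weights;
#    this mask is what would be shared with the decoder.
k = max(1, importance.numel() // 4)
threshold = importance.flatten().topk(k).values.min()
mask = (importance >= threshold).float()

# 3) Finetune only the masked weights (other gradients are zeroed).
w0 = model.weight.detach().clone()
opt = torch.optim.SGD([model.weight], lr=1e-2)
for x, y in data:
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    model.weight.grad *= mask
    opt.step()

# 4) The sparse weight-update to send or signal in the bitstream.
weight_update = (model.weight.detach() - w0) * mask
```

Sharing the mask up front means the decoder already knows which positions the signalled weight-update refers to, so only the masked values need to be transmitted.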
-
4.
Publication number: US20200311551A1
Publication date: 2020-10-01
Application number: US16828106
Application date: 2020-03-24
Applicant: Nokia Technologies Oy
Inventor: Caglar AYTEKIN , Francesco CRICRÌ , Yat Hong LAM
Abstract: A method, apparatus, and computer program product are provided for training a neural network, or for providing a pre-trained neural network, such that its weight-updates are compressible, using at least a weight-update compression loss function and/or a task loss function. The weight-update compression loss function can operate on a weight-update vector defined as the latest weight vector minus the initial weight vector before training. A pre-trained neural network can be compressed by pruning one or more small-valued weights. The training of the neural network can take the compressibility of the network into account, for instance through a compression loss function such as a task loss and/or a weight-update compression loss. The compressed neural network can be applied within the decoding loop on the encoder side or in a post-processing stage, as well as on the decoder side.
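A minimal PyTorch sketch of one plausible reading of that combined objective, assuming an L1 norm on the weight-update vector as the compressibility proxy (the abstract does not fix the exact form of the compression loss); the model, data, and trade-off weight are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                      # toy stand-in network
w_init = [p.detach().clone() for p in model.parameters()]
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-2                                   # assumed trade-off weight

for _ in range(100):
    x, y = torch.randn(8, 4), torch.randn(8, 2)
    task_loss = nn.functional.mse_loss(model(x), y)
    # Weight-update compression loss: an L1 penalty on the weight-update
    # vector (latest weights minus the initial weights before training)
    # pushes the update toward sparsity, i.e. toward compressibility.
    wu_loss = sum((p - p0).abs().sum()
                  for p, p0 in zip(model.parameters(), w_init))
    opt.zero_grad()
    (task_loss + lam * wu_loss).backward()
    opt.step()
```

Weighting the two terms trades task performance against how cheaply the resulting weight-update can be coded.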