Abstract:
A method for training a generator, by a generator training system including a processor and memory, includes: extracting training statistical characteristics from a batch normalization layer of a pre-trained model, the training statistical characteristics including a training mean μ and a training variance σ²; initializing a generator configured with generator parameters; generating a batch of synthetic data using the generator; supplying the batch of synthetic data to the pre-trained model; measuring statistical characteristics of activations at the batch normalization layer and at the output of the pre-trained model in response to the batch of synthetic data, the statistical characteristics including a measured mean μ̂_ψ and a measured variance σ̂_ψ²; computing a training loss in accordance with a loss function L_ψ based on μ, σ², μ̂_ψ, and σ̂_ψ²; and iteratively updating the generator parameters in accordance with the training loss until a training completion condition is met to compute the generator.
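A minimal PyTorch-style sketch of this loop, matching only the batch-normalization statistics (the abstract also measures statistics at the model output); the toy pretrained and generator networks, sizes, and step count are illustrative assumptions, not the patent's architecture.

import torch
import torch.nn as nn

# Illustrative stand-ins for the pre-trained model and the generator.
pretrained = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
pretrained.eval()
for p in pretrained.parameters():
    p.requires_grad_(False)

bn = pretrained[1]
mu, var = bn.running_mean, bn.running_var    # training mean μ and variance σ²

generator = nn.Sequential(
    nn.Linear(16, 3 * 8 * 8), nn.Tanh(), nn.Unflatten(1, (3, 8, 8)))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

acts = {}
bn.register_forward_hook(lambda m, inp, out: acts.update(x=inp[0]))

for step in range(100):                      # stand-in completion condition
    z = torch.randn(32, 16)
    x_syn = generator(z)                     # batch of synthetic data
    pretrained(x_syn)                        # hook captures the BN-layer input
    mu_hat = acts['x'].mean(dim=(0, 2, 3))   # measured mean μ̂_ψ
    var_hat = acts['x'].var(dim=(0, 2, 3))   # measured variance σ̂_ψ²
    loss = ((mu_hat - mu) ** 2 + (var_hat - var) ** 2).sum()  # loss L_ψ
    opt.zero_grad(); loss.backward(); opt.step()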
Abstract:
A method and apparatus for variable rate compression with a conditional autoencoder are herein provided. According to one embodiment, a method for compression includes receiving a first image and a first scheme as inputs for an autoencoder network; determining a first Lagrange multiplier based on the first scheme; and computing, using the first image and the first Lagrange multiplier as inputs, a second image from the autoencoder network. The autoencoder network is trained using a plurality of Lagrange multipliers and a second image as training inputs.
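A sketch of the idea, assuming MSE distortion, a latent-magnitude proxy for rate, and λ fed to both encoder and decoder as a conditioning input; the ConditionalAE module and all sizes are hypothetical, not the patent's network.

import torch
import torch.nn as nn

class ConditionalAE(nn.Module):
    """Autoencoder conditioned on the Lagrange multiplier λ."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(784 + 1, 64)    # image plus λ as conditioning input
        self.dec = nn.Linear(64 + 1, 784)

    def forward(self, x, lam):
        c = torch.full((x.size(0), 1), lam)              # broadcast λ over the batch
        z = torch.tanh(self.enc(torch.cat([x, c], 1)))   # latent code
        x_hat = torch.sigmoid(self.dec(torch.cat([z, c], 1)))
        return x_hat, z

model = ConditionalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                      # stand-in training images
for lam in [0.01, 0.1, 1.0]:                 # plurality of Lagrange multipliers
    x_hat, z = model(x, lam)
    distortion = ((x - x_hat) ** 2).mean()   # D: MSE distortion
    rate = z.abs().mean()                    # R: crude rate proxy for the sketch
    loss = rate + lam * distortion           # R + λ·D objective
    opt.zero_grad(); loss.backward(); opt.step()

At inference, the selected scheme picks λ, and the same trained network then operates at the corresponding rate-distortion trade-off.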
Abstract:
A method and system are herein disclosed. The method includes developing a joint latent variable model having a first variable, a second variable, and a joint latent variable representing common information between the first and second variables; generating a variational posterior of the joint latent variable model; training the variational posterior; performing inference of the first variable from the second variable based on the variational posterior, wherein performing the inference comprises conditionally generating the first variable from the second variable; and extracting common information between the first variable and the second variable, wherein extracting the common information comprises adding a regularization term to a loss function.
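A compact sketch of such a joint model, assuming a Gaussian variational posterior q(z | x1, x2), MSE reconstruction for both variables, and an L1 penalty on z as the added regularization term; every module, size, and loss weight here is an illustrative assumption.

import torch
import torch.nn as nn

post = nn.Linear(8 + 8, 2 * 4)                 # variational posterior q(z | x1, x2)
dec1, dec2 = nn.Linear(4, 8), nn.Linear(4, 8)  # p(x1 | z) and p(x2 | z)
opt = torch.optim.Adam([*post.parameters(), *dec1.parameters(),
                        *dec2.parameters()], lr=1e-3)

x1, x2 = torch.randn(32, 8), torch.randn(32, 8)
for step in range(100):
    mu, logvar = post(torch.cat([x1, x2], 1)).chunk(2, dim=1)
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)   # reparameterization
    recon = ((dec1(z) - x1) ** 2).mean() + ((dec2(z) - x2) ** 2).mean()
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).mean()
    reg = z.abs().mean()                       # added regularization term
    loss = recon + kl + 0.1 * reg
    opt.zero_grad(); loss.backward(); opt.step()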
Abstract:
An electronic device and method for performing class-incremental learning are provided. The method includes designating a pre-trained first model for at least one past data class as a first teacher; training a second model; designating the trained second model as a second teacher; performing dual-teacher information distillation by maximizing mutual information at intermediate layers of the first teacher and second teacher; and transferring the information to a combined student model.
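A rough sketch of dual-teacher distillation into a combined student; an MSE feature-matching term stands in for the patent's mutual-information criterion at the intermediate layers, and all modules and shapes are illustrative.

import torch
import torch.nn as nn

teacher_old = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 5))
teacher_new = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 3))
student     = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
for t in (teacher_old, teacher_new):
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(64, 32)                      # stand-in training batch

feat = lambda net, x: net[1](net[0](x))      # intermediate-layer activations
for step in range(100):
    fs = feat(student, x)
    # Pull the student's intermediate features toward both teachers,
    # a simple proxy for maximizing mutual information with each.
    loss = ((fs - feat(teacher_old, x)) ** 2).mean() + \
           ((fs - feat(teacher_new, x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()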
Abstract:
A method is provided. The method includes selecting a neural network model, wherein the neural network model includes a plurality of layers, and wherein each of the plurality of layers includes weights and activations; modifying the neural network model by inserting a plurality of quantization layers within the neural network model; associating a cost function with the modified neural network model, wherein the cost function includes a first coefficient corresponding to a first regularization term, and wherein an initial value of the first coefficient is pre-defined; and training the modified neural network model to generate quantized weights for a layer by increasing the first coefficient until all weights are quantized and the first coefficient satisfies a pre-defined threshold. The training further includes optimizing a weight scaling factor for the quantized weights and an activation scaling factor for quantized activations, wherein the quantized weights are quantized using the optimized weight scaling factor.
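A sketch of the regularization-driven quantization, covering only the weight path with a learnable weight scaling factor delta (the activation scaling factor is omitted); the uniform quantizer, the 1.2x coefficient schedule, and all sizes are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
delta = nn.Parameter(torch.tensor(0.1))       # learnable weight scaling factor
opt = torch.optim.Adam(list(model.parameters()) + [delta], lr=1e-3)

def quantize(w, delta, levels=8):
    # Uniform quantizer: round to the nearest multiple of delta.
    return (w / delta).round().clamp(-levels, levels) * delta

coeff = 1e-3                                  # pre-defined initial coefficient
x, y = torch.randn(32, 16), torch.randn(32, 4)
for epoch in range(50):
    task_loss = ((model(x) - y) ** 2).mean()
    # Regularization term: distance of each weight to its quantized value.
    reg = sum(((w - quantize(w, delta)) ** 2).sum()
              for w in model.parameters() if w.dim() > 1)
    loss = task_loss + coeff * reg            # cost = task loss + coeff * regularizer
    opt.zero_grad(); loss.backward(); opt.step()
    coeff *= 1.2                              # grow coefficient until all weights quantize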
Abstract:
A system and method are provided for characterizing an interference demodulation reference signal (DMRS) in user equipment (UE), e.g., a mobile device. The UE determines whether the serving signal is transmitted in a DMRS-based transmission mode; if it is, the UE cancels the serving DMRS from the received signal; otherwise, the UE cancels the serving data signal from the received signal. The remaining signal is then analyzed for the amount of power it carries in each of four interference DMRS candidates, and hypothesis testing is performed to determine whether an interference DMRS is present in the signal and, if so, to determine the rank of the interference DMRS, and the port and scrambling identity of each of the interference DMRS layers.
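A simplified sketch of the candidate power test on the post-cancellation residual; the random candidate sequences, the noise-floor threshold, and the rank rule are illustrative assumptions, not the patent's detection statistics.

import numpy as np

rng = np.random.default_rng(0)
n = 128
# Residual after cancelling the serving DMRS (or serving data) signal.
residual = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Four interference DMRS candidates (port / scrambling-identity hypotheses).
candidates = [np.exp(2j * np.pi * rng.random(n)) for _ in range(4)]

# Power of the residual projected onto each candidate sequence.
powers = [abs(np.vdot(c, residual)) ** 2 / n for c in candidates]
noise_floor = np.mean(np.abs(residual) ** 2)
threshold = 4.0 * noise_floor                 # illustrative detection threshold

detected = [i for i, p in enumerate(powers) if p > threshold]
rank = len(detected)                          # rank of the interference DMRS
print("interference present:", rank > 0, "| detected layers:", detected)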
Abstract:
A computing system includes: an inter-device interface configured to receive a receiver signal for communicating serving content through a communication channel; and a communication unit, coupled to the inter-device interface, configured to: calculate a weighting set corresponding to a modular estimation mechanism, and generate a channel estimate based on the weighting set for characterizing the communication channel for recovering the serving content.
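A toy sketch of forming a channel estimate from a weighting set applied to pilot observations; the uniform weights here merely stand in for whatever the modular estimation mechanism would compute, and all values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
pilots = np.ones(8)                            # known reference symbols
h_true = 0.8 + 0.3j                            # channel to be characterized
received = h_true * pilots + 0.1 * (rng.standard_normal(8)
                                    + 1j * rng.standard_normal(8))

ls = received / pilots                         # per-pilot least-squares estimates
weights = np.full(8, 1 / 8)                    # weighting set (uniform in this sketch)
h_hat = np.dot(weights, ls)                    # channel estimate from the weighting set
print("estimate:", h_hat, "true:", h_true)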
Abstract:
Apparatuses and methods of manufacturing same, systems, and methods for performing network parameter quantization in deep neural networks are described. In one aspect, multi-dimensional vectors representing network parameters are constructed from a trained neural network model. The multi-dimensional vectors are quantized to obtain shared quantized vectors as cluster centers, which are fine-tuned. The fine-tuned and shared quantized vectors/cluster centers are then encoded. Decoding reverses the process.
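A bare-bones sketch of the vector construction, clustering, and decode steps; plain k-means stands in for the full quantize / fine-tune / encode pipeline, and the parameter count, vector dimension, and cluster count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
weights = rng.standard_normal(1024)            # trained network parameters
vectors = weights.reshape(-1, 4)               # multi-dimensional vectors

k = 16
centers = vectors[rng.choice(len(vectors), k, replace=False)]
for _ in range(10):                            # k-means: shared quantized vectors
    d = ((vectors[:, None, :] - centers[None]) ** 2).sum(-1)
    assign = d.argmin(1)                       # nearest cluster center per vector
    for j in range(k):
        if (assign == j).any():
            centers[j] = vectors[assign == j].mean(0)

codes = assign                                 # encode: store codes plus centers
decoded = centers[codes].reshape(weights.shape)  # decoding reverses the process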
Abstract:
A method and system are herein disclosed. The method includes developing a joint latent variable model having a first variable, a second variable, and a joint latent variable representing common information between the first and second variables, generating a variational posterior of the joint latent variable model, training the variational posterior, and performing inference of the first variable from the second variable based on the variational posterior.
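The inference step of this abstract, conditionally generating the first variable from the second, can be sketched by encoding only the second variable and decoding the first through the shared latent; post2, dec1, and all shapes are hypothetical and reuse the conventions of the earlier joint-model sketch.

import torch
import torch.nn as nn

post2 = nn.Linear(8, 2 * 4)          # marginal posterior q(z | x2), trained with the joint model
dec1 = nn.Linear(4, 8)               # p(x1 | z)

x2 = torch.randn(1, 8)               # observed second variable
mu, logvar = post2(x2).chunk(2, dim=1)
z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)   # sample shared latent
x1_generated = dec1(z)               # inferred first variable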