Abstract:
A method for super resolution imaging includes: receiving, by a processor, a low resolution image; generating, by the processor, an intermediate high resolution image having an improved resolution compared to the low resolution image; generating, by the processor, a final high resolution image based on the intermediate high resolution image and the low resolution image; and transmitting, by the processor, the final high resolution image to a display device for display thereby.
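A minimal Python sketch of the recited pipeline, assuming grayscale images, nearest-neighbor upsampling as the unspecified upscaler, and a single back-projection-style correction as the step that combines the intermediate image with the low resolution input; none of these choices come from the abstract:

```python
import numpy as np

def upscale(img, scale):
    # Nearest-neighbor upsampling; a stand-in for the unspecified upscaler.
    return np.kron(img, np.ones((scale, scale)))

def downscale(img, scale):
    # Block-average downsampling that simulates re-observing at low resolution.
    h, w = img.shape
    return img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def super_resolve(low_res, scale=2):
    # Step 1: intermediate high resolution image.
    intermediate = upscale(low_res, scale)
    # Step 2: final image from the intermediate image and the low resolution
    # input (one back-projection-style correction).
    residual = low_res - downscale(intermediate, scale)
    return intermediate + upscale(residual, scale)

low_res = np.random.rand(8, 8)
final = super_resolve(low_res)
print(final.shape)  # (16, 16), ready to hand to a display device
```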
Abstract:
An apparatus and method of constructing a universal polar code are provided. The apparatus includes a first function block configured to polarize and degrade a class of channels W_j to determine a probability of error P_e,j of each bit-channel of W_j, wherein j ∈ {1, 2, ..., s}, in accordance with a bit-channel index i; a second function block configured to determine a probability of error P_e(i) for the universal polar code for each bit-channel index i; a third function block configured to sort the P_e(i); and a fourth function block configured to determine a largest number k of bit-channels such that a sum of the corresponding k bit-channel error probabilities P_e(i) is less than or equal to a target frame error rate P_t for the universal polar code, wherein the indices corresponding to the k smallest P_e(i) are good bit-channels for the universal polar code.
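A sketch of the third and fourth function blocks' selection rule in Python. How the per-class probabilities P_e,j(i) are combined into P_e(i) is not specified in the abstract; taking the maximum over the channel classes is assumed here as a conservative stand-in for the second function block:

```python
import numpy as np

def select_good_channels(pe_per_class, target_fer):
    # pe_per_class: shape (s, N), row j holds P_e,j(i) for channel class W_j.
    pe = pe_per_class.max(axis=0)           # assumed combination into P_e(i)
    order = np.argsort(pe)                  # third block: sort the P_e(i)
    cumulative = np.cumsum(pe[order])
    # Fourth block: largest k with the sum of the k smallest P_e(i) <= P_t.
    k = int(np.searchsorted(cumulative, target_fer, side="right"))
    return np.sort(order[:k])               # good bit-channel indices

pe = np.array([[0.30, 1e-4, 0.20, 1e-3],
               [0.25, 5e-4, 0.30, 2e-3]])
print(select_good_channels(pe, target_fer=0.01))  # [1 3]
```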
Abstract:
A concatenated encoder is provided that includes an outer encoder, a symbol interleaver and a polar inner encoder. The outer encoder is configured to encode a data stream using an outer code to generate outer codewords. The symbol interleaver is configured to interleave symbols of the outer codewords and generate a binary stream. The polar inner encoder is configured to encode the binary stream using a polar inner code to generate an encoded stream. A concatenated decoder is provided that includes a polar inner decoder, a symbol de-interleaver and an outer decoder. The polar inner decoder is configured to decode an encoded stream using a polar inner code to generate a binary stream. The symbol de-interleaver is configured to de-interleave symbols in the binary stream to generate outer codewords. The outer decoder is configured to decode the outer codewords using an outer code to generate a decoded stream.
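A toy Python sketch of the encode path under stated assumptions: a (4, 3) single-parity-check code stands in for the unspecified outer code, the symbol interleaver is a random permutation of whole outer-code symbols, and the polar inner encoder is the plain transform x = u·F^(⊗n) over GF(2) with frozen-bit placement omitted. The decoder path mirrors these steps in reverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def outer_encode(bits):
    # Toy (4,3) single-parity-check outer code: one parity bit per symbol.
    symbols = bits.reshape(-1, 3)
    parity = symbols.sum(axis=1, keepdims=True) % 2
    return np.hstack([symbols, parity])        # outer codeword symbols

def interleave(symbols, perm):
    # Symbol interleaver: permute whole symbols, flatten to a binary stream.
    return symbols[perm].ravel()

def polar_encode(u):
    # Polar inner encode: iterative butterfly computing u * F^(tensor n) mod 2.
    x = u.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            x[i:i + h] ^= x[i + h:i + 2 * h]
        h *= 2
    return x

data = rng.integers(0, 2, 12)                  # 12 data bits -> 4 symbols
codewords = outer_encode(data)
perm = rng.permutation(len(codewords))
stream = interleave(codewords, perm)           # 16 bits, a power of two
print(polar_encode(stream))                    # encoded stream
```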
Abstract:
A method for training a generator, by a generator training system including a processor and memory, includes: extracting training statistical characteristics from a batch normalization layer of a pre-trained model, the training statistical characteristics including a training mean μ and a training variance σ²; initializing a generator configured with generator parameters; generating a batch of synthetic data using the generator; supplying the batch of synthetic data to the pre-trained model; measuring statistical characteristics of activations at the batch normalization layer and at the output of the pre-trained model in response to the batch of synthetic data, the statistical characteristics including a measured mean μ̂_ψ and a measured variance σ̂_ψ²; computing a training loss in accordance with a loss function L_ψ based on μ, σ², μ̂_ψ, and σ̂_ψ²; and iteratively updating the generator parameters in accordance with the training loss until a training completion condition is met, thereby computing the trained generator.
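A minimal PyTorch sketch of this loop. The "pre-trained" model, the generator architecture, the fixed step count as the completion condition, and the simple squared-error form of L_ψ are all illustrative assumptions; the term matching statistics at the model output is omitted for brevity:

```python
import torch
import torch.nn as nn

# Placeholder "pre-trained" model; in practice its BatchNorm buffers hold
# the training statistics mu and sigma^2 accumulated during pre-training.
pretrained = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32),
                           nn.ReLU(), nn.Linear(32, 10)).eval()
bn = pretrained[1]
mu, var = bn.running_mean, bn.running_var      # training statistics

generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

acts = {}
bn.register_forward_hook(lambda m, inp, out: acts.update(x=inp[0]))

for step in range(100):                        # completion: fixed step count
    synthetic = generator(torch.randn(32, 8))  # batch of synthetic data
    _ = pretrained(synthetic)                  # supply to pre-trained model
    x = acts["x"]                              # activations at the BN layer
    mu_hat, var_hat = x.mean(dim=0), x.var(dim=0)
    # Assumed squared-error loss: match measured to stored statistics.
    loss = ((mu_hat - mu) ** 2).mean() + ((var_hat - var) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```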
Abstract:
A system and a method are disclosed. The method includes: receiving, by a first local controller of a first edge device, an input associated with an environment in which the first edge device operates; determining, by the first local controller using a first machine-learning algorithm, a parameter for a pre-trained modem algorithm of the first edge device based on the input; executing a task on the first edge device based on executing the pre-trained modem algorithm with the parameter; determining a result of executing the task; training the first machine-learning algorithm; generating a first update to the first machine-learning algorithm based on the training; sending the first update to a server; receiving, from the server, a server update to the first machine-learning algorithm; and updating the first machine-learning algorithm based on the server update.
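A toy Python sketch of one round of this loop. The linear controller, the scalar task score, and federated-averaging-style aggregation on the server are assumptions; the abstract specifies none of them:

```python
import numpy as np

class LocalController:
    # Toy linear model standing in for the first machine-learning algorithm.
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def determine_parameter(self, env_input):
        # Map the environment input to a parameter for the modem algorithm.
        return float(self.w @ env_input)

    def train_step(self, env_input, result, lr=0.01):
        # Train on the task result; the returned delta is the "first update".
        delta = lr * result * env_input
        self.w += delta
        return delta

def run_modem_task(param):
    # Placeholder for executing the pre-trained modem algorithm with the
    # parameter; returns a scalar score standing in for the task result.
    return -abs(param - 1.0)

def server_aggregate(client_weights):
    # Federated-averaging-style aggregation producing the server update.
    return np.mean(client_weights, axis=0)

ctrl = LocalController(dim=4)
env = np.array([0.5, -0.2, 0.1, 0.9])
param = ctrl.determine_parameter(env)          # parameter from the input
result = run_modem_task(param)                 # execute task, get result
update = ctrl.train_step(env, result)          # train, generate first update
server_update = server_aggregate([ctrl.w])     # single-client demo round
ctrl.w = server_update                         # apply the server update
```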
Abstract:
A system including: one or more processors; and memory including instructions that, when executed by the one or more processors, cause the one or more processors to: generate augmented input data by mixing noise components of training data; train a first neural network based on the augmented input data and ground truth data of the training data to output a first prediction of clean speech; lock trainable parameters of the first neural network as a result of the training of the first neural network; and train a second neural network according to the augmented input data and predictions of the first neural network to output a second prediction of the clean speech.
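A minimal PyTorch sketch of the two-stage scheme, assuming frame-level features, a randomly weighted sum of two noise components as the mixing step, and mean-squared error for both training stages; all three are assumptions, not details from the abstract:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
clean = torch.randn(64, 128)                   # ground truth clean speech frames
noise_a = torch.randn(64, 128)                 # two noise components taken
noise_b = torch.randn(64, 128)                 # from the training data

# Augmented input: a randomly weighted mix of the noise components.
w = torch.rand(64, 1)
augmented = clean + w * noise_a + (1 - w) * noise_b

first = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
opt = torch.optim.Adam(first.parameters())
for _ in range(50):                            # stage 1: train on ground truth
    loss = nn.functional.mse_loss(first(augmented), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

for p in first.parameters():                   # lock the first network
    p.requires_grad_(False)

second = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
opt = torch.optim.Adam(second.parameters())
for _ in range(50):                            # stage 2: train on predictions
    loss = nn.functional.mse_loss(second(augmented), first(augmented))
    opt.zero_grad()
    loss.backward()
    opt.step()
```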
Abstract:
A system for performing echo cancellation includes a processor configured to: receive a far-end signal from a far-end device; record a microphone signal including a near-end signal and an echo signal corresponding to the far-end signal; extract far-end features from the far-end signal; extract microphone features from the microphone signal; compute estimated near-end features by supplying the microphone features and the far-end features to an acoustic echo cancellation module including a recurrent neural network having an encoder including a plurality of gated recurrent units and a decoder including a plurality of gated recurrent units; compute an estimated near-end signal from the estimated near-end features; and transmit the estimated near-end signal to the far-end device. The recurrent neural network may include a contextual attention module and may take, as input, a plurality of error features computed based on the far-end features, the microphone features, and acoustic path parameters.
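A PyTorch sketch of the GRU encoder/decoder core, with illustrative feature sizes and layer counts; the contextual attention module and the error-feature inputs from the final sentence are omitted:

```python
import torch
import torch.nn as nn

class AECNet(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        # Encoder and decoder each stack two gated recurrent units.
        self.encoder = nn.GRU(2 * feat_dim, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim)

    def forward(self, mic_feats, far_feats):
        # Supply microphone and far-end features jointly, frame by frame.
        x = torch.cat([mic_feats, far_feats], dim=-1)
        h, _ = self.encoder(x)
        h, _ = self.decoder(h)
        return self.proj(h)                    # estimated near-end features

mic = torch.randn(1, 100, 64)                  # (batch, frames, features)
far = torch.randn(1, 100, 64)
near_est = AECNet()(mic, far)
print(near_est.shape)                          # torch.Size([1, 100, 64])
```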
Abstract:
A method and apparatus for variable rate compression with a conditional autoencoder are herein provided. According to one embodiment, a method for compression includes receiving a first image and a first scheme as inputs for an autoencoder network; determining a first Lagrange multiplier based on the first scheme; and computing a second image from the autoencoder network using the first image and the first Lagrange multiplier as inputs. The autoencoder network is trained using a plurality of Lagrange multipliers and a plurality of training images as training inputs.
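A toy PyTorch sketch of a conditional autoencoder of this kind. The scheme-to-multiplier mapping, the conditioning by broadcasting λ as an extra input channel, and the rate proxy in the loss are assumptions, not details from the abstract:

```python
import torch
import torch.nn as nn

SCHEMES = {"low_rate": 1.0, "high_rate": 0.01}     # scheme -> lambda (assumed)

class ConditionalAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Lambda enters as a fourth input channel alongside RGB (assumed).
        self.enc = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1))

    def forward(self, image, lam):
        cond = torch.full_like(image[:, :1], lam)  # lambda broadcast as channel
        latent = self.enc(torch.cat([image, cond], dim=1))
        return self.dec(latent), latent

model = ConditionalAE()
first_image = torch.rand(1, 3, 32, 32)
lam = SCHEMES["high_rate"]                         # multiplier from the scheme
second_image, latent = model(first_image, lam)

# Training sweeps a plurality of multipliers; distortion + lambda * rate
# proxy is one common weighting (the exact objective is an assumption).
for lam in (0.01, 0.1, 1.0):
    recon, latent = model(first_image, lam)
    loss = nn.functional.mse_loss(recon, first_image) + lam * latent.abs().mean()
```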
Abstract:
A method and system are herein disclosed. The method includes developing a joint latent variable model having a first variable, a second variable, and a joint latent variable representing common information between the first and second variables; generating a variational posterior of the joint latent variable model; training the variational posterior; performing inference of the first variable from the second variable based on the variational posterior, wherein performing the inference comprises conditionally generating the first variable from the second variable; and extracting common information between the first variable and the second variable, wherein extracting the common information comprises adding a regularization term to a loss function.
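A compact PyTorch sketch under stated assumptions: a diagonal Gaussian latent z of dimension 4, linear encoders and decoders, a KL term as the regularization added to the loss, and a separate posterior q(z | x2) used to conditionally generate the first variable from the second:

```python
import torch
import torch.nn as nn

enc_joint = nn.Linear(16 + 16, 2 * 4)     # variational posterior q(z | x1, x2)
enc_x2 = nn.Linear(16, 2 * 4)             # posterior over z given x2 alone
dec1, dec2 = nn.Linear(4, 16), nn.Linear(4, 16)

def sample(stats):
    # Reparameterized sample from a diagonal Gaussian posterior.
    mu, logvar = stats.chunk(2, dim=-1)
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

x1, x2 = torch.randn(8, 16), torch.randn(8, 16)

# Training: reconstruct both variables from the joint latent, plus a KL
# regularization term added to the loss to extract the common information.
z, mu, logvar = sample(enc_joint(torch.cat([x1, x2], dim=-1)))
recon = nn.functional.mse_loss(dec1(z), x1) + nn.functional.mse_loss(dec2(z), x2)
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
loss = recon + 0.1 * kl                   # 0.1 is an assumed weight

# Inference: conditionally generate the first variable from the second.
z2, _, _ = sample(enc_x2(x2))
x1_from_x2 = dec1(z2)
```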
Abstract:
A system to recognize objects in an image includes an object detection network that outputs a first hierarchical-calculated feature for a detected object. A face alignment regression network determines a regression loss for alignment parameters based on the first hierarchical-calculated feature. A detection box regression network determines a regression loss for detected boxes based on the first hierarchical-calculated feature. The object detection network further includes a weighted loss generator to generate a weighted loss from the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the regression loss for the detected boxes. A backpropagator backpropagates the generated weighted loss. A grouping network forms, based on the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the regression loss for the detected boxes, at least one of a box grouping, an alignment parameter grouping, and a non-maximum suppression of the alignment parameters and the detected boxes.
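A PyTorch sketch of two of the recited pieces: the weighted loss that is backpropagated, and a plain non-maximum suppression usable for the box grouping. The loss weights and IoU threshold are illustrative assumptions:

```python
import torch

def weighted_loss(cls_loss, align_loss, box_loss, w=(1.0, 0.5, 0.5)):
    # Weighted sum over the feature/classification term, the alignment
    # parameter regression loss, and the detected-box regression loss.
    return w[0] * cls_loss + w[1] * align_loss + w[2] * box_loss

def nms(boxes, scores, iou_thresh=0.5):
    # Plain non-maximum suppression over (x1, y1, x2, y2) boxes.
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        x1 = torch.maximum(boxes[i, 0], rest[:, 0])
        y1 = torch.maximum(boxes[i, 1], rest[:, 1])
        x2 = torch.minimum(boxes[i, 2], rest[:, 2])
        y2 = torch.minimum(boxes[i, 3], rest[:, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]   # suppress overlapping boxes
    return keep

total = weighted_loss(torch.tensor(0.9, requires_grad=True),
                      torch.tensor(0.4, requires_grad=True),
                      torch.tensor(0.6, requires_grad=True))
total.backward()                               # backpropagate the weighted loss

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 10., 10.], [20., 20., 30., 30.]])
print(nms(boxes, torch.tensor([0.9, 0.8, 0.7])))   # [0, 2]
```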