Abstract:
A method and system are provided. The method includes topologically sorting layers of a neural network, selecting a quantization process that utilizes a quantization of a previous layer, and determining, with the selected quantization process, a quantization mode of one layer in the neural network based on the quantization of a previous layer.
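As a rough illustration of the described flow, the Python sketch below topologically sorts a layer dependency graph (Kahn's algorithm) and then assigns each layer a quantization mode as a function of the mode already chosen for its predecessor. The graph representation, the "int8" default, and the inherit-from-predecessor rule are illustrative assumptions, not the claimed selection process.

    from collections import deque

    def topological_sort(layers, edges):
        # Kahn's algorithm over a layer dependency graph.
        # layers: iterable of layer names; edges: list of (src, dst) pairs.
        indegree = {layer: 0 for layer in layers}
        successors = {layer: [] for layer in layers}
        for src, dst in edges:
            successors[src].append(dst)
            indegree[dst] += 1
        queue = deque(layer for layer in layers if indegree[layer] == 0)
        order = []
        while queue:
            node = queue.popleft()
            order.append(node)
            for nxt in successors[node]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    queue.append(nxt)
        return order

    def choose_quantization_modes(layers, edges):
        # Walk the layers in topological order and pick each layer's quantization
        # mode as a function of the mode already assigned to its predecessor.
        modes = {}
        for layer in topological_sort(layers, edges):
            predecessor_modes = [modes[src] for src, dst in edges if dst == layer]
            # Illustrative rule only: inherit the first predecessor's mode,
            # falling back to 8-bit quantization for input layers.
            modes[layer] = predecessor_modes[0] if predecessor_modes else "int8"
        return modes

    # Example: a small linear network conv1 -> conv2 -> fc.
    print(choose_quantization_modes(["conv1", "conv2", "fc"],
                                    [("conv1", "conv2"), ("conv2", "fc")]))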
Abstract:
A method and system for training a neural network are provided. The method includes receiving an input image, selecting at least one data augmentation method from a pool of data augmentation methods, generating an augmented image by applying the selected at least one data augmentation method to the input image, and generating a mixed image from the input image and the augmented image.
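A minimal Python sketch of the described augment-and-mix flow follows: one augmentation is drawn from a pool, applied to the input image, and the result is blended back with the original. The two example augmentations, the fixed mixing weight, and the assumption that images are float arrays in [0, 1] are illustrative choices, not part of the described method.

    import random
    import numpy as np

    def flip_horizontal(image):
        # Mirror the image left-right.
        return image[:, ::-1]

    def adjust_brightness(image, delta=0.1):
        # Shift intensities, keeping values in [0, 1].
        return np.clip(image + delta, 0.0, 1.0)

    AUGMENTATION_POOL = [flip_horizontal, adjust_brightness]

    def augment_and_mix(image, mix_weight=0.5):
        # Select one augmentation from the pool, apply it to the input image,
        # and blend the augmented image back with the original.
        augmentation = random.choice(AUGMENTATION_POOL)
        augmented = augmentation(image)
        return mix_weight * image + (1.0 - mix_weight) * augmented

    mixed = augment_and_mix(np.random.rand(32, 32).astype(np.float32))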
Abstract:
Apparatuses (including user equipment (UE) and modem chips for UEs), systems, and methods for UE downlink Hybrid Automatic Repeat reQuest (HARQ) buffer memory management are described. In one method, the entire UE DL HARQ buffer memory space is pre-partitioned according to the number and capacities of the UE's active carrier components. In another method, the UE DL HARQ buffer is split between on-chip and off-chip memory so that each partition and sub-partition is allocated between the on-chip and off-chip memories in accordance with an optimum ratio.
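The following Python sketch illustrates the two partitioning ideas at a toy scale: the buffer is first divided among carrier components in proportion to their capacities, and each share is then split between on-chip and off-chip memory by a fixed ratio. The byte counts, capacity weights, and the 0.25 ratio are made-up example values; how the optimum ratio is actually chosen is not specified here.

    def partition_harq_buffer(total_bytes, cc_capacities, on_chip_ratio):
        # Pre-partition the DL HARQ buffer among the active carrier components in
        # proportion to their capacities, then split each partition between
        # on-chip and off-chip memory by a fixed ratio.
        total_capacity = sum(cc_capacities.values())
        plan = {}
        for cc, capacity in cc_capacities.items():
            share = total_bytes * capacity // total_capacity
            on_chip = int(share * on_chip_ratio)
            plan[cc] = {"total": share, "on_chip": on_chip, "off_chip": share - on_chip}
        return plan

    # Example: three carrier components with unequal soft-buffer capacities.
    plan = partition_harq_buffer(total_bytes=12_000_000,
                                 cc_capacities={"CC0": 4, "CC1": 2, "CC2": 2},
                                 on_chip_ratio=0.25)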
Abstract:
A method for decoding a signal includes receiving the signal, where the signal includes at least one symbol; decoding the signal in stages, where each at least one symbol of the signal is decoded into at least one bit per stage, wherein a Log-Likelihood Ratio (LLR) for each at least one bit at each stage is determined and identified in a vector L_APP; performing a Cyclic Redundancy Check (CRC) on L_APP, and stopping if L_APP passes the CRC; otherwise, determining the magnitudes of the LLRs in L_APP; identifying the K LLRs in L_APP with the smallest magnitudes and indexing them as r = {r(1), r(2), ..., r(K)}; setting L_max to the maximum magnitude of the LLRs in L_APP or to the maximum possible LLR quantization value; setting v = 1; generating L̃_A(r(k)) = L_A(r(k)) − L_max · v_k · sign[L_APP(r(k))], for k = 1, 2, ..., K; decoding with L̃_A to identify L̃_APP, wherein L̃_APP is an LLR vector; and performing the CRC on L̃_APP, and stopping if L̃_APP passes the CRC or v = 2^K − 1; otherwise, incrementing v and returning to generating L̃_A(r(k)).
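Read literally, the loop flips the K least-reliable a-priori LLRs according to a counter v and re-decodes until the CRC passes or all 2^K − 1 flip patterns are exhausted. The Python sketch below follows that reading; interpreting v_k as the k-th bit of v, and the decode/crc_passes callables, are assumptions made for illustration only.

    import numpy as np

    def flip_and_retry(l_a, decode, crc_passes, K, l_max=None):
        # l_a: a-priori LLR vector fed to the decoder.
        # decode(l_a) -> a-posteriori LLR vector L_APP.
        # crc_passes(l_app) -> True if the hard decision of l_app passes the CRC.
        l_app = decode(l_a)
        if crc_passes(l_app):
            return l_app
        r = np.argsort(np.abs(l_app))[:K]       # indices of the K least-reliable bits
        if l_max is None:
            l_max = np.max(np.abs(l_app))       # or the maximum LLR quantization value
        l_app_tilde = l_app
        for v in range(1, 2 ** K):
            l_a_tilde = np.array(l_a, dtype=float)
            for k in range(K):
                v_k = (v >> k) & 1              # assumed: k-th bit of the flip pattern v
                l_a_tilde[r[k]] = l_a[r[k]] - l_max * v_k * np.sign(l_app[r[k]])
            l_app_tilde = decode(l_a_tilde)
            if crc_passes(l_app_tilde):
                break
        return l_app_tilde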
Abstract:
An apparatus and a method. The apparatus includes a receiver including an input for receiving a codeword of length m^j, where m and j are each an integer; a processor configured to determine a decoding node tree structure with m^j leaf nodes for the received codeword and receive an integer i indicating a level at which parallelism of order m is applied to the decoding node tree structure; and m successive cancellation decoders (SCDs) configured to decode, in parallel, each child node in the decoding node tree structure at level i.
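The sketch below only illustrates the structural idea: for a tree with m^j leaves, the leaf-index ranges of the m child nodes at level i are computed and each range is handed to one of m decoder instances. The thread pool and the placeholder scd callable stand in for the actual successive cancellation decoders, whose internals (and the data dependencies between sibling nodes) are not modeled here.

    from concurrent.futures import ThreadPoolExecutor

    def child_leaf_ranges(m, j, i, parent_index=0):
        # For a decoding tree with m**j leaves, return the leaf-index range covered
        # by each of the m children (at level i) of the parent node at level i - 1.
        parent_span = m ** (j - (i - 1))        # leaves under the level-(i-1) parent
        child_span = parent_span // m           # leaves under each child at level i
        start = parent_index * parent_span
        return [(start + c * child_span, start + (c + 1) * child_span) for c in range(m)]

    def decode_children_in_parallel(llrs, m, j, i, scd):
        # scd(llr_segment) stands in for one successive cancellation decoder.
        ranges = child_leaf_ranges(m, j, i)
        with ThreadPoolExecutor(max_workers=m) as pool:
            return list(pool.map(lambda rng: scd(llrs[rng[0]:rng[1]]), ranges))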
Abstract:
An apparatus and a method. The apparatus includes a plurality of polarization processors, each including n inputs and n outputs, where n is an integer; and at least one permutation processor, each including n inputs and n outputs, wherein each of the at least one permutation processor is connected between two of the plurality of polarization processors, and connects the n outputs of a first of the two polarization processors to the n inputs of a second of the two polarization processors in a permutation pattern that maximally polarizes the n outputs of the second of the two polarization processors.
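A toy Python sketch of alternating polarization and permutation stages follows. It uses the standard 2x2 kernel (u1, u2) -> (u1 XOR u2, u2) for each polarization processor and treats the permutation pattern as a caller-supplied index map, since the pattern that maximally polarizes the next stage's outputs is exactly what the described apparatus selects and is not reproduced here.

    def polarization_stage(bits):
        # One polarization processor built from the 2x2 kernel
        # (u1, u2) -> (u1 XOR u2, u2), applied to consecutive input pairs.
        out = list(bits)
        for idx in range(0, len(out), 2):
            out[idx] = out[idx] ^ out[idx + 1]
        return out

    def permutation_stage(bits, pattern):
        # Route the n outputs of one polarization processor to the n inputs of
        # the next one: pattern[k] gives the output index wired to input k.
        return [bits[src] for src in pattern]

    def polarize(bits, patterns):
        # Alternate polarization and permutation stages, with one permutation
        # processor connected between each pair of consecutive polarization stages.
        out = polarization_stage(bits)
        for pattern in patterns:
            out = polarization_stage(permutation_stage(out, pattern))
        return out

    # Example with n = 4 and a single interleaving permutation between two stages.
    codeword = polarize([1, 0, 1, 1], patterns=[[0, 2, 1, 3]])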
Abstract:
A system to recognize objects in an image includes an object detection network that outputs a first hierarchical-calculated feature for a detected object. A face alignment regression network determines a regression loss for alignment parameters based on the first hierarchical-calculated feature. A detection box regression network determines a regression loss for detected boxes based on the first hierarchical-calculated feature. The object detection network further includes a weighted loss generator to generate a weighted loss from the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the regression loss for the detected boxes. A backpropagator backpropagates the generated weighted loss. A grouping network forms, based on the first hierarchical-calculated feature, the regression loss for the alignment parameters, and the regression loss for the detected boxes, at least one of a box grouping, an alignment parameter grouping, and a non-maximum suppression of the alignment parameters and the detected boxes.
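As a sketch of the weighted-loss step only, the snippet below combines the per-task losses into the single scalar that would then be backpropagated. The loss names, example values, and weights are placeholders, and the grouping/non-maximum-suppression stage is not modeled.

    def weighted_loss(losses, weights):
        # Combine the per-task losses (detection feature loss, alignment-parameter
        # regression loss, detected-box regression loss) into one scalar
        # that is backpropagated through the networks.
        return sum(weights[name] * value for name, value in losses.items())

    # Placeholder values; the actual losses and weights come from the networks above.
    total = weighted_loss(losses={"detection": 0.82, "alignment": 1.31, "box": 0.64},
                          weights={"detection": 1.0, "alignment": 0.5, "box": 0.5})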