Abstract:
The present disclosure presents a method and an apparatus for transmitting discovery signaling from a base station. For example, the method may include encoding a wireless fidelity (Wi-Fi) beacon at the base station for transmission and transmitting the encoded Wi-Fi beacon from the base station to one or more neighboring wireless nodes. The Wi-Fi beacon is generated by a Wi-Fi access point (AP) co-located with the base station, which is a long term evolution (LTE) or LTE Advanced base station operating in unlicensed spectrum. As such, other wireless nodes can discover the LTE or LTE Advanced base station operating in unlicensed spectrum.
Abstract:
A method of detecting unknown classes is presented and includes generating a first classifier for multiple first classes. In one configuration, an output of the first classifier has a dimension of at least two. The method also includes designing a second classifier to receive the output of the first classifier to decide whether input data belongs to the multiple first classes or at least one second class.
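The abstract does not specify how either classifier is realized. A minimal sketch, assuming a softmax first stage (whose output dimension equals the number of known classes, i.e., at least two) and a confidence-threshold rule standing in for the second classifier:

```python
import math

def softmax(logits):
    # first classifier head: probability vector over the known classes
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def second_stage(first_stage_probs, threshold=0.5):
    # second classifier consumes the first classifier's output vector
    # and decides known vs. unknown; a simple max-confidence threshold
    # is an assumption for illustration, not the patented design
    return "known" if max(first_stage_probs) >= threshold else "unknown"
```

A confident first-stage output (one dominant class) is labeled "known", while a flat output, where no known class stands out, is flagged as a potential unknown class.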
Abstract:
Computing a non-linear function ƒ(x) in hardware or embedded systems can be complex and resource intensive. In one or more aspects of the disclosure, a method, a computer-readable medium, and an apparatus are provided for computing a non-linear function ƒ(x) accurately and efficiently in hardware using look-up tables (LUTs) and interpolation or extrapolation. The apparatus may be a processor. The processor computes a non-linear function ƒ(x) for an input variable x, where ƒ(x)=g(y(x),z(x)). The processor determines an integer n by determining a position of a most significant bit (MSB) of an input variable x. In addition, the processor determines a value for y(x) based on a first look-up table and the determined integer n. Also, the processor determines a value for z(x) based on n and the input variable x, and based on a second look-up table. Further, the processor computes ƒ(x) based on the determined values for y(x) and z(x).
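As a concrete instance of the scheme (the abstract leaves ƒ, g, y, and z generic), consider ƒ(x) = log2(x) with g(y, z) = y + z: the MSB position n gives y(x) = n directly via a trivial first LUT, and a small second LUT over the leading mantissa bits supplies z(x). The 4-bit mantissa index and the LUT sizes below are illustrative assumptions:

```python
import math

def msb_position(x):
    # integer n = position of the most significant set bit (x > 0)
    return x.bit_length() - 1

# First LUT: y(n) = n, the integer part of log2 for a value whose MSB is at n.
Y_LUT = list(range(32))

# Second LUT: z = log2(1 + i/16) over 4 mantissa bits, precomputed offline
# (math.log2 is used here only to build the table for illustration).
Z_LUT = [math.log2(1 + i / 16.0) for i in range(16)]

def approx_log2(x):
    n = msb_position(x)
    y = Y_LUT[n]
    # top 4 bits below the MSB index the second LUT
    if n >= 4:
        frac = (x - (1 << n)) >> (n - 4)
    else:
        frac = (x - (1 << n)) << (4 - n)
    z = Z_LUT[frac]
    return y + z  # g(y, z) = y + z for log2
```

Exact powers of two hit the table exactly; other inputs are approximated by the nearest stored mantissa entry, with accuracy set by the second LUT's size.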
Abstract:
A method of reducing computational complexity for a fixed point neural network operating in a system having a limited bit width in a multiplier-accumulator (MAC) includes reducing a number of bit shift operations when computing activations in the fixed point neural network. The method also includes balancing an amount of quantization error and an overflow error when computing activations in the fixed point neural network.
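A hedged sketch of both ideas (the specific shift-reduction and balancing rules are not given in the abstract; sharing one Q-format so that a single shift follows the whole accumulation, and trading fractional bits against saturation, are illustrative assumptions):

```python
def quantize(values, frac_bits, bit_width=8):
    # fixed-point quantization with saturation: more fractional bits
    # lowers quantization error but raises the risk of overflow,
    # so frac_bits is the knob that balances the two error sources
    lo, hi = -(1 << (bit_width - 1)), (1 << (bit_width - 1)) - 1
    return [max(lo, min(hi, round(v * (1 << frac_bits)))) for v in values]

def fixed_point_dot(w_q, x_q, w_frac, x_frac, out_frac):
    # accumulate products in a wide integer and apply ONE shift at the
    # end, instead of shifting every product inside the MAC loop --
    # this reduces the number of bit shift operations per activation
    acc = sum(wi * xi for wi, xi in zip(w_q, x_q))
    shift = w_frac + x_frac - out_frac
    return acc >> shift
```

With 4 fractional bits, the dot product of [0.5, 0.25] and [1.0, 2.0] accumulates to 256 and a single final shift yields the fixed-point result 16, i.e., 1.0.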
Abstract:
A method for selecting bit widths for a fixed point machine learning model includes evaluating a sensitivity of model accuracy to bit widths at each computational stage of the model. The method also includes selecting a bit width for parameters and/or intermediate calculations in the computational stages of the model. The bit width for the parameters and the bit width for the intermediate calculations may be different. The selected bit width may be determined based on the sensitivity evaluation.
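One way the per-stage sensitivity evaluation could drive the selection is to pick, for each stage independently, the smallest candidate bit width whose accuracy stays within a tolerance of the full-precision baseline. The `evaluate_accuracy` callback, the tolerance, and the greedy per-stage search are assumptions for illustration:

```python
def select_bit_widths(evaluate_accuracy, stages, candidate_widths, max_drop=0.01):
    # evaluate_accuracy(stage, width) -> model accuracy with that stage
    # quantized to `width` bits; evaluate_accuracy(None, None) is the
    # full-precision baseline (assumed supplied by the caller)
    baseline = evaluate_accuracy(None, None)
    chosen = {}
    for stage in stages:
        for width in sorted(candidate_widths):
            # sensitivity check: accuracy drop at this stage and width
            if baseline - evaluate_accuracy(stage, width) <= max_drop:
                chosen[stage] = width  # smallest width within tolerance
                break
        else:
            chosen[stage] = max(candidate_widths)  # stage is too sensitive
    return chosen
```

A sensitive stage (large accuracy drop at low precision) ends up with a wide bit width, while a tolerant stage is assigned a narrow one, so different stages naturally receive different widths.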
Abstract:
A method of quantizing a floating point machine learning network to obtain a fixed point machine learning network using a quantizer may include selecting at least one moment of an input distribution of the floating point machine learning network. The method may also include determining quantizer parameters for quantizing values of the floating point machine learning network based at least in part on the at least one selected moment of the input distribution of the floating point machine learning network to obtain corresponding values of the fixed point machine learning network.
Abstract:
A method of adaptively selecting a configuration for a machine learning process includes determining current system resources and performance specifications of a current system. A new configuration for the machine learning process is determined based at least in part on the current system resources and the performance specifications. The method also includes dynamically selecting between a current configuration and the new configuration based at least in part on the current system resources and the performance specifications.
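The abstract leaves the selection criterion abstract; as one hedged sketch, the dynamic choice between the current and a new configuration could check each candidate against the current system resources and performance specifications, switching only when a candidate both fits and improves on the current configuration (the dictionary fields and the accuracy-based preference are assumptions for illustration):

```python
def select_configuration(current_cfg, candidate_cfgs, resources, specs):
    # keep the current configuration unless a new one both fits the
    # available system resources and better meets the performance specs
    def fits(cfg):
        return (cfg["memory_mb"] <= resources["memory_mb"]
                and cfg["latency_ms"] <= specs["max_latency_ms"])

    best = current_cfg
    for cfg in candidate_cfgs:
        if fits(cfg) and cfg["accuracy"] > best["accuracy"]:
            best = cfg
    return best
```

Re-running this selection whenever resources or specifications change yields the dynamic switching behavior described above.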