Scalable-effort classifiers for energy-efficient machine learning

    Publication number: US10783454B2

    Publication date: 2020-09-22

    Application number: US15877984

    Application date: 2018-01-23

    Abstract: Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
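A minimal sketch of the staged-cascade idea described above, with hypothetical stage callables (not the patented implementation): each stage returns a label and a confidence, easy inputs exit at an early cheap stage, and hard inputs fall through to later, more complex stages.

```python
def cascade_classify(x, stages, thresholds):
    """Run classifier stages in order of increasing complexity.

    stages:     list of callables, each returning (label, confidence)
    thresholds: per-stage confidence required to accept and stop early
    """
    for stage, threshold in zip(stages, thresholds):
        label, confidence = stage(x)
        if confidence >= threshold:
            return label  # input was simple enough for this stage
    return label  # final, most complex stage always decides

# Toy stages: a cheap threshold rule (confident far from 0.5) and a
# "complex" fallback stage that always decides.
cheap = lambda x: (int(x > 0.5), abs(x - 0.5) * 2)
complex_stage = lambda x: (int(x > 0.5), 1.0)

print(cascade_classify(0.95, [cheap, complex_stage], [0.8, 0.0]))  # -> 1 (easy, exits early)
print(cascade_classify(0.52, [cheap, complex_stage], [0.8, 0.0]))  # -> 1 (hard, falls through)
```

Because most real-world inputs are simple, most invocations stop at the cheap stage, which is where the energy savings come from.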

    SPARSITY ESTIMATION FOR DATA TRANSMISSION (invention application, pending)

    Publication number: US20160212245A1

    Publication date: 2016-07-21

    Application number: US14602159

    Application date: 2015-01-21

    CPC classification number: H04L69/04 H04L67/12

    Abstract: Disclosed herein are systems and methods for compressing data and for estimating the sparsity of datasets to aid in compressing data. A device receives a plurality of samples of sensor data from a sensor and determines a plurality of bits, in which each bit has a substantially equal probability of being determined as a 0 bit or as a 1 bit. The device estimates a sparsity value of the sensor data based at least in part on the sequence of bits. The device compresses the received samples of the sensor data based at least in part on the determined sparsity value to provide compressed data and transmits the compressed data via a transmitter to a receiver. Sparse data other than sensor data may also be compressed based at least in part on an estimated sparsity value.
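An illustrative sketch of sparsity-aware compression (not the claimed bit-sequence estimator): estimate the fraction of zero-valued samples in a window, then pick a run-length encoding only when the data is sparse enough to benefit.

```python
def estimate_sparsity(samples):
    """Fraction of zero-valued samples in the window."""
    return sum(1 for s in samples if s == 0) / len(samples)

def rle_encode(samples):
    """(value, run_length) pairs; compact when long zero runs dominate."""
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

def compress(samples, threshold=0.5):
    """Choose an encoding based on the estimated sparsity value."""
    if estimate_sparsity(samples) >= threshold:
        return ("rle", rle_encode(samples))
    return ("raw", list(samples))

data = [0, 0, 0, 0, 7, 0, 0, 3, 0, 0]
print(compress(data))  # sparse -> run-length encoded
```

The `threshold` parameter is a hypothetical tuning knob; the point is that the estimate is computed cheaply before transmission so the device never spends effort compressing data that will not shrink.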


    HARDWARE-EFFICIENT DEEP CONVOLUTIONAL NEURAL NETWORKS

    Publication number: US20170132496A1

    Publication date: 2017-05-11

    Application number: US14934016

    Application date: 2015-11-05

    CPC classification number: G06K9/66 G06K9/6268 G06N3/063

    Abstract: Systems, methods, and computer media for implementing convolutional neural networks efficiently in hardware are disclosed herein. A memory is configured to store a sparse, frequency domain representation of a convolutional weighting kernel. A time-domain-to-frequency-domain converter is configured to generate a frequency domain representation of an input image. A feature extractor is configured to access the memory and, by a processor, extract features based on the sparse, frequency domain representation of the convolutional weighting kernel and the frequency domain representation of the input image. The feature extractor includes convolutional layers and fully connected layers. A classifier is configured to determine, based on extracted features, whether the input image contains an object of interest. Various types of memory can be used to store different information, allowing information-dense data to be stored in faster (e.g., faster access time) memory and sparse data to be stored in slower memory.
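The frequency-domain approach above rests on the convolution theorem: pointwise products of spectra replace sliding-window multiplies, and zero entries in a sparse kernel spectrum can be skipped entirely. A small sketch (illustrative sizes and padding, not the disclosed hardware design):

```python
import numpy as np

def conv2d_frequency(image, kernel):
    """Circular 2-D convolution via FFT: IFFT(FFT(image) * FFT(kernel))."""
    h, w = image.shape
    k_freq = np.fft.fft2(kernel, s=(h, w))  # kernel spectrum (often sparse)
    i_freq = np.fft.fft2(image)
    return np.real(np.fft.ifft2(i_freq * k_freq))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0  # delta kernel at (1, 1)

out = conv2d_frequency(image, kernel)
# A delta at (1, 1) circularly shifts the image by one pixel in each axis.
print(np.allclose(out, np.roll(image, (1, 1), axis=(0, 1))))  # -> True
```

In a hardware realization, the dense image spectrum would live in fast memory and the sparse kernel spectrum in slower memory, matching the storage split the abstract describes.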

    SCALABLE-EFFORT CLASSIFIERS FOR ENERGY-EFFICIENT MACHINE LEARNING (invention application, granted)

    Publication number: US20160217390A1

    Publication date: 2016-07-28

    Application number: US14603222

    Application date: 2015-01-22

    CPC classification number: G06N99/005

    Abstract: Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.


    Context-awareness through biased on-device image classifiers

    Publication number: US10268886B2

    Publication date: 2019-04-23

    Application number: US14715555

    Application date: 2015-05-18

    Abstract: Examples of the disclosure enable efficient processing of images. One or more features are extracted from a plurality of images. Based on the extracted features, the plurality of images are classified into a first set including a plurality of first images and a second set including a plurality of second images. One or more images of the plurality of first images are false positives. The plurality of first images and none of the plurality of second images are transmitted to a remote device. The remote device is configured to process one or more images including recognizing the extracted features, understanding the images, and/or generating one or more actionable items. Aspects of the disclosure facilitate conserving memory at a local device, reducing processor load or an amount of energy consumed at the local device, and/or reducing network bandwidth usage between the local device and the remote device.
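A minimal sketch of the biased local filtering described above, with a hypothetical cheap score function (not the patented classifier): the on-device model is deliberately biased toward recall, so false positives are acceptable, and only flagged images are transmitted for heavier remote processing.

```python
def looks_interesting(image, bias=0.3):
    """Cheap local check with a low, recall-biased acceptance threshold."""
    return image["score"] >= bias

def select_for_upload(images):
    """Keep flagged images (first set); the rest never leave the device."""
    return [img for img in images if looks_interesting(img)]

images = [
    {"id": "a", "score": 0.9},  # true positive
    {"id": "b", "score": 0.4},  # likely false positive, still uploaded
    {"id": "c", "score": 0.1},  # negative, never transmitted
]
print([img["id"] for img in select_for_upload(images)])  # -> ['a', 'b']
```

Dropping the second set locally is what saves memory, energy, and network bandwidth; the remote device absorbs the cost of weeding out the false positives.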

    Sparsity estimation for data transmission

    Publication number: US10057383B2

    Publication date: 2018-08-21

    Application number: US14602159

    Application date: 2015-01-21

    CPC classification number: H04L69/04 H04L67/12

    Abstract: Disclosed herein are systems and methods for compressing data and for estimating the sparsity of datasets to aid in compressing data. A device receives a plurality of samples of sensor data from a sensor and determines a plurality of bits, in which each bit has a substantially equal probability of being determined as a 0 bit or as a 1 bit. The device estimates a sparsity value of the sensor data based at least in part on the sequence of bits. The device compresses the received samples of the sensor data based at least in part on the determined sparsity value to provide compressed data and transmits the compressed data via a transmitter to a receiver. Sparse data other than sensor data may also be compressed based at least in part on an estimated sparsity value.
