Typicality of Batches for Machine Learning

    Publication Number: US20240428137A1

    Publication Date: 2024-12-26

    Application Number: US18750814

    Application Date: 2024-06-21

    Applicant: Google LLC

    Abstract: Systems and methods described herein can improve typicality of batches for machine learning. The systems and methods can include obtaining a corpus of training data, the corpus of training data including one or more training examples. The systems and methods can include generating a first batch set including a plurality of batches from the corpus of training data, each of the batches including a subset of the one or more training examples. The systems and methods can include determining a batch distribution of a first batch of the first batch set. The systems and methods can include determining that the first batch is an atypical batch based on the batch distribution of the first batch. The systems and methods can include, in response to determining that the first batch is an atypical batch, shuffling the training examples of the first batch and one or more second batches of the first batch set to generate a second batch set. The systems and methods can include training a first machine-learned model using the second batch set.
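
    The abstract leaves the typicality measure unspecified. Below is a minimal Python sketch of the described pipeline, assuming for illustration that a batch counts as atypical when the total-variation distance between its label distribution and the corpus label distribution exceeds a threshold; the function names, the distance measure, the threshold, and the choice to reshuffle the whole pool are assumptions, not details from the patent.

```python
import random
from collections import Counter

def label_distribution(examples):
    """Empirical label distribution of a list of (features, label) examples."""
    counts = Counter(label for _, label in examples)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Total-variation distance between two discrete distributions."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

def make_batches(examples, batch_size):
    """Split the corpus into consecutive batches (the 'first batch set')."""
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]

def rebatch_if_atypical(batches, corpus_dist, threshold=0.2, seed=0):
    """If any batch deviates from the corpus label distribution by more than
    `threshold`, reshuffle examples across batches to form a 'second batch set'.
    (For simplicity this sketch pools every batch, not only the atypical ones.)"""
    atypical = [b for b in batches
                if total_variation(label_distribution(b), corpus_dist) > threshold]
    if not atypical:
        return batches  # every batch is already typical
    pooled = [example for batch in batches for example in batch]
    random.Random(seed).shuffle(pooled)
    return make_batches(pooled, len(batches[0]))

# Toy corpus: 90% label 0, 10% label 1, stored in label order.
corpus = [((i,), 0) for i in range(90)] + [((i,), 1) for i in range(10)]
corpus_dist = label_distribution(corpus)
batch_set_1 = make_batches(corpus, batch_size=10)            # last batch is all label 1
batch_set_2 = rebatch_if_atypical(batch_set_1, corpus_dist)  # shuffled, more typical batches
# batch_set_2 would then be used to train the machine-learned model.
```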

    INCORPORATION OF DECISION TREES IN A NEURAL NETWORK

    Publication Number: US20240220867A1

    Publication Date: 2024-07-04

    Application Number: US18289173

    Application Date: 2021-05-10

    Applicant: Google LLC

    CPC classification number: G06N20/20

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for incorporating decision trees into a neural network. One of the methods comprises receiving data representing a neural network comprising a plurality of layers arranged in a sequence; selecting one or more groups of layers, each comprising one or more layers adjacent to each other in the sequence; and generating a new machine learning model comprising, for each group of layers, a respective decision tree that replaces the group of layers, wherein the respective decision tree receives as input a quantized version of the inputs to the first layer in the group and generates as output a quantized version of the outputs of the last layer in the group, and wherein the tree depth of the respective decision tree is based at least in part on the number of layers in the group.
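
    A minimal sketch of the layer-group-to-tree replacement is shown below, assuming a feed-forward network whose layers are plain Python callables, scikit-learn's DecisionTreeRegressor as the replacement tree, uniform quantization, and an illustrative depth rule of depth_per_layer × (number of layers in the group); these specifics are assumptions rather than details taken from the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def quantize(x, num_levels=16, lo=-1.0, hi=1.0):
    """Uniformly quantize values in [lo, hi] onto `num_levels` discrete levels."""
    x = np.clip(x, lo, hi)
    step = (hi - lo) / (num_levels - 1)
    return np.round((x - lo) / step) * step + lo

def distill_group_to_tree(group_layers, calibration_inputs, depth_per_layer=2):
    """Fit a decision tree mapping quantized inputs of the group's first layer
    to quantized outputs of its last layer; depth grows with the group size."""
    x = quantize(calibration_inputs)
    y = calibration_inputs
    for layer in group_layers:                      # run the original group to get targets
        y = layer(y)
    y = quantize(y, lo=float(y.min()), hi=float(y.max()))
    tree = DecisionTreeRegressor(max_depth=depth_per_layer * len(group_layers))
    tree.fit(x, y)
    return tree

# Two toy "layers": affine + ReLU, then another affine projection.
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
layer1 = lambda a: np.maximum(a @ w1, 0.0)
layer2 = lambda a: a @ w2
calibration = rng.uniform(-1.0, 1.0, size=(512, 4))
tree = distill_group_to_tree([layer1, layer2], calibration, depth_per_layer=3)
approximation = tree.predict(quantize(calibration))  # stands in for layer2(layer1(calibration))
```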

    Automatic Selection of Quantization and Filter Pruning Optimization Under Energy Constraints

    Publication Number: US20230229895A1

    Publication Date: 2023-07-20

    Application Number: US18007871

    Application Date: 2021-06-02

    Applicant: Google LLC

    CPC classification number: G06N3/0495 G06N3/092

    Abstract: Systems and methods are disclosed for producing a neural network architecture with an improved tradeoff between energy consumption and performance, such as one deployed on mobile or other resource-constrained devices. In particular, the present disclosure provides systems and methods for searching a network search space to jointly optimize the size of a layer of a reference neural network model (e.g., the number of filters in a convolutional layer or the number of output units in a dense layer) and the quantization of values within the layer. By defining the search space to correspond to the architecture of a reference neural network model, examples of the disclosed network architecture search can optimize models of arbitrary complexity. The resulting neural network models can be run with fewer computing resources (e.g., less processing power, memory usage, and power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
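
    A minimal sketch of such a joint search is given below, assuming a toy reference architecture, a per-layer search space of filter-keep fractions and bit-widths, a plain random-search strategy, and crude analytic proxies for energy and quality; a real system would evaluate candidates by (proxy) training and would use a measured or modeled energy cost instead.

```python
import random

# Reference model: per-layer number of filters in a toy convolutional network.
REFERENCE_FILTERS = [32, 64, 128]

# Search space derived from the reference architecture: for each layer,
# a fraction of filters to keep and a quantization bit-width.
KEEP_FRACTIONS = [0.25, 0.5, 0.75, 1.0]
BIT_WIDTHS = [4, 8, 16]

def sample_candidate(rng):
    """Sample one architecture: (filters kept, bit-width) per reference layer."""
    return [(int(f * rng.choice(KEEP_FRACTIONS)), rng.choice(BIT_WIDTHS))
            for f in REFERENCE_FILTERS]

def estimated_energy(candidate):
    """Illustrative proxy: energy grows with filter count and bit-width."""
    return sum(filters * bits for filters, bits in candidate)

def estimated_quality(candidate):
    """Illustrative proxy for accuracy: larger, higher-precision layers score better.
    In practice this would come from training and evaluating the candidate."""
    return sum(filters * (bits ** 0.5) for filters, bits in candidate)

def search(energy_budget, num_samples=1000, seed=0):
    """Random search: return the best-quality candidate within the energy budget."""
    rng = random.Random(seed)
    best, best_quality = None, float("-inf")
    for _ in range(num_samples):
        candidate = sample_candidate(rng)
        if estimated_energy(candidate) > energy_budget:
            continue                      # violates the energy constraint
        quality = estimated_quality(candidate)
        if quality > best_quality:
            best, best_quality = candidate, quality
    return best

best_arch = search(energy_budget=1500)
# best_arch lists (num_filters, bit_width) per layer of the pruned, quantized model.
```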
