DEPLOYING OPTIMIZATION PROFILES FOR COMPILING COMPUTER PROGRAMS IN DATA CENTERS

    Publication Number: US20240118875A1

    Publication Date: 2024-04-11

    Application Number: US18482738

    Filing Date: 2023-10-06

    Applicant: Google LLC

    CPC classification number: G06F8/41

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for feedback-directed optimization. One of the methods includes maintaining a data store comprising a plurality of optimization profiles that are used by a compiler to compile respective computer programs. The computer programs can be invoked by a set of executing workloads. Operations are repeatedly performed that include, for each optimization profile in at least a subset of the optimization profiles: determining or predicting whether the optimization profile is a valid optimization profile for a current software version of the compiler, and in response to determining or predicting that the optimization profile is not a valid optimization profile for the current software version of the compiler, removing the optimization profile from the data store.
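
    The following is a minimal single-process sketch of the repeated maintenance operation the abstract describes: a data store of optimization profiles is scanned, each profile is checked (or conservatively predicted) to be valid for the current compiler version, and invalid profiles are removed. The class and function names, the dictionary-backed store, and the exact-version-match validity test are illustrative assumptions, not the patented implementation.

    ```python
    from dataclasses import dataclass


    @dataclass
    class OptimizationProfile:
        """A feedback-directed-optimization profile for one compiled program."""
        program_id: str
        compiler_version: str  # compiler version the profile was collected with
        payload: bytes         # profile data consumed by the compiler


    def is_valid_for(profile: OptimizationProfile, current_version: str) -> bool:
        """Determine or predict whether a stored profile is still usable with the
        current compiler version. Validity is approximated here by an exact
        version match; a real system could apply richer compatibility rules."""
        return profile.compiler_version == current_version


    def prune_profile_store(store: dict[str, OptimizationProfile],
                            current_version: str) -> None:
        """One pass of the repeated operation: remove every profile that is not
        valid for the current software version of the compiler."""
        stale = [pid for pid, prof in store.items()
                 if not is_valid_for(prof, current_version)]
        for pid in stale:
            del store[pid]


    if __name__ == "__main__":
        store = {
            "workload-a": OptimizationProfile("workload-a", "compiler-v17", b"..."),
            "workload-b": OptimizationProfile("workload-b", "compiler-v18", b"..."),
        }
        prune_profile_store(store, current_version="compiler-v18")
        print(sorted(store))  # ['workload-b']
    ```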

    Training giant neural networks using pipeline parallelism

    Publication Number: US11232356B2

    Publication Date: 2022-01-25

    Application Number: US16989787

    Filing Date: 2020-08-10

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
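
    Below is a toy, single-process sketch of the micro-batch schedule the abstract describes: a mini-batch is partitioned into micro-batches, a forward pass runs every micro-batch through the sequence of composite layers until the final composite layer's activations exist, and a backward pass propagates gradients back to the first composite layer. The affine/ReLU layer groups, array shapes, and placeholder loss gradient are illustrative assumptions; the distribution of composite layers across N computing devices, which is the core of the patent, is not modeled here.

    ```python
    import numpy as np

    # Illustrative stand-ins: each "composite layer" is a distinct group of
    # network layers assigned to one device; here a group is a single affine map.
    rng = np.random.default_rng(0)
    composite_layers = [rng.normal(size=(8, 8)) for _ in range(4)]  # N = 4 groups


    def forward(x, W):
        # Toy composite-layer computation: affine map followed by ReLU.
        return np.maximum(x @ W, 0.0)


    def train_step(mini_batch, num_micro_batches):
        # 1. Partition the mini-batch into micro-batches.
        micro_batches = np.array_split(mini_batch, num_micro_batches)

        # 2. Forward pass: push every micro-batch through the sequence of
        #    composite layers until output activations have been computed for
        #    the final composite layer.
        activations = []  # activations[m][k]: output of composite layer k on micro-batch m
        for mb in micro_batches:
            acts, x = [], mb
            for W in composite_layers:
                x = forward(x, W)
                acts.append(x)
            activations.append(acts)

        # 3. Backward pass: propagate gradients for every micro-batch back until
        #    output gradients have been computed for the first composite layer.
        #    A placeholder loss gradient is used; a real trainer would also
        #    accumulate per-layer weight gradients across micro-batches.
        for mb, acts in zip(micro_batches, activations):
            grad = acts[-1]  # placeholder gradient of the loss w.r.t. final output
            for W, a in zip(reversed(composite_layers), reversed(acts)):
                grad = (grad * (a > 0)) @ W.T  # back through ReLU and the affine map


    train_step(rng.normal(size=(32, 8)), num_micro_batches=4)
    ```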

    TRAINING GIANT NEURAL NETWORKS USING PIPELINE PARALLELISM

    Publication Number: US20210042620A1

    Publication Date: 2021-02-11

    Application Number: US16989787

    Filing Date: 2020-08-10

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.

    TRAINING GIANT NEURAL NETWORKS USING PIPELINE PARALLELISM

    Publication Number: US20220121945A1

    Publication Date: 2022-04-21

    Application Number: US17567740

    Filing Date: 2022-01-03

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
