PERFORMING AUTOMATIC MAP REDUCE JOB OPTIMIZATION USING A RESOURCE SUPPLY-DEMAND BASED APPROACH

    Publication number: US20170315848A1

    Publication date: 2017-11-02

    Application number: US15140830

    Filing date: 2016-04-28

    IPC classification: G06F9/50

    CPC classification: G06F9/5055 G06F9/5066

    Abstract: Determining optimum values for Map Reduce parameters by identifying parameters that affect performance of a Map Reduce job, determining a relationship between each of the identified parameters and a maximization of resource utilization for a plurality of computing resources configured for executing the Map Reduce job, representing a workflow based upon supply-demand relationships among the plurality of computing resources, modeling an execution cost as a function of the plurality of identified parameters, formulating a non-linear programming problem to minimize the execution cost, reformulating the non-linear programming problem as a linear programming problem, and solving the linear programming problem to determine a combination of parameter values for the plurality of identified parameters that minimizes the execution cost for the Map Reduce job.
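
    The parameter-search step the abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the cost coefficients, candidate values, and capacity limit are hypothetical, and a brute-force enumeration stands in for the linear-programming solve, since the point is only the supply-demand feasibility check plus linear-cost minimization over parameter combinations.

    ```python
    # Hypothetical sketch: enumerate a small grid of Map Reduce parameter
    # values and keep the combination that minimizes a linear execution cost
    # while total resource demand stays within the cluster's supply.
    from itertools import product

    def optimize_parameters(candidates, cost_coeffs, capacity):
        """Return the feasible parameter combination with minimal linear cost."""
        best, best_cost = None, float("inf")
        for combo in product(*candidates.values()):
            demand = sum(combo)              # demand this combination places on resources
            if demand > capacity:            # supply-demand feasibility check
                continue
            cost = sum(c * v for c, v in zip(cost_coeffs, combo))
            if cost < best_cost:
                best_cost, best = cost, dict(zip(candidates, combo))
        return best, best_cost

    # Hypothetical parameters: map and reduce slots per node.
    params = {"map_slots": [2, 4, 8], "reduce_slots": [1, 2, 4]}
    best, cost = optimize_parameters(params, cost_coeffs=[3.0, 5.0], capacity=10)
    ```

    A real implementation would replace the enumeration with an LP solver, which is exactly why the abstract's reformulation from a non-linear to a linear program matters: linear programs are solvable at scale.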

    LARGE MODEL SUPPORT IN DEEP LEARNING

    Publication number: US20230064057A1

    Publication date: 2023-03-02

    Application number: US18048203

    Filing date: 2022-10-20

    IPC classification: G06N3/08 G06F13/42 G06N3/04

    Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.
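
    The offload pattern in this abstract can be sketched in a few lines. This is a pure-Python illustration under stated assumptions: the `CPUMemory` class and toy layers are hypothetical stand-ins for real device memory and network layers, not the patented system; the only point shown is that each layer's input is copied to CPU-side storage during the first-to-last forward traversal.

    ```python
    # Hypothetical sketch: during the forward pass, each layer's input is
    # offloaded from (simulated) GPU memory into (simulated) CPU memory,
    # freeing GPU memory for larger models.

    class CPUMemory:
        """Stand-in for central processing unit memory holding offloaded data."""
        def __init__(self):
            self.store = {}

        def save(self, layer_idx, tensor):
            self.store[layer_idx] = list(tensor)   # copy out of "GPU" memory

        def load(self, layer_idx):
            return self.store[layer_idx]

    def forward(layers, x, cpu_mem):
        """Traverse layers first to last, offloading each layer's input."""
        for i, layer in enumerate(layers):
            cpu_mem.save(i, x)                     # offload input before compute
            x = [layer(v) for v in x]              # simulated per-element layer op
        return x

    cpu_mem = CPUMemory()
    layers = [lambda v: v * 2, lambda v: v + 1]    # two toy "layers"
    out = forward(layers, [1.0, 2.0], cpu_mem)
    ```

    The offloaded inputs would typically be fetched back during the backward pass, which is what makes the trade of GPU memory for transfer bandwidth worthwhile.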

    Multi-directional reduction in large scale deep-learning

    Publication number: US10922606B2

    Publication date: 2021-02-16

    Application number: US15621258

    Filing date: 2017-06-13

    IPC classification: G06N5/04 G06N3/063 G06N3/08

    Abstract: A method for executing multi-directional reduction algorithms includes identifying a set of nodes, wherein a node includes at least one data element, creating a set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a single direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created set of partitions, creating an additional set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a different direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created additional set of partitions, and providing a set of reduced results corresponding to the at least one data element.
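
    The two-phase reduction can be sketched with nodes arranged in a grid. The grid shape, values, and sum operator below are hypothetical illustrations: the first phase reduces data elements across nodes aligned in one direction (rows), and the second phase reduces the resulting partials across the other direction (columns), yielding the fully reduced result.

    ```python
    # Hypothetical sketch of multi-directional reduction over a 2x2 node grid,
    # with one data element per node and sum as the reduction algorithm.

    def reduce_direction(grid, axis):
        """Sum-reduce a 2-D list of node values along one direction."""
        if axis == 1:                         # nodes aligned within each row
            return [sum(row) for row in grid]
        return [sum(col) for col in zip(*grid)]  # nodes aligned within each column

    nodes = [[1, 2], [3, 4]]                  # 2x2 grid, one element per node

    # Phase 1: reduce across nodes arranged in a single direction (rows).
    phase1 = reduce_direction(nodes, axis=1)

    # Phase 2: reduce the partial results across the other direction (columns).
    phase2 = reduce_direction([[v] for v in phase1], axis=0)
    ```

    Splitting the reduction into directional phases is what enables the communication pattern to scale: each phase only exchanges data among nodes that share a row or a column, rather than all-to-all.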

    Performing automatic map reduce job optimization using a resource supply-demand based approach

    Publication number: US10013289B2

    Publication date: 2018-07-03

    Application number: US15140830

    Filing date: 2016-04-28

    IPC classification: G06F9/46 G06F9/50

    CPC classification: G06F9/5055 G06F9/5066

    Abstract: Determining optimum values for Map Reduce parameters by identifying parameters that affect performance of a Map Reduce job, determining a relationship between each of the identified parameters and a maximization of resource utilization for a plurality of computing resources configured for executing the Map Reduce job, representing a workflow based upon supply-demand relationships among the plurality of computing resources, modeling an execution cost as a function of the plurality of identified parameters, formulating a non-linear programming problem to minimize the execution cost, reformulating the non-linear programming problem as a linear programming problem, and solving the linear programming problem to determine a combination of parameter values for the plurality of identified parameters that minimizes the execution cost for the Map Reduce job.

    Large model support in deep learning

    Publication number: US11915147B2

    Publication date: 2024-02-27

    Application number: US18048203

    Filing date: 2022-10-20

    Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.

    Multi-directional Reduction in Large Scale Deep-Learning

    Publication number: US20180357534A1

    Publication date: 2018-12-13

    Application number: US15621258

    Filing date: 2017-06-13

    IPC classification: G06N3/08 G06N3/063 G06N3/04

    CPC classification: G06N3/08 G06N3/04 G06N3/063

    Abstract: A method for executing multi-directional reduction algorithms includes identifying a set of nodes, wherein a node includes at least one data element, creating a set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a single direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created set of partitions, creating an additional set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a different direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created additional set of partitions, and providing a set of reduced results corresponding to the at least one data element.

    Large model support in deep learning

    Publication number: US11526759B2

    Publication date: 2022-12-13

    Application number: US16180864

    Filing date: 2018-11-05

    Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.

    Margin based adversarial computer program

    Publication number: US11494591B2

    Publication date: 2022-11-08

    Application number: US16245489

    Filing date: 2019-01-11

    Abstract: Techniques regarding a zero-confidence adversarial attack are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an adversarial component that computes a perturbation that causes misclassification by a neural network classifier. The computer executable components can also comprise a restoration component that determines a normal vector to a constraint contour developed by the neural network classifier. Further, the computer executable components can comprise a projection component that determines a tangential vector to the constraint contour.
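
    The normal and tangential vectors the restoration and projection components compute can be illustrated for the simplest case of a linear decision boundary w·x + b = 0. This is a hypothetical sketch, not the patented components: for a linear contour the unit normal is constant (w / ||w||), and any perturbation splits into a component along that normal, which changes the classifier margin, and a tangential remainder, which moves along the contour.

    ```python
    # Hypothetical sketch: decompose a perturbation into its normal and
    # tangential components relative to a linear constraint contour w.x + b = 0.
    import math

    def normal_vector(w):
        """Unit normal to the contour defined by weight vector w."""
        norm = math.sqrt(sum(v * v for v in w))
        return [v / norm for v in w]

    def split_perturbation(delta, n):
        """Split delta into components along and orthogonal to unit normal n."""
        along = sum(d * v for d, v in zip(delta, n))      # scalar projection onto n
        normal_part = [along * v for v in n]              # changes the margin
        tangent_part = [d - p for d, p in zip(delta, normal_part)]  # moves along the contour
        return normal_part, tangent_part

    n = normal_vector([3.0, 4.0])                         # unit normal, here [0.6, 0.8]
    normal_part, tangent_part = split_perturbation([1.0, 0.0], n)
    ```

    For a neural network classifier the contour is curved, so the normal must be recomputed at each point (typically from the classifier's gradient), but the per-step decomposition is the same.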

    MARGIN BASED ADVERSARIAL COMPUTER PROGRAM
    Invention application

    Publication number: US20200226425A1

    Publication date: 2020-07-16

    Application number: US16245489

    Filing date: 2019-01-11

    IPC classification: G06K9/62 G06N3/08 G06K9/03

    Abstract: Techniques regarding a zero-confidence adversarial attack are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an adversarial component that computes a perturbation that causes misclassification by a neural network classifier. The computer executable components can also comprise a restoration component that determines a normal vector to a constraint contour developed by the neural network classifier. Further, the computer executable components can comprise a projection component that determines a tangential vector to the constraint contour.