-
Publication Number: US11557053B2
Publication Date: 2023-01-17
Application Number: US16785469
Filing Date: 2020-02-07
Inventors: Rui Zhang, Conrad M. Albrecht, Siyuan Lu, Wei Zhang, Ulrich Alfons Finkler, David S. Kung, Xiaodong Cui, Marcus Freitag
Abstract: Techniques for image processing and transformation are provided. A plurality of images and a plurality of maps are received, and a system of neural networks is trained based on the plurality of images and the plurality of maps. A first image is received, and a first map is generated by processing the first image using the system of neural networks.
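As a rough illustration of the image-to-map idea, the sketch below trains a small encoder-decoder in PyTorch on (image, map) pairs and then applies it to a new image. The architecture, loss, and data layout are assumptions for illustration; the patent does not disclose a specific network design.

```python
# Hypothetical sketch: the "system of neural networks" reduced to one small
# encoder-decoder that learns to turn images into map-like rasters.
import torch
import torch.nn as nn

class ImageToMap(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(pairs, epochs=10, lr=1e-3):
    """pairs yields (image, target_map) tensors of shape (N, 3, H, W)."""
    model = ImageToMap()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for image, target_map in pairs:
            opt.zero_grad()
            loss_fn(model(image), target_map).backward()
            opt.step()
    return model

# Inference on a "first image": first_map = train(training_pairs)(first_image)
```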
-
Publication Number: US20170315848A1
Publication Date: 2017-11-02
Application Number: US15140830
Filing Date: 2016-04-28
Inventors: David S. Kung, Dung Phan, Jinjun Xiong
IPC Classification: G06F9/50
CPC Classification: G06F9/5055, G06F9/5066
Abstract: Determining optimum values for Map Reduce parameters by identifying parameters that affect performance of a Map Reduce job, determining a relationship between each of the identified parameters and a maximization of resource utilization for a plurality of computing resources configured for executing the Map Reduce job, representing a workflow based upon supply-demand relationships among the plurality of computing resources, modeling an execution cost as a function of the plurality of identified parameters, formulating a non-linear programming problem to minimize the execution cost, reformulating the non-linear programming problem as a linear programming problem, and solving the linear programming problem to determine a combination of parameter values for the plurality of identified parameters that minimizes the execution cost for the Map Reduce job.
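To make the final step concrete, the fragment below solves an illustrative linear program of the kind the abstract describes, using SciPy. The decision variables, cost coefficients, and resource constraints are invented for illustration and are not the patent's actual cost model.

```python
# Hypothetical sketch: once the execution-cost model is linearized, choosing
# Map Reduce parameters becomes a linear program.
from scipy.optimize import linprog

# Decision variables: x = [num_map_slots, num_reduce_slots, io_sort_mb]
c = [2.0, 3.0, 0.05]          # linearized per-unit contribution to execution cost

# Supply-demand constraints among cluster resources (A_ub @ x <= b_ub).
A_ub = [
    [1.0, 1.0, 0.0],          # map + reduce slots bounded by available cores
    [0.5, 1.0, 0.01],         # memory demand bounded by cluster memory (GB)
]
b_ub = [64.0, 128.0]

bounds = [(1, 64), (1, 32), (64, 1024)]   # allowed range for each parameter

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal parameter values:", res.x, "minimized cost:", res.fun)
```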
-
Publication Number: US20230064057A1
Publication Date: 2023-03-02
Application Number: US18048203
Filing Date: 2022-10-20
Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.
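One way to approximate this forward-pass offloading in practice is PyTorch's saved-tensor offloading, sketched below. The model, tensor sizes, and the use of `torch.autograd.graph.save_on_cpu` are illustrative assumptions rather than the patented mechanism itself.

```python
# Hypothetical sketch: as the forward pass traverses the layers, the activations
# each layer saves for backward are placed in (pinned) CPU memory and copied
# back to the GPU only when the backward pass needs them.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
).cuda()
x = torch.randn(32, 4096, device="cuda")

with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(x).sum()   # forward pass: saved layer inputs go to CPU memory
loss.backward()             # backward pass: saved inputs return to the GPU
```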
-
Publication Number: US10922606B2
Publication Date: 2021-02-16
Application Number: US15621258
Filing Date: 2017-06-13
Inventors: Minsik Cho, Ulrich A. Finkler, David S. Kung, Li Zhang
Abstract: A method for executing multi-directional reduction algorithms includes identifying a set of nodes, wherein a node includes at least one data element, creating a set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a single direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created set of partitions, creating an additional set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a different direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created additional set of partitions, and providing a set of reduced results corresponding to the at least one data element.
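The NumPy sketch below mimics the two phases on a small 2-by-3 grid of nodes: a first reduction over nodes arranged along one direction (rows), then a second reduction over the other direction (columns). The grid shape and the choice of summation as the reduction operator are assumptions for illustration.

```python
# Hypothetical sketch: two-phase (multi-directional) reduction on a node grid.
import numpy as np

rows, cols, n = 2, 3, 4
# data[r, c] is the length-n vector held by the node at grid position (r, c).
data = np.arange(rows * cols * n, dtype=float).reshape(rows, cols, n)

# Phase 1: partition nodes along a single direction (each row is one partition)
# and reduce; every node in a row then holds that row's partial result.
row_sums = data.sum(axis=1)                                   # (rows, n)
after_rows = np.repeat(row_sums[:, None, :], cols, axis=1)    # (rows, cols, n)

# Phase 2: partition nodes along the other direction (each column) and reduce
# the partial results; every node now holds the fully reduced vector.
col_sums = after_rows.sum(axis=0)                             # (cols, n)
after_cols = np.repeat(col_sums[None, :, :], rows, axis=0)    # (rows, cols, n)

assert np.allclose(after_cols[0, 0], data.sum(axis=(0, 1)))
```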
-
Publication Number: US10013289B2
Publication Date: 2018-07-03
Application Number: US15140830
Filing Date: 2016-04-28
Inventors: David S. Kung, Dung Phan, Jinjun Xiong
CPC Classification: G06F9/5055, G06F9/5066
Abstract: Determining optimum values for Map Reduce parameters by identifying parameters that affect performance of a Map Reduce job, determining a relationship between each of the identified parameters and a maximization of resource utilization for a plurality of computing resources configured for executing the Map Reduce job, representing a workflow based upon supply-demand relationships among the plurality of computing resources, modeling an execution cost as a function of the plurality of identified parameters, formulating a non-linear programming problem to minimize the execution cost, reformulating the non-linear programming problem as a linear programming problem, and solving the linear programming problem to determine a combination of parameter values for the plurality of identified parameters that minimizes the execution cost for the Map Reduce job.
-
Publication Number: US11915147B2
Publication Date: 2024-02-27
Application Number: US18048203
Filing Date: 2022-10-20
CPC Classification: G06N3/084, G06F13/4282, G06N3/04
Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.
-
Publication Number: US20180357534A1
Publication Date: 2018-12-13
Application Number: US15621258
Filing Date: 2017-06-13
Inventors: Minsik Cho, Ulrich A. Finkler, David S. Kung, Li Zhang
Abstract: A method for executing multi-directional reduction algorithms includes identifying a set of nodes, wherein a node includes at least one data element, creating a set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a single direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created set of partitions, creating an additional set of partitions including one or more data elements from at least two nodes, wherein the at least two nodes are arranged in a different direction with respect to the positioning of the set of nodes, executing a reduction algorithm on the data elements within the created additional set of partitions, and providing a set of reduced results corresponding to the at least one data element.
-
Publication Number: US11526759B2
Publication Date: 2022-12-13
Application Number: US16180864
Filing Date: 2018-11-05
Abstract: Techniques that facilitate model support in deep learning are provided. In one example, a system includes a graphics processing unit and a central processing unit memory. The graphics processing unit processes data to train a deep neural network. The central processing unit memory stores a portion of the data to train the deep neural network. The graphics processing unit provides, during a forward pass process of the deep neural network that traverses through a set of layers for the deep neural network from a first layer of the set of layers to a last layer of the set of layers that provides a set of outputs for the deep neural network, input data for a layer from the set of layers for the deep neural network to the central processing unit memory.
-
Publication Number: US11494591B2
Publication Date: 2022-11-08
Application Number: US16245489
Filing Date: 2019-01-11
Inventors: Yang Zhang, Shiyu Chang, Mo Yu, David S. Kung
Abstract: Techniques regarding a zero-confidence adversarial attack are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an adversarial component that computes a perturbation that causes misclassification by a neural network classifier. The computer executable components can also comprise a restoration component that determines a normal vector to a constraint contour developed by the neural network classifier. Further, the computer executable components can comprise a projection component that determines a tangential vector to the constraint contour.
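As a rough sketch of how the three components could interact in one attack iteration, the PyTorch fragment below treats the gradient of a decision function as the normal to the constraint contour, takes a restoration step along it, and a projection step along the tangential direction toward the clean input. The decision function, step sizes, and single-example batching are assumptions; this is not the patented algorithm's exact update.

```python
# Hypothetical sketch: one update step of a zero-confidence (decision-boundary)
# attack that separates restoration and projection components.
import torch

def attack_step(classifier, x_adv, x_clean, true_label, target_label,
                eta_n=0.1, eta_t=0.01):
    x_adv = x_adv.clone().requires_grad_(True)
    logits = classifier(x_adv)                     # x_adv has shape (1, ...)
    # Decision function g(x) = logit[target] - logit[true]; g(x) = 0 is the
    # constraint contour separating the two classes.
    g = logits[0, target_label] - logits[0, true_label]
    grad = torch.autograd.grad(g, x_adv)[0]
    normal = grad / (grad.norm() + 1e-12)          # normal vector to the contour

    # Restoration component: if the example fell back to the correct class
    # (g < 0), step along the normal to restore misclassification.
    x_new = x_adv + eta_n * torch.clamp(-g, min=0.0) * normal

    # Projection component: move toward the clean input along the tangential
    # direction, i.e. the part of (x_clean - x_new) orthogonal to the normal.
    d = x_clean - x_new
    tangential = d - (d.flatten() @ normal.flatten()) * normal
    return (x_new + eta_t * tangential).detach()
```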
-
Publication Number: US20200226425A1
Publication Date: 2020-07-16
Application Number: US16245489
Filing Date: 2019-01-11
Inventors: Yang Zhang, Shiyu Chang, Mo Yu, David S. Kung
Abstract: Techniques regarding a zero-confidence adversarial attack are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an adversarial component that computes a perturbation that causes misclassification by a neural network classifier. The computer executable components can also comprise a restoration component that determines a normal vector to a constraint contour developed by the neural network classifier. Further, the computer executable components can comprise a projection component that determines a tangential vector to the constraint contour.