-
Publication Number: US20210081793A1
Publication Date: 2021-03-18
Application Number: US17093938
Filing Date: 2020-11-10
Applicant: INTEL CORPORATION
Inventor: LI CHEN , RAVI L. SAHITA
Abstract: Various embodiments are generally directed to techniques for training deep neural networks, such as with an iterative approach, for instance. Some embodiments are particularly directed to a deep neural network (DNN) training system that generates a hardened DNN by iteratively training DNNs with images that were misclassified by previous iterations of the DNN. One or more embodiments, for example, may include logic to generate an adversarial image that is misclassified by a first DNN that was previously trained with a set of sample images. In some embodiments, the logic may determine a second training set that includes the adversarial image that was misclassified by the first DNN and the first training set of one or more sample images. The second training set may be used to train a second DNN. In various embodiments, the above process may be repeated for a predetermined number of iterations to produce a hardened DNN.
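The abstract describes the iterative hardening loop but not how the misclassified adversarial images are produced. Below is a minimal PyTorch sketch of one plausible reading, assuming the fast gradient sign method (FGSM) as the adversarial-image generator; `make_model` and `train_fn` are hypothetical caller-supplied helpers, not names from the patent.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, images, labels, epsilon=0.03):
    # Perturb each pixel in the direction that increases the loss (FGSM);
    # FGSM is an assumption here, the abstract does not name a method.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def harden(make_model, train_fn, images, labels, iterations=3):
    # First DNN, trained on the original set of sample images.
    model = train_fn(make_model(), images, labels)
    for _ in range(iterations):
        adv = fgsm_adversarial(model, images, labels)
        with torch.no_grad():
            missed = model(adv).argmax(dim=1) != labels  # keep misclassified only
        # Next training set = original samples + misclassified adversarials.
        aug_x = torch.cat([images, adv[missed]])
        aug_y = torch.cat([labels, labels[missed]])
        model = train_fn(make_model(), aug_x, aug_y)
    return model  # hardened DNN after the final iteration
```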
-
Publication Number: US20190095796A1
Publication Date: 2019-03-28
Application Number: US15713573
Filing Date: 2017-09-22
Applicant: INTEL CORPORATION
Inventor: LI CHEN , MICHAEL LEMAY , YE ZHUANG
Abstract: Logic may determine a physical resource assignment via neural network logic trained to determine an optimal policy for assignment of the physical resources in source code. Logic may generate training data to train a neural network by generating multiple instances of machine code for one or more source codes in accordance with different policies. Logic may generate different policies by adjusting, combining, mutating, and/or randomly changing a previous policy. Logic may execute each instance of machine code associated with a source code and measure it, and/or statically determine measurements for it, to determine a reward associated with each state in the source code. Logic may apply weights and biases to the training data to approximate a value function. Logic may perform gradient descent on the approximated value function and may backpropagate the output from the gradient descent to adjust the weights and biases to determine an optimal policy.
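The abstract's pipeline (policy mutation, reward measurement, value-function approximation, gradient descent with backpropagation) could be sketched as below. The policy-as-vector encoding, the `measure_reward` callback, and the network shape are assumptions for illustration, not the patent's representation; `measure_reward` stands in for compiling a source under a candidate policy and returning a scalar reward (e.g., negative run time).

```python
import random
import torch
import torch.nn as nn

def mutate(policy, rate=0.1):
    # Derive a new policy by randomly perturbing a previous one,
    # mirroring the adjust/mutate/randomly-change step in the abstract.
    return [p + random.uniform(-rate, rate) for p in policy]

def search_policy(measure_reward, dim=8, generations=50, population=16):
    # Collect (policy, reward) training data from mutated candidates.
    base = [0.0] * dim
    data = []
    for _ in range(generations):
        for _ in range(population):
            cand = mutate(base)
            data.append((cand, measure_reward(cand)))
        base = max(data, key=lambda d: d[1])[0]   # keep best policy so far

    # Approximate the value function with a small network (weights + biases).
    value_net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.SGD(value_net.parameters(), lr=1e-2)
    x = torch.tensor([p for p, _ in data], dtype=torch.float32)
    y = torch.tensor([[r] for _, r in data], dtype=torch.float32)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(value_net(x), y)
        loss.backward()   # backpropagate the gradient-descent output...
        opt.step()        # ...to adjust the weights and biases
    return base, value_net
```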
-
Publication Number: US20190080257A1
Publication Date: 2019-03-14
Application Number: US15699860
Filing Date: 2017-09-08
Applicant: Intel Corporation
Inventor: LI CHEN , ERDEM AKTAS
Abstract: One embodiment provides a system including a processor, a storage device, training logic, and runtime prediction logic to develop a model that enables improved checkpointing. The training logic trains the model using simulated or known data to predict the size of the changelog needed for checkpointing. The size of the changelog is correlated to user type and timespan (a checkpoint tracking changes made over a full week is likely larger than one tracking changes made over a single day, and some types of users make more changes than others). Thus, the training logic utilizes sample data corresponding to various user types and timespans to train and validate the model for various combinations. Once the model is trained, the training logic may send the trained model to the runtime prediction logic for use during operation of the system. During operation, the runtime prediction logic uses the model to predict the size of a reserved area where the changelog will be stored. The runtime prediction logic also monitors actual use of the reserved area over time (e.g., tracks the size of the changelog as it grows) and compares the changelog size to the predictions from the model. The runtime prediction logic revises the model as needed based on the actual use. Thus, the system improves checkpointing by reducing wasted space.
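A minimal sketch of one way such a size-prediction model could look, assuming a linear regression over a one-hot user type plus a timespan feature; the class and method names are illustrative, not from the patent.

```python
import numpy as np

class ChangelogSizePredictor:
    # Hypothetical sketch: predicts the reserved-area size for a checkpoint
    # changelog from user type and timespan, per the abstract's description.

    def __init__(self, n_user_types):
        self.n_user_types = n_user_types
        self.w = None

    def _features(self, user_type, days):
        f = np.zeros(self.n_user_types + 1)
        f[user_type] = 1.0   # one-hot user type
        f[-1] = days         # timespan in days
        return f

    def train(self, samples):
        # samples: iterable of (user_type, days, observed_changelog_bytes).
        X = np.array([self._features(u, d) for u, d, _ in samples])
        y = np.array([s for _, _, s in samples])
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, user_type, days):
        return float(self._features(user_type, days) @ self.w)

    def revise(self, user_type, days, actual_bytes, history):
        # Runtime feedback: fold the observed changelog size back into the
        # training data and refit, matching the abstract's revision step.
        history.append((user_type, days, actual_bytes))
        self.train(history)
```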
-
Publication Number: US20190005386A1
Publication Date: 2019-01-03
Application Number: US15640470
Filing Date: 2017-07-01
Applicant: INTEL CORPORATION
Inventor: LI CHEN , RAVI L. SAHITA
Abstract: Various embodiments are generally directed to techniques for training deep neural networks, such as with an iterative approach, for instance. Some embodiments are particularly directed to a deep neural network (DNN) training system that generates a hardened DNN by iteratively training DNNs with images that were misclassified by previous iterations of the DNN. One or more embodiments, for example, may include logic to generate an adversarial image that is misclassified by a first DNN that was previously trained with a set of sample images. In some embodiments, the logic may determine a second training set that includes the adversarial image that was misclassified by the first DNN and the first training set of one or more sample images. The second training set may be used to train a second DNN. In various embodiments, the above process may be repeated for a predetermined number of iterations to produce a hardened DNN.
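This record is the parent application of US20210081793A1 above and shares its abstract verbatim. As a usage note for the hardening sketch given there, the toy driver below shows how `harden` might be invoked; the random 8x8 "images" and two-layer model are placeholders for illustration, not the patent's training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Reuses fgsm_adversarial/harden from the sketch under US20210081793A1 above.
def make_model():
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

def train_fn(model, images, labels, epochs=50, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(images), labels).backward()
        opt.step()
    return model

images = torch.rand(256, 1, 8, 8)            # stand-in sample images
labels = torch.randint(0, 10, (256,))        # stand-in class labels
hardened = harden(make_model, train_fn, images, labels, iterations=3)
```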