Memory Efficient Scalable Deep Learning with Model Parallelization

    Publication No.: US20170116520A1

    Publication Date: 2017-04-27

    Application No.: US15271589

    Filing Date: 2016-09-21

Abstract: Methods and systems for training a neural network include sampling multiple local sub-networks from a global neural network. Each local sub-network includes a subset of neurons from each layer of the global neural network. The local sub-networks are trained at respective local processing devices to produce trained local parameters. The trained local parameters from each local sub-network are averaged to produce trained global parameters.
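The abstract describes sampling sub-networks (a subset of neurons per layer) for independent local training, then averaging the trained local parameters back into the global model. A minimal NumPy sketch of that sampling-and-averaging loop is shown below; the function names, the dense weight-matrix representation, and the per-entry averaging rule are illustrative assumptions, not the patent's specified implementation.

```python
import numpy as np


def sample_subnetwork(global_weights, keep_fraction, rng):
    # Sample a local sub-network: keep a random subset of output neurons
    # (rows of each weight matrix) in every layer. The input side of each
    # layer is restricted to the neurons kept in the previous layer.
    sub_weights, masks = [], []
    prev_idx = None
    for W in global_weights:
        n_out = W.shape[0]
        k = max(1, int(n_out * keep_fraction))
        idx = np.sort(rng.choice(n_out, size=k, replace=False))
        cols = prev_idx if prev_idx is not None else np.arange(W.shape[1])
        sub_weights.append(W[np.ix_(idx, cols)])
        masks.append((idx, cols))
        prev_idx = idx
    return sub_weights, masks


def average_into_global(global_weights, trained_subs):
    # Average trained local parameters back into the global network.
    # Each global entry is the mean over the sub-networks that touched it;
    # entries no sub-network sampled keep their original global value.
    sums = [np.zeros_like(W) for W in global_weights]
    counts = [np.zeros_like(W) for W in global_weights]
    for sub_weights, masks in trained_subs:
        for layer, (Ws, (idx, cols)) in enumerate(zip(sub_weights, masks)):
            sums[layer][np.ix_(idx, cols)] += Ws
            counts[layer][np.ix_(idx, cols)] += 1
    return [np.where(c > 0, s / np.maximum(c, 1), g)
            for g, s, c in zip(global_weights, sums, counts)]
```

In a real deployment each `sample_subnetwork` result would be trained by gradient descent on a separate device; here the "training" step is left abstract, and only the sampling and parameter-averaging bookkeeping is sketched.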
