Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka∗ Physics & Informatics Laboratories NTT Research, Inc. Department of Applied Physics Stanford University
Daniel Kunin∗ Institute for Computational and Mathematical Engineering Stanford University
Daniel L. K. Yamins Department of Psychology Department of Computer Science Stanford University
Surya Ganguli Department of Applied Physics Stanford University
Abstract
Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory, and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. T
