[Paper Translation] Pruning Neural Networks Without Any Data by Iteratively Conserving Synaptic Flow


Original paper: arXiv:2006.05467 (v3)


Pruning neural networks without any data by iteratively conserving synaptic flow


Hidenori Tanaka∗ Physics & Informatics Laboratories NTT Research, Inc. Department of Applied Physics Stanford University


Daniel Kunin∗ Institute for Computational and Mathematical Engineering Stanford University


Daniel L. K. Yamins Department of Psychology Department of Computer Science Stanford University


Surya Ganguli Department of Applied Physics Stanford University


Abstract


Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization.
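The abstract is truncated at this point, but the method the title names scores each weight by the "synaptic flow" it carries: take the network with absolute-valued parameters, feed it an all-ones input, sum the output into a scalar R, and score each weight w by |w| · ∂R/∂|w|, pruning the lowest-scoring weights over many iterative rounds rather than all at once. The following is a minimal NumPy sketch of that idea for bias-free dense linear layers, where the forward and backward products of the absolute weight matrices give the score in closed form; the function names, the exponential sparsity schedule, and the round count are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def synflow_scores(weights):
    """Synaptic-flow saliency for a stack of dense layers W_1, ..., W_L.

    With absolute-valued parameters and an all-ones input, the scalar
    objective is R = 1^T (|W_L| ... |W_1|) 1, and each weight's score is
    |w| * dR/d|w|, computed here without autograd via forward/backward
    products of the absolute weight matrices.
    """
    abs_w = [np.abs(w) for w in weights]
    # fwd[i]: activation entering layer i on the all-ones input
    fwd = [np.ones(abs_w[0].shape[1])]
    for w in abs_w:
        fwd.append(w @ fwd[-1])
    # bwd[i]: gradient of R with respect to the output of layer i
    bwd = [np.ones(abs_w[-1].shape[0])]
    for w in reversed(abs_w[1:]):
        bwd.append(w.T @ bwd[-1])
    bwd.reverse()
    # score for layer i: |W_i| elementwise-times outer(bwd[i], fwd[i])
    return [abs_w[i] * np.outer(bwd[i], fwd[i]) for i in range(len(abs_w))]

def synflow_prune(weights, sparsity, rounds=100):
    """Iteratively prune to the given global fraction of weights kept,
    following an exponential schedule (an assumption of this sketch)."""
    masks = [np.ones_like(w) for w in weights]
    for r in range(1, rounds + 1):
        keep_frac = sparsity ** (r / rounds)
        scores = synflow_scores([w * m for w, m in zip(weights, masks)])
        flat = np.concatenate([s.ravel() for s in scores])
        k = int(round(keep_frac * flat.size))
        thresh = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
        masks = [(s >= thresh).astype(float) for s in scores]
    return masks
```

Because the all-ones input and absolute-valued parameters make R data-independent, the scores require no training examples, which is the "without any data" claim in the title; iterating the prune (rather than removing everything in one shot) is what the title's "iteratively conserving" refers to.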
