
SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning

Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, David Reeb

Proc. of the Tenth International Conference on Learning Representations (ICLR), pp. 1–24, 2022.


Abstract

Pruning neural networks reduces inference time and memory costs. On standard hardware, these benefits are especially prominent if coarse-grained structures, such as feature maps, are pruned. We devise two novel saliency-based methods for second-order structured pruning (SOSP) which include correlations among all structures and layers. Our main method, SOSP-H, employs an innovative second-order approximation that enables saliency evaluations via fast Hessian-vector products. SOSP-H thereby scales like a first-order method despite taking the full Hessian into account. We validate SOSP-H by comparing it to our second method, SOSP-I, which uses a well-established Hessian approximation, and to numerous state-of-the-art methods. While SOSP-H performs on par with or better than these methods in terms of accuracy, it has clear advantages in scalability and efficiency. This allows us to scale SOSP-H to large-scale vision tasks, even though it captures correlations across all layers of the network. To underscore the global nature of our pruning methods, we evaluate their performance not only by removing structures from a pretrained network, but also by detecting architectural bottlenecks. We show that our algorithms allow us to systematically reveal architectural bottlenecks, which we then remove to further increase the accuracy of the networks.
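The computational primitive behind SOSP-H's first-order-like scaling is the Hessian-vector product, which can be evaluated by double backpropagation without ever forming the full Hessian. Below is a minimal sketch of that primitive in PyTorch; it is illustrative only and not the authors' implementation, and the model, data, and direction vector v are hypothetical stand-ins.

import torch

# Minimal sketch (not the SOSP code): compute a Hessian-vector product H v
# via double backprop. Model, data, and the direction vector v are
# illustrative stand-ins.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = [p for p in model.parameters() if p.requires_grad]
# First backward pass: keep the graph so we can differentiate the gradient.
grads = torch.autograd.grad(loss, params, create_graph=True)

# Direction vector v (here random, one tensor per parameter).
v = [torch.randn_like(p) for p in params]

# Form the scalar g^T v, then differentiate again to obtain H v,
# at roughly the cost of one extra backward pass.
gv = sum((g * vi).sum() for g, vi in zip(grads, v))
hv = torch.autograd.grad(gv, params)  # tuple of tensors, one per parameter

Each such product costs about as much as a gradient evaluation, which is why a saliency built from Hessian-vector products can capture correlations across all layers while scaling like a first-order method.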

BibTeX

@inproceedings{nonnenmacher22_iclr,
  title     = {SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning},
  author    = {Nonnenmacher, Manuel and Pfeil, Thomas and Steinwart, Ingo and Reeb, David},
  year      = {2022},
  booktitle = {Proc. of the Tenth International Conference on Learning Representations (ICLR)},
  pages     = {1--24}
}