Model Compression

Pruning

Pruning compresses a neural network by removing redundant parameters or whole structures, such as convolutional filters with small weight norms, so the model becomes smaller and cheaper to run at inference time. Introduced by Li et al. in Pruning Filters for Efficient ConvNets.
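
Li et al. score each filter in a convolutional layer by the L1 norm of its weights and drop the lowest-scoring ones. Below is a minimal sketch of that idea for a single standalone PyTorch Conv2d layer; the helper names (l1_filter_scores, prune_filters) and the 30% pruning ratio are illustrative assumptions, and a full implementation would also have to shrink the input channels of the following layer and fine-tune the pruned network.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Weights have shape (out_channels, in_channels, kH, kW);
    # sum absolute values over every axis except the filter axis.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_filters(conv: nn.Conv2d, prune_ratio: float = 0.3) -> nn.Conv2d:
    # Keep the filters with the largest L1 norms and rebuild a smaller layer.
    scores = l1_filter_scores(conv)
    n_keep = max(1, int(conv.out_channels * (1.0 - prune_ratio)))
    keep_idx = torch.argsort(scores, descending=True)[:n_keep]

    pruned = nn.Conv2d(
        in_channels=conv.in_channels,
        out_channels=n_keep,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        bias=conv.bias is not None,
    )
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep_idx])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep_idx])
    return pruned

# Example: remove 30% of the filters from one convolution.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_filters(conv, prune_ratio=0.3)
print(tuple(conv.weight.shape), "->", tuple(smaller.weight.shape))
# (128, 64, 3, 3) -> (89, 64, 3, 3)
```

Pruning whole filters, rather than zeroing individual weights, leaves dense but smaller tensors, so the savings show up on standard hardware without sparse-kernel support.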


Tasks


Task                       Papers   Share
Network Pruning                44   7.91%
Quantization                   38   6.83%
Model Compression              33   5.94%
Language Modelling             31   5.58%
Image Classification           25   4.50%
Federated Learning             22   3.96%
Computational Efficiency       17   3.06%
Large Language Model           10   1.80%
Question Answering             10   1.80%

Categories

Model Compression