Omer Leibovitch, M.Sc. Thesis Seminar
For password to lecture, please contact: email@example.com
A butterfly network consists of logarithmically many layers, each with a linear number of pre-specified nonzero weights. We propose to replace a dense linear layer in any neural network with an architecture based on the butterfly network. The proposed architecture reduces the quadratic number of weights required by a standard dense layer to nearly linear, with little compromise in the expressiveness of the resulting operator. In a wide variety of experiments, including supervised prediction on both NLP and vision data, we show that this approach not only matches and often outperforms well-known existing architectures, but also offers faster training and prediction in deployment.
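To make the structure concrete, below is a minimal NumPy sketch of a butterfly transform in the FFT-style pattern the abstract describes: log2(n) layers, each pairing coordinates at a fixed stride and mixing each pair with a learnable 2x2 block, for roughly 2n log2(n) parameters instead of the n^2 of a dense layer. The function names and the 2x2-block parameterization are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def butterfly_layer(x, W, stride):
    # x: input vector of length n (a power of two).
    # W: array of shape (n//2, 2, 2), one learnable 2x2 block per index pair.
    # Pairs index i with index i + stride inside each block of size 2*stride.
    n = x.shape[0]
    y = np.empty_like(x)
    pair = 0
    for start in range(0, n, 2 * stride):
        for i in range(start, start + stride):
            a, b = x[i], x[i + stride]
            w = W[pair]
            y[i] = w[0, 0] * a + w[0, 1] * b
            y[i + stride] = w[1, 0] * a + w[1, 1] * b
            pair += 1
    return y

def butterfly_transform(x, weights):
    # Applies log2(n) butterfly layers with doubling stride.
    # Each layer holds n/2 blocks of 4 weights, so the total parameter
    # count is 2 * n * log2(n) -- nearly linear in n, versus n^2 dense.
    stride = 1
    for W in weights:
        x = butterfly_layer(x, W, stride)
        stride *= 2
    return x
```

As a sanity check, setting every 2x2 block to [[1, 1], [1, -1]] makes this pattern compute the (unnormalized) Walsh-Hadamard transform, which illustrates that the sparse layered structure can still express useful global linear maps.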
Theoretical results presented in the paper explain why training speed and outcome are not compromised by our proposed approach.