Large pretrained transformer models have revolutionized modern AI applications with their state-of-the-art performance in natural language processing (NLP). However, their substantial parameter count poses challenges for real-world deployment. To address this, researchers often reduce model size by pruning parameters based on their magnitude or sensitivity. Previous research has demonstrated the limitations of magnitude pruning, especially in the context of transfer learning for modern NLP tasks. In this paper, we introduce a new magnitude-based pruning algorithm called mixture Gaussian prior pruning (MGPP), which employs a mixture Gaussian prior for regularization. MGPP prunes non-expressive weights under the guidance of the mixture Gaussian prior, aiming to retain the model's expressive capability. Extensive evaluations across various NLP tasks, including natural language understanding, question answering, and natural language generation, demonstrate the superiority of MGPP over existing pruning methods, particularly in high-sparsity settings. Additionally, we provide a theoretical justification for the consistency of the sparse transformer, shedding light on the effectiveness of the proposed pruning method.
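To make the idea concrete, the sketch below illustrates the kind of regularize-then-threshold procedure the abstract describes, assuming a two-component (spike-and-slab) Gaussian mixture prior on the weights: the negative log prior is added to the task loss during fine-tuning, and weights whose magnitude falls below the threshold at which the near-zero "spike" component dominates are set to zero. The function names, the hyperparameters (`lam`, `sigma0`, `sigma1`), and the pruning rule are illustrative assumptions, not the exact specification of MGPP.

```python
import math
import torch

def mixture_gaussian_neg_log_prior(w, lam=0.5, sigma0=1e-3, sigma1=1.0):
    """Negative log of p(w) = lam*N(0, sigma0^2) + (1-lam)*N(0, sigma1^2),
    summed over all entries of the weight tensor w (illustrative regularizer)."""
    log_spike = (math.log(lam) - math.log(sigma0)
                 - 0.5 * math.log(2 * math.pi) - 0.5 * (w / sigma0) ** 2)
    log_slab = (math.log(1.0 - lam) - math.log(sigma1)
                - 0.5 * math.log(2 * math.pi) - 0.5 * (w / sigma1) ** 2)
    # Element-wise log of the two-component mixture, computed stably.
    return -torch.logsumexp(torch.stack([log_spike, log_slab]), dim=0).sum()

def spike_threshold(lam=0.5, sigma0=1e-3, sigma1=1.0):
    """|w| below which the spike (near-zero) component has higher posterior
    responsibility than the slab; such weights are treated as non-expressive."""
    a = 0.5 * (1.0 / sigma0 ** 2 - 1.0 / sigma1 ** 2)
    b = math.log(lam / (1.0 - lam)) + math.log(sigma1 / sigma0)
    return math.sqrt(b / a)

@torch.no_grad()
def prune_model(model, lam=0.5, sigma0=1e-3, sigma1=1.0):
    """Zero out all weights whose magnitude falls under the mixture-derived threshold."""
    t = spike_threshold(lam, sigma0, sigma1)
    for p in model.parameters():
        p.masked_fill_(p.abs() < t, 0.0)

# During fine-tuning, the prior would enter the objective as an extra penalty, e.g.
#   loss = task_loss + mixture_gaussian_neg_log_prior(weights) / num_training_examples
```

In this sketch, sparsity is controlled through the mixture weight and the ratio of the two variances rather than through a fixed pruning ratio.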
In this paper, we consider a functional varying coefficient model in the presence of a time-invariant covariate for sparse longitudinal data contaminated with measurement error. We propose a regularization method to estimate the slope function based on a reproducing kernel Hilbert space approach. As we will see, our procedure is easy to implement. Our simulation results show that the procedure performs well, especially when either the sampling frequency or the sample size increases. The method is illustrated by an analysis of a longitudinal CD4+ count dataset from an HIV study.
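Since the abstract does not spell out the model or the estimator, the display below is only a generic point of reference: a varying coefficient model with a time-invariant covariate \(Z_i\), observed sparsely and with error over subject-specific time points, together with an RKHS-penalized criterion for the slope function. The exact model form, the way measurement error enters, and the penalty are assumptions made here for illustration.
\[
Y_i(t_{ij}) = \beta_0(t_{ij}) + \beta_1(t_{ij})\, Z_i + \varepsilon_{ij},
\qquad j = 1, \dots, n_i, \quad i = 1, \dots, n,
\]
where \(\varepsilon_{ij}\) absorbs both within-subject variation and measurement error, and the slope function could be estimated by a penalized least-squares criterion of the form
\[
\hat{\beta}_1 = \arg\min_{\beta \in \mathcal{H}_K}\;
\frac{1}{n}\sum_{i=1}^{n}\frac{1}{n_i}\sum_{j=1}^{n_i}
\bigl\{ Y_i(t_{ij}) - \beta_0(t_{ij}) - \beta(t_{ij})\, Z_i \bigr\}^2
+ \lambda \,\|\beta\|_{\mathcal{H}_K}^2,
\]
with \(\mathcal{H}_K\) a reproducing kernel Hilbert space and \(\lambda\) a regularization parameter.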