
Paper page - Your Transformer is Secretly Linear

May 25, 2024 - huggingface.co
This research paper reveals a novel, near-linear property of transformer decoders, including GPT, LLaMA, OPT, BLOOM, and others. The study uncovers a near-perfect linear relationship in the embedding transformations between sequential layers, with a Procrustes similarity score of 0.99. This linearity decreases, however, once the residual component is removed, because the output norm of each transformer layer is consistently low. The experiments further show that removing or linearly approximating some of the most linear transformer blocks does not significantly affect the loss or model performance.
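To illustrate what such a layer-to-layer linearity measurement might look like, below is a minimal sketch (not the authors' code) that fits a least-squares linear map between the centered, norm-scaled embeddings of two consecutive layers and scores how much of the transformation it explains; the exact normalization and scoring used in the paper may differ.

```python
import torch

def linearity_score(X: torch.Tensor, Y: torch.Tensor) -> float:
    """Procrustes-style linearity score between the embeddings of two
    consecutive layers. X, Y have shape (n_tokens, hidden_dim); a score
    of 1.0 means Y is an exact linear function of X."""
    # Center and Frobenius-normalise both embedding matrices.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    X = X / X.norm()
    Y = Y / Y.norm()
    # Best least-squares linear map A with X @ A ≈ Y.
    A = torch.linalg.lstsq(X, Y).solution
    # Unexplained (non-linear) part of the transformation.
    residual = torch.norm(X @ A - Y) ** 2
    return float(1.0 - residual)

# A score close to 0.99 would indicate near-linear layer-to-layer transitions.
```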

In addition, the researchers introduced a cosine-similarity-based regularization in pretraining experiments on smaller models, aiming to reduce layer linearity. This regularization improved performance on benchmarks such as TinyStories and SuperGLUE while successfully decreasing the models' linearity. The findings challenge the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
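The summary does not spell out the exact form of the regularizer, so the following is a hypothetical sketch of a cosine-similarity-based penalty on consecutive hidden states, assuming PyTorch and a model that exposes per-layer hidden states (e.g. output_hidden_states=True in Hugging Face Transformers); the term and weighting in the paper may be defined differently.

```python
import torch
import torch.nn.functional as F

def cosine_layer_regularizer(hidden_states: list[torch.Tensor],
                             weight: float = 0.1) -> torch.Tensor:
    """Hypothetical regulariser: penalises high token-wise cosine
    similarity between the hidden states of consecutive layers, nudging
    the model away from near-identity (near-linear) layer transitions.

    hidden_states: one (batch, seq_len, hidden_dim) tensor per layer,
    e.g. from a forward pass with output_hidden_states=True.
    """
    penalty = hidden_states[0].new_zeros(())
    for prev, curr in zip(hidden_states[:-1], hidden_states[1:]):
        penalty = penalty + F.cosine_similarity(prev, curr, dim=-1).mean()
    return weight * penalty / (len(hidden_states) - 1)

# During pretraining the term would simply be added to the LM loss:
# loss = lm_loss + cosine_layer_regularizer(outputs.hidden_states)
```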

Key takeaways:

  • This study reveals a novel linear property specific to transformer decoders, including models such as GPT, LLaMA, OPT, and BLOOM.
  • The researchers found a near-perfect linear relationship in embedding transformations between sequential layers of these models.
  • Experiments showed that removing or linearly approximating some of the most linear transformer blocks does not significantly affect the loss or model performance (a sketch of such a replacement appears after this list).
  • The researchers introduced a cosine-similarity-based regularization in pretraining experiments on smaller models, which improved performance metrics and successfully decreased the linearity of the models.
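
To make the block-replacement experiment concrete, here is a hypothetical sketch of swapping one decoder block for a single linear map fitted by least squares on cached block inputs and outputs. Module names such as model.transformer.h[k] are GPT-2-style assumptions, and real blocks take extra arguments (attention masks, caches), so this is conceptual rather than a drop-in implementation of the paper's procedure.

```python
import torch
from torch import nn

class LinearBlockApproximation(nn.Module):
    """Hypothetical replacement for one decoder block: a single linear
    map fitted by least squares to the block's cached inputs/outputs."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim, bias=False)

    @torch.no_grad()
    def fit(self, block_inputs: torch.Tensor, block_outputs: torch.Tensor):
        # block_inputs / block_outputs: (n_tokens, hidden_dim) activations
        # collected from a small calibration set.
        A = torch.linalg.lstsq(block_inputs, block_outputs).solution
        self.proj.weight.copy_(A.T)  # nn.Linear computes x @ W.T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

# approx = LinearBlockApproximation(hidden_dim)
# approx.fit(cached_inputs, cached_outputs)
# model.transformer.h[k] = approx  # GPT-2-style block list (assumption)
```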