Your Transformer is Secretly Linear

May 23, 2024 - news.bensbites.com
The paper presents a novel linear property of transformer decoders such as GPT, LLaMA, OPT, and BLOOM. Analyzing embedding transformations between sequential layers, the authors discover a near-perfect linear relationship. However, this linearity drops when the residual component is removed, which the authors attribute to the consistently low output norm of the transformer layer. The study shows that removing or linearly approximating some of the most linear blocks does not significantly affect the loss or model performance.
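To make the central measurement concrete, the sketch below fits a least-squares linear map between embeddings produced by two consecutive layers and reports an R²-style score. The function name, the normalization, and the example data are illustrative assumptions, not the paper's exact linearity metric.

```python
import torch

def linearity_score(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """R^2-style linearity score between embeddings of consecutive layers.

    x, y: (num_tokens, hidden_dim) embeddings from layer k and layer k+1.
    Returns a value near 1.0 when y is (almost) a linear function of x.
    Illustrative metric only; the paper's precise procedure may differ.
    """
    # Centre and scale so the score is invariant to shifts and overall scale.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    x = x / x.norm()
    y = y / y.norm()
    # Least-squares linear map A minimising ||x @ A - y||_F.
    a = torch.linalg.lstsq(x, y).solution
    residual = (x @ a - y).pow(2).sum()
    return 1.0 - residual  # ||y||_F = 1 after normalisation


# Example: embeddings related by a (noisy) linear map score close to 1.
x = torch.randn(512, 256)
y = x @ torch.randn(256, 256) + 0.01 * torch.randn(512, 256)
print(linearity_score(x, y))
```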

In pretraining experiments on smaller models, the authors introduce a cosine-similarity-based regularization aimed at reducing layer linearity. This regularization improves performance on benchmarks such as Tiny Stories and SuperGLUE while successfully decreasing the models' linearity. The study challenges the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
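One plausible form of such a regularizer is sketched below, under the assumption that the penalty acts on token-wise cosine similarity between consecutive layers' embeddings; the paper's exact formulation, layer pairing, and weight may differ, and the names `cosine_linearity_penalty` and `all_layer_outputs` are hypothetical.

```python
import torch
import torch.nn.functional as F

def cosine_linearity_penalty(hidden_states, weight=0.1):
    """Cosine-similarity-based regularizer (sketch under assumptions).

    hidden_states: list of (num_tokens, hidden_dim) tensors, one per layer.
    Penalises high token-wise cosine similarity between consecutive layers'
    embeddings, nudging each block to transform its input less linearly.
    """
    penalty = 0.0
    for h_prev, h_next in zip(hidden_states[:-1], hidden_states[1:]):
        # Mean cosine similarity between successive layer outputs per token.
        penalty = penalty + F.cosine_similarity(h_prev, h_next, dim=-1).mean()
    return weight * penalty / (len(hidden_states) - 1)


# Usage during pretraining (placeholder names):
# loss = lm_loss + cosine_linearity_penalty(all_layer_outputs)
```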

Key takeaways:

  • The paper reveals a novel linear characteristic exclusive to transformer decoders, including GPT, LLaMA, OPT, and BLOOM.
  • A near-perfect linear relationship was found when analyzing embedding transformations between sequential layers.
  • Experiments showed that removing or linearly approximating some of the most linear blocks of transformers does not significantly affect the loss or model performance.
  • A cosine-similarity-based regularization was introduced in pretraining experiments on smaller models, which improved performance metrics and successfully decreased the linearity of the models.