In pretraining experiments on smaller models, the authors introduce a cosine-similarity-based regularization aimed at reducing layer linearity. This regularization improves performance on benchmarks such as TinyStories and SuperGLUE while successfully decreasing the models' linearity. The study challenges the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
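As a rough illustration of how such a term might be wired into pretraining, here is a minimal PyTorch sketch, assuming the regularizer penalizes high cosine similarity between the hidden states of consecutive decoder layers; the function and argument names are illustrative, not taken from the paper:

```python
# Minimal sketch of a cosine-similarity-based layer regularizer
# (illustrative only; not the authors' implementation).
import torch
import torch.nn.functional as F

def cosine_layer_regularizer(hidden_states: list[torch.Tensor],
                             reg_weight: float = 0.1) -> torch.Tensor:
    """hidden_states: per-layer activations, each of shape (batch, seq, dim)."""
    penalty = hidden_states[0].new_zeros(())
    for prev, curr in zip(hidden_states[:-1], hidden_states[1:]):
        # Mean cosine similarity between corresponding token embeddings of
        # two sequential layers; high similarity indicates a near-linear block.
        penalty = penalty + F.cosine_similarity(prev, curr, dim=-1).mean()
    return reg_weight * penalty / (len(hidden_states) - 1)

# During pretraining, the term would be added to the language-modeling loss,
# e.g. loss = lm_loss + cosine_layer_regularizer(all_hidden_states).
```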
Key takeaways:
- The paper reveals a novel linear property specific to transformer decoders, observed across models such as GPT, LLaMA, OPT, and BLOOM.
- A near-perfect linear relationship was found when analyzing the embedding transformations between sequential layers; one way such layer-to-layer linearity can be quantified is sketched in the first code example after this list.
- Experiments showed that removing or linearly approximating some of the most linear transformer blocks does not significantly affect the loss or model performance (see the second sketch after this list).
- A cosine-similarity-based regularization, introduced in pretraining experiments on smaller models, improved performance metrics while successfully decreasing the models' linearity (sketched above).
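As a rough illustration of the layer-to-layer linearity measurement mentioned above, the sketch below fits a linear map between the centered, norm-scaled embeddings of two sequential layers and reports how much of the transformation that map explains. It conveys the general idea rather than the paper's exact linearity score:

```python
# Sketch of a layer-to-layer linearity score: R^2 of the best linear map
# from one layer's embeddings to the next (illustrative, not the paper's
# exact metric).
import torch

def linearity_score(x: torch.Tensor, y: torch.Tensor) -> float:
    """x, y: token embeddings from two sequential layers, shape (n_tokens, dim)."""
    # Center and scale so the score ignores offsets and overall magnitude.
    x = x - x.mean(dim=0)
    y = y - y.mean(dim=0)
    x = x / x.norm()
    y = y / y.norm()
    # Best linear map A minimizing ||x @ A - y||_F, found via least squares.
    a = torch.linalg.lstsq(x, y).solution
    residual = (x @ a - y).norm() ** 2
    return 1.0 - residual.item()  # close to 1.0 => nearly linear transformation
```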
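The second sketch illustrates one way a highly linear block could be linearly approximated: fit an affine map from the block's cached inputs to its outputs on a small calibration set and swap it in for the block. This is an assumption-laden illustration of the idea, not the authors' procedure; `block_in` and `block_out` are hypothetical cached activations:

```python
# Sketch of replacing a near-linear decoder block with a fitted affine map
# (illustrative only; not the paper's procedure).
import torch

def fit_linear_replacement(block_in: torch.Tensor,
                           block_out: torch.Tensor) -> torch.nn.Linear:
    """block_in, block_out: cached activations of shape (n_tokens, dim)."""
    n, dim = block_in.shape
    # Append a constant column so the least-squares fit includes a bias term.
    x = torch.cat([block_in, block_in.new_ones(n, 1)], dim=1)
    w = torch.linalg.lstsq(x, block_out).solution  # shape (dim + 1, dim)
    layer = torch.nn.Linear(dim, dim)
    with torch.no_grad():
        layer.weight.copy_(w[:dim].T)  # nn.Linear computes x @ weight.T + bias
        layer.bias.copy_(w[dim])
    return layer
```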