In addition, the researchers introduced a cosine-similarity-based regularization in pretraining experiments on smaller models, aimed at reducing layer-to-layer linearity. This regularization both decreased the models' linearity and improved performance on benchmarks such as TinyStories and SuperGLUE. The findings challenge the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
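The exact formulation of the regularizer is not reproduced here, but the sketch below illustrates what a cosine-similarity-based penalty between consecutive layer embeddings might look like in PyTorch. The penalty weight and the choice to compare each token's embedding before and after a decoder block are assumptions for illustration, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def cosine_linearity_penalty(hidden_states, weight=0.1):
    """Hypothetical auxiliary loss discouraging high cosine similarity
    between the embeddings entering and leaving each decoder block.

    hidden_states: list of [batch, seq, dim] tensors, one per layer.
    """
    penalty = hidden_states[0].new_zeros(())
    for h_prev, h_next in zip(hidden_states[:-1], hidden_states[1:]):
        # Mean cosine similarity between the same token's embedding
        # before and after a block; high values indicate near-linear behaviour.
        cos = F.cosine_similarity(h_prev, h_next, dim=-1)  # [batch, seq]
        penalty = penalty + cos.mean()
    return weight * penalty / (len(hidden_states) - 1)

# During pretraining, the total objective could then be (assumed form):
#   loss = cross_entropy_loss + cosine_linearity_penalty(all_hidden_states)
```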
Key takeaways:
- This study reveals a novel linear characteristic specific to transformer decoders, including models such as GPT, LLaMA, OPT, and BLOOM.
- The researchers found a near-perfect linear relationship in the embedding transformations between consecutive layers of these models (see the probing sketch after this list).
- Experiments showed that removing or linearly approximating some of the most linear blocks of transformers does not significantly affect the loss or model performance.
- The researchers introduced a cosine-similarity-based regularization in pretraining experiments on smaller models, which improved performance metrics and successfully decreased the linearity of the models.
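For readers who want to probe this behaviour themselves, the sketch below shows one way to fit a linear map between the hidden states of consecutive decoder layers and score how linear a block is, and hence which blocks might be candidates for linear replacement. It assumes Hugging Face-style `hidden_states` outputs and uses an ordinary least-squares fit with an R²-style score; the paper's own linearity metric may be defined differently.

```python
import torch

def fit_linear_map(h_in, h_out):
    """Least-squares fit of h_out ≈ h_in @ A for flattened [N, dim] embeddings.

    Returns the fitted matrix A and an R²-style score, where 1.0 means the
    block acts as a perfectly linear map on this data.
    """
    A = torch.linalg.lstsq(h_in, h_out).solution   # [dim, dim]
    pred = h_in @ A
    ss_res = (h_out - pred).pow(2).sum()
    ss_tot = (h_out - h_out.mean(dim=0)).pow(2).sum()
    return A, 1.0 - ss_res / ss_tot

# Example usage (assumed API: output_hidden_states=True in transformers):
# hidden = model(input_ids, output_hidden_states=True).hidden_states
# h_in  = hidden[i].flatten(0, 1).float()      # embeddings entering block i
# h_out = hidden[i + 1].flatten(0, 1).float()  # embeddings leaving block i
# A, score = fit_linear_map(h_in, h_out)
```

A score close to 1 for a given block suggests it could be removed or replaced by the fitted linear map with little impact on the loss, in line with the experiments summarized above.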