Neuromorphic computers, which process many streams of information in parallel much as the human brain does, are being developed to reduce the energy consumed during learning. Programming these machines is a challenge, however, because their hardware and software are tightly intertwined. Optical computers, on the other hand, offer faster computation and lower energy loss because they transmit data with light particles (photons). Despite their promise, these new computing approaches will take time to develop and adopt. In the meantime, efforts are under way to make current models more energy-efficient, for example by using smaller, fine-tuned models that outperform larger ones in certain cases.
Key takeaways:
- The energy required to train artificial intelligence (AI) models, particularly large language models (LLMs) such as GPT-3, is becoming a concern because of its growing contribution to global warming. The computing power used for AI training has been doubling roughly every 3.4 months since 2012, which compounds to more than a tenfold increase per year.
- Alternatives to power-hungry graphics processing units (GPUs) are being explored to reduce the energy footprint of AI training. One such alternative is neuromorphic computing, which mimics the human brain's massively parallel, energy-efficient processing (a toy illustration of the spiking-neuron idea behind it appears after this list).
- Another emerging technology is optical computing, which uses light waves to transmit information. This approach is faster and loses less energy than conventional electronic computing. Lightmatter, a computer hardware company, is developing hybrid solutions that integrate optical components into silicon chips.
- While new computing approaches are promising, they will take time to develop and adopt. In the meantime, researchers are exploring ways to make current AI models more energy-efficient, such as using smaller, fine-tuned models that outperform much larger ones on specific tasks (a minimal fine-tuning sketch also follows this list).
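The following is a toy sketch, not drawn from the article, of the spiking-neuron computation that neuromorphic hardware typically implements: a leaky integrate-and-fire (LIF) neuron that only emits a spike when its accumulated input crosses a threshold. The `simulate_lif` helper and all parameter values are illustrative assumptions, and the model is simulated here in ordinary Python rather than on neuromorphic hardware.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron; return its voltage trace and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane voltage decays toward rest and integrates the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:          # Threshold crossing: the neuron emits a spike (an "event")
            spikes.append(t * dt)
            v = v_reset            # and resets its membrane voltage.
        voltages.append(v)
    return np.array(voltages), spikes

# Noisy, roughly constant input drive for 200 time steps (arbitrary units).
rng = np.random.default_rng(0)
current = 1.5 * np.ones(200) + 0.1 * rng.standard_normal(200)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes, first few at t = {spike_times[:5]}")
```

The energy argument is that the neuron does nothing between events; neuromorphic chips exploit that sparsity in hardware instead of clocking every unit on every step as a GPU does.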
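And here is a minimal sketch of the "smaller, fine-tuned model" idea: adapting a compact pretrained model to a single task instead of relying on a much larger general-purpose LLM. The choice of DistilBERT, the IMDB sentiment dataset, and all hyperparameters are assumptions for illustration, not a setup prescribed by the article; it assumes the Hugging Face `transformers` and `datasets` packages are installed.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# A compact pretrained model (~66M parameters) instead of a multi-billion-parameter LLM.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Example task: binary sentiment classification on the IMDB reviews dataset.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Modest training budget: one epoch over a small subset is often enough for a
# fine-tuned small model to do well on this single, narrow task.
args = TrainingArguments(
    output_dir="distilbert-imdb",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)

trainer.train()
print(trainer.evaluate())
```

A model of this size typically fine-tunes in minutes on a single GPU, which is where the energy savings relative to training or serving a much larger model come from.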