Together, these advances point toward more immersive visual experiences and complex AI systems running directly on consumer devices such as the iPhone and iPad. The new 3D modeling capability could open up virtual try-on, telepresence, and synthetic media, while the optimizations for deploying LLMs could let sophisticated AI assistants and chatbots run smoothly on mobile hardware.
Key takeaways:
- Apple has made significant strides in AI research, introducing new techniques for 3D avatars and efficient language model inference. These advancements could enable more immersive visual experiences and allow complex AI systems to run on consumer devices.
- The company's new method, HUGS (Human Gaussian Splats), can generate animated 3D avatars from short monocular videos and is up to 100 times faster to train and render than previous methods; the first sketch after this list illustrates the splatting idea it builds on. It could unlock new possibilities for virtual try-on, telepresence, and synthetic media.
- In a second research paper, Apple tackled the challenge of deploying large language models on devices with limited memory. The proposed system minimizes data transfer from flash storage into scarce DRAM during inference, speeding up inference by 4-5x on an Apple M1 Max CPU and by 20-25x on its GPU relative to naive loading; the second sketch below shows the core idea in simplified form.
- Taken together, the two papers demonstrate Apple's growing leadership in AI research and applications. The company's innovations could potentially take artificial intelligence to the next level, putting photorealistic digital avatars and powerful AI assistants on portable devices.
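
To make the avatar technique more concrete, here is a minimal sketch of the general Gaussian-splatting idea that HUGS builds on: a person or scene is represented as a cloud of 3D Gaussians, each carrying a position, scale, color, and opacity, which are alpha-composited along each camera ray. Everything below is illustrative Python, not Apple's implementation, which additionally rigs the Gaussians to an animatable human body model.

```python
import numpy as np

class Gaussian:
    """One splat: a soft blob of color in 3D space.

    Illustrative only; real implementations use full 3x3 covariances,
    view-dependent colors, and GPU rasterization.
    """
    def __init__(self, position, scale, color, opacity):
        self.position = np.asarray(position, dtype=float)  # 3D center
        self.scale = float(scale)                          # isotropic footprint
        self.color = np.asarray(color, dtype=float)        # RGB in [0, 1]
        self.opacity = float(opacity)                      # alpha in [0, 1]

def render_pixel(gaussians, ray_origin, ray_dir):
    """Alpha-composite splats along one (unit-length) camera ray."""
    # Sort by depth so nearer splats occlude farther ones.
    ordered = sorted(gaussians, key=lambda g: np.dot(g.position - ray_origin, ray_dir))
    color, transmittance = np.zeros(3), 1.0
    for g in ordered:
        offset = g.position - ray_origin
        # Perpendicular distance from the splat center to the ray.
        perp = offset - np.dot(offset, ray_dir) * ray_dir
        # Gaussian falloff gives a soft, differentiable footprint,
        # which is what makes splats trainable by gradient descent.
        alpha = g.opacity * np.exp(-0.5 * np.dot(perp, perp) / g.scale**2)
        color += transmittance * alpha * g.color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # ray is effectively opaque; stop early
            break
    return color

# Toy usage: a red splat in front of a blue one, viewed down the z-axis.
splats = [
    Gaussian([0.0, 0.0, 2.0], 0.3, [1.0, 0.0, 0.0], 0.8),
    Gaussian([0.1, 0.0, 3.0], 0.3, [0.0, 0.0, 1.0], 0.8),
]
print(render_pixel(splats, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

The differentiability of this rendering step is what lets the splats be optimized directly against video frames, and it is part of where the speed advantage over earlier neural-rendering approaches comes from.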
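
The second paper's core trick can likewise be sketched in a few lines. The snippet below is a simplified illustration, not Apple's system: it memory-maps a feed-forward weight matrix (standing in for flash storage) and copies into DRAM only the rows that a sparsity predictor, simulated here by random selection, expects to activate. The file name, shapes, and row selection are assumptions for the example; the actual paper adds techniques such as windowing and row-column bundling so that flash reads are large and sequential.

```python
import numpy as np

ROWS, COLS = 4096, 1024  # toy dimensions; real FFN layers are far larger

def build_weight_file(path="ffn_weights.bin"):
    """Write a dummy float32 weight matrix to disk to stand in for flash."""
    np.random.rand(ROWS, COLS).astype(np.float32).tofile(path)
    return path

def sparse_ffn_forward(path, x, active_rows):
    """Multiply the input by only the rows predicted to matter.

    Memory-mapping means untouched rows are never read into DRAM,
    which is the essence of reducing flash-to-DRAM traffic.
    """
    weights = np.memmap(path, dtype=np.float32, mode="r", shape=(ROWS, COLS))
    active = weights[active_rows]  # fancy indexing copies only these rows
    return active @ x              # partial matmul over the active neurons

path = build_weight_file()
x = np.random.rand(COLS).astype(np.float32)
# Pretend a predictor decided only ~5% of neurons fire for this token.
active_rows = np.sort(np.random.choice(ROWS, size=ROWS // 20, replace=False))
y = sparse_ffn_forward(path, x, active_rows)
print(y.shape)  # (204,): one output per active neuron
```

Reading a twentieth of the matrix per token instead of all of it is the kind of saving behind the reported speedups on DRAM-constrained devices.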