This episode of the AI + a16z podcast features Luma Chief Scientist Jiaming Song in conversation with a16z General Partner Anjney Midha about Jiaming’s career in the field of video models, culminating in the recent release of Luma’s Dream Machine 3D video model, which showcases an ability to reason about the world across multiple dimensions. Jiaming discusses the evolution of image and video models, his vision for the future of multimodal models, and why Dream Machine demonstrates emergent reasoning capabilities. According to Jiaming, the model was trained on a volume of high-quality video data that, if measured in relation to language data, would amount to hundreds of trillions of tokens.
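To get a rough sense of how raw video footage can translate into a token count of that magnitude, here is a back-of-envelope sketch. The corpus size, frame rate, and tokens-per-frame figures below are illustrative assumptions, not numbers from the episode or from Luma's actual training setup:

```python
# Back-of-envelope sketch: how video footage can scale to hundreds of
# trillions of tokens. All figures are assumptions for illustration only.

hours_of_video = 10_000_000   # assumed corpus size in hours
frames_per_second = 24        # assumed sampling rate
tokens_per_frame = 256        # assumed visual tokens per frame (e.g., a 16x16 patch grid)

seconds = hours_of_video * 3600
total_frames = seconds * frames_per_second
total_tokens = total_frames * tokens_per_frame

print(f"{total_tokens:.2e} visual tokens")  # ~2.2e14, i.e., hundreds of trillions
```

Even under conservative assumptions, the visual token count dwarfs the roughly tens of trillions of text tokens typically cited for high-quality language corpora, which is the contrast Jiaming draws below.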
Here’s a snippet from their discussion, where Jiaming explains the “bitter lesson” in the context of training generative models, and in the process sums up a key reason why Dream Machine can do what it does with context-rich video data:
“For many of the problems related to artificial intelligence, it is often more productive in the long run to use simpler methods but more compute, [rather] than trying to develop priors and then trying to leverage the priors so that you can use less compute.
“Cases [like] this first happened in language, where people were initially working on language understanding, trying to use grammar or semantic parsing, these kinds of techniques. But eventually these tasks began to be replaced by large language models. And a similar case is happening in the vision domain, as well . . . and now people have been using deep learning features for almost all the tasks. This is a clear demonstration of how using more compute and having less priors is good.
“But how does it work with language? Language by itself is also a human construct. Of course, it is a very good and highly compressed kind of knowledge, but it’s definitely a lot less data than what humans take in day to day from the real world . . .
“[And] it is a vastly smaller data set size than visual signals. And we are already almost exhausting the . . . high-quality language sources that we have in the world. The speed at which humans can produce language is definitely not enough to keep up with the demands of the scaling laws. So even if we have a world where we can scale up the compute infrastructure for that, we don’t really have the infrastructure to scale up the data efforts . . .
“Even though people would argue that the emergence of large language models is already evidence of the scaling law . . . against the rule-based methods in language understanding, we are arguing that language by itself is also a prior in the face of more of the richer data signal that is happening in the physical world.”