No Priors

No Priors Ep.61 | OpenAI's Sora Leaders Aditya Ramesh, Tim Brooks and Bill Peebles

AI-generated videos are not just leveled-up image generators; they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI's recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips up to a minute long. Sora team leads Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The model isn't yet available for public use, but the examples of its work are very impressive. The team believes we're still in the GPT-1 era of AI video models, and they are focused on a slow rollout: ensuring the model offers real value to users and, more importantly, that all possible safety measures are in place to prevent deepfakes and misinformation. They also discuss what they're learning from implementing diffusion transformers, why they believe video generation brings us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future.

Sarah Guo (host) · Tim Brooks (guest) · Bill Peebles (guest) · Aditya Ramesh (guest) · Elad Gil (host)
Apr 25, 2024 · 31m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 1:05

    Sora team Introduction

  2. 1:05 – 2:25

    Simulating the world with Sora

  3. 2:25 – 5:50

    Building the most valuable consumer product

  4. 5:50 – 8:41

    Alternative use cases and simulation capabilities

  5. 8:41 – 10:15

    Diffusion transformers explanation

  6. 10:15 – 13:08

    Scaling laws for video

  7. 13:08 – 15:30

    Applying end-to-end deep learning to video

  8. 15:30 – 17:08

    Tuning the visual aesthetic of Sora

  9. 17:08 – 20:12

    The road to “desktop Pixar” for everyone

  10. 20:12 – 22:34

    Safety for visual models

  11. 22:34 – 25:04

    Limitations of Sora

  12. 25:04 – 29:32

    Learning from how Sora is learning

  13. 29:32 – 31:24

    The biggest misconceptions about video models
