WHY THIS MATTERS IN BRIEF
I’ve been talking about generative video for almost a decade, and it’s now getting better, albeit slowly.
Runway, the generative Artificial Intelligence (AI) startup that co-created last year’s breakout text-to-image model Stable Diffusion, has now released an AI model that can transform existing videos into new ones by applying any style specified by a text prompt or reference image.
In a demo reel posted on its website, Runway shows how its software, called Gen-1, can turn clips of people on a street into claymation puppets, or books stacked on a table into a cityscape at night. Runway hopes that Gen-1 will do for video what text-to-image models like Stable Diffusion and Midjourney did for images.
Synthetic Content … it’s “here” but still has a long way to go
“We’ve seen a big explosion in image-generation models,” says Runway CEO and cofounder Cristóbal Valenzuela. “I truly believe that 2023 is going to be the year of video.”
Founded in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics, and the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.
In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. Getty is now taking legal action against Stability AI, claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission, and Runway has since ended the partnership to keep its distance.
Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-A-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. And because it transforms existing footage, it can also produce much longer videos than most previous models. The company says it will post technical details about Gen-1 on its website in the next few days.
Unlike Meta and Google, Runway has built its model with customers in mind.
“This is one of the first models to be developed really closely with a community of video makers,” says Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”
Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.
Last year’s explosion in generative AI was fuelled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.
“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.” He may well be right about the latter, but as for the former, I reckon that’s closer to 2030.