Runway, a New York-based AI video editing startup, has introduced its new Gen-1 model, which can visually transform existing videos into new ones via text prompts. Gen-1 follows Runway's launch of Stable Diffusion, an open-source image AI developed in partnership with Stability AI, LMU Munich, EleutherAI, and LAION.

A realistically filmed door on the left becomes a cartoon-style door on the right via text command. | Image: Runway

The company acknowledges that AI-edited videos are not yet on par with professionally edited ones, but predicts rapid progress in the field. “AI systems for image and video synthesis are becoming increasingly precise, realistic, and controllable,” the company states.

Person to superhero via input image. | Image: Runway

Regarding Gen-1's open-source status, Runway's Video Workflow Architect Ian Sansavera says the startup has not yet made a decision. Interested parties can sign up for a waitlist, and the accompanying research paper is to be published soon. Runway is likely to develop Gen-1 primarily for its own AI-powered video editing toolkit, which aims to simplify and automate the video editing process.

Loosely assembled notebooks become a skyline. | Image: Runway

Video: Runway

Runway was founded in early 2018 and has raised approximately $100 million from investors. It demonstrated the integration of Stable Diffusion into its toolkit in the fall of 2022. More information about Gen-1 is available on the project page.