Runway was founded in 2018 by Cristóbal Valenzuela, Alejandro Matamala, and Anastasis Germanidis. Based in New York City, the company started as a creative-tools startup and has since become one of the leading AI-powered video generation companies, raising over $230 million in funding.
Runway holds a notable place in AI history: it co-developed the original Stable Diffusion model in collaboration with the CompVis group at Ludwig Maximilian University of Munich and Stability AI. But while Stability AI focused on open-source image generation, Runway pushed into video. Its Gen-1 model (2023) could transform existing videos using text prompts, and Gen-2 could generate videos from text alone.
Gen-3 Alpha, released in 2024, represented a significant leap in video quality and consistency: its outputs showed much better temporal coherence, more realistic motion, and higher fidelity than anything previously available. This put Runway in direct competition with other video generation efforts, such as OpenAI's Sora and Google's Veo.
Beyond generation, Runway offers a full suite of AI-powered creative tools for video editing, including background removal, inpainting, motion tracking, and color grading. Its platform is used by filmmakers, advertisers, and content creators, and its technology was used in the production of the Oscar-winning film “Everything Everywhere All at Once.” For creative professionals exploring AI-assisted workflows, Runway has become one of the essential tools to know.