Behind DeepMind & Darren Aronofsky's AI-led short film slate
In May 2025, Google DeepMind partnered with Primordial Soup, a new storytelling venture founded by director Darren Aronofsky, to bring the lab's video-generation tools to a short-film slate.
Primordial Soup made a deal to produce three short films using Google DeepMind's generative AI models, including its Veo video model. According to Google, each film will be led by an emerging filmmaker. Aronofsky will mentor the directors, and Google DeepMind's research team will provide production support.
The first project, "ANCESTRA," directed by Eliza McNitt, premiered at the Tribeca Festival on June 13, 2025. The eight-minute short tells a story inspired by the day McNitt was born, from her mother's perspective.
McNitt described Veo as a tool that complements, rather than replaces, established craft.
"Veo is a generative video model. But to me, it's another lens through which I get to imagine the universe around me," she says. "We filmed really emotional performances, but then generated video we could never capture otherwise."
Studios, independent producers, and technology companies are exploring generative AI for early-stage concept work, visual effects, and post-production.
At Toronto's Elevate festival in October, Naeem Talukdar, CEO and co-founder of the Toronto-based AI film production company Moonvalley, said the first big-budget films made with the company's technology will be released in 2026.
At the same time, filmmakers and industry bodies have raised concerns about creative control, the provenance of training data, and the impact of AI-generated imagery on jobs and working practices.
In the U.S., SAG-AFTRA and the Writers Guild of America have pushed for federal protections against AI, as many in the industry fear it will replace human labour.
Google DeepMind framed the collaboration as a way to put advanced video generation in the hands of top creative teams, while giving researchers a practical view of how the tools behave in real-world workflows.
Aronofsky casts the collaboration as an extension of filmmaking's core purpose.
"Film has always been this deeply human act of connecting people with each other's stories, and it has the ability to rip us out of our experience and take us on another journey," he said in a promotional video for the partnership on Google's blog. "I don't think that ever changes."
"On This Day…1776," a mini-series part of the deal, released its first episode on TIME's YouTube channel in January. The project, which tells the story of various events in America's revolution, does not provide any voice credits. Each short clip is set to drop on or around the 250th anniversary of its occurrence.
TIME Studios says the voices belong to human actors who are SAG members, and that both traditional and AI filmmaking tools were used in production. The clips are visually off-kilter, and the voices do not align perfectly with the animation.
In video generation, common practical challenges include maintaining temporal consistency, character continuity, and visual coherence across shots and edits. Creative teams also face questions about how much generated material can be integrated without breaking a film's visual language or audience expectations.
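Temporal consistency, for instance, is often checked with simple frame-to-frame difference metrics. The sketch below is a minimal illustration of that idea, not a description of any pipeline used on these productions; the synthetic clip and the 0.15 threshold are assumptions chosen for demonstration.

```python
import numpy as np

def temporal_consistency_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute per-pixel change between consecutive frames.

    frames: array of shape (T, H, W, C) with values in [0, 1].
    Low scores suggest smooth motion; a sudden spike can indicate
    flicker or a continuity break between two frames.
    """
    diffs = np.abs(frames[1:] - frames[:-1])   # (T-1, H, W, C)
    return diffs.mean(axis=(1, 2, 3))          # one score per transition

def flag_discontinuities(scores: np.ndarray, threshold: float = 0.15) -> list:
    """Return indices of transitions whose change exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Usage on synthetic data: a 16-frame clip with one abrupt cut at frame 8.
rng = np.random.default_rng(0)
clip = np.repeat(rng.random((1, 64, 64, 3)), 16, axis=0)  # static scene
clip[:8] += rng.normal(0.0, 0.01, clip[:8].shape)         # mild noise = motion
clip[8:] = rng.random((1, 64, 64, 3))                     # hard scene change
scores = temporal_consistency_scores(np.clip(clip, 0.0, 1.0))
print(flag_discontinuities(scores))                       # -> [7]
```

Production tools typically rely on perceptual or embedding-based similarity rather than raw pixel differences, but the shape of the check is the same: score each transition, then flag the outliers for review.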
Last July, Moonvalley released Marey, an AI video model that offers precision controls for complex VFX sequences while keeping creative authority with filmmakers and studios. Key features include camera control, which simulates 3D camera moves from a single image, trajectory control, and shot extension, which lengthens an original video.
The approach reflects a broader pattern in which AI developers work with creative professionals to guide product direction and refine system behaviour.