CONVERGER #1 — Super-Sized First Edition
Mapping the content singularity where all media collapse into one
Welcome to CONVERGER, a biweekly newsletter mapping the content singularity where AI and the internet collapse all media into one—a connective node where emerging technology, policy, culture, futures thinking and storytelling intersect.
CONVERGER presents news and views from an AI, internet and media policy expert who is pro-innovation but anti-hype, allergic to both AI panic and AI boosterism, and passionate about supporting rather than supplanting human creativity with new technology.
I’m Kevin Bankston, your host. Since this is the first issue of CONVERGER, it’s a massively super-sized edition covering the past month.
Some issues may be heavier on media commentary, others on AI policy, others on personal passions like sci-fi’s influence on technology (both for good and bad) or the evolving medium and business of comic books in the digital age. You never know what threads might come together in convergence-space!
Going forward, you can watch me develop newsletter content in real-time on LinkedIn and the social network formerly known as Twitter, and less often on Bluesky and Instagram.
You can also look for my deeper policy-oriented takes on AI governance generally at Elicitation, the new Substack from my AI policy day-job colleague Miranda Bogen of the Center for Democracy & Technology’s AI Governance Lab.
Now, let’s converge!
TABLE OF CONTENTS
FEATURES
Happy 100th AI Lawsuit to Those Who Celebrate! (~310 words, 1 minute read)
Forecasting Four Fraught Futures for the Web in the AI Age (~540 words, 2 minute read)
Steven Soderbergh Volunteers To Get a Swirlie from Hannah Einbinder (~590 words, 2.5 minute read)
Hollywood’s So Angry at AI It Can’t Spell Straight (~510 words, 2 minute read)
Pay No Attention to the Foundation Model Behind Ben Affleck’s Curtain (~1075 words, 4 minute read)
Director Robert Rodriguez on AI as Creative Multiplier (~445 words, 1.5 minute read)
Virtual Studios: What’s the Difference Between Sin City and Generated Cities? (~670 words, 2.5 minute read)
So Long Sora, Welcome Back Seedance 2.0, Hello to Netflix’s VOID (~540 words, 2 minute read)
Writers vs. AI vs. Writers (~590 words, 2.5 minute read)
A Contract That Helps Protect Comic Artists Against AI Training on Their Work (~360 words, 1.5 minute read)
FRAGMENTS
WGA Deal on AI: Is That All There Is?
News of the Supreme Court’s Ruling on AI and Copyright Has Been Greatly Exaggerated
Macro-Growth in the Micro-Drama Content Pipeline
Meanwhile, In the YouTube-to-Theaters Pipeline: Backrooms is Coming
Old Music Beats New Music Beats AI Music
New Script-Reviewing AI Really Wants Brett Ratner to Direct Your Movie
Is Fan-Created Content Supplanting Canonical Content?
The Future’s So Bright I Have To Enter This Contest
Big Brother, Generating Slop
Webtoon Translations, Digital Comics, Tiny Onions
FEATURES
Happy 100th AI Lawsuit to Those Who Celebrate!
At the border between old media and new technologies, there are lawsuits—a lot of lawsuits. And on April 3rd we hit a key milestone: the filing of the 100th copyright-related lawsuit against an AI company in the US.
The honor of bringing the 100th case goes to Ted Entertainment, a YouTube creator that simultaneously filed three cases against OpenAI, Apple, and Amazon, alleging they illegally bypassed YouTube’s technical protections to scrape videos for AI training.
Meanwhile, what was already the largest copyright settlement in history—a minimum of $1.5 billion in the case of Bartz v. Anthropic, brought by a class of authors representing every writer of every one of the nearly half a million works in the “shadow libraries” of pirated books that Anthropic trained its models on—has also hit another milestone. The plaintiffs have reported that the authors of 91.3% of the eligible works have registered their claims for part of the settlement.
As any lawyer reading this knows, that’s an insane claims rate; the typical consumer class action lawsuit has a rate closer to 10%. But it looks like the potential for approximately $3000 to each author per work was highly motivating to the class members. And as lawsuits against AI labs on behalf of authors proliferate, there may be more settlements coming down the pike.
This is all according to ChatGPT is Eating The World, the most comprehensive newsletter tracking copyright law developments around AI. It’s run by law professor Edward Lee, who commemorated the 100th lawsuit by debuting a new dashboard for tracking all of them, both in the US and globally. Handy for AI and copyright nerds!
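For the numerically inclined, here’s a quick back-of-the-envelope check on those figures (a rough sketch only; actual per-author payouts will depend on the final claims pool and the court’s approval):

```python
# Rough sanity check on the Bartz v. Anthropic settlement math reported above.
# All figures are approximations from the reporting, not court documents.
minimum_settlement = 1_500_000_000   # $1.5 billion settlement floor
eligible_works = 500_000             # "nearly half a million" pirated works

per_work_payout = minimum_settlement / eligible_works
print(f"~${per_work_payout:,.0f} per work")   # ~$3,000 per work

claims_rate = 0.913    # reported registration rate for eligible works
typical_rate = 0.10    # ballpark rate for a typical consumer class action
print(f"{claims_rate / typical_rate:.1f}x the typical claims rate")  # ~9.1x
```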
Forecasting Four Fraught Futures for the Web in the AI Age
Speaking of AI, scraping, and the law: late last month at Georgetown Law School I had the privilege of hosting a day-long event of public and private panels, talks, and roundtable conversations about “Internet Scraping and the Future of the Open Web in the AI Age,” in my dual capacities as Senior Advisor on AI Governance at the Center for Democracy & Technology and adjunct professor of AI and copyright law.
The day was focused on the question of how to preserve an open and sustainable internet ecosystem when human traffic is quickly being supplanted by AI-driven bots, and how the results of those hundred lawsuits mentioned above might impact the answer.
This tension was highlighted just this last week by reporting about how the bots of the Internet Archive, a critically important nonprofit library of internet content, are increasingly being blocked by news websites afraid of their content being scraped by AI companies. That in turn led over a hundred journalists to write an open letter defending the importance of the Archive’s record of internet history to their and other researchers’ work, as well as to the public.
The issue of scraping and AI raises hard questions with a lot of important perspectives on different sides, so our event certainly didn’t resolve the problem! But we did bring together all the different communities with a vested interest in the issue for a generative [pun intended] dialogue, including commercial and non-commercial AI labs, web publishers and service providers, content delivery networks, librarians and archivists, scraping companies, and academic and public interest experts.
My favorite part of the day was one of the private expert sessions where we used a common foresight tool—the 2x2 scenarios matrix—as the basis for a breakout group exercise considering four very different possible futures for the internet based on the different ways companies, the courts, and lawmakers might approach the issue:
The Wild West: Unconsented bot scraping is both legal and technically easy and the internet is completely overrun by automated traffic.
The Hollow Victory: Web publishers beat the bots in court but lose the technical fight, so sites continue to be pummeled by abusive scraping from bad actors that are outside the reach of the law.
The Gated Web: Both legal protections and technical protections against bots are strong, leading to an internet of pay-per-scrape content fortresses.
Code is Law: Scraping is legally protected but the technical walls are so high that it’s effectively impossible without paying for access, so only big companies can afford to scrape anything worthwhile while startups, researchers and journalists can’t.
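To show how mechanically simple the tool is, here’s a toy sketch of that matrix in code. The four named futures are from the exercise above; the axis labels are my shorthand for the two dimensions (legal status of unconsented scraping, technical feasibility of blocking bots):

```python
# Toy version of the 2x2 scenario matrix from the scraping exercise:
# one axis for the legal status of unconsented scraping, one for the
# technical feasibility of blocking bots, a named future in each quadrant.
from itertools import product

legal_axis = ["scraping legally permitted", "scraping legally restricted"]
technical_axis = ["bots unstoppable", "bots blockable"]

scenarios = {
    ("scraping legally permitted", "bots unstoppable"): "The Wild West",
    ("scraping legally restricted", "bots unstoppable"): "The Hollow Victory",
    ("scraping legally restricted", "bots blockable"): "The Gated Web",
    ("scraping legally permitted", "bots blockable"): "Code is Law",
}

for quadrant in product(legal_axis, technical_axis):
    print(f"{scenarios[quadrant]}: {quadrant[0]} + {quadrant[1]}")
```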
All of these are extreme futures—that’s the nature of the exercise—but in articulating them, including their pros and cons and likelihood, we were able to weigh the complexities of the issue and the tough tradeoffs involved in a productive way. I posted the whole exercise worksheet on LinkedIn for others to use as a model because it’s a dead simple way to surface a wide range of predictions and perspectives around whatever issue it’s applied to. Check it out, and happy futuring!
Steven Soderbergh Volunteers To Get a Swirlie from Hannah Einbinder
Highlighting how polarizing generative AI has become in the film community, Emmy-winning Hacks star Hannah Einbinder roasted AI video creators during a press conference for that brilliant show’s final season, a roasting that nearly every Hollywood trade outlet gleefully turned into its own story. Slashfilm originally reported the full quote:
The people who make this stuff are losers. They’re not artists. They’re not creative. And they’ve wanted their whole lives to be special. And they’re not special…. They’re trying to rob real creative people of our gifts. And you can’t. And even if you try, you will never be cool. You guys suck. No one likes you. Anyone who’s near you is because they crave power and access over any ethical standard. You are a loser. You will never be cool. And you probably had a rolly backpack in high school. I wanna put your head in the toilet and flush.
Almost immediately proving her generalization wrong, legendary Academy Award-winning director and definitely-not-a-loser Steven Soderbergh noted in an interview with Filmmaker that he’ll be using “a lot of AI” in his forthcoming films.
It was the interviewer who first raised the subject, noting the “horror of AI” as perhaps “too depressing to talk about,” but Soderbergh immediately broke the narrative: “It’s worth talking about what that technology might be good at,” he replied.
Specifically, Soderbergh noted that for ten of the ninety minutes in his upcoming documentary on John Lennon and Yoko Ono, “AI has been helpful in creating thematically surreal images that occupy a dream space rather than a literal space…. But like every other piece of technology, it desperately requires very close human supervision.”
He came back to the subject when asked how he’d handle the expense of a period film about the Spanish-American war that he’s currently developing. Interviewer: “With ships and everything? For which you would use…” Soderbergh: “A lot of AI.”
After the inevitable online backlash to these comments, Soderbergh doubled down in an interview with Variety, where he said that the AI blowback “is mystifying to me.” He continued:
I felt obligated to engage with it, to figure out what it is and what it can do. It turned out to be a very good tool for certain passages of the Lennon documentary where I needed surrealistic imagery that was impossible to shoot. It allowed me to solve a creative problem about how to visualize what John and Yoko are speaking about philosophically. Ten years ago, I would have needed to engage a visual effects house at an unbelievable cost to come up with this stuff. No longer. My job is to deliver a good movie, period. And this tool showed up at a moment when I needed it…. There are some people that I have absolute love and respect for that refuse to engage with it. That’s their privilege. But I’m not built that way. You show me a new tool. I want to get my hands on it and see what’s going on.
The use case Soderbergh describes here is illustrative: as he notes, without AI he’d need an expensive effects house, which he certainly wouldn’t have paid to engage for a little documentary. So in this case, AI expanded creative possibilities without removing opportunities for craftspeople. But that certainly won’t be true in all cases, such that claiming to be “mystified” by the uproar seems a bit obtuse and, well, privileged. So a bit of advice, Steven: steer clear of any toilets while Hannah Einbinder is around!
(Update: just as we were finalizing this edition, Reese Witherspoon and Sandra Bullock also volunteered for Einbinder swirlies.)
Hollywood’s So Angry at AI It Can’t Spell Straight
Video generation startup Runway hosted its first Runway AI Summit in NYC on March 31st, highlighting advancements in the technology and how it’s being used creatively, and let’s just say it was received…skeptically by the Hollywood and tech press.
Wired‘s mocking headline was representative: “‘Thank You for Generating With Us!’ Hollywood’s AI Acolytes Stay on the Hype Train.” Meanwhile, The Hollywood Reporter‘s story focused almost solely on the cautionary note sounded by one star producer during the event: “Kathleen Kennedy Just Told an AI Conference She’s Not So Sure About AI,” announced the headline.
Kennedy, who is wrapping up her tenure as the head of Star Wars production company LucasFilm, rather sensibly highlighted the importance of preserving human taste and the serendipitous unpredictability of the creative process in the face of AI. She also reasonably pressed for more transparency around model development, the lack of which is a particular challenge for copyright holders seeking to prevent their content from being trained on without consent. Then she went on a bit of a tangent criticizing the quality of 3D-printed props compared to those made by craftspeople.
THR‘s story contained an embarrassing mistake that hopefully isn’t emblematic of their team’s actual level of tech knowledge. The article referred to ByteDance’s Seedance 2.0 model—the same model that has caused no end of controversy and a flurry of cease-and-desist letters from the content industry since its preview release in February—as “ByteDance’s Seesaw.” That mistake wasn’t corrected for several days (despite this author’s repeated nagging on X). And the story still weirdly calls Seedance a “social application,” which, just, no.
I’m surprised that I haven’t yet seen sarcastic press coverage of SoulScape, a new “Global AI Cinema Lab and Summit” that took place in San Francisco a weekend ago, but don’t worry, there are plenty of upcoming AI+film events for Wired and THR to make fun of. Next up: Runway will be hosting its fourth annual AI film festival in both NYC and LA in early June.
Taken together, the Einbinder comments and the chilly reception to the Runway event seem representative of the problem with the Hollywood discourse on AI as diagnosed by The Ankler‘s chief columnist Richard Rushfield:
The conversation, such as it is, shoves everything in one basket and reacts to it all with a primal scream, lumping together theft and innovation, job loss and useful tools, corporate abuse and creative experimentation…. The working premise in a lot of Hollywood is that AI is evil and must go away. Anything less than total resistance is surrender. I understand the impulse.... But that isn’t a strategy. [italics in original; funny link added.]
In an exercise evoking the 2x2 matrix, Rushfield wisely urged Hollywood to carefully delineate between four different categories of AI impacts—good and unstoppable, good and stoppable, bad and unstoppable, bad and stoppable—so that it can focus on stopping the bad things it can stop rather than wasting effort on those it can’t, while taking advantage of the good things that are coming and not blocking the benefits within its power to block. Let’s see if anyone takes his advice.
Pay No Attention to the Foundation Model Behind Ben Affleck’s Curtain
One recent development at the intersection of AI and the entertainment industry somehow escaped the primal scream of the anti-AI contingent. That was Netflix’s high-profile acquisition of Ben Affleck’s stealth AI startup, InterPositive, in a deal that ultimately could be worth up to $600 million for the actor. Despite the highly polarized debate over generative AI in Hollywood, Affleck and Netflix effectively sidestepped controversy by describing their AI tools as filmmaker-centric and implying that they were trained using only small sets of proprietary filmed data. But it looks like they got off easy, by not being fully up-front about how their technology works.
Rather than a prompt-to-video-slop engine, InterPositive’s tools are solely focused on improving real footage that’s already been shot, consistent with a human director’s vision. “For artists to apply these [AI] tools towards telling the stories we dedicate our lives to,” Affleck said in the press release, “they need to be purpose-built to represent and protect all the qualities that make a great story…. [We] need to preserve what makes storytelling human, which is judgment…. I knew I had a responsibility to my peers and our industry, to protect the power of human creativity and the people behind it. In creating InterPositive, I sought to do just that.”
This is all consistent with Affleck’s prior statements (before anyone knew he was secretly working on an AI startup) where he highlighted AI’s limits compared to human creativity but also cautiously described its potential creative uses for filmmakers: “What AI is going to do is going to dis-intermediate the more laborious, less creative, and more costly aspects of filmmaking, that will allow costs to be brought down, that will lower the barrier to entry, that will allow more voices to be heard, that will make it easier for the people [who] want to make Good Will Huntings, to go out and make it.”
In describing InterPositive’s AI models, Affleck and Netflix highlighted their artisanal nature (all emphases added): “I began filming a proprietary dataset on a controlled soundstage with all the familiarities of a full production….The results of this foundational work were deliberately smaller datasets and models focused on filmmaking techniques — rather than [generating] performances — creating tools that artists can use, control and benefit from.”
As Affleck further explained to Variety, “[t]he InterPositive system builds an AI model based on an existing production’s dailies, then lets a filmmaker introduce that model into the postproduction process to provide the ability to do things like mix and color, relight shots, and add visual effects.”
These statements imply (but never state) that InterPositive’s models rely only on small amounts of proprietary data. That narrative never made much sense technically, since generative AI models require enormous amounts of data. For example, Lionsgate’s entire catalogue wasn’t enough to build a useful custom video model in that studio’s partnership with Runway. But that didn’t stop other outlets from uncritically running with and expanding on these descriptions, describing InterPositive’s technology as being refreshingly free from unconsented use of copyrighted works in its training, in contrast to bigger, general-purpose video generation models:
“The technology, however, doesn’t…use footage without permission. Additionally, the AI model is getting trained on material you already own and have access to.” –Inc.
“InterPositive trains a custom AI model on a production’s own dailies, using that footage as the foundational dataset rather than pulling from public internet sources.” –CineD
“Models are trained exclusively on first-party footage rather than third-party datasets, [avoiding] intellectual property and consent risks….” –Del Morgan & Co.
“Netflix’s acquisition of InterPositive signals a deliberate pivot toward proprietary AI tools designed specifically for filmmaking rather than relying on general-purpose generative AI models.” –MLQ.ai
“Affleck said InterPositive did not provide video generation tools such as Google’s Veo3 or OpenAI’s Sora – it was ‘not about text prompting or generating something from nothing’ – but instead helped in the post-production process.” –The Guardian
However, the actual patent for InterPositive’s technology, published by Deadline a couple weeks ago, tells a different story. Deadline‘s news hook for the story was highlighting the millions of dollars of production cost savings that the patent application promised for filmmakers, including 50% savings on visual effects. They contrasted these cost-cutting promises with other Affleck statements focused on making filmmaking “easier” and “faster” instead of focusing on cost savings, and a Netflix exec’s statement that “it’s not really about cheaper, it’s really about better.” Obviously it’s about both, so not much of a gotcha there. But there was another story buried in the patent that Deadline missed:
Contrary to what Affleck and Netflix have implied, InterPositive’s technology must be built on top of general-purpose generative AI models that typically have been trained on masses of copyrighted material.
In particular, the patent reveals that InterPositive’s proprietary tools and data are meant to run on top of another large video gen model being used as the foundation of the system, essentially “fine-tuning” the larger base model by transferring learning from the smaller InterPositive model trained on filmmaking-specific knowledge. As I highlighted on Twitter when I discovered it, the patent specifically points to OpenAI’s Sora and Google’s video model as examples of appropriate base models. From the patent:
[InterPositive’s model is used] to train the other model, enabling it to recognize and replicate filmmaking techniques when provided with appropriate prompts…. [That] video model may be an existing large language model, such as OpenAI SORA or a Google AI model, which has been primarily designed for processing and generating video content. Prior to the transfer learning process, these models lack the capability to accurately interpret and implement cinematographic details in their outputs…. The video LLM serves as the base model that is enhanced through transfer learning to acquire the advanced filmmaking capabilities developed in [InterPositive’s] model.
I say that InterPositive’s tools are “meant to run” on top of bigger models rather than that they “do run” because it’s still not clear they even exist in a workable form. The Wrap reports that the model is still in development, and as one Netflix executive put it, “[i]t’s not like it’s a complete and ready to go tool.” But wherever it is in the R&D process, what InterPositive is developing is a method for fine-tuning a larger foundation model with specialized data for specific use in film postproduction, rather than building its own independent model trained only on proprietary data.
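For readers who want the technical picture, what the patent describes is the now-standard adapter pattern: freeze a big pretrained base model and train a small, domain-specific module on top of it. Here’s a minimal, generic PyTorch sketch of that pattern (the class and names are hypothetical illustrations of the general technique, not InterPositive’s actual code):

```python
import torch
import torch.nn as nn

# Generic sketch of the fine-tuning pattern the patent describes: a large
# pretrained video model stays frozen while a small adapter module, trained
# on domain-specific (here, filmmaking) data, is the only thing that learns.
# Hypothetical classes and names; not InterPositive's actual architecture.

class FilmAdapter(nn.Module):
    """Small trainable module injected alongside a frozen base model."""
    def __init__(self, hidden_dim: int, adapter_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, adapter_dim)  # compress
        self.up = nn.Linear(adapter_dim, hidden_dim)    # expand back

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the base model's representation passes
        # through untouched, plus a learned filmmaking-specific correction.
        return hidden_states + self.up(torch.relu(self.down(hidden_states)))

def prepare_for_finetuning(base_model: nn.Module, adapter: FilmAdapter):
    # Freeze every parameter of the large pretrained base model...
    for param in base_model.parameters():
        param.requires_grad = False
    # ...so the optimizer only ever updates the small adapter.
    return torch.optim.AdamW(adapter.parameters(), lr=1e-4)
```

The point of the pattern is that the heavy lifting (general-purpose video generation) lives in the frozen base model, which is exactly why the choice of that base model matters.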
To be clear: I’m not judging InterPositive for choosing to build on top of state-of-the-art foundation models from major US AI labs in order to develop creator-centric, film-focused tools. And I’m a fan of Affleck, a wildly talented, Academy Award-winning actor-writer-director-producer, as much as any other Gen Xer film nerd who religiously watched every episode of Project Greenlight and always cries at the end of Armageddon can be.
What I am judging is his and Netflix’s obscuring that fact rather than standing by it and explaining their choices. Having true creators champion reasonable, responsible, limited uses of this technology in the creative process means something in this conversation; subtle attempts to shy away from doing that in order to avoid criticism mean something, too. Perhaps Affleck should take a cue from Soderbergh and fellow AI-forward director Darren Aronofsky, both of whom have braved the backlash to defend their creative use of these tools, rather than avoid the hard conversation.
In furtherance of that hard conversation, the key question remains and should be answered directly by Affleck and Netflix: on top of what foundation model, exactly, is Netflix going to be building InterPositive’s tools?
Director Robert Rodriguez on AI as Creative Multiplier
Speaking of true creators, last month I had the unique pleasure of visiting the film industry equivalent of Willy Wonka’s Chocolate Factory: Robert Rodriguez’s Troublemaker Studios in Austin, Texas. I was there as an investor in his new development company Brass Knuckle Films, cofounded with producer Alexis Garcia. Like Eli Roth’s The Horror Section, BKF is funded through blockchain-based securities issued through Republic.com, such that investors will receive a share of any profits from the ventures. This new convergence of technology-enabled securities and film financing is itself an interesting development, but I was even more fascinated by the conversations at Troublemaker.
Unsurprisingly, it was absolutely delightful to meet Robert and tour the studio: hey look, it’s the biggest standing set in the US, from Alita: Battle Angel! Cool, there’s the car from Quentin Tarantino’s Death Proof! Oooh, that’s the matte painting from the end of From Dusk Till Dawn! Hearing Robert share his creative philosophy and preview the studio’s latest projects was just as inspiring as you would imagine.
Most interesting was hearing Robert talk about the future of AI in film production, where he voiced his hope to become an innovator not just in how to creatively use the tools but also in defining industry guardrails around their ethical use, which was music to this AI governance and copyright nerd’s ears.
To illustrate his hope that AI will multiply rather than supplant human creative capacity, he showed how he’d taken a 2D image of a cartoon monster that he designed and then experimented with tools from Luma Labs (which was a cosponsor of Brass Knuckle’s SXSW party earlier in March) to create a 3D model of the same character in a style that matched his creative intent. Echoing Soderbergh’s comments, Rodriguez highlighted how, without AI, he would have needed to send the 2D image to a designer to build a model that likely wouldn’t match his imagination and would have to be sent back for additional iteration at great cost, while the AI allowed him to quickly generate exactly what was in his head.
For a DIY creator who likes to do as much as possible on his productions both for creative and budget reasons—he shoots, he edits, he scores!—it’s not surprising to hear Rodriguez talking this way. He has always been a uniquely tech-forward and cost-conscious filmmaker. And to be clear, he didn’t make any specific representations about whether, when, or how he’ll be introducing generative AI into his development or production pipeline; the example he shared was just him fooling around with the tools. Even so, it’s good to have another veteran director willing to at least begin to engage on the question of where we do want to use AI in film, and where we don’t.
All that and Tex-Mex too! Truly, I couldn’t have asked for a better Austin weekend.
Virtual Studios: What’s the Difference Between Sin City and Generated Cities?
The new AI-driven virtual studio—the gen-AI iteration of the green-screen studios used by Robert Rodriguez on his Sin City films or Lucas on his Star Wars prequels—isn’t coming soon. Rather, it’s already here, with news this week of both a major feature film and a major streaming series soon to be released.
First up is news of director Doug Liman (The Bourne Identity, Mr. & Mrs. Smith) completing principal photography on Bitcoin: Killing Satoshi, his $70 million (?!) generative AI feature about the mysterious inventor of the cryptocurrency. Starring Gal Gadot, Pete Davidson, Isla Fisher, and Casey Affleck (AI runs in the family!), this “globe-trotting thriller” with over 200 locations purportedly would have cost $300 million if shot IRL. Instead, it was shot in a small gray warehouse, with generated backgrounds to be added later.
Obviously that script would never have been produced as written and locations would’ve been pared down on a traditional shoot, so that Avatar-level number is a bit ridiculous. And the claim that this will be the first 100% generated film is also a bit misleading even if technically true. Presumably, every pixel will indeed be generated—including reproduction of the actors’ performances, in a video-to-video (as opposed to text-to-video) generative pipeline—rather than recorded performances simply being composited on top of generated backgrounds. Their process likely looks more like that used by this new and technically impressive AI-generated Spanish horror short film completed with consumer-grade tools (which raises the question of how this thing still cost $70 million). But saying “fully AI-generated” evokes completely generated performers as well, which this definitely isn’t.
Next up is the similar announcement from Luma Labs and production company Wonder Project, the producers of the Amazon streaming hit House of David, that they are teaming up to launch a new AI studio called Innovative Dreams. That new studio is making Wonder’s next biblical series The Old Stories: Moses (with Ben Kingsley as the prophet) using what they are calling “hybrid filmmaking,” which appears to be the same sort of video-to-video performance capture and generative pipeline as the other projects mentioned above.
Finally, in a reversal of these stories about real actors and generated backgrounds, there was this past week’s news about a generated actor inserted into a traditional shoot: the digital “resurrection” by AI of deceased actor Val Kilmer in the trailer for the upcoming feature As Deep as the Grave. Although legally blessed by the actor’s family, this development raises the uncomfortable prospect that the digital likenesses of stars, like music libraries, are becoming something akin to tradable financial instruments.
This isn’t just a story about entertainers who have passed away, either: as Taylor Lorenz reported this past week in Vanity Fair, online influencers are increasingly producing content using generated “AI clones.” As one online commentator noted, “there will be very little reason for a Mr. Beast to actually show up to film a super bowl commercial in a few years [since] his identity will be in a .zip file controlled by [his manager].”
Again, all of these stories received an enormous amount of backlash online, but regarding Bitcoin and Moses, I’m left wondering: how is this so different from what Lucas and Rodriguez were already doing in the early 2000s? Does it really matter what kind of software is being used to fill in the backgrounds? Is the concern that these new methods reasonably feel more likely to meaningfully replace location shooting and the jobs that go with them, compared to those previous experiments that didn’t radically change the production ecosystem? (That’s a questionable assertion when you look at how omnipresent green screens are in modern big-budget productions.)
The concern makes some sense, but is it necessarily a bad thing for a domestic film industry that has lost much of its location shooting to foreign shoots anyway? It’s not clear to me that shooting many more low-budget features in a studio, perhaps even in LA again, will employ fewer American cast and crew than producing far fewer big-budget on-location features internationally with the same amount of money. But we’ll definitely need to get that $70 million number down first, and get closer to the “make fifty movies with $100 million instead of one movie” future that Runway’s CEO is promising. And even then, the jobs saved in LA will still cost jobs elsewhere.
So Long Sora, Welcome Back Seedance 2.0, Hello to Netflix’s VOID
All this talk of AI-based production raises the question: what models will Hollywood be using to drive their virtual studios? This past month answered that question in regard to at least one model: it definitely won’t be OpenAI’s Sora!
When OpenAI announced on March 24th that it was shutting down Sora entirely—app dead by April 26th, API dead by September 24th, Disney’s $1 billion licensing deal collapsing within an hour of the news—it wasn’t just a bad day for the “Hollywood’s cooked” crowd on AI Twitter. It was also a load-bearing wall getting yanked out of any Hollywood AI pipeline that had bet on that model as a foundation (potentially including InterPositive, based on their patent). If your post-production toolchain depends on a model whose existence can be terminated overnight by a vendor whose incentives don’t align with yours, you don’t have a pipeline, you have a problem.
Meanwhile, as OpenAI’s Sora is retiring, ByteDance’s controversial Seedance 2.0 model is returning. The aggressively copyright-infringing preview of that model was pulled last month after a flurry of cease-and-desist letters from Hollywood. But it finally had its official release in the United States on April 9th, first via ByteDance’s video editing platform CapCut and then via other third-party model platforms, after a rolling international release.
However, the version that landed here bears only a passing resemblance to the one that generated the now infamous Tom Cruise vs. Brad Pitt fight clip. The Chinese-domestic Jianying version offers photorealistic faces, multi-shot storytelling, and almost no guardrails. The Dreamina Seedance 2.0 that paid CapCut Pro subscribers in the US can now access has been comprehensively defanged: real-face uploads are blocked, intellectual property keywords and visual matches are filtered, every output is watermarked, clips are capped at fifteen seconds, and the content moderation is aggressive enough that some creators are calling it “nerfed.” So now that Sora is dead, this potential back-up vendor for Hollywood’s AI pipelines is delivering a fraction of the capability that made it interesting in the first place.
Which makes the third story from the same news cycle perhaps the most interesting. On April 3rd, Netflix quietly released VOID—Video Object and Interaction Deletion—on Hugging Face under an Apache 2.0 license, its first-ever open-source AI model. VOID isn’t a Sora replacement; it’s a specialized fine-tuned model built on top of CogVideoX, the open source version of Chinese lab Zhipu AI’s proprietary Qingying video generation model. VOID narrowly focuses on physics-aware object removal, such that if you take someone or something out of a shot, VOID will also remove any physical effects caused by the now-absent person or item.
But the context is what matters: the same company that just spent up to $600 million acquiring the proprietary InterPositive workflow is also using and releasing its own open source video models. Those moves look like the beginnings of an overall AI pipeline strategy: avoiding reliance on outside proprietary models and instead developing internal proprietary models and using flexible open source models that won’t change or disappear at a moment’s notice.
The lesson for every studio and indie shop drafting its 2026 AI strategy (especially Disney, which just got royally burned by OpenAI): better to control your own tech stack wherever possible, rather than rely on outside models that may not survive the brutal competition between labs, and where even if they do survive, the version you’re allowed to use may not be the version you wanted.
Writers vs. AI vs. Writers
Turning away from Hollywood and toward the publishing world, the last three weeks of March 2026 will be remembered as the moment when AI-generated writing stopped being a hypothetical across every institution that publishes words for a living. Five stories broke in tight succession, all variations on a single theme: who actually wrote that, and how much does the answer matter?
Let’s start with the AI resistance. On March 19th, Hachette canceled the US release of Mia Ballard’s horror novel Shy Girl through its Orbit imprint and discontinued the existing UK edition, after a New York Times investigation (and a Pangram analysis flagging ~78% of the text as AI-generated) forced their hand. This is the first known instance of a Big Five publisher walking back a book discovered to be written by AI. That discovery was driven by a user on r/horrorlit who catalogued the linguistic tells (the word “sharp” appearing 186 times, lists of three, em-dash overuse), then escalated through a 1.4-million-view YouTube essay titled “i’m pretty sure this book is ai slop.”
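As a side note, the crude version of that tell-counting is easy to reproduce. Here’s a toy sketch (the specific tells come from the reporting above; real detectors like Pangram are of course far more sophisticated):

```python
import re
from collections import Counter

# Toy version of the linguistic tell-counting that helped flag Shy Girl:
# tally suspiciously overused words and em-dashes in a passage of text.
SUSPECT_WORDS = ["sharp"]  # the r/horrorlit sleuth counted 186 uses of "sharp"

def count_tells(text: str) -> dict:
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    tells = {word: words[word] for word in SUSPECT_WORDS}
    tells["em_dashes"] = text.count("\u2014")
    return tells

sample = "The sharp wind drew a sharp line across the glass\u2014sharp as memory."
print(count_tells(sample))  # {'sharp': 3, 'em_dashes': 1}
```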
The next day, the New York Times severed ties with freelance critic Alex Preston after a reader noticed that his January book review had lifted phrases and full paragraphs from a Guardian critic’s review of the same novel. Preston confessed he’d used an AI tool to draft the review and “failed to identify and remove overlapping language from another review that the AI dropped in”—a new failure mode adjacent to plagiarism, where the model is the laundering machine.
Then on March 26th came a one-two punch from the “AI can be helpful for writers and publishers, actually” side of the debate. The Wall Street Journal profiled a Fortune business editor who has cranked out 600+ articles since July 2025 using Perplexity and NotebookLM, accounting for ~20% of Fortune‘s web traffic in H2 2025. Fortune‘s EIC told the Journal “more than 50% is Nick”—a quote that is perhaps less reassuring than it was probably intended to be.
Wired dropped a same-day companion profile of independent tech journalist Alex Heath, who uses Claude Cowork (with a custom skill called “10 Commandments for Writing Like Alex Heath”) to draft his Substack pieces end-to-end. Journalists across the internet variously attacked and celebrated these practices, fanning the flames of a fire that started a few weeks before when the editor of the Cleveland Plain Dealer promoted that newsroom’s use of AI to help source and write stories.
Sitting underneath all four new stories was NYT‘s March 9th interactive quiz—“Who’s a Better Writer: A.I. or Humans?”—a kind of Voight-Kampff test for prose, in which 86,000 readers blindly judged five paired passages and 54% picked the AI-generated ones, with one pairing splitting 67-33 in the machine’s favor.
Five stories, three weeks, one underlying question: is using AI to write a craft choice or a category violation? Some writers say one thing, some say the other, and 86,000 quiz-takers suggest that readers may not care which.
Full disclosure for CONVERGER readers: the entirety of this newsletter was originally written by me without AI assistance except for this feature and the previous one about Sora, where I experimented with a different workflow. For these two features, I solicited first drafts from Claude (inputting an outline with the key facts, key sources, and my rough take, then providing the previous features as an example of my style), then I heavily edited them (“more than 50% is Kevin”).
Did you see a difference as you were reading? The Pangram AI text detector didn’t: it concluded with a medium level of confidence that those sections were 100% human-written. Then again, the same tool predicted that a paragraph from Mary Shelley’s Frankenstein was 100% AI-written. So I’d love to hear your feedback on whether you noticed a shift in style.
A Contract That Helps Protect Comic Artists Against AI Training on Their Work
Comic book artists are rightfully concerned about AI being used to copy their styles or their designs and put them out of work. That’s a problem for someone like me, who works in AI but also is a writer currently developing a number of comic book projects and looking for professional artists to work with. Therefore, to reassure the artists with whom I’ve directly contracted to develop my comic ideas, I wrote the following anti-AI-training clause for our agreements:
Writer shall not use, nor license or authorize third parties to use, the Services [i.e., the artwork] in any manner for purposes of training generative artificial intelligence (AI) models to generate text, images, video or audio, including models to generate images or video reproducing the artwork or art style of the Artist, without prior written consent of the Artist. Nothing in this clause shall prohibit the use of generative AI models for routine internal production purposes including format conversion, upscaling, accessibility re-flows, translation, palette adjustment, or comparable non-creative functions.
[Another clause similarly prohibits the Artist from training on the commissioned artwork or my IP without my consent.]
“Generative AI models” include large language models, transformer models, diffusion models, and any other substantially similar types of machine learning models now or in the future that generate text, image, video or audio outputs based on model parameters derived from training on large datasets, regardless of whether those models are offered publicly, shared privately, or used internally, and regardless of whether those models are offered with open source licenses, closed source licenses, or are wholly proprietary. “Training” includes both the pre-training process that generates a model’s parameters and any post-training fine-tuning of a model’s parameters.
Artist and Writer each acknowledge that the other is not responsible for the unauthorized actions and conduct of third parties who attempt to use the Services or IP for generative AI training.
Remember: I’m not your lawyer! Please talk to one before sticking this language in your contracts. And if you have suggestions on how to improve this, let me know.
FRAGMENTS
WGA Deal on AI: Is That All There Is?
Beyond the narrow set of copyright and rights-of-publicity harms that litigation might address for certain actors and creators, organized labor is easily the most powerful lever for protecting entertainment industry workers from seismic displacement by AI. Which makes it all the more disappointing that the draft deal between the Writers Guild of America and the studios contains basically nothing new on AI, despite the WGA’s starting demand of payment for training on guild members’ scripts. As Variety first reported, “the deal largely preserves the status quo” on AI, with the studios only agreeing “to continue to hold meetings with the WGA, and to notify the guild if it licenses writers’ work for AI training.” Hopefully the other guilds, including the Screen Actors Guild, will fare better in their negotiations.
News of the Supreme Court’s Ruling on AI and Copyright Has Been Greatly Exaggerated
On March 2 the Supreme Court declined to review the DC Circuit’s decision in the case of Thaler v. Perlmutter that copyright does not protect works that are generated by AI without any human input; i.e., there must be a human author. This non-decision by the Supremes was immediately and widely touted by anti-AI voices online for the broad proposition that the Supreme Court had ruled that AI-generated content isn’t copyrightable. However, the facts of this case were really narrow: the AI at issue wasn’t modern gen AI but rather an AI art project whereby an algorithm trained on a wide swath of human works of art would self-generate new works with no human instruction or involvement at all; it was literally press-here-to-generate.
The much harder question of how much human guidance is required to create a copyright interest in modern AI-generated content—how detailed or iterative the prompts, how much human-created material uploaded as reference or for generative modification, how much human editing and arrangement of the outputs, etc.—still has yet to be answered by the courts. So, what’s the safe bet right now when working with AI? Lawyers suggest relying as much as possible on your own creative output (hopefully you were already doing that!), and documenting exactly what you personally authored or modified.
Macro-Growth in the Micro-Drama Content Pipeline
Micro-dramas are soapy, cliffhanger-heavy, short-form vertical TV series with sensational hooks (The Double Life of My Billionaire Husband; Tricked Into Having My Ex-Husband’s Baby; Return of the Abandoned Heiress) that are optimized for compulsive mobile viewing. The format is already a mature multi-billion-dollar business in Asia, and the US market has finally started to catch up in the past couple of years, including an explosion of low-budget microdrama production in Los Angeles and a recent forty-microdrama production partnership between YouTube star Dhar Mann and Fox Entertainment. Now, in just the past few weeks, a number of major content producers have made deals to adapt or produce original work for this exploding medium: HarperCollins partnered with AI-driven animation studio Toonstar to produce a slate of animated microdramas, starting with popular middle grade series Friendship List, while its romance imprint Harlequin signed a similar deal with another animation tech studio, Dashverse; Issa Rae’s Hoorae Media inked a deal to bring microdramas to TikTok, starting with the horror series Screen Time; and finally the National Enquirer is licensing its archives for a microdrama slate with verticals app GammaTime. I can’t wait for the inevitable Batboy microdrama!
Meanwhile, In the YouTube-to-Theaters Pipeline: Backrooms is Coming
Just a few months after YouTube star Markiplier (Mark Fischbach) had modest but definite success with his self-financed and self-produced sci-fi feature Iron Lung, the trailer for the next YouTube-to-feature title has hit. From A24, it’s called Backrooms, based on a YouTube series of horror shorts inspired by a creepypasta meme that originated on 4chan, about being trapped in an eerie dimension of endless empty office hallways. The director of the shorts is making his feature debut, and the trailer appropriately vibes creepy as hell.
Old Music Beats New Music Beats AI Music
Despite concerns about endless new AI-generated music tracks swamping streaming sites, the latest data shows that old music is absolutely dominating over all new music: data from Luminate shows that new music (less than 18 months old) accounted for only 35.8% of what Americans listened to in 2024, compared to 73.3% in 2014, while according to ChartMetric, only 3 of the top 10 songs in 2025 were released in 2025. Meanwhile, streaming demand for original non-AI music—at least according to Universal Music Group—is not meaningfully being affected by the rise of Suno and other AI music apps. That’s what a senior UMG VP said during a recent earnings call, which was probably an unpleasant surprise for UMG’s lawyers: they are suing Suno arguing that music from Suno’s AI models, trained on UMG artists’ music, is diluting the streaming market for the record company’s songs. Interesting litigation strategy!
New Script-Reviewing AI Really Wants Brett Ratner to Direct Your Movie
The Wrap tried out Quilty, a new AI tool promising to analyze and provide helpful creative and business feedback on movie scripts, by feeding it the screenplays for already-produced movies Sinners, Barbie, Christy, and Die Hard. Quilty predicted that the hits would flop and the flops would hit. More bizarrely, the AI script reviewer recommended alleged sex harasser and proven hack Brett Ratner (Rush Hour, Rush Hour 2, Rush Hour 3), whose most recent film was the Melania documentary, as a top director pick for three of the four submitted scripts—including Barbie?! What, do the Ellisons own this company, too?
Is Fan-Created Content Supplanting Canonical Content?
According to a December study from Ogilvy Consulting cited in this excellent Ankler story discussing the role of fandom and using The Super Mario Galaxy Movie as a hook, “Two-thirds of Gen Zs spend more time with fan-created content than with the official titles.” Makes me wonder: what percentage of their consumption is non-fan, original, nonprofessional content, or even self-created content? And how quickly will those numbers grow in the next few years?
The Future’s So Bright I Have To Enter This Contest
Short story contests inviting visions of the future of a particular issue or technology are such a common feature of the sci-fi and futures toolbox that I programmed a panel on the topic a few years ago. Yet this new Protopian Prize contest in particular caught my eye, though not because of the solicited topics (hopeful futures about AI and governance).
I tend to think “write more optimistic futures!” projects rest on the fallacious premise that positive stories lead to better futures than dystopias do, when dystopias have arguably been the more positive influence by providing self-preventing prophecies that help us guard against the futures they depict. For example, 1984 is probably the sci-fi tale that has had the broadest positive impact on technology policy debates, including in my own experience working on privacy and surveillance policy. But the judging committee of this new prize is full of absolutely extraordinary writers and thinkers who are very aware of the value of dark future visions as well as bright, including Annalee Newitz (Automatic Noodle), Hannu Rajaniemi (The Quantum Thief), and Ruthanna Emrys Gordon (A Half-Built Garden, easily my favorite sci-fi novel of 2022). So, I’m very excited to see what they pick. Submissions open May 1st and close July 31st, so start futuring now!
Big Brother, Generating Slop
Speaking of 1984, it turns out Orwell predicted slopaganda (AI slop for political purposes, like the mad Lego rap videos coming out of Iran right now) in his classic novel. A poster on Twitter highlighted this passage:
There was a whole chain of separate departments dealing with proletarian literature, music, drama, and entertainment generally. Here were produced rubbishy newspapers containing almost nothing except sport, crime, and astrology, sensational five-cent novelettes, films oozing with sex, and sentimental songs which were composed entirely by mechanical means on a special kind of kaleidoscope known as a versificator.
Oh what a brave new world! (Wait, that’s Huxley…I mean, Shakespeare…)
Webtoon Translations, Digital Comics, Tiny Onions
Pulling together a few different comics-related items:
First up: Webtoon, a site for web-native vertical comics that has already seen a few controversies over allegedly AI-generated comics art, launched its opt-in AI-driven translations tool with only a tiny bit of anti-AI pushback in response.
Second: the market for digital comic book stores—the folks who sell digital versions of the paper comics sold in IRL comic shops through the comic book “direct market”—is getting crowded and confusing as a range of old and new services try to take market share from Amazon’s Comixology. Thankfully The Beat wrote this handy guide to help make sense of the current state of the digital comics landscape.
Finally: I’ll be writing a lot more someday about Tiny Onion, the comics production studio founded by star writer James Tynion IV. The young company has helped enable and solidify Tynion’s rise through the ranks of independent comics creators, with horror hits like Something is Killing the Children that have multiple comic spinoffs and several TV and film adaptations in the pipeline. I’d kill for a comic-nerd Harvard Business Review case study of Tiny Onion’s dominating entrance onto the media scene, but until then here is an insightful interview with Eric Harburn. Harburn is Tiny Onion’s Editor-in-Chief—and now, in consideration of the company’s broad multimedia ambitions, Director of Narrative—and he talks about how Tynion and the company go about building narrative “engines” that can power multiple cross-media properties. If you’re into comics you should consider subscribing to SKTCHD, an excellent comics news site, to read the whole thing.
And that’s what’s converging this week! See you next time.


