AI video slop is becoming the news, and no one is stopping it
My mother, born in 1950, was sitting in a large red IKEA chair a few months ago, listening to something on her phone. I could not hear the audio at first. Then a few words broke through: Barron Trump, choking, schoolteacher. I leaned over her shoulder. She was watching what looked like a documentary. It had the pacing and tone of cable television, the slow narration, the swelling pauses, the promise that something meaningful was unfolding. But it was all synthetic. The images were stiff. The voices flat. The story assembled from fragments.

The plot was simple: a couple from Missouri, schoolteachers, ordinary people, came to New York for a rare night out. They chose Trump Tower. A splurge. A story about aspiration. Then the turn. The husband begins to choke. The wife panics. The narrator continues, calm and steady, never quite human. “At that moment, Barron Trump walked into the room. He saw the choking schoolteacher and approached, his tall frame filling their view.” The voice lingered on that detail. The scene stretched on, minute after minute. This went on for half an hour.

I stopped her and explained what she was watching. Not a documentary. Not reporting. A machine-made story stitched together to hold attention. She nodded. She understood. But she did not care. “It’s on here, so it’s fine,” she said.

That sentence matters more than the video. There is nothing new in praise stories. Hagiography has followed power for as long as power has existed. What has changed is the container. This material now arrives dressed as fact. It borrows the grammar of television, the tone of authority used by networks like TLC and Discovery Channel, and it slips into the same stream as everything else. There is no clear border between the real and the fabricated. Not for her, and not for many others.

“This is my news,” she said. That is the line that should give pause. Not because the content is extreme. It is not. It is mild. It is almost boring.
But it replaces something older, a habit of trusting institutions, flawed as they were, with something far thinner.

So the question is not whether this specific video is dangerous. On its own, it is not. It passes the time. It fills a gap. It sits alongside the other things she watches, including endless AI-narrated pieces about Princess Diana, stories that drift between fact and invention without much concern for either.

The real question is slower and harder. What happens when a generation that did not grow up with this technology accepts it without resistance? When the signal and the noise share the same voice, the same authority?

We tell ourselves that media literacy is the answer. That people should know better. But that assumes time, patience, and a kind of discipline that even younger audiences struggle to maintain. My mother wants to watch “the news” and to stay informed. She is adapting in the only way available to her. She watches what is in front of her.

Her children practice media hygiene. We check sources. We cross-reference. We distrust what feels too neat. But those are learned habits. They come from years spent online, years spent being fooled and then learning not to be fooled again. Should we expect her to do the same? Or is this simply the new shape of information, one that does not ask for belief so much as attention, one that does not argue but repeats, until repetition itself becomes a kind of truth?

The Propaganda Problem

The real issue is that videos like the ones my mother was watching are being used to wage war against unity. UK-based Refute has published a report on foreign interference in recent European elections, pulling data from Romania, Moldova, and other active fronts. The content flowing out of Russian propaganda systems is AI slop, the kind of posts and images you see your conspiratorial uncle sharing on Facebook.
And more and more of this propaganda is focused on video that describes the world as a lawless and terrible place, threatened by the awful Other.

The report makes a simple point: interference is no longer a last-minute push or a few bad actors. It is planned early, coordinated, and spread across the entire election cycle. The aim is not subtle. The protagonists want to split the electorate and to convince people, especially those abroad, that their country is failing. The diaspora, folks like my mother who came from Poland in 1974, imagine their home countries as they were when they left: poor. The answer? Convince that powerful voting bloc that it’s up to them to save their homeland.

Further, these videos help politicians, usually illiberal ones, win. “If you’re a politician running to be elected, mathematically speaking, the best strategy is to divide the electorate and then only appeal to part of it,” Galu said. “That’s a much easier way to get into office than to try to speak to everyone.”

In Romania’s 2025 presidential election, Refute tracked about 32,500 TikTok videos backing populist candidates. Many showed signs of coordination: copied content, reused formats, and AI-generated media. The audience was not just local. About 24% of Romanians live abroad, but nearly half of the engagement came from outside the country. That points to deliberate diaspora targeting.

Moldova saw a heavier operation. The report points to a mix of vote buying, online campaigns, and activity tied to embassy networks. Refute identified more than 16,000 bot-like accounts during the election period. Intelligence estimates cited in the report put spending at around $150 million. That’s money that used to go to billboards and TV ads.

What emerges is a layered system. Bots, influencers, and synthetic media work together. Content moves across TikTok, Telegram, and Facebook. Organic and artificial engagement blur into each other, which makes attribution hard and slow.
Galu is blunt about the imbalance. “We have to think about this information warfare as pretty much the same thing as conventional warfare. We are in a war situation, it’s just fought with different means on different grounds,” he said. “It’s a very low-effort, very high-yield activity. For us, the cost is much higher because we need certainty and clarity, and that requires compute and data.”

The campaigns lean on familiar themes. Defense spending framed as waste. Calls for accommodation with Russia. Early claims that elections cannot be trusted. Hungary shows similar patterns ahead of the April 2026 parliamentary vote. European security sources cited in the report say the playbook matches what was used in Moldova.

The report lands on an interesting conclusion. The weak point is no longer the ballot itself. It is the information environment around it. By the time officials react, the narrative is already set. “Once disinformation campaigns begin, it is extremely hard to rein it back in again. Prevention is far more cost-effective than damage control.” Galu argues for continuous monitoring and automated analysis. Right now, most responses come too late and at too small a scale.

What does this have to do with AI video? Plenty. The tools that were used to engage my 76-year-old Polish mother are the same ones being used to enrage small-town Romanians or older Hungarians. The models that are creating space aliens and new Avengers clips are the same ones telling us that Haitians are eating the cats and dogs. These tools, like so many forms of media before them, are simultaneously vital to building conversations and frightening in their power. It’s up to us, then, to educate folks who might be lured in, although, as my mom said, this is their news.
© 2026 John Biggs

Generated Weekly: Hollywood resurrects Val Kilmer with AI, raising questions
A quick look at the future of AI resurrection.
A new trailer for As Deep as the Grave has resurrected Val Kilmer’s image to put him into a movie he had been slated to appear in before his death.
From Variety:
Director Coerte Voorhees said the role was written for Kilmer and tied closely to his background and interests. The filmmakers argue the approach is ethical because it involved the family, followed SAG guidelines, and compensated the estate. The material used to build the performance came from Kilmer’s own past work and recordings, not from a generic model trained on unknown data.
The project lands at a tense moment in Hollywood. Studios are testing AI tools that can recreate voices, faces, and performances, and actors worry about what that means for their work and their likeness. Contracts now include clauses about digital use. Unions push for limits and consent. If a performance can be built from past footage, the need to hire a living actor can shrink.
The filmmakers behind this project push back on that reading. They frame their decision as narrow and specific. Kilmer had already been cast. His health made filming impossible. The film itself was shaped around him, his background, and his connection to the story. In their view, the choice was not between hiring someone else and using AI. It was between removing him entirely or finding a way to keep him in the film.
The question, then, is whether this move by the filmmakers honors the dead or dishonors the living. If the family signs off on the decision, there is little to object to in this particular case, but the precedent still stands: we are now culturally OK with bringing back the dead, and it will only get worse or, if we’re Hollywood’s bankers, better.
After all, the dead can’t invoice.
Iran Embassy in Tajikistan posts AI video of Jesus punching Trump in the face
Iran’s Embassy in Tajikistan posted an AI-generated video on X depicting Jesus Christ punching President Trump in the face, sending him tumbling into a fiery pit — a direct mockery of an image Trump had shared on Truth Social portraying himself in a Christ-like, healing role. The video is part of a broader Iranian social media campaign, with AI-generated content from the Iranian firm Explosive Media lampooning Trump and Israeli Prime Minister Netanyahu, while the Trump administration has fired back using imagery from Grand Theft Auto and SpongeBob SquarePants. Trump’s original post had already drawn backlash from his own conservative and Christian supporters who called it blasphemous, and the back-and-forth plays out against the backdrop of ongoing U.S.-Iranian tensions and Trump’s public spat with Pope Leo XIV.
Experts sound alarm as new AI-generated videos of Iran war spread across social media
Sky News anchor Jayne Secker called out a video on X that depicted her talking about the Iran war. The problem? She was depicted in the video reading a seemingly real news item when the entire thing was fake.
“Hi - this video is fake. I am the newsreader in the image. Have reported to X. Please remove,” she wrote to the poster, a self-described “#RESIST #BLM #EQUALITY☮️ #LGBTQ🌈 #FEMINIST💗.” The poster purports to be part of MeidasTouchNews, a left-wing media organization.
Netflix plans to add a vertical video feed, use AI for recommendations
Netflix announced plans to launch a TikTok-style vertical video feed within its apps this month, part of a broader push into AI-driven features. The short video feed is intended to help users discover content like video podcasts alongside its existing shows and movies, and builds on testing the company has been doing since last year. On the AI front, co-CEO Gregory Peters highlighted plans to improve Netflix's recommendation systems using newer model architectures, while co-CEO Ted Sarandos described AI as a tool to give artists better capabilities throughout the content creation process.
Generated is a newsletter about the craft behind AI-powered video. Edited by John Biggs, it looks at what happens when video production and AI tools start to merge. The focus is on the people, tools, and techniques at work in this fascinating medium.