Future generations might believe that some of these Seedance 2.0 videos – ripped from a long-buried USB drive – represent the state of the art of mid-20th and early 21st century cinema. Maybe they won’t notice the strange movements, the lack of blinking, the stilted dialogue, and the fixation on hand-to-hand combat.
I hope that’s not the case, but when you consider that film deteriorates at an alarming rate while digital content lives on as stored, seemingly eternal bits and bytes, this scenario isn’t so far-fetched. Of course, it would be a calamity.
A silly little 1960s AI comedy short I created with Nano Banana and Seedance 2. pic.twitter.com/haKQuQVYqS — February 22, 2026
With its resemblance to mid-20th-century Douglas Sirk films and their saturated 1950s-style blues, it’s almost enchanting. That is, if you can look past AI artifacts like a 12-piece band made up of a dozen duplicate musicians, or a restaurant full of patrons who look eerily similar.
There are other anomalies, and many of them are hallmarks of the AI video slop produced on Seedance 2.0 and other platforms. Yet the sheer abundance of Seedance 2.0 content is almost unprecedented. As I write this, social media is flooded with short videos featuring countless characters, usually engaged in some sort of battle or an impossible crossover between brands.
I’ve seen at least two Matrix rip-off videos featuring rematches between Neo and Agent Smith. There’s a video of Marvel’s Doctor Strange fighting DC’s Superman, and another featuring the cast of The Office meeting Iron Man.
One battle after another
A Matrix-level action scene cost more than $10 million in Hollywood. Now it’s done in 2 minutes. Seedance 2.0 — MartiniArt_ 🔥 pic.twitter.com/uL1Y3Wf1Iu — February 23, 2026
Time after AI time
In any case, those who write the prompts and generate the videos pay no attention to intellectual property rules or to concerns raised by actors’ representatives. Worse, the actor you know best for playing [fill in the blank] is made to reprise the role, without their consent, in these renegade videos.
It’s a big deal, of course, but I found myself drawn to two of the more original videos, the ones that tried to tell new stories without exploiting anyone else’s IP.
I began to wonder how they were created, and to consider how, even when the ideas and characters are new, the unmistakable quirks of Seedance 2.0 videos remain.
Time Traveler (made with Seedance 2.0). I created this short time travel scene using Seedance 2.0 in just one day for less than $200. pic.twitter.com/ImeoTh0vLe — February 22, 2026
It’s an engaging 5:30 clip, but the AI quirks keep piling up. For some reason, everything is “shot” in a Wes Anderson style, with each character framed dead center.
No one blinks, and emotions are either missing or delivered in strange tics, such as one character appearing to sniff his pen in panic.
As in much of the Seedance 2.0 content I’ve consumed, most characters’ skin looks a bit plasticky at times. The effects can be good, but they tend to be repetitive. I assume Al-Ghaili generated them once and then reused the footage.
Perhaps my favorite part is the robot, although, like so much in this and other AI-generated videos, it is derivative.
All photos were created from this single image made in Nano Banana on @freepik. For a few, I took screenshots from videos and brought them back into Nano Banana to create variations or make slight edits. pic.twitter.com/F0EXvwKmbB — February 23, 2026
Another time for AI, another place for AI
While I don’t like these videos or the consternation and anxiety they generate across many industries, I am fascinated by the way they are made.
Many creators like to claim they made their work with a “unique prompt,” but I suspect they are being somewhat disingenuous.
I noticed in Christopher Gwinn’s post that he credited Nano Banana with some of the work for his “Silly Little AI 1960s Comedy Short”. I had to know more, so I bombarded him with questions on social media:
- Was this one prompt or multiple prompts?
- Who wrote the dialogue?
- What description did he have to give Seedance 2.0 to achieve the desired result?
- Did he tell it to use the same “actors” across multiple scenes and within the same scene?
More than just a prompt
Gwinn, who works in Hollywood as a digital creator, told me on X that he started with a single Nano Banana AI-generated image (above) that he built in Freepik. This image, inspired by the films of French filmmaker Jacques Tati (famous for the Monsieur Hulot comedies, which he directed and starred in), was used to flesh out the entire Seedance 2.0 sequence.
While Gwinn usually writes his own dialogue, he took a different route with this short comedy: “I told Seedance what was happening in the shot. I only wrote a few lines myself – sometimes, after generating original dialogue, I would change it slightly and re-run the prompt,” he shared with me on Threads.
Gwinn also reused certain characters in several shots. Once he had all the pieces, including the same couple dancing in several scenes, he cut and edited in traditional video editing software – he switches between Adobe Premiere and CapCut.
What Gwinn described to me was a process, and ultimately not much different from what a traditional filmmaker might do. There are notable exceptions, such as using AI-generated people instead of actors. And for all Gwinn’s work, he can’t quite remove the funhouse-mirror feel from the whole enterprise.
Something wrong
Sure, it may remind you of comedies from the 1950s, 1960s, or even 1970s, but it also seems a bit off. The slapstick makes little sense because there is almost no setup for each gag; we enter nearly every comic moment in the middle. It made me feel as if I were watching a trailer for a mid-century comedy that was trying too hard to be funny.
Other anomalies, like physics not quite working and bodies sometimes moving as if they had no bones, are evident in virtually every Seedance 2.0 clip. Given the rapid pace of AI advancement, though, these problems will likely be resolved within months.
I like understanding how these videos are made. It gives me some comfort about the rapid progression of this “art” to know that the digital creators behind it are likely using far more than a single prompt to achieve the desired result.
I hope, however, that in their quest to generate ever more bizarre storylines for Neo, Iron Man, Superman, Brad Pitt, and Tom Cruise, they will stop and think about how they can use these tools to create something new: art that can finally stand on its own.