What’s Eating AI-Powered Video?
While everybody is absorbed in the static side of generative AI and the wonders that Midjourney, Stable Diffusion, or DALL-E can make real, depending on who reigns supreme, the motion side of the AI world has recently been put in the spotlight. And it’s shockingly good! Let’s take a look at some recent videos that look like they could disrupt the supposedly undisruptable Hollywood, or at least that’s what some say. Now, let’s be clear about this: the world of video is a completely different dimension from the static one. We hardly know where to start, but what we can do is share some of the best work we’ve recently stumbled upon and found interesting.
First in the pipeline is the well-known, talented AI creator Bilawal Sidhu (@bilawalsidhu) and his recent work, a deep dive into the motion capabilities of generative AI. What a wonderful AI-powered joyride through San Francisco. As the author himself says, it was brought to life with Kaiber Video2Video (featuring ControlNet, Stable Diffusion, and EbSynth).
artist: @bilawalsidhu / source: Twitter
Can it get better than that? Let’s see. A while back, Don Allen Stevenson III (@DonAllenIII), former DreamWorks Animation artist, “Bob Ross” of the Metaverse, and AI enthusiast, ran this experiment combining 3D modeling, painting, and the creation of AI-powered VFX with AR input, all in one video. A must-see back then, and still today.
artist: @DonAllenIII / source: Twitter
That was in February; it’s May now, and things have changed immensely. Just check out the @WonderDynamics stuff. It’s out of this world: robots are taking over, quite literally.
Let’s see how Bilawal Sidhu (@bilawalsidhu) compares this progress with his endeavors from last year.
‘NeRF + AI Art. I made a fly-through animation of a 3D scan using InstantNGP. Then I used StableDiffusion to transform the scene into 3 different styles: photorealistic, surrealistic, and even Minecraft-style voxels!’
artist: @bilawalsidhu / source: Twitter
AI experiment comparing ControlNet and Gen-1. Video goes in, Minecraft comes out. The results are wild, and it’s only a matter of time until this tech runs at 60fps. Then it’ll transform 3D and AR. How soon until we’re channel surfing realities layered on top of the world?
artist: @bilawalsidhu / ControlNet / source: Twitter
artist: @bilawalsidhu / Gen1 / source: Twitter
Mind-blowing, to say the least. Bilawal posted a remarkable note on his experiment: ‘Text prompts are cool, but control over the details is crucial for artists’. That’s for sure, and it’s also just a matter of time.
Fancy some more great stuff? Of course you do! This is Mars Mining Machine by web designer and developer Ben Nash (@bennash). Hard to believe this is even possible now; we totally agree with the author.
artist: @bennash / source: Twitter
Another great example, and definitely one of the most remarkable so far, is the short movie ‘Thank You For Not Answering’, directed by Paul Trillo (@paultrillo) and co-directed by Runway’s Gen-2. That’s a nice way to appreciate the AI. According to the director, he ‘made the movie from only image and text prompts using Runway’s Gen-2’. He prepared around 150 images as storyboard frames and generated over 400 clips across a couple of weeks, then took about a day to edit it all down. While it’s not quite reality, it presents us with an entirely new aesthetic. There is beauty in the imperfections. Outstanding work.
The disruptive power of generative AI may turn out to be quite useful for the media, as Steve Mills (@SteveMills) pointed out when he created a ceremony that looks incredibly similar to recent events that got global media coverage, with one exception: this one was generated from text to video. We really seem to be breaking the ice here.
‘In the future, will news outlets pre-emptively create stock footage to use in stories? Not fake news, but “supplementary” shots. Where will they draw the line?’
artist: @SteveMills / source: Twitter
There are so many great examples of AI-powered video, including, of course, the wildly popular pizza ad, that it is not possible to list them all here. However, there is one that we just have to share: Hypnagogia by Merzmensch Kosmopol (@Merzmensch)!
‘Have you ever brought a photo back from your dream? Our memory of it deceives us: it dresses the thought concepts in visions, which, however, are always changing in the moment of the dream… With text2video models came new possibilities to “capture” this change without having to freeze it.’
The author used the same prompt over and over again, trained the AI on their own essays, poems, and a range of public-domain literature, and created the piece with Gen-2 by RunwayML. The effect is incredible.
artist: @Merzmensch / source: Vimeo