AI BITS: All things multi! Motion, replacement, deblurring and prompting.
Have you seen Runway’s new Motion Brush tool yet? It’s a small paintbrush icon to the right of camera motion. Once clicked, you can paint over the part of your image you’d like to add motion to. And it’s not just motion, it’s multi-directional motion! As Rory Flynn, a Midjourney master, pointed out: it is a killer!
Of course, there were ways to work around the previous version (just check out how one smart Jesus Plaza did it here), but this update brings it closer to the regular user, way closer! BTW, we really recommend joining Jesus’s great community of creators, FILMS & AI, on X. As Runway ML puts it, with the new tool: “Control multiple areas of your video generations with independent motion”
This is amazing, especially since control is key when directing cinema; the future looks really promising. We can’t wait to see the creativity this feature will unlock among AI enthusiasts.
Once we know how to set in motion all that is ready to be moved around, we should learn how to replace things, right?! Here comes ReplaceAnything, a project on ultra-high-quality content replacement by a group of talented generative AI adopters from the Institute for Intelligent Computing. It reads as follows:
“AIGC techniques have attracted lots of attention recently. They have demonstrated strong capabilities in the areas of image editing, image generation and so on. We find that generating new contents while strictly keeping the identity of user-specified object unchanged is of great demand, yet challenging. To this end, we propose ReplaceAnything framework. It can be used in many scenes, such as human replacement, clothing replacement, background replacement, and so on. We provide some applications as below”
Thank you, Dreaming Tulpa, for sharing this: “The ‘inpainting’ framework can be used for human replacement, clothing replacement, background replacement, and more. The results look crazy good.” You may want to check out the demo page here. Interesting.
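To get a feel for what a mask-driven replacement framework does at its final compositing step, here is a minimal sketch (this is not ReplaceAnything’s actual code; the function name and toy arrays are our own illustration): the mask marks the user-specified object whose identity must stay untouched, and everything outside it gets swapped for newly generated content.

```python
import numpy as np

def replace_outside_mask(image, mask, generated):
    """Composite: keep the masked (user-specified) object, replace the rest.

    image, generated: (H, W, 3) float arrays in [0, 1].
    mask: (H, W) array in [0, 1], where 1 marks pixels of the object
    whose identity must be preserved.
    """
    mask3 = mask[..., None]  # broadcast the mask over the color channels
    return mask3 * image + (1.0 - mask3) * generated

# Toy example: a 2x2 image where the left column is the "object".
image = np.full((2, 2, 3), 0.2)      # original pixels
generated = np.full((2, 2, 3), 0.9)  # newly generated content
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])

out = replace_outside_mask(image, mask, generated)
print(out[0, 0, 0], out[0, 1, 0])  # 0.2 (kept) and 0.9 (replaced)
```

The real framework of course generates the replacement content with a diffusion model conditioned on the unmasked region; the composite above only shows why a hard mask keeps the object’s identity pixel-exact.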
Deblurring, in other words removing blur artifacts from images or videos for that matter, is a hell of a thing for any content creator. Now imagine you can deblur anything; how does that sound? As Alex Carlier, a true AI/3D player out there, points out, there is this thing called FMA-Net that can deblur hyper-effectively.
“… AI method for video deblurring! It uses complex motion representation learning for spatio-temporally-variant restoration with kernels that are aware of motion trajectories.”
Insane! Do you remember that magnificent Magnific tool we shared some love for recently (read more)?! It seems to work equally well, but for videos!
Check out the project page here. This is just next-level stuff. Keep an eye out for the latest outer-space reports in the near future. We can’t find a better comment on this than the one by Tomas Maixner: “Man.. aliens are fckd.. Now we can see through all those blurry videos.”
“We present a joint learning scheme of video super-resolution and deblurring, called VSRDB, to restore clean high-resolution (HR) videos from blurry low-resolution (LR) ones. This joint restoration problem has drawn much less attention compared to single restoration problems.”
Read more on the project in the original thread by Alex Carlier on X here. Game strong.
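The “spatio-temporally-variant restoration” in the quote boils down to one idea: instead of sliding a single fixed kernel over the whole frame, the network predicts a different filter kernel for every pixel, informed by motion. Here is a minimal, hypothetical sketch of that dynamic-filtering step (our own illustration, not FMA-Net’s code; identity kernels stand in for what a network would predict):

```python
import numpy as np

def dynamic_filter(frame, kernels):
    """Apply a different k x k kernel at every pixel (spatially-variant filtering).

    frame:   (H, W) grayscale frame.
    kernels: (H, W, k, k) per-pixel kernels; in FMA-Net-style methods these
             would be predicted from motion features, here they are placeholders.
    """
    H, W = frame.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]   # k x k neighborhood of (y, x)
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

rng = np.random.default_rng(0)
frame = rng.random((4, 4))
# Identity kernels: 1 at the center, 0 elsewhere -> output equals input.
kernels = np.zeros((4, 4, 3, 3))
kernels[:, :, 1, 1] = 1.0
restored = dynamic_filter(frame, kernels)
print(np.allclose(restored, frame))  # True: identity kernels change nothing
```

A deblurring network would output a sharpening-style kernel in regions moving fast and a near-identity kernel in static regions, which is exactly what “kernels that are aware of motion trajectories” buys you.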
And finally, something we would like to share with you as part of our mission to raise awareness about generative AI. Don’t be afraid of AI; use it as a tool! We are just going to leave this here.
“What gets generated versus what gets posted.
Expecting perfect results every time you run a prompt is a fantasy”