AI BITS: RunwayML, Midjourney, AnyDoor & DomoAI – the ride we’ve taken and where we are.

BIT #1

“Look at you… you’ve grown a lot since last summer, haven’t you?” We’ve all heard something like it at least once, but have we really noticed how much we’ve grown, if at all? It is always tricky to tell the difference. To get it right, you need to compare, and that is exactly what we are going to do. Thanks to the amazing creator Nicolas Neubert, we have been given an opportunity to see how much @runwayml “has grown” since we last spoke about it, back on July 26th; you can read more about it here.


“… I shared GENESIS with the world. We were all amazed by the opportunity that the newly launched Image-2-Video offered. But how have the models improved since then? To find out, I recreated the entire trailer from scratch using the same base images. Additionally, I refrained from using motion brush + camera controls.

Fidelity, natural movement, and morphing have significantly improved. Many shots were already perfect with just one generation. This took much longer the first time I made GENESIS. Interestingly, the randomness of the older models without any camera controls resulted in some random movements that sometimes looked very interesting!”

What is coming in 2024? We can’t wait to see.

BIT #2

To say that Midjourney has advanced would clearly be a huge understatement. But to be honest, we bet you haven’t realized just how much of an understatement it is. Back in February 2022, the world changed with the Midjourney v1 launch; the only thing is, we didn’t know what lay ahead. It is December 2023, and we are lightyears ahead of that February night… today 99% of people can’t really tell whether an image is AI-generated or not. Back then, that was hardly the case; just take a look at the comparison prepared by Chase Lean. From v1 to v6 real quick…

BIT #3

Blast from the past… If you had tried to build, or even more, to use a virtual try-on a few years back, you would most probably have failed, as Hamid R. Darabi, manager of data science at TrueML (ex-Amazon), admits in his X thread: “… attempted a virtual try-on using GANs about four years ago, but it failed due to several issues with GANs, such as style transfer mismatch. It’s amazing to see how diffusion has now made this possible. This is how science progresses. We could be just one breakthrough away from achieving AGI.”

Today, the AnyDoor demo was launched: Zero-shot Object-level Image Customization, a.k.a. a diffusion-based image generator with the power to teleport target objects into new scenes at user-specified locations in a harmonious way. What does it do? It can be used for virtual try-ons, and for transforming, moving, swapping, or inserting objects into new scenes.
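For intuition, here is what “teleporting an object to a user-specified location” means at the pixel level. The sketch below is only a naive cut-and-paste baseline in Python using Pillow; `naive_teleport` is a hypothetical helper name, not AnyDoor’s API. The point of a diffusion model like AnyDoor is precisely that it goes beyond this kind of composite, blending lighting, shadows, and style so the object fits the scene harmoniously.

```python
from PIL import Image

def naive_teleport(scene: Image.Image, obj: Image.Image, location: tuple) -> Image.Image:
    """Paste `obj` into a copy of `scene` at `location` (top-left corner).

    Plain cut-and-paste: no relighting, no harmonization. Diffusion-based
    customization exists because this composite looks pasted-on, not blended.
    """
    out = scene.copy()
    # Use the object's alpha channel as a mask if it has one.
    mask = obj if obj.mode == "RGBA" else None
    out.paste(obj, location, mask)
    return out

# Toy example: a red square "object" dropped into a blue "scene".
scene = Image.new("RGB", (64, 64), "blue")
obj = Image.new("RGB", (16, 16), "red")
result = naive_teleport(scene, obj, (10, 10))
```

After this call, `result` contains the red square at (10, 10) while `scene` is untouched; what AnyDoor adds on top is making such an insertion look like it was always part of the photo.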

If you are interested in seeing more examples of how the model could be used, head over to Allen T’s thread here. If you want to get hands-on experience and play with AnyDoor to teleport your target objects, jump in here.

BIT #4

Have you heard about DomoAI? Simply put, it is video-to-video generation. But that’s the only simple thing we can say about the tool. It seems to work magic, and it is far from simple. A clip of Everything Everywhere All at Once transformed by Fakhr is a killer!

How? Fakhr says:

  1. Join DomoAI Discord: http://discord.gg/Pb4nyvu6wB
  2. Click the generate channel
  3. Type /video & select the /video command
  4. Upload a video & enter your prompt
  5. Choose a style & a reference for your video: either your original one or your prompt
  6. Click “Start” & wait for the magic!

Fancy seeing a Matrix scene, Neo vs. Morpheus?

Pulp Fiction maybe?

Go to the DomoAI profile on X to see more of the amazing examples and use cases.