What’s the story in the AI video glory?

Have you ever heard of General World Models? You may not have, as they are from the future (sic!), at least that's what the Runway folks think, and to be honest, they may be right. Time will tell, yet they have already kicked off the race.

“We believe the next major advancement in AI will come from systems that understand the visual world and its dynamics, which is why we’re starting a new long-term research effort around what we call general world models.”

What are these models? Well, to some extent Gen-2 is already using very early versions of them: it does have some understanding of physics and motion, but it is still limited and prone to running into difficulties. This is about to change, and the studio has already started doing its homework by researching this key concept for the future of video creation.

You can read more about the project here.

Did you know? We have finally reached a milestone: AI can generate emotion, and we’re talking video here. RunwayML Gen-2 has an incredible tool called Motion Brush, where you paint over the area you want to control and set the direction, speed, and character of the motion. On top of that, the brush works independently of camera motion, so you can use both at the same time, yay! That brush works magic. Just check out this tutorial by Runway.

GΛMΣ ӨF ΛI has proved that the tool is really worth trying out.

“The Girl with a Pearl Earring is Alive”

Speaking of emotions, do you want to chill out for a little while halfway through the week? If your answer is yes, then head straight to Nicolas Neubert‘s 9-minute LoFi mix he made for @runwayml TV! Over 200 clips, all generated in Gen-2, no exceptions. ANIME style. What a blast!