Human Imagination x Artificial Intelligence

What does it take to be perceived as an artist making use of AI, rather than an artist made by AI? This is one of the most exciting questions we are asking ourselves nowadays. Who out there gets to make that distinction or judgement? And, equally important: aren't we a bit too attached to old-school taxonomy? We are all really fond of calling people out and naming things, which is yet another point worth discussing. We are definitely not the ones who will make this call; we are just asking questions. Fair questions, though, aren't they? For now, the best thing we can all do is watch the world adopt the superpowers of Artificial Intelligence in its favor, as a tool.

And that is exactly how AI is being used in Lab Sessions, a new series of experimental AI collaborations by Google. The whole project is about putting technology in the hands of actual people. It is extremely interesting to finally see how creative people are applying AI to their daily creative process, in a rather different way from what we see every day across the web. The project launched at the beginning of August, and so far three episodes are available. Each one focuses on the use of AI from a different perspective, with a different purpose, and by a different person. Thus we have composer and digital musician Dan Deacon, who teamed up with Google researchers to create a pre-show performance for Google I/O 2023; well-known rapper and MIT Visiting Scholar Lupe Fiasco, who showcases how AI could enhance his creative process; and finally a group of academics from the National Technical Institute for the Deaf (NTID) at the Rochester Institute of Technology (RIT), who explore how AI computer vision models could help people learn sign language in new ways.

The series is very pleasant to watch, and the use cases presented go far beyond catchy AI visual art. This is the real deal. The fun isn't the only benefit of watching, either: the tools the artists and the Lab created are real, and everybody can access them. Very inspiring. We're looking forward to more episodes being published.

“Dancing ducks, guided meditation, and 600 foot trombones. In this session, composer and musician Dan Deacon creates a live performance for the 2023 Google IO preshow experimenting with generative AI tools Bard, MusicLM and Phenaki.”

“‘Rap was born out of technology.’ Direct drive turntables, 808s, autotune, what’s next? AI. In this Lab Session, we host GRAMMY® Award-winning artist and MIT Visiting Scholar Lupe Fiasco to see how AI might expand a rapper’s writing process.”

“Learning sign language can be challenging, especially for new parents of deaf children. In this Session we explore an exciting new project that will hopefully make it easier. A group of computer science students from the Georgia Institute of Technology and the National Technical Institute for the Deaf at the Rochester Institute of Technology are working with Google engineers and the Kaggle community to make learning Sign Language more accessible using AI.”

You can access the Lab here.