AI BITS: MEGA MEGA PIXELS of UPSCALING, Spotify’s Deep real voiceovers, Meta’s bet on Snoop Dogg, the scary/incredible live show visuals
Do you want to create large images with Stable Diffusion, with a lot of detail? Well, you can go wild with the megapixels. How far can you go? Just yesterday the record was a “mere” 107 megapixels; today it is over 700, as reported by @DiffusionPics:
“702.5 megapixels of glorious painted face How far can we go? Yesterday was 107 Megapixels, today 703 [21,956 x 32,000] with a SD Ultimate Upscale loop!!!”
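A quick sanity check of the numbers in that tweet: the megapixel count is just width times height divided by a million. Using the dimensions quoted above:

```python
# Megapixels = (width * height) / 1_000_000
# Dimensions taken from the quoted tweet: 21,956 x 32,000 pixels
width, height = 21_956, 32_000
megapixels = width * height / 1_000_000
print(f"{megapixels:.1f} MP")  # -> 702.6 MP
```

Which lines up with the roughly 703 megapixels the tweet reports.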
If you are interested in how the upscaling works, or which tools you might use, simply jump in here to read the tutorials.
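The core idea behind tools like SD Ultimate Upscale is tile-and-stitch: split the image into fixed-size tiles, enlarge each tile, and paste the results back together, looping until you hit the target size. Here is a toy, stdlib-only sketch of that loop; nearest-neighbor duplication stands in for the per-tile diffusion pass, and this is our illustration of the concept, not the tool’s actual code:

```python
def upscale_tile(tile, scale):
    # Nearest-neighbor stand-in for the per-tile diffusion (img2img) pass:
    # repeat every pixel `scale` times horizontally and every row vertically.
    out = []
    for row in tile:
        big_row = [px for px in row for _ in range(scale)]
        out.extend([big_row[:] for _ in range(scale)])
    return out

def tiled_upscale(img, scale=2, tile=2):
    # img is a 2D list of pixel values; real tools use 512-px tiles.
    h, w = len(img), len(img[0])
    out = [[None] * (w * scale) for _ in range(h * scale)]
    for ty in range(0, h, tile):            # walk the tile grid
        for tx in range(0, w, tile):
            block = [row[tx:tx + tile] for row in img[ty:ty + tile]]
            up = upscale_tile(block, scale)  # enlarge one tile
            for dy, row in enumerate(up):    # stitch it into the output
                for dx, px in enumerate(row):
                    out[ty * scale + dy][tx * scale + dx] = px
    return out

img = [[1, 2], [3, 4]]
print(tiled_upscale(img, scale=2, tile=2))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Run the same loop on the output again and the image doubles once more, which is how those multi-hundred-megapixel results are reached without ever processing the full image at once.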
We love podcasts! Who doesn’t?! But the love blossomed slowly; it most definitely wasn’t love at first sight. In the beginning (many years ago) we were quite reluctant to accept the audio defects of many widely published recordings. The content didn’t matter if the quality was low, and by low we mean hosts or guests mumbling, buzzing, hissing, loud breathing, background noise, no proper studio production. Amateur stuff, simply. Over time the quality improved, big time. The market evolved and so did the publishers. It’s no joke now, but we have a new issue. Some of the biggest podcasters are too hard to follow on a regular basis: as non-native speakers we either can’t understand them fully, or can’t play them as background listening, because we have to stay uber-focused, like at school sometimes, to get anything out of them (again, the language barrier). BUT… the time has come for another evolution, a near-perfect one. Spotify’s CEO and founder Daniel Ek has just shared this:
“Podcasters – what if I told you [that] you could offer your pod to any listener around the world, in their own local language but still keep it in your own voice? That’s the pilot we’re launching”
“It’s called Voice Translation and using AI, translates podcast episodes into alternate languages, all in the podcaster’s voice. It’s pretty insane.”
We couldn’t agree more! Looking forward to hearing it!
Meta goes all in on AI. The giant is betting big on AI to revive Facebook and Instagram, and the big leap forward was announced at its conference last week. Meta had already released a large language model, but it didn’t have a standalone generative AI product like ChatGPT. Now that’s changing: we’re all getting a personal AI assistant.
“We’re introducing Meta AI in beta, an advanced conversational assistant that’s available on WhatsApp, Messenger, and Instagram, and is coming to Ray-Ban Meta smart glasses and Quest 3. Meta AI can give you real-time information and generate photorealistic images from your text prompts in seconds to share with friends.”
And not just a regular no-name assistant. The plain one, called “Meta AI,” will be there too, but you could also get your own personal Snoop Dogg! Or anybody else from the 28 colorful, personality-driven chatbots available. Cool enough, don’t you think?! We’ll see what happens next.
Read more here
Are you ready for a heart attack at the next rave party? You may not be the only one getting it, though. Generative AI is changing everything around us, and the visuals we’re being served at music venues are getting really powerful. Just take a look. Sound on!