
Music And Visuals In One Flow
Google Rolls Out Lyria 3 In Gemini
Google has launched Lyria 3 in beta inside the Gemini app, letting users generate 30-second music tracks from text, images, and even video prompts. Combined with AI-generated cover art and Gemini’s video tools, users can now create a mini media package inside one ecosystem.
Google has officially rolled out Lyria 3 in beta inside the Gemini app, and it's not just another background AI feature buried in a lab demo. This one is live, usable, built for regular users, and available in eight languages at launch.
The basic idea is straightforward. You type what you want. Something like *upbeat synth pop track with airy vocals and a nostalgic 2000s feel*. Within seconds, Gemini produces a short, fully formed track. It is structured, mixed, and listenable, not just a rough loop or abstract sound experiment.
Music From Text, Images, Or Video
Lyria 3 doesn't rely only on text prompts. Users can also upload an image or even a short video as creative input. The model analyzes the visual material and generates a track that matches the mood, pacing, or atmosphere of what you provide.
So if you upload a sunset beach clip, you might get something ambient and relaxed. Upload fast-moving city footage and you could end up with something more percussive or electronic. Output is still capped at 30 seconds for now, but the feature is clearly designed to tie audio generation to visual context.
Cover Art And Video In The Same Ecosystem
The music piece doesn't stand alone. Google has also integrated generative image tools inside Gemini, meaning you can create cover art to match your track without leaving the app. A prompt for the song can easily become a prompt for the artwork. The workflow looks less like a tool and more like a mini studio.
Gemini already includes video generation powered by Google’s Veo models in certain tiers and regions, so users can move from music to artwork to short video content within the same system. These tools aren't marketed as one single button that does everything, but practically speaking, the components are there.
What This Actually Means For Users
You can draft a 30-second song, generate matching cover art, and pair it with a short AI-generated or AI-inspired video without juggling multiple platforms. Basically, it lowers the barrier to experimenting with audio creation. No DAW, no plugins, no licensing libraries. Just prompts.
It's still early. Tracks are short, access is in beta, and availability may depend on language and account tier. But the rollout is live: Lyria 3 is inside Gemini, supports multimodal prompting, connects directly to other generative tools, and is currently available on desktop for users 18 and over.
In practical terms, you can now generate a song, design its cover, and build a short visual around it inside one AI ecosystem. That's the headline.