
Music And Visuals In One Flow
Google Rolls Out Lyria 3 In Gemini
Google has launched Lyria 3 in beta inside the Gemini app, letting users generate 30-second music tracks from text, images, and even video prompts. Combined with AI-generated cover art and Gemini’s video tools, users can now create a mini media package inside one ecosystem.
Google has officially rolled out Lyria 3 in beta inside the Gemini app, and it's not just another background AI feature buried in a lab demo. This one is live, usable, and built for regular users, and it works across eight languages at launch.
The basic idea is straightforward. You type what you want. Something like *upbeat synth pop track with airy vocals and a nostalgic 2000s feel*. Within seconds, Gemini produces a short, fully formed track. It is structured, mixed, and listenable, not just a rough loop or abstract sound experiment.
Music From Text, Images, Or Video
Lyria 3 doesn't rely only on text prompts. Users can also upload an image or even a short video as creative input. The model analyzes the visual material and generates a track that matches the mood, pacing, or atmosphere of what you provide.
So if you upload a sunset beach clip, you might get something ambient and relaxed. Upload fast-moving city footage and you could end up with something more percussive or electronic. Output is still capped at 30 seconds for now, but the feature is clearly designed to tie audio generation to visual context.
Cover Art And Video In The Same Ecosystem
The music piece doesn't stand alone. Google has also integrated generative image tools inside Gemini, meaning you can create cover art to match your track without leaving the app. A prompt for the song can easily become a prompt for the artwork. The workflow looks less like a tool and more like a mini studio.
Gemini already includes video generation powered by Google’s Veo models in certain tiers and regions, which means users can move from music to artwork to short video content within the same system. The components aren't marketed as one single button that does everything, but practically speaking, they are all there.
What This Actually Means For Users
You can draft a 30-second song, generate matching cover art, and pair it with a short AI-generated or AI-inspired video without juggling multiple platforms. Basically, it lowers the barrier to experimenting with audio creation. No DAW, no plugins, no licensing libraries. Just prompts.
It's still early. Tracks are short. Access is in beta. And availability may depend on language and account tier. But the rollout is live: Lyria 3 is inside Gemini, supports multimodal prompting, connects directly to other generative tools, and is currently available on desktop for users 18 and up.
In practical terms, you can now generate a song, design its cover, and build a short visual around it inside one AI ecosystem. That's the headline.
Published February 19, 2026 • Updated February 19, 2026