On Global Accessibility Awareness Day 2025, Google has taken a powerful step toward digital inclusivity by announcing a series of groundbreaking accessibility updates across its Android and Chrome platforms. Designed with users who have vision, hearing, and speech disabilities in mind, these features are powered by cutting-edge AI like Gemini and aim to make technology easier to navigate and more engaging for everyone.
From a smarter TalkBack experience and expressive captions to speech tools and OCR in Chrome, Google’s initiative is a bold statement about its commitment to inclusivity.

Android Accessibility Gets Smarter with Gemini AI
TalkBack Becomes More Conversational and Visual
Google has infused its TalkBack screen reader with Gemini AI, giving it a new level of interactivity. The Gemini integration initially generated descriptions of images; TalkBack now goes further by letting users ask specific questions about what’s on their screen.
Example Use Cases:
- Received an image of a guitar? You can now ask, “What kind of guitar is this?” or “What color is it?”
- Browsing a shopping app? Ask about the fabric of a shirt or whether a product has discounts, without needing to tap around.
This AI-powered enhancement allows deeper engagement with visual content, reducing reliance on carefully written alt text and making app navigation more intuitive.
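For app developers, a well-labeled image still gives TalkBack the best starting point. Below is a minimal, illustrative Kotlin sketch (not part of Google’s announcement; the productImage view is hypothetical) showing how an app attaches a content description that TalkBack can read aloud, with Gemini-powered questions building on whatever is on screen:

```kotlin
import android.widget.ImageView

// Illustrative sketch only: give an image a baseline description for TalkBack.
// `productImage` is a hypothetical ImageView in a shopping app.
fun labelProductImage(productImage: ImageView) {
    // TalkBack announces contentDescription when the view receives focus.
    productImage.contentDescription = "Sunburst electric guitar with six strings"
}
```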
Expressive Captions: Adding Emotion to Transcriptions
Real-Time Captioning With Emotional Context
Google’s new Expressive Captions bring a revolutionary approach to real-time audio transcription. Instead of transcribing words flatly, the system captures emotions and subtle sounds—such as enthusiasm, stretched vowels, laughter, or even throat-clearing.
What Makes Expressive Captions Stand Out:
- Detects tone and emotion in spoken words.
- Highlights non-verbal sounds (whistling, coughing) for contextual richness.
- Rolling out in English for Android 15+ devices in the U.S., U.K., Canada, and Australia.
For users who are deaf or hard of hearing, this means captions convey not just what is being said but also how it is being said.
Expanding Speech Accessibility with Project Euphonia
Helping People with Atypical Speech Patterns Communicate Clearly
Since 2019, Project Euphonia has worked to make speech recognition more accessible for individuals with conditions such as ALS, stroke, or Down syndrome.
Now, Google is:
- Open-sourcing tools on GitHub to let developers build customized speech models.
- Enabling more personalized voice interactions for people with non-standard speech patterns or accents.
By making these tools widely available, Google empowers developers to create voice-enabled apps that better understand everyone, regardless of how they speak.
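As a rough illustration of where such models plug in, the Kotlin sketch below requests a transcription through Android’s standard RecognizerIntent API. It is a generic platform example under assumed app structure, not the open-sourced Project Euphonia tooling itself:

```kotlin
import android.content.Intent
import android.speech.RecognizerIntent
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class DictationActivity : ComponentActivity() {
    // Receives the recognizer's result when the speech prompt finishes.
    private val speechLauncher =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
            // The recognizer returns candidate transcriptions as a string list.
            val spoken = result.data
                ?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)
                ?.firstOrNull()
            // Use `spoken` to drive a voice-enabled feature.
        }

    // Launches the system speech prompt for free-form dictation.
    fun startDictation() {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now")
        }
        speechLauncher.launch(intent)
    }
}
```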
Supporting African Languages Through CDLI
Boosting Global Accessibility With Local Language Support
Google’s push for inclusion goes beyond English-speaking countries. Partnering with University College London, Google helped launch the Centre for Digital Language Inclusion (CDLI) to:
- Develop open datasets for 10 African languages.
- Build local speech tools to understand unique dialects and phrasing.
This initiative ensures that speech recognition technology becomes a global solution—not just a luxury for English speakers.
Accessibility Advancements for Education: Chromebooks and Testing Tools
Empowering Students With More Ways to Interact
For students, especially those with disabilities, accessibility tools can be the difference between falling behind and thriving.
Chromebook Updates Include:
- Face Control: Operate a Chromebook using facial gestures.
- Reading Mode: Customize text display for easier reading.
- SAT/AP Testing Support: ChromeVox screen reader and Dictation now work with Bluebook, the official College Board exam platform.
These updates ensure inclusivity in high-stakes testing environments.
Chrome Accessibility: Enhanced PDF Support and Page Zoom
OCR for Scanned PDFs: Reading Made Possible
Previously, screen readers struggled to interpret scanned PDFs. Now, thanks to Optical Character Recognition (OCR) in Chrome:
- Scanned text becomes readable, searchable, and highlightable.
- Academic and professional documents become more accessible to screen reader users.
This is a major win for students, researchers, and office workers relying on assistive technologies.
Page Zoom on Android: Bigger Text, Same Layout
Chrome for Android now lets users zoom in on text without breaking page layouts. You can:
- Adjust zoom levels per website or globally.
- Keep pages responsive and readable, a layout behavior that especially helps low vision users.
Frequently Asked Questions:
1. What is Global Accessibility Awareness Day?
Global Accessibility Awareness Day (GAAD) is observed annually to raise awareness about the importance of inclusive digital design and technology for people with disabilities.
2. How is Google improving accessibility in Android?
Google is using Gemini AI to enhance TalkBack, enabling users to ask questions about on-screen content. They’ve also introduced Expressive Captions that reflect emotional context in real time.
3. What is Project Euphonia?
Project Euphonia is a Google initiative focused on making speech recognition more accurate for people with atypical speech patterns. It now offers open-source tools to developers.
4. How does Chrome’s new OCR feature work?
OCR in Chrome automatically recognizes and extracts text from scanned PDFs, making them accessible to screen readers and improving usability for visually impaired users.
5. Are the new features available globally?
Some features, like Expressive Captions, are initially rolling out in select English-speaking countries. Global expansions are expected as Google continues development.