On Tuesday, Google unveiled new AI-powered accessibility features for Pixel and Android smartphones. Four features were announced: two exclusive to Pixel smartphones and two available more widely across Android devices.
These features are designed to help people with speech difficulties, low vision, and vision loss. Highlights include improvements to Live Transcribe and Live Captions, new AI functions in the Magnifier app, and Guided Frame.
The tech giant said in a blog post that it is dedicated to working with the disability community and that it is looking to introduce new accessibility tools and innovations to expand the scope of technology.
The first feature, Guided Frame, is exclusive to the Pixel Camera. It speaks to users to help them find the right camera angle and position their faces within the frame.
The feature is intended for people with vision loss or low vision. Google says it will prompt users to tilt their faces up or down, or pan left or right, before the camera automatically captures the photo. When the lighting is inadequate, it will also alert the user so they can find a better frame.
Another Pixel-exclusive feature is an update to the Magnifier app. When the app was first released a year ago, users could point the camera at real-world scenes and zoom in to read menu boards or locate products. Using artificial intelligence, Google now lets users search for specific words in their surroundings.
The Live Captions feature is also getting an upgrade. Google Live Captions now supports seven additional languages: Chinese, Korean, Polish, Portuguese, Russian, Turkish, and Vietnamese. Users will now see real-time captions in these languages for any sound the device produces.
Additionally, Live Transcribe is getting an update, though it will work only on foldable devices. In dual-screen mode, it can now display each speaker's transcription separately: the smartphone can be placed in the middle of a table between two people, with each person's words appearing on their half of the screen. Google says this will make it easier for all participants to follow the conversation.