Facebook takes a guess at what’s in pictures to help visually impaired
May contain peanuts: Facebook's computer vision system uses a trained neural network to guess what a picture represents
By Peter Sayer
With images making up an ever-growing share of what we post on social networks, Facebook fears that users with visual impairments may be missing out.
Beginning Tuesday, the company is tweaking its timelines so that users of screen readers can hear not just the text on a page, but also a brief description of what any images may contain. Until now, they’ve heard only the name of the person who posted the photo.
To describe the images, Facebook built a computer vision system with a neural network trained to recognize a number of concepts, including places and the presence of people and objects. It analyzes each image for the presence of different elements, and then composes a short sentence describing it that is included in the webpage as the “alt” text of the image.
Users might hear, for example, “Image may contain: two people, smiling, sunglasses, sky, tree, outdoor.”
Facebook hedges its descriptions with "may contain," but for more than half the photos on its site, the company reckons it can identify at least one relevant concept with 80 percent accuracy or better.
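The composition step the article describes can be sketched roughly as follows. This is a hypothetical illustration, not Facebook's actual code: the function name, the input format, and the 0.8 confidence cutoff (borrowed from the accuracy figure above) are all assumptions.

```python
# Hypothetical sketch: turn a classifier's detected concepts into the
# hedged "Image may contain" alt text the article describes.
# Not Facebook's actual API; names and threshold are illustrative.

def compose_alt_text(concepts, threshold=0.8):
    """Keep concepts the model is sufficiently confident about and
    join them into a short, hedged description for the img alt text."""
    confident = [name for name, score in concepts if score >= threshold]
    if not confident:
        # Fall back to a generic label when nothing clears the bar.
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(confident)

# Example detections with made-up confidence scores.
detections = [("two people", 0.95), ("smiling", 0.91), ("sunglasses", 0.88),
              ("sky", 0.97), ("tree", 0.85), ("outdoor", 0.99), ("dog", 0.40)]
print(compose_alt_text(detections))
# → Image may contain: two people, smiling, sunglasses, sky, tree, outdoor
```

The low-confidence "dog" detection is dropped, which is presumably the point of the "may contain" hedge: the system reports only the concepts it is reasonably sure about.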
The new feature is only available in English for now, and can be accessed via the screen reader function on iOS devices.
Here’s a video illustrating how the image description function works.