The social network has launched a feature for people with severe visual impairments: using artificial intelligence to recognize objects, it describes the elements of an image.
Until now, blind and visually impaired people who came across visual content posted on Facebook could only hear the name of the contact who shared it, followed by the word "photo". The social network has decided to make visual content accessible through artificial intelligence with "automatic alternative text", or "automatic alt text". The feature was developed by the Accessibility team, which began working on the accessibility of Facebook products five years ago.
Users with screen readers on iOS who encounter a photo in the News Feed will hear a list of its elements: object recognition technology processes the images uploaded to the social network and to Facebook's family of apps, producing descriptions such as "The image may contain three people, smiling, outdoors". The result closely resembles Seeing AI, the aid platform for the blind developed by Microsoft and shown at the latest Build.
Here's how to try automatic alternative text: on an iOS device, enable VoiceOver by asking Siri to activate it, or by going to Settings > General > Accessibility > VoiceOver.
To avoid errors that would make the user experience unpleasant, the feature only includes concepts for which an adequate level of accuracy can be guaranteed. For now, it covers transport terms (car, boat, airplane, bicycle, train, road, motorcycle, bus), nature terms (outdoors, mountain, snow, sky, ocean, water, beach, wave, sun, grass), sports terms (tennis, swimming, stadium, basketball, baseball, golf), food-related terms (ice cream, sushi, pizza, dessert, coffee), words describing a person's appearance (child, sunglasses, beard, smiling, jewelry, shoes) and, inevitably, selfies.
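The gating logic described above — only emitting tags from a vetted vocabulary when the recognizer is confident enough — can be sketched roughly as follows. This is a minimal illustration: the detection format, the partial tag list, and the 0.8 threshold are assumptions for the example, not Facebook's actual pipeline.

```python
# Minimal sketch of confidence-gated alt-text generation.
# The recognizer output format, tag vocabulary, and threshold value
# are illustrative assumptions, not Facebook's real implementation.

CONFIDENCE_THRESHOLD = 0.8

# Only concepts the model can label reliably are allowed in descriptions
# (a subset of the categories listed in the article).
ALLOWED_CONCEPTS = {
    "car", "boat", "airplane", "bicycle", "train",
    "outdoor", "mountain", "snow", "sky", "ocean", "beach",
    "tennis", "swimming", "stadium", "basketball", "baseball",
    "ice cream", "sushi", "pizza", "dessert", "coffee",
    "child", "sunglasses", "beard", "smiling", "jewelry", "shoes",
    "selfie",
}

def build_alt_text(detections, people_count=0):
    """Turn (tag, confidence) pairs into an alt-text sentence,
    keeping only whitelisted tags above the confidence threshold."""
    tags = [
        tag for tag, confidence in detections
        if tag in ALLOWED_CONCEPTS and confidence >= CONFIDENCE_THRESHOLD
    ]
    parts = []
    if people_count:
        parts.append(f"{people_count} people")
    parts.extend(tags)
    if not parts:
        return "Image"  # fall back when nothing passes the threshold
    return "Image may contain: " + ", ".join(parts)

# Example: a photo of three smiling people outdoors; the low-confidence
# "cat" detection is dropped rather than risk a wrong description.
detections = [("smiling", 0.95), ("outdoor", 0.91), ("cat", 0.40)]
print(build_alt_text(detections, people_count=3))
# → Image may contain: 3 people, smiling, outdoor
```

The key design choice this models is the article's point about accuracy: a missing tag merely makes the description shorter, while a wrong tag actively misleads a blind user, so uncertain detections are discarded.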