Turning Your iPhone’s Camera into an Assistive Device: Seeing AI
By Jason B. Jones
July 13, 2017
Earlier this week, Microsoft released a fascinating iOS app called Seeing AI. Seeing AI lets users take pictures of the world around them, and then uses the iPhone’s on-phone intelligence to describe what’s in the picture. It’s designed as an app for people with low vision, but even if that description doesn’t apply to you at the moment, using it makes for a provocative way of thinking about how these devices will mediate the world around us, especially if Apple continues to build out support for augmented reality.
Here’s Microsoft’s promotional video describing the app:
Using it is super-easy. When you open the app, and give it permission to access your camera, you get a little overlay over the normal camera view:
To have Seeing AI describe the world to you, just pick the category you want it to use as a filter, then tap the little blue icon in the middle-left of the screen. (To have it read text, just tap “short text,” and it starts reading text right away. I tested it on Imre Kertész’s Kaddish for a Child Not Born, and it worked great, except for lines where my wife’s annotations blotted out the original text.)
Here are a couple of sample results, which are interestingly close-but-not-quite:
As you see, Seeing AI says that this photo is “probably a dog that is lying down and looking at the camera,” which is not a terrible guess! Of course, there are two dogs, and neither is looking anywhere near the camera, but ¯\_(ツ)_/¯
Here’s another try, one that I think speculates too far once it recognizes a bookshelf:
Here, my phone thinks this is “probably a living room filled with furniture and a book shelf.” It’s right that we’re in the living room, but, given the dual-English-Ph.D. family, it’s probably righter to say it’s “filled with book shelves and a piece of furniture.” But a decent guess all the same.
Have you tried Seeing AI? What are some of its limitations? How might it prove interesting in the classroom or in research? Please share in comments!